<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>Graduate Theses</title>
<link href="https://hdl.handle.net/1721.1/131023" rel="alternate"/>
<subtitle/>
<id>https://hdl.handle.net/1721.1/131023</id>
<updated>2026-04-08T10:05:13Z</updated>
<dc:date>2026-04-08T10:05:13Z</dc:date>
<entry>
<title>Hemorheological Considerations in the Development of Microfluidic Blood Oxygenation Devices</title>
<link href="https://hdl.handle.net/1721.1/165342" rel="alternate"/>
<author>
<name>Pincot, André M.</name>
</author>
<id>https://hdl.handle.net/1721.1/165342</id>
<updated>2026-04-07T03:05:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hemorheological Considerations in the Development of Microfluidic Blood Oxygenation Devices
Pincot, André M.
Novel supersaturation oxygenation technology promises a leap forward in the enhancement of ECMO capabilities and the deployment of a more efficient, versatile, and portable form of extracorporeal oxygenation technology. The showcased membrane bubble generation supersaturation technique offers superior oxygenation performance to conventional ECMO, allowing for reductions in blood flow rate and thus promising to reduce the shear-based thrombosis that limits current oxygenation technology in medium to long-term treatment of severely afflicted patients. The membrane supersaturation concept promises to address that gap in reliable, extended treatment by greatly reducing shear to delay and prevent thrombus formation in the device and associated extracorporeal life support (ECLS) circuit. The bubbles produced by the membrane generator are small enough to completely diffuse and fully oxygenate a larger volume of blood when combined with an additional deoxygenated blood flow. Further, the technique’s high oxygen flux will offer new options for reducing size footprint and for ruggedization for austere conditions, given further investment and development. This will necessitate the creation of custom membrane solutions and further optimization of device channel geometries via simulation using advanced blood models such as the tensorial enhanced structural stress thixotropic-viscoelastic (t-ESSTV) constitutive model developed and discussed in this work. A characteristic feature of human blood rheology is a distinctive stress hysteresis during a ramp up in the shear rate from zero, followed by a ramp back to zero. This results from the fact that human blood has a longer characteristic time of shear-induced rouleaux breakdown compared to the shear aggregation of the rouleaux. We demonstrate this telltale phenomenon of human blood rheology using a triangle ramp protocol to control time-dependent changes in the shear rate.
The unique hysteresis data are then used along with steady state data to fit parameters of a recently published thixotropic-viscoelastic rheological model, the t-ESSTV model. These best-fit parameter values from the hysteresis ramps are then used to predict step-up/down in shear rate, small amplitude oscillatory shear, uni-directional large amplitude oscillatory shear, and large amplitude oscillatory shear flow. Additionally, correlations between the calculated fitting parameters and physiological data are analyzed to inform the interpretation of model behavior in physical terms. The goodness of fit of the triangle ramp protocol and rheological hysteresis data is then evaluated alongside recently developed techniques to assess thixotropy via computation of hysteresis loop area. The results indicate the efficacy of the t-ESSTV model in predicting the complex characteristics of blood rheology in ways useful for future modeling of circulating flows under a variety of mechanical and biological loading conditions and for predicting and understanding rheological effects on resulting pathologies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes</title>
<link href="https://hdl.handle.net/1721.1/165340" rel="alternate"/>
<author>
<name>Jiang, Sharon</name>
</author>
<id>https://hdl.handle.net/1721.1/165340</id>
<updated>2026-04-07T03:05:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes
Jiang, Sharon
The large amount of time clinicians spend sifting through patient notes and documenting in electronic health records (EHRs) is a leading cause of clinician burnout. By proactively and dynamically retrieving relevant notes during the documentation process, we can reduce the effort required to find relevant patient history. In this work, we conceptualize the use of EHR audit logs for machine learning as a source of supervision for note relevance in a specific clinical context, at a particular point in time. Our evaluation focuses on dynamic retrieval in the emergency department (ED), a high-acuity setting with unique patterns of information retrieval and note writing. However, our framework is general and can be applied to other clinical settings and with other data modalities (e.g., labs, medications, imaging). We apply our framework to the oncology setting to demonstrate its utility to other clinical workflows. We show that our methods can achieve an AUC of 0.963 in the ED and 0.937 in oncology when predicting which notes will be read in an individual note writing session. We additionally conduct user studies with several clinicians and find that our framework can help clinicians retrieve relevant information more efficiently.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microneedles for Easier Fish Skin Penetration and Longer Attachment</title>
<link href="https://hdl.handle.net/1721.1/165339" rel="alternate"/>
<author>
<name>Raad, Jad</name>
</author>
<id>https://hdl.handle.net/1721.1/165339</id>
<updated>2026-04-07T03:05:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Microneedles for Easier Fish Skin Penetration and Longer Attachment
Raad, Jad
Aquaculture is the farming of aquatic animals for commercial purposes. This growing industry supplies around 50% of the world’s seafood and has reduced overfishing. However, it has also facilitated the spread of diseases between fish by growing them in close quarters, which results in poor growth and higher mortality levels. Injection vaccination is the most common way to combat this issue, but it is labor-intensive and stressful for the fish. As an alternative, the Marelli lab proposed using impermeable silk microneedle patches to encapsulate the medication and deliver it through diffusion to rainbow trout fry. When a microneedle patch was tested on a 7 g fry, it had difficulty penetrating the skin and only stayed attached for 10 min after injection. Consequently, it caused significant stress to the fish upon insertion and fell short of the 4 hrs required for complete payload diffusion into the animal. This work aimed to reduce the force necessary for the needle to pierce fish skin and augment the force needed to dislodge it, allowing for easier piercing and longer animal attachment time. Thus, the study intended to decrease the patch’s insertion force and increase its retraction force. The initial needles were cone-shaped and had a tip angle of 21°. To assess the effects of needle tip angle and overall shape on these forces, the new needles’ tip angle was varied between 15°, 20°, and 25°, and a cylindrical base was added and varied between 0%, 33%, and 66% of the total needle height. The insertion and retraction forces of the microneedle patches were quantified, revealing that sharper needles and needles with cylindrical bases amounting to 66% of the total needle height reduced the insertion force. In contrast, the retraction force was independent of both factors. The 25° 66%, 15° 33%, and 15° 0% needles displayed the lowest insertion forces and were tested on zebrafish to quantify how long they could stay attached. Preliminary tests on the live animals demonstrated that the new needles stayed attached to the fish for up to 8 hrs. This improved upon the initial Marelli lab design, which remained attached for 30 min at most. Overall, pursuing live fish testing would allow for selecting the best-performing design and further developing it as a viable alternative to current vaccination methods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Invertebrate-inspired Approach to Design and Manufacturing in Soft Robotics</title>
<link href="https://hdl.handle.net/1721.1/165338" rel="alternate"/>
<author>
<name>Arase, Cathleen</name>
</author>
<id>https://hdl.handle.net/1721.1/165338</id>
<updated>2026-04-07T03:05:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Invertebrate-inspired Approach to Design and Manufacturing in Soft Robotics
Arase, Cathleen
Soft robotics has many potential applications, including deep sea biological sampling, fruit picking, physical therapy, assistive devices, surgery, and other grasping tasks; however, within that realm many soft actuators lack the ability to output high force. To attempt to overcome this challenge, many soft roboticists are interested in variable stiffness actuators, but soft-rigid hybrid robots may also be helpful in solving it. In fact, many invertebrates are able to undergo large deformations and have the ability to change their stiffness. Many of these invertebrates integrate components such as spicules or ossicles, which are small bones, making the invertebrates essentially a soft-rigid hybrid system. Taking inspiration from these invertebrates, soft-rigid hybrid systems can be designed to increase the capabilities of soft actuators. Within the field of soft robotics, there are many practical problems to be overcome in the development of soft-rigid hybrid machines, including design, manufacturability, and delamination between soft and rigid components. This thesis focuses on work towards addressing these problems. The work explores invertebrates and invertebrate-inspired soft-rigid hybrid robots as a framework for understanding constraints in soft robotic systems. It then proceeds to explore manufacturing techniques for creating cast soft-rigid hybrid robots. Following this, it explores a novel method for decreasing the delamination forces between rigid overmolded components and soft walls of actuators, and finally it concludes with steps towards creating a soft actuator that incorporates those components, as well as a comparison to a rigid example using a linkage mechanism for grasping.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a High-Throughput Cryoprotection Screening Platform for Cell Therapies</title>
<link href="https://hdl.handle.net/1721.1/165337" rel="alternate"/>
<author>
<name>Dey Barsukova, Anita</name>
</author>
<id>https://hdl.handle.net/1721.1/165337</id>
<updated>2026-04-07T03:05:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development of a High-Throughput Cryoprotection Screening Platform for Cell Therapies
Dey Barsukova, Anita
Type 1 Diabetes is a devastating disease in which the immune system attacks insulin-producing beta cells in the pancreas, disrupting the normal blood glucose regulation mechanism and resulting in damage to major organ systems. An emerging therapy for Type 1 Diabetes involves transplanting stem cell-derived beta cell aggregates into patients, restoring normal regulation of blood glucose and eliminating the need for insulin injections. Reliable cryopreservation methods are required to meet global demand for these aggregates, but current protocols result in low cell viability post-thaw and require complex post-processing to remove the toxic cryopreservation agent (CPA) formulation before implantation. In this work, a high-throughput screening method is developed to identify a novel non-toxic CPA formulation that would enable the scale-up of this new Type 1 Diabetes treatment. The development and validation of workflow steps are presented, in addition to data from pilot experiments that execute all workflow steps.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Light Reflectance Sensing in the Gastrointestinal Tract with Ingestible Devices for Disease Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/165333" rel="alternate"/>
<author>
<name>Chen, Hao (Jack)</name>
</author>
<id>https://hdl.handle.net/1721.1/165333</id>
<updated>2026-04-07T03:05:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Applications of Light Reflectance Sensing in the Gastrointestinal Tract with Ingestible Devices for Disease Diagnosis
Chen, Hao (Jack)
Disease diagnosis in the gastrointestinal tract can be challenging, often requiring difficult endoscopic procedures or expensive imaging techniques. Pill-sized ingestible sensors represent an alternative method for disease diagnosis in the gastrointestinal tract that is minimally-invasive and cost-effective, thus promoting patient adherence and preventative screening of diseases. In this thesis, I investigate the design of ingestible sensors that emit light and measure light reflectance in the gastrointestinal tract for three applications: the detection of gastric mucosal contact, the diagnosis of upper gastrointestinal bleeding, and the diagnosis of small intestinal ischemia. To enable these applications, I develop arrays of LEDs and photodiodes that monitor the changes in reflectivity of the tissue and changes in color of the tissue. The sensor arrays are fabricated and assembled in ingestible form factors and validated in ex vivo and in vivo experiments with swine. The results demonstrate that the sensing of light reflectance enables accurate differentiation of gastric mucosa versus gastric lumen for the detection of mucosal contact, accurate detection of gastric bleeding even in the presence of red drinks or gastric fluid, and accurate detection of small intestinal ischemia even in the presence of bile and chyme. For the application to diagnose small intestinal ischemia, I present initial mechanical and electrical designs of an ingestible capsule system that activates in the small intestines via the dissolution of a pH-sensitive polymer, then performs duty cycling to enable ischemia detection during the entire small intestinal transit time. I aim to continue the development and validation of these ingestible sensors with the vision of providing minimally-invasive devices to enable cost-effective screening and monitoring of gastrointestinal diseases and conditions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for Detection and Observation of Radiation Chemistry Species on an MR-LINAC</title>
<link href="https://hdl.handle.net/1721.1/165331" rel="alternate"/>
<author>
<name>Warner, Noah Stanley</name>
</author>
<id>https://hdl.handle.net/1721.1/165331</id>
<updated>2026-04-07T03:05:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Framework for Detection and Observation of Radiation Chemistry Species on an MR-LINAC
Warner, Noah Stanley
Radiation therapy, used in over half of cancer treatments, aims to target tumors while preserving healthy tissue. Existing techniques lack the ability to measure tissue damage during therapy, risking over- or under-irradiation and leading to severe side effects. Radiation inflicts DNA damage via direct and indirect mechanisms, the extents of which are inconsistent between patients, causing differences in response to radiation. Magnetic resonance-linear accelerators (MR-linacs) are promising tools for evaluating indirect DNA damage by measuring radiation chemistry species (RCS) produced during irradiation. In this work, MRI methods were developed to observe free radical production, and radiation chemistry was modeled for select RCS scavengers and verified experimentally. These methods were then employed to measure MRI signal changes for complex combinations of RCS scavengers and radiosensitizing nanoparticles. Experimental T1 changes from radiation chemistry were used to fit the relaxivity of the superoxide free radical, and this value was assumed for all subsequent calculations. MRI T1 changes due to free radical production by radiation are presented in solutions consisting of water, 10 mM coumarin, 20 μM mito-TEMPO, 5 mM glutathione, a 20 μM mito-TEMPO and 5 mM glutathione mixture, 10 μM gold nanoparticles, and 60 μM phosphate buffered saline. Radiation chemistry simulations completed for water and 10 mM coumarin show good agreement with their respective experimental T1 changes. The largest T1 changes and rates of superoxide production were found in the 20 μM mito-TEMPO and 5 mM glutathione mixture, while the smallest were found in the 20 μM mito-TEMPO solution.
The main conclusion of this work is that a framework has been developed to detect T1 changes due to the production of free radical species during imaging and irradiation on an MR-linac, with the predominant source of T1 change over time due to free radicals attributed to the production of superoxide.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-cell dissection of mature conventional dendritic cells in the tumor microenvironment in metastatic melanoma</title>
<link href="https://hdl.handle.net/1721.1/165328" rel="alternate"/>
<author>
<name>Wang, Cassia B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165328</id>
<updated>2026-04-07T03:05:40Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Single-cell dissection of mature conventional dendritic cells in the tumor microenvironment in metastatic melanoma
Wang, Cassia B.
Although immunotherapy has revolutionized cancer treatment, the response rate of metastatic melanoma to immune checkpoint inhibitors (ICI) remains at less than 50%. One determinant of response may be the underlying molecular mechanisms in the tumor microenvironment (TME): the tumor cells together with the surrounding environment of other cell types, which play various roles in facilitating or inhibiting the progression of cancer. We were specifically interested in investigating the immunological factors driving observed clinical outcomes. Using single-cell technologies, mature conventional dendritic cells (mDCs) were identified in a cohort of metastatic melanoma samples and were present at a higher proportion in a subset of ICI anti-PD1-treated patients with better progression free survival (PFS). Elaborating on this finding, we generalized the characterization of mDCs in metastatic melanoma by using methods to determine mDCs’ association with other subtypes found in the TME, reveal the molecular features of mDCs compared to other conventional dendritic cells (cDCs), and find differentiating factors among samples with different mDC proportions. Through computational analysis of single-cell transcriptomes and epigenomes in metastatic melanoma, we aim to uncover critical immunological features and interactions within the TME, with potential for enhancing melanoma outcomes.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revisiting MHD Generators with HTS Magnets</title>
<link href="https://hdl.handle.net/1721.1/165327" rel="alternate"/>
<author>
<name>Clingerman, Matthew Hikaru</name>
</author>
<id>https://hdl.handle.net/1721.1/165327</id>
<updated>2026-04-07T03:05:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Revisiting MHD Generators with HTS Magnets
Clingerman, Matthew Hikaru
Magneto-hydrodynamic (MHD) power generators can convert thermal and kinetic energy to electrical energy without any moving mechanical parts. They promise to compete with typical turbo-generators in a power plant. The advent of high temperature superconducting (HTS) magnets can give MHD generators the edge over other generators, as efficiency increases with magnetic field strength. A robust mathematical model is derived to account for the plasma physics, fluid dynamics, and magneto-hydrodynamics involved in directing and harnessing the flow of an ionized gas. The resulting analytical model is computationally solved and then analyzed.

It is clear that HTS magnets greatly benefit MHD generators. For a coal-fired power plant, the enthalpy ratio between the input and output of the generator surpasses 50%. In other words, over half of the thermal energy produced by the power plant is converted to electricity by the MHD generator. The remaining fraction of energy is directed to a bottoming cycle for additional energy conversion. In the end, modest estimates put the overall efficiency of this system over 65%, compared to less than 45% for the most advanced current coal power plants.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oxide coarsening and agglomeration during melt-based additive manufacturing of dispersion-strengthened alloys</title>
<link href="https://hdl.handle.net/1721.1/165321" rel="alternate"/>
<author>
<name>Hou, Wenyuan (Roger)</name>
</author>
<id>https://hdl.handle.net/1721.1/165321</id>
<updated>2026-04-07T03:05:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Oxide coarsening and agglomeration during melt-based additive manufacturing of dispersion-strengthened alloys
Hou, Wenyuan (Roger)
Dispersion-strengthened alloys densified with laser powder bed fusion, a melt-based additive manufacturing technique, have coarser dispersoids, lower dispersoid number densities, and a greater tendency to form slag compared to conventional wrought dispersion-strengthened alloys. These differences degrade creep and fatigue resistance, and mitigating their extent is critical to printing high-performance components for demanding high-temperature structural applications. In this work, experiments and modeling were used to assess how printing parameters, alloy chemistry, and powder feedstock collectively affect dispersoid evolution and slag formation. Laser powder bed fusion parameter studies were used to assess these effects in Ni-20Cr-Y₂O₃ feedstock produced via resonant acoustic mixing and then consolidated with systematic variations in laser parameters (power, speed), Y₂O₃ concentration, and Al content. Dispersoid structure was subsequently characterized using small angle neutron scattering. The finest dispersion achieved among fully dense (&gt;99.5% relative density) specimens has a mean dispersoid diameter of 21 nm and a number density of 230 μm⁻³. Dispersoid diameter was shown to decrease with the following adjustments: decreasing laser power, increasing scan speed, decreasing Y₂O₃ concentration, and keeping Al content below 0.3 wt%. Model predictions for dispersoid diameter were consistent with experimental values, and several key factors which influence the evolution of dispersoids were identified: convection-influenced thermal excursion, Y₂O₃ solubility, reaction with Al, nucleation, and diffusion-driven growth. The model also considers oxide dissolution over multiple melt cycles to establish bounds for slag-free printing of ODS alloys, showing a tradeoff between build rate and the quality of the oxide feedstock.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of lesion preparation-induced calcium fractures in vascular intervention for atherosclerotic disease: in silico assessment</title>
<link href="https://hdl.handle.net/1721.1/165318" rel="alternate"/>
<author>
<name>Sogbadji, Jonas</name>
</author>
<id>https://hdl.handle.net/1721.1/165318</id>
<updated>2026-04-07T03:05:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Impact of lesion preparation-induced calcium fractures in vascular intervention for atherosclerotic disease: in silico assessment
Sogbadji, Jonas
Atherosclerosis is the most common form of obstructive vascular disease and is the predominant cause of mortality worldwide. Endovascular interventions like balloon angioplasty and stent implantation have dominated as therapies with tremendous impact, and yet they are least effective in the most severe disease, especially heavily calcified lesions.

Intravascular lithotripsy (IVL) has been proposed to “prepare” lesions and optimize endovascular intervention, with the idea of removing and/or modifying lesions’ resistive stiffness so as to make balloon or stent placement more effective. Despite clinical enthusiasm, there remains a lack of understanding as to how this occurs and which lesions would be most amenable to and most affected by IVL.

The range and extent of lesions are substantial, presenting a formidable challenge in managing their modification. This complexity hampers the extrapolation of findings from both clinical and preclinical models. In silico models offer a means to examine diverse lesion morphologies and a range of lesion modifications to address these deficiencies, and in particular to understand whether there is a correlation between calcium morphology alteration and improvement of stenting outcomes. We built a computational platform to connect stenting outcomes to IVL-induced calcium modification. Three models were inspired by clinical optical coherence tomography image analyses, and a stenting procedure was simulated for a number of variations within each model. Results show that expansion of stents and treated arteries rose with the volume of tissue affected and excised. For one particular model, stent expansion reached a local maximum. In silico models provide a valuable perspective for considering complex vascular interventions, not only in simulating effects that are challenging to recapitulate in preclinical models but in helping develop a tool that can predict susceptible candidate lesions and help determine the ideal extent of lesion modification to optimize overall effect.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Certifiable Cooperative Localization for Underwater Navigation</title>
<link href="https://hdl.handle.net/1721.1/165275" rel="alternate"/>
<author>
<name>Morrison, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/165275</id>
<updated>2026-03-28T03:03:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Certifiable Cooperative Localization for Underwater Navigation
Morrison, John P.
Accurate underwater positioning remains one of the most significant obstacles to autonomous underwater vehicle (AUV) operations. Satellite-based navigation signals are unavailable underwater, so AUVs must dead-reckon using inertial sensors coupled with velocity or heading references. Due to random noise and variable biases in inertial sensor measurements, the AUV’s position uncertainty grows steadily over the course of the mission, but it can be reduced through range measurements to fixed or mobile references. The associated range-aided simultaneous localization and mapping (SLAM) problem is particularly challenging to solve with existing optimization methods. Individual range measurements provide limited geometric constraints on vehicle position and are subject to non-linear errors due to multi-path propagation. Attempts to optimize typical range-aided SLAM cost functions often return solutions which represent local, rather than global, minima, resulting in unpredictable vehicle behavior when used for closed-loop navigation. This thesis applies a recently developed certifiable optimization algorithm, Certifiably Correct Range-aided SLAM (CORA), to the problem of cooperative localization between AUVs. CORA leverages aspects of the range-aided SLAM problem structure to find solutions which can be certified as globally optimal. This method is integrated into a novel cooperative localization scheme, in which each vehicle maintains a locally held, periodically updated copy of the centralized, multi-agent factor graph. The cooperative localization framework presented here leverages acoustic modems for both range measurement and the sharing of sub-graphs through inter-vehicle communication. This approach was validated through extensive field trials using two modular, low-cost Spurdog AUVs equipped with WHOI Micromodem2 payloads.
Results from single and multi-vehicle deployments demonstrated that CORA substantially outperforms existing solvers when faced with poor landmark initialization and reduced observability as a result of real-world communication failures. The results presented here demonstrate the added value of coupling certifiable estimation with cooperative localization for multi-AUV localization problems, particularly in challenging, GPS-denied environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and Mitigating Small-Diameter Tool Wear in Nickel-Based Superalloy Machining</title>
<link href="https://hdl.handle.net/1721.1/165185" rel="alternate"/>
<author>
<name>Brush, Alexander Sparry</name>
</author>
<id>https://hdl.handle.net/1721.1/165185</id>
<updated>2026-03-17T03:06:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterizing and Mitigating Small-Diameter Tool Wear in Nickel-Based Superalloy Machining
Brush, Alexander Sparry
This thesis investigates tool failure in the micromachining of single-crystalline René N4 turbine blades coated with ceramic thermal barrier layers. The work in this thesis was supported by a partnership between the Massachusetts Institute of Technology (MIT) and GE Vernova. This thesis is complemented by the thesis written by Luke Placzek; together, these works offer a comprehensive case study in process analysis and manufacturing optimization.
This thesis begins with groundwork to document tool failure mechanisms and frequencies through photographic analysis, conducted alongside a study of historical data to analyze tool breakage frequency in the context of the turbine blade. Based on these insights, an Analysis of Variance (ANOVA) test followed by Tukey’s Honestly Significant Difference (HSD) test identified statistically significant differences in tool breakage rates across machines and rows. A detailed study of tool wear progression was conducted to better understand how small-diameter endmills wear when machining the nickel-based superalloy René N4. Drawing on all these findings, an updated tool path was created to optimize tool life.
This work lays the foundation for an improved machining strategy to reduce tool breakage in manufacturing turbine blades. Estimates show that the refined CAM strategies may reduce tool breakage by roughly 33 percent. Preliminary models estimate that implementing the suggested improvements will save GE Vernova 2.5 million dollars per year.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A cross-industry analysis using Q-Methodology for streamlining engineering workflow</title>
<link href="https://hdl.handle.net/1721.1/165181" rel="alternate"/>
<author>
<name>Gupta, Harshit</name>
</author>
<id>https://hdl.handle.net/1721.1/165181</id>
<updated>2026-03-17T03:06:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A cross-industry analysis using Q-Methodology for streamlining engineering workflow
Gupta, Harshit
Non-value added times arising from disconnected systems, legacy architecture, repeated iterations, product version mismatch, and manual processes remains one of the most persistent inefficiencies in modern design and manufacturing organizations which can be resolved by leveraging digital technology. Through this thesis, a framework has been laid out to understand and summarize the gaps among the various departments of an organization from the standpoint of information flow across the complete manufacturing workflow. The goal is to find gaps and pain points in the adoption of the ’Digital Thread’ with the objective of becoming software-driven enterprises. The objective is to identify opportunities to automate and optimize processes, and how information can be streamlined across departments. As a snapshot, the project investigates how digital transformation can bridge the gap between design and manufacturing, with a focus on concurrent engineering in high-mix, low-volume production, and high-volume, low-mix production environments. The research uses Q-methodology to understand how the perception of use of digital tools vary across industries and organizations, especially among vertically integrated and supplier dependent enterprises. Evaluation is done across different roles in an organization, ranging from executives and strategy teams to engineers, metrology specialists, and shop floor managers perceive current workflows, bottlenecks, and opportunities for improvement. The analysis reveals differences and similarities in interests and opinions to map the landscape of the current and growing needs across different industries and product portfolio. The results of the thesis can be used by participating teams to re-design workflow, communication and process plans and add flexibility through automation to the existing process. 
The thesis conclusion will also help PTC understand the capabilities their software currently lacks, which could be integrated into future iterations to serve their customers better through faster and better product development. The shift toward software-driven manufacturing is pressing amid increasing emphasis on re-industrialization, and the thesis contributes to this evolving discussion. The thesis ends with a discussion of potential avenues for exploration gathered from participants through qualitative interviews, which can serve as a roadmap for the future directions of this dynamic industry.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>User-Responsive Solutions for Cognitive Load Reduction in CAD Platforms</title>
<link href="https://hdl.handle.net/1721.1/165180" rel="alternate"/>
<author>
<name>Bai, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/165180</id>
<updated>2026-03-17T03:06:37Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">User-Responsive Solutions for Cognitive Load Reduction in CAD Platforms
Bai, Jane
Amid the evolution of cloud-based Computer-Aided Design (CAD) platforms, traditional educational approaches fail to address the diversity of cognitive obstacles that users across expertise levels and learning behaviors face. This thesis investigates whether behavior-adaptive CAD tools can reduce friction as hypothesized by Cognitive Load Theory (CLT) while enhancing skill development in modern engineering environments. A two-phase mixed-methods approach was employed that combined large-scale behavioral persona identification with controlled user testing. TF-IDF and PCA on the results of an MIT-wide survey identified four distinct behavioral archetypes corresponding to unique tool usage patterns and learning preferences independent of technical proficiency. A/B testing of three behavior-adaptive custom tools, which addressed workflow optimization, parametric knowledge retention, and context-aware passive modelling guidance, was conducted with novice and advanced users. Command logging captured behavioral features, and analysis revealed significant cognitive load reduction, improved workflow efficiency, and better-retained skill development. NLP of post-session survey responses revealed deeper conceptual engagement. From these results, a three-stage model progressing from friction reduction through behavioral analytics to continuous personalization optimization was developed to inform business applications. The findings demonstrate that effective CAD education requires addressing individual behavioral patterns rather than traditionally uniform skill-based approaches. Behavior-adaptive tools enhance learning pathways and workflows by preserving user agency over creative and parametric decisions during modelling while reducing cognitive friction.  &#13;
&#13;
Keywords: Computer-Aided Design (CAD), Cognitive Load Theory (CLT), Behavioral Analytics, Behavior-Adaptive Learning, Engineering Education
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smart Manufacturing of Desktop Fiber Extrusion Devices (FrED): Design Optimization and Digital Factory Implementation</title>
<link href="https://hdl.handle.net/1721.1/165178" rel="alternate"/>
<author>
<name>Ng, Yong</name>
</author>
<id>https://hdl.handle.net/1721.1/165178</id>
<updated>2026-03-17T03:06:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Smart Manufacturing of Desktop Fiber Extrusion Devices (FrED): Design Optimization and Digital Factory Implementation
Ng, Yong
This thesis presents the design and implementation of FrED Factory, a lab-scale, digitally integrated smart manufacturing environment developed to support scalable production and experiential learning in advanced manufacturing. Built around the Fiber Extrusion Device (FrED), a desktop analog to an industrial optical fiber draw tower, the project addresses both physical manufacturability and digital system coordination, aiming to simulate real-world Industry 4.0 practices in an educational setting.&#13;
To ensure repeatable and efficient production, key design components were optimized through tolerance analysis of laser-cut acrylic frames. Standard Operating Procedures (SOPs) were developed to guide consistent execution of processes including 3D printing, laser cutting, procurement, and assembly. A structured Bill of Materials (BOM) was implemented to manage subassemblies and support real-time inventory tracking. On the digital front, the FrED Factory leverages Tulip, a no-code Manufacturing Execution System (MES), to deploy dynamic work instructions, manage work orders, and monitor shop floor performance. Tulip’s EdgeMC hardware was used to integrate Internet of Things (IoT) devices for machine status tracking. MQTT protocols were applied to capture 3D printer activity via OctoPrint, and current sensors were deployed to automatically log Quality Control (QC) station usage.&#13;
The result is a modular, scalable, and data-rich smart factory environment that enables students to gain hands-on experience with modern manufacturing systems. For educators, the FrED Factory provides a tangible platform for teaching digital manufacturing, while industry professionals can view it as a blueprint for applying lean, connected workflows in small-scale, high-mix production environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Materials for Ion Transport Management in Anion Exchange Membrane Electrolyzers</title>
<link href="https://hdl.handle.net/1721.1/165177" rel="alternate"/>
<author>
<name>Aamer, Zara</name>
</author>
<id>https://hdl.handle.net/1721.1/165177</id>
<updated>2026-03-17T03:06:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Integrated Materials for Ion Transport Management in Anion Exchange Membrane Electrolyzers
Aamer, Zara
Electrochemical CO₂ separation systems leveraging anion exchange membranes (AEMs) offer significant energetic advantages over traditional bipolar membrane electrodialysis (BPMED), but suffer from hydroxide crossover, which reduces current efficiency (CE) and system performance. This work explores the transport dynamics of carbonate and hydroxide ions in AEM systems and introduces a hybrid PES-AEM bilayer membrane architecture to mitigate hydroxide crossover while preserving sufficient CO₂ recovery. We demonstrate that the bilayer system achieves a reduced relative transport factor (R = 1.4) and enables up to 3.8x improvement in CE compared to conventional AEM systems at realistic capture conditions. Further analysis reveals that transport properties in the least conductive domain of a multi-membrane system dominate overall behavior, allowing non-selective, low-conductivity materials such as porous PES to reduce hydroxide crossover effects. This study outlines key membrane material parameters influencing relative ionic transport and highlights the potential of hybrid architectures to unlock energy-efficient CO₂ electrochemical regeneration for direct air capture (DAC) integration.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dimensioning Defects with Monocular Vision in Automated Optical Inspection</title>
<link href="https://hdl.handle.net/1721.1/165176" rel="alternate"/>
<author>
<name>Boyd, Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/165176</id>
<updated>2026-03-17T03:06:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Dimensioning Defects with Monocular Vision in Automated Optical Inspection
Boyd, Logan
Automated optical inspection (AOI) systems are common tools for quality control in industrial manufacturing. AOI systems use robotic systems to load components, take images, and detect defects, often also characterizing the defects by size or class. Among various approaches to this machine vision, monocular systems are popular because they are cheap and simple to integrate while offering intuitive visualization. However, monocular vision alone lacks depth resolution and struggles to accurately dimension defects on 3D surfaces, especially if the imaged component’s pose is ambiguous. This paper presents a transparent, open-source, end-to-end image processing pipeline for dimensioning surface defects on industrial components using RGB images. The pipeline estimates component pose through a 2D-3D correspondence, segments defects with machine learning or image comparison techniques, then projects the component’s CAD mesh into the image to calculate the lengths of segmented defect instances. The pipeline was developed on a 3D-printed test object and demonstrated with each of three segmentation methods, yielding defect dimensions with average error between 0.6 and 1.2 mm.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Manufacturing Readiness Reviews and Fiber Extrusion Processes: A Two-Sided Approach to Product Maturity in Optics and Sensing</title>
<link href="https://hdl.handle.net/1721.1/165175" rel="alternate"/>
<author>
<name>Groll, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/165175</id>
<updated>2026-03-17T03:06:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Advancing Manufacturing Readiness Reviews and Fiber Extrusion Processes: A Two-Sided Approach to Product Maturity in Optics and Sensing
Groll, Matthew
This thesis engages with two important facets of the manufacturing discipline. The first half reflects on the ongoing efforts of the Charles Stark Draper Laboratory toward growing capabilities in production, with an emphasis on standardization for responsible organizational scaling. Specific work is presented toward advancing the Manufacturing Readiness Review (MRR) process, informed by staff interviews, in the form of recommended approaches and template materials for technology leads to employ during future manufacturing review cycles. The second half covers more hands-on, active product and process development work for MIT’s Fiber Extrusion Device (FrED). Findings relate both to improving the production process of the device and to the capabilities and observed properties of the extruded fiber. Inventory management recommendations are detailed for different production scenarios, and successful extrusion of acrylic, novel to the current studied capabilities of the FrED, is demonstrated. Observations on the resulting fiber’s optical properties are characterized along with a repeatable approach for doing so. While distinct, together these topics provide holistic insights into moving from concept to production.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaled Deployments of Seismic Penetrators to Measure&#13;
Stability of Antarctic Ice Shelves</title>
<link href="https://hdl.handle.net/1721.1/165174" rel="alternate"/>
<author>
<name>Steen, Parker</name>
</author>
<id>https://hdl.handle.net/1721.1/165174</id>
<updated>2026-03-17T03:06:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Scaled Deployments of Seismic Penetrators to Measure&#13;
Stability of Antarctic Ice Shelves
Steen, Parker
Ice shelves play a critical role in regulating the flow of Antarctic ice sheets and thereby global sea level rise. Recent ice shelf collapses are poorly understood due to a lack of seismic measurements of an ice shelf’s response to extreme environmental forces, such as ocean tides and tsunamis. Instrumenting ice shelves is a challenge due to transportation limitations, unpredictable weather, and dangerous crevassing. Air-dropped seismic penetrators have been developed in the Seismogeodetic Ice Penetrator (SGIP) project to alleviate manual installation pain points and access remote locations. The design of two SGIPs dropped into the Ross Ice Shelf in 2025 is reconsidered to determine how the design must and could evolve to deploy seismic sensors at the scale necessary to achieve science goals. The power budget for a remotely dropped penetrator that transmits all recorded data is determined. Power architectures with solar panels or a wind turbine are optimized, reducing the total height of a penetrator powered by primary batteries by 23% with Iridium and 29% with Starlink. A Barrowman aerodynamic model is evaluated against empirical results. The model is calibrated and used to consider penetrator drops from fixed-wing aircraft, with results suggesting that horizontal belly drops are optimal but that vertical aft or side drops are possible. A unit cost curve is found for scaled production volumes. Finally, scaled deployments with LC-130H and Basler aircraft are considered to optimize the aircraft cost of seismic data, finding both aircraft to be viable, but the LC-130H more cost effective.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reachability Prediction and Optimal Path Planning for Autonomous Ocean Vehicles</title>
<link href="https://hdl.handle.net/1721.1/165173" rel="alternate"/>
<author>
<name>Mule, Ellen M.</name>
</author>
<id>https://hdl.handle.net/1721.1/165173</id>
<updated>2026-03-17T03:06:17Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reachability Prediction and Optimal Path Planning for Autonomous Ocean Vehicles
Mule, Ellen M.
For intelligent ocean exploration and sustainable ocean utilization, the need for smart autonomous underwater vehicles (AUVs), surface craft, and small aircraft is rapidly increasing. Creating time-optimal navigation routes for these vehicles has wide-ranging applications, including ocean data collection, transportation and distribution of goods, naval operations, search and rescue, detecting marine pollution, ocean cleanup, conservation, and solar-wind-wave energy harvesting. In this thesis, we employ the Massachusetts Institute of Technology – Multidisciplinary Simulation, Estimation, and Assimilation Systems (MIT-MSEAS) time-optimal and hazard-time-optimal path planning theory and schemes based on exact Hamilton–Jacobi partial differential equations (PDEs) and Level Set methods. We apply this methodology to ocean gliders and floats during several real-time sea experiments—the Mini-Adaptive Sampling Test Run (MASTR) and Grand Adaptive Sampling Experiment (GRASE) in the Gulf of Mexico, and the New England Seamounts Acoustic (NESMA) experiment in the North Atlantic. Using the MIT-MSEAS multi-resolution ocean modeling and data assimilation system to provide deterministic and probabilistic ocean current forecasts, we compute time-reachable sets as well as time-optimal paths for a variety of ocean vehicle missions. The governing differential equations for reachability analysis and time-optimal path planning were numerically integrated in real time, forced by our large-ensemble ocean forecasts. We illustrated deterministic and probabilistic forward reachability analyses, glider recovery planning, time-optimal routing for gliders in distress, and planning of future glider and float deployments. Results show that the actual paths of gliders were contained within our reachable set forecasts and in accord with the dynamic reachability fronts. These forecasts were successfully employed for glider recovery and informed strategic decisions for future missions. 
Additionally, we demonstrated the ability to incorporate risk such as severe weather or vessel traffic into hazard-time-optimal path planning for simulated collaborative air-sea drone missions. Overall, the integration of data-driven multi-resolution ocean modeling with exact reachability theory and numerical schemes enables principled, operationally relevant path planning for diverse ocean missions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytical Model for Orbital Motion Under J₂</title>
<link href="https://hdl.handle.net/1721.1/165172" rel="alternate"/>
<author>
<name>Nedungadi Martinod, Marco Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/165172</id>
<updated>2026-03-17T03:06:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Analytical Model for Orbital Motion Under J₂
Nedungadi Martinod, Marco Antonio
As the number of operational satellites and debris objects in Earth orbit continues to accelerate, the ability to predict orbital trajectories with both accuracy and efficiency has become an indispensable capability. Numerical integration of the full Cartesian equations of motion offers generality but at high computational cost, while traditional analytical theories are efficient but often restricted by singularities in the classical orbital element set. Analytical formulations expressed in nonsingular elements can combine efficiency with global validity, and provide physical insight into the structure of orbital perturbations.&#13;
&#13;
This thesis develops a globally valid analytical model for orbital motion under the Earth's second zonal harmonic (J₂) in the modified equinoctial element (MEE) framework. The MEE set eliminates the singularities present in circular and equatorial orbits, allowing uniform treatment across all regimes. Two principal contributions are made. First, explicit first-order mean equations of motion are derived using a generalized averaging method applied to the J₂ disturbing function. The resulting system reduces to two planar rotations of the eccentricity and inclination vectors with constant rates, together with a secular drift in the true longitude. These equations reproduce Brouwer's classical secular results when mapped back to Keplerian elements, while retaining the nonsingular advantages of the MEE formulation. Second, closed-form mean--osculating transformations are obtained, enabling consistent recovery of short-period variations from the mean solution. These transformations allow a dual representation: efficient mean propagation combined with reconstruction of instantaneous orbital states.&#13;
&#13;
The analytical model is validated against high-fidelity Cartesian propagation across a set of representative orbit classes, including LEO, GEO, GTO, and Molniya orbits. In all cases, the mean element evolution predicted by the MEE-based theory shows close agreement with numerical integration. Over week-long propagation intervals, relative position errors remain small, while computational cost is substantially reduced compared to Cowell integration. These results establish the MEE-based analytical framework as both theoretically rigorous and practically effective, providing a foundation for accurate, efficient, and globally valid orbit prediction.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of Transonic Fan Response to Inlet Distortion</title>
<link href="https://hdl.handle.net/1721.1/165171" rel="alternate"/>
<author>
<name>Levy, Benjamin Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/165171</id>
<updated>2026-03-17T03:06:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterization of Transonic Fan Response to Inlet Distortion
Levy, Benjamin Adam
This thesis seeks to characterize transonic fan response to three-dimensional inlet flow distortion, which is a challenge of business jet propulsor-airframe integration. The specific context is ensuring fan operability in crosswind while retaining high cruise efficiency. A body force approach is used with a pre-processing workflow that simplifies the inputs of the body force model. This enables rapid assessment of changes to the fan work distribution, a step towards achieving potential benefits of fan-inlet co-optimization. The workflow is used to explore the sensitivities of fan response to an applied non-uniformity, to fan work distribution, and to bulk swirl. Incidence, as a metric for evaluating distortion, is found to offer an improved assessment of fan operability trends compared to metrics that only depend on the stagnation pressure distribution. Such metrics are not found to capture sensitivities of fan response to increasing circumferential extent of the stagnation pressure defect. Sensitivity of the local response of the fan in the low stagnation pressure region to the radial work distribution is dominated by effects seen in 2D distortions: steeper local pressure ratio characteristics increase the attenuation of the stagnation pressure non-uniformity. However, such designs generate more severe stagnation pressure non-uniformities downstream of the rotor at other spanwise positions due to radial variations in the distortion pattern and rotor pressure rise. The effect of bulk swirl on the characteristic slope produces coupling of stagnation pressure and swirl, where combined counter-swirl and stagnation pressure distortion is found to produce more severe fan operability penalties than the superposition of each separate effect. The characterization of inlet distortion response contributed by this thesis is a necessary step in optimizing the propulsor inlet design with constraints on off-design operability.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Sketch to CAD Code: Multimodal AI for Controllable Design Generation</title>
<link href="https://hdl.handle.net/1721.1/165165" rel="alternate"/>
<author>
<name>Man, King Yiu Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/165165</id>
<updated>2026-03-17T03:06:38Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From Sketch to CAD Code: Multimodal AI for Controllable Design Generation
Man, King Yiu Brandon
Generative artificial intelligence (AI) has demonstrated transformative potential in creative and technical fields, yet its application to engineering design remains underdeveloped. Unlike domains where AI outputs can be directly consumed, engineering design demands integration across heterogeneous tools, multimodal data, and highly structured workflows. This thesis develops and evaluates AI-driven approaches for enabling copilot-style systems that assist engineers throughout the early stages of design, where decisions have the greatest impact on cost and performance. We identify and address three central challenges: the need for user control over abstract generative processes, the scarcity of high-quality engineering datasets, and the complexity of integrating AI into diverse design toolchains. Our first contribution is Sketch2Prototype, a multi-stage framework that transforms conceptual sketches into text, images, and manufacturable 3D meshes. Evaluated on a dataset of 1,087 sketches, the system produces more diverse and manufacturable prototypes than direct sketch-to-3D methods, while enabling iterative refinement through a controllable intermediate text stage. Our second contribution is VideoCAD, a synthetic dataset of over 41,000 annotated CAD modeling videos—up to twenty times longer in action horizon than prior UI agent datasets—capturing pixel-precise, long-horizon interactions in a professional CAD environment. We benchmark state-of-the-art behavior cloning models and large language models on VideoCAD, and introduce VideoCADFORMER, a transformer-based architecture that achieves superior performance on long-horizon CAD action prediction. Finally, we present VisionCAD, a fine-tuned Large Language Model (LLM) that constructs CAD Generation code from point cloud and image data, trained with a dataset of over two million image, point cloud, and CADQuery triplets. 
Together, these contributions demonstrate that generative AI, multimodal learning, and large-scale dataset generation can be combined to accelerate design exploration, improve manufacturability, and integrate seamlessly into engineering workflows. By addressing both the data and workflow bottlenecks, this work lays the foundation for AI copilots that enhance productivity, creativity, and precision in engineering design.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FrED and the FrED Factory:&#13;
The MIT Approach to Designing a Smart Learning Factory</title>
<link href="https://hdl.handle.net/1721.1/165164" rel="alternate"/>
<author>
<name>Bradley, Russel</name>
</author>
<id>https://hdl.handle.net/1721.1/165164</id>
<updated>2026-03-17T03:06:07Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">FrED and the FrED Factory:&#13;
The MIT Approach to Designing a Smart Learning Factory
Bradley, Russel
It’s hard to learn manufacturing without being in a factory. Existing manufacturing education approaches—including educational kits, machine shops, and learning factories—often fail to capture the natural variability and the flow of products, processes, and people inherent in volume production, which drive the dynamics of real manufacturing systems. This thesis describes the design and development of the learning factory at MIT, also known as the FrED Factory. The FrED Factory is a fully operational factory on campus that produces and delivers a manufacturing education kit, the Fiber Extrusion Device (FrED), while simultaneously delivering education. The combination of a learning factory producing learning products creates a unique ecosystem of manufacturing education. The FrED and FrED Factory ecosystem has impacted learners with authentic learning experiences. Project-based learning experiences are delivered through groups of students working to develop FrED and the FrED Factory. The products of this development, the learning device and learning factory, amplify impact by serving as platforms for manufacturing education. The FrED and FrED Factory initiative has impacted learners from K-12, undergraduate, graduate, and professional education at MIT and beyond.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing CMOS Devices for Use in Future X-Ray Astrophysics Instruments</title>
<link href="https://hdl.handle.net/1721.1/165162" rel="alternate"/>
<author>
<name>Lupo, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/165162</id>
<updated>2026-03-17T03:06:05Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterizing CMOS Devices for Use in Future X-Ray Astrophysics Instruments
Lupo, Jonathan
Complementary Metal-Oxide-Semiconductor (CMOS) detectors and Charge-Coupled Devices (CCDs) are the two primary imaging technologies used in optical and X-ray detection. Both rely on pixel arrays that convert incoming photons into electrical charge but differ in readout architecture: CCDs shift charge across the array to a common output node, while CMOS devices incorporate amplifiers and readout circuitry at each pixel. CCDs have long been favored in astronomy for their high sensitivity, low noise, and deep depletion regions that enhance detection of higher-energy X-rays. However, they suffer from slow readout, high power demands, and susceptibility to radiation-induced charge transfer losses. CMOS detectors, in contrast, offer fast readout, low power consumption, and increased resilience in radiation environments, while enabling on-chip processing and high time resolution. These advantages make CMOS increasingly attractive for astrophysical applications, particularly in capturing faint, transient, or rapidly varying X-ray phenomena. This work evaluates the potential of two modified commercial CMOS detectors from Sony’s uEye SE series, the IMX226 and IMX662, for low- to intermediate-energy X-ray astrophysics. To enhance sensitivity, the optical windows were removed and, for the IMX226, the microlens array was eliminated to reduce absorption at low energies. The detectors were characterized at the MIT Kavli Institute X-ray Detector Lab, with performance evaluated in terms of X-ray response, readout noise, pixel-to-pixel gain variation, linearity, dark current, and contributions to overall energy resolution. Detector testing used X-ray emission lines from Polonium-210 and Iron-55 at 277 eV (C), 677 eV (F), 5.9 keV (Mn Ka), and 6.4 keV (Mn Kb). Measurements were performed in a vacuum chamber to minimize absorption, with optical linearity tested separately on an optical assembly setup using an integrating sphere. 
Both detectors showed strong potential as low-cost X-ray sensors, with energy resolutions approaching theoretical limits across key emission lines. Readout noise was low (2.28 e⁻ for IMX226, 3.54 e⁻ for IMX662), gain variation was minimal when measured (≤0.32%), and linearity remained stable with errors below 0.6% across high- and low-energy regimes. Dark current was negligible for the IMX662 and modest for the IMX226 (0.57 e⁻/pixel/sec). While readout noise and gain variation explain much of the measured energy resolution, additional unaccounted noise was observed, indicating that further optimization is required.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vision-Language Models for Engineering Design:&#13;
From Technical Documentation Benchmarking to CAD&#13;
Generation</title>
<link href="https://hdl.handle.net/1721.1/165151" rel="alternate"/>
<author>
<name>Doris, Annie Clare</name>
</author>
<id>https://hdl.handle.net/1721.1/165151</id>
<updated>2026-03-17T03:05:57Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Vision-Language Models for Engineering Design:&#13;
From Technical Documentation Benchmarking to CAD&#13;
Generation
Doris, Annie Clare
Engineering product development is slowed by two bottlenecks: interpreting technical requirements and producing accurate, editable computer-aided design (CAD) models. This thesis evaluates and advances vision-language models (VLMs) – large-scale foundation models that process both text and images – to support engineers in these time-consuming tasks. While benchmarks exist for evaluating VLM performance in areas such as medical imaging, optical character recognition, and robotics, benchmarks for engineering design tasks remain scarce. We develop DesignQA, which remedies this problem by combining visual data, textual design requirements, CAD images, and engineering drawings in a benchmark. It enables us to rigorously quantify the VLMs’ abilities to understand and apply engineering requirements in technical documentation. Developed with a focus on real-world engineering challenges, DesignQA uniquely combines visual data – including textual design requirements, CAD images, and engineering drawings – derived from the Formula SAE student competition. The benchmark features automatic evaluation metrics and is divided into segments – Rule Comprehension, Rule Compliance, and Rule Extraction – based on tasks that engineers perform when designing according to requirements. We evaluate state-of-the-art models (at the time of writing) like GPT-4o, GPT-4, Claude-Opus, Gemini-1.0, and LLaVA-1.5 against the benchmark. Our study uncovers the existing gaps in VLMs’ abilities to interpret complex engineering documentation, including the inability to reliably retrieve relevant rules from the Formula SAE documentation and challenges in analyzing engineering drawings. These findings underscore the need for VLMs that can better handle the multifaceted questions characteristic of design according to technical documentation. After establishing an engineering-design-specific benchmark, we investigate whether additional training can improve VLM performance on engineering tasks. 
In particular, we address CAD generation from images, a problem motivated by scenarios such as sketch-to-CAD workflows, recovery of lost files, or cases where only an image is available due to privacy concerns. While recent developments in AI-driven CAD generation show promise, existing models are limited by incomplete representations of CAD operations, an inability to generalize to real-world images, and low output accuracy. We develop CAD-Coder, an open-source VLM fine-tuned to generate CadQuery code directly from images, trained on GenCAD-Code (163,671 image–code pairs). On a 100-sample test subset, CAD-Coder outperforms strong VLM baselines (e.g., GPT-4.5, Qwen2.5-VL-72B), achieving a 100% valid-syntax rate and the highest 3D-solid similarity. It also shows early generalization, producing CAD code from real photographs and executing operations (e.g., filleting) not seen during fine-tuning. The performance and adaptability of CAD-Coder highlight the potential of VLMs fine-tuned on design-specific tasks to streamline workflows for engineers. We conclude with directions for design-specific VLMs, including synthetic-data pipelines to improve dataset coverage and reinforcement-learning strategies that exploit objective geometric rewards. Together, DesignQA and CAD-Coder indicate a practical path toward VLM assistants that accelerate requirement-aware engineering design and image-to-CAD workflows. All code, data, and trained models are released publicly to support reproducibility and future research.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-Augmented CAD Onboarding: A Personalized Approach to Reducing Learning Friction</title>
<link href="https://hdl.handle.net/1721.1/165144" rel="alternate"/>
<author>
<name>Aiouche, Nada</name>
</author>
<id>https://hdl.handle.net/1721.1/165144</id>
<updated>2026-03-17T03:06:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">AI-Augmented CAD Onboarding: A Personalized Approach to Reducing Learning Friction
Aiouche, Nada
Autodesk Fusion is a leading cloud‑based CAD platform, yet new users often face steep learning curves due to scattered resources, inconsistent guidance, and a lack of personalization. This thesis addresses these challenges and proposes an adaptive AI assistant, embedded within Fusion, as a potential solution to streamline onboarding, reduce search time, surface hidden tools, and deliver guidance tailored to the user’s learning style. By centralizing learning support within the design environment, the proposed system aims to reduce cognitive load and keep users focused on productive work rather than on searching for help. Based on surveys, interviews, and controlled user testing comparing tasks with and without simulated AI support, the study suggests that personalized, context‑aware assistance can improve task flow, reduce frustration, and provide particular benefits for beginners. Findings indicate that such a solution not only accelerates skill acquisition but also supports long‑term engagement by making the early stages of learning more intuitive and less discouraging. Finally, this thesis outlines practical next steps Autodesk can take to develop, integrate, and validate such a system to realize its full potential in accelerating adoption, improving retention, and enhancing the overall user experience.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of a Low-Cost Bioreactor System for Synechocystis sp. PCC 6803: Integrated Cultivation, Lysis, and Filtration for Sustainable Glucose&#13;
Harvesting</title>
<link href="https://hdl.handle.net/1721.1/165143" rel="alternate"/>
<author>
<name>Baho, Ingie</name>
</author>
<id>https://hdl.handle.net/1721.1/165143</id>
<updated>2026-03-17T03:06:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design and Implementation of a Low-Cost Bioreactor System for Synechocystis sp. PCC 6803: Integrated Cultivation, Lysis, and Filtration for Sustainable Glucose&#13;
Harvesting
Baho, Ingie
This thesis describes the design, modeling, and fabrication of a three-part bioreactor and biomass processing system built to cultivate Synechocystis sp. PCC 6803 and extract its intracellular glucose. The resulting glucose can support sustainable biomanufacturing for diverse downstream applications, including serving as a feedstock for K. rhaeticus to produce cellulose, as a precursor for biofuel production, or as an ingredient in food supplements. The system incorporates a photobioreactor, a lysis module for acid and ultrasound-based cell disruption, and a pressure-driven filtration setup. The photobioreactor was equipped with pH, dissolved oxygen, and temperature probes, and optical density was continuously monitored using a custom-built module. The lysis unit contained an ultrasonic probe and pH and temperature probes, in addition to pumps connected to acid and base chambers. The filtration unit was connected to a compressed air tank and designed with a pressure control valve, safety valve, and syringe filter. Glucose concentration was quantified offline using high-performance liquid chromatography (HPLC). Various light regimes were tested, and under an incident light intensity of approximately 400 µmol m⁻² s⁻¹ at a color temperature of 6500 K, cultures were shown to reach a biomass productivity of 90 mg L⁻¹ day⁻¹, with a specific growth rate of 0.166 day⁻¹ and glucose concentrations up to 5.08 mg L⁻¹. Innovative culture strategies were explored at a small scale, including the cultivation of Synechocystis sp. PCC 6803 in spent K. rhaeticus media to promote economic and sustainable media recycling. When supplemented with additional nutrients, the spent media supported Synechocystis growth up to an OD680 of 0.5. To further characterize the photobioreactor and expected growth based on environmental parameters, both mathematical and machine learning models were built. 
While the mathematical models were not experimentally validated, the machine learning model achieved strong predictive accuracy, with a mean absolute error of 0.0009±0.0003 over 10-fold cross-validation. The system demonstrates a cost reduction of up to 65% compared to commercial alternatives.
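The 10-fold cross-validation protocol behind the reported error can be sketched as follows; the least-squares model here is a placeholder for the thesis's actual learner, and any fit/predict pair with the same shape would drop in.

```python
import numpy as np

def kfold_mae(fit, predict, X, y, k=10, seed=0):
    """Mean absolute error under k-fold cross-validation: shuffle the
    indices, split into k folds, train on k-1 folds, score on the
    held-out fold, and report the mean and spread of per-fold MAEs."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    maes = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        params = fit(X[train], y[train])
        maes.append(float(np.mean(np.abs(predict(params, X[test]) - y[test]))))
    return float(np.mean(maes)), float(np.std(maes))

# Placeholder model: ordinary least squares (illustrative only)
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, X: X @ w
```

Reporting mean ± spread across folds, as the abstract does, guards against a single lucky train/test split overstating accuracy.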
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Passive, Air-Based Squeeze Film Damping for Kinematic Couplings</title>
<link href="https://hdl.handle.net/1721.1/165142" rel="alternate"/>
<author>
<name>Gazdus, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/165142</id>
<updated>2026-03-17T03:06:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design of Passive, Air-Based Squeeze Film Damping for Kinematic Couplings
Gazdus, Hannah
In precision machine design, kinematic couplings are a common choice for aligning and fixturing parts due to their high repeatability. Their centering ability, along with their high stiffness from Hertzian contact, enables kinematic couplings to minimize errors. Although kinematic couplings are applied in dynamic situations such as machining, they are currently designed using only static methods with little regard to vibration-induced error. Machine designers thus do not fully understand how kinematic couplings will behave in situ and do not take advantage of easily applicable damping methods to minimize vibration-induced error. This thesis provides a framework for dynamically modeling kinematic couplings with air-based squeeze film damping. This method of damping takes advantage of the inherent air layer between the top and bottom plates of a kinematic coupling; because it is so simple to leverage, this work advocates for the inclusion of such damping in every kinematic coupling. This work demonstrates that squeeze film damping can increase a coupling’s damping by more than 100×, significantly raising dynamic stiffness and reducing vibration-induced error. This work’s design principles will allow for more rigorous and thorough development of kinematic couplings, which is especially necessary for applications where vibration-induced errors must be minimized.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D-Printed Tangential Flow Filtration and High-Throughput Microfluidic Electroporation for Scalable Microbial Processing</title>
<link href="https://hdl.handle.net/1721.1/165132" rel="alternate"/>
<author>
<name>Cui, Yuhe</name>
</author>
<id>https://hdl.handle.net/1721.1/165132</id>
<updated>2026-03-17T03:06:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">3D-Printed Tangential Flow Filtration and High-Throughput Microfluidic Electroporation for Scalable Microbial Processing
Cui, Yuhe
Bacterial transformation via electroporation is fundamental to modern biotechnology applications including therapeutic protein production, biomaterial synthesis, and agricultural enhancement. However, conventional electroporation workflows face critical bottlenecks that limit their scalability and industrial applicability, chiefly inefficient electrocompetent cell preparation and low-throughput transformation processes.&#13;
This thesis presents two complementary 3D-printed technologies that independently address these limitations for scalable microbial processing. First, a novel spiral channel tangential flow filtration (TFF) system was developed that replaces conventional centrifugation-based methods for preparing electrocompetent cells. The spiral geometry enhances mixing dynamics and enables continuous washing of bacterial cultures, dramatically reducing preparation time while improving cell recovery compared to traditional centrifugation and membrane filtration approaches that suffer from time constraints, labor intensity, and membrane fouling.&#13;
Second, a 3D-printed microfluidic electroporation platform featuring geometry-optimized electric field distribution was designed. Building upon established M-TUBE principles, the bilaterally converged channel architecture creates localized field enhancement at reduced applied voltages, enabling high-efficiency transformation of larger cell volumes. This design overcomes the throughput limitations of conventional cuvette-based systems that require manual handling and process only small volumes.&#13;
Both technologies leverage additive manufacturing to create cost-effective alternatives to traditional protocols. Computational fluid dynamics simulations and experimental validation demonstrate significant improvements in processing time, transformation efficiency, and throughput compared to conventional methods. These complementary technologies demonstrate the potential for future integration into a complete workflow for scalable microbial transformation, with promising implications for broader implementation in industrial biotechnology, synthetic biology, and large-scale research applications.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Digital Threads Across Engineering Organizations: A Q-methodology Analysis of Challenges with a Novel Factor Selection Approach</title>
<link href="https://hdl.handle.net/1721.1/165131" rel="alternate"/>
<author>
<name>Kong, Kanglin</name>
</author>
<id>https://hdl.handle.net/1721.1/165131</id>
<updated>2026-03-17T03:06:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Integrating Digital Threads Across Engineering Organizations: A Q-methodology Analysis of Challenges with a Novel Factor Selection Approach
Kong, Kanglin
This research investigates the integration of digital tools across design, manufacturing, and quality management functions through a cross-industry analysis informed by Q-methodology. Despite considerable investments in digital transformation, many manufacturing organizations face persistent gaps between design intent and production execution, exacerbated by fragmented digital threads, limited adoption of Model-Based Definition, and the continued reliance on manual, error-prone workflows. Through qualitative interviews and quantitative Q-sort analyses conducted among participants across diverse industries, this study identifies key patterns of pain points and solutions perceived differently by stakeholder groups. It reveals insights into how variations in industry characteristics influence digital maturity, particularly regarding the adoption and integration of Product Lifecycle Management, Model-Based Enterprise, and Design for Manufacturability practices. Findings underscore the critical role of enhancing digital thread connectivity, ensuring early integration of manufacturability feedback, embedding automated cost analytics, and facilitating supplier readiness for full MBD adoption. Furthermore, the research highlights the necessity of strategic organizational change management alongside technological advancements. This work provides a nuanced understanding of organizational perceptions and identifies tangible pathways toward a cohesive, software-driven approach for bridging gaps among engineering functions, thereby informing future strategies for manufacturers and software vendors alike.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-in-the-Loop Task Directed Exploration and&#13;
Planning in Unknown Environments</title>
<link href="https://hdl.handle.net/1721.1/165129" rel="alternate"/>
<author>
<name>Jois, Aneesh Ramesh</name>
</author>
<id>https://hdl.handle.net/1721.1/165129</id>
<updated>2026-03-17T03:06:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Human-in-the-Loop Task Directed Exploration and&#13;
Planning in Unknown Environments
Jois, Aneesh Ramesh
For robots to perform everyday tasks autonomously, as humans do, they must be able to perceive, explore, and act in novel environments while pursuing high-level goals. This capability is known as task-directed exploration and is essential in domains ranging from household assistance robots to disaster response. However, existing approaches each fall short of solving the task-directed exploration problem. Classical symbolic planners require brittle, hand-crafted domain models and assume complete knowledge of the environment. POMDP-based formulations provide a principled approach to planning under uncertainty but are computationally intractable in large, open-world settings. Foundation models such as large language models (LLMs) and vision-language models (VLMs) offer strong commonsense knowledge and pattern-recognition capabilities but lack the structured spatial grounding and adaptivity required for embodied execution. This thesis presents a unified framework that closes this gap by tightly integrating foundation models with a real-time semantic mapping and planning stack. The system consists of four components: (i) a dual-layer perception module that combines a deterministic 3D scene graph with a frontier-based probabilistic belief field, using vision-language models for object labeling and large language models for room classification; (ii) a symbolic task planner that converts natural-language instructions into high-level activity plans; (iii) an exploration executive that selects informative waypoints, monitors task progress, and dynamically triggers replanning and human queries; and (iv) a unified value-of-information (VoI) metric that governs both autonomous exploration and selective human interaction, enabling the robot to reason about uncertainty and task utility in a principled way. 
Demonstrated in realistic simulated environments, the proposed framework allows agents to ground natural language goals in their surroundings, explore efficiently, reason over partial knowledge, and adapt plans as new information is acquired, while involving the user only when doing so meaningfully improves performance.
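A VoI-style action gate of the kind described above can be sketched in a few lines. This is a minimal illustration under assumed fields (info_gain, utility, cost), not the thesis's actual metric: each candidate action, including querying the human, is scored by its expected information gain weighted by task utility, minus its cost.

```python
def best_action(actions):
    """Pick the action with the highest value of information (VoI):
    expected information gain times task utility, minus action cost.
    Asking the human is just another candidate action, so the robot
    queries the user only when doing so outscores exploring on its own."""
    def voi(a):
        return a["info_gain"] * a["utility"] - a["cost"]
    return max(actions, key=voi)

candidates = [
    {"name": "explore frontier A", "info_gain": 0.6, "utility": 1.0, "cost": 0.2},
    {"name": "explore frontier B", "info_gain": 0.3, "utility": 1.0, "cost": 0.1},
    {"name": "ask human",          "info_gain": 0.9, "utility": 1.0, "cost": 0.8},
]
print(best_action(candidates)["name"])  # explore frontier A
```

Raising the cost assigned to human queries makes the agent more autonomous; lowering it makes the agent ask more often, which is the single knob the unified metric exposes.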
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-Based Design of an Indirectly Irradiated Thermochemical Hydrogen Production Reactor Capable of Radiant Heat Recovery</title>
<link href="https://hdl.handle.net/1721.1/165125" rel="alternate"/>
<author>
<name>Scott, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/165125</id>
<updated>2026-03-17T03:05:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Model-Based Design of an Indirectly Irradiated Thermochemical Hydrogen Production Reactor Capable of Radiant Heat Recovery
Scott, Peter
Renewable/green hydrogen is of great interest as an alternative fuel for decarbonizing sectors such as shipping, aviation, chemicals, and heavy industry. The high cost of green hydrogen produced through electrolysis, a mature off-the-shelf technology, has led researchers to explore alternative water-splitting methods, including thermochemistry, which can also be used for co-splitting of H2O and CO2 to produce syngas that can be converted to liquid fuels. Moreover, the process can operate on stored high-temperature heat, making 24/7 operation possible. This thesis focuses specifically on the two-step thermochemical redox cycle using non-stoichiometric metal oxides. While the process has been demonstrated at the lab and pilot scales, efficiencies have so far been limited by the large temperature swing between the reduction and oxidation conditions, resulting in high sensible heat losses. In our previous work, we introduced the Reactor Train System (RTS), a concept that features multiple identical, individually sealed, indirectly irradiated, metal-oxide-containing reactors that move between a hot reduction zone and a cooler oxidation zone, engaging in counterflow radiative heat recovery in between. Prior modeling of the RTS, which revealed promise for high efficiency and heat recovery effectiveness, used either zero- or one-dimensional models of the RTS reactors and assumed a basic reactor design that featured a sapphire window for radiative heat transfer between the source and the redox material. A detailed conceptual design and higher-fidelity modeling of the RTS reactors are the focus of this thesis, which comprehensively documents the model-based iterative design process of a novel thermochemical hydrogen reactor with unusually challenging functional requirements, from initial concept to early prototyping. 
The primary engineering challenge is that the structural pressure vessel also acts as the heat transfer interface, and must serve both purposes while undergoing extreme thermal cycling. The original windowed reactor concept is first investigated using a radiative heat transfer model; findings of unfavorable heat losses and practicality concerns guided us toward a reactor design with a fully ceramic vessel that also acts as the heat transfer interface. A more advanced thermomechanical model was then used to select a geometry, which we call the Multi-Tubular Radiative Recovery Reactor (MiTR3), instead of one larger ceramic vessel, and to study the design parameters of the MiTR3, such as tube wall thickness, with critical insight into the stress and failure probability of the ceramic tubes. Besides its mechanical strength and favorable thermal properties, this design is scalable and adaptable to different operating conditions and redox materials. Moreover, it utilizes easy-to-assemble, off-the-shelf components. We then further augmented our modeling capabilities with multidimensional, time-dependent thermo-fluid and chemical reaction physics, incorporating both reduction and oxidation kinetics into the conservation equations for full-cycle simulations using ceria as the metal oxide. This enabled further study of the impact of important parameters, especially operational parameters such as redox material loading and form factor, gas flow rates, etc., and a deeper understanding of realistic system-level efficiencies and productivities that take into consideration the impact of auxiliary components such as vacuum pumping and gas separation technologies on both. Finally, our ongoing experimental work with a benchtop-scale, single-tube reactor prototype aimed at derisking components and validating modeling results is presented, alongside plans for future prototyping efforts.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Probabilistic Dynamically-Orthogonal Primitive Equation Forecasts for the Gulf of Mexico</title>
<link href="https://hdl.handle.net/1721.1/165123" rel="alternate"/>
<author>
<name>Rodriguez, Victor Alonso</name>
</author>
<id>https://hdl.handle.net/1721.1/165123</id>
<updated>2026-03-17T03:06:36Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards Probabilistic Dynamically-Orthogonal Primitive Equation Forecasts for the Gulf of Mexico
Rodriguez, Victor Alonso
Forecasting circulation in the Gulf of Mexico requires an explicit treatment of uncertainty associated with the Loop Current and its eddies, whose geometry and timing can fluctuate irregularly and lead to chaotic deterministic forecasts. Building on the dynamically orthogonal (DO) methodology for evolving low-rank stochastic representations and on efficient DO numerical schemes for geophysical fluid flows, this thesis develops and assesses massive probabilistic Primitive Equation (PE) hindcasts for the Gulf using the Dynamically Orthogonal Primitive Equations (DO–PE) framework as implemented for realistic ocean dynamics in previous MIT-MSEAS studies. The workflow extracts a time-dependent stochastic subspace from a balanced MIT-MSEAS PE ensemble via singular-value decomposition, represents the initial non-Gaussian coefficient cloud with Gaussian mixture models, and subsequently evolves the DO–PE mean, modes, and coefficients under dynamics, numerics, and forcings consistent with the MIT-MSEAS PE modeling system. A 12-day hindcast simulation experiment spanning 28 May–8 June 2015 quantifies skill and convergence across truncations, with weak-type tests (means, standard deviations, kernel-density marginals) and strong-type tests against matched full-order realizations started from identical initial states. Consistent patterns emerge. Uncertainty concentrates along the Loop Current jet, the Yucatán inflow, and eddy peripheries. For weak convergence, as the retained dynamic modes increase from 15 to 60, standard-deviation maps sharpen and expand coherently along these dynamically active features, and the statistics indicate convergence with the normalized RMSEs for both mean and standard deviation fields decreasing in a largely monotonic fashion. At depth and for sea-surface height, late-time mean-error behavior can become mildly non-monotonic, indicating sensitivity to mode allocation among variables. 
In strong-convergence experiments, DO–PE reconstructions initialized at coefficient quantiles closely track the corresponding full-order trajectories: pathwise misfits remain modest, organize along shear zones, and their RMSE time series lie below persistence and within the envelopes implied by the weak-type spread, reinforcing that truncation primarily filters small-scale content while preserving trajectory-level evolution over the 10–12-day window. Together, these results demonstrate a practical, reproducible pipeline for massive probabilistic forecasting in the Gulf of Mexico that respects PE dynamics while quantifying and localizing forecast uncertainty in flow-dependent ways (details, configuration, and figures in Chapters 3–4). This thesis also introduces dynamic web pages for the interactive visualization of DO–PE output, facilitating the inspection of mean fields, modes, and standard deviations over time in Chapter 5.
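The subspace-extraction step described above (decomposing an ensemble into a mean, orthonormal modes, and stochastic coefficients via singular-value decomposition) can be sketched compactly. The shapes and names below are illustrative assumptions; the actual DO–PE implementation operates on full PE state vectors with its own storage layout.

```python
import numpy as np

def extract_do_subspace(ensemble, n_modes):
    """Rank-reduced decomposition of an ensemble via SVD, in the spirit
    of the DO initialization described in the text (illustrative sketch).

    ensemble: (n_members, n_state) array of state realizations.
    Returns (mean, modes, coeffs) with orthonormal rows in `modes`,
    such that ensemble is approximately mean + coeffs @ modes.
    """
    mean = ensemble.mean(axis=0)
    anomalies = ensemble - mean
    # Right singular vectors of the anomaly matrix span the dominant
    # directions of ensemble spread (the stochastic subspace).
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    modes = vt[:n_modes]          # (n_modes, n_state), orthonormal rows
    coeffs = anomalies @ modes.T  # (n_members, n_modes) coefficients
    return mean, modes, coeffs
```

Increasing n_modes (15 to 60 in the experiments above) tightens the reconstruction, which is exactly the truncation-convergence behavior the weak-type tests quantify; the coefficient cloud returned here is what the Gaussian mixture models would then represent.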
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization and Control of Sorption-Based Atmospheric Water Harvesting Devices</title>
<link href="https://hdl.handle.net/1721.1/165121" rel="alternate"/>
<author>
<name>Čas, Jan Luka</name>
</author>
<id>https://hdl.handle.net/1721.1/165121</id>
<updated>2026-03-17T03:06:12Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimization and Control of Sorption-Based Atmospheric Water Harvesting Devices
Čas, Jan Luka
Water scarcity is a global challenge, with only one-third of the world’s population having consistent access to clean drinking water. Atmospheric water harvesting is a promising approach owing to the significant amount of water, i.e., 13,000 trillion liters, present in the atmosphere. While significant recent research has focused on developing innovative sorbent materials, components, and system designs, there is limited understanding of how to optimize device performance through active control. Selection of key operating parameters, specifically desorption temperature and cycle length, has relied on experimental trial and error. In this thesis, model predictive control (MPC) was used for the first time to dynamically optimize power input and cycle time in atmospheric water harvesting devices. Real-time optimization using a custom-defined cost function was achieved based on a simplified heat and mass transfer model. The model allowed the cost function to be based on water output and therefore eliminated the need for a priori setpoint definition. Through a modular, customizable software and hardware stack, the device demonstrated reliability and maintainability while preserving user interaction. MPC was evaluated against five distinct sorbent isotherm types, using three distinct operating modes: maximizing water production, maximizing operational profit, and increasing thermal efficiency. All modes outperformed a constant temperature setpoint by dynamically determining the appropriate end time of the cycle, which varied by up to 10,000 s depending on the material. Furthermore, the controller was able to increase thermal efficiency by up to 3 percentage points compared to the reference by dynamically tapering power input to match water production. Experimental validation was performed with a device built by the Device Research Laboratory. 
The results showed excellent agreement between measured water output and real-time prediction, which provides a viable strategy for future controller deployment. This work paves the way for more sophisticated device operation through real-time optimization of power input and cycle length and highlights a modular software and hardware design to realize high-performance atmospheric water harvesting devices.
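The MPC idea above (optimizing power input against a water-output-based cost rather than tracking a fixed setpoint) can be sketched with a toy model. Everything here is an illustrative assumption: the first-order thermal dynamics, the desorption threshold, and the brute-force enumeration standing in for a real optimizer; none of it is the thesis's actual heat and mass transfer model.

```python
import itertools

def step(temp, water_left, power, dt=1.0):
    # Toy dynamics: temperature relaxes toward ambient (25 C) and rises
    # with power; water desorbs once the sorbent is hotter than ~60 C.
    temp = temp + dt * (0.5 * power - 0.1 * (temp - 25.0))
    rate = max(temp - 60.0, 0.0) * 0.001 * water_left
    return temp, water_left - rate, rate

def mpc_choose_power(temp, water_left, horizon=5,
                     powers=(0.0, 50.0, 100.0), energy_price=0.02):
    """Return the first move of the power sequence that maximizes
    (water harvested - energy cost) over the horizon; brute-force
    enumeration over a small power grid stands in for the solver."""
    best_gain, best_move = float("-inf"), powers[0]
    for seq in itertools.product(powers, repeat=horizon):
        t, w, gain = temp, water_left, 0.0
        for p in seq:
            t, w, r = step(t, w, p)
            gain += r - energy_price * p
        if gain > best_gain:
            best_gain, best_move = gain, seq[0]
    return best_move
```

In a receding-horizon deployment, only the returned first move is applied before the problem is re-solved from the measured state; because the objective is water output minus energy cost, the controller tapers power (and effectively ends the cycle) on its own once further heating stops paying off, with no a priori setpoint.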
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Heuristic-Based Framework for Cost-Effective Product Design Enabled by Manufacturing Automation: Application to Large-Scale Sheet Metal Structures</title>
<link href="https://hdl.handle.net/1721.1/165120" rel="alternate"/>
<author>
<name>Flores Medina, Enrique</name>
</author>
<id>https://hdl.handle.net/1721.1/165120</id>
<updated>2026-03-17T03:06:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Heuristic-Based Framework for Cost-Effective Product Design Enabled by Manufacturing Automation: Application to Large-Scale Sheet Metal Structures
Flores Medina, Enrique
Sheet metal is prominent as a raw material for fabrication due to its flexible nature. Through cutting, bending, and joining, it can take a plethora of shapes, explaining its vast adoption in the construction, automotive, and aerospace industries. Furthermore, with automation, the labor and human error associated with its manufacturing can be mitigated. Nonetheless, the versatility of sheet metal can fade under the non-trivial dimensional and thickness constraints of some automated processes, particularly bending. This research, conducted in the context of a large-scale sheet metal manufacturer offering high customization, aims to maximize sheet metal’s automation capabilities while retaining its flexibility. To achieve this, two approaches are used: 1) the adoption of roll-formed steel profiles with automated tube laser cutting as an additional manufacturing value stream, and 2) the development of a design automation tool that, upon receiving the dimensions and structural load conditions of a rectangular prism (called a sub-module), generates a low-cost, automation-compliant design. Findings show that optimal modules generally use medium- to low-gauge channels as connected structural members, and thin-gauge sheet metal panels as slabs and shear walls, minimizing material use, the main cost component. Generated designs show cost reductions of up to 32% when compared to legacy counterparts. For the most produced product, this translates to yearly cost savings that range from $1.7 to $5.2 million.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Evaluation of a Bionic Knee with Myoneural Control</title>
<link href="https://hdl.handle.net/1721.1/165119" rel="alternate"/>
<author>
<name>McCullough, John A.</name>
</author>
<id>https://hdl.handle.net/1721.1/165119</id>
<updated>2026-03-17T03:06:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design and Evaluation of a Bionic Knee with Myoneural Control
McCullough, John A.
Building bionic limbs requires the convergence of surgical innovation and robotic engineering: surgical constructs must reliably extract and amplify intent signals from the body, while robotic systems must accurately interpret these signals to deliver precise, responsive assistance and meaningful feedback. Individuals with above-knee amputation often experience reduced mobility and diminished agency when using conventional prosthetic devices. These limitations can impede human-prosthesis embodiment, the integration of the prosthesis into the user’s body schema.&#13;
This thesis advances the goal of seamless human-machine integration by presenting the design and evaluation of a powered knee prosthesis. The hardware, software, and embedded systems of a prior prototype were upgraded to create a modular, field-deployable research platform. The resulting system incorporates a control framework that enables volitional actuation of the knee joint via electromyographic signals recorded from surgically reconstructed agonist-antagonist muscle pairs.&#13;
To evaluate the system, one participant with an above-knee amputation completed a series of experimental tasks using both their prescribed microprocessor-controlled prosthesis and the bionic knee. Neural control performance was assessed through blindfolded free-space tasks, while functional capability was evaluated during sit-to-stand transitions, squatting, level-ground walking, and stair ascent.&#13;
The bionic prosthesis, weighing 2.6 kg (comparable to commercially available powered knees), demonstrated robust, real-time control across all tasks. Volitional neural inputs enabled intuitive and responsive joint actuation, resulting in superior performance and perceived embodiment relative to the passive device. During sit-to-stand and squatting tasks, ground reaction force data revealed increased weight-bearing on the prosthetic side, reflecting enhanced user confidence. Gait analysis showed improved temporal symmetry during walking with the bionic knee, indicating more balanced interlimb coordination. Embodiment scores were consistently higher across all measured domains, with the participant describing the prosthesis as “feeling like my leg” and “helping me.”&#13;
These findings underscore the potential of neurally integrated prosthetic systems to restore volitional control, improve functional performance, and promote a more embodied user experience.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for in-Space Robotic Assembly of Modular CubeSats</title>
<link href="https://hdl.handle.net/1721.1/165118" rel="alternate"/>
<author>
<name>Freitag, Leila</name>
</author>
<id>https://hdl.handle.net/1721.1/165118</id>
<updated>2026-03-17T03:06:02Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Planning for in-Space Robotic Assembly of Modular CubeSats
Freitag, Leila
As the space industry continues to grow, developments such as the proliferation of small satellites have lowered the barrier to entry to space, making it faster and easier to launch payloads into orbit. However, the need for rapid deployment in space remains, particularly for replacing satellites that serve as nodes in larger constellations or support time-sensitive missions such as natural disaster monitoring. On-orbit assembly provides a solution to meet this demand. This thesis describes the development of Orbital Locker, a robotic system designed to enable the autonomous in-space assembly and deployment of modular satellites. The concept of operations involves a free-flying satellite that acts as a storage “locker”, carrying modular CubeSat components and assembling and deploying them on request. Orbital Locker is an initial small-scale demonstration that is intended to be scaled up, consisting of a Cartesian gantry robot and CubeSat modules dimensioned such that three modules stack to form a 1U CubeSat. The focus of this thesis is the software architecture of the system, including module identification and assembly planning, and assembly testing in microgravity. Module identification makes use of fiducial markers to localize modules within the Locker, tracking the inventory of parts available. The assembly planner uses a graph-based method to optimize the steps required to assemble a desired satellite. It first generates a graph representation of possible assembly states and then uses a graph search algorithm to find the optimal sequence. Results from microgravity testing of the autonomous assembly on a ZeroG flight are presented, where a 1U CubeSat form factor was assembled in 72 seconds. Throughout this work, emphasis is placed on the extensibility of the system to support future scaled-up systems containing a larger inventory of modules.
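The graph-based planning described above (enumerating assembly states, then searching for an optimal action sequence) can be sketched with breadth-first search, which returns a shortest sequence when every action has equal cost. The state encoding and the three-module example are illustrative assumptions, not the thesis's actual planner.

```python
from collections import deque

def plan_assembly(start, goal, moves):
    """Shortest sequence of assembly actions via breadth-first search
    over a graph of assembly states. States are frozensets of placed
    modules; `moves(state)` yields (action, successor_state) pairs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # goal unreachable from available inventory

# Example: stack three modules bottom-to-top to form a 1U CubeSat.
ORDER = ["base", "middle", "top"]

def moves(state):
    # A module can be added only once every module below it is placed.
    for i, m in enumerate(ORDER):
        if m not in state and all(b in state for b in ORDER[:i]):
            yield f"add {m}", state | {m}

print(plan_assembly(frozenset(), frozenset(ORDER), moves))
# ['add base', 'add middle', 'add top']
```

Because the graph is generated from precedence constraints rather than hard-coded, adding modules to the inventory only changes the `moves` generator, which matches the extensibility emphasis above.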
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Factors Observations in Flightcrew Response to&#13;
System Failure Events in Transport Category Aircraft&#13;
from 2000 to 2024</title>
<link href="https://hdl.handle.net/1721.1/165117" rel="alternate"/>
<author>
<name>Perez Gago, Cecilia</name>
</author>
<id>https://hdl.handle.net/1721.1/165117</id>
<updated>2026-03-17T03:05:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Human Factors Observations in Flightcrew Response to&#13;
System Failure Events in Transport Category Aircraft&#13;
from 2000 to 2024
Perez Gago, Cecilia
Understanding the effects of changes in aircraft technology on pilot response to system failure is crucial in the context of recent aviation safety events. This thesis makes human factors observations on pilot response to system malfunction in transport category aircraft through an analysis of final investigation reports produced by investigative authorities worldwide from 2000 to 2024. In the collected reports, system failure events in aircraft of newer generations correlated with higher percentages of appropriate response. Pilot response appropriateness was found to vary between systems, with particularly low appropriate response to failure of instruments and navigation, fuel, and autoflight systems (in decreasing order). When comparing the findings from the 2000-2024 data collection to those from a 1990-2000 study, pilot appropriate response was found to have increased for failures of the hydraulic and electrical systems. Pilot response to instruments and navigation, and autoflight failures was found to be low in both studies. Crew Alerting System (CAS) messages, as initial stimuli for failure awareness, were found to increase the percentage of appropriate responses to failures of the electrical and hydraulic systems. CAS messages did not lead to a substantial improvement in appropriate response to failure of instruments and navigation, fuel, or the autoflight system. Finally, Endsley’s Situation Awareness theory was used as a framework to derive observations in the formulation of pilot responses to system failure across cases. CAS messages and system synoptic displays were observed to contribute to appropriate pilot perception, comprehension, and projection of failure of simple systems. 
Significant underlying complexity in the function of the autoflight and instruments and navigation systems, and the increased use of sensing, correlated with difficulty in comprehension and projection of system behavior following multiple failure events in 2000-2024 reports. Additionally, examples of failures across systems which displayed delayed or subtle stimuli, and unexpected system dependencies, were observed to lead to difficulties in flightcrew achievement of Level 2 and Level 3 Situation Awareness. Changes in aircraft technology were deemed to have had a varying effect on pilot situation awareness during failure of different airplane systems. Improvements in pilot response were observed in relatively simple systems, and gaps were identified given increased vulnerabilities in failure of systems with high functional complexity.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrative Spatial Technologies for Mapping Axonal Vulnerability in Alzheimer’s Disease</title>
<link href="https://hdl.handle.net/1721.1/165116" rel="alternate"/>
<author>
<name>Leible, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/165116</id>
<updated>2026-03-17T03:06:06Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Integrative Spatial Technologies for Mapping Axonal Vulnerability in Alzheimer’s Disease
Leible, Daniel
Alzheimer’s Disease (AD) is the most common neurodegenerative disorder and is histopathologically defined by the accumulation of extracellular amyloid β (Aβ) plaques and intracellular neurofibrillary tau tangles. Pathology progression in AD follows a highly stereotyped, hierarchical pattern, implying a circuit-specific neuronal vulnerability to the underlying pathophysiological processes. Understanding the molecular and subcellular mechanisms driving this selective vulnerability has the potential to enable targeted, circuit-specific therapeutic approaches for early intervention in the detrimental spread of disease.&#13;
This thesis systematically reviews the current mechanistic understanding of selective vulnerability and early disease development in AD and explores how emerging integrative spatial technologies can address remaining open questions. First, molecular and subcellular processes underlying axonal Aβ and tau accumulation are examined, with a focus on cytoskeletal dynamics and axonal transport deficits. Second, intrinsic structural and metabolic risk factors shared by vulnerable axons are outlined, offering a potential explanation for the early regional onset of pathology. Since AD pathology appears to spread from these initial sites along synaptic connections, mechanisms of transsynaptic propagation of vulnerability are discussed next. Finally, the thesis compares integrative spatial technologies used to map disease progression and proposes neuronal barcoding as a promising strategy to overcome existing limitations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SLAM for Structured Environments Using Mechanically&#13;
Scanned Imaging Sonar</title>
<link href="https://hdl.handle.net/1721.1/165115" rel="alternate"/>
<author>
<name>Motz, Andrew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/165115</id>
<updated>2026-03-17T03:05:58Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">SLAM for Structured Environments Using Mechanically&#13;
Scanned Imaging Sonar
Motz, Andrew J.
As modern utilization of the maritime environment grows, uncrewed systems represent the future of safety, efficiency, and capability. For submerged operations, Autonomous Underwater Vehicles (AUVs) enable scientists, industry, and militaries to access remote, inhospitable locations and execute a variety of tasks beyond the capabilities of human-occupied or human-operated systems. Much of this autonomy relies on the vehicle having a detailed understanding of its own position. Inertial Navigation Systems provide an estimate of the distance traveled by combining numerous sensors, but are subject to unbounded error accumulation over long distances. Traditional methods of correcting for this error found in terrestrial robotics are largely unavailable in the undersea domain due to the absorption and scattering of electromagnetic signals in water. Acoustic communication and imaging, such as Sound Navigation and Ranging (SONAR), are the most reliable and trusted methods for AUVs. This thesis presents a novel method for performing Simultaneous Localization and Mapping (SLAM) through acoustic means utilizing a Mechanically Scanned Imaging Sonar (MSIS). An MSIS uses a single-beam sonar mechanically rotated around the vehicle to scan a full 360° area. Compared with other sonar systems of similar capability, MSIS units require less size, weight, and power, and are available at a lower price point.&#13;
The primary contribution of this thesis is a SLAM processing pipeline from MSIS to global position estimate. The pipeline extracts information from the MSIS data regarding the vehicle’s relative location compared to observed landmarks and then probabilistically matches the observed data to a best estimate vehicle position. The system is compatible with either an a priori map or a constantly updated SLAM global map. Individual beams from the MSIS are fused together into a submap. Contrast-based image processing identifies features of interest in the submap and appropriate features are then classified as observed landmarks. A probabilistic coarse-to-fine voting scheme identifies the most likely pose of the vehicle using the global map. When performing SLAM without an a priori map, observed landmarks are then evaluated and either added to the global map or used to update the position of known landmarks. While prior works have established MSIS SLAM by focusing on a single return per sonar beam, this thesis utilizes submaps to extract numerous features from a series of consecutive beams, allowing for more detailed and comprehensive feature mapping.&#13;
Experimental validation was performed using an ISS360 sonar mounted on a REMUS-100 AUV, with the processing pipeline running via Robot Operating System on the vehicle's backseat computer. The vehicle, assisted by divers, traversed underneath the WHOI Iselin pier and performed both localization and SLAM using the submerged pier pilings. The system performed real-time localization, successfully bounding previously unbounded localization drift to an average of 3.4 m and achieving over a 90% reduction in absolute error after approximately one hour of submerged operations. The SLAM results mirrored the a priori localization accuracy, demonstrating similar error bounds and validating the system's performance.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational study of static replication for barrier options</title>
<link href="https://hdl.handle.net/1721.1/165075" rel="alternate"/>
<author>
<name>Sun, Hai Po.</name>
</author>
<id>https://hdl.handle.net/1721.1/165075</id>
<updated>2026-03-11T03:04:39Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Computational study of static replication for barrier options
Sun, Hai Po.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1997; Includes bibliographical references (leaves 75-76).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, control and experimentation of a two dimensional linear motor</title>
<link href="https://hdl.handle.net/1721.1/165074" rel="alternate"/>
<author>
<name>Castañeda Vega, José Israel.</name>
</author>
<id>https://hdl.handle.net/1721.1/165074</id>
<updated>2026-03-11T03:04:33Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Modeling, control and experimentation of a two dimensional linear motor
Castañeda Vega, José Israel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1997; Includes bibliographical references (leaf 118).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of anode dimensions in mercury-vapour thermionic rectifiers</title>
<link href="https://hdl.handle.net/1721.1/165073" rel="alternate"/>
<author>
<name>Fussell, Lewis.</name>
</author>
<id>https://hdl.handle.net/1721.1/165073</id>
<updated>2026-03-11T03:04:42Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A study of anode dimensions in mercury-vapour thermionic rectifiers
Fussell, Lewis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932; Includes bibliographical references (leaf 50).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boiling and spreading rates of instantaneous liquid methane spills on water</title>
<link href="https://hdl.handle.net/1721.1/165070" rel="alternate"/>
<author>
<name>Chatlos, David Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/165070</id>
<updated>2026-03-11T03:04:37Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Boiling and spreading rates of instantaneous liquid methane spills on water
Chatlos, David Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1982; Supervised by Robert C. Reid.; Includes bibliographical references (leaves 86-88).
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.</title>
<link href="https://hdl.handle.net/1721.1/165068" rel="alternate"/>
<author>
<name>Wright, Francine Elaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/165068</id>
<updated>2026-03-11T03:04:45Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.
Wright, Francine Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1975; Vita.; Bibliography: leaves 65-66.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geology of Deception Gulch and the Verde Central mine</title>
<link href="https://hdl.handle.net/1721.1/164923" rel="alternate"/>
<author>
<name>Benedict, P. C.
            (Platt Carrico),
            1900-1969.</name>
</author>
<id>https://hdl.handle.net/1721.1/164923</id>
<updated>2026-02-20T03:04:15Z</updated>
<published>1923-01-01T00:00:00Z</published>
<summary type="text">Geology of Deception Gulch and the Verde Central mine
Benedict, P. C.
            (Platt Carrico),
            1900-1969.
Thesis: M.S., Massachusetts Institute of Technology, Department of Geology and Geophysics, 1923
</summary>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear elastic analysis of reinforced concrete structures by the finite element method</title>
<link href="https://hdl.handle.net/1721.1/164915" rel="alternate"/>
<author>
<name>Tulga, Said Şahin.</name>
</author>
<id>https://hdl.handle.net/1721.1/164915</id>
<updated>2026-02-20T03:04:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Nonlinear elastic analysis of reinforced concrete structures by the finite element method
Tulga, Said Şahin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subcontractor bidding strategy</title>
<link href="https://hdl.handle.net/1721.1/164914" rel="alternate"/>
<author>
<name>Gilbane, Thomas Freeman.</name>
</author>
<id>https://hdl.handle.net/1721.1/164914</id>
<updated>2026-02-20T03:04:10Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Subcontractor bidding strategy
Gilbane, Thomas Freeman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1975; Bibliography: leaves 104-105.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Large Language Models as Circuit Design Assistants</title>
<link href="https://hdl.handle.net/1721.1/164861" rel="alternate"/>
<author>
<name>Cox, Matthew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164861</id>
<updated>2026-02-13T03:49:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Evaluating Large Language Models as Circuit Design Assistants
Cox, Matthew J.
Large language models (LLMs) have exploded in capability in recent years. Previous attempts at AI systems for circuit design have had limited proficiency and been restricted in problem scope. LLMs, with their breadth of knowledge and reasoning ability, are a promising technology for a much more general-purpose circuit design assistant. We developed a dataset of electrical engineering problems and solutions with which to test an LLM-based system, since no such publicly available dataset exists to our knowledge; unmodified GPT-4 was able to solve 42% of the problems. We did a preliminary comparison of several knowledge bases to use for RAG knowledge injection, finding that a small, curated set of resources performed better than a larger, less-focused set of resources, though there were confounding factors which may have skewed the result. While this work is a start, significant future work is needed to continue developing an LLM-based circuit design assistant.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration</title>
<link href="https://hdl.handle.net/1721.1/164860" rel="alternate"/>
<author>
<name>Nguyen, Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/164860</id>
<updated>2026-02-13T03:49:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration
Nguyen, Gary
Code coverage is a longstanding metric for evaluating how thoroughly a program has been tested. Achieving high coverage remains a priority goal for quality assurance and software stability. Exhaustive enumeration of possible input paths to every code region is desirable in theory but computationally infeasible in practice, especially in large-scale codebases. Fuzzing is a widely used technique for input generation and is effective at exploring smaller programs but often struggles with more complex conditional logic and nested modules. Concolic execution, which exhaustively explores paths using constraint solving, can work effectively with complex conditional logic but suffers from path explosion. Targeted branch exploration is a similar approach for input generation but sidesteps the path explosion problem by focusing more on specific constraint paths of interest.&#13;
&#13;
In this thesis, I introduce a hybrid system that combines fuzzing and targeted branch exploration with the goal of improving code coverage by leveraging the complementary strengths of each. The system uses fuzzing to quickly generate a broad input corpus and follows up with targeted branch exploration to explore paths that fuzzing struggles to reach. Findings from experiments on two C projects of different complexities show that the system did not outperform the individual techniques in terms of raw coverage, revealing limitations of the approach and opportunities for future improvement.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine-Learned Representations of Basis Sets and Their&#13;
Application in Quantum Computational Chemistry</title>
<link href="https://hdl.handle.net/1721.1/164858" rel="alternate"/>
<author>
<name>He, Wenhao</name>
</author>
<id>https://hdl.handle.net/1721.1/164858</id>
<updated>2026-02-13T03:49:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Machine-Learned Representations of Basis Sets and Their&#13;
Application in Quantum Computational Chemistry
He, Wenhao
Quantum simulations of electronic structure promise to deliver significant speedups over classical methods, but remain limited by the number of qubits on near-term devices. A key strategy to reduce quantum resource requirements is to truncate the molecular Hilbert space via compact and efficient basis sets. However, most optimized basis sets either rely on predefined heuristics or require expensive classical computations, such as CASSCF orbital optimization or ℓ1-norm minimization of the Hamiltonian. In this work, we introduce a general machine learning framework for fast basis set prediction in quantum computational chemistry. Our method employs an equivariant graph neural network that outputs a Hermitian matrix encoding optimized molecular orbitals. The eigenvectors of this matrix define a transferable and efficient basis set, trained on orbitals obtained via CASSCF and Hamiltonian ℓ1-norm optimization. We evaluate our model on hydrogen chains and demonstrate that the predicted bases achieve energy accuracy and Hamiltonian sparsity comparable to orbital-optimized methods, while reducing classical preprocessing time. In addition, the predicted orbitals can be directly used as high-quality initial guesses for CASSCF calculations, further accelerating their convergence.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SmellNet: A Large-scale Dataset for Real-world Smell&#13;
Recognition</title>
<link href="https://hdl.handle.net/1721.1/164856" rel="alternate"/>
<author>
<name>Feng, Dewei</name>
</author>
<id>https://hdl.handle.net/1721.1/164856</id>
<updated>2026-02-13T03:49:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">SmellNet: A Large-scale Dataset for Real-world Smell&#13;
Recognition
Feng, Dewei
The ability of AI to sense and identify various substances based on their smell alone can have profound impacts on allergen detection (e.g., smelling gluten or peanuts in a cake), monitoring the manufacturing process, and sensing hormones that indicate emotional states, stress levels, and diseases. Despite these broad impacts, there are virtually no large-scale benchmarks, and therefore little progress, for training and evaluating AI systems’ ability to smell in the real world. In this paper, we use portable gas and chemical sensors to create SmellNet, the first large-scale database that digitizes a diverse range of smells in the natural world. SmellNet contains about 180,000 time steps of 50 substances (spanning nuts, spices, herbs, fruits, and vegetables) with 50 hours of data. Using SmellNet, we trained AI models for real-time classification of substances based on their smell alone. Our best methods leverage sequence models, contrastive learning to integrate high-resolution Gas Chromatography–Mass Spectrometry molecular data, and a new temporal difference method that identifies sharp changes in sensor readings. Our best models achieve up to 65.35% accuracy on pre-recorded data, and generalize to real-world conditions with 10.71% accuracy on nuts and 25.38% on spices in the challenging 50-way online classification task. Despite these promising results, SmellNet highlights many technical challenges in building AI for smell, including richer feature learning, on-edge smell models, and robustness to environmental changes.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization</title>
<link href="https://hdl.handle.net/1721.1/164855" rel="alternate"/>
<author>
<name>Meindl, Jamison Chivvis</name>
</author>
<id>https://hdl.handle.net/1721.1/164855</id>
<updated>2026-02-13T03:49:16Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization
Meindl, Jamison Chivvis
Global optimization of expensive, derivative-free black-box functions requires extreme sample efficiency. While Bayesian optimization (BO) is the current state-of-the-art, its performance hinges on surrogate and acquisition function hyperparameters that are often hand-tuned and fail to generalize across problem landscapes. We present ZeroShotOpt, the first general-purpose, pretrained model for continuous black-box optimization tasks ranging from 2D to 20D. Our approach leverages offline reinforcement learning on large-scale optimization trajectories collected from 12 BO variants. To scale pretraining, we generate millions of synthetic Gaussian process-based functions with diverse landscapes, enabling the model to learn transferable optimization policies. As a result, ZeroShotOpt achieves robust zero-shot generalization on a wide array of unseen synthetic and real-world benchmarks, matching or surpassing the sample efficiency of leading global optimizers, including BO, while also offering a reusable foundation for future extensions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature Characterization of Colloidal Quantum Dot&#13;
Light Emitting Diodes</title>
<link href="https://hdl.handle.net/1721.1/164854" rel="alternate"/>
<author>
<name>Nguyen, Thienan D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164854</id>
<updated>2026-02-13T03:49:26Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Temperature Characterization of Colloidal Quantum Dot&#13;
Light Emitting Diodes
Nguyen, Thienan D.
Colloidal quantum dot light emitting diodes are promising candidates for the next generation of display technologies. Their brighter emission, greater color purity, and higher efficiency make them highly desirable in consumer electronics. As such, research into the performance and stability of these novel LEDs is crucial for their operation in displays. These investigations are ongoing, with focused efforts on improving operating stability through different quantum dot materials and passivation methods. However, less attention has been paid to confidently understanding the fundamental relationships between current, voltage, and luminance by which these devices operate. These electrical characteristics reveal insights into device operation and the behavior of charge carriers. Additionally, temperature-dependent electrical measurements can expose behavior that changes with temperature and deviations from expected performance at a given temperature, revealing temperature-dependent processes and yielding a better understanding of how the device operates. In this thesis, the temperature-dependent electrical characteristics of quantum dot light emitting diodes were investigated by measuring the current-voltage-luminance (JVL) relationships at various cryogenic temperatures, ranging from 78 K, the boiling point of liquid nitrogen, to 293 K, room temperature. This investigation revealed the temperature-dependent nature and origin of the turn-on voltage, current, EQE, EQE roll-off, and hysteresis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering</title>
<link href="https://hdl.handle.net/1721.1/164853" rel="alternate"/>
<author>
<name>Rich, Benjamin R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164853</id>
<updated>2026-02-13T03:49:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering
Rich, Benjamin R.
Knowledge Graph Question Answering (KGQA) encompasses a set of techniques aimed at generating accurate, interpretable responses to natural language queries posed over structured, graph-based datasets. Recent approaches to KGQA involve reducing the knowledge graph (KG) to a relevant subgraph, which is then encoded in natural language as a series of triples (subject, predicate, object) and passed to a large language model (LLM) for interpretation and answer generation. These methods have shown state-of-the-art accuracy. However, this paradigm is undermined by a critical vulnerability: the retrieval of irrelevant or erroneous facts can amplify LLM hallucinations and degrade system trustworthiness, while the reasoning process remains opaque. This thesis addresses this challenge by extending an existing state-of-the-art KGQA architecture with uncertainty-aware subgraph retrieval methods. To achieve this, we modify the retrieval component to learn the epistemic uncertainty of each candidate triple’s relevance to a given query. We implement these modifications using Bayesian methods and learn a well-calibrated approximation of the posterior distribution over triple relevance. By explicitly modeling this uncertainty, the retriever model is shown to provide a fine-grained confidence score for each piece of evidence. We expose these metrics downstream to the LLM during reasoning and evaluate whether LLMs can reason over uncertainty-related metrics to improve KGQA. We find that LLMs cannot reason effectively over uncertainties in most cases, but that agentic workflows that provide selective access to uncertainty metrics may enhance performance. We evaluate our approach against established benchmarks using hit-rate and set-comparison accuracy metrics. Additionally, we introduce reasoning-path and statistical trust metrics derived from calibrated uncertainty scores. 
Our analysis reveals a significant positive correlation between path-based uncertainty metrics and the veracity of the Large Language Model’s (LLM) answers. These findings establish a robust foundation for developing uncertainty-grounded trust mechanisms in LLM-agnostic KGQA systems. As a proof of concept, a lightweight classifier trained exclusively on the LLM’s inputs and outputs demonstrates substantial predictive power in identifying correct responses. Finally, we briefly explore using uncertainty to identify out-of-distribution (OOD) queries.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applied Compiler Optimizations for Proving Code</title>
<link href="https://hdl.handle.net/1721.1/164852" rel="alternate"/>
<author>
<name>Ruiz, Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/164852</id>
<updated>2026-02-13T03:49:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Applied Compiler Optimizations for Proving Code
Ruiz, Ricardo
The recent popularity of massively distributed, trustless systems has created a demand for cryptographic proofs: systems to prove that a piece of data is a valid output for a given program. These systems exist, but face very high runtimes for the generation of proofs. Significant effort has been invested in optimizing the prover systems, but relatively less has been focused on optimizing the code that gets read as an input. This paper proposes a new approach to optimizing prover systems by modifying the compiler to produce proof-ready code. It introduces a benchmarking framework for comparing the relative proof costs of RISC-V instructions; the resulting analysis finds that shift instructions do not offer significant savings over multiplication. This finding suggests that strength reduction, a fundamental optimization in modern compilers, can sabotage end-to-end performance. The paper proposes methods for applying this knowledge to better optimize code, leaving the door open for future researchers to continue to make code proofs more performant and accessible.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reconstructing Cross-Species Ancestral Adeno-Associated&#13;
Viruses for Enhanced Gene Therapy Delivery</title>
<link href="https://hdl.handle.net/1721.1/164850" rel="alternate"/>
<author>
<name>Xie, Yuxin</name>
</author>
<id>https://hdl.handle.net/1721.1/164850</id>
<updated>2026-02-13T03:49:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reconstructing Cross-Species Ancestral Adeno-Associated&#13;
Viruses for Enhanced Gene Therapy Delivery
Xie, Yuxin
Adeno-associated viruses (AAV) are one of the most promising vectors for gene therapy because of their established safety, low immunogenicity, and capability to achieve sustained gene expression. However, many naturally occurring AAV variants have limitations in their potency, particularly in penetrating biological barriers like the blood-brain barrier (BBB). Additionally, their broad and nonspecific tropism can translate into suboptimal cross-species transduction efficiency and potential toxicity, complicating the clinical transition from animal models to humans. These challenges impede the use of naturally occurring AAVs for therapeutic gene delivery in many neurological disorders, such as autism spectrum disorders (ASD), Parkinson’s disease (PD), and Huntington’s disease (HD), as well as other systemic conditions like cystic fibrosis (CF). To overcome these barriers, we developed a computational framework based on ancestral sequence reconstruction (ASR) to engineer synthetic ancestral AAV capsids with the goal of enhanced targeting specificity and potency. We first validated this computational framework by replicating the previously engineered Anc80L65 capsid. Then, with 75 naturally occurring functional AAV sequences and additional experimentally screened variants exhibiting brain-targeting potency, we built an evolutionary framework. We applied multiple computational methods such as enhanced multiple sequence alignment, maximum-likelihood-based phylogenetic tree inference, and ancestral sequence reconstruction with Bayesian inference. With this methodology, we predicted several novel ancestral AAV capsid sequences at critical evolutionary nodes, particularly those representing functional transitions with potential improved blood-brain barrier penetration and CNS tropism. Our computational framework thus streamlines and accelerates the process of designing ancestral AAV variants with targeted gene therapy applications.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Unprecedented Extreme Scenarios with Limited Data</title>
<link href="https://hdl.handle.net/1721.1/164848" rel="alternate"/>
<author>
<name>Chang, Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/164848</id>
<updated>2026-02-13T03:49:19Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Generating Unprecedented Extreme Scenarios with Limited Data
Chang, Kai
Quantifying and predicting rare and extreme events persists as a crucial yet challenging task in understanding complex dynamical systems, ubiquitous in science and engineering. Many practical challenges arise from the infrequency and severity of these events, including the considerable variance of simple sampling methods and the substantial computational cost of high-fidelity numerical simulations. Numerous data-driven methods have recently been developed to tackle these challenges. However, a typical assumption for the success of these methods is the occurrence of multiple extreme events, either within the training dataset or during the sampling process. This leads to accurate models in regions of quiescent events but with high epistemic uncertainty in regions associated with extremes. To overcome this limitation, we introduce the Extreme Event Aware (e2a, or η-) learning framework, which does not assume the existence of extreme events in the available data. η-learning reduces the uncertainty even in ‘uncharted’ extreme event regions by enforcing the extreme event statistics of a few observables during training; these statistics can be measured or assumed through qualitative arguments or other forms of analysis. This type of statistical regularization results in models that fit the observed data while also remaining consistent with the prescribed statistics of some observables, enabling the generation of unprecedented extreme events even when the training data contain no extremes. Theoretical results based on optimal transport offer a rigorous justification and highlight the optimality of the introduced method. Additionally, extensive numerical experiments illustrate the favorable properties of the η-learning framework on several prototype problems and real-world precipitation downscaling problems.
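To make the statistical-regularization idea concrete, here is a minimal, heavily simplified sketch (a toy construction of my own, not the thesis's method or models): a one-parameter model is fit to data observed only in a quiescent region, while a penalty enforces an assumed second-moment statistic of its predictions over a wider, unobserved range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data from y = 2x, observed only in a quiescent region x in [-1, 1].
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.1 * rng.standard_normal(200)

# Prescribed statistic for an observable: assume that over the wider range
# x in [-3, 3], predictions f(x) = w*x should satisfy
# E[f(x)^2] = w_true^2 * E[x^2] = 4 * 3 = 12.
x_wide = rng.uniform(-3, 3, 1000)
target_m2 = 12.0

w, lr, lam = 0.0, 1e-3, 0.1
for _ in range(5000):
    # gradient of the data-fit term, mean((w*x - y)^2)
    g_fit = np.mean(2 * (w * x - y) * x)
    # gradient of the statistics-matching penalty, (E[(w*x)^2] - target)^2
    m2 = np.mean((w * x_wide) ** 2)
    g_stat = 2 * (m2 - target_m2) * np.mean(2 * w * x_wide ** 2)
    w -= lr * (g_fit + lam * g_stat)
```

Here the penalty keeps the fitted model consistent with the assumed statistic in the "extreme" region the data never covers, which is the essence of the regularization described above.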
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A*-Decoding: Token-Efficient Inference Scaling</title>
<link href="https://hdl.handle.net/1721.1/164846" rel="alternate"/>
<author>
<name>Chatziveroglou, Ioannis</name>
</author>
<id>https://hdl.handle.net/1721.1/164846</id>
<updated>2026-02-13T03:49:18Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A*-Decoding: Token-Efficient Inference Scaling
Chatziveroglou, Ioannis
Inference-time scaling has emerged as a powerful alternative to parameter scaling for improving language model performance on complex reasoning tasks. While existing methods have shown strong performance gains under fixed compute budgets, there has been little focus on optimally utilizing that budget during inference. In this work, we introduce A*-decoding, a search-based inference-time strategy that builds on the A* search algorithm to optimally utilize a fixed compute budget by prioritizing high-quality reasoning paths during generation. We frame language model decoding as a structured search in a state space of partial solutions, applying the A* transition model to identify promising continuations guided by an external process supervision signal. In our experiments, A*-decoding reaches the performance levels of strong inference scaling baselines like best-of-N and particle filtering while using up to 3x fewer tokens and 30% fewer PRM passes under equivalent compute budgets. On the MATH500 and AIME 2024 benchmarks, A*-decoding enables Llama-3.2-1B-Instruct to match the performance of the 70x larger Llama-3.1-70B-Instruct, and allows Qwen3-1.7B to reach o1-like reasoning accuracy. These results highlight the power of structured search in decoding, offering an alternative to brute-force sampling or scale-driven gains. Our work demonstrates how thoughtful inference-time strategies can enhance reasoning in SLMs, pointing toward future advances in more efficient and scalable language model deployment.
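The priority-driven expansion described above can be sketched generically with a best-first search loop (the toy `expand` and `score` functions below are hypothetical stand-ins; the thesis's actual transition model and process-reward scoring are not reproduced here):

```python
import heapq

def a_star_decode(expand, score, start, is_goal, budget=1000):
    """Best-first search over partial solutions.

    expand(state) -> successor states (candidate continuations)
    score(state)  -> estimated quality of completing `state`
                     (a stand-in for an external process supervision signal)
    """
    # Max-heap via negated priority: always expand the most promising
    # partial solution, so the compute budget goes to high-value paths.
    frontier = [(-score(start), start)]
    expansions = 0
    while frontier and expansions < budget:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        expansions += 1
        for nxt in expand(state):
            heapq.heappush(frontier, (-score(nxt), nxt))
    return None

# Toy usage: build the string "aab" one character at a time.
target = "aab"
result = a_star_decode(
    expand=lambda s: [s + c for c in "ab"],
    score=lambda s: sum(a == b for a, b in zip(s, target)),
    start="",
    is_goal=lambda s: s == target,
)
```

In language model decoding, states would be partial token sequences and the scorer a learned reward model; the priority queue is what concentrates the token budget on promising reasoning paths.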
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics</title>
<link href="https://hdl.handle.net/1721.1/164845" rel="alternate"/>
<author>
<name>Varma, Vikram</name>
</author>
<id>https://hdl.handle.net/1721.1/164845</id>
<updated>2026-02-13T03:49:27Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics
Varma, Vikram
Imaging the structural and functional connections between cells in the brain allows neuroscientists to understand the brain by studying neuronal wiring diagrams. To automatically segment and classify the images used to construct these neuronal wiring diagrams, or connectomes, machine learning segmentation techniques today require an image scanned with an electron microscope at either a slow dwell time or with small pixel sizes. However, a scalable and more rapid implementation of connectome construction has not yet been realized because of the significant cost of multi-beam electron microscopes and the relatively slow rate at which connectomes can be constructed using a single-beam electron microscope. Segmented connectomes include sections that can be segmented properly from a fast-scanned image as well as sections that require slow scanning for proper segmentation. A potential way to reduce the time required to produce and segment connectomes is therefore to first scan samples rapidly and perform segmentation using a convolutional neural network, identify the areas of interest that require more detailed imaging through a learning-based error-detection network, and then rescan only those identified high-interest areas to produce a fused image for segmentation. The proposed thesis will analyze various machine learning methods for segmentation using the U-Net network and review proposed enhancements to the U-Net network that can better utilize electron microscopy images for construction of segmented connectomes. The successful use of fused electron microscopy images will potentially enable higher-speed and lower-cost electron microscopy imaging for connectomics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications</title>
<link href="https://hdl.handle.net/1721.1/164844" rel="alternate"/>
<author>
<name>Zhang, Erin Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/164844</id>
<updated>2026-02-13T03:49:21Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications
Zhang, Erin Wei
Waveguide-integrated devices that operate in the mid-infrared (mid-IR) wavelength range (2.5-12 µm) are used for sensing the fundamental absorption bands in a variety of molecules. Germanium (Ge) is commonly used for photodetection in the near-infrared (near-IR) wavelength range of 1.2-1.6 µm due to its strong absorption from a 0.8 eV direct band gap. At longer wavelengths in the mid-IR range, Ge exhibits transparency that makes it a desirable waveguide material for sensing applications. Its epitaxial growth compatibility with silicon (Si) substrates makes Ge-on-Si an effective platform for mid-IR waveguides. For back-end-of-line (BEOL) integration of waveguides in sensing applications, the thermal budget limits the temperature to below 450°C. In this work, we investigated the use of h-line exposure as a commercially viable, low-cost option for patterning low-temperature (LT) Ge-on-Si waveguides using direct write lithography. Waveguide dimensions for optimal confinement in single-mode transverse electric (TE) polarization at wavelengths of 3 µm and 10.4-11.3 µm were modeled and the direct lithography process was refined. Through dose testing and adjustments to the raster direction and pixel resolution, it was found that direct write lithography lacked the resolution required for low-loss waveguides. Scanning electron microscopy (SEM) revealed inconsistent waveguide widths and sidewall roughness, and e-beam lithography was identified as the preferred lithography process. For future integration of LT-Ge in a foundry process design kit (PDK), a universal thickness of 1.7 µm was found to support single-mode waveguide operation from 3-11.3 µm wavelength.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Log-Based Coordination Systems for Managed Cloud Environments</title>
<link href="https://hdl.handle.net/1721.1/164843" rel="alternate"/>
<author>
<name>Jimenez, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164843</id>
<updated>2026-02-13T03:49:14Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Assessing Log-Based Coordination Systems for Managed Cloud Environments
Jimenez, Gabriel
The distributed systems landscape is undergoing a significant shift toward managed cloud environments, reducing the prevalence of self-hosted coordination services such as ZooKeeper. While ZooKeeper remains a proven and feature-rich solution for coordination tasks, its deployment in cloud environments can introduce component redundancy. This is because the underlying cloud platform already provides internal mechanisms to ensure coordination guarantees. This thesis investigates the design and evaluates the performance of a log-based coordination service library tailored for managed cloud environments. The proposed library removes the ensemble management overhead inherent in ZooKeeper by delegating durability and consistency responsibilities to the cloud provider’s data layer. This architectural simplification enables a modular design, allowing for tailored implementations that exploit the strengths and mitigate the limitations of a system's specified data layer. The library demonstrated feature parity with ZooKeeper for a targeted subset of coordination features, including leader election, membership tracking, and ephemeral state management. Migrating an existing ZooKeeper-based application to this work's library likewise required minimal design changes while preserving coordination guarantees. While the results show that this design does not yet match mature coordination services in raw performance, they highlight potential avenues for further research, particularly in optimizing log-based coordination systems for the unique characteristics of cloud-managed data layers. Given the industry’s steady movement toward cloud-native infrastructure, these findings provide a foundation for future exploration into lightweight, platform-integrated coordination solutions.
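The log-based approach can be illustrated with a conceptual toy (the in-memory `Log` class below stands in for a cloud provider's durable, totally ordered data layer; the thesis's actual library and APIs are not reproduced here): leader election reduces to appending a claim to a shared log, because every reader agrees on which claim came first.

```python
import threading

class Log:
    """In-memory stand-in for a durable, totally ordered shared log."""
    def __init__(self):
        self._records = []
        self._lock = threading.Lock()

    def append(self, record):
        with self._lock:
            self._records.append(record)
            return len(self._records) - 1   # offset of the new record

    def read(self, from_offset=0):
        with self._lock:
            return list(self._records[from_offset:])

def elect(log, candidate):
    # Every candidate appends a claim; because the log imposes a single
    # total order, all nodes deterministically agree on the first claim.
    log.append(("claim", candidate))
    for kind, name in log.read():
        if kind == "claim":
            return name

log = Log()
first = elect(log, "node-a")   # first claimant wins
second = elect(log, "node-b")  # sees node-a's earlier claim
```

The same pattern (append, then replay the ordered log) underlies membership tracking and ephemeral state in log-based coordination designs.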
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks</title>
<link href="https://hdl.handle.net/1721.1/164841" rel="alternate"/>
<author>
<name>Echezona, Chukwuemekalum</name>
</author>
<id>https://hdl.handle.net/1721.1/164841</id>
<updated>2026-02-13T03:49:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks
Echezona, Chukwuemekalum
As the Internet continues to grow in size and complexity, Distributed Denial of Service (DDoS) attacks grow in size and complexity alongside it. One particularly common form of DDoS attack is the TCP SYN flood, which exploits the TCP handshake process to exhaust server resources. This thesis investigates the use of a novel proof-of-work (PoW) based mitigation method to respond to such attacks, specifically in the context of WebRTC video conferencing applications. PoW shifts the computational burden from the server to the client by requiring the client to solve a hard puzzle whose solution is easy to verify. Guided by the same evaluation framework used by the original contributors, we conducted controlled experiments using SPHERE, a national research testbed, and the open-source Jitsi Meet video conference application to simulate DDoS attacks and measure their impact on video quality metrics such as upload/download bitrate and video framerate. Our experiments involved multiple scenarios with and without active attacks and with and without the PoW mitigation active. Results demonstrate that PoW imposes minimal overhead on legitimate clients while maintaining high efficacy when faced with the threat of a SYN flood attack, regardless of whether the attackers do the proof-of-work before sending traffic. These findings highlight PoW as a promising low-overhead mitigation method for WebRTC conference systems under the threat of DDoS attacks.
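The core asymmetry (a puzzle that is expensive to solve but cheap to verify) can be sketched with a standard hash-preimage puzzle; this is a generic illustration, and the thesis's actual puzzle construction and parameters are not specified here:

```python
import hashlib

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash suffices to check a solution."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    # Accept iff the digest starts with `difficulty` zero bits.
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

def solve(challenge: bytes, difficulty: int) -> int:
    """Client side: brute-force search, ~2**difficulty hashes on average."""
    nonce = 0
    while not verify(challenge, nonce, difficulty):
        nonce += 1
    return nonce

# A server hands out a per-connection challenge; the client must pay
# CPU time before the server commits any handshake state.
nonce = solve(b"conn-setup-token", difficulty=12)
```

Tuning `difficulty` trades legitimate-client latency against the cost imposed on a flood of connection attempts.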
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding</title>
<link href="https://hdl.handle.net/1721.1/164840" rel="alternate"/>
<author>
<name>Huang, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/164840</id>
<updated>2026-02-13T03:49:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding
Huang, Natalie
The lifelong Multi-Agent Path Finding (MAPF) problem requires planning collision-free trajectories for agents operating continuously in dynamic environments. Traditional solvers such as Priority-Based Search (PBS) use fixed branching heuristics, which can be inefficient in high-congestion scenarios. This work explores how learning-based methods can improve PBS decision-making. We develop supervised learning (SL) policies trained from high-quality beam search trajectories and reinforcement learning (RL) policies learned directly through simulation, enabling adaptive branching strategies. Evaluations on warehouse-style and Kiva-style maps with varying agent densities show that learned policies can significantly boost throughput in congested warehouse layouts, while identifying scenarios where classical heuristics remain competitive. Our findings provide guidance on solver selection based on environment layout and congestion characteristics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Interpret Language Model Diffs</title>
<link href="https://hdl.handle.net/1721.1/164839" rel="alternate"/>
<author>
<name>Goel, Avichal</name>
</author>
<id>https://hdl.handle.net/1721.1/164839</id>
<updated>2026-02-13T03:49:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning to Interpret Language Model Diffs
Goel, Avichal
Finetuning-induced changes to a model’s weights (a “model diff”) are semantically meaningful but often difficult to interpret. This raises a natural question: can we describe the content of an unknown model diff using natural language? We introduce diff interpretation training, a method that teaches a model to describe its own finetuning-induced modifications. Our approach uses synthetic model diffs to train a lightweight adapter, which in turn can be applied to a compatible finetuned model to make it self-describing. Using two simple task settings, we demonstrate that our method can successfully decode model diffs into accurate natural language descriptions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods</title>
<link href="https://hdl.handle.net/1721.1/164837" rel="alternate"/>
<author>
<name>Botto Tornielli, Marcos Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/164837</id>
<updated>2026-02-13T03:49:17Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods
Botto Tornielli, Marcos Julian
With the substantial computing resources available today, computational fluid dynamics simulations allow scientists and engineers to simulate physical problems very accurately. However, achieving this accuracy requires a sufficiently refined computational mesh, which is a primary driver for the high cost of complex simulations. Mesh adaptation methods provide an automated way to determine the regions where a mesh needs the most refinement and generate a new mesh that efficiently targets these regions. In this thesis, we build on previous work in a posteriori error estimation and mesh adaptation for finite element methods to propose a new mesh adaptation method based on L² error control by solution post-processing. A key feature of our method is its natural extension to higher-order discretizations while providing a problem-independent adaptation methodology. Problem-independent adaptation methods do not depend on specific information about the partial differential equation (PDE) problem being solved, and can therefore be applied to a wide range of problems without modification. We present numerical results applying the approximate L² error control method to a two-dimensional advection-diffusion problem with anisotropic features. These results demonstrate the proposed method’s ability to generate well-adapted anisotropic meshes for solutions with polynomial orders 1, 2, and 3. We also apply the approximate L² error control method to a more complex two-dimensional Reynolds-Averaged Navier-Stokes problem with turbulent flow over a flat plate. We compare the convergence of the drag coefficient and the characteristics of adapted meshes obtained with the proposed method and with an output-based adaptation approach. As expected, the approximate L² error control method is not as effective as the output-based approach in reaching a converged drag coefficient value, but it nevertheless demonstrates the ability to effectively control the approximate L² error in the Mach field.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Ubiquitous Tactile Sensing through Comprehensive Tooling for Resistive Matrix-Based Sensors</title>
<link href="https://hdl.handle.net/1721.1/164836" rel="alternate"/>
<author>
<name>Murphy, Devin</name>
</author>
<id>https://hdl.handle.net/1721.1/164836</id>
<updated>2026-02-13T03:49:25Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Advancing Ubiquitous Tactile Sensing through Comprehensive Tooling for Resistive Matrix-Based Sensors
Murphy, Devin
Resistive matrix-based tactile sensors offer a scalable and intuitive approach to capturing human-environment interactions, yet deploying them in real-world systems remains challenging because they must remain portable, adaptive, and long-lasting. This thesis presents the WiReSens Toolkit, an open-source hardware and software platform for developing resistive tactile sensing systems that meet the demands of real-world applications. The toolkit features adaptive hardware for interfacing with resistive sensors and a web-based GUI that mediates access to otherwise complex functionality, including 1) multi-device programming and wireless visualization across three distinct communication protocols, 2) autocalibration methods for adaptive sensitivity, and 3) intermittent data transmission for low-power operation. As a use case for the toolkit, the thesis then introduces a method for the automatic design and fabrication of custom tactile sensing gloves using flexible printed circuit boards (FPCBs), enabling rapid, scalable production. Together, these contributions lower barriers to adoption and support broader exploration of tactile sensing in HCI, robotics, and ubiquitous computing.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantization Methods for Matrix Multiplication and Efficient Transformers</title>
<link href="https://hdl.handle.net/1721.1/164834" rel="alternate"/>
<author>
<name>Savkin, Semyon</name>
</author>
<id>https://hdl.handle.net/1721.1/164834</id>
<updated>2026-02-13T03:49:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantization Methods for Matrix Multiplication and Efficient Transformers
Savkin, Semyon
We study quantization in machine learning. First, we introduce NestQuant, a technique for quantization of matrix products and post-training quantization of LLMs. Beyond reducing the memory footprint, quantization accelerates inference, as the primary bottleneck during autoregressive generation is often the memory bandwidth. NestQuant leverages two nested lattices to construct an efficient vector codebook for quantization, along with practical encoding and decoding algorithms. The approach is grounded in recent theoretical work that characterizes the optimal rate–distortion trade-off for matrix products. Empirically, on Llama-3-8B, it reduces the perplexity gap between full-precision and quantized models by more than 55% relative to the current state-of-the-art technique (SpinQuant). Second, we investigate data-domain quantization for RF signals. We propose a tokenized transformer for source separation that discretizes RF waveforms into learned tokens and operates directly on the resulting sequences, outperforming strong convolutional baselines. Together, these contributions connect information-theoretic limits with deployable systems: structured vector quantizers accelerate LLM inference and enable competitive discrete representations for RF tasks.
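The nested-lattice idea can be illustrated in its simplest form, with the scaled integer lattice nested inside a coarser copy of itself (a toy sketch only; NestQuant's actual lattices, scalings, and coding algorithms are more sophisticated, and the `beta`/`q` values here are illustrative):

```python
import numpy as np

def encode(x, beta, q):
    """Quantize x/beta to the fine lattice Z^n, then reduce modulo the
    coarse lattice q*Z^n, so each coordinate costs log2(q) bits."""
    return np.round(x / beta).astype(int) % q

def decode(codes, beta, q):
    """Map codes back to the centered fundamental cell of q*Z^n."""
    centered = ((codes + q // 2) % q) - q // 2
    return beta * centered

x = np.array([0.31, -1.24, 0.07, 1.5])
codes = encode(x, beta=0.25, q=16)          # 4 bits per coordinate
x_hat = decode(codes, beta=0.25, q=16)
# While |x| < beta*q/2 = 2.0, each coordinate is recovered to within
# beta/2 = 0.125 (the fine lattice's quantization radius).
```

Nesting lets the codebook be both structured (fast encode/decode via rounding and modular arithmetic) and bounded (a fixed number of bits per entry), which is the property exploited for quantizing matrix products.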
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors</title>
<link href="https://hdl.handle.net/1721.1/164833" rel="alternate"/>
<author>
<name>Chun, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/164833</id>
<updated>2026-02-13T03:49:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors
Chun, Ethan
Barometric tactile sensors offer a cheap, robust, and customizable means for robots to perceive the world. Central to their operation are models that extract useful information from the sensors’ raw pressure readings. In this work, I focus on improving data-driven methods for single-point contact localization and force estimation using a previously presented three-quarter sphere barometric tactile sensor. To allow modeling of time-dependent effects in the sensor material, I introduce a multi-threaded data collection system that captures ground truth contact and sensor data at exactly 100 Hz. I construct both feed-forward and recurrent networks using this data, finding that a recurrent network achieves a 15% lower mean absolute error for angular contact localization on the sphere compared to prior methods. The recurrent architecture’s computational efficiency ensures that it can still run within the constraints of the sensor’s microcontroller. Despite this improvement, I find that more expressive models such as LSTMs tend to overfit the collected data, and that physical phenomena observed during deployment were not well captured by the training metrics. To better understand the extent to which these data-driven methods alone can improve sensor performance, I shift focus away from the modeling and analyze the physical sensor instead. I find that viscous effects in the sensor can render the prediction task unlearnable without historical data and that thermal effects introduce a train-test distribution shift. Finally, I discuss design criteria for a theoretical future barometric tactile sensor that may mitigate the effects found during my modeling and analysis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming over Heterogeneous Language and Hardware Targets</title>
<link href="https://hdl.handle.net/1721.1/164832" rel="alternate"/>
<author>
<name>Rojas Collins, Elias G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164832</id>
<updated>2026-02-13T03:49:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Programming over Heterogeneous Language and Hardware Targets
Rojas Collins, Elias G.
Modern probabilistic programming applications, from large-scale Bayesian inference to real-time decision making, require both the expressiveness of CPU-oriented languages such as Gen.jl and the massive parallelism of GPU-backed array languages such as GenJAX, yet existing platforms force users to trade modeling flexibility for performance. This thesis introduces GenUflect, a metalanguage that embeds multiple Gen-compatible dialects inside a single program, allowing each sub-component to run on the most appropriate language and hardware target while preserving Gen’s programmable-inference interface. GenUflect extends Gen’s dynamic-modeling language with the @union, @vmap, @amortize, @amortize≤, and @runtime_union combinators; these macros compile at build-time (or just-in-time) to autonomous generative functions written in the target dialect, link them through a lightweight FFI layer, and manage cross-device data via zero-copy MirrorArrays and lazily materialized traces. The resulting programs remain sound by construction because each foreign subtrace is itself a valid Gen generative function. Empirical studies demonstrate that this hybrid approach yields large practical gains. On a split linear-vs-sinusoidal regression task, GenUflect matches pure GenJAX throughput while running higher-order control logic on the CPU, and is up to two orders of magnitude faster than a pure Gen implementation for datasets of 10⁵ points. In a collapsed-Gibbs sampler for a Dirichlet-process mixture model, GenUflect’s elastic allocation (@amortize≤) lets vectorized GPU kernels adapt to a growing number of clusters; the same inference that takes over an hour in Gen executes in seconds with GenUflect. A probabilistic inverse-graphics pipeline further showcases how heterogeneous submodels can cooperate seamlessly within unified inference code.
By coupling language interoperability with automated data movement and compile-time code generation, GenUflect bridges the gap between flexibility and speed, enabling scalable, expressive probabilistic programs that natively exploit both CPUs and accelerators.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Under-Coverage of Double Machine Learning Due to Implementation Choices</title>
<link href="https://hdl.handle.net/1721.1/164831" rel="alternate"/>
<author>
<name>Siegmann, Charlotte B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164831</id>
<updated>2026-02-13T03:49:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Under-Coverage of Double Machine Learning Due to Implementation Choices
Siegmann, Charlotte B.
Double ML estimators can estimate coefficients of interest with far fewer functional form assumptions than linear econometric methods. However, DML requires researchers to make a range of implementation choices, including the selection of the function class, the random seed, and hyperparameter configurations. While asymptotic theory suggests these choices should not affect final estimates, we show that for 10 economic analyses (8 of them published and peer-reviewed), implementation choices affect the results. In half of the datasets, different implementation choices even change the interpretation of findings between negative, null, or positive effects. We link these results to a framework for empirically assessing the performance of machine-learning-based estimators, focusing on precision, coverage, and susceptibility to manipulation. This is meant to complement asymptotic theory. We demonstrate that the coverage of DML confidence intervals is too low—placing an upper bound of 48% on the expected coverage of conventional 95% confidence intervals for published DML economics papers. We show that in the status quo, the susceptibility of DML to manipulation by researchers is high, but propose ways to mitigate this susceptibility.
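For readers unfamiliar with the estimator, here is a numpy-only sketch of DML's partialling-out with 2-fold cross-fitting on simulated data (the data-generating process, the ridge-on-polynomial-features nuisance learner, and the fold split are illustrative assumptions, not the implementations studied in the thesis). Rerunning with a different seed, feature set, or `alpha` perturbs the point estimate, which is the kind of implementation sensitivity examined here at scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal((n, 3))
g = np.sin(x[:, 0]) + x[:, 1] ** 2          # confounding in the outcome
m = 0.5 * x[:, 0] + np.sin(x[:, 2])         # confounding in the treatment
d = m + rng.standard_normal(n)              # treatment
theta_true = 1.5
y = theta_true * d + g + rng.standard_normal(n)

def ridge_fit_predict(X_tr, t_tr, X_te, alpha=1.0):
    # Nuisance learner: ridge on simple nonlinear features, a stand-in
    # for the flexible ML learners used in practice.
    def feats(X):
        return np.hstack([np.ones((len(X), 1)), X, X ** 2, np.sin(X)])
    F_tr, F_te = feats(X_tr), feats(X_te)
    w = np.linalg.solve(F_tr.T @ F_tr + alpha * np.eye(F_tr.shape[1]),
                        F_tr.T @ t_tr)
    return F_te @ w

# 2-fold cross-fitting: nuisances are fit on one fold and used to
# residualize the other, avoiding own-observation overfitting bias.
idx = rng.permutation(n)
folds = [idx[: n // 2], idx[n // 2:]]
res_y, res_d = np.empty(n), np.empty(n)
for tr, te in [(folds[0], folds[1]), (folds[1], folds[0])]:
    res_y[te] = y[te] - ridge_fit_predict(x[tr], y[tr], x[te])
    res_d[te] = d[te] - ridge_fit_predict(x[tr], d[tr], x[te])

# Final stage: regress outcome residuals on treatment residuals.
theta_hat = (res_d @ res_y) / (res_d @ res_d)
```

Every choice above (seed, folds, learner, regularization) is an implementation degree of freedom of the kind the thesis shows can move published DML estimates.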
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm</title>
<link href="https://hdl.handle.net/1721.1/164830" rel="alternate"/>
<author>
<name>Zhu, Qianyu Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/164830</id>
<updated>2026-02-13T03:49:05Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm
Zhu, Qianyu Julie
A central task in Bayesian inference and scientific computing is to compute expectations with respect to probability distributions that are only known up to a normalizing constant. Markov chain Monte Carlo (MCMC) methods, and in particular Langevin dynamics, provide a powerful framework for this task by constructing stochastic processes that converge to the target distribution. However, practical implementations face two challenges: slow mixing when the target distribution is anisotropic or multimodal, and persistent discretization bias introduced by numerical schemes. This thesis investigates irreversible perturbations of overdamped Langevin dynamics, aiming to accelerate mixing while controlling discretization error. Irreversible perturbations introduce skew-symmetric drift terms that preserve the target distribution while inducing rotational flow, thereby enhancing exploration. Although prior work has established their benefits in continuous-time settings, the impact of discretization and the design of optimal perturbations for discrete-time algorithms remain open problems. We develop a framework for optimizing constant (position-independent) irreversible perturbations in the Unadjusted Langevin Algorithm (ULA). Our approach balances two competing objectives: maximizing the spectral gap of the continuous dynamics to accelerate convergence, and minimizing discretization error that drives estimation bias. Motivated by this, we introduce new criteria that jointly evaluate bias and efficiency, and we show how these criteria identify perturbations that improve performance beyond existing constructions. Theoretical analysis is complemented by numerical experiments on Gaussian and non-Gaussian targets. These experiments demonstrate that appropriately designed irreversible perturbations can reduce mean-squared error without sacrificing stability, while poorly chosen perturbations can degrade performance.
The results highlight the importance of geometry-aware design and motivate systematic optimization strategies for irreversible perturbations. Overall, this work extends the theoretical and practical understanding of irreversible Langevin dynamics, bridging the gap between continuous-time spectral analysis and discrete-time numerical performance. It provides principled tools for constructing efficient MCMC samplers, with potential applications in high-dimensional Bayesian inference and modern machine learning.
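As a concrete reference point, a constant irreversible perturbation replaces the ULA drift -∇U with -(I + γJ)∇U for a skew-symmetric matrix J, which leaves the continuous-time invariant distribution unchanged. A minimal sketch on an illustrative anisotropic Gaussian target (the target, step size, and perturbation strength below are my own illustrative assumptions, not the thesis's optimized constructions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: anisotropic Gaussian pi(x) ∝ exp(-x @ A @ x / 2), grad U(x) = A @ x.
A = np.array([[3.0, 0.0],
              [0.0, 0.5]])
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # skew-symmetric: J.T == -J
h = 0.01                             # step size

def ula_step(x, gamma):
    # Perturbed drift (I + gamma*J) @ grad U: for constant skew-symmetric J
    # the continuous dynamics still preserve pi, since div(pi * J @ grad U)
    # vanishes, while the rotational component speeds up exploration.
    drift = (np.eye(2) + gamma * J) @ (A @ x)
    return x - h * drift + np.sqrt(2.0 * h) * rng.standard_normal(2)

x = np.zeros(2)
samples = []
for k in range(40000):
    x = ula_step(x, gamma=1.0)
    if k >= 4000:                    # discard burn-in
        samples.append(x)
cov = np.cov(np.array(samples).T)    # should approximate inv(A) = diag(1/3, 2)
```

The trade-off studied above appears directly in this sketch: larger `gamma` stirs the anisotropic target faster, but the discrete-time chain's stationary distribution acquires an O(h) bias that depends on the chosen perturbation.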
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Camera Motion Compensated Viewpoint Shift</title>
<link href="https://hdl.handle.net/1721.1/164829" rel="alternate"/>
<author>
<name>Snowdon, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/164829</id>
<updated>2026-02-13T03:49:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Single Camera Motion Compensated Viewpoint Shift
Snowdon, Adam
Eye contact is a necessary tool for human connection, yet in most video conferencing situations eye contact is not possible. Standard laptop and webcam configurations position the camera at the top of the screen, meaning that when the user looks at other people’s faces in the center of the screen, the camera captures the user looking downward, creating the impression of poor eye contact for remote participants. Solutions that build a 3D model of the face to synthesize a gaze-corrected view have been explored but are too computationally costly for most personal computers. To address this computational challenge, we draw inspiration from 2D frame interpolation techniques to synthesize a virtual camera view that repositions the user’s apparent gaze toward the camera. Our method uses a single camera located at the top of the user’s screen and requires only a brief setup period. Assuming there is only one user, our approach creates a virtual camera view that transforms the user’s viewpoint from the screen center to the camera position, enabling more realistic eye contact in video conference calls.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data</title>
<link href="https://hdl.handle.net/1721.1/164828" rel="alternate"/>
<author>
<name>Pan, Jessica N.</name>
</author>
<id>https://hdl.handle.net/1721.1/164828</id>
<updated>2026-02-13T03:48:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data
Pan, Jessica N.
Mapping the brain’s complex neural networks requires tracing the long-distance pathways of individual axons, a task that demands a comprehensive 3D reconstruction of the brain. Recently, spatially resolved transcriptomics (SRT) methods have enabled the study of gene expression and biomolecule distribution in each neuron in its spatial context, opening the door to more thoroughly investigating cell-cell interactions between neurons. However, SRT methods are limited to slices of tissue; therefore, computational alignment is essential to reconstruct a cohesive 3D volume while correcting for both batch effects and inherent sample variability. This thesis presents a novel framework that addresses these challenges through three primary contributions. First, a memory-efficient, non-reference-based algorithm was developed to align the superficial surfaces of adjacent, high-resolution tissue slices. Second, these surface transformations were interpolated through the tissue slices on a proof-of-concept dataset of three adjacent slices. Third, methods for co-transforming fluorescent protein imaging data were explored to fully resolve the cell boundaries between neurons. These three methods are necessary steps towards creating a fully-resolved, multimodal 3D model of the brain.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators</title>
<link href="https://hdl.handle.net/1721.1/164827" rel="alternate"/>
<author>
<name>Garg, Shruti</name>
</author>
<id>https://hdl.handle.net/1721.1/164827</id>
<updated>2026-02-13T03:49:11Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators
Garg, Shruti
Non-convex optimization is essential to tackle increasingly complex and practical problems in kinematic motion planning. Although introducing non-convexity often sacrifices guarantees of feasibility and optimality–making solutions more susceptible to local minima or failure to converge–many robotic systems and tasks are non-convex by nature, necessitating at least somewhat non-convex formulations. In this thesis, we aim to mostly constrain non-convexity to the objective. This optimization structure helps preserve certain feasibility guarantees in theory and usability in practice while enhancing optimality of solutions, even if global optimality is not achieved. In the first chapter, we demonstrate the effectiveness of non-convex objectives in scenarios where motion planning involves a non-convex parameterization of the configuration space. We keep constraints strictly convex, with the non-convexity quarantined to the objective. This structure guarantees a feasible solution given a feasible initial guess. We primarily use our method to post-process Graphs of Convex Sets solutions in three domains: constrained bimanual motion, motion with guaranteed non-collision, and planning in SO(3). In each case, the non-convex objective compensates for distortion introduced by the parameterization, resulting in more efficient and natural motion. In the second chapter, we propose a teleoperation scheme with full-body motion planning for non-holonomic mobile manipulators. Our key contribution is a Differential Inverse Kinematics (DiffIK) formulation that crafts non-convex objectives to avoid singularities and joint limits, leading to more robust, feasible motion. Unlike before, the constraints are not strictly convex, so the optimization has no guarantees of feasibility. However, we mitigate the non-convexity in the constraints as much as we can by linearizing around the robot’s current position and approximating the highly non-convex non-holonomic constraint.
We explore multiple formulations for singularity avoidance and empirically demonstrate that integrating these objectives into DiffIK improves motion quality for teleoperation for the RBY-1 robot.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation</title>
<link href="https://hdl.handle.net/1721.1/164826" rel="alternate"/>
<author>
<name>Pai, Sameer</name>
</author>
<id>https://hdl.handle.net/1721.1/164826</id>
<updated>2026-02-13T03:49:02Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation
Pai, Sameer
A key challenge in the robotic manipulation of deformable objects is the lack of accurate and efficient systems for estimating their pose in real-time, especially in the presence of occlusion. In this thesis we propose CableSplat, a novel non-parametric method leveraging 3D Gaussian Splatting to estimate the pose of a linear deformable object given RGB images of the object from multiple viewpoints. To facilitate the evaluation of the performance of this method, we develop both simulated and real-world pipelines to collect calibrated and segmented recordings of cables undergoing various manipulations and transformations. We find that our method is consistently able to estimate cable pose to within an average error of ∼2.5mm across simulated tasks. Furthermore, performance on a scene reconstruction metric drops only slightly between simulated and real-world data, suggesting high-fidelity state estimation even in the real world. CableSplat is therefore a promising candidate for the extension of existing manipulation systems to deformables.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease</title>
<link href="https://hdl.handle.net/1721.1/164825" rel="alternate"/>
<author>
<name>Guo, Sophie J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164825</id>
<updated>2026-02-13T03:49:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease
Guo, Sophie J.
Advances in artificial intelligence (AI) and generative AI for representation learning have transformed our ability to model complex biological systems. Single-cell RNA sequencing (scRNA-seq) provides unprecedented resolution into cellular heterogeneity, offering a powerful substrate for modeling disease circuitry. However, predicting patient-level phenotypes from scRNA-seq remains challenging due to limited sample sizes, variable cell counts, and the computational burden of modeling long-context dependencies. We present scPhen, a flexible, parametric deep-learning framework for phenotype prediction from single-cell transcriptomic data, applied here to Alzheimer’s disease (AD) as a paradigm of complex, heterogeneous pathology. scPhen consists of a cell embedding module and a patient embedding module, designed to capture both fine-grained molecular patterns and higher-order cell–cell relationships. The framework supports multiple architectural backbones, including Transformers, Graph Neural Networks (GNNs), and state-space models such as Mamba, Mamba2, and BiMamba2, allowing exploration of tunable components for optimized performance. Across classification and regression tasks, state-space models, and in particular BiMamba2, demonstrated superior predictive accuracy and computational efficiency compared to Transformer-based and hybrid approaches. We further integrated attention-based multiple instance learning to enable variable cell counts per patient and to prioritize phenotype-informative cellular subsets. Interpretability analyses using Integrated Gradients and cell-level attention scores revealed gene programs and cell populations associated with AD progression, highlighting known neuroinflammatory signatures and suggesting novel molecular targets. By unifying cutting-edge sequence modeling architectures with scalable single-cell analysis, scPhen provides a generalizable, high-resolution approach to phenotype prediction. 
While demonstrated here in AD, this framework is readily extensible to other complex diseases and multi-modal cellular datasets, bridging computational innovation and biological discovery.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Task Functional Localizers Using Naturalistic fMRI</title>
<link href="https://hdl.handle.net/1721.1/164824" rel="alternate"/>
<author>
<name>Wilke, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/164824</id>
<updated>2026-02-13T03:49:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Predicting Task Functional Localizers Using Naturalistic fMRI
Wilke, Jordan
Functional magnetic resonance imaging (fMRI) data collected during naturalistic stimuli has shown promise for predicting individual traits, biomarkers of disease and functional brain localizations, potentially offering advantages over traditional resting-state approaches. This study investigated the use of interpretable deep learning models to predict demographics and functional task localizer activations from fMRI time-series data collected while participants viewed naturalistic stimuli. Using the data of 143 subjects from the Human Connectome Project, I analyzed 7T fMRI scans from participants watching movies to predict sex, age, and functional localizer activations across multiple cognitive tasks. I employed state-of-the-art machine learning architectures, including DICE and Glacier models, specifically chosen for their interpretable design features that build directed connectivity matrices and produce weighted temporal attention maps. These models aimed to capture dynamic brain activity patterns while maintaining the ability to understand which temporal features drive predictions. The results successfully reproduced previous findings for sex classification but showed poor performance for age prediction, with correlations ranging from -0.175 to 0.243. For functional localizer predictions, models initially appeared to achieve high performance with some specific contrasts having correlations around 0.9 and Dice scores generally above 0.6. However, detailed analysis revealed that these models were primarily predicting group averages rather than learning meaningful inter-subject variability, as evidenced by chance-level subject identification accuracy. This finding contrasts with previous works that demonstrated successful prediction of individual differences in functional localizations. 
The failure to capture inter-subject variability represents a significant limitation, as individual differences in functional regions of interest are crucial for applications such as pre-surgical mapping and disease prediction. My findings suggest that predicting from raw fMRI time-series may require different approaches than those used here, with preprocessed functional connectivity matrices showing promising results, and highlight the importance of sufficient training data to separate signal from noise when learning directly from naturalistic stimuli. Despite these challenges, this work establishes important methodological foundations and identifies key limitations that must be addressed in future research combining naturalistic stimuli with machine learning for fMRI prediction tasks. The findings emphasize the need for models that can capture individual functional differences while maintaining the interpretability necessary for understanding how naturalistic stimuli drive brain-based predictions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference</title>
<link href="https://hdl.handle.net/1721.1/164823" rel="alternate"/>
<author>
<name>Chung, Karen</name>
</author>
<id>https://hdl.handle.net/1721.1/164823</id>
<updated>2026-02-13T03:49:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference
Chung, Karen
GPU-compatible probabilistic programming languages (PPLs) have enabled high-performance, data-parallel programmable inference. However, these systems face fundamental trade-offs between expressiveness and performance, as their GPU code generation is automated and black-boxed, limiting optimization opportunities and imposing restrictions on program expressivity. This thesis introduces GenCUDA, a probabilistic programming system that addresses this limitation by embedding the CUDA GPU programming language directly into a C++/CUDA frontend, enabling GPU programmable inference with fine-grained control over runtime and memory profiles. GenCUDA extends the Gen probabilistic programming architecture by providing a dynamic modeling language (DML) that allows users to write performance-critical sections of generative functions as CUDA kernels while maintaining automatic trace management and the generative function interface (GFI). The system supports both sequential and parallel execution contexts through specialized effect handlers that seamlessly compose CPU and GPU code paths. Key technical contributions include: (1) a high-performance GPU distributions library achieving 10-100× speedups over TensorFlow-Probability, (2) memory-efficient trace management via template-optimized parallel effect handlers, and (3) vectorized generative functions that enable massive parallelization of inference algorithms. We demonstrate GenCUDA’s capabilities through comprehensive benchmarks on inference algorithms applied to diverse models including factor graphs, mixture models, and Hidden Markov Models. Results show significant performance improvements over JAX-based implementations: up to 3× speedup for importance sampling on a hierarchical model, 5.7× speedup for parallel Gibbs sampling on factor graphs, and memory efficiency improvements for large-scale mixture models supporting up to 6× as many clusters compared to existing frameworks’ limits. 
The system maintains the composability and expressiveness of probabilistic programming while unlocking GPU performance optimization techniques such as kernel fusion and memory hierarchy exploitation that are inaccessible to higher-level frameworks. GenCUDA demonstrates that embedding low-level GPU programming within automated probabilistic inference workflows can achieve both performance gains and algorithmic expressivity without sacrificing the modularity of probabilistic programming paradigms.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simplifying Equivariant GPU Kernels through Tile-based Programming</title>
<link href="https://hdl.handle.net/1721.1/164822" rel="alternate"/>
<author>
<name>Kotak, Mit</name>
</author>
<id>https://hdl.handle.net/1721.1/164822</id>
<updated>2026-02-13T03:49:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Simplifying Equivariant GPU Kernels through Tile-based Programming
Kotak, Mit
E(3)-equivariant neural networks have demonstrated success across a wide range of 3D modeling tasks. Until recently, they were bottlenecked by their high memory and wall-time requirements. In this thesis we first provide an overview of recent GPU kernel efforts by both academia and industry that address this issue. These approaches trade off engineering complexity for performance, while still being algorithmically bottlenecked at 10% GPU utilization. We instead trade off performance for engineering simplicity. This not only lowers the barrier to GPU programming but also builds an abstraction layer for reasoning about future algorithmic innovations that can improve GPU utilization. Our kernel, &#119861;3, based on tile-based programming, implements these optimizations in just 100 lines of PyTorch-like code. We explore the performance-simplicity tradeoff with two case studies and demonstrate the practicality of our kernel workflow through downstream integration with a production model. We hope this work serves as inspiration to broaden and deepen existing equivariant kernel efforts.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From coarse fate choice to precise pattern: post-mitotic progenitor targeting</title>
<link href="https://hdl.handle.net/1721.1/164820" rel="alternate"/>
<author>
<name>Nie, Mel F.</name>
</author>
<id>https://hdl.handle.net/1721.1/164820</id>
<updated>2026-02-13T03:49:07Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From coarse fate choice to precise pattern: post-mitotic progenitor targeting
Nie, Mel F.
Planarians possess remarkable regenerative abilities, driven by pluripotent stem cells called neoblasts. While neoblasts are known to give rise to progenitor cells that form various tissues, whether, and to what extent, these progenitors migrate across the animal remains unclear. Irradiation experiments eliminate all neoblasts outside shielded areas, allowing for the visualization of cell migration from the remaining neoblasts, but irradiated animals may not reflect homeostatic progenitor migration patterns. To address this, 5-ethynyl-2’-deoxyuridine (EdU) labeling and plug transplant techniques were used to trace progenitor movement in non-irradiated planarians. Using whole-mount fluorescence in situ hybridization (FISH) and the quantification of EdU-labeled cells, this study demonstrates that progenitor cells are capable of migrating long distances and exhibit a pronounced anterior bias in their movement and integration.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Ensemble Strategies for Generalization in Deepfake Image Detection</title>
<link href="https://hdl.handle.net/1721.1/164818" rel="alternate"/>
<author>
<name>Wagh, Rohan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164818</id>
<updated>2026-02-13T03:49:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development of Ensemble Strategies for Generalization in Deepfake Image Detection
Wagh, Rohan M.
The growing accessibility of generative models has enabled the rapid proliferation of deepfake content, posing significant challenges in image-based biometric security and media authenticity. In this thesis, six diverse facial deepfake image datasets are assembled, and four modern detection models are evaluated in a cross-domain scenario. We observe that individual models fail to generalize to images generated by techniques outside the scope of their training data. This often hinders the applicability of a single model in real-world deepfake detection. This thesis proposes ensemble strategies as a means of addressing this lack of generalization. We find that the ensemble models outperform individual models in classifying deepfake images, particularly in terms of accuracy and recall. An exhaustive evaluation of combinations of models shows that ensembles of similar models provide limited benefit, whereas ensembles of complementary models lead to significant improvements in classification performance. Ensembling models based specifically on accuracy and recall metrics also produces models that lower the rate of more harmful false negative predictions. This work highlights the value of ensemble models in improving generalization across diverse image families and provides a framework for building robustness in real-world deepfake detection systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests</title>
<link href="https://hdl.handle.net/1721.1/164700" rel="alternate"/>
<author>
<name>Tan, Lip-Bu.</name>
</author>
<id>https://hdl.handle.net/1721.1/164700</id>
<updated>2026-02-03T04:58:28Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests
Tan, Lip-Bu.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation</title>
<link href="https://hdl.handle.net/1721.1/164696" rel="alternate"/>
<author>
<name>Smith, Mathew D. (Mathew Darin)</name>
</author>
<id>https://hdl.handle.net/1721.1/164696</id>
<updated>2026-02-03T04:58:24Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation
Smith, Mathew D. (Mathew Darin)
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1997; Includes bibliographical references (leaves 43-45).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The crystallization of sucrose</title>
<link href="https://hdl.handle.net/1721.1/164695" rel="alternate"/>
<author>
<name>Brown, Ernest K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164695</id>
<updated>2026-02-03T04:58:17Z</updated>
<published>1929-01-01T00:00:00Z</published>
<summary type="text">The crystallization of sucrose
Brown, Ernest K.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1929; Includes bibliographical references (leaf 81).
</summary>
<dc:date>1929-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems of Visualization for Musical Futures</title>
<link href="https://hdl.handle.net/1721.1/164673" rel="alternate"/>
<author>
<name>Naseck, Perry</name>
</author>
<id>https://hdl.handle.net/1721.1/164673</id>
<updated>2026-01-30T03:24:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Systems of Visualization for Musical Futures
Naseck, Perry
This thesis investigates how large-scale visual systems can communicate the presence, agency, and foresight of improvising musical agents–human and AI–during live performance. We propose a framework for manifesting AI collaborators on stage through five principles: musical transparency, live improvisational reactivity, demonstrated virtuosity, communication for collaboration, and visual fit. Two public performances operationalize these ideas: an addressable-light sculpture that renders harmonic space, and a stage-sized kinetic sculpture built from novel, low-cost Generic Pan Tilt fixtures that visualize the AI’s planned “musical futures.” The latter combines a real-time, MIDI-conditioned, Transformer-based hand-motion model with deterministic, pattern-based mappings that signal states such as resting and regeneration. Audience surveys indicate that viewers perceived links between musical turns and kinetic gestures while requesting clearer explanatory cues. We document the open-source hardware, firmware, and control protocols of the Generic Pan Tilt platform and reflect on design tradeoffs for accessibility, reliability, and expressivity. Finally, we outline a real-time analysis toolchain–motif detection, parallelism, and continuous energy/tension estimators–that emits OSC triggers for lighting, media, kinetic, and spatial-audio systems, enabling reactive shows beyond timecode. Together, these systems advance performable visualizations of human-improvised and AI-driven musical futures.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Rules for LLM-Generated Code: A RealWorld Case Study</title>
<link href="https://hdl.handle.net/1721.1/164672" rel="alternate"/>
<author>
<name>Lawrence, Jennifer M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164672</id>
<updated>2026-01-30T03:24:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design Rules for LLM-Generated Code: A RealWorld Case Study
Lawrence, Jennifer M.
This thesis conducts a case study exploring the interaction between software design, extensibility, and LLM code generation. The central problem we investigate is whether LLMs violate software design principles in ways that introduce bugs and ultimately hinder extensibility. We examine several repositories belonging to the RealWorld collection, a project that demonstrates combinations of frameworks, databases, and programming languages for building full-stack web apps modeled on an existing social media application. We create a concept-based implementation of the RealWorld API. Concept Design defines software systems in terms of the abstract purposes and relationships of self-contained units of functionality. It enforces stringent design standards and aims to help humans better understand complex software behavior. To test code extensibility, we develop three phases of new functionality to be added to the RealWorld API. Each phase is intended to mimic real-world software development, adding functionality that is commonly found in social media platforms while increasing nuance and complexity. The code for these extensions is generated by an AI agent, then reviewed by a human coder who classifies and fixes any bugs. In this study, we examine how LLMs interact with software paradigms like Concept Design, the kinds of design violations they produce, and whether these violations correlate with bugs that impede extensibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cognify: An On-Device, AI-powered Learning Assistant</title>
<link href="https://hdl.handle.net/1721.1/164671" rel="alternate"/>
<author>
<name>Huang, Siyong</name>
</author>
<id>https://hdl.handle.net/1721.1/164671</id>
<updated>2026-01-30T03:24:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Cognify: An On-Device, AI-powered Learning Assistant
Huang, Siyong
Large Language Models (LLMs) have proven highly effective for a wide range of natural language processing tasks, but their size and compute requirements often restrict their use to powerful cloud-based infrastructures. In recent years, significant progress has been made in shrinking LLMs while maintaining performance levels comparable to much larger models. We are approaching the point where the capabilities of massive, multi-billion-parameter models can be realistically replicated on consumer-grade devices. This thesis builds upon that foundation by developing an AI-powered note-taking application that runs entirely offline, using only the compute resources available on a personal laptop. The application is designed to listen to lectures alongside the student and provide support in real time—through transcription, notes generation, and context-aware search. Achieving this level of interactivity locally introduces challenges in reducing end-to-end latency, which this project addresses through both model-level optimizations and the design of efficient prompting and inference algorithms. A demo of the app can be found on YouTube.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Analysis of the Apple AMX Matrix Accelerator</title>
<link href="https://hdl.handle.net/1721.1/164670" rel="alternate"/>
<author>
<name>Zhou, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164670</id>
<updated>2026-01-30T03:24:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Performance Analysis of the Apple AMX Matrix Accelerator
Zhou, Jonathan
Apple Silicon integrates a dedicated Apple Matrix Coprocessor (AMX) that executes outer-product style computations with high throughput, but its public programming model remains largely hidden behind the Accelerate framework. This thesis turns AMX into a more predictable and practical target by combining (i) empirical throughput characterization, (ii) a case study on AMX-specific matrix multiplication (GEMM) design, and (iii) an interpretable rule-based latency model that predicts cycle counts for short AMX instruction sequences. First, microbenchmarks quantify AMX load/store and compute limits across matrix and vector modes and data types. We analyze throughput in both GFLOPS and AMX instructions per cycle, and also observe output-register-based throughput limitations. Second, we develop an in-place GEMM that uses masked outer products and strategically overlapping tiles to avoid the scratch buffers used by Accelerate, outperforming Accelerate while preserving simplicity. Third, we introduce a compact latency model that decomposes cycles into per-instruction BaseTime, symmetric SwitchLatency for instruction changes, and instruction FullLatency (data dependency) terms. Fitted with non-negative coordinate descent on length-2 loops and validated on length-3 sequences via a lightweight loop simulation, the model obtains reasonably high accuracy while remaining helpful for those trying to understand the architecture.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Large Language Models from a Data Systems Perspective</title>
<link href="https://hdl.handle.net/1721.1/164667" rel="alternate"/>
<author>
<name>Chen, Peter Baile</name>
</author>
<id>https://hdl.handle.net/1721.1/164667</id>
<updated>2026-01-30T03:24:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Large Language Models from a Data Systems Perspective
Chen, Peter Baile
Strong retrieval and reasoning capabilities are essential for large language models (LLMs) to effectively handle a broad spectrum of downstream tasks, such as open-domain question answering and solving math or science problems. While current LLM-based frameworks achieve strong performance on complex retrieval and reasoning tasks, they do so at a high computational cost. Additionally, they often lack structured, systematic problem-solving strategies, leading to unexpected failures. In particular, these models typically operate in an iterative, online, and isolated fashion—failing to exploit relationships across data sources, opportunities for offline computation, and the benefits of reusability—resulting in less-than-optimal outcomes. In contrast, traditional data management systems are engineered for both efficiency and accuracy, with careful coordination across all stages of the query pipeline. Inspired by these principles, this work proposes novel approaches to improve LLM-based retrieval and reasoning by incorporating optimization techniques from data systems. Our evaluation across a range of knowledge- and reasoning-intensive datasets demonstrates significant gains in both accuracy and computational efficiency.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits</title>
<link href="https://hdl.handle.net/1721.1/164665" rel="alternate"/>
<author>
<name>Bui, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164665</id>
<updated>2026-01-30T03:24:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits
Bui, Eric
The performance and scalability of superconducting quantum circuits depend critically on the microwave environment. Minimizing signal reflections and suppressing thermal noise are essential for achieving high-fidelity readout and preserving qubit coherence. A significant challenge arises from the use of conventional cryogenic components such as isolators and circulators, which exhibit nonideal out-of-band reflection characteristics. Reflections degrade impedance matching and limit the performance of broadband quantum-limited amplifiers. Superconducting implementations of reflectionless microwave filters offer a promising solution to mitigate these issues. The focus of this work is the fabrication and cryogenic characterization of reflectionless filters compatible with superconducting qubit fabrication flows. Devices were implemented on high-resistivity silicon substrates using aluminum ground planes, integrated nichrome resistors, and crossovers formed with SiO2 interlayer dielectric. Cryogenic measurements at 20 mK demonstrate high return loss, confirming the viability of these filters for co-fabrication with traveling-wave parametric amplifiers (TWPAs) and circuit quantum electrodynamics (cQED) architectures. The filters exhibit low insertion loss in the passband to maintain quantum measurement efficiency and provide broadband reflection suppression across frequencies relevant to superconducting qubits, offering a scalable way to manage microwave noise in superconducting quantum processors.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval</title>
<link href="https://hdl.handle.net/1721.1/164664" rel="alternate"/>
<author>
<name>Dongo Aguirre, Gyalpo Melchisedeck</name>
</author>
<id>https://hdl.handle.net/1721.1/164664</id>
<updated>2026-01-30T03:24:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval
Dongo Aguirre, Gyalpo Melchisedeck
Until now, state-of-the-art research into AI-driven clinical workflows has been confined to proprietary, closed-source systems from vendors like Epic and Oracle, or private experiments like Stanford’s ChatEHR, creating a critical barrier to academic innovation. This thesis introduces CONDOR, the first fully open-source and replicable research environment designed to simulate an agentic, conversational AI interacting with a high-fidelity Electronic Health Record (EHR). By integrating an open-source, FHIR-native EHR (Medplum) with a complex, realistic public clinical dataset (MIMIC-IV FHIR), CONDOR provides a foundational testbed that has been previously unavailable to the research community. The framework’s primary contribution is a novel alignment and evaluation methodology that adapts the principles of SelfCite to the clinical domain. We propose a ‘ClinicalConfidence’ score to quantify the trustworthiness of generated statements and programmatically generate a high-quality preference dataset for alignment using Simple Preference Optimization (SimPO). We compare a standard vector-based Retrieval-Augmented Generation (RAG) baseline against a more advanced GraphRAG architecture that leverages a two-tiered knowledge graph of patient data and medical ontologies. Our results demonstrate that the full CONDOR system, combining GraphRAG with SimPO alignment, significantly improves citation quality and verifiability, establishing a new open-source benchmark for the development of safe and reliable clinical AI.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation</title>
<link href="https://hdl.handle.net/1721.1/164663" rel="alternate"/>
<author>
<name>Nair, Anushka Manchanda</name>
</author>
<id>https://hdl.handle.net/1721.1/164663</id>
<updated>2026-01-30T03:24:50Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation
Nair, Anushka Manchanda
As of 2025, social platforms have become a primary news source, magnifying the reach of misleading content [1]. Exposure to misinformation has been linked to shifts in public attitudes and behavior, including vaccine uptake [2] and voting behaviors [3]. However, current misinformation detection approaches often focus on a narrow definition of misinformation: factual claims that can be clearly judged as true or false. Yet recent research suggests the problem lies elsewhere: overt falsehoods (“vaccines contain microchips”) can carry little harm, while technically accurate but decontextualized narratives can be more influential. Allen et al. (2024) [4] found that factually accurate “vaccine-skeptical” content had a much greater impact on vaccine hesitancy than misinformation flagged by fact-checkers. These narratives work by omitting information, framing misleadingly, or cherry-picking evidence, forms of manipulation that can elude traditional fact-checking. Though professional fact-checkers are often able to recognize these tactics and the broader context of information, they cannot keep pace with the volume of online content. This thesis designs a Large Language Model (LLM) based pipeline meant to partner with, rather than replace, human fact-checkers. The system decomposes content into its explicit and implicit claims, rhetorical tactics, and the “missing context” questions it raises; retrieves evidence from fact-check databases and reliable sources; and synthesizes grounded explanations while assigning calibrated harm scores to guide triage. Evaluated on fact-checked tweets, the pipeline matched expert judgments in 92.6% of cases where experts agreed, and flagged for review the posts where experts disagreed, a gray zone requiring human judgment.
The system’s explanations ranked higher than crowdsourced Community Notes in helpfulness, clarity, and trustworthiness when assessed by an LLM, and its harm evaluations aligned with human reviewers in 87.5% of cases, enabling prioritization of the content with the greatest potential impact. Despite constraints of sample size and processing latency, the results demonstrate the feasibility of a human–AI workflow that treats disagreement as a signal and directs scarce attention toward high-impact misinformation that current automated systems can miss.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems</title>
<link href="https://hdl.handle.net/1721.1/164661" rel="alternate"/>
<author>
<name>Sneh, Tal</name>
</author>
<id>https://hdl.handle.net/1721.1/164661</id>
<updated>2026-01-30T03:24:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems
Sneh, Tal
Recent advances in silicon photonics have yielded impressive results in fields including biophotonic optical tweezers and trapped-ion quantum systems. However, the majority of these demonstrations, while offering advantages in size, cost, and dense integration, lag behind their bulk-optic counterparts, limited by a lack of critical advanced functionality such as spatial control of light in the near field or polarization control at visible wavelengths. This thesis addresses this gap by designing and experimentally demonstrating, to the best of our knowledge, the first cell experiments using single-beam integrated optical tweezers, chip-based 3D printers, and integrated polarization rotators and splitters at blue wavelengths. First, we demonstrate optical trapping and tweezing of microspheres using a near-field-focusing integrated optical phased array, at a standoff distance over two orders of magnitude larger than prior integrated demonstrations. We then use this system to perform the first cell experiments using single-beam integrated optical tweezers. Second, we use a tunable integrated optical phased array operating at red wavelengths to print designs in a visible-light-curing resin, demonstrating the first chip-based 3D printer. Third, we design and experimentally demonstrate the first integrated polarization rotators and splitters operating at blue wavelengths, enabling polarization control on chip for sophisticated integrated manipulation of trapped-ion and neutral-atom quantum systems. Finally, we develop key polarization-diverse integrated-photonics devices and utilize them to implement a variety of integrated-photonics-based polarization-gradient-cooling systems, culminating in the first demonstration of polarization-gradient cooling of a trapped ion by an integrated-photonics-based system.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALPACA: An Algorithmic Pipeline for Automated Contour Annotation of Carnatic Music: A Dynamic Programming Framework for Pitch Segmentation and Note Transcription</title>
<link href="https://hdl.handle.net/1721.1/164659" rel="alternate"/>
<author>
<name>Parthasarathi, Sruthi</name>
</author>
<id>https://hdl.handle.net/1721.1/164659</id>
<updated>2026-01-30T03:24:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">ALPACA: An Algorithmic Pipeline for Automated Contour Annotation of Carnatic Music: A Dynamic Programming Framework for Pitch Segmentation and Note Transcription
Parthasarathi, Sruthi
In recent years, a wide range of computational techniques have been developed to extract information from recorded performances of Western music. However, these methods often achieve limited success when applied to non-Western musical traditions. Carnatic music, in particular, poses unique challenges due to the absence of a standardized notation system and the lack of a consistent mapping between frequency bands and note categories. This project introduces a dynamic programming–based transcription framework, incorporating novel methods for label estimation, contour segmentation, and related subtasks, and establishes the foundations for end-to-end automatic transcription of this art form.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Diverse Treatment Policies from Observational Health Data</title>
<link href="https://hdl.handle.net/1721.1/164658" rel="alternate"/>
<author>
<name>Ejilemele, Abe</name>
</author>
<id>https://hdl.handle.net/1721.1/164658</id>
<updated>2026-01-30T03:24:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modeling Diverse Treatment Policies from Observational Health Data
Ejilemele, Abe
Learning policies for real-world tasks often requires modeling human behavior, especially in domains like healthcare and driving. In these settings, skills are learned from expert human demonstrations, but such data are typically multimodal, violating the common single-expert assumption. We study sequential clinical treatment decision-making in the offline imitation learning setting, where environment interaction is prohibited, reflecting the challenges of experimentation in safety-critical domains. Existing methods for multi-expert offline imitation learning often restrict the latent space, underspecify its structure, or omit objective terms that prevent latent collapse and encourage behavior discovery. We propose a fully offline approach that addresses these shortcomings and improves learning from multi-expert demonstrations through modifications to the formulation of the latent approximate posterior and the model architecture. We suggest that our method is more robust to real-world settings where the true number of demonstrators may not be known. We also incorporate an occupancy-matching term that injects awareness of the rollout distribution over trajectories into our behavior-cloning objective. We evaluate our method against baselines on both simulated multi-expert demonstrations from an extended S-CVSim and real-world demonstrations from MIMIC. Our approach achieves consistently higher next-step action prediction and behavior discovery performance. While ground-truth expert policies are unavailable for MIMIC, visual analysis shows our method uncovers clinically meaningful variations in expert strategies, reflecting treatment population diversity.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Modular Superconducting Quantum Processor using Chiral Waveguide Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/164656" rel="alternate"/>
<author>
<name>Yankelevich, Beatriz</name>
</author>
<id>https://hdl.handle.net/1721.1/164656</id>
<updated>2026-01-30T03:24:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards a Modular Superconducting Quantum Processor using Chiral Waveguide Quantum Electrodynamics
Yankelevich, Beatriz
As the field of superconducting quantum computing advances, networking qubits within a single system becomes essential for building modular processors. Modularity allows the system to circumvent scalability constraints and enables architectures and computational schemes that exploit non-local connectivity to enhance processing capabilities. This work proposes non-local entanglement generation methods based on the theory of chiral waveguide quantum electrodynamics, the quantum-optical framework that describes systems of atoms coupled non-reciprocally to a continuum of modes. We leverage these effects to design a chiral communication module composed of multiple superconducting qubits, capable of both directional single-photon routing and the realization of chiral, driven-dissipative entanglement protocols.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fine-tuning Boltz for Antibody-Antigen Binding Prediction</title>
<link href="https://hdl.handle.net/1721.1/164655" rel="alternate"/>
<author>
<name>Kim, Ji Won</name>
</author>
<id>https://hdl.handle.net/1721.1/164655</id>
<updated>2026-01-30T03:24:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fine-tuning Boltz for Antibody-Antigen Binding Prediction
Kim, Ji Won
Accurate prediction of antibody-antigen binding is a central challenge in computational immunology. Its direct implications for therapeutic antibody design and vaccine development have made it one of the most rapidly growing research areas. Recent advances in protein language models and structure prediction have provided new tools for modeling, yet these approaches often fall short in capturing the fine-grained features that drive binding specificity in antibodies and antigens. This thesis evaluates multiple strategies for improving predictive performance. First, we investigate a custom multiple sequence alignment (MSA) experiment. Standard Boltz-2 training relies on MSAs from broad protein databases, which capture global diversity but under-represent lineage-specific constraints. To address this, we constructed antibody-specific MSAs to test whether restricting the search space to antibody repertoires improves model learning. Unfortunately, gains in downstream binding prediction were limited, suggesting that further work is needed on training models with domain-specific databases. Our second line of investigation focused on fine-tuning Boltz-2, a generative structural foundation model, using curated antibody–antigen data. By leveraging Boltz-2’s internal sequence embeddings, we trained a predictive model for binding affinity. This approach yielded stronger ROC performance compared to baseline models, achieving a validation AUROC of 0.645, demonstrating the advantages of structural generative priors for antibody–antigen binding prediction.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deterministic Circuit Range Avoidance is (Likely) Intractable</title>
<link href="https://hdl.handle.net/1721.1/164654" rel="alternate"/>
<author>
<name>Ilango, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/164654</id>
<updated>2026-01-30T03:24:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Deterministic Circuit Range Avoidance is (Likely) Intractable
Ilango, Rahul
Circuit Range Avoidance (denoted Avoid) is a computational problem where, given a Boolean circuit with more output bits than input bits, one must output a string outside of the range of the circuit. A simple counting argument implies that such a string must always exist and also guarantees that outputting a uniformly random string is correct with good probability. A natural question is whether this can be derandomized: does there exist an efficient deterministic algorithm for Avoid? We give the first evidence that deterministically solving Avoid is intractable. We show that there is no polynomial-time algorithm for Avoid under plausible assumptions in complexity theory and cryptography. Specifically, our assumptions are that NP ≠ coNP and that subexponentially-secure indistinguishability obfuscation exists.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities</title>
<link href="https://hdl.handle.net/1721.1/164652" rel="alternate"/>
<author>
<name>Ranade, Esha</name>
</author>
<id>https://hdl.handle.net/1721.1/164652</id>
<updated>2026-01-30T03:24:42Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities
Ranade, Esha
Large Language Models (LLMs) have achieved remarkable success in natural language processing tasks and are increasingly being used for language generation. Significant advancements in this field have unlocked capabilities that enable their adoption in sophisticated roles, including acting as evaluators or "judges" of text for various attributes such as factuality, relevance, fluency, and reasoning quality. However, their understanding and ability to assess subjective attributes, such as the level of formality in a piece of writing, and to produce content matching these subjective attributes remain unclear and underexplored. This research develops a methodology to study how LLMs evaluate subjective attributes. It has three primary contributions: (i) a reproducible user study to generate human-annotated labels for different attributes, (ii) an analysis of the extent to which different LLMs provide subjective labels aligned with human annotators, and (iii) an analysis of the extent to which LLMs generate content aligned with specified intended subjective labels, relative to humans. The user study and the analyses have been conducted both with and without a reference scale. The scale itself, the survey design, and the evaluation questions have all undergone multiple rounds of iteration informed by study tester feedback to improve clarity, consistency, and reliability for the final study. Comparisons between human-generated ratings and LLM-generated ratings for both human-generated content and LLM-generated content reveal the extent to which LLMs align with human judgment, providing insights into their capabilities and limitations. While humans typically perform better in these roles, LLMs attain reliably high levels of success in producing and judging text, despite tending to err on the more formal side. Both groups’ performance increases significantly with the aid of a formalized reference scale.
Across the suite of models tested, OpenAI’s GPT family leads overall performance, with Anthropic’s Claude and Meta’s LLaMA series showing notable strengths in specific formality ranges. Although this work focuses on the formality attribute of text, the methodology developed can be used to evaluate other subjective qualities of text, such as conciseness, usefulness, or persuasiveness. Ultimately, these findings may guide future efforts to fine-tune LLMs to produce text that more precisely matches the desired stylistic or ethical standards.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Burst Parallelism of SigmaOS processes with CRIU</title>
<link href="https://hdl.handle.net/1721.1/164651" rel="alternate"/>
<author>
<name>Tang, Frederick</name>
</author>
<id>https://hdl.handle.net/1721.1/164651</id>
<updated>2026-01-30T03:24:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Accelerating Burst Parallelism of SigmaOS processes with CRIU
Tang, Frederick
σOS is a multi-tenant cloud operating system designed to integrate the agility of serverless environments with the interactivity of microservices. A goal of achieving this integration is the ability to start new instances of server processes quickly. However, σOS only handles σcontainer initialization, and does not assist with runtime and app initialization costs. One approach to overcome this challenge is to checkpoint processes using Checkpoint/Restore in Userspace (CRIU). CRIU is a Linux toolset that can start new server instances by restoring them from a saved checkpoint, avoiding the full cost of reinitialization and setup. This thesis introduces σCRIU, which adapts CRIU for burst-parallel spawning of microservices in σOS. σCRIU implements a number of optimizations: compressing checkpointed proc metadata to reduce network communication costs, implementing demand paging using a lazy page service, and caching kernel metadata to reduce the latency of CRIU’s restore operation. These optimizations allow σCRIU to start new microservices on remote machines quickly while still making use of CRIU’s existing, proven checkpoint-and-restore technology.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)</title>
<link href="https://hdl.handle.net/1721.1/164649" rel="alternate"/>
<author>
<name>Gosalia, Mehek</name>
</author>
<id>https://hdl.handle.net/1721.1/164649</id>
<updated>2026-01-30T03:24:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)
Gosalia, Mehek
This work introduces a novel pipeline for scene reconstruction that jointly prioritizes semantic accuracy and visual fidelity, addressing a gap in current approaches. Prior pipelines often emphasize either semantic analysis or photorealistic rendering, but rarely both. This method combines scene analysis, segmentation, and retexturing to yield reconstructions that preserve structural semantics while convincingly reflecting the visual qualities of the original image. The motivation lies in the limitations of existing systems. Existing database-assisted approaches depend on proprietary datasets that restrict stylistic diversity or on in-the-wild assets. This constrains expressiveness and often produces results that are visually misaligned. Conversely, pipelines optimized for visual realism neglect semantic correctness, generating outputs that may appear plausible but lack categorical or structural grounding. Our framework addresses this by first enforcing semantic accuracy through the selection of database assets, then editing those assets to be stylistically faithful to the reference, producing reconstructions that are both interpretable and expressive. We begin with database-assisted scene analysis, using an open-source asset database containing chairs, lamps, sofas, tables, and benches. Input images are depth-mapped, segmented, and parsed into object masks, which are matched to database assets based on semantic labels and visual correspondence. Each asset is broken into semantic segments and rescaled per-component using vision-language model predictions to better match the reference object. Finally, the asset is retextured based on the image mask of the reference object in the input image. Evaluation on six diverse scenes—both photographs and artworks—shows the pipeline produces semantically grounded, visually accurate reconstructions under non-research conditions.
Future work will focus on expanding the asset database, reducing reliance on proprietary texturing, and releasing an open-source implementation to broaden accessibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Planar Silicon Solar Cells for Singlet Fission Sensitization</title>
<link href="https://hdl.handle.net/1721.1/164648" rel="alternate"/>
<author>
<name>Wang, Janet Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/164648</id>
<updated>2026-01-30T03:24:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Designing Planar Silicon Solar Cells for Singlet Fission Sensitization
Wang, Janet Z.
Singlet fission (SF)-sensitized silicon (Si) solar cells offer a path towards surpassing the Shockley-Queisser efficiency limit for single-junction solar cells. However, realizing efficient charge transfer from the SF material to Si remains a significant challenge that requires careful interface engineering. Prior work showed that Si microwire cells sensitized with tetracene (Tc) and a zinc phthalocyanine (ZnPc) donor layer can boost photocurrent and external quantum efficiency (EQE). Planar devices are simpler to fabricate than microwire devices and reproduce the planar geometry of optical test samples to connect studies of the interface to device performance. This thesis integrates modeling and experimental approaches to guide the design of planar SF-sensitized Si solar cells. We developed a fabrication process for planar cells comparing varied oxide passivation layer growth conditions and surface treatments, Si(100) versus Si(111) orientation, and junctions formed by diffusion doping versus ion implantation. Complementary surface photovoltage (SPV) measurements on matching optical stacks show evidence of an illumination-induced transient positive charge density at the Tc/ZnPc/oxide/Si interface, consistent with increased field effect passivation. We find that SPV responses on AlOx/n-Si are dominated by substrate band bending; consequently, SiOx is the preferred passivation to suppress the background and isolate the SPV signals driven by the organics. A drift–diffusion model shows that the diffusion doping (exponential) emitters reduce surface recombination rates compared to ion implantation (Gaussian) emitters. We also show that a positive fixed charge density at the surface enhances short wavelength EQE, with the effect strongest for Gaussian emitters. Together, these results provide practical design rules for planar SF-sensitized Si cells and the study of charge transfer at organic-Si interfaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microsecond Time Synchronization for Computing Fiber Networks</title>
<link href="https://hdl.handle.net/1721.1/164647" rel="alternate"/>
<author>
<name>Li, Jenny Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/164647</id>
<updated>2026-01-30T03:24:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Microsecond Time Synchronization for Computing Fiber Networks
Li, Jenny Y.
We present a microsecond-accurate time synchronization method and time localization system for a sensor network of spatially-separated, low-power Bluetooth nodes, with the goal of integrating this system into thermally-drawn computing fibers. Each node consists of an nRF54L15 SoC paired with an ICS-43434 digital I2S microphone, enabling synchronized audio data collection. Our design leverages Bluetooth LE connection events to synchronize local clocks with sub-10 µs accuracy across a multi-peripheral topology; we trigger precise, CPU-independent hardware events to timestamp audio samples. We demonstrate that timestamped I2S data stored in external SPI flash can be correlated across devices to extract TDoA measurements for localizing sound sources. Cross-correlation techniques allow us to estimate direction and position, with localization errors reduced from 4.17 m to 0.39 m through clock synchronization. This prototype provides a roadmap for embedding synchronized sensing and computation within fibers and smart textiles, with implications for on-body audio perception and distributed sensing in flexible electronics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From String to Structure: Graph Threading for Physical Assembly</title>
<link href="https://hdl.handle.net/1721.1/164646" rel="alternate"/>
<author>
<name>Lin, Rebecca Y. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164646</id>
<updated>2026-01-30T03:24:37Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From String to Structure: Graph Threading for Physical Assembly
Lin, Rebecca Y. E.
Many artistic and engineering applications—from beadwork to deployable structures—create intricate, and sometimes dynamic, designs by threading cord through tubular components. We model the underlying design challenge—threading tubes so that they achieve a target connectivity when the string is pulled taut—as graph threading. In this formulation, tubes and their junctions correspond to edges and vertices of a graph, and the goal is to find a closed walk that induces a connected graph at every vertex while avoiding U-turns. We study two optimization objectives motivated by fabrication and deployment: minimizing length to reduce material cost and assembly time, and minimizing turn to reduce frictional resistance during deployment. For the length metric, we present a polynomial-time algorithm via reduction to minimum-weight perfect matching, prove tight worst-case bounds on optimal threadings, and identify special cases with faster algorithms. For the turn metric, we characterize the complexity landscape, proving NP-hardness for graphs of maximum degree 4, tractability for degree 3, and giving exact and approximation algorithms for restricted variants, including rectangular grid graphs. Finally, we turn from theory to fabrication, proposing multi-configuration threading—a new approach for achieving multiple predetermined configurations within a single system. As in earlier chapters, framing the problem in graph-theoretical terms provides access to powerful problem-solving techniques, guiding both algorithmic analysis and physical design.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction</title>
<link href="https://hdl.handle.net/1721.1/164644" rel="alternate"/>
<author>
<name>Khoo, Ling Min Serena</name>
</author>
<id>https://hdl.handle.net/1721.1/164644</id>
<updated>2026-01-30T03:24:40Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction
Khoo, Ling Min Serena
Elucidating the structure of small molecules from complex mixtures using liquid chromatography tandem mass spectrometry (LC-MS/MS) is a challenging task with far-reaching implications in areas such as drug discovery, environmental science, and metabolism research. Yet despite its importance and significant efforts to develop machine learning (ML) models for elucidating the molecular structures of unknown compounds from LC-MS/MS spectra, the performance of current models remains insufficient for practical applications, warranting a deeper investigation into their limitations to advance ML-based molecular structure elucidation from LC-MS/MS and enable its utility in real-world settings. Here, we leverage data attribution methods to systematically identify and validate hypotheses about the sources of generalization challenges that hinder current model performance. Our goal is to automatically uncover insights into the failure modes of existing ML models for LC-MS/MS, thereby laying the foundation for developing more robust and accurate models.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Dynamic Objects in Scenes with Generative Particle Systems</title>
<link href="https://hdl.handle.net/1721.1/164643" rel="alternate"/>
<author>
<name>Li, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164643</id>
<updated>2026-01-30T03:24:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modeling Dynamic Objects in Scenes with Generative Particle Systems
Li, Eric
Humans readily interpret the motion of deformable and rigid bodies, even when encountering unfamiliar objects with minimal shape or texture cues. In such cases, motion serves as a critical signal for recognition and understanding. Inspired by this ability, we propose a generative model that represents 3D matter as small Gaussians (“particles”) drawn from clusters capturing groups of coherently moving matter. We develop an efficient inference algorithm based on parallelized block Gibbs sampling to recover stable particle motion and rigid groupings. Our model provides a tractable, object-centric generalization of as-rigid-as-possible (ARAP) regularizers used in motion tracking. To assess alignment with human perceptual judgments, we test our approach on random dot kinematograms—sparse motion displays in which dot trajectories convey latent object structure, often used to probe visual understanding of motion and grouping. In this setting, our approach captures human-like responses, including graded patterns of uncertainty across ambiguous conditions. Applied to naturalistic RGB videos, it infers dense particle representations that track object motion and deformation over time. These results demonstrate that our model enables persistent latent scene structure suitable for object-level reasoning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Arm Qubit for Faster, Higher Fidelity Readout and Gates</title>
<link href="https://hdl.handle.net/1721.1/164642" rel="alternate"/>
<author>
<name>Kline, Jeremy B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164642</id>
<updated>2026-01-30T03:24:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Arm Qubit for Faster, Higher Fidelity Readout and Gates
Kline, Jeremy B.
Currently, superconducting qubit processors are bottlenecked by errors during two-qubit gates, readout, and idle time. All three error contributions could be reduced if we improved the speed of operations (without introducing additional leakage errors) compared to the qubit lifetime. Readout and two-qubit gates are multimode interactions and therefore are limited by the coupling strength between the modes. In this thesis, we introduce a two-mode superconducting qubit which uses one mode to facilitate strong coupling to other modes of the quantum processor and one mode to store data with high coherence. Simulations show that this architecture could enable order-of-magnitude reductions in error during readout and two-qubit gates.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering Algorithms for Component Placement in Printed Circuit Boards</title>
<link href="https://hdl.handle.net/1721.1/164641" rel="alternate"/>
<author>
<name>Petrusenko, Vlada</name>
</author>
<id>https://hdl.handle.net/1721.1/164641</id>
<updated>2026-01-30T03:24:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Clustering Algorithms for Component Placement in Printed Circuit Boards
Petrusenko, Vlada
In 2024, approximately 12 billion printed circuit boards (PCBs) were manufactured globally [1], with the trend increasing gradually, and the majority of PCB layouts still being completed manually. The manual design process amounts to millions of hours of tedium that can be eased with automation. One of the biggest challenges is that complex printed circuit board designs typically have hundreds, sometimes thousands, of components and even more net connections between them. This makes both manual and automated placement very time-consuming. As a way to improve placement performance, in this thesis, we constructed a custom weighted undirected graph representation of components and nets for any board that encodes physical and electrical constraints. Additionally, we integrated the Louvain and Leiden clustering algorithms for component clustering in PCB placement. We also showed comparative metrics with the spectral clustering algorithm applied to unweighted graph representations, the prior state of this project, which has no knowledge of the electrical and physical constraints associated with PCB designs and would thus produce results requiring more manual correction. This new clustering approach generated more optimal clusterings, reduced average runtime by 51.05%, decreased estimated routing length by 7.72%, and improved the component association score by 12.8%.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis</title>
<link href="https://hdl.handle.net/1721.1/164602" rel="alternate"/>
<author>
<name>McGreivy, James C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164602</id>
<updated>2026-01-21T04:07:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis
McGreivy, James C.
Generative Large Language Models (LLMs) are a promising approach to structuring knowledge contained within otherwise unmanageable corpora of research literature produced by large-scale and long-running scientific collaborations. Within experimental particle physics, such structured knowledge bases could expedite methodological and editorial review. Complementarily, within the broader scientific community, generative LLM systems grounded in published work could make for reliable companions allowing non-experts to analyze open-access data. Techniques such as Retrieval Augmented Generation (RAG) rely on semantically matching localized text chunks, but struggle to maintain coherent context when relevant information spans multiple segments, leading to a fragmented representation devoid of global cross-document information. In this work I utilize the hierarchical organization of experimental physics articles to build a tree representation of the corpus, and present the SciTreeRAG system which leverages this structure with the aim of constructing contexts more focused and contextually rich than a standard RAG. Additionally, I develop methods for using LLMs to transform the unstructured corpus into a structured knowledge graph representation. I then implement SciGraphRAG, a retrieval system that leverages this knowledge graph to access global cross-document relationships eluding standard RAG, with the goal of encapsulating domain-specific connections and expertise. I demonstrate proof-of-concept implementations of both systems using the corpus of the LHCb experiment at CERN.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications</title>
<link href="https://hdl.handle.net/1721.1/164601" rel="alternate"/>
<author>
<name>Gower, Elizabeth Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/164601</id>
<updated>2026-01-21T04:07:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications
Gower, Elizabeth Ann
Anthropogenic activity has increased atmospheric carbon dioxide (CO₂) levels, disrupting the global carbon cycle and driving widespread environmental change. The ocean acts as a major sink. Accurate and scalable in situ monitoring of oceanic carbon chemistry is vital for understanding the impacts of climate change and informing marine carbon dioxide removal (mCDR) strategies. Many existing in situ instruments for marine applications are constrained by their size, cost, power requirements, or reliance on consumable reagents. Developing low-cost, compact, low-power, and accurate in situ sensors would significantly enhance the spatiotemporal resolution of oceanographic data and enable widespread monitoring of dissolved gases throughout the ocean. This, in turn, would deepen our understanding of how, where, and when changes are occurring within the marine carbon cycle. Two key variables essential for studying this cycle are the partial pressure of carbon dioxide (pCO₂) and dissolved inorganic carbon (DIC). This thesis presents the development of two sensors, one for in situ pCO₂ measurement and another for novel DIC quantification, both designed to be affordable, reliable, and scalable tools for advancing our understanding of ocean chemistry and the global carbon system. First, the development, calibration, and open-ocean deployment of a miniaturized Dissolved Multi-Gas Sensor (DMGS) that measures pCO₂ and partial pressure of oxygen (pO₂) is presented. The sensor was integrated into a custom-built surface drifter designed to entangle with Sargassum mats and send data autonomously. The drifter utilized commercial off-the-shelf (COTS) components and cost roughly $1000 to build. After lab testing, a drifter was deployed in the Great Atlantic Sargassum Belt (GASB) and collected data for 22 days. In addition to gas data, the drifter tracked temperature, light intensity, humidity, pressure, and location, sending measurements via an Iridium satellite. 
The resulting data captured dynamic changes in localized gas concentrations, temperature, and light levels that highlighted photosynthetic and respiratory activity within Sargassum patches. These drifters demonstrate the value of in situ data to investigate marine biogeochemical processes that contribute to the marine carbon cycle, especially in areas with high biologic activity. Next, this thesis presents the iterative development of a novel DIC sensor with potential for future in situ applications. Initial prototypes tested the feasibility of using a COTS CO₂ sensor in both static and flow-through configurations; however, sensor saturation issues prompted a shift to a pressure-based detection method. Multiple test setups were evaluated for pressure stability and sensor sensitivity, culminating in a bottle-based flow system that demonstrated the potential for reagent-minimized, pressure-based DIC quantification. With the final setup, a COTS pressure sensor that sat behind a gas-permeable membrane was found to repeatably and accurately quantify DIC from acidified seawater. This approach of quantifying DIC via pressure change is novel in the field of gas sensing and maintains a low-cost, accessible design. Together, the sensors developed in this thesis expand the toolkit for marine carbon monitoring and provide a foundation for affordable, distributed sensing networks. These technologies enable higher-resolution insights into ocean biogeochemistry and support critical monitoring, reporting, and verification (MRV) frameworks needed to evaluate the effectiveness of mCDR techniques. Continued refinement of these low-cost platforms could play a key role in understanding and mitigating anthropogenic impacts on marine systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View</title>
<link href="https://hdl.handle.net/1721.1/164600" rel="alternate"/>
<author>
<name>Firouzian, Fardean</name>
</author>
<id>https://hdl.handle.net/1721.1/164600</id>
<updated>2026-01-21T04:07:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View
Firouzian, Fardean
This thesis applies Reference Class Forecasting (RCF) to multifamily real estate underwriting as a means of countering optimism bias, strategic misrepresentation, and other distortions embedded in the traditional “inside view.” Adapted from its proven application in infrastructure and corporate capital budgeting, RCF anchors projections in the actual performance distributions of comparable assets rather than in deal-specific narratives. The research centers on the development of the “Comp Warehouse,” a structured repository of property-level financials organized by market, asset class, vintage, and unit scale. By benchmarking assumptions against statistically valid reference classes, the approach enforces empirical discipline and highlights opportunities for “operational alpha”—the marginal increase in net operating income (NOI) achieved when underperforming assets converge on median peer performance. A South Florida case study demonstrates the method’s utility in an acquisition context. Analysis of 48 assets across Melbourne, Miami, Fort Lauderdale, and West Palm Beach shows that while rent levels cluster tightly around market medians, operating expenses vary widely, producing large dispersion in realized NOI. Applying the framework to a 191-unit Class A property in Fort Lauderdale illustrates how RCF can ground underwriting assumptions by distinguishing between defensible revenue-driven growth strategies and less plausible expense-reduction projections proposed in a bidding scenario. Recognizing constraints of both scale and frequency, this thesis also explores artificial intelligence as a tool for automating the ingestion and standardization of operating statements and rent rolls. Properly deployed in a human-in-the-loop framework, AI can reduce data friction, expand sample sizes, and sharpen forecasting precision. 
The contribution of this thesis is twofold: it demonstrates the feasibility of applying RCF to the multifamily sector—an asset class whose relative standardization, liquidity, and data availability make it especially conducive to outside-view benchmarking—and it situates the methodology within a technology-native architecture designed to scale empirical discipline, enhance underwriting rigor, and systematically capture operational alpha.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications</title>
<link href="https://hdl.handle.net/1721.1/164599" rel="alternate"/>
<author>
<name>He, Kaiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/164599</id>
<updated>2026-01-21T04:07:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications
He, Kaiwen
Homomorphic secret sharing (HSS) is a powerful cryptographic primitive that enables efficient, low-communication secure computation without the use of fully homomorphic encryption. Public-key HSS is a well-known variant that supports inputs from multiple parties, but all parties must agree on a joint public key before any party can encode their inputs, requiring extra rounds of communication in applications. Recently, Couteau et al. (EUROCRYPT 2025) constructed multi-key HSS (MKHSS)—a new primitive which allows parties to encode their inputs under independent keys—under the DCR assumption. MKHSS assumes only a reusable common reference string, without the need for prior interactions between parties or a public-key infrastructure. In this paper, we construct and implement the first concretely-efficient MKHSS scheme under the same assumptions used by Couteau et al. Using an algorithmic insight that reduces the largest modulus in Couteau et al. from N⁴ to N², our optimized implementation can homomorphically multiply inputs in 5.0 milliseconds—while an implementation of Couteau et al. requires 224.6 milliseconds—thereby achieving a 45× speedup. A powerful application of MKHSS is to realize attribute-based non-interactive key exchange (ANIKE), which generalizes password-authenticated key exchange (PAKE) to arbitrary attribute policies. ANIKE is currently only known from MKHSS. We use our implementation to evaluate the first concretely-efficient ANIKE schemes for a range of practically useful policies. Using our implementation, two parties can perform a geolocation-based key exchange in 1.65 seconds and a fuzzy PAKE on an 8-word passphrase in 7.59 seconds for realistic parameters, on a single core. Compared to using Couteau et al., which requires 62.5 and 253 seconds, we achieve 38× and 33× speedups, respectively.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reciprocity and Normality in the Scattering Matrix of Disordered Media</title>
<link href="https://hdl.handle.net/1721.1/164598" rel="alternate"/>
<author>
<name>Bharadwaj, Shreyas K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164598</id>
<updated>2026-01-21T04:07:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reciprocity and Normality in the Scattering Matrix of Disordered Media
Bharadwaj, Shreyas K.
The scattering matrix formalism provides a practical characterization of wave transport in linear, source-free systems by relating a set of operationally defined input and output spatial channels. The matrix is structured as a block operator, with diagonal blocks encoding same-side reflection matrices (RMs) and off-diagonal blocks encoding transmission matrices (TMs) in opposing propagation directions. Under Helmholtz reciprocity, symmetry relations are imposed: RMs are symmetric, and forward and reverse TMs are mathematical transposes of each other. These relations were employed as constraints to correct system-induced aberrations in measured scattering matrices of complex optical media via a matrix-based gradient descent procedure. Resulting phase corrections corresponded closely with classical aberration modes without heuristic parameterizations, suggesting that these modes naturally arise to restore reciprocity-induced symmetry. Vectorial TMs were measured for single- and double-pass propagation through step-index multimode fibers (MMFs) and scattering samples, with corrected phase terms showing agreement across sample types. Furthermore, matrix normality was introduced as a descriptor of stable modal transport. Normal matrices admit unitary diagonalization, reflecting orthogonal eigenchannels and spectrally coherent propagation. Near-normal behavior was observed in fiber TMs, while RMs of scattering slabs remained strongly non-normal, as quantified by a normalized Henrici departure. Sufficient conditions for normality were identified in terms of the system Green’s function and its bi-compression onto the measurement basis. A complementary dispersion experiment investigated two regimes: nearly-normal MMFs, where the Wigner–Smith time-delay operator was jointly diagonalizable and supported accurate first-order spectral models; and mechanically compressed fibers, where loss of normality produced noncommuting operators and collapse of model fidelity. 
These results suggest that normality captures well-behaved modal transport, underpinning the validity of parametric models and other operator-based analyses of disordered media. Together, reciprocity and normality impose complementary constraints on wave transport: reciprocity governs global symmetry, while normality captures internal coherence of modal propagation. Relevance is noted for matrix-based imaging, inverse scattering theory, and non-Hermitian wave physics, where symmetry and modal stability remain central.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mesh Differentiable Rendering for Real-World Scenes</title>
<link href="https://hdl.handle.net/1721.1/164597" rel="alternate"/>
<author>
<name>Charatan, David</name>
</author>
<id>https://hdl.handle.net/1721.1/164597</id>
<updated>2026-01-21T04:07:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Mesh Differentiable Rendering for Real-World Scenes
Charatan, David
Differentiable rendering has established itself as an effective tool for 3D reconstruction and novel view synthesis. Most state-of-the-art differentiable rendering methods use purpose-built renderers to optimize specialized, nonstandard 3D representations. However, most downstream applications of differentiable rendering rely on 3D meshes, which are near-universally supported due to their suitability for a wide range of rendering, simulation, and 3D modeling workflows. While prior methods have explored using 3D meshes directly within gradient-based optimization, they have been limited to object-centric scenes and cannot reconstruct real-world, unbounded scenes. This work addresses this shortcoming via a differentiable rendering formulation that combines an off-the-shelf, non-differentiable triangle rasterizer with a 3D representation that consists of nested mesh shells. During every forward pass, these shells are extracted from an underlying signed distance field. Then, the shells are independently rasterized and the resulting images are alpha-composited using opacities derived from the shells' per-vertex signed distance values. Notably, the shells' vertex positions are updated only via the underlying signed distance field, not via backpropagation through the rasterizer itself. This makes our method compatible with off-the-shelf, non-differentiable triangle rasterizers. To the best of our knowledge, our method is the first differentiable mesh rendering method that scales to unbounded, real-world 3D scenes, where it produces high-quality novel view synthesis results whose quality approaches that of state-of-the-art, non-mesh-based methods. Our method's performance is also competitive with state-of-the-art surface rendering methods on object-centric scenes. Ultimately, our method suggests that it may be possible to solve the differentiable rendering problem using tools from the conventional graphics toolbox rather than relying on specialized renderers.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning</title>
<link href="https://hdl.handle.net/1721.1/164596" rel="alternate"/>
<author>
<name>Duguey, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164596</id>
<updated>2026-01-21T04:07:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning
Duguey, Gabriel
As we plan tomorrow’s electricity system, we face fundamental questions: where should new power plants go, which technologies deserve investment, and how much transmission is enough? These decisions are the domain of Capacity Expansion Planning (CEP), a class of optimization models that guide long-term infrastructure investments in power systems. To be realistic, CEP models must capture fine-grained spatial and temporal variations because demand varies by city and climate, while wind and solar output depend on weather patterns that shift hour by hour and location by location. But representing the system with thousands of time steps and hundreds of nodes makes the optimization problem computationally too large to solve.
This thesis addresses the core question: how can spatial and temporal aggregation in CEP models be designed to preserve planning-relevant patterns that drive investment decisions? Existing approaches often treat aggregation as a neutral preprocessing step, relying on heuristics like political boundaries or geographic proximity. In contrast, we propose a task-aware pipeline that treats aggregation as an integral modeling decision, explicitly aligned with planning objectives.
The approach builds a composite similarity metric that blends diverse planning-relevant signals, including, but not limited to, duration curves, ramping behavior, and spatial correlation, and uses k-medoids clustering to define spatial zones. Temporal aggregation is then applied to daily system-wide profiles, selecting representative days that maintain cross-zonal interactions. The result is a reduced spatio-temporal dataset fed into a CEP model. The resulting investment decisions are re-evaluated at full resolution to evaluate their feasibility and real cost.
Experiments on a New England case study show the pipeline consistently outperforms common baselines like political boundaries, geographic proximity, or capacity factor statistics. Among 50 feature weightings, the best design reduces system cost by 13% compared to heuristics. Correlation-based features drive the best results, while raw amplitude and geographic location often degrade performance when used alone.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development</title>
<link href="https://hdl.handle.net/1721.1/164595" rel="alternate"/>
<author>
<name>McDonough, Kate</name>
</author>
<id>https://hdl.handle.net/1721.1/164595</id>
<updated>2026-01-21T04:07:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development
McDonough, Kate
Duddington Farm is a 312-acre site north of Baltimore, Maryland. A stream restoration project was completed at the location nearly a decade ago in concert with the State of Maryland, the Manor Conservancy, Ecotone, and landowners Harry and Tara McDonough. The project was conducted with some success; however, due to a lack of State oversight and long-term management provisions, the ecology has since declined. The following proposal outlines a new model for long-term land restoration and conservation, whereby land conservation and restoration are financed not solely through short-term grants and fragile easements, but through the thoughtful use of modest real estate interventions. A small cluster of homes is developed on one portion of the site. The act increases the value of the land, generates equity, and establishes a permanent conservation fund. The design protects habitat and invites people into a deeper relationship with the natural world. The plan offers scalability in taking the land value capture and applying it to future land conservation projects, compounding returns and projecting a model to preserve hundreds of thousands of acres of critical land across the United States. This model highlights Indigenous traditional ecological knowledge (TEK) and practices of engaging with the land, highlighting a deeper understanding of how humans and nature can coexist in mutually healthy ways. The model is designed at a time when watersheds, national parks, and old-growth forests face the greatest threat to global ecology. Duddington Farm is used as a retrospective case, but the broader goal is to create a regenerative framework for conservation-based development across critical watershed regions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Finite Elements</title>
<link href="https://hdl.handle.net/1721.1/164593" rel="alternate"/>
<author>
<name>Collin, Teodoro Fields</name>
</author>
<id>https://hdl.handle.net/1721.1/164593</id>
<updated>2026-01-21T04:07:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Automated Finite Elements
Collin, Teodoro Fields
Finite element methods (FEMs) are a powerful and ubiquitous tool for solving engineering problems. Experimenting with different finite elements can improve the quality and efficiency of solutions. Furthermore, in some cases, the wrong (but nonetheless most common) choice of finite element will produce solutions which converge to the wrong answer regardless of mesh resolution. However, in practice, the choice of finite element is not explored due to the complexity of re-deriving and re-implementing finite element methods. Trying a new finite element is challenging because practitioners must manually deduce the formulas needed to use these elements and implement them within the context of a potentially complex system. We address this problem by introducing ElementForge, a finite element system that is parametric over the literate mathematical specification of a finite element in a domain-specific language (DSL). The ElementForge compiler reasons about tensor spaces, tensors, and tensor bases from first principles to derive implementations of finite elements. The ElementForge compiler is able to automatically derive implementations of finite elements previously only derived by hand. Further, ElementForge minimally couples several key mathematical concepts, mainly tensor fields, mesh topologies, sparse tensors, and assembled finite element operators, to produce a complete finite element system that is parametric over the choice of element. Consequently, the elements derived by the compiler can be applied parametrically to new meshes, PDEs, and boundary conditions. We evaluate our system by implementing several simulations with different finite elements, demonstrating that our system can explore tradeoffs in generality, accuracy, speed, and representational complexity. For example, we are able to implement the Morley, Bell, Argyris, and Hermite-like elements with fewer than 50 lines of code and use them all in a single simulation.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture</title>
<link href="https://hdl.handle.net/1721.1/164588" rel="alternate"/>
<author>
<name>Cao, Biru</name>
</author>
<id>https://hdl.handle.net/1721.1/164588</id>
<updated>2026-01-21T04:07:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture
Cao, Biru
This thesis presents LumiModeling, a real-time visualization tool based on Gaussian Splatting (GS) that simulates the dynamic interplay between materiality and lighting in architectural environments. While conventional design workflows rely on geometric modeling and photorealistic rendering, they often abstract complex material behaviors and fall short in capturing light-material interactions. In contrast, GS enables the reconstruction of high-fidelity 3D models from 2D image sets, representing view-dependent effects such as reflection, transparency, and surface roughness. A comparative analysis using real-world data from the MIT Stata Center and the Met Warehouse demonstrates GS’s advantages over mesh-based photogrammetry, particularly in rendering reflective and transparent materials. This work extends existing GS capabilities by implementing a relightable pipeline based on the existing model Relightable3DGaussian (Gao et al., 2023), in which each Gaussian point is augmented with physical parameters, including BRDF, surface normals, and incident lighting. The Stata Center dataset is used to test the relighting of GS. A user study involving architecture professionals reveals that perceptual focus shifts from geometry to materiality and lighting as visual realism increases. The findings highlight the potential of relightable GS in architectural visualization and anticipate its integration into future design workflows.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation</title>
<link href="https://hdl.handle.net/1721.1/164587" rel="alternate"/>
<author>
<name>Kupershmidt, Adi</name>
</author>
<id>https://hdl.handle.net/1721.1/164587</id>
<updated>2026-01-21T04:07:52Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation
Kupershmidt, Adi
Urban planners face significant challenges in systematically and quantitatively evaluating past planning practices, stemming, among other reasons, from the scarcity of accessible structured data. The period from a plan’s initiation to implementation can span generations; recorded data from the planning processes are often deemed obsolete for addressing present concerns by the time of post-occupancy evaluation. This research examines whether, and under what conditions, generative AI can help bridge this gap, highlighting both challenges and opportunities, by introducing a system that responsively transforms qualitative zoning data into structured, queryable formats to support the quantitative analysis of planning practices.
A database of ~150 approved semi-structured urban plans under the Tel Aviv municipality’s local jurisdiction supports this project's case study. The system relies on proprietary LLMs (ChatGPT, Claude), streamlining a natural language query input through three agentic tasks: (1) RAG (Retrieval Augmented Generation) based querying, generating free-text answers from all plans, (2) structuring the answers into valid JSON, and (3) visualizing the structured data. Key findings indicate 85.45% precision for the system, as evaluated through an end-to-end assessment of 11 representative queries, each validated against 40 manually labeled plans. The tool provides actionable insights, enabling queries such as trends in sheltered bicycle parking approvals or the status of affordable housing planning over the past decade.
This research underlines the significance of flexibly structuring non- and semi-structured data for urban science. It addresses the growing gap between static legacy data collection and real-time policymaking, democratizing access to planning information and fostering informed decision-making practices. Integrating cutting-edge AI-driven tools contributes to the current discourse on AI applications for city management and planning by providing a replicable model for more cities and planning datasets to build upon and improve.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Responses to Congestion Pricing in New York&#13;
City: Mode Shift, Preference Change, and Effect Persistence</title>
<link href="https://hdl.handle.net/1721.1/164581" rel="alternate"/>
<author>
<name>Shen, ChenAn</name>
</author>
<id>https://hdl.handle.net/1721.1/164581</id>
<updated>2026-01-21T04:07:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Behavioral Responses to Congestion Pricing in New York&#13;
City: Mode Shift, Preference Change, and Effect Persistence
Shen, ChenAn
This thesis examines the behavioral impacts of New York City’s congestion pricing policy on weekday peak-hour travel into the pricing zone. Using a two-stage Bayesian Multinomial Logit framework applied to monthly aggregate mobility data, the study disentangles underlying preference shifts from observed mode share changes in response to the toll. Stage 1 estimates population-level travel sensitivities to cost and time, while Stage 2 uses a hierarchical structure to capture heterogeneity across demographic segments defined by income, age, and gender. The analysis spans January–June 2025 and compares results to the same months in 2024 as a counterfactual scenario without pricing. Findings show that while the policy generated a sustained mode shift away from private automobiles toward public transit, preference adaptation varied by demographic group and evolved over time. Some cohorts reinforced the intended policy effects through reduced transit travel time sensitivity, while others exhibited partial reversal as cost sensitivity shifted. These dynamic patterns underscore the importance of evaluating both immediate and evolving behavioral responses when designing congestion pricing strategies and highlight the value of aggregate behavioral modeling for timely, data-driven policy assessment.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model</title>
<link href="https://hdl.handle.net/1721.1/164579" rel="alternate"/>
<author>
<name>Gamble IV, James Monroe</name>
</author>
<id>https://hdl.handle.net/1721.1/164579</id>
<updated>2026-01-21T04:07:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model
Gamble IV, James Monroe
This paper examines how asset limits in means-tested welfare programs shape household saving behavior. I exploit cross-state variation in Temporary Assistance for Needy Families (TANF) asset limits by linking these limits to individual-level data from the Survey of Income and Program Participation (SIPP) and estimating ordinary least squares (OLS) regressions with state and year fixed effects. I find that a $1 increase in the liquid asset limit corresponds to a $0.75 decrease in non-housing wealth among single mothers without a high school diploma. This suggests that less stringent asset tests reduce incentives to save, consistent with models in which more generous public insurance lowers the need for precautionary saving.&#13;
&#13;
To interpret these findings, I develop a dynamic life-cycle model of saving under income and medical expense risk, calibrated to key moments from the Hubbard, Skinner, and Zeldes framework. The model embeds Medicaid-style transfer rules and a guaranteed consumption floor. Simulations indicate that a $7,000 consumption floor can reduce median assets by up to 20% among low-education households, reflecting a decrease in self-insurance as public support increases. I then extend the model to include Achieving a Better Life Experience (ABLE) accounts, which are tax-advantaged savings vehicles, exempt from means testing, for individuals with disabilities. Simulations indicate that ABLE eligibility increases early-life consumption by approximately $10,000 and reduces retirement savings, with account holders shifting more spending into their working years. Together, these results yield a direct mapping from policy levers, including asset-limit generosity, earnings disregards, childcare subsidies, and ABLE exemption rules, to predicted shifts in median household assets. This offers policymakers a practical tool to balance public insurance and private precautionary savings.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease</title>
<link href="https://hdl.handle.net/1721.1/164574" rel="alternate"/>
<author>
<name>Burgos Robles, Emanuel Felipe</name>
</author>
<id>https://hdl.handle.net/1721.1/164574</id>
<updated>2026-01-21T04:08:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease
Burgos Robles, Emanuel Felipe
The gut microbiome plays a critical role in inflammatory bowel diseases (IBDs), yet current analyses treat bacterial species as functionally uniform, ignoring extensive strain-level diversity that may drive disease mechanisms. Here, we developed a strain-resolved metatranscriptomics framework to investigate how transcriptional activity varies across bacterial lineages and relates to IBD pathogenesis. Using paired metagenomics and metatranscriptomics data from 1,067 fecal samples (103 IBD and 335 non-IBD patients), we first constructed phylogenetic trees for over 250 bacterial species using the single nucleotide variants within essential housekeeping genes, enabling the identification of bacterial strains. Next, we devised a statistical approach to assign mRNA reads to these strains, leveraging the natural genetic variation that is present across them. Our analysis revealed that closely related bacterial strains exhibit dramatically different transcriptional programs, with some strains enriched in IBD patients showing upregulation of genes involved in stress response, sugar metabolism pathways, and antimicrobial resistance. Notably, we identified transcriptionally active but genomically low-abundance taxa, highlighting the importance of measuring the transcriptional activities of strains beyond species composition. Lineage-aware differential expression analysis uncovered strain-specific adaptations to inflammatory environments. This strain-resolved approach provides a powerful framework for understanding microbial functional heterogeneity and identifying specific bacterial lineages that could contribute to disease pathogenesis, potentially guiding more targeted microbiome-based therapeutic interventions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment</title>
<link href="https://hdl.handle.net/1721.1/164571" rel="alternate"/>
<author>
<name>Xu, Bangjie</name>
</author>
<id>https://hdl.handle.net/1721.1/164571</id>
<updated>2026-01-21T04:08:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment
Xu, Bangjie
This thesis presents an innovative methodology using Large Language Model-based methods to extract and quantify housing regulations from municipal zoning codes, making possible the most comprehensive examination of regulatory costs at the municipal level across California to date. A multi-staged extraction framework is devised that delivers 85-95% accuracy in the identification and standardization of complex regulatory requirements from legal documents. Applying this methodology to over twenty California cities over the period 2015-2025, it is estimated that regulatory constraints raise the cost of developing a housing unit by roughly 5% to 10% ($50,000 to $100,000+) per housing unit, with the most acute constraints in the state’s coastal metros. This method is also used to show that regulatory costs reduce housing supply elasticity from 1.24 in low-regulation jurisdictions to 0.08 in high-regulation areas. The LLM-based framework allows us to conduct analyses at an unprecedented scale and granularity and to reveal, for example, that the relaxation of regulation through streamlining policies like the Los Angeles Transit Oriented Communities program boosts housing production in eligible zoned areas by 43%. This study makes significant contributions to the restructuring of California’s housing regulation system in response to the affordability crisis, and its methodology presents a replicable tool for regulatory analysis in other policy domains.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston</title>
<link href="https://hdl.handle.net/1721.1/164564" rel="alternate"/>
<author>
<name>Murphy, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164564</id>
<updated>2026-01-21T04:08:06Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston
Murphy, Ryan
Boston is in the midst of a severe housing crisis, driven by decades of underproduction, rising construction costs, restrictive zoning, and an inelastic real estate market that has resulted in persistent affordability challenges. This thesis explores the untapped potential of city-owned land as a powerful tool to increase housing supply and affordability in Boston. Using Boston’s 2022 Citywide Land Audit and detailed development assumptions, the analysis estimates that between 19,000 and 31,000 new housing units could be constructed across city-controlled parcels, including between 3,200 and 6,100 affordable units under the current Inclusionary Development Policy. The research draws on case studies from peer cities such as Chicago and Atlanta where municipal land has been successfully leveraged through transparent disposition processes, fast-tracked entitlements, and flexible affordability models. It argues for a policy shift in Boston toward a more streamlined, market-aware, and scalable land release strategy that prioritizes speed, cross-subsidization, and financial feasibility. Key recommendations include expanding the Welcome Home, Boston program to include mixed-income and rental housing, implementing predictable RFP cycles, offering tax abatements, and expediting the entitlement process.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Zipping for Transformable and Dynamic Systems</title>
<link href="https://hdl.handle.net/1721.1/164563" rel="alternate"/>
<author>
<name>Hagemann, Niklas</name>
</author>
<id>https://hdl.handle.net/1721.1/164563</id>
<updated>2026-01-21T04:08:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modular Zipping for Transformable and Dynamic Systems
Hagemann, Niklas
There is a need for products, machines and environments that can change shape, transform and evolve according to their use. This thesis proposes the design of a simple, modular actuator based on reversible folding and interlocking (zipping) of flexible 3D printed strips. The proposed zipper design allows for continuous control between a compact and a fully deployed state. The modular actuators can be integrated into a variety of systems to enable compact, shape- and stiffness-changing structures, robots and other devices. Designs are presented for single- and double-zipper modules using the same basic zipper design. The modules can be used as modular components of compact robotic systems with the ability to expand and contract according to their environment, or as adjustable structural components to create deployable, shape- and stiffness-changing objects. The zipper design points the way towards simplified mono-material components that embed transformation and reversibility into everyday devices, products and spaces, enabling objects that are as easy to transform, reconfigure and reverse as they are to manufacture.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embodied Representation of Time in Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/164562" rel="alternate"/>
<author>
<name>Kim, Suwan</name>
</author>
<id>https://hdl.handle.net/1721.1/164562</id>
<updated>2026-01-21T04:08:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Embodied Representation of Time in Virtual Reality
Kim, Suwan
Recent advancements in 3D graphics and AI-assisted generative techniques have accelerated the creation of realistic scenes for immersive technologies, including virtual reality, yet most systems continue to encode time as a linear parameter, relying on timeline-based playback. Mesh-based representations are typically constrained by fixed topologies and rely on predefined animations, which limit their capacity to encode temporal change as a spatial or perceptual phenomenon. In reality, human experience of time is embodied and dynamic, perceived through interaction and memory. Existing digital systems fail to capture this dimension, reducing time to a passive parameter. This thesis proposes a framework for representing time as an embodied and spatial dimension within virtual reality by embedding it directly into the geometry and interaction logic of point cloud data. The system consists of three parts: (1) processing 2D images into layered volumetric point clouds to enable structural fluidity and temporally responsive spatial form; (2) enabling perceptual and spatial modulation in response to user distance and contact, with color influencing the character of change and opacity shaping its perceptual reveal at both global and local scales; and (3) enabling real-time visualization of the modulated point cloud through a custom pipeline optimized for mobile virtual reality. By embedding temporal dynamics directly into geometry and interaction logic, this thesis contributes a novel representational approach to spatiotemporal modeling in immersive systems. In doing so, it creates new opportunities for architectural visualization, interactive simulations, game design, and reimagining how we perceive and construct digital spaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patent Visibility and the Diffusion of Trapped Knowledge:&#13;
Evidence from US Grants</title>
<link href="https://hdl.handle.net/1721.1/164560" rel="alternate"/>
<author>
<name>Yao, Randol H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164560</id>
<updated>2026-01-21T04:08:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Patent Visibility and the Diffusion of Trapped Knowledge:&#13;
Evidence from US Grants
Yao, Randol H.
Valuable knowledge developed in one part of the world may remain “trapped” locally due to frictions in how knowledge is recognized and shared globally. This paper examines how granting US patents to foreign-origin inventions—by elevating their visibility and credibility—untraps the knowledge and facilitates global diffusion. Using examiner leniency as an instrument, complemented by a difference-in-differences design, I find that US grants of home country patents significantly increase both the likelihood and intensity of forward citations, including marked increases from third countries. A novel measure of “trappedness” reveals that knowledge from historically more trapped countries and sectors sees larger diffusion benefits after the US grants. These findings highlight the central role of the US as a platform of global knowledge recognition and diffusion, particularly in turning overlooked ideas into globally relevant innovations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of the East China Sea Continental Shelf&#13;
Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon</title>
<link href="https://hdl.handle.net/1721.1/164559" rel="alternate"/>
<author>
<name>Rafferty, Lieutenant Commander Keefe</name>
</author>
<id>https://hdl.handle.net/1721.1/164559</id>
<updated>2026-01-21T04:07:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterization of the East China Sea Continental Shelf&#13;
Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon
Rafferty, Lieutenant Commander Keefe
Submarine canyons have a proven and direct influence on continental shelf circulation and flow dynamics, especially in relation to western boundary currents. There are two key circulation features northeast of Taiwan on the East China Sea continental shelf: (1) the cold dome, a cyclonic feature that appears primarily in summer and is associated with upwelling, and (2) Kuroshio intrusions onto the continental shelf in the vicinity of Mien-Hua Canyon. This paper is a descriptive physical oceanography study characterizing the circulation patterns northeast of Taiwan surrounding Mien-Hua Canyon and closely correlating these patterns with the migration and variability of the Kuroshio and its intrusions onto the southern East China Sea continental shelf, which lead to the formation of the cold dome. The Institute of Oceanography at the National Taiwan University and WHOI executed a joint international field survey at Mien-Hua Canyon aiming to improve the understanding of canyon flow dynamics between the East China Sea continental shelf northeast of Taiwan and the Kuroshio as the western boundary current of the North Pacific Gyre. This joint oceanographic expedition expands on previous joint US/Taiwan physical oceanographic and ocean acoustic studies in the China Seas dating back to ASIAEX in the South China Sea during 2000-2001 and QPE in the East China Sea during 2008-2009. The strengthening and weakening of Kuroshio transport and intensity northeast of Taiwan are closely correlated with the timescales of mesoscale westward-propagating eddies arriving at the East Taiwan Channel. When a canyon has a Rossby number of ~1, or a Rossby radius equivalent to the width of the canyon, in a region of left-bounded flow, induced cyclonic flow will experience an upwelling regime within the canyon system, with dominant upwelling located at the downstream canyon rim and vertically constrained by the Rossby height.
Observational analysis of canyon bottom-moored ADCPs and vertical temperature arrays supports previous theory on submarine canyon dynamics on a continental shelf. Satellite sea surface temperature and absolute dynamic topography observations render the formation of a cold dome northeast of Taiwan coincident with this joint oceanographic survey.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities</title>
<link href="https://hdl.handle.net/1721.1/164558" rel="alternate"/>
<author>
<name>Roh, Soohyun</name>
</author>
<id>https://hdl.handle.net/1721.1/164558</id>
<updated>2026-01-21T04:08:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities
Roh, Soohyun
Pay differences between organizations are a key source of wage inequality. I propose a novel account of these differences by starting from the consumers that these businesses serve. Firms that serve high-income consumers specialize jobs into higher-paying and higher-skilled positions focused on quality, while those that serve lower-income consumers emphasize cost minimization by requiring workers to perform a wider range of general tasks. Matching consumer foot traffic data and establishment-level wage records, I find that establishments serving higher-income consumers pay their workers more. This effect holds comparing among establishments in the same neighborhoods and industries. Longitudinally, establishments increase wages when they shift toward higher-income customers. Analysis of online job postings further reveals that jobs at higher-income-serving firms involve a narrower set of tasks that command higher market value. These findings show how consumer markets shape firms’ internal job design and contribute to pay inequality across organizations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.</title>
<link href="https://hdl.handle.net/1721.1/164557" rel="alternate"/>
<author>
<name>Mulcahy, Robby L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164557</id>
<updated>2026-01-21T04:07:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.
Mulcahy, Robby L.
The United States federal government is the largest property owner in the country, with more than 370 million square feet of real estate under its control. Much of this portfolio is outdated, underutilized, and located in the urban cores of American cities. Nowhere is this more evident—or more consequential—than in Washington, D.C., where the federal government controls approximately 27% of the office market. As federal agencies adopt hybrid work models, and as the operational needs of government evolve, the existing real estate footprint has become increasingly inefficient, expensive, and misaligned with civic and market realities. This thesis investigates the opportunity to rethink federal land ownership and management as a catalyst for urban regeneration, civic stewardship, and housing production.&#13;
&#13;
Using the James V. Forrestal Building as a focal case study, the research examines the historical, policy, and spatial dynamics that have led to the current moment of reckoning. Located on Independence Avenue SW, straddling 10th Street between the National Mall and the Wharf, Forrestal is emblematic of the postwar federal design ethos: monumental, inward-facing, and hostile to street life. Once a symbol of bureaucratic permanence, the building now stands as a physical and symbolic barrier to urban connectivity and civic vitality. The case of Forrestal is used to explore broader questions: How can the federal government dispose of surplus property more effectively? What policy tools exist—or are needed—to unlock value and enable redevelopment? And what role should cities play in shaping the outcomes of federal land disposition?&#13;
&#13;
The thesis employs a mixed-methods approach that includes policy analysis, stakeholder interviews, precedent case studies, and spatial analysis of Southwest D.C. The work identifies a range of obstacles to effective disposition, including Title V of the McKinney-Vento Homeless Assistance Act, opaque OMB budget scoring rules, jurisdictional fragmentation, and the absence of a coordinating authority across federal agencies. It also identifies key lessons from successful projects such as The Yards, Walter Reed, and the Volpe Center, where thoughtful structuring and strong federal-local partnerships enabled transformative redevelopment of surplus land.&#13;
&#13;
The thesis concludes with ten detailed recommendations for reform, including reauthorization of the Federal Assets Sale and Transfer Act (FASTA), modernization of Title V and OMB scoring, the creation of Federal Redevelopment Zones, and the prioritization of housing, civic infrastructure, and design quality in disposition strategy. It argues that the federal government must shift from a passive landlord to an active steward of public land—one that collaborates with cities, integrates public benefit, and reflects democratic values through the built environment.&#13;
&#13;
In this moment of shifting federal needs, declining office demand, and urban transformation, the question is not whether federal real estate reform is needed—it is whether we will seize the opportunity. The fate of buildings like Forrestal will shape not only the skyline of Washington, D.C., but also the federal government’s legacy in America’s cities for generations to come.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyst Incentives</title>
<link href="https://hdl.handle.net/1721.1/164556" rel="alternate"/>
<author>
<name>Green, Brice</name>
</author>
<id>https://hdl.handle.net/1721.1/164556</id>
<updated>2026-01-21T04:08:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Analyst Incentives
Green, Brice
Analyst forecasts have been shown to reflect substantial behavioral biases and predict a number of macroeconomic phenomena. While we typically treat reported forecasts as statistical expectations, under uncertainty the reported point estimate will be sensitive to the payoff structure facing the forecaster. Using data on careers from LinkedIn, I describe the incentive structures faced by analysts, shedding light on the extent to which pay and career success are tied to performance. Further, I extend a causal estimator to identify credible counterfactual forecasts and provide tentative causal evidence of the relationship between forecast errors and promotions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Pena is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?</title>
<link href="https://hdl.handle.net/1721.1/164555" rel="alternate"/>
<author>
<name>Chomik-Morales, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/164555</id>
<updated>2026-01-21T04:07:57Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Pena is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?
Chomik-Morales, Jessica
This long-term narrative investigates the life and work of Dr. Eugenio Vargas-Peña, a neuropsychiatrist in Asunción, Paraguay who built a fully functional lab in his countryside home. Vargas-Peña conducts brain research independently, guided by decades of self-study, clinical practice, and an unwavering belief in the value of curiosity-driven inquiry. The piece interweaves historical context, character study, and personal narrative, using the author's own background in neuroscience and science communication to frame an inquiry into legitimacy, recognition, and alternative pathways in science. It asks: What defines a scientist today? Who gets to decide which ideas are taken seriously? And what are the consequences, creative or catastrophic, of working outside institutional boundaries? Through the lens of one man's eccentric yet earnest intellectual journey, this thesis invites broader reflection on the pressures shaping contemporary research and the enduring romance of unorthodox scholarship.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution</title>
<link href="https://hdl.handle.net/1721.1/164508" rel="alternate"/>
<author>
<name>Elsabbagh, Fares</name>
</author>
<id>https://hdl.handle.net/1721.1/164508</id>
<updated>2026-01-13T04:08:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution
Elsabbagh, Fares
Fast simulation of digital circuits is crucial to build modern chips. Current processors and SoCs integrate hundreds of complex components, including cores, accelerators, and memory hierarchies. Simulating these systems is necessary to verify correctness and explore the design space. Simulation can happen at different levels of abstraction. In this work we focus on Register-Transfer-Level (RTL) simulation. While RTL simulators are frequently used in development due to their quick compilation times, their runtime performance is slow. This is because as the designs are scaled up, multicore communication and scheduling overheads limit performance and scalability.&#13;
&#13;
We present ASH, a parallel architecture tailored to RTL simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. ASH hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs that represent different types of architectures. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task Scheduling Techniques to Accelerate RTL Simulation</title>
<link href="https://hdl.handle.net/1721.1/164507" rel="alternate"/>
<author>
<name>Sheikhha, Shabnam</name>
</author>
<id>https://hdl.handle.net/1721.1/164507</id>
<updated>2026-01-13T04:08:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Task Scheduling Techniques to Accelerate RTL Simulation
Sheikhha, Shabnam
Fast simulation of digital circuits is crucial to build modern chips. Slow simulation lengthens chip design time and makes bugs more frequent. While simulation can happen at different levels of abstraction, Register-Transfer-Level (RTL) simulation is the usual bottleneck in chip design, as it is needed for ongoing debugging and evaluation. Current simulators scale poorly across CPU cores, because they are unable to exploit the fine-grained parallelism inherent in simulation workloads.&#13;
&#13;
We present ASH, a parallel architecture tailored to simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Dataflow execution exposes abundant parallelism, as each task can run as soon as its inputs are available. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. Selective execution introduces dynamic data dependencies, since skipped tasks do not communicate data. ASH employs speculative execution to handle these dependencies. ASH’s hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware. The key compiler techniques include a novel partitioning for minimizing data communication while maintaining load balance, and a strategic coarsening mechanism to reduce the overheads of fine-grained tasks.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility</title>
<link href="https://hdl.handle.net/1721.1/164506" rel="alternate"/>
<author>
<name>Baum, Amelia Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/164506</id>
<updated>2026-01-13T04:08:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility
Baum, Amelia Rose
Public transit agencies face significant and growing challenges related to workforce shortages, absenteeism, and employee retention, which threaten service reliability. Reports found that 90% of U.S. transit agencies are experiencing a workforce shortage, with 84% claiming that the shortage affects their ability to provide scheduled service. Operator absence is a significant contributor to missed work at transit agencies nationwide and has, in many cases, delayed the full reinstatement of service following the COVID-19 pandemic. The quality of bus operators' work is significantly impacted by inflexible crew scheduling constraints. However, most studies focus on pay, benefits, and infrastructure, neglecting the importance of scheduling. This thesis aims to fill this gap by examining the potential for crew scheduling improvements to enhance the quality of life for bus operators through a three-part case study at the Chicago Transit Authority. Part 1 analyzes the historical work preferences of CTA bus operators, providing actionable insights for scheduling improvements. Part 2 presents a high-fidelity proof of concept in HASTUS, using block schedules (10-hour-a-day runs intended to be worked by one operator 4 days a week) and rostering to reduce negative work traits and increase consecutive and weekend days off for most operators, while maintaining schedules for the top 20% of senior operators. Part 3 evaluates the new 10-hour, 4-day-per-week packaged schedules via an LLM-based paired alternatives survey of operators at one CTA garage, measuring the desirability of the proof of concept and collecting qualitative feedback. Overall, the new schedules substantially improve the quality of work for operators by guaranteeing at least one weekend day off and at least two consecutive days off, and by increasing day-to-day schedule consistency and overnight rest time, while maintaining constant vehicle requirements and total pay hours. 
The survey results show that 72% of operators at the 74th Street garage support the new schedule paradigm, demonstrating strong support for its potential adoption and encouraging future exploration of a block schedule hybrid rostering paradigm at the CTA and other transit agencies.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sampling Methods for Fast and Versatile GNN Training</title>
<link href="https://hdl.handle.net/1721.1/164495" rel="alternate"/>
<author>
<name>Alkhatib, Obada</name>
</author>
<id>https://hdl.handle.net/1721.1/164495</id>
<updated>2026-01-13T04:08:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Sampling Methods for Fast and Versatile GNN Training
Alkhatib, Obada
Graph neural networks (GNNs) have become a commonly used class of machine learning models that achieve state-of-the-art performance in various applications. A prevalent and effective approach for applying GNNs on large datasets involves mini-batch training with sampled neighborhoods. Numerous sampling algorithms have emerged, some tailored for specific GNN applications. In this thesis, I explore ways to improve the efficiency and expressivity of existing and emerging sampling schemes. &#13;
&#13;
First, I explore system solutions to facilitate the development of fast implementations of different sampling methods. I introduce FlexSample, a system for efficiently incorporating custom sampling algorithms into GNN training. FlexSample leverages the types of performance optimizations found in SALIENT, a state-of-the-art system for fast training of GNNs with node-wise sampling. In experiments with 4 GNN models that use layer-wise and subgraph sampling, FlexSample achieves up to 1.3× speed-up for end-to-end training over PyTorch Geometric with the same sampling code. Furthermore, FlexSample extends SALIENT with highly optimized C++ implementations of FastGCN and LADIES layer-wise sampling, which achieve 2×–5× speed-up over their respective Python implementations.&#13;
&#13;
Second, I introduce a novel framework for learning neighbor sampling distributions as part of GNN training. Key components of this framework, which I name PertinenceSample, are: (i) a differentiable approximation of node-wise sampling for GNNs; and (ii) a parametrization of node sampling distributions as node- or edge-wise weights of attention-like GNN layers. I present an initial exploration of the potential of PertinenceSample for improving node classification accuracy in the presence of noisy edges. Specifically, in two synthetic experiments where roughly half of a node’s neighbors may have similar features but different labels, I demonstrate that extending a GraphSAGE model with a 2-layer perceptron for learning the PertinenceSample weights can improve classification accuracy from 50%–75% to (nearly) 100%.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting</title>
<link href="https://hdl.handle.net/1721.1/164487" rel="alternate"/>
<author>
<name>Murzynowski, Philip</name>
</author>
<id>https://hdl.handle.net/1721.1/164487</id>
<updated>2026-01-13T04:08:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting
Murzynowski, Philip
Graph neural networks (GNNs) are an important class of methods for leveraging the information present in graph structures to perform various learning tasks. Distributed GNNs can improve the performance of GNN execution by dividing computation among multiple machines and scale to large graphs by partitioning graph features and the graph structure. Although distributed GNNs are able to achieve self-relative speedup, they are often slower than well-optimized code running on a single machine. For example, evaluation of the prevalent Distributed DGL system on graphs in the Open Graph Benchmark shows Distributed DGL can achieve speedup of over 2× when moving from one to four nodes, but execution of Distributed DGL on 4 nodes is 2× slower than a well-optimized GNN system, such as the SALIENT system, on a single machine.&#13;
&#13;
In my thesis, I argue that it is possible for a distributed GNN system to be both fast and scalable. Specifically, I show that it is possible to match the performance of well-optimized, non-distributed codes for GNN training and also achieve good scalability when running in the distributed setting. I present a system called Distributed SALIENT and motivate its design through profiling and identifying bottlenecks that arise in the distributed setting. Key components of Distributed SALIENT include the use of well-optimized code for local computations, pipelining of inter-machine communication, and a careful trade-off between data partitioning and partial replication.&#13;
&#13;
I evaluate Distributed SALIENT on the Open Graph Benchmark (OGB) and show that Distributed SALIENT achieves good speedup compared to SALIENT’s well-optimized single-node code while only using replication factors of roughly 5%. In fact, in experiments with training a 3-layer GraphSAGE model on the large OGB papers100M data set, Distributed SALIENT on 8 nodes is 8.6× faster than SALIENT on 1 node.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park</title>
<link href="https://hdl.handle.net/1721.1/164480" rel="alternate"/>
<author>
<name>Zhao, Celina</name>
</author>
<id>https://hdl.handle.net/1721.1/164480</id>
<updated>2026-01-13T04:08:22Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park
Zhao, Celina
In December 2016, China launched the Giant Panda National Park (GPNP). A massive ecological initiative aimed at safeguarding its beloved national symbol and international icon of conservation, the park marked an unequivocal win for giant pandas. But for the 100,000 people already living in and around its borders, the outcome was not as clear. &#13;
The GPNP seeks to establish a harmonious balance between biodiversity protection and human development. But the vast amount of land covered by the park means not all places are equally primed to achieve that goal. A handful of communities have been designated as exclusive entrance communities, with lavish funding to become the face of the national park. In others, a persistent question simmers: Are pandas more important than people? &#13;
Central to this story is how individuals are adapting and reimagining their futures. Rather than a binary of winners and losers, the GPNP has sparked a wide range of human responses, showing that the path to a sustainable future between people and pandas is far from black and white.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reviewing I.S. : how to handle legacy systems?</title>
<link href="https://hdl.handle.net/1721.1/164457" rel="alternate"/>
<author>
<name>Orlando, Ricardo, 1966-</name>
</author>
<id>https://hdl.handle.net/1721.1/164457</id>
<updated>2026-01-07T03:23:47Z</updated>
<published>1999-01-01T00:00:00Z</published>
<summary type="text">Reviewing I.S. : how to handle legacy systems?
Orlando, Ricardo, 1966-
Thesis: S.M.M.O.T., Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 1999; Includes bibliographical references (leaves 100-106).
</summary>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors</title>
<link href="https://hdl.handle.net/1721.1/164456" rel="alternate"/>
<author>
<name>Trapp, Donald L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164456</id>
<updated>2026-01-07T03:23:33Z</updated>
<published>1962-01-01T00:00:00Z</published>
<summary type="text">The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors
Trapp, Donald L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1962; Appendix contains numerous pamphlets; Includes bibliographical references (leaves 135-136).
</summary>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of a control system for the terminal phase of a satellite rendezvous</title>
<link href="https://hdl.handle.net/1721.1/164454" rel="alternate"/>
<author>
<name>Hollister, Walter M., 1930-</name>
</author>
<id>https://hdl.handle.net/1721.1/164454</id>
<updated>2026-01-07T03:23:50Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">The design of a control system for the terminal phase of a satellite rendezvous
Hollister, Walter M., 1930-
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 47).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Noise analysis of circuit models representing maser operation.</title>
<link href="https://hdl.handle.net/1721.1/164451" rel="alternate"/>
<author>
<name>Hempstead, Robert Douglas.</name>
</author>
<id>https://hdl.handle.net/1721.1/164451</id>
<updated>2026-01-07T03:23:54Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Noise analysis of circuit models representing maser operation.
Hempstead, Robert Douglas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1965; Bibliography: leaves 106-108.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard</title>
<link href="https://hdl.handle.net/1721.1/164448" rel="alternate"/>
<author>
<name>Ferguson, William Lloyd.</name>
</author>
<id>https://hdl.handle.net/1721.1/164448</id>
<updated>2026-01-07T03:23:43Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard
Ferguson, William Lloyd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1979; Bibliography: leaves 194-195.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atmospheric Impacts of Hydrogen as an Aviation Fuel</title>
<link href="https://hdl.handle.net/1721.1/164348" rel="alternate"/>
<author>
<name>Gibney, Evan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164348</id>
<updated>2025-12-17T03:06:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Atmospheric Impacts of Hydrogen as an Aviation Fuel
Gibney, Evan M.
Hydrogen is being investigated as a promising zero-carbon aviation fuel, offering the potential to eliminate direct CO₂ emissions while being produced with low lifecycle greenhouse gas emissions. Despite these benefits, there are additional indirect climate and air quality costs associated with direct hydrogen emissions which are often overlooked. We quantify the perturbation in the atmospheric composition associated with the introduction of hydrogen-fueled aircraft, broadening the current understanding of the non-CO₂ effects of these fleets. We use the GEOS-Chem High Performance (GCHP) global chemistry-transport model to conduct a spatially discretized, multi-year impact assessment of the atmospheric impacts of hydrogen-fueled aviation. We implement a flux surface boundary condition for hydrogen to provide an improved representation of the soil sink, relative to the default fixed boundary condition. This results in a net surface exchange of -16.7 Tg H₂ per year. Two hydrogen scenarios are evaluated using the updated GCHP implementation, representative of high and low mitigation scenarios for direct hydrogen emission rates. For the two scenarios, we observe increases in the mean atmospheric methane mixing ratio of 3.34 ppbv and 10.7 ppbv, corresponding to increases in methane lifetime of 0.24% and 0.77%, respectively. The increased methane lifetime as well as in-situ oxidation of stratospheric hydrogen results in an increased stratospheric water vapor burden of 0.42 Tg and 2.3 Tg (or 0.052% and 0.28%) for the high and low mitigation scenarios, respectively. Additionally, we show the perturbation to tropospheric ozone levels to be between -0.047% and +0.30%, where the decreased ozone results from the removal of NOₓ emissions associated with fuel cells and low hydrogen emission rates. This analysis provides the foundation for understanding the implications of potential future hydrogen-based aviation fleets on climate and air quality.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet</title>
<link href="https://hdl.handle.net/1721.1/164347" rel="alternate"/>
<author>
<name>Ocharoenchai, Nanticha</name>
</author>
<id>https://hdl.handle.net/1721.1/164347</id>
<updated>2025-12-17T03:06:34Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet
Ocharoenchai, Nanticha
Discussions about climate change and carbon sequestration have largely revolved around plant structures we can easily see, like leaves that absorb CO₂ for photosynthesis and woody trunks that store carbon as biomass. Carbon credits that companies and consumers buy to compensate for emissions they’ve produced are primarily calculated based on these parts, as are models that predict climate change impacts. But researchers are now beginning to understand that what we see aboveground is only part of the equation. The other part lies beneath our feet in an intricate, expansive, covert realm where plant roots, microbial communities and soil dynamics interact. These belowground systems are crucial for cycling carbon through the Earth and regulating the climate, but relatively little is known about them compared to aboveground systems. This is especially true in tropical regions, where one-third of the world’s terrestrial carbon storage lies. However, these systems are evolving quickly with climate change, contradicting what models have previously projected. With so many global decisions based on such models, these uncertainties hold planetary significance for our future. A group of scientists is fighting an uphill battle, racing against time to understand this understudied field.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Winter</title>
<link href="https://hdl.handle.net/1721.1/164346" rel="alternate"/>
<author>
<name>White, Mackenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/164346</id>
<updated>2025-12-17T03:06:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Engineering Winter
White, Mackenzie
As winters warm and snowfall becomes less reliable, ski resorts worldwide increasingly depend on artificial snow to stay open. Snowmaking, once a stopgap, has become the backbone of entire seasons in a sprawling choreography of pumps and pressurized mist designed to hold trails together. At resorts like Vermont’s Bromley Mountain, snowmakers work through the night, drawing millions of gallons from limited reservoirs and operating within narrowing windows of cold air. What emerges is a portrait of winter in transition: less predictable, more expensive, increasingly manufactured. The efforts to preserve winter recreation carry growing costs in energy and water, and raise questions of equitable access. Many smaller, independent ski areas struggle to meet the demands of climate adaptation, while larger resorts expand their operations, widening the divide in who can afford to sustain them. In the American West, where rivers depend heavily on snowpack melt, the spread of snowmaking ties winter recreation to a water system already under immense strain. As artificial snow becomes the norm, winter is increasingly a season bought, built, and rationed, raising the question of whether attempts to keep the season alive are accelerating the changes that threaten to erase it.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IP Networks Over Heterogeneous Embedded Serial Links</title>
<link href="https://hdl.handle.net/1721.1/164271" rel="alternate"/>
<author>
<name>Perry, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164271</id>
<updated>2025-12-11T03:08:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">IP Networks Over Heterogeneous Embedded Serial Links
Perry, Nathan
The Internet Protocol (IP) provides a number of key benefits to networked devices: it serves as a "narrow waist" enabling functional modularity by decoupling lower-layer devices from application behavior, it provides a notion of transitive connectivity and a number of standardized methods to achieve it, and most importantly, it is ubiquitous, enabling almost all networked applications to mutually communicate.&#13;
&#13;
Many embedded microcontrollers cannot take advantage of the benefits of IP because they lack the dedicated networking hardware that is, as a practical matter, required to interact with nontrivial networks. I observe that multihop point-to-point IP networks can in principle be constructed over the communication media that microcontrollers commonly do have, such as UARTs, I2C, SPI, and CAN bus, but software support is lacking to make this networking approach accessible.&#13;
&#13;
Therefore, this thesis develops and evaluates interstice, a platform-independent, open-source software library designed to enable the flexible implementation of modular packet forwarders in userspace. It can be used to interconnect devices and their IP stacks across a variety of conventional&#13;
and unconventional links. Interstice exposes a reprogrammable, dynamically-updatable packet-forwarding strategy, enabling forwarder nodes in principle to act as hubs, bridges, full routers, or implement firewalls or NAT, as application requirements and platform constraints permit.&#13;
&#13;
This approach enables benefits for modular, networked systems of microcontrollers which need to talk to the outside world: using IP enables internal microcontrollers to communicate with external devices such as PCs and smartphones without the need for application gateways. Further, to the extent that such networks are runtime-reconfigurable, features of IP such as address assignment, dynamic routing, and link-agnosticity can be incredibly beneficial.&#13;
&#13;
Interstice is evaluated here primarily against networks of various types of serial links (UART, I2C, CAN) speaking PPP, selected to demonstrate the utility of the approach for connecting embedded devices that lack dedicated networking peripherals, and to show that link drivers can be specialized to take advantage of the specific characteristics of each link. The approach is showcased in application scenarios including a networked milling machine, and is analyzed for a number of performance metrics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives</title>
<link href="https://hdl.handle.net/1721.1/164270" rel="alternate"/>
<author>
<name>Li, Yuqing Lucy</name>
</author>
<id>https://hdl.handle.net/1721.1/164270</id>
<updated>2025-12-11T03:08:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives
Li, Yuqing Lucy
Imagination is the origin of reality. Cultivating new infrastructural and ecological imaginaries is crucial to addressing the climate crisis. Where is the space to prototype new social and technological relations? Transient electronics is an emerging field in advanced materials focused on making electronics that don’t last. Devices are designed to be transient for biomedical, environmental monitoring, or energy storage applications. It is a fascinating and unconventional direction that advances the area of biocompatibility, redefining waste and time-programmable decay {Making electronics that, 2022}. However, in a manufacturing system that fundamentally favors the inert and invariant, transient properties can be precisely the qualities that make adaptation most challenging, often failing at the very stage of imagination. Taking inspiration from transient electronics, this thesis consists of a set of novel biomaterials, a workflow, and three fictional stories to enrich our imagination and instill agency amidst entangled humanitarian, ecological, and technological crises. BioLIG is a material for prototyping accessible and compostable electronics. It uses laser-induced graphene as an organic, bio-derived conductor and affordable biomaterials as the substrate. Three sheets and two inks make up a toolkit to create biocomposites with different properties, colors, and textures specifically designed for prototyping sensors and circuits with transient behaviours. Through a series of characterisations, BioLIG is evaluated, demonstrating that, with a single material, its electrical performance is on par with that of synthetic substrates. However, the goal is not to create a replacement material but to prototype new social and technological relations to transient materials. Through a questionnaire, I collected stories, ideas, and questions from makers, designers, and artists for BioLIG and used those as the basis for imagination. 
In a speculative house, on three floors, three stories unfold of a hoarder, a city forester, and a family living in a time with a leap in our relationship to fabrication, to electronics, and to decay.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches</title>
<link href="https://hdl.handle.net/1721.1/164266" rel="alternate"/>
<author>
<name>Justen, Lennart J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164266</id>
<updated>2025-12-11T03:08:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches
Justen, Lennart J.
Civilization confronts a growing challenge: advancing transformative biological science while safeguarding against catastrophic misuse, a tension amplified by the rapid convergence between biology and artificial intelligence. The COVID-19 pandemic starkly revealed our vulnerabilities to self-replicating, exponential biological phenomena, yet current defenses remain dangerously inadequate—often blind to novel pathogens until too late and lacking barriers against rapid airborne transmission. This thesis argues that robust biosecurity enables, rather than hinders, progress, and advances three key defensive capabilities. First, it evaluates blood metagenomics for pathogen-agnostic surveillance, reanalyzing public datasets to quantify viral signatures and guide the implementation of much-needed early-warning systems sensitive to novel pathogens. Second, it advances far-UVC, ultraviolet light with wavelengths between 200 and 235 nm, for continuous indoor air disinfection, critically assessing its safety profile through an international expert review and establishing research priorities essential for deploying this vital physical defense against airborne threats. Third, it develops rigorous methodologies for evaluating AI's rapidly evolving biological capabilities, benchmarking frontier models across diverse tasks to track progress, reveal limitations in current assessments, and guide responsible innovation in this powerful dual-use technology. Collectively, these contributions help accelerate technologies to mitigate biological risks, thereby helping secure the conditions for continued, beneficial advancement of biology in the age of AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164265" rel="alternate"/>
<author>
<name>Poole-Dayan, Elinor</name>
</author>
<id>https://hdl.handle.net/1721.1/164265</id>
<updated>2025-12-11T03:08:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies
Poole-Dayan, Elinor
Deliberative assemblies—representative samples of citizens engaged in collective decision-making through facilitated learning and deliberation—are increasingly recognized as powerful tools for revitalizing democratic governance. Yet, core aspects of how deliberation shapes which ideas advance, how perspectives evolve, and why certain recommendations succeed remain opaque and underexamined. This thesis addresses these gaps by investigating: (1) How might we trace the evolution and distillation of ideas into concrete recommendations within deliberative assemblies? and (2) How does the deliberative process shape delegate perspectives and influence voting dynamics over the course of the assembly?&#13;
&#13;
&#13;
To answer these questions, I develop LLM-based methodologies for empirically analyzing transcripts from a tech-enhanced student deliberative assembly. The first framework identifies and visualizes the space of expressed suggestions, revealing that seemingly large gaps between ideas and final recommendations often reflect productive deliberative filtering—while also surfacing overlooked viable ideas.&#13;
A second analysis integrates post-assembly survey data with transcript-grounded voting patterns to uncover the primary drivers of vote change: edits to recommendations, evolving opinions, and strategic shifts in response to updated priorities. Building on this, I introduce a framework for reconstructing each delegate’s evolving stance across the assembly, linking shifts in perspective to specific deliberative moments and justifications.&#13;
&#13;
Together, these methods contribute novel empirical insight into deliberative processes and demonstrate how LLMs can surface high-resolution dynamics otherwise invisible in traditional assembly outputs. The findings lay groundwork for new tools that support facilitators and delegates during live assemblies, improve transparency for decision-makers, and elevate ideas that may otherwise be missed.&#13;
&#13;
Looking ahead, this work opens pathways for comparative research across assemblies and highlights the potential for human-centered AI to meaningfully enhance deliberative democratic practice. As societies seek new modes of participatory governance amid growing polarization and institutional mistrust, tools that strengthen deliberation without compromising its core human character are urgently needed.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164262" rel="alternate"/>
<author>
<name>Wong, Wing Cheung Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164262</id>
<updated>2025-12-11T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies
Wong, Wing Cheung Michael
With trust in traditional democratic institutions waning, it is increasingly important to examine how potential new institutions could be created and bolstered, with particular emphasis on restoring trust and empowering the public. One potential solution, the citizens' or deliberative assembly, can serve to bridge the governance and legitimacy gap between real-world policy decision-making processes and citizen-driven impact by leveraging random sortition and a well-designed deliberation process. In this thesis, I explore how AI-driven sensemaking via GPT-4o-mini, a Large Language Model (LLM), synthesized with custom-built visualization tools, can potentially reveal the dynamics within citizen deliberative assemblies, where representative, randomly selected citizens navigate public interest issues through facilitated deliberation, and how such tools can serve to amplify transparency within both the assembly process itself and the issues they explore. Through building three different prototype visualization frameworks and developing an AI-powered topic identification process called backcasting, I analyze novel datasets from two tech-enhanced assemblies: fully recorded discussions from an on-the-ground citizens' assembly in Deschutes County, Oregon, as well as an MIT student assembly on sustainability. In backcasting, assembly outcomes are linked to transcriptions of assembly discussions via LLM tagging, uncovering what, when, who, and where participants deliberate about topics that eventually become proposals, recommendations, or outcomes. Furthermore, I analyze the sentiment with which an assembly delegate presented their view on a certain recommendation (agreement, disagreement, etc.) in addition to the supporting reasoning patterns the delegate used to express their view, if any (e.g. whether they draw from personal experience, reference outside expertise, etc.). 
To evaluate the final prototype tool, I interview subject-matter and assembly experts, assembly organizers and facilitators, as well as assembly delegates to assess the potential and drawbacks of this visualization tool and its AI sensemaking backbone. Positive feedback from these user studies included clear potential for research, narrative building, and facilitation improvement, in addition to greater perceived transparency into the workings of an assembly process. Further work is still needed, however, to address significant lingering issues, such as adjusting the presentation to better serve specific use cases and to reduce complexity and confusion, the most frequently cited drawback of Delibrary. Overall, my thesis aims to build transparent insights into the human-led structures of assemblies, enabling relevant stakeholders, from delegates and policy makers to the general public, to achieve a better understanding of the assembly process and to engender perceived legitimacy by illustrating that delegates drawn from all walks of life do have a meaningful voice in an impactful process. By helping to promote this understanding and perception of legitimacy of an effective and respectful deliberation process, I strive to ultimately help scaffold healthier democratic decision-making.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Storybooks for Early AI Literacy</title>
<link href="https://hdl.handle.net/1721.1/164170" rel="alternate"/>
<author>
<name>Pu, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/164170</id>
<updated>2025-12-04T03:09:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interactive Storybooks for Early AI Literacy
Pu, Isabella
As artificial intelligence (AI) becomes increasingly present in children's everyday environments, there is an urgent need for developmentally appropriate tools that help young learners understand and shape these technologies. To be effective, these tools must not only successfully convey complex concepts but also engage children in ways that are meaningful, accessible, and fun.&#13;
&#13;
This thesis introduces the Interactive Storybooks for Early AI Literacy, a series of ten interactive storybooks for children ages 6–9 that combine narrative, mini-games, and scaffolded creative AI interactions to teach core AI and robotics concepts. The storybooks follow an overarching narrative featuring a friendly robot, Doodlebot, who must learn creative tasks with the child's help, framing the child as an AI designer and introducing them to the concept of training AI models through the narrative. The storybooks additionally contain interactive games and activities which help keep kids excited and engaged, while providing structured opportunities to experiment with and explore AI creation tools.&#13;
&#13;
First, a pilot study was conducted at a community summer camp with four Interactive Storybooks. Children expressed joy and pride in their AI creations, used the characters as emotional anchors for learning, and began to successfully articulate key AI concepts. Four engagement archetypes emerged: the Reader, the Gamer, the Showcaser, and the Social Connector, each representing a distinct way children interacted with the storybooks. However, despite behavioral signs of engagement, many children described the narrative portions as boring and claimed to prefer games.&#13;
&#13;
To explore this tension, a home deployment study compared two versions of the system: a "Books" condition with the full narrative and a "Games" condition with only instructional text. Both conditions included the same mini-games and AI interactions. While children in both groups reported similar levels of enjoyment, those in the Books condition showed significantly higher learning gains, greater increases in perceived knowledge and confidence, and stronger connections to the characters. Children in the Books condition also more frequently referenced the narrative when describing AI concepts and demonstrated more creative and iterative behavior during and after gameplay.&#13;
&#13;
Overall, these findings suggest that combining storytelling, gameplay, and creative AI interactions is an effective and engaging approach to teaching AI and robotics to young children. Narrative context appears to support concept recall, deepen emotional investment, and promote thoughtful experimentation, even with complex concepts for this age group, like AI and robotics. Based on insights from both studies, this thesis concludes with six design recommendations for creating developmentally appropriate, emotionally resonant AI education tools for early learners using narrative and play.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Volume Mount Devices</title>
<link href="https://hdl.handle.net/1721.1/164144" rel="alternate"/>
<author>
<name>Han, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/164144</id>
<updated>2025-12-04T03:09:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Volume Mount Devices
Han, Alan
As Moore's Law ends and AI demands increasingly tax our climate and resources, the limitations of two-dimensional electronics integration have become critical bottlenecks. Surface-mount devices (SMDs) remain entrenched in industry practice despite being insufficient for today's computing challenges and sustainability needs. This thesis introduces the volume mount device (VMD), a three-dimensional electronics packaging standard that bypasses the traditional die-to-server stack while offering a scalable, reversible framework inspired by natural ecosystems' circularity.&#13;
The VMD approach embeds both electrical function and mechanical structure into modular elements that assemble freely in 3D space. Rather than building circuits on planar PCBs, this system constructs functional circuits by linking components into a self-constraining lattice architecture. My current implementation leverages existing supply chains by incorporating SMD components on small tile PCBs, while establishing a pathway toward eventually replacing SMDs at the IC packaging level.&#13;
I developed a hybrid assembly system combining 3D printing and pick-and-place automation to build multi-layered electronic assemblies efficiently. Where prior work achieved only tens of parts at hundreds of components per hour (CPH), my system demonstrates automated assembly of hundreds of integrated elements at approximately 1000 CPH. I evaluate various geometric configurations, assess performance overhead compared to conventional approaches, and develop cost-effective, self-aligning connector interfaces for reliable joints—creating a foundation for electronics systems that can be assembled, disassembled, and reassembled as needed while improving resilience against supply chain disruptions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision</title>
<link href="https://hdl.handle.net/1721.1/164137" rel="alternate"/>
<author>
<name>Willis, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/164137</id>
<updated>2025-12-04T03:09:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision
Willis, Jacob
Fast radio bursts (FRBs) are a novel form of radio transients discovered in 2007. These bright, extragalactic radio signals have an inferred all-sky rate of hundreds of detections per day. The properties of FRBs hold valuable clues about the extreme physical processes driving them while also holding information about the astrophysical plasmas they traverse on their journey to Earth. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has led the field with the hundreds of FRB detections the collaboration has published to date. However, these detections typically have localization regions so large that we cannot identify a single host galaxy, never mind its local environment. To improve upon this, CHIME/FRB has been transformed into a very long baseline interferometry (VLBI) array, drastically increasing the angular resolution of CHIME/FRB from arcminute to sub-arcsecond precision.&#13;
&#13;
In this work, I present my contributions to commissioning the CHIME/FRB VLBI Outrigger station located at the Green Bank Observatory (GBO) in West Virginia. This includes measuring and validating GBO's exact position to enable the localization of FRBs to sub-arcsecond precision.&#13;
&#13;
For VLBI networks spanning thousands of kilometers, the difference in the local ionospheric environments is significant and leads to errors in the CHIME/FRB Outrigger localizations. I present a thin shell model of the ionosphere to parameterize the local ionospheric environment for each VLBI station. This model may be used to interpolate the error induced by the ionosphere in FRB observations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √sNN = 5.02 TeV</title>
<link href="https://hdl.handle.net/1721.1/164136" rel="alternate"/>
<author>
<name>Chou, Pin-Chun</name>
</author>
<id>https://hdl.handle.net/1721.1/164136</id>
<updated>2025-12-04T03:09:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √sNN = 5.02 TeV
Chou, Pin-Chun
The first measurement of the Z-hadron two-particle correlation function is reported in PbPb collisions at √sNN = 5.02 TeV, using the PbPb collision data taken in 2018. The integrated luminosity of the PbPb data is 1.67 ± 0.03 nb⁻¹, which made the analysis possible for the first time. Collision data with at least one Z boson with 40 &lt; pT &lt; 200 GeV/c are analyzed. The azimuthal angle distributions with respect to the Z bosons, which are sensitive to modification of the in-medium parton shower and to medium recoils, are measured in central PbPb collisions. A significant modification of the two-particle correlation in pseudorapidity difference and azimuthal angle difference is observed with respect to the reference measured in pp collisions. These results are compared to phenomenological models that include medium recoil, medium response, and thermalization of the QGP wakes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color</title>
<link href="https://hdl.handle.net/1721.1/164134" rel="alternate"/>
<author>
<name>Myers, Paris G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164134</id>
<updated>2025-12-04T03:09:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color
Myers, Paris G.
Structural color is nature’s programmable color palette. While pigments and dyes absorb light to produce color, structural color uses nanoscale, light-reflecting structures to appear iridescently colored. We present MorphoChrome, an optical device for real-time, handheld, programmable structural color fabrication. Analogous to painting with light, MorphoChrome creates multicolor, structurally colored designs by exposing a commercially available holographic photopolymer film to user-controlled wavelengths. Within the device, red, green, and blue laser diodes pass through an optical prism, combining light and producing mixed color outputs on the film. Additionally, we introduce a resin-based process to adhere and integrate the structurally colored film with flexible and rigid objects and diverse making processes. In this thesis, we focus on the device’s optical design and fabrication, color mixing, the color output UI controller, device aperture tips, and the holographic photopolymer film adherence process. We evaluate the available color space and color resolution, and demonstrate creative fabrication applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs</title>
<link href="https://hdl.handle.net/1721.1/164132" rel="alternate"/>
<author>
<name>Agarwal, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/164132</id>
<updated>2025-12-04T03:09:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs
Agarwal, Gauri
Understanding the ripple effects of events—both real and speculative—is essential for navigating complex futures. Large Language Models (LLMs) have emerged as powerful tools that offer a user-friendly, narrative experience for question answering and reasoning across large corpora of unstructured data [15, 96]. While LLMs can respond to complex ‘what-if’ questions, they typically provide single, unverifiable answers. Even with retrieval-augmented generation (RAG), which grounds LLM responses on external sources, the opacity of reasoning pathways undermines trust in model outputs [97]. Next Week Tonight (NWT) builds further on the narrative and reasoning capability of LLMs by enhancing the exploration of what-if futures and making it more transparent and evidence-based. NWT exposes the underlying knowledge graph, allowing users to inspect inference pathways directly. This also enables the generation of multiple, diverse scenarios from a single condition—each following different but explainable causal chains. In testing 15 counterfactual prompts spanning diverse news topics, NWT produced scenario narratives that were rated as significantly more causally coherent, transparent, and easier to audit than standard chat completions. Beyond technical performance, NWT reinvents scenario planning as an interactive narrative experience, encouraging curiosity, critical thinking, and deeper engagement with the complexities of future events. By surfacing not only what could happen but why and how, NWT aims to empower analysts, policymakers, and the public to navigate uncertainty with greater clarity and confidence. GitHub: https://github.com/viral-medialab/next-week-tonight
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring Clonal Dynamics in Blood using Single-Cell Measurements</title>
<link href="https://hdl.handle.net/1721.1/164129" rel="alternate"/>
<author>
<name>Perry, Andrea N.</name>
</author>
<id>https://hdl.handle.net/1721.1/164129</id>
<updated>2025-12-04T03:09:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inferring Clonal Dynamics in Blood using Single-Cell Measurements
Perry, Andrea N.
In this work, we uniquely tag hematopoietic (blood) stem cells with genetic barcodes and follow their progeny over time to ask whether clonally related cells in myeloproliferative neoplasms (MPNs) favor particular blood cell fates. Myeloproliferative neoplasms are clonal disorders driven most frequently by the JAK2-V617F mutation, which arises in a single hematopoietic stem cell (HSC) and ultimately dominates the normal process of blood cell production. Although all patients carry the same driver mutation, they still branch into three distinct disease forms—essential thrombocythemia (ET), polycythemia vera (PV), or primary myelofibrosis (PMF)—and the reason for this variation remains unknown. One compelling hypothesis is that the JAK2-V617F mutation may arise in HSC subsets with intrinsic biases toward platelet-producing cells (as in ET) or red blood cell precursors (as in PV). To investigate this question, we analyzed bone-marrow cKit⁺ cells from mice engineered for inducible MPN disease and CRISPR array repair lineage tracing (CARLIN), using single-cell RNA sequencing. Our gene expression analysis shows that the mutation keeps key signaling and stress-response genes switched on and boosts growth-promoting enzymes, collectively pushing blood production toward the myeloid line. At the resolution of individual CARLIN clones (i.e., cells grouped by a shared progenitor), however, we observe no robust mutation-induced lineage bias—an outcome attributable to limited clone recovery and inter-mouse variability. Crucially, this work establishes a scalable analysis pipeline for future, higher-yield CARLIN experiments. Enhancing lineage-tracing sensitivity, barcode diversity, and biological replication will be essential to test whether these interferon-/stress-response and kinase programs manifest as subtle, clone-level fate biases in JAK2-driven MPN.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks</title>
<link href="https://hdl.handle.net/1721.1/164059" rel="alternate"/>
<author>
<name>Zarkos, Christos V.</name>
</author>
<id>https://hdl.handle.net/1721.1/164059</id>
<updated>2025-11-26T03:06:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks
Zarkos, Christos V.
Serialization frameworks are a fundamental component of datacenters, as they enable language- and platform-neutral communication and storage. However, software serialization faces major performance bottlenecks, resulting in a significant fraction of cloud cycles being dedicated to this process. Prior work has proposed specialized hardware accelerators to address these overheads. While these proposals achieve considerable speedups, they are expensive in terms of verification, fabrication, and deployment, and often hardcode too many details of the (de)serialization framework in hardware. We propose SERenaDE, a serialization framework designed to integrate general-purpose accelerators currently deployed in datacenters in order to accelerate and offload serialization to hardware. Specifically, we repurpose the Intel In-Memory Analytics Accelerator (IAA), an accelerator engine offering fast compression, to enable fast, user-transparent serialization and deserialization, completely removing software serialization from the execution pipeline. We evaluate our system on latest-generation production machines, both with synthetic microbenchmarks and with open-source, representative fleet-wide benchmarks. Our results show comparable performance in terms of per-request latency across all types of messages, while significantly improving throughput (especially at the tail), maintaining thread scalability, and achieving high compression ratios alongside substantial speedups for larger messages. Under 95th-percentile latency constraints, SERenaDE improves serialization and deserialization throughput by 13% and 30% respectively, while achieving 0.2x to 6.94x smaller serialized message sizes for messages with a total memory layout larger than 4KB.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-Optimized Design of 3D Shapes with Part-Based Control</title>
<link href="https://hdl.handle.net/1721.1/164056" rel="alternate"/>
<author>
<name>Zhan, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/164056</id>
<updated>2025-11-26T03:06:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Physics-Optimized Design of 3D Shapes with Part-Based Control
Zhan, Sean
We introduce PhysiOPart, a computational approach for rapid generative design of 3D objects optimized for physical integrity. PhysiOPart enables users to edit and combine object parts to explore a vast design space. To model continuous surfaces of arbitrary resolution without topology restrictions, we parametrize parts with neural implicit representations. However, when parts are assembled to form an object, the resulting geometry is not guaranteed to be functional. Existing generative modeling approaches use task-specific neural predictors to approximate physical behaviors with limited accuracy. We propose an end-to-end differentiable physics simulation pipeline that performs linear static analysis to optimize for user-specified objectives, leveraging learned geometry priors. Our part-based formulation with the finite element method is highly customizable, allowing for user-defined per-part materials, loads, and boundary conditions. The optimized designs exhibit improved physical behavior and can be fabricated.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Assembly of Curved Structures from Flat Configuration</title>
<link href="https://hdl.handle.net/1721.1/164055" rel="alternate"/>
<author>
<name>Zaman, Akib</name>
</author>
<id>https://hdl.handle.net/1721.1/164055</id>
<updated>2025-11-26T03:06:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fast Assembly of Curved Structures from Flat Configuration
Zaman, Akib
Imagine deploying an emergency shelter that transitions seamlessly from a flat configuration to a lifted structure, or a folded robot that is sent through a tunnel and subsequently activated to expand into a larger form at the endpoint, with a single, collective pull of strings. This scenario raises two critical questions: (i) how to decompose the structure into a flat state that encodes the 3D geometry, and (ii) where to place strings through the unit modules to achieve complete actuation. Although these questions have been explored individually, comprehensive solutions remain scarce. To address this challenge, this thesis presents a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. Target structures are decomposed into rigid, spatially varied quad tiles optimized to approximate a user-provided surface, forming a flat mechanical linkage. A two-step algorithm is then applied to determine a physically realizable string path that controls only a subset of tiles, enabling smooth actuation from the flat to the assembled configuration. First, the minimal subset of tiles required for string control is computed by considering both the structure’s geometry and inter-tile interactions. Second, a valid string path that minimizes friction is identified through these tiles, thereby transforming the flat linkage into the target 3D form upon tightening a single string. The resulting designs can be manufactured in flat form using computational fabrication techniques such as 3D printing, CNC milling, or molding, thereby simplifying both production and transportation. Validation is provided through a series of physical prototypes and application case studies, ranging from medical devices and space shelters to large-scale architectural installations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems</title>
<link href="https://hdl.handle.net/1721.1/164033" rel="alternate"/>
<author>
<name>Zhang, Ziyu</name>
</author>
<id>https://hdl.handle.net/1721.1/164033</id>
<updated>2025-11-26T03:06:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems
Zhang, Ziyu
The recent advancement of large language models (LLMs) and large multimodal models (LMMs) greatly enhances the capabilities of AI systems such as recommendation systems and coding assistants, making them more practical for real-world deployment. However, these models cannot directly interact with large volumes of data in a knowledge corpus during inference/task time due to inherent architectural limits and cost concerns. Encoding data into vector embeddings and leveraging approximate nearest neighbor search (ANNS) have thus become an important data processing primitive in AI systems following the introduction of retrieval-augmented generation (RAG). However, the complexity of the tasks these AI systems aim to solve introduces challenges for existing ANNS algorithms. I developed methods to extend existing ANNS algorithms to address two such challenges: freshness and heterogeneity in the data.&#13;
&#13;
Graph-based ANNS algorithms have been shown to offer a superb trade-off between cost and approximation quality while following a simple intuition of best-first search. I focus on adapting graph-based ANNS algorithms to two settings featuring emerging challenges. (1) Data is updated constantly. Existing algorithms are inefficient under deletions and not robust against different orderings in the workload. I propose methods addressing these problems and developed an algorithm that supports updates effectively and efficiently, based on Vamana, a state-of-the-art graph-based ANNS algorithm. (2) Data is heterogeneous in format, modality, and how it relates to a query, making similarity difficult to capture with the canonical ANNS definition. I explore ways to model the similarity between heterogeneous sources and use graph-based ANNS approaches to perform semantic search in this setting. I test this approach in an end-to-end multimodal question-answering system developed in-house.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of pGaN-gate power HEMTs</title>
<link href="https://hdl.handle.net/1721.1/164028" rel="alternate"/>
<author>
<name>Yu, Yue</name>
</author>
<id>https://hdl.handle.net/1721.1/164028</id>
<updated>2025-11-26T03:06:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterization of pGaN-gate power HEMTs
Yu, Yue
This thesis presents a comprehensive study of p-GaN gate GaN High Electron Mobility Transistors (HEMTs) with a focus on understanding how fabrication process variations and gate structural designs impact key electrical performance metrics. Five industry-fabricated wafers, each processed with distinct etch depths, contact strategies, and p-GaN surface configurations, were characterized using a combination of DC and pulsed I–V measurements. Full-transistor modules were evaluated alongside specialized test structures to enable both system-level and localized analysis. DC measurements using the Keysight B1505A system revealed that more aggressive gate contact schemes improved ON-resistance and transconductance, but often at the cost of increased gate leakage and reduced threshold control. Pulsed-IV characterization with the Auriga AU4750 system uncovered dynamic Ron degradation behavior and charge trapping effects, especially under high drain bias conditions. Extracted time constants demonstrated process-dependent trends, with wafers retaining more of the p-GaN surface exhibiting slower charge detrapping and more severe transient effects. Specialized test structures provided additional insights into gate lateral conduction, sheet resistance, and contact asymmetry, reinforcing the connection between device layout, processing, and observed variability. These findings highlight critical trade-offs in the design and fabrication of p-GaN gate GaN HEMTs and offer design-aware strategies for optimizing performance and reliability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crystallization of Glauber's salt</title>
<link href="https://hdl.handle.net/1721.1/164009" rel="alternate"/>
<author>
<name>Coberly, C. Wheeler.</name>
</author>
<id>https://hdl.handle.net/1721.1/164009</id>
<updated>2025-11-25T06:32:25Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Crystallization of Glauber's salt
Coberly, C. Wheeler.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 39).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of angular scintillation of radar echoes</title>
<link href="https://hdl.handle.net/1721.1/164006" rel="alternate"/>
<author>
<name>Graham, James William.</name>
</author>
<id>https://hdl.handle.net/1721.1/164006</id>
<updated>2025-11-25T06:32:44Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">Analysis of angular scintillation of radar echoes
Graham, James William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1952
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design and construction of an ultra-high vacuum field-ion microscope.</title>
<link href="https://hdl.handle.net/1721.1/164002" rel="alternate"/>
<author>
<name>Olson, Gregory Bruce.</name>
</author>
<id>https://hdl.handle.net/1721.1/164002</id>
<updated>2025-11-25T06:32:36Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">The design and construction of an ultra-high vacuum field-ion microscope.
Olson, Gregory Bruce.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Bibliography: leaf 35.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shipleasing as a prospective method of l/t financing for international shipowners.</title>
<link href="https://hdl.handle.net/1721.1/163999" rel="alternate"/>
<author>
<name>Angelicoussis, John Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/163999</id>
<updated>2025-11-25T06:32:28Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Shipleasing as a prospective method of l/t financing for international shipowners.
Angelicoussis, John Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1974; Includes bibliographical references.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates</title>
<link href="https://hdl.handle.net/1721.1/163997" rel="alternate"/>
<author>
<name>Lehman, LeNore Louise.</name>
</author>
<id>https://hdl.handle.net/1721.1/163997</id>
<updated>2025-11-25T06:32:40Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates
Lehman, LeNore Louise.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1988; Includes bibliographical references.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information</title>
<link href="https://hdl.handle.net/1721.1/163996" rel="alternate"/>
<author>
<name>Huttenlocher, Daniel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163996</id>
<updated>2025-11-25T06:32:20Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information
Huttenlocher, Daniel P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Bibliography: leaves 73-77.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies</title>
<link href="https://hdl.handle.net/1721.1/163994" rel="alternate"/>
<author>
<name>Perkins, Edwin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163994</id>
<updated>2025-11-25T06:32:33Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies
Perkins, Edwin H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1930; Includes bibliographical references (leaf 115).
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A precision method for the determination of dew points of complex gaseous systems</title>
<link href="https://hdl.handle.net/1721.1/163991" rel="alternate"/>
<author>
<name>Cox, John Tatum.</name>
</author>
<id>https://hdl.handle.net/1721.1/163991</id>
<updated>2025-11-25T06:32:38Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">A precision method for the determination of dew points of complex gaseous systems
Cox, John Tatum.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AbsInt-AI: Language Models for Abstract Interpretation</title>
<link href="https://hdl.handle.net/1721.1/163731" rel="alternate"/>
<author>
<name>Wang, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163731</id>
<updated>2025-11-18T06:27:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AbsInt-AI: Language Models for Abstract Interpretation
Wang, Michael
Static program analysis is a foundational technique in software engineering for reasoning about program behavior. Traditional static analysis algorithms model programs as logical systems with well-defined semantics, enabling strong guarantees such as never missing a bug. However, traditional analyses almost always rely on uniform, hard-coded heap abstractions. While more adaptive abstractions are possible in theory, they are rarely implemented in practice due to their complexity and fragility. This limits their precision and flexibility, especially in dynamic languages like JavaScript, where heap structures are heterogeneous and difficult to analyze statically. In this work, we introduce AbsInt-AI, a language-model-guided static analysis framework based on abstract interpretation with adaptive, per-object heap abstractions for JavaScript. This enables the analysis to leverage high-level cues, such as naming conventions and access patterns, without requiring brittle, hand-engineered heuristics. Importantly, the LM agent operates within a bounded interface and never directly manipulates program state, preserving the soundness guarantees of abstract interpretation. AbsInt-AI reduces false positives by up to 34% for bug detection compared to traditional static analysis while maintaining soundness. Our ablations show that the LM’s interactions with the analysis environment are crucial, outperforming non-agentic direct LM predictions by 25%.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift</title>
<link href="https://hdl.handle.net/1721.1/163730" rel="alternate"/>
<author>
<name>Sharma, Harsha</name>
</author>
<id>https://hdl.handle.net/1721.1/163730</id>
<updated>2025-11-18T06:27:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift
Sharma, Harsha
Video-streaming platforms tune dozens of playback parameters across thousands of client devices. Our measurements from Prime Video show that device-specific tuning can enhance stream quality. Yet traditional black-box optimization methods like Bayesian optimization become prohibitively expensive due to the large configuration space and the constant emergence of new device types. We introduce AZEEM, a scalable recommendation system leveraging few-shot prediction to rapidly identify promising configurations for new devices. The key insight behind AZEEM is that devices exhibit performance similarities that enable predictions from limited observations. Trained on offline data of device-playback configuration interactions, AZEEM efficiently narrows down the search space to a small set of configurations likely to contain optimal or near-optimal candidates. Additionally, AZEEM addresses temporal distribution shift, where the best-performing configurations change over time, by recommending a small, robust set of candidates rather than a single configuration. Evaluations using large-scale real-world datasets show that AZEEM reduces exploration cost by 5.8–13.6× and improves stream quality compared to state-of-the-art Bayesian optimization and multi-armed bandit approaches, enabling effective device-specific optimization at scale. The material in this thesis is primarily sourced from the paper "Predict, Prune, Play: Efficient Video Playback Optimization Under Device Diversity and Drift" authored by Harsha Sharma, Pouya Hamadanian, Arash Nasr-Esfahany, Zahaib Akhtar, and Mohammad Alizadeh, which is currently under submission.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oreo: Protecting ASLR Against Microarchitectural Attacks</title>
<link href="https://hdl.handle.net/1721.1/163729" rel="alternate"/>
<author>
<name>Song, Shixin</name>
</author>
<id>https://hdl.handle.net/1721.1/163729</id>
<updated>2025-11-18T06:27:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Oreo: Protecting ASLR Against Microarchitectural Attacks
Song, Shixin
Address Space Layout Randomization (ASLR) is one of the most prominently deployed mitigations against memory corruption attacks. ASLR randomly shuffles program virtual addresses to prevent attackers from knowing the location of program contents in memory. Microarchitectural side channels have been shown to defeat ASLR through various hardware mechanisms. We systematically analyze existing microarchitectural attacks and identify multiple leakage paths. Given the vast attack surface exposed by ASLR, it is challenging to effectively prevent leaking the ASLR secret against microarchitectural attacks. Motivated by this, we present Oreo, a software-hardware co-design mitigation that strengthens ASLR against these attacks. Oreo uses a new memory mapping interface to remove secret randomized bits in virtual addresses before translating them to their corresponding physical addresses. This extra step hides randomized virtual addresses from microarchitecture structures, preventing side channels from leaking ASLR secrets. Oreo is transparent to user programs and incurs low overhead. We prototyped and evaluated our design on Linux using the hardware simulator gem5.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Counting Substructures with Graph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/163728" rel="alternate"/>
<author>
<name>Tahmasebi, Behrooz</name>
</author>
<id>https://hdl.handle.net/1721.1/163728</id>
<updated>2025-11-18T06:27:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Counting Substructures with Graph Neural Networks
Tahmasebi, Behrooz
To achieve a graph representation, most Graph Neural Networks (GNNs) follow two steps: first, each graph is decomposed into a number of subgraphs (which we call the recursion step), and then the collection of subgraphs is encoded by several iterative pooling steps. While recently proposed higher-order networks show a remarkable increase in expressive power through a single recursion on larger neighborhoods followed by iterative pooling, the power of deeper recursion in GNNs without any iterative pooling is still not fully understood. To make it concrete, we consider a pure recursion-based GNN which we call Recursive Neighborhood Pooling GNN (RNP-GNN). The expressive power of an RNP-GNN and its computational cost quantify the power of (pure) recursion for a graph representation network. We quantify the power by means of counting substructures, which is one main limitation of Message Passing Neural Networks (MPNNs), and show how RNP-GNN can exploit the sparsity of the underlying graph to achieve low-cost powerful representations. We also compare against recent lower bounds on time complexity and show how recursion-based networks are near-optimal.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand</title>
<link href="https://hdl.handle.net/1721.1/163727" rel="alternate"/>
<author>
<name>Norton, Wil J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163727</id>
<updated>2025-11-18T06:27:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand
Norton, Wil J.
In robot hands, compliance improves the quality of grasps and allows for robustness in contact with the environment, which is why soft robot hands, which are inherently compliant, generate such interest despite being complex to control and model. In prior work, our lab developed a soft-rigid hybrid architecture for a robot finger, with the intention of making a compliant finger that is as easy to control as a rigid robot. This thesis details the work done to take this architecture and develop it into a five-fingered dexterous gripper capable of highly compliant grasping. Over several iterations, we create an integrated tendon-driven hand that is robust, maintainable, and inexpensive. We develop a precise controller for the soft-rigid hybrid finger and extend it for both position and task-space control of the hand. Additionally, we implement variable stiffness control within the controller, without the need for additional hardware, by adjusting gain values in the control loop. We test the ability of the hand to complete the full set of human grasping postures, and demonstrate that the soft-rigid architecture enables a high degree of generalization, completing 28 of the 33 identified human grasp postures. Additionally, tests illustrate the hand’s advantages in completing traditionally difficult manipulation tasks such as picking up thin deformable objects (such as a dollar bill or folding cloth) as well as in interfacing with soft or delicate target objects. We adapt a teleoperation system to map the movements of the robot gripper to a glove worn by a human operator, and evaluate the usability of the hand as a teleoperation target for completing several tasks. We illustrate promising results showing that the compliance of the hand compensates for operator error and allows for fast completion of tasks requiring environmental or object contact, which are traditionally difficult for existing rigid robots. 
Finally, we discuss the use of the teleoperation system to record demonstrations which we then use to train an imitation learning model, utilizing an implementation of denoising diffusion probabilistic models, to complete grasping tasks. We show that our soft-rigid fingers allow a dexterous hand to be trained to perform autonomous grasping with a relatively small set of expert demonstrations, and that the compliance of the physical structure allows for variance in the environment and object position to be compensated for by the physical properties of the hand.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Multi-Modality Imaging Cart for Barrett’s Esophagus</title>
<link href="https://hdl.handle.net/1721.1/163726" rel="alternate"/>
<author>
<name>Qu, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/163726</id>
<updated>2025-11-18T06:27:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Multi-Modality Imaging Cart for Barrett’s Esophagus
Qu, Ashley
Barrett’s Esophagus (BE) is a key precursor to esophageal adenocarcinoma (EAC), but current screening and risk assessment methods are ineffective and costly. Many BE cases remain undiagnosed due to asymptomatic patients, and existing risk algorithms rely on patient data rather than biomarkers. This work aims to start building a risk progression model by using a multi-modal imaging system combining autofluorescence spectroscopy, optical coherence tomography, and diffuse reflectance spectroscopy to perform label-free optical biopsies on ex-vivo tissue. These images will be co-registered and validated with histological biomarkers for BE. The ultimate goal is to develop a non-invasive endoscopic capsule and algorithm to better assess BE progression and enhance early detection of EAC.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complexity of Basis-Restricted Local Hamiltonians</title>
<link href="https://hdl.handle.net/1721.1/163725" rel="alternate"/>
<author>
<name>Ma, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/163725</id>
<updated>2025-11-18T06:27:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Complexity of Basis-Restricted Local Hamiltonians
Ma, Henry
A major goal of quantum complexity theory is to understand which computational problems can be solved with access to certain quantum resources. The subfield of Hamiltonian complexity specifically considers computational problems that ask about properties of local Hamiltonians, which are of critical importance in quantum complexity because they can be viewed as quantum generalizations of classical constraint satisfaction problems. In this work, we study the complexity of certain restricted variants of the Quantum-k-Sat problem, a quantum analog of the NP-complete k-Sat problem. We introduce new variants of Quantum-k-Sat which place a basis restriction on the input Hamiltonian H = Σᵢ hᵢ. Each variant is defined by a fixed collection of bases B₁, . . . , Bᵣ of n-qubit space. We require that each Hamiltonian term hᵢ be diagonal in one of these bases. Our results resolve the complexity of certain basis-restricted variants of Quantum-k-Sat. First, we show that the Quantum-6-Sat problem with Hamiltonian terms restricted to be diagonal in an X/Z mixed basis is QMA₁-complete. Second, we combine basis restriction with the restriction of commutativity, and show the following easiness result, which applies generally to higher-level quantum systems (qudits) and bases Q and R (which are real-valued and satisfy an overlap condition): the commuting Quantum-Sat problem on qudits, where Hamiltonian terms are either diagonal in the Q basis, the R basis, or a single mixed Q/R basis, is in NP.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future of Personalized, Aligned Language Models</title>
<link href="https://hdl.handle.net/1721.1/163724" rel="alternate"/>
<author>
<name>Han, Seungwook</name>
</author>
<id>https://hdl.handle.net/1721.1/163724</id>
<updated>2025-11-18T06:27:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Future of Personalized, Aligned Language Models
Han, Seungwook
Aligning Large Language Models (LLMs) to cater to different human preferences, learn new skills, and unlearn harmful behavior is an important problem. Search-based methods, such as Best-of-N or Monte-Carlo Tree Search, are effective, but impractical for LLM adaptation due to their high inference cost. On the other hand, using Reinforcement Learning (RL) for adaptation is computationally efficient, but performs worse due to the optimization challenges in co-training the value function and the policy. We present a new framework for reward optimization, Value Augmented Sampling (VAS), that can maximize different reward functions using data sampled from only the initial, frozen LLM. VAS solves for the optimal reward-maximizing policy without co-training the policy and the value function, making the optimization stable, outperforming established baselines, such as PPO and DPO, on standard benchmarks, and achieving comparable results to Best-of-128 with lower inference cost. Unlike existing RL methods that require changing the weights of the LLM, VAS does not require access to the weights of the pre-trained LLM. Thus, it can even adapt LLMs (e.g., ChatGPT), which are available only as APIs. In addition, our algorithm unlocks the new capability of composing several rewards and controlling the extent of each one during deployment time. By bringing together stability, flexibility, and efficiency, we explore the future of aligned, personalized language models that can be adapted seamlessly to meet a wide spectrum of human preferences.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock</title>
<link href="https://hdl.handle.net/1721.1/163723" rel="alternate"/>
<author>
<name>Ji, Yewon</name>
</author>
<id>https://hdl.handle.net/1721.1/163723</id>
<updated>2025-11-18T06:27:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock
Ji, Yewon
Seoul, South Korea, exhibits an exceptionally rapid residential demolition-reconstruction cycle of approximately 30–40 years, resulting in one of the world’s shortest apartment building lifespans. This entrenched status quo, fueled by post-war policies, real estate speculation, and finance models treating housing primarily as a short-term asset, contrasts sharply with other developed nations. This research critiques South Korea’s model of rapid demolition for its significant, often overlooked, environmental impacts and social costs. To evaluate alternatives, the methodology comprises three key stages: A) a comparative analysis of the financial frameworks and sustainability outcomes characterizing Western residential longevity versus the unique Korean housing model; B) the formulation of a novel alternative practice focused on adaptive reuse and retrofitting, specifically tailored to integrate within South Korea’s economic system and cultural context; and C) the practical demonstration and assessment of this practice through a design case study, incorporating strategies like phased interventions and low-carbon materials such as mass timber. The analysis reveals that this alternative extends building lifespan and achieves substantial carbon reductions by preserving the embodied carbon within existing structures. It offers long-term financial benefits, presenting a viable economic pathway that aligns key stakeholder interests through enduring value over speculative gains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach</title>
<link href="https://hdl.handle.net/1721.1/163722" rel="alternate"/>
<author>
<name>Noorbakhsh, Kimia</name>
</author>
<id>https://hdl.handle.net/1721.1/163722</id>
<updated>2025-11-18T06:27:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach
Noorbakhsh, Kimia
Assessing and enhancing human learning through question-answering is vital, especially when dealing with large documents, yet automating this process remains challenging. While large language models (LLMs) excel at summarization and answering queries, their ability to generate meaningful questions from lengthy texts remains underexplored. We propose Savaal, a scalable question-generation system with three objectives: (i) scalability, enabling question-generation from hundreds of pages of text, (ii) depth of understanding, producing questions beyond factual recall to test conceptual reasoning, and (iii) domain-independence, automatically generating questions across diverse knowledge areas. Instead of providing an LLM with large documents as context, Savaal improves results with a three-stage processing pipeline. Our evaluation with 76 human experts on 71 papers and PhD dissertations shows that Savaal generates questions that better test depth of understanding by 6.5× for dissertations and 1.5× for papers compared to a direct-prompting LLM baseline. Notably, as document length increases, Savaal’s advantages in higher question quality and lower cost become more pronounced.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ab initio modeling of superconducting nanowire single-photon detectors</title>
<link href="https://hdl.handle.net/1721.1/163720" rel="alternate"/>
<author>
<name>Simon, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163720</id>
<updated>2025-11-18T06:27:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ab initio modeling of superconducting nanowire single-photon detectors
Simon, Alejandro
Single-photon detectors are widely used in modern communication, sensing, and computing technology. Among these detectors, superconducting nanowire single-photon detectors (SNSPDs) possess the highest detection efficiencies, the shortest timing jitter, and the lowest dark count rates. However, for several applications, including those in the biological, astronomical, and quantum computation fields, there remains a desire to push the capabilities of modern detectors even further. To realize these improvements, it is necessary to develop an understanding of the physical mechanisms underpinning single-photon detection in these devices. However, current models are phenomenological, requiring experimental data for input, or can only recover qualitative agreement, severely limiting their predictive ability. In this thesis, we begin by describing the existing theoretical frameworks used to model superconducting materials and devices, both in equilibrium and nonequilibrium. We then illustrate an example of a phenomenological approach to modeling superconducting devices by developing an electrothermal model for the superconducting nanowire cryotron and demonstrating its efficacy in predicting the DC behavior and power dissipation of the device. Finally, we expand upon the current state-of-the-art SNSPD theory by utilizing recent advances in density functional theory to develop an ab initio model for the photon detection mechanism of SNSPDs. We then validate the predictions of our model with experimental data from the literature. The resulting model requires no experimental input, provides quantitative predictions of SNSPD performance, and can be extended to describe other superconducting devices, thus enabling the possibility of conducting a systematic search of materials for enhanced device performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock</title>
<link href="https://hdl.handle.net/1721.1/163718" rel="alternate"/>
<author>
<name>Velez, Gustavo A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163718</id>
<updated>2025-11-18T06:27:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock
Velez, Gustavo A.
Optical lattice clocks require careful preparation of atomic ensembles in order to ensure homogeneous interactions with the clock laser. We demonstrate loading and laser cooling of an ensemble of ytterbium-171 atoms in a 2D optical dipole trap created by an optical cavity. Our loading method ensures that all atoms are located in the intersection of two perpendicular dipole traps, as verified through absorption imaging. Raman sideband cooling was used to cool the atomic ensemble from 15.7 µK to 6.3 µK, as measured through optical sideband spectroscopy on the 578 nm clock transition. Together, these steps improved the transfer of atoms during a Rabi oscillation from the ground to the clock state from approximately 45 percent to 80 percent excitation fraction. The final atomic ensemble preparation is now sufficient for running an atomic clock.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving and Analyzing Model Merging Methods for Adaptation</title>
<link href="https://hdl.handle.net/1721.1/163717" rel="alternate"/>
<author>
<name>Pari, Jyothish</name>
</author>
<id>https://hdl.handle.net/1721.1/163717</id>
<updated>2025-11-18T06:27:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving and Analyzing Model Merging Methods for Adaptation
Pari, Jyothish
In this work, we explore the limitations of combining models by averaging intermediate features, referred to as model merging, and propose a new direction for achieving collective model intelligence through what we call compatible specialization. Current methods for model merging, such as parameter and feature averaging, struggle to effectively combine specialized models due to representational divergence during fine-tuning. As models specialize to their individual domains, their internal feature representations become increasingly incompatible, leading to poor performance when attempting to merge them for new tasks. We analyze this phenomenon using centered kernel alignment (CKA) and show that as models specialize, the similarity in their feature space structure diminishes, hindering their capacity for collective use. To address these challenges, we investigate routing-based merging strategies, which offer more flexible methods for combining specialized models by dynamically routing across different layers. This allows us to improve on existing methods by combining features from multiple layers rather than relying on fixed, layer-wise combinations. However, we find that these approaches still face limitations when layers within models are representationally incompatible. Our findings highlight the importance of designing new approaches for model merging that operate on well-defined input and output spaces, similar to how humans communicate through language rather than intermediate neural activations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications</title>
<link href="https://hdl.handle.net/1721.1/163716" rel="alternate"/>
<author>
<name>Pan, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/163716</id>
<updated>2025-11-18T06:27:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications
Pan, Eileen
LLMs already permeate medical settings, supporting patient messaging, medical scribing, and chatbots. While prior work has examined bias in medical LLMs, few studies focus on realistic use cases or analyze the source of the bias. To assess whether medical LLMs exhibit differential performance by gender, we audit their responses and investigate whether the disparities stem from implicit or explicit gender cues. We conduct a large-scale human evaluation of GPT-4 responses to medical questions, including counterfactual gender pairs for each question. Our findings reveal differential treatment based on the original patient gender. Specifically, responses for women more often recommend supportive resources, while those for men advise emergency care. Additionally, LLMs tend to downplay medical urgency for female patients and escalate it for male patients. Given rising interest in “LLM-as-a-judge” approaches, we also evaluate whether LLMs can serve as a proxy for human annotators in identifying disparities. We find that LLM-generated annotations diverge from human assessments in heterogeneous ways, particularly regarding error detection and relative urgency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications</title>
<link href="https://hdl.handle.net/1721.1/163715" rel="alternate"/>
<author>
<name>López Ángeles, Christian Emmanuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163715</id>
<updated>2025-11-18T06:27:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications
López Ángeles, Christian Emmanuel
Two-dimensional materials, such as graphene, hold promise for sensing applications. Graphene's remarkable surface-to-volume ratio, when employed as a transducer, enables the sensor channel to be readily modulated in response to chemical changes in proximity to its surface, effectively converting chemical signals into the electrical domain. However, their utilization has been constrained due to variations in device-to-device performance arising from synthesis and fabrication processes. To address this challenge, we employ Graphene Field Effect Transistors (GFETs) in developing a robust and multiplexed chemical sensing platform. This platform comprises a silicon chip with multiple arrays of sensing units distributed on its surface. This chip is coupled with custom-designed high-speed readout electronics for structural monitoring applications. For example, in harsh environmental conditions, structures constructed from reinforced concrete may experience degradation due to corrosion, a chemical process initiated by carbonation from atmospheric CO₂ and significant fluctuations in temperature and humidity. Under normal conditions, concrete maintains a pH level within the alkaline range of 13 to 14. However, when subjected to carbonation, its pH decreases to values between 8 and 9. Our platform excels in real-time pH monitoring. By conducting I-V sweep measurements in the sensor channel, we have established a correlation between [H⁺] concentration and the device transfer characteristics, i.e., the gate-source voltage (V_GS) at graphene's Dirac point, with an accuracy of roughly 97%. Additionally, we evaluate changes in graphene channel resistance induced by pH variations. This system and correlation allow for the prompt detection of any deviations induced by corrosion within a concrete environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards More Interpretable AI With Sparse Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/163714" rel="alternate"/>
<author>
<name>Engels, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/163714</id>
<updated>2025-11-18T06:26:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards More Interpretable AI With Sparse Autoencoders
Engels, Joshua
While large language models demonstrate remarkable capabilities across diverse domains, the specific representations and algorithms they learn remain largely unknown. The quest to understand these mechanisms holds dual significance: scientifically, it represents a fundamental inquiry into the principles underlying intelligence, while practically, and with growing urgency, it is vital for mitigating risks from these very same increasingly powerful systems. The initial section of this thesis tackles this challenge of interpreting internal language model representations (features) by employing sparse autoencoders (SAEs). An SAE decomposes neural network hidden states into a potentially more interpretable basis. In Chapter 2, we introduce an unsupervised, SAE-based methodology that successfully identifies inherently multi-dimensional features. Notably, we establish that language models causally represent concepts such as days of the week and months of the year using circular structures. This work provided the first definitive evidence of causal, multi-dimensional features, thereby refuting the one-dimensional linear representation hypothesis. Chapter 3 further assesses whether SAEs identify “true” atomic language model features. We compare the generalization performance and data efficiency of linear probes trained on SAE latents against those trained on the original hidden state basis. The negative outcomes of these experiments suggest limitations in SAEs for capturing the true ontology of language models. Motivated by the aforementioned limitations, the second part of this thesis investigates sparse autoencoders themselves, exploring potential improvements and characterizing their failure modes. 
Chapter 4 examines the portion of activations not reconstructed by SAEs, which we term “Dark Matter.” We find that a significant fraction of this dark matter is linearly predictable, and furthermore, that specific tokens poorly reconstructed by SAEs remain largely consistent across SAE sizes and sparsities. This suggests that SAEs may systematically fail to capture certain input subspaces, which we hypothesize to contain inherently dense features. Subsequently, Chapter 5 investigates a method to enhance SAE utility: freezing the learned SAE parameters and finetuning the surrounding language model components to minimize KL divergence with the original model’s output distribution. This technique results in a 30% to 55% decrease in the cross-entropy loss gap incurred by inserting the SAE into the model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/163713" rel="alternate"/>
<author>
<name>Lawson, Riley E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163713</id>
<updated>2025-11-18T06:27:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems
Lawson, Riley E.
In the analysis and operation of electric power systems, understanding the rates at which dynamic phenomena evolve is critical. Classically, power systems operate on multiple time scales, with slower mechanical dynamics from synchronous machines, faster electromechanical controls and protection, and very fast electrical dynamics from transmission networks. This time scale separation results in system modeling techniques which neglect certain component dynamics. However, in systems with significant penetration of power electronic devices and under fast time scale phenomena, the rates at which dynamics evolve become less separated, necessitating the modeling of all system dynamics. In large-scale systems, this becomes computationally challenging due to the high dimensionality of the interconnected system model. This work investigates the role transmission line dynamics play at very fast time scales in power systems. Theoretical results are presented to analyze which transmission line dynamics contribute significantly to power system dynamics, allowing for the intelligent incorporation of transmission line dynamics into computationally tractable models. For the first time, control co-design techniques are demonstrated algorithmically to design fast power electronics-enabled control to stabilize unstable dynamics in electric power systems. This technique allows the design of controls, in an iterative way, to create stable interconnected systems. Finally, the impact of transmission line modeling on the design of protection at fast time scales is analyzed. This work presents techniques to protect against short circuits in response to load disconnections, and introduces DC circuit breaker configurations to cause current commutation.
Today, power system operators possess the technology to implement fast control of dynamics; however, due to insufficient information on how to model and prepare for these dynamics, they instead rely on conventional, overly conservative control schemes. This work aims to bridge this gap by presenting methodologies for incorporating these dynamics into next-generation system models and for designing control and protection to mitigate the risks these fast dynamics pose.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundation Models for Protein Phenotype Prediction</title>
<link href="https://hdl.handle.net/1721.1/163712" rel="alternate"/>
<author>
<name>Calef, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/163712</id>
<updated>2025-11-18T06:27:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundation Models for Protein Phenotype Prediction
Calef, Robert
Understanding the roles of human proteins remains a major challenge, with approximately 20% of human proteins lacking known functions and more than 40% missing context-specific functional insights. Even well-annotated proteins are often poorly characterized in diverse biological contexts, disease states, and perturbations. We present ProCyon, a foundation model for modeling, generating, and predicting protein phenotypes across five interrelated knowledge domains: molecular functions, therapeutic mechanisms, disease associations, functional protein domains, and molecular interactions. To support this, we created ProCyon-Instruct, a dataset of 33 million protein phenotype instructions, representing a comprehensive resource for multiscale protein phenotypes. By co-training a large language model with multimodal molecular encoders, ProCyon integrates phenotypic and protein data. A novel architecture and instruction tuning strategy allow ProCyon to process arbitrarily interleaved protein-and-phenotype inputs, achieve zero-shot task transfer, and generate free-form text phenotypes interleaved with retrieved protein sequence, structure, and drug modalities in a single unified model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Functionalization of CNFET arrays for chemical sensing</title>
<link href="https://hdl.handle.net/1721.1/163711" rel="alternate"/>
<author>
<name>Song, Jaekang</name>
</author>
<id>https://hdl.handle.net/1721.1/163711</id>
<updated>2025-11-18T06:26:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Functionalization of CNFET arrays for chemical sensing
Song, Jaekang
Practical deployment of gas sensors for general-purpose applications requires integrated chips that operate at room temperature. However, real-world implementation has been limited by challenges such as the integration of highly sensitive and selective sensors, as well as insufficient statistical validation. In this work, we present an integrated gas sensor array comprising 2048 carbon nanotube field-effect transistors (CNFETs), functionalized with conductive metal-organic frameworks (cMOFs) and metal nanoparticles. Our functionalization approach enhances sensor responses by up to two orders of magnitude and enables on-chip pattern generation. Furthermore, the large number of redundant sensors allows for statistically significant measurements. The improved sensitivity is attributed to increased Schottky barrier modulation. We also demonstrate the chip’s capability to classify bacteria and yeast based on the gas mixtures emitted from cultures grown on agar plates. This work highlights the potential of integrated gas sensors as a practical, rapid, and cost-effective approach for general gas sensing applications, including biomedical applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier</title>
<link href="https://hdl.handle.net/1721.1/163709" rel="alternate"/>
<author>
<name>Wang, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163709</id>
<updated>2025-11-18T06:26:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier
Wang, Jennifer
Advancing error-corrected quantum computing and fundamental science necessitates quantum-limited amplifiers with near-ideal quantum efficiency and multiplexing capability. However, existing solutions achieve one at the expense of the other; for example, Josephson traveling wave parametric amplifiers (JTWPAs) are high-gain, broadband, and chip-based quantum amplifiers that conventionally incur a bandwidth-noise tradeoff. When operated at 20-dB gain and instantaneous bandwidths of a few GHz, JTWPAs typically reach near-quantum-limited intrinsic efficiencies of 70% to 85% relative to that of an ideal phase-preserving quantum amplifier. This is due to information leakage to the sidebands of the JTWPA, which can be recovered by adiabatically transforming the input modes to Floquet modes of the system within the device. In this thesis, we experimentally demonstrate the first Floquet-mode traveling-wave parametric amplifier (Floquet TWPA). Fabricated in a superconducting qubit process, this Floquet TWPA achieves minimal dissipation, quantum-limited noise performance, and broadband operation. Our device exhibits &gt;20-dB amplification over a 3-GHz instantaneous bandwidth, &lt;0.5-dB average in-band insertion loss, and the highest-reported intrinsic quantum efficiency for a TWPA of 92.1±7.6%, relative to an ideal phase-preserving amplifier. When measuring a superconducting qubit, our Floquet TWPA enables a system measurement efficiency of 65.1 ± 5.8%, the highest reported in a superconducting qubit readout experiment utilizing phase-preserving amplifiers, to the best of our knowledge. Finally, we discuss the noise limitations of our current experimental setup, as well as impedance matching strategies that will enable us to push towards ideal JTWPA performance. These general-purpose Floquet TWPAs are suitable for fast, high-fidelity multiplexed readout in large-scale quantum systems and future monolithic integration with quantum processors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Scalable Robot Learning without Physical Robots</title>
<link href="https://hdl.handle.net/1721.1/163708" rel="alternate"/>
<author>
<name>Park, Younghyo</name>
</author>
<id>https://hdl.handle.net/1721.1/163708</id>
<updated>2025-11-18T06:26:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Scalable Robot Learning without Physical Robots
Park, Younghyo
The development of generalist robots—capable of performing a wide range of tasks in diverse environments—requires large-scale datasets of robot interactions. Unlike language or vision domains, where data can be passively collected at scale, robotic data collection remains costly, labor-intensive, and constrained by physical hardware. This thesis explores two complementary directions to overcome this challenge. First, we examine the limitations of training robots from scratch using reinforcement learning (RL). While RL has achieved promising results in simulation, its scalability is hindered by a largely overlooked bottleneck: environment shaping. Designing suitable rewards, action and observation spaces, and task dynamics typically requires extensive human intervention. We formalize environment shaping as a critical optimization problem and introduce tools and benchmarks to study and eventually automate this process, a necessary step toward general-purpose RL. Second, we introduce an alternative paradigm for robot data collection that does not rely on real-world robots. Using the Apple Vision Pro, we develop DART, an augmented reality (AR) teleoperation platform that streams human hand motions to cloud-hosted robot simulations. This setup enables scalable, low-latency collection of high-quality robot demonstrations without the overhead of physical setup or maintenance. Our user studies show that DART more than doubles data collection throughput while reducing operator fatigue, and policies trained in simulation using this data successfully transfer to the real world. Together, these contributions address two key bottlenecks in robot learning: the human effort required for RL environment design, and the dependence on physical robots for data. They lay the groundwork for scalable, accessible approaches to training generalist robot models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications</title>
<link href="https://hdl.handle.net/1721.1/163707" rel="alternate"/>
<author>
<name>Golden, Courtney K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163707</id>
<updated>2025-11-18T06:27:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications
Golden, Courtney K.
Iterative sparse matrix computations lie at the heart of many scientific computing and graph analytics algorithms. On conventional systems, their irregular memory accesses and low arithmetic intensity create challenging memory bandwidth bottlenecks. To overcome such bottlenecks, distributed-SRAM architectures use tiled arrays of high-bandwidth local storage to achieve very high aggregate memory bandwidth. However, current distributed-SRAM architectures suffer from either poor programmability due to over-specialization or poor compute performance due to inefficient general-purpose hardware. This thesis proposes Quartz, a new architecture that uses short dataflow tasks and reconfigurable compute in a distributed-SRAM system to deliver both high performance and high programmability. Unlike traditional sparse CGRAs or on-die reconfigurable engines, Quartz allows reconfigurable compute to be highly utilized and scaled by (1) providing high memory bandwidth to each processing element and (2) introducing a task-level dataflow execution model that fits this new setting. Our execution model dynamically reconfigures tile hardware based on inter-tile messages to execute tasks on local data with fine-grained data partitioning across tiles. To make execution efficient, we explore novel data partitioning techniques that use graph and hypergraph partitioning to minimize network traffic and balance load. This is especially challenging for computations where one operand’s sparsity pattern (i.e., distribution of nonzeros) exhibits dynamic behavior across iterations, and we are the first to provide techniques to address this case. To ensure programmability, we show how a wide range of computations (expressed in an extended version of tensor algebra’s Einsum notation) and flexible data distributions can be systematically captured in small tasks for execution on Quartz.
We evaluate Quartz in simulation, using an 8-chiplet design with 2,048 tiles and 824 MB of SRAM per chiplet, running six different iterative sparse applications from scientific computing and graph analytics. Quartz’s architecture, data partitioning techniques, and programming model together achieve gmean 26.2× speedup over the prior state-of-the-art programmable distributed-SRAM architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials</title>
<link href="https://hdl.handle.net/1721.1/163706" rel="alternate"/>
<author>
<name>Gupta, Ayush Sagar</name>
</author>
<id>https://hdl.handle.net/1721.1/163706</id>
<updated>2025-11-18T06:27:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials
Gupta, Ayush Sagar
In the next several years and decades, the expanded use of artificial intelligence and edge computing will demand more powerful and energy-efficient electronics. Two-dimensional (2D) semiconductors, and in particular transition metal dichalcogenides (TMDs) such as molybdenum disulfide (MoS₂), are promising candidates for future field-effect transistors. TMDs can enable aggressive lateral and vertical device scaling, and they can add computing power density and new memory and sensing capabilities via 3D integration. However, several key challenges remain before 2D-channel transistors become commercially viable, including large contact resistances at the source and drain due to the van der Waals surface of 2D materials and the Fermi level pinning effect. A variety of methods have been explored to make ohmic contacts to MoS₂, the most promising of which so far is to use semimetals such as Bi and Sb; however, these materials suffer from thermal instability. This thesis addresses these challenges by (1) exploring the ultimate limit of contact metal workfunction scaling to better understand the metal-MoS₂ interface, and (2) introducing a new method of reducing contact resistance to 2D materials by inserting dipole layers at the contact interface. Initial work on ultralow-workfunction (ULWF) metal deposition on MoS₂ and subsequent device fabrication is presented, though further study is required to mitigate effects from deposition equipment and the reactive nature of these metals. In parallel, the Janus TMD MoSSe is explored as an example system for dipole contacts: extensive material characterization is performed, and the effect of a dipole layer on the contact properties of FETs is established. Together, these results are a significant step towards solving one of the major hurdles for the commercial introduction of 2D-channel transistors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Specialization of Vision Representations with Personalized Synthetic Data</title>
<link href="https://hdl.handle.net/1721.1/163705" rel="alternate"/>
<author>
<name>Chae, Nayoung (Julia)</name>
</author>
<id>https://hdl.handle.net/1721.1/163705</id>
<updated>2025-11-18T06:26:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Specialization of Vision Representations with Personalized Synthetic Data
Chae, Nayoung (Julia)
Modern vision models excel at general purpose downstream tasks. It is unclear, however, how they may be used for personalized vision tasks, which are both fine-grained and data-scarce. Recent works have successfully applied synthetic data to general-purpose representation learning, while advances in Text-to-Image (T2I) diffusion models have enabled the generation of personalized images from just a few real examples. Here, we explore a potential connection between these ideas, and formalize the challenge of using personalized synthetic data to learn personalized representations, which encode knowledge about an object of interest and may be flexibly applied to any downstream task relating to the target object. We introduce an evaluation suite for this challenge, including reformulations of two existing datasets and a novel dataset explicitly constructed for this purpose, and propose a contrastive learning approach that makes creative use of image generators. We show that our method improves personalized representation learning for diverse downstream tasks, from recognition to segmentation, and analyze characteristics of image generation approaches that are key to this gain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Microservice Design Parameters</title>
<link href="https://hdl.handle.net/1721.1/163704" rel="alternate"/>
<author>
<name>Chen, Qihang</name>
</author>
<id>https://hdl.handle.net/1721.1/163704</id>
<updated>2025-11-18T06:27:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Microservice Design Parameters
Chen, Qihang
Production-level cloud services are increasingly deployed as microservices. An important question is how, given the application logic, to design an effective microservice architecture. Existing studies have underscored the importance of microservice cohesiveness and coupling, using these metrics to drive automatic design optimizations. However, they have not accounted for the potential impact that such design changes may have on overall system performance, an impact confirmed by our case study. In this work, we present a system that can automatically identify microservice designs that are well-balanced across performance, coupling, and cohesiveness to meet cloud providers’ requirements. The system uses a multi-round dynamic programming approach: it selectively identifies promising design candidates, generates the corresponding microservice code, and measures and compares the results to determine the optimal design. The designs produced by our system typically achieve over 20% throughput improvement under the same QoS with less than a 10% increase in average LCOM, and often outperform the original benchmark architectures across all evaluated metrics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River</title>
<link href="https://hdl.handle.net/1721.1/163703" rel="alternate"/>
<author>
<name>Martínez Chapa, Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/163703</id>
<updated>2025-11-18T06:27:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River
Martínez Chapa, Daniela
Full of dichotomies, the Santa Catarina River is both dry and wet, present but forgotten, central yet disconnected, valued yet feared. How should an intermittent river in a dense urban context be regenerated? This thesis reimagines its ecological, hydrological, and public potential. Set in Monterrey, Mexico, this research addresses the urgent need to rethink water management in the face of the intensifying climate crisis through different urban systems and regeneration strategies within the river basin. Focusing on the Santa Catarina River, long dismissed as a plot, void, or threat, this work proposes how an intermittent river might be re-understood not as an absence of activities or function but as a space of seasonal abundance, ecological possibility, and urban interaction. Historically engineered for control, the river has been used as a flood channel and as a site for markets, sports complexes, transportation corridors, and more. However, rarely has it been seen, treated, or protected as a river. Through the development of a pilot zone, this research suggests a replicable framework of regenerative strategies to slow down, retain, and absorb water flows, supporting both dry and wet season dynamics. These include restoring riparian ecologies, reintroducing soft edges, enabling groundwater recharge, and designing permeable, public, and accessible urban interventions that reconnect the city with the riverbed. This thesis is not a fixed proposal but a living toolkit, an adaptable model to be tested, expanded, and reimagined in the pilot as time and nature take over. At stake is not only the river’s future but also the city’s capacity to shift from resistance to relation, becoming one with it, becoming a city in the river.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Banjiha Stories (2025)</title>
<link href="https://hdl.handle.net/1721.1/163702" rel="alternate"/>
<author>
<name>Park, Habin</name>
</author>
<id>https://hdl.handle.net/1721.1/163702</id>
<updated>2025-11-18T06:27:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Banjiha Stories (2025)
Park, Habin
Banjiha are everywhere in Seoul. You don’t always see them—tucked below eye level, half-hidden underground—but they’re there. First built as military bunkers after the Korean War, later turned into last-resort housing, banjiha have become symbols of urban failure—spaces of neglect, flooding disasters, a problem to be erased. Both media portrayals and policy responses have advocated for their disappearance. But does removal truly protect the people who call these spaces home? This thesis moves beyond the idea that banjiha are simply failures of the city. Through three homes, three lives, it traces how these spaces are shaped, not only by policies and architecture but by the people who inhabit them. A home vulnerable to flooding, where protections exist—but not with the greatest risk. A place worn by time, held together by quiet repairs. A financial foothold in a city where affordable housing is disappearing. A space of temporary sacrifice. A shelter to return to, again and again. This is not just a story of risk or resilience, neglect or demolition. It is a story of how people live; how they adapt, negotiate, and make do in spaces that were never designed with them in mind. Rather than asking how to erase banjiha, this thesis asks: What can we learn by noticing them? What would it mean to shift the conversation—from removal to recognition, from assumption to understanding? To see these homes is to recognize not just their constraints, but the small interventions that could reshape them: a door that opens both ways so no one is trapped, policies that hold upstairs owners accountable for leaks, materials layered to prevent mold rather than mask it. Not grand reinventions, but deliberate shifts—openings for a different way forward. But before deciding what must change, we must first learn to see.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Inference for Inference Time Scaling of Language Models</title>
<link href="https://hdl.handle.net/1721.1/163701" rel="alternate"/>
<author>
<name>Puri, Isha</name>
</author>
<id>https://hdl.handle.net/1721.1/163701</id>
<updated>2025-11-18T06:26:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probabilistic Inference for Inference Time Scaling of Language Models
Puri, Isha
Large language models (LLMs) have achieved significant performance gains via scaling up model sizes and/or data. However, recent evidence suggests diminishing returns from such approaches, motivating a pivot to scaling test-time compute. Existing deterministic inference-time scaling methods, usually with reward models, cast the task as a search problem, but suffer from a key limitation: early pruning. Due to inherently imperfect reward models, promising trajectories may be discarded prematurely, leading to suboptimal performance. We propose a novel inference-time scaling approach by adapting particle-based Monte Carlo methods. Our method maintains a diverse set of candidates and robustly balances exploration and exploitation. Our empirical evaluation demonstrates that our particle filtering methods achieve a 4–16x better scaling rate than deterministic search counterparts on both challenging mathematical and more general reasoning tasks. Using our approach, we show that Qwen2.5-Math-1.5B-Instruct surpasses GPT-4o accuracy in only 4 rollouts, while Qwen2.5-Math-7B-Instruct scales to o1-level accuracy in only 32 rollouts. Our work not only presents an effective method for inference-time scaling, but also connects the rich literature in probabilistic inference with inference-time scaling of LLMs to develop more robust algorithms in future work. Code, videos, and further information available at probabilistic-inference-scaling.github.io/
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Systematic Integration of Inverter-Based Resources in Electricity Markets</title>
<link href="https://hdl.handle.net/1721.1/163700" rel="alternate"/>
<author>
<name>Pierre, Jordina</name>
</author>
<id>https://hdl.handle.net/1721.1/163700</id>
<updated>2025-11-18T06:26:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward Systematic Integration of Inverter-Based Resources in Electricity Markets
Pierre, Jordina
This thesis introduces a multi-layer control architecture for inverter-based resources (IBRs), separating fast local feedback control from slower self-dispatch and system-level market coordination. Existing integration methods for IBRs limit their control flexibility and completely restrict their market participation potential. Two common practices include treatment of IBRs as negative loads and setting a fixed power factor during grid commissioning. Modeling IBRs as negative loads excludes them from dispatch coordination in electricity markets, significantly limiting their incentive to contribute to grid reliability and flexibility. Likewise, a fixed power factor prevents the IBR from providing voltage support through reactive power absorption/injection. With a fixed power factor, constant real and reactive power limits are imposed on the inverter, even during voltage transients, ignoring the fact that an inverter’s available capacity can vary significantly due to internal current constraints and the power provided by the renewable energy source. To address the need for reactive power adjustment in IBRs and pave the way for their active participation in electricity markets, this work presents a coordinated control approach that enables IBRs to transition into active, self-dispatching participants. The first layer is a hybrid PLL plus Q-V droop-based controller that governs millisecond-scale autonomous behavior, including low-voltage ride-through and real-time power adjustment based on voltage deviations at the point of common coupling and irradiance fluctuations from the renewable energy source, in this case solar. Given the implementation of the first layer and predicted irradiance, the second layer, to be implemented in future work, uses a model predictive controller to provide bid functions for both real and reactive power while keeping voltage at the point of common coupling within its limits.
Finally, the third layer performs centralized market clearing through a security-constrained optimization by the system operator. By advocating for self-dispatched, constraint-aware control, this thesis challenges the prevailing passive modeling paradigm and offers a structured, physics-informed alternative. It demonstrates how IBRs can evolve into reliable, market-integrated assets, enabling smarter renewable integration and a more resilient, cost-effective, and decarbonized grid.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximations to worst-case data dropping: unmasking failure modes</title>
<link href="https://hdl.handle.net/1721.1/163699" rel="alternate"/>
<author>
<name>Huang, Jenny Yijian</name>
</author>
<id>https://hdl.handle.net/1721.1/163699</id>
<updated>2025-11-18T06:28:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Approximations to worst-case data dropping: unmasking failure modes
Huang, Jenny Yijian
A data analyst might worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Checking this non-robustness directly poses a combinatorial optimization problem and is intractable even for simple models and moderate data sizes. Recently, various authors have proposed a diverse set of approximations to detect this non-robustness. In the present work, we show that, even in a setting as simple as ordinary least squares (OLS) linear regression, many of these approximations can fail to detect (true) non-robustness in realistic data arrangements. We focus on OLS due to its widespread use and because some approximations work only for OLS. Of the approximations that do not fail our tests, we find not only that a simple recursive greedy algorithm is the most conceptually straightforward but also that it can be orders of magnitude faster to run than the others.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics</title>
<link href="https://hdl.handle.net/1721.1/163698" rel="alternate"/>
<author>
<name>Darmawi-Iskandar, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163698</id>
<updated>2025-11-18T06:28:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics
Darmawi-Iskandar, Patrick
Rising global energy demands, driven by the advent of artificial intelligence (AI), cloud computing, and Internet of Things (IoT) devices, underscore the need for more efficient power electronics. In particular, power switches based on wide bandgap semiconductors such as gallium nitride (GaN) have emerged as promising alternatives to traditional silicon devices for low-voltage (10-100 V) applications. This work investigates the design, fabrication, and scaling of p-GaN-gate high-electron-mobility transistors (HEMTs). A p-GaN-gate epitaxial structure was developed with considerations for short channel effects. A self-aligned, gate-first process employing tungsten metallization was implemented to enable gate lengths as small as 100 nm. Device scaling was studied systematically, revealing the importance of gate aspect ratio and gate-to-drain spacing in managing short channel effects and maintaining breakdown voltage. Electrical characterization showed strong device performance, although contact resistance accounted for a substantial portion of total on-resistance. To address this, a modified fabrication approach incorporating regrown contacts was introduced, resulting in reduced contact resistance and improved overall device characteristics. The combined results highlight practical strategies for enhancing the performance and scalability of p-GaN-gate HEMTs for next-generation low-voltage power electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62</title>
<link href="https://hdl.handle.net/1721.1/163697" rel="alternate"/>
<author>
<name>Li, Tien Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/163697</id>
<updated>2025-11-18T06:28:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62
Li, Tien Yi
This thesis is a history of diary-writing in China from 1918 through 1961. Diaries are an increasingly popular but still inadequately understood primary source for historians of modern China. Previous scholars have suggested that, in the twentieth century, diary-writing became increasingly popular due to Japanese and Soviet influences, the increasing availability of manufactured blank diaries, and ruling governments that used diary-writing as a way of enforcing ideological conformity. This thesis traces an alternative history, starting from the popularization of published diaries in Shanghai in the long 1920s; to diaries’ emergence as a recognizable genre that could be discussed and theorized; to the moment the genre gained its reputation as a kind of self-expression par excellence; to its widespread inclusion into school curricula; to loosely connected attempts on the part of educators to delimit a normative way of diary-writing that, ironically, increasingly regimented self-expression. In doing so, this thesis contributes to the existing historiography by offering three correctives: I argue that 1) the initial proliferation of diaries was economically, not ideologically, motivated, 2) the popularization of diary-writing was not a concerted effort orchestrated by China’s political leaders but at best a loosely connected effort led by a middling class of educators, textbook writers, and intellectuals, and 3) diary-writing was regimented not only by communist ideology in the Maoist era but also by shifting moral principles and anxieties throughout the twentieth century. All in all, this thesis demonstrates the value of diaries for studying moral knowledge, epistemologies, and anxieties at the grassroots in midcentury China.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners</title>
<link href="https://hdl.handle.net/1721.1/163695" rel="alternate"/>
<author>
<name>Koo, Jaehyun</name>
</author>
<id>https://hdl.handle.net/1721.1/163695</id>
<updated>2025-11-18T06:27:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners
Koo, Jaehyun
This thesis contributes to the burgeoning field of batch-dynamic parallel algorithms by presenting parallel batch-dynamic graph algorithms for coreness decomposition and spanners, as well as a number of other related problems. The first class of problems we consider involves approximating coreness decomposition and several closely related concepts, such as (subgraph) density estimation, arboricity estimation, and low out-degree orientations. These are extremely useful structures for organizing graphs based on their density. Our algorithms process any batch of edge insertions and deletions in polylogarithmic depth while using work that is linear in the batch size (up to logarithmic factors), in the worst case. The second class of problems we consider concerns graph spanners. Over the past two to three decades, graph sparsifications that approximately preserve key graph properties have become essential tools in algorithm design. In particular, spanners, which reduce the number of edges while approximately preserving pairwise distances, have been widely studied. We present the first parallel batch-dynamic algorithms for computing and maintaining spanners. These algorithms achieve near-optimal amortized runtime, processing each batch in polylogarithmic depth with work nearly linear in the batch size, for any number of processors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163694" rel="alternate"/>
<author>
<name>Fey, Nolan</name>
</author>
<id>https://hdl.handle.net/1721.1/163694</id>
<updated>2025-11-18T06:27:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation
Fey, Nolan
Achieving athletic loco-manipulation on robots requires moving beyond traditional tracking rewards—which simply guide the robot along a reference trajectory—to task rewards that drive truly dynamic, goal-oriented behaviors. Commands such as “throw the ball as far as you can” or “lift the weight as quickly as possible” compel the robot to exhibit the agility and power inherent in athletic performance. However, training solely with task rewards introduces two major challenges: these rewards are prone to exploitation (reward hacking), and the exploration process can lack sufficient direction. To address these issues, we propose a two-stage training pipeline. First, we introduce the Unsupervised Actuator Net (UAN), which leverages real-world data to bridge the sim-to-real gap for complex actuation mechanisms without requiring access to torque sensing. UAN mitigates reward hacking by ensuring that the learned behaviors remain robust and transferable. Second, we use a pre-training and fine-tuning strategy that leverages reference trajectories as initial hints to guide exploration. With these innovations, our robot athlete learns to lift, throw, and drag with remarkable fidelity from simulation to reality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Wound Designates a Subject</title>
<link href="https://hdl.handle.net/1721.1/163693" rel="alternate"/>
<author>
<name>Lum, Luca E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163693</id>
<updated>2025-11-18T06:28:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Wound Designates a Subject
Lum, Luca E.
What haunts when haunting itself has been foreclosed? This thesis develops “ghostlessness” as a conceptual and aesthetic framework across my work in moving image, drawing, and writing. Ghostlessness refers to conditions that suppress haunting where it would otherwise emerge or be felt. Drawing from theoretical elaborations of hauntology, where the present is understood as structured by both suppressed pasts and unrealized futures, ghostlessness names the absence—or foreclosure—of that temporal disruption. It marks a contemporary condition in which systems oriented toward predictive governance and managed futurity preemptively neutralize rupture, sealing wounds before they can fester, reroute, or become sites of transformation. Through the works gathered here, I explore how ghostlessness functions not simply as absence but as affective and infrastructural suppression—rendering the spectral illegible, unaddressable, or unreal. Against this, my practice seeks to recapture the value of haunting in death-ridden, crisis-laden times where its presence is more prevalent than ever – hence its management, erasure, and suppression: ghostlessness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stylizing 3D Models With Generative AI for Fabrication</title>
<link href="https://hdl.handle.net/1721.1/163692" rel="alternate"/>
<author>
<name>Tejedor, Leandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163692</id>
<updated>2025-11-18T06:28:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stylizing 3D Models With Generative AI for Fabrication
Tejedor, Leandra
This thesis presents two novel approaches for modifying 3D models using generative AI for stylization while ensuring the resulting models preserve the properties required for fabrication. The first method, Style2Fab, separates functional and stylistic sections of 3D models to enable targeted modifications that preserve the model's intended functionality. By distinguishing between these sections, Style2Fab allows for alterations that maintain the model's functional purpose while providing flexibility in its aesthetic design. This approach ensures that the modified models retain their original functionality after stylistic changes.&#13;
&#13;
The second method, MechStyle, incorporates finite element analysis (FEA) into the generative modeling pipeline to maintain the structural integrity of the modified models. By analyzing changes in stress values during a simulated drop test at various stages of the stylization process, MechStyle restricts changes to those that preserve the model's structural viability. This ensures that the resulting models are both stylistically accurate to the user's desired results and structurally sound for 3D printing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Recovering Planted Subgraphs</title>
<link href="https://hdl.handle.net/1721.1/163691" rel="alternate"/>
<author>
<name>Rajaraman, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/163691</id>
<updated>2025-11-18T06:28:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Recovering Planted Subgraphs
Rajaraman, Amit
Given an arbitrary subgraph H = Hₙ and p = pₙ ∈ (0, 1), the planted subgraph model is defined as follows. A statistician observes the union of the “signal,” which is a random “planted” copy H* of H, together with random noise in the form of an instance of an Erdős–Rényi graph G(n, p). Their goal is to then recover the planted H* from the observed graph. Our focus in this work is to understand the minimum mean squared error (MMSE), defined in terms of recovering the edges of H*, as a function of p and H, for large n. A recent paper [MNS⁺23] characterizes the graphs for which the limiting (as n grows) MMSE curve undergoes a sharp phase transition from 0 to 1 as p increases, a behavior known as the all-or-nothing phenomenon, up to a mild density assumption on H. However, their techniques fail to describe the MMSE curves for graphs that do not display such a sharp phase transition. In this paper, we provide a formula for the limiting MMSE curve for any graph H = Hₙ, up to the same mild density assumption. This curve is expressed in terms of a variational formula over pairs of subgraphs of H, and is inspired by the celebrated subgraph expectation thresholds from probabilistic combinatorics [KK07]. Furthermore, we give a polynomial-time description of the optimizers of this variational problem. This allows one to efficiently approximately compute the MMSE curve for any dense graph H when n is large. The proof relies on a novel graph decomposition of H as well as a new minimax theorem which may be of independent interest. Our results generalize to the setting of minimax rates of recovering arbitrary monotone boolean properties planted in random noise, where the statistician observes the union of a planted minimal element A ⊆ [N] of a monotone property and a random Ber(p)^⊗N vector.
In this setting, we provide a variational formula inspired by the so-called “fractional” expectation threshold [Tal10], again describing the MMSE curve (in this case up to a multiplicative constant) for large n.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Routing in the CityMesh Decentralized Fallback&#13;
Wireless Network</title>
<link href="https://hdl.handle.net/1721.1/163690" rel="alternate"/>
<author>
<name>Liu, Ziqian</name>
</author>
<id>https://hdl.handle.net/1721.1/163690</id>
<updated>2025-11-18T06:28:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Routing in the CityMesh Decentralized Fallback&#13;
Wireless Network
Liu, Ziqian
As modern communication systems increasingly rely on centralized network infrastructure, they become more vulnerable to disruptions caused by disasters, failures, or cyberattacks. To address this risk, CityMesh proposes a decentralized fallback wireless network that leverages existing Wi-Fi devices, such as access points (APs), in buildings to maintain essential connectivity during outages. However, achieving scalable and reliable message delivery in such a network, without introducing excessive overhead, poses significant challenges. This thesis presents a new routing protocol for CityMesh, designed to operate efficiently at city scale. We first identify the limitations of traditional shortest-path source routing in CityMesh’s context, including the use of unreliable links and overhead from redundant transmissions. To address these issues, we introduce a safer path selection metric that prioritizes link reliability, a waypoint-based routing compression scheme, and a conduit mechanism to increase robustness to local failures. Our protocol further supports compact routing tables through a grid-based addressing scheme, enabling constant-size packet headers and scalable routing decisions. Additionally, we propose a suppression strategy to reduce unnecessary transmissions both between and within buildings. Finally, to reconnect disconnected network segments, we formulate a relay placement strategy based on map data and geometric heuristics: a practical algorithm that leverages convex hull optimization and reuses global map knowledge to ensure fast relay point computation in feasible locations such as roads and bridges. Simulations across 20 global cities show that our routing protocol achieves up to 2× higher packet delivery rates and reduces transmission overhead by up to 28× compared to GPSR under high packet loss and realistic localization error.
The routing table footprint sampled across 4 randomly selected cities shows on average under 2 KB memory usage per device. Our fast relay placement algorithm also demonstrates only a small number of relays are needed to achieve full network connectivity for most of the cities, which validates CityMesh’s core premise that existing urban Wi-Fi infrastructure is sufficient to support a robust, scalable decentralized fallback network with minimal augmentation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GPU-accelerated Inference for Discrete Probabilistic Programs</title>
<link href="https://hdl.handle.net/1721.1/163689" rel="alternate"/>
<author>
<name>Ghavami, Matin</name>
</author>
<id>https://hdl.handle.net/1721.1/163689</id>
<updated>2025-11-18T06:28:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GPU-accelerated Inference for Discrete Probabilistic Programs
Ghavami, Matin
This thesis presents a comprehensive approach to GPU-accelerated inference for discrete probabilistic programs. We make two key contributions: (1) a factor graph IR implemented in JAX that supports variable elimination and Gibbs sampling, and (2) a modeling DSL with a compiler that lowers programs to the factor graph IR. Our system enables significant performance optimizations through static analysis of the factor graph structure. Variable elimination is optimized by reduction to tensor contraction with optimized contraction paths, while Gibbs sampling is automatically parallelized through graph coloring techniques. Empirical evaluations on standard benchmarks demonstrate orders of magnitude performance improvements over existing systems, with the parallelized Gibbs sampler showing speed-ups of up to 144x on Bayesian networks and even greater improvements for models with regular graph topologies such as Ising models and hidden Markov models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures</title>
<link href="https://hdl.handle.net/1721.1/163688" rel="alternate"/>
<author>
<name>Hernandez-Cornejo, Mark A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163688</id>
<updated>2025-11-18T06:27:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures
Hernandez-Cornejo, Mark A.
This thesis is concerned with DIY "off-the-cloud" networks as socio-technical models that can reinscribe a community's organizational processes, identity, and culture. It questions how these networks can break away from corporate and extractive services of "the cloud" in order to achieve digital sovereignty as well as resist the hegemonic understanding of Western universal technology. Rather than grafting an outside network onto a community, how might the nodes of a network emerge from the cultural ontologies and local knowledge systems, creating a "vernacular cloud," with political, epistemic, and ontological implications? The social practice of what I call 'net/work' involves the facilitation of local digital territories that create a grassroots politics of "organic internets." In Chapter One, recent attempts to break from monopolized services like Google and Facebook are examined, providing insight into why these networks are formed and how they “de-link” from “the cloud.” Drawing from Walter Mignolo's understanding of "de-linking," the thesis argues that this process is a political project that is also epistemologically and economically non-western. Chapter Two examines the notion of 'community' in community networks through the lens of grassroots organizing such as mutual aid, delving into the care and maintenance required for system administration. Chapter Two builds on Geri Augusto's understanding of "re/trans" as a project that has developed new assemblages of knowledge and integrated them into different landscapes. It examines community networks from the Global South, where network nodes have the potential to be cosmo-ontological. Chapter Three provides examples of the principles outlined in Chapters One and Two from my work in pursuit of technical autonomy within an organization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing Buildings: Environmental Impact of Sensor Technologies&#13;
and Data Infrastructure in Buildings</title>
<link href="https://hdl.handle.net/1721.1/163687" rel="alternate"/>
<author>
<name>Lesina-Debiasi, Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/163687</id>
<updated>2025-11-18T06:27:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensing Buildings: Environmental Impact of Sensor Technologies&#13;
and Data Infrastructure in Buildings
Lesina-Debiasi, Simon
Building operations and the construction sector are among the largest contributors to global carbon emissions and energy consumption. While novel construction materials and insulation offer lower embodied-carbon solutions, improved heating and cooling devices offer cost- and energy-effective building services. Above all, “smart” devices promise remote control, oversight, and optimization of building operations. With the rising implementation of AI solutions in every sector, it is important to see digital devices as an interface to the material machinery they are connected to. The way we are introduced to these systems as solutions to environmental problems leaves out the operational and infrastructural costs of the devices. From the mining operations that source rare earth minerals, to the pumping of oil for polymer coatings, to the chemical baths that separate metal from ore, all the way to the hard drives in server rigs that are cooled with water and driven by electricity, the cloud is nothing but materiality and resources. Yet when building operations and construction techniques are evaluated for sustainability and environmental impact, connected services such as data networks and optimizations that rely on large server infrastructures and cloud computing are not part of the scope. This thesis reveals the missing components of energy evaluations of “smart” devices within the walls, floors, windows, doors, and roofs of our buildings, to create a framework through which building efficiency and sustainability can be reconsidered. Through historical research, literature reviews, and experiments, this work sheds light on the environmental impact of the data infrastructure to which our buildings are connected. The work presented in this thesis does not claim to be comprehensive nor to solve the problem of optimizing buildings for energy efficiency. 
Instead, the goal is to build upon existing and established research on data infrastructure, smart technology, and climate research, showing that, while current efforts may improve a building's efficiency on-site, the off-site energy consumption these systems entail must also be taken into account.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization strategies for North American urban landscapes:&#13;
Evaluating pavements and vegetation across design typologies</title>
<link href="https://hdl.handle.net/1721.1/163686" rel="alternate"/>
<author>
<name>Ramirez Cuebas, Adriana</name>
</author>
<id>https://hdl.handle.net/1721.1/163686</id>
<updated>2025-11-18T06:27:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization strategies for North American urban landscapes:&#13;
Evaluating pavements and vegetation across design typologies
Ramirez Cuebas, Adriana
Urban landscapes are increasingly recognized as critical to climate mitigation, yet remain underrepresented in carbon accounting frameworks relative to buildings and infrastructure. This thesis advances landscape carbon assessment by introducing a typology-based Life Cycle Assessment (LCA) framework for landscape architecture. &#13;
The framework integrates anthropogenic emissions and natural carbon dynamics while addressing uncertainty. It proceeds through three layers of analysis: 1) developing landscape system and project categories for carbon footprint benchmarking, 2) benchmarking the performance of the proposed landscape systems and urban typologies; and 3) assessing the mitigation potential of decarbonization strategies across systems and project types.&#13;
Concrete pavers on reinforced concrete slabs and asphalt pavements (78 to 104 kgCO₂e/m²) are the most carbon intensive in the production-to-construction stage. Turfgrass and shrubs show wide variability, functioning as sources or sinks depending on species mix, maintenance, and flux magnitudes, underscoring the need for species-specific, ecologically dynamic modeling (-21 to 42 kgCO₂e/m² and -35 to 258 kgCO₂e/m²). Canopy systems act as consistent carbon sinks (-611 to -388 kgCO₂e/m² over 50 years) despite significant emissions from transportation and structural soil.&#13;
Landscape systems were used to benchmark four urban typologies—streetscapes, plazas, courtyards, and urban parks. Their 50-year carbon footprints range from -80 to 21 kgCO₂e/m² in urban parks, -13 to 63 in courtyards, 22 to 79 in plazas, and 3 to 80 in streetscapes. Applying decarbonization strategies makes all typologies achieve net carbon sink status at the high bound. Urban parks achieve neutrality immediately post-construction, courtyards in 13 years, plazas in 26 years, and streetscapes by year 33. At higher emission estimates, urban parks and courtyards deepen carbon sink performance, plazas cross into net sink territory, and streetscapes approach neutrality. The detailed findings highlight the influence of planting density, maintenance regimes, and land cover composition.&#13;
By structuring assessment around land covers and urban typologies, this thesis delivers a transferable carbon accounting framework aligned with design practice, offering actionable insights for embedding climate accountability into landscape architecture and public policy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/163685" rel="alternate"/>
<author>
<name>Pahl, David</name>
</author>
<id>https://hdl.handle.net/1721.1/163685</id>
<updated>2025-11-18T06:27:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction
Pahl, David
This thesis investigates the simulation and design of the hardware architecture required for large‑scale quantum error correction (QEC). Specifically, we design microwave circuits for fast and high‑fidelity readout and devise a long‑range coupler (LRC) that spans five qubit lattice sites, suitable for low‑overhead quantum low‑density parity‑check (qLDPC) codes [1]. We present a prototypical nine‑qubit qLDPC code incorporating two long‑range couplers and optimized readout circuits, achieving state‑of‑the‑art readout fidelities of up to 99.63% in 56 ns and demonstrating strong, well‑targeted couplings mediated by the LRC. Our simulations employ an efficient microwave abstraction based on ABCD transfer matrices, modeling complete qubit devices as networks of circuit elements. We use this formalism to develop a closed‑loop optimization algorithm that determines optimal readout parameters in seconds. The ABCD framework also accurately captures the multi‑mode behavior of the LRC, offering a valuable tool for developing large‑scale, low‑overhead QEC devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost-Based Optimization for Semantic Operator Systems</title>
<link href="https://hdl.handle.net/1721.1/163684" rel="alternate"/>
<author>
<name>Russo, Matthew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163684</id>
<updated>2025-11-18T06:27:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Cost-Based Optimization for Semantic Operator Systems
Russo, Matthew D.
Recently, AI developers have turned to modular AI systems in order to achieve state-of-the-art performance on challenging benchmarks and industry problems. New programming frameworks have enabled developers to build these systems by composing them out of semantic operators—i.e., LLM-powered maps, filters, joins, aggregations, etc.—inspired by relational operators from data management systems. While these systems of semantic operators can achieve strong performance on benchmarks, they can be difficult to optimize. For example, an optimizer may need to determine which model, prompting strategy, and retrieval mechanism to use for each operator. Existing optimizers are limited in the number of optimizations they can apply, and most (if not all) cannot optimize system quality, cost, or latency subject to constraint(s) on the other dimensions. In this thesis, we build an extensible, cost-based optimizer called Abacus, which searches for the best implementation of a semantic operator system given a (possibly constrained) optimization objective. The optimizer estimates operator performance by leveraging a minimal set of training examples and, if available, prior beliefs about operator performance. We evaluate the optimizer on a range of workloads including biomedical multi-label classification (BioDEX), information extraction from legal contracts (CUAD), and multi-modal question answering (MMQA). We demonstrate that systems optimized by our work achieve 18.7%-39.2% better quality and up to 23.6x lower cost and 4.2x lower latency than the next best system.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games</title>
<link href="https://hdl.handle.net/1721.1/163683" rel="alternate"/>
<author>
<name>Pipis, Charilaos</name>
</author>
<id>https://hdl.handle.net/1721.1/163683</id>
<updated>2025-11-18T06:27:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games
Pipis, Charilaos
We propose efficient no-regret learning dynamics and ellipsoid-based methods for computing linear correlated equilibria—a relaxation of correlated equilibria and a strengthening of coarse correlated equilibria—in general convex games. These are games where the number of pure strategies is potentially exponential in the natural representation of the game, such as extensive-form games. Our work identifies linear correlated equilibria as the tightest known notion of equilibrium that is computable in polynomial time and is efficiently learnable for general convex games. Our results are enabled by a generalization of the seminal framework of Gordon et al. [2008] for Φ-regret minimization, providing extensions to this framework that can be used even when the set of deviations Φ is intractable to separate/optimize over. Our polynomial-time algorithms are similarly enabled by extending the Ellipsoid-Against-Hope approach of Papadimitriou and Roughgarden [2008] and its generalization to games of non-polynomial type proposed by Farina and Pipis [2024a]. We provide an extension to these approaches when we do not have access to the separation oracles required by these works for the dual player. This work will appear in STOC 2025, [Daskalakis et al., 2025].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Black-Box Classifiers by Implicitly Learning&#13;
Decision Trees</title>
<link href="https://hdl.handle.net/1721.1/163682" rel="alternate"/>
<author>
<name>Lange, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/163682</id>
<updated>2025-11-18T06:27:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explaining Black-Box Classifiers by Implicitly Learning&#13;
Decision Trees
Lange, Jane
We present algorithms for finding two types of objects that explain the classification of a black-box model f : {±1}^d → {±1} on an instance x ∈ {±1}^d. The first is a certificate: a small set of x’s features that in conjunction essentially determines f(x). The second is a counterfactual: a nearest instance x′ for which f(x′) ≠ f(x). We obtain both algorithms via a connection to the problem of implicitly learning decision trees. The implicit nature of this learning task allows for efficient algorithms even when the complexity of f necessitates an intractably large surrogate decision tree. We solve the implicit learning task by bringing together techniques from learning theory, local computation algorithms, and complexity theory. Our approach of “explaining by implicit learning” shares elements of two previously disparate methods for post-hoc explanations, global and local explanations, and we make the case that it enjoys advantages of both. Our certification algorithm runs in time poly(d, C(f)) and outputs a certificate of size poly(C(f)), where C(f) is the “average certificate complexity” of f. Our counterfactual algorithm runs in time S(f)^{O(∆f(x))} · log d, where S(f) is the sensitivity of f (a discrete analogue of the Lipschitz constant) and ∆f(x) is the distance from x to its nearest counterfactual. We further prove a lower bound of S(f)^{Ω(∆f(x))} + Ω(log d) for finding counterfactuals, thereby showing that the guarantees of our algorithm are essentially optimal.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analog On-chip Training and Inference with Non-volatile&#13;
Memory Devices</title>
<link href="https://hdl.handle.net/1721.1/163681" rel="alternate"/>
<author>
<name>Lee, Jungsoo</name>
</author>
<id>https://hdl.handle.net/1721.1/163681</id>
<updated>2025-11-18T06:27:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analog On-chip Training and Inference with Non-volatile Memory Devices
Lee, Jungsoo
As the demand for computation in neural networks continues to rise, conventional computing resources are increasingly constrained by their limited energy efficiency. One promising solution to this challenge is analog in-memory computing (AIMC), which enables efficient matrix-vector multiplications by encoding synaptic weights into the conductance of nonvolatile memory devices. These devices are structured into crossbar arrays. To explore the potential of non-volatile memory devices in AIMC, I simulate crossbar array operations using IBM’s AIHWKIT. With this tool, I investigate the implementation of various analog computing algorithms, including TikiTaka. AIMC is evaluated for simple MNIST classification tasks and for more complex deep learning models such as Long Short-Term Memory (LSTM) networks. I demonstrate that devices can be categorized based on their asymmetry and non-linear weight modulation behavior. Performance improvements through the TikiTaka algorithm are observed only when the device provides a sufficient converge-dragging force; otherwise, the algorithm may even degrade performance. I also investigate how pulse-to-pulse noise and device-to-device variability affect system performance, as well as how different peripheral circuit configurations influence the overall behavior. Finally, I propose an Analog Low-Rank Adapter (Analog LoRA) by applying analog computing to the fine-tuning of large language models. I explore the necessary conditions for Analog LoRA to achieve performance comparable to its digital counterpart. Based on these findings, I present design guidelines for effectively applying analog computing to various machine learning tasks on edge devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides</title>
<link href="https://hdl.handle.net/1721.1/163680" rel="alternate"/>
<author>
<name>Jiao, Yixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163680</id>
<updated>2025-11-18T06:26:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides
Jiao, Yixuan
Two-dimensional transition metal dichalcogenides (TMDs) such as monolayer MoS₂ offer great promise for next-generation nanoelectronics due to their atomic thickness, tunable bandgaps, and excellent electrostatic control. However, industrial semiconductor manufacturing demands CMOS-compatible, wafer-scale growth, yet conventional CVD methods often exceed thermal budgets and introduce contaminants, and achieving uniform, defect-free monolayers remains difficult. This thesis presents an in-depth discussion of low-temperature MOCVD system design and optimization methodology for uniform monolayer TMD synthesis. We investigate the effect of alkali halide promoters (e.g., NaCl) and novel alkali-free promoters (e.g., NH₄Cl and crystal violet) on the synthesis of monolayer MoS₂. By optimizing the NaCl-promoted route, we achieve coalesced monolayer MoS₂ films with enlarged grain domains and demonstrate field-effect transistors with improved mobility. In parallel, we develop a CMOS-compatible crystal violet seeding method that avoids alkali metal contaminants and yields uniform monolayer coverage. To support process development, a rapid characterization pipeline was introduced: optical/SEM imaging combined with machine learning to quickly map thickness and grain size and infer electronic quality across the wafer. These contributions collectively advance the integration of 2D TMD materials into CMOS fabrication, enabling monolithic 3D integration in future electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating the Search for Artificial Life with Foundation Models</title>
<link href="https://hdl.handle.net/1721.1/163679" rel="alternate"/>
<author>
<name>Kumar, Akarsh</name>
</author>
<id>https://hdl.handle.net/1721.1/163679</id>
<updated>2025-11-18T06:27:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automating the Search for Artificial Life with Foundation Models
Kumar, Akarsh
With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. Artificial Life (ALife) has not yet integrated FMs, thus presenting a major opportunity for the field to alleviate the historical burden of relying chiefly on manual design and trial-and-error to discover the configurations of lifelike simulations. This paper presents, for the first time, a successful realization of this opportunity using vision-language FMs. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) discovers simulations that generate temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse simulations. Because of the generality of FMs, ASAL works effectively across a diverse range of ALife substrates including Boids, Particle Life, Game of Life, Lenia, and Neural Cellular Automata. A major result highlighting the potential of this technique is the discovery of previously unseen Lenia and Boids lifeforms, as well as cellular automata that are open-ended like Conway’s Game of Life. Additionally, the use of FMs allows for the quantification of previously qualitative phenomena in a human-aligned way. This new paradigm promises to accelerate ALife research beyond what is possible through human ingenuity alone.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-aware Joint Physical Tracking and Prediction</title>
<link href="https://hdl.handle.net/1721.1/163678" rel="alternate"/>
<author>
<name>Dasgupta, Arijit</name>
</author>
<id>https://hdl.handle.net/1721.1/163678</id>
<updated>2025-11-18T06:26:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Uncertainty-aware Joint Physical Tracking and Prediction
Dasgupta, Arijit
Humans possess a remarkable capacity to track and predict the motion of objects even when visual information is temporarily absent. This thesis investigates how missing sensory evidence—such as during occlusion—alters current and future beliefs about object motion, and introduces an uncertainty-aware framework to model this process. A behavioral experiment was conducted in which participants continuously predicted the future destination of a ball moving in 2.5D environments with occlusion. Results demonstrate that participants dynamically updated their predictions throughout occlusion, exhibiting adaptive belief revision and physically grounded reasoning. To model this behavior, a structured Bayesian modeling and inference approach for joint tracking and prediction was developed that integrates perception, state estimation, and future prediction in a unified process. The approach, implemented via a Sequential Monte Carlo algorithm embedded within a GPU-accelerated and parallel probabilistic programming system, maintains time-varying beliefs over both present and future object states, conditioned on observed images. These belief states are explicitly represented in symbolic form, enabling interpretable, frame-by-frame introspection of uncertainty and prediction over time. When compared against human responses, the model closely matched the temporal evolution of time-aligned decisions and outperformed plausible alternative hypotheses that failed to reason during occlusion. These findings affirm that the absence of changing visual evidence does not engender a void in physical reasoning, but is evidence in itself—processed and revised through structured, probabilistic inference. 
By integrating probabilistic programming with human behavioral data through structured Bayesian modeling and inference, this thesis advances a computational account of intuitive physical reasoning and provides a foundation for building interpretable, uncertainty-aware AI systems that mirror human-like physical intelligence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward an Age-Ready Suburbia</title>
<link href="https://hdl.handle.net/1721.1/163677" rel="alternate"/>
<author>
<name>Du, Minghao</name>
</author>
<author>
<name>Zhuang, Kaicheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163677</id>
<updated>2025-11-18T06:27:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward an Age-Ready Suburbia
Du, Minghao; Zhuang, Kaicheng
As America’s population ages, suburban neighborhoods face urgent challenges. Originally designed for young, car-dependent families, the suburban landscape today often presents barriers to aging in place, including poor walkability, inaccessible housing, and limited access to essential services and care. This thesis investigates these challenges and proposes a strategy for reimagining suburban environments through demographic analysis, spatial mapping, persona-driven research, architectural prototyping, and community planning. It traces the historical evolution of suburbia, critically evaluates existing senior housing typologies, and advances new frameworks for retrofitting residential neighborhoods to better support aging populations. Focusing on Sacramento, California, the research identifies high-priority areas where aging, affordability challenges, and mobility barriers intersect. Grounded by a pilot care home project, the study demonstrates how modest interventions, such as retrofitting single-family homes into small-scale residential care environments, can enhance both livability and care access. The first phase of the pilot project has been constructed, offering a demonstration of the proposed model’s feasibility. A phased development and financial strategy are also outlined to ensure broader applicability. While rooted in Sacramento, the thesis offers a framework relevant to many suburban contexts across the United States, particularly naturally occurring retirement communities (NORCs) where older adults are aging in place. Rather than creating isolated senior enclaves, the work promotes a distributed, community-integrated model that strengthens neighborhood resilience and supports intergenerational living. By combining design innovation with policy awareness and development feasibility, the thesis presents a scalable and adaptable approach to reshaping suburbs for an aging society.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calibration and Control of Superconducting Qubits for Low‑Overhead Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/163676" rel="alternate"/>
<author>
<name>Pahl, Lukas</name>
</author>
<id>https://hdl.handle.net/1721.1/163676</id>
<updated>2025-11-18T06:27:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Calibration and Control of Superconducting Qubits for Low‑Overhead Quantum Error Correction
Pahl, Lukas
The ability to coherently and reliably manipulate quantum information marks a fundamental technological leap—realizable through a universal, fault‑tolerant quantum computer. Achieving this goal requires progress across all layers of the quantum computing stack, from physical qubits to theoretical algorithms. In this work, we address multiple layers of this stack. We develop a software architecture for scalable device calibration using modular calibration graphs. We introduce real‑time frequency stabilization techniques, demonstrating improved single‑qubit gate fidelities and progress toward multiqubit feedback. Finally, we explore how quantum error correction overhead can be reduced using low‑density parity‑check codes. We present logical protocols for a non‑local nine‑qubit code, which significantly outperforms comparable surface code implementations in both qubit efficiency and computational capability. These results represent practical steps toward overcoming key challenges in fault‑tolerant quantum computing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ModelDiff: A Framework for Comparing Learning Algorithms</title>
<link href="https://hdl.handle.net/1721.1/163675" rel="alternate"/>
<author>
<name>Shah, Harshay</name>
</author>
<id>https://hdl.handle.net/1721.1/163675</id>
<updated>2025-11-18T06:27:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">ModelDiff: A Framework for Comparing Learning Algorithms
Shah, Harshay
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations, i.e., input transformations that change the predictions of models trained with one learning algorithm but not the other. We then present ModelDiff, a method that leverages the datamodels framework (Ilyas et al., 2022) to compare learning algorithms based on how they use their training data. We demonstrate ModelDiff through three case studies, comparing models trained with/without data augmentation, with/without pre-training, and with different SGD hyperparameters. Our code is available at https://github.com/MadryLab/modeldiff.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sanctuary for Who?</title>
<link href="https://hdl.handle.net/1721.1/163581" rel="alternate"/>
<author>
<name>Salazar, Juan</name>
</author>
<id>https://hdl.handle.net/1721.1/163581</id>
<updated>2025-11-06T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sanctuary for Who?
Salazar, Juan
Philadelphia, often recognized as the poorest major city in the United States, became a Sanctuary City in 2014. The designation committed the region to policies limiting cooperation with federal law enforcement in the persecution of undocumented communities. Policies have ranged from refusing to detain individuals without judicial warrants to prohibiting Immigration and Customs Enforcement (ICE) from accessing municipal databases or facilities for detention purposes. At the community level, the notion of the Sanctuary City sought to promote organizing against unlawful persecution of residents. Over the past eleven years, however, the framework of protection it promised has faltered under mounting federal pressure. The Sanctuary City's symbolic authority and limited scope have failed to shield residents from persecution or restrict ICE's intensifying operations within the area. In 2019, Juntos, the city's foremost immigrant advocacy organization, criticized Philadelphia's Sanctuary status as inadequate. Citing the ongoing persecution of its communities and the declining quality of life for all residents, the organization urged the city to abandon the term "Sanctuary." It petitioned the city to focus instead on meaningfully protecting all residents of Philadelphia, stating, "Let us instead work together to build the kind of city we all want to live in." Juntos's critique forms the basis of this thesis, using it as an invitation to reimagine the Sanctuary City as a shift from a policy framework toward a general ethic and design sensibility. This thesis proposes that Philadelphia's crux, like all cities, lies in its ability to sustain communities' pursuit of a dignified life. As a primary agent in the formation of cities, the architect must then make this struggle their own and deploy the tools of their discipline to protect life and inspire dignity. 
By framing Philadelphia as a city shaped by deindustrialization, disinvestment, and policing, the thesis explores how architecture can respond to these forces by reviving the city's industrial character and establishing new boundaries able to safeguard community rights. Integrating legal, spatial, and semantic insights from federal authorities' rules of engagement will provide novel typologies and programs for the city that address its systemic inequities while fostering environments where life and dignity can flourish. By inscribing meaningful boundaries, and re-equipping the city to make for itself, the thesis suggests architecture becomes a tool for collective protection and urban regeneration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts</title>
<link href="https://hdl.handle.net/1721.1/163580" rel="alternate"/>
<author>
<name>Hirt, Natasha K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163580</id>
<updated>2025-11-06T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts
Hirt, Natasha K.
To meet the needs of growing populations, rates of new construction are increasing at a record pace worldwide. The built environment, already one of the single largest contributors to global CO₂e emissions, will become a significant environmental challenge in the coming decades. To mitigate the anticipated environmental impact of future construction, we need to rethink how we build.&#13;
&#13;
One strategy, which is the subject of this work, is improving the material efficiency of flexural systems like floors. Floors are among the most materially wasteful structural components in buildings, and while decades of research have explored optimal floor system design, the complexity of proposed solutions has limited their practical implementation. Furthermore, the industrial tools available to structural designers do not lend themselves to flexible experimentation or large-scale analysis. As a result, most flexural systems today rely on approximations and rules of thumb rather than mathematically optimal designs, data-driven decision making, or iterative design processes.&#13;
&#13;
This thesis bridges the gap between practical engineering, material efficiency, and design freedom. It presents novel, code-compliant tools for the computational analysis and optimization of flat slabs supported by a network, or grillage, of beams, using a model system of reinforced concrete supported by steel W-sections. The method is used to perform a large-scale analysis of 24,192 unique combinations of beam topologies and assembly design decisions. The results of this analysis find improvements in structural embodied carbon of up to 53.4% over the business-as-usual design case, and also yield generalizable takeaways about the key factors influencing material efficiency in floor slabs. &#13;
&#13;
One of the advantages of the method is its flexibility in taking on a range of complex design challenges. These are presented as extensions to the method, and include designing with a constrained inventory for a series of real-world case studies, and automatically deriving novel structural geometries from dense ground structures.&#13;
&#13;
The method and results shown in this thesis expand the range of analysis tools that engineers have access to, enabling a wide range of creative designs and explicitly linking design decisions to environmental impact.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inscrutability: An Epistemological Experiment</title>
<link href="https://hdl.handle.net/1721.1/163579" rel="alternate"/>
<author>
<name>Huang, Brian Hudson</name>
</author>
<id>https://hdl.handle.net/1721.1/163579</id>
<updated>2025-11-06T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inscrutability: An Epistemological Experiment
Huang, Brian Hudson
Through four different projects, this thesis explores the idea of dimensions of representation, a concept introduced by the 20th-century French philosopher Michel Foucault in his book The Order of Things. Foucault argues that the Classical episteme, which he defines as the discourse surrounding knowledge-making that lasted from the 17th century to the 19th century, was determined by the idea of dimensions of representation. This idea holds that during the Classical episteme, knowledge was formulated through representations of the external world, such as systems of classification, ordering, and relations, rather than through resemblance. The first project, Holes in the Sieve (2023), addresses the problematics of classification through an infamous case in the history of paleoanthropology: the Piltdown Man. The second project, Contrapposto in Space (2024), addresses how representation has been instrumentalized in technoscience through space research. Finally, the last two projects, the Poem Box (2024) and Micropoetry (2025), posit a way forward at the limits of representation by engaging with semiotic theory. By engaging with language games, poetry opens up the possibility to deny the position of being knowable, allowing one to disappear into inscrutability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan</title>
<link href="https://hdl.handle.net/1721.1/163578" rel="alternate"/>
<author>
<name>El Haq, Haidar</name>
</author>
<id>https://hdl.handle.net/1721.1/163578</id>
<updated>2025-11-06T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan
El Haq, Haidar
Throughout Indonesia’s colonial and postcolonial histories, the peatlands of Kalimantan have been not only politically contested spaces but also sites of ontological struggle. From transmigrasi programs to Suharto’s Mega-Rice Project and most notably today’s carbon offset regimes, peat has been transformed into a paradoxical ecology: degraded yet investible, conserved yet profitable. These transformations enclose land, force communities to choose between extraction or restoration, criminalize fire, and abandon regenerative forms of cultivation. These are histories of ontological occupation institutionalized: the marginalization of both peat’s inhabitants and the soil itself as world-making agents, shaped by speculative regimes of governance, rooted in planetary imaginaries of climate salvation and fantasies of productivity. This thesis proposes Koalisi Lahan–Gambut (Peat–Land Coalition), a speculative parainstitution that explores how coalitional spatial practices might reclaim inhabitation in peat ecologies. Situated in a Ngaju village within the buffer zone of one of the world’s largest carbon offset territories—between deep peat and riverine edges, between restoration enclosures and plantation areas—the coalition works through the murkiness of peat, the heterogeneity of its inhabitants, and the crowded terrain of overlapping institutional claims. It foregrounds the frictions between gambut (peat) and lahan (land). Structured across three inquiries, the document presents a Living Glossary that assembles field terms and relational epistemologies drawn from Kalimantan’s peatlands; a genealogy of Governance, Carbon Fix, and Buffer Zone that traces the historical and institutional processes that rendered peatlands governable; and Landing in the Buffer Zone, which turns to the coalition’s situated experiments in becoming-with, inhabiting, and reclaiming the space between peat and land.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine</title>
<link href="https://hdl.handle.net/1721.1/163576" rel="alternate"/>
<author>
<name>Tamburro, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163576</id>
<updated>2025-11-06T03:07:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine
Tamburro, Alexandra
Reducing lubricating oil consumption (LOC) in reciprocating engines is an increasingly important objective in the pursuit of lower greenhouse gas emissions, longer maintenance intervals, and compliance with tightening environmental regulations. In 2022, the U.S. transportation sector alone was responsible for 29% of national greenhouse gas emissions, 87% of which originated from systems powered by reciprocating engines [1]. While significant progress has been made in fuel efficiency, oil consumption remains a key contributor to carbon emissions. This research investigates the impact of design parameters in three-piece oil control rings (TPOCRs) and liner surface finish on oil consumption behavior.&#13;
&#13;
Utilizing a hydrogen-fueled engine—where the only source of CO₂ emissions is from consumed lubricating oil—this study develops a high-fidelity, FTIR-based method for direct LOC measurement. A derivation of oil consumption based on air and fuel mass flow rates and measured CO₂ emissions is presented, alongside a sensitivity analysis which identified FTIR measurement uncertainty and ambient CO₂ variation as dominant error sources. All experiments were conducted at 2000 RPM under medium load (4 bar IMEP). The experimental results showed that under the tested condition, 1) increasing liner roughness increases the LOC and 2) changing the orientation of any rails with asymmetrical profile to favor up-scraping results in an elevation of LOC.  Analyses applying liner vaporization and TPOCR models showed that the changes in liner oil film thickness brought by the TPOCR changes have negligible effect on the LOC from the oil evaporation.  Increases in upper-rail up-scraping ability and the oil accumulation inside the TPOCR groove can both elevate the LOC although further investigation is needed to understand the oil transport paths leading to the LOC.&#13;
&#13;
This work provides a foundation for future optimization of TPOCR design by highlighting key ring-liner interactions and oil transport mechanisms. Further study of asymmetric geometries and surface characteristics will provide further insights for reducing oil consumption in engine platforms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape</title>
<link href="https://hdl.handle.net/1721.1/163575" rel="alternate"/>
<author>
<name>Bhupathi, Hari Raghavendran</name>
</author>
<id>https://hdl.handle.net/1721.1/163575</id>
<updated>2025-11-06T03:07:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape
Bhupathi, Hari Raghavendran
In 2021, the United States committed to achieving net-zero greenhouse gas emissions by 2050, requiring a fundamental transformation of its energy infrastructure. This thesis develops a nationwide optimization model to minimize capital expenditures and understand the trade-off between renewable capacity, storage, and transmission networks. The results show that the least-cost configuration, achieved when nuclear and battery capital costs fall by 50%, requires approximately $3.25 trillion in new investment - a 37% reduction relative to the baseline scenario. Comparative scenario analysis reveals a marked shift toward centralized storage when nuclear costs decline, which improves reliability and reduces contingency requirements - mirroring inventory pooling dynamics in supply chains. Concurrently, wind capacity additions fall sharply, with each 10% reduction in nuclear cost halving the predicted wind capacity addition. Transmission infrastructure evolves accordingly: 765 kV lines decline as nuclear becomes more decentralized, while 230 kV lines expand modestly to manage increased intermittency. By quantifying trade-offs across technologies and identifying system tipping points, this work offers a framework for policymakers and long-horizon investors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163574" rel="alternate"/>
<author>
<name>Shah, Sharmi</name>
</author>
<id>https://hdl.handle.net/1721.1/163574</id>
<updated>2025-11-06T03:06:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation
Shah, Sharmi
Reliable tactile feedback is essential for robotic systems to interact effectively with their environments, especially in dynamic manipulation tasks where detecting contact onset, direction, and force is critical for control and planning. This thesis advances the development of barometer-based tactile sensors for low-force interactions, building upon prior work from the Biomimetic Robotics Lab. Previous work demonstrated that neural networks could infer contact location and three-axis contact force from barometers embedded within an elastomer. However, these models did not account for the viscoelastic behavior of the elastomer, which degrades sensor repeatability and bandwidth. To address these limitations, this thesis introduces a recurrent neural network (RNN) architecture that captures viscoelastic transients in the sensor response. The proposed methods are evaluated on two sensor geometries: a spherical sensor and a slimmer ellipsoid variant. An automated data collection pipeline is developed to generate temporally-continuous, uniformly sampled datasets across the sensor surface. RNN models trained on this data show that temporal modeling improves force prediction accuracy across both designs. To improve angle prediction accuracy, a binning strategy is used to enforce a uniform prior over contact orientations. The resulting "Binned RNN" neural networks are small-scale and demonstrate high sensitivity, enabling responsive tactile feedback. The utility of these tactile sensors is demonstrated by integrating the sensors onto a dexterous two-finger gripper and performing light grasping and estimation of object reorientation using solely tactile measurements. This work shows that accounting for viscoelastic effects through informed sampling and temporal modeling enhances the practical performance of elastomer-based tactile sensors in robotic systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can diffusion models capture extreme event statistics?</title>
<link href="https://hdl.handle.net/1721.1/163573" rel="alternate"/>
<author>
<name>Stamatelopoulos, Stamatios</name>
</author>
<id>https://hdl.handle.net/1721.1/163573</id>
<updated>2025-11-06T03:07:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Can diffusion models capture extreme event statistics?
Stamatelopoulos, Stamatios
For many important problems it is essential to be able to accurately quantify the statistics of extremes for specific quantities of interest, such as extreme atmospheric weather events or ocean-related quantities. While there are many classical approaches to perform such modeling tasks, recent interest has been increasing in the usage of generative models trained on available data. Despite the sporadic success of such methods, it is not clear for what systems or datasets a system-agnostic generative AI tool is capable of generating previously ‘unseen’ extreme events in a manner that accurately extrapolates the tails for the observable of interest. Here, we propose an a priori criterion which, based on the geometry of the training dataset, can predict whether a generative AI tool will be able to extrapolate the tails, i.e., generate previously unseen extreme events. The idea is to quantify whether existing extreme events lie in the interior of the dataset or on its boundary. In the former case, it is shown that generative AI tools can work in an ‘interpolation’ mode and generate new extreme events. On the other hand, if the topology of the dataset is such that extremes live on the boundary of the domain, then the generative AI algorithm needs to operate in an extrapolation mode, which does not lead to accurate results. We illustrate our findings on a specific class of Diffusion Models (DMs) called Denoising Diffusion Probabilistic Models (DDPMs) and test on three datasets: a simple on-hyperball dataset following a Weibull distribution for the radii of the data points, of dimensionality 2 × 10³; a dataset sampled from the so-called Majda-McLaughlin-Tabak Wave Model (MMT), of dimensionality 8.1 × 10³; and a dataset consisting of Lagrangian turbulence trajectories, of dimensionality 2 × 10³.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings</title>
<link href="https://hdl.handle.net/1721.1/163572" rel="alternate"/>
<author>
<name>Ajienka, Soala Lolia</name>
</author>
<id>https://hdl.handle.net/1721.1/163572</id>
<updated>2025-11-06T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings
Ajienka, Soala Lolia
This thesis proposes the weaving together of two lost traditions - the practice of primary glassmaking in southern Nigeria and the U-shaped bungalow typology of multi-family housing - as a means to address both the qualitative and quantitative housing deficits in Port Harcourt and to support the broader requisites of macroeconomic productivity in Nigeria. The thesis frames the argument that the materiality and application of glass can reconnect the inhabitation and construction of Face Me, I Face You (FMIFY) housing to Nigerian history, culture, and identity. By charting a blueprint for localized material production and engaging questions of affordability, cost structure, and financing, this work positions design as a technical solution and an act of cultural authorship. As an architect, builder, and member of the community, I advocate for a new practice in which the bond between local craftsmanship and housing development is re-established - through material choices, construction systems, economic benchmarking, and spatial design strategies. This body of work braids together three interconnected narratives: First, it traces the historical evolution of the U-shaped bungalow typology, revealing its roots as a colonial adaptation of the rural compound house and the economic conditions that have led to its physical obsolescence yet sustained market relevance, and examining how its cultural significance was gradually diluted through climate-insensitive design and the introduction of imported materials. Second, this body of work rediscovers Nigeria’s precolonial glassmaking traditions, with a focus on artisanal production methods that offer environmental efficiency, energy intelligence, and deep cultural resonance - qualities in stark contrast to the high-energy, standardized imported glass that dominates today’s housing.
Third, it integrates these two recoveries through built interventions: redesigning roof structures to support artisanal glass rondels, optimizing daylighting, ventilation, and thermal comfort, and reorganizing courtyards to revive their role as culturally vibrant, socially essential spaces. By leveraging indigenous glassmaking practices and small-batch production models, this thesis advocates for the creation of a circular economy, generating local employment, reducing embodied energy, and restoring cultural resilience - while delivering environmentally sensitive and economically viable housing solutions that demonstrate comparable return on costs for their owners. Foregrounding opacity as a design value, the project seeks to balance communal life with cultural and spatial notions of privacy, challenging the hegemony of imported transparency. Through the strategic curation of apertures, the careful modulation of light and shadow, and the integration of locally crafted glass rondels, the thesis re-envisions the Face Me I Face You typology. Ultimately, this work positions artisanal glass not only as a building material, but as a medium for recalibrating housing production in southern Nigeria toward systemic resilience and self-determination.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163571" rel="alternate"/>
<author>
<name>Ulloa, Gabriella E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163571</id>
<updated>2025-11-06T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation
Ulloa, Gabriella E.
DexWrist is a compliant robotic wrist designed to advance robotic manipulation in highly constrained environments, enable dynamic tasks, and speed up data collection. DexWrist is designed to approach the functional capabilities of the human wrist, achieving mechanical compliance and a greater workspace compared to existing robotic wrist designs. The DexWrist can supercharge policy learning by (i) enabling faster teleoperation and therefore making data collection more scalable; (ii) completing tasks in fewer steps, which reduces trajectory lengths and can therefore ease policy learning; (iii) offering torque transparency with easily simulatable kinematics for simulated data collection; and, most importantly, (iv) expanding the workspace of manipulation for approaching highly cluttered scenes and tasks. More details about the wrist can be found at: https://sites.google.com/view/dexwrist/home.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Labor: Sensable Instructions through Digital Jigs</title>
<link href="https://hdl.handle.net/1721.1/163570" rel="alternate"/>
<author>
<name>Griffin, Danny</name>
</author>
<id>https://hdl.handle.net/1721.1/163570</id>
<updated>2025-11-06T03:07:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Guiding Labor: Sensable Instructions through Digital Jigs
Griffin, Danny
Contemporary architects find themselves at a juncture, navigating the transition from traditional modes of instruction to an asymmetrical integration of digital technologies. Drawings remain central to architectural practice, yet a widening gap persists between tools for making drawings and tools for interpreting them. Since Alberti’s division between intellectual and productive labor, architectural instructions have been generated in remote offices and executed on distant construction sites. Digital tools have expanded the information density of drawings, yet the process of interpretation remains predominantly analog. Graphical conventions, though precise, are abstract, and so paper instructions alone lack spatial meaning. Builders ultimately rely on the aid of analog locating techniques to translate these abstractions into actions. Tools as simple as strings and squares have long been present on construction sites, enabling this translation. Over time, the shape and function of such devices have evolved in response to different pressures of location, from the Gothic template, which left room for the builder to improvise, to the industrial jig, which constrained movement to ensure replicability. The limitations of analog locating became clear when the plumb bob, long trusted to mark which direction was vertical, proved inadequate for navigating trajectories of flying objects. The solution was to embed physical devices with memory, marking a transition from tools which measure where they are to those that know where they are going. This shift from stateless to stateful devices gradually entered construction sites, and though we might distrust the devices that make possible the steering of missiles, this paradigm shift offers a productive challenge to the field of architecture. If simplifying complex construction is worthwhile, then communication pathways which more faithfully transfer information from digital model to physical destination must be explored.
Central to this transformation are the tools which anchor instructions on site: interfaces already mediating between architect and builder, which must now evolve to interpret digital signals from afar. Digital jigs will be the conduits of paperless instruction on physical sites, enabling what this thesis terms sensable instructions: instructions receivable by both machines and humans.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface</title>
<link href="https://hdl.handle.net/1721.1/163569" rel="alternate"/>
<author>
<name>Bei, Yining</name>
</author>
<id>https://hdl.handle.net/1721.1/163569</id>
<updated>2025-11-06T03:06:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface
Bei, Yining
Designers often rely on keyboard and mouse for 3D modeling, a method that can feel unintuitive or restrictive—especially in collaborative or spatially immersive settings. This thesis explores how multimodal interaction, specifically the combination of hand gestures and voice commands, can support more natural, efficient, and accessible 3D modeling in virtual reality (VR). Built on a custom Unity-based system integrating Meta Quest hand tracking and Wit.ai voice recognition, the study investigates how these two input modes—gesture and speech—can be used together to manipulate and modify 3D geometry in real time. The research proceeds in three phases: (1) a formative study analyzing how users intuitively deploy gestures, revealing common preferences, task breakdown strategies, and limitations in gesture inputs; (2) system design and implementation of both gesture-only and gesture + speech interfaces for navigation and object manipulation (e.g., translation, scaling, duplication); and (3) a comparative user study evaluating gesture-only, gesture + speech, and keyboard + mouse workflows in terms of learning curve, task efficiency, and user satisfaction. Results show that gesture + speech enables smoother transitions across modeling subtasks and allows users to offload certain parameters (e.g., numeric values, distances) to voice while using gestures for spatial control. Participants reported higher engagement and lower cognitive load compared to keyboard-based workflows, especially in tasks involving spatial scale and collaboration. This thesis demonstrates the feasibility and design potential of multimodal interaction for immersive modeling workflows and offers insights for future XR design tools that seek to blend precision with embodied interaction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices</title>
<link href="https://hdl.handle.net/1721.1/163566" rel="alternate"/>
<author>
<name>Stamler, Natasha Lia</name>
</author>
<id>https://hdl.handle.net/1721.1/163566</id>
<updated>2025-11-06T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices
Stamler, Natasha Lia
Access to clean water is a serious challenge around the world, with almost two-thirds of the global population experiencing water scarcity at some point during the year, especially in dry regions. One solution to this problem is sorption-based atmospheric water harvesting (SAWH), due to its ability to produce drinking water in a range of environments, including at low humidity. SAWH device operation is composed of adsorption and desorption phases. During adsorption, moist air flows into the device and water is adsorbed onto the sorbent bed. This is followed by the desorption phase, during which the sorbent is heated to desorb the water as vapor, which is then transported to a colder condenser surface on which it condenses as liquid water. Finally, the condensed water can be collected outside the device. However, current state-of-the-art SAWH devices are inefficient, with less than 70% of their adsorbed water being collected. This means the adsorbed water is either not condensed or condensed but not collected. This work discusses the impact of the coupling between desorption and condensation on the efficiency of SAWH devices. In general, SAWH systems can suffer from three scenarios of inefficient desorption-condensation: flux-limited, when the desorption rate in the device is insufficient to fully utilize the condenser’s condensation capacity; transport-limited, when the time scale of the vapor transport from the sorbent bed to the condenser is slow compared to the desorption operation time; and condenser-limited, when the condenser has a poor thermal design compared to the vapor flux. We developed a system-level model of a SAWH device to inform design strategies to mitigate these three bottlenecks and optimize device performance. Additionally, we quantified the role of hydrocarbons, common airborne contaminants, in slowing water collection.
Experimental findings are used to develop a model for the impact of airborne hydrocarbon adsorption on surface wettability and water retention for six metals commonly used as condenser materials. The findings from these models can inform design recommendations for SAWH devices as well as various other industrial applications in which water condenses on metal surfaces such as refrigeration and power generation. Future work will focus on continued experimental validation of the models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior</title>
<link href="https://hdl.handle.net/1721.1/163565" rel="alternate"/>
<author>
<name>Rodriguez, Camille Dyani</name>
</author>
<id>https://hdl.handle.net/1721.1/163565</id>
<updated>2025-11-06T03:06:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior
Rodriguez, Camille Dyani
Vimentin, a type III intermediate filament, is an understudied component of the cytoskeletal system. However, recent studies show that its structural and mechanical properties aid in a cell's survival and migration. It forms a hyperelastic network and works synergistically with actin and microtubules to protect against large deformations. Despite the critical role of vimentin intermediate filaments in many biological processes, there are limited studies of their role in collective migration in 3D in vitro. To elucidate vimentin’s role in a collective cell cluster, single MCF-7 cells are embedded in a Matrigel-Alginate gel, where they grow into multicellular systems. The MCF-7 cells utilized are vimentin null and chemically inducible to form vimentin networks that interact with the other components of the cytoskeleton. These MCF-7 cells allow for controlled expression of mature vimentin intermediate filaments (VIFs), which then form networks. We study these multicellular clusters over the course of 14 days. We demonstrate that there are key differences in morphology and mechanics in the presence of vimentin. Our results suggest VIFs create more irregular cell clusters with more visible dynamic interplay with the environment. Uninduced (no VIFs) clusters were overall less dynamic and exhibited spherical morphology and minimal protrusions. Clusters with mature VIFs tended to form more elongated multicellular clusters with an increased number of projections into the surrounding gel. In these induced (with VIFs) clusters, the projections are shown to be constantly protruding and retracting while the nuclei continually reorganize. Our results show that these projections are accompanied by increased protrusive and contractile gel displacements. These results indicate that vimentin networks generate a dynamic and functional morphology and mechanically perturb the environment in the early stages of cell cluster collective behavior.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for Dynamic Nonprehensile Object Transport</title>
<link href="https://hdl.handle.net/1721.1/163564" rel="alternate"/>
<author>
<name>Wang, Eric K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163564</id>
<updated>2025-11-06T03:06:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Planning for Dynamic Nonprehensile Object Transport
Wang, Eric K.
Generalized planning methods for dynamic manipulation struggle to efficiently satisfy kinodynamic constraints. Gradient-based methods suffer from initialization sensitivity, convergence to local optima, and a lack of feasibility guarantees, while sampling-based methods can require large computation times when boundary conditions are challenging. Iterative Time Optimal Path Parameterization, or iTOPP, guarantees a feasible local minimum for a dynamic grasping problem by iteratively decreasing transit time for a trajectory initially generated to satisfy kinodynamic contact constraints. We demonstrate solutions that can handle quasistatically infeasible initial or final goal states, for which purely quasistatic motions cannot generate a warm-start trajectory. We also design an indirect adaptive controller that can track a desired dynamic grasping trajectory assuming unknown object mass and location parameters.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites</title>
<link href="https://hdl.handle.net/1721.1/163563" rel="alternate"/>
<author>
<name>Webb, Alisa Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/163563</id>
<updated>2025-11-06T03:08:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites
Webb, Alisa Nicole
Throughout the aerospace industry, carbon fiber reinforced polymer (CFRP) laminated composites are used extensively in spacecraft and aircraft vehicles due to their high specific strength and stiffness and other properties. Processing these advanced structural CFRP composites, especially in prepreg form, is often completed via autoclaves where elevated temperatures and pressures of typically 180 °C (350 °F) and 0.7 MPa (7 bar), respectively, are applied to cure the polymer matrix and compress the constituent laminae together. However, autoclaves are energy intensive, expensive, and impose geometrical constraints on components due to thermal gradients within the chamber. Thus, there exists a need to find alternative manufacturing techniques. Throughout this thesis, an alternative method to autoclave processing is presented using vacuum-bag-only (VBO) techniques with nanoporous networks (NPNs) in the interlaminar regions of autoclave-required epoxy prepreg CFRP composites. Nanoporous materials are defined as materials containing pores in the mid-nanometer to low-micrometer range. Once placed in the interlaminar region of the laminate, voids are reduced by the induced capillary pressures of the NPNs, and trapped gas evacuates through the NPN. By utilizing capillary flow porometry, capillary pressure and through-thickness permeability are quantified for various NPNs, along with other porous materials. Capillary pressure and permeability exhibit an inversely proportional relationship for all tested materials, with CNT-based and polymer aerogel NPNs providing capillary pressures higher than an autoclave pressure of 0.7 MPa. Accordingly, an Ashby-type plot is presented as an aid for NPN selection for composites manufacturing.
Previous studies of unidirectional glass fiber reinforced polymer (GFRP) composites and unidirectional CFRP composites show success with NPN-enabled VBO manufacturing using aligned carbon nanotubes (A-CNTs) and electrospun polymer nanofiber (EPN) mats. However, success with woven prepreg had not been consistently achieved before this thesis. Autoclave woven epoxy CFRP laminates of IM7/8552 are manufactured using EPN and polymer aerogel NPNs with a VBO procedure. Once manufactured, these laminates were characterized for quality through void content analysis. A void content of 0.11 vol% was achieved, well within the 1 vol% requirement for aerospace-grade composite components. To aid in the understanding of NPNs, in situ experiments utilizing microcomputed tomography are developed to investigate the (presumed Newtonian) flow of resin throughout the NPN as a function of temperature, which varies throughout a typical manufacturer recommended cure cycle (MRCC), along with the void evolution throughout the cure cycle. Based on this new in situ understanding, a manufacturing process modification is devised to produce void-free woven laminates at the 152.4 mm laminate scale. Through manufacturing, material characterization, and designed in situ experiments, this thesis demonstrates the use of NPNs for VBO manufacturing of low-void-content aerospace-grade CFRP composites to replace autoclaves for energy and cost savings.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/163562" rel="alternate"/>
<author>
<name>Gao, Jin</name>
</author>
<id>https://hdl.handle.net/1721.1/163562</id>
<updated>2025-11-06T03:08:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making
Gao, Jin
Cities are dynamic and evolving organisms shaped through the check-and-balance of interest exchange. As cities gain complexity and more stakeholders become involved in decision-making, reaching consensus becomes the core challenge and the essence of the urbanism process. This thesis introduces a computational framework for AI-augmented collective decision-making in urban settings. Based on real-world case studies, the core decision-making process is abstracted as a multiplayer board game modeling the check-and-balance dynamics among stakeholders with differing values. Players are encouraged to balance short-term interests and long-term resilience, and to evaluate the risks and benefits of collaboration. The system is implemented as a physical interactive play-table with digital interfaces, enabling two use cases: simulating potential outcomes via AI self-play, and human–agent co-play via human-in-the-loop interactions. Technically, the framework integrates multi-agent reinforcement learning (MARL) for agent strategy training, multi-agent large language model (LLM) discussions to enable natural language negotiation, and retrieval-augmented generation (RAG) to ground decisions in contextual knowledge. Together, these components form a full-stack pipeline for simulating collective decision-making enriched by human participation. This research offers a novel participatory tool for planners, policymakers, architects, and the public to examine how differing values shape development trajectories. It also demonstrates an integrated approach to collective intelligence, combining numerical optimization, language-based reasoning, and human participation, to explore how AI–AI and AI–human collaboration can emerge within complex multi-stakeholder environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World</title>
<link href="https://hdl.handle.net/1721.1/163560" rel="alternate"/>
<author>
<name>Apostolopoulou, Katerina</name>
</author>
<id>https://hdl.handle.net/1721.1/163560</id>
<updated>2025-11-06T03:08:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World
Apostolopoulou, Katerina
With over 86,000 kilometers of crude oil pipelines—and more than 2.13 million kilometers of total oil and gas pipelines in the United States as of 2024—many segments are already corroded and aging, deeply embedded within urban and ecological systems that are increasingly endangered. As the global energy transition accelerates, this thesis investigates the future of these infrastructures, reconsidering the vast network of decommissioned and declining legacy pipelines not as obsolete relics, but as latent spatial assets for ecological repair, climate resilience, and socio-environmental justice. Moving beyond narratives of extraction and decay, the project repositions pipelines as linear territories of opportunity—capable of being retrofitted into new civic, ecological, and infrastructural frameworks. Central to the project is the transformation of the pipeline’s linear, extractive logic into a circular and connective one: a loop that is both finite and infinite, territorial and experiential. Focusing on a strategically selected loop of crude oil pipelines spanning 14 states, the thesis constructs a cartographic and architectural framework to reimagine these lines as sites of ecological repair, social infrastructure, and alternative energy distribution—where design, much like a biological scaffold, acts as a catalyst for regeneration along landscapes shaped by extraction. Through spatial analysis, typological classification, and mapping, five territorial conditions are defined along the pipeline loop, each offering distinct opportunities for intervention. These are tested through speculative design prototypes that transform the pipeline through operations of repurpose, renewable energy distribution, or ecological remediation. The interventions reframe invasive infrastructures into public and environmental assets—generating new spaces for inhabitation, production, and collective memory. 
Ultimately, the thesis proposes a post-carbon design paradigm rooted in ecological reciprocity, collective agency, and infrastructural care—revealing hidden energy landscapes and inscribing them with new values: resilience, equity, and repair.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces</title>
<link href="https://hdl.handle.net/1721.1/163559" rel="alternate"/>
<author>
<name>Salmon, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/163559</id>
<updated>2025-11-06T03:08:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces
Salmon, Jason
The automobile industry is critical to modern society. Simultaneously, the constant release of toxic emissions such as greenhouse gases into the atmosphere is detrimental to health and the environment. Vehicles which exploit cleaner energy sources would be preferable to reduce the horrific scale of human-initiated damage such as climate change. However, solar road vehicles—though designed and fabricated by some—have not reached a sufficient level to be production-worthy. The low efficiency of solar cells and the high energy demands of the average land vehicle are irreconcilable for most manufacturers using industry methods and design precedent. Therefore, this work centres around the design and control of a solar road vehicle which fundamentally breaks from the mould of the typical road vehicle design—a vehicle which employs extensive articulated surfaces (dubbed "solar wings") which can be angled to directly face the sun, thereby maximising solar irradiation. A solar tracker using Bayesian inference is presented, achieving promising results in both convergence and accuracy. Additionally, a systematic method for optimising a solar road vehicle with solar wings is developed and documented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/163557" rel="alternate"/>
<author>
<name>Romero, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/163557</id>
<updated>2025-11-06T03:07:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy
Romero, Catalina
Raman spectroscopy is a powerful optical technique that enables rapid, label-free molecular analysis, offering significant potential across pharmaceutical development, microbiome research, and food diagnostics. However, the utility of Raman spectroscopy in high-throughput applications has been limited by the lack of cost-effective, modular automation platforms capable of handling large volumes of samples with precision and repeatability. Conventional Raman workflows are constrained by manual sample handling, slow throughput, and high user variability, limiting their applicability in high-volume testing environments. To address these challenges, this thesis presents the development and initial validation of a custom two-axis (XY) gantry and a robotic well plate stacker automation platform designed to streamline the sample handling workflow in Raman spectroscopy systems, facilitating high-throughput, precise, and reproducible positioning of microplate samples under a Raman microscope. This thesis also provides a commercialization framework for the system as a standalone automation product, targeting pharmaceutical high-throughput screening, microbiome analysis, and food safety testing. The platform addresses unmet needs in these industries, where labor-intensive and inconsistent sample positioning limits scalability. The commercialization analysis includes an evaluation of market sizing, competitive benchmarking, pricing models, and go-to-market strategies. The modular platform has the potential to enable broader adoption of Raman-based analysis tools by reducing labor intensity and improving repeatability in sample positioning workflows. This work lays the foundation for the future integration of optical feedback and automated analysis, with the goal of transforming how Raman-based diagnostics are conducted at scale.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors</title>
<link href="https://hdl.handle.net/1721.1/163555" rel="alternate"/>
<author>
<name>Spino III, Pascal</name>
</author>
<id>https://hdl.handle.net/1721.1/163555</id>
<updated>2025-11-06T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors
Spino III, Pascal
This thesis investigates how intelligent robot behavior can emerge from physical interactions rather than sensing, computation, and actuation in the traditional sense. Two robotic systems are presented to explore this concept in different domains. The first is a swarm of simple rolling robots whose collective morphology is shaped by distributed control laws and magnetic interactions, enabling decentralized construction-like behaviors such as bridge formation. The second is a soft underwater robot inspired by anguilliform swimming, which achieves efficient locomotion through a single actuator that leverages fluid–structure interactions in a compliant silicone tail. Useful behavior arises in both systems from the physical design and the dynamics of environmental interaction, rather than from algorithmic or computational complexity. These results demonstrate that physical intelligence can serve as a powerful design principle for building adaptive, robust, and minimal robotic systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hidden Monuments</title>
<link href="https://hdl.handle.net/1721.1/163553" rel="alternate"/>
<author>
<name>Lee, Sesil</name>
</author>
<id>https://hdl.handle.net/1721.1/163553</id>
<updated>2025-11-06T03:08:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hidden Monuments
Lee, Sesil
Jeju Island’s burial culture is embedded in the island’s distinct landscape, where sandam burial mounds are not isolated monuments but quietly coexist with fields, ranches, and forests. These sites are living records of intangible heritage—ancestral beliefs, Beolcho rituals, and vernacular stone-stacking practices—manifested not through formalized memory, but through their modest yet persistent presence in the landscape. Today, however, these spaces are under threat: policies favoring cremation, rapid urbanization, and shifting land values render them increasingly invisible or obsolete. In the past few decades, two-thirds of sandam have been displaced, and with fewer than six out of over 100,000 burial sites designated as cultural heritage, traditional models of conservation are inadequate—unable to engage with the dispersed, landscape-bound nature of these burial grounds. This project reimagines Jeju’s burial mounds not as relics to be preserved, but as spatial anchors for cultural and communal expressions. Through a series of small-scale architectural interventions—gates, stages, passages, and shelters—deployed along paths tracing sandam clusters, the work explores how memory can be practiced rather than displayed. By offering ways to engage with the buried, the forgotten, and the living simultaneously, the project expands the idea of heritage: not as a static record, but as a participatory and evolving relationship between people, land, and memory.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems</title>
<link href="https://hdl.handle.net/1721.1/163551" rel="alternate"/>
<author>
<name>Wucherer, Abigail</name>
</author>
<id>https://hdl.handle.net/1721.1/163551</id>
<updated>2025-11-06T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems
Wucherer, Abigail
In the drive towards a globally decarbonized energy economy, rapid swap battery packs provide a potential means to improve electric vehicle adoption in high-utilization industrial vehicles where lengthy charge times are a barrier to electrification. High-voltage, high-current battery connectors are a critical component for coupling the pack to the electric vehicle, distributing power from the battery to the drivetrain. Most state-of-the-art connections require precision alignment of contact surfaces and bolted preload or retention mechanisms, hindering the implementation of rapid swap battery systems. The need for robust, high-cycle-life, high-power contacts motivates a new approach to connector design. The integration of electrical connectors with the battery mount’s structural loop creates a new design space where preload, geometry, and contact resistance may be optimized. This co-design approach enables mechanical and electrical functional requirements to be considered in conjunction to ensure reliable fulfillment in both areas while reducing the time for battery pack swaps. This work introduces two distinct approaches for aligning the pack to the vehicle, locking the battery in place, and engaging electrical contact with geometry unique to the system design. These approaches offer higher reliability, mechanical and electrical longevity, and automatic alignment capabilities during loading of the battery pack. Across both designs, the contact resistance is the primary metric for evaluating the electrical performance, and the contact pressure is used to evaluate the risk of mechanical wear. The first approach integrates a quasi-kinematic coupling-based connector with integrated electrical contacts, allowing for repeatable and accurate positioning of the battery pack to the vehicle. A slotted ball and socket design is considered to accommodate angular misalignment and establish a repeatable contact area through elastic averaging.
The second approach proposes a planar contact to further reduce the contact pressure and increase contact longevity without the need for expensive and rare hardened coatings. This system relies on a rail and flat system for guiding the battery pack into a locked position and engaging the planar contacts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)</title>
<link href="https://hdl.handle.net/1721.1/163550" rel="alternate"/>
<author>
<name>Hakemy, Arezo</name>
</author>
<id>https://hdl.handle.net/1721.1/163550</id>
<updated>2025-11-06T03:08:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)
Hakemy, Arezo
Early Afghan war rugs delineate place through their pictorial design, embedding spatial memory into the tactile surface of the woven field. Emerging in the wake of the Soviet invasion in the late 1970s, these rugs integrate modern war iconography of tanks, helicopters, and maps into a medium historically tied to regional identity, spiritual practice, and craft. While earlier scholarship has often read these rugs as commodities of war tourism, this thesis moves beyond this interpretation to foreground the rug as a placemaking device, one that asserts territory and agency through mapping techniques. Afghan war rugs frame and define space on a land that has largely been considered placeless, at times porous and seemingly unknown. Through their borders, these rugs resist the geopolitical narratives that have long reduced Afghanistan to a war-torn frontier. The border serves as a framing device, structuring the rug’s design while simultaneously asserting territorial presence. Whether following a prescribed cartoon or improvising patterns, the weaver actively engages in “border-ing,” exercising cartographic agency by embedding personal, traditional, and political motifs into the rug. This research interrogates how early Afghan war rugs engage in spatial representation against the backdrop of the Soviet-Afghan war from 1979-1989. From historical colonial mapping projects to Soviet and American cartographic investigations, Afghanistan’s borders have long been sites of surveillance, resource extraction, and imperial ambition. Yet, in contrast to these external mapping practices, the war rug’s design is a resistant act of placemaking. Examining the rug as both artifact and map, this study explores how Afghan weavers reclaim their landscapes through rug making, embedding memory and materiality into woven form.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Social Sensory Somatic Scores for Species, Spaces, Soils, and Structures of Steep Slopes</title>
<link href="https://hdl.handle.net/1721.1/163547" rel="alternate"/>
<author>
<name>Bondarenko, Lina</name>
</author>
<id>https://hdl.handle.net/1721.1/163547</id>
<updated>2025-11-06T03:07:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Social Sensory Somatic Scores for Species, Spaces, Soils, and Structures of Steep Slopes
Bondarenko, Lina
Modern knowledge systems have physically and conceptually “flattened” the world, erasing the ecological, political, and sensory complexities inherent to sloped terrain. By attending closely to the slope—as both a material condition and a generative metaphor—this thesis foregrounds movement as a form of resistance to regimes of exploitation, abstraction, and estrangement that have historically transformed land into data and place into property. Weaving together interdisciplinary methodologies from performance studies, landscape architecture theory, feminist geography, ecological theology, environmental history, sensory ethnography, and media studies, SSSSSSSSSS dances an inclined methodological structure, oscillating deliberately between critical systemic analysis and situated sensory experience. Ch.1 sets the stage among steep slopes and introduces the discipline to movement as pedagogy, enacting the urgency for new methodologies into schemes of the project’s medium and the book’s format. Ch.2 is a feminist investigation of the ways modern infrastructures and spaces have been designed to reinforce land abstraction and commodification in the name of improvement-- severing embodied relationality and contributing to societal apathy toward ecological and social crises. Imperial post-Enlightenment statecraft, the suppression of wildness, and the standardization of level form have flattened our upright movements to enact a state of senselessness. Contradicting Ch.2’s straight critique, Ch.3 attempts to reweave the sinuous nuance of symbiogenesis between soils and species, revealing that humans are but one among many sloped organisms moving, and inclining, and co-evolving as the lithosphere; we have been slorgs all along. Slorgs belong to divine mythologies of terrain’s elevations and have reciprocated in admiration, mimicking topographic spatial functions and adorning the summits with artistic interventions--some inadvertently contributing to the damaging regimes of Ch.2.
Interwoven through both chapters, outliers resisting those forces of governance and exploitation are often those displaced by them-- those moving in ways the system polices and erases from comprehension-- refugees, queers, witches, tricksters, artists, herbalists, and healers. The intended medium of SSSSSSSSSS coalesces in Ch.4: inviting the general public to participatory happenings with hills, composing scores, coaxing their inner slorgs to slither askew, sloping themselves as moving loci for sympoietic becoming. Multi-species attune to a social, sensed, somatic experience, co-composing spatial relations among local steep soils. Slorgs challenge the abstractions of dominant epistemologies in the temporal, situated act of trusting their own proprioception in collective balance, affirming the multidimensional value of embodied, ecological geo-choreography. Social Sensory Somatic Scores for Soils, Structures, Spaces, and Species of Steep Slopes are presented through photographs in Ch.4 and in moving image, available as supplemental material.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooling Machines: Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision</title>
<link href="https://hdl.handle.net/1721.1/163544" rel="alternate"/>
<author>
<name>Klimenko, Nikita</name>
</author>
<id>https://hdl.handle.net/1721.1/163544</id>
<updated>2025-11-06T03:07:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Cooling Machines: Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision
Klimenko, Nikita
As the impacts of climate change on cities become more pronounced, urban authorities are under pressure to prepare existing streetscapes for increased levels of heat stress. While many aspects of existing urban morphology have an impact on heat exposure (e.g. sky view factor, glazing levels, facade materials), they cannot be rapidly changed at scale across existing urban infrastructures. Urban authorities across the world increasingly turn to planting trees as a way of cooling urban streetscapes. Urban vegetation is indeed known to have a cooling effect, primarily because trees provide shade and prevent urban materials from heating up, and because they maintain their own internal temperature through evapotranspiration. Even though the positive impacts of urban trees on thermal comfort are long known and well studied, little work has been dedicated to how these impacts vary across trees of different species and morphology. This is due both to the complexity of studying vegetation life cycles at sufficient scale and to the dispersed nature of the issue across the disciplines of biology, urban climate, design, and data science. Nevertheless, this specific knowledge is vital to urban planners for deciding which trees have the greatest cooling effect in specific parts of the city. This thesis embraces the notion of trees as ‘cooling machines’ and dissects the diverse morphological and contextual factors that shape the role of individual trees in the local urban heatscape. Leveraging a set of computer vision methodologies, including species recognition, context-aware segmentation, and photogrammetry, the thesis examines a large dataset of thermal imagery of urban trees collected in Los Angeles and Dubai to describe the impact of individual tree species, height and form, as well as spatial context on the cooling effect.
Building on this approach, the thesis proposes a prototyping framework for architects to cure urban heatscapes via targeted curation of tree planting schemes, tying the visual and thermal aspects of urban greenery. This approach will allow cities to leverage the power of urban vegetation in the most efficient way, and tame urban heat in a scalable and globally affordable manner.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence</title>
<link href="https://hdl.handle.net/1721.1/163543" rel="alternate"/>
<author>
<name>Dundar Arifoglu, Nasibe Nur</name>
</author>
<id>https://hdl.handle.net/1721.1/163543</id>
<updated>2025-11-06T03:07:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence
Dundar Arifoglu, Nasibe Nur
This thesis reconsiders architectural authorship and the extended processes through which the built environment is shaped, using a series of playful, participatory interventions to expose the human-centric assumptions embedded in spatial decision-making. Presented as a collection of games and booklets, the work invites participants to engage with a wide spectrum of architectural processes—from site understanding and planning to permitting, construction, and post-occupancy—through the perspectives of multiple agents entangled in shared environments. These agents include beings, materials, living organisms, legal frameworks, and other forces typically excluded from spatial authorship, challenging conventional boundaries and expanding the discourse around the entangled forces and relations that shape the spaces we inhabit. A series of playful explorations opens space for friction, misalignment, and shared authorship. Each booklet engages a distinct stage of the architectural process through participatory formats that make visible the biases, exclusions, and regulatory fictions often treated as neutral. By gamifying these systems, the work reveals how architectural decision-making tends to privilege hierarchy, human control, and speed—often at the expense of multispecies co-existence. This thesis positions play as a critical lens: a way to rehearse alternative futures, to listen differently, to embody other perspectives, and to surface the black-box logics embedded in architectural norms. It invites readers and players to participate in unbuilding these assumptions. And the games evolve—with each use, each misreading, each encounter, and each agent who joins the conversation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Objectiles Guide to Time Travel: Re-Envisioning Building Materials as Narrative-Collecting Object-Projectiles on a Trajectory Through Space-Time</title>
<link href="https://hdl.handle.net/1721.1/163542" rel="alternate"/>
<author>
<name>Chaussabel, Celia Quynh-Mai</name>
</author>
<id>https://hdl.handle.net/1721.1/163542</id>
<updated>2025-11-06T03:06:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Objectiles Guide to Time Travel: Re-Envisioning Building Materials as Narrative-Collecting Object-Projectiles on a Trajectory Through Space-Time
Chaussabel, Celia Quynh-Mai
As the architectural discipline grapples with its role in resource depletion, carbon emissions, and waste generation, there is a growing urgency to stop sourcing new materials and to reuse materials from existing buildings instead. One challenge to integrating reused materials into current building practices is technical: inventorying, deconstructing, reconditioning, and designing with reused materials is slower and more labor-intensive than with new ones. But another challenge is cultural: the materials that make up architecture are currently perceived as unmoving and single-use, with little consideration for their trajectories from raw resource to landfill. This thesis is focused on developing an aesthetic sensibility and design methodology that helps us re-envision materials as objects on a trajectory instead: Objectiles, or object-projectiles. Objectiles are objects on an adventure across space-time to collect as many uses as possible. Rather than remaining associated with one primary use, Objectiles are impressionable, bearing ambiguous traces of all the uses they encounter as they re-circulate. Through the aesthetic qualities that hint at their many uses, Objectiles invite us to time travel - to imagine the potential past and future narratives that may precede or follow their present physical state. Embedding the aesthetics of Objectiles into architecture can lead to the development of a new collective consciousness of the materials that surround us. They can make us aware that all the objects around us have trajectories that extend beyond their present state, and lead to an alternative material culture of greater care in how we use, re-circulate, and dispose of all objects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits</title>
<link href="https://hdl.handle.net/1721.1/163541" rel="alternate"/>
<author>
<name>Ai, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/163541</id>
<updated>2025-11-06T03:07:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits
Ai, Rui
The independence axiom (IA) proposed by Von Neumann and Morgenstern [50] is the cornerstone of expected utility theory. However, empirical experiments show that the IA is often violated in the real world. We propose a new kind of multi-armed bandit problem, which we call expectation-dependent multi-armed bandits, in which the expectation of outcomes may influence the agent’s utility, and use it to rationalize the choices of agents in Machina’s paradox, where the IA fails. We design provably efficient algorithms with low minimax regrets and show that their dependence on the time horizon T matches corresponding regret lower bounds, revealing statistical optimality. Furthermore, as this is the first work to consider bandits whose utility depends on both realized outcomes and expectations, it provides a bridge between machine learning and economic behavior theory, shedding light on how to interpret counterintuitive economic scenarios, such as the bounded rationality explored by Zhang et al. [54].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differentially Private Synthetic Data Generation for Relational Databases</title>
<link href="https://hdl.handle.net/1721.1/163540" rel="alternate"/>
<author>
<name>Alimohammadi, Kaveh</name>
</author>
<id>https://hdl.handle.net/1721.1/163540</id>
<updated>2025-11-06T03:06:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Differentially Private Synthetic Data Generation for Relational Databases
Alimohammadi, Kaveh
Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table. In practice, data is often distributed across multiple tables with relationships between them. This study presents a first-of-its-kind algorithm that can be combined with any existing DP mechanism to generate synthetic relational databases. The algorithm iteratively refines the relationships between individual synthetic tables to minimize their approximation errors in terms of low-order marginal distributions while maintaining referential integrity. Consequently, it eliminates the need to flatten a relational database into a master table (saving space), operates efficiently (saving time), and scales effectively to high-dimensional data. We provide both DP and theoretical utility guarantees for our algorithm. Through numerical experiments on real-world datasets, we demonstrate the effectiveness of our method in preserving fidelity to the original data.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications</title>
<link href="https://hdl.handle.net/1721.1/163539" rel="alternate"/>
<author>
<name>Zhang, Chenhui</name>
</author>
<id>https://hdl.handle.net/1721.1/163539</id>
<updated>2025-11-06T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications
Zhang, Chenhui
Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks involving visual input with natural language instructions. However, it remains unclear to what extent capabilities on natural images transfer to Earth observation (EO) data, which are predominantly satellite and aerial images less common in VLM training data. In this work, we propose VLEO-Bench, a comprehensive evaluation framework to quantify the progress of VLMs toward being useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our framework includes scenarios like urban monitoring, disaster relief, land use, and conservation. We discover that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that leads to strong performance on open-ended tasks like location understanding and image captioning, their poor spatial reasoning limits usefulness on object localization and counting tasks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art</title>
<link href="https://hdl.handle.net/1721.1/163538" rel="alternate"/>
<author>
<name>Feng, Haozhen</name>
</author>
<id>https://hdl.handle.net/1721.1/163538</id>
<updated>2025-11-06T03:06:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art
Feng, Haozhen
This thesis investigates the collective lives of Chinese women sent to Xinjiang in state-led migration after 1949 and the erasure of their gendered narratives. Drawing on a unique family history and archival evidence, the thesis reveals how the personal identities of these female “Aid to Xinjiang” participants were stripped away and subsumed under the grand socialist nation-building myth. Through practice-based artistic research, the project attempts to restore their lost voices and unacknowledged suffering and labor, framing the exhibition as a form of praxis. By analyzing the exhibition alongside case studies and critical analysis, the thesis, inspired by Bernard Stiegler’s theory of the “history of representational forms” and interwoven with ideas from philosophers like Judith Butler and Nicholas Mirzoeff, interrogates the gendered silences in official history and highlights the tension between state mythologies and personal memories. In doing so, the exhibition as an interdisciplinary form of research not only restores agency to a silenced group of women, but also demonstrates how artistic practice can serve as an alternative historiography to challenge dominant narratives and recover marginalized voices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Universal Docking Solutions for Autonomous Underwater Vehicles</title>
<link href="https://hdl.handle.net/1721.1/163537" rel="alternate"/>
<author>
<name>Pryal, Erik Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/163537</id>
<updated>2025-11-06T03:06:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Universal Docking Solutions for Autonomous Underwater Vehicles
Pryal, Erik Jeffrey
Due to their energy-constrained nature, Autonomous Underwater Vehicles (AUVs) need effective docking and charging stations to extend their mission durations. However, diverse AUV designs challenge the universal compatibility of docking stations. This study provides a framework for understanding what makes a docking station universal and offers two potential solutions: the Tapered Funnel Docking Station and the Magnetic Hub Docking Station. The Tapered Funnel features a conical entry that progressively narrows to accommodate various AUV diameters. The Magnetic Hub passively secures the AUV using magnetic forces and an external appendage guided into position by a square duct. MATLAB simulations evaluate these two charging station designs for compatibility with AUVs, alignment capabilities, and docking efficacy under realistic conditions. Both designs are tested through Monte Carlo simulations to address varying AUV approach conditions, showcasing their potential as universally feasible solutions. Future exploration into material durability, sensor integration, and power transfer efficiency will refine these designs for field applicability. This research lays the groundwork for universal docking standards and proposes adaptable solutions to alleviate operational limitations in underwater missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dowel-laminated timber from waste lumber offcuts: Towards structural component circularity</title>
<link href="https://hdl.handle.net/1721.1/163536" rel="alternate"/>
<author>
<name>Blowes, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/163536</id>
<updated>2025-11-06T03:06:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dowel-laminated timber from waste lumber offcuts: Towards structural component circularity
Blowes, Rachel
In the context of the global climate crisis, there is a need to develop low embodied carbon building systems. Moreover, construction and demolition generate substantial amounts of waste. The use of salvaged materials for structural applications presents the opportunity to divert this waste while reducing the embodied carbon of new structural components. This thesis proposes a typology for dowel-laminated timber (DLT) slabs built up from waste lumber offcuts. A mechanical model for a segmented DLT system composed of geometrically heterogeneous offcuts is developed. Prototypes of this mass timber system are fabricated and tested to observe their failure behavior and to evaluate the mechanical model. A computational workflow is introduced which employs algorithmic methods for inventory assignment and structural optimization to design slabs which meet deflection requirements under loading. These approaches are undertaken to evaluate whether DLT systems can leverage the irregularity of salvaged lumber dimensions to produce structurally efficient forms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time</title>
<link href="https://hdl.handle.net/1721.1/163535" rel="alternate"/>
<author>
<name>Aubry, Vinzenz</name>
</author>
<id>https://hdl.handle.net/1721.1/163535</id>
<updated>2025-11-06T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time
Aubry, Vinzenz
This thesis proposes a conceptual lens for understanding contemporary generative arts by introducing the terms Allopoietics and Liquid Media. Building on generative and participatory art, it focuses on the real-time processes among artworks, publics, spaces, and time through which meaning dynamically emerges. Drawing on the author’s artistic works—Conjunktion, Looking at the Sun, and Public Eyes—as well as critical engagement with hermeneutics, process philosophy, and media theory, this thesis explores how agency is distributed across these processes, offering a means to reconsider all elements as equally generative. Allopoietics, derived from cybernetics, describes the generative capacity of systems to produce outcomes beyond the sum of their actants, emphasizing collective unfolding over isolated creation. Liquid Media expands the notion of interfacing beyond traditional media to include publics, space, and time, conceptualizing these as mutable and entangled actants. These concepts outline an Aesthetics of Real Time that evaluates the dynamic relations among increasingly immediate systems. By proposing these new terms, the thesis invites a shift in perspective from object to process: viewing artworks not as stable materializations but as parts of real-time systems of collective meaning-making. While emerging from an artistic practice, this conceptual framework resonates with insights from contemporary sociology and cultural studies, where notions of fluidity, distributed agency, and relationality increasingly shape our understanding of complex systems and realities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Functional in Vitro Model of the Neuromuscular Interface</title>
<link href="https://hdl.handle.net/1721.1/163532" rel="alternate"/>
<author>
<name>Schwendeman, Laura A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163532</id>
<updated>2025-11-06T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Developing a Functional in Vitro Model of the Neuromuscular Interface
Schwendeman, Laura A.
The neuromuscular system coordinates movement throughout the body, and while research has revealed many of the mechanisms involved in its function, there are still many gaps in our understanding of how all of its components work and how they are affected by environmental factors and disease. This work focuses on developing methods and an in vitro model for studying a subsystem of the neuromuscular system known as the neuromuscular junction (NMJ), the connection between skeletal muscle and motor neurons that is relevant in many neuromuscular degenerative diseases. This work identifies that current in vitro NMJ models collectively lack the ability to support long-term, functionally contractile muscle tissue while providing compartmentalization and clear optical access for live imaging of muscle and motor neuron co-cultures. This work therefore presents STAMP, a microgroove patterning method for creating aligned, more physiologically relevant, functional, and optically accessible skeletal muscle tissue cultures on top of fibrin hydrogels. Through investigating a series of different sizing parameters, STAMP is shown to effectively align mouse and human skeletal muscle monolayers in vitro and influence the direction of muscle contraction under electrical and optogenetic stimulation while preserving skeletal muscle tissue integrity and viability. The STAMP approach provides a way to mold hydrogels and the morphology of muscle tissue and will be beneficial for addressing the need for compliant and optically clear substrates in modeling the neuromuscular junction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer</title>
<link href="https://hdl.handle.net/1721.1/163531" rel="alternate"/>
<author>
<name>Sonner, Jessica E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163531</id>
<updated>2025-11-06T03:06:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer
Sonner, Jessica E.
Female soccer players demonstrate high levels of agility but remain underrepresented in research and experience anterior cruciate ligament (ACL) tears two to eight times more frequently than their male counterparts [1]. These injuries are often associated with high-torsion movements at the knee, such as quick change-of-direction maneuvers in soccer [2]. To examine gender-based differences in agility, this study introduces an in-game metric based on change-of-direction speeds, derived from center-of-mass tracking data from the 2022 Men’s and 2023 Women’s FIFA World Cups. Results show that across positions, ball proximity, and game segments, female athletes tend to change direction both faster and more frequently than male athletes—supporting current injury hypotheses and informing gender-specific cleat design considerations. Beyond individual movement, this study also examines collective team behavior through a fluid mechanics lens. No significant gender differences were found in power spectral densities or second-order structure functions, suggesting symmetry in the underlying coordination dynamics. A direct cascade was observed in the 0–15m range, indicating a consistent transfer of energy across spatial scales. Team dispersion and the Area-Dominant Spread Index correlated with structure function slopes, bridging spatial metrics with turbulence-based models of group behavior.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leaky Vessels</title>
<link href="https://hdl.handle.net/1721.1/163527" rel="alternate"/>
<author>
<name>Cong, Frank (Haotian)</name>
</author>
<id>https://hdl.handle.net/1721.1/163527</id>
<updated>2025-11-06T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Leaky Vessels
Cong, Frank (Haotian)
This thesis serves as a written synthesis of my art practice. It starts with Louis Pasteur’s swan neck flask, Robert Boyle’s air pump, the theater of proof, and cabinets of natural historians to discuss the intentional gesture of containment, exclusion, and controlled permeability in scientific containers and the knowledge production paradigm behind them. I argue that these containers possess another intrinsic gesture – to leak – that opens space for social and cultural dimensions to engage. I propose “leaky vessels” as an analytical tool and a methodology that foregrounds the tension between intentional and unintentional in order to attend to the issues of care, belief, and labor that arise within this dynamic. Chapter 2 develops the concept of “leaky” in three aspects – aesthetic intervention, historical residue, institutional sabotage – by analyzing art practices by Eve Andrée Laramée, Oron Catts and Ionat Zurr, Candice Lin, Maria Thereza Alves, Critical Art Ensemble, and Claire Pentecost. Each case demonstrates how alternative approaches to apparatuses can expose and unsettle the systems of control that govern knowledge authority, allowing seepage, contamination, and embodied histories to return to spaces designed to exclude them. Chapters 3 and 4 turn inward to examine my own art practice, Guardian and The Guarded (2024), RapidRise (2024), and Sweat Dough (2025). In Chapter 3, I discuss the experience of entering the biomaker space at MIT and cultivating animal cells in a pendant, interrogating how care, proximity, and cosmology might challenge the lab’s sterile and utilitarian logic. Chapter 4 discusses the other two projects that operate outside the lab, where I investigate how bodily entanglement with dough fermentation can leak into the broader context of food cultures, labor histories, and symbolic inheritance. Together, these chapters propose a practice that embraces contamination and relationality. 
Those that leak in and leak out are precisely where new layers of meaning reside.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of torpedo depth control</title>
<link href="https://hdl.handle.net/1721.1/163520" rel="alternate"/>
<author>
<name>Carleton, John Thomas.</name>
</author>
<id>https://hdl.handle.net/1721.1/163520</id>
<updated>2025-11-05T05:14:46Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Dynamics of torpedo depth control
Carleton, John Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1992; Includes bibliographical references (leaf 72).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the engineering aspects of a wind tunnel magnetic suspension system</title>
<link href="https://hdl.handle.net/1721.1/163518" rel="alternate"/>
<author>
<name>Chrisinger, John Edvil.</name>
</author>
<id>https://hdl.handle.net/1721.1/163518</id>
<updated>2025-11-05T05:14:10Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">An investigation of the engineering aspects of a wind tunnel magnetic suspension system
Chrisinger, John Edvil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 62).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Evaluation of Skill-Based Imitation Learning Policies for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163461" rel="alternate"/>
<author>
<name>Palleiko, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163461</id>
<updated>2025-10-30T03:24:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Evaluation of Skill-Based Imitation Learning Policies for Robotic Manipulation
Palleiko, Andrew
Imitation learning is a popular approach for obtaining intelligent robotic policies by learning from human demonstrations. Within this field, there is significant interest in the development of multi-task architectures that can efficiently learn diverse sets of tasks. Skill-based imitation learning methods, which abstract action sequences into "skill" representations for planning, offer structural advantages for handling the challenges of multi-task imitation learning that make them an attractive option for this problem. This work presents a novel skill-based imitation learning architecture formulation, with a causal transformer VAE skill-abstraction network paired with an autoregressive transformer planning policy. We find that our skill-abstraction network shows promise in identifying meaningful skills, but that the chosen planning architecture is poorly suited for predicting these skills due to multimodality in the resulting latent space. This is followed by a set of evaluations applied to an existing skill-based method with comparisons to a non-skill-based network on a multi-task dataset. We systematically investigate the performance impacts of six different policy and dataset conditions: data quantity, task variety, retry behavior, control precision, goal representations, and zero-shot transfer. Our experiments reveal limited increases in skill-based policy performance with more demonstrations or task variety, but improvements across architectures through exposure to demonstration retry behavior. Overall, the skill-based architecture demonstrates superior robustness to goal representation variations and low-level process noise compared to the non-skill-based policy, while neither architecture achieves meaningful zero-shot generalization to novel task combinations. These findings provide insights into the current state of imitation learning methods, with the additional goal of establishing a framework for the evaluation of future multi-task architectures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Multimodal Streaming Perception: A Real-Time Perception Scheduling Framework Based on Relevance</title>
<link href="https://hdl.handle.net/1721.1/163460" rel="alternate"/>
<author>
<name>Huang, Dingcheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163460</id>
<updated>2025-10-30T03:24:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Multimodal Streaming Perception: A Real-Time Perception Scheduling Framework Based on Relevance
Huang, Dingcheng
In modern human-robot collaboration (HRC) applications, multiple perception modules jointly extract visual, auditory, and contextual cues to achieve comprehensive scene understanding, enabling the robot to provide appropriate assistance to human agents intelligently. While executing multiple perception modules on a frame-by-frame basis enhances perception quality and information gains in offline settings, it inevitably accumulates latency, leading to a substantial decline in system performance in streaming perception scenarios. Recent work in scene understanding, termed Relevance, has established a solid foundation for developing efficient methodologies in HRC. However, modern perception pipelines still face challenges related to information redundancy and suboptimal allocation of computational resources. Drawing inspiration from the relevance concept and the inherent sparsity of information in HRC events, we propose a novel lightweight perception scheduling framework that efficiently leverages output from previous frames to estimate and schedule necessary perception modules in real-time. Our experimental results demonstrate that the proposed perception scheduling framework effectively reduces computational latency by up to 27.52% compared to conventional parallel perception pipelines, while also achieving a 72.73% improvement in MMPose accuracy and comparable YOLO accuracy. Additionally, the framework demonstrates high keyframe accuracy, achieving rates of up to 98% in dynamic scenes. The results validate the framework’s capability to enhance real-time perception efficiency without significantly compromising accuracy. Additionally, the framework shows potential as a scalable and systematic solution for multi-modal streaming perception systems in human-robot collaboration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems</title>
<link href="https://hdl.handle.net/1721.1/163454" rel="alternate"/>
<author>
<name>Lindberg, Ian G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163454</id>
<updated>2025-10-30T03:24:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems
Lindberg, Ian G.
This thesis explores the design and development of several mechanical elements relevant to two technologies important to a global transition to green energy: hydrogen and electric vehicles. The portion of the thesis relating to hydrogen focuses on preloading mechanisms and high temperature seals, two design spaces crucial to the implementation of solid oxide hydrogen generation. Due to the high operating temperatures (600°C - 800°C), seal materials commonly used in other applications are inadequate, and glass or vermiculite based seals must be used. The delicateness of these seals makes them a common failure point, and consistent application of a preloading force is key to mitigating this. The concept of a variable-bypass piston is proposed as a preloading mechanism suitable for the high temperatures present inside solid oxide electrolyzer systems, and the development of seal geometries as well as flow characterization of porous steel wool seals to enable parametric design is documented. As an alternative to current sealing methods, initial development of a composite seal utilizing materials and manufacturing methods originating in the semiconductor industry was also conducted. The final section of the thesis proposes the concept and covers initial testing of fluid transfer through a kinematic coupling, a topic of potential interest for implementing liquid pack cooling in a system of rapidly swappable batteries for electric vehicles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the Performance of Skeletal Muscle Powered Biohybrid Robots</title>
<link href="https://hdl.handle.net/1721.1/163453" rel="alternate"/>
<author>
<name>Bawa, Maheera</name>
</author>
<id>https://hdl.handle.net/1721.1/163453</id>
<updated>2025-10-30T03:24:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing the Performance of Skeletal Muscle Powered Biohybrid Robots
Bawa, Maheera
Skeletal muscle powers all voluntary motion in many living creatures, enabling behaviors such as walking, jumping, swimming, and flying. The field of biohybrid robotics aims to use biological actuators, such as skeletal muscle, to power adaptable robots that respond to their environment. Previous work in this field has focused on deploying 3D skeletal muscle tissues to power robotic function. In natural systems, muscles can also be organized in 2D formats to power a range of movements such as fish-like swimming and peristaltic pumping. However, long-lasting 2D cultures of skeletal muscle have been precluded by force-generating cells delaminating from their underlying substrate. Building on previous work from our lab demonstrating a method to culture contractile skeletal muscle in 2D formats, this work aims to enhance the performance of these systems by tuning substrate stiffness and topography. We show that optimizing system parameters prolongs actuator lifetime and enhances force by 100x.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production</title>
<link href="https://hdl.handle.net/1721.1/163452" rel="alternate"/>
<author>
<name>Fillon, Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/163452</id>
<updated>2025-10-30T03:24:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production
Fillon, Marie
This thesis presents the development and production of FrED (Fiber Extrusion Device), an educational manufacturing system designed to bridge the gap between theoretical instruction and hands-on practice in process control, computer vision, and smart manufacturing. Building on an existing prototype, this work focused on transitioning FrED from a proof-of-concept into a production-ready system by designing scalable workflows, improving hardware and software integration, and developing tools to ensure traceability and repeatability across builds. A major contribution of this thesis was the enhancement and implementation of a smart factory environment capable of supporting batch production. This included designing and deploying applications using Tulip Interfaces to manage inventory, guide subassembly processes, and monitor production metrics in real time. A modular SKU system and structured bin labeling framework were introduced to reduce errors, maintain version control, and support future growth. Station-specific apps were developed and refined to ensure consistent assembly and simplify onboarding across a rotating team of users. In parallel, this thesis contributed to the evaluation and refinement of a vision-based diameter measurement system using a low-cost USB camera. The system was analyzed under various operating conditions and its limitations under motion and variable lighting were quantified. Multiple image processing strategies were explored and robustness metrics were developed to inform future improvements. To ensure pedagogical relevance, the system was tested in user-facing workshops and public demo sessions. Feedback informed updates to both the assembly process and instructional content. By the end of the development cycle, the system supported the successful production of 35 complete FrED units, establishing a replicable model for small-scale manufacturing. 
This thesis demonstrates how modular digital infrastructure can enable scalable hardware deployment. It also highlights the practical challenges of transitioning from prototype to production and proposes tools and methods that can support broader adoption of smart manufacturing principles in learning environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection</title>
<link href="https://hdl.handle.net/1721.1/163451" rel="alternate"/>
<author>
<name>Sanghai, Rohan S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163451</id>
<updated>2025-10-30T03:24:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection
Sanghai, Rohan S.
Omni-wheels, known for enabling holonomic motion in robotic systems, often introduce vibration due to their complex geometry and multiple contact points. Unlike caster wheels with established testing standards, omni-wheels lack comprehensive characterization methods. While parallel studies by Ilkbahar [1] and Donnellan [2] explore their rolling resistance and static load capacity, a systematic analysis of vibration characteristics remains absent from the literature. This thesis presents an investigation of the vibration behavior of various omni-wheel designs using a Design of Experiments (DOE) approach. A full factorial experimental design was developed, considering factors such as wheel type, rotational speed, applied load, and wheel orientation angle. Individual regression models were developed for each of six wheel types, treating operational parameters as continuous variables. Vibration levels were measured using root mean square (RMS) acceleration, derived from Fast Fourier Transform (FFT) and Power Spectral Density (PSD) analyses of accelerometer data. Results show that rotational speed consistently increased vibration across all wheel designs, while lateral motion (90° angle) consistently reduced vibration compared to forward motion. The effect of applied load varied significantly between wheel designs, with some wheels showing reduced vibration under load while others remained unaffected. Wheels DZ(1) and Vex(5) demonstrated the lowest average vibration levels, though post-test inspection revealed trade-offs with durability, including roller deformation and material degradation. Interaction effects, particularly between angle and speed, were statistically significant for all wheel types, indicating that the benefits of lateral motion are enhanced at higher speeds. 
This research provides a framework for optimizing omni-wheel selection to minimize vibration by developing wheel-specific predictive models that quantify sensitivities and interaction effects across various designs and conditions, improving system performance and stability. The findings highlight that wheel selection must consider not only vibration performance but also trade-offs with durability and rolling resistance, establishing vibration characteristics as a critical consideration alongside other performance metrics when selecting omni-wheels.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester</title>
<link href="https://hdl.handle.net/1721.1/163449" rel="alternate"/>
<author>
<name>Scali, William T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163449</id>
<updated>2025-10-30T03:24:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester
Scali, William T.
Magnetohydrodynamic (MHD) power generation presents a promising approach for harvesting energy from marine environments, offering a sustainable alternative for powering naval assets and coastal infrastructure. While energy harvesting technologies are widely used in terrestrial and aerial applications, their implementation in marine environments remains limited. This thesis explores the feasibility of an MHD Inductive Marine Energy Harvester, optimizing its design for undersea naval applications to enhance energy efficiency and reduce carbon emissions with minimized construction costs. A theoretical 2D model was developed based on Maxwell’s equations and Fourier analysis to characterize the physics governing MHD power generation in seawater. This model was extended to multiple concentric gaps on one device, refining predictions of power output under varying flow regimes. Numerical simulations using MATLAB enabled the evaluation of key parameters, including fluid conductivity, magnetic field strength, and shroud design, to optimize energy conversion efficiency. Furthermore, geographical and coastal tide analyses were conducted to determine optimal deployment locations, maximizing power extraction from natural marine currents. Economic viability was assessed through a cost-benefit analysis, comparing the energy yield per unit cost of the harvester against existing renewable energy technologies and other maritime power sources. Results indicate that under specific conditions, MHD generators can effectively supplement energy demands, reducing reliance on conventional fuel or other electrical power sources. The findings of this research contribute to the advancement of marine renewable energy technologies, demonstrating the potential of MHD induction-based harvesting as a scalable solution for sustainable power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation</title>
<link href="https://hdl.handle.net/1721.1/163448" rel="alternate"/>
<author>
<name>Hall, Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/163448</id>
<updated>2025-10-30T03:24:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation
Hall, Jeff
Over the last 50 years, the leading global environmental hazard has not been hurricanes, lightning, tornadoes, floods, or earthquakes, but extreme heat events. With climate models projecting an increase in the frequency, intensity, and duration of heatwaves in the coming decades, this threat to life is expected to only increase. Air conditioning has been demonstrated to reduce mortality during heatwaves, yet it uses an order of magnitude more energy than necessary to keep a human cool. Using principles of similitude to extrapolate the capability of existing vapor compression equipment, an objective function to maintain energy balance in a human exposed to extreme heat is developed across a design space. The function shows that in a standard forced convection air conditioning system, there is no opportunity to provide emergency cooling of a human due to the slow mass flow rate needed to cool air in a single stream. As such, status-quo attempts to cool humans with general-purpose air conditioning will always be an inefficient use of energy. By focusing on keeping people cool, not spaces, we propose three paths forward for critical human cooling that appropriately match the energy needs of humans: radiative cooling, liquid cooling devices, and low-mass flow air conditioning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly</title>
<link href="https://hdl.handle.net/1721.1/163446" rel="alternate"/>
<author>
<name>Almquist, Ethan T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163446</id>
<updated>2025-10-30T03:24:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly
Almquist, Ethan T.
Design requirements on modern naval platforms are increasing the complexity and criticality of onboard electric plants. They form the backbone of warship operational capability and are at the heart of maritime decarbonization. Tasks such as assessing the ship's capacity in a damaged state, optimizing the mission profile of a fleet of vehicles, and evaluating broad design spaces in an efficient manner are increasingly difficult as electric network complexity increases. Traditional modeling techniques are either too computationally expensive, or lack the fidelity necessary to produce meaningful insights into the electric network's operation. Behavioral modeling bridges this gap, but is underdeveloped to support the system architectures of tomorrow's ships. This work details the advancement of behavioral modeling of electrical systems to incorporate hybrid AC/DC and ring bus architectures, the development of parallelization techniques, and SPARCS: a software package offering Shipboard Parallelized Analytics with a Rapid Configuration Simulator.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video</title>
<link href="https://hdl.handle.net/1721.1/163443" rel="alternate"/>
<author>
<name>Chityat, Inbar</name>
</author>
<id>https://hdl.handle.net/1721.1/163443</id>
<updated>2025-10-30T03:24:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video
Chityat, Inbar
Preterm neonates represent a vulnerable population for which traditional contact-based monitoring devices are not optimized, given their small size and complicated physiology. Adhesive sensors and wires can cause infections, discomfort, and impair the delivery of clinical care. Therefore, these most fragile patients could significantly benefit from remote health monitoring. This thesis establishes the foundation for a multimodal device designed for noncontact monitoring of neonates in the Neonatal Intensive Care Unit (NICU) that integrates a video camera and a radar. The device is used to estimate vital signs such as respiratory rate (RR), using both unimodal (solely video or radar) and multimodal fusion approaches that combine data from both sensors. Preliminary testing was conducted on neonatal simulator mannequins, followed by a clinical study at Tufts Medical Center NICU which has collected data from 16 neonates so far (with the goal of reaching 20). The collected data was processed, labeled, and organized using image processing techniques and manual review, and then analyzed using a Video Vision Transformer (ViViT) architecture, incorporating early, intermediate, and late fusion strategies. Initial analysis was conducted on the mannequin data and the first neonatal subject. The results show that for estimating RR in neonates, the early fusion approach outperformed the unimodal methods. In movement detection, compared to human labeling, the fusion techniques achieved high accuracy and precision. To conclude, this study demonstrates that multimodal analysis has the potential to outperform unimodal approaches by improving accuracy against gold standard monitoring, particularly in challenging real-life conditions, including motion artifacts and poor lighting. This work represents a step toward more robust, non-invasive monitoring solutions for neonatal care, with implications for broader applications in remote health monitoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures</title>
<link href="https://hdl.handle.net/1721.1/163442" rel="alternate"/>
<author>
<name>Finlason, Katana R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163442</id>
<updated>2025-10-30T03:24:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures
Finlason, Katana R.
As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 occurring within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined based on the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtle (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes, the incubator quintupled this value. 
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products</title>
<link href="https://hdl.handle.net/1721.1/163441" rel="alternate"/>
<author>
<name>Edington, David J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163441</id>
<updated>2025-10-30T03:24:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products
Edington, David J.
In the electrification of heavy industry, rapid swappable batteries provide an effective means to minimize vehicle downtime and the cost of operation. However, to allow this technology to take hold, further development of electrical contacts that can both pass high amperage and undergo a high cycle life needs to occur. The development of these electrical contacts is a highly experimental process, and thus establishing a method and test equipment to determine the physical and electrical characteristics of these contacts over their lifetime will allow for the accelerated development of these products. This body of work serves as a design guide to establish a physical testing mechanism to assess contact resistance degradation and physical wear over the lifespan of an electric connector. Data will then be collected on initial contact prototypes to characterize their performance. With this data, designs may be iterated and improved upon in pursuit of creating a universal standard for battery swap technology on electric vehicles in heavy industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distinct roles for energy storage and transmission infrastructure in a renewables-based electric power system</title>
<link href="https://hdl.handle.net/1721.1/163439" rel="alternate"/>
<author>
<name>Kim, Beomjun</name>
</author>
<id>https://hdl.handle.net/1721.1/163439</id>
<updated>2025-10-30T03:24:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Distinct roles for energy storage and transmission infrastructure in a renewables-based electric power system
Kim, Beomjun
Due to the intermittency of renewable resources, achieving a high coverage of renewable generation at low cost is one of the main hurdles to realizing zero-carbon electricity generation. In this study, we analyze the roles of energy storage systems (ESS) and transmission infrastructure in the cost-optimal deployment of a renewable electricity grid in the United States. We find that storage and transmission serve distinctly different functions: transmission is useful for addressing hours-long resource lows, but plays only a supplementary role in mitigating long-duration resource lows. Conversely, storage can handle both short-duration and long-duration resource lows. These different functions are driven in part by the large spatial footprints of the most extreme long-duration resource lows. Furthermore, the total cost of renewable energy in the system and the cost-determining technological components depend on the penetration of renewables relative to total demand, known as the energy availability factor (EAF). When the EAF is sufficiently low, the cost of a cost-optimized system is driven solely by generation costs. For low to intermediate EAF, both generation and transmission costs are dominant factors. At high EAF, generation and storage costs become the dominant factors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wedged Vortex Generator Applications for Marine Vessels</title>
<link href="https://hdl.handle.net/1721.1/163438" rel="alternate"/>
<author>
<name>Kimmeth, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/163438</id>
<updated>2025-10-30T03:24:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wedged Vortex Generator Applications for Marine Vessels
Kimmeth, Jack
This thesis investigates the effectiveness of vortex generators (VGs) in reducing viscous drag in hydrodynamic applications. Initial experimental and computational fluid dynamics analyses identified wedge-shaped VGs as the optimal design for flow manipulation. Comparative testing of three wedge-shaped VG sizes at 1.3 m/s revealed the most effective configuration, which was subsequently evaluated across speeds ranging from 1.0 m/s to 1.6 m/s. The results showed a viscous drag reduction of 7.9% at 1.4 m/s. These findings were extrapolated to a full-scale bulk carrier using appropriate geometric and dynamic scaling factors. Total resistance was partitioned using Holtrop-Mennen approximations, allowing the drag reduction to be realistically applied to operational conditions on a trans-Pacific route. Material and installation cost estimates were also developed. Finally, implications for propulsion efficiency, flow-induced vibrations, and cavitation are discussed, with recommendations for future self-propelled model testing to further explore these effects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prosody in Kichwa</title>
<link href="https://hdl.handle.net/1721.1/163437" rel="alternate"/>
<author>
<name>Chango Masaquiza, Soledad</name>
</author>
<id>https://hdl.handle.net/1721.1/163437</id>
<updated>2025-10-30T03:24:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prosody in Kichwa
Chango Masaquiza, Soledad
This thesis investigates the prosodic system of Salasaka Kichwa, focusing on the interaction between pitch, morphosyntactic structure, and word order in both elicited and spontaneous speech. Based on data from ten native speakers of the Salasaka community, the study analyzes approximately 150 utterances using Praat and ToBI-style prosodic annotation. The findings reveal a consistent alignment between the nuclear pitch accent and the leftmost constituent of the verb phrase in neutral declarative sentences, supporting the hypothesis that Salasaka Kichwa exhibits a head-final syntactic structure. This default prosodic alignment is disrupted by the presence of focus-sensitive or interrogative morphemes such as -mi and -chu, which reliably attract the pitch peak regardless of their position in the clause. In ditransitive constructions, pitch prominence consistently targets the dative-marked argument. Accusative-marked objects also receive prominence, but only when modified; in such cases, it is typically the modifying adjective or contrastive element that bears the highest pitch. Overall, the study demonstrates that prosodic prominence in Salasaka Kichwa is not governed by syntactic structure alone. Instead, it emerges from a layered interaction between morphology, information structure, and pragmatic marking, offering new insights into how prosody encodes grammatical and communicative functions in underdescribed head-final languages.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model Predictive Control Approaches for Dynamic Table Tennis Swinging</title>
<link href="https://hdl.handle.net/1721.1/163436" rel="alternate"/>
<author>
<name>Nguyen, David H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163436</id>
<updated>2025-10-30T03:24:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Model Predictive Control Approaches for Dynamic Table Tennis Swinging
Nguyen, David H.
This thesis presents three model predictive control (MPC) formulations for robotic table tennis swinging, addressing the challenge of generating precise, real-time paddle trajectories for dynamic ball interactions. We explore key differences in optimization structure, solver strategy, and real-time implementation, evaluating each approach through hardware experiments that measure strike condition tracking and hit success. The final controller integrates the full task of a table tennis possession by planning the return ball trajectory through the contact dynamics and generating a swing to achieve it. This controller improves the hit rate of the system from 88.3% to 97.6% and significantly enhances strike condition accuracy and smoothness, enabling control over the landing location and spin of the ball.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots</title>
<link href="https://hdl.handle.net/1721.1/163434" rel="alternate"/>
<author>
<name>Johnston, Julie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163434</id>
<updated>2025-10-30T03:24:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots
Johnston, Julie E.
The UH-60, used for troop transport, MEDEVAC, and mission control, has evolved over the last 45 years from the Alpha model to the Lima and Mike models currently in service. Previous studies investigated the impact of Whole-Body Vibration (WBV) on aviators and the resulting musculoskeletal injury, but none have investigated the efficacy of the Mike model’s Active Vibration Control System (AVCS) in reducing the impact of helicopter vibrations on musculoskeletal health.&#13;
Computational analyses of a biomechanical model, using OpenSim and motion capture at varying levels of vibration, were conducted to quantify the response of the spine and the surrounding muscles when vibratory loads are applied in the posture used to manipulate the flight controls. A musculoskeletal model was developed to represent the aviator in the seated posture required to effectively manipulate the flight controls. To develop the model, the team recorded motion capture data with a pilot in a pilot test for concept validation. These data were then processed and input into the OpenSim inverse kinematics tool to determine joint angles and the muscle-tendon lengths of several muscles in the back. Contrary to initial predictions, the muscles on the right side of the back were not consistently longer than those on the left. &#13;
A survey was also developed that builds upon previous efforts, seeking to understand the aviator’s perspective on musculoskeletal injury and prevention, with a focus on the back. Aviators are asked to describe the cause of their injury, methods of injury prevention, and recovery techniques, encompassing several subpopulations of flight experience: Lima-majority, Mike-only, Mike-majority, and an even mixture of the two. The data attempt to characterize the impact of the AVCS on aviator spine health; the AVCS should decrease the rate of injury by reducing the vibratory loads experienced by the aviator. This survey differs from previous questionnaires in its focus on the user’s perspective on the differences between the two models and on the injury or pain felt by each service member.&#13;
While a trend of reduced injury occurrence was expected among Mike-only aviators versus those with Lima-majority flight hours, this was not the case. Injury prevalence was consistent across most populations, indicating the potential inefficacy of the AVCS. Analysis of open-ended responses, particularly from the hybrid group, provides some context for the perceived impacts of using the AVCS. Some population demographics were not represented in this survey due to the nature of the unit being surveyed, which may limit the validity of some results.&#13;
By quantifying the perceived efficacy of the AVCS as it relates to chronic musculoskeletal injury, using a survey of pilot experience factors (flight hours, airframes, operating theatres, etc.), and by representing the maladaptive posture of the pilots with a computational simulation based on experimental pilot data, a fuller picture is developed of the risks to the near- and long-term health of US Army aviators. The aim is to expand the overall understanding of how vibration affects the musculoskeletal health of aviators and their perceived lifelong health impacts from the profession. The ultimate goal is to aid in the design of additional countermeasures to improve aviator spine health and to serve as a platform for optimization of systems like the AVCS.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices</title>
<link href="https://hdl.handle.net/1721.1/163432" rel="alternate"/>
<author>
<name>Hoo, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/163432</id>
<updated>2025-10-30T03:24:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices
Hoo, Stephanie
Pop-up Satellite Archival Tags (PSATs) are a combination of satellite and archival tags used by marine biologists to collect large-scale movement and behavioral data of large pelagic life for up to two years [1]. However, current commercial PSATs have an unusually high failure rate when tagged on tuna and cost upwards of $4000, making it both difficult and expensive to collect data [14]. Upon investigation, the top two failure modes of tuna-affixed PSATs have been identified as drag from movement/tissue healing and pressure cycling [14]. Current commercial PSAT manufacturers do not account for the vortices shed by fish when testing their designs, a large oversight that could account for their high failure rate [15]. The work herein determined the effects of vortex shedding on PSAT hydrodynamic behavior, used these results to inform the design of novel PSAT body shapes, and conducted a head-to-head comparison of these designs with existing commercial PSATs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats</title>
<link href="https://hdl.handle.net/1721.1/163431" rel="alternate"/>
<author>
<name>Buchanan, Maxwell Calvin</name>
</author>
<id>https://hdl.handle.net/1721.1/163431</id>
<updated>2025-10-30T03:24:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats
Buchanan, Maxwell Calvin
Marine corrosion presents a persistent threat to the reliable operation of U.S. Coast Guard Fast Response Cutters (FRCs). This thesis investigates hybrid cathodic protection strategies combining impressed current cathodic protection (ICCP) systems and sacrificial zinc anodes to combat corrosion on such vessels. Drawing on over 550 cumulative months of ICCP system data across 46 FRCs, this thesis identifies operational trends, failure modes, and unique regional behaviors. To validate observed patterns and explore failure scenarios, the study implements finite element modeling using COMSOL Multiphysics. These simulations replicate normal operation, reference electrode failure, propeller passivation, localized zinc loss, and hull coating failure for both a generic 35m hull and the FRC hull. These models emphasize how system behavior responds to material variations, temperature, and system health, offering a diagnostic framework for optimizing ICCP configurations. Field and laboratory experiments further ground the computational findings. These include shipboard hull potential surveys and analysis of zinc anode wastage across multiple cutters. Controlled experiments on nickel aluminum bronze (NAB) passivation using miniaturized ICCP test systems are explored for further study. Initial results show variation in zinc consumption and corrosion behavior depending on ICCP setpoints, with higher protection levels (-1050 mV) often correlating with reduced zinc depletion. The thesis also explores energy diagnostics onboard FRCs via non-intrusive load monitoring (NILM). A case study on the USCGC WILLIAM CHADWICK describes monitoring auxiliary machinery loads through NILM signatures and suggests expansion to critical panels and DC systems. By integrating fleet data, physical experimentation, and simulation, this thesis advances future efforts in patrol boat corrosion monitoring, ICCP optimization, and resilient microgrid management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design</title>
<link href="https://hdl.handle.net/1721.1/163430" rel="alternate"/>
<author>
<name>Burgess, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163430</id>
<updated>2025-10-30T03:24:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design
Burgess, Michael
In robotics, replicating the natural proficiency with which humans perform manipulation tasks has proven challenging. Modern control schemes are predominantly learning-based and thus depend heavily on data collected via teleoperated demonstrations. Humans rely on their tactile perception to perform contact-rich and dynamic manipulation tasks. By more seamlessly incorporating high-resolution tactile sensing and haptic feedback into teleoperation interfaces, we can work to create stronger demonstration data to support the development of more effective learned control policies. In this thesis, we present two contributions toward this goal. First, we develop an algorithm to estimate the compliance of grasped objects in real-time from tactile images to provide haptic feedback to remote users. This algorithm combines both analytical and learning-based approaches to better generalize across both object shapes and materials. Second, we create a 1-DoF robotic gripper design with integrated tactile sensing. Inspired by the principle of self-similarity, this gripper is designed to better conform to complex object geometries than traditional designs and more securely grasp objects of many shapes and sizes. Together, these contributions can be utilized to create robust, tactile-aware teleoperation platforms. These platforms would facilitate more effective data collection and thereby promote the development of more performative autonomous action in generalized robotic manipulation scenarios.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits</title>
<link href="https://hdl.handle.net/1721.1/163428" rel="alternate"/>
<author>
<name>Turliuk, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163428</id>
<updated>2025-10-30T03:23:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits
Turliuk, Jennifer
What is the net impact of artificial intelligence on climate change? Existing studies focus on AI's footprint, but few analyze AI's trade-offs. This paper develops a framework to quantify both the Greenhouse Gas (GHG) emissions and the climate change costs and benefits of AI systems, addressing the time value of carbon and the installed base of existing AI infrastructure. We examine the energy demands of AI, which are growing rapidly and threatening companies' net-zero commitments, while also analyzing AI's potential to enable emissions reductions through applications such as optimized energy systems, demand response, grid management, and electrification acceleration. This research introduces the Net Climate Impact Score (NCIS) of AI, a novel equation to calculate the net climate impact of AI technologies that considers both immediate emissions and potential future benefits, and provides a methodology for assessing AI projects holistically. We demonstrate that while current AI applications are predominantly emissions-intensive, strategic deployment focused on energy system transformation could potentially deliver net climate benefits within specific time frames and applications. However, improvements in energy efficiency and emissions reductions resulting from AI are, absent climate policy, likely to generate both direct and indirect rebound effects that could undermine the emissions reductions and reduce the climate benefits of AI. The research concludes with policy and industry recommendations that propose technological pathways that could maximize AI's positive impact while minimizing its environmental footprint.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wide Range Switched Mode RF Power Amplifiers and their applications</title>
<link href="https://hdl.handle.net/1721.1/163427" rel="alternate"/>
<author>
<name>Pressel, Adam Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/163427</id>
<updated>2025-10-30T03:24:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wide Range Switched Mode RF Power Amplifiers and their applications
Pressel, Adam Jay
Switched-mode power amplifiers (SMPAs) that can operate across a wide range of power levels and load impedances with fast response speed while maintaining high efficiency are desirable for many applications, including plasma generation and wireless power transfer. We introduce a new wide-range SMPA architecture that provides direct output voltage modulation, enabling it to modulate output power and compensate for resistive load variations. Dynamic frequency modulation is leveraged to address reactive load variations. The new architecture enables all the semiconductor switches to maintain zero-voltage switching across all operating conditions. Experimental results show that the wide-range half-bridge power amplifier delivered a wide power range of 25 W to 95 W across each individual resistive load in the range of 5 Ω to 20 Ω, with up to j15 Ω of reactance. The maximum dc-ac efficiency is 86% with a 20 Ω load and 110.5 W load power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tension-Leg Platform for Offshore Diffusor-Augmented Hydrokinetic Turbine</title>
<link href="https://hdl.handle.net/1721.1/163426" rel="alternate"/>
<author>
<name>Mannier, Robert B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163426</id>
<updated>2025-10-30T03:24:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Tension-Leg Platform for Offshore Diffusor-Augmented Hydrokinetic Turbine
Mannier, Robert B.
Harnessing marine energy offers significant potential for advancing clean and sustainable power generation. This thesis focuses on the design and optimization of a diffuser-augmented hydrokinetic turbine, supported by a tension-leg platform, to harness ocean and tidal currents for renewable energy production. By incorporating diffuser technology, the turbine’s efficiency is enhanced, increasing the coefficient of power and enabling effective energy capture even in environments with lower current speeds.&#13;
The research involves 2D and 2D axisymmetric modeling of the diffuser and turbine using Actuator Disk Theory (ADT), with tools such as Rhino and Star CCM+. Mounted on a floating tension-leg platform anchored to the seabed, the turbine is designed to exceed the Betz limit, maximizing power output and advancing offshore energy harvesting capabilities.&#13;
This thesis is solely focused on the design and optimization of the hydrokinetic turbine, providing an in-depth analysis of diffuser performance. The findings contribute to the development of marine renewable energy technologies, promoting sustainable and efficient power generation from ocean and tidal currents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation</title>
<link href="https://hdl.handle.net/1721.1/163422" rel="alternate"/>
<author>
<name>Trono Figueras, Renato</name>
</author>
<id>https://hdl.handle.net/1721.1/163422</id>
<updated>2025-10-30T03:23:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation
Trono Figueras, Renato
The reduction of sonic boom loudness to within acceptable limits is a crucial factor for the viability of supersonic aircraft. This thesis presents a computational framework for simulating sonic boom propagation using an output-based adaptive, higher-order finite element method. The research employs the Variational Multiscale with Discontinuous Subscales (VMSD) method, integrating Continuous Galerkin (CG) and Discontinuous Galerkin (DG) features, referred to as VMSD-BR2. This approach leverages static condensation to manage computational cost while utilizing DG stabilization techniques for enhanced stability and adjoint consistency. A key component of this work is the application of the dual weighted residual (DWR) method for output error estimation, which in turn drives the mesh optimization process. The method’s efficacy is validated using smooth solutions for the viscous Burgers equation and the adjoint PDE for a volume output functional. Additionally, artificial viscosity is incorporated via a shock sensor PDE approach to handle shock presence, with necessary corrections applied to the DWR error estimate. The VMSD-BR2 method is then applied to a real-world scenario solving the augmented Burgers equation, which models the propagation of sonic booms. The results include the pressure perturbation field, adapted meshes, ground-level B-SEL filtered pressure, and perceived loudness at ground level, demonstrating the method’s practical application.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>C. elegans as a Platform for Multimodal Neural Data Integration</title>
<link href="https://hdl.handle.net/1721.1/163421" rel="alternate"/>
<author>
<name>Simeon, Quilee</name>
</author>
<id>https://hdl.handle.net/1721.1/163421</id>
<updated>2025-10-30T03:24:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">C. elegans as a Platform for Multimodal Neural Data Integration
Simeon, Quilee
Systems neuroscience has traditionally been fragmented into investigations at discrete levels of organization, creating methodological and conceptual gaps that hinder unified understanding of neural function. This thesis examines the nematode Caenorhabditis elegans as a platform for integrating diverse neural data modalities, offering a pathway to bridge these gaps. The hermaphrodite C. elegans, with its completely mapped connectome, optical transparency, genetic tractability, and stereotyped nervous system of only 302 neurons, presents an opportunity for comprehensive measurements across multiple dimensions of neural function. The review is organized around three fundamental neural data modalities accessible in C. elegans: (1) molecular genetic profiles, (2) network connectivity, and (3) neural activity dynamics. Historically studied in isolation, these complementary data types are increasingly being bridged through technological and computational innovations. We examine experimental advances enabling whole-nervous-system measurements of these modalities, as well as data standardization efforts and computational frameworks for cross-modal integration. While understanding the relationship between neural activity and behavior remains a fundamental goal of systems neuroscience, this thesis focuses on neural data acquisition and integration rather than behavioral analysis, which has been extensively covered elsewhere [1]. We conclude with some original proposals to overcome current limitations in multimodal data acquisition and synthesis, and suggest future directions toward a holistic understanding of how molecular components, network connectivity, and cellular physiology collectively give rise to neural function in C. elegans. These integrative approaches establish a roadmap that may eventually scale to more complex nervous systems and advance our understanding of neural computation across species.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World</title>
<link href="https://hdl.handle.net/1721.1/163420" rel="alternate"/>
<author>
<name>Sutcliffe, Douglas</name>
</author>
<id>https://hdl.handle.net/1721.1/163420</id>
<updated>2025-10-30T03:24:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World
Sutcliffe, Douglas
Fusion energy presents a promising solution for current global decarbonization goals. This thesis presents an adaptable model for evaluating mineral sufficiency in the global deployment of fusion power. Using the ARC Magnetic Confinement (MC) Deuterium-Tritium (D-T) fusion concept as a framework, this research integrates mineral usage estimates from the International Energy Agency (IEA) with MIT Energy Initiative’s (MITEI) energy production forecasts by generation technology. Using MITEI’s $2,800/kW cost scenario for fusion power generation, the model situates the demand for fusion-critical minerals within the broader context of growing mineral needs driven by the clean energy transition, and offers specific, quantitative insights into mineral sufficiency risks. The study finds that beryllium will face significant shortages solely due to fusion demand, with resource exhaustion projected to occur within 40 years. When accounting for additional demands from Electric Vehicles (EVs), battery storage, and transmission infrastructure, chromium and nickel are projected to exhaust economically extractable reserves within 21 to 35 years at current prices. The research further reveals that for nine of the thirty elements evaluated, over 50% of production is concentrated in a single country, and for half of the minerals China is the largest producer, introducing geopolitical risks. Notably, at just 13 kg per reactor, the demand for Rare Earth Elements (REEs) is not exposed to significant risk, even excluding the top-producing country. The research also surfaces current reactor designs and strategies that could help mitigate each identified risk.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined based on the temperature of environment in which it is incubated, which can result in skewed sex ratios within a population like in the case of the critically endangered Hawksbill Sea Turtles (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex-imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes the incubator quintupled this value. 
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.</title>
<link href="https://hdl.handle.net/1721.1/163419" rel="alternate"/>
<author>
<name>Espinal, Michael A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163419</id>
<updated>2025-10-30T03:23:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 occurring within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined by the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtle (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to reproduce, raising the potential for extinction. Currently, no viable long-term solutions exist to effectively and safely cool sea turtle eggs while keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings, helping to rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build, and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at the Oracabessa Bay Sea Turtle Project. Results showed that the incubator is not only easy to manufacture and use but also successfully regulates the temperature range in favor of more male hatchlings, while increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests; when provided with cool water changes, the incubator quintupled this value.
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
Espinal, Michael A.
Foams, widely used in packaging, insulation, protective gear, and medical implants, are versatile materials but mechanically inefficient due to their bending-dominated microstructure, which leads to a steep, power-law loss of stiffness and strength at low relative densities. Architected materials address this limitation through engineered microstructures that achieve near-linear scaling of properties with relative density. However, truss- and plate-based designs suffer from stress concentrations, while shell-based architectures, though more mechanically efficient, remain highly sensitive to defects and are challenging to fabricate at scale via additive manufacturing. Spinodal architected materials, derived from scalable spinodal decomposition processes, offer a promising alternative with aperiodic, double-curvature microstructures that enhance mechanical efficiency at low relative densities. Nevertheless, their behavior beyond the elastic regime remains largely unexplored. This thesis investigates the nonlinear mechanics of spinodal architected materials by combining a comprehensive experimental dataset with computational modeling. A total of 107 unique morphologies were fabricated and subjected to uniaxial compression along three principal directions, resulting in a dataset of 321 stress-strain curves. Morphologies were generated via simulated spinodal decomposition, allowing controlled variation of anisotropy. Explicit finite element simulations, validated against experimental data, revealed that plastic energy dissipation dominates the large-strain mechanical response. To quantitatively link local morphology to global mechanical behavior, we introduce the Normal Participation Factor (NPF), a scalar geometric parameter that captures the alignment between surface normals and the loading direction. We demonstrate that the NPF is a material-agnostic proxy for equivalent plastic strain and is linearly correlated with the total energy dissipated during deformation. 
Combining insights from both experiments and simulations, we establish the NPF as a first-order predictive tool for mechanical behavior under large strains, enabling structure-property predictions without reliance on costly simulations or extensive experimental testing. Altogether, this work lays the foundation for developing finite-strain structure-property relationships in spinodal architected materials, advancing their potential for real-world applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Informality: An Approach to Classifying Sidewalk Informal Practices and Elements Through Street View Imagery</title>
<link href="https://hdl.handle.net/1721.1/163415" rel="alternate"/>
<author>
<name>Co, Dominic Lim</name>
</author>
<id>https://hdl.handle.net/1721.1/163415</id>
<updated>2025-10-30T03:23:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mapping Informality: An Approach to Classifying Sidewalk Informal Practices and Elements Through Street View Imagery
Co, Dominic Lim
By 2050, the United Nations estimates that 68 percent of the world’s population will live in cities, with 90 percent of that growth concentrated in rapidly urbanizing informal communities across Africa, Latin America, and Asia. In these contexts, informality, defined as unregulated commerce, adaptive reuse of space, incremental construction, and self-organized infrastructure, shapes the everyday choreography Jane Jacobs called the “sidewalk ballet.” Yet because governments rarely collect census-grade data on such activity, informality remains poorly documented and weakly understood. This thesis introduces a transferable computational framework to formalize informality by transforming street imagery into an auditable taxonomy of informal street-level elements, activities, and practices. The framework is tested in two contrasting districts of Ho Chi Minh City, District 1 and District 5, where sidewalks are highly contested by vendors, pedestrians, and regulators. The contribution of this thesis is two-fold. First, this thesis contributes a three-stage pipeline for classifying sidewalk informality. Using Seesaw (Moll et al., 2022), a CLIP-based feedback loop retrieves and soft-labels candidate scenes. This is followed by manual verification and fine-tuning of a lightweight ResNet on binary categories (e.g. stationary vs. mobile vendors). Compared to the zero-shot model Qwen-VL-Max, the fine-tuned ResNet delivered more balanced performance (precision/recall: 0.62–0.78) and better handled nuanced, context-sensitive distinctions. In contrast, Qwen-VL-Max favored recall and object salience but struggled with subtle or spatial cues such as mobile vs. stationary setups. Second, this thesis develops a taxonomy and annotated dataset of informality, which is used to reveal spatial inequities in sidewalk use. 
By converting curbside complexity into structured, updateable categories, the framework enables planners to recognize the adaptive value of informal practices, target genuine hazards, and design interventions for more equitable urban planning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/163414" rel="alternate"/>
<author>
<name>Dickerman, Matthew F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163414</id>
<updated>2025-10-30T03:23:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization
Dickerman, Matthew F.
The maritime shipping industry, responsible for approximately 3% of global greenhouse gas emissions, faces growing pressure to achieve net-zero emissions by 2050 under the International Maritime Organization (IMO) framework. Alternative fuels such as liquefied natural gas, ammonia, and methanol present challenges related to energy density, infrastructure, safety, and cost. Nuclear microreactors offer high energy density, zero operational emissions, and multi-year endurance, but require coordinated regulatory development and stakeholder engagement for commercial adoption.&#13;
&#13;
This thesis evaluates the feasibility of integrating microreactors into container ship designs employing electric propulsion and standardized intermodal logistics. Holos-Quad microreactors are selected based on their modular architecture, transportability, and compatibility with marine operations. Detailed ship concepts are developed for Feeder, Panamax, and New-Panamax classes, accompanied by a phased fleet development strategy.&#13;
&#13;
Economic modeling compares the lifecycle costs of conventional and microreactor-powered ships, incorporating capital expenditures, operating costs, financing assumptions, and carbon pricing. Fleet-level analysis indicates that microreactor-powered ships can achieve comparable or improved profitability while eliminating nearly 44 million metric tons of CO2e emissions across a ten-ship fleet. Sensitivity analyses confirm the robustness of these results across a wide range of future scenarios.&#13;
&#13;
By integrating stakeholder analysis, technical feasibility assessments, and economic modeling, this research establishes a commercially viable framework for zero-emission nuclear-powered shipping, offering a scalable pathway toward sustainable maritime operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A magnetic levitation testbed for development of real-time control frameworks applied in fusion</title>
<link href="https://hdl.handle.net/1721.1/163413" rel="alternate"/>
<author>
<name>Lee, Yehoon</name>
</author>
<id>https://hdl.handle.net/1721.1/163413</id>
<updated>2025-10-30T03:23:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A magnetic levitation testbed for development of real-time control frameworks applied in fusion
Lee, Yehoon
This thesis presents the development of a magnetic levitation device as a hardware-in-the-loop platform for research in Control and Data Acquisition frameworks applied to fusion experiments. Specifically, the testbed aims to demonstrate distributed, modular control using a plasma control system framework being developed at the Plasma Science and Fusion Center at MIT. This framework integrates a real-time control framework, MARTe2, and a data management framework, MDSplus, to provide platform flexibility and robust data management for rapid prototyping of control systems. Both frameworks are widely used individually in fusion experiments worldwide. The magnetic levitation setup is centered around a single electromagnet coil that levitates a permanent disk magnet from above. Implemented with the integrated MARTe2/MDSplus framework, the controller, actuator, and sensors are distributed over the network. With the magnetic levitation testbed, this thesis achieves three objectives: (1) formulation of a physics-based model of the system, (2) development of a controller in a modular, networked framework, and (3) training and implementation of learning-based methods within the framework. First, a state-space model for single-axis magnetic levitation is formulated based on theory and refined with magnetic field measurements. A feedback controller is then developed and implemented with MATLAB Simulink. Afterwards, a vision-based observer is developed to estimate the position and tilt of the levitated magnet. Pose-image datasets are auto-labeled using fiducial markers and are used to train a convolutional neural network. Finally, the trained network is applied in system identification of the final controlled system. 
Through this system development, this thesis shows that the integrated MARTe2/MDSplus framework is robust for real-time control of a networked system, and that its structural modularity is advantageous for developing and testing learning-based models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fast Assay of Bacteria Cell Permeability for Genetic Transformation</title>
<link href="https://hdl.handle.net/1721.1/163412" rel="alternate"/>
<author>
<name>Nieves, Charmaine</name>
</author>
<id>https://hdl.handle.net/1721.1/163412</id>
<updated>2025-10-30T03:23:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Fast Assay of Bacteria Cell Permeability for Genetic Transformation
Nieves, Charmaine
Genetic engineering of bacterial cells is fundamental to research seeking to understand bacterial species for a broad range of applications. One method of intracellular delivery of foreign DNA during the genetic engineering process is the use of electroporation to create pores in the bacterial cell membrane. Current methods for assessing pore formation do not directly measure cell permeabilization or enable same-day assessment. In this thesis, a novel fast-screening protocol combining SYTOX Green, microfluidics, and fluorescence imaging is evaluated for its capability to assess multiple conditions for cell permeabilization within a single day. By imaging bulk suspensions of post-electroporation cells stained with intracellularly delivered SYTOX, multiple electroporation conditions can be rapidly screened for cell permeabilization. This fast-screening protocol utilizes standard microbiology equipment and low-cost microfluidic imaging chambers, lowering the barrier to adoption and significantly reducing experimental time compared to conventional protocols involving foreign DNA delivery. Importantly, by decoupling permeabilization assessment from foreign DNA uptake, this method isolates the effect of membrane permeabilization from confounding factors such as restriction-modification systems. As a result, it provides a more accurate qualitative and quantitative assessment of bacterial membrane disruption. This approach enables same-day evaluation of electroporation conditions regardless of bacterial growth rate, potentially accelerating the optimization of intracellular delivery in gene editing applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes</title>
<link href="https://hdl.handle.net/1721.1/163411" rel="alternate"/>
<author>
<name>Chong, Jinger S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163411</id>
<updated>2025-10-30T03:23:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes
Chong, Jinger S.
Accurate human motion prediction with uncertainty estimation is essential for safe and efficient human-robot collaboration, where robots must anticipate and react to human movements in real time. Existing methods either rely on sophisticated techniques that demand extensive training data and sacrifice interpretability, or use simpler approaches like conventional Gaussian Processes (GPs) that fall short in performance. To address this gap, we propose a novel structured multitask variational GP framework that explicitly incorporates joint dependencies to reflect human kinematics. We further enhance this framework by integrating angular velocity constraints, which improve the physical plausibility of predictions. The addition of constraints alone yields up to a 66% reduction in mean angle error (MAE) and an 84% improvement in negative log-likelihood (NLL) of the ground truth, outperforming standard GP baselines across a wide range of motion types and prediction horizons. Among model variants, our structured GP with constraints offers the best tradeoff, achieving MAE within 1.1–2.6% and NLL within 0.001–0.012 of the best-performing model, while maintaining significantly lower overconfidence rates (OCR), particularly at short horizons where the independent GP model's OCR reaches nearly 45%. These results underscore the importance of incorporating structure and context in human motion prediction, demonstrating that even simpler probabilistic models like GPs can achieve substantial performance gains when augmented with such information.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics</title>
<link href="https://hdl.handle.net/1721.1/163410" rel="alternate"/>
<author>
<name>Roy, Ronak</name>
</author>
<id>https://hdl.handle.net/1721.1/163410</id>
<updated>2025-10-30T03:23:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics
Roy, Ronak
The high-level control algorithms responsible for achieving dynamic locomotion in legged robots depend on accurate torque production to match real-life performance with simulated performance. To achieve accurate torque production, actuators must run high-bandwidth, low-level torque control, and developing high-performance low-level controllers requires accurate actuator models. This thesis covers the physical model of the Permanent Magnet Synchronous Motor (PMSM), a very common type of actuator in dynamic robotics. This thesis details the derivation of the PMSM linear model, how to adapt the model depending on the physical construction of a real motor, and the implementation of Field-Oriented Control (FOC) to achieve torque control. This thesis also describes a novel design of a high-precision dynamometer, which allows a motor to be coupled with an impedance and a torque sensor in order to accurately characterize the torque production characteristics of the motor. Using this dynamometer and other experimental setups, this thesis validates the model and determines parameters for multiple different actuators. Finally, this thesis proposes an augmented PMSM model that accounts for the nonlinear saturation behavior of the motor, validates the principle with hardware experiments, and demonstrates a nonlinear torque model and gain-scheduled current controller that improve torque tracking performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels</title>
<link href="https://hdl.handle.net/1721.1/163409" rel="alternate"/>
<author>
<name>Ilkbahar, Kayra B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163409</id>
<updated>2025-10-30T03:23:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels
Ilkbahar, Kayra B.
Omnidirectional wheels (omni wheels) are a type of wheel technology similar to caster wheels but capable of simultaneous longitudinal and lateral motion, making them suitable for holonomic motion applications. In recent years, their popularity has grown substantially in areas such as educational robotics, autonomous vehicles, and industrial automation. Despite their similarity to caster wheels in both function and application, omni wheels are a much less mature technology and few agreed-upon standards exist for their design and testing. This thesis covers the design of a test procedure and its requisite test apparatus to characterize the rolling resistance of omni wheels across various test conditions, and focuses specifically on the mechanical and electrical design of an apparatus which can measure the rolling resistance coefficient of omni wheels while modulating their load weight, travel angle, and travel speed.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Structural Approach to Measuring Time-varying Risk Aversion</title>
<link href="https://hdl.handle.net/1721.1/163345" rel="alternate"/>
<author>
<name>von Turkovich, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/163345</id>
<updated>2025-10-22T03:34:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Structural Approach to Measuring Time-varying Risk Aversion
von Turkovich, Nick
Non-homothetic preferences have the potential to rationalize important asset pricing facts, including time-varying risk premia and business cycle movements in asset prices (e.g., Campbell and Cochrane (1999)). This paper offers a structural approach to measuring time-varying risk aversion. Motivated by the literature on consumption commitments (e.g., Flavin and Nakagawa (2008), Chetty and Szeidl (2016), Chetty, Sandor, and Szeidl (2017)), I develop a model in which investors have nonseparable preferences over housing and non-housing consumption and must consume a minimum amount of housing each period. Non-housing consumption is assumed to be flexibly chosen. The key insight is that the intratemporal optimality condition between the two goods reveals information about the surplus consumption ratio, a key variable driving risk aversion. A cointegrating relationship between relative quantities and prices allows us to identify the elasticity of intratemporal substitution and measure surplus housing consumption. Using aggregate U.S. consumption data from 1959 to the present, the measured surplus consumption ratio demonstrates clear business cycle fluctuations, rising during expansions and falling during recessions. Consistent with the theory, this measure also predicts future excess returns.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities</title>
<link href="https://hdl.handle.net/1721.1/163344" rel="alternate"/>
<author>
<name>Epstein, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163344</id>
<updated>2025-10-22T03:34:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities
Epstein, Andrew
The commonwealth of Massachusetts has ambitious decarbonization goals enshrined in law and has been establishing the regulations to achieve them. Through its Department of Public Utilities regulatory rulings, the state has required local gas and electric utilities to pursue decarbonization not only by reducing the emissions of their electric supply but also by actively supporting gas load reduction. The residential heating sector dominates this effort, with programs like MassSave incentivizing customer adoption and now MA DPU 20-80-B&#13;
requiring gas utilities to demonstrate that they have sufficiently evaluated the possibility of non-pipeline alternatives, including but not limited to electrifying customers instead of reinvesting in the gas system for all future gas investments.&#13;
&#13;
This paper looks at a single Massachusetts utility, National Grid, and evaluates where its customers are switching to electric heat and which mechanisms are driving current adoption. It further evaluates where, geographically, National Grid could invest in electrification instead of replacement gas investments under the new 20-80-B order. In doing so, it establishes a model for cost-benefit calculations related to prospective NPA projects. This paper then examines the degree to which ongoing electrification efforts are aligned with one another. Finally, this paper explores concerns that the process of electrification might be regressive, leaving behind those who cannot afford to electrify their systems and forcing them to pay ever-increasing prices as the full cost of the gas system is recovered through rates from a shrinking population of consumers. In evaluating these concerns, it determines the geographic correlation between ongoing decarbonization efforts and communities already facing housing burden.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables</title>
<link href="https://hdl.handle.net/1721.1/163343" rel="alternate"/>
<author>
<name>Salata, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/163343</id>
<updated>2025-10-22T03:34:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables
Salata, Elizabeth
Electrical connection errors arise frequently during manufacturing. It is optimal to repair these errors at General Assembly Trim Line stations, when the wiring harnesses are still exposed and easily accessible. However, the time required to locate the cause of the errors often exceeds Trim station cycle times, so most repairs are delayed until after General Assembly. Because of the implications of shutting down the line, this results in significantly higher repair times, scrap costs, and resource use. To overcome these challenges, there is clear evidence supporting the use of Augmented Reality (AR) tools to innovate and streamline manufacturing processes. This master's thesis identified deficiencies in the current standard operating procedure for addressing errors and used a human-centered design approach to develop a novel error diagnostic process using an AR overlay technique to pinpoint where on the vehicle the problem lies. This thesis also conducted an experiment to assess the performance, success rate, and perceived cognitive load of the two processes. The data collected from the experiment provided sufficient evidence that the diagnostic process developed for this thesis reduces the elapsed time to locate a connection error by 75%, with a statistically significant reduction in overall perceived cognitive load. The likelihood of widespread adoption of the AR overlay process was assessed based on the outlook for further AR hardware development, safety considerations in automotive manufacturing environments, and the level of enthusiasm of the stakeholders consulted for this research project.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers</title>
<link href="https://hdl.handle.net/1721.1/163342" rel="alternate"/>
<author>
<name>Sirgo, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/163342</id>
<updated>2025-10-22T03:34:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers
Sirgo, Alex
As the demand for data centers continues to grow, so does their energy consumption, making it increasingly important to develop sustainable and cost-effective strategies for powering them with carbon-free electricity. This thesis explores a techno-economic modeling framework that evaluates combinations of solar, wind, and battery energy storage systems to assess their ability to meet a data center’s electricity demand with on-site renewable generation. The model fills a gap in current literature by focusing on real-time energy matching using co-located infrastructure, rather than traditional off-site procurement methods like power purchase agreements and renewable energy credits.&#13;
&#13;
Using real-world weather and price data, the simulation calculates hourly generation, storage behavior, and grid interactions across a 20-year period. A financial model then calculates the levelized cost of energy (LCOE) for each system configuration. Results show that wind energy generally provides the lowest-cost renewable supply option, while hybrid solar and wind configurations improve renewable penetration. Battery storage plays a key role in shifting excess generation to periods of undersupply, but its economic viability depends on system sizing. Across different system configurations, renewable penetration ranged from 31.3% to 97.8%, while LCOE varied from $27.5/MWh to over $100/MWh, illustrating the trade-offs between cost and grid independence.&#13;
&#13;
By providing a structured analysis of the trade-offs between renewable penetration and cost, this research offers insight into how data centers and other energy-intensive facilities can design dedicated carbon-free energy systems. The findings underscore the importance of balancing resource diversity and storage investment to achieve decarbonization goals while maintaining economic viability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diagnostics in Additive Manufacturing Using Image-Based Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/163341" rel="alternate"/>
<author>
<name>Varma, Arun Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163341</id>
<updated>2025-10-22T03:34:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Diagnostics in Additive Manufacturing Using Image-Based Machine Learning
Varma, Arun Alejandro
Additive Manufacturing (AM) is a vital capability in the aerospace industry. Blue Origin manufactures a substantial share of engine parts via metal AM. To meet growing customer demand, the company must dramatically increase engine throughput and, thus, 3D prints. Blue Origin has identified non-destructive testing (NDT) – particularly, Computed Tomography (CT) scanning – as an unsustainable bottleneck to expanding AM capacity. Not only is this process expensive, but, critically, there are not enough aerospace-grade CT machines in the world to support projected throughput. Without process change, meeting customer demand will soon become impossible. Yet, these scans provide important quality control, and any reduction in NDT must be accompanied by assurances of engine part integrity. This thesis introduces a diagnostic system that safely alleviates the bottleneck, and further yields insights that end-stage NDT alone cannot provide. The proposal is a machine learning system that evaluates the manufacturing process itself, examining layer-by-layer photographs captured during printing. It is predicated on two hypotheses: (1) These images, considered together, provide a synthetic 3D illustration of the build process; and (2) Machines can be taught to assess these process signatures dependably. The resulting system provides rich diagnostics. It achieves near-perfect anomaly recognition – 100% when using conservative defect thresholds. Operationally, the system can (at minimum) safely enable a 37-54% reduction in NDT, translating to millions of dollars in annual cost savings. In practice, this reduction will likely be higher. The system further enables early process intervention and a more data-driven approach to manufacturing intelligence. This work turns what began as an unsustainable bottleneck into an opportunity for enhanced quality control, process intelligence, and long-term manufacturing resilience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Substitution among Social Media Platforms: Evidence from App Tracking Panel Data</title>
<link href="https://hdl.handle.net/1721.1/163340" rel="alternate"/>
<author>
<name>Lagutina, Rina</name>
</author>
<id>https://hdl.handle.net/1721.1/163340</id>
<updated>2025-10-22T03:34:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Substitution among Social Media Platforms: Evidence from App Tracking Panel Data
Lagutina, Rina
This thesis explores a novel approach to competitive intelligence in the social media ecosystem by leveraging external mobile panel data to study substitution dynamics. It focuses on context-specific behavioral patterns to identify which platforms compete for user attention in given situations. Using mobile app session data from April 2023 for approximately 5,000 users, the analysis segments usage into three behavioral contexts – morning, evening, and at-home sessions – and characterizes user-app interactions through descriptive statistics. K-means clustering is applied to identify archetypes of usage behavior across these contexts, revealing distinct patterns such as quick-check habits, deep content consumption, and intensive texting. By comparing app usage profiles across contexts, the study uncovers shifts in how and when platforms are used, highlighting subtle substitution dynamics. To validate the findings, the study analyzes app usage during service outages, testing whether potential substitutes see increased engagement when a competing platform is unavailable. These insights offer a richer, context-aware framework for product managers to uncover indirect competition and tailor platform strategies to specific user behaviors. Limitations include reliance on behavioral data without content-level detail, a mobile-only focus, and demographic skew in the panel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh</title>
<link href="https://hdl.handle.net/1721.1/163339" rel="alternate"/>
<author>
<name>Bari, Md Mustabeen Ul</name>
</author>
<id>https://hdl.handle.net/1721.1/163339</id>
<updated>2025-10-22T03:34:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh
Bari, Md Mustabeen Ul
This thesis develops a systems-based policy framework for Generative Artificial Intelligence (GenAI) implementation in developing economies, with specific application to Bangladesh. While GenAI's potential productivity and labor market impacts are well-studied in developed economies, limited research addresses the challenges faced by developing countries positioned primarily as technology consumers rather than producers. The research employs causal loop diagramming to map interactions between five critical policy domains: human capital development, digital infrastructure, data sovereignty, sectoral stimulus, and governance.&#13;
&#13;
The resulting framework identifies four primary reinforcing mechanisms that can accelerate adoption and three balancing mechanisms related to labor displacement. To validate the framework, the research analyzes contrasting implementation approaches from India and Egypt, demonstrating the importance of cross-domain synergies in effective policy design.&#13;
&#13;
Applied to Bangladesh, the framework yields a dual-entry strategy focusing on healthcare and education sectors as initial implementation domains, leveraging the country's strategic advantages while addressing resource constraints through a consortia-based implementation model that creates institutional resilience. The thesis contributes both a reusable conceptual toolkit for analyzing GenAI policy in resource-constrained settings and an initial context-anchored roadmap for Bangladesh. Future research should refine the framework through longitudinal case studies while developing more detailed, stakeholder-engaged implementation plans for Bangladesh that include concrete budget allocations, institutional responsibilities, and measurable outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Value of Digitizing Manufacturing Environments</title>
<link href="https://hdl.handle.net/1721.1/163338" rel="alternate"/>
<author>
<name>Briggi, Conor S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163338</id>
<updated>2025-10-22T03:34:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Value of Digitizing Manufacturing Environments
Briggi, Conor S.
There is significant variability and dispute around the value of digitally transformed manufacturing environments, and no single valuation methodology is broadly accepted. The variability stems from time dependencies, implementation effectiveness, and the dynamic environments in which digital solutions are deployed. However, an accurate accounting of this value is essential to company strategic planning. The research outlines how to approach this variability, which cost parameters to consider, the primary sources of value generation, and best practices for implementing Smart Factories. A tool that addresses these issues was successfully developed and deployed at Stanley Black &amp; Decker, helping the company assess the performance of its digitization efforts and tailor the delivered solution to optimize manufacturing performance. Results from this tool showed a positive expected return on investment and are provided to contextualize efforts in similar areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance</title>
<link href="https://hdl.handle.net/1721.1/163337" rel="alternate"/>
<author>
<name>Lorente Anon, Carla</name>
</author>
<id>https://hdl.handle.net/1721.1/163337</id>
<updated>2025-10-22T03:34:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance
Lorente Anon, Carla
Predictive maintenance plays a critical role in industrial operations by enabling organizations to detect potential equipment failures before they occur. However, while sensor data can identify anomalies such as excessive vibration or temperature fluctuations, technicians often struggle to efficiently diagnose and resolve the root causes of these alarms. This research presents a generative AI-powered chatbot designed to enhance the root cause diagnosis process in predictive maintenance by leveraging multimodal retrieval-augmented generation (RAG) and advanced AI-driven troubleshooting capabilities.&#13;
&#13;
The chatbot integrates multiple functionalities to support maintenance teams in resolving alarms quickly and accurately. Its time series analysis module processes real-time sensor data, identifying abnormal patterns and guiding users through a structured troubleshooting workflow. The retrieval-augmented generation (RAG) engine allows the chatbot to retrieve and synthesize relevant troubleshooting information from technical manuals, historical maintenance records, and structured knowledge bases, ensuring that technicians receive precise, grounded outputs. Additionally, the chatbot supports multimodal interactions, enabling users to upload images, audio, and video for more comprehensive diagnostics. By analyzing uploaded images of damaged components, transcribing spoken maintenance reports, and processing video footage of equipment malfunctions, the chatbot enhances problem identification and resolution.&#13;
&#13;
Another key feature of the chatbot is its interactive guided conversation system, which enables multi-turn dialogues that refine diagnostics dynamically based on technician input. Instead of providing static troubleshooting steps, the chatbot continuously adapts its responses to ensure that users receive the most relevant recommendations as the diagnostic process unfolds. To maintain safety and reliability, the system incorporates AI guardrails, filtering inappropriate or irrelevant inputs while ensuring that generated responses align with best practices for industrial maintenance.&#13;
&#13;
An evaluation framework is proposed to assess the chatbot’s effectiveness, focusing on retrieval accuracy, response relevance, and diagnostic efficiency. Initial results demonstrate an approximately 30% reduction in diagnostic time, highlighting the chatbot’s potential to improve maintenance workflows, reduce downtime, and enhance technician productivity. This research underscores the transformative role of multimodal generative AI in predictive maintenance and lays the foundation for broader industrial applications. As a result of this work, a patent has been filed to protect the novel architecture and methods developed. Future work could focus on expanding retrieval capabilities to include video, integrating intelligent task automation for dynamic work order generation, and refining alarm prioritization using adaptive risk-based assessments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intangible Investments and the Accrual-Cash Flow Relationship</title>
<link href="https://hdl.handle.net/1721.1/163332" rel="alternate"/>
<author>
<name>Soares, Fabio</name>
</author>
<id>https://hdl.handle.net/1721.1/163332</id>
<updated>2025-10-22T03:34:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Intangible Investments and the Accrual-Cash Flow Relationship
Soares, Fabio
This paper investigates whether the weakening negative relationship between accruals and operating cash flows can be attributed to the immediate expensing of intangible investments under current accounting standards. Building on the framework proposed by Green et al. (2022), I examine how the mechanical capitalization of intangible investments affects the accrual-cash flow relationship across firms with varying R&amp;D intensities. I show that the capitalization impacts the relationship in unexpected ways, indicating that the proposed rationale cannot fully explain the observed trend. I further exploit differences in accounting treatments under IFRS and US GAAP to test whether increased capitalization of intangible investments through development costs strengthens the relationship. I find that the relationship is significantly more negative under IFRS than US GAAP, independently of R&amp;D expenditure, suggesting that increased capitalization alone does not explain the differences. Additionally, the positive trend observed for high R&amp;D firms in both standards highlights that increased capitalization is insufficient to reverse the weakening trend. These results challenge the view that current accounting practices are the primary cause of the weakening accrual-cash flow relationship and underscore the need for further exploration of alternative explanations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits</title>
<link href="https://hdl.handle.net/1721.1/163331" rel="alternate"/>
<author>
<name>Zeng, Arnaud</name>
</author>
<id>https://hdl.handle.net/1721.1/163331</id>
<updated>2025-10-22T03:34:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits
Zeng, Arnaud
This thesis examines how sports leagues and media companies are evolving to better connect with Generation Z, a generation whose changing expectations and habits – on-demand and socially driven – are reshaping the landscape of sports consumption. With fewer Gen Z fans watching full games on traditional mediums, the industry is being pushed to rethink its approach, adapting not just how content is delivered, but also what kind of content is created. Through a combination of expert interviews and industry data, this paper looks at the rise of short-form content, the importance of digital-first platforms, and the growing influence of storytelling through influencers and behind-the-scenes content. It also explores how new competition formats are reshaping what it means to be a fan. The goal is to understand how the sports ecosystem is adjusting to remain relevant to its youngest audience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation</title>
<link href="https://hdl.handle.net/1721.1/163330" rel="alternate"/>
<author>
<name>Xi, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/163330</id>
<updated>2025-10-22T03:34:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation
Xi, Tiffany
In the footwear industry, the speed at which footwear designs reach the market affects a company’s ability to accurately meet the demands of its customers, as the probability of consumer preferences changing increases with time. This research investigates the impact of incorporating metal additive manufacturing capabilities into the product creation process of a major athletic footwear company. The study aims to determine whether, and under which applications, metal additive manufacturing can increase the speed at which footwear designs reach the market, while maintaining or improving the desired product quality.&#13;
    A case study approach was employed, focusing on the development of rubber outsole molds using metal additive manufacturing technology. The study compared two process flows, one excluding and one including metal additive manufacturing, and evaluated them based on the speed of the development process and the quality of the produced footwear samples. The footwear sample quality was measured against production-equivalent samples obtained from the company’s manufacturing partner. The results demonstrated that incorporating metal additive manufacturing capabilities led to a reduction in the time required for mold design and fabrication. This speed advantage was primarily attributed to the ability to directly fabricate detailed textures into the mold, eliminating the need for outsourced etching processes.&#13;
    The visual quality of the samples produced did not fully match that of samples created by the company's manufacturing partners but was sufficient for initial sample development. Importantly, the traction properties were comparable to those of the manufacturing partner's samples, indicating that the functional quality of the samples is adequate for product development purposes.&#13;
This research provides valuable insights into the potential of metal additive manufacturing in accelerating footwear product development. Future work recommendations include exploring advanced modeling and design software and examining the impact of machine parameters on build quality. The findings of this study have implications for both the footwear industry and other sectors considering the integration of metal additive manufacturing technologies into their product development processes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach</title>
<link href="https://hdl.handle.net/1721.1/163329" rel="alternate"/>
<author>
<name>Zhang, Yu (Sherry)</name>
</author>
<id>https://hdl.handle.net/1721.1/163329</id>
<updated>2025-10-22T03:34:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach
Zhang, Yu (Sherry)
As impact investing increasingly aspires to drive systemic change, the question of how to evaluate such efforts remains underexplored. Traditional evaluation approaches, often grounded in linear causality and program-level outputs, struggle to capture the complexity, interdependence, and emergent nature of systemic transformation. This thesis investigates how systemic investing can be evaluated by integrating systems thinking, evaluation theory, and investing practice. It develops a conceptual framework of thirteen hallmarks that characterize systemic investing evaluation across dimensions such as time horizons, stakeholder engagement, cross-sector collaboration, and capital dynamics. Drawing on 46 real-world cases, the research identifies 112 indicators to make these hallmarks observable and assessable in practice. To support practical application, the thesis also introduces an AI-assisted scoring tool that automates the evaluation of narrative content using the framework. Together, these contributions aim to support more reflective, adaptive, and system-aware evaluation practices in the emerging field of systemic investing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow</title>
<link href="https://hdl.handle.net/1721.1/163328" rel="alternate"/>
<author>
<name>Sen, Shweta</name>
</author>
<id>https://hdl.handle.net/1721.1/163328</id>
<updated>2025-10-22T03:34:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow
Sen, Shweta
Conventional strategies for container load planning (CLP) predominantly emphasize maximizing container utilization, which can result in suboptimally timed inventory arrival, increased inventory holding costs, and downstream operational inefficiencies. Using a real-world case study from a global footwear and apparel retailer, this research formulates a novel multi-objective mixed-integer linear programming (MOMILP) model that jointly considers container utilization, transportation and storage costs, and timing accuracy of inventory delivery. The proposed model utilizes a branch-and-bound algorithm to evaluate numerous load configurations, assessing the impact of different load rules and weighting parameters on transportation performance metrics and inventory flow. Results highlight the critical importance of prioritizing delivery precision in transportation management decisions, demonstrating that solely maximizing volume utilization can adversely affect overall cost efficiency when downstream inventory storage and operational requirements are considered. This work also provides a process map of load planning activities and identifies targeted operational improvements, such as consolidation bypass and purchase order (PO) partitioning, that can enhance inventory flow smoothness, reduce transportation costs, and support more responsive logistics networks. Collectively, this work extends existing CLP methodologies by incorporating delivery timing and inventory storage considerations into load planning decisions, offering practical enhancements for logistics optimization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Principles and Practices of Gap-Closing Investing</title>
<link href="https://hdl.handle.net/1721.1/163327" rel="alternate"/>
<author>
<name>Kapor, Mitchell</name>
</author>
<id>https://hdl.handle.net/1721.1/163327</id>
<updated>2025-10-22T03:34:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Principles and Practices of Gap-Closing Investing
Kapor, Mitchell
This thesis examines the principles and practices of gap-closing investing, a distinctive model of early-stage venture capital investing that seeks to close gaps in access, opportunity, and outcomes for low-income communities and communities of color. Developed by Dr. Freada Kapor Klein and Mitchell Kapor through Kapor Capital, gap-closing investing integrates social impact objectives with a performance-driven investment strategy. The thesis combines historical analysis of socially responsible investing and impact investing with case studies of venture-backed startups to situate gap-closing investing within a broader tradition of values-based finance. It traces the ethical roots of impact investing to religious traditions, the emergence of socially responsible investing funds in the 1970s, and the formalization of impact investing terminology in the late 2000s. Gap-closing investing is distinguished by a developmental approach to startup growth, a redefinition of founder selection criteria emphasizing “distance traveled” over pedigree, and a focus on mitigating structural barriers through capital allocation. The thesis critically compares gap-closing investing to Corporate Social Responsibility (CSR) and Environmental, Social, and Governance (ESG) frameworks, arguing that gap-closing uniquely centers systemic impact as a core investment goal rather than a secondary consideration. The findings challenge the perception that impact investing is inherently concessionary, using performance data from Kapor Capital’s portfolio to demonstrate that intentional, equity-focused investing can produce both superior financial returns and measurable social outcomes. Gap-closing investing is presented as both a pragmatic investment strategy and a model for using venture capital to drive systemic change toward a more inclusive economy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Model for Battery State of Health</title>
<link href="https://hdl.handle.net/1721.1/163326" rel="alternate"/>
<author>
<name>Garza Lozano, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/163326</id>
<updated>2025-10-22T03:34:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predictive Model for Battery State of Health
Garza Lozano, Catalina
As battery energy storage systems (BESS) become critical components of grid infrastructure, accurately assessing their State of Health (SoH) is essential for optimizing performance, reducing costs, and ensuring contractual compliance. This thesis investigates the development of accurate, real-time SoH estimation models for utility-scale battery storage sites operated by NextEra Energy. Current SoH measurements—derived from annual capacity tests and Battery Management System (BMS) data—are often inaccurate or infrequent, leading to either over- or under-augmentation and resulting in financial inefficiencies. &#13;
&#13;
To address this gap, four state estimation models were developed and evaluated: an Unscented Kalman Filter (UKF), a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), a multitask RNN, and a Delayed Reinforcement Learning (DRL) model. Each model uses operational data—such as voltage, current, temperature, and State of Charge (SoC)—to estimate degradation patterns and predict SoH at the rack, lineup, and site levels. Their outputs were compared against ground-truth capacity test results from a large-scale battery storage site.&#13;
&#13;
The DRL model demonstrated the highest accuracy, achieving a deviation of only 1.6 months compared to capacity test data, significantly outperforming existing BMS readings and the other three models. These findings underscore the value of advanced machine learning techniques in enabling proactive maintenance, optimized augmentation scheduling, and cost-efficient storage site management. This research offers a scalable framework for real-time SoH estimation across large fleets of battery storage assets and contributes to the broader goal of improving grid reliability through smarter energy storage management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems</title>
<link href="https://hdl.handle.net/1721.1/163325" rel="alternate"/>
<author>
<name>Sowards, Steffan</name>
</author>
<id>https://hdl.handle.net/1721.1/163325</id>
<updated>2025-10-22T03:34:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems
Sowards, Steffan
This work presents a study on the development and application of data-driven operational efficiency and throughput Key Performance Indicator (KPI) modeling for Robotic Mobile Fulfillment Systems (RMFS). Through rigorous analysis of extensive operational data from an operating RMFS, we demonstrate the efficacy of machine learning approaches in predicting and optimizing the performance of complex warehouse automation systems. The research employs advanced techniques, including gradient boosted bagged tree ensembles and AutoML, to capture complex input interactions and provide parallel predictions across multiple KPIs. Our models achieve a mean R² value of 0.7838 across all templates and KPIs, with particularly strong performance for our top-performing metric across templates (mean R² of 0.9660).&#13;
&#13;
The study introduces a novel framework for feature engineering and selection, emphasizing actionable inputs while excluding intermediate variables to enhance model interpretability and practical utility. We validate our approach against novel operating conditions, demonstrating the models’ ability to generalize to unseen scenarios. Interpretability techniques, including SHAP analysis and permutation feature importance, provide valuable insights into system behavior and key performance drivers.&#13;
&#13;
This research establishes a generalizable framework for leveraging data-driven modeling in predicting and optimizing brownfield warehouse automation system behavior. The developed approach offers significant potential for enhancing operational decision-making, system design, and strategic planning in the rapidly evolving field of e-commerce fulfillment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Approach to Component Code Optimization for Wound Closure Portfolio</title>
<link href="https://hdl.handle.net/1721.1/163322" rel="alternate"/>
<author>
<name>Dubelier, Madeline</name>
</author>
<id>https://hdl.handle.net/1721.1/163322</id>
<updated>2025-10-22T03:34:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systems Approach to Component Code Optimization for Wound Closure Portfolio
Dubelier, Madeline
Product portfolio management involves strategically analyzing, optimizing, and expanding a company’s offerings to maximize value and align with business goals. While companies often focus on portfolio expansion to meet evolving customer needs and gain market share, product deletion is frequently overlooked, leading to code proliferation and undermining operational efficiency. Effective variety management often requires input from stakeholders across the supply chain, yet few published methods take this approach. This work presents a systematic supply chain management approach to portfolio optimization using a case study from Johnson &amp; Johnson MedTech. The case study is on pledgets, key components in non-absorbable suture systems. Recent pledget product quality issues exposed the need for a systematic approach to reducing component variety and improving operational efficiency. A current-state analysis addressed multiple dimensions of complexity. The evaluation combined qualitative and quantitative data and led to a five-stage optimization strategy. The proposed future-state portfolio reduces component variety by 60%, guided by three constraints: continue to meet customer needs, protect competitiveness, and reduce manufacturing complexity. This method provides a replicable model for rationalizing legacy portfolios in the medical device industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Optimization-Based Approach to Efficient Clearance Inventory Allocation</title>
<link href="https://hdl.handle.net/1721.1/163320" rel="alternate"/>
<author>
<name>Perez Munoz, Karla Mayra</name>
</author>
<id>https://hdl.handle.net/1721.1/163320</id>
<updated>2025-10-22T03:34:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Optimization-Based Approach to Efficient Clearance Inventory Allocation
Perez Munoz, Karla Mayra
Allocating clearance inventory effectively remains a critical challenge in retail environments characterized by short decision cycles, fluctuating demand, and operational constraints. Decisions made during the clearance period are particularly impactful, as they determine the final opportunity to recover value from unsold products before they lose relevance or spoil. This thesis presents a mathematical optimization model designed to support the redistribution of discounted articles across a network of stores, with the objective of maximizing revenue while satisfying constraints related to stock availability, store capacity, and observed demand at the article-size level. Developed in collaboration with a leading global fashion retail company, the model was built to align with existing business processes and balances analytical rigor with simplicity in implementation. The model incorporates business-defined parameters and is tested using real operational data from selected distribution centers. It demonstrates significant improvements over the current practice of single-item allocation and addresses the computational challenges posed by the high dimensionality of real-world retail problems. By implementing efficient iterative procedures and demand-scaling mechanisms, the model ensures tractability while capturing the complexity of the business environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gas Network Preparations for Networked Geothermal</title>
<link href="https://hdl.handle.net/1721.1/163319" rel="alternate"/>
<author>
<name>Serbent, M. Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163319</id>
<updated>2025-10-22T03:34:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Gas Network Preparations for Networked Geothermal
Serbent, M. Patrick
As Massachusetts pursues its goal of achieving net-zero carbon emissions by 2050, the transition from natural gas to sustainable thermal energy solutions presents both opportunities and challenges for its 1.6 million natural gas customers. This thesis investigates the potential of networked geothermal systems as a viable alternative to traditional natural gas infrastructure, with a focus on leveraging existing gas network replacement programs, such as the Gas System Enhancement Plan (GSEP), to facilitate this shift. A four-phase methodology —encompassing site selection, model development, cost analysis, and business case formulation— evaluates the feasibility of integrating high-density polyethylene (HDPE) piping into leak-prone pipe replacement efforts as a preparatory step for future geothermal or hydrogen applications. Findings suggest that HDPE offers potential material and inventory cost advantages over medium-density polyethylene (MDPE), with added flexibility for low-carbon conversions, though significant upfront costs and regulatory uncertainties remain barriers. An example site already scheduled for main replacement work showed a 6% increase in total project cost from changing the pipe material from MDPE to HDPE. This work underscores the potential of aligning infrastructure modernization with climate goals, offering a framework for utilities like National Grid to navigate the energy transition in cold, densely populated regions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology</title>
<link href="https://hdl.handle.net/1721.1/163318" rel="alternate"/>
<author>
<name>Siddiqui, Sameed Muneeb</name>
</author>
<id>https://hdl.handle.net/1721.1/163318</id>
<updated>2025-10-22T03:34:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology
Siddiqui, Sameed Muneeb
This thesis explores the dual imperatives of enhancing biosecurity and accelerating outbreak response. The research addresses two key areas. First, the thesis analyzes the implications of a national nucleic acid synthesis screening framework on outbreak response agility. A first-hand perspective is provided, identifying potential bottlenecks stemming from lagging customer verification and sequence screening approaches. Concrete solutions, such as pre-verification of first responders, priority processing channels, pre-approval of standard countermeasure sequences, and optimized computational screening, are proposed to mitigate these challenges and ensure rapid response capabilities without compromising biosecurity. Second, a machine learning architecture for biological sequence modeling, “Lyra,” is presented. Lyra is grounded in the biological principle of epistasis and leverages state space models (SSMs) combined with projected gated convolutions to efficiently capture both local and long-range sequence interactions. We demonstrate new mathematical theory connecting SSMs with the approximation of polynomial functions, which is key to predicting epistatic effects. This subquadratic architecture achieves state-of-the-art performance on diverse biological tasks, including protein fitness landscape prediction, RNA function prediction, and CRISPR guide design, while utilizing substantially fewer parameters and computational resources than existing foundation models like transformers. The thesis concludes by highlighting the synergistic potential of advanced machine learning and thoughtful policy to significantly improve pandemic preparedness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In-or-Out: Creators’ Odyssey for Success</title>
<link href="https://hdl.handle.net/1721.1/163317" rel="alternate"/>
<author>
<name>Li, Zelin</name>
</author>
<id>https://hdl.handle.net/1721.1/163317</id>
<updated>2025-10-22T03:34:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">In-or-Out: Creators’ Odyssey for Success
Li, Zelin
The creator economy is flourishing, driven by shifts in advertising budgets and a surge in the supply of content creators. This has introduced a new challenge for firms: identifying which early-stage creators will grow to become stars. By identifying future stars, firms can choose who to invest their scarce resources in. They may also be able to purchase effective influence at a (proportionately) lower price than what they will pay once a creator becomes a star. Past research has shown that predicting which content will become viral is challenging. Instead, we focus on using content to predict which early-stage creators will grow their follower bases. We measure both the positioning of a creator’s early content and how the creator adjusts this positioning. We find that the initial position is not predictive of future success. However, subsequent adjustments in position are predictive, particularly if the creator’s initial follower base has grown consistently, rather than over a short period of rapid (viral) growth. Our insights inform the construction of predictive models that outperform baseline models in out-of-sample predictions of which creators will grow their followers the fastest.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Cooperation in Water Management: A Game-Theoretic Approach to Sustainable Infrastructure in Chilean Mining</title>
<link href="https://hdl.handle.net/1721.1/163316" rel="alternate"/>
<author>
<name>Moscoso Restovic, Rodrigo Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/163316</id>
<updated>2025-10-22T03:34:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Cooperation in Water Management: A Game-Theoretic Approach to Sustainable Infrastructure in Chilean Mining
Moscoso Restovic, Rodrigo Y.
Through a game-theoretic methodology, this thesis examines collaborative approaches to managing water infrastructure within Chilean mining operations. The research analyzes cooperative interactions among mining firms, local residents, and regulatory bodies to tackle water scarcity and growing demand in Chile's mining industry. It utilizes game theory, with a focus on cooperative games and bargaining models, to develop a structured analytical framework for analyzing stakeholder dynamics, including their incentives and cooperative opportunities.&#13;
The thesis centers on creating a mathematical model that represents stakeholders as rational entities who seek to maximize their benefits while facing resource constraints and regulatory limitations. The implementation of cooperative game theory allows for detailed examination of coalition-building processes, resource-sharing agreements, and benefit allocation practices, which helps to define stable cooperative arrangements.&#13;
The primary findings show that mining companies achieve greater efficiency gains through water infrastructure collaboration than through separate individual investments. This thesis presents quantitative evidence that partnerships among mining projects generate significant financial savings and lead to better resource usage and positive environmental and social results.&#13;
Sensitivity analyses identify that cooperative stability depends on several critical factors, including asymmetries among the different mining projects, the sequence in which investment decisions are made, and the transfer price of water sold to projects that would otherwise prefer to free-ride. The final part of the thesis presents concrete suggestions for policymakers and industry leaders to develop cooperative frameworks through specific policy mechanisms and incentive systems that support long-term collaboration.&#13;
The study advances existing academic knowledge by utilizing detailed game-theoretic approaches to address practical problems in sustainable mining practices. The findings reveal that strategic partnerships serve as fundamental tools for resource management that can effectively tackle the urgent water scarcity challenges Chile faces.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Manufacturing Best Practices Using Multimodal AI</title>
<link href="https://hdl.handle.net/1721.1/163315" rel="alternate"/>
<author>
<name>Zachary, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/163315</id>
<updated>2025-10-22T03:34:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Driving Manufacturing Best Practices Using Multimodal AI
Zachary, Mark
Multimodal artificial intelligence offers promising solutions for enhancing operational excellence in contract manufacturing, where small job shops typically operate with limited standardization and high process variability. This research develops a part similarity tool that integrates geometric, material, and scale information to improve quoting accuracy and engineering efficiency in high-mix, low-volume production environments. After examining the fragmented manufacturing landscape and reviewing current AI applications in manufacturing, the study introduces an approach based on Variational Autoencoders for encoding 3D geometry alongside material properties and dimensional scale information. The technical implementation addresses challenges of multimodal fusion, missing data handling, and computational efficiency, while a qualitative ablation study demonstrates how this comprehensive approach outperforms single-modal methods in manufacturing relevance. Engineers benefit from improved insights for manufacturing planning, while estimators achieve more consistent cost predictions using the multimodal system. Reinforcement learning with human feedback provides a mechanism for continuous refinement, creating a framework that bridges geometric similarity with manufacturing context and reduces subjectivity in critical business processes. The research contributes both theoretical insights into multimodal learning and practical implementation strategies for standardizing operations in contract manufacturing environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions</title>
<link href="https://hdl.handle.net/1721.1/163314" rel="alternate"/>
<author>
<name>Zeng, Bob</name>
</author>
<id>https://hdl.handle.net/1721.1/163314</id>
<updated>2025-10-22T03:34:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions
Zeng, Bob
This research explores the surge of Chinese manufacturing investments in Mexico as a strategic adaptation to recent global trade disruptions, specifically the U.S.–China trade tensions and the COVID-19 pandemic. By analyzing Chinese firms' motivations and strategies, the study highlights how they leverage Mexico’s strategic geographic proximity, favorable trade conditions under the USMCA, competitive labor market, and established industrial infrastructure to secure continued access to the North American market while minimizing tariff impacts and supply chain risks. Sector-specific analyses of the automotive, electronics, and renewable energy industries reveal distinct operational, regulatory, and cultural challenges encountered by these companies during their transition to Mexican production facilities. In addressing these challenges, Chinese firms have adopted strategies such as supply chain localization, rigorous adherence to North American regulatory frameworks, and effective cross-cultural management practices. Furthermore, the analysis situates this trend within the broader geopolitical context, emphasizing the role of evolving U.S. trade policies and proactive Mexican industrial initiatives in shaping the nearshoring landscape. The findings suggest that while Chinese investment in Mexico presents significant opportunities for industrial upgrading and enhanced bilateral cooperation, the longevity and effectiveness of these ventures depend on firms' strategic flexibility, deeper integration into local economies, and adept management of complex geopolitical and regulatory environments. By evaluating these elements, the research provides valuable insights into the drivers behind the increased Chinese presence in Mexico and the broader implications for global trade patterns, supply chain resilience, and regional economic integration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Saharan African Markets</title>
<link href="https://hdl.handle.net/1721.1/163313" rel="alternate"/>
<author>
<name>Zhu, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163313</id>
<updated>2025-10-22T03:34:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Saharan African Markets
Zhu, Yuan
This thesis examines the strategies and operational practices of Chinese fintech entrepreneurs in sub-Saharan African markets, with a focus on how they navigate regulatory fragmentation, localize business models, and build trust in low-infrastructure environments. Drawing on fieldwork and semi-structured interviews with founders, executives, and product leads from fifteen China-linked fintech firms across Nigeria, Kenya, and Francophone Africa, the study investigates how these actors engage with underdeveloped financial systems while adapting knowledge and models from China’s digital finance ecosystem. The research identifies several distinct approaches to market entry and adaptation, including platform integration, compliance-focused positioning, and informal ecosystem engagement. Findings suggest that these ventures do not simply export Chinese models but instead reconfigure them in response to local constraints in regulation, consumer trust, and institutional capacity. By analyzing firm-level strategies in diverse regulatory and market settings, this study contributes to broader discussions on transnational entrepreneurship, financial infrastructure development, and the evolving role of private actors in advancing digital inclusion across emerging economies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop</title>
<link href="https://hdl.handle.net/1721.1/163312" rel="alternate"/>
<author>
<name>Carson, Alix</name>
</author>
<id>https://hdl.handle.net/1721.1/163312</id>
<updated>2025-10-22T03:34:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop
Carson, Alix
Job shops with semi-autonomous work centers must understand their capacity utilization and financial state to maximize efficiency and profitability. Machine monitoring software allows managers to see the state of machines at any time and capture real-time capacity utilization. Job shops are positioned to maximize the use of these work centers and must connect their manufacturing and operations strategy to real-time shop data. This research is a case study in how a job shop can create a right-to-win strategy targeting jobs that are compatible and profitable for semi-autonomous machines.&#13;
&#13;
ADDMAN Precision Baltimore (APBAL), a precision machine shop in the aerospace and defense industry, is facing labor constraints and underutilized work centers. This research aims to develop a structured quoting strategy and strategic pricing model to optimize job allocation between APBAL’s two semi-autonomous machining centers: the Makino Machining Complex 2 (MMC) and the Fanuc Robodrill. By integrating qualitative observations, historical job data, and machine utilization metrics, this study identifies inefficiencies in current job assignment practices. Key findings indicate that aligning work center assignments with projected profitability and capacity utilization can improve overall efficiency. A decision-making framework and pricing matrix are proposed to enhance job quoting accuracy, optimize machine usage, and increase APBAL’s competitiveness in securing high-volume contracts. The results offer a scalable framework for APBAL and its parent company, ADDMAN Engineering, to deploy across other machining facilities, ultimately improving operational performance and financial outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Technoeconomic Model for Maritime Applications of Green Power Technologies</title>
<link href="https://hdl.handle.net/1721.1/163311" rel="alternate"/>
<author>
<name>Tuana, Daniel I. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163311</id>
<updated>2025-10-22T03:34:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Technoeconomic Model for Maritime Applications of Green Power Technologies
Tuana, Daniel I. S.
Growing societal and regulatory pressures are causing industries around the world to consider greener alternatives to conventional fossil fuel power technologies. As a result, power solution suppliers like CAT face strategic uncertainty about if, where, and when their core product markets will be disrupted by the adoption of alternative technologies. To help inform CAT’s future product and service strategy, in conjunction with previous research on powering mines and data centers, this thesis outlines the development of a code that estimates and compares the total cost of ownership of battery, hydrogen fuel cell, and nuclear power technologies against incumbent fossil fuel-driven systems in a variety of maritime scenarios, including serving shoreside port electricity demand and on-water power demand across a diverse set of vessel segments. &#13;
The code leverages first principles, empirical models, and researched assumptions to model the performance and costs of power systems in response to stochastically generated and deterministic power demand profiles over the useful lifetimes of the assets. For vessel applications, the code also estimates the volumes and masses of the alternative systems as a basis to judge their practicality. Hypothetical power systems for four archetypal ports and six vessel segments (across a range of power nodes) were studied to identify potential opportunities in and adjacent to the marine markets CAT currently serves.&#13;
The outcomes of the study align with conventional intuition regarding the application of the technologies considered. Under certain conditions, the results support the technoeconomic case for implementing battery technology on short-haul vessels whose operations are predictable and would not be disrupted by shortened refueling/recharging intervals. Similarly, the results show that adoption of small modular nuclear reactors at ports and on large vessels with consistently large baseload power demand can provide economic advantages over incumbent fossil fuel technologies. The results of the simulations are sensitive to several technology-agnostic parameters, including discount rates, fuel and electricity prices, demand growth rates, and other macroeconomic conditions. In the future, with ample case-specific data, the code developed for this thesis may provide convincing justification for the adoption of an alternative technology to serve the power demand of an individual port or vessel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete Event Simulation as a Predictor for Factory Traffic Management</title>
<link href="https://hdl.handle.net/1721.1/163310" rel="alternate"/>
<author>
<name>Ramirez Echavarria, Esteban</name>
</author>
<id>https://hdl.handle.net/1721.1/163310</id>
<updated>2025-10-22T03:34:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Discrete Event Simulation as a Predictor for Factory Traffic Management
Ramirez Echavarria, Esteban
Manufacturing environments increasingly rely on automation and data-driven decision-making to optimize efficiency and production rates. This study explores the application of Discrete Event Simulation (DES) to model material flow and predict AGV (Automated Guided Vehicle), crane, and cart movements within a factory. The goal is to develop a digital twin that enables real-time decision-making, optimizes scheduling, and minimizes bottlenecks.&#13;
&#13;
To achieve this, we utilize SimPy, an open-source Python-based DES library, in conjunction with a custom-built API and React.js front-end interface. The study evaluates available DES software options and justifies the selection of SimPy based on flexibility, integration capabilities, and its suitability for modeling custom business rules. The solution is structured into modular components handling path planning, transporters, flows, stations, hot-cold starts, and utilities, ensuring adaptability to future improvements.&#13;
&#13;
A validation framework was established, utilizing historical data comparison and real-time validation to assess the simulation’s predictive accuracy. Over a 40-day testing period, the simulation achieved 89.6% accuracy and a sensitivity, or true positive rate (TPR), of 80.2%. The simulation provides a reliable first-pass scheduling tool that can be further refined with improved data collection.&#13;
&#13;
The findings indicate that while full automation of AGV deployment is not yet feasible, this study lays the foundation for future integration with the factory’s Vehicle Management System (VMS). Business implications include the potential for automated scheduling, enhanced material flow visibility, and optimization of capacity planning. Future work should focus on improving data accuracy, integrating live factory data streams, and refining algorithms for predictive scheduling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry</title>
<link href="https://hdl.handle.net/1721.1/163309" rel="alternate"/>
<author>
<name>Netteberg, Sofie F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163309</id>
<updated>2025-10-22T03:34:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry
Netteberg, Sofie F.
This thesis presents the development and implementation of a new product placement optimization model for a large global apparel and footwear company’s supply chain, aimed at maximizing network-wide profits while aligning with long-term strategic goals amidst demand volatility. The model leverages a mixed-integer linear programming approach, integrating probabilistic demand simulations to optimize the placement of new products within the company’s existing network of third-party partner factories. Key elements of the model, including decision variables, price and cost coefficients, an objective function, and constraints that reflect operational realities and strategic priorities, are discussed in detail. Through analysis and results validation, this research demonstrates how data-driven optimization can improve network profitability and strengthen adherence to the company’s long-term strategic supply chain objectives. The thesis then includes an exploration of historic demand variability at the host company, followed by a recommendation to integrate probabilistic forecasting in network planning to generate production networks more robust to volatility in consumer product demand. The findings contribute to advancing data-driven decision-making in supply chain management and offer actionable insights for future product placement strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States</title>
<link href="https://hdl.handle.net/1721.1/163308" rel="alternate"/>
<author>
<name>Ni, Mengmeng</name>
</author>
<id>https://hdl.handle.net/1721.1/163308</id>
<updated>2025-10-22T03:34:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States
Ni, Mengmeng
This thesis investigates how government policy approaches shape regional entrepreneurial ecosystems and influence entrepreneurial strategy in strategic industries across China and the United States. Through comparative analysis of four region-industry pairs—Shanghai's semiconductor sector, Shenzhen's drone technology sector, Boston's biotechnology cluster, and New York's fintech ecosystem—the study examines the dynamic interplay between institutional design and entrepreneurial behavior. Drawing on Porter's Cluster Theory, Mazzucato's Entrepreneurial State concept, and the MIT REAP framework, the research develops a novel policy categorization encompassing four innovation governance tools: Cluster and Crisis Response Tools, Innovation Ecosystem Tools, Market-Shaping Tools, and Institutional Restructuring Tools. A qualitative case study methodology is employed, with in-depth firm-level analyses of Biren Technology in Shanghai and Moderna in Boston illustrating how entrepreneurs strategically respond to distinct institutional environments. The findings reveal four distinct models of innovation governance: Shanghai’s state-directed coordination, Shenzhen’s regulatory experimentation, Boston’s market-based orchestration, and New York’s regulation-centered oversight. Across contexts, entrepreneurs emerge as interpretive agents who actively leverage, adapt to, and at times reshape institutional conditions. This thesis contributes to the literature by offering comparative insights into the co-evolution of public policy and entrepreneurial strategy. It also provides practical implications for policymakers designing innovation ecosystems and for entrepreneurs navigating increasingly complex regulatory and technological landscapes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163307" rel="alternate"/>
<author>
<name>Gosen Cappellin, Carlos Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/163307</id>
<updated>2025-10-22T03:34:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain
Gosen Cappellin, Carlos Daniel
The medical technology company MedTechCo, specifically its Spine division, has deployed millions of implants in hospitals to meet demand. When inventory deployment and allocation are not managed appropriately to ensure that products are in the right place at the right time, excess inventory arises. Currently, MedTechCo Spine holds large amounts of excess inventory that are not utilized effectively. &#13;
&#13;
The objective of this research is to leverage a data-driven approach to define and reduce implant excess inventory at scale for MedTechCo’s Spine business unit in the United States. The research strategy used in this thesis begins with a root cause analysis to understand the causes of excess inventory. A robust data model was then developed to determine appropriate inventory levels by SKU, map all excess field inventory, and prioritize the most valuable excess SKUs. This data model was used to automate the company’s ERP system to repurpose excess inventory, limit unnecessary inventory deployments to the field, and eliminate redundant backorders. Finally, an impact analysis was performed to measure the potential excess inventory reduction in both dollar value and units. &#13;
&#13;
Time constraints limited the implementation of the recommendations during the research period. However, MedTechCo Spine agreed to incorporate the proposed recommendations into its ERP system and operational processes in mid-2025. These recommendations will help reduce implant excess field inventory, unlocking tied-up capital, creating flexibility in the supply chain to meet demand changes, and enabling additional investment in innovation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163306" rel="alternate"/>
<author>
<name>Jaklis, Cyril</name>
</author>
<id>https://hdl.handle.net/1721.1/163306</id>
<updated>2025-10-22T03:34:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency
Jaklis, Cyril
Real estate is the world's largest untapped market, at $650 trillion (Statista, 2023), yet technological innovation, particularly in financial underwriting, is underrepresented. Excel spreadsheets, broker-driven data collection, and expensive public database subscriptions are still used by most institutional players and family offices. These outdated approaches result in inefficiencies and higher operational expenses. Firms are now waiting for more innovative tools to improve their workflows and predict their Net Operating Income (NOI). Development and maintenance costs are often underestimated due to optimistic estimates and unplanned material or labor cost escalations. This paper examines how to increase underwriting accuracy by mapping the full underwriting process, identifying operational inefficiencies, and analyzing how new technologies such as Artificial Intelligence (AI) and Machine Learning (ML) are currently being utilized to better value properties and reduce error margins. The analysis covers the entire underwriting process, from data sourcing and collection through structuring and analysis. It also reviews the platforms and software tools utilized to connect these phases, from initial appraisal to investment memo and investment committee (IC) decision-making. The objective is to understand practical constraints, recognize opportunities for optimization, and explore where investors can strategically position themselves to leverage these technologies, while also providing a forward-looking outlook on the changing function of AI/ML in the sector over the next decade.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery</title>
<link href="https://hdl.handle.net/1721.1/163305" rel="alternate"/>
<author>
<name>Fenstermacher, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163305</id>
<updated>2025-10-22T03:34:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery
Fenstermacher, Andrew D.
Target Corporation has expanded its Last Mile Delivery (TLMD) capabilities through an omni-channel, "stores-as-hubs" strategy, using stores as fulfillment centers for online orders. Target's Sortation Centers were developed to receive packages from stores in the region and to sort, route, and dispatch these packages each day, enabling faster delivery for online orders. Designed to hold no inventory, the centers aim to deliver every package received that same day. This presents new operational challenges common for brick-and-mortar retailers that develop an omni-channel strategy. This thesis investigates core processes in Sortation Centers to identify sources of volatility and propose improvements that enhance productivity and on-time delivery while minimizing labor costs and incomplete volume. Many of the current processes in Target’s Sortation Centers are manual and unstandardized. Moreover, improving operations and piloting changes is challenging, especially during peak seasons. To address these challenges, this study employs discrete event simulation (DES) using SimPy, informed by current operational data and in-person observations, to model and analyze current processes. Key findings reveal that pre-sorting TLMD volume from other national carrier volume at the stores, prior to linehaul pickup of same-day packages, decreases the overall completion time for the day’s volume by 5.8% and lowers the probability of incomplete volume by up to 85% under excess volume scenarios. These process changes enhance site resilience to demand volatility without significant capital investment. The research underscores the value of DES for testing process improvements virtually and highlights the need for network-level optimization across Target’s omni-channel supply chain. Recommendations include piloting floor loading and pre-sorting in select markets, alongside future exploration of performance standards, automation, and standardized processes to further mitigate volatility impacts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer Vision for Cell Line Development</title>
<link href="https://hdl.handle.net/1721.1/163304" rel="alternate"/>
<author>
<name>Albright, Jackson A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163304</id>
<updated>2025-10-22T03:34:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computer Vision for Cell Line Development
Albright, Jackson A.
Anomalies in Cell Line Development have a significant impact on material and opportunity costs when screening for the Master Cell Bank that is used for all clinical drug development. Cell Line Development scientists collectively spend hundreds of hours identifying anomalies in fluorescent and brightfield imagery to ensure only high-performing cell clones are downselected for testing. The use of computer vision models alleviates this burden on scientists and better standardizes the selection process. Three techniques were tested for classifying anomalous and nominal fluorescent images: an autoencoder, an edge CNN, and an RGB SVM. Examining performance through composite metrics such as F1 Score and MCC, the autoencoder (0.8744 and 0.8619, respectively) outperformed the edge CNN (0.8488 and 0.8257) and RGB SVM (0.8343 and 0.8252) for fluorescent anomaly classification. The high performance of the autoencoder came from training solely on anomalous images and using a percentile-based threshold to classify images by their reconstruction error. Data robustness proved to be an issue, with certain test datasets showing worse performance due to inherent variability of images within both nominal and anomalous classes. Gathering and labeling more datasets for training and testing will allow models to learn from this variability and provide higher confidence in model performance for real-time screening applications. Adjusting the structure of the traditional autoencoder to that of a variational autoencoder will also help with learning the variability of images within classes and improve performance on previously unseen data. Overall, the current iteration of the models proves to be beneficial for anomaly detection in Cell Line Development and demonstrates that modifications to data sourcing and model architecture could yield even better performance. 
These same techniques could be applied to similar biopharmaceutical applications provided care is taken to properly source clean and labeled image data and construct appropriate model architectures for the images' inherent features.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes</title>
<link href="https://hdl.handle.net/1721.1/163303" rel="alternate"/>
<author>
<name>Bieske, Linn</name>
</author>
<id>https://hdl.handle.net/1721.1/163303</id>
<updated>2025-10-22T03:34:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes
Bieske, Linn
Background: Autonomous vehicle (AV) testing requires extensive real-world data collection, which is costly and time-consuming. Existing simulation techniques struggle to generate high-fidelity sensor data, particularly for multimodal signals like RGB camera images, LiDAR depth maps or LiDAR point clouds. Recent advances in generative AI, specifically diffusion models, offer a solution for improving synthetic driving scene simulations.&#13;
&#13;
Objective: This thesis enhances diffusion-based generative models to: 1) encode LiDAR depth data into a stable diffusion model’s latent space, 2) simultaneously generate, with high fidelity and mutual consistency, eight RGB camera images, 2D LiDAR depth maps, and 3D LiDAR point clouds covering a full 360-degree range, and 3) evaluate the realism and consistency of the generated sensor data.&#13;
&#13;
Methods: A multimodal, multi-view latent stable diffusion model was trained to generate complete 360-degree synthetic driving scenes and simulate camera and LiDAR sensor signals for autonomous vehicles. The generated scenes were evaluated for sensor alignment, realism, and depth accuracy.&#13;
&#13;
Results: The diffusion model produced realistic, spatially consistent camera and LiDAR sensor data, reducing reliance on real-world validation miles and lowering AV testing costs. To further improve the quality of the multimodal driving scene generation it is recommended to retrain the VAE on LiDAR data. &#13;
&#13;
Conclusion: This work advances AV simulation by extending stable diffusion models to multimodal sensor data. Future improvements should focus on real-time generation and expanding to additional sensor types or hardware setups for enhanced simulation fidelity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs</title>
<link href="https://hdl.handle.net/1721.1/163302" rel="alternate"/>
<author>
<name>Liu, Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/163302</id>
<updated>2025-10-22T03:34:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs
Liu, Ying
This thesis develops and evaluates a series of predictive models to improve the efficiency of marketing resource allocation in the context of an outbound campaign for a premium membership product. The central objective is to identify customers most likely to respond positively to a membership offer, thereby minimizing outreach costs and maximizing return on investment. The study leverages a dataset from a large retail superstore that includes customer demographics, transactional behavior, and campaign response history. Data preprocessing involved the creation of engineered features such as age and tenure groupings and the transformation of categorical variables into factor types suitable for classification algorithms. Three modeling approaches were applied: logistic regression, classification and regression trees (CART), and random forest. Logistic regression yielded strong predictive performance with an AUC of 0.851 and identified several statistically significant predictors, including spending on wine and meat products, recent purchase behavior, and tenure length. However, its primary limitation lies in its inability to accommodate cost asymmetries, as it lacks the capacity to incorporate a loss matrix that assigns different penalties to false positives and false negatives. The CART model addressed this limitation by introducing a customized loss matrix that reflects the asymmetric cost structure of marketing misclassifications, assigning a higher penalty to false negatives than to false positives. While this cost-sensitive structure aligned better with business objectives, the CART model achieved a moderate AUC of 0.767, reflecting limited classification accuracy and robustness. To overcome these limitations, a random forest model was implemented, combining the strengths of ensemble learning with cost-sensitive training. It achieved the highest AUC of 0.864 and allowed for the integration of a loss matrix during training. 
Feature importance analysis revealed that variables such as the number of days since the last purchase, the amount spent on meat products, and a customer's enrollment length with the company were among the most influential predictors of customer response. The model not only improved classification performance but also supported strategic targeting through interpretable outputs. An economic evaluation demonstrated the practical value of the predictive model. Under a loss matrix where the cost of a false positive was set to $2 and a false negative to $10, the random forest model reduced total campaign costs by approximately 30% compared to a non-targeted approach. These cost savings translate into a meaningful economic impact, particularly when applied to large-scale campaigns. Overall, the findings support the use of random forest with a cost-sensitive design as a superior modeling framework in marketing applications. By aligning machine learning with real-world cost structures, this approach offers both statistical rigor and economic relevance for data-driven decision-making in customer acquisition strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data, Analytics, and Optimization for Production Planning</title>
<link href="https://hdl.handle.net/1721.1/163301" rel="alternate"/>
<author>
<name>Malinowski, Maxwell X.</name>
</author>
<id>https://hdl.handle.net/1721.1/163301</id>
<updated>2025-10-22T03:34:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data, Analytics, and Optimization for Production Planning
Malinowski, Maxwell X.
This thesis serves as a case study for the implementation of data analytics and optimization within a high-mix, low-volume electronics production environment in the Aerospace and Defense industry. This case study demonstrates the benefits of data analysis for defining and quantifying operational bottlenecks and explores the implementation of an optimization model to better allocate resources for production planning. Results demonstrate the insights derived from using data and analytics in this environment, and further discussion explores what contributes to an effective implementation of an optimization model in a production setting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Debt Complexity and Equity Behavior</title>
<link href="https://hdl.handle.net/1721.1/163299" rel="alternate"/>
<author>
<name>Li, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/163299</id>
<updated>2025-10-22T03:33:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Debt Complexity and Equity Behavior
Li, Jack
I examine how the complexity of firm debt affects the incorporation of news into equity prices. As residual claimants to firm cash flows, equity investors must be able to value all outstanding debt contracts, suggesting that complex debt can interfere with their ability to process news effectively. Using a model in which debt complexity causes a subset of investors to initially underweight news precision, I derive three predictions for the equity behavior of debt-complex firms around news events: (1) they exhibit greater post-announcement drift, (2) they show elevated trading volume both on announcement day and in the post-announcement period, and (3) their return volatility decreases on announcement day but increases during the post-announcement period. These predictions are supported by empirical evidence in the context of earnings announcements, suggesting that debt complexity introduces meaningful frictions in how news is incorporated into equity markets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Wildness: Simulating Post-Extraction Wildland Regeneration</title>
<link href="https://hdl.handle.net/1721.1/163298" rel="alternate"/>
<author>
<name>Griggs, Crystal Ling</name>
</author>
<id>https://hdl.handle.net/1721.1/163298</id>
<updated>2025-10-22T03:34:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mapping Wildness: Simulating Post-Extraction Wildland Regeneration
Griggs, Crystal Ling
This thesis introduces a novel approach to wildlife habitat classification for ecological regeneration. It is motivated by the extreme environmental degradation of mountaintop removal (MTR) in the Appalachian Mountains, a violent coal extraction process that has significantly altered the landscape of this ecologically sensitive region. By integrating remote sensing and Geographic Information Systems (GIS) with machine learning, this research aims to develop a method that transcends traditional human-egocentric landscape assessments, advocating for a model that foregrounds the habitats and needs of critically endangered species by simulating landscape regeneration and assessing topographical alterations in terms of how design decisions impact wildlife. Central to this study is the concept of Umwelt, the subjective experiences of nonhuman species, including how their spatial perception and sensory spectrum are used to discern details within their environment. Umwelt broadens traditional spatial understanding by emphasizing that each species experiences the world through its own sensory filters, which shape its interactions within its habitat. This understanding guides the research’s approach to approximating the Umwelt of the Cerulean Warbler (Setophaga cerulea), a surrogate species in this work, which has faced steep declines due to habitat loss in Appalachia. Through the development of a habitat suitability model that utilizes advanced computational tools and multispectral imagery, the thesis endeavors to offer a new perspective on environmental planning and conservation efforts - a computational approach to near-approximations of Umwelt. The methodological framework seeks not only to classify post-extraction landscapes for their potential in supporting wildlife but also to inform design and land use decisions that are sensitive to the temporal and complex processes of natural habitat regeneration. 
By challenging the prevailing paradigms of landscape restoration, which often lack consideration for the intricacies of wildland dynamics such as the multitudes of species interactions and interdependencies, this research proposes a new methodology that empowers wildlife to guide the ecological recovery process. The findings underscore the potential of applied GIS and machine learning in environmental advocacy, setting a precedent for future research and practice aimed at the regeneration of ecosystems that considers the ecological realities of all species involved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications</title>
<link href="https://hdl.handle.net/1721.1/163297" rel="alternate"/>
<author>
<name>Ray, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163297</id>
<updated>2025-10-22T03:33:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications
Ray, Jennifer
As climate change concerns drive the need for decarbonization, hydrogen stands as a potential tool to help reduce emissions across the United States industrial and energy sectors. This thesis develops a flexible modeling framework for hydrogen adoption across multiple industrial applications, designed specifically to support strategic investment decision-making in an evolving market. The tool analyzes six major industries – steel, chemicals, energy storage, biofuels, vehicles, and natural gas – through two metrics: potential hydrogen consumption and threshold prices for economic viability. The framework applies scenario analysis to examine how government policy and technological advancement influence potential market trajectories.&#13;
&#13;
Analysis reveals significant sensitivity to input assumptions. Even small variations in the assumed initial hydrogen production cost can result in significantly different adoption timelines. In scenarios where initial hydrogen production costs are $5/kg, widespread adoption requires maximum policy support and technological progress. However, reducing the initial cost by just $1, to $4/kg, makes broader adoption feasible with less reliance on government intervention. The light-duty fuel cell electric vehicle penetration rate and the steel industry growth rate emerge as the most sensitive parameters affecting overall hydrogen demand, followed by the biofuel blending rate and the hydrogen injection percentage into natural gas infrastructure.&#13;
The vehicles industry is identified as a first mover in widespread hydrogen adoption, followed by steelmaking and methanol production. Hydrogen adoption for natural gas blending, methanol for export, and methanol-to-gasoline applications occur later due to their lower threshold price for economic viability. Under optimal conditions with strong government support and significant technological advancements, total hydrogen demand could reach 48.8 million metric tons by 2050, approximately a sevenfold increase from scenarios with minimal support.&#13;
The tool’s value lies not in projecting a definitive, single-point forecast, but in providing a flexible framework that helps stakeholders navigate market uncertainties as the decarbonization landscape evolves. Future research should integrate supply-side dynamics, infrastructure requirements, and geographic variability to enhance projection accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Green Aluminum</title>
<link href="https://hdl.handle.net/1721.1/163296" rel="alternate"/>
<author>
<name>Schurr, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/163296</id>
<updated>2025-10-22T03:34:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Green Aluminum
Schurr, Kevin
Aluminum is an important metal for facilitating the energy transition. Its high strength-to-weight ratio and easy recyclability make it a useful material in many industries, from automobiles to food packaging. However, the aluminum smelting process accounts for 2% of all global greenhouse gas emissions, due both to the high amount of power needed to drive the electrolysis reaction and to the consumption of carbon anodes in the process. As regulatory changes in Europe raise the monetary cost of emitting carbon, smelters are investigating new technologies to integrate into their operations to cut Scope 1 and 2 emissions. Two such technologies are carbon capture systems to abate process emissions and small modular nuclear reactors to reduce emissions incurred during electric power generation. This work explores the technical and economic feasibility of leveraging these systems at Aluminum of Europe, a primary aluminum smelter subject to these changing European regulations. Results suggest that while these technologies have not yet been specifically adapted for aluminum production, they can play an important role in reducing the overall emissions from the smelting process under specific economic conditions. However, the analysis indicates that, at present, significant subsidies are required for such projects to be financially viable.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care</title>
<link href="https://hdl.handle.net/1721.1/163294" rel="alternate"/>
<author>
<name>Dugan, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163294</id>
<updated>2025-10-22T03:33:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care
Dugan, Andrew D.
Cardiogenic shock (CS) in the context of acute myocardial infarction (AMI) remains a significant challenge in critical care, with high mortality rates despite the availability of advanced mechanical circulatory support (MCS) devices like the Impella pump. However, adoption of these devices in clinical practice remains limited. This thesis explores two complementary strategies to address these challenges: developing machine learning (ML) models to predict shock severity and assessing the feasibility of integrating hospital Electronic Medical Record (EMR) data into Abiomed’s digital ecosystem to support standardized shock care.&#13;
In the first phase, ML models were trained on multiple clinical datasets to predict Society for Cardiovascular Angiography and Interventions (SCAI) shock stages based on patient data. While these models demonstrated strong predictive performance, feature analysis revealed that SCAI stages often reflect physician treatment decisions rather than purely patient physiology. This raises concerns about their utility as real-time clinical decision tools and suggests that ML applications may be better suited to prompting early data collection and intervention before severe shock develops.&#13;
The second phase evaluated the feasibility of EMR integration to support the broader adoption of standardized shock protocols. After considering regulatory, operational, and technical factors, third-party data aggregation emerged as the most practical path forward. Integrating EMR data could improve outcome tracking, support protocol adoption, and strengthen partnerships between Abiomed and hospitals, creating a foundation for more consistent and proactive shock management.&#13;
Together, these findings highlight the need for predictive tools that guide early clinical action and infrastructure that supports seamless data integration. By advancing both, Abiomed can expand its role in cardiogenic shock care, improve patient outcomes, and lead the evolution of data-driven, standardized treatment strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape</title>
<link href="https://hdl.handle.net/1721.1/163293" rel="alternate"/>
<author>
<name>Tike, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/163293</id>
<updated>2025-10-22T03:34:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape
Tike, Gauri
The automotive industry is undergoing a transformative shift driven by technological advances in areas such as electric vehicles, autonomous vehicles, software-defined vehicles, and the decarbonization of mobility. Alternate means of transportation are also becoming available, sometimes at a lower cost than owning a car. In some cities, the best way to get from point A to point B might not be a car at all; it might involve combining heterogeneous modes of public transportation, a bike, a ride-hailing service, or a car for different portions of the route. Despite environmental concerns, global car ownership continues to rise. These changing times pose challenges to legacy automakers. While they are experts in traditional car manufacturing, modern cars require not only traditional mechanical and electrical skills but also deep expertise in software development. With growing EV adoption, Chinese EV automakers are capturing market share quickly. What is the future of mobility with all these developments? What do traditional automakers need to do in this era to remain successful? In this report we examine key trends in mobility: global electric vehicle (EV) adoption, software-defined vehicles (SDVs), autonomous vehicles (AVs), and their environmental implications. Based on this research we propose strategic recommendations for traditional automakers to continue their success over the next decade and beyond.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Fintech Innovations: Strategic Insights from the United States and India</title>
<link href="https://hdl.handle.net/1721.1/163292" rel="alternate"/>
<author>
<name>Shanbhag, Rishabh Ganesh</name>
</author>
<id>https://hdl.handle.net/1721.1/163292</id>
<updated>2025-10-22T03:34:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Navigating Fintech Innovations: Strategic Insights from the United States and India
Shanbhag, Rishabh Ganesh
This thesis examines how fintech ventures are reshaping financial services through new technologies and strategic choices tailored to different markets. It first looks at key innovations: digital payments, digital wealth management, and open banking, and how they have transformed everyday financial activities. The research then compares how fintech companies operate in the US and India by analyzing how market conditions, government initiatives, regulations, and consumer behaviors shape adoption. Finally, through case studies of Robinhood (US), Revolut (Global), and Paytm (India), the thesis examines how fintech firms navigate the choice between competing with traditional players and collaborating with them to scale under different market scenarios. Together, these insights aim to help entrepreneurs, investors and policymakers understand how strategy and technology come together in the fintech industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163291" rel="alternate"/>
<author>
<name>Harkavy, Rachael</name>
</author>
<id>https://hdl.handle.net/1721.1/163291</id>
<updated>2025-10-22T03:34:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing
Harkavy, Rachael
This thesis develops a digital framework for simulating and validating thermoplastic composite manufacturing processes, focusing on reducing the time associated with new product development. Using Finite Element Analysis (FEA) software (SimSof) and high-precision 3D scanning tools (ScanSof), the research introduces a geometric similarity metric to quantify deviations between simulated and real-world parts. By aligning simulations with production data, the study aims to replace costly physical trials with reliable digital models, accelerating customer onboarding and improving manufacturing efficiency.&#13;
&#13;
Key contributions include establishing a systematic pipeline for integrating simulation tools into Oribi Composites’ workflow, defining critical parameters such as laminate width, material card accuracy, and mesh size, and validating their impact on simulation accuracy. Results demonstrate that accurate material modeling and parameter selection significantly enhance digital twin accuracy, while mesh size has minimal influence, allowing for computational cost savings. The research also highlights challenges in replicating real-world conditions digitally, including inconsistent material cards and limited control over pressure profiles. Despite these limitations, the study demonstrates that simulations can reliably predict manufacturable designs within customer tolerances, reducing reliance on physical iterations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement</title>
<link href="https://hdl.handle.net/1721.1/163290" rel="alternate"/>
<author>
<name>Imaeda, Hiroko</name>
</author>
<id>https://hdl.handle.net/1721.1/163290</id>
<updated>2025-10-22T03:34:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement
Imaeda, Hiroko
Despite Japan’s reputation as an economically advanced nation, it faces one of the highest relative poverty rates among OECD countries, with nearly half of all single-mother households living below the poverty line. This thesis examines why poverty among single mothers persists despite a formal support ecosystem and proposes a systemic redesign grounded in life-stage-aligned, user-centered principles. Drawing on historical-institutional analysis, organizational theory, fieldwork interviews, and auto-ethnographic insights, the study identifies deeply embedded barriers that reinforce fragmented, crisis-oriented support systems misaligned with real-life trajectories. In response, it introduces the "Single Mother Journey" framework, reframing single mothers not as a static category but as a dynamic population with distinct, evolving needs. Through this lens, the thesis exposes critical gaps in preventive support, labor market misalignment, and information accessibility. Building on these findings, it proposes a future-ready support ecosystem, positioning corporations, local municipalities, NPOs, and education institutions as collaborative actors. It presents mumtec, a conceptual digital platform designed to consolidate fragmented services, personalize interventions by life stage, predict crisis points, and generate adaptive policy feedback. The thesis moves beyond surface-level critique by connecting institutional analysis with practical system design to offer a scalable framework for inclusive innovation. Listening to the silent voices of single mothers navigating precarity is an ethical imperative and a strategic necessity for sustainable, resilient societies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing automotive production scheduling to reduce finished vehicle inventory</title>
<link href="https://hdl.handle.net/1721.1/163289" rel="alternate"/>
<author>
<name>Johnson, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163289</id>
<updated>2025-10-22T03:33:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing automotive production scheduling to reduce finished vehicle inventory
Johnson, Christopher
This thesis addresses inefficiencies in automotive finished vehicle inventory management arising from misalignment between production scheduling and outbound logistics. Traditional production planning prioritizes manufacturing efficiency, causing significant inventory accumulation as vehicles await completion of full shipment loads. This research proposes an Integrated Production and Outbound Distribution Scheduling approach, introducing an optimization step within existing production scheduling workflows to align production sequences for expedited load formation. Back-testing on two automotive assembly lines over 82 weeks reveals a mean inventory reduction potential of 63–65%, with variability influenced by production volumes and vehicle configurations. A proof-of-concept implementation confirms the practical feasibility of optimized schedules, reducing inventory holding times by 33% without disrupting manufacturing operations. Computational performance analysis demonstrates good scalability for instances with fewer than 600 vehicles, though larger scenarios still yield meaningful inventory reductions. This work highlights substantial opportunities for automotive original equipment manufacturers to enhance efficiency by integrating outbound logistics into production scheduling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management</title>
<link href="https://hdl.handle.net/1721.1/163288" rel="alternate"/>
<author>
<name>Gallardo Moncayo, Gabriel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163288</id>
<updated>2025-10-22T03:34:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management
Gallardo Moncayo, Gabriel A.
The increasing availability and reduced cost of Generative AI applications for the general public have motivated organizations across all industries to implement AI-based solutions in their daily operations. Still, they struggle to determine the capabilities and limitations of this technology when implementing it in their specific context. This thesis addresses these challenges through a practical case study: deploying a text-based Generative AI system (using Large Language Models - LLMs) for automated downtime event characterization within a global industrial operational technology (OT) setting by transforming unstructured problem management reports into structured, actionable business insights. The developed software system contains a data pre-processing stage, followed by four LLM-based tasks (LLM-extraction, LLM-autoclassification, multi-aspect multi-level LLM-classification, and LLM-accuracy). We wrap everything in a well-structured and easy-to-understand evaluation framework that ensures the system’s output is format-reliable, accurate, and consistent. Through simple prompt engineering techniques and continuous failure-mode analysis, we achieve high accuracy (&gt;89%) and consistency (&gt;79%) in downtime event characterization at 1% of the current cost. In the end, we demonstrate that it is possible to implement an AI-based solution within current operational processes while properly communicating its capabilities and limitations and adapting its use to where it adds the most value.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support</title>
<link href="https://hdl.handle.net/1721.1/163287" rel="alternate"/>
<author>
<name>Gebner, Adam R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163287</id>
<updated>2025-10-22T03:33:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support
Gebner, Adam R.
This thesis investigates methods to improve demand forecasting and inventory management for raw wire. Challenges such as supply chain disruptions from the COVID-19 pandemic, operational variability, and loss of expertise exposed vulnerabilities in the existing manufacturing system, leading to shortages and inefficiencies. By leveraging extensive production data, this research develops and evaluates tools to mitigate these issues while aiming for a 100% service rate.&#13;
Key contributions include:&#13;
1. A data-driven demand simulation model, reducing forecast error and surpassing baseline methods&#13;
2. Quantification of waste distributions and variability in wire consumption&#13;
3. An inventory simulation framework for policy evaluation and shortage mitigation&#13;
4. Clustering analysis to classify demand patterns and identify key wire categories&#13;
5. A decision support tool supporting real-time visibility into inventory levels and risks&#13;
The models and tools developed through this project provide enhanced capabilities to predict future wire requirements and, with continued development, to manage inventory more effectively. Though the initial results indicate potential business value, areas for future work include incorporating additional data sources, exploring advanced machine learning techniques, and conducting longer-term pilot studies to quantify business impact. This project demonstrates the value of leveraging data analytics and simulation modeling to enhance supply chain decision-making in complex manufacturing environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction</title>
<link href="https://hdl.handle.net/1721.1/163286" rel="alternate"/>
<author>
<name>Gerbino, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/163286</id>
<updated>2025-10-22T03:34:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction
Gerbino, Jacob
This thesis aims to develop a lean manufacturing framework with the goal of optimizing the use of floorspace in Boeing's Interiors Responsibility Center South Carolina (IRCSC). The primary goal is to eliminate wasted floorspace while increasing production capacity and efficiency. The motivation behind this project stems from the need to address the fully allocated production floorspace at IRCSC and the pressing requirement to add new product lines without expanding the facility's physical footprint. Additionally, the project seeks to prepare IRCSC for possible increases in production rates for the 787 Dreamliner Program, necessitating a redesign of work centers to support higher output levels while enhancing efficiency and reducing costs.&#13;
&#13;
The project employs the DMAIC (Define, Measure, Analyze, Improve, Control) methodology and lean tools such as spaghetti diagramming and value stream mapping to treat "Misused Space" as an additional form of waste, alongside the traditional forms of lean waste. The framework was applied to a sample interior product work center to test its effectiveness. The study involved mapping the current layout, observing technician travel, conducting time studies, and analyzing value stream maps. The methodology facilitated the creation of a new floorplan and scheduling system that consolidates cure times and balances workloads between work cells. Discrete event simulation was used to validate the proposed changes, ensuring they would achieve the desired improvements.&#13;
&#13;
The results of the study revealed inefficiencies in the current layout and scheduling practices of the work center. The proposed changes demonstrated a potential 25% reduction in floorspace and a 55% decrease in product throughput time. The new scheduling and work allocation strategy reduced product throughput time from nine days to four, and the new layout reduced worker travel distances by as much as 50% in some work cells. The lean manufacturing principles and scheduling optimizations discussed in this thesis should be applied to other work centers within IRCSC. Future research should explore advanced methodologies and tools to handle the complexities of more interconnected work centers.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of AI Integration in Healthcare: Exploring Regulatory,&#13;
Cultural, and Strategic Barriers</title>
<link href="https://hdl.handle.net/1721.1/163285" rel="alternate"/>
<author>
<name>Venkatanarayanan, Sriya</name>
</author>
<id>https://hdl.handle.net/1721.1/163285</id>
<updated>2025-10-22T03:33:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Impact of AI Integration in Healthcare: Exploring Regulatory,&#13;
Cultural, and Strategic Barriers
Venkatanarayanan, Sriya
This thesis investigates the barriers and enablers to predictive AI adoption in healthcare through a thematic synthesis of 13 academic articles and real-world case studies published over the last five years. Barriers were categorized into three domains: regulatory, cultural, and strategic. These included challenges such as fragmented regulation, clinician skepticism, data quality limitations, and poor alignment with clinical workflows. Cross-cutting patterns, stakeholder tensions, and recurring meta-themes revealed that these barriers are deeply interconnected. Drawing from over 200 individual findings, an actionable visual framework was developed to guide responsible and sustainable predictive AI integration. The proposed model, consisting of an internal “Pyramid” of enablers and an external “Circular Loop” of ecosystem conditions, provides a practical structure for aligning governance, engagement, and workflow with ongoing commitments to equity, collaboration, safety, and transparency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative AI in Private Equity for Accumulative Advantage</title>
<link href="https://hdl.handle.net/1721.1/163284" rel="alternate"/>
<author>
<name>Mahajan, Bonny</name>
</author>
<id>https://hdl.handle.net/1721.1/163284</id>
<updated>2025-10-22T03:33:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative AI in Private Equity for Accumulative Advantage
Mahajan, Bonny
This research explores the use of Generative AI (Gen AI) for achieving accumulative gains across various business and technical functions within commercial enterprises under private equity firms. While based on applied experiments in a private equity-owned, resource-constrained portfolio company, many of the findings presented here may apply in other types of organizations. Through this study, we conduct case studies across key departments such as customer service, purchasing, engineering, employee management, and marketing. For each use case, we delve into the utilization of custom-built or publicly available Gen AI-based tools, aiming to understand the unique considerations and challenges that may arise when implementing Gen AI solutions in industries like manufacturing, which have traditionally been underserved by the tech sector. Through this research, we identify the critical role of humans in the loop, emphasizing the importance of UI/UX design, domain expertise, and local culture in the successful adoption and acceptance of Gen AI tools designed to enhance workforce efficiency in portfolio companies. This study also aims to illustrate how investing in Gen AI technologies is ultimately an investment in a company’s most valuable resource—its employees. By equipping employees with innovative tools, the organization not only improves productivity and job satisfaction but also fosters a culture of continuous improvement and adaptability. This research highlights the transformative potential of Gen AI in reshaping traditional business processes and driving sustainable growth in different functions of organizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Standard Work for High Mix Low Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163283" rel="alternate"/>
<author>
<name>McNulty, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/163283</id>
<updated>2025-10-22T03:33:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Standard Work for High Mix Low Volume Manufacturing
McNulty, Will
This thesis examines the challenges of developing standard work at scale in a high-mix low-volume (HMLV) manufacturing environment. The research is conducted at Re:Build Composite Resources, a thermoset composites (TSC) manufacturer. In the context of the company, impending growth demands more skilled laminators, and the manual, complex nature of TSC lamination exposes the need for improved and documented standard procedures. By documenting existing processes through operator shadowing, time studies, and quality data analysis, a “best-known” standard was created for the production steps of a subset of parts. Two pilot parts—one focused on cutting scrap rates, the other on boosting throughput—demonstrated how standard work instructions and a standard work schedule designed for one-piece flow significantly reduced errors and production variability. The thesis also explores the effectiveness and limitations of using computer vision as a tool to automate work instruction and time study data set generation. Beyond the immediate improvements in quality, efficiency, and new operator onboarding, the project’s scalable framework lays out a roadmap for broader adoption&#13;
of standard work in fast-growing HMLV operations. By focusing first on parts that yield the most significant gains — either due to high volume or high unit cost — organizations can maximize returns on continuous-improvement efforts while not overburdening their engineering staff with excess analysis and documentation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating The Feasibility of Electrified Process Heating&#13;
for Drug Substance Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163282" rel="alternate"/>
<author>
<name>Bhirgoo, Priya Darshini</name>
</author>
<id>https://hdl.handle.net/1721.1/163282</id>
<updated>2025-10-22T03:34:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating The Feasibility of Electrified Process Heating&#13;
for Drug Substance Manufacturing
Bhirgoo, Priya Darshini
The pharmaceutical industry relies on high-temperature fluids such as pure steam to support critical operations including equipment cleaning and sterilization and on hot Water-For-Injection (WFI) as a key ingredient for drug substance manufacturing. These high-temperature process-driven heat demands are fulfilled through fossil fuel-based heating which contributes significantly to Scope 1 carbon emissions. Recognizing the link between environmental stressors and human health, Amgen has committed to achieving carbon neutrality by 2027. This thesis explores the feasibility and implications of transitioning from fossil fuel-based process heating to a fully electric system at one of Amgen’s drug substance manufacturing sites. Amgen’s existing fossil fuel-based steam system was analyzed through site visits, engineering reviews, and stakeholder engagements to quantify capital and operating costs, energy usage, and carbon emissions. A fully electric alternative was designed by researching commercial technologies and collaborating with suppliers as well as internal stakeholders. The analysis found that while the capital investment required for electrification is comparable to that of traditional steam systems, the operating costs for an electric system are significantly higher, driven by the higher price of electricity relative to natural gas. From a sustainability perspective, electrification eliminates on-site Scope 1 carbon emissions but shifts emissions to Scope 2, making the environmental benefit dependent on the carbon intensity of the local electricity grid. As grids transition to renewable energy sources, the potential for long-term emissions reductions strengthens. Future work should focus on evaluating the costs of necessary electrical infrastructure upgrades and identifying regions with lower-carbon, lower-cost electricity grids where electrified systems could be more readily implemented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations</title>
<link href="https://hdl.handle.net/1721.1/163281" rel="alternate"/>
<author>
<name>Tchelikidi, Cloe</name>
</author>
<id>https://hdl.handle.net/1721.1/163281</id>
<updated>2025-10-22T03:33:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations
Tchelikidi, Cloe
In mature, competitive sectors such as financial services and media and entertainment, customer loyalty is increasingly difficult to sustain. This thesis explores the emergence of cross-industry partnerships, specifically between credit card issuers and digital entertainment platforms, as a strategic response to rising churn and declining differentiation. Through a case study of the American Express Digital Entertainment Credit, the research examines how lifestyle-aligned benefits can foster deeper behavioral engagement, reduce switching, and enhance customer lifetime value. The thesis situates these partnerships within the broader evolution of loyalty strategies, marked by hyper-personalization, subscription fatigue, and platform convergence. Findings suggest that flexible, recurring rewards embedded in consumers’ daily routines offer a path to durable retention, especially among younger, digital-native cohorts. The study concludes that such partnerships are not peripheral marketing tools but increasingly core to competitive strategy in commoditized markets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/163280" rel="alternate"/>
<author>
<name>Wu, Lanchen</name>
</author>
<id>https://hdl.handle.net/1721.1/163280</id>
<updated>2025-10-22T03:33:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry
Wu, Lanchen
This paper explores how financial pressures, regulatory enforcement, and market dynamics interact to shape pharmaceutical manufacturing quality and drug supply stability. Using a causal loop diagram (CLD), it examines how cost-cutting behavior affects control and validation capabilities, interacts with regulatory agency oversight, and contributes to recurring drug shortages. The analysis highlights how competition drives companies to operate at or near the minimum regulatory requirements, gradually eroding quality systems. Because of the nature of medical products, the quality of a drug cannot be directly assessed by individual users, distributors, or payers, making it necessary for government agencies like the FDA to rely on internal manufacturing data to ensure all drugs meet a minimum standard of quality. Regulatory oversight serves as a safeguard rather than a tool for guiding business decisions. However, its effectiveness is constrained by the frequency of inspections, the capacity of auditors, and limited resources—especially when government budgets are stretched and other priorities take precedence. The paper also discusses how manufacturers may avoid detection by strategically presenting information during inspections, making it harder for auditors to spot issues and allowing weakened controls to persist. Over time, these dynamics reinforce one another, creating a self-sustaining cycle in which cost pressures lead to minimal compliance, quality issues, and regulatory responses that increase costs further. &#13;
As the number of manufacturers shrinks due to market consolidation, supply disruptions become more severe when failures occur. Regulatory discretion—intended to avoid immediate shortages—can unintentionally reduce incentives for long-term quality investment, further weakening the system’s resilience. &#13;
To address these issues, the paper proposes structural changes, including financial accountability for payers during shortages, tighter regulatory focus on process reliability, and linking regulatory flexibility to quality improvement obligations. These approaches aim to create balancing mechanisms that reduce cost-driven deterioration of quality and promote a more stable pharmaceutical supply chain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ant Group’s Transformative Impact on China’s Financial Industry</title>
<link href="https://hdl.handle.net/1721.1/163279" rel="alternate"/>
<author>
<name>Pan, Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/163279</id>
<updated>2025-10-22T03:33:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ant Group’s Transformative Impact on China’s Financial Industry
Pan, Kathryn
Ant Group, China’s leading digital finance company, has fundamentally transformed the nation’s financial industry through groundbreaking innovations in digital payments, micro-lending, wealth management, and investment advisory. This paper explores the company’s role in reshaping China’s financial ecosystem, analyzing its impact on traditional banking institutions, regulatory policies, and consumer behavior. Utilizing analytical frameworks such as Porter’s Five Forces, PEST analysis, and SWOT analysis, this study provides a comprehensive assessment of the external and internal factors influencing Ant Group’s development and competitive positioning.&#13;
This research highlights Ant Group’s key financial innovations, including its online transaction platform, offline payment services, online credit solutions, digital fund distribution channels, and AI-driven investment advisory. By leveraging advanced technologies such as artificial intelligence, blockchain, and big data analytics, Ant Group has enhanced service efficiency, expanded accessibility, and strengthened risk management capabilities. These innovations have significantly advanced financial inclusion, extending financial services to previously underserved populations. However, Ant Group’s rapid growth has also intensified regulatory scrutiny, prompting major restructuring efforts and adjustments to its business model.&#13;
This paper employs three major analytical frameworks: PEST analysis, Porter’s Five Forces, and SWOT analysis. The PEST analysis examines the political, economic, social, and technological factors shaping Ant Group’s trajectory, highlighting the impact of evolving government policies and macroeconomic conditions on its operations. Meanwhile, Porter’s Five Forces framework assesses the competitive dynamics within China’s financial sector, identifying key market pressures such as rising competition and regulatory constraints. Finally, the SWOT analysis evaluates Ant Group’s internal strengths and weaknesses, as well as external opportunities and threats, offering a comprehensive perspective on the company’s strategic positioning.&#13;
Drawing from these analyses, the paper offers strategic recommendations to ensure Ant Group’s sustained growth and resilience in an increasingly complex financial environment. These recommendations include strengthening regulatory compliance, fostering strategic alliances with both domestic and international partners, and further leveraging technological advancements to expand its service offerings. Additionally, the study explores potential global expansion strategies, considering how Ant Group can adapt its innovative financial solutions to international markets while navigating diverse regulatory landscapes.&#13;
By examining Ant Group’s evolution and the broader implications of its digital finance model, this study contributes to a deeper understanding of fintech’s disruptive power in China’s financial sector. The findings provide valuable insights for industry leaders, policymakers, and scholars interested in the intersection of financial technology, regulation, and strategic business management. As digital finance continues to evolve, Ant Group’s trajectory serves as a critical case study in balancing innovation, regulation, and market competition within a rapidly shifting financial landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput</title>
<link href="https://hdl.handle.net/1721.1/163278" rel="alternate"/>
<author>
<name>Sircar, Julia Sarita</name>
</author>
<id>https://hdl.handle.net/1721.1/163278</id>
<updated>2025-10-22T03:33:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput
Sircar, Julia Sarita
Blue Origin is an aerospace company with ambitious throughput goals in response to increased commercial space exploration. Pressure to increase throughput is especially apparent within its BE-4 engine business, as the engines support Blue Origin and its customers. Blue Castings is one of the primary in-house manufacturing plants that supports BE-4 production; the plant manufactures rocket engine components through a process called investment casting. Investment casting, by nature, is a complex process involving long rework times, high incidence of defects, and significant process variability. These characteristics contribute to the discrepancies between Blue Origin’s target BE-4 production rate, the production rate feasible at Blue Castings, and its actual delivery rate. This thesis explores how defect management and prevention techniques can improve throughput at Blue Castings and reduce the number of Blue Origin’s schedule delays attributable to Blue Castings. The research began with a baseline investigation and analysis of Blue Castings’ actual and best-case throughput rates compared to its goal. Two gaps were identified: 1) a gap between actual and feasible throughput, and 2) a gap between feasible and target throughput. The analyses highlight the need for better process and quality management to close both gaps. Through a mixed-method approach, the researcher explored and piloted process and data improvements to understand their impact on throughput. This included qualitative and quantitative data collection through on-site interviews, random sampling of defect data, and queries from the manufacturing execution system. With this data, the researcher investigated how machine learning can predict rework severity and support defect prevention. A case study on a selected part number demonstrated the potential to improve throughput by reducing unnecessary rework. 
By aligning stock-on-surface criteria to downstream machining requirements, average rework loops were reduced from thrice the industry benchmark to below the benchmark. This increased capacity at the rework work center and improved the overall delivery of this part. The research also demonstrated how a cross-functional collaboration to formalize producibility lessons reduces the creation of defects, promotes systematic knowledge-sharing, and accelerates improvements similar to the stock-on-surface case study. In parallel, this research evaluated how Blue Castings could improve defect documentation and tracking without causing significant additional effort for operators. The researcher’s findings highlight the limitations of handwritten weld maps and inconsistent data capture practices on effectively preventing defects. Digitization of defect tracking is recommended to enable consistent defect data collection and improved root cause and trend analyses. As data quality improves, applying classification ML models for predictive analytics can scale throughput. This work provides recommendations for Blue Castings to implement mechanisms that reduce rework, improve producibility, and increase throughput to align with Blue Origin’s goals.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives</title>
<link href="https://hdl.handle.net/1721.1/163276" rel="alternate"/>
<author>
<name>Kaashoek, Justin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163276</id>
<updated>2025-10-22T03:33:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives
Kaashoek, Justin H.
Large language models (LLMs) can perform a wide range of search and optimization tasks over discrete spaces. This work seeks to explore the limits of LLM-guided search. We construct a set of text optimization tasks with different levels of “intuitiveness” and evaluate whether LLMs can effectively optimize objectives. We show that the LLM's performance depends not only on its intuition for the objective, but also on the alignment between the objective and its priors. We also find that the LLM can successfully optimize an objective even without an explicit description of the objective. Our results largely focus on greedy search strategies; we develop a theoretical characterization of conditions under which greedy search is optimal, implying that in those settings the LLM's failures stem from a fundamental inability to take gradient-like steps rather than from a suboptimal search strategy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems</title>
<link href="https://hdl.handle.net/1721.1/163273" rel="alternate"/>
<author>
<name>Harjono, Hanna-Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/163273</id>
<updated>2025-10-22T03:33:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems
Harjono, Hanna-Lee
Electrospray thrusters have emerged as highly promising propulsion options for small satellites due to their compact size and low weight and power requirements. These thrusters offer precise, efficient, and scalable attitude control, making them ideal for missions requiring fine adjustments and advanced capabilities such as formation flying and docking maneuvers. However, to fully exploit the potential of electrospray thrusters, control strategies specific to them must be developed. In this work, a parameterized, PID gain-scheduled attitude controller that leverages the unique throttleability of electrospray thrusters is developed and validated. The developed controller is adaptable across operating conditions, as well as electrospray thrust coefficient values. Extensive modeling efforts are undertaken to incorporate the throttleability and operational constraints of electrospray thrusters, ensuring accurate performance predictions. The control system is simulated under various operating conditions to assess and verify its functionality and robustness against disturbance torques. Validation experiments in a magnetic levitation CubeSat testbed are proposed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen</title>
<link href="https://hdl.handle.net/1721.1/163271" rel="alternate"/>
<author>
<name>Goel, Viraat Yogi</name>
</author>
<id>https://hdl.handle.net/1721.1/163271</id>
<updated>2025-10-22T03:33:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen
Goel, Viraat Yogi
Technology transfer (TT), or the process by which a product's manufacturing is moved and scaled, is a complex business process with countless deliverables and stakeholders. This is especially true in biomanufacturing, where drug commercialization timelines are measured in years, manufacturing facilities are specially designed, and regulations must be stringently met. This systems-level complexity can create inefficiencies in the TT process, lengthening timelines and wasting resources. In this project, we use simulation modeling techniques to digitally model Amgen's Commercial Tech Transfer (CTT) process for biologic drugs. We use virtual experimentation to identify key bottlenecks in the TT workflow, quantify how workstream alterations impact project timelines, and identify process changes likely to shorten timelines. We also extend this analysis to Amgen's New Product Introduction (NPI) process, identifying how coordination between upstream and downstream processes may accelerate NPI timelines. Finally, we link this project to the ongoing development of TT data visualization dashboards.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Stock Modeling for a Medical Devices Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163270" rel="alternate"/>
<author>
<name>Chong, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/163270</id>
<updated>2025-10-22T03:33:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Safety Stock Modeling for a Medical Devices Supply Chain
Chong, Julie
This thesis examines the current inventory management practices at a leading manufacturer of medical devices, and identifies areas for significant improvement. The analysis reveals inefficiencies in safety stock management, with finished goods inventories being excessively high and raw material stocks being underestimated. The study applies single-echelon and multi-echelon inventory modeling to demonstrate potential cost savings through optimized safety stock levels. Additionally, it highlights the importance of reevaluating high service level targets and improving forecasting accuracy to reduce reliance on costly countermeasures. The thesis also emphasizes the need for effective management of component lead times and enhanced data visibility. Recommendations include transitioning to data-driven safety stock calculations, adopting multi-echelon inventory optimization, reassessing service level targets, enhancing forecasting accuracy, and improving component lead time management. By implementing these strategies, the company can enhance operational efficiency, reduce costs, and build greater resilience in its supply chain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding home broadband coverage through existing Low Earth Orbit megaconstellations</title>
<link href="https://hdl.handle.net/1721.1/163269" rel="alternate"/>
<author>
<name>Gonzalez Martinez, Gretel</name>
</author>
<id>https://hdl.handle.net/1721.1/163269</id>
<updated>2025-10-22T03:33:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding home broadband coverage through existing Low Earth Orbit megaconstellations
Gonzalez Martinez, Gretel
Expanding broadband access to underserved areas continues to be a significant challenge for Internet Service Providers (ISPs). While their services perform well in high-density regions, they face scalability limitations in sparsely populated areas where infrastructure costs must be spread across a smaller customer base. This study explores the potential of Low Earth Orbit (LEO) satellite megaconstellations as a scalable solution for extending broadband coverage in the United States. By analyzing the technical capabilities, deployment timelines, and economic feasibility of partnering with LEO satellite providers, this research offers a strategic framework for integrating satellite broadband into ISPs' service portfolios.&#13;
&#13;
A customer demand model identifies approximately 17 million unserved households within the addressable market of one of the largest U.S. telecommunications companies. The business case assessment evaluates broadband profitability by optimizing customer base size relative to proximity to existing infrastructure. While fiber optics remains the most profitable solution in high-density areas and fixed wireless access effectively utilizes excess 5G capacity, both require substantial infrastructure investment, limiting their feasibility for rural broadband expansion. In contrast, a satellite broadband partnership emerges as the most cost-effective solution for at least 1 million households, surpassing the profitability of currently existing offerings. With minimal capital investment, satellite technology enables rapid customer acquisition and scalable nationwide expansion. The analysis highlights the critical role of wholesale agreements in profitability and the need to secure a minimum revenue share of 16.5% to reach the break-even point.&#13;
&#13;
Performance modeling and curve approximation techniques estimate that if Kuiper meets Federal Communications Commission (FCC) deployment milestones, it could serve 8.5 million customers by 2026, with full nationwide coverage projected by 2029. Under a 200x oversubscription model, Kuiper’s total subscriber capacity could scale to 32.8 million, demonstrating its ability to complement current broadband offerings. While LEO broadband networks can achieve capacities in the tens of Tbps, they remain far below fiber networks, which operate in the thousands of Tbps. Rather than competing directly, satellite broadband is positioned as a complementary solution, addressing connectivity gaps in rural and underserved&#13;
regions.&#13;
&#13;
To capitalize on these findings, this study recommends leveraging existing LEO megaconstellations to expand broadband coverage nationwide. A phased rollout should begin with a beta program in California, the state with the highest number of unserved households, to validate network performance and optimize deployment for broader expansion. Partnering with an existing LEO megaconstellation could effectively bridge the digital divide in rural areas, expand service offerings, and enable a stronger position in the growing satellite broadband market.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives</title>
<link href="https://hdl.handle.net/1721.1/163268" rel="alternate"/>
<author>
<name>Kim, Jason Gwanhee</name>
</author>
<id>https://hdl.handle.net/1721.1/163268</id>
<updated>2025-10-22T03:33:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives
Kim, Jason Gwanhee
This study examines the determinants of firms adopting performance-vesting long-term incentive (PLI) awards, a rapidly growing form of executive compensation. Using data provided by Equilar on Russell 3000 firms, I investigate how a firm's contracting environment and inter-firm networks influence the adoption and design of PLI awards. I find that stock liquidity and analyst coverage significantly increase the likelihood of adoption by enhancing the informativeness of performance measures. The findings suggest that firms adopt PLI awards to better align managerial incentives with shareholder interests, focusing on the measures that are both reliable and strategically aligned. I also show that board interlocks, particularly those involving compensation committee members, and shared compensation consultants play a significant role in facilitating the diffusion of PLI awards across firms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Business Value of Enterprise Digital Architecture</title>
<link href="https://hdl.handle.net/1721.1/163263" rel="alternate"/>
<author>
<name>Venkata Aditya, Saraswatula (Adi SV)</name>
</author>
<id>https://hdl.handle.net/1721.1/163263</id>
<updated>2025-10-22T03:33:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Business Value of Enterprise Digital Architecture
Venkata Aditya, Saraswatula (Adi SV)
Digital technologies are fundamentally reshaping markets and organizations globally. This thesis is exploratory research that seeks to explain how large multi-regional and global enterprises determine, prioritize, measure, and manage business value outcomes of digital investments over time. I examine the value construct of digital initiatives in firms from different industries by interviewing various stakeholders. Insights surfaced from this primary research are analyzed in conjunction with the concepts from current literature. Qualitative findings are proposed, and a list of value metrics is presented that can serve as a future reference for firms. A causal loop diagram is proposed to visualize firm capabilities and value dynamics.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163261" rel="alternate"/>
<author>
<name>Oludipe, Lanre</name>
</author>
<id>https://hdl.handle.net/1721.1/163261</id>
<updated>2025-10-22T03:33:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain
Oludipe, Lanre
The increasing demand for faster consumer delivery has led retailers to establish smaller regional distribution centers alongside traditional main distribution centers (MDCs). However, the limited capacity of some of these regional centers heightens the need for precise inventory forecasting and deployment to minimize excess inventory, particularly when few viable outlets exist for excess inventory. This research examines strategies to mitigate excess inventory at regional centers through inventory rebalancing, the integration of additional outlets, and modifications to existing inventory policies. A Monte Carlo simulation was conducted to compare the performance of the current system with a modified system incorporating these enhancements. The results showed that the modified system improved capacity utilization and reduced inventory deployment from the MDC without affecting margin. These improvements can enable more agile operations at smaller regional centers, reduce inventory buildup, and reduce the pressure of precise inventory deployment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications</title>
<link href="https://hdl.handle.net/1721.1/163260" rel="alternate"/>
<author>
<name>Knapp, Rachael</name>
</author>
<id>https://hdl.handle.net/1721.1/163260</id>
<updated>2025-10-22T03:33:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications
Knapp, Rachael
The global shift to electric vehicles (EVs) is progressing rapidly, driven by the need to reduce greenhouse gas (GHG) emissions and global reliance on fossil fuels. However, fleet electrification presents unique challenges, particularly in regard to rolling out the necessary charging infrastructure and operational efficiency. This study examines how various depot-based fleet charging strategies impact up-front capital and long-term operational expenditures. The operational feasibility of each method is evaluated through the use of a discrete event simulation. The study incorporates fleet data to assess the time required to charge the fleet, the number of chargers needed, and the number of associates needed to operate manual strategies. The analyzed charging methods include dedicated level 2 charging, vehicle swapping, level 2 cable swapping, level 3 cable swapping, and sequential and simultaneous charging. Key findings indicate that while a 1:1 vehicle-to-charger ratio ensures charging reliability within the designated time, it incurs the highest capital costs. Alternative strategies, such as cable swapping and simultaneous charging, significantly reduce costs while successfully charging the fleet within the charging window.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs</title>
<link href="https://hdl.handle.net/1721.1/163259" rel="alternate"/>
<author>
<name>Kasliwal, Mohit</name>
</author>
<id>https://hdl.handle.net/1721.1/163259</id>
<updated>2025-10-22T03:33:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs
Kasliwal, Mohit
This thesis presents an integrated optimization framework designed for the large-scale deployment of electric vehicles (EVs) within commercial fleets, specifically focusing on balancing emissions reduction and operational cost efficiencies. Utilizing Verizon’s extensive fleet of over 10,000 light-duty vehicles across 1,000 sites as a case study, the research addresses the challenges and complexities of effective site selection for such a large and dispersed fleet. &#13;
The research involved developing and testing several optimization models under varying scenarios, including scenarios prioritizing maximum operational savings, maximum emissions reduction, and a hybrid model employing an internal cost of carbon (ICC) to balance both operational and environmental objectives. The model essentially develops a ranking system for sites – suggesting which sites to electrify in which year and order, with how many EV conversions (from existing ICE vehicles) at each site.&#13;
The results highlight the importance of tailoring EV deployment strategies to site-specific conditions, such as unique vehicle usage patterns, grid emissions profiles, regional operational costs, and available incentives. Particularly, smaller sites were found to offer greater relative benefits in terms of both cost savings and emissions reductions per unit of capital invested due to their high average mileage, making them strategic priorities for early electrification.&#13;
Operational feasibility was also thoroughly examined, recommending practical constraints such as limiting the number of sites electrified annually to ensure project manageability and effectiveness. &#13;
Sensitivity analyses addressed critical uncertainties such as battery degradation over the vehicle lifespan and the impact of extreme weather on EV performance. These analyses underscore the necessity of conservative battery range buffers ("safe ranges"). Robust load management strategies can be deployed to significantly reduce demand charges and optimize charging schedules based on time-of-use rates where available.&#13;
Recommendations from the study advocate for implementing a hybrid optimization strategy incorporating an ICC based on corporate goals, continuous adaptive management informed by ongoing data collection, and strategic infrastructure investments to future-proof EV deployments. Policy alignment is also critical to enhance economic viability via incentives and ensure regulatory compliance.&#13;
Finally, the thesis proposes future research directions, including investigation of advanced load management and integration with renewable energy sources, exploring bi-directional charging to add revenue streams, incorporating marginal operating emissions rate (MOER) data to further reduce grid emissions and exploring the resilience of EV fleets to power outages. These initiatives aim to further enhance strategic decision-making and ensure the long-term sustainability and efficiency of fleet electrification programs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Chain: Building Resilience in the Insurance Value Chain</title>
<link href="https://hdl.handle.net/1721.1/163258" rel="alternate"/>
<author>
<name>Chuah, Chung Jin</name>
</author>
<id>https://hdl.handle.net/1721.1/163258</id>
<updated>2025-10-22T03:33:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Breaking the Chain: Building Resilience in the Insurance Value Chain
Chuah, Chung Jin
This thesis examines how strategic transformation approaches reshape the resilience of the Property &amp; Casualty (P&amp;C) insurance industry in light of ongoing technological disruption, climate change, and regulatory pressures. Through empirical analysis of 9 insurers, the study reveals that while all transformation types improve performance, phased 'test-refine-execute' strategies achieve superior outcomes by combining operational focus with strategic agility. The research identifies four implementation levers: (i) digital modernization, (ii) phased transformation execution, (iii) resource-allocation agility, and (iv) aligned leadership - which together explain why some transformations succeed where others fail.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain Adaptation of VLM for Soccer Video Understanding</title>
<link href="https://hdl.handle.net/1721.1/163257" rel="alternate"/>
<author>
<name>Jiang, Tiancheng(Tony)</name>
</author>
<id>https://hdl.handle.net/1721.1/163257</id>
<updated>2025-10-22T03:33:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain Adaptation of VLM for Soccer Video Understanding
Jiang, Tiancheng(Tony)
Vision Language Models (VLMs) have demonstrated strong performance in multi-modal tasks by effectively aligning visual and textual representations. However, most video understanding VLM research has been domain-agnostic, leaving their transfer learning capability to specialized domains underexplored. In this work, we address this by exploring the adaptability of open-source VLMs to specific domains, focusing on soccer as an initial case study. Our approach uses large-scale soccer datasets and LLMs to create instruction-following data, and uses them to iteratively fine-tune a general-domain VLM in a curriculum learning fashion (first teaching the model key soccer concepts, then question-answering tasks). The final adapted model, trained using a curated dataset of 20k video clips, exhibits significant improvement in soccer-specific tasks compared to the base model, with a 37.5% relative improvement for the visual question-answering task and an accuracy improvement from 11.8% to 63.5% for the downstream soccer action classification task.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization</title>
<link href="https://hdl.handle.net/1721.1/163256" rel="alternate"/>
<author>
<name>Garber, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/163256</id>
<updated>2025-10-22T03:33:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization
Garber, Jeremy
This thesis analyzes and validates autonomous Finished Vehicle Logistics (FVLa) operations at the plant of an automotive Original Equipment Manufacturer (OEM), through the development of a Vehicle-Plug-In (VPI) system with Level 4 autonomous driving capabilities. The research combines process flow analysis with FlexSim simulation modeling to optimize operational parameters and assess safety performance. Results demonstrate FVLa operational feasibility with a recommended VPI inventory of 750 units and 6-hour replenishment cycle. The study's key contributions include a validated operational model using Economic Order Quantity calculations and a safety framework utilizing Bayesian Networks, establishing foundations for the planned 2028 implementation while maintaining required throughput rates and safety standards.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonized Cement Manufacturing via Advanced Production Technologies</title>
<link href="https://hdl.handle.net/1721.1/163255" rel="alternate"/>
<author>
<name>Norwalk, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163255</id>
<updated>2025-10-22T03:33:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonized Cement Manufacturing via Advanced Production Technologies
Norwalk, Michael
Cement production is the second-largest source of industrial carbon dioxide emissions worldwide. Due to the chemical reactions inherent in its production and the temperatures required to drive those reactions, cement is considered a “hard-to-decarbonize” industry. In this study, three emerging technologies to reduce the carbon intensity of industrial processes, namely, direct high-temperature electric process heat, electric process heat utilizing thermal storage, and liquid amine-based carbon capture, are assessed in the context of a greenfield cement production facility relative to a new-build conventional cement plant fueled with natural gas. Cement plants utilizing this set of technologies were modeled in five U.S. geographies to determine the relative economic returns. The economics were assessed, inclusive of available economic incentives, both for the scenario in which the cement produced is sold in the U.S. market and for the scenario in which the cement produced is exported to the European Union (E.U.) market to assess potential benefits from the E.U. carbon pricing system. The analysis indicates that at current technology prices, the economic returns of the assessed technologies, while in some cases profitable, continue to lag those of conventional production technology for the domestic U.S. market. As costs decline with increasing deployment, the economics of carbon capture solutions have the potential to be competitive with conventional technology. The E.U. carbon emissions penalties are effective in altering the economics in such a way that implementing carbon capture systems becomes the most attractive economic option, demonstrating the power of carbon emissions markets. With increased technology deployment as well as the adoption of targeted incentives in the U.S. market, the adoption of low carbon cement production technologies can be accelerated.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Polarity Ion Electrospray Propulsion</title>
<link href="https://hdl.handle.net/1721.1/163253" rel="alternate"/>
<author>
<name>Shaik, Saba Zareen</name>
</author>
<id>https://hdl.handle.net/1721.1/163253</id>
<updated>2025-10-22T03:33:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Single-Polarity Ion Electrospray Propulsion
Shaik, Saba Zareen
Electrospray thrusters are highly efficient spacecraft propulsion devices that accelerate ions sourced from ionic liquid propellants to produce thrust. Typically, electrosprays are fired in a dual-polarity configuration in which the polarity of the ion beam is periodically reversed. This strategy is difficult to implement and imposes limitations on system size and performance. We instead propose a single-polarity design where negative ions are emitted continuously from the thruster, enabling extreme miniaturization, faster startup, better emission stability, and simpler power processing. This thesis investigates two challenges associated with the single-polarity design. First, system lifetime is of principal importance for electrospray propulsion systems in general and must be verified for a single-polarity implementation. Long-duration electrospray tests are performed, demonstrating that single polarity thrusters achieve comparable lifetimes and performance to state of the art systems with high mass utilization and minimal hardware degradation. An additional challenge is propellant electrochemistry, triggered when positive counterions accumulate in the ionic liquid. A suite of experiments is conducted to identify and characterize electrochemical processes, including electrical double-layer potential evolution and gas-phase product formation, in electrospray thrusters over long firing durations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Analysis of Semiconductor Investment Environments&#13;
in the U.S. and China</title>
<link href="https://hdl.handle.net/1721.1/163252" rel="alternate"/>
<author>
<name>Zhang, Hanxue</name>
</author>
<id>https://hdl.handle.net/1721.1/163252</id>
<updated>2025-10-22T03:33:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Comparative Analysis of Semiconductor Investment Environments&#13;
in the U.S. and China
Zhang, Hanxue
Semiconductors are fundamental to Artificial Intelligence (AI) and central to global technological competition. Against this backdrop, this thesis compares semiconductor primary investment environments in the United States and China, examining their implications for industry development and innovation. The study employs a mixed-methods approach, combining expert interviews, data analysis, and natural language processing (NLP). It draws on primary market investment, M&amp;A deals, and government grants data to examine capital structures, investment stages, sectoral focus, and exit efficiency. Furthermore, it analyzes nearly 3,000 semiconductor industry reports (2020-2025) to identify evolving strategic priorities and thematic trends shaping these environments. Findings reveal that China’s state-led, vertically integrated model prioritizes upstream capacity building and supply chain autonomy, supported by government guidance funds, private capital, and policy-driven mechanisms. However, there remains a significant gap in leading-edge chips, necessitating precise investments and patient capital to bridge this divide. The U.S. ecosystem, by contrast, shaped by major technology firms and federal support, focuses on design innovation and cutting-edge technologies. However, structural constraints such as limited exit pathways, fragmented fabrication capacity, and insufficient industrial policies may hinder the U.S. in nurturing innovation-driven small and medium-sized enterprises (SMEs) in the semiconductor industry. This thesis highlights the structural divergence between the U.S. and China’s semiconductor ecosystems by examining policy, primary market capital, and investment dynamics. It offers policymakers and investors a strategic overview of how these forces shape innovation and resilience, while identifying emerging investment priorities and future development paths.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Automotive Production Volume Using Regression and Time Series Modelling</title>
<link href="https://hdl.handle.net/1721.1/163251" rel="alternate"/>
<author>
<name>Gong, Yutao</name>
</author>
<id>https://hdl.handle.net/1721.1/163251</id>
<updated>2025-10-22T03:33:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forecasting Automotive Production Volume Using Regression and Time Series Modelling
Gong, Yutao
Accurate forecasting of automotive production volumes is a critical capability for suppliers navigating an increasingly volatile industry. Overly optimistic forecasts, particularly from Original Equipment Manufacturers (OEMs), lead to misallocated capacity and lost opportunities across the supply chain. This thesis investigates whether advanced statistical models can improve upon benchmark industry forecasts and provide automotive suppliers with more reliable, practical tools for demand planning. Several forecasting methodologies are evaluated, including ARIMA, standard linear regression, Lasso regression, the Theta model, and a hybrid Boosted Theta model. Models are tested across North America, Europe, and Greater China using 2000-2024 vehicle production and macroeconomic data. Results show that the Theta model outperforms industry forecasts across both 1-year and 5-year horizons in North America and Europe. Its simplicity, low data requirements, and robustness to market volatility make it suitable for industrial use. The model was successfully implemented at Commonwealth Rolled Products, an aluminum rolling mill in Kentucky and a portfolio company of American Industrial Partners (AIP), where it was adopted for 2025 planning and drove a shift towards data-centric forecasting practices. This research presents a real-world example of applying academic techniques to solve actual business problems, serving as a valuable reference for suppliers seeking to improve forecast accuracy and operational planning in the evolving automotive landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of university venture funds in supporting early-stage Japanese startups</title>
<link href="https://hdl.handle.net/1721.1/163250" rel="alternate"/>
<author>
<name>Brillaud, Nami</name>
</author>
<id>https://hdl.handle.net/1721.1/163250</id>
<updated>2025-10-22T03:33:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The role of university venture funds in supporting early-stage Japanese startups
Brillaud, Nami
This thesis explores how university venture funds in Japan are uniquely positioned to turn the country’s innovation capacity into entrepreneurial capacity by supporting early-stage startups. While Japan consistently ranks high in research output, much of this potential is not being translated into successful entrepreneurship. Risk capital is scarce compared to other ecosystems, particularly for deep tech, and support systems for early-stage startups are still limited. University venture funds – which inherently connect universities, entrepreneurs, and risk capital – are well positioned to bridge this gap. Yet despite their growing relevance, their evolving role in supporting Japanese early-stage startups is understudied.&#13;
&#13;
This study compares university venture funds with different profiles – ranging from leading and longstanding funds like UTEC, to public-private venture funds established through government initiatives, to recent funds with diversified structures – analyzing how they are structured, how they invest, and what results they have seen so far. It then builds on startup examples and interviews with university venture funds to identify how these funds can better support early-stage startups through improved fund operations, stronger pre-seed support, as well as a strategic approach to growth and exits. Ultimately, this thesis advocates for actionable solutions informed by global practices but adapted to Japan’s unique startup ecosystem.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Procurement Data for Cost Saving Application</title>
<link href="https://hdl.handle.net/1721.1/163249" rel="alternate"/>
<author>
<name>Pan, Haoting</name>
</author>
<id>https://hdl.handle.net/1721.1/163249</id>
<updated>2025-10-22T03:33:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Procurement Data for Cost Saving Application
Pan, Haoting
In an increasingly data-driven business environment, procurement analytics plays a critical role in optimizing costs and improving supply chain efficiency. This thesis examines the development and implementation of the Lifecycle Cost Management (LCM) tool at Caterpillar Inc., a global leader in heavy equipment manufacturing. Given Caterpillar's decentralized procurement structure, managing cost-saving initiatives across its 150 facilities (Caterpillar | Caterpillar Frequently Asked Questions (FAQs), n.d.) and 28,000 suppliers (Caterpillar | Caterpillar at a Glance, n.d.) poses a significant challenge. The LCM tool leverages machine learning models to identify overpriced purchase orders (POs) and generate actionable cost-saving opportunities.&#13;
This study explores the methodology used to enhance LCM's predictive capabilities, including data sourcing and cleaning, feature engineering, model selection, and validation. Various regression models, clustering techniques, and machine learning algorithms, such as Random Forest and XGBoost, are tested to identify cost outliers. A validation process is implemented to ensure that flagged outliers are cost-saving opportunities appropriate for execution.&#13;
Beyond technical development, the thesis addresses the processes of digital tool adoption within Caterpillar’s procurement teams. A change management approach is employed, incorporating buyer interviews, stakeholder engagement, and iterative user experience (UX) improvements. Through case studies, the study highlights the machine learning model performance and tangible financial impact of LCM. &#13;
The LCM tool has identified more than $100M in data-driven potential savings, with a target of realizing 20% of them. Because Caterpillar’s procurement contracts are often long-term, these savings can be considered perpetual. &#13;
Findings indicate that while machine learning models effectively identify cost outliers, their success is contingent on robust data governance, stakeholder buy-in, and integration into procurement workflows. The study underscores the importance of data management, organizational alignment, and continuous refinement of digital procurement tools. Recommendations for future work include enhancing data infrastructure, integrating AI-driven contract management and analysis, and refining cost estimation methodologies. The insights gained contribute to the broader application of procurement analytics and digital transformation in manufacturing enterprises.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment</title>
<link href="https://hdl.handle.net/1721.1/163248" rel="alternate"/>
<author>
<name>DiDio, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/163248</id>
<updated>2025-10-22T03:33:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment
DiDio, Isabella
Advancements in visual inspection technologies and machine learning algorithms present Johnson &amp; Johnson Vision with an opportunity to enhance quality control for Acuvue contact lenses, addressing inefficiencies such as unnecessary scrap, customer complaints, and lead time variability. With over 5 billion lenses produced annually across 100 manufacturing lines, the proposed implementation of advanced camera optics and machine learning for inspection aims to improve defect detection accuracy, minimize manual inspection, and reduce customer complaints.&#13;
An impact evaluation and prioritization framework was developed to strategically implement these upgrades across 100 manufacturing lines, integrating historical data analysis, financial modeling, and engineering risk assessments. Key findings highlight that complaint reduction, scrap savings, and labor cost reductions are the primary drivers of cost savings, with inventory savings offering incremental benefits over time.&#13;
In conclusion, this research demonstrates the process of integrating advanced technologies into manufacturing processes. By aligning engineering solutions with strategic business objectives, the findings provide actionable insights for managing large-scale technological upgrades across global networks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007</title>
<link href="https://hdl.handle.net/1721.1/163246" rel="alternate"/>
<author>
<name>Tan, Yi-Ern Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163246</id>
<updated>2025-10-22T03:33:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007
Tan, Yi-Ern Samuel
In the late 1980s, Miyake Design Studio began to register patents concerning the Studio’s development of novel techniques to process pleated clothing. Their first patent, filed in 1989, was registered in designer Issey Miyake’s name, detailing the use of an industrial machine to pleat an entire garment after sewing, reversing the order of the conventional approach to creating pleated garments. In the years that followed, this entry into what I term “technical discourse” would proliferate with the Studio’s establishing of the PLEATS PLEASE brand specializing in pleated garments, and the A-POC (“a piece of cloth”) project with designer and textile engineer Fujiwara Dai. Each of these projects produced numerous patents, including a period between 1997 and 2008 I call the “Miyake Patent Explosion” when the Studio filed twenty patents with the Japan Patent Office and its international counterparts.&#13;
&#13;
In contrast to aesthetic discourses proposing the value of a work on its artistic merits and intellectual content, technical discourse points to the profusion of texts produced and circulated by the Studio—in this thesis, patents and legal claims—to uphold the utility of their products and their protection as intellectual property. By engaging with technical discourse, Miyake Design Studio was not only creating legal safeguards around the ideas it considered proprietary. Rather, its extensive production of technical discourse positioned Miyake as a figure who exceeded the boundaries of fashion, approaching its adjacent categories of unhyphenated design, architecture, and art within whose circles his objects circulate as currency.&#13;
&#13;
Exploring these texts as they are deployed in the defense of intellectual property, I argue that technical discourse can be treated as a form of historical archive that allows us to historicize claims to technological inheritance that bear upon the discussion of Miyake’s work. Specifically, I look to patents as a citational practice, or as Alain Pottage and Brad Sherman write, a “chain of reference” through which patent lawyers and engineers make deliberate connections between one technology and another to acknowledge, distinguish, and legitimize. Examining three episodes where technical discourse opens the way for historical narrative—a lawsuit over imitation goods, a case of mistaken identity in design criticism, and a moment of technological dissolution—I argue that we cannot divorce Miyake and his work from the technical complex that surrounds the Studio’s production of objects. Turning to these technical discourses that exist in the public record, I suspend the promise of monographic history that peers into the mind of the individual and probe instead the possibilities of seeing agencies beyond those attributed to the authorial figure of Miyake— his corporate apparatus, his allies, his admirers, his critics, his opponents, the receptive public.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of consolidation and plastic resistance on clays</title>
<link href="https://hdl.handle.net/1721.1/163105" rel="alternate"/>
<author>
<name>Marsal, Raúl J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163105</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">Investigation of consolidation and plastic resistance on clays
Marsal, Raúl J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1944; Vita. Appendix contains numerous pamphlets.
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An optical instrument for the synthesis of sound</title>
<link href="https://hdl.handle.net/1721.1/163102" rel="alternate"/>
<author>
<name>Brown, Sherwood Fiske.</name>
</author>
<id>https://hdl.handle.net/1721.1/163102</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">An optical instrument for the synthesis of sound
Brown, Sherwood Fiske.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1930
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study</title>
<link href="https://hdl.handle.net/1721.1/163099" rel="alternate"/>
<author>
<name>Goody, Marvin E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163099</id>
<updated>2025-10-10T03:04:41Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study
Goody, Marvin E.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1951; "A thesis submitted in partial fulfillment of the requirements for the degree of Master in Architecture, Massachusetts Institute of Technology, August 22, 1951."; Includes bibliographical references (leaves 93-95).
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Operational Value Stream Analysis for Developmental Excellence</title>
<link href="https://hdl.handle.net/1721.1/163055" rel="alternate"/>
<author>
<name>Shaw, Eric T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163055</id>
<updated>2025-10-07T04:14:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Operational Value Stream Analysis for Developmental Excellence
Shaw, Eric T.
The aerospace and defense industry faces increasing challenges in new product development, where financial constraints and risk aversion hinder innovation. Using a multidisciplinary approach that integrates contract theory, computational fluid dynamics (CFD), and machine learning, this research explores the impacts of engineering requirements, financial alignment among stakeholders, and improved efficiencies in predictive modeling techniques for two separate air vehicle programs: A and B. A Monte Carlo analysis using SEER-H estimation software quantifies the financial and schedule impacts of engineering requirements, revealing a 10–30% cost increase due to volatility in air vehicle development design parameters. Moreover, a game-theoretic contract negotiation simulation illustrates the importance and opportunity of financial incentive alignment among key stakeholders. Additionally, predictive analytics leveraging machine learning models better capture the relevant flow mechanics, improving the circumferential distortion estimations in nacelle aerodynamics by over 10% compared to traditional heuristics. Finally, a CFD-based actuator disk source modeling approach demonstrates a 60% reduction in steady-state distortion at some portions of the flight envelope, due to the fan's upstream influence on inlet flow distortion, suggesting increased operational capability for air vehicle program B. This research provides actionable recommendations to enhance the operational value stream of new air vehicle program development, emphasizing the need for pre-RFP requirements validation, advanced machine learning applications for predictive engineering, and refined CFD modeling to identify technical risks earlier in the design process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow</title>
<link href="https://hdl.handle.net/1721.1/163054" rel="alternate"/>
<author>
<name>Sonandres, Jake T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163054</id>
<updated>2025-10-07T04:15:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow
Sonandres, Jake T.
In this work, we present a computational framework for modeling the coupled dynamic interactions of highly flexible slender filaments immersed in a viscous flow and their entanglement with themselves and moving structures. This work is motivated by a novel drone countermeasure that entangles propellers with flexible filament clouds, inducing a loss of thrust and control authority in the drone. However, the framework is relevant to a wider range of applications, including actin filaments in cell biology, carbon nanotubes in composite materials, and rope-like structures in industrial settings. Each filament is modeled with the three-dimensional geometrically exact Kirchhoff-Love torsion-free finite element beam formulation. The fluid flow resulting from filament aerodynamic interaction is described through a Boundary Integral (BI) formulation of the incompressible Stokes equations based on the Stokeslet discretization. The heavy computational load of the resulting dense system is addressed through the use of fast GPU-based dense linear solvers. The BI formulation is coupled to the filament solid mechanics by enforcing momentum balance at the dynamically evolving filament-fluid interface. Additionally, the solid contact interactions between filaments are modeled with a point-to-point frictional contact algorithm that applies discrete contact and frictional forces at the closest point between the beam elements. We address the difficulties associated with contact between elements represented with third-order Hermitian polynomial shape functions and the strategies adopted to overcome these challenges. To capture propeller fouling for drone countermeasures, we incorporate a propeller and motor model whose thrust and torque responses are affected by contact interactions during entanglement. We verify our framework against simple analytical solutions and demonstrate its capabilities with numerical examples that attempt to capture large-scale filament entanglement behavior. 
In particular, we apply our methodology to demonstrate the process by which filament entanglement can restrict motion and reduce the efficacy of propellers. The results show that the framework can be used to understand the connection between filament entanglement, key system properties, and the resulting thrust generated by the propeller.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach</title>
<link href="https://hdl.handle.net/1721.1/163053" rel="alternate"/>
<author>
<name>Martin, Estelle Claude Aline</name>
</author>
<id>https://hdl.handle.net/1721.1/163053</id>
<updated>2025-10-07T04:14:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach
Martin, Estelle Claude Aline
Aviation contributes significantly to global greenhouse gas emissions, driven primarily by its dependency on fossil-based jet fuel. Sustainable Aviation Fuel (SAF) offers a short-term option to mitigate these emissions. However, its current scalability remains limited, constrained by access to sustainable biomass. Realizing SAF’s potential in the near term, using the agricultural and industrial systems already in place, requires a detailed understanding of biomass availability, resource competition, and the scalability of SAF production. This thesis presents a comprehensive system analysis framework and a data-driven methodology for evaluating SAF production potential based on current agricultural output, without assuming land expansion or major yield improvements, and while preserving food utilization. It evaluates the SAF production potential from increasing biomass availability by redirecting biomass currently used for some non-food purposes, and by utilizing processing and agricultural residues. In-depth analysis of four high-potential case studies, one for each main biomass family (starchy, sugary, oily, and fats and greases), was used to construct a detailed model of the supply chain. This structure was then applied globally across all countries and relevant feedstocks to estimate SAF production potential and associated system requirements.&#13;
&#13;
Findings from the case studies show that these four high-potential opportunities could collectively meet only up to 13.1% of global jet fuel demand in 2023, assuming 100% neat SAF. The global analysis estimates that the SAF production potential from the considered streams of increased biomass availability could meet up to about two-thirds of global jet fuel demand, with 28.7% derived from agricultural residues, 25.9% from redirected main products, and 12.5% from processing residues. These contributions hence remain insufficient to fully displace fossil jet fuel. This work provides an estimate of what could be achieved using the existing agricultural and industrial systems, what resources would be required, and how they compare to global resource availability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal inference for complex systems and applications to turbulent flows</title>
<link href="https://hdl.handle.net/1721.1/163052" rel="alternate"/>
<author>
<name>Sánchez, Álvaro Martínez</name>
</author>
<id>https://hdl.handle.net/1721.1/163052</id>
<updated>2025-10-07T04:14:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Causal inference for complex systems and applications to turbulent flows
Sánchez, Álvaro Martínez
Causality lies at the heart of scientific inquiry, serving as the fundamental basis for understanding interactions among variables in physical systems. Despite its central role, current methods for causal inference face significant challenges due to nonlinear dependencies, stochastic interactions, self-causation, collider effects, and influences from exogenous factors, among others. While existing methods can effectively address some of these challenges, no single approach has successfully integrated all these aspects. Here, we address these challenges with SURD: Synergistic-Unique-Redundant Decomposition of causality (Nat. Commun., vol. 15, 2024, p. 9296). SURD quantifies causality as the increments of redundant, unique, and synergistic information gained about future events from past observations. The formulation is non-intrusive and applicable to both computational and experimental investigations, even when samples are scarce. We benchmark SURD in scenarios that pose significant challenges for causal inference and demonstrate that it offers a more reliable quantification of causality compared to previous methods. We further illustrate the applicability of our approach in two turbulent-flow scenarios: the energy transfer across scales in the turbulent energy cascade and the interaction between motions across scales in a turbulent boundary layer. Our results show that, without accounting for redundant and synergistic effects, traditional approaches to causal inference may lead to incomplete or misleading conclusions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Theoretic Process Analysis of Sociotechnical Systems</title>
<link href="https://hdl.handle.net/1721.1/163051" rel="alternate"/>
<author>
<name>Harrington, Polly</name>
</author>
<id>https://hdl.handle.net/1721.1/163051</id>
<updated>2025-10-07T04:14:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systems Theoretic Process Analysis of Sociotechnical Systems
Harrington, Polly
The safety and success of complex modern systems, such as hospitals, aircraft, or software, depend on their ability to integrate people and technical components. For example, doctors must be able to use their computerized surgical tools to treat their patients successfully, airplane pilots must be able to operate the required controls for takeoff and landing, and regulators must be able to interpret the data they receive to make critical decisions. However, designing systems that facilitate safe interactions between humans and technology is not a simple task. System designers must consider not only the constraints of the technical components but also human requirements throughout the entire system. However, accidents in modern systems continue to prove that more work is needed to identify and prevent unsafe interactions between humans and technology. Systems Theoretic Process Analysis (STPA) is a hazard analysis methodology based on systems theory that has been used to improve system safety in various industries, including healthcare, aviation, nuclear power, and automotive design. However, if hazard analysts using STPA lack significant expertise in human factors engineering (HFE), they may be unable to thoroughly and rigorously identify critical unsafe interactions. This thesis presents a process for utilizing HFE to improve the results of STPA analyses on sociotechnical systems. In particular, the process focuses on the thorough identification of causal scenarios in sociotechnical systems by incorporating relevant human factors concepts. The process allows analysts without significant training in HFE to improve their ability to identify useful scenarios for humans in their system. The effectiveness of the improved process is demonstrated using a healthcare case study on over-the-counter clinical laboratory tests in the United States. 
By establishing a process for non-HFE experts to use when conducting STPA analyses, more systems can be developed that enhance human performance rather than increase conflict between humans and the engineered system.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Algorithms for Quantitative Analysis of Long Electrical Arcs in Crossflows</title>
<link href="https://hdl.handle.net/1721.1/163050" rel="alternate"/>
<author>
<name>Lin, Fayleon</name>
</author>
<id>https://hdl.handle.net/1721.1/163050</id>
<updated>2025-10-07T04:14:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Algorithms for Quantitative Analysis of Long Electrical Arcs in Crossflows
Lin, Fayleon
A single lightning strike can deliver a steady current of hundreds of amps during its attachment to an aircraft. Therefore, it is imperative to have an adequate lightning protection system in the aircraft to minimize the probability of catastrophic accidents. Current guidelines for lightning protection systems are based on prior service experience and historical data, which might become insufficient with future generation aircraft. These often adopt novel, unconventional aircraft designs that deviate significantly from current ones. Therefore, efforts are underway to update these guidelines with novel methods such as designs aided by numerical simulation that can accurately model the behavior of lightning attachment and the subsequent swept-stroke phase. To aid in the development of these numerical methods, ample data of not only the electrical arcs but also their interactions with the surrounding flow are necessary for validation. However, most studies on long electrical arcs lack a detailed investigation of the coupling between the electrical arcs and the surrounding flow field. For that purpose, teams from the Massachusetts Institute of Technology (MIT), ONERA, and Universitat Politècnica de Catalunya (UPC) conducted an extensive experimental campaign in April 2024 that investigates this coupling in detail for the first time. Data gathered from this experiment include electrical properties of the arc, high-speed video of the arc column, and the velocity field of the surrounding flow. Approximately 200 cases were conducted with various geometrical and electrical configurations. To meaningfully analyze all the data, a set of algorithms was developed to automatically process, analyze, and visualize these data. 
Detailed analysis of the root and column behavior was performed; electrical properties were verified to be consistent with literature values; and coupling between the velocities of the arc column and the flow field was determined by simultaneous visualization of both data forms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions</title>
<link href="https://hdl.handle.net/1721.1/163049" rel="alternate"/>
<author>
<name>Bahlous-Boldi, Adam A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163049</id>
<updated>2026-01-13T19:42:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions
Bahlous-Boldi, Adam A.
As space missions push toward smaller, lighter, and more deployable instrumentation, diffractive optical elements (DOEs) offer a compelling alternative to traditional optics. Their ability to focus light through engineered phase profiles rather than curved surfaces allows for large-aperture, flat optics that are far lighter and easier to package for launch. However, this benefit comes with trade-offs: DOEs are sensitive to wavelength mismatch, manufacturing errors, and environmental deformations—especially thermal gradients and membrane tensioning in space. This thesis develops a comprehensive framework for understanding and simulating the performance of DOEs under realistic operating conditions. Beginning from first principles, the work contrasts geometric and wave-optical models for Fresnel zone plates and multilevel diffractive lenses, leading to quantitative predictions of diffraction efficiency and PSF quality under non-idealities. A key contribution is the analytical and numerical analysis of how uniform thickness errors, wavelength mismatches, and thermal expansions degrade optical performance, both in efficiency and wavefront fidelity. To evaluate these effects in detail, a flexible simulation tool was developed in MATLAB, enabling both Fourier and integral-based propagation through arbitrarily deformed DOEs. These models are applied to a conceptual space-based LIDAR system—SPECIES—that uses a deployable DOE optic to demonstrate the feasibility and limitations of this approach. The results show that DOEs can tolerate some global deformations - for example, a 1 mm deformation results in a 38% performance loss in an F3 LiDAR system with a 1 mm detector diameter. However, they remain highly sensitive to fine-scale shape errors, posing significant challenges for high-precision applications like fiber coupling or imaging. 
The findings provide new insight into the tolerances, benefits, and trade-offs of DOE-based systems in space, and lay the groundwork for future missions seeking to leverage lightweight diffractive optics for remote sensing and optical communication.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure</title>
<link href="https://hdl.handle.net/1721.1/163048" rel="alternate"/>
<author>
<name>Davalos, Daniela L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163048</id>
<updated>2025-10-07T04:14:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure
Davalos, Daniela L.
Prolonged exposure to reduced gravity environments can lead to significant deconditioning of the cardiovascular, musculoskeletal, and ocular systems. These effects increase the risk of orthostatic intolerance, bone loss, and conditions such as Spaceflight Associated Neuro-ocular Syndrome (SANS). As spaceflight missions grow longer and more frequent, especially with increased extravehicular activity (EVA) on the Moon or Mars, it is critical to develop effective countermeasures and Earth-based analogs to simulate these gravitational environments and evaluate physiological impacts. This thesis addresses these challenges through two complementary approaches. First, it presents the design and development of the MIT Moonwalker IV, a passive mechanical offloading system that simulates partial gravity by applying vertical support via a spring-cable mechanism. In a treadmill-based pilot study, one participant showed at least a 50% reduction in metabolic demand while running under simulated Martian gravity. These findings validate the Moonwalker IV as a metabolic analog for EVA task simulation. Second, this thesis evaluates a collapsible lower body negative pressure (LBNP) suit as a wearable countermeasure for micro and partial gravity environments. By applying negative pressure to the lower body, the suit helps restore the mechanical loading and hydrostatic fluid gradients typically provided by Earth’s gravity. The suit was tested in both simulated reduced gravity via a head-down/head-up tilt paradigm and true reduced gravity via parabolic flight. Each condition was evaluated both with and without –20 mmHg of LBNP. Results demonstrated that the collapsible LBNP suit produced cardiovascular responses comparable to those observed in traditional rigid LBNP chambers. It also induced lower body fluid shifts as measured by segmental leg bioimpedance, reduced intraocular pressure, and generated ground reaction forces similar to standing in 1G. 
These findings support the complementary use of Earth-based analog systems to simulate partial gravity and wearable devices to simulate Earth gravity in reduced gravity environments. They offer valuable tools for preparing astronauts and preserving physiological health during long-duration space missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unforgettable Generalization in Language Models</title>
<link href="https://hdl.handle.net/1721.1/163047" rel="alternate"/>
<author>
<name>Zhang, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/163047</id>
<updated>2025-10-07T04:14:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Unforgettable Generalization in Language Models
Zhang, Eric
When language models (LMs) are trained to forget (or “unlearn”) a skill, how precisely does their behavior change? We study the behavior of transformer LMs in which tasks have been forgotten via fine-tuning on randomized labels. Such LMs learn to generate near-random predictions for individual examples in the “training” set used for forgetting. Across tasks, however, LMs exhibit extreme variability in whether LM predictions change on examples outside the training set. In some tasks (like entailment classification), forgetting generalizes robustly, and causes models to produce uninformative predictions on new task instances; in other tasks (like physical commonsense reasoning and scientific question answering) forgetting affects only the training examples, and models continue to perform the “forgotten” task accurately even for examples very similar to those that appeared in the training set. Dataset difficulty is not predictive of whether a behavior can be forgotten; instead, generalization in forgetting is (weakly) predicted by the confidence of LMs’ initial task predictions and the variability of LM representations of training data, with low confidence and low variability both associated with greater generalization. Perhaps most surprisingly, random-label forgetting appears to be somewhat insensitive to the contents of the training set: for example, models trained on science questions with random labels continue to answer other science questions accurately, but begin to produce random labels on entailment classification tasks. Finally, we show that even generalizable forgetting is shallow: linear probes trained on LMs’ representations can still perform tasks reliably after forgetting. Our results highlight the difficulty and unpredictability of performing targeted skill removal from models via fine-tuning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guessing Random Additive Noise Decoding in Coded Multiple-Input Multiple-Output Systems</title>
<link href="https://hdl.handle.net/1721.1/163045" rel="alternate"/>
<author>
<name>Wu, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/163045</id>
<updated>2025-10-07T04:14:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Guessing Random Additive Noise Decoding in Coded Multiple-Input Multiple-Output Systems
Wu, Benjamin
Multiple-Input Multiple-Output (MIMO) wireless communication systems incorporate forward error correction (FEC) to achieve high reliability under fading and interference. In this thesis, we explore the emerging FEC paradigm of Guessing Random Additive Noise Decoding (GRAND) in a point-to-point MIMO system. &#13;
Treating GRAND as an FEC decoder disjoint from the MIMO detector, we compare the soft-decision Ordered Reliability Bits GRAND (ORBGRAND) to CRC-Assisted Successive Cancellation List (CA-SCL) decoding of the CRC-Assisted Polar (CA-Polar) [105, 128] code found in the 5G New Radio standard. For this code, we find that ORBGRAND outperforms CA-SCL (list size 16) by 1 dB E_b/N₀ at block error rate of 10⁻³, under 16-QAM and Linear Minimum Mean Square Error detection, with two transmit antennas and four receive antennas. We also show that ORBGRAND, when paired with other moderate redundancy linear codes, can yield substantial savings in the range of 0.5 − 2 dB in E_b/N₀ over CA-SCL decoding (list size 16) of CA-Polar codes with the same code parameters, for a block error rate of 10⁻³. We provide extensive benchmarks comparing ORBGRAND to CA-SCL and other soft-decision GRAND variants. We also integrate a GRAND decoder producing soft output into a MIMO iterative detection and decoding (IDD) receiver. Specifically, we apply an established technique which utilizes soft-output GRAND as the component decoder for the block turbo decoding of product codes. This block turbo decoder is evaluated as a soft output decoder within a MIMO IDD receiver. We demonstrate competitive or superior performance relative to Belief Propagation (BP) decoding of 5G Low-Density Parity Check (LDPC) codes. This approach also marks a use of GRAND for low-rate, high-redundancy FEC in a MIMO system. With GRAND in MIMO still being an emerging area of research, this work is an exploratory evaluation of GRAND for FEC in MIMO, and highlights GRAND’s potential as a versatile and performant decoder in different MIMO receiver architectures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Accuracy Predictions of Companion Classifiers for LLM Routing</title>
<link href="https://hdl.handle.net/1721.1/163044" rel="alternate"/>
<author>
<name>Wu, Jessica L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163044</id>
<updated>2025-10-07T04:14:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving Accuracy Predictions of Companion Classifiers for LLM Routing
Wu, Jessica L.
The increasing versatility of Large Language Models (LLMs) calls for developing effective routing systems to match tasks with the most suitable models, balancing accuracy and computational cost. This research introduces a novel meta-cascade routing framework that combines meta-routing, where a predictive model selects the appropriate LLM for a task, and cascading, where models are queried in sequence to optimize cost and performance. A critical component of this framework is the companion classifier, defined as a fine-tuned model trained to predict whether a particular LLM will generate an accurate response. We investigate whether incorporating features such as model responses into these classifiers can improve routing accuracy. Our preliminary experiments, using the Routerbench dataset, focus on training companion models that provide more stable and accurate routing decisions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formal Verification of Relational Algebra Transformations in Fiat2 Using Coq</title>
<link href="https://hdl.handle.net/1721.1/163042" rel="alternate"/>
<author>
<name>Teshome, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/163042</id>
<updated>2025-10-07T04:14:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Formal Verification of Relational Algebra Transformations in Fiat2 Using Coq
Teshome, Christian
Data-intensive applications often involve operations over structured datasets, such as filtering, joining, and projecting records. Relational database systems generally use query planners to optimize high-level SQL queries into efficient execution plans. While these systems apply well-established query transformations, they typically assume the correctness of these transformations rather than formally proving them. The absence of formal guarantees can be a significant limitation for systems with strict correctness requirements. This thesis contributes to Fiat2, a Python-like high-level programming language for data-intensive workloads that integrates formal verification via the Coq proof assistant. We focus on proving the correctness of several rewrite-based query optimizations commonly used in database engines. Specifically, we formalize and prove the correctness of algebraic rewrites involving combinations of filters, joins, and projections, as well as join-reordering rewrites. All rewrites are proven in Coq to preserve the semantics of the original program under list semantics, meaning that the output lists are fully equivalent (or permutations, in the case of join reordering). These verified rewrites serve as a foundation for future optimization in Fiat2, enabling significant optimizations while preserving the semantics of the original queries with correctness guarantees. The results demonstrate the feasibility of integrating formally verified query optimizations into a practical high-level programming language.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Converting PyTorch Models to StreamIt Pipelines</title>
<link href="https://hdl.handle.net/1721.1/163041" rel="alternate"/>
<author>
<name>Rajvee, Muhender Raj</name>
</author>
<id>https://hdl.handle.net/1721.1/163041</id>
<updated>2025-10-07T04:14:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Converting PyTorch Models to StreamIt Pipelines
Rajvee, Muhender Raj
With the rise of large language models, there have been efforts to optimize machine learning inference to support a large volume of queries. Currently, the two main ways to do this are running optimized kernels for computing the forward inference pass and distributing computation across multiple GPUs or different cores in a GPU. Machine learning libraries such as PyTorch produce dynamic computation graphs to represent the forward pass of the model. PyTorch allows conversion of these dynamic graphs into static ones through just-in-time (JIT) compilation. These graphs can then be optimized further by the compiler. We propose an alternate way of optimizing these dynamic graphs. We convert the dynamic computation graph of PyTorch to pipelines in StreamIt, a domain-specific language (DSL) for streaming applications, and use the multi-stage compilation property of BuildIt to compile this pipeline in stages to inference code. We found that, while the inference latencies of models compiled in this way are slightly higher, they are still comparable to those of PyTorch models and are open to future optimizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interactive Visual Paradigm for Knowledge Graph Question-Answering</title>
<link href="https://hdl.handle.net/1721.1/163040" rel="alternate"/>
<author>
<name>Ramkumar, Vayd Sai</name>
</author>
<id>https://hdl.handle.net/1721.1/163040</id>
<updated>2025-10-07T04:14:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Interactive Visual Paradigm for Knowledge Graph Question-Answering
Ramkumar, Vayd Sai
In an era of information overload, verifying data reliability and provenance is critical, yet knowledge graphs (KGs) often remain complex for non-expert users. This thesis introduces TRACE (Reasoning and Answer-path Comprehension Engine), a visualization tool that enhances transparency in KG question answering (KGQA). By abstracting intricate KGs into intuitive meta-nodes, TRACE simplifies exploration of large, multi-topic datasets. Its interactive interface allows users to navigate semantic communities and trace reasoning paths, fostering trust through clear answer derivation. Unlike cluttered traditional graph visualizations, TRACE’s meta-node approach provides a scalable, user-friendly solution, concealing technical complexities while enabling robust query validation. Large language models support natural language query parsing and community summarization, making KGs accessible to diverse audiences. TRACE positions itself as a vital widget for information platforms, empowering users to counter misinformation confidently. A user study and pipeline evaluation confirmed that TRACE’s intuitive interface excels at complex queries, though multi-hop paths pose challenges, while processing tests demonstrated its scalable paradigm for large datasets. By prioritizing transparency and usability, TRACE redefines KGs as reliable tools for knowledge discovery, laying a foundation for future systems to deliver trustworthy, accessible information in a digital landscape fraught with uncertainty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spectral Analysis of Local Atomic Environments</title>
<link href="https://hdl.handle.net/1721.1/163039" rel="alternate"/>
<author>
<name>Phung, Tuong</name>
</author>
<id>https://hdl.handle.net/1721.1/163039</id>
<updated>2025-10-07T04:14:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Spectral Analysis of Local Atomic Environments
Phung, Tuong
The representation of local environments is a cornerstone challenge in computational materials science, with profound implications for property prediction and materials discovery. This thesis presents a comprehensive investigation of spectral descriptors constructed from spherical harmonic expansions to represent the geometries of local atomic environments. Systematic computational experiments evaluate the robustness of these descriptors to geometric perturbations and their capacity to differentiate structurally similar configurations. The findings reveal a clear performance hierarchy, with higher-order descriptors offering increased geometric expressivity and reconstruction accuracy in resolving challenging structural cases. This research further examines methods for inverting spectral representations back to atomic coordinates, demonstrating that directly optimizing three-dimensional positions through gradient-based techniques yields markedly better reconstruction accuracy than approaches operating in Fourier space. Dimensionality reduction via latent space embeddings is also explored, showing that essential geometric features can be preserved in significantly compressed representations. Through methodical analysis of descriptor limitations, performance boundaries, and sensitivity to hyperparameters, this work establishes practical benchmarks and implementation guidelines for spectral descriptors. These contributions strengthen the foundation for reliable machine learning models in computational materials science, advancing both the accuracy and efficiency of atomic-scale modeling for materials design and discovery.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Shipping Container for Package-Less Units</title>
<link href="https://hdl.handle.net/1721.1/163038" rel="alternate"/>
<author>
<name>Minja, Baraka</name>
</author>
<id>https://hdl.handle.net/1721.1/163038</id>
<updated>2025-10-07T04:14:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Shipping Container for Package-Less Units
Minja, Baraka
Package-less shipping aims to deliver units without company X’s added packaging. This requires the fulfillment systems and processes to have gentler handling. Part of this change involves the design and implementation of a container that will carry units from a distribution center to a delivery facility. This thesis presents the container analysis that was completed to determine the optimal container features and container type for package-less shipping.
Collapsible bags provide the best solution for package-less shipping in comparison to nestable and collapsible totes. Since ergonomic weight is the limiting constraint, the lower weight of the collapsible bag will allow for one or two more units per container. In addition, it benefits from (1) a lower process cost for returning to dock (a 3.7% cost reduction compared to a nestable tote), (2) better ergonomics (the collapsible tote has undesirable pinch points), and (3) improved cycle time (an estimated 2 s to open/collapse compared to 4 s for the collapsible tote).
Additional considerations that require more analysis relate to units per container and relocation. Based on company X’s past orders and unit types for the package-less shipping process, it is estimated that ~210 units per container (17.08 cu. ft.) is the maximum achievable for NA before it reaches the ergonomic weight cap. However, company X expects the package-less shipping distribution center process to be constrained to ~105-133 units. Analysis of container relocation from delivery facilities to distribution centers indicates it is worthwhile to investigate alternative relocation strategies in lieu of dedicated 53-foot container trailers to achieve lower relocation costs.
The collapsible bag is the best option assuming it has an expected lifetime of at least two years, which is when its NPV exceeds that of the two alternatives. These results are sensitive to the assumptions made, and it will be necessary to fine-tune this analysis when the end-to-end package-less shipping process has been fully mapped out.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformation Tolerance of Facial Recognition Technology and Informative Evaluation Metrics</title>
<link href="https://hdl.handle.net/1721.1/163037" rel="alternate"/>
<author>
<name>Nakamura, Haley Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/163037</id>
<updated>2025-10-07T04:14:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformation Tolerance of Facial Recognition Technology and Informative Evaluation Metrics
Nakamura, Haley Marie
Over the last decade, machine learning-based facial recognition (FR) systems have continued to increase in popularity while spreading to unique deployment settings. Despite the large variance among FR input distributions, popular facial recognition benchmarks continue to characterize system performance using one aggregate score over a single dataset. In many cases, the limitations of this score are unclear to downstream users: assuming benchmark accuracy is high, how is it expected to change for an image sampled from a distinct distribution? Which transformations can the model handle robustly, and which cause failure? Meanwhile, there is a large body of human facial perception research that aims to understand the underlying mechanisms of human recognition. This field offers methodological inspiration for more informative evaluation techniques, including the characterization of recognition performance as a function of a quantifiable input transformation. This work performs such an analysis. The performance scores of five state-of-the-art FR models are characterized as a function of Gaussian blur strength, crossed with color variation. The performance-blur relationship is modeled as an s-curve, creating a highly interpretable format for discussion. Blur strength had a consistent, statistically significant effect on performance, but color variation did not significantly impact any model. Results are then compared to prior human recognition experiments. The best models outperform humans in low-blur regimes, while humans outperform all models in high-blur regimes. These results motivate the need for modern benchmarks that capture a range of input distributions. The analysis presented can lead to a deeper understanding of FR systems and provide a clearer interpretation of how model performance changes under quantified distribution shifts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach</title>
<link href="https://hdl.handle.net/1721.1/163036" rel="alternate"/>
<author>
<name>Magzoub, Amna Ahmed Eltayeb</name>
</author>
<id>https://hdl.handle.net/1721.1/163036</id>
<updated>2025-10-07T04:13:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach
Magzoub, Amna Ahmed Eltayeb
In highly regulated industries such as medical devices, accelerating New Product Development (NPD) without compromising quality or compliance is a persistent challenge. This thesis investigates the design transfer process, a critical yet under-examined phase of NPD, as a strategic lever to reduce time-to-market. The project uses swimlane flowcharts and Design Structure Matrices (DSM) to map real-world processes, identify breakpoints, and classify rework (both planned and unplanned) in four case studies from Stryker Corporation. Key patterns emerged across case types: insufficient early-stage validation, misaligned cross-functional communication, and inadequate integration with suppliers were recurrent drivers of inefficiency. Comparative analysis revealed that concurrent engineering practices and knowledge sharing significantly reduce unplanned rework cycles and improve development speed. The study proposes actionable recommendations for optimizing design transfer, including: leveraging corporate know-how through intentional knowledge transfer meetings during process benchmarking, increased risk-taking during the development process by embracing concurrent engineering approaches, and investing in early-stage co-development by adopting regular collaboration activities with suppliers. These findings can inform broader process improvements in the development of medical devices, and serve as a blueprint for other complex, cross-functional environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of the Solar Cycle on Satellite Orbital Lifetime</title>
<link href="https://hdl.handle.net/1721.1/163035" rel="alternate"/>
<author>
<name>Lisy, Celvi A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163035</id>
<updated>2025-10-07T04:15:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Effect of the Solar Cycle on Satellite Orbital Lifetime
Lisy, Celvi A.
The lifetime of a satellite in Low Earth Orbit (LEO) is affected by the 11-year solar cycle. At a fixed altitude, increasing solar activity increases atmospheric density, which leads to an increase in drag and a decrease in mission lifetime unless propulsion is used to recover altitude. Satellites may have longer orbital lifetimes if more of their mission is operational during a solar minimum due to lower solar activity and lower atmospheric drag. Satellites with larger area-to-mass ratios generally have shorter orbital lifetimes than satellites with small area-to-mass ratios. Missions that get delayed and have more of their operations during solar maximum than originally planned may have too short a mission lifetime or, conversely, may be at risk of increasing their orbital lifetime past regulatory limits (five years for satellites in LEO according to the FCC) if they launch closer to solar minimum. For example, a satellite with an area-to-mass ratio of 0.014 m²/kg – such as a 1U CubeSat – and a one-year mission that is launched in 2021 without onboard propulsion would have an orbital lifetime of 1.051 years. However, if that mission were delayed a year, a common occurrence in the industry, it would no longer be able to achieve its mission, as its orbital lifetime with a deployment in 2022 is 0.44 years. Conversely, if the same 1U CubeSat is launched during solar maximum in January 2025, it would have an orbital lifetime of 2.2 years and would re-enter in February of 2027. However, if that mission were delayed a year, the satellite would launch in January 2026 and instead be in orbit for 6.4 years before re-entering. The operator could be fined for violating the FCC deorbit limit of five years. This thesis quantifies the effect of launch or processing delays on satellite orbital lifetime based on orbit altitude and vehicle parameters such as mass, cross-sectional area, and bus size.
In general, it is found that four-year and six-year delays have the greatest effect on a satellite’s orbital lifetime because the satellite will be deorbiting almost half a solar cycle (5.5 years) from its intended deployment year. However, two-year delays can still affect satellite operators: they can increase the orbital lifetime by up to 1.5 years for low area-to-mass ratio satellites in 400 km orbits and by almost five years for satellites in orbits higher than 500 km. Two-year delays can also decrease the orbital lifetime of a satellite by up to 1.7 years for low area-to-mass ratio satellites in 400 km orbits and by almost two years at altitudes higher than 500 km.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors</title>
<link href="https://hdl.handle.net/1721.1/163034" rel="alternate"/>
<author>
<name>Rao, Sankarsh R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163034</id>
<updated>2025-11-24T15:39:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors
Rao, Sankarsh R.
This thesis provides an introduction to transmission line theory (telegrapher’s equations) as the mathematical background needed to correctly perform and interpret electrical measurements in nanosecond pulsed discharge reactors. The mathematical framework is implemented in a numerical tool called VI-View, which is made available to the community to aid with the interpretation of electrical measurements and help explain discrepancies between different experimental arrangements and probe configurations. A brief manual on how to use the tool is provided, followed by a series of six case studies relevant to experimental setups/situations encountered in practice. The analysis of these case studies summarizes best practices when performing electrical and energy measurements in nanosecond pulsed discharge reactors. Case Studies 1 and 2 cover in-situ and remote measurements for reactors using one voltage and one current probe. Case Study 3 covers how two current probes, one on the high-voltage end and one on the low-voltage end, can achieve the same energy measurements as Case Studies 1 and 2. Case Studies 4 and 5 show how cables with varying lengths and dissimilar properties — as can sometimes be encountered in practice — affect the electrical signals. Case Study 6 shows how a variable resistance — a step drop from 50 MΩ to 10 Ω — within a load can be a first approximation to a plasma reactor with a discharge. Finally, an outlook on how these case studies connect to real, experimental waveforms is presented along with the limitations of the tool.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Methods for Setting Effective Aviation NOₓ Policies</title>
<link href="https://hdl.handle.net/1721.1/163033" rel="alternate"/>
<author>
<name>Reider, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/163033</id>
<updated>2025-10-07T04:15:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stochastic Methods for Setting Effective Aviation NOₓ Policies
Reider, Sarah
Nitrogen Oxides (NOₓ) from aviation emissions are well known to have detrimental effects on air quality and the climate. Presently, they are regulated to preserve local air quality around airports. As part of the regulation process, aircraft engines are placed on a test stand with NOₓ levels measured at different thrust settings meant to mimic the aircraft’s emissions during landing and take-off. These are then constrained as a function of the engine’s overall pressure ratio (OPR) and rated thrust, with the allowed NOₓ emissions increasing with OPR. Despite increases in the stringency of this regulation, recent research suggests it is insufficient to protect surface air quality from degradation due to NOₓ emissions at cruise. Moreover, at high OPRs, NOₓ emissions increase substantially for relatively small reductions in fuel burn. In light of this, a new metric representative of cruise emissions is being investigated. This work considers effective methods to define this new regulation given a wide range of uncertainties in the tradeoff between NOₓ and CO₂ emissions at high OPRs. First, the combined climate and air quality cost of NOₓ from aviation cruise emissions is estimated as ∼$95,000/tonne using a 2019 flight inventory. Then, cruise limits are proposed, informed by the combined impact of NOₓ and CO₂ at cruise and with a similar slope to the current LTO standard. Finally, a Monte Carlo simulation is run, sampling NOₓ and CO₂ social costs for a series of hypothetical aircraft designed using the open-source Transportation Aircraft System OPTimization (TASOPT) model. This work takes a worst-case scenario approach, where the only response engine manufacturers can make to stricter standards is to reduce OPR and sacrifice fuel efficiency. Each aircraft’s emissions are evaluated during cruise to determine the probability of increasing environmental harm under different policy scenarios given these uncertainties.
The combined cost of NOₓ and CO₂ is compared to that of the baseline engines that meet current regulations for each scenario. Results show that defining a cruise metric informed by the weighted combined cost of CO₂ and NOₓ could reduce total environmental cost at cruise by 15–43% while carrying a 6–7.4% risk of increasing total environmental cost for wide-body aircraft engines in the most stringent scenario. Less stringent scenarios showed similar risks of increasing harm for smaller potential environmental savings. In all cases, the risks associated with the proposed limits are driven by low-likelihood extremes in the uncertainty distributions of NOₓ and CO₂, further suggesting the benefit of an environmentally conscious standard.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain-Independent Mode Estimation for Human-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/163032" rel="alternate"/>
<author>
<name>Gomez, Annabel Reyna</name>
</author>
<id>https://hdl.handle.net/1721.1/163032</id>
<updated>2025-10-07T04:15:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain-Independent Mode Estimation for Human-Robot Collaboration
Gomez, Annabel Reyna
To collaborate safely and intelligently with humans, robots must infer high-level semantic states, such as intentions or interaction modes, from uncertain sensor input. While dynamic, probabilistic mode estimation is commonly used in fault diagnosis, this thesis extends the problem to activity recognition, where the goal is to estimate qualitative, symbolic human-object interaction states in real time. Robust human activity recognition is essential for collaborative and assistive robotics, particularly in dynamic or safety-critical environments. The core solution presented in this thesis is a mode estimator and its efficient implementation using the A* with bounding conflicts (A*BC) algorithm. The estimator performs best-first enumeration over symbolic activity states while integrating recursive Bayesian filtering to maintain belief under noisy observations. Unlike low-level trajectory tracking or deep-learned classifiers, qualitative spatial filtering operates at the right level of abstraction to recognize symbolic actions. It can also generalize across domains with minimal retraining and support efficient, probabilistically grounded reasoning about uncertainty in both perception and symbolic mode transitions. The proposed system fuses RGB-D perception, object segmentation, qualitative spatial reasoning (QSR), and probabilistic inference into a real-time pipeline capable of tracking and inferring symbolic human-object interaction states. Evaluated in a human-robot rehabilitation setting, this domain-independent system successfully infers latent human and object activity states from noisy RGB-D data. It resolves ambiguity using Vision-Language Model (VLM)-guided semantic arbitration and demonstrates robustness and adaptability in unstructured environments. This work establishes qualitative spatial filtering with A*BC as a generalizable and efficient solution for semantic activity recognition, laying the foundation for future perception-driven collaborative systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Energy Dynamics Control for Stable Power Electronic-Enabled Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/163031" rel="alternate"/>
<author>
<name>Gada, Hiya Akhil</name>
</author>
<id>https://hdl.handle.net/1721.1/163031</id>
<updated>2025-10-07T04:15:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Distributed Energy Dynamics Control for Stable Power Electronic-Enabled Electric Power Systems
Gada, Hiya Akhil
The increasing penetration of renewable and inverter-based resources is transforming modern power systems into fast, nonlinear, and heterogeneous networks. These converter-dominated systems operate on timescales much faster than traditional synchronous machines, making conventional modeling and control approaches, rooted in quasi-static phasor analysis and centralized architectures, inadequate for ensuring stability and scalability. This thesis adopts an energy space modeling approach grounded in first principles of energy conservation and system interconnection. It extends the previously introduced second-order energy dynamics model by relaxing the assumption that energy in tangent space can be treated as an independent disturbance. The resulting contribution is a third-order model that treats stored energy in tangent space as a dynamic state, enabling more expressive and accurate modeling of fast-timescale system behavior. Leveraging this extended energy space model, the thesis develops a multilayered distributed control architecture in which the nonlinear physical dynamics of each component are lifted to the higher-level linear energy space, capturing internal energy dynamics and real/reactive power flows, and integrated with the lower-level physical dynamics through well-defined mappings. Distributed controllers are designed in this energy space using only local states and minimal neighbor interaction, assuming a system-level coordination mechanism provides consistent references. Two control strategies, energy-based feedback linearizing control and sliding mode control, are developed and shown to achieve asymptotic convergence to reference outputs. The framework is validated on two systems: an inverter-controlled RLC circuit and a synchronous generator under load. Finally, the energy space framework is extended to structurally model inter-area oscillations (IAOs).
An inter-area variable is defined as the difference between the power incident on a tie-line from Area I and the power reflected into the tie-line from Area II. Simulations on a 3-bus, 2-area system confirm consistency with eigenmode analysis and show how tie-line strength and generator inertia affect IAO dynamics. A novel resonance phenomenon is also identified: instability arising from interaction between a system’s natural IAO frequency and time-varying disturbances from intermittent DERs. This previously unmodeled behavior is captured explicitly within the energy dynamics framework and may help explain recent blackout events in the Iberian Peninsula.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pushing the Limits of Active Data Selection with Gradient Matching</title>
<link href="https://hdl.handle.net/1721.1/163030" rel="alternate"/>
<author>
<name>Zhang, Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/163030</id>
<updated>2026-01-23T15:40:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pushing the Limits of Active Data Selection with Gradient Matching
Zhang, Chris
As modern machine learning systems grow in scale, the inefficiencies of training on large, noisy, and imbalanced datasets have become increasingly pronounced—particularly in computer vision, where real-world data often contain labeling errors, occlusions, and redundancy. While large models can partially compensate by training exhaustively on massive datasets, this indiscriminate approach is computationally expensive and often inefficient. Active data selection offers a more efficient alternative by prioritizing examples that contribute most to model improvement. However, existing selection strategies (such as Rho Loss) still fall short of the optimal achievable performance. In this work, we propose the Gradient Informed Selection Technique (GIST), an active data selection method that prioritizes examples based on their gradient alignment with a small, fixed holdout set. At each training step, GIST computes per-example gradients and selects those that are most aligned with the holdout gradient, thereby guiding model updates toward better generalization. We evaluate GIST on noisy (Clothing1M) and clean (ImageNet) datasets and show that it consistently outperforms baselines across a range of selection ratios—that is, the proportion of a batch of data that the model selects to update weights on. To address the computational overhead of gradient-based selection, we introduce efficient variants using restricted-layer gradients, low-rank approximations, and gradient quantization. We also analyze GIST’s selection behavior, showing that it implicitly balances classes and repeatedly selects high-utility examples—two factors that enhance both robustness and learning efficiency. Our findings suggest that a more effective data curriculum is both discoverable and practical, and that GIST is a step toward achieving it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Phase Transition for Recovering a Random Hypergraph from its Edge Data</title>
<link href="https://hdl.handle.net/1721.1/163029" rel="alternate"/>
<author>
<name>Yao, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163029</id>
<updated>2025-10-07T04:14:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Phase Transition for Recovering a Random Hypergraph from its Edge Data
Yao, Andrew
The weighted projection of a hypergraph is the weighted undirected graph with the same vertex set and edge weight equal to the number of hyperedges that contain the edge; the projection is the unweighted graph with the same vertex set and edge set consisting of edges with weight at least one. For d ≥ 3, after observing the unweighted and weighted projection of a random d-uniform hypergraph that is sampled using a generalization of the Erdős–Rényi random model, we study the recovery of a fraction of the hyperedges and the entire hypergraph. For both cases, we show that there is a sharp phase transition in the feasibility of recovery based on the density of the hypergraph, with recovery possible only when the hypergraph is sufficiently sparse. In particular, we resolve numerous conjectures from [5]. Furthermore, we present an efficient algorithm that is optimal for both exact and partial recovery. We also analyze the phase transition for exact recovery by exhibiting a regime of probabilities, a polylogarithmic factor below the exact recovery threshold, in which exact recovery is possible.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Mobile-Only App Generation — Offline AI Code Generation with App Inventor</title>
<link href="https://hdl.handle.net/1721.1/163028" rel="alternate"/>
<author>
<name>Yuan, Joyce</name>
</author>
<id>https://hdl.handle.net/1721.1/163028</id>
<updated>2025-10-07T04:14:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Empowering Mobile-Only App Generation — Offline AI Code Generation with App Inventor
Yuan, Joyce
As digital tools become more accessible, creating software is becoming a powerful way for anyone to make real-world impact. Computational action—the idea that learners can build computing artifacts with authentic relevance to their lives and communities—reframes computing as a tool for empowerment. Low-code platforms like MIT App Inventor support this vision by fostering digital agency through purposeful creation. Recent advances in large language models (LLMs) expand these possibilities further by enabling code generation from natural language, offering a timely opportunity to lower the barrier to app creation. MIT App Inventor has long championed accessibility, allowing even young learners in underserved regions to build meaningful mobile apps. Its natural language tool, Aptly, enables users to describe app ideas and generate functional code. However, Aptly’s reliance on cloud-based LLMs limits access for users without stable internet—often those who could benefit most. This thesis addresses that challenge by enabling AI-powered app creation to run entirely offline on mobile devices. We fine-tune and quantize LLaMA 3B using QLoRA and deploy it on iOS with MLC LLM, enabling on-device inference without internet. We also introduce a custom evaluation framework tailored to Aptly’s grammar, combining a Tree-sitter parser and a modified CodeBLEU metric to assess both semantic and syntactic quality. Using curated evaluation datasets, we benchmark out-of-the-box and fine-tuned models across prompting strategies. In our evaluations, fine-tuned GPT-4.1 achieved the highest normalized CodeBLEU score (0.36 ± 0.12) and parsed over 81% of completions, outperforming its baseline by more than 5%. QLoRA-fine-tuned LLaMA improved parseability by 11.7% over its base model, showing progress in adapting smaller models to the Aptly domain, though semantic fidelity remains a challenge.
Our results show that offline natural language–to–app generation is feasible and that smaller models can be adapted to the Aptly domain. By lowering the technical and infrastructural barriers to app creation, this work lays the foundation for AI-assisted programming that is accessible, offline, and on the phone.
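The two evaluation signals described above, parseability rate and a CodeBLEU-style overlap score, can be sketched in miniature. The real framework uses a Tree-sitter grammar for Aptly and a modified CodeBLEU, so the trivial bracket-matching "parser", the token lists, and the single clipped n-gram component below are illustrative assumptions only:

```python
# Toy stand-ins for the two metrics: (1) parse rate, (2) n-gram overlap.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_score(candidate, reference, n=2):
    """Clipped n-gram precision, a toy proxy for one CodeBLEU component."""
    cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / max(sum(cand.values()), 1)

def parses(tokens):
    """Stand-in parser: the real pipeline would invoke Tree-sitter."""
    return tokens.count("(") == tokens.count(")")

completions = [["when", "Button1.Click", "(", "set", "Label1.Text", ")"],
               ["when", "(", "Button1.Click"]]          # second one malformed
reference = ["when", "Button1.Click", "(", "set", "Label1.Text", ")"]

parse_rate = sum(parses(c) for c in completions) / len(completions)
scores = [overlap_score(c, reference) for c in completions]
```

A real evaluation would average such component scores over a curated dataset, which is how a normalized figure like 0.36 ± 0.12 arises.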
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AutoDiff: A Scalable Framework for Automated Model Comparison</title>
<link href="https://hdl.handle.net/1721.1/163027" rel="alternate"/>
<author>
<name>Woo, Andrew Kyoungwan</name>
</author>
<id>https://hdl.handle.net/1721.1/163027</id>
<updated>2025-10-07T04:15:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AutoDiff: A Scalable Framework for Automated Model Comparison
Woo, Andrew Kyoungwan
Post-training adaptations such as supervised fine-tuning, quantization, and reinforcement learning can cause large language models (LLMs) with identical architectures to exhibit divergent behaviors. However, the mechanisms driving these behavioral shifts remain largely opaque, limiting the reliability and interpretability of adapted models. AutoDiff is a scalable, automated framework for tracing model divergence on a per-neuron basis. It exhaustively profiles every feed-forward (MLP) unit across a pair of models, identifies the neurons with the largest activation gaps, and links these differences to downstream behavioral changes. The pipeline identifies exemplars that maximize between-model activation divergence and clusters the highest-gap neurons into an interpretable, queryable difference report. Proof-of-concept experiments on GPT-2 small validate AutoDiff’s ability to rediscover synthetic perturbations without manual supervision. A larger case study on Llama 3.1-8B contrasts the base model with several adapted variants, surfacing neurons whose behavioral shifts align with observed topic-level gains and losses. By uncovering these mechanistic divergences, AutoDiff transforms black-box model updates into actionable insights, enabling safer deployment, principled debugging, and interpretable model evaluation.
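The per-neuron profiling step can be illustrated in a few lines: given activations of the same MLP layer from two model variants over a shared exemplar set, rank neurons by mean absolute activation gap. The array shapes, random seed, synthetic perturbation, and top-k cutoff are illustrative assumptions; the thesis pipeline does this per feed-forward unit at model scale:

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_neurons = 256, 64

# Activations of one MLP layer on the same exemplars, base vs. adapted.
acts_base = rng.normal(size=(n_examples, n_neurons))
acts_adapted = acts_base.copy()
acts_adapted[:, 7] += 3.0        # synthetic perturbation on neuron 7
acts_adapted[:, 21] -= 2.0       # and a smaller one on neuron 21

gap = np.abs(acts_adapted - acts_base).mean(axis=0)  # mean gap per neuron
top_k = np.argsort(gap)[::-1][:5]                    # highest-gap neurons
```

The synthetic perturbations are recovered as the top-ranked neurons, mirroring the GPT-2 proof-of-concept described above.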
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain</title>
<link href="https://hdl.handle.net/1721.1/163026" rel="alternate"/>
<author>
<name>Xia, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/163026</id>
<updated>2025-10-07T04:15:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain
Xia, Julia
Rapidly improving generative artificial intelligence has led to significant investments in datacenter infrastructure, driving power demand and raising environmental concerns. This has spurred a growing body of research on modeling the embodied and operational carbon of datacenter servers across a variety of paradigms. However, most existing models take deterministic inputs and output a single average value that does not capture the inherent variability in estimating embodied and operational carbon emissions. Further, these average outputs obscure the impact of interacting factors, such as those related to deployment or software characteristics, each of which has its own underlying uncertainty distribution. This means that in most cases these averages do not accurately represent a particular server’s context. This thesis explicitly parameterizes and quantifies the full probabilistic distribution of operational carbon in AI inference tasks. It explores several factors of variability (deployment, spatiotemporal, and computational profile) and quantifies their impact on the overall carbon footprint through statistical and sensitivity analysis. While this work focuses on operational carbon, uncertainty propagation and an understanding of variability should be applied across a datacenter server’s entire life cycle. When this methodology is used alongside existing uncertainty-aware embodied carbon measurements, it enables a holistic assessment from cradle to grave. This facilitates informed decision-making in server replacement, workload scheduling, hardware procurement, capacity planning, and other scenarios.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ideator Explorer: Enhancing AI-Assisted Ideation through Interactive Visualization</title>
<link href="https://hdl.handle.net/1721.1/163025" rel="alternate"/>
<author>
<name>Wen, Haoran</name>
</author>
<id>https://hdl.handle.net/1721.1/163025</id>
<updated>2025-10-07T04:15:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ideator Explorer: Enhancing AI-Assisted Ideation through Interactive Visualization
Wen, Haoran
Current AI-assisted ideation systems, often based on linear chat interfaces, struggle to help users effectively manage the complexity of creative exploration, hindering both divergent thinking across multiple paths and the convergent synthesis of ideas. This thesis introduces and evaluates Ideator Explorer, a human-AI ideation system built upon an interactive graph visualization interface designed to overcome these limitations. The core of the system is its spatial, tree-like representation of branching idea sequences. Formative user studies indicate that this visualization approach is preferred over chat interfaces for its organizational benefits and its effectiveness in helping users track parallel lines of thought during exploration. The spatial layout inherently supports both the exploration of diverse idea branches (divergence) and the identification of potential connections (convergence). This research focuses on the design and evaluation of this interactive graph interface, examining how its specific visualization and interaction techniques impact the user’s ability to navigate, organize, and develop ideas within complex ideation processes. The primary contribution is a novel, visually driven interface paradigm for human-AI collaboration that enhances the management and exploration of the creative solution space.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Type Checker for Annotated Assembly Programs</title>
<link href="https://hdl.handle.net/1721.1/163024" rel="alternate"/>
<author>
<name>Zanders, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/163024</id>
<updated>2025-10-07T04:14:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Type Checker for Annotated Assembly Programs
Zanders, Julian
The rise of speculative-execution attacks, such as Spectre, has presented a security challenge to developers. Speculating on secret data can expose it, but disabling speculation entirely is costly at runtime. To address this, researchers have been evaluating “smart” speculation schemes, which determine when to speculate and when not to in order to balance runtime with security. Our lab proposes Octal, a solution that utilizes software and hardware in tandem. Data values are marked as secret or public using type inference, and the soundness of that inference is checked using a type checker. Then, hardware can separate the secret and public values. My contributions were to the type checker, as well as some scripting to evaluate results.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Non-Line-of-Sight Imaging Using Single-Photon LiDAR</title>
<link href="https://hdl.handle.net/1721.1/163023" rel="alternate"/>
<author>
<name>Tsao, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/163023</id>
<updated>2025-10-07T04:15:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Real-Time Non-Line-of-Sight Imaging Using Single-Photon LiDAR
Tsao, Nicholas
Robust real-time imaging systems have allowed for many advances in robotics and autonomous navigation, though limited visibility in many real-world settings remains a significant challenge. Non-Line-of-Sight (NLOS) sensing allows imaging systems to “see around corners”, expanding their range of perception and providing access to information for real-time decision-making. A promising approach to NLOS sensing is single-photon LiDAR, which is commonly used for range-finding in many imaging systems. Beyond range-finding, single-photon LiDAR systems provide a rich data source in the form of photon count histograms that capture detailed information from multiple bounces off scene geometry. NLOS imaging can be achieved by parsing third-bounce light from such single-photon LiDAR sensors, which can be used for a variety of detection and localization tasks, and recent work has demonstrated capabilities in a wide range of applications. This work aims to further develop NLOS imaging by demonstrating a fully functional system using low-cost, consumer-grade SPAD hardware for real-time NLOS imaging, detection, and localization. We lay the groundwork for NLOS imaging systems by developing infrastructure for real-time NLOS processing, and we examine the potential for NLOS systems to operate on cheap hardware using data-driven approaches. Our work implements and demonstrates full end-to-end capacity for these NLOS imaging systems in a number of applications, including person detection and localization, facilitating future research in this field and paving the way for NLOS integration into consumer devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Finetuning via Sparse Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/163022" rel="alternate"/>
<author>
<name>Sivakumar, Ragulan</name>
</author>
<id>https://hdl.handle.net/1721.1/163022</id>
<updated>2025-10-07T04:14:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automated Finetuning via Sparse Autoencoders
Sivakumar, Ragulan
The field of interpretability has traditionally been confined to diagnostics. This thesis, however, presents a novel method using interpretability in sparse autoencoders to achieve better performance in small models via instruction finetuning. Specifically, we present UnderstandTune, an autonomous method for assembling high-quality instruction finetuning datasets with minimal human intervention, requiring only concise task descriptions rather than evaluation dataset distributions. Our empirical evaluations show that UnderstandTune consistently outperforms uninformed finetuning baselines across multiple benchmarks. Complementing this, Lalon introduces a mixture-of-informed-experts (MoIE) architecture that routes queries to specialized models independently finetuned via UnderstandTune. This modular approach achieves competitive performance against larger monolithic models in specialized domains, while utilizing fewer parameters, training examples, and computational resources. The framework’s modularity enables independent optimization of components from sparse autoencoders to MoIE routing mechanisms. This research demonstrates how interpretability can be used to enhance performance through intelligent data curation and suggests a new paradigm where interpretability and efficiency reinforce each other toward more capable, resource-efficient AI systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Machine Learning Models for RNA Structure Prediction and Design</title>
<link href="https://hdl.handle.net/1721.1/163021" rel="alternate"/>
<author>
<name>Rubin, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/163021</id>
<updated>2025-10-07T04:14:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative Machine Learning Models for RNA Structure Prediction and Design
Rubin, Dana
Ribonucleic acid (RNA) is a fundamental molecule in biology, central to the regulation and execution of life’s most essential processes. Its diverse roles range from encoding genetic information to catalyzing biochemical reactions. Beyond its modern biological functions, RNA is also believed to have played a pivotal role in the origins of life, underscoring its evolutionary significance. Unlocking the full potential of RNA research and design requires a deep understanding of the intricate relationship between RNA’s three-dimensional structure and sequence. Predicting RNA 3D structures remains a challenging problem due to the complexity of its folding landscape and the limited availability of high-resolution structural data. Inspired by recent advances in deep learning for protein folding and design, this thesis explores novel geometric and generative architectures for modeling RNA. We first present a systematic study on RNA structure prediction using equivariant neural networks within denoising diffusion probabilistic models (DDPMs). Our folding model, named Klotho, captures local atomic interactions and structural features using SO(3)-equivariant message passing layers with a point cloud data representation. Ablation studies confirm that Klotho’s performance scales with higher dimensionality and improves when the input is enriched with secondary structure information and sequence embeddings from RNA foundation models. Building on this foundation, we introduce RiboGen, a multimodal deep learning model that jointly generates both RNA sequence and all-atom 3D structure. RiboGen integrates Flow Matching and Discrete Flow Matching within a unified multimodal representation and employs Euclidean Equivariant Neural Networks to learn geometric features.
Our results demonstrate that RiboGen can generate chemically plausible, self-consistent RNA molecules, highlighting the potential of co-generative models to explore the sequence–structure landscape of RNA in a unified, data-driven framework. Together, these contributions advance the field of RNA modeling by offering scalable, symmetry-aware architectures for prediction and design. They lay the groundwork for future generative systems in RNA biology, therapeutic development, and biotechnological innovations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines</title>
<link href="https://hdl.handle.net/1721.1/163019" rel="alternate"/>
<author>
<name>Pan, Raymond</name>
</author>
<id>https://hdl.handle.net/1721.1/163019</id>
<updated>2026-01-23T15:35:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines
Pan, Raymond
Predictive maintenance of wind turbines is a machine learning task aimed at minimizing repair costs and improving efficiency in the wind turbine and renewable energy industry. Existing machine learning solutions often fail to meet real-world deployment requirements due to fragmented pipelines, lack of domain integration, and reliance on black-box models. Zephyr, a data-centric machine learning framework, addresses these challenges by enabling Subject Matter Experts (SMEs) to incorporate their domain knowledge into the prediction process, and to leverage automated tools for labeling, feature engineering, and prediction tasks without requiring extensive technical knowledge. However, the current version of Zephyr still has limitations, including usability gaps and a reliance on external tools for certain steps. Case studies with real-world data from the renewable energy company Iberdrola demonstrate Zephyr’s potential to integrate domain expertise into wind turbine predictive maintenance (thus streamlining the process) but also expose a sub-optimal user experience. This thesis explores gaps in the current state of the Zephyr framework and proposes refinements to enhance its usability. Key improvements include the consolidation of current tooling and relevant external libraries into a single API, state management with careful logging and exception handling, and improved support for model evaluation. These enhancements aim to support seamless end-to-end predictive modeling workflows, and to provide a more refined and flexible user experience for the Zephyr user base.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimalist Approach to End-to-End Vision Language Navigation with Multi-Modal Foundation Model Features</title>
<link href="https://hdl.handle.net/1721.1/163018" rel="alternate"/>
<author>
<name>Mishra, Kartikesh</name>
</author>
<id>https://hdl.handle.net/1721.1/163018</id>
<updated>2025-10-07T04:14:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimalist Approach to End-to-End Vision Language Navigation with Multi-Modal Foundation Model Features
Mishra, Kartikesh
Recent vision-language navigation (VLN) approaches leverage large models, prompt engineering, and/or explicit reasoning for instruction interpretation and agent guidance. We introduce MiniNav, a minimalist framework employing frozen vision-language foundation models as patch-wise feature extractors, avoiding data- and compute-heavy fine-tuning and cumbersome language-model reasoning. Our lightweight control policies (∼ 10⁵ trainable parameters) are trained on a compact dataset of language-specified navigational behaviors (∼ 10² runs, ∼ 10⁴ frames per behavior). We demonstrate generalization to novel objects and scenes, including direct real-world transfer, despite training on only two objects in a single simulated environment. Through its simple and scalable design, MiniNav provides an alternative to computationally intensive pipelines for robust real-world instruction-following. Our solution can provide a reference for evaluating the effective edge of more complex and larger VLN policies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Sampling: A Framework for Enhancing Speed and Performance of Financial Fraud Detection Models</title>
<link href="https://hdl.handle.net/1721.1/163017" rel="alternate"/>
<author>
<name>Mitchell, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163017</id>
<updated>2025-10-07T04:14:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Sampling: A Framework for Enhancing Speed and Performance of Financial Fraud Detection Models
Mitchell, Samuel
Financial fraud detection is a high-stakes field where rapid inference is essential. While state-of-the-art fraud detection models vary in terms of architectural decisions and appear to exhibit unique computational bottlenecks, we highlight that their run-times are all dominated by extensive information-gathering steps. These steps involve aggregating information from a large set of nodes or edges within a graph, and these intensive steps are performed O(|V|) or O(|E|) times during an inference forward pass, on a graph with |V| nodes and |E| edges. We introduce Strategic Sampling, a general method to accelerate these information-gathering steps. Our approach tailors sampling strategies based on the specific objective function used in each model’s information-gathering process, selecting the most relevant pieces of information to use in each step. This ensures that critical information is retained while significantly reducing the amount of data processed, thus speeding up the computation. We conceptually demonstrate how Strategic Sampling can be applied to message-passing Graph Neural Networks, Graph Transformers, and TGEditor (a state-of-the-art graph editing algorithm). To showcase the effectiveness of our proposed Strategic Sampling method, we implement it in the TGEditor codebase. Our results show that Strategic Sampling not only significantly reduces computation time by more than an order of magnitude, but also improves the F1 score, enhancing both efficiency and performance. This study underscores the potential of Strategic Sampling to universally boost the performance of various financial fraud detection models, paving the way for faster and more accurate fraud detection.
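The core idea can be sketched compactly: replace full neighbor aggregation with aggregation over only the k neighbors ranked highest by an objective-specific relevance score. The dot-product scoring rule, feature dimensions, and k below are illustrative assumptions; the thesis tailors the criterion to each model's own information-gathering objective:

```python
import numpy as np

def strategic_aggregate(node_feat, neighbor_feats, k):
    """Aggregate only the k neighbor features scoring highest for this node."""
    scores = neighbor_feats @ node_feat      # illustrative relevance score
    keep = np.argsort(scores)[::-1][:k]      # top-k neighbors by relevance
    return neighbor_feats[keep].mean(axis=0)

rng = np.random.default_rng(1)
node = np.ones(4)
neighbors = rng.normal(size=(100, 4))

msg_full = neighbors.mean(axis=0)                        # aggregate all 100
msg_sampled = strategic_aggregate(node, neighbors, k=8)  # aggregate only 8
```

Each information-gathering step then touches k items instead of all neighbors, which is where the reported order-of-magnitude runtime reduction comes from.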
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization Techniques for Trustworthy 3D Object Understanding</title>
<link href="https://hdl.handle.net/1721.1/163014" rel="alternate"/>
<author>
<name>Shaikewitz, Lorenzo Franceschini</name>
</author>
<id>https://hdl.handle.net/1721.1/163014</id>
<updated>2025-10-07T04:14:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization Techniques for Trustworthy 3D Object Understanding
Shaikewitz, Lorenzo Franceschini
Autonomous machines require reliable 3D object understanding to interpret and interact with their environment. In this thesis, we consider two tightly coupled 3D object understanding problems. Shape estimation seeks a consistent 3D model of an object given sensor data and some set of priors. Pose estimation seeks an estimate of the object’s position and orientation relative to an invariant shape frame. In general, these problems are non-convex and thus difficult to solve. We present algorithms which nonetheless solve shape and pose estimation efficiently and with assurances in terms of optimality, uncertainty, or latency. We begin in the multi-frame tracking setting, where we propose the certifiably optimal estimator CAST⋆ for simultaneous shape estimation and object tracking. CAST⋆ uses 3D keypoint measurements extracted from an RGB-D image sequence and phrases the estimation as fixed-lag smoothing. Temporal constraints enforce rigidity and continuous motion. Despite the non-convexity of this problem, we solve it to certifiable optimality using a small-size semidefinite relaxation. We also present a compatibility-based outlier rejection scheme to handle outliers, and evaluate the proposed approach on synthetic and real data. Next, we focus on estimating the pose of an object given its shape and a single RGB image (no depth). Assuming only bounded noise on 2D keypoint measurements (e.g., from conformal prediction), we derive an estimator for the most likely object pose which uses a semidefinite relaxation to initialize a local solver. We pair this with an efficient uncertainty estimation routine which relies on a generalization of the S-Lemma to propagate keypoint uncertainty to high-probability translation and rotation bounds. The high-probability bounds hold regardless of the accuracy of the pose estimate, and are reasonably tight when tested on the LineMOD-Occluded dataset.
Lastly, we propose a sub-millisecond solution to simultaneous estimation of object shape and pose from a single RGB-D image. Our approach converts the first-order optimality conditions of the non-convex optimization problem to a nonlinear eigenproblem in the quaternion representation of orientation. We use self-consistent field iteration to efficiently arrive at a local stationary point, finding solutions more than an order of magnitude faster than Gauss-Newton or on-manifold local solvers on synthetically generated data.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Joint Localization and Synchronization via User Cooperation in Non-Terrestrial Networks</title>
<link href="https://hdl.handle.net/1721.1/163013" rel="alternate"/>
<author>
<name>Morrison, James C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163013</id>
<updated>2025-10-07T04:12:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Joint Localization and Synchronization via User Cooperation in Non-Terrestrial Networks
Morrison, James C.
Next-generation (xG) wireless networks require accurate localization and synchronization for efficient resource management and emerging applications. Non-terrestrial networks (NTN) with low Earth orbit (LEO) satellites offer a promising alternative for positioning, navigation, and timing (PNT) by providing diversity and increasing the signal-to-noise ratio (SNR) over global navigation satellite systems (GNSS). However, the primary challenge in NTN-based localization with LEO satellites is the lack of precise clock synchronization, which introduces biases in time-of-arrival (TOA) measurements and limits localization accuracy. This paper introduces a joint cooperative localization and synchronization (JCLS) framework that addresses this challenge through spatiotemporal cooperation, soft information, and simultaneous synchronization. Furthermore, we propose a three-step algorithm for performing JCLS. The first step calculates a coarse position estimate using TOA measurements and the Gauss-Newton method. Then, this coarse estimate is updated using the Levenberg-Marquardt method, which performs joint localization and synchronization. Finally, we derive a soft information-based filter that is used to continuously refine the position and clock error estimates as new measurements become available. We characterize the fundamental performance limits of JCLS using Fisher information, which offers insight into its localization and synchronization accuracy bounds. Furthermore, simulation results based on TOA measurements of the 3rd Generation Partnership Project (3GPP) 5G New Radio positioning reference signal (PRS) demonstrate that the proposed algorithm for JCLS significantly improves localization and synchronization accuracy compared to non-cooperative methods.
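The first step of the algorithm above, a coarse Gauss-Newton position fix from TOA ranges, can be sketched in a few lines. The anchor geometry, noise-free ranges, initial guess, and fixed iteration count are illustrative assumptions; the full JCLS pipeline additionally estimates clock error and refines via Levenberg-Marquardt and a soft-information filter, all of which is omitted here:

```python
import numpy as np

def gauss_newton_toa(anchors, ranges, p0, iters=10):
    """Coarse position fix: minimize sum of (norm(p - a_i) - r_i)^2."""
    p = p0.astype(float)
    for _ in range(iters):
        diffs = p - anchors                    # (n, 3) offsets to anchors
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residual = dists - ranges              # range mismatch
        J = diffs / dists[:, None]             # Jacobian of predicted ranges
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        p = p - step                           # Gauss-Newton update
    return p

anchors = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
truth = np.array([3.0, 4.0, 2.0])
ranges = np.linalg.norm(truth - anchors, axis=1)   # noise-free TOA ranges
estimate = gauss_newton_toa(anchors, ranges, p0=np.array([1.0, 1.0, 1.0]))
```

With clock bias present, each measured range gains a common offset c·δt, which is why the second step must estimate position and clock error jointly.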
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multipartite Quantum Clock Synchronization Via Collective Symmetric States</title>
<link href="https://hdl.handle.net/1721.1/163012" rel="alternate"/>
<author>
<name>Keskin, Ufuk</name>
</author>
<id>https://hdl.handle.net/1721.1/163012</id>
<updated>2026-01-16T19:10:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multipartite Quantum Clock Synchronization Via Collective Symmetric States
Keskin, Ufuk
This thesis investigates multipartite quantum clock synchronization (QCS) tasks using a class of quantum states, called collective symmetric (CS) states, which generalize Dicke and N00N states. Employment of CS states in previous QCS procedures is shown to improve synchronization performance in various network scenarios. The focus of the paper is on QCS procedures that, after the distribution of quantum states, rely exclusively on local operations and classical communication (LOCC), ensuring compatibility with highly noisy quantum channels. Two synchronization scenarios are considered: (i) synchronization between the two nodes of an arbitrarily chosen pair of nodes, and (ii) global synchronization where all nodes wish to synchronize their clocks to a common average time. First, a framework in which the previous procedures operate employing the CS states is introduced. Using this framework, possible limitations of the QCS procedures in terms of estimation ambiguity and lack of robustness are pointed out. Second, a procedure referred to as the tactical delay procedure (TDP) is proposed for each of the two synchronization scenarios. The TDP resolves the mentioned limitations and outperforms the state-of-the-art multipartite QCS procedures in terms of synchronization precision without requiring additional quantum resources.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Embedded HOWFSC Algorithms</title>
<link href="https://hdl.handle.net/1721.1/163011" rel="alternate"/>
<author>
<name>Eickert, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/163011</id>
<updated>2025-10-07T04:14:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accelerating Embedded HOWFSC Algorithms
Eickert, Brandon
The quest to directly image planets of other solar systems not only demands state-of-the-art coronagraphs, but also places extreme performance demands on space-based processors. Direct imaging requires precise wavefront control to acquire the 10¹⁰ contrast necessary to reveal a dim, Earth-like exoplanet. This precise level of control is only possible if high-order wavefront sensing and control (HOWFSC) algorithms are executed with enough speed to offset wavefront error accumulation. Of the many aspects that make high-contrast imaging difficult, a central bottleneck is the speed at which we can run these algorithms. At the center of this work, we aim to accelerate the execution of two foundational HOWFSC algorithms: optical modeling and Electric Field Conjugation (EFC). Optical modeling underpins both Jacobian-based EFC, and a relatively new variant of EFC, called adjoint-based EFC (AD-EFC).
The two main contributions of this thesis are to port bottleneck HOWFSC algorithms to the relevant computing environments, and quantify speedups attained by both algorithm choice and implementation optimization. This work explores the acceleration of optical modeling for a vector vortex coronagraph through the use of the FFTW library, and the acceleration of EFC by implementing adjoint-based EFC in an embedded context. We utilize functional analogs to radiation-hardened processors, using the NXP T1040 in place of the BAE RAD5545, and the NXP LS1046 in place of the LS1046-Space. We find that the FFTW library enabled a factor of six speedup for 4096 × 4096 fast Fourier transforms (FFTs), and a factor of five for 2048 × 2048 FFTs. With these significant speedups, the bottleneck within the vortex operations of the optical model shifts from the FFT to matrix multiplication. We additionally time the execution of the underlying routines of Jacobian-based EFC and AD-EFC to estimate that AD-EFC is 46 times faster than Jacobian-based EFC. Despite these speedups, AD-EFC is still a factor of 124 away from 100-second latency for our specific optical model. These results demonstrate that one to two orders of magnitude of speedup must be attained by either further optimizing algorithm implementations, or exploring other parallelization strategies, computing architectures, and mission paradigms to achieve a latency on the order of 100 seconds.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formalizing Causal Models Through the Semantics of Conditional Independence</title>
<link href="https://hdl.handle.net/1721.1/163010" rel="alternate"/>
<author>
<name>Zhang, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/163010</id>
<updated>2026-01-21T18:53:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Formalizing Causal Models Through the Semantics of Conditional Independence
Zhang, Anna
Many foundational tools in causal inference are based on graphical structure and can involve complex conditions that obscure the underlying causal logic. Given the inherent complexity and subtlety of cause-and-effect phenomena, establishing formal guarantees about these tools is both challenging and important. This thesis presents a semantics-driven formalization of causal models within the Coq proof assistant, enabling precise, mechanized reasoning about causal relationships. Central to this work is a new function-based definition of conditional independence, which captures how changes propagate through a causal graph. We prove that this semantic notion is equivalent to the standard graphical criterion of d-separation, thereby establishing a rigorous bridge between structural and semantic interpretations of independence. The formalization includes a library of graph-theoretic and causal-reasoning tools, encompassing key concepts such as mediators, confounders, and colliders. By linking the syntactic and semantic perspectives on causality, this work lays a robust foundation for formally verifying causal assumptions and guiding experimental design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination</title>
<link href="https://hdl.handle.net/1721.1/163009" rel="alternate"/>
<author>
<name>Zhang, Jackson</name>
</author>
<id>https://hdl.handle.net/1721.1/163009</id>
<updated>2025-10-07T04:14:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination
Zhang, Jackson
Embodied multi-agent systems, comprising autonomous agents interacting within shared environments, enable intelligent, collaborative solutions for tasks requiring real-time coordination and adaptability. While applications span diverse fields, from disaster response to healthcare, planning in these systems remains challenging due to partial egocentric observations and limited environmental awareness. This work addresses these challenges by introducing a software module that synthesizes a shared world state from individual agent views, maintaining spatial information about objects and agents to support more effective joint action planning. Integrated into the LLAMAR framework, this module aims to improve planning accuracy and efficiency. The proposed approach is evaluated using metrics such as success rate, transport efficiency, and coverage performance. Our evaluation demonstrates that utilizing a perfect (oracle-generated) world state significantly enhances planning effectiveness. Notably, under these ideal conditions, the success rate of the LLAMAR planner improved by over 16%. These findings underscore the critical impact of accurate world state representation on multi-agent performance and highlight the potential for significant advancements in collaborative task execution in dynamic, unstructured settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Study of Novel Passive Thermal Control Technology for Spacecraft</title>
<link href="https://hdl.handle.net/1721.1/163007" rel="alternate"/>
<author>
<name>Shafer, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/163007</id>
<updated>2025-10-07T04:14:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parametric Study of Novel Passive Thermal Control Technology for Spacecraft
Shafer, Emma
Thermochromic variable emissivity materials (VEMs) are a relatively new passive thermal control technology used for spacecraft radiators. VEMs passively change their emissivity with temperature, exhibiting low emissivity at low temperatures and high emissivity at high temperatures. This property allows spacecraft to use less heater power and experience less extreme temperature swings without adding active thermal control systems, giving VEM technology the potential to become more widely used in spacecraft radiators. Because thermochromic VEMs are still relatively new, no study has yet performed a parametric sweep of possible VEM profiles and common spacecraft parameters to determine the best-case uses of particular profiles. This thesis models a single-node spacecraft in an equatorial low Earth orbit, varying the spacecraft's shape, surface area, and thermal mass using Thermal Desktop. The spacecraft's temperature history in orbit, particularly its orbit minimum, maximum, and average temperatures and its orbit temperature range, is recorded, and twelve VEM profiles are compared against default black and white paint materials to see how they change these four metrics. The desired outcome is for the VEMs to reduce the temperature range the most relative to black or white paint while keeping temperatures within typical requirements for spacecraft components. It is found that, compared to white paint, VEMs always increase the orbit minimum, maximum, and average temperatures and the temperature range across all nodal thermal masses and surface areas studied.
For spacecraft with lower surface areas, white paint alone cools components below typical temperature requirements, so even though white paint always yields a smaller temperature range than VEMs, VEMs are recommended over white paint for lower-surface-area spacecraft because they are better at keeping components within typical temperature requirements. Compared with black paint, at greater surface areas black paint has lower minimum temperatures and greater maximum temperatures than all VEMs. At lesser surface areas, the black-painted node typically has minimum and maximum temperatures in the middle of the VEMs' minimum and maximum temperatures. For all surface areas and thermal masses, the average temperature of the black node typically falls in the middle of the VEM nodes' average temperatures; relative to the VEMs' averages, the black node's average temperature decreases as node height increases. For all node heights and thermal masses, VEMs always reduce the temperature range relative to black paint. VEMs are therefore shown to be better than black paint at keeping spacecraft components within typical temperature requirements, and which VEM to choose depends on the specific component and its temperature requirements. The largest difference among individual VEM profiles is in orbit average temperature: the lower a VEM's transition temperature, the lower the average temperature. Only at the greatest nodal surface areas and smallest nodal heights is there a significant difference in temperature range between individual VEM profiles; typically, the lower the VEM's transition temperature, the smaller its temperature range. Future work includes expanding the parameters studied and examining different orbits, spacecraft shapes, and VEM profiles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies</title>
<link href="https://hdl.handle.net/1721.1/163006" rel="alternate"/>
<author>
<name>Ahlers, Matthew C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163006</id>
<updated>2026-01-05T16:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies
Ahlers, Matthew C.
Autonomous sailing vessels offer a promising solution for maritime research, providing low-maintenance, sustainable platforms for environmental monitoring and data collection. These vessels utilize wind power, eliminating the need for conventional fuel and enabling long-duration operations with minimal environmental impact. Their applications range from oceanographic studies to maritime surveillance, where persistent and autonomous data collection is essential. This thesis explores the challenges and methodologies associated with path planning for autonomous sailing, particularly in the context of survey operations. Unlike traditional motorized vessels, sailing autonomy must account for wind variability, sail dynamics, and limited maneuverability, requiring specialized path-planning techniques to ensure efficient and reliable navigation. The research investigates various sail and hull configurations, the dynamics of wind-powered propulsion, and the application of autonomy frameworks such as MOOS-IvP. A key focus is on optimizing continuous coverage path planning (CPP) to maximize efficiency while adapting to environmental constraints. By integrating real-time wind data and vessel performance characteristics, the study refines survey strategies that enhance mission effectiveness. Different survey strategies are implemented and evaluated using both simulation and real-world testing on the Charles River. These trials demonstrate the feasibility of fixed-path decomposition approaches and adaptive moving-horizon control methods, and evaluate the impact of wind conditions on autonomous sailing performance. The results contribute to the development of robust and efficient survey strategies that improve the autonomy and reliability of wind-powered marine vessels.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming</title>
<link href="https://hdl.handle.net/1721.1/163004" rel="alternate"/>
<author>
<name>Hao, Yilun</name>
</author>
<id>https://hdl.handle.net/1721.1/163004</id>
<updated>2025-10-07T04:13:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming
Hao, Yilun
While large language models (LLMs) have recently demonstrated strong potential in solving planning problems, LLMs, as zero-shot planners themselves, are still not capable of directly generating valid plans for complex planning problems such as multi-constraint or long-horizon tasks. This motivates the need to develop a robust and reliable planning system for complex real-world planning problems. Furthermore, many frameworks aiming to solve complex planning problems often rely on task-specific preparatory efforts, such as task-specific in-context examples and pre-defined critics or verifiers, which limits their cross-task generalization capability. This motivates the need to extend robust and reliable planning systems with strong generalization capability. In this thesis, we first develop an LLM-based planning framework that formalizes and solves complex multi-constraint planning problems as constraint satisfaction problems and can reliably identify the unsatisfiable cores for unsatisfiable requirements, provide failure reasons, and offer personalized modification suggestions. Then, we generalize the paradigm by proposing a general-purpose framework that leverages LLMs to capture key information from planning problems and formally formulate and solve them as optimization problems from scratch, with no task-specific examples needed. Comprehensive experimental results have shown that our frameworks significantly outperform the baselines and have strong performance across tasks and LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163002" rel="alternate"/>
<author>
<name>Plaza Rivera, Christian O.</name>
</author>
<id>https://hdl.handle.net/1721.1/163002</id>
<updated>2025-10-07T04:13:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency
Plaza Rivera, Christian O.
Lithium (Li)-metal batteries (LMBs) present a promising avenue for high-energy applications. However, their practical adoption is constrained by challenges such as dendrite formation and unstable interphases. This study investigates the intricate interplay between electrolyte-dependent thermodynamics, kinetics, and transport properties in LMBs, focusing on the concentration effects in fluoroethylene carbonate (FEC) and 1,2-dimethoxyethane-based electrolytes containing lithium bis(fluorosulfonyl)imide. Due to FEC’s unique properties, these electrolytes facilitate significant upshifts in the Li redox potential and contribute to stable interphases and voltage profiles. Our findings reveal that the redox potential is primarily governed by the solvent’s electron-donating ability, reflecting underlying solvation dynamics, while the electrolyte permittivity influences reaction entropy trends. The results show entropy changes from increased molecular disorder at moderate concentrations to reduced entropy in highly concentrated regimes, driven by the formation of ion–solvent complexes. Kinetic analyses demonstrate a volcano-shaped dependence of exchange current density on concentration, centered at 2 M. Two prevailing perspectives propose that either kinetic–transport interplay or thermodynamic properties govern Coulombic efficiency (CE). However, separating these contributions is complex, since both higher exchange current density and upshifts in the Li redox potential enhance CE. Furthermore, CE strongly aligns with the combined effects of kinetics, thermodynamics, and transport, emphasizing the need for a holistic electrolyte design approach. Optimizing these three factors makes it possible to stabilize the interphase, promote uniform Li deposition, and elevate the overall safety and performance of next-generation LMBs.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications</title>
<link href="https://hdl.handle.net/1721.1/163001" rel="alternate"/>
<author>
<name>Shevgaonkar, Mihir</name>
</author>
<id>https://hdl.handle.net/1721.1/163001</id>
<updated>2025-10-07T04:13:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications
Shevgaonkar, Mihir
Electroaerodynamic (EAD) propulsion is a novel form of propulsion that is nearly silent and has no moving parts. The first functional untethered heavier-than-air EAD aircraft had an endurance of 90 seconds and could only fly in a straight line. To enable a practical fixed-wing EAD aircraft that can fly outdoors with a payload for an extended period of time, improved power conversion technology is necessary. Prior work specifies a practical EAD aircraft as one with an endurance of 10 minutes, a payload capacity of 200 g, and full controllability. This work explores methods of increasing the specific power of power converters for EAD aircraft from 1.15 kilowatts per kilogram to over 2.0 kilowatts per kilogram. Such an increase can be achieved by utilizing magnetics integration and thermal management techniques, as well as adjustments in the operating point of the power converter. The power converter for the first generation EAD aircraft had an input voltage of 200 V, an output voltage of 40 kV, an output power of 600 W, a specific power of 1.15 kilowatts per kilogram, and an efficiency of 85 percent. In this work, a power converter with an input voltage of 200 V, an output voltage of 20 kV, an output power of 1476 W, a specific power of 2.7 kilowatts per kilogram, and an efficiency of 96 percent was demonstrated to work for a 40 second duration. At the end of the test, device temperatures continued to increase, so it has not been proven that the converter can work in thermal steady state as required for a 10 minute flight. Future work would involve modifying the test setup to allow for adequate ventilation of the ambient air around the converter, as well as modifying the converter with adequate thermal management so as to enable operation under thermal steady state.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise</title>
<link href="https://hdl.handle.net/1721.1/163000" rel="alternate"/>
<author>
<name>Cezairli, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/163000</id>
<updated>2025-10-07T04:12:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise
Cezairli, Mina
Operational interventions, such as enabling more fuel-efficient trajectories, are desirable in mitigating the environmental impact of air travel due to their relatively fast implementation potential. In particular, the vertical inefficiency arising from the altitude stratification in the airspace can be mitigated by relaxing vertical constraints. The feasibility of vertical flexibility is evaluated by quantifying the rate of close encounters and the frequency of alerts that would be needed to prevent them. Substantial diurnal variability in the number of close encounters was found in the airspace, with lower rates of events during the nighttime period. Furthermore, regional differences among Air Route Traffic Control Centers were observed in the number of close encounters. The frequency of controller intervention events that would have to occur was evaluated at 25 NM and 50 NM alerting distance levels, and it was found that, given sufficient technological capabilities for alerting at the 25 NM reaction distance, most centers would have fewer than 10 alerts per hour during the nighttime period. Boston, Miami, and Seattle appeared especially promising, with approximately one alert per hour for each region. Finally, the potential fuel benefit from enabling vertically optimal trajectories was estimated to be up to 100,000 gallons of fuel savings per month in the case of a CONUS-wide nighttime implementation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions</title>
<link href="https://hdl.handle.net/1721.1/162999" rel="alternate"/>
<author>
<name>Zhang, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/162999</id>
<updated>2025-10-07T04:13:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions
Zhang, Joseph
Understanding the interaction between weather and disruptions in complex air transportation networks is important to the design and evaluation of preemptive measures and responses taken by air traffic managers. However, the occurrence of disruptive weather events is often rather limited compared to the amount of data available for nominal operations. Additionally, in large-scale systems with many known and unknown confounding factors, it can be difficult to identify the relevance of existing data to different underlying distributions of interest. Furthermore, existing work generally follows a frequentist paradigm in predicting disruptions based on weather, and does not easily lend itself to inferring the causes of disruptions, which can be important both for building models and making predictions, and for generating test cases to stress-test proposed design decisions. In this thesis, we develop a hierarchical Bayesian model for air traffic network operations, and investigate methods for learning these models in data-constrained settings by extending existing work on retrospectively analyzing failures. We also include a guiding case study performed on LaGuardia Airport, in which a generative model is developed for the interaction between weather conditions and airport-level parameters within a single airport, trained on unlabeled historical data, and evaluated by simulating disruptions on historical schedules.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS</title>
<link href="https://hdl.handle.net/1721.1/162998" rel="alternate"/>
<author>
<name>Wu, Ivy</name>
</author>
<id>https://hdl.handle.net/1721.1/162998</id>
<updated>2025-10-07T04:13:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS
Wu, Ivy
σOS aims to provide both serverless and stateful support to cloud applications while maintaining strong isolation, security, and efficient startup times and scheduling among multiple users. While σOS and its container startup times have been successfully benchmarked for tasks written, compiled, and statically linked in Golang and Rust, it currently lacks support for other languages, including interpreted ones like Python. To bridge this gap, this paper presents the first integration of an interpreted language into σOS, enabling native Python support without compromising the system’s core principles. Our design, σPy, achieves this through three key ideas: (1) system call interposition via LD_PRELOAD to enable just-in-time dependency management, where Python libraries are fetched on-demand from tenant-specified AWS S3 buckets, avoiding overhead during container initialization; (2) a multi-layered mount namespace that spans the local machine, a per-realm Docker container, and a per-proc σcontainer, enabling efficient dependency caching at the per-tenant granularity; and (3) a hybrid C++, C, and Python API layer that bridges σOS’s Protobuf-based RPC system with Python’s dynamic types. Preliminary benchmarks demonstrate that σPy achieves performance comparable to that of compiled languages like Golang when interacting with the σOS API, with only 0.2 - 0.3 additional milliseconds of overhead on all tested API calls, validating the success of Python programs on the σOS architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating LLM Runtime Latency</title>
<link href="https://hdl.handle.net/1721.1/162997" rel="alternate"/>
<author>
<name>Wang, Sarah Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/162997</id>
<updated>2025-10-07T04:14:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulating LLM Runtime Latency
Wang, Sarah Y.
Large Language Models (LLMs) are expensive to run and can incur high latencies. Each LLM application has its own cost and latency targets. For example, AI voice assistants operate under low latency objectives, while large document batch processing jobs are typically cost-sensitive. However, navigating these trade-offs is not trivial, as LLM latency is highly task-specific and depends on factors such as the offered query load, the hardware configurations, request properties, and various model characteristics. To support the user in configuring their deployment according to their application needs, we introduce vLLMSim, an accurate simulator that estimates the latency of a given workload on different hardware configurations. vLLMSim advances two key avenues toward latency-aligned LLM deployments. First, the simulated latency metrics inform the user’s model and hardware choice, so they can use a configuration that is ideal for their workload. Second, our simulator enables researchers to quickly test latency-improving ideas, bypassing the need for time-consuming implementations before validating their effectiveness. In fact, vLLMSim is already used in two research projects with the goal of reducing latency and cost of LLM inference. In this thesis, we show how vLLMSim’s design allows it to accurately support the use cases above, while providing highly accurate runtime predictions. To support hardware exploration without GPU access, vLLMSim provides precomputed performance profiles that are sufficient to accurately simulate the user’s workload. The simulator code can be found here, and the instrumented vLLM code for creating profiles can be found here.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Latent Space Interpretation via In-the-loop Fine-Tuning</title>
<link href="https://hdl.handle.net/1721.1/162996" rel="alternate"/>
<author>
<name>Wen, Collin</name>
</author>
<id>https://hdl.handle.net/1721.1/162996</id>
<updated>2025-10-07T04:13:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Methods for Latent Space Interpretation via In-the-loop Fine-Tuning
Wen, Collin
With language models increasing exponentially in scale, being able to interpret and justify model outputs is an area of increasing interest. Although enhancing the performance of these models in chat mediums has been the focus of interaction with AI, the visualization of model latent space offers a novel modality of interpreting information. Embedding models have traditionally served as a means of retrieving information relevant to a topic by converting text into a high-dimensional vector. The high-dimensional vector spaces created via embedding offer a way to encode information that captures similarities and differences in ideas, and visualizing these nuances in terms of meaningful dimensions can offer novel insights into the specific qualities that make two items similar. Leveraging fine-tuning mechanisms, dimension reduction algorithms, and Sparse Autoencoders (SAEs), this work surveys state-of-the-art techniques to visualize the latent space in highly interpretable dimensions. ConceptAxes, a framework derived from these techniques, is provided to produce axes that can capture high-level ideas that are ingrained into embedding models. ConceptAxes with highly interpretable dimensions allow for better justification of the latent space and its clusters. This method of increasing embedding transparency proves valuable in various domains: (1) AI-enhanced creative exploration can be more guided and customized for a particular experience and (2) high-level insights can be made more intuitive with vast text datasets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commanding, Telemetry, and Software Strategy for CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link href="https://hdl.handle.net/1721.1/162995" rel="alternate"/>
<author>
<name>Whitmore, Garrett</name>
</author>
<id>https://hdl.handle.net/1721.1/162995</id>
<updated>2025-10-07T04:13:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Commanding, Telemetry, and Software Strategy for CubeSat Laser Infrared CrosslinK (CLICK) Mission
Whitmore, Garrett
This work outlines the software-related requirements necessary for successful operations of the NASA-sponsored Cubesat Laser Infrared CrosslinK (CLICK) B/C mission [1] [2]. This twin-cubesat mission will demonstrate peer-to-peer laser-communication capabilities novel at this small terminal scale. Optical laser communication terminals can have lower Size, Weight, and Power (SWaP) compared with traditional radio communication, as well as fewer licensing regulations and improved link security. CLICK-B/C follows from CLICK-A, a risk-reduction mission that successfully performed laser downlink with a ground station at MIT [3]. In addition to downlink, B/C will perform crosslink experiments at a data transmission rate over 20 Mbps at ranges between 20 and 580 km in Low-Earth Orbit (LEO). This thesis focuses on the software related to the function of the satellite payload, in particular, the improvements and additions made to the operating system, software systems that were ported over from CLICK-A, the integration and testing of these subsystems, and analyses done to prepare for in-flight operations before launch. An overview of the MIT &amp; UF payload hardware and electronics is given before detailing interactions with components as necessary. A deep dive into the payload software libraries, internal and external communication channels, and operating system build details is given. A description of functional testing and its results is laid out, along with a template crosslink experiment script and further specifications for mission-related analyses and pre-launch preparations. This work on software upgrades, verification, and examination is necessary for CLICK-B/C to reach its stated mission goals, here on Earth and in its orbit.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundational Verification of Running-Time Bounds for Interactive Programs</title>
<link href="https://hdl.handle.net/1721.1/162994" rel="alternate"/>
<author>
<name>Tockman, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/162994</id>
<updated>2025-10-07T04:14:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundational Verification of Running-Time Bounds for Interactive Programs
Tockman, Andrew
The field of formal methods has a rich history of practical application in verification of the correctness of software. Existing verification tooling operates at a wide range of rigor, from proving relatively weak properties via traditional static analysis to powerful theorem provers that can express very precise specifications. It is sometimes desirable to prove properties about programs that reference not just semantic behavior but also other metaproperties of the program’s execution, such as runtime or I/O histories. There is also a wide variety of existing tooling for proving bounds on program runtime. However, there is no prior work on a maximally rigorous verification system that can prove predicates involving all of semantic behavior, runtime, and I/O. Our contribution is exactly that: we extend the existing Bedrock2 framework, which implements a C-like systems language within a powerful proof engine, together with a verified compiler capable of expressing arbitrary proof conditions involving behavior and I/O, and augment it to reason about runtime as well. As a capstone proof of concept, we apply the new metrics machinery to an IoT lightbulb controller (already verified with respect to the previous framework) and produce a new specification with time bounds based on arrival of network packets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph Neural Networks for City Policy Recommendations as a Link Prediction Task</title>
<link href="https://hdl.handle.net/1721.1/162992" rel="alternate"/>
<author>
<name>Rozario, Consecrata Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/162992</id>
<updated>2025-10-07T04:13:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph Neural Networks for City Policy Recommendations as a Link Prediction Task
Rozario, Consecrata Maria
Graph Neural Networks (GNNs) have become a widely utilized tool in recommender systems in various contexts. While recommendation tasks can be approached using a multitude of data structures and types, graph-structured data is particularly well-suited for this domain, as graphs naturally capture a variety of relationships and interactions between entities. By leveraging graph representation learning, we can effectively encode these complex dependencies, enabling robust and context-aware recommendations. We apply this methodology in the domain of policy recommendations for urban centers. To recommend policies, we learn the complex local and global relationships between cities, their environmental features, and currently implemented policies. We construct a graph structure relating cities, implemented policies, and city features, and formulate the policy recommendation task as a GNN link prediction problem, demonstrating its potential to scale data-driven urban governance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Detection of Landmark Acoustic Cues in Human Speech</title>
<link href="https://hdl.handle.net/1721.1/162991" rel="alternate"/>
<author>
<name>Park, Janette H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162991</id>
<updated>2025-10-07T04:13:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Detection of Landmark Acoustic Cues in Human Speech
Park, Janette H.
This study presents a framework for the automatic detection of the eight landmark acoustic cues in human speech. Landmarks are key articulatory events, produced as a result of minimal vocal tract constriction (e.g., vowels and glides) or closures and releases in the oral region (e.g., nasal, fricative, and stop consonants). A complete landmark detection system is a key step towards an overarching speech analysis system that relies on lexical acoustic cues, as landmarks guide the identification of other acoustic cues in speech. In the proposed framework, the acoustic properties of each of the eight landmark cues are modeled by extracting speech-related measurements and training Gaussian Mixture Models (GMMs). To remove the effects of speaker variability and different recording environments, methods for normalizing speech-related measurements are proposed and evaluated. For a new speech signal, the normalized speech-related measurements are extracted at each time frame and evaluated against the eight trained GMMs to compute the likelihood of each landmark. Using Bayes’ Theorem, the posterior probabilities are calculated to determine the most probable landmark (or absence thereof) at each time frame. The system’s performance is evaluated by comparing the detected landmarks to the manually labeled ground truth landmark annotations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation</title>
<link href="https://hdl.handle.net/1721.1/162990" rel="alternate"/>
<author>
<name>Lin, Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/162990</id>
<updated>2025-10-07T04:13:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation
Lin, Vincent
As single-cell transcriptomics datasets continue to grow in size and biological complexity, current models for cell type annotation remain limited in their generalizability and are often evaluated on only a small fraction of the standardized cell types defined in modern ontologies. Current state-of-the-art models for transcriptomic representation demonstrate that deep learning models can extract rich features from single-cell data but are evaluated on very few cell types and perform poorly on broader datasets. This work introduces a multimodal model architecture that integrates large language models (LLMs) with gene expression encoders to address this scalability gap in cell type annotation. Inspired by vision-language frameworks, our architecture combines a pretrained scRNA encoder with a Perceiver Resampler that maps gene expression profiles into the latent space of a large language model. We construct structured, ontology-grounded datasets of up to 197 cell types and evaluate our model's performance using instruction fine-tuning. Our experiments analyze the impact of integrating language modeling components with scRNA encoders and their benefit on cell type annotation performance for large, diverse datasets. Our results show that while a scRNA encoder may be sufficient for small datasets, our single-cell model leveraging LLMs consistently outperforms the scRNA encoder baseline on larger datasets, with a widening gap in classification performance as data complexity increases, demonstrating the scalability and improved generalizability of our multimodal architecture. We also provide further analysis of the tradeoffs associated with using the natural language domain for biological analysis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference Time Search for Protein Structure Prediction</title>
<link href="https://hdl.handle.net/1721.1/162989" rel="alternate"/>
<author>
<name>Qi, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/162989</id>
<updated>2025-10-07T04:13:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inference Time Search for Protein Structure Prediction
Qi, Richard
Scaling inference-time compute for deep learning models has led to superhuman performance in games and enhanced reasoning capabilities for language models. However, similar gains have not yet been made in the field of biomolecular structure prediction. We introduce a new paradigm for inference-time search by adding architectural components and a finetuning procedure to state-of-the-art structure prediction models that give rise to a discrete latent space. We implement algorithms for searching and sampling in this discrete latent space and conduct experiments on a small model, demonstrating an increase in oracle and top-1-selected accuracy for predicted protein-protein complex structures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight</title>
<link href="https://hdl.handle.net/1721.1/162988" rel="alternate"/>
<author>
<name>Chu, Kaitlyn A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162988</id>
<updated>2025-10-07T04:13:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight
Chu, Kaitlyn A.
Lower Body Negative Pressure (LBNP) has long been explored as a countermeasure to the physiological deconditioning and orthostatic intolerance associated with prolonged microgravity exposure. Traditional LBNP systems, however, are large, stationary devices that require astronauts to remain immobile during use, limiting their integration into daily spaceflight routines. Although more mobile LBNP solutions have emerged, they remain cumbersome and uncomfortable, ultimately still restricting multitasking and reducing operational feasibility. This study introduces the Soft Kinetics INterface (S.K.I.N.), a flexible, wearable structure designed to support the application of localized LBNP. The goal was to evaluate whether targeted negative pressure applied through the S.K.I.N. could replicate the fluid shift effects of a traditional LBNP chamber while improving comfort, mobility, and time-efficiency. The human thigh was chosen as the focus of this technology demonstration due to its known responsiveness to LBNP and its suitability for small-scale implementation. The development of the S.K.I.N. began with finite element modeling (FEM) to identify optimal material properties and structural geometry. Iterative physical prototyping resulted in a sinusoidal silicone waveform design, selected for its mechanical stability and user comfort. The final prototype was then evaluated in three experimental phases: (1) mechanical testing using pressure-sensitive film to assess structural integrity under vacuum, (2) an ex-vivo pig leg study to validate experimental protocols and assess the S.K.I.N.’s ability to induce fluid shifts, and (3) a human study (n=10) comparing fluid shifts between the S.K.I.N. and a scaled-down version of the traditional LBNP chamber. On average, results from the human study showed that the S.K.I.N. successfully induced localized fluid shifts similar to those of the chamber. However, response magnitude varied considerably across participants. 
Most of the observed effect was driven by female participants, who exhibited more pronounced fluid shifts, while most male participants showed minimal or no measurable response. FEM simulations supported this finding, suggesting that higher fat-to-muscle ratios — more common in women — may enhance tissue deformability and volume displacement, thereby facilitating greater fluid shifts under negative pressure. Although these differences limit generalizability, they also highlight the potential for the S.K.I.N. to serve as a more targeted countermeasure for specific physiologies or user groups. While the current S.K.I.N. design’s limited surface area constrains its overall effect, the concept shows promise. The ability to deliver targeted fluid shifts in a more mobile, comfortable format could enable integration into dynamic operational settings. Future work should focus on expanding the system to cover larger areas, such as a whole-pants version, and incorporating a portable vacuum source for mobility in both spaceflight and terrestrial applications. Larger, more diverse participant cohorts will also be necessary to assess long-term usability, efficacy, and individual variability in response.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mantis: A Screen Magnification Tool for Diagram Traversal</title>
<link href="https://hdl.handle.net/1721.1/162987" rel="alternate"/>
<author>
<name>Patterson, Lydia J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162987</id>
<updated>2025-10-07T04:13:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mantis: A Screen Magnification Tool for Diagram Traversal
Patterson, Lydia J.
Complex diagrams and charts can be difficult for people who use screen magnification to navigate. A sense of spatial context and of the diagram’s overall structure is oftentimes lost, as magnifiers can only magnify a fraction of the screen at any given time. So, while sighted users have both clarity and full context simultaneously, screen magnifier users often have to choose or split their attention between the two. Existing screen magnifiers are content-agnostic, so the current way of navigating visualizations is freeform and unguided. The burden of figuring out where to explore while retaining a mental model of the diagram is placed entirely on the user. In this paper, we present Mantis—six prototypes of an automatic, content-aware screen magnification tool designed to aid people who have low vision in the traversal of diagrams. Each design experiments with what sorts of information might be provided to help the user retain a sense of context. Further, they each explore how such a tool might use its knowledge of the diagram’s semantic structure to streamline traversal to and from areas of interest to the user. To this end, we evaluate how these proof-of-concepts improve the user’s navigational experience and reduce the user’s cognitive load.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard</title>
<link href="https://hdl.handle.net/1721.1/162986" rel="alternate"/>
<author>
<name>Luong, Jacky K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162986</id>
<updated>2025-10-07T04:12:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard
Luong, Jacky K.
Teaching tools such as the Tragedy of the Commons (ToC) participatory simulation, developed by MIT STEP Lab, have the potential to develop different skills or knowledge compared to single-player educational games. ToC illustrates the challenges of managing shared resources, but its existing teacher dashboard may not be well-suited to support its growing use across various classrooms. Through surveying and interviewing educators along with observing classroom usage, the software's shortcomings and opportunities for improvement were identified. This resulted in the design and implementation of a redesigned teacher dashboard, including a new “central bank” feature that provides structure to support more complex simulations. Additional enhancements improved usability and performance. Evaluations with teachers and controlled playtests demonstrated that these changes show promise in enabling richer classroom dynamics and making facilitation easier. The findings underscore the importance of teacher experience in educational game design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists</title>
<link href="https://hdl.handle.net/1721.1/162985" rel="alternate"/>
<author>
<name>Liu, Andi</name>
</author>
<id>https://hdl.handle.net/1721.1/162985</id>
<updated>2025-10-07T04:13:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists
Liu, Andi
This thesis tests two design questions for Large Language Model (LLM) Chatbot Therapists: Which therapeutic school suits an LLM best, and does an explicit Theory-of-Mind (ToM) reflection improve outcomes? We prompted GPT-4.1-mini to act as eight therapists — CBT, Narrative, Psychodynamic, and SFBT, each with and without a ToM step — and held 240 simulated sessions with scripted AI patients. SFBT achieved the greatest projected PHQ-9 improvement (around 4 points), significantly higher than CBT, Narrative, or Psychodynamic approaches. Immediate distress (SUDS) fell modestly and uniformly across schools. ToM reasoning did not alter either measure. The findings show that extra “thinking time” might not automatically translate into therapeutic gain, but also highlight a current strength of LLMs: executing brief, rule-based therapies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Fiber Coupling with Actuated Mirrors</title>
<link href="https://hdl.handle.net/1721.1/162984" rel="alternate"/>
<author>
<name>Vel, Vetri Senthil</name>
</author>
<id>https://hdl.handle.net/1721.1/162984</id>
<updated>2025-10-07T04:13:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automated Fiber Coupling with Actuated Mirrors
Vel, Vetri Senthil
Almost all atomic physics experiments rely on precise alignment of lasers. For example, optical fields are used to cool, control, and image atoms in neutral atom arrays. In this thesis, we present a design for mirrors actuated by servos that allow the precise, repeatable alignment of lasers in free space optical setups. We then apply these actuated mirrors to automate fiber coupling, where laser beams are coupled from free space into a fiber waveguide. We present the theory of fiber coupling and use experimental data on the fiber coupling landscape to develop an accurate digital twin. Insights from the combination of the digital twin and experimental data are used to develop a fast and effective algorithm for automated fiber coupling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ACED: Automatic Concourse Event Detection</title>
<link href="https://hdl.handle.net/1721.1/162983" rel="alternate"/>
<author>
<name>Wagner, Luke A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162983</id>
<updated>2025-10-07T04:12:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">ACED: Automatic Concourse Event Detection
Wagner, Luke A.
Fans of the San Antonio Spurs often face long delays when traversing the arena or waiting for food. Automatic Concourse Event Detection (ACED) is a novel system designed for tracking these statistics in the Spurs’ arena in real time. We use existing machine learning models and introduce novel processing algorithms to identify the total number of people in each section throughout the arena in addition to tracking the wait times for different restaurants and restrooms. ACED collects and stores this information in a database, which could be used to present fans with up-to-date arena information in a live dashboard to assist them in their in-game decision making. This would improve the overall fan experience, which could encourage fans to buy tickets more frequently. We provide the San Antonio Spurs with a completed implementation of ACED, which is ready to be deployed within the Spurs’ arena.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultraviolet-C Powered Air Purifying Respirator (UVC PAPR)</title>
<link href="https://hdl.handle.net/1721.1/162982" rel="alternate"/>
<author>
<name>Seeyave, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/162982</id>
<updated>2025-10-07T04:13:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ultraviolet-C Powered Air Purifying Respirator (UVC PAPR)
Seeyave, Evan
The global challenge posed by pandemics, notably COVID-19, has underscored the critical need for advanced personal protective equipment (PPE). This thesis details the development and evaluation of a multi-stage powered air-purifying respirator (PAPR) incorporating direct ultraviolet-C (UVC) germicidal irradiation. The proposed PAPR aims to provide enhanced protection by actively sterilizing air through this UVC chamber immediately prior to inhalation. This approach offers an advantage over traditional filter-based PAPRs by removing both the need to replace filters and the need to pull air through them with high-power motors, while still neutralizing a broad spectrum of airborne pathogens, including viruses and bacteria. The primary objective of this research is to design, construct, and test a PAPR prototype capable of achieving a high inactivation rate (target 99.9%), thereby offering a robust solution for individuals in high-exposure environments. In addition to the UVC chamber, we also built an alternate ultraviolet-A (UVA) activated titanium dioxide (TiO2) photocatalytic oxidation (PCO) chamber. This work encompasses the overall design of the system, safety considerations, and testing to quantify its pathogen inactivation efficacy and to characterize system performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music</title>
<link href="https://hdl.handle.net/1721.1/162981" rel="alternate"/>
<author>
<name>Shi, Iris</name>
</author>
<id>https://hdl.handle.net/1721.1/162981</id>
<updated>2025-12-15T15:52:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music
Shi, Iris
Beatgridding is a technique meant to aid DJs in aligning the beats of two different songs. By overlaying a grid of beat markers (a “beatgrid”) on top of a waveform representation of the track being beatgridded, a song’s beats can be visualized and thus easily matched to another’s. State-of-the-art DJ software—like rekordbox by the company AlphaTheta—will algorithmically generate beatgrids for songs. However, these beatgrids are not always accurate and can often be difficult to correct with only the software-provided tools. GridFix is a desktop application designed to be an auxiliary tool for rekordbox, allowing users to correct rekordbox-generated beatgrids by providing additional functionality that rekordbox does not. GridFix’s main advantage is its ability to let users make local changes to small, isolated sections of a beatgrid, a task that is quite hard to achieve in rekordbox. GridFix is fully compatible with rekordbox and fairly easy to learn how to use, as shown by user testing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph Metrics for Improving Cybersecurity on Software Dependency Networks</title>
<link href="https://hdl.handle.net/1721.1/162980" rel="alternate"/>
<author>
<name>Yao, Darren Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/162980</id>
<updated>2026-01-16T20:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph Metrics for Improving Cybersecurity on Software Dependency Networks
Yao, Darren Z.
Modern software ecosystems are deeply interconnected, allowing a vulnerability in a single component to propagate and affect many others. In this thesis, we model software ecosystems as directed graphs, and apply various graph-theoretic metrics to quantify security risk. We compare two deep learning frameworks (PyTorch and TensorFlow) with two traditional software frameworks (npm and PyPI), identifying critical properties of their dependency structures, which motivates several recommendations for improving software supply chain security.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning Robotic Cutting Operations</title>
<link href="https://hdl.handle.net/1721.1/162979" rel="alternate"/>
<author>
<name>Lunawat, Tarang</name>
</author>
<id>https://hdl.handle.net/1721.1/162979</id>
<updated>2025-10-07T04:12:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Planning Robotic Cutting Operations
Lunawat, Tarang
Classical planning and most PDDL variants operate on the assumption that the number and types of objects present in the environment are known at the time of initialization and neither can nor do change during plan execution. However, there are many domains in which it is helpful and necessary to be able to capture action (or environment) effects that are able to change the existence of objects rather than just facts about these objects. PDDLStream already provides a framework for "certifying" new facts about the environment as necessary throughout plan execution; I propose using PDDLStream to construct a principled way to reason over not just added facts, but also added or removed objects in the environment. In order to do this, I will work within the domain of cutting operations in the kitchen, as this is a domain that both necessitates a lot of object change as objects are cut and often requires chains of these generated objects to be fully reasoned over. Additionally, I will lay the groundwork to use this principled way to reason over new objects to implement different types of cutting operations in the kitchen, with the eventual goal of a robot planner being able to sequence different provided actions to more efficiently work with knives in the kitchen in a human-like manner.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Wavefront Estimation Algorithms for High-Contrast Imaging of Exoplanets</title>
<link href="https://hdl.handle.net/1721.1/162978" rel="alternate"/>
<author>
<name>Manojkumar, Saikrishna</name>
</author>
<id>https://hdl.handle.net/1721.1/162978</id>
<updated>2025-10-07T04:15:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Wavefront Estimation Algorithms for High-Contrast Imaging of Exoplanets
Manojkumar, Saikrishna
The direct imaging of exoplanets orbiting stars outside our solar system remains one of the crucial tools we have available to answer whether there exists life beyond Earth. The light from an Earth-like exoplanet is approximately ten orders of magnitude dimmer than its host star and hence the imaging system of the telescope observing the exoplanet must be able to suppress the starlight to achieve a “contrast” of 10−10 in the image. This is typically achieved using a coronagraph, which blocks the light from the star while allowing the light from the planet to pass through. However, some starlight that leaks through the coronagraph needs to be further removed in the search region for the exoplanet; this region is referred to as the dark hole or dark zone (DZ). Creating a DZ requires the use of focal plane wavefront sensing and control techniques, which estimate the electric field of the starlight in the focal plane of the telescope using a camera and then inform the deformable mirrors (DMs) located upstream of the coronagraph to null these electric fields. Once the DZ is created with a desired contrast, there are still slow, high-order drifts in the optical system that cause the contrast to degrade over the long observation times of the science target. High-order wavefront sensing and control (HOWFSC) techniques are required to maintain the contrast in the DZ while observing a science target. Dark Zone Maintenance (DZM) is a technique that has demonstrated the ability to maintain the contrast in the DZ over long observation times. This algorithm utilizes an Extended Kalman Filter (EKF) to estimate the open-loop electric field at every pixel in the DZ and uses this information to inform the control algorithm. 
The achievable contrast and contrast stability of DZM are determined by several key parameters: the optical system’s drift rate, the photon flux and associated shot noise in the measurement images, and the probe magnitude applied to the DMs for the estimation algorithm. This work quantifies the impact of the drift rate, photon rate, and probe magnitude on the performance of DZM by performing a parameter scan on high-contrast imaging testbeds. The parameter scan was performed on both the in-air High-contrast imager for Complex Aperture Telescopes (HiCAT) testbed at the Space Telescope Science Institute (STScI) and the in-vacuum Decadal Survey Testbed (DST) at the Jet Propulsion Laboratory (JPL). The parameter scan was run both in simulation and on the physical testbed using the contrast in the DZ as a performance metric, and evaluated relative to the photon-noise theoretical bounds to assess the efficacy of the DZM algorithm. The substantial difference between the theoretical bounds and experimental results, on average 70 times worse on HiCAT, motivated the development and implementation of a new DZM algorithm that utilized a separate EKF to estimate the modes of wavefront error derived from the DMs and used that information to correct for the aberrations. This new modal EKF algorithm was tested with a similar parameter scan on the HiCAT simulator, demonstrating nearly a fivefold improvement over the original DZM algorithm's simulated performance. The results of this work will inform the design of future algorithms to maintain high contrast during observations for upcoming space telescope missions such as the Habitable Worlds Observatory (HWO).
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incentivizing Data Contributions in Decentralized Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/162977" rel="alternate"/>
<author>
<name>Wang, Yuxiao</name>
</author>
<id>https://hdl.handle.net/1721.1/162977</id>
<updated>2025-12-15T15:42:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incentivizing Data Contributions in Decentralized Collaborative Learning
Wang, Yuxiao
In a collaborative learning scheme such as the federated learning model, each user benefits from the data contribution of others. Previous work shows that the federated learning protocol can incentivize users to contribute more than in the competitive equilibrium by penalizing deviations. However, a central controller with access to all the data may raise privacy concerns. In this work, we construct a decentralized collaborative protocol in which users share data without relying on a centralized controller. We then extend this protocol to a repeated game and analyze the competitive equilibrium behavior, along with strategies users can implement to foster collaboration in the repeated setting of the protocol. We provide a quantitative analysis of free-rider behavior under decentralized protocols and compare the amount of information collected with decentralized protocols against that in the centralized protocol.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DisViz: Visualizing real-world distributed system logs with space time diagrams</title>
<link href="https://hdl.handle.net/1721.1/162976" rel="alternate"/>
<author>
<name>McMenamy, Josiah</name>
</author>
<id>https://hdl.handle.net/1721.1/162976</id>
<updated>2025-10-07T04:14:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DisViz: Visualizing real-world distributed system logs with space time diagrams
McMenamy, Josiah
This thesis aims to provide an intuitive debugging and learning tool for distributed systems that communicate by message passing. Understanding and debugging distributed systems can be challenging and slow to iterate on, so there is a need for tools that can speed up the time it takes to diagnose the root cause of a bug. There exists significant prior work in creating tools that can aid in the visualization and debugging of distributed system executions, such as the ShiViz log visualizer [13]. This work builds on top of these tools to provide more debugging information, handle large log files, and be easily instrumented in existing systems. We demonstrate using the tool to debug issues in an implementation of the Raft consensus algorithm [34].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring</title>
<link href="https://hdl.handle.net/1721.1/162975" rel="alternate"/>
<author>
<name>Nori, Divya</name>
</author>
<id>https://hdl.handle.net/1721.1/162975</id>
<updated>2025-12-15T17:19:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring
Nori, Divya
Protein binder design has been transformed by hallucination-based methods that optimize structure prediction confidence metrics, such as the interface predicted TM-score (ipTM), via backpropagation. However, these metrics are imperfect proxies for binding affinity and do not reflect the statistical likelihood of a binder–target complex under the learned distribution. In this work, we propose a principled alternative: an energy-based framework that directly extracts the statistical likelihood of a predicted binder–target complex from a structure predictor’s internal confidence distributions. Building on the Joint Energy-based Modeling (JEM) framework, we introduce pTMEnergy, a statistical energy function over structures that is derived from predicted inter-residue error distributions. We incorporate pTMEnergy into BindEnergyCraft (BECraft), a hallucination-based binder design pipeline that maintains the same optimization framework as BindCraft but replaces ipTM with our energy-based objective. Across a diverse panel of challenging protein targets, BECraft achieves higher in silico success rates compared to BindCraft, RFDiffusion, and ESM3. Beyond design, we evaluate pTMEnergy as an unsupervised scoring function for retrospective virtual screening tasks. Without any task-specific supervision or retraining, pTMEnergy consistently outperforms baseline methods across both protein–protein and protein–RNA interaction benchmarks. Our results demonstrate that confidence-derived energy functions offer a powerful and generalizable signal for binder design and scoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays</title>
<link href="https://hdl.handle.net/1721.1/162974" rel="alternate"/>
<author>
<name>Ouko, Edwin O.</name>
</author>
<id>https://hdl.handle.net/1721.1/162974</id>
<updated>2025-10-07T04:14:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays
Ouko, Edwin O.
Geothermal well arrays, which organize multiple geothermal wells into carefully planned geometric configurations, provide an opportunity to enhance energy production capacity and increase fault tolerance of geothermal systems. Closed-loop geothermal systems (CLGS), a type of geothermal well design, promise to allow harnessing of geothermal energy in any location with minimal adverse environmental impact. I demonstrate how the development of these emerging geothermal technologies could be accelerated by recent advances in large language models (LLMs) in conjunction with high-level high-performance programming languages like Julia. In particular, I focus on how LLMs could be used in design brainstorming and to increase efficiency in numerical modeling. I assess the potential of state-of-the-art LLMs such as ChatGPT, Gemini, Claude, Grok, and a domain-specific model, AskGDR, as expert assistants in geothermal research. Owing to the unpredictable reliability of LLMs, there is a constant need for objective evaluation benchmarks in various domains. I propose a novel approach, leveraging Google’s recently introduced AI tool, NotebookLM, to accelerate the generation of quantitative geothermal benchmarks using only new, unpublished questions. In addition, I propose the use of blackbox optimization as a computationally less costly alternative to approximate the optimal configuration of CLGS wells in a geothermal array to minimize thermal interference and improve heat energy production. I evaluate several optimization strategies such as Bayesian optimization, particle swarm optimization, natural evolution strategies, differential evolution optimization, Nelder-Mead, and simulated annealing on various performance characteristics such as convergence speed and highest production capacity attained.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Transformer-Based Foundation Model for Human Microbiome Analysis</title>
<link href="https://hdl.handle.net/1721.1/162973" rel="alternate"/>
<author>
<name>Medearis, Nicholas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162973</id>
<updated>2025-10-07T04:14:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Transformer-Based Foundation Model for Human Microbiome Analysis
Medearis, Nicholas A.
The human microbiome plays a crucial role in maintaining our health. Alterations in the microbiome have been linked to various chronic conditions like autoimmune disorders, metabolic diseases, and cancer. While various tools have been developed to study the microbiome, each tool tends to be specialized for a specific task. To overcome this limitation, we report on the development of a foundation model pretrained on 13,524 human microbiome metagenomic samples. The model was then fine-tuned to predict the clinical status of the host. Our model was able to differentiate between healthy and diseased samples in 10-fold cross-validation on the training dataset with an accuracy of 83.7%. On an external validation dataset of 927 samples, our model had an accuracy of 74.9%. Notably, our model performed even better at differentiating diseases from one another. On the diseased samples in the training dataset, it classified samples with an accuracy of 93.3% in 10-fold cross-validation. Together, our results show that generative AI has the potential to transform microbiome research and advance personalized medicine.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards an Augmented Reality-based Cyber-Physical Production System Planner</title>
<link href="https://hdl.handle.net/1721.1/162972" rel="alternate"/>
<author>
<name>Mueller, David</name>
</author>
<id>https://hdl.handle.net/1721.1/162972</id>
<updated>2025-10-07T04:13:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards an Augmented Reality-based Cyber-Physical Production System Planner
Mueller, David
Investment in automation by small and medium-sized enterprise (SME) manufacturers in the United States has lagged behind that of their larger counterparts for decades, despite SMEs comprising a majority of the nation’s manufacturing industry. The cyber-physical production systems (CPPSs) introduced by Industry 4.0 promise to bolster productivity and efficiency, but only for those enterprises which invest in constituent technologies. These technologies are not easily integrated into existing factories, typically requiring installation of invasive infrastructure and continuous technical support. Robotic integration is typically performed by specialized third-party firms or by in-house staff with extensive technical training, such as engineers. SME manufacturers are particularly sensitive to the complexities of robot integration due to limited access to technologists, and their need for frequent reconfiguration under economies of scope. This thesis introduces Marve: the Mobile Augmented Reality Visual Editor. Marve is a proof-of-concept Android application that enables line workers to directly configure and control an autonomous mobile robot (AMR)-backed hybrid intralogistics system using low-cost consumer hardware. Workers can use Marve’s augmented reality (AR)-based interface to define and visualize the essential geometry and components of such a system. Once configured, workers are able to simulate how the system would respond to their requests to move material throughout the factory. The use of AR enables extensive work to be done at the planning stage of CPPS integration by line workers themselves, bypassing the need for modeling by engineers. Marve relies exclusively on fiducials and visual-inertial odometry (VIO) for localization, and fiducial tags for object tracking, thus eliminating the need for supporting infrastructure. Taken together, these features make Marve an easy on-ramp for SMEs seeking to transition legacy production lines into the CPPSs of Industry 4.0.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling</title>
<link href="https://hdl.handle.net/1721.1/162971" rel="alternate"/>
<author>
<name>Liu, Katie</name>
</author>
<id>https://hdl.handle.net/1721.1/162971</id>
<updated>2026-01-16T19:18:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling
Liu, Katie
Machine learning inference in multi-tenant cloud environments poses significant challenges for minimizing latency and resource contention, especially as models grow in size and complexity. This thesis addresses the cold start overhead and scheduling inefficiencies of multi-tenant ML serving by integrating the RayServe distributed model-serving framework into σOS, a cloud operating system that unifies container and serverless paradigms. The thesis also proposes two model-aware schedulers within σOS that intelligently route inference requests to reduce the number of cold starts: Model Colocation, which prioritizes placing requests on machines where the required model is already loaded, and Centralized Model Registry, which tracks globally available models to inform scheduling decisions. These policies proactively reduce model load times by reusing cached models. Experimental results on language translation workloads in an 8-node cluster show that these schedulers achieve a ≈ 50% reduction in average inference latency and eliminate roughly 4–5 cold starts per workload, compared to σOS’s default scheduler. Through this model-aware approach to scheduling, our work enables more efficient, scalable, and low-latency ML inference serving in multi-tenant cloud settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of efficiency-driven aircraft technology improvements on climate and air quality</title>
<link href="https://hdl.handle.net/1721.1/162970" rel="alternate"/>
<author>
<name>Shukla, Aditeya</name>
</author>
<id>https://hdl.handle.net/1721.1/162970</id>
<updated>2025-10-07T04:14:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact of efficiency-driven aircraft technology improvements on climate and air quality
Shukla, Aditeya
The impacts of commercial aviation on global climate and air quality have led to an industry-wide movement to reduce its environmental footprint. While technological developments in aircraft propulsion, materials, and aerodynamics aim to reduce fuel consumption and CO₂ emissions, these efforts often overlook the full climate and air quality impacts of aviation, especially the impacts of NOₓ, CO, HC, and soot emissions and of contrails. This study assesses the environmental constraints associated with fuel-efficiency-driven advancements by modeling aircraft technologies across narrow-body, wide-body, and regional jet categories. By focusing on near-future technology insertions in materials, aerodynamics, and propulsion, we compute quantifiable environmental metrics such as temperature changes, global warming potentials, and monetized environmental damages. Our modeling shows that certain propulsion technologies — such as increased component polytropic efficiencies or higher allowable turbine-metal temperatures — can reduce fuel consumption by more than 10% under favorable re-optimizations of engine design. However, they often raise engine core pressures or temperatures in ways that increase NOₓ emissions indices by more than 30%. This can worsen air quality damages, offsetting some of the CO₂ savings and, in some cases, resulting in a 2% increase in environmental damages on a total net present value (NPV) basis. Primary structure material upgrades consistently reduce both fuel burn and NOₓ emissions. These improvements in air quality from reduced NOₓ result in a 10% reduction of the total NPV from environmental impacts. This analysis shows that fuel efficiency alone is an incomplete metric for understanding the environmental impact of an aircraft.
By offering a quantitative assessment of how near-future upgrades can affect both climate and air quality, this study also provides guidance on which technology paths are most effective in reducing the overall environmental impact of aviation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization</title>
<link href="https://hdl.handle.net/1721.1/162968" rel="alternate"/>
<author>
<name>Xu, Jessica J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162968</id>
<updated>2025-10-07T04:14:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization
Xu, Jessica J.
Neurodegenerative diseases, such as Alzheimer’s, impact many people worldwide and currently have no cure, making early detection essential for effective symptom management and intervention. Traditional diagnostic practices often rely on subjective clinical evaluations that can vary between practitioners, highlighting the need for more objective methods. The digital Symbol Digit Test (dSDT), administered via the Cognitive Health App on an iPad and using the ETVision Eye Tracking System, aims to provide an automated, reliable method for analyzing patient cognitive function and detecting early signs of impairment by capturing handwriting and gaze data. This thesis builds upon previous work by automating the synchronization of these two data modalities, refining definitions of learning behaviors, and developing pipelines for data processing and visualization. By creating a synchronized multimodal dataset, we can visualize participant behavior for more intuitive interpretation and draw meaningful conclusions. These contributions provide an end-to-end framework for analyzing behavior during the cognitive assessment and lay the groundwork for future development of diagnostic models to detect early signs of neurodegenerative diseases.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Location Verification for Spoofing Detection in Non-Terrestrial Networks</title>
<link href="https://hdl.handle.net/1721.1/162967" rel="alternate"/>
<author>
<name>Schatz, Ensign Nathan Caleb</name>
</author>
<id>https://hdl.handle.net/1721.1/162967</id>
<updated>2025-10-07T04:14:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Location Verification for Spoofing Detection in Non-Terrestrial Networks
Schatz, Ensign Nathan Caleb
Reliable location awareness is essential for the development of new services and applications in non-terrestrial networks (NTN). The ability of malicious users to report false location information poses a significant threat to NTN performance. This threat introduces the need for a flexible and robust location verification system (LVS) that can reliably detect malicious users. This paper proposes a single-satellite LVS based on round-trip time and angle-of-arrival measurements. We characterize several sources of uncertainty unique to the NTN scenario and examine their combined effect on positioning error. To detect spoofing probabilistically, we approximate the likelihood function for the unknown user position using a Gaussian mixture model and employ a likelihood ratio decision rule for location verification. We present receiver operating characteristic curves that evaluate LVS performance under various satellite ephemeris error conditions, spoofing distances, numbers of measurements available to the system, and wireless channel properties. The proposed LVS is shown to reliably detect spoofing by malicious users.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Triangle Splatting</title>
<link href="https://hdl.handle.net/1721.1/162966" rel="alternate"/>
<author>
<name>Xu, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162966</id>
<updated>2025-10-07T04:14:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Triangle Splatting
Xu, Daniel
We develop a differentiable rendering method for recovering 3D meshes of scenes from 2D images. Unlike existing approaches, our method does not rely on a differentiable renderer and is compatible with any standard mesh rasterizer. To our knowledge, it is the first mesh-based differentiable rendering method that entirely avoids the use of visibility masks. Beyond these conceptual advancements, we implemented a set of highly optimized kernels that enable efficient scene representation on a sparse voxel grid, effectively overcoming the cubic scaling bottleneck faced by similar methods. These innovations result in promising performance on unbounded real-world scenes with complex backgrounds.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing</title>
<link href="https://hdl.handle.net/1721.1/162965" rel="alternate"/>
<author>
<name>Ortiz, Ciarra Celena</name>
</author>
<id>https://hdl.handle.net/1721.1/162965</id>
<updated>2026-01-16T19:55:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing
Ortiz, Ciarra Celena
Entering a microgravity environment induces cephalad fluid shifts that can lead to cardiovascular and renal-hormonal adaptations that can affect astronaut health and performance in space. Current monitoring strategies cannot track regional fluid shift in real time, which limits countermeasure efficacy. This thesis investigates and validates prototype non-invasive radiofrequency (RF) sensors for regional fluid shift detection. Additionally, integrating feedback from these sensors into Lower Body Negative Pressure (LBNP) chambers could allow for the development of an adaptive LBNP regulation framework. Coaxial RF sensors were designed, characterized using tissue phantoms, and tested in a human subject study involving controlled LBNP exposure. Reflection coefficients (S₁₁ and S₂₂) were analyzed to detect regional fluid changes in arm and leg tissue. The preliminary results indicated a statistically significant decrease in the arm reflection coefficients (S₁₁) during active LBNP, consistent with fluid being pulled towards the lower body. The leg reflection coefficients (S₂₂) were more variable and did not exhibit statistically significant results, suggesting a need for further investigation into sensor placement and sensitivity. This work demonstrates the potential of wearable RF sensors for non-invasive fluid shift monitoring and lays the foundation for integrating fluid sensor feedback into adaptive LBNP control protocols to improve astronaut health monitoring and countermeasure personalization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Inference via Optimal Transport Ambiguity Sets</title>
<link href="https://hdl.handle.net/1721.1/162964" rel="alternate"/>
<author>
<name>Wang, Zheyu</name>
</author>
<id>https://hdl.handle.net/1721.1/162964</id>
<updated>2025-10-07T04:14:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Robust Inference via Optimal Transport Ambiguity Sets
Wang, Zheyu
Uncertainty quantification is pivotal for ensuring the safety and reliability of predictive algorithms in high-stakes applications—ranging from cancer diagnosis to autonomous driving. This challenge is exacerbated by distribution shift, in which the true data-generating distribution diverges from the nominal distribution on which our statistical methods were trained. In this thesis, we formalize distribution shifts via ambiguity sets—metric neighborhoods in the space of probability measures defined by distances such as the Wasserstein metric—and demonstrate that leveraging these ambiguity sets endows two widely used statistical algorithms with distributional robustness. The Kalman filter enables accurate, real-time tracking of latent states by assimilating noisy, indirect measurements over time. Its performance relies on precise state-space models for both the evolution dynamics and the observation process. In practice, uncertainties in these models introduce errors that can significantly degrade filter accuracy. Here, we review two robust Kalman-filter variants that explicitly account for such errors via Wasserstein ambiguity sets. Split conformal prediction, hereafter referred to as conformal prediction, offers a powerful framework for quantifying predictive uncertainty by constructing prediction intervals with finite-sample, distribution-free guarantees. Despite its widespread success, ensuring its validity under train-test distribution shifts remains a significant challenge. We model distribution shifts using ambiguity sets defined by two optimal transport-based metrics and propose two robust conformal prediction algorithms that preserve validity under these shifts. First, we consider ambiguity sets defined by a pseudo-divergence derived from the Lévy-Prokhorov (LP) metric, which captures both local and global data perturbations.
We provide a self-contained overview of LP ambiguity sets and their connections to widely used metrics such as the Wasserstein and Total Variation distances. We then establish a natural link between conformal prediction and LP ambiguity sets: by propagating the LP ambiguity set through the scoring function, we reduce complex high-dimensional distribution shifts to manageable one-dimensional shifts, enabling exact computation of the worst-case quantile and coverage. Building on this foundation, we develop valid robust conformal prediction intervals under distribution shifts, explicitly relating LP parameters to interval width and confidence levels. Experimental results on real-world datasets demonstrate the effectiveness of the proposed approach. Next, we extend our analysis to robust conformal prediction over Wasserstein-2 ambiguity sets, deriving a theoretical characterization of the worst-case quantile. However, we identify intractability due to the dependence on the shape of the original score CDF and conclude with potential future directions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theoretical Limits of Quantum Ranging</title>
<link href="https://hdl.handle.net/1721.1/162963" rel="alternate"/>
<author>
<name>Kartal, Bünyamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162963</id>
<updated>2025-10-07T04:14:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Theoretical Limits of Quantum Ranging
Kartal, Bünyamin
The ability to determine distances from dedicated measurements, namely active ranging, is crucial in a variety of systems including localization, radar, and lidar. This thesis establishes the quantum limits and determines the quantum advantage provided by single-beam displaced squeezed states in active ranging. Analytical expressions of the quantum Fisher information (QFI) are provided for monochromatic and continuous-mode waves passing through a thermal loss channel with arbitrary loss and noise conditions. The optimal allocation of system resources for performing displacement and squeezing operations is determined. The optimal allocation consists of apportioning all resources to perform either the displacement operation, providing no quantum advantage, or the squeezing operation. Analytical results are examined in optical and microwave regimes. The optimal gain, i.e., the ratio between the QFI obtained by optimal resource allocation and the QFI obtained by performing only the displacement operation, is derived for the optical and microwave regimes. Quantum advantage afforded by the prototypical heterodyne receiver is also investigated. The results of this thesis pave the way for establishing a foundation of active ranging and provide insights for system design employing currently available quantum technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalized Policy Learning with Planning</title>
<link href="https://hdl.handle.net/1721.1/162962" rel="alternate"/>
<author>
<name>Yang, Ryan P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162962</id>
<updated>2025-10-07T04:14:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generalized Policy Learning with Planning
Yang, Ryan P.
Generalized policy learning seeks to find policies that solve multiple tasks within a planning domain. We introduce methods to search for policies independently within a domain, starting from empty initial policies. As an extension, we also propose a problem setting for learning satisficing policies across domains. Within an independent domain, we propose a score function to guide the policy search. Our approach, Policy-Guided Planning for Generalized Policy Generation (PG3), evaluates policies based on how well they can be used to plan. Empirically, we show that PG3 enables generalized policy learning to occur more efficiently than other baselines on PDDL-based problems with policies represented as lifted decision lists. Finally, our experiments show that independently learned policies are qualitatively similar, prompting further investigation into the possibility of accelerating the policy search process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-Learning Exploration Strategies with Decision Transformers</title>
<link href="https://hdl.handle.net/1721.1/162961" rel="alternate"/>
<author>
<name>Welch, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162961</id>
<updated>2025-10-07T04:14:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Meta-Learning Exploration Strategies with Decision Transformers
Welch, Ryan
The problem of pure exploration in sequential decision-making is to identify strategies for efficiently gathering information to uncover hidden properties of an environment. This challenge arises in many practical domains, including clinical diagnostics, recommender systems, and educational testing, where data collection is costly and the effectiveness of exploration is critical. Efficient exploration in these contexts strongly depends on exploiting underlying structural relationships within the environment. For instance, recognizing that multiple medical tests may provide overlapping information can reduce the number of tests required to make a diagnosis. Existing exploration approaches drawn from reinforcement learning and active hypothesis testing typically rely on heuristic strategies that require explicit prior assumptions about such structural information. However, when this information is unknown, heuristic methods often lead to redundant exploration, significantly limiting their practical utility in high-stakes domains. Furthermore, these existing approaches do not leverage past experience to improve their exploration efficiency over time. To overcome these limitations, we introduce In-Context Pure Exploration (ICPE), a novel meta-learning framework capable of autonomously discovering and exploiting latent environmental structures across related tasks to guide efficient exploration. ICPE leverages the in-context learning and sequence-modeling capabilities of transformers, combined with supervised learning and deep reinforcement learning techniques to learn exploration strategies directly from experience. Through extensive experiments on synthetic and semi-synthetic exploration tasks, we demonstrate that ICPE is able to efficiently explore in deterministic, stochastic and highly structured environments without relying on any explicit inductive biases. 
Our results highlight the potential of ICPE to enable more practical exploration strategies suitable for real-world decision-making contexts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments</title>
<link href="https://hdl.handle.net/1721.1/162960" rel="alternate"/>
<author>
<name>Thirumalai, Vittal</name>
</author>
<id>https://hdl.handle.net/1721.1/162960</id>
<updated>2025-10-07T04:13:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments
Thirumalai, Vittal
Autonomous agents operating in real-world environments must make decisions under uncertainty, facing challenges such as partial observability, sparse rewards, and long-horizon planning. While reinforcement learning (RL) enables agents to learn from experience, standard policies often struggle to generalize in the presence of ambiguous tasks or incomplete information. Large language models (LLMs) can provide valuable semantic guidance, but their high computational cost and latency make constant querying impractical. This thesis introduces WhatWhen2Ask, a framework for cost-aware, confidence-driven querying of external multimodal large language models (MLLMs). The agent employs a Deep Q-Network (DQN) as its internal action planner, selectively querying open- and closed-source models (BLIP-2 and GPT-4o) in a hierarchical manner when its confidence is low and external guidance is likely to improve performance. Accepted hints are embedded and fused with structured state representations, supported by tailored reward shaping for improved learning in sparse environments. Evaluated in the HomeGrid environment, WhatWhen2Ask improves the success rate from 38% (DQN-only) to 54%, while querying in fewer than 6% of steps. Ablation studies show that semantic hints, confidence-based querying, selective hint filtering, and hierarchical fallback each contribute meaningfully to performance. These results suggest that principled, confidence-aware LLM querying can enhance decision-making in uncertain environments, offering a step toward more efficient and cost-aware language-augmented agents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach</title>
<link href="https://hdl.handle.net/1721.1/162958" rel="alternate"/>
<author>
<name>Liu, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/162958</id>
<updated>2025-10-07T04:13:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach
Liu, Katherine
With the high volume of activity flowing through financial institutions, detecting potential errors remains a critical challenge. This paper addresses two key areas where errors may occur: business name registrations and transactions within valid accounts. Traditional string-matching methods struggle to accurately identify incorrectly written business names that closely resemble existing ones, while existing error detection models for transaction data often suffer from class imbalance, leading to reduced performance on minority incorrect transaction cases. To address these issues, this paper proposes two novel approaches. First, a hybrid method integrating multi-agent Large Language Models (LLMs) with existing string-matching techniques enhances the detection of incorrect business names by capturing subtle variations beyond conventional edit-distance metrics, improving recall from 0.815 for the baseline model to 0.987 for the proposed method. Second, an improved tabular data generation method for credit card transactions is introduced, leveraging LLMs and class balancing to generate high-quality synthetic data. Using this data to train error detection systems reduces the false negative rate from 23.47% to 12.84%. Together, these methods enhance the performance of error detection systems, enabling financial institutions to improve the experiences of their clients.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Switching State Space Modeling via Constrained Inference for Clinical Outcome Prediction</title>
<link href="https://hdl.handle.net/1721.1/162957" rel="alternate"/>
<author>
<name>Su, Arnold C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162957</id>
<updated>2025-10-07T04:14:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Switching State Space Modeling via Constrained Inference for Clinical Outcome Prediction
Su, Arnold C.
In clinical settings, timely and accurate prediction of adverse patient outcomes can help guide treatment decisions. While deep learning models such as LSTMs have demonstrated strong predictive performance on multivariate clinical time series, they often lack interpretability. To address this gap, this thesis proposes a framework that combines the predictive strength of neural networks with the interpretability of latent variable models. Specifically, we develop a constrained inference approach to train a switching state space model—an autoregressive hidden Markov model (AR-HMM)—for outcome prediction. Our method leverages knowledge distillation: a high-capacity LSTM "teacher" model is first trained to predict a target clinical outcome of interest, and its predictive behavior is then transferred to an interpretable AR-HMM "student" model through a similarity constraint during inference. We implement a constrained variational inference approach to estimate the parameters of the student model while aligning its latent representations with those of the teacher model. We evaluated our approach on two real-world clinical datasets. Our approach demonstrates predictive performance comparable to state-of-the-art deep learning models, while producing interpretable latent trajectories that reflect clinically meaningful patient states.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Duality, Weight Decay, and Metrized Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/162956" rel="alternate"/>
<author>
<name>Newhouse, Laker</name>
</author>
<id>https://hdl.handle.net/1721.1/162956</id>
<updated>2025-10-07T04:14:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Duality, Weight Decay, and Metrized Deep Learning
Newhouse, Laker
The Muon optimizer has shown convincing evidence that it is faster and more scalable than AdamW for deep learning training, setting speed records for training NanoGPT and scaling up to models with 16B parameters. The theory that led to Muon is called metrized deep learning, a method that suggests assigning norms to each part of a neural network. Chapter 1 begins with an accessible explanation of metrized deep learning, including one of its recurring tools: odd polynomial iterations that act directly on singular values. Chapter 2 reviews duality, a way to modify the gradient that seeks to decrease the loss the most while disturbing the model the least. Pedagogically, duality links four popular optimizers—SGD, Adam, Shampoo, and Muon—under a common framework, steepest descent under a norm. Practically, experiments suggest that duality-based optimizers train faster than AdamW and transfer learning rate across width. Chapter 3 develops tools to enforce weight norm constraints during training, conferring provable and upfront Lipschitz guarantees for transformers. We find that optimizer dynamics matter: switching from AdamW to Muon improves standard weight regularization methods—weight decay and spectral normalization—allowing models to reach equal performance with a lower Lipschitz bound. Leveraging the fact that Muon’s update has a fixed spectral norm, we co-design a weight constraint method called spectral cap that improves the Lipschitz vs. performance tradeoff for MLPs and 2M-parameter transformers. Our 4-Lipschitz transformer on Shakespeare text reaches 60% validation accuracy. Scaling to 145M parameters, our 600-Lipschitz transformer reaches 21% accuracy on internet text. However, to match the NanoGPT baseline validation accuracy of 39.4%, our Lipschitz upper bound increases to 10^274. Nonetheless, our Lipschitz transformers train without stability measures such as layer norm, QK norm, and tanh logit softcapping.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Sequence Uncertainty in Comparative Genomics with a Probabilistic DNA Representation</title>
<link href="https://hdl.handle.net/1721.1/162955" rel="alternate"/>
<author>
<name>Zhao, Sarah Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/162955</id>
<updated>2025-10-07T04:13:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Sequence Uncertainty in Comparative Genomics with a Probabilistic DNA Representation
Zhao, Sarah Ann
Uncertainty in nucleotide sequences is widespread in bioinformatics, arising from somatic mutations, population-level variation, sequencing errors, and ancestral state inference. Yet, standard formats like FASTA encode DNA deterministically using ASCII string characters, omitting this uncertainty and contributing to pervasive reference biases in genomics. Graph pangenomes have recently emerged to address these limitations by representing genetic variation across populations as bidirected graphs. While promising, these approaches are still developing and are not yet fully integrated with widely used linearly-referenced genomic tools and databases. To bridge this gap, I introduce pDNA (probabilistic DNA), a linearly-referenced data structure that encodes nucleotide-level uncertainty in a vector format compatible with traditional genomics workflows. Each position in a pDNA sequence is represented as a 4-dimensional probability vector over the four possible DNA nucleotides, inspired by position weight matrices and one-hot encodings. I also introduce pFASTA, a binary file format for efficient storage of pDNA sequences, along with an open-source software package for generating, manipulating, and analyzing these data. This framework enables uncertainty-aware sequence analysis while maintaining compatibility with existing genomics infrastructure. I apply this framework to ancestral sequence reconstruction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Acquisition of Simulatable Rigid Object Models</title>
<link href="https://hdl.handle.net/1721.1/162954" rel="alternate"/>
<author>
<name>Yang, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/162954</id>
<updated>2025-10-07T04:14:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Online Acquisition of Simulatable Rigid Object Models
Yang, Ethan
How can we build a robot that operates autonomously in a home environment over long periods of time? A key requirement is the ability to perceive and understand its surroundings, including the objects it will interact with. This thesis investigates how a robot can reconstruct previously unknown objects and integrate them into a physics simulation for planning. We explore two methods for reconstructing the 3D geometry of objects and test their performance in simulation and in real-world experiments. Our results demonstrate that a learned depth model enables 3D reconstruction of unknown objects and their successful integration into simulation environments. Additionally, we investigate methods for estimating an object’s inertial parameters, both from its reconstructed mesh and through manipulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling contrastive learning batch size by two orders of magnitude</title>
<link href="https://hdl.handle.net/1721.1/162953" rel="alternate"/>
<author>
<name>Tian, Betsy</name>
</author>
<id>https://hdl.handle.net/1721.1/162953</id>
<updated>2026-01-16T20:14:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling contrastive learning batch size by two orders of magnitude
Tian, Betsy
Contrastive learning has emerged as a powerful framework for unsupervised representation learning, allowing models to learn by maximizing agreement between related samples and distinguishing dissimilar ones. However, contrastive learning frameworks are fundamentally limited by the number of negative pairs a model can observe, and memory-intensive backbones constrain practical batch sizes. We introduce a three-phase, adapter-augmented training framework that scales contrastive batch sizes by two orders of magnitude, surpassing previous state-of-the-art learners in both accuracy and speed. First, we co-train the backbone and adapter on small batches to establish a strong initialization. Next, we freeze the backbone and train the adapter alone with very large batches, exposing it to an enlarged negative pool. Finally, we transfer large-batch adapter gradients back into the backbone via segmented backpropagation. We evaluate our method on the PlacesAudio dataset and show promising results for boosting retrieval performance at each phase. By exposing the model to substantially more negatives per effective batch, we achieve higher accuracy at faster speed than optimizer-stepping baselines. Ultimately, this approach, which scales batch size by hundreds of times, can be integrated into any contrastive learning framework for more robust representation learning and abundant negative sampling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography</title>
<link href="https://hdl.handle.net/1721.1/162952" rel="alternate"/>
<author>
<name>Rubel, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/162952</id>
<updated>2025-10-07T04:14:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography
Rubel, Evan
Early detection of lung cancer significantly improves patient outcomes, and tracking the growth of lung nodules over time is key to understanding their progression and informing future treatment decisions. However, calculating nodule growth in computed tomography (CT) scans remains a highly manual and time-consuming task. In this work, we develop an automated end-to-end pipeline to compute lung nodule growth using state-of-the-art computer vision techniques. While modern advances in deep learning have all but solved many learning tasks in the domain of natural images, biomedical imaging presents unique challenges due to limited data availability, inconsistent annotations, and deployment constraints. We address these challenges by training robust detection and segmentation models using the LUNA16 and LNDb datasets. On the held-out UniToChest dataset, our methods generalize well, attaining a nodule recall of 77.49%, reducing false positives per scan by a factor of 11.3 compared to existing techniques, and achieving a mean nodule-wise Dice score of 0.6453. We then apply our methods to analyze nodule growth in 1,378 patients from the National Lung Screening Trial; we estimate a median nodule volume-doubling time of 791.23 days across all nodules from the patients that do not receive a cancer diagnosis and a median nodule volume-doubling time of 637.38 days across all nodules from the patients that do receive a cancer diagnosis. We also recall 82.20% of radiologist-annotated nodules that are directly associated with a cancer diagnosis and estimate a shorter median nodule volume-doubling time of 370.11 days for these nodules. By automating lung nodule growth quantification, this work lays the foundation for improved screening protocols, personalized treatment planning, and the development of novel imaging biomarkers. To encourage further work in this area, we release our full software pipeline at https://github.com/evanrubel/nodule_volumes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of an 80 GHz Hybrid CMOS Dielectric Resonator Oscillator</title>
<link href="https://hdl.handle.net/1721.1/162951" rel="alternate"/>
<author>
<name>Louie, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/162951</id>
<updated>2025-10-07T04:13:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Analysis of an 80 GHz Hybrid CMOS Dielectric Resonator Oscillator
Louie, Tiffany
This work studies a high-frequency, low-phase-noise, hybrid CMOS oscillator based on a cylindrical dielectric resonator coupled directly to an on-chip structure. Dielectric resonators (DRs) are known for their high quality factor, low cost, and high temperature stability, which make them a desirable frequency-selecting element for millimeter-wave (mmWave) applications. Current dielectric resonator oscillators (DROs) have proven to be phase stable but are limited in frequency (&lt; 40 GHz) by their implementation with discrete components. However, by increasing the operational frequency, it becomes possible to reduce the size of the DR and place it directly on top of a CMOS chip. We demonstrate, in a 22nm FD-SOI process, the design of an 80 GHz DRO with an area of 4 mm² and an oscillator power consumption of 1.95 mW. The DRO achieves a simulated phase noise of -128 dBc/Hz at 1 MHz offset and -148 dBc/Hz at 10 MHz offset.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LEO: an LLM-Powered EDA Overview</title>
<link href="https://hdl.handle.net/1721.1/162950" rel="alternate"/>
<author>
<name>Zheng, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/162950</id>
<updated>2025-10-07T04:13:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">LEO: an LLM-Powered EDA Overview
Zheng, Sophia
Computational notebooks impose a linear structure that impedes data analysts’ sensemaking process with overwritten cells, dead-end code, and fragmented logic. This challenge is especially pronounced when analysts either encounter a notebook authored by someone else or revisit a self-authored notebook after significant time has passed. In both cases, understanding the analysis code becomes convoluted and laborious. To address these barriers, we introduce LEO, a computational notebook tool that operationalizes notebook summarization by leveraging large language models to (1) cluster analysis patterns and (2) trace variable use. LEO organizes code into a two-level hierarchy—General Level Sections and Code Level Actions—integrated with in-line textual summaries filtered at the variable level, further supporting task-driven exploration. We evaluate the system’s effectiveness in a user study with five computational notebook users across two realistic use cases. Participants reported that LEO streamlined code comprehension and navigation of undocumented notebooks by allowing them to query variables and traverse code cells with greater ease.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Articulated 3D Scene Graphs from Egocentric Vision</title>
<link href="https://hdl.handle.net/1721.1/162949" rel="alternate"/>
<author>
<name>Yu, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/162949</id>
<updated>2025-10-07T04:13:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Articulated 3D Scene Graphs from Egocentric Vision
Yu, Alan
Robotic mapping systems typically build metric-semantic scene representations from the robot’s own sensors and cameras. However, these “first person” maps inherit the robot’s own limitations of embodiment and skillset, which may leave many aspects of the environment unexplored. For example, the robot might not be able to open drawers or access wall cabinets. The resulting scene graph is therefore incomplete and requires a more capable robot to fill in the gaps by remapping. We narrow these blind spots in current methods by leveraging egocentric data captured as a human naturally explores a scene wearing Project Aria glasses, giving a way to directly transfer knowledge about articulation from the human to any deployable robot. We demonstrate that, by using simple heuristics, we can leverage egocentric data to recover models of articulated object parts, with quality comparable to those of state-of-the-art methods based on other input modalities. We also show how to integrate these models into 3D scene graph representations, leading to a better understanding of object dynamics and object-container relationships. We finally demonstrate that these articulated 3D scene graphs enhance a robot’s ability to perform mobile manipulation tasks, showcasing an application where a Boston Dynamics Spot is tasked with retrieving concealed target items, given only the 3D scene graph as input.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/162948" rel="alternate"/>
<author>
<name>Strømstad, Filip Traasdahl</name>
</author>
<id>https://hdl.handle.net/1721.1/162948</id>
<updated>2025-10-07T04:13:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles
Strømstad, Filip Traasdahl
Multi-agent systems have seen a significant rise in research interest, enabled by the increasing availability of low-cost autonomous platforms and motivated by a wide range of emerging applications. However, the coordinated deployment of large numbers of autonomous vehicles in marine environments remains a nontrivial and high-risk problem, yet it is often overlooked in the literature. These vehicles are typically deployed from a single location, and their underactuated nature, close proximity, and susceptibility to external disturbances make it difficult to achieve a mission-ready configuration without collisions. In this thesis, we address the problem of transitioning a set of underactuated Autonomous Surface Vehicles (ASVs) from arbitrary and inconvenient initial conditions to a deconflicted set of deployed vehicles. We propose a decentralized and scalable method that calculates and assigns target positions to the vehicles, generates optimal paths that comply with minimum turning radius constraints, and ensures collision avoidance between the vehicles through a shared speed policy. Contributions also include a formal definition and quantification of clustering and declustering in multi-agent systems. The approach is implemented using the MOOS-IvP autonomy framework, and performance is evaluated through simulation with up to 64 vehicles and extensive field trials with eight vehicles. Results demonstrate that our approach reduces the time to decluster for the most challenging initial conditions by 50% compared to the current manual method. By improving efficiency and robustness while eliminating human involvement, this work streamlines ASV fleet deployments, enabling more scalable multi-agent field operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DBOS Advanced Network Analysis Capability for Collaborative Awareness</title>
<link href="https://hdl.handle.net/1721.1/162947" rel="alternate"/>
<author>
<name>Lockton, Sophia E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162947</id>
<updated>2025-10-07T04:13:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DBOS Advanced Network Analysis Capability for Collaborative Awareness
Lockton, Sophia E.
Collaborative cyber defense is an essential strategy for detecting and mitigating cyber threats [1]. As traditional intrusion detection systems struggle against increasingly sophisticated attacks, we propose embedding collaborative cyber defense directly into system infrastructure. This work presents a novel implementation of collaborative awareness within DBOS (a Database-Oriented Operating System), resulting in a platform that significantly accelerates application development while providing built-in security for transactional web services. By treating security as a first-class operating system service, our approach facilitates real-time comprehensive network observation and analysis without the need for external tools. The implementation supports the construction, aggregation, and analysis of traffic matrices using both Python and PostgreSQL-based workflows. These workflows extract and process IP-level metadata from DBOS applications, enabling multi-instance aggregation and analysis of network data. This integration represents the first instance of collaborative network analysis within an operating system runtime, demonstrating that secure-by-default infrastructure is both feasible and performant.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minding the Politeness Gap in Cross-cultural Communication</title>
<link href="https://hdl.handle.net/1721.1/162946" rel="alternate"/>
<author>
<name>Machino, Yuka</name>
</author>
<id>https://hdl.handle.net/1721.1/162946</id>
<updated>2025-10-07T04:13:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minding the Politeness Gap in Cross-cultural Communication
Machino, Yuka
Misunderstandings in cross-cultural communication often arise from subtle differences in interpretation, but it is unclear whether these differences arise from the literal meanings assigned to words or from more general pragmatic factors such as norms around politeness and brevity. In this paper, we report three experiments examining how speakers of British and American English interpret intensifiers like “quite” and “very,” finding support for a combination of semantic and pragmatic factors. To better understand these differences, we developed a computational cognitive model where listeners recursively reason about speakers who balance informativity, politeness, and utterance cost. A series of model comparisons suggest that cross-cultural differences in intensifier interpretation stem from (1) different literal meanings and (2) different weights on utterance cost. These findings challenge accounts based purely on semantic variation or politeness norms, demonstrating that cross-cultural differences in interpretation emerge from an intricate interplay between the two.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Analysis of Neural Architectures and Side Information in Financial Time Series Forecasting</title>
<link href="https://hdl.handle.net/1721.1/162945" rel="alternate"/>
<author>
<name>Senthil, Swathi</name>
</author>
<id>https://hdl.handle.net/1721.1/162945</id>
<updated>2025-10-07T04:12:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Empirical Analysis of Neural Architectures and Side Information in Financial Time Series Forecasting
Senthil, Swathi
This thesis investigates the predictive capabilities of neural networks in financial time series forecasting, focusing on predicting the weekly close price of the SPY index. We explore the integration of options-derived features alongside traditional price data, compare recurrent architectures and transformer-based models, and evaluate multiple training strategies. Our key contributions include: (1) evidence that options-derived input features improve both error metrics and directional accuracy; (2) a comparison study of four training methods (one-step-ahead, direct multi-step, simulation error, and teacher-forcing); (3) the development of a bidirectional GRU-LSTM hybrid model that outperforms standard recurrent networks in multi-step forecasting; and (4) a novel coarse tokenization approach for discretizing continuous financial data, which improves first-week prediction performance when used in transformer models that use an asymmetric attention mechanism. Overall, this thesis illustrates the importance of input design, model architecture, and training methodology in neural financial forecasting. We conclude by outlining directions for future work, including cross-asset generalization and further exploration of tokenization schemes for transformer-based models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating LLM Hallucination in the Banking Domain</title>
<link href="https://hdl.handle.net/1721.1/162944" rel="alternate"/>
<author>
<name>Sert, Deniz Bilge</name>
</author>
<id>https://hdl.handle.net/1721.1/162944</id>
<updated>2025-10-07T04:13:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating LLM Hallucination in the Banking Domain
Sert, Deniz Bilge
Large Language Models (LLMs) offer significant potential in the banking sector, particularly for applications such as fraud detection, credit approval, and enhancing customer experience. However, their tendency to "hallucinate"—generating plausible but inaccurate information—poses a critical challenge. This thesis examines existing strategies for mitigating LLM hallucinations and proposes a novel approach to reduce hallucinations in the context of predicting customer churn using LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Layered Unlearning for Adversarial Relearning</title>
<link href="https://hdl.handle.net/1721.1/162943" rel="alternate"/>
<author>
<name>Qian, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/162943</id>
<updated>2025-10-07T04:13:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Layered Unlearning for Adversarial Relearning
Qian, Timothy
Our goal is to understand how post-training methods, such as fine-tuning, alignment, and unlearning, modify language model behavior and representations. We are particularly interested in the brittle nature of these modifications that makes them easy to bypass through prompt engineering or relearning. Recent results suggest that post-training induces shallow, context-dependent “circuits” that suppress specific response patterns. This could be one explanation for the brittleness of post-training. To test this hypothesis, we design an unlearning algorithm, Layered Unlearning (LU), that creates distinct inhibitory mechanisms for a growing subset of the data. By unlearning the first i folds while retaining the remaining k − i at the i-th of k stages, LU limits the ability of relearning on a subset of data to recover the full dataset. We evaluate LU through a combination of synthetic and large language model (LLM) experiments. We find that LU improves robustness to adversarial relearning for several different unlearning methods. Our results contribute to the state of the art of machine unlearning and provide insight into the effect of post-training updates.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks</title>
<link href="https://hdl.handle.net/1721.1/162942" rel="alternate"/>
<author>
<name>Qian, Janet</name>
</author>
<id>https://hdl.handle.net/1721.1/162942</id>
<updated>2025-10-07T04:13:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks
Qian, Janet
Bayesian optimization (BO) is a powerful framework for optimizing expensive black-box functions, widely used in domains such as materials science, engineering design, and hyperparameter tuning. Traditional BO relies on Gaussian processes (GPs) as surrogate models, but GPs face limitations in flexibility and scalability. Prior-Data Fitted Networks (PFNs) have recently emerged as a promising alternative, leveraging transformer architectures and in-context learning to approximate posterior predictive distributions (PPDs) in a single forward pass. By training on large amounts of data generated synthetically from sampleable function priors, PFNs can learn to rapidly predict PPDs across a wide range of function classes. In this thesis, we investigate the application of PFNs to mixed-variable BO, a particularly challenging setting due to the interplay between continuous and discrete inputs and the combinatorial complexity of the search space. We evaluate how PFNs perform when integrated with a range of mixed-variable BO strategies, including various encoding schemes and discrete-aware acquisition optimization. Additionally, we explore how fine-tuning PFNs on targeted function priors can enhance performance when prior knowledge about the objective is available. Our contributions include empirical evaluations of mixed-variable BO techniques, insights into PFN training, and a suite of mixed-variable benchmark problems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering</title>
<link href="https://hdl.handle.net/1721.1/162941" rel="alternate"/>
<author>
<name>Ravuri, Chaitanya</name>
</author>
<id>https://hdl.handle.net/1721.1/162941</id>
<updated>2025-10-07T04:13:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering
Ravuri, Chaitanya
Modern code-generation LLMs can already solve a large fraction of programming problems, yet they still hallucinate subtle bugs that make their outputs unsafe for autonomous deployment. We present functional clustering, a black-box wrapper that eliminates nearly all hallucination-induced errors while providing a tunable confidence score. The wrapper samples many candidate programs, executes each on a self-generated test suite, and clusters candidates whose I/O behavior is identical; the empirical mass of the largest cluster serves as an exact confidence estimate. A single scalar threshold on this estimate lets users trade coverage for reliability with exponential guarantees. On LiveCodeBench our verifier preserves baseline pass@1 on solvable tasks yet slashes the error rate of returned answers from ∼65% to 2%, and drives it to 0% at a conservative threshold while still answering 15.6% of prompts. Manual audits show that the few residual mistakes stem from prompt misinterpretation, not random generation noise, narrowing future work to specification clarity. Because the method requires only sampling and sandbox execution, it applies unchanged to closed-source APIs and future models, offering a practical path toward dependable, autonomous code generation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Choosing Networks for Ride-Hailing Platforms</title>
<link href="https://hdl.handle.net/1721.1/162940" rel="alternate"/>
<author>
<name>Somsirivattana, Thana</name>
</author>
<id>https://hdl.handle.net/1721.1/162940</id>
<updated>2025-10-07T04:13:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Choosing Networks for Ride-Hailing Platforms
Somsirivattana, Thana
The development of autonomous vehicles is poised to reshape the landscape of transportation. As companies prepare to deploy these vehicles on ride-hailing platforms, a key operational challenge is determining the networks on which to train the vehicles. Our work contributes toward addressing this challenge on three fronts. First, we develop a theoretical model of the network selection problem and prove theoretical results that show the importance of two parameters: the detour factor and the fleet size. Second, we develop several approaches for selecting the networks. Third, we evaluate these approaches on empirical data. We find empirical support for the importance of the detour factor and the fleet size.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PyGridSim: A Functional Interface for Distributed System Simulation</title>
<link href="https://hdl.handle.net/1721.1/162939" rel="alternate"/>
<author>
<name>Zhao, Angela M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162939</id>
<updated>2025-12-11T16:38:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">PyGridSim: A Functional Interface for Distributed System Simulation
Zhao, Angela M.
This thesis details the development of PyGridSim, an open-source Python module that leverages OpenDSS capabilities to provide an efficient and scalable functional interface for building distributed system simulations. Distributed power systems encompass all components that power an electrical system—from larger power plants to microgrids—and represent the network of electric consumption and production in a system. Simulations of such power systems allow experts to analyze potential faults and risks in a fast, reproducible, and cost-efficient way. Thus, the accessibility of such simulations is critical to supporting the safety and reliability of power systems. While existing packages built for distributed system simulation provide the necessary computing power and customizability of a distributed system simulator, their interfaces are hard to scale over many nodes and often have difficult-to-learn syntax. PyGridSim aims to build on these existing modules—maintaining customizability while providing a flexible, intuitive, and scalable syntax structure.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving the Programmability of A Distributed Hardware Accelerator</title>
<link href="https://hdl.handle.net/1721.1/162938" rel="alternate"/>
<author>
<name>Shwatal, Nathan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162938</id>
<updated>2025-10-07T04:13:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving the Programmability of A Distributed Hardware Accelerator
Shwatal, Nathan A.
Sparse iterative matrix algorithms are critical to many scientific and engineering workloads, yet they perform poorly on conventional hardware. Ōmeteōtl, a new hardware accelerator with a distributed-memory and task-based execution model, aims to address these performance bottlenecks. However, programming for Ōmeteōtl is low-level, error-prone, and far removed from the simplicity of typical iterative formulations. This thesis presents Lapis, a domain-specific language and compiler that allows users to express sparse matrix algorithms in high-level Python code and automatically generates efficient C++ code for Ōmeteōtl. Lapis abstracts away data partitioning and task orchestration, reducing implementation complexity: for example, it lowers lines of code by 30× for conjugate gradients and 46× for power iteration. Despite this abstraction, generated code achieves 75.7% to 92.6% of the performance of manually written implementations across several benchmarks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/162937" rel="alternate"/>
<author>
<name>Mao, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/162937</id>
<updated>2025-10-07T04:12:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow
Mao, Grace
This work presents a computational investigation of the influence of geometric configurations within a hypersonic flow field on optical distortion, with a particular focus on the effects of window deformation and the role of thermochemical modeling compared to perfect gas assumptions. Turbulent RANS and conjugate heat transfer were used to model three 3D geometries in US3D, an unstructured-grid finite volume computational fluid dynamics (CFD) solver. The three investigated geometries are a flat plate with a flush-mounted sensor, an open cavity with a length-to-depth ratio of 2, and a closed cavity with a length-to-depth ratio of 16. The data demonstrate that the flat plate configuration has the best optical performance and that the closed cavity has the worst. Additionally, the inclusion of thermochemistry in the flow simulation results in a more pessimistic outlook on image quality compared to the perfect gas model. The results document optical distortion for several different geometries with and without thermochemical modeling within hypersonic flow that can inform future design decisions and research.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BlueVeri: Formal Security Verification for Bluespec&#13;
Processor Designs</title>
<link href="https://hdl.handle.net/1721.1/162936" rel="alternate"/>
<author>
<name>Wang, Shih-Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/162936</id>
<updated>2025-10-07T04:13:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">BlueVeri: Formal Security Verification for Bluespec&#13;
Processor Designs
Wang, Shih-Yu
There are numerous hardware security defense mechanisms designed to mitigate side-channel attacks. However, ensuring that a defense can comprehensively protect against an entire class of attacks, while avoiding the introduction of new vulnerabilities that could lead to additional attack surfaces, remains a significant challenge. Although researchers have attempted to apply formal verification techniques to hardware security, these efforts have been hindered by scalability issues. In this thesis, we introduce BlueVeri, a systematic and automatable approach for formally verifying the security of a Bluespec processor against speculative execution attacks. BlueVeri leverages the high-level information provided by Bluespec’s guarded atomic actions, simplifying and accelerating the verification process. We evaluate BlueVeri on out-of-order processors implemented in Bluespec, demonstrating that our approach substantially enhances verification scalability and is capable of proving the security properties of a minimal out-of-order processor within one hour.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Churn Prediction and Infrastructure Resilience</title>
<link href="https://hdl.handle.net/1721.1/162934" rel="alternate"/>
<author>
<name>Agrawal, Shreeansh</name>
</author>
<id>https://hdl.handle.net/1721.1/162934</id>
<updated>2025-10-07T04:12:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Churn Prediction and Infrastructure Resilience
Agrawal, Shreeansh
This thesis investigates how advanced machine learning methods can effectively address two critical business challenges facing the telecommunications industry: short-term customer churn prediction and long-term infrastructure resilience to climate-driven disruptions.&#13;
&#13;
In the first part of this work, I develop an upgrades-informed churn forecasting model tailored specifically for marketing operations. Recognizing limitations in the existing aggregate forecasting methodologies, I create a cohort-based cascade model that explicitly integrates customer upgrade behavior across various contract tenures. To address data sparsity and longitudinal gaps in newer contract types, I employ synthetic data generation and imputation techniques, such as regression-based methods and Multivariate Imputation by Chained Equations (MICE). For forecasting churn and upgrade rates, I prioritize interpretability by applying linear regression enhanced with time-series forecasting techniques and macroeconomic indicators, including the Consumer Price Index. This approach significantly improves forecasting accuracy, aligns internal stakeholder objectives, and supports strategic decision-making around customer retention and promotional offers.&#13;
&#13;
The second part focuses on building predictive models and strategic frameworks for long-term infrastructure resilience in the face of increasing climate risks. Leveraging spatial-temporal clustering methods (DBSCAN) and advanced neural network architectures, I develop a model to attribute historical outages to extreme weather events. Further, I integrate this model with future climate scenarios from CMIP5 projections using Monte Carlo simulations, providing actionable insights into future infrastructure vulnerabilities. Employing SHapley Additive exPlanations (SHAP), I interpret model predictions, highlighting critical factors such as precipitation, windspeed, and atmospheric pressure. Additionally, I propose frameworks for quantifying financial impacts of future outages and recommend optimization strategies for proactive infrastructure hardening and emergency response.&#13;
&#13;
Collectively, these applications demonstrate the value of strategically employing interpretable and robust machine learning methodologies to enhance short-term operational decisions and long-term strategic planning within telecom organizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Segmentation Based Tracking for Aerial Robot Global&#13;
Localization in Unstructured Environments with Oblique&#13;
Monocular Camera Orientation</title>
<link href="https://hdl.handle.net/1721.1/162933" rel="alternate"/>
<author>
<name>Shafferman, Hannah R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162933</id>
<updated>2025-10-07T04:13:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Segmentation Based Tracking for Aerial Robot Global&#13;
Localization in Unstructured Environments with Oblique&#13;
Monocular Camera Orientation
Shafferman, Hannah R.
In the field of robotics, there has been a growing interest in multi-robot systems and their potential to improve the efficiency, scale, and reliability of tasks beyond what an individual robot can achieve. Global localization is a crucial task for autonomous robot navigation, specifically in the multi-agent scenario where robots need to localize within maps communicated by other agents. The scenario where vehicles are viewing their environments from the same perspective, or camera viewpoint, is well studied. However, when environments are mapped from different camera viewing angles, traditional methods fail to match visual features and thus fail to localize. The technical gap that this thesis addresses arises when autonomous vehicles within a team map the same environment from different viewpoints, specifically nadir and oblique camera orientations in an unstructured environment. Many existing visual place recognition (VPR) methods fail to match visual features that look visually different due to appearance, illumination, or viewpoint changes and thus fail to localize. In this thesis, we demonstrate the shortcomings of previous work to generalize to an off-nadir camera angle and explore the benefits and challenges that arise with utilizing oblique imagery for visual feature detection and tracking. We propose a segmentation-based object tracking pipeline to improve tracking and environment mapping performance in this traditionally challenging scenario. Our approach consists of 1) a front-end auto-segmentation tracking pipeline followed by 2) a submap correspondence search, which exploits geometric consistencies between environment maps to align vehicle reference frames. We evaluate our approach on a challenging indoor, cluttered dataset and demonstrate a maximum precision 74% higher than traditional and learning-based baseline methods, with a map 0.5% the size of that produced by the most memory-conservative traditional baseline method.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/162932" rel="alternate"/>
<author>
<name>Sonandres, Kyle A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162932</id>
<updated>2025-10-07T04:13:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty
Sonandres, Kyle A.
Aerocapture is an orbital insertion maneuver that converts a hyperbolic approach trajectory into a desired captured orbit using the aerodynamic forces generated during a single atmospheric pass. While it offers major benefits, such as reduced interplanetary cruise time and lower propellant mass reserves, it also introduces significant risk due to extreme sensitivity to atmospheric and delivery state uncertainties. This drives the need for robust guidance algorithms and accurate environmental estimation techniques. This thesis presents approaches to address both of these needs, developing solutions to improve aerocapture performance and robustness to uncertainty. The first contribution is the development of ABAMGuid+, a novel aerocapture guidance algorithm that leverages simultaneous control over bank angle and angle of attack. Inspired by optimal control theory, the algorithm uses a four-phase structure to mimic the optimal control laws while maintaining tractability for online use. Optimal control theory is utilized to identify the optimal control solutions, and numerical optimization is used to validate the analytic solutions prior to integration into a guidance algorithm. Extensive simulation results of a Uranus aerocapture scenario, including over 140,000 Monte Carlo trajectories, demonstrate significant improvements in capture success rates and propellant efficiency compared to existing methods. The second contribution addresses environmental uncertainty directly by developing a deep learning-based approach to estimate the atmospheric density profile during flight. A long short-term memory (LSTM) neural network-based architecture is trained to predict atmospheric density given sequences of flight data. The trained model is integrated into the guidance loop and a curriculum learning process is used to refine in-flight performance. 
Monte Carlo results show that the LSTM-augmented guidance system reduces propellant usage compared to traditional estimation methods. In summary, this thesis presents two approaches that improve aerocapture performance and robustness to uncertainty. We show that this added robustness can be achieved both by expanding algorithmic ability and by improving environmental estimation approaches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations</title>
<link href="https://hdl.handle.net/1721.1/162931" rel="alternate"/>
<author>
<name>McGee, Carissma</name>
</author>
<id>https://hdl.handle.net/1721.1/162931</id>
<updated>2025-12-10T00:31:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations
McGee, Carissma
Gravitational microlensing is a phenomenon in which a foreground star or planet briefly magnifies light from a more distant background star. This effect enables the discovery of exoplanets that are otherwise undetectable, including those orbiting faint hosts and at large separations. Microlensing is well suited to characterizing exoplanets beyond the snow line, revealing mass ratios and orbital geometries inaccessible to transit or radial velocity methods. The Nancy Grace Roman Space Telescope will carry out the Galactic Exoplanet Survey to detect thousands of microlensing events with the cadence and precision necessary for statistical exoplanet population studies. To verify Roman’s ability to meet its core science requirement of recovering the lens mass and distance in at least 40% of planetary events with better than 20% uncertainty, targeted simulations are essential. Using the pyLIMASS inference framework and Fisher matrix-based uncertainty propagation, I demonstrate that for the well-characterized event OGLE-2013-BLG-0132Lb, the lens mass can be constrained to within 18.7% uncertainty, validating the feasibility of Roman’s requirement on a case-study basis. This thesis also addresses the legal and policy foundations needed to ensure global access to these simulation tools. By advancing open-source software models and proposing a space IP framework for equitable knowledge sharing, it supports collaborative scientific infrastructure for future international space missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combined Steam Power Cycle and Turbofan Engine&#13;
for Improvement in Aviation Climate Impacts</title>
<link href="https://hdl.handle.net/1721.1/162930" rel="alternate"/>
<author>
<name>Mueller, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/162930</id>
<updated>2025-10-07T04:13:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Combined Steam Power Cycle and Turbofan Engine&#13;
for Improvement in Aviation Climate Impacts
Mueller, Anna
Despite significant innovations in aviation technology over the last 70 years resulting in enormous efficiency improvement, the rising demand for air travel means that aviation carbon emissions continue to increase each year. The rate of improvement to aircraft propulsion engines is diminishing, and additional improvements often add significant engine cost or weight. With the goal of reducing aviation’s contribution to global climate change, future aircraft engine designers must consider concepts that stray from the traditional turbofan engine. In this thesis, I develop an engine cycle model combining the turbofan engine with a steam power cycle and use the model to explore the benefits of applying this concept to aircraft engines. In order to study the impact on engine performance and emissions of adding a steam cycle, the engine model needs to be capable of representing the water phase changes and the heat exchangers required to drive those phase changes. My contribution is the development of such a model – with special attention to the modeling of water properties and phase change of water – which ties heat exchanger models into an engine thermodynamic model. The engine cycle as well as heat exchanger parameters including water-to-air ratio, combustor exit temperature, overall pressure ratio, and water pressure are varied to explore the impact on overall engine performance, including the impact of the added heat exchanger weight. This thesis covers the development and initial testing of this model, which enables future studies of engines with phase-changing heat exchangers or water injection, with the goal of assisting the search for the future engine technologies that will reduce harmful impacts of aviation while continuing to allow air travel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems</title>
<link href="https://hdl.handle.net/1721.1/162929" rel="alternate"/>
<author>
<name>Hoss, Summer A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162929</id>
<updated>2025-10-07T04:13:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems
Hoss, Summer A.
There are countless challenges associated with the accurate modeling of the hypersonic flight of ablative thermal protection systems (TPS): resolving the relevant coupled physical phenomena through multi-physics simulations, the management of the disparate spatiotemporal scales associated with the fluid and solid responses, and establishing a reliable numerical model able to predict the response of ablative materials exposed to extreme gradients—to name a few. The two-way, loosely coupled framework presented in this thesis consists of ΣMIT, a multi-physics computational solid mechanics (CSM) code, coupled with US3D, a hypersonic computational fluid dynamics (CFD) solver, to form a complete aero-thermo-chemo-mechanical simulation framework. The ΣMIT-US3D coupling framework provides a step towards high-fidelity simulation capabilities for hypersonic vehicles with ablative TPS, establishing a strong foundation for the simulation of fluid-structure interaction (FSI) phenomena and computation of the mechanical response of porous ablators. The requirement of a robust numerical formulation for the solution of hypersonic pyrolysis problems was made apparent when encountering numerical convergence issues with legacy methods, which sparked the development of a robust semi-implicit pyrolysis material model. The so-called Linearized Pyrolysis model employs simplifying assumptions for the energy and mass balance equations and relies upon the time-lagging of chosen terms to achieve linear convergence and robust performance. The performance of the model has been validated against the Ablation Workshop Test Cases and has significantly increased the range of allowable representative hypersonic boundary conditions compared to the legacy approach. Together, the model and the coupling framework are applied to two aero-thermo-chemo-mechanical analyses contained within the thesis: a spherical-tipped nose cone and the Orion heat shield. 
Preliminary results identify the decomposition region as a zone in which high von Mises stress tends to occur—care must be taken to ensure that internal and external flight loads do not exceed allowable limits to prevent catastrophic TPS material failure in this region. However, perhaps the most significant insight resulting from the framework relates to the computation of mass fluxes through the porous ablative material, revealing that for an isotropic monolithic heat shield at a zero angle of attack, pyrolysis gas flow is driven by the pressure gradient applied to the shield such that the flow exits at the edges of the shield rather than from the base.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aeroverse: Aerospace Education in Extended Reality</title>
<link href="https://hdl.handle.net/1721.1/162928" rel="alternate"/>
<author>
<name>Johnson, Mollie</name>
</author>
<id>https://hdl.handle.net/1721.1/162928</id>
<updated>2025-10-07T04:13:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Aeroverse: Aerospace Education in Extended Reality
Johnson, Mollie
Aerospace education is a continuously evolving field that is increasingly dependent on digital tools. However, shifting the teaching paradigm to accommodate new cutting-edge technologies is an ambitious undertaking. Extended reality (XR), which encompasses augmented (AR) and virtual reality (VR), is an example of such technology. In recent years, VR has seen an increase in usage in education as a novel way to provide students with immersive learning experiences, and XR has a long history of use within the working aerospace industry. However, application in the overlap between the two, aerospace engineering education, remains largely unexplored to date. The themes addressed in this thesis are two-fold: first, the goal is to create VR learning modules to supplement the existing aerospace engineering curriculum. Second, the aim is to validate whether VR technology as a teaching medium can improve learning outcomes and student engagement within the MIT AeroAstro department. With these themes in mind, two experiments were conducted to explore this topic. The first experiment presents the design and execution of an experimental course aimed at aerospace engineering students to assess the educational impact of VR. Over the course of this study, ANOVA and Kruskal-Wallis tests found that there was no significant difference (p &gt; 0.05) in performance between the VR and non-VR groups, save for a few exceptional cases. The second experiment details the integration of a single VR module into an existing course in which all students interacted with the VR activity. Students responded positively to this experiment, reporting increased feelings of engagement and a sense that it aligned well with the rest of the course. One-sample Wilcoxon tests reveal that these findings are largely significant (p &lt; 0.05). This thesis advances the work on assessing VR use for aerospace education. 
The implications of this work may influence the decisions of other educators regarding the adoption of VR technology as supplements to their own teaching methodologies. As a whole, this thesis contributes to the broader conversation on integrating VR into the classroom.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration</title>
<link href="https://hdl.handle.net/1721.1/162927" rel="alternate"/>
<author>
<name>MacRobbie, Madelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162927</id>
<updated>2025-10-07T04:13:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration
MacRobbie, Madelyn
Human space exploration is evolving rapidly, with commercial successes and NASA’s Artemis missions driving rapid growth and innovation. Plans for longer, larger, and more complex missions necessitate development of new mission architectures to sustain the crews needed to support these missions. Larger missions and multi-site architectures have become feasible with advances in commercial launch vehicles, and generate increased safety and redundancy for crewed operations. However, crew dynamics in these mission architectures have yet to be investigated. This thesis investigates the role of mission architecture (specifically single-site versus dual-site configurations) in subgroup formation and the resulting impacts to socioemotional well-being. We first develop a systematic approach for optimizing analog mission design, then apply this to design two analog missions to compare the effects of single-site and dual-site mission architectures on crew dynamics and psychosocial health. Results provide valuable insights for future Mars mission design, where crew structure and psychosocial adaptation are critical to mission success.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Strong, Human-Compatible Codenames AI&#13;
Agent</title>
<link href="https://hdl.handle.net/1721.1/162926" rel="alternate"/>
<author>
<name>Zhu, Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/162926</id>
<updated>2025-10-07T04:13:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards a Strong, Human-Compatible Codenames AI&#13;
Agent
Zhu, Sebastian
Current language models are limited in their ability to solve complex planning and reasoning problems without the aid of search procedures. While a large body of work has developed search procedures tailored to single-turn, single-user natural language interactions, language generation in multi-agent contexts involving multiple users, imperfect information, and partially misaligned objectives remains extremely challenging. We aim to build search procedures that will enable language models to assist with interactive, multi-agent decision-making in a diverse range of contexts. Using the word game Codenames as a benchmark, we will combine game-theoretic planning procedures with basic language model-based scoring methods to create agents that both play strong policies and play well with human policies. This work yields a set of practical text generation procedures, new evaluation benchmarks, and foundational algorithmic improvements in language model search.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation into Contrail Observability from Different Satellite Platforms</title>
<link href="https://hdl.handle.net/1721.1/162925" rel="alternate"/>
<author>
<name>Euchenhofer, Marlene V.</name>
</author>
<id>https://hdl.handle.net/1721.1/162925</id>
<updated>2025-10-07T04:13:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Investigation into Contrail Observability from Different Satellite Platforms
Euchenhofer, Marlene V.
Contrails are line-shaped ice clouds that can form behind aircraft engines and, under certain cold and moist conditions, spread into contrail cirrus that persists for several hours. By adding to the existing cloud cover, contrails can act to either cool or warm, with the latter, on average, being dominant, resulting in an overall warming effect. Although the effective radiative forcing from contrails is inferred to be of the same order of magnitude as that caused by aviation’s CO₂ emissions, large uncertainties remain around specific radiative forcing estimates. &#13;
Observational studies of contrails, either to support climate impact assessments or operational contrail avoidance strategies, face trade-offs between spatial and temporal resolution. Many recent publications have relied on data from geostationary satellites accepting lower input data resolution in exchange for higher temporal resolution and greater spatial coverage. Limitations of the observability of contrails in the resulting images have not been sufficiently investigated and need to be assessed and quantified.&#13;
This study aims to leverage the higher spatial resolution of VIIRS satellite imagery to identify potential limitations on contrail observability in lower-resolution GOES ABI imagery. We generate a dataset of human-identified contrails visible in false-color thermal infrared imagery from both GOES ABI and VIIRS for twelve scenes over the contiguous US. Based on this dataset, we investigate the number, cover, and appearance of the observed contrails. We find that GOES ABI does not resolve 80% of all contrails that can be identified in VIIRS imagery and only shows half of the total observed contrail length. Finally, incorporating an existing contrail-flight matching algorithm by Barbosa, we show that VIIRS tends to resolve more of the younger contrails than GOES ABI does. The findings from this study help to bound the validity of current contrail simulations and modeling outputs that estimate contrail cover and occurrence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution</title>
<link href="https://hdl.handle.net/1721.1/162924" rel="alternate"/>
<author>
<name>Zhang, Sophie S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162924</id>
<updated>2025-10-07T04:12:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution
Zhang, Sophie S.
The increasing adoption of specialized database systems has led to the rise of heterogeneous data environments. While having multiple engines in a data infrastructure enables opportunities for workload optimization, SQL dialect incompatibility makes workload migration difficult. To address this challenge, we develop MINCE (Multi-dialect INtegration and Cross-Engine execution), a technique that decomposes SQL queries into parts to enable federated execution across engines with differing SQL dialects. MINCE uses a rule-based method to partition a query into executable components that are assigned to different database systems. To evaluate different execution strategies, MINCE further implements a cost model that incorporates both on-engine query execution time and inter-system data transfer overhead. We evaluate MINCE on a TPC-H-based workload augmented with PostgreSQL-specific functions unsupported in Amazon Redshift. Experimental results show that MINCE produces the fastest execution strategy among our baselines for 72.1% of queries using estimated cardinality, achieving a 2× speedup over single-engine baselines. With perfect cardinality information available to our cost model, this value increases to 88.4%, with an average 2.8× speedup. These results demonstrate that our system not only enables more flexible federated query execution, but also reliably identifies performant execution strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New results in canonical polyadic decomposition over finite fields</title>
<link href="https://hdl.handle.net/1721.1/162923" rel="alternate"/>
<author>
<name>Yang, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/162923</id>
<updated>2025-10-07T04:13:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">New results in canonical polyadic decomposition over finite fields
Yang, Jason
Canonical polyadic decomposition (CPD) consists of expressing a tensor (multidimensional array) as a sum of several rank-1 tensors, each of which is an outer/separable product of vectors. The number of rank-1 tensors used in a CPD is called the rank of the CPD, and the minimum possible rank of a CPD for a given tensor is called the rank of the tensor. CPD is at the core of fast matrix multiplication, a computational problem with widespread implications across several seemingly unrelated problems in computer science. Much recent progress in this field has used randomized heuristic search to find new CPDs, often over a finite field. However, if these techniques fail to find a CPD with low enough rank, they cannot prove that no such CPD exists. Consequently, these methods fail to resolve certain long-standing questions, such as whether the tensor corresponding to 3 × 3 matrix multiplication has rank less than 23. To make progress on these problems, we develop a novel algorithm that preserves exactness, i.e. it can provably verify whether or not a given tensor has a specified rank. Compared to brute force, when searching for a rank-R CPD of an n_0 × · · · × n_{D−1}-shaped tensor over a finite field F, where n_0 ≥ · · · ≥ n_{D−1}, our algorithm saves a multiplicative factor of roughly |F|^(R(n_0−1) + n_0 (Σ_{d≥1} n_d)). Additionally, our algorithm runs in polynomial time. We also find a novel algorithm to search for border CPDs, a variant of CPDs that is also important in fast matrix multiplication. Finally, we study the maximum rank problem and give new upper and lower bounds, both for families of tensor shapes and for specific shapes. Although our CPD search algorithms are still too slow to resolve the rank of 3 × 3 matrix multiplication, we are able to utilize them in this problem by adding extra search pruners that do not affect exactness or increase asymptotic running time.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parameter Estimation for Anonymous Hawkes Processes</title>
<link href="https://hdl.handle.net/1721.1/162922" rel="alternate"/>
<author>
<name>Wang, William</name>
</author>
<id>https://hdl.handle.net/1721.1/162922</id>
<updated>2025-10-07T04:13:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parameter Estimation for Anonymous Hawkes Processes
Wang, William
Hawkes Processes are self-exciting point processes used to model many real-life networks in which an event from one agent causes the rate at which events occur from related agents to increase, such as in earthquake networks or social media. This project investigates the question of finding the underlying structure of the Hawkes Processes given a history of when events occurred. This problem has been studied extensively in the regime where the event labels are known, and the bulk of the literature involves parameterizing the model and passing it through statistical learning tools. Our proposed work focuses on the same question in the “anonymous” case where labels are not given. In this regime, the lack of information makes many previous approaches intractable, and we develop novel non-parametric approaches for solving cases of the structure learning problem in algorithmic and information-theoretic settings. Our results show the ability to learn the entire model under mild assumptions in the information-theoretic regime, where we have access to an arbitrarily long Anonymous Hawkes Process transcript, whereas when we are confined to a polynomial-length transcript, the situation is considerably more difficult.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organization Infrastructure for Tokenized Asset Records</title>
<link href="https://hdl.handle.net/1721.1/162921" rel="alternate"/>
<author>
<name>Whartenby, Patrick E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162921</id>
<updated>2025-10-07T04:13:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Organization Infrastructure for Tokenized Asset Records
Whartenby, Patrick E.
The Tokenized Asset Record (TAR) represents a way to connect existing technology related to tokenized assets and asset schemas to real-world documents that validate the existence of an object. Exactly who should manage TARs and the properties of the related organization schemes remains an open question. Answering this question is crucial to furthering the existing digital economy. While existing solutions have sought to expand digital commerce through pioneering digital clearing houses, little work has explored support for other classes of real-world digitized assets with proof of ownership and existence. The research proposed here seeks to answer this question by suggesting possible solutions and developing a framework for uniformly analyzing the proposals. The research proposes and evaluates three models for the management of TARs. The first is a scheme that involves each industry setting up its own TAR database and managing the system independently from other industries. The second proposes hosting all TARs on a single blockchain. The third argues for an off-chain decentralized platform to host all TARs, akin to the Data Spaces proposed by the European Union. The research finds, based on the proposed criteria, that a decentralized off-chain approach best meets the goals of a TAR management framework.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling Phenotype–Genotype Interplay with Deep Learning Foundation Models for scRNA-seq: A Quantitative Perspective</title>
<link href="https://hdl.handle.net/1721.1/162920" rel="alternate"/>
<author>
<name>Thadawasin, Pakaphol</name>
</author>
<id>https://hdl.handle.net/1721.1/162920</id>
<updated>2025-10-07T04:13:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Unveiling Phenotype–Genotype Interplay with Deep Learning Foundation Models for scRNA-seq: A Quantitative Perspective
Thadawasin, Pakaphol
Foundation models have emerged as powerful tools for analyzing single-cell RNA sequencing (scRNA-seq) data, leveraging large-scale pretraining to capture complex gene expression patterns. However, a comprehensive quantitative framework for understanding the interplay between phenotypes and genotypes remains underdeveloped. Such a framework is critical not only for validating model performance but also for uncovering previously unrecognized biological relationships. In this work, we present both traditional and deep learning-based quantitative analysis pipelines for PolyGene [1], a transformer-based scRNA-seq foundation model, aimed at disentangling the complex phenotype–genotype relationship. First, we implement a top-k classification and entropy evaluation pipeline to serve as a primary validation framework. Our results demonstrate that the pretrained PolyGene [1] is robust in top-k classification metrics and provides meaningful insights into the entropy landscape of human cells across different life stages. Second, we propose a novel deep learning gradient-based gene selection method designed to address limitations in traditional feature selection approaches, such as poor scalability and sensitivity to heterogeneity in high-dimensional data. Through empirical evaluations on benchmark scRNA-seq datasets, we show that our method enhances model interpretability and improves downstream performance, offering a more scalable and biologically relevant alternative to existing techniques. Overall, this work introduces a set of quantitative analysis tools that fill a critical gap in evaluating and interpreting scRNA-seq foundation models, contributing to a deeper understanding of the genotype–phenotype interplay through modern deep learning techniques.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems</title>
<link href="https://hdl.handle.net/1721.1/162919" rel="alternate"/>
<author>
<name>Zen, Hilary</name>
</author>
<id>https://hdl.handle.net/1721.1/162919</id>
<updated>2025-10-07T04:13:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems
Zen, Hilary
Generation methods for deepfake images have advanced rapidly, and deepfake face images pose a critical security threat for biometric verification systems. Applications that rely on face recognition to grant access to sensitive data need to maintain high accuracy across a wide variety of deepfake generation methods, including novel and developing types that the application has not previously trained on. Current deepfake detection models achieve near-perfect accuracy on benchmark datasets, but do not perform as well on unseen types of deepfakes that were not part of their training dataset. We propose building an ensemble model with multiple base detectors, each trained on different generation model families to maintain high performance across many deepfake generation methods. Using four base models, including two models with the same architecture and training data, we exhaustively test all possible ensemble models. We find that combining similar base models trained on the same deepfake generation family does not improve performance compared to the individual base models. However, combining base models trained on different deepfake generation families leads to significant increases in accuracy and recall. Our ensemble framework provides a flexible and inexpensive solution in the ever-changing landscape of deepfake generation and security.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Should Model Updates Propagate?</title>
<link href="https://hdl.handle.net/1721.1/162918" rel="alternate"/>
<author>
<name>Struckman, Isabella Marguerite</name>
</author>
<id>https://hdl.handle.net/1721.1/162918</id>
<updated>2025-10-07T04:13:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Should Model Updates Propagate?
Struckman, Isabella Marguerite
AI supply chains rely increasingly on downstream developers adapting pretrained upstream models. When upstream models are retrained with data deletions (which may be prompted by copyright violations, privacy compliance, or removal of illicit content), it’s unclear if all downstream developers must also undergo costly retraining. In this thesis, we investigate the propagation of data deletions through fine-tuned models within a controlled visual classification setting comprising dog-breed and plane-manufacturer recognition tasks. We show that not all model updates propagate equivalently to downstream tasks, and there is a strong relationship between the deleted data’s relation to the downstream task and its effect on the downstream model. We demonstrate that neither simple performance metrics (accuracy or F1), nor output-level divergences, nor even embedding-based similarity metrics alone adequately predict when a deletion meaningfully impacts downstream tasks. To overcome these limitations, we introduce an information-theoretic metric grounded in Gaussian mixture modeling (GMM) of embedding distributions, capturing deeper representational shifts. Our proposed approach precisely distinguishes when deletions require downstream retraining, achieving high predictive accuracy and recall without directly accessing retrained downstream models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin Modeling for NV Magnetometry</title>
<link href="https://hdl.handle.net/1721.1/162917" rel="alternate"/>
<author>
<name>Rich, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162917</id>
<updated>2025-10-07T04:13:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Twin Modeling for NV Magnetometry
Rich, John P.
This thesis presents the development and application of a digital twin modeling framework for nitrogen-vacancy (NV) center-based magnetometry, advancing the field of quantum sensing. A surrogate model serves as a computational representation of the physical NV magnetometer system, enabling comprehensive exploration of parameter spaces to optimize device design. Leveraging machine learning techniques, this study optimizes control mechanisms, including the design of learned analog filters, to enhance system performance. This research investigates the fundamental limits of NV magnetometer performance, identifying strategies to minimize power requirements while maintaining high sensitivity. A dynamic framework is implemented to update the surrogate model’s parameters in real-time based on experimental measurements, ensuring accurate fidelity to the physical system. Additionally, the optimized control strategies are simulated within the digital twin environment, demonstrating their potential for advanced quantum sensing applications such as magnetocardiography (MCG) for heartbeat detection.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fuzzing for User-Schedulable Languages</title>
<link href="https://hdl.handle.net/1721.1/162916" rel="alternate"/>
<author>
<name>Moon, Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/162916</id>
<updated>2025-10-07T04:13:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fuzzing for User-Schedulable Languages
Moon, Kenneth
Performance engineers restructure programs to use hardware as efficiently as possible. Even simple mathematical functions can become sprawling and complex programs when fully optimized, as the resulting code must often be precisely molded around specialized behaviors supported by the hardware. To help performance engineers deal with this complexity, user-schedulable languages provide scheduling operations, which are abstractions of common steps taken to restructure programs. By composing these scheduling operations, performance engineers can concisely represent their intended optimizations to programs. Exo, being a user-schedulable language, provides this abstraction with the additional guarantee that any scheduling operation which passes Exo’s automated checks does not change the behavior of the program. Though this guarantee is useful for avoiding bugs while optimizing a program, the analysis required to provide such a guarantee is infeasible on programs in general. To make analysis feasible, Exo only allows users to write programs with a restricted set of behaviors. As a result, some programs are impossible to schedule using Exo, limiting the use cases of Exo. In this thesis, we explore how fuzzing can be used as an alternative to the existing analysis in Exo, with the goal of allowing Exo to analyze more complex programs. “Fuzzing” refers to a test case-driven approach to determining properties of a program, such as whether its behavior changes after a scheduling operation. If the program’s outputs do not change after the scheduling operation when provided the same inputs, the fuzzer concludes that the program’s behavior did not change. Since fuzzing only requires us to know how to evaluate the program, it can be applied to a much broader set of programs than the existing analysis in Exo.
However, fuzzing can miss mistakes in scheduling if the fuzzer fails to find a test case demonstrating the issue with a scheduling operation, as it is a complete form of analysis rather than a sound form of analysis like the existing analysis in Exo. Additionally, fuzzing can be costly compared to the original analysis, as repeatedly running programs on many test cases for many scheduling operations can be slow. We explore ways to mitigate these issues throughout this work. Finally, we evaluate our implementation of the fuzzer and its performance on some example use cases for Exo.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Verifiable Computation Made Easy</title>
<link href="https://hdl.handle.net/1721.1/162915" rel="alternate"/>
<author>
<name>Ma, Chengyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162915</id>
<updated>2025-10-07T04:12:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Verifiable Computation Made Easy
Ma, Chengyuan
Recent advancements in cloud computing, data privacy, and cryptography have sparked a growing interest in Verifiable Computation (VC) in both industry and academia. In particular, zero-knowledge proof (ZKP) algorithms are gaining rapid traction due to their strong privacy guarantees. However, they are notoriously computationally intensive, making performance a critical concern. Given the inherent data parallelism and heavy use of vector operations in ZKP computations, multicore CPUs and GPUs offer a promising acceleration path. Unfortunately, accelerated programming for ZKP remains challenging: ZKP algorithms evolve rapidly, their structures grow increasingly complex, and writing high-performance ZKP code is tedious, error-prone, non-portable, and unfriendly to algorithm developers. We present an end-to-end compiler framework, Zera, that lowers ZKP algorithms to parallel hardware for efficient acceleration, with minimal programmer effort. By effectively leveraging ZKP algorithm patterns and trends, we are able to automate the key performance optimizations, with a succinct linguistic extension and a set of practical compiler customizations. Consequently, with just 92 lines of trivial high-level annotation added to the original 7,000 lines of C++ code, our single-source code solution delivers 33.9× and 24.0× speedup on GPU over a highly optimized serial C++ implementation on CPU and an existing multithreaded Rust baseline on CPU, respectively. Compared to our hand-optimized GPU/CUDA implementation requiring an extra 2,000 lines of low-level code (roughly 60 programmer hours), our compiler-generated GPU implementation is only 58% slower (1.58× slowdown) on large inputs, demonstrating a compelling trade-off between performance and productivity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Partitioning for Efficient Parallel Reads</title>
<link href="https://hdl.handle.net/1721.1/162914" rel="alternate"/>
<author>
<name>Sragow, John</name>
</author>
<id>https://hdl.handle.net/1721.1/162914</id>
<updated>2025-10-07T04:13:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Partitioning for Efficient Parallel Reads
Sragow, John
Modern database management systems spend a significant portion of query execution time scanning data, so minimizing scanning latency is critical to maintaining high performance. As such, databases are partitioned into blocks so that queries can skip irrelevant tuples and avoid scanning the entire database. When this partitioning is optimized to minimize the number of blocks accessed by each query, smaller queries that access very few blocks fail to fully utilize the bandwidth because they cannot take advantage of parallel reading. However, reducing the size of each block in order to increase the number of blocks accessed by smaller queries slows down larger queries by forcing them to increase the number of I/Os they must perform. We propose a novel partitioning scheme that shuffles the row groups of blocks accessed by smaller queries so that they can read fewer tuples from multiple blocks in parallel without increasing the I/O cost of larger queries. Our experiments show that this technique allows smaller queries to be scanned up to twice as fast on larger block sizes as they would on a standard partitioning without significantly slowing down larger queries.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models</title>
<link href="https://hdl.handle.net/1721.1/162913" rel="alternate"/>
<author>
<name>Tang, Adrina</name>
</author>
<id>https://hdl.handle.net/1721.1/162913</id>
<updated>2025-10-07T04:12:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models
Tang, Adrina
Designing novel proteins with specific biological functions remains a fundamental challenge in computational biology. While recent advances in protein language models have enabled powerful sequence-based representations, most models, including state-of-the-art systems like ESM3, fall short in effectively encoding functional context during protein generation. In this work, we present a multimodal protein co-design framework that conditions sequence generation on fine-grained functional annotations, specifically leveraging residue-level Gene Ontology (GO) term labels on sequences from the UniRef100 database. By explicitly associating functional signals with residue elements of proteins, our model learns to generate function-conditioned protein sequences that are biologically plausible and semantically consistent. Unlike prior approaches, which treat function as a secondary feature or a classification task, our method focuses on joint reasoning over function and sequence during the design process. This closes a critical gap in the current landscape of protein design tools, offering a scalable and generalizable approach to co-designing protein sequences with user-specified functional profiles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pairwise Matching of Intermediate Representations for Fine-grained Explainability</title>
<link href="https://hdl.handle.net/1721.1/162912" rel="alternate"/>
<author>
<name>Shrack, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/162912</id>
<updated>2025-12-10T00:52:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Shrack, Lauren
The differences between images belonging to fine-grained categories are often subtle and highly localized, and existing explainability techniques for deep learning models are often too diffuse to provide useful and interpretable explanations. We propose a new explainability method (PAIR-X) that leverages both intermediate model activations and backpropagated relevance scores to generate fine-grained, highly-localized pairwise visual explanations. We use animal and building re-identification (re-ID) as a primary case study of our method, and we demonstrate qualitatively improved results over a diverse set of explainability baselines on 35 public re-ID datasets. In interviews, animal re-ID experts were in unanimous agreement that PAIR-X was an improvement over existing baselines for deep model explainability, and suggested that its visualizations would be directly applicable to their work. We also propose a novel quantitative evaluation metric for our method, and demonstrate that PAIR-X visualizations appear more plausible for correct image matches than incorrect ones even when the model similarity score for the pairs is the same. By improving interpretability, PAIR-X enables humans to better distinguish correct and incorrect matches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning for Space Object Density Distribution Prediction</title>
<link href="https://hdl.handle.net/1721.1/162910" rel="alternate"/>
<author>
<name>Sarangerel, Sumiyajav</name>
</author>
<id>https://hdl.handle.net/1721.1/162910</id>
<updated>2025-10-07T04:12:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deep Learning for Space Object Density Distribution Prediction
Sarangerel, Sumiyajav
The rapid growth of artificial objects in Low Earth Orbit (LEO) has heightened concerns over orbital congestion and collision cascades, known as Kessler Syndrome. Traditional high-fidelity models, while accurate, are computationally intensive and poorly scalable. This thesis introduces a machine learning–based framework for forecasting the long-term evolution of space object density. A large dataset is generated using the MIT Orbital Capacity Assessment Tool – Monte Carlo (MOCAT-MC), simulating thousands of scenarios across varying launch, disposal, and maneuver parameters. A Convolutional Gated Recurrent Unit (ConvGRU) is trained to predict density distributions over a 100-year horizon, achieving accurate forecasts with significantly reduced runtime. With a simple guidance mechanism, the generalization capability of the model across diverse scenarios is greatly improved. This approach offers a scalable and efficient tool for supporting future space traffic management and sustainability efforts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/162909" rel="alternate"/>
<author>
<name>Shi, Yichuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162909</id>
<updated>2025-10-07T04:12:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning
Shi, Yichuan
The emergence of large-scale machine learning (ML) models has highlighted a fundamental conflict: While computational demands push for the consolidation of data and models in vast, centralized data centers, real-world data continues to be distributed and fragmented across personal devices and private databases. How can we reconcile this contradiction without further monopolizing the ML ecosystem? What unique privacy and security risks arise from alternative ML orchestration system designs? Furthermore, how do these vulnerabilities and system failures inform our understanding of both how and what machines learn? This thesis attempts to explore these questions. It first examines key types of privacy leakages, evaluating their impact under realistic, cross-distribution settings. It then introduces a benchmarking analysis platform, SONAR, to investigate the relationship between privacy leakage (measured by attack performance), network topology, and data distribution. Finally, it presents Co-Dream, a novel algorithm for collaborative learning that offers improved privacy characteristics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prototyping a Scalable Proof Engine</title>
<link href="https://hdl.handle.net/1721.1/162908" rel="alternate"/>
<author>
<name>Rosario, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/162908</id>
<updated>2025-10-07T04:12:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prototyping a Scalable Proof Engine
Rosario, Jon
Formal verification is an exciting development in software engineering, enabling implementations of programs to be rigorously checked against mathematical specifications. Assuming the specification is well-defined, formal verification provides guarantees of a program’s correctness and freedom from bugs that are simply not possible with test-based methods. There’s just one catch: the process of verifying large programs in popular theorem provers such as Coq (now known as Rocq) or Lean is painfully slow. These proof assistants rely on proof engines to construct proofs of correctness for given properties, but to our knowledge, there is no widely available proof engine that offers strong performance guarantees. Even more frustrating is the lack of consensus on what “good” performance should even mean in this context. This thesis lays the groundwork for addressing that gap by presenting a proof engine design that achieves asymptotically linear-time performance with respect to several important variables. We illustrate the design and its performance characteristics with examples from an implementation of the design and outline directions for future work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis</title>
<link href="https://hdl.handle.net/1721.1/162905" rel="alternate"/>
<author>
<name>Paulin, Cole J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162905</id>
<updated>2025-10-07T04:13:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis
Paulin, Cole J.
We present a simulation-driven method for optimizing the structural performance of 3D printed objects made with recycled and fresh filament. Although sustainable materials such as recycled PLA reduce environmental impact, they often exhibit degraded or inconsistent mechanical properties, making them less suitable for structurally demanding applications. To address this, we develop a finite element analysis (FEA) pipeline that simulates stress and strain distributions under user-defined loading conditions, enabling intelligent segmentation of the object into regions of high and low mechanical demand. These segmented regions can be assigned recycled or fresh material during fabrication. Our system leverages open-source tools (SfePy) for simulation, and we validate its accuracy against Abaqus, a commercial industry standard. We also introduce methods for automatically identifying and correcting segmentation artifacts, such as small disconnected islands. Through comparative simulation studies and performance evaluation, we demonstrate that our approach enables more sustainable 3D printing without sacrificing structural reliability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System</title>
<link href="https://hdl.handle.net/1721.1/162904" rel="alternate"/>
<author>
<name>Lohier, Sebastien</name>
</author>
<id>https://hdl.handle.net/1721.1/162904</id>
<updated>2025-10-07T04:12:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System
Lohier, Sebastien
This thesis proposes a novel methodology for the automatic placement of Power Electronics Building Blocks (PEBBs) in modular, integrated power corridor designs. These building blocks, which are created and tested offsite for a variety of applications, are currently placed manually during the design process, a method that is time-consuming and suboptimal. To address this challenge, we reduce the placement problem to a 2D bin-packing problem, leveraging a hybrid approach combining Genetic Algorithms and Simulated Annealing. This approach enables the generation of optimized placements that find the extremes of arbitrary heuristics, including minimizing routing distance and power density, effectively improving both design efficiency and system performance. The proposed methodology offers a significant step toward automating and optimizing the layout of power electronic components in complex systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes</title>
<link href="https://hdl.handle.net/1721.1/162752" rel="alternate"/>
<author>
<name>Lee, Ju Young</name>
</author>
<id>https://hdl.handle.net/1721.1/162752</id>
<updated>2025-09-19T04:50:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes
Lee, Ju Young
The demand for kidney transplants continues to outpace supply, with over 89,792 patients on the waitlist as of September 2024, yet only 27,332 transplants performed in 2023 [1], and 28% of recovered kidneys going non-utilized [2]. In this thesis, we highlight the use of large language model (LLM) embeddings combined with structured tabular data to build a predictive classifier that estimates offer outcomes for kidney donor-recipient matches. For each predictive model deployed, we provide further analysis on the interpretability of these black-box models using a custom-designed SHAP analysis framework. Our study focuses on three distinct U.S. regions (Regions 1, 2, and 3) with markedly different demographics and amounts of data on organ acceptances (Region 1: 43,126 offers with 2.19% acceptance rate, Region 2: 394,640 offers with 1.57% acceptance rate, Region 3: 169,342 offers with 2.23% acceptance rate in years 2016–2019). Among the baseline XGBoost models, Region 3 achieved the highest performance, with a precision-accept score of 0.929 and accuracy of 0.993 in the test data. Building on this strong foundation, the multimodal TabText model in Region 3 achieved the best performance overall, with a precision-accept score of 0.959 and accuracy of 0.993 after fine-tuning for six epochs. Our findings suggest that increasing the number of text features, extending training epochs, and incorporating explicit numerical values led to improved model performance in Region 3. In Regions 1 and 2, the baseline model outperformed the TabText model, suggesting that data sparsity in these regions may have limited the effectiveness of the multimodal approach and that further hyperparameter tuning is needed. We also present several visualization techniques to enhance model interpretability. Specifically, we developed a novel SHAP explainer that illustrates feature interactions between multimodal inputs, including both tabular and textual data.
Additionally, we explored methods to identify regions of high and low model fidelity by mapping per-sample prediction errors onto t-SNE embeddings. Overall, this thesis introduces new directions for transplant research in the context of transformer-based models and interpretable AI. Leveraging data-driven decision-support tools and refining allocation policies are essential steps toward addressing the persistent gap between supply and demand in the kidney transplant landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Medium Access Control Protocol for Satellite Constellations</title>
<link href="https://hdl.handle.net/1721.1/162751" rel="alternate"/>
<author>
<name>Li, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/162751</id>
<updated>2025-09-19T04:50:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Medium Access Control Protocol for Satellite Constellations
Li, Brian
Satellite internet constellations have emerged as a promising solution for providing global internet connectivity, especially in regions underserved by terrestrial infrastructure. However, as user demand increases, especially in densely populated urban areas, existing Medium Access Control (MAC) protocols face significant scalability challenges and fail to take advantage of advanced antenna processing techniques, including phased array nulling, as well as capacity sharing via inter-satellite links.&#13;
We present both an offline linear program and a novel online greedy MAC protocol to assign satellite resources to users using either sequential service, capacity sharing, or interference-aware nulling. Our offline formulation provides an upper bound on system performance, and while our online protocol is sub-optimal compared to this optimum, it is designed to be implementable on a real-time system. Simulations demonstrate that incorporating nulling can increase effective capacity by up to 25 times, substantially boosting profit in high-demand scenarios. We further quantify the performance gap between the online protocol and the offline optimum under varying demand distributions, showing that our online approach achieves near-optimal results in low-peakiness settings and gracefully degrades under more extreme conditions. These results highlight the importance of spatial processing at the MAC layer and offer practical design insights for future satellite internet constellations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Copilot Tutor: Automated Software Engineering Practice&#13;
Augmented with LLMs</title>
<link href="https://hdl.handle.net/1721.1/162750" rel="alternate"/>
<author>
<name>Kong, Blisse</name>
</author>
<id>https://hdl.handle.net/1721.1/162750</id>
<updated>2025-09-19T04:50:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Copilot Tutor: Automated Software Engineering Practice&#13;
Augmented with LLMs
Kong, Blisse
In recent years, large language models (LLMs) have become more ubiquitous in the workplace. In software engineering, they are often realized as “copilots” which produce code given a prompt or existing code. Programmers using these tools to increase their coding productivity need to be proficient in inspecting and understanding these copilots’ outputs. As engineers incorporate these tools to accelerate their workflows, they have a parallel opportunity to accelerate learning new programming languages. This thesis presents a tutor interface where students with some programming experience in an origin language can learn a target language while practicing how to critically read and fix a copilot’s output to write correct, safe programs. This work also introduces the automatic generation of exercises teaching the syntax and semantics on which a programmer experienced in the origin language but not the target language should focus.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Physical Withholding of Renewable Energy&#13;
Generators</title>
<link href="https://hdl.handle.net/1721.1/162749" rel="alternate"/>
<author>
<name>Irvine, Paul M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162749</id>
<updated>2025-09-19T04:50:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Physical Withholding of Renewable Energy&#13;
Generators
Irvine, Paul M.
Renewable generators may have incentives to strategically withhold energy output in electricity markets, either to exercise market power or to avoid congestion pricing caused by transmission constraints. Although academic work often treats renewables as not downward dispatchable, renewable generators can, at least in principle, reduce their output by self-curtailing. This paper shows that a firm with a large, diverse portfolio could find it profit-maximizing to withhold renewables over conventional thermal generators once it accounts for constraints on ramp rates and minimum generation, as well as the costs associated with starting up generators and a generator-type-dependent probability of detection by market monitoring authorities. Long-term forward contracts like pay-as-produced Power Purchase Agreements (PPAs) can blunt incentives to exercise market power by insulating individual generators from wholesale prices; however, since generators under PPAs typically bid into the wholesale market and influence competitive prices, they may actually encourage renewable withholding if contract prices are sufficiently low and the parent firm’s portfolio is exposed to wholesale prices. To screen for renewable withholding, this paper proposes three methods: (1) examining the distribution of aggregate output across export interfaces for suspicious bunching, (2) testing deviations from ex-ante forecasts, and (3) identifying time intervals where a generator’s output exhibits structural model changes relative to a benchmark presumed free of withholding. Together, this work prepares academics and regulators to more accurately model the behavior of renewable generators in electricity markets and to screen for potential market abuses.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Argos: Verifiable FHE Using Commodity Hardware</title>
<link href="https://hdl.handle.net/1721.1/162748" rel="alternate"/>
<author>
<name>Jepsen, Fisher</name>
</author>
<id>https://hdl.handle.net/1721.1/162748</id>
<updated>2025-09-19T04:49:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Argos: Verifiable FHE Using Commodity Hardware
Jepsen, Fisher
We present Argos, a simple approach for adding verifiability to fully homomorphic encryption (FHE) schemes using trusted hardware. Traditional approaches to verifiable FHE require expensive cryptographic proofs, which incur an overhead of up to seven orders of magnitude on top of FHE, making them impractical. With Argos, we show that trusted hardware can be securely used to provide verifiability for FHE computations, with minimal overhead relative to the baseline FHE computation. An important contribution of Argos is showing that the major security pitfall associated with trusted hardware, microarchitectural side channels, can be completely mitigated by excluding any secrets from the CPU and the memory hierarchy. This is made possible by focusing on building a platform that only enforces program and data integrity and not confidentiality (which is sufficient for verifiable FHE, since all data remain encrypted at all times). All secrets related to the attestation mechanism are kept in a separate coprocessor (e.g., a TPM)—inaccessible to any software-based attacker. Relying on a discrete TPM typically incurs significant performance overhead, which is why (insecure) software-based TPMs are used in practice. As a second contribution, we show that for FHE applications, the attestation protocol can be adapted to only incur a fixed cost. Argos requires no dedicated hardware extensions and is supported on commodity processors from 2008 onward. Our prototype implementation introduces 3% overhead for FHE evaluation, and 8% for more complex protocols. In particular, we show that Argos can be used for real-world applications of FHE, such as private information retrieval (PIR) and private set intersection (PSI), where providing verifiability is imperative. By demonstrating how to combine cryptography with trusted hardware, Argos paves the way for widespread deployment of FHE-based protocols beyond the semi-honest setting, without the overhead of cryptographic proofs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework</title>
<link href="https://hdl.handle.net/1721.1/162747" rel="alternate"/>
<author>
<name>Kumar, Aryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162747</id>
<updated>2025-09-19T04:49:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework
Kumar, Aryan
BuildIt allows users to write C++ programs that can execute in multiple stages, where the output of one stage is the program source for the next stage, ending with some final output produced. This is particularly useful for writing specialized code and generating code for domain-specific languages. While there are other approaches to multi-stage programming, BuildIt has several advantages: it takes a library-based approach (so it requires no modifications to the compiler and is thus highly portable), and it has excellent ease of use, as all the user has to do is change the declared types of variables in their C++ program. The goal of this thesis is to further improve BuildIt’s ease of use by simplifying this step: in particular, by developing a tool that automatically converts existing C and C++ programs to the BuildIt framework. We show how to use Clang tooling in conjunction with modifications to the Clang compiler to perform non-trivial source modifications, namely type modification, to automatically convert code to its unstaged BuildIt equivalent. As the unstaged BuildIt code can be specialized by staging certain variables, this tool will ultimately make it easier to stage and optimize C/C++ repositories with the BuildIt framework.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients</title>
<link href="https://hdl.handle.net/1721.1/162746" rel="alternate"/>
<author>
<name>Jung, Emma Yejoo</name>
</author>
<id>https://hdl.handle.net/1721.1/162746</id>
<updated>2025-09-19T04:49:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients
Jung, Emma Yejoo
Glucagon-like peptide-1 receptor agonists (GLP-1RA), whose use has surged in recent years, have shown promise in reducing cardiovascular events and improving kidney function in patients with type 2 diabetes. Encouraged by these results, kidney transplant recipients (KTRs) have started using GLP-1RA; however, their effects in KTRs remain largely unexamined in clinical studies. This thesis uses a large-scale Electronic Health Record (EHR) database to perform a retrospective cohort analysis studying the association between GLP-1RA use and kidney and cardiovascular outcomes amongst stable KTRs. Primary outcomes include all-cause mortality, major adverse kidney events (MAKE), and major adverse cardiac events (MACE). Among stable KTRs, GLP-1RA users show reduced risk for all-cause mortality (adjusted hazard ratio [aHR]: 0.45; 95% confidence interval [CI]: 0.32-0.62) and MAKE (aHR: 0.69; 95% CI: 0.58-0.81), but no significant difference for MACE (aHR: 0.84; 95% CI: 0.67-1.05). In addition, users show increased risk for irritable bowel syndrome (IBS) (aHR: 2.11; 95% CI: 1.07-4.15) and urinary tract infection (UTI) (aHR: 1.53; 95% CI: 1.27-1.85). These results indicate the potential of GLP-1RA to reduce mortality and adverse kidney outcomes, while increasing the risk of IBS and UTI, in KTRs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hardware Acceleration for Real-Time Compression of 3D&#13;
Gaussians</title>
<link href="https://hdl.handle.net/1721.1/162745" rel="alternate"/>
<author>
<name>Kahler, Kailas B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162745</id>
<updated>2025-09-19T04:49:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hardware Acceleration for Real-Time Compression of 3D&#13;
Gaussians
Kahler, Kailas B.
3D Gaussian Splatting (3DGS) is a technique for novel view synthesis, in which images of a scene from a specific viewpoint are generated using images from different viewpoints, that has gained popularity for its reduced computational overhead, resulting in faster training and rendering times compared to other methods like Neural Radiance Fields (NeRFs). Its applications outside of strictly novel view synthesis have also been explored, with monocular simultaneous localization and mapping (SLAM) in robotics being an emergent application. However, because of limited on-board battery capacity, the computer hardware used in small robots is much less capable than the high-powered GPUs the 3DGS algorithm was originally developed on, with less compute and lower memory capacity and bandwidth. While there has been work developing specialized compute for the 3DGS rendering pipeline, memory remains an obstacle to deployment. The Gaussian map can occupy 1 MB to 700 MB in memory, which is too large to store on-chip within micro-robots and large enough that moving Gaussians from memory to compute can dominate power consumption. While there has been prior work on algorithms for compressing Gaussian representations, they are not yet capable of running in real time on the hardware present in these robots, as would be required for SLAM. Thus, this thesis explores the limits of these compression methods on current hardware, resulting in an optimized CUDA implementation with more than 100× the throughput of prior work, achieving real-time operation on workstation-class hardware. After concluding that custom hardware is necessary for further improvement, this thesis also presents a hardware accelerator that nears real-time compression performance within a reduced power budget, outperforming an NVIDIA Jetson Orin Nano with 64% higher throughput while using 1/16th of the multipliers and drawing 38% of the power when running at 100 MHz on an AMD UltraScale+ FPGA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalization of AI Tutor Based on Knowledge Graphs</title>
<link href="https://hdl.handle.net/1721.1/162744" rel="alternate"/>
<author>
<name>Huang, Sheng</name>
</author>
<id>https://hdl.handle.net/1721.1/162744</id>
<updated>2025-09-19T04:49:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Personalization of AI Tutor Based on Knowledge Graphs
Huang, Sheng
Personalized tutoring, tailored to the specific knowledge and needs of individual students, has been shown to significantly enhance academic performance. Research by Schmidt and Moust, for example, highlights that tutors who engage with students on a personal level are more effective in guiding them toward higher academic achievement [1]. Inspired by this principle, the Axiom group at the MIT Media Lab developed an AI tutor for their Intro to Programming courses. The initial version of the tutor, RAGS, relied on analyzing past conversations between students and the tutor, as well as course content, to generate personalized responses. While this approach showed promise, it faced scalability challenges, such as the need to store an ever-growing volume of conversation history and the risk of exceeding token limits in prompt context windows. Additionally, the model occasionally struggled with over-generalization, particularly when responding to vague questions based solely on historical interactions. To address these limitations, this thesis introduces a new approach: a student knowledge graph. Rather than relying on an expanding archive of past conversations, the knowledge graph uses weighted nodes to represent a student’s understanding of each concept. A weight of -8 indicates subpar understanding, while a weight of 8 signifies mastery. After pre-processing the course data, the graph maintains a fixed size, eliminating the need for additional storage over time. This innovation solves two critical problems: &#13;
1. Scalability: By leveraging a fixed-size PostgreSQL database, the student knowledge graph avoids the storage challenges associated with saving endless conversation histories. &#13;
2. Improved Personalization: Instead of sifting through old conversations, the tutor uses concept weights to generate more precise and contextually relevant responses, even to vague questions. &#13;
Testing and evaluation of the implemented system demonstrate its effectiveness in both scalability and response quality. Over 60% of survey participants reported that the knowledge graph-enhanced tutor provided clearer and more relevant guidance, particularly when building on concepts they already understood. Additionally, over 80% of respondents noted improvements in the tutor’s ability to address weak areas and provide targeted practice, especially when preparing for quizzes or exams.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation</title>
<link href="https://hdl.handle.net/1721.1/162743" rel="alternate"/>
<author>
<name>Hadjiivanov, Michael D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162743</id>
<updated>2025-09-19T04:49:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation
Hadjiivanov, Michael D.
Large language models (LLMs) excel at generating fluent answers but are prone to hallucination when the prompt fails to anchor them to verifiable facts. Retrieval-augmented generation (RAG) mitigates this risk, yet existing graph-based retrievers either return bloated neighborhoods or incur prohibitive latency on large knowledge graphs (KGs). We introduce SPIRAL—Supervised Prior + Iterative Reinforcement with Adaptive Labelling—a lightweight two-stage framework that constructs compact, tree-shaped evidence subgraphs. This differs from previous work in its use of a trained, iterative policy network built on top of a prior over triples, delivering improved performance on multi-hop question answering tasks. Stage 1 trains a single-label GLASS-GNN on shortest-path heuristics, producing frozen, question-aware node embeddings at negligible runtime cost with significant local topology awareness around question entities. Stage 2 layers a GLASS policy—which re-labels the partial subgraph at each step—on top of these embeddings and optimizes it with proximal policy optimization. The policy scores only the 1-hop frontier, enabling sub-second inference even on million-edge graphs. On the multi-hop KGQA benchmark WebQSP, SPIRAL attains 0.95 triple recall and 0.97 answer recall while retrieving at most 50 triples—doubling the sampling efficiency of the strongest prior work. Coupled with Llama 3.1-8B, the retrieved trees boost Hit@1 by 2.5 % over SubgraphRAG. Ablation studies confirm that adaptive labels are critical for multi-hop reasoning. SPIRAL demonstrates that accurate and concise retrieval is achievable without resorting to massive models or expensive graph crawls, opening the door to real-time, KG-grounded assistants on modest hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL</title>
<link href="https://hdl.handle.net/1721.1/162742" rel="alternate"/>
<author>
<name>Choi, Justin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162742</id>
<updated>2025-12-09T18:27:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL
Choi, Justin J.
This work examines the current state of using large language models (LLMs) to solve Text-to-SQL tasks on databases in an enterprise setting. Benchmarks on publicly available datasets do not fully capture the difficulty and complexity of this task in a real-world, enterprise setting. This study examines the critical steps needed to work with enterprise data, as well as using knowledge-injection to enhance the performance of LLMs on Text-to-SQL tasks. We begin by evaluating the baseline performance of LLMs on enterprise databases, revealing that a predominant source of failure stems from a lack of domain-specific knowledge. To improve performance, we explore knowledge-injection: the process of incorporating internal and external knowledge. Internal knowledge consists of database-specific information such as join logic, while external knowledge refers to institutional acronyms or group names. We present a hybrid retrieval pipeline that combines embedding-based and text-based search with LLM-guided ranking to supply models with relevant external knowledge during Text-to-SQL generation. We evaluate the impact of knowledge-injection by testing the performance of LLMs on the table retrieval task after they are augmented with appropriate external knowledge. We demonstrate that knowledge-injection significantly improves accuracy on table retrieval using BEAVER, an enterprise-level Text-to-SQL benchmark. Our findings highlight the importance of domain-specific knowledge-injection and retrieval augmentation in bringing LLMs closer to deployment in enterprise-grade database systems, as well as common failure modes that occur when executing enterprise Text-to-SQL.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators</title>
<link href="https://hdl.handle.net/1721.1/162741" rel="alternate"/>
<author>
<name>Chomphoochan, Thanadol</name>
</author>
<id>https://hdl.handle.net/1721.1/162741</id>
<updated>2025-09-19T04:49:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators
Chomphoochan, Thanadol
As single-thread performance plateaus, modern systems increasingly rely on parallelism to scale throughput. Yet, efficiently managing concurrency—particularly in transactional systems—remains a major bottleneck. This thesis explores the feasibility of accelerating transaction scheduling via hardware, leveraging FPGAs to offload scheduling logic from the CPU. We revisit Puppetmaster, a hardware transaction scheduler, and present a redesigned architecture emphasizing deployability, modularity, and evaluation. We implement both an optimized software baseline and a Bluespec-based hardware design, evaluating their performance across synthetic YCSB-style workloads with varying contention levels. Our hardware prototype demonstrates competitive throughput, achieving over 90% of peak throughput even under high-contention workloads. These results validate the potential of transaction scheduling as a target for hardware acceleration and highlight promising directions for future hybrid hardware-software concurrency-control systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Exploration of Thermodynamic Models of&#13;
Geological CO₂ Injection</title>
<link href="https://hdl.handle.net/1721.1/162740" rel="alternate"/>
<author>
<name>Edelman, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162740</id>
<updated>2025-09-19T04:49:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Exploration of Thermodynamic Models of&#13;
Geological CO₂ Injection
Edelman, Jonathan
This thesis investigates the behavior of carbon dioxide flow in porous media through high-fidelity computational modeling, with a specific focus on the impact of the Span-Wagner equation of state (EOS). Accurate modeling of CO₂ transport in subsurface environments is essential for applications such as carbon capture and storage (CCS). We model the entire flow from injection, down through a vertical pipe and into a porous reservoir. To this end, we utilize the MOOSE (Multiphysics Object-Oriented Simulation Environment) framework developed by Idaho National Laboratory to perform finite element simulations. A key contribution of this work is the successful coupling of a porous rock domain with a one-dimensional pipe flow simulation in Julia, enabling a broader representation of injection scenarios. The study examines how the thermodynamic accuracy of the Span-Wagner EOS influences flow characteristics, in comparison to the ideal gas EOS. Through a series of coupled pipe-reservoir simulations, we assess variations in pressure and density as CO₂ is injected from the pipe into the porous medium. The model can detect phase change conditions, allowing us to predict the maximum mass flux that can be achieved below the liquefaction threshold, as defined by the binodal curve in the CO₂ phase diagram at a given temperature. The results highlight the importance of EOS selection in predicting multiphase flow behavior, especially under conditions relevant to geological storage. Furthermore, we find that the ideal gas EOS underpredicts injection rates under the same conditions. This integrated modeling approach advances the understanding of thermodynamic effects in coupled subsurface flow systems and supports the development of reliable tools for large-scale carbon storage applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data</title>
<link href="https://hdl.handle.net/1721.1/162739" rel="alternate"/>
<author>
<name>Dahleh, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/162739</id>
<updated>2025-09-19T04:49:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data
Dahleh, Omar
This thesis presents a novel approach to the de-identification of clinical notes from Organ Procurement Organization (OPO) records, leveraging advanced natural language processing (NLP) methodologies. Specifically, we employ in-context learning using large language models (LLMs) to effectively identify and remove protected health information (PHI), aiming to maintain high data utility post-redaction. Our work systematically evaluates the performance of the LLM-based method against established baseline techniques, including traditional Named Entity Recognition (NER) and rules-based systems. Through a series of experiments, we assess the strengths and limitations of each method regarding precision and recall. This work will contribute to a uniquely extensive dataset, comprising millions of de-identified OPO clinical notes, which will facilitate ethical healthcare research and enhance compliance with contemporary data protection standards. Ultimately, this dataset holds significant potential for improving processes and outcomes within the field of organ donation and procurement.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient ML Inference via Matrix-Vector Approximations</title>
<link href="https://hdl.handle.net/1721.1/162737" rel="alternate"/>
<author>
<name>Li, Daniel D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162737</id>
<updated>2025-09-19T04:49:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient ML Inference via Matrix-Vector Approximations
Li, Daniel D.
Efficient inference is a growing priority in deep learning, where large model sizes and increasing deployment demands pose challenges for latency, memory, and energy usage. This thesis presents a unified framework for evaluating approximation methods that accelerate inference by modifying weight matrices. We model each method as a function f_c(A) that approximates a weight matrix A under a compression rate c, and assess its impact on both matrix–vector accuracy and downstream task performance. We conduct empirical evaluations across two representative models, AlexNet on CIFAR10 and DistilBERT on AG News, comparing quantization, sparsification, and low-rank approximations. Our analysis spans four perspectives: (1) how different methods trade off ℓ₂ error and compression, (2) how weight statistics and input distributions shape error, (3) how well ℓ₂ error predicts classification accuracy, and (4) how idealized compression differs from real memory savings. We find that sparsification offers a strong trade-off between storage and accuracy, particularly because it preserves task-relevant structure in the weights. We also show that ℓ₂ error is not always a reliable proxy for accuracy, especially when input data lie on low-dimensional manifolds. These results suggest that approximation quality must be evaluated not only by global distortion metrics, but also by how the method interacts with model structure and input distributions. Our findings offer practical guidance for deploying efficient deep learning models and shed light on how compression affects performance in real-world settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning</title>
<link href="https://hdl.handle.net/1721.1/162736" rel="alternate"/>
<author>
<name>Lee, Jimin</name>
</author>
<id>https://hdl.handle.net/1721.1/162736</id>
<updated>2025-09-19T04:49:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning
Lee, Jimin
Effective reasoning often requires more than text or language. It requires visualizing, drawing, gesturing, and interacting for both humans and artificial intelligence (AI). Specifically in educational subjects, such as geometry and graphs, visual tools like auxiliary annotations and drawings can greatly help students understand abstract theories. This thesis explores and suggests how multimodal interaction between humans and AI helps humans engage with the system more naturally and effectively, leading to improved problem-solving in mathematical settings. Recent large multimodal models (LMMs) have the ability to facilitate collaborative reasoning by supporting textual, visual, and interactive inputs, diversifying methods of communication between humans and AI. Utilizing such advancements, this thesis also dives into the development of Interactive Sketchpad, a tutoring system that combines language-based explanations with interactive visualizations to enhance learning. It also reviews findings from user studies with Interactive Sketchpad, demonstrating that multimodality contributes to user task comprehension and engagement levels. Together, these contributions can reframe the role of AI in education as a visual and interactive collaborator that supports deeper reasoning rather than simply providing answers. Furthermore, this work demonstrates the potential of multimodal human-AI systems in fostering engagement and scaling personalized, visual learning across domains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast and Scalable Subgraph Learning</title>
<link href="https://hdl.handle.net/1721.1/162735" rel="alternate"/>
<author>
<name>Liang, Derrick</name>
</author>
<id>https://hdl.handle.net/1721.1/162735</id>
<updated>2025-09-19T04:49:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fast and Scalable Subgraph Learning
Liang, Derrick
Graph Neural Networks (GNNs) are a powerful framework for learning over structured data, enabling predictive modeling across domains such as bioinformatics, recommendation systems, and financial fraud detection. While scalable systems like SALIENT++ have advanced the training of node-level GNN tasks at industrial scale, they do not support an emerging class of workloads: subgraph classification, which is increasingly common in real-world applications. Prior implementations address this gap by modifying both the data pipeline and the model architecture—but at the cost of composability, creating tightly coupled systems that slow further development. This thesis introduces MOSAIC, a lightweight data transformation that reframes subgraph classification as nodewise prediction by augmenting the graph with representative nodes. This approach enables direct compatibility with SALIENT++ and other nodewise systems while decoupling workload format, dataloader design, and model architecture. I demonstrate that MOSAIC enables modular reuse of architectures like GraphSAGE and subgraph-aware components from GLASS, while preserving SALIENT++’s system-level scalability. On the large-scale Elliptic2 dataset, this integration reduces training memory usage by 2.8× and epoch runtime from over 90 minutes to 0.4 seconds—while improving classification performance. I implement MOSAIC as a succinct (&lt;100-line), reusable preprocessing script, enabling integration of the GLASS architecture into SALIENT++ in &lt;10 lines of code, compared to Wang et al.’s tightly coupled 500+ line design. These results highlight the feasibility of scalable, composable experimentation for subgraph learning tasks in high-performance GNN systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Scene Editing via Semantically Trained 3D&#13;
Gaussians</title>
<link href="https://hdl.handle.net/1721.1/162734" rel="alternate"/>
<author>
<name>Lam, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/162734</id>
<updated>2025-09-19T04:49:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dynamic Scene Editing via Semantically Trained 3D&#13;
Gaussians
Lam, Jordan
Image-based 3D scene reconstruction continues to be a challenge as it involves solving both the sufficient 3D representation problem and the 3D reconstruction itself. One approach to tackling the rendering problem is 3D Gaussian Splatting because of its potential to produce fast and realistic renders via a 3D Gaussian representation. With many applications in the entertainment industry, there is motivation for using 3D Gaussian Splatting not only for reconstructing dynamic 3D scenes but also for editing them. However, extending the problem to dynamic 3D scenes proves to be a challenging task as it involves discerning the correct representation of a 3D scene while maintaining the capability to render in real time. State-of-the-art methods reconstruct dynamic scenes or edit static scenes, but the problem of editing dynamic scenes is still underexplored. This thesis analyzes the feasibility of editing semantically trained Gaussians for dynamic 3D scene editing. By training 3D Gaussians to represent the semantics across the time steps of a dynamic 3D scene, these primitives can be combined with an image editing pipeline to perform real-time, realistic 3D scene editing. Results show that editing segmented 3D Gaussians produces higher-quality and more efficient renders than editing without segmentation. However, when evaluated for mainstream applications, results reveal the impracticality of this pipeline and draw focus to memory and editing limitations that need to be further researched for future advances in 3D Gaussian Splatting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized AI for Methylation Data with Applications to&#13;
Precision Health</title>
<link href="https://hdl.handle.net/1721.1/162733" rel="alternate"/>
<author>
<name>Jamee, Mehrab S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162733</id>
<updated>2025-09-19T04:49:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized AI for Methylation Data with Applications to&#13;
Precision Health
Jamee, Mehrab S.
Advances in precision health rely on integrating large-scale genomic data to identify biomarkers and predict health outcomes. However, sharing sensitive patient data between institutions like hospitals poses significant privacy and security challenges, limiting collaboration and the development of robust machine learning models. This thesis proposes a decentralized artificial intelligence framework for analyzing DNA methylation data, enabling institutions to collaboratively train models without exchanging sensitive information. By taking advantage of generative deep learning techniques and federated learning paradigms, the framework aims to impute missing biomarkers in fragmented datasets and improve the accuracy of downstream predictive tasks, such as predicting chronological age, mortality, and cancer status. Two intermediate models are implemented and evaluated in this thesis. The first predicts age from DNA methylation data and can be used to evaluate the imputation model. The second is an imputation model that uses a conditional autoencoder architecture to reconstruct missing biomarker data in clinical datasets, which is designed to take advantage of contextual methylation embeddings made available by recently published pretrained epigenomics foundation models. This work seeks to advance the use of decentralized AI in epigenomics, with the ultimate goal of improving personalized healthcare while preserving patient privacy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Smallholder Field Delineation</title>
<link href="https://hdl.handle.net/1721.1/162732" rel="alternate"/>
<author>
<name>Janjigian, Lily T.</name>
</author>
<id>https://hdl.handle.net/1721.1/162732</id>
<updated>2025-09-19T04:49:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring Smallholder Field Delineation
Janjigian, Lily T.
Accurate crop field delineation from satellite imagery is a critical component of agricultural monitoring. However, most existing models are developed and evaluated in large-scale, industrial agricultural regions, where field boundaries are relatively regular and high-quality annotated data is more readily available. In contrast, smallholder regions—where fields are smaller, more irregularly shaped, and often lack precise geospatial labels—remain underrepresented in both data and model performance. This thesis investigates model architectures, loss functions, and learning paradigms for improving segmentation performance in smallholder settings. Using datasets from Austria, India, and Rwanda, we evaluate several model configurations including ResUNet++ with Dice+BCE and Tanimoto+BCE losses, a meta-learned ResUNet++ using Model-Agnostic Meta-Learning (MAML), and SAM2 ViT-H, a large vision transformer released by Meta, evaluated in a zero-shot setting. We introduce a data processing pipeline that converts vector field boundaries from the FTW dataset into high-resolution image–mask pairs suitable for supervised learning. Quantitative and qualitative results reveal that models trained on industrial-scale data perform poorly in smallholder regions without adaptation. SAM2 exhibits strong zero-shot performance, especially on larger fields, while ResUNet++ models trained directly on the India dataset perform more consistently across small- to medium-sized fields. MAML yielded underwhelming performance under resource constraints, highlighting the need for further tuning. These findings underscore the importance of geographically diverse, well-aligned training data and support the case for developing globally representative agricultural segmentation datasets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>You Only Look Twice: An Ensemble Deep Learning Model&#13;
for Wildfire Detection Using Terrestrial Camera Networks</title>
<link href="https://hdl.handle.net/1721.1/162731" rel="alternate"/>
<author>
<name>Jones, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162731</id>
<updated>2025-09-19T04:49:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">You Only Look Twice: An Ensemble Deep Learning Model&#13;
for Wildfire Detection Using Terrestrial Camera Networks
Jones, John M.
Wildfires represent a growing global threat that requires rapid detection and response to minimize environmental damage, economic losses, and human casualties. In the United States, California stands out as a particularly common wildfire hot spot. Recent fire seasons have shattered historical records and been particularly devastating. This work investigates innovative methods for classifying and localizing wildfires through terrestrial cameras positioned on elevated terrain, aimed at improving early detection capabilities and response times while maintaining computational efficiency and reliability for the U.S. Space Force in Southern California. We present YOL2, a novel ensemble approach that combines a fine-tuned ConvNeXt Convolutional Neural Network incorporating a Dynamic Tanh normalization layer with a fine-tuned YOLO11 model for precise localization. Using a comprehensive dataset of 33,636 time-sequenced images from terrestrial cameras across the United States and Europe, our system achieves 98% fire detection accuracy and 55% localization mean average precision [50:95]. The implementation of Dynamic Tanh normalization—applied for the first time in wildfire detection—enhances computational efficiency without sacrificing performance. The images used capture the spread of incipient fires over time, with most containing bounding boxes denoting the approximate location of fire, allowing our system to identify fires quickly while minimizing false positives. Importantly, our spatiotemporal system operates effectively without requiring individual models to rely on multiple time steps as input, enabling modular component replacement and adaptation. The use of pan, tilt, and zoom cameras in concert with our YOLO model provides a more computationally efficient confirmation of fire than alternative methods, showing that extracting better results from less information is possible. 
Beyond wildfire applications, the YOL2 ensemble methodology demonstrates profound implications for remote sensing more broadly. This work establishes a foundation for highly efficient visual detection systems applicable across numerous domains requiring rapid and accurate object identification and localization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards transparent representations: on internal structure and external world modeling in LLMs</title>
<link href="https://hdl.handle.net/1721.1/162730" rel="alternate"/>
<author>
<name>Hariharan, Kaivalya</name>
</author>
<id>https://hdl.handle.net/1721.1/162730</id>
<updated>2025-09-19T04:49:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards transparent representations: on internal structure and external world modeling in LLMs
Hariharan, Kaivalya
Large language models (LLMs) generalize far beyond their training distribution, enabling impressive downstream performance in domains vastly different from their pretraining distribution. In this thesis, we develop a data-centric view on machine learning. We suggest that the deep generalization of LLMs is best understood through studying the relationships between the four fundamental components of this data generalization: pretraining data, test-time inputs, model outputs, and internal structure. Of these, we present two full research studies characterizing test-time inputs and internal structure. Chapter 1 develops the data-centric view of machine learning and outlines the thesis. Chapter 2 presents Breakpoint, a method of generating difficult coding tasks for models at a large scale that attempts to disambiguate the factors that make problems at test-time difficult. Chapter 3 analyzes the structure of gradient-based jailbreaks (GBJs) in LLMs. We argue that even though GBJs are more out of distribution than even random text, they induce a low-rank, structured change in models. Finally, Chapter 4 discusses the recent rise of reasoning models and proposes some lines of future work in the data-centric view towards developing a more robust understanding of LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools</title>
<link href="https://hdl.handle.net/1721.1/162729" rel="alternate"/>
<author>
<name>Hong, Stephen S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162729</id>
<updated>2025-09-19T04:49:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools
Hong, Stephen S.
Optical tracking technology in sports has advanced rapidly in recent years, enabling new opportunities for data-driven analysis and tools to enhance the game. This study presents a framework for processing and analyzing a new skeletal tracking dataset collected from NBA basketball games. The methodology includes biomechanical joint validation, anomaly detection, and region-based consistency analysis to assess the integrity of player motion data. Joint movement anomalies are used to detect tracking errors, while court region and stadium-level evaluations help identify where the optical tracking system may be underperforming. These patterns can guide data providers toward specific areas that require refinement, offering a clearer starting point for improving system accuracy. After cleaning the dataset of 117 NBA games, two action recognition models—a transformer-based model and a temporal graph neural network—are implemented to classify player actions, specifically dribbling, passing, shooting, and rebounding, from sequences of skeletal tracking frames. The objective is to establish a baseline for developing tools to support officiating decisions in the NBA. By leveraging spatiotemporal representations of joint motion, this work improves the reliability of skeletal tracking data and contributes to the advancement of automated decision support in professional sports officiating.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Enhanced Proposals for PINN-Based Neural&#13;
Sampler Training</title>
<link href="https://hdl.handle.net/1721.1/162728" rel="alternate"/>
<author>
<name>Erives, Ezra</name>
</author>
<id>https://hdl.handle.net/1721.1/162728</id>
<updated>2025-09-19T04:49:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Enhanced Proposals for PINN-Based Neural&#13;
Sampler Training
Erives, Ezra
Sampling from distributions whose density is known up to a normalizing constant is an important problem with a wide range of applications including Bayesian posterior inference, statistical physics, and structural biology. Annealing-based neural samplers seek to amortize sampling from unnormalized distributions by training neural networks to transport a family of densities interpolating from source to target. A crucial design choice in the training phase of such samplers is the proposal distribution by which locations are generated at which to evaluate the loss. Previous work has obtained such a proposal distribution by combining a partially learned vector field with annealed Langevin dynamics. However, isolated modes and other pathological properties of the annealing path imply that such proposals achieve insufficient exploration and thereby lower performance post training. In this work we extend existing work and characterize new families of proposals based on controlled Langevin dynamics. In particular, we propose continuously tempered diffusion samplers, which leverage exploration techniques developed in the context of molecular dynamics to improve proposal distributions. Specifically, a family of distributions across different temperatures is introduced to lower energy barriers at higher temperatures and drive exploration at the lower temperature of interest. We additionally explore proposals based on Langevin dynamics involving non-Newtonian kinetic energies. We empirically validate improved sampler performance driven by extended exploration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Sketch to Stage: Tools for Prototyping and Exporting&#13;
Collaborative DMIs on the Web</title>
<link href="https://hdl.handle.net/1721.1/162727" rel="alternate"/>
<author>
<name>Luchko, Yaro</name>
</author>
<id>https://hdl.handle.net/1721.1/162727</id>
<updated>2025-09-19T04:49:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Sketch to Stage: Tools for Prototyping and Exporting&#13;
Collaborative DMIs on the Web
Luchko, Yaro
This thesis presents tools and ideas for prototyping and exporting collaborative digital music instruments (DMIs) on the web, the primary purpose of which is to lower the barrier to making music and to enable easier collaboration. This is done in the context of the Creativitas website, which has become a tool of the MIT 21M.080 "Introduction to Music Technology" class to learn about music technology and audio on the web, and a tool for FaMLE (the Fabulous MIT Laptop Ensemble) to use in live performances. The website allows creators to execute code within an editor code box and partake in a practice known as live coding, ultimately creating both sound and visuals. Audio is primarily created with the Tone.js interactive web audio framework, and visuals are drawn on a provided canvas using p5.js. This thesis extends the Creativitas website by providing functionality for exporting the written code as a standalone website. The exported standalone websites serve as DMIs, with standard controls such as volume, tempo, and start and stop buttons. Furthermore, we discuss and implement strategies for synchronizing timing and instrument values. This includes state-of-the-art strategies, as well as ideas for creating extendable interfaces that can include more strategies as they are developed. We end with two examples of exported DMIs, which can be effectively used in performances.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning</title>
<link href="https://hdl.handle.net/1721.1/162726" rel="alternate"/>
<author>
<name>Lei, Si Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/162726</id>
<updated>2025-09-19T04:49:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning
Lei, Si Liang
Background. Programmable expressive features—such as speech, facial expressions, and chatbot-style dialogue—are often promoted as tools to enhance engagement in educational robotics. While prior research shows benefits in socially-oriented tasks like storytelling or group collaboration, it remains unclear how student-controlled expressive blocks affect learning when the task itself is non-social. This study isolates the impact of such features in a context where expressiveness is not instructionally required. Method. We conducted a controlled, two-cohort study with 41 middle school students (ages 10–12) during a one-day AI-and-robotics workshop using the Doodlebot platform. Students in the experimental group had access to optional blocks enabling the robot to speak, emote, and use GPT-based responses. These features were hidden from the control group. All participants completed identical programming tasks (e.g., maze navigation, visual classification) that did not require social interaction. Data sources included pre/post surveys, facilitator notes, and student code. We applied the Mann–Whitney U test [1, 2] and reflexive thematic analysis [3, 4] to examine outcomes. Results. The expressive condition showed no significant gains in programming confidence or peer trust, but performed significantly worse on the post-workshop concept quiz (p = .007, r = .41). Qualitative data revealed that students in this group often used expressive blocks for entertainment rather than learning, leading to distraction, off-task behavior, and increased reliance on adult facilitation. Contributions. This study contributes (i) empirical evidence on the limitations of robot expressiveness in non-social learning contexts, (ii) a mixed-methods protocol for analyzing classroom robot deployments, and (iii) design guidance for aligning robot behavior with pedagogical intent. Implications. Expressiveness in educational robots should be contextually deployed—not assumed beneficial by default. 
In technical, goal-driven tasks that do not involve social reasoning, unscaffolded expressiveness may introduce cognitive overhead or divert attention. We propose a “dial-a-sociality” model, where robot behavior can be flexibly tuned to match the demands of the learning environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar</title>
<link href="https://hdl.handle.net/1721.1/162724" rel="alternate"/>
<author>
<name>Kuka, Adrian</name>
</author>
<id>https://hdl.handle.net/1721.1/162724</id>
<updated>2025-09-19T04:49:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar
Kuka, Adrian
The past few years have witnessed growing interest in using millimeter-wave signals for non-line-of-sight (NLOS) perception tasks, with applications in robotics, augmented reality, and smart homes. However, existing systems suffer from a lack of large mmWave datasets, resulting in limited accuracy and generalizability compared to their line-of-sight, camera-based counterparts. We present the design, implementation, and evaluation of mmSim, a new, high-speed millimeter-wave (mmWave) simulator capable of producing large synthetic datasets to help drive the field of mmWave-based NLOS perception. mmSim introduces two main contributions to improve speed over existing mmWave simulators. First, it pre-selects the areas of the object that will produce reflections towards each simulated antenna location, allowing it to minimize future computation. Second, it introduces a coarse-to-fine approach that allows early, less critical steps to operate at lower resolutions, while maintaining the high resolution in later steps required for high-accuracy images. These techniques, combined with other performance optimizations, allow mmSim to achieve a more than 24x improvement in speed over state-of-the-art mmWave simulators.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards AI Safety via Interpretability and Oversight</title>
<link href="https://hdl.handle.net/1721.1/162723" rel="alternate"/>
<author>
<name>Kantamneni, Subhash</name>
</author>
<id>https://hdl.handle.net/1721.1/162723</id>
<updated>2025-09-19T04:49:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards AI Safety via Interpretability and Oversight
Kantamneni, Subhash
In this thesis, we advance AI safety through mechanistic interpretability and oversight methodologies across three key areas: mathematical reasoning in large language models (LLMs), the validity of sparse autoencoders, and scalable oversight. First, we reverse-engineer addition within mid-sized LLMs and discover that LLMs represent numbers as helices. We demonstrate that LLMs perform addition via the manipulation of these helices using a "Clock" algorithm, providing the first representation-level explanation of mathematical reasoning in LLMs, verified through causal interventions on model activations. Next, we rigorously evaluate sparse autoencoders (SAEs), a popular interpretability tool, by testing their effectiveness on the downstream task of probing. We test SAEs under challenging probing conditions, including data scarcity, class imbalance, label noise, and covariate shift. While SAEs occasionally outperform baseline methods, they fail to consistently enhance task performance, underscoring a potentially critical limitation of SAEs. Lastly, we introduce a quantitative framework to evaluate scalable oversight - a promising idea where weaker AI systems supervise stronger ones - as a function of model intelligence. Applying our framework to four oversight games ("Mafia," "Debate," "Backdoor Code," and "Wargames"), we identify clear scaling patterns and extend our findings through a theoretical analysis of Nested Scalable Oversight (NSO), deriving conditions for optimal oversight structures. Together, these studies advance our understanding of AI interpretability and alignment, providing insights and frameworks to progress AI safety.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metagradient Descent: Differentiating Large-Scale Training</title>
<link href="https://hdl.handle.net/1721.1/162722" rel="alternate"/>
<author>
<name>Chen, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162722</id>
<updated>2025-09-19T04:49:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metagradient Descent: Differentiating Large-Scale Training
Chen, Benjamin
A major challenge in training large-scale machine learning models is configuring the training process to maximize model performance, i.e., finding the best training setup from a vast design space. In this work, we unlock a gradient-based approach to this problem. We first introduce an algorithm for efficiently calculating metagradients -- gradients through model training -- at scale. We then introduce a "smooth model training" framework that enables effective optimization using metagradients. With metagradient descent (MGD), we greatly improve on existing dataset selection methods, outperform accuracy-degrading data poisoning attacks by an order of magnitude, and automatically find competitive learning rate schedules.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A simplified approach to calculating personalized estimates for electric vehicle charging delays</title>
<link href="https://hdl.handle.net/1721.1/162721" rel="alternate"/>
<author>
<name>Chen, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/162721</id>
<updated>2025-09-19T04:49:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A simplified approach to calculating personalized estimates for electric vehicle charging delays
Chen, Helen
In the past decade, electric vehicles (EVs) have gained traction as a cleaner alternative to internal combustion engine vehicles, commonly referred to as gas-powered vehicles. To promote EV adoption, the government has implemented various regulations and incentives to support the transition to cleaner transportation. However, EV adoption in the United States has progressed more slowly than expected, with EVs accounting for less than 10 percent of new vehicle sales in 2023. Recent surveys indicate that a significant barrier is the perceived inconvenience and uncertainty surrounding EV charging, particularly the additional time required to charge during active use, which we call charging delay. Currently, there exist some models for estimating these charging delays, but these models require users to input a significant amount of information, such as their daily driving schedules, locations of charging stations, and exact distances of trips taken each year, which many users may not even remember. These more complex models are likely to overwhelm users, especially those who may be entirely new to EVs. To fill this gap, this thesis introduces a simplified model for estimating personalized annual EV charging delay using a set of easy-to-provide inputs, including typical driving behavior and access to home and work charging. The model logic captures delay from both routine usage, such as weekly driving patterns or typical trips, and occasional, high-energy long-distance trips, which, while not routine, are still important to account for. For weekly trips, the model considers four scenarios based on combinations of home and work charging access to determine driving and charging schedules. For long-distance travel, the model uses data from the 2022 National Household Travel Survey (NHTS) and performs multiple iterations of bootstrap resampling to create synthetic distributions of long-distance trips within a year. 
Data related to individual routine vehicle usage and charging delay is unavailable, so we are unable to validate the model’s performance through accuracy calculations. Instead, we performed a one-at-a-time sensitivity analysis to better understand how charging delay is affected by different factors. We found that access to private charging, such as home or work charging, improves charging delay robustness for regular weekly trips, with the exception that relying solely on work charging on workdays can cause stepwise increases in non-workday delays. Additionally, long-distance trip delays are not affected by private charging access and follow a stepwise pattern based on vehicle range. In general, the simplified approach presented in this thesis offers a more accessible way for current and prospective EV owners to clearly understand their own expected experience of EV ownership.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards</title>
<link href="https://hdl.handle.net/1721.1/162720" rel="alternate"/>
<author>
<name>Li, Zhening</name>
</author>
<id>https://hdl.handle.net/1721.1/162720</id>
<updated>2025-12-05T17:48:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards
Li, Zhening
Skills are temporal abstractions that are intended to improve reinforcement learning (RL) performance through hierarchical RL. Despite our intuition about the properties of an environment that make skills useful, there has been little theoretical work aimed at characterizing these properties precisely. This work studies the utility of skills in sparse-reward environments with a discrete state space and finite action space. We show, both theoretically and empirically, that RL performance gains from skills are worse in environments where successful trajectories are less compressible. In environments with a highly incompressible distribution of successful trajectories, using unexpressive skills such as macroactions will provably worsen RL performance. We hope our findings can guide research on automatic skill discovery and help RL practitioners better decide when and how to use skills.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Supervised ECG Learning for Multimodal Clinical Tasks</title>
<link href="https://hdl.handle.net/1721.1/162719" rel="alternate"/>
<author>
<name>Chen, Peilin</name>
</author>
<id>https://hdl.handle.net/1721.1/162719</id>
<updated>2025-09-19T04:49:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Self-Supervised ECG Learning for Multimodal Clinical Tasks
Chen, Peilin
We present a multimodal clinical AI framework that integrates time series, images, and text to support robust diagnostic reasoning across diverse input combinations. We first introduce ECG-JEPA, a self-supervised encoder pretrained on multiple ECG datasets to learn generalizable time series representations. This unimodal pretraining improves ECG classification, achieving a 23-point AUC gain on the underrepresented Ga dataset. We then align and fuse these ECG embeddings with chest X-rays and EHR text using a vision–language model backbone, enabling end-to-end multimodal inference. Our results show that incorporating ECG signals meaningfully improves diagnostic performance, highlighting the value of multitask time series pretraining and modular fusion for clinical AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)</title>
<link href="https://hdl.handle.net/1721.1/162718" rel="alternate"/>
<author>
<name>Huang, Roderick W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162718</id>
<updated>2025-09-19T04:49:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)
Huang, Roderick W.
The use of Mean-Variance Portfolio Optimization (MVO) in Modern Portfolio Theory (MPT) has been a long-standing method to guide investment decisions for market-traded assets like stocks and bonds. Recent research shows that portfolio optimization developed using MPT could prove useful in investment decisions for technology projects. Traditionally, empirical data from past projects and statistically driven technology trends are used to predict the risk-return model necessary for MPT. This thesis introduces a new methodology, Optimizing Portfolios in Technologies Investments Methodology with Hierarchy (OPTIM-H), which extends MPT to make investment decisions within a hierarchical organizational structure of technology projects. An integrated dataset was developed to demonstrate this methodology, combining 19,000 data records from Techport and Small Business Innovation Research (SBIR) datasets. The dataset captures investment trends and maturity pathways across 17 taxonomy areas, revealing that most projects begin at Technology Readiness Levels (TRLs) 2–4, with average funding amounts near $300,000. OPTIM-H effectively distinguishes between broader technology groups and their subcategories, showing the impact of community interest on investment decisions. Furthermore, this work investigates k-means clustering as a tool for classifying technology projects for targeted investment, with the analysis identifying seven clusters and achieving a mean utility score of 0.595 with a standard deviation of 0.651.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Canvas with a Large-Scale Social Annotation Platform</title>
<link href="https://hdl.handle.net/1721.1/162717" rel="alternate"/>
<author>
<name>Heiberger, Henry R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162717</id>
<updated>2025-09-19T04:49:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Canvas with a Large-Scale Social Annotation Platform
Heiberger, Henry R.
The last decade has seen a growing interest in the use of collaborative annotation systems, educational tools that allow multiple users to asynchronously comment, highlight, and discuss digital content directly on the source material, transforming traditional classroom readings into a more engaging group activity. Originally developed by MIT CSAIL’s Haystack Group in 2012 under the direction of Professor David Karger, Nota Bene (NB) is a particular collaborative annotation tool that allows students to have annotated online discussions in the margins of textbooks, papers, and even webpages [1]. Though various studies have already proven its ability to succeed in a classroom setting, conversations with key stakeholders have revealed that the tool is missing a key feature found in many other popular collaborative annotation solutions: integration with the Canvas learning management system (LMS) [1–3]. Thus, this work sought to integrate the classroom management features that Canvas provides into the NB platform by supporting Canvas account linking, class importation and roster synchronization, and automatic grade uploading. By doing this, we hoped to improve NB’s quality as a classroom tool, enhancing its value to institutions, encouraging its wider adoption across the academic landscape, and aligning with a much broader trend of creating more integrated, efficient, and user-friendly educational technology solutions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Passive-Scoping as a Method for Large Language Model Robustness to Jailbreaks and Adversarial Examples</title>
<link href="https://hdl.handle.net/1721.1/162716" rel="alternate"/>
<author>
<name>Hernandez, Adriano</name>
</author>
<id>https://hdl.handle.net/1721.1/162716</id>
<updated>2025-09-19T04:49:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Passive-Scoping as a Method for Large Language Model Robustness to Jailbreaks and Adversarial Examples
Hernandez, Adriano
Artificial Intelligence (AI) and large language models (LLMs) present challenges not only for adversarial robustness, but also through the natural emergence of unwanted capabilities. Current approaches to safeguarding AI and LLMs predominantly rely on explicitly restricting known instances of such attacks and capabilities. However, this places a burden on model developers, because they cannot anticipate all the potential attacks and undesirable capabilities. To solve this problem, we leverage interdisciplinary knowledge. In the field of information security, the principle of least privilege provides guidance on how to defend against unknown threats. In AI, the principle could be implemented by ensuring that developers specify the knowledge and capabilities an AI system should retain, restricting all others by default. We call this application of the principle of least privilege passive scoping. Our thesis makes two claims:
1. We argue that (a) passive scoping mitigates concerns about adversarial robustness and loss of control of AI systems, and (b) passive scoping that edits the weights and activations at post-training time is underexplored in the literature.
2. Of possible approaches, our sparse autoencoder (SAE) filters can implement this underexplored type of passive scoping. They increase safety relative to LoRA finetuning and prompt engineering, but leave room for improvement.
The thesis is structured as follows:
1. Chapter 2 elucidates the challenges with adversarial robustness and loss-of-control risk. Chapter 3 puts forward a conceptual argument for the benefits of passive scoping, then analyzes the extent to which passive scoping has been attempted. These two chapters work together to defend claims 1a and 1b.
2. Chapter 4 defines our optimization problem. Chapter 5 defines our experimental methodology and metrics. These two chapters define our success criteria for claim 2. Chapter 6 finalizes our defense of claim 2 based on our results.
3. Chapter 7 explores related work, Chapter 8 engages in a broader discussion, and Chapter 9 summarizes the contributions of this thesis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explorations in AI and Creative Learning: New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch</title>
<link href="https://hdl.handle.net/1721.1/162715" rel="alternate"/>
<author>
<name>Huang, Alexis</name>
</author>
<id>https://hdl.handle.net/1721.1/162715</id>
<updated>2025-09-19T04:49:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explorations in AI and Creative Learning: New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch
Huang, Alexis
As generative AI tools become increasingly prevalent in young people’s lives, these technologies have a growing influence over the way that children learn. While much of the early work at the intersection of AI and education has focused on the development of intelligent tutoring systems designed to deliver content more efficiently, this thesis explores how generative AI might be used to support the creative learning process by sparking curiosity, encouraging exploration, and helping young people express themselves creatively. In this thesis, I explore ways of integrating generative AI with Scratch, the world's largest programming community for children, while remaining aligned with the core values of Scratch: creativity, playfulness, and self-expression. I designed three tools that extend the Scratch ecosystem: Scratch Connect, which explores using generative AI to help Scratchers discover projects that inspire them to create while opening the black box of recommendation systems; scrAItch, which investigates how people can iterate with generative AI by using text-based inputs to create and tinker with Scratch projects; and Scratch Spark, which reimagines the new learner experience by using generative AI to help users create personally meaningful “spark projects.” This thesis describes the process of imagining, creating, and reflecting on these tools, including many of the challenges and tensions that we encountered along the way. I discuss observations and feedback from creative workshops with young people, and conclude by reflecting on open questions and opportunities for future work in designing generative AI tools that support creative learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Hardware Design Choices on Neural Network Accuracy in Analog Inference Accelerators</title>
<link href="https://hdl.handle.net/1721.1/162714" rel="alternate"/>
<author>
<name>Forsythe, Eyan</name>
</author>
<id>https://hdl.handle.net/1721.1/162714</id>
<updated>2025-09-19T04:49:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Effects of Hardware Design Choices on Neural Network Accuracy in Analog Inference Accelerators
Forsythe, Eyan
Analog accelerators can enable energy-efficient and high-throughput deep neural network (DNN) computations by computing in memory. Unfortunately, device and circuit non-idealities in these accelerators, such as noise and quantization, can lead to low DNN inference accuracy through the computation errors they introduce. These errors are largely a function of both the choice of DNN workload and of hardware design choices, such as circuit topology and DNN operand encoding. Different hardware design choices can affect the energy, throughput, and area of the system, so it is important to understand how these design choices interact with DNN inference accuracy. However, there is a lack of systematic understanding of how each of these hardware design decisions affects accuracy and how they interact with other design decisions. To address these issues, we model how hardware design choices can lead to analog errors such as noise and quantization. Then, we explore how these errors affect inference accuracy in analog accelerators and how tradeoffs can be made between inference accuracy, energy efficiency, area, and throughput. We find that analog errors generated from hardware design decisions can cause different amounts of accuracy loss depending on which layer in a DNN is subject to these errors. As a result, the structure of the DNN has a significant impact on how hardware design choices affect DNN inference accuracy, especially with respect to the individual layers of a DNN. We use knowledge of the relationships between device and circuit non-idealities to improve the accuracy of published analog accelerators and analyze the energy and area costs of the increased accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of High-Resolution SAR ADC for Detection of Sub-Cortical Neuron Action Potentials for BMI Applications</title>
<link href="https://hdl.handle.net/1721.1/162713" rel="alternate"/>
<author>
<name>Guobadia, Omozusi E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162713</id>
<updated>2025-09-19T04:49:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design of High-Resolution SAR ADC for Detection of Sub-Cortical Neuron Action Potentials for BMI Applications
Guobadia, Omozusi E.
The advancement of brain-machine interfaces (BMIs) requires neural signal acquisition systems that are capable of resolving both fast, low-amplitude action potentials (APs) and slow, higher-amplitude local field potentials (LFPs) under stringent power and area constraints. This thesis presents the design and simulation of a high-resolution, low-power successive approximation register (SAR) analog-to-digital converter (ADC) tailored for sub-cortical neural signal detection. To optimize dynamic range and reduce power consumption, a novel adaptive zoom-and-tracking architecture is introduced, enabling the ADC to dynamically adjust its reference window based on LFP trends while maintaining high-resolution capture of APs. The proposed system integrates a bootstrapped track-and-hold circuit, a differential capacitive DAC, and a strong-arm comparator in the analog front-end, alongside a digital FIR filter and SAR logic with zoom-range control in the digital domain. Simulations validate the functionality of each subsystem independently and in concert, demonstrating the system’s ability to dynamically isolate APs from LFP-dominated baselines while reducing analog power draw by over 60% compared to fixed-range ADCs. This work offers a promising approach for scalable, energy-efficient neural recording architectures suited to future BMI applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformer-Based Prediction of Coronary Artery Lumen Expansion Post Angioplasty Using Optical Coherence Tomography</title>
<link href="https://hdl.handle.net/1721.1/162712" rel="alternate"/>
<author>
<name>Gupta, Shreya</name>
</author>
<id>https://hdl.handle.net/1721.1/162712</id>
<updated>2025-09-19T04:49:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformer-Based Prediction of Coronary Artery Lumen Expansion Post Angioplasty Using Optical Coherence Tomography
Gupta, Shreya
Coronary artery disease is the leading cause of mortality globally, resulting in an urgent and critical need to better understand both vessel morphology and the processes of intervention. Angioplasty is an intervention which causes a previously constricted vessel to expand via placement of a stent, and is affected by numerous characteristics of the vessel such as calcium eccentricity and size, wall thickness, and prior lumen size. Being able to accurately assess whether a stent will properly expand allows cardiologists to pursue pre-stenting calcium lesion modification strategies that help avoid dangerous complications of improper stenting. This work introduces a pipeline for post-stenting lumen area prediction from pre-stenting optical coherence tomography (OCT) images. This pipeline includes morphological correction of OCT image segmentations, explainable feature extraction from OCT segmentations, and a predictive transformer network that combines morphological features with injected stent information. The aim is for such a pipeline to be used to support clinical decision making.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complete Visual and Geometric Object Reconstruction via Autonomous Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/162711" rel="alternate"/>
<author>
<name>Fu, Evelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162711</id>
<updated>2025-09-19T04:49:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Complete Visual and Geometric Object Reconstruction via Autonomous Robotic Manipulation
Fu, Evelyn
Accurately simulating object dynamics based on real-world perception inputs has wide applications in digital twins and robotic manipulation. Yet doing so requires practitioners to carefully measure and reconstruct the dynamic and geometric properties of the objects, which is time-consuming and requires domain expertise. This project proposes an automatic pipeline to construct 3D representations from a collection of real objects, which can further be used to generate assets with accurate visual texture and collision geometry for use in simulation. The pipeline is designed to have minimal hardware requirements and to minimize the time spent on physical actuation, maximizing data collection on limited hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-based Planning for Efficient Task Execution</title>
<link href="https://hdl.handle.net/1721.1/162710" rel="alternate"/>
<author>
<name>Ding, Wenqi</name>
</author>
<id>https://hdl.handle.net/1721.1/162710</id>
<updated>2025-09-19T04:49:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Model-based Planning for Efficient Task Execution
Ding, Wenqi
Robotic agents navigating 3D environments must continuously decide their next moves by reasoning about both visual observations and high-level language instructions. However, they plan in a high-dimensional latent space that is opaque to human collaborators, making it difficult for humans to understand the agent’s decision-making process. This lack of interpretability hinders effective collaboration between humans and robots. The key question we are trying to answer in this thesis is: can we build a unified planning framework that fuses visual and language inputs into a single, interpretable representation, so that humans can interpret robots’ decisions? We propose a model-based planning framework built around pretrained vision-language models (VLMs). We show that VLMs can be used to plan in a unified embedding space, where visual and language representations can be decoded back to human-interpretable forms. Empirical evaluation on vision-language navigation benchmarks demonstrates both improved sample efficiency and transparent decision making, enabling human-in-the-loop planning and more effective human-robot collaboration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global Non-Convex Optimization with Integer Variables</title>
<link href="https://hdl.handle.net/1721.1/162709" rel="alternate"/>
<author>
<name>Kriezis, Demetrios C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162709</id>
<updated>2025-09-19T04:49:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Global Non-Convex Optimization with Integer Variables
Kriezis, Demetrios C.
Non-convex optimization refers to the process of solving problems whose objective or constraints are non-convex. Historically, this type of problem has been very difficult to solve to global optimality, with traditional solvers often relying on approximate solutions. Bertsimas et al. [1] introduce a novel approach for solving continuous non-convex optimization problems to provable optimality, called the Relaxation Perspectification Technique - Branch and Bound (RPT-BB). In this thesis, we extend the RPT-BB approach to the binary, mixed-binary, integer, and mixed-integer variable domains. We outline a novel branch-and-bound algorithm that makes use of the Relaxation Perspectification Technique (RPT), as well as binary, integer, and eigenvector cuts. We demonstrate the performance of this approach on two representative non-convex problems, as well as two real-world non-convex optimization problems, and we benchmark it against BARON and SCIP, two state-of-the-art optimization solvers for non-convex mixed-integer problems. We observe that our algorithm, despite being more general, is able to outperform the state-of-the-art solvers on many problem instances.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing AI Agents for Automated Software Engineering with Palimpzest</title>
<link href="https://hdl.handle.net/1721.1/162708" rel="alternate"/>
<author>
<name>Li, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/162708</id>
<updated>2025-09-19T04:49:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing AI Agents for Automated Software Engineering with Palimpzest
Li, Jason
The deployment of large language models (LLMs) as autonomous agents is transforming the software development landscape. Increasingly, engineers are using natural language agents to expedite and guide development workflows, while large organizations are investing heavily in building agentic systems for tasks such as code generation and code repair. A key challenge in developing such systems is tuning agent hyperparameters: settings that affect performance, such as choice of model, temperature, and context window size. As system complexity grows, the hyperparameter space expands, complicating optimization under real-world compute and time constraints. In this work, we present Palimpzest [1] as an agentic optimizer able to balance cost and performance objectives by tuning agent hyperparameters. We demonstrate that Palimpzest can tune our agent hyperparameters at 8.5 times lower cost and with 24 times greater time efficiency compared to conventional grid search. By integrating our custom-built Debugger and Code Editor Agents as new operators within Palimpzest, we enhance the system’s ability to resolve real-world GitHub issues. To facilitate hyperparameter selection, we also introduce File Coverage, Report Accuracy, and Patch Similarity alongside the traditional SWE-Bench Score as quality evaluation methods used by Palimpzest’s optimization loop. When evaluated on the SWE-Bench Lite [2] benchmark, our optimized system achieves a 15% score at a significantly lower cost compared to previous approaches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Gradient Boosting and Generative Models: Hybrid Approach to Address Class Imbalance and Evaluation Gaps in Real-World Systems</title>
<link href="https://hdl.handle.net/1721.1/162707" rel="alternate"/>
<author>
<name>Lau, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/162707</id>
<updated>2025-09-19T04:49:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Gradient Boosting and Generative Models: Hybrid Approach to Address Class Imbalance and Evaluation Gaps in Real-World Systems
Lau, Mary
Anomaly detection remains a persistent challenge in machine learning due to extreme class imbalance, the high cost of false negatives, and the need to regulate false positives in real-world settings at scale. This thesis introduces Tail-end FPR Max Recall, a business-aware evaluation framework designed for such constrained environments. Using this framework, we benchmark LightGBM, a gradient boosting method known for its computational efficiency and predictive accuracy, on an imbalanced dataset, comparing its performance against standard academic evaluation criteria. Our results demonstrate that Tail-end FPR Max Recall fills critical gaps left by standard academic criteria, providing a more realistic assessment of model performance that aims to maximize recall while enforcing a false positive rate budget. Beyond benchmarking, we propose two strategies that incorporate deep learning methods to augment the already strong performance of gradient boosting: (1) using generative models to produce synthetic minority-class samples that outperform traditional oversampling techniques, and (2) using neural embeddings to improve feature representation for anomaly detection. Together, these contributions offer a methodology for evaluating and improving anomaly detection pipelines in domains where rare, high-impact events must be detected while meeting strict operational demands.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FPGA Based Data Acquisition System for Cryogenic Device Verification</title>
<link href="https://hdl.handle.net/1721.1/162706" rel="alternate"/>
<author>
<name>Kandeh, Stephen</name>
</author>
<id>https://hdl.handle.net/1721.1/162706</id>
<updated>2025-09-19T04:49:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">FPGA Based Data Acquisition System for Cryogenic Device Verification
Kandeh, Stephen
In this work, a system of processors connected to an FPGA is interfaced with a custom analog frontend and used to create a verification environment for cryogenic devices. In particular, this thesis focuses on the technical structure of that system. Current validation efforts often rely on commercially available arbitrary waveform generators (AWGs) and oscilloscopes, which, while highly capable, are often prohibitively expensive and poorly suited for large-scale or parallelized testing environments. As noted in industry reports, scaling such instrumentation introduces significant challenges in cost, calibration, and signal synchronization, making it inefficient for high-resolution or high-speed analyses in multi-channel systems [1]. In contrast, an FPGA provides the necessary performance to increase parallelism without a proportional increase in cost, greatly improving testing resolution and speed. When augmented with a set of processors, the system offers a level of accessibility and automatability not currently present in commercial products. To be clear, while the board was designed with the testing of nanowires in mind (and is not capable of measuring DC voltages), it can still be combined with separate lab equipment to interact with Josephson junction based devices. That said, the flexibility of this system allows for generalized application to any electronic device that demands a specialized testing procedure involving arbitrary signal processing and generation. The money, time, and energy that this system will save on cryogenic electronic validation will significantly accelerate the development of these technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy Efficient Real-time Operating Systems on Chip</title>
<link href="https://hdl.handle.net/1721.1/162705" rel="alternate"/>
<author>
<name>Kang, Ezra H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162705</id>
<updated>2025-09-19T04:49:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Energy Efficient Real-time Operating Systems on Chip
Kang, Ezra H.
Autonomous micro-robots are crucial for several tasks, such as search and rescue, no-knowledge mapping, and navigation. Without an external power connection, these robots are constrained by their on-platform energy capacity. The power consumption of the actuation systems used in micro-robots is within the same order of magnitude as that of the compute system. Thus, the remaining factor in enabling these micro-robots is the design of energy-efficient compute systems. Energy usage of compute systems is typically dominated by memory operations, which previous efforts have attempted to mitigate with memory-efficient software and hardware. These efforts are enabled by the software/hardware interface, which is implemented as an Operating System (OS). However, Operating Systems for energy-efficient platforms have not been fully explored. Current approaches utilize full general-purpose Operating Systems such as Linux, which can incur large memory and compute overhead penalties. These overheads not only consume the typically limited memory resources of energy-efficient systems, but also increase the number of memory accesses and CPU cycles, both of which are significant contributors to energy consumption. To address these concerns, we propose the design of a computationally and memory-efficient Real-time Operating System (RTOS). Our RTOS is designed to minimize both memory footprint and compute cycle overhead. It achieves this primarily through direct physical memory access, cycle-efficient task scheduling, and minimal runtime services to avoid unnecessary processing. Additionally, the modular RTOS kernel includes only the components required by an application in the final binary, reducing code size and memory usage without compromising functionality. The design enables the utilization of energy-efficient hardware accelerators and software, allowing for execution of robotics workloads with minimal memory and cycle overhead.
When comparing robotics algorithms implemented on our proposed RTOS and baseline OSes, our design was able to achieve a 99% reduction in memory footprint. Additionally, it achieved up to a 47% increase in throughput. Thus, our design demonstrates a direct reduction in memory and CPU cycle overhead, which in turn lowers total system memory and energy consumption. The proposed design was demonstrated and verified on a resource constrained system-on-chip on the AMD Virtex Ultrascale+ VCU118 FPGA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty and Generality of Transfer Learning Models in Predicting Signaling History</title>
<link href="https://hdl.handle.net/1721.1/162704" rel="alternate"/>
<author>
<name>Lu, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/162704</id>
<updated>2025-09-19T04:49:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Uncertainty and Generality of Transfer Learning Models in Predicting Signaling History
Lu, Claire
Proper cell-cell communication is essential for multicellular development, from embryogenesis to stem cell differentiation. To map these networks, we developed IRIS (Intracellular Response to Infer Signaling state), a semi-supervised deep learning method that fits conditional variational autoencoders (CVAE) to single-cell RNA sequencing (scRNA-seq) data. IRIS is able to annotate cellular signaling states of individual cells using only their gene expression. Currently, IRIS has been validated in developmental contexts, including gastrulation, early endoderm organogenesis, and mesoderm lineages in mouse embryos. However, its predictions often show extremely high or extremely low confidence, suggesting a need for methods to prevent overconfidence and better account for uncertainty. To generalize IRIS to broader cell-cell communication problems, we combined engineering and experimental approaches, integrating uncertainty quantification techniques with new biological datasets. We implemented three approaches for estimating uncertainty in IRIS predictions: stochastic sampling, Monte Carlo dropout, and ensemble prediction. These approaches were evaluated on two new endoderm and mesenchyme combinatorial perturbation screens. Across all methods, uncertainty values reliably reflected the varying difficulty of predicting different signaling pathways, driven by both biological complexity and dataset representation. Moreover, higher uncertainty was consistently associated with lower prediction accuracy, confirming uncertainty as a useful proxy for model confidence. All three methods identified similar high-uncertainty cell populations, supporting their consistency and validity. By incorporating uncertainty quantification into IRIS, we provide more robust and interpretable predictions that can guide future experiments and enhance the model’s applicability across diverse biological contexts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Core Material Evaluation for Magnetic Energy Harvester Applications</title>
<link href="https://hdl.handle.net/1721.1/162703" rel="alternate"/>
<author>
<name>Le, Khang D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162703</id>
<updated>2025-09-19T04:49:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Core Material Evaluation for Magnetic Energy Harvester Applications
Le, Khang D.
Current transformer magnetic energy harvesters (CTMEHs) harvest magnetic energy from an AC current-carrying conductor and convert this energy into usable electrical energy for various low-power devices, such as sensors and microcontrollers. The amount of power harvested by CTMEHs is determined by the primary current passing through the conductor; however, variables such as the magnetic core’s dimensions, magnetic properties, and turn count also influence performance. Previous works have focused mainly on analytical or numerical modeling of CTMEH behavior or on improving power harvest performance given a specific magnetic core material. Some existing research has compared the effects of different core materials on CTMEH power harvest in a limited fashion, but a comprehensive, comparative study of high-permeability, high-saturation-flux-density CTMEHs has yet to be conducted. This thesis establishes core material as the primary independent variable, along with primary current and frequency during testing, to isolate the effects of magnetic properties on the amount of power a magnetic core can harvest under different current conditions. The thesis concludes that nanocrystalline material excels in lower-current applications, while silicon steel offers better performance in higher-current applications across all frequencies when used in CTMEHs, offering system designers enticing material choices depending on the nature of the application.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliciting Visualization Attitudes with Repertory Grids</title>
<link href="https://hdl.handle.net/1721.1/162702" rel="alternate"/>
<author>
<name>Hua, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/162702</id>
<updated>2025-09-19T04:49:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Eliciting Visualization Attitudes with Repertory Grids
Hua, Dana
Research in public data communication typically focuses on improving the processes of encoding and decoding, answering the question of how to design a visualization to best communicate information to an audience. However, by treating visual communications as simply conduits for information, we ignore an important aspect of how people interact with communications. We ignore the attitudes – the thoughts, feelings, and intentions toward action – a person may form from communicative artifacts based on their personal values and experiences. Recent research has demonstrated that, much as with natural language, readers of visualizations make social attributions: inferences about the identities and characteristics of an artifact’s makers, modes of distribution, and tools of production. In this thesis, I contribute a method to systematically map the visualization attitudes of an individual and the associated ideologies of their sociocultural group by adapting the repertory grid technique from clinical psychology to the context of data visualization. I demonstrate the effectiveness of this mixed-methods approach by eliciting both the attitudes towards a visualization most salient to an individual and the design features of the visualization that inform each attitude. This method offers a new way of exploring the content and latent structure of visualization attitudes, which opens new avenues for socioculturally informed and intervention-driven research in data visualization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing scheduling for stream structured programming for StreamIt</title>
<link href="https://hdl.handle.net/1721.1/162701" rel="alternate"/>
<author>
<name>Dow, Nicholas Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/162701</id>
<updated>2025-09-19T04:49:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing scheduling for stream structured programming for StreamIt
Dow, Nicholas Lee
As straightforward increases in performance on general-purpose CPUs slow down, the shift to application-specific implementations and hardware has accelerated. This shift toward specialization improves performance, but often at the cost of developer productivity in learning new tools. StreamIt is a domain-specific language developed to increase the performance of streaming applications while remaining relatively user-friendly. While designed to be parallelized easily, the scheduling backend of the StreamIt compiler is not adapted to the heterogeneous and distributed nature of new accelerator hardware. This thesis details the design and development of a scheduler interface that enables hardware-customized schedulers to be developed quickly. The scheduler interface allows schedulers to take advantage of the unique compiler optimizations enabled by StreamIt’s structure. Two schedulers, one search-based and another heuristic-based, are built using this interface to schedule StreamIt workloads while optimizing differing metrics such as throughput and latency. Our experiments evaluate the performance of these workloads, and we detail future directions for expanding the interface and the scheduler designs that could take advantage of it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions</title>
<link href="https://hdl.handle.net/1721.1/162700" rel="alternate"/>
<author>
<name>Flynn, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162700</id>
<updated>2025-09-19T04:49:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions
Flynn, John M.
Portable, low-field MRI broadens access and enables numerous new applications, such as point-of-care imaging. Operating outside an RF-shielded room introduces electromagnetic interference (EMI), further degrading a signal-to-noise ratio (SNR) that is already diminished by the lower magnetic fields used in portable imaging. Existing methods to reduce EMI perform well in simple noise environments but can struggle with more complex profiles. Relaxing their linear assumptions is hypothesized to yield more robust mitigation algorithms. A system-wide characterization of SNR challenges was carried out on a rebuilt 800 G scanner, existing techniques were validated, and new signal processing approaches were explored to improve image quality. Various analytical approaches showed promise, including dynamic coils/preamps, averaging methods, calibration, and smoothing methods. Groundwork was laid for learning-based methods throughout the pipeline. This work serves as an important baseline for the numerous experiments necessary for full-system optimization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings</title>
<link href="https://hdl.handle.net/1721.1/162698" rel="alternate"/>
<author>
<name>Goel, Abhinav</name>
</author>
<id>https://hdl.handle.net/1721.1/162698</id>
<updated>2025-12-09T18:18:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings
Goel, Abhinav
The inclusion of symmetries as an inductive bias, known as “equivariance”, often improves generalization on geometric data (e.g., grids, sets, and graphs). However, equivariant architectures are usually highly constrained, designed for pre-chosen symmetries, and cannot be applied to datasets with different symmetries. This work constructs a single model that is simultaneously equivariant to several groups by simply regulating a certain input feature. Starting with a permutation-equivariant base model respecting the full Sₙ symmetry group, we can obtain equivariance to a subgroup G ⊆ Sₙ by using a symmetry-breaking input that is G-symmetric. Under mild conditions, the resulting network is equivariant only to G. Finding an input with automorphism group exactly G is computationally hard, but this can be overcome by relaxing exact symmetry breaking to approximate symmetry breaking, leveraging the notion of 2-closure to derive fast algorithms. This method is validated in symmetry selection, multitask, and transfer learning settings, demonstrating that a single network equivariant to multiple permutation subgroups outperforms both separate equivariant models and a single non-equivariant model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Precision Successive-approximation-register Analog-to-digital Converters for Digital Root-mean-square Calculation</title>
<link href="https://hdl.handle.net/1721.1/162697" rel="alternate"/>
<author>
<name>Choi, Sun Mee</name>
</author>
<id>https://hdl.handle.net/1721.1/162697</id>
<updated>2025-09-19T04:49:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Application of Precision Successive-approximation-register Analog-to-digital Converters for Digital Root-mean-square Calculation
Choi, Sun Mee
The advancement of semiconductor manufacturing processes has made powerful microcontrollers available at lower costs, granting system designers the flexibility to select between analog and digital signal processing techniques. Enabled by recent developments in low-power successive-approximation-register (SAR) analog-to-digital converter (ADC) technology, a digital approach to root-mean-square (RMS) measurement is proposed. The work begins with an explicit accumulation-and-averaging approach, after which a set of improvements is designed to increase measurement accuracy and reliability. Algorithms are compared using the metrics of error, power efficiency, latency, and digital overhead. High-performing and power-efficient digital RMS measurement methods could be valuable for decentralized instrumentation systems, such as smart grids and factory automation, where long-lasting handheld and portable solutions are becoming critical.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hosting LLMs on Shared GPUs</title>
<link href="https://hdl.handle.net/1721.1/162696" rel="alternate"/>
<author>
<name>Choi, Kenneth K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162696</id>
<updated>2025-09-19T04:49:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hosting LLMs on Shared GPUs
Choi, Kenneth K.
Large language models (LLMs) have emerged as powerful tools for a wide array of applications. Serving multiple LLMs on shared GPUs has increasingly gained attention as single providers need to support multiple applications (summarization, chat, code generation), different model versions (A/B testing), and various types of customers. However, multi-model serving is particularly challenging, as static memory partitioning can lead to severe under-utilization, fragmentation, and latency spikes, while dynamic loading of model weights can cause unacceptable downtime due to high model loading overheads. To address these issues, we introduce hierarchical paging, a novel key-value (KV) cache management strategy, and we implement it within the vLLM serving engine. Hierarchical paging organizes GPU memory into a two-level hierarchy: large contiguous memory blocks allocated to individual models, which are then subdivided into smaller blocks that are allocated to different requests issued to that model. Our design enables dynamic memory sharing across models, improving model throughput and overcoming key problems of existing approaches. We detail our implementation and present end-to-end experiments that showcase these throughput improvements under different workloads. We include further evaluations on the runtime overheads of our hierarchical paging implementation, which show that the overheads are insignificant. Most importantly, we demonstrate that hierarchical paging is easy to implement, optimizing for implementation effort and maintainability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation</title>
<link href="https://hdl.handle.net/1721.1/162695" rel="alternate"/>
<author>
<name>Cheng, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162695</id>
<updated>2025-09-19T04:49:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation
Cheng, Emily
Synthesizing realistic tabular data is crucial for many analytical applications, including policy evaluation related to household energy use. However, detailed household-level consumption data, necessary for such evaluation, are scarce at fine geographic scales, as public surveys like the U.S. Residential Energy Consumption Survey (RECS) provide too few observations. We address this gap by developing a topology-guided, diffusion-based generative model that produces realistic synthetic household data; our approach handles two key challenges in this setting: (1) mixed continuous and discrete features and (2) strong hierarchical dependencies among variables. To handle categorical features, we build upon recent advancements in discrete diffusion, particularly TabDDPM [1] and TabDiff [2], which discretize the diffusion process through noise transition matrices, effectively extending diffusion methods to discrete tabular domains. To address hierarchical dependence, we include (i) a structure-aware noise schedule that injects noise from the leaves to the root along an approximate Chow–Liu tree constructed from the variables and (ii) a masked self-attention denoiser that aligns with the same graphical structure. Extensive experiments show that our structured diffusion model outperforms the baseline TabDiff on data with tree-like dependencies, owing to the inductive bias from our structure-aware noise schedule. On data that only approximately follows a tree, such as the RECS dataset, our model maintains competitive performance, slightly outperforming standard diffusion methods. These results highlight the potential for future work to further optimize the tradeoff between structural approximation and estimation accuracy, and to extend the approach beyond the energy domain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Dynamic Treatment Regimes: Collaborative Search and LLM-Driven Decision Trees</title>
<link href="https://hdl.handle.net/1721.1/162694" rel="alternate"/>
<author>
<name>Gregory, Cale</name>
</author>
<id>https://hdl.handle.net/1721.1/162694</id>
<updated>2025-09-19T04:49:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Dynamic Treatment Regimes: Collaborative Search and LLM-Driven Decision Trees
Gregory, Cale
This thesis evaluates the validity of current dynamic treatment regime algorithms and presents a novel data structure for extracting treatment decisions from unstructured clinical notes. The main contribution is the Clinical Decision Tree (CDT), which uses large language models (LLMs) to extract key decisions in chronic disease treatment. This addresses the main pain points of dynamic treatment regimes: low interpretability and, for traditional machine learning methods, reliance on poorly collected data. This work contains extensive experiments on mortality prediction, time series forecasting, and synthetic patient modeling. Experiments show that vital-based representations do not capture enough meaningful data about a patient to accurately predict and evaluate new treatment methods. Using latent embeddings and vector search, experiments show that the collected vitals of patients fail to differentiate the outcomes of related patients. Conversely, clinical notes contain complex and substantial information about clinical decision making, and LLMs enable valuable knowledge extraction from this unstructured data. Utilizing LLMs, experimental results and expert evaluation indicate that CDTs can extract and distill interpretable treatment decisions. Thus, CDTs are a valuable tool that can be refined to increase confidence in treatment decisions and to identify rare and uncommon medical practices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova</title>
<link href="https://hdl.handle.net/1721.1/162693" rel="alternate"/>
<author>
<name>Han, Aileen</name>
</author>
<id>https://hdl.handle.net/1721.1/162693</id>
<updated>2025-09-19T04:49:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova
Han, Aileen
Agent-based modeling is a technique that allows students to reason about and create models of real-life phenomena. However, the programmatic implementations of this technique, such as StarLogo Nova, often introduce “friction”; students may get stuck on the syntactical details of the implementation before being able to engage in the mechanistic thinking behind their models. In order to shift students’ focus towards the goal of understanding the systems they are building, we set out to create an AI-powered assistant for StarLogo Nova that can explain and debug students’ code. After identifying and experimenting with various parameters of AI models in an attempt to improve their performance, we were able to build the StarLogo Turtle Helper, an easily accessible assistant integrated into the platform that can produce accurate responses to StarLogo-related questions. Through this process, we discovered two key properties of these models: first, the method through which these models use provided documentation (called retrieval-augmented generation, or RAG) is quite rudimentary, so any background knowledge should be included in the prompt or the model’s system instructions instead. Second, these models perform best if they are designed to only serve one purpose, so creating multiple models and chaining them together may be the best way to achieve more complex functionality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease</title>
<link href="https://hdl.handle.net/1721.1/162692" rel="alternate"/>
<author>
<name>Li, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162692</id>
<updated>2025-09-19T04:49:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease
Li, Jonathan
This work focuses on the progression from metabolic dysfunction-associated fatty liver to metabolic dysfunction-associated steatohepatitis, a more serious prognosis that can lead to liver failure and death. Additional adverse progressed outcomes include hepatic failure, fibrosis, cirrhosis, and malignant neoplasm of the liver and intrahepatic bile ducts. We explore the possibility of using different machine learning techniques, including logistic regression, XGBoost, random forest, and decision trees, to predict the likelihood of progression. We use data from Mass General Brigham to train our models, incorporating demographics, physical measurements, lab results, and doctor notes. Our best model was an XGBoost classifier with an AUROC of 0.800, with random forest achieving similar performance at 0.786. However, all of our models had low AUPRC and sensitivity, indicating both overfitting and an imbalanced dataset.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Traceability via OTrace Concepts and Implementation</title>
<link href="https://hdl.handle.net/1721.1/162691" rel="alternate"/>
<author>
<name>Farooq, Ashar</name>
</author>
<id>https://hdl.handle.net/1721.1/162691</id>
<updated>2025-09-19T04:49:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data Traceability via OTrace Concepts and Implementation
Farooq, Ashar
Financial transactions are commonplace in the modern world. Consumers make purchases on many e-commerce sites and often use third-party financial services, for example to predict their credit score, to obtain customized budget recommendations, or to find the loan best suited to them. These services often require financial information from the consumer, and it is not always clear to consumers how that information is used. In other words, consumer data are being used without their knowledge and consent. The proposed solution, a traceability protocol called OTrace, aims to mitigate this problem by letting consumers see where their data is and what is being done with it. This work aims to bolster OTrace into a protocol that consumers can actually use as a service and that financial institutions can trust to reveal which third-party financial services hold consumer data. Concretely, it develops a more general specification for a traceable and accountable data-sharing system that layers OTrace on top of OAuth, complemented by a model deployment example. The additions include new OTrace API endpoints corresponding to the updated specification, an entirely new OTrace Web implementation, and an accompanying analysis, advancing data traceability, data privacy, and open banking. A model deployment of an OTrace service on top of an OAuth protocol demonstrates the system in use by various parties, and it can ultimately be scaled up to address unintended data usage and the lack of transparency about where consumer data resides.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equivariant Autoregressive Models for Molecular Generation</title>
<link href="https://hdl.handle.net/1721.1/162690" rel="alternate"/>
<author>
<name>Kim, Song Eun</name>
</author>
<id>https://hdl.handle.net/1721.1/162690</id>
<updated>2025-09-19T04:48:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Equivariant Autoregressive Models for Molecular Generation
Kim, Song Eun
In-silico generation of diverse molecular structures has emerged as a promising method to navigate the complex chemical landscape, with direct applications to inverse material design and drug discovery. However, 3D molecular structure generation comes with several unique challenges; generated structures must be invariant under rotations and translations in 3D space, and must satisfy basic chemical bonding rules. Recently, E(3)-equivariant neural networks that utilize higher-order rotationally-equivariant features have shown improved performance on a wide range of atomistic tasks, including structure generation. Previously, we have developed Symphony, an E(3)-equivariant autoregressive generative model for 3D structures of small molecules. At each sampling iteration, a single focus atom is selected, which is then used to decide on the next atom’s position within its neighborhood. Symphony built on previous autoregressive models by using message-passing with higher-order equivariant features, allowing a novel representation of probability distributions via spherical harmonic signals. Symphony’s performance approached that of state-of-the-art diffusion models while remaining relatively lightweight. However, it continued to face challenges in error accumulation and determining bond lengths, and it was only evaluated against small organic molecules. Here, we expand on Symphony’s capabilities and make it more compatible with larger atomic structures. We add improvements to the embedders, split the radial and angular components when predicting atom positions, and increase the radial cutoff for atomic neighborhoods considered during prediction. We also increase Symphony’s training and inference speeds through a new implementation in PyTorch, making inference nearly 4x faster than previously. In addition, we demonstrate its effectiveness across a variety of tasks, including small molecule and protein backbone generation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions</title>
<link href="https://hdl.handle.net/1721.1/162689" rel="alternate"/>
<author>
<name>Das, Gaurab</name>
</author>
<id>https://hdl.handle.net/1721.1/162689</id>
<updated>2025-12-09T18:09:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions
Das, Gaurab
Although advances in security have strengthened defenses in digital financial systems, attackers increasingly rely on social engineering to achieve their goals. These attacks are difficult to detect and prevent with existing security measures. To address this, we propose Vigilis, a fraud-protected application that employs advanced language models to counter such attacks in calls, texts, and payments. We first collect and make available a corpus of fraudulent calls from the Internet and train lightweight transformer-based models that achieve fraud detection accuracies of up to 94% and 87% on transcript and audio modalities, respectively. We integrate these models into a real-time call system within Vigilis that operates entirely on-device, enabling accurate fraud detection in an efficient and privacy-preserving manner. We then extend Vigilis to incorporate context-aware transaction authentication, where the underlying social context behind a transaction is determined from calls, texts, and browsing history and used to infer the transaction’s validity. By uniquely incorporating social concepts into traditional cybersecurity techniques, we attempt to counter and mitigate issues related to social engineering attacks in financial fraud.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GDSVD: Scalable k-SVD via Gradient Descent</title>
<link href="https://hdl.handle.net/1721.1/162688" rel="alternate"/>
<author>
<name>Gan, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162688</id>
<updated>2025-09-19T04:48:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GDSVD: Scalable k-SVD via Gradient Descent
Gan, Emily
We show that gradient descent with a simple, universal rule for step-size selection provably finds the k-SVD, i.e., the k ≥ 1 largest singular values and corresponding singular vectors, of any matrix, despite nonconvexity. There has been substantial progress toward this in the past few years: existing results establish such guarantees for the exact-parameterized and over-parameterized settings, given an oracle-provided step size. But guarantees for the generic setting, with a step-size selection that does not require oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence in the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, within which it behaves like Heron’s method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of a modern compute infrastructure for iterative optimization, coupled with this work, is likely to provide a means of solving k-SVD for very large matrices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology</title>
<link href="https://hdl.handle.net/1721.1/162687" rel="alternate"/>
<author>
<name>Chen, Tina T.</name>
</author>
<id>https://hdl.handle.net/1721.1/162687</id>
<updated>2025-09-19T04:48:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology
Chen, Tina T.
Transcription is a dynamic process with a multitude of characteristics, including transcript level, burst frequency, amplitude, and variability. Single-cell RNA sequencing data analysis often focuses on comparing transcription levels. However, these analyses capture only a portion of the wealth of information conveyed by transcription. The quantification and analysis of transcriptional variability pose an opportunity to study transcription and gene regulation from a new angle. Transcriptional variability has already been implicated in a number of biological processes, including immune system development and aging. Yet the most appropriate method for measuring transcriptional variability in single-cell data has remained relatively unclear. Here, we simulated single-cell data with varying dispersion and dataset size to assess the relative responsiveness of the Gini index, variance-to-mean ratio, variance, and Shannon entropy to variability in single-cell counts. We found that the variance-to-mean ratio scales approximately linearly with increasing dispersion and that it is scale-invariant. The Gini index displayed paradoxical behavior, and Shannon entropy was not scale-invariant. Thus, we applied the variance-to-mean ratio to measure transcriptional variability in two publicly available datasets studying congenital heart defects in mouse models. We first found that change in transcriptional variability does not correlate with gene characteristics such as transcript level and evolutionary gene age. We also found that using change in transcriptional variability to focus GSEA and TF motif enrichment analyses revealed both genes with known involvement in cardiomyopathy and new genes and pathways as potential targets for future study.
Notably, many of the genes and pathways identified through transcriptional variability analysis were not found by differential expression analysis, suggesting that transcriptional variability can provide additional biologically relevant information beyond what is observed from studying mean expression alone.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform</title>
<link href="https://hdl.handle.net/1721.1/162686" rel="alternate"/>
<author>
<name>Heiberger, Harry G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162686</id>
<updated>2025-09-19T04:48:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform
Heiberger, Harry G.
In recent years, social annotation systems have become a popular and effective tool for hosting collaborative discussions on assigned readings. One such tool created by our lab is NB. Over the last twelve years, hundreds of instructors have incorporated NB within their classes, with over 50,000 students leaving millions of annotations [1]. While feedback for NB has mostly been positive, one major limitation is its difficulty in annotating documents with nested media types. As multimodal forms of learning beyond just text are becoming increasingly common in educational assignments, having the ability to annotate beyond simple text documents would greatly increase the utility of NB in the modern classroom. This work seeks to remedy this issue by expanding the types of documents NB can successfully annotate, specifically focusing on three mixed-media issue types: independently moving text components, image annotation, and video annotation. We will explore the design space of possible implementation strategies for these features and discuss the specific design decisions that were made when adding them to NB. We hope that by increasing the types of documents NB can annotate, we will better fulfill its goal of enhancing student engagement and learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes</title>
<link href="https://hdl.handle.net/1721.1/162685" rel="alternate"/>
<author>
<name>Eppinger, Aria R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162685</id>
<updated>2025-09-19T04:48:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes
Eppinger, Aria R.
Adverse pregnancy outcomes (APOs), such as preeclampsia, fetal growth restriction, and preterm birth, occur in 10-15% of pregnancies. There is limited knowledge of how the cellular states in the placenta and decidua tissues are altered in women with particular APOs or may contribute to APOs. Single-cell RNA sequencing (scRNAseq) approaches have characterized cellular populations and interactions at the maternal-fetal interface using traditional dimensionality-reducing methods such as UMAP-based clustering. However, these techniques may generate limited representations of nuanced cellular functions and biological relationships among and within cell clusters. Pareto Task Inference (ParTI), a dimensionality reduction technique that fits data to an n-dimensional polygon or polytope, models how cells optimize among multiple biological functions and transition between states. We applied ParTI to assess its ability to identify nuanced cellular states and intercellular relationships and to highlight biological mechanisms underlying specific APOs. We analyzed scRNAseq data from 50 whole placental homogenates collected from healthy pregnancies and those complicated by fetal growth restriction (FGR), preterm preeclampsia (PrePET), spontaneous preterm birth (PTB), term preeclampsia or gestational hypertension (TermPET/GHTN), or type 1 diabetes (DM1). ParTI was applied to the dataset with 1) all main cell lineages (B-cells, trophoblasts, stromal, endothelial, Hofbauer, T-NK, and maternal myeloid cells) and 2) syncytiotrophoblasts (SCTs), a sublineage of trophoblasts. Marker gene and gene set enrichment analyses for the ParTI polytope vertices, called archetypes, were performed to assess the biological states associated with the archetypes. We demonstrated that the ParTI polytope can separate both broad cell lineages and sublineages, suggesting that iteratively applying ParTI can serve as an alternative clustering approach when cell-lineage marker genes are previously known.
Additionally, ParTI applied to SCTs separated healthy controls from pregnancies complicated by specific APOs. Gene set enrichment analysis of the cells proximal to the archetypes suggests biological differences in SCTs with specific APOs compared to the controls. Thus, ParTI can identify biological mechanisms underlying specific APOs and be applied to additional datasets to uncover biological relationships among and within cell-type clusters.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal</title>
<link href="https://hdl.handle.net/1721.1/162684" rel="alternate"/>
<author>
<name>Cuevas, Elie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162684</id>
<updated>2025-09-19T04:49:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal
Cuevas, Elie E.
Recursive algorithms are a natural and expressive way to traverse complex data structures, but they often miss opportunities for optimization in modern compiler infrastructures like LLVM. This thesis explores a novel technique that temporarily transforms recursive traversals into synthetic loop-like structures, enabling existing loop-specific optimizations to apply, before transforming them back. By extending Clang’s semantic analysis and implementing a custom LLVM transformation pass, recursive traversals are initially structured into synthetic loops that can benefit from existing loop analyses and optimizations. After these optimizations are applied, the transformation restores the original recursive semantics, preserving program behavior while incorporating performance gains. Evaluation across custom microbenchmarks shows that while general recursive traversals suffer a modest overhead, workloads designed to benefit from specific loop-focused optimizations achieve up to a 30% performance improvement. This demonstrates that even though the approach requires temporarily "misrepresenting" code to the compiler, selective exposure of recursive patterns to loop-based optimization infrastructure is practical and effective. This work establishes a proof-of-concept for compiler transformations that bridge recursion and iteration, paving the way for future systems that better optimize real-world recursive code without sacrificing clarity or maintainability.
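The core idea, modeling a recursive traversal as an explicit loop over a worklist, can be sketched in a few lines. This is an illustrative Python analogy only, not the thesis's Clang/LLVM implementation; all names are invented for the example.

```python
# Sketch: the same binary-tree traversal written recursively and as an
# explicit loop over a worklist. The loop form is what exposes the
# traversal to loop-style analyses; names here are illustrative.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def sum_recursive(node):
    # Natural recursive traversal.
    if node is None:
        return 0
    return node.value + sum_recursive(node.left) + sum_recursive(node.right)

def sum_iterative(node):
    # Equivalent traversal as a loop over an explicit stack.
    total, stack = 0, [node]
    while stack:
        n = stack.pop()
        if n is None:
            continue
        total += n.value
        stack.append(n.left)
        stack.append(n.right)
    return total

tree = Node(1, Node(2, Node(4)), Node(3))
assert sum_recursive(tree) == sum_iterative(tree) == 10
```

Both forms compute the same result; the iterative form simply makes the traversal's repetition structure explicit, which is the property the synthetic-loop transformation exploits.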
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grounding Time Series in Language: Interpretable Reasoning with Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/162683" rel="alternate"/>
<author>
<name>Chen, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/162683</id>
<updated>2025-12-09T18:21:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Grounding Time Series in Language: Interpretable Reasoning with Large Language Models
Chen, Lily
Can large language models (LLMs) classify time-series data by reasoning like a domain expert—if given the right language? We propose a method that expresses statistical time-series features in natural language, enabling LLMs to perform classification with structured, interpretable reasoning. By grounding low-level signal descriptors in semantic context, our approach reframes time-series classification as a language-based reasoning task. We evaluate this method across 23 diverse univariate datasets spanning biomedical, sensor, and human activity domains. Despite requiring no fine-tuning, it achieves competitive accuracy compared to traditional and foundation model baselines. Our method also enables models to generate expert-style justifications, providing interpretable insights into their decision-making process. We present one of the first large-scale analyses of LLM reasoning over statistical time-series features, examining calibration, explanation structure, and reasoning behavior. This work highlights the potential of language-native interfaces for interpretable and trustworthy time-series classification.
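The feature-to-language step can be sketched as follows. The descriptor set and wording here are invented for illustration and are not the thesis's templates.

```python
# Sketch: compute simple statistical descriptors of a series and phrase
# them in natural language, producing a prompt fragment an LLM could
# reason over. Feature choice and phrasing are illustrative only.
import statistics

def describe_series(xs):
    mean = statistics.fmean(xs)
    std = statistics.pstdev(xs)
    trend = xs[-1] - xs[0]
    direction = "rising" if trend > 0 else "falling" if trend < 0 else "flat"
    return (f"The signal has mean {mean:.2f} and standard deviation "
            f"{std:.2f}, and is overall {direction} "
            f"(net change {trend:+.2f}).")

prompt_fragment = describe_series([1.0, 1.5, 2.0, 2.4, 3.1])
```

Such a description grounds the raw numbers in terms an LLM can manipulate symbolically, which is the reframing the abstract describes.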
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>National crop field delineation for the United States</title>
<link href="https://hdl.handle.net/1721.1/162681" rel="alternate"/>
<author>
<name>Chen, Zitong</name>
</author>
<id>https://hdl.handle.net/1721.1/162681</id>
<updated>2025-09-19T04:48:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">National crop field delineation for the United States
Chen, Zitong
Comprehensive and accurate crop field boundary maps are crucial for digital agriculture, land management, and environmental monitoring. However, no high-quality field boundary dataset is publicly available in the United States. This thesis addresses this gap by creating a new, large dataset and training a deep learning model capable of mapping field boundaries. We built a dataset of over 15,000 image-mask pairs using high-resolution National Agriculture Imagery Program (NAIP) aerial imagery and curated field boundary labels. This dataset covers a variety of leading agricultural states and includes images taken at different scales to capture a wide variety of field sizes and layouts. We used this dataset to train an adapted ResUNet++ neural network model designed to segment crop fields. The trained model achieved a pixel-level accuracy of around 0.8, showing it can generally identify field areas well. However, its performance in matching predicted individual field instances with the ground truth instances (measured by mean instance Intersection over Union, or mIoU) was around 0.5. This lower instance score was largely due to the post-processing step, which converts the model’s probability predictions into separate field instances. Despite this, the field polygons produced by our approach are visually coherent with satellite field images and can be readily used with geospatial tools like Google Earth Engine. Our work provides a practical starting point for future research on mapping fields across the contiguous U.S. Potential directions for improvements may involve developing sharper boundary predictions, exploring direct instance segmentation models, refining post-processing methods, and expanding the dataset to include more challenging areas.
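The instance-level metric mentioned above can be sketched as follows: an illustrative mean-IoU computation with greedy matching of predicted to ground-truth instances, not the thesis's exact evaluation code.

```python
# Sketch: IoU between a predicted field instance and a ground-truth
# instance, and a mean over greedy best matches. Instances are modeled
# as sets of pixel coordinates; illustrative only.

def iou(a, b):
    # a, b: sets of (row, col) pixel coordinates for one instance each.
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

def mean_instance_iou(preds, truths):
    # Greedily score each ground-truth instance by its best prediction.
    scores = [max((iou(p, t) for p in preds), default=0.0) for t in truths]
    return sum(scores) / len(scores) if scores else 0.0

truth = [{(0, 0), (0, 1), (1, 0), (1, 1)}]
pred = [{(0, 0), (0, 1)}]
assert mean_instance_iou(pred, truth) == 0.5
```

A prediction covering half the pixels of a field scores 0.5 even if every pixel it claims is correct, which is why over- or under-segmentation in post-processing depresses mIoU while pixel accuracy stays high.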
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex</title>
<link href="https://hdl.handle.net/1721.1/162679" rel="alternate"/>
<author>
<name>Hanly, Bianca Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/162679</id>
<updated>2025-09-19T04:48:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex
Hanly, Bianca Marie
A Self-Interference Canceller (SIC) is the principal component that allows for Simultaneous Transmit And Receive (STAR) in radio broadcasting. Previous research and designs by other groups have resulted in systems that either operate at high powers or are capable of cancellation over a wide bandwidth. This work seeks to build upon previous research in order to design an analog SIC that is capable of both high power (∼100W) and wide instantaneous bandwidth (∼1GHz) cancellation. The system is designed as a vector modulator using off-the-shelf hybrid couplers and switches with a custom variable attenuator designed using PIN diodes in a Waugh attenuator architecture. The system was fabricated on a four-layer PCB and measured with a network analyzer. Simulated results for the variable attenuator and the overall vector modulator are presented.
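As a rough illustration of why fine amplitude and phase control matter in such a canceller, the standard vector-cancellation relation can be evaluated numerically. The values below are illustrative, not measurements from this work.

```python
# Sketch: residual after subtracting a cancellation signal with a given
# amplitude error (dB) and phase error (degrees) from the interference,
# expressed in dB relative to the uncancelled interference. Standard
# vector-sum relation; numbers illustrative.
import math

def cancellation_db(amp_err_db, phase_err_deg):
    a = 10 ** (amp_err_db / 20)            # linear amplitude ratio
    ph = math.radians(phase_err_deg)
    residual = math.sqrt(1 + a**2 - 2 * a * math.cos(ph))
    return 20 * math.log10(residual)       # dB relative to interference

# Tighter amplitude/phase match yields deeper (more negative) cancellation.
assert cancellation_db(0.1, 1.0) < cancellation_db(1.0, 10.0)
```

Holding errors this small across ∼1 GHz of instantaneous bandwidth is what makes the broadband attenuator design the critical element.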
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Semantic SLAM on a Mobile Manipulator System</title>
<link href="https://hdl.handle.net/1721.1/162678" rel="alternate"/>
<author>
<name>Francis, Zachary R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162678</id>
<updated>2025-09-19T04:48:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Implementation of Semantic SLAM on a Mobile Manipulator System
Francis, Zachary R.
In the field of robotics, the development of household robots capable of performing everyday tasks continues to be a major area of research and practical interest. Many domestic chores—such as picking up and moving objects from one location to another—have been successfully performed by stationary robotic manipulators paired with visual perception systems. However, accomplishing more complex, varied, and spatially distributed tasks in real-world home environments requires a mobile platform with a more human-like form factor. These tasks demand greater flexibility, spatial awareness, and interaction capabilities than fixed systems can typically provide. This work focuses on the RBY1 robot from Rainbow Robotics, a humanoid platform designed to support advanced manipulation and mobility. A range of tools and modules were developed to enhance its functionality, including software for semantic perception, task execution, and environment interaction. This thesis provides a technical overview of these tools, highlighting their roles in collecting new datasets that can be used for semantic SLAM research. In the future, these tools can enable the robot to operate more effectively in domestic settings, towards the ultimate goal of enabling more capable home-assistive robots.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Introductory Computer Science Students’ Programming Process When Using a Generative AI Tutor (PyTutor)</title>
<link href="https://hdl.handle.net/1721.1/162677" rel="alternate"/>
<author>
<name>Cunningham, Caroline K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162677</id>
<updated>2025-09-19T04:48:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving Introductory Computer Science Students’ Programming Process When Using a Generative AI Tutor (PyTutor)
Cunningham, Caroline K.
This thesis examined students’ programming process while using PyTutor, a generative AI tutor for introductory computer science students. It addressed three research questions: (1) How does the process of test case creation, with or without PyTutor’s Test Case Runner, impact students’ programming process while using PyTutor? (2) How can prompt engineering of PyTutor’s system prompt be leveraged to improve AI Chat response quality with respect to (a) reducing the amount of code revealed in the answer, (b) improving the conciseness of responses, and (c) having the AI Chat give the student test cases as a tool to understand code correctness? (3) How do PyTutor’s responses from the updated prompt affect the programming process for computer science students? A key finding from a focus group in the first stage (n=9), apart from findings on test cases, was that the majority of participants who asked PyTutor questions received at least three lines of code in response, which is not ideal for PyTutor’s pedagogical purpose. This discovery motivated the next phase of the thesis, prompt engineering PyTutor, which resulted in an updated prompt. Responses from both the updated prompt and the original prompt were scored using an evaluation rubric. For the rubric’s Students thinking through problem category, the distribution of points for responses from the updated prompt was statistically significantly greater than that for responses from the original prompt. Finally, participants were asked to solve a programming problem using either PyTutor with the updated prompt (n=10) or PyTutor with the original prompt (n=2). Across the focus groups from the first and final stages, fewer participants who used PyTutor with the updated prompt received at least three lines of code, and those participants required a greater number of messages before first receiving three lines of code. 
Additionally, all four participants who received at least three lines of code from PyTutor with the updated prompt asked mostly high-level questions. Because participant feedback suggested that PyTutor’s responses to high-level questions could be repetitive, this points to a new direction: improving PyTutor’s responses to high-level questions to benefit students’ programming process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A set-theoretic approach to state estimation.</title>
<link href="https://hdl.handle.net/1721.1/162614" rel="alternate"/>
<author>
<name>Hnyilicza, Esteban.</name>
</author>
<id>https://hdl.handle.net/1721.1/162614</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">A set-theoretic approach to state estimation.
Hnyilicza, Esteban.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaves 112-113.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High Precision Binary Trait Association on Phylogenetic Trees</title>
<link href="https://hdl.handle.net/1721.1/162565" rel="alternate"/>
<author>
<name>Balogun, Ishaq O.</name>
</author>
<id>https://hdl.handle.net/1721.1/162565</id>
<updated>2025-08-28T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High Precision Binary Trait Association on Phylogenetic Trees
Balogun, Ishaq O.
Understanding how genetic variation drives microbial phenotypes is fundamental to advancing microbiology, particularly in pathogenicity, drug resistance, and host adaptation. Traditional genome-wide association study (GWAS) methods fail to account for shared evolutionary history, confounding association analyses. Microbial GWAS approaches emerged to address this, but modern methods often lack the statistical power to detect associations while controlling false discoveries, and face computational limits at scale. Here, we present SimPhyNI (Simulation-based Phylogenetic iNteraction Inference), a computational framework for detecting binary trait-trait associations in microbial populations. &#13;
&#13;
SimPhyNI uses stochastic simulations of trait evolution on phylogenetic trees to detect positive and negative associations with high precision and recall. Benchmarking on large synthetic datasets, SimPhyNI achieved a precision-recall AUC (PR AUC) of 0.987 and 0.975 for positive and negative interactions, respectively, indicating near-perfect discrimination of true from neutral associations. Competing methods showed substantially lower performance, especially for negative associations. We further applied SimPhyNI to empirical datasets, recovering known biology and generating plausible hypotheses for novel mechanisms. &#13;
&#13;
Though tested here on binary traits, SimPhyNI’s design supports future extension to multi-state and continuous traits using generalized models. Its high recall also makes it well-suited for constructing gene interaction networks and identifying co-evolving trait modules. By combining evolutionary modeling with scalable statistics, SimPhyNI advances our ability to uncover the genetic interactions that drive microbial function, ecology, and disease.
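The simulation-based null idea can be sketched on a toy tree: evolve two binary traits independently along the branches and use the resulting co-occurrence counts as a null distribution. The tree, flip rates, and statistic below are illustrative, not SimPhyNI's model.

```python
# Sketch: independent evolution of two binary traits on a small fixed
# tree. Observed co-occurrence at the leaves can then be compared to
# this simulated null. All details illustrative only.
import random

TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAVES = ["a1", "a2", "b1", "b2"]

def evolve(node, state, flip_p, rng, out):
    # Walk the tree, flipping the trait along each branch with prob flip_p.
    if node in TREE:
        for child in TREE[node]:
            child_state = state ^ (rng.random() < flip_p)
            evolve(child, child_state, flip_p, rng, out)
    else:
        out[node] = state

def simulate_cooccurrence(flip_p=0.3, runs=2000, seed=1):
    rng = random.Random(seed)
    counts = []
    for _ in range(runs):
        t1, t2 = {}, {}
        evolve("root", 0, flip_p, rng, t1)
        evolve("root", 0, flip_p, rng, t2)
        counts.append(sum(t1[leaf] and t2[leaf] for leaf in LEAVES))
    return counts

null = simulate_cooccurrence()
```

An observed co-occurrence count far in the tail of `null` would indicate a positive or negative trait-trait interaction beyond what shared phylogeny alone produces.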
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/162564" rel="alternate"/>
<author>
<name>Rajan, Neena E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162564</id>
<updated>2025-08-28T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices
Rajan, Neena E.
The medical device industry, governed by a tight regulatory landscape, often relies heavily on structured Product Development Processes (PDPs) to bring innovative solutions to market. These structured processes create significant challenges when integrating technological innovations that emerge in the later stages of the development cycle. This study explores the complexities of this "innovation paradox" within large United States-based medical device corporations, examining how the rigidity of traditional PDP models affects the incorporation of innovative changes to in-flight projects. Drawing upon insights from a comprehensive literature review and a quantitative analysis utilizing a Monte Carlo simulation, this research highlights the impact of integrating an innovative change on the overall project timeline and cost. The simulation results show that introducing an innovative change to the PDP typically extends the project timeline and increases total net present cost, with the magnitude of both effects depending on the timing of the change and its technological maturity. Introducing changes in later project phases significantly increases both duration and cost compared to earlier phases. Changes with lower technological maturity lead to greater duration and cost escalations, especially when introduced late in the development cycle. To balance regulatory requirements and PDP agility, large medical device companies can adopt hybrid PDP models, establish dedicated innovation assessment teams, create flexible product designs, and focus on value-driven innovations that meet patient and market needs.
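The kind of Monte Carlo comparison described can be sketched as follows. The distributions and the change-timing penalty are invented for illustration; they are not the study's model.

```python
# Sketch: sample total project durations with an innovative change
# introduced in an early vs. a late phase. A later change multiplies
# the rework draw more heavily. Distributions illustrative only.
import random

def simulate_duration(change_phase, runs=2000, seed=0):
    # change_phase: 0 (earliest) .. 3 (latest) of a 4-phase project.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        base = sum(rng.triangular(4, 10, 6) for _ in range(4))  # 4 phases, months
        rework = rng.triangular(1, 6, 2) * (1 + change_phase)   # later = worse
        total += base + rework
    return total / runs

early, late = simulate_duration(0), simulate_duration(3)
assert late > early
```

Even this toy model reproduces the qualitative finding: the same change costs more schedule when injected late, because rework compounds over completed phases.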
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Savaal: A system for automatically generating high-quality questions from unseen documents</title>
<link href="https://hdl.handle.net/1721.1/162563" rel="alternate"/>
<author>
<name>Chandler, Joseph A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162563</id>
<updated>2025-08-28T03:07:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Savaal: A system for automatically generating high-quality questions from unseen documents
Chandler, Joseph A.
Assessing human understanding through exams and quizzes is fundamental to learning and advancement in both educational and professional settings. However, current solutions to automate the generation of challenging questions from educational materials and documents are insufficient, resulting in superficial or often irrelevant questions. While LLMs have been shown to excel in tasks like question answering, their use for question generation remains underexplored for general domains and at scale. This work presents Savaal, a scalable question-generation system that generates higher-order questions from documents, as well as a real-world system implementation for general use. Savaal accomplishes the following goals: (i) scalability, generating hundreds of questions from any document; (ii) depth of understanding, synthesizing higher-order concepts to test learners’ understanding of the material; and (iii) domain independence, generalizing broadly to any field. Rather than naively providing the entire document in context to an LLM, Savaal breaks down the process of generating questions into a three-stage pipeline. We demonstrate that Savaal outperforms the direct prompting baseline as evaluated by 76 human experts on 71 documents across conference papers and PhD dissertations. We additionally contribute a general system for serving Savaal in real-world scenarios. We demonstrate that our system is scalable, enabling fault-tolerant and horizontal scaling of each individual component in response to fluctuations in usage. Moreover, our architecture enables interactive usage from users and collaboration in groups, reflecting real-world organizations like classrooms or enterprises. We hope that the system enables scalable question generation for educational and corporate use cases.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation</title>
<link href="https://hdl.handle.net/1721.1/162562" rel="alternate"/>
<author>
<name>Terakado, Daiki</name>
</author>
<id>https://hdl.handle.net/1721.1/162562</id>
<updated>2025-08-28T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation
Terakado, Daiki
This thesis presents a new integrated framework for evaluating in-space refueling architectures, focusing on their application to human space missions such as Artemis. The framework tightly couples vehicle sizing with a boil-off control model, allowing the evaluation of various combinations of propellant types, refueling locations, and boil-off control. The model captures the dynamic interdependence between the components of the refueling system, the transport vehicle, the refueler, and the depot, using an iterative approach to ensure consistent mass estimates across configurations.&#13;
&#13;
The framework is applied to analyze human landing system (HLS) architectures with refueling in cis-lunar space. The key findings highlight the mass-savings benefits of cryocoolers, the benefit of the high Isp of LOX/LH2, the suitability of NRHO refueling for acceptable ΔV requirements, and the positive and negative effects of reusability on mass and mission time. Furthermore, the study indicates that the number of required refueling events is more sensitive to payload and refueler capacity than to boil-off losses.&#13;
&#13;
To extend the framework toward long-term, scalable transportation solutions, the thesis compiles a comprehensive set of figures of merit (FoMs) and discusses future model extensions including risks, ISRU, and electric propulsion. Limitations, such as the lack of reusable-configuration flexibility and insufficient support for Mars mission parameters, are identified as areas for future development. This work provides a foundational framework for the exploration of refueling architecture and solid next steps to design sustainable and scalable human space transportation systems.
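The iterative sizing idea, in which propellant mass and dry mass depend on each other and are resolved by fixed-point iteration, can be sketched as follows. All constants are illustrative and are not the thesis's model.

```python
# Sketch: propellant mass follows the rocket equation for the total
# vehicle mass, while dry mass scales with propellant (tankage), so the
# sizing loop iterates to a consistent fixed point. Values illustrative.
import math

def size_vehicle(payload, dv, isp, dry_frac=0.1, tol=1e-6):
    g0 = 9.81
    mass_ratio = math.exp(dv / (isp * g0))   # rocket equation
    prop = 0.0
    for _ in range(200):
        dry = dry_frac * prop                # tankage grows with propellant
        prop_new = (payload + dry) * (mass_ratio - 1)
        if abs(prop_new - prop) < tol:
            break
        prop = prop_new
    return prop

prop_mass = size_vehicle(payload=10_000, dv=3200, isp=450)
```

Because each added kilogram of propellant adds tankage that itself needs propellant, the converged mass exceeds the naive single-pass estimate; the same feedback is what couples the transport vehicle, refueler, and depot sizing in the framework.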
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedded Software-Defined Radio Architectures for 6G Cellular Networks</title>
<link href="https://hdl.handle.net/1721.1/162561" rel="alternate"/>
<author>
<name>Urbonas, Jonas</name>
</author>
<id>https://hdl.handle.net/1721.1/162561</id>
<updated>2025-08-28T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Embedded Software-Defined Radio Architectures for 6G Cellular Networks
Urbonas, Jonas
Over the past decades, the widespread adoption of wireless communication technologies in the industrial, scientific, medical, defense, and commercial sectors has resulted in substantial advancements in digital radio technologies. Each new generation of cellular technology, beginning with 1G, has introduced novel use-case scenarios that have challenged the performance of the prevailing digital radio architectures. The newly proposed scenarios for 5G-Advanced and for the upcoming 6G cellular networks, due to be standardized by 2030, are no exception. The emerging 6G network components, such as the space-air-ground integrated cell-less networks, as well as the artificial intelligence-native network architecture, drive the demand for flexible and fully reconfigurable radio units supporting multi-GHz instantaneous signal bandwidths, frequency agile radio architectures covering multi-octave frequency ranges, and highly sensitive receivers.&#13;
&#13;
To support these requirements, software-defined radios (SDR) are becoming an essential building block of next-generation radio networks. This thesis presents a review of software-defined radio technology, examines its history, proposes the requirements of SDR units for 6G cellular networks, and presents a quantitative performance analysis of over 2 million distinct SDR architectures that could be used in 6G communication networks. It does so by defining the key system architectural decisions and their options, including the data converter, filter, mixer, and amplifier technologies. It also examines different radio transmitter and receiver architectural topologies, including baseband sampling, IF sampling, direct RF sampling, and fully digital RFSoC, and constructs a multi-attribute utility (MAU) to quantify the system performance. The MAU is used to build a tradespace of SDR architectures, enabling the identification of the Pareto frontier. Analysis of SDR system architectures on the Pareto frontier reveals that the performance of direct RF sampling SDR architectures is highly competitive with industry-standard IF sampling. The tradespace is also used to analyze the sensitivity of system performance to individual architectural decisions via a main-effect analysis, allowing quantification of connectivity and sensitivity of available architectural decisions. Sensitivity analysis reveals that system performance is highly sensitive to receiver architectural decisions, particularly analog-to-digital converters, indicating the need for continued advances in this technology to produce high-performance SDR systems.
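The Pareto-frontier identification step can be sketched with a minimal utility-versus-cost example. The scoring is illustrative and is not the thesis's MAU model.

```python
# Sketch: keep every architecture not dominated by another, where
# "dominated" means some other point has utility at least as high AND
# cost at least as low. Candidate tuples are invented for illustration.

def pareto_frontier(points):
    # points: list of (utility, cost) tuples; higher utility and lower
    # cost are better.
    frontier = []
    for u, c in points:
        dominated = any(u2 >= u and c2 <= c and (u2, c2) != (u, c)
                        for u2, c2 in points)
        if not dominated:
            frontier.append((u, c))
    return frontier

archs = [(0.9, 100), (0.8, 60), (0.7, 80), (0.6, 40)]
assert pareto_frontier(archs) == [(0.9, 100), (0.8, 60), (0.6, 40)]
```

The (0.7, 80) candidate drops out because (0.8, 60) beats it on both axes; the survivors trace the utility-cost trade curve the tradespace analysis explores.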
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Design of Architected Lattices for Construction Applications</title>
<link href="https://hdl.handle.net/1721.1/162560" rel="alternate"/>
<author>
<name>Leamon, Sophie</name>
</author>
<id>https://hdl.handle.net/1721.1/162560</id>
<updated>2025-08-28T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Design of Architected Lattices for Construction Applications
Leamon, Sophie
Architected lattices have been utilized in aerospace and research applications for their modularity, scalability, reconfigurability, and high strength-to-weight properties. However, voxels have yet to find widespread integration in the residential or commercial construction industry because of the industry’s distinct system needs. This study identifies the pain points unique to the construction industry that have slowed or prevented the adoption of new practices, highlighting reliance on known materials and methods, and the transparency of the design process, as major hurdles to the adoption of innovation in the industry. This study presents a computational approach to designing architected lattices that seeks to address these core issues by making building with architected lattice structures agnostic to material and manufacturing methodology. Three open source computational approaches to architectural design are proposed: 1) integration of support structures for additively manufactured structures; 2) parametric design of voxels from 2D material, their manufacturing molds, and optional alignment features; and 3) generation of two-dimensional cut files for assembly with 3D printable joinery. These files are computationally designed and arranged for instantaneous production to demystify the lattice architectural design process, establish a pathway for utilizing all available materials in lattice construction, reduce the overhead costs for experimentation with lattice structures, and eliminate barriers to the fabrication process by enabling accessible manufacturing methods.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations</title>
<link href="https://hdl.handle.net/1721.1/162559" rel="alternate"/>
<author>
<name>Delkowski, Michal</name>
</author>
<id>https://hdl.handle.net/1721.1/162559</id>
<updated>2025-08-28T03:07:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations
Delkowski, Michal
This thesis examines the strategic, technical, and economic feasibility of China’s two flagship low Earth orbit (LEO) satellite megaconstellation programs, Guowang and Qianfan, in the context of the rapidly evolving global satellite communication (Satcom) market. Against the backdrop of SpaceX’s Starlink dominance and intensifying geopolitical competition, China’s efforts represent not only a telecommunications infrastructure push but also a broader assertion of technological sovereignty and global influence. This study uses a scenario-based analysis that integrates system throughput analysis and financial forecasting. Three deployment scenarios (base, optimistic, and pessimistic) are analyzed, accounting for satellite production rates, launch capabilities, and regional adoption patterns, particularly across Belt and Road Initiative (BRI) markets. The study also evaluates "system-of-systems" integration with China’s military objectives, and spectrum coordination challenges. Key findings reveal that Guowang becomes marginally viable only in the optimistic scenario, assuming deployment of at least 9,000 satellites, reduced satellite unit costs (targeting ~$300,000 per satellite), expanded gateway infrastructure, and realization of these targets by 2035, while remaining unviable in base and pessimistic cases. Qianfan faces greater commercial risk, achieving viability only with early adoption in BRI countries and government dual-use contracts, incurring a pessimistic-case NPV loss exceeding $76B. Resource allocation problem (RAP) modeling suggests that projected throughput may saturate early without major gateway expansion. Both constellations require China to scale reusable rockets and sustain a combined annual launch rate exceeding 1,000 satellites by the early 2030s. Neither constellation system meets China’s 2030 rural broadband targets under base-case conditions; over 40% of the 336M unconnected citizens remain underserved without terminal subsidies. 
Ultimately, China’s LEO Satcom strategy depends not on satellite count alone but on coordinated progress in launch economics, affordability, dual-use policy, and international partnerships.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of CPG budgets in Retailer-led marketing programs</title>
<link href="https://hdl.handle.net/1721.1/162558" rel="alternate"/>
<author>
<name>Gandhi, Abhinav</name>
</author>
<id>https://hdl.handle.net/1721.1/162558</id>
<updated>2025-08-28T03:07:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization of CPG budgets in Retailer-led marketing programs
Gandhi, Abhinav
Grocery retailers and Consumer Packaged Goods (CPG) companies have a symbiotic relationship. Retailers need CPGs to supply the products, and CPGs need retailers’ customers to grow their brands. Since shelf space is limited, CPGs offer trade and marketing funds to prominently feature their brands.&#13;
As part of loyalty programs, retailers offer coupons to customers that are often funded by CPGs. In return, CPGs expect a return on their investment (ROI). Since budgets are limited and are also expected to be utilized, it becomes a challenge for the retailer to find the right size of mailer that balances costs and relevance to customers. This thesis explores how knapsack problems can be used in a non-adaptive setting to help maximize the reach of print and email campaigns.&#13;
Seeking inspiration from existing literature, multiple simulations were set up to evaluate budget-constrained allocation and compare two approaches, the multiple-choice Knapsack (MCK) and a greedy algorithm. Considering uncertainty in redemption, the Newsvendor model was also explored to review the possibility of over-allocation to improve budget utilization and increase reach. The preliminary analysis findings offer promising results and provide a setting for further research.
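The greedy baseline can be sketched as follows, with hypothetical coupon data; the multiple-choice knapsack (MCK) formulation itself is not shown.

```python
# Sketch: greedy budget-constrained coupon allocation, ranking offers by
# expected value per unit cost and taking them while the budget allows.
# Coupon names, costs, and values are invented for illustration.

def greedy_allocate(coupons, budget):
    # coupons: list of (name, cost, expected_value) tuples.
    chosen, spent = [], 0.0
    for name, cost, value in sorted(coupons, key=lambda c: c[2] / c[1],
                                    reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

coupons = [("cereal", 1.0, 3.0), ("soda", 2.0, 4.0), ("snack", 1.5, 1.5)]
picked, spent = greedy_allocate(coupons, budget=3.0)
assert picked == ["cereal", "soda"] and spent == 3.0
```

Greedy is fast but can leave budget stranded; the MCK formulation, which additionally picks at most one offer per customer-category group, is one way to close that gap in the simulations.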
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of design strategies and optimization for efficient mass timber structures as a function of column position</title>
<link href="https://hdl.handle.net/1721.1/162557" rel="alternate"/>
<author>
<name>Gerken, Christoph</name>
</author>
<id>https://hdl.handle.net/1721.1/162557</id>
<updated>2025-08-28T03:07:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploration of design strategies and optimization for efficient mass timber structures as a function of column position
Gerken, Christoph
The building sector is responsible for a large share of global carbon emissions. As the load-bearing structure is particularly material-intensive, a decisive shift can be achieved by improving its design and decreasing its volume. This thesis examines how structural mass timber floor systems can be designed in an efficient, low-waste manner through a design-oriented approach that is immediately applicable within the context of conventional construction techniques and building practices. Reducing material in timber structures has economic and ecological benefits: reduced timber demand entails significant cost savings and decreased building weight, which considerably cuts embodied carbon.&#13;
Since common floor systems mainly act in bending, this work focuses on the reduction of moment forces in standard setups comprised of timber slabs, beams, and columns. In principle, bending forces in beams and slabs can be reduced by moving the supports inwards, leading to overhanging structural elements. The original method presented in this thesis shows how this approach applies to conventional mass timber floor systems. This work provides an understanding of how informed column positioning can take advantage of this behavior and allows for material and embodied carbon reduction through design. The consequent architectural implications of the resulting irregular column grid are explored in a floor plan design suggestion.&#13;
Material demand and embodied carbon are evaluated as a function of column position through finite element analysis and optimization as part of a computational model. By consulting a mass timber manufacturer’s catalogue to assign appropriate products to structural members, this approach enables material reduction in the design process rather than in production. Bypassing slow-changing, inert fabrication procedures, this method can be adopted immediately.&#13;
This work identifies the optimal support position for reducing bending forces in beams and slabs as 41% of the distance from the element’s edge to its midspan. Furthermore, this research finds that the impact of ideal column position on material efficiency depends on required minimum effective spans. While negligible in the absence of constraints, informed column positioning can reduce timber demand by 20% and embodied carbon by 16% when subjected to a minimum effective span requirement of 6 m – a common span in timber construction – in a building of 30 × 30 m and five floors. Building dimensions are found to have an insignificant impact on these results.&#13;
This thesis illustrates the potential for architects and engineers to enhance structural efficiency of mass timber floor systems merely by deviating from the usual, regular column grid and taking advantage of straightforward structural principles through design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for the Condition Assessment of&#13;
 Concrete Bridges</title>
<link href="https://hdl.handle.net/1721.1/162556" rel="alternate"/>
<author>
<name>Fayad, Fred</name>
</author>
<id>https://hdl.handle.net/1721.1/162556</id>
<updated>2025-08-28T03:07:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning for the Condition Assessment of&#13;
 Concrete Bridges
Fayad, Fred
The assessment of concrete bridge conditions is critical for ensuring structural integrity and public safety. Traditional inspection methods, which rely heavily on visual inspections and manual assessments, are time-consuming, subjective, and prone to human error. With the increasing number of aging bridges worldwide, there is a growing need for more efficient and accurate methods to assess bridge health. This thesis aims to explore the application of machine learning techniques for automating the bridge condition assessment process and improving the accuracy and reliability of bridge evaluations.&#13;
 This study investigates the development and implementation of a model consisting of two machine learning algorithms to predict the condition of concrete bridges based on data collected from various public sources. The first algorithm appraises the structural health of a bridge based on its bridge rating, and the second assesses the condition of a bridge after a specific failure mechanism. Specifically, this work focuses on classification algorithms such as Random Forest (RF), XGBoost, and Neural Networks (NN) in both algorithms.&#13;
 The results of this study demonstrate that machine learning models can predict bridge conditions with reasonable performance. The overall model achieved a testing accuracy of 79%. This research contributes to the field of civil engineering by showcasing the potential of machine learning in infrastructure management. By automating the assessment process, the proposed models can help reduce the time and cost of inspections while providing more accurate data to guide maintenance planning and bridge rehabilitation efforts. Future work will focus on further optimizing the models, incorporating additional data sources, and deploying the system for real-time bridge monitoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics</title>
<link href="https://hdl.handle.net/1721.1/162555" rel="alternate"/>
<author>
<name>Van Note, Lana</name>
</author>
<id>https://hdl.handle.net/1721.1/162555</id>
<updated>2025-08-28T03:08:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics
Van Note, Lana
Nutrient cycling is an important component of plants’ immune systems, largely driven by the act of exuding environmentally influential metabolites from roots. Root exudation may be driven by multiple unique mass-transport mechanisms, including active and passive transport types, though the latter is not well studied despite being labelled a significant driver of low molecular weight metabolite exudation. This research investigates the generally accepted assumption that low molecular weight metabolites, including iron-fixing coumarins (scopoletin, fraxetin, etc.), are primarily exuded passively, while high molecular weight metabolites follow an active exudation approach. Scopoletin and scopolin exudation from Arabidopsis thaliana in low-iron and replete conditions is quantified to determine whether the hypothesized passive diffusion mechanism is a significant contributor to coumarin exudation. LC-MS analysis suggests that passive diffusion of scopoletin and scopolin from roots plays a significant role in total coumarin exudation. Further research should investigate the implications of passive coumarin exudation for long-term iron storage and soil health, in addition to the relationship between coumarin production and exudation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Offshore floating solar with compressed air storage as a&#13;
baseload power plant for a data center</title>
<link href="https://hdl.handle.net/1721.1/162554" rel="alternate"/>
<author>
<name>Athanasopoulos, Panagiotis Rafail</name>
</author>
<id>https://hdl.handle.net/1721.1/162554</id>
<updated>2025-08-28T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Offshore floating solar with compressed air storage as a&#13;
baseload power plant for a data center
Athanasopoulos, Panagiotis Rafail
This thesis presents the conceptual design, technical modeling, and economic analysis of a novel offshore floating solar energy system integrated with Compressed Air Energy Storage (CAES) for reliable baseload power delivery to coastal data centers. The system architecture is modular, consisting of multiple “powercells,” each comprising a 5×5 photovoltaic (PV) array mounted above a matrix of submerged compressed air storage cylinders anchored below the floating platform, addressing the energy resilience and spatial constraints of coastal computing infrastructure. This scalable configuration enables distributed energy collection and localized storage, tailored to meet site-specific demands. Detailed thermodynamic modeling of both charging and discharging cycles is conducted, with analytical solutions validated against a full numerical implementation. Results show that under realistic operating assumptions, the temperature inside the storage vessels remains nearly isothermal due to the long charging duration and large heat exchange surface, enabling a simplified energy balance model.&#13;
&#13;
A techno-economic analysis evaluates both structural steel requirements and photovoltaic investment, benchmarked against market data from 2024. Key metrics such as structural cost per unit energy ($/kWh) and per rated power output ($/kW) are derived. The hybrid system is found to be economically competitive with lithium-ion (Li-ion) battery alternatives, offering extended lifespan (20–30 years), lower material costs, and enhanced sustainability through avoidance of critical minerals. Environmental and mooring considerations for offshore deployment are also addressed, demonstrating the feasibility of integrating energy generation, storage, and maritime infrastructure. This work advances the development of resilient, decarbonized energy systems aligned with global renewable energy targets and the rising demand for sustainable data center operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis</title>
<link href="https://hdl.handle.net/1721.1/162552" rel="alternate"/>
<author>
<name>Brower, Braden C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162552</id>
<updated>2025-08-28T03:08:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis
Brower, Braden C.
United States Navy Refueling and Complex Overhauls (RCOHs), and other extended maintenance availabilities, present uniquely demanding environments where Sailors face elevated risks for destructive behaviors, including suicide and substance abuse. Prolonged exposure to harsh industrial conditions, significantly degraded Quality of Service, demanding workloads, and critical manning shortfalls create cumulative stress distinct from operational duty. These destructive behaviors severely impact personnel’s well-being, erode force readiness through attrition and morale issues, and indicate systemic contributing factors as highlighted by recent investigations into carrier suicides during shipyard periods.&#13;
&#13;
This thesis utilizes Causal Analysis based on Systems Theory (CAST), grounded in systems thinking, to analyze the USS George Washington RCOH events and identify the underlying safety control structure flaws that contributed to this hazardous environment. Insights from the CAST analysis were then integrated with a qualitative System Dynamics model to better understand the feedback loops and dynamic interactions driving system behavior, particularly revealing a capability trap dynamic exacerbated by resource constraints and personnel pressures.&#13;
&#13;
The analysis identified critical, interacting systemic flaws across multiple organizational levels that contributed to the accident: (a) inadequate strategic resourcing and manning prioritization for RCOH personnel support, (b) deficient planning, risk management, and oversight processes that were ineffective at protecting Sailor well-being amidst budget and schedule pressures, (c) ineffective feedback mechanisms that prevented critical information from reaching decision-makers, and (d) reliance on flawed assumptions regarding the RCOH environment, Sailor resilience, and standard process adequacy. Based on these findings, the thesis provides actionable, systemically focused recommendations aimed at strengthening the Navy's safety control structure by improving decision makers’ mental models, enhancing feedback and oversight, enforcing well-being constraints, and fostering organizational learning. Combined, these recommendations empower leaders to proactively manage risks, reduce destructive behaviors, and ensure a safer, more resilient environment during future RCOHs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers</title>
<link href="https://hdl.handle.net/1721.1/162551" rel="alternate"/>
<author>
<name>Hoyt, Thomas S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162551</id>
<updated>2025-08-28T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers
Hoyt, Thomas S.
Flooding events pose a significant and growing threat to communities in the United States, particularly as climate change alters weather patterns and sea levels continue to rise. This thesis examines how the U.S. Army Corps of Engineers (USACE) can enhance community preparedness for flood emergencies through improved risk communication strategies. Focusing on the New England District as a representative case, it integrates data from the Federal Emergency Management Agency’s (FEMA) National Household Survey and the National Flood Insurance Program (NFIP) claims archive to develop and calibrate a System Dynamics model of flood risk perception and preparedness.&#13;
The model built in this thesis incorporates key variables and captures the feedback loops that influence community preparedness over time. Scenario testing demonstrates that monthly to quarterly engagements by USACE help sustain risk awareness and reduce flood-related damage, whereas less frequent engagement yields minimal improvement over the baseline. By contrast, barriers to action, such as complex procedures or limited access to information, can substantially slow the adoption of preparedness measures. High levels of trust in authorities further amplify the effectiveness of risk communication and foster community engagement.&#13;
This model quantifies the importance of frequent engagement, low barriers to action, and trust-building initiatives in reducing flood impact. Through calibration against historical claims and survey data, the model provides a robust framework that can guide USACE and partner agencies in refining their own flood risk communication strategies to bolster community resilience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement</title>
<link href="https://hdl.handle.net/1721.1/162550" rel="alternate"/>
<author>
<name>Stribos, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/162550</id>
<updated>2025-08-28T03:08:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement
Stribos, Sophia
Concrete remains one of the most widely used construction materials due to its strength, durability, and availability. However, it is responsible for a large share of global carbon emissions: within the roughly 40% of global emissions attributed to the building sector, the production of cement, a key component of concrete, alone accounts for 5–8%. As the construction industry seeks innovations towards sustainable practices, alternative beam designs that improve material efficiency and introduce nontraditional reinforcement systems are emerging as promising options. However, accurate structural models capable of predicting and validating the performance of these innovative beams are often lacking, limiting their implementation in the industry, primarily due to safety and code compliance concerns.&#13;
This thesis bridges this gap by developing and validating a structural engineering model to predict the shear and flexural capacities and the deflection of irregular, efficiently shaped concrete beams, including those with alternative reinforcement and formwork. The model discretizes a 3D beam geometry into 2D sections to perform a geometric and structural cross-sectional analysis along the beam’s length. The structural engineering model is applied to two case studies: a topology-optimized steel-reinforced concrete beam and an integrated knit textile reinforced concrete beam, using experimentally measured material properties and beam testing data. The predicted engineering model results are compared against experimental data to validate the model’s accuracy.&#13;
While the model accurately captured the behavior of the topology-optimized steel-reinforced beam, it slightly overestimated the strength of the knit-textile reinforced beam. For the topology-optimized beam, the engineering model aligned closely in flexural capacity and gave slightly conservative estimates of shear and deflection due to the nature of the design equations. For the integrated knit textile beam, however, the model showed a minor overprediction of flexural capacity and deflection. Discrepancies in this model were linked to inaccurate material properties, experimental imperfections, and prestressing effects. To ensure accuracy and reliability, additional beam analysis using this model is needed.&#13;
This research advances structural design by offering a tool for predicting the capacity and serviceability of irregular, efficiently shaped concrete beams, including those with alternative reinforcement. This thesis enables designers to validate and optimize their innovative beam designs and support their ideas as sustainable solutions within the concrete construction industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Decarbonization Pathways of Japan</title>
<link href="https://hdl.handle.net/1721.1/162549" rel="alternate"/>
<author>
<name>Suto, Sadami</name>
</author>
<id>https://hdl.handle.net/1721.1/162549</id>
<updated>2025-08-28T03:08:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Assessment of Decarbonization Pathways of Japan
Suto, Sadami
Developing realistic pathways for decarbonization is crucial for the success of climate change mitigation actions. To evaluate Japan’s pathways toward achieving carbon neutrality, this study enhances the MIT Economic Projection and Policy Analysis (EPPA) model and analyzes a suite of policy scenarios that combine domestic mitigation measures such as emissions targets from the updated Japan’s Nationally Determined Contribution (NDC), power mix goals, and availability of carbon capture and storage (CCS) with international emissions trading. The impacts on CO₂ emissions, GDP, consumption, carbon prices, and sectoral output in Japan between 2030 and 2050 are assessed.&#13;
&#13;
Under the baseline scenario, emissions remain flat over time at about 1,000 MtCO₂e, far exceeding the carbon neutrality goal. Even when Japan’s 2030 and 2040 NDC CO₂ targets and power mix goals are fully achieved, residual emissions of 100–200 MtCO₂e remain, which calls for carbon offsets. Relying on domestic-only measures is costly for Japan. In high-ambition domestic-only scenarios without CCS, carbon prices soar to over $46,000/tCO₂ by 2050, leading to GDP losses exceeding $1.5 trillion (23% of GDP) and significant contractions in key sectors of the economy.&#13;
&#13;
In contrast, scenarios incorporating international emissions trading enable Japan to achieve comparable total emissions reductions by partially relying on imported carbon credits. This mechanism significantly lowers marginal abatement costs, allowing carbon prices to stabilize at $20–$30/tCO₂ and reducing GDP losses to about $100 billion (1.6% of GDP) by 2050.&#13;
&#13;
Scenarios that emphasize domestic reductions while flexibly using international credits emerge as manageable pathways. These scenarios achieve domestic emissions reductions of 40–60% by 2050, with carbon prices ranging from $140 to $340/tCO₂ and GDP losses contained between $150 and $290 billion (2.3–4.3% of GDP). Importantly, these scenarios incorporate the deployment of CCS, which plays a critical role in reducing marginal costs and enabling deeper abatement in hard-to-decarbonize sectors. Most industrial sectors maintain stable output, while carbon-intensive sectors undergo gradual structural transitions.&#13;
&#13;
Overall, these findings suggest that Japan can achieve carbon neutrality through an integrated strategy that combines strengthened domestic action, technological deployment, and international cooperation. This study provides a robust quantitative foundation for designing feasible, equitable, and cost-effective climate policies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience</title>
<link href="https://hdl.handle.net/1721.1/162548" rel="alternate"/>
<author>
<name>Ren, Daisy</name>
</author>
<id>https://hdl.handle.net/1721.1/162548</id>
<updated>2025-08-28T03:08:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience
Ren, Daisy
Due to the rise in global traffic in recent years, bridge failures due to impact effects are becoming an increasing concern, especially for aging infrastructure. Following the recent collapse of the Francis Scott Key Bridge, issues regarding bridge vulnerabilities and design deficiencies arose, highlighting the need for better design codes and protection for bridge piers. This study addresses these issues by examining bridges' impact-related structural failure mechanisms and developing a comprehensive optimization framework to enhance the resilience of structures to dynamic impact forces in three phases: (i) statistical analysis of bridge failure data from the Multidisciplinary Center for Earthquake Engineering Research (MCEER), focusing on the frequency, bridge types, and bridge material trends associated with different bridge failures across the United States; (ii) development of a compliance-based truss optimization in MATLAB, applied to 2D representations of pier structures for different truss configurations (2×3, 3×4, 3×5) under stress, load, and volume constraints to simulate large-magnitude impact conditions; and (iii) design and validation of the optimization results through mathematical calculations of compliance and strain energy to ensure consistency between numerical results and structural mechanics principles. Both fail-safe and shape optimization strategies are employed and compared across all truss configurations, revealing distinct design methodologies between maximum and minimum compliance optimizations and the trade-offs between stiffness and energy dissipation. Maximum compliance optimization designs demonstrate increased redundancy and strain energy capacity, while minimum compliance designs show increased efficiency but are more prone to brittle failure.
The final study utilizing volume constraints further examined material distribution under realistic impact loads and highlighted the importance of distributed load paths and deformation capacity in structural performance. This work provides a design framework for energy-absorbing pier geometries and aims to offer insight into improving current design standards for pier designs to account for extreme events and help guide retrofitting efforts that could prevent future failures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computing Economic Equilibria and Their Applications to&#13;
Market Games</title>
<link href="https://hdl.handle.net/1721.1/162547" rel="alternate"/>
<author>
<name>Bruce, Samuel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162547</id>
<updated>2025-08-28T03:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computing Economic Equilibria and Their Applications to&#13;
Market Games
Bruce, Samuel G.
The emergence of new technologies such as e-payments, tokenized assets, distributed ledgers, smart contracts, and encryption has created new opportunities for improving access and equity in financial institutions. These new tools can be used to build better infrastructure and improve economic efficiency, especially in previously underdeveloped countries. Using these tools in various applications, however, requires an intimate link between economics and computer science to ensure an implementation that is both computationally efficient and improves social welfare. There has been significant research in the field of computer science concerning the computation of economic equilibria, specifically Nash Equilibria and Correlated Equilibria. These algorithms, however, have not been used in many financial applications. Further, while research exists on various methods of computation for Correlated Equilibria, little exploration has been done evaluating the quality of these equilibria in terms of economic efficiency in specific mechanisms. This work provides a sweeping view of the existing literature on equilibrium computation as well as an analysis of the economic and algorithmic tradeoffs of different approaches. The discussion begins with simple 2-player, finite-action games, then moves to more complex machine learning-based methods for equilibrium computation in difficult settings. One of these methods is then extended to a limit-order market game explicitly described by Dubey [1] and implemented, with small modifications, by SPEEDEX [2]. This limit-order game offers a continuous, vector-valued action space with complex payoff functions, causing tension with many of the equilibrium computation algorithms explored previously. This paper identifies these tensions, then offers modifications to the algorithms which allow tractable, welfare-improving approximate Coarse Correlated Equilibrium computation.
Finally, there is a discussion of future work which aims to generalize the developed framework. The code corresponding to the equilibrium computation will be released publicly in this repository [3].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads</title>
<link href="https://hdl.handle.net/1721.1/162546" rel="alternate"/>
<author>
<name>Chang, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162546</id>
<updated>2025-08-28T03:08:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads
Chang, Ryan
SigmaOS is a multi-tenant cloud operating system designed for efficient orchestration of fault-tolerant, burst-parallel workloads. It provides users with isolated cloud environments called realms, where resources are accessed through a Unix-like filesystem interface, and supports applications built from procs—lightweight, rapidly-spawnable programs that can be either short-lived for bursty tasks or long-running and stateful for persistent services. However, the current prototype exhibits performance bottlenecks that hinder its scalability for larger, more demanding applications. This thesis addresses these limitations by introducing two key optimizations: (1) a rearchitected watch API, enhancing its efficiency and scalability for monitoring directory changes crucial for inter-proc coordination and event notification, and (2) a new ft/task server, providing a robust and high-performance mechanism for managing fault-tolerant bags of tasks, essential for applications like MapReduce. Through these enhancements, this work demonstrates significant improvements in SigmaOS’s performance on the MapReduce benchmark, showcasing improved scaling capabilities for larger cluster deployments, larger inputs, and more granular tasks. These optimizations are crucial steps towards enabling SigmaOS to effectively realize its vision as a scalable and performant platform for complex cloud workloads.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes</title>
<link href="https://hdl.handle.net/1721.1/162543" rel="alternate"/>
<author>
<name>Gomez, Samuel John</name>
</author>
<id>https://hdl.handle.net/1721.1/162543</id>
<updated>2025-08-28T03:07:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes
Gomez, Samuel John
When faced with complex disturbances, continuous manufacturing processes require robust control and adaptability to maintain product quality and operational efficiency. Although advanced control strategies such as linear quadratic regulator, model predictive control, and adaptive control have demonstrated strong performance, many industrial processes still rely predominantly on classical proportional-integral-derivative (PID) controllers because of their simplicity, ease of implementation, and sufficient results.&#13;
&#13;
This thesis investigates the effectiveness of data-driven modeling techniques in capturing system dynamics more accurately than traditional physics-based models. It further examines the use of a high-fidelity digital twin, constructed from experimental data via linear system identification and nonlinear deep learning (NARX) approaches, to optimize PID controller parameters through simulation-based gradient descent methods.&#13;
&#13;
A comprehensive experimental platform was developed to collect synchronized sensor and video data from a roll-to-roll continuous manufacturing system, specifically targeting disturbance scenarios that cause process interruptions. The digital twin created from these data was validated against physical experiments and shown to outperform conventional physics-based models when predicting the system’s dynamic response to disturbance inputs.&#13;
&#13;
Optimal control of the system was explored by implementing a virtual PID controller that closely replicates the physical controller. Optimal gain settings were identified through simulation and applied to the physical manufacturing process. The experimental results showed a significant reduction in the mean squared error and the maximum web deviation. These results demonstrate the substantial potential of digital twin-driven, data-centric control approaches in enhancing resilience, efficiency, and adaptability in manufacturing processes. This research also lays the foundation for the future development of real-time, adaptive, and autonomous control strategies in industrial applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse</title>
<link href="https://hdl.handle.net/1721.1/162542" rel="alternate"/>
<author>
<name>Maruyama, Shun</name>
</author>
<id>https://hdl.handle.net/1721.1/162542</id>
<updated>2025-08-28T03:08:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse
Maruyama, Shun
This paper analyzes Japan’s economic and technological history since the Meiji Restoration through the framework of Power and Progress proposed by Acemoglu and Johnson (2023), focusing on the concepts of direction of technology and productivity bandwagon. A historical review reveals that technological progress and the distribution of its benefits were not determined solely by market mechanisms or technological inevitability, but were shaped by the power dynamics among governments, companies, workers, and others. Periods when workers held strong bargaining power and inclusive social institutions were in place saw the emergence of a virtuous cycle, in which the direction of technology moved toward broad-based innovation and the productivity bandwagon functioned effectively. Conversely, after the collapse of the bubble economy, a shift in the power balance in favor of companies led to a rise in short-term cost-cutting, resulting in a divergence from inclusiveness and innovation in the direction of technology, as well as a breakdown of the productivity bandwagon. This ultimately undermined Japan’s ability to leverage the strengths of its production system and led to a decline in technological capabilities. Currently, a new wave of technological innovation centered on AI is emerging. However, its impact remains heavily dependent on existing employment practices and corporate behavior models, making a short-term shift in direction unlikely. In the medium-to-long term, however, societal will and collective action may create an opportunity to rebuild a virtuous cycle. This paper proposes action guidelines for companies, workers, and the government, and argues that realizing true prosperity from technological progress requires reassessing existing power structures and actively choosing new pathways as a society.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog</title>
<link href="https://hdl.handle.net/1721.1/162541" rel="alternate"/>
<author>
<name>Chan, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/162541</id>
<updated>2025-08-28T03:08:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog
Chan, Martin
The Language Server Protocol (LSP) and popularity of VS Code have facilitated the current ubiquity of smart code editing features like hover or goto-definition. These features are powered by language servers, which are programs that perform compiler-like functions at keystroke latency on potentially incomplete code. Mainstream languages like Rust or Python have the large userbases to motivate the creation of bespoke language servers like Rust Analyzer or Pylance. However, smaller languages like Bluespec SystemVerilog, used in computer architecture classes at MIT, often need to make do without a language server. As students come to expect smart code editing features, they may miss the convenience when working with languages like Bluespec. In this thesis, we present a Bluespec Language Server forked from Rust Analyzer. This involved adapting the Rust Analyzer parser, HIR, and other internals to work for Bluespec SystemVerilog. The resulting artifact supports the full suite of typical smart editing features for classroom-grade Bluespec projects and continues to mostly work for industrial-grade projects. We discuss the many changes and challenges required to adapt this language server to work for a different language than it was designed for. Further, to address the current gap in the literature covering language server implementation, we include thorough discussion of the overall system architecture and several important subsystems with significant overlap with Rust Analyzer's internals. Finally, we conclude with a discussion of current limitations of our language server and directions for future work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout</title>
<link href="https://hdl.handle.net/1721.1/162540" rel="alternate"/>
<author>
<name>Andrade, Marco A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162540</id>
<updated>2025-08-28T03:07:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout
Andrade, Marco A.
Hydrogen gas (H₂) is considered a promising source of environmentally friendly and sustainable energy of benefit for global decarbonization. However, given the flammable and explosive nature of H₂, highly sensitive and selective detection systems with fast response are needed to enable leakage monitoring to ensure safe deployment and use. To address this need, we propose a microelectromechanical (MEMS) platform for H₂ sensing with the aim of achieving sub-1-ppm sensitivity. Our platform employs a MEMS structure that has H₂-responsive palladium (Pd) features. Once exposed to H₂, the Pd lattice expands as H₂ diffuses into it. This results in the structural deflection of a mechanically-mobile feature, in particular a cantilever. This deflection is measured using piezoresistors, which are embedded in the cantilever using a spin-on glass doping process. Piezoresistors enable rapid high-accuracy detection and quantification of H₂, as will be shown in this thesis through a combination of modeling, sensor development, sensor fabrication, and basic experimental characterization. In this thesis, we have successfully developed a fabrication plan, demonstrated the two key aspects of our fabrication, namely beam release and piezoresistor fabrication, shown beam bending driven by absorption of hydrogen by palladium, and shown that our piezoresistors respond to beam bending. Our physical results match our theoretical predictions for a beam of size 100 µm by 20 µm and a resistor with resistance 115 kΩ fabricated on SOI chips. This beam could be used to detect H₂ below 1 ppm.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool</title>
<link href="https://hdl.handle.net/1721.1/162539" rel="alternate"/>
<author>
<name>Dale, William</name>
</author>
<id>https://hdl.handle.net/1721.1/162539</id>
<updated>2025-08-28T03:07:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool
Dale, William
The convergence of artificial intelligence and entrepreneurship education has opened a novel frontier in pedagogical innovation. The deployment of Orbit—a bespoke generative AI tool—within MIT’s 15.390 entrepreneurship course, which follows the structured Disciplined Entrepreneurship framework, is examined through a System-of-Systems perspective. This approach reveals how the tool functions not as an isolated feature but as an integrated element within a multifaceted educational ecosystem. Drawing on quantitative usage data across three consecutive academic semesters (Spring 2024-Spring 2025) complemented by course evaluation metrics, our mixed-methods approach reveals the multidimensional impact of AI-enhanced entrepreneurial education. The findings demonstrate that Orbit, particularly in its refined v2 iteration, functions as a powerful External Enabler that significantly reduces both the opacity and agency-intensity inherent in complex entrepreneurial frameworks. This enabling function manifested through measurable increases in student adoption, idea generation, and iterative engagement with critical DE steps. Beyond efficiency gains, we identify a substantive Transformation of Learning where students developed distinctly different engagement patterns—characterized by increased iteration, greater willingness to tackle complex entrepreneurial challenges, and enhanced overall course experiences. This transformation appears to deepen rather than merely accelerate learning, as evidenced by improved course evaluations alongside increased time investment in coursework. However, our analysis reveals that this transformation operates within the constraints of what we term AI’s "Jagged Frontier"—an uneven landscape of capabilities leading to differentiated impacts across DE tasks and student segments. The evolution from Orbit v1 to v2 underscores how thoughtful system design and curriculum integration critically influence the effectiveness of educational AI tools. 
This research contributes a nuanced understanding of how specialized AI tools can enhance entrepreneurship education while highlighting that their benefits depend on deliberate design choices, strategic pedagogical integration, and recognition of current technological limitations. The SoS framework proves instrumental in capturing these emergent dynamics, offering valuable insights for educational technologists, entrepreneurship educators, and institutions navigating the AI-enhanced learning landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band</title>
<link href="https://hdl.handle.net/1721.1/162538" rel="alternate"/>
<author>
<name>Alsehali, Mohammed S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162538</id>
<updated>2025-08-28T03:07:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band
Alsehali, Mohammed S.
This thesis presents a system design framework for evaluating spectrum management architectures enabling co-primary access in the 37 GHz band. Motivated by increasing demand for mid-band and mmWave spectrum, and recent policy directions for federal-commercial sharing, this research investigates the trade-offs between utilization efficiency, coordination overhead, and interference performance across thousands of feasible spectrum management systems.&#13;
&#13;
Using a morphological matrix, eight key architectural decisions were defined, including coordination topology, licensing mechanism, frequency planning, sensing mode, and access priority. A parametric event-driven simulation model was developed in Python to evaluate 2,808 valid architectures under low, medium, and high spectrum demand scenarios. The performance metrics, Spectrum Utilization Efficiency (SUE), Coordination Index (Cindex), and Blocking Probability (BP), were used to generate multi-dimensional tradespaces and identify Pareto-optimal solutions.&#13;
&#13;
Results indicate that semi-dynamic spectrum management systems with decentralized or hybrid coordination topologies consistently dominate the Pareto frontier across all demand levels. Compared to fully dynamic systems, semi-dynamic designs achieve 80–90% of the utilization efficiency at less than 50% of the coordination cost. &#13;
&#13;
The results validate key hypotheses about performance trade-offs and offer actionable insights for regulators and system designers. This thesis recommends semi-dynamic, co-primary frameworks for initial 37 GHz implementation and proposes future research directions, including agent-based modeling, economic behavior integration, and accurate physics modeling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology</title>
<link href="https://hdl.handle.net/1721.1/162537" rel="alternate"/>
<author>
<name>Jezewska, Martyna</name>
</author>
<id>https://hdl.handle.net/1721.1/162537</id>
<updated>2025-08-28T03:07:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology
Jezewska, Martyna
The Mayo Clinic, a renowned non-profit organization, has long been at the forefront of healthcare innovation. This thesis explores the implementation of digital pathology within the Mayo Clinic, focusing on its potential to enhance diagnostic accuracy, increase efficiency, enable remote collaboration, and ultimately improve patient care. By leveraging the Architecting Innovative Enterprise Strategy (ARIES) framework, this research provides a comprehensive analysis of the socio-technical aspects of digital pathology implementation. The study begins with a literature review on innovation and its application in healthcare,&#13;
followed by an in-depth case study of the Mayo Clinic's journey with digital pathology. Key findings highlight the importance of organizational design, stakeholder engagement, and continuous improvement in successfully integrating digital pathology into existing healthcare systems. The research concludes with recommendations for future innovations and insights on how healthcare institutions can better prepare for and adapt to disruptive technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration</title>
<link href="https://hdl.handle.net/1721.1/162536" rel="alternate"/>
<author>
<name>Suresh, Nithyaharini</name>
</author>
<id>https://hdl.handle.net/1721.1/162536</id>
<updated>2025-08-28T03:07:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration
Suresh, Nithyaharini
The rapid increase in wind energy deployment is critical to achieving net-zero carbon emissions in the United States. However, conventional Horizontal Axis Wind Turbines (HAWTs) face deployment constraints due to their large spatial requirements, stemming from both their physical size and the turbine spacing needed to accommodate wake interference. Their large footprint makes them impractical to deploy in densely populated and restricted areas, such as military zones and urban regions. This limitation results in the underutilization of available wind resources, limiting wind energy’s full potential. To overcome these constraints, Vertical Axis Wind Turbines (VAWTs) offer a spatially compact alternative, enabling deployment in space-constrained areas. This study investigates the feasibility of VAWTs as a complementary wind technology by integrating them into a renewable energy siting optimization framework. The framework considers HAWTs, Solar Photovoltaics (PV), and battery storage within the New England region, assuming a 100% decarbonized power system. The model minimizes total system costs to assess VAWTs under varying capital expenditures and land-use restrictions. A novel feature of this study is the use of a land availability cutoff and land restriction cases, introduced to realistically mimic the real-world land use constraints that influence wind turbine siting. The land availability cutoff defines the minimum usable area within a parcel for it to be considered for HAWT and Solar PV deployment, given their larger spatial footprint. Parcels below this cutoff are excluded from those technologies and considered only for VAWTs, representing constrained regions. This methodology offers a more granular modeling of spatial constraints for renewable energy siting and allows for a realistic assessment of VAWT feasibility. 
Results indicate that, at current commercial costs, VAWTs are less competitive than HAWTs and solar PV, primarily due to their early stage of technology development and their significantly higher CAPEX, which is approximately ten times that of HAWTs. Even at hypothetical utility-scale costs, where VAWT costs fall within the range of $1,300–$1,500/kW, the model still preferentially selects HAWTs due to their higher capacity factors. However, when the model considers different land use restriction cases for VAWT technology, as compared to HAWTs and Solar PV, VAWTs become significantly more viable. VAWT placement becomes notable in these cases, increasing their share in the energy mix by 2.61% to 10.32% under favorable conditions. At high levels of land availability on a per-parcel scale, specifically when more than 70% of the land identified as technically suitable remains available for any deployment, high-quality sites with favorable wind resources and high capacity factors continue to support HAWTs as the dominant technology given their lower Levelized Cost of Energy (LCOE). However, when the land availability cutoff increases beyond 70%, reducing siting opportunities for HAWTs and solar PV, reliance shifts toward VAWTs, amplifying the impact of their higher LCOE on overall system costs and making cost differentials between technologies more critical. These findings emphasize that while CAPEX reductions are critical to scaling VAWTs and improving their competitiveness, land-use policies and spatial constraints are the primary determinants of deployment feasibility. The study highlights the need for targeted policy intervention toward flexible siting policies and continued research to optimize VAWT deployment strategies, ultimately enhancing wind energy integration in land-constrained regions within New England and maximizing wind resource potential.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Targeted Codon Optimization and Translation with Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/162535" rel="alternate"/>
<author>
<name>Chemparathy, Anugrah</name>
</author>
<id>https://hdl.handle.net/1721.1/162535</id>
<updated>2025-08-28T03:07:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Targeted Codon Optimization and Translation with Deep Learning
Chemparathy, Anugrah
Codon optimization—the task of recoding a protein’s underlying DNA sequence to maximize expression in a target organism—is a complicated biological optimization problem. Each gene brings a dynamic combination of local and long-range dependencies along with globally imposed constraints specific to the organism. While most existing tools for systematic codon optimization are restricted to optimizing under the constraint of a fixed amino acid sequence, recent architectural advancements in deep learning have made it possible to introduce partial modifications to the amino acid sequence without affecting protein function during the codon optimization process. Such approaches greatly increase the search space of feasible sequences, potentially opening up pathways to previously unconsidered DNA sequences with significantly greater expression rates. In this thesis, we seek to understand and improve the inverse-folding codon optimization model CodonMPNN, the behavior and performance of which have not yet been fully evaluated. We present a detailed empirical evaluation of CodonMPNN, characterizing its performance across reconstruction and translation tasks and demonstrating that it captures higher-order codon usage patterns. We produce evidence that CodonMPNN’s training has successfully captured nontrivial aspects of the codon distribution for 1000 unique organisms, and are able to better characterize the optimal tasks that CodonMPNN’s non-synonymous nature may be able to solve. Then, through a combination of improved pretraining and a new inference-time evolutionary algorithm, we are able to modestly improve the base performance of CodonMPNN from its original publication. Together, these contributions yield a measurable improvement in CodonMPNN’s practical performance and provide actionable guidance for its application in constrained codon design. 
More broadly, this work highlights the importance of application-aware evaluation when deploying machine learning models in synthetic biology and motivates the design of future architectures that are better aligned with real-world usage constraints.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The First Signs of Vision</title>
<link href="https://hdl.handle.net/1721.1/162534" rel="alternate"/>
<author>
<name>Chang, Cathy</name>
</author>
<id>https://hdl.handle.net/1721.1/162534</id>
<updated>2025-08-28T03:07:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The First Signs of Vision
Chang, Cathy
There has been extensive research on the evolution of eyes through the lens of biology; however, there has been a distinct lack of research in simulating what animals saw as their eyes evolved. This project aims to create interactive simulations of the evolution of animal vision from the Cambrian Explosion to the present day through the use of extended reality (XR) environments. Our goal is to communicate and educate about the evolutionary timescale to help our audience understand 1) the history of vision and intelligence and 2) how vision came to be and why it is the way it is. In addition, we want to bridge the gap between technology and vision research to help people better understand and visualize this evolutionary process. We have also collaborated with the Museum of Science and the MIT Museum to display this work in events at their venues.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind</title>
<link href="https://hdl.handle.net/1721.1/162533" rel="alternate"/>
<author>
<name>Bentley, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/162533</id>
<updated>2025-08-28T03:07:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind
Bentley, Sarah
Generative models have rapidly advanced in their ability to produce diverse, high-quality outputs. Yet their practical utility often falls short: users frequently struggle to guide models toward desired outputs, even when the model is capable of producing those outputs. This thesis argues that unlocking the full potential of generative AI requires not only improving what models can produce (producibility), but also how effectively users can guide them toward producible outputs (steerability). In short, how can we make the entire producible sets of generative models easily accessible to humans? Our contributions are fourfold. First, we formally define steerability and introduce a framework for evaluating it independently of producibility. Second, we instantiate this framework through benchmarks on the steerability of text-to-image and language models. We find that not only is steerability poor, but steering doesn’t reliably improve with more attempts. Third, we propose a framework for designing and optimizing steering mechanisms – tools that help users articulate and achieve their goals with models – and introduce Reinforcement Learning for Human Steering (RLHS) to systematically optimize these mechanisms. Finally, we instantiate this framework in a new steering mechanism for image generation that enables users to steer via images rather than text prompts. This mechanism achieves over 2x improvement over traditional text-based prompting on our benchmark. Our mathematical framework provides a generalizable path forward for measuring and improving the steerability of generative models, while our implementations of that framework empirically demonstrate its utility and viability. Overall, we define a new axis – steerability – upon which we can vastly improve generative models not only as tools for automation, but as bicycles for the mind.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA</title>
<link href="https://hdl.handle.net/1721.1/162531" rel="alternate"/>
<author>
<name>Suzuki, Wataru</name>
</author>
<id>https://hdl.handle.net/1721.1/162531</id>
<updated>2025-08-28T03:07:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA
Suzuki, Wataru
In Japan, the Tokaido Shinkansen, a major high-speed rail corridor, plans to introduce Grade of Automation 2 (GoA2) through Semi-Automatic Train Operation (STO). While partial automation promises advantages such as reduced driver workload and enhanced efficiency, it also creates new risks due to increasingly complex interactions among automated control systems, human operators, and physical infrastructure.&#13;
This thesis aims to systematically identify and address potential hazards arising from STO in high-speed rail. By using the Tokaido Shinkansen’s announced plan as a model case, the research seeks to uncover scenarios in which normal, non-failed system behaviors can still lead to unsafe outcomes, and to propose design solutions that mitigate those risks early in development. To achieve this, the study applies Systems-Theoretic Process Analysis (STPA). Rather than isolating hardware and function failures, STPA models the entire system as a hierarchical control structure, examining each controller’s possible unsafe actions and their feedback pathways. &#13;
The analysis reveals hazard scenarios that traditional failure-based methods might overlook. Examples include cases where a passenger is not detected between the train and platform doors at departure, or where verbal and signal instructions conflict and delay the driver’s response. These scenarios can happen even without any component failure. Drawing on these insights, the thesis recommends a variety of design improvements, such as new monitoring functions for subsystems, modifying instruction interfaces, and strengthening the software logic of automation systems.&#13;
These findings demonstrate the value of conducting a holistic safety analysis using STPA at the conceptual design stage, before late-stage changes become more expensive. Moreover, this research provides a comprehensive, system-level railway hazard analysis, and the proposed measures can be broadly applicable to high-speed rail systems with automation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps</title>
<link href="https://hdl.handle.net/1721.1/162530" rel="alternate"/>
<author>
<name>Taylor, Benjamin F.</name>
</author>
<id>https://hdl.handle.net/1721.1/162530</id>
<updated>2025-08-28T03:07:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps
Taylor, Benjamin F.
The efficient generation and transfer of energy in the golf swing has long been a subject of biomechanical interest, with a particular focus on the concept of the kinematic sequence, which is the coordinated segmental rotation of the pelvis, torso, arms, and club.  While previous studies have modeled aspects of this sequence using high-end laboratory setups or proprietary systems, few have provided open, quantifiable, and time-resolved measurements of angular kinematics across the full swing cycle.  This thesis seeks to address this gap by implementing a markerless temporal skeletal tracking approach built on the open-source MeTRAbs computer vision framework to model and measure joint angles and angular velocities throughout the golf swing.  Using two-dimensional video footage of right-handed golfers performing driver swings, the MeTRAbs pose estimation model and supplemental cross-frame temporal motion sequencing code were used to reconstruct three-dimensional joint trajectories and compute rotational kinematics of key body segments.&#13;
This study demonstrates the feasibility of using markerless pose estimation to extract golf swing signatures and angular velocity profiles without requiring expensive or inaccessible motion capture equipment. Preliminary analysis suggests that joint coordination patterns and temporal characteristics of body segment angular velocities may reveal quantifiable insights into the kinematic sequence, laying the groundwork for further research and instructional applications. Ultimately, this thesis contributes a replicable and cost-effective framework for analyzing golf swing biomechanics using open-source tools and computer vision.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunities in Advanced Wireless Integrated Circuits</title>
<link href="https://hdl.handle.net/1721.1/162529" rel="alternate"/>
<author>
<name>Fareed, Mo</name>
</author>
<id>https://hdl.handle.net/1721.1/162529</id>
<updated>2025-08-28T03:07:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Opportunities in Advanced Wireless Integrated Circuits
Fareed, Mo
The continued evolution of wireless communications, novel compact radars, and power electronics has driven demand for high-performance semiconductor materials capable of operating with higher power density, faster switching speeds, and improved efficiency. Gallium Nitride (GaN) has emerged as a leading candidate due to its superior electrical properties compared to traditional silicon (Si), silicon carbide (SiC), and gallium arsenide (GaAs). GaN’s high power density, thermal stability, and high-frequency operation make it an ideal candidate for applications in 5G/6G infrastructure, satellite communications, defense radar, electric vehicles, and power electronics. However, widespread commercial adoption of GaN faces significant barriers, including high production costs, supply chain constraints, and integration challenges within existing silicon-based fabrication processes.&#13;
&#13;
This thesis explores the opportunities and challenges associated with GaN-based integrated circuits (ICs) in the context of advanced wireless systems by utilizing Dr. Eugene Fitzgerald’s innovation framework – Technology, Markets, and Implementation (TMI). A comparative analysis of monolithic vs. board-level GaN integration is conducted. The research highlights that scaling GaN wafer production to approximately 10,000 wafers per year (200 mm wafers) is necessary to achieve cost-effective monolithic integration, yet current defense-driven demand is insufficient to drive economies of scale. Instead, commercial applications—such as telecommunications, power electronics, and consumer RF devices—are the markets that can take advantage of monolithic integration at high volume. &#13;
&#13;
The findings indicate that while defense applications have led non-monolithic GaN adoption (that is, discrete GaN transistor adoption), they cannot sustain large-scale production alone due to small volume. The semiconductor industry must navigate manufacturing bottlenecks, cost reduction strategies, and foundry availability to ensure GaN’s transition from a niche, high-cost technology to a commercially viable solution. By mapping the TMI intersections and addressing economic and technical barriers, this thesis provides strategic insights into how GaN technology can achieve scalable production, unlock new market opportunities, and shape the future of advanced wireless integrated circuits.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts</title>
<link href="https://hdl.handle.net/1721.1/162528" rel="alternate"/>
<author>
<name>Fontaine, Anouk</name>
</author>
<id>https://hdl.handle.net/1721.1/162528</id>
<updated>2025-08-28T03:07:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts
Fontaine, Anouk
The AEC industry is responsible for 40% of global greenhouse gas emissions and 38% of EU waste, much of which is landfilled. This waste represents an immense pool of resources that could be used in place of new materials. Many ongoing research projects have explored ways of reusing irregular components in construction, from whole steel trusses to single elements, triangulated subparts, or even irregular wood offcuts, in order to avoid intensive recycling and deconstruction processes. However, this research has focused on general methodologies or one-off prototypes. This paper introduces a systematic approach to repurposing discarded steel and timber studs - components that account for up to 10% of waste on local sites (Parigi, 2021) - into modular, steel-frame, load-bearing walls, providing a way to build new structures for the growing global demand for housing and infrastructure while minimizing new emissions through the use of waste elements. Through a top-down, stock-constrained design approach, geometry optimization via a matching algorithm is combined with topology optimization to generate and evaluate configurations that minimize new emissions and maximize structural efficiency. For the available inventory, a human-scale prototype provides data on the workflow and further assesses costs, architectural and structural flexibility, construction feasibility, robotic efficiency, and embodied emissions, offering a promising pathway for sustainable construction through effective waste reuse. This approach addresses the existing waste stock alongside the growing demand for infrastructure and minimizes embodied emissions through structurally efficient resource use, pushing a systematic implementation of reuse into common construction practices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems</title>
<link href="https://hdl.handle.net/1721.1/162527" rel="alternate"/>
<author>
<name>Kumar, Prashant</name>
</author>
<id>https://hdl.handle.net/1721.1/162527</id>
<updated>2025-08-28T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems
Kumar, Prashant
Electricity is set to become the central pillar of both energy production and consumption in the global effort to achieve net-zero emissions. As key sectors—transportation, chemicals, and heavy industry—seek to decarbonize by electrifying their operations, industrialized nations face mounting strain on their electricity systems. This strain is further compounded by the rising demand for electricity driven by data centers and artificial intelligence applications, heralding a future of potentially unrelenting load growth.&#13;
In such a context, it becomes not merely prudent but essential to approach decisions regarding investment and operation in the electricity sector with analytical rigor. Advanced capacity expansion models provide the tools for this task. In this thesis, the GenX model is employed to study Taiwan’s electricity system—an islanded, industrially intensive grid—evaluating the evolution of its capacity mix, generation profile, prices, emissions, and overall costs.&#13;
Our findings suggest that a reliable path to decarbonization lies in a considered combination of natural gas-fired generation with carbon capture and storage (CCUS), renewable sources such as solar and wind, and energy storage systems. Furthermore, this study finds that integration of nuclear and geothermal technologies significantly improves the cost-effectiveness of achieving decarbonization targets.&#13;
This thesis also addresses the “missing money” problem endemic to energy-only electricity markets, examining how the introduction of a capacity market influences both investment and operational outcomes. We find that the efficacy of capacity markets is highly sensitive to the design parameters of the demand curve and the capacity credit values of the resources. For islanded systems such as Taiwan’s, a pragmatic approach to ensuring security of supply may involve retaining existing natural gas infrastructure as a strategic reserve, paired with a capacity market design that avoids excessive conservatism, leveraging the absence of policy interactions and competition with neighboring electricity markets, as observed in Europe.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids</title>
<link href="https://hdl.handle.net/1721.1/162526" rel="alternate"/>
<author>
<name>Anastos, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162526</id>
<updated>2025-08-28T03:06:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids
Anastos, Daniel
One of the largest existential challenges the US and other countries face is climate change, and perhaps no system is more crucial to combating it than the grid. Ever more requirements have been placed on the transmission and distribution grids to play a larger role than in the past; consider AI, EVs, residential solar, electrification of heat, decarbonization of buildings, rising energy rates, and aging infrastructure. Improving the grid is a necessity for decarbonization and innovation. However, utilities, usually backed by state regulation, tend to use traditional techniques to expand grid capacity and increase resiliency rather than investing in modern grid technology that would more quickly enable future innovation and decarbonization. These technologies, or techniques, are broadly called grid enhancing technologies (GETs). There are rational reasons why GETs are not used more often. Utilities are, correctly, highly risk-averse because they must safely and adequately supply power directly to people. Using new technologies, even proven ones, can be a risk that utilities are unwilling, or not allowed, to take given their role and responsibility. But these risks are largely avoided with the technologies discussed in this paper, which arguably could make the grid not only cheaper to expand but also more resilient. This paper explores how a particular grid section can increase its solar penetration by avoiding traditional hosting capacity limitations, using GETs that are not even novel but largely tested and proven. Traditionally, at some limit, the utility will stop allowing solar in an area due to various grid constraints. This paper explores how a utility may resolve these constraints with new methods, avoiding large grid-expansion CAPEX while utilizing new technologies and techniques.
Some of the techniques explored here are commercial scale energy storage support at substations, PV curtailment, and volt-var optimization control.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geothermal Energy Planning Considerations for Military Operational Energy Demands</title>
<link href="https://hdl.handle.net/1721.1/162525" rel="alternate"/>
<author>
<name>Seckfort, Cody L.</name>
</author>
<id>https://hdl.handle.net/1721.1/162525</id>
<updated>2025-08-28T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Geothermal Energy Planning Considerations for Military Operational Energy Demands
Seckfort, Cody L.
Contingency locations are temporary military bases that are often established in austere or contested environments. These locations rely heavily on diesel fuel for electrical power, which creates logistical vulnerabilities and increases the risk to personnel conducting fuel resupply missions. While the Department of Defense has made progress in adopting renewable energy technologies, many of these systems remain too large, inefficient, or underdeveloped for widespread use in operational environments. Geothermal energy presents a promising but underexplored alternative for generating reliable, on-site electrical power without the need for continuous fuel resupply.&#13;
This thesis evaluates the feasibility of geothermal energy systems for military operational energy demands and introduces a modified power planning process that incorporates geothermal considerations. The research focuses on closed-loop geothermal systems, utilizing an example system called the “Mil-Loop”, which is designed to minimize the system surface footprint and support remote installations. The planning process integrates existing geothermal tools, including GEOMAP/TEST for resource estimation and GEOPHIRES for system modeling and performance analysis. The Mil-Loop System Model incorporates each step of the planning process to produce a site-specific power system profile. &#13;
A case study using site-specific data from Bagram Airfield was used to assess the performance of a hybrid geothermal-diesel power system. The results suggest that geothermal system integration could reduce diesel fuel consumption by up to 42.9 percent over a 40-year site lifecycle. A sensitivity analysis indicates that geothermal system power output, drilling time, and installation costs are the most critical parameters affecting system viability. Advances in drilling technology and heat extraction have the potential to reduce installation costs and timelines, making geothermal more competitive with diesel generation. This thesis also identifies a gap in military energy planning resources, specifically the lack of frameworks that include geothermal options for operational environments. It recommends that the DoD begin integrating geothermal technologies into its energy planning strategies and develop modular systems that can be deployed in contested or resource-constrained areas. &#13;
While this research is limited by simplified power demand modeling and generalized tool assumptions, it offers a practical framework for evaluating geothermal viability in future defense applications. This study demonstrates that geothermal energy systems, particularly closed-loop configurations, can serve as a viable and strategically beneficial power source for military operations. When paired with targeted technology development and thoughtful integration into planning processes, geothermal systems can reduce logistical burdens, improve energy resilience, and enhance mission success in operational environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles</title>
<link href="https://hdl.handle.net/1721.1/162524" rel="alternate"/>
<author>
<name>Balla, Sai Prasad</name>
</author>
<id>https://hdl.handle.net/1721.1/162524</id>
<updated>2025-08-28T03:07:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles
Balla, Sai Prasad
This study provides a comprehensive techno-economic evaluation of a specific class of nuclear batteries—high-temperature gas-cooled 10 MW_th microreactors (HTGRs) with TRISO fuel in prismatic- and pebble-bed cores—using four composite moderator concepts (MgO–Be, MgO–BeO, MgO–YH, MgO–ZrH). These options are compared against a prismatic graphite benchmark, under both once-through and continuous-recycle fuel cycles.&#13;
&#13;
In once-through prismatic systems, hydride-based moderators can reduce overall fuel-cycle costs by up to about 20% relative to graphite, whereas beryllium-based moderators may remain 40–50% costlier due to higher raw material expenses. Shifting from prismatic blocks to pebble beds decreases moderator usage and increases burnup, thus making advanced moderator options more competitive. &#13;
&#13;
Adopting a continuous-recycle strategy replaces enrichment with reprocessing and can further lower fuel-cycle costs by roughly 30%. Coupling a sodium-cooled fast reactor (SFR) to supply transuranics reduces costs further: SFR driver fabrication and reprocessing can account for the bulk of total costs, rendering microreactor-level variations comparatively minor. Meanwhile, pebble-bed designs promise ultra-high burnups and extended residence times, which could yield significant economic gains, contingent on demonstrated long-term TRISO fuel integrity.&#13;
&#13;
Waste handling also factors into the analysis. Deconsolidation—removing the inert moderator before disposal—can shrink spent-fuel volumes by more than 90%, easing repository demands. Continued R&amp;D into advanced additive manufacturing, high-burnup TRISO performance, and streamlined waste management will be crucial for capitalizing on these potential cost advantages.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation</title>
<link href="https://hdl.handle.net/1721.1/162523" rel="alternate"/>
<author>
<name>Bhatia, Jagdeep Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/162523</id>
<updated>2025-08-28T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation
Bhatia, Jagdeep Singh
Robots with robust bimanual dexterity have the potential to transform industries such as manufacturing and healthcare by performing complex tasks at human-level proficiency. While end-to-end learning methods have shown promise toward this goal, scaling these approaches remains challenging. Existing paradigms suffer from the high cost of collecting large-scale, high-quality demonstrations on physical systems and face performance saturation due to reliance on offline data. We propose a task-agnostic pipeline that leverages robotics simulation to overcome these limitations. In particular, we introduce DART, a cost-effective, augmented-reality robot teleoperation platform for scalable data collection. We demonstrate through a user study that it enables twice the throughput of existing systems. We also present a learning algorithm that integrates real-world demonstrations with reinforcement learning to surpass performance plateaus. Finally, we design a method that zero-shot transfers policies trained in simulation to real robots using only RGB input. Together, these contributions provide a practical and scalable path toward general-purpose dexterous robot manipulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image Registration and Gantry Tracking System of Clytia hemisphaerica</title>
<link href="https://hdl.handle.net/1721.1/162522" rel="alternate"/>
<author>
<name>Bunch, Bradley</name>
</author>
<id>https://hdl.handle.net/1721.1/162522</id>
<updated>2025-08-28T03:08:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Image Registration and Gantry Tracking System of Clytia hemisphaerica
Bunch, Bradley
Understanding nervous system function and evolution requires detailed behavioral analysis of model organisms such as the jellyfish Clytia hemisphaerica. However, its size and rapid, free-swimming nature pose significant tracking challenges. This work presents an XY gantry platform developed to overcome these hurdles and enable high-resolution behavioral monitoring. Separately, to prepare for downstream neural analysis, we developed an automated neuron segmentation pipeline tailored for image registration. Together, the tracking system and the analysis-preparation pipeline provide powerful, distinct tools for high-throughput behavioral quantification and facilitate future studies linking behavior to underlying neural dynamics in Clytia hemisphaerica.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin Technology Applied to Automotive Diagnostics</title>
<link href="https://hdl.handle.net/1721.1/162521" rel="alternate"/>
<author>
<name>Mwarage, Jessy Mbagara</name>
</author>
<id>https://hdl.handle.net/1721.1/162521</id>
<updated>2025-08-28T03:07:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Twin Technology Applied to Automotive Diagnostics
Mwarage, Jessy Mbagara
There is currently great interest in Digital Twin (DT) technology. Organizations oriented around physical products are increasingly looking for ways to stay ahead of the technological innovation curve so as not to be disrupted by more agile entrants, and the promise of a technology like DT is alluring for maintaining a competitive edge. This thesis explores the potential benefits of DT technology and the challenges that might be faced in implementing one. To this end, a problem statement is formulated in the field of automotive diagnostics, a key value-addition area for automotive companies seeking to better manage the diagnosis and repair of their vehicles in the field or the manufacturing environment. The problem is further concretized through a study of user-driven use cases and needs at a real automotive company. From these needs, a set of requirements is formulated to guide the architecture and design of a DT demonstration. The process of architecting and designing the DT is documented, including a deep dive into the modeling approaches considered, the solution space for the architecture, and the detailed design and implementation of a DT demonstration from a selected architectural concept. The DT demonstration is then operated under controlled conditions to showcase some of its capabilities. Finally, the effectiveness of the demonstration and the lessons learned about the implementation process are discussed. The results of the study and demonstration show promise for organizations seeking to adopt DT technology, in this case for automotive diagnostics. The benefits lie mainly in better system-architecture planning and the increased potential for incorporating lessons learned from products operating in the field back into the design process.
These benefits are weighed against the socio-technical challenges of implementing DTs from the outset of a system design exercise.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Acquisition for Enhancing Human-Informed Topology&#13;
Optimization</title>
<link href="https://hdl.handle.net/1721.1/162520" rel="alternate"/>
<author>
<name>Wang, Zach</name>
</author>
<id>https://hdl.handle.net/1721.1/162520</id>
<updated>2025-08-28T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data Acquisition for Enhancing Human-Informed Topology&#13;
Optimization
Wang, Zach
This thesis presents a survey application designed to support the future development of Human-Informed Topology Optimization (HiTop) toward deeper integration of optimization and real-world feasibility. Topology optimization produces high-performance designs by optimally distributing material, but its application in professional environments remains limited due to fabrication constraints and inflexible design workflows. To address this, the Carstensen Group developed HiTop, which integrates optimization algorithms with human experience, allowing engineers to modify the computed design based on their professional judgment. The further development of HiTop therefore requires real-world data on human preferences. This project introduces a web-based survey app integrated with Qualtrics. It presents users with various design scenarios and computer-optimized designs, and records their modifications and reasoning. A preliminary survey collected responses from 13 professionals and engineering students. Preliminary findings suggest that engineers consistently focus on similar regions of interest, even when motivated by different reasons; however, the sample size is too small for statistically significant conclusions. While the platform mostly performed as intended, a bug related to data storage was discovered during analysis. The issue has since been resolved, and the tool is now fully functional and ready for broader deployment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process</title>
<link href="https://hdl.handle.net/1721.1/162519" rel="alternate"/>
<author>
<name>Lauber, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162519</id>
<updated>2025-08-28T03:07:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process
Lauber, Emily
This research investigates the motivational drivers for companies and individuals to participate in the World Wide Web Consortium’s Web standards development process. Motivational drivers are identified through a literature review, primary sources, and interviews. Thirteen semi-structured interviews were conducted with questions related to participants’ experience with the World Wide Web broadly, Web standards in general, the organization of W3C, and game modeling of the process. W3C was selected as the case study of Web-related standards bodies because of its unique model of paid membership yet open standards available royalty-free. The W3C standards process requires consensus-building, horizontal review, and proof of implementation before the organization officially recommends the specification. Existing research documents the history and value of standardization across industries, the modeling of various Standards Development Organizations (SDOs) in information industries, and the negotiation of international Internet governance. This thesis does not attempt to prove a societal benefit of Web standards but instead focuses on an individual’s belief in societal benefit and how that belief drives their engagement with W3C.&#13;
&#13;
Initial findings point to members seeking economic, philosophic, and moral value through participation in Web standards development. A game theory framework evaluates the economic value of different players within the ecosystem and identifies that Web browser vendors and long-time consortium members have greater power over their preferred specification outcomes than Web developers or newcomers. Despite changes in the Web ecosystem over the past 30 years, W3C members continue to be drawn to the Web for the same philosophical intents for which Sir Tim Berners-Lee designed it. There are shared concerns, though, that the economic power players identified in the game modeling have damaged or will threaten the philosophy of an open, safe, accessible Web. Interviewees shared personal beliefs that there is a moral responsibility to engage in Web standards development and enable W3C’s mission of “empowering humanity”. Further research is required to catalogue more motivational drivers, evaluate drivers across other Web-related Standards Development Organizations, and rank the priority of motivations when the different drivers are in tension.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems</title>
<link href="https://hdl.handle.net/1721.1/162518" rel="alternate"/>
<author>
<name>Putnam, Rachael M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162518</id>
<updated>2025-08-28T03:07:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems
Putnam, Rachael M.
Designing perception systems for autonomous robots and vehicles requires balancing sensor performance against cost, complexity, and integration constraints. This thesis introduces GO4R (Generation and Optimization of Perception System Architectures for Robotics), a multi-objective framework that jointly optimizes sensor selection and placement against a volumetric, entropy-based utility metric H (dimensionless) and monetary cost M ($). Perception Entropy H is formalized as a volumetric measure of uncertainty across a voxelized region of interest (ROI), which naturally rewards the coverage, overlap, and redundancy required for robust sensor fusion and calibration.&#13;
&#13;
NSGA-II is implemented with custom mixed-variable operators to handle both the continuous (e.g., sensor poses) and discrete (e.g., sensor type/count) decision variables found in this problem. Two case studies, long-range outdoor navigation on a Clearpath Jackal and short-range indoor navigation on ANYmal-C, demonstrate the framework’s ability to generate Pareto-optimal sensor architectures under vastly different ROI definitions and operating conditions. In the Jackal study, GO4R converges to a population of 11 novel Pareto-optimal designs, revealing sensitivity to voxel size and importance weighting. In the ANYmal-C study, the compact, uniformly weighted ROI yields a flatter Pareto front with 25 Pareto-optimal designs, underscoring how intrinsic sensor parameters (e.g., angular resolution and field of view) dominate design trade-offs when baseline coverage is already high.&#13;
&#13;
Key architectural decisions are analyzed, quantified by their impact on Pareto front shape, and ordered according to the GO4R method to successively reduce uncertainty. The resulting guidelines provide practitioners with a rigorous, reusable process for tailoring perception systems to task-specific requirements. Finally, GO4R provides a publicly available NVIDIA Isaac Sim extension to help practitioners follow the GO4R method, regardless of their autonomy application. Future work will extend GO4R to dynamic environments, improve the fidelity of generated designs, and incorporate additional cost metrics such as computational load and maintainability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale</title>
<link href="https://hdl.handle.net/1721.1/162517" rel="alternate"/>
<author>
<name>Shao, Yu-Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/162517</id>
<updated>2025-08-28T03:07:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale
Shao, Yu-Tong
Crop residues are a widely available form of agricultural waste with several possible reuse applications, including use as biofertilizers, animal feed, and biofuels, and for carbon sequestration. However, in many parts of the world, large quantities of these residues are still burned in the field, releasing significant amounts of greenhouse gases (GHGs) and air pollutants into the atmosphere. This study evaluates alternative, carbon-efficient strategies for reusing crop residues – focusing especially on rice straw and wheat straw – by conducting life cycle assessments (LCA) of multiple utilization pathways. Five alternative scenarios are assessed: incorporating residue in the field, use as animal feed, pyrolysis for electricity generation, pyrolysis for carbon sequestration, and electricity generation through residue combustion. For the pyrolysis and residue-combustion scenarios, the maximum feasible transport distances of crop residues from agricultural fields to processing facilities are modeled for different logistics methods, informing the siting of centralized facilities while maintaining the GHG benefits of each scenario. The results highlight that electricity generation using crop residues, either through pyrolysis or direct residue combustion, offers the greatest climate benefits among all evaluated options. Carbon sequestration through pyrolysis also yields substantial GHG reductions, though slightly lower than the benefits from electricity generation. While crop residue-based electricity emits 4.35 to 31.25 times more GHGs per unit of electricity generated than renewable sources and 50.00 to 67.57 times more than nuclear sources, it still performs better than fossil fuels, resulting in 30.56 to 66.67% lower GHG emissions, and provides added value in terms of agricultural waste management.
Moreover, transportation emissions account for only a small share of the total life cycle global warming potential (GWP) in the electricity generation scenarios, ranging from 0.66% (via ships) to 16.40% (via trucks) for every 1000 km traveled. This makes long-distance residue transport viable. The findings of this study underscore the potential for crop residues to play a meaningful role in climate mitigation and sustainable agricultural waste management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Roadmapping and Technology Portfolio Selection for Heating Decarbonization in Canada</title>
<link href="https://hdl.handle.net/1721.1/162516" rel="alternate"/>
<author>
<name>Shalash, Karim</name>
</author>
<id>https://hdl.handle.net/1721.1/162516</id>
<updated>2025-08-28T03:07:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Roadmapping and Technology Portfolio Selection for Heating Decarbonization in Canada
Shalash, Karim
Heating systems contribute significantly to Canada’s greenhouse gas emissions, accounting for approximately 117 megatons of CO₂ equivalent and demanding urgent decarbonization to meet national climate targets. This thesis employs the Advanced Technology Roadmap Architecture framework, integrating strategic roadmapping and technology portfolio selection methodologies to evaluate pathways for transitioning Canada’s heating sector to net-zero emissions by 2050. By analyzing historical emissions, forecasting adoption trends for key technologies like heat pumps, and conducting stakeholder-driven scenario analysis, this research identifies critical barriers to scaling low-carbon solutions, including high upfront costs, infrastructural limitations, and regional climatic constraints.&#13;
Seven representative heating architectures—air-source heat pumps, ground-source heat pumps, district heating, hydrogen-based systems, electric resistive heating, and conventional gas-fired furnaces—are evaluated comprehensively. Among these, district heating is particularly emphasized due to its potential for significant emissions reductions and minimal consumer-bearing initial cost of ownership, especially when strategically integrated with waste heat recovery from data centers. This integration utilizes otherwise wasted thermal energy, creating a robust symbiotic opportunity for urban and industrial decarbonization. &#13;
To support the practical deployment of these architectures, the thesis establishes a targeted technology portfolio comprising essential enabling and supporting technologies. Enabling technologies include centralized supervisory control systems, urban-scale district heating networks, inverter-driven compressors, advanced refrigerants, ground heat exchangers, and circulation pumps with variable frequency drives. Critical supporting technologies identified encompass building information modeling integration kits, cybersecurity modules, digital permitting platforms, smart thermostats, and thermal energy storage systems, among others. &#13;
This thesis further explores technology trade-offs, focusing on structural complexity, technology readiness, and associated risks of deployment. Through detailed modeling and stakeholder-informed scenario analysis, the thesis concludes that effective decarbonization of heating in Canada necessitates substantial policy interventions, robust financial incentives, targeted infrastructure investments, and region-specific strategies. The analysis indicates that a carefully allocated $8 billion catalyst investment could close approximately 60% of Canada’s heating emissions gap by 2050. Ultimately, district heating coupled with waste heat recovery emerges as a particularly promising strategic option, underscoring its transformative potential within a diversified approach to achieving Canada’s sustainable heating future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity and Prenatal Care: Geographic Accessibility to Healthcare Facilities in N’Djamena, Chad</title>
<link href="https://hdl.handle.net/1721.1/162515" rel="alternate"/>
<author>
<name>Alkhalil, Kabbod</name>
</author>
<id>https://hdl.handle.net/1721.1/162515</id>
<updated>2025-08-28T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Proximity and Prenatal Care: Geographic Accessibility to Healthcare Facilities in N’Djamena, Chad
Alkhalil, Kabbod
Access to prenatal care is critical for reducing maternal and neonatal mortality rates. Yet, accessibility to healthcare facilities remains an understudied challenge in many sub-Saharan African countries. This study examines the spatial accessibility to healthcare facilities in N’Djamena, Chad, across various transportation modes, as well as the relationship between travel time and adherence to WHO-recommended prenatal care visits.&#13;
This analysis utilized a mixed-methods approach. A geospatial analysis was conducted to estimate travel times and distances to the nearest healthcare facility across the city of N’Djamena using various transportation modes to uncover areas of low accessibility. This analysis was supplemented with survey data collected from interviews with 67 pregnant women across three different hospitals.&#13;
Findings show that 72% of the surveyed population use motorcycles or cars and benefit from high accessibility: 95% of these patients have travel times under 26 minutes and 30 minutes, respectively. In contrast, pedestrians have poor accessibility, especially when patients attend only district or national hospitals. This behavior is common: 81% of the surveyed population reported bypassing closer facilities, citing familiarity and quality of care as the main reasons. In this case, 20% of the population face travel times greater than one hour on foot. &#13;
While adherence to WHO guidelines was high in early pregnancy (below 20 weeks), it declined in later stages. The study found no statistically significant correlation between travel time and adherence.&#13;
Improving accessibility for pedestrians will require a combination of health system improvements, better facility distribution, and transport subsidies. The Ministry of Public Health and urban planners can employ similar data-driven approaches to plan the placement of new healthcare facilities and develop outreach strategies to ensure equitable access in a growing urban context.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proof-of-Work Mitigation Strategy for DNS-Based&#13;
Amplification Attacks</title>
<link href="https://hdl.handle.net/1721.1/162514" rel="alternate"/>
<author>
<name>Bansal, Umang</name>
</author>
<id>https://hdl.handle.net/1721.1/162514</id>
<updated>2025-08-28T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Proof-of-Work Mitigation Strategy for DNS-Based&#13;
Amplification Attacks
Bansal, Umang
Distributed Denial of Service (DDoS) attacks have seen a steady rise in deployment over the past few decades. DNS Amplification attacks, in particular, are challenging to identify and mitigate because of their apparent similarity to legitimate DNS traffic. This thesis proposes a new Proof-of-Work mitigation strategy that defends against DNS Amplification attacks and shifts the burden of mitigation to the attackers. Through our experiments, we show that our Proof-of-Work strategy is effective in reversing the impact of DNS Amplification attacks on the victim’s ability to service legitimate clients. We also provide a framework to evaluate the mitigation strategy’s impact on the victim’s quality of service.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Human-Informed Variables in Medical Data</title>
<link href="https://hdl.handle.net/1721.1/162513" rel="alternate"/>
<author>
<name>Abu Daoud, George</name>
</author>
<id>https://hdl.handle.net/1721.1/162513</id>
<updated>2025-08-28T03:07:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Human-Informed Variables in Medical Data
Abu Daoud, George
In the Age of Information and Artificial Intelligence, data plays a major role in analyzing and understanding underlying trends and patterns, as well as in informing processes and operations. Medical data often captures not only patient conditions and state but also human behavioral aspects of the medical process, which affect the data itself and the decisions informed by it. Modeling these variables could help us understand how they influence decisions in the field and potentially augment our models for better, more nuanced predictions. In the first study, we look into how external non-medical factors might affect decision-making by investigating the effect of 30-day mortality metrics on discharge rates following surgeries in Cardiovascular Intensive Care Units (CVICU), using data from the MIMIC-IV dataset. In the second study, we examine data extraction from human-written notes for enhancing organ procurement decision processes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soil Moisture Dynamics and Thresholds for Surface Energy Balance Regime Transitions: An Observational Analysis at a U.S. Grassland Site</title>
<link href="https://hdl.handle.net/1721.1/162512" rel="alternate"/>
<author>
<name>Verensia, Ria</name>
</author>
<id>https://hdl.handle.net/1721.1/162512</id>
<updated>2025-08-28T03:07:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Soil Moisture Dynamics and Thresholds for Surface Energy Balance Regime Transitions: An Observational Analysis at a U.S. Grassland Site
Verensia, Ria
Understanding how soil moisture declines following rainfall—when the soil progressively dries due to evaporation and plant uptake—is critical for assessing plant water stress, surface energy partitioning, and land–atmosphere interactions. These periods of moisture loss, commonly referred to as soil moisture drydowns, provide a valuable window into the transition from wet to dry surface conditions. This study focuses on the critical soil moisture threshold (θ*), which marks the transition from energy-limited to water-limited surface evaporation regimes. This transition reflects a key shift in surface energy balance and controls the extent to which evaporation is constrained by moisture availability. While previous research has typically treated θ* as a static value based on soil texture, emerging evidence suggests that it may vary depending on environmental conditions, particularly seasonal climate. This study investigates whether θ* is a fixed property or a dynamic threshold influenced by seasonal variation and available energy. Using in situ data from the Soil Temperature and Moisture Profile (STAMP) system and Infrared Thermometer (IRT) measurements at a semi-arid grassland site in Oklahoma, USA, I identify and analyze soil moisture drydown events. I estimate θ* by applying piecewise linear regression to the relationship between soil moisture and diurnal surface temperature range, isolating the breakpoint that indicates the transition from energy-limited to water-limited evaporation. Results reveal that θ* exhibits systematic temporal variations, particularly across seasons and temperature regimes, suggesting that surface temperature dynamics during drydowns are most likely a response to changes in soil moisture content. These findings challenge the assumption that θ* is solely texture-dependent and highlight the need to account for dynamic environmental controls in modeling surface energy exchange. 
This research provides new insights into soil moisture-temperature coupling and offers implications for land surface model development, drought forecasting, and vegetation response assessments under a changing climate.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MOBLLM: Model Building LLMs via Symbolic Regression and Experimental Design</title>
<link href="https://hdl.handle.net/1721.1/162509" rel="alternate"/>
<author>
<name>Binbas, Berkin</name>
</author>
<id>https://hdl.handle.net/1721.1/162509</id>
<updated>2025-08-28T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">MOBLLM: Model Building LLMs via Symbolic Regression and Experimental Design
Binbas, Berkin
Large language models (LLMs) have recently emerged in daily use and have already been extensively utilized for various tasks. They have been shown capable of carrying out increasingly complex tasks, including those that require a high level of formal/mathematical reasoning at human or superhuman levels. In particular, their in-context learning capabilities, the domain-specific knowledge acquired from their vast pretraining corpora, and their fine-tunability for specific tasks have drawn considerable attention and research in the field. However, applying LLMs at the frontiers of scientific research remains an underexplored direction. In this work, we investigate how one can leverage LLMs to aid in building compact mathematical models and in experimental design. Specifically, we propose a framework for using LLMs as a guide to concurrently handle the experimental design and symbolic regression tasks for data obtained from 1) a black-box 1D function and 2) a black-box physical system. We propose further modifications to our base framework and perform experiments to analyze how it performs under different experiment variants and across different LLM tiers. Our experiments reveal that while larger models (of around 70B parameters) do not always achieve better downstream performance than smaller models (of around 8B parameters), they are able to utilize the given information and/or physical context when designing experiments and proposing symbolic expressions, and they perform better than random-design baselines. We also observe that natural language constraints do not consistently improve symbolic regression accuracy. These results underscore both the challenges and the potential of integrating LLM agents into the scientific discovery process, particularly as proposers of experiments and symbolic expressions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>I-Con: A Unifying Framework for Representation Learning</title>
<link href="https://hdl.handle.net/1721.1/162508" rel="alternate"/>
<author>
<name>Alshammari, Shaden</name>
</author>
<id>https://hdl.handle.net/1721.1/162508</id>
<updated>2025-08-28T03:07:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">I-Con: A Unifying Framework for Representation Learning
Alshammari, Shaden
As the field of representation learning grows, there has been a proliferation of different loss functions to solve different classes of problems. We introduce a single information-theoretic equation that generalizes a large collection of modern loss functions in machine learning. In particular, we introduce a framework that shows that several broad classes of machine learning methods are precisely minimizing an integrated KL divergence between two conditional distributions: the supervisory and learned representations. This viewpoint exposes a hidden information geometry underlying clustering, spectral methods, dimensionality reduction, contrastive learning, and supervised learning. This framework enables the development of new loss functions by combining successful techniques from across the literature. We not only present a wide array of proofs, connecting over 23 different approaches, but we also leverage these theoretical results to create state-of-the-art unsupervised image classifiers that achieve a +8% improvement over the prior state-of-the-art on unsupervised classification on ImageNet-1K. We also demonstrate that I-Con can be used to derive principled debiasing methods which improve contrastive representation learners.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System thinking to analyze the Market penetration of Two-Wheeled vs Four-Wheeled EVs in India</title>
<link href="https://hdl.handle.net/1721.1/162507" rel="alternate"/>
<author>
<name>Kumbhare, Piyush</name>
</author>
<id>https://hdl.handle.net/1721.1/162507</id>
<updated>2025-08-28T03:07:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">System thinking to analyze the Market penetration of Two-Wheeled vs Four-Wheeled EVs in India
Kumbhare, Piyush
This thesis analyzes the disparate market penetration rates of electric two-wheelers (E2Ws) and electric four-wheelers (E4Ws) in India, using systems thinking approaches to understand the underlying dynamics and propose strategic interventions. In 2024, while E2Ws have achieved 4.43% market penetration, E4Ws lag significantly at 1.91%, despite similar policy support. Through force field analysis and stakeholder value mapping, this research identifies key factors driving this disparity and evaluates their temporal evolution over three time horizons.&#13;
The analysis reveals that E2Ws benefit from stronger driving forces, including urban suitability, favorable total cost of ownership, and simpler charging solutions, with 91% of users relying on home charging. In contrast, E4Ws face more substantial barriers, particularly in upfront costs, charging infrastructure requirements, and range anxiety. Technical modeling of key Figures of Merit (FOMs) demonstrates how different optimization challenges affect each segment's market acceptance.&#13;
The research culminates in recommendations for accelerating E4W adoption, emphasizing the need for India-specific models priced comparably to internal combustion engine (ICE) vehicles, localized manufacturing ecosystems, robust charging infrastructure, and innovative financing solutions. The findings suggest that while E2W adoption will continue to grow naturally, E4W penetration requires coordinated interventions across manufacturing, technology, infrastructure, policy, and consumer awareness dimensions. This research contributes to understanding how systems thinking can inform strategic planning for electric vehicle adoption in emerging markets, with specific implications for India’s goal of 30% EV penetration by 2030.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience</title>
<link href="https://hdl.handle.net/1721.1/162506" rel="alternate"/>
<author>
<name>Dao, Nguyen Luc</name>
</author>
<id>https://hdl.handle.net/1721.1/162506</id>
<updated>2025-08-28T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience
Dao, Nguyen Luc
Large Language Models (LLMs) have been increasingly adopted by businesses to support their workflows, driving significant investment in developing generative agents. These agents can collaborate and exchange information to solve complex problems. Previous research has found that the benefits of such multi-agent systems include better performance and the potential emergence of collective intelligence characterized functionally as leadership, debate, and feedback. However, expanding multi-agent systems to include agents beyond trusted boundaries introduces the risks of malicious agents that provide incorrect or harmful information to deteriorate collective decisions or cause systemic failure. This study investigates how architectural decisions, including group size, agent prompting, and collaboration schemes, impact the system's resilience against malicious agents. Our experiment results show that increasing group size improves both accuracy and resilience at the cost of more tokens. Step-back abstraction prompting enhances accuracy and mitigates the likelihood of hallucinations induced by malicious agents. Group Chat topology is highly vulnerable to malicious interferences. Reflexion, Crowdsourcing, and Blackboard topologies offer safeguards against such risks. Eventually, we expand our research to investigate accountability gaps in generative AI systems. Designing generative multi-agent systems requires careful consideration of the trade-offs between performance, cost, resilience, and accountability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>National Space Power Analysis Through Organizational and Market Evolution</title>
<link href="https://hdl.handle.net/1721.1/162505" rel="alternate"/>
<author>
<name>Deline, Carrie B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162505</id>
<updated>2025-08-28T03:07:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">National Space Power Analysis Through Organizational and Market Evolution
Deline, Carrie B.
The space domain is undergoing fundamental changes and unprecedented growth. Once dominated by state-led missions, the space sector is now home to commercial competition, rapid innovation, and evolving models of public-private collaboration. These changes call into question how space power is built and maintained, especially amid the rising geopolitical tensions and power competition in space. The rise of an agile commercial industry has driven down launch costs, accelerated technology development, and opened new markets and business cases, forcing legacy institutions to re-evaluate their strategies and business models.&#13;
&#13;
This thesis is motivated by the need to understand how organizations are responding to these changes, and how their choices collectively shape the United States as a national space power. Through the application of a theoretical space power model based on war strategy and Schumpeterian innovation theory, the different elements of space power are explored in today’s context. This thesis seeks to identify the organizational drivers of change, the tensions and synergies between legacy enterprises and new entrants, and the implications of the dynamic space ecosystem.&#13;
&#13;
This thesis includes a mixed-methods analysis, starting with a historical understanding of the sector’s evolution. After identifying current market trends, government policies, and initiatives, the theoretical model is applied. The model is supported by market data, a force field analysis of organizational shifts, and qualitative interview insights from industry leaders. The research aims to contribute insights for government strategists and industry leaders concerned with America’s future as a space power and their organization’s role within it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Noisy with a Chance of Mislabels: A Local and Training Dynamics Perspective on Detecting Label Noise in Deep Classification</title>
<link href="https://hdl.handle.net/1721.1/162504" rel="alternate"/>
<author>
<name>Chentouf, A. Anas</name>
</author>
<id>https://hdl.handle.net/1721.1/162504</id>
<updated>2025-08-28T03:07:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Noisy with a Chance of Mislabels: A Local and Training Dynamics Perspective on Detecting Label Noise in Deep Classification
Chentouf, A. Anas
Noisy labels are a pervasive challenge in modern supervised learning, especially in high-stakes domains such as healthcare, where model reliability is critical. Detecting and mitigating the influence of mislabeled data is essential to improving both performance and interpretability. Building on insights from training dynamics, we propose Local Consistency across Training Epochs (LoCaTE), a class of data-filtering methods that leverages over-parameterized and over-trained neural networks to distinguish clean samples from mislabeled ones. Our approach integrates both local neighborhood information and the behavior of samples across training epochs to identify noise and enhance model robustness. We evaluate our method on real (human) and synthetic label noise across three classification datasets, finding that it achieves competitive F₁ scores for label error detection and improved downstream accuracy using a lightweight classifier with low added computational cost.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Multimodal Interactions through Improved Partial Information Decomposition Estimation</title>
<link href="https://hdl.handle.net/1721.1/162503" rel="alternate"/>
<author>
<name>Balachandran, Adithya S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162503</id>
<updated>2025-08-28T03:07:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Multimodal Interactions through Improved Partial Information Decomposition Estimation
Balachandran, Adithya S.
Multimodal AI aims to build comprehensive models by integrating information from diverse sensory inputs such as text, audio, and vision. However, significant challenges remain in understanding how these different modalities interact and contribute to downstream tasks. In particular, we seek to characterize how modalities complement each other, overlap in the information they convey, or contribute jointly to patterns that are not clear from any single modality alone. To address this, we propose novel methods for quantifying these multimodal interactions using information-theoretic techniques. Specifically, we introduce a novel estimator for Partial Information Decomposition (PID) using normalizing flows, with the ability to scale well to high-dimensional data. We also develop a new framework for estimating pointwise PID, which provides insights into how individual data points contribute to information sharing and interactions across modalities, and show how to apply this framework for anomaly detection. We demonstrate the effectiveness of our methods on a variety of high-dimensional datasets, including both synthetic and real-world multimodal data such as videos.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explanation Alignment: Quantifying the Correctness of&#13;
Model Reasoning At Scale</title>
<link href="https://hdl.handle.net/1721.1/162502" rel="alternate"/>
<author>
<name>Bang, Hyemin</name>
</author>
<id>https://hdl.handle.net/1721.1/162502</id>
<updated>2025-08-28T03:07:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explanation Alignment: Quantifying the Correctness of&#13;
Model Reasoning At Scale
Bang, Hyemin
To improve the reliability of machine learning models, researchers have developed metrics to measure the alignment between model saliency and human explanations. Thus far, however, these saliency-based alignment metrics have been used to conduct descriptive analyses and instance-level evaluations of models and saliency methods. To enable evaluative and comparative assessments of model alignment, we extend these metrics to compute explanation alignment — the aggregate agreement between model and human explanations. To compute explanation alignment, we aggregate saliency-based alignment metrics over many model decisions and report the result as a performance metric that quantifies how often model decisions are made for the right reasons. Through experiments on nearly 200 image classification models, multiple saliency methods, and MNIST, CelebA, and ImageNet tasks, we find that explanation alignment automatically identifies spurious correlations, such as model bias, and uncovers behavioral differences between nearly identical models. Further, we characterize the relationship between explanation alignment and model performance, evaluating the factors that impact explanation alignment and how to interpret its results in practice.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Octavio: A Distributed System for the Sensing, Storing,&#13;
and Retrieval of Piano Playing Data</title>
<link href="https://hdl.handle.net/1721.1/162501" rel="alternate"/>
<author>
<name>Abdulrezak, Ayyub</name>
</author>
<id>https://hdl.handle.net/1721.1/162501</id>
<updated>2025-08-28T03:07:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Octavio: A Distributed System for the Sensing, Storing,&#13;
and Retrieval of Piano Playing Data
Abdulrezak, Ayyub
MIT has a wealth of pianos spread across its campus. These instruments are owned by various groups and MIT organizations. Every day, students, faculty, and extended members of the MIT community play and practice with them. However, there currently exists no available data on their usage. This project aims to create the infrastructure for capturing this data. To this end, we installed sensing equipment on pianos across campus, constructed a matching database and filesystem of all playing sessions across time, and established a public API for the retrieval of this data. The collected data will later be used to power a publicly accessible webpage of real-time and historical visualizations, as well as serve to bolster research efforts into the characteristic piano playing of MIT.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy and Technical Recommendations for Integrating&#13;
Autonomy into Military Offensive Cyberspace Operations</title>
<link href="https://hdl.handle.net/1721.1/162449" rel="alternate"/>
<author>
<name>Wettstein, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162449</id>
<updated>2025-08-22T03:06:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Policy and Technical Recommendations for Integrating&#13;
Autonomy into Military Offensive Cyberspace Operations
Wettstein, Benjamin
As AI technologies and autonomy grow, the transition to application in the military, specifically enhancing military cyberspace operations, has become both a strategic imperative and an adoption challenge. This thesis explores the challenge of effectively integrating autonomous cyber weapons systems into offensive military cyberspace operations. I offer both technical and policy recommendations to ensure autonomous technology development does not outpace its ability to be integrated. &#13;
&#13;
This thesis analyzes historical case studies, such as loitering munitions and escort jammers, to examine the potential for integrating autonomous cyber weapons systems into military offensive cyberspace operations. This analysis finds that the more autonomous and lethal a weapon is, the more difficult it is to integrate it into military operations.&#13;
&#13;
Subsequently, the current state of cyberspace operations is analyzed by discussing two cyberspace attacks, Stuxnet and Conficker. This analysis reveals that cyberspace operations currently demonstrate low to medium levels of autonomy and low levels of lethality. Therefore, there is a significant opportunity to adopt autonomous systems in the current context of offensive cyberspace operations. However, as the domain of cyberspace is transforming with the growth of complexity in technology, there are evolving legal, ethical, bureaucratic, and technical concerns. This thesis offers policy recommendations around technical standards, investment and acquisitions, and regulations on the use of autonomous cyber capabilities to address these challenges. Along with the policy recommendations, the core technical recommendation that enables autonomous cyber systems is the safe and effective deployment of human-machine interfaces to direct and control them. This thesis argues that interfaces are not merely supporting tools but are, in fact, the central technical mechanism for enabling traceability, oversight, and control in autonomous cyberspace operations. The future development and integration of autonomous cyber systems must&#13;
prioritize interface design tailored to varying degrees of autonomy and operator control.&#13;
&#13;
The technical portion of this thesis explores different interfaces for autonomous cyber systems, utilizing distinct models of autonomy within the Cyber Operations Research Gym (CybORG) simulation environment. Each interface corresponds to the three human-machine relationships discussed, which include a semi-autonomous interface (human in the loop), a supervised autonomous interface (human on the loop), and a fully autonomous interface (human out of the loop). These interfaces serve as a proof of concept, providing evidence that different levels of autonomy can be implemented on the same autonomous cyber system. Additionally, the use of LLMs to explain the actions taken by autonomous cyber systems is explored.&#13;
&#13;
Ultimately, this thesis contributes technical and policy recommendations for navigating the future of autonomous cyber warfare. As autonomous systems evolve in sophistication and capability, the U.S. military must adopt policy and technical mechanisms that enable autonomy without sacrificing oversight, accountability, or effectiveness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Response of Arabidopsis to bacterial presence under iron stress</title>
<link href="https://hdl.handle.net/1721.1/162448" rel="alternate"/>
<author>
<name>Kitzinger, Katherine A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162448</id>
<updated>2025-08-22T03:06:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Response of Arabidopsis to bacterial presence under iron stress
Kitzinger, Katherine A.
Iron availability is essential for the normal function of plants, but iron becomes less available for uptake under drought. A lack of iron can lead to early senescence, fewer and less nutritious crops, and, in extreme cases, plant death. In response to these stressful conditions, microbial interactions can lead to improved plant health; however, the mechanism by which this occurs is not understood. In this study, we cocultured an Arabidopsis MTP8 knockout line, which is susceptible to iron stress, with a subset of a previously established synthetic microbial community derived from healthy Arabidopsis roots. We cocultured the Arabidopsis lines and bacteria under three different iron levels in a hydroponics system and measured dry weight and chlorophyll content ten days post inoculation. This study aims to narrow down the potential mechanism of the beneficial effects of bacteria on plants experiencing nutrient stress.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Topology Optimization with Hybrid Truss and Continuum Elements Types</title>
<link href="https://hdl.handle.net/1721.1/162447" rel="alternate"/>
<author>
<name>Zhang, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/162447</id>
<updated>2025-08-22T03:06:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interactive Topology Optimization with Hybrid Truss and Continuum Elements Types
Zhang, Eileen
Topology optimization is a rising tool in structural design that can improve material efficiency and promote sustainability. However, topology optimization is not widely used in industry because it is user-unfriendly, computationally expensive, and difficult to translate into manufacturable designs. This thesis proposes a new framework combining traditional discrete topology optimization with truss elements and continuum-element topology optimization, creating a more informed algorithm suitable for practical design scenarios. In addition, a drawing toolkit is introduced to help users interact with the system and steer it toward their desired outcome. The hybrid element-type topology optimization is achieved by creating separate local stiffness matrices and mapping them to the same global design space to perform optimization together. The interactive drawing functions let users add truss members, selecting how many to include and drawing their lengths and locations in the design space. This framework is tested on multiple classic topology optimization problems, including a cantilever beam with bracing and the MBB beam. All drawn-in truss hybrid topology-optimized results show more efficient designs, with lower compliance and lower overall material quantity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Inconsistent Results of Table Transformer for Improved Data Extraction in Childhood Obesity Intervention Literature</title>
<link href="https://hdl.handle.net/1721.1/162445" rel="alternate"/>
<author>
<name>Neupane, Pragya</name>
</author>
<id>https://hdl.handle.net/1721.1/162445</id>
<updated>2025-08-22T03:06:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Inconsistent Results of Table Transformer for Improved Data Extraction in Childhood Obesity Intervention Literature
Neupane, Pragya
Tables in scientific literature are rich sources of structured data, yet their complex and variable formats pose challenges for automated extraction. This thesis focuses on improving the reliability of Table Structure Recognition (TSR) using the Table Transformer (TATR) model, with a specific application to childhood obesity intervention studies. While fine-tuning TATR on a domain-specific dataset improves detection metrics, persistent errors such as overlapping rows and misclassified header columns remain. Through a systematic post-hoc error analysis of 175 scientific tables, we identify these dominant failure modes and develop lightweight post-processing modules: an overlap-aware row filtering algorithm and an OCR-enhanced column boundary correction method. Importantly, instead of relying on computationally expensive large language models (LLMs), this approach leverages efficient, interpretable techniques tailored to the domain-specific structure of public health tables. Our combined method reduces the proportion of structurally erroneous tables from 46.3% to an estimated 9.7–12.6%, improving the semantic alignment and interpretability of model outputs. This work contributes a transparent and scalable pipeline that enhances the trustworthiness of automated table extraction systems, with direct relevance to evidence-based decision-making in public health.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Opportunities to Reduce Carbon Dioxide Emissions from Electric Arc Furnace Steelmaking in the United States</title>
<link href="https://hdl.handle.net/1721.1/162444" rel="alternate"/>
<author>
<name>Colcord, Christopher C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162444</id>
<updated>2025-08-22T03:06:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Assessing Opportunities to Reduce Carbon Dioxide Emissions from Electric Arc Furnace Steelmaking in the United States
Colcord, Christopher C.
Steel is energy- and CO₂ emissions-intensive to produce, but it is also a crucial material for infrastructure, defense, and the energy transition. This thesis focuses on Electric Arc Furnace (EAF) steelmaking, which accounts for roughly 70% of steel production in the United States. Decarbonization levers for EAF producers are diverse—encompassing energy efficiency (EE) measures, fuel switching, material input substitution, development of onsite carbon-free electricity (CFE) generation, CFE procurement through power purchase agreements (PPAs) or unbundled renewable energy credits (RECs), and negative-emissions credit purchases, among others. We first construct a techno-economic model that analyzes costs and emissions of individual EAF facilities in the United States under a business-as-usual (BAU) scenario for the years 2025 through 2035. We then calculate the Levelized Cost of Carbon Abatement (LCCA) of various decarbonization levers against the BAU counterfactual. We build aggregate LCCA curves to draw insights on least-cost emissions abatement strategies for facilities and opportunities for policy to accelerate decarbonization decisions.&#13;
&#13;
We find that the modeled levers collectively deliver a 46% reduction in EAF CO₂ emissions versus the BAU case—equivalent to a reduction of roughly 1.7% of national industrial CO₂ emissions. Voluntary CFE procurement has the greatest potential to abate EAF emissions, but comes with large uncertainties. Onsite CFE and PPAs have negative LCCAs in most cases, whereas unbundled RECs have positive LCCAs. EE measures provide modest emissions reductions and costs are negative on a levelized basis under a wide range of assumptions. EE opportunities, onsite CFE, and PPAs may be bound by non-financial constraints. Direct reduced iron (DRI) with carbon capture has lower variable costs and produces fewer emissions versus hydrogen-based DRI in most cases. While the challenges to decarbonize EAF steelmaking are immense, we find EAF facilities can take actionable steps in the near term—supported by federal and state policies—to abate carbon emissions while reducing levelized costs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Capacity of Generative AI to Learn Genotype-by-Environment Interactions in Brachypodium distachyon</title>
<link href="https://hdl.handle.net/1721.1/162443" rel="alternate"/>
<author>
<name>Neufeldt, Charlie</name>
</author>
<id>https://hdl.handle.net/1721.1/162443</id>
<updated>2025-08-22T03:06:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating the Capacity of Generative AI to Learn Genotype-by-Environment Interactions in Brachypodium distachyon
Neufeldt, Charlie
Climate change exacerbates environmental stressors such as drought, challenging the resilience of agricultural systems and highlighting the need to understand plant genomic architecture and its responses to such environmental variation. A key molecular mechanism underlying these responses is transcriptional plasticity: environment-induced changes in gene expression that vary among genotypes, representing one way that genotype-by-environment (GxE) interactions manifest at the molecular level. While transcriptomic data offers a unique and powerful view into these responses, traditional modeling approaches often rely on linear assumptions, limiting their ability to detect complex, nonlinear patterns of regulation. This thesis investigates whether generative machine learning modeling, specifically the use of transformers, can extract biologically meaningful representations of gene expression dynamics in plants. Inspired by the successes of the scGPT model for human genomics, I developed and trained a compact transformer architecture, the PlantGeneEncoder, on bulk RNA-seq data from two natural accessions of Brachypodium distachyon grown under drought and control conditions. The model was trained on binned expression values using both a baseline configuration and a set of regularized variants incorporating noise injection, co-expression preservation, entropy-based sample weighting, and masked gene modeling as a self-supervised objective. While baseline models achieved perfect reconstruction accuracy, they failed to preserve meaningful biological structure in the latent space. Regularized models achieved a better trade-off, maintaining high reconstruction fidelity while demonstrating improved genotype classification performance and modestly better alignment with the original expression structure. However, environmental condition signals remained difficult to capture across all configurations, with classification accuracies only marginally above random chance. 
These findings highlight the promise and limitations of transformer-based generative modeling for plant transcriptomics and provide a flexible framework for future efforts to model transcriptional plasticity and regulatory responses to environmental stress.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Numerical Analysis of Human-Informed Topology Optimized Lateral-Load-Resisting Systems of Tall Buildings under Seismic Excitation</title>
<link href="https://hdl.handle.net/1721.1/162438" rel="alternate"/>
<author>
<name>Blaze, Edie</name>
</author>
<id>https://hdl.handle.net/1721.1/162438</id>
<updated>2025-08-22T03:06:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Numerical Analysis of Human-Informed Topology Optimized Lateral-Load-Resisting Systems of Tall Buildings under Seismic Excitation
Blaze, Edie
In the construction industry, structural, architectural, and environmental considerations can often be at odds with each other, leading to inefficient structures and, consequently, material waste. Topology optimization has shown promise as one potential solution to this problem, offering designs that are both structurally efficient and aesthetically interesting. However, topology-optimized designs are often difficult to manufacture or do not take into consideration other aspects that are crucial in the construction industry. Human-informed topology optimization, or HiTop, is a previously-developed algorithm that allows users to edit areas of interest, providing a computationally-efficient solution to address concerns with the designs. This paper uses MATLAB to apply HiTop to the design of the lateral-load-resisting systems of tall buildings, comparing results to those of three other designs: a “human” design with standard cross bracing, an optimized design using classical topology optimization, and a previously-developed algorithm which optimizes designs under a sum-of-modal-compliances formulation, similar to how structures are analyzed in seismic codes. The designs are evaluated quantitatively, comparing natural periods, modal displacements, the sum of modal compliances using modal decomposition, and computation time. They are also evaluated qualitatively, as HiTop is used to modify designs to improve constructability and aesthetics. The HiTop algorithm successfully created manufacturable, aesthetic designs in line with the user’s goals across a range of H/B ratios within a brief time frame. HiTop designs also performed similarly to the classically optimized designs, indicating that modifications to an optimized design to improve manufacturability, aesthetics, or other potential goals of a user do not significantly decrease structural performance under seismic loading.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peatland burning identification among other wildfires across different ecozones in Canada</title>
<link href="https://hdl.handle.net/1721.1/162437" rel="alternate"/>
<author>
<name>Chen, Ming</name>
</author>
<id>https://hdl.handle.net/1721.1/162437</id>
<updated>2025-08-22T03:06:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Peatland burning identification among other wildfires across different ecozones in Canada
Chen, Ming
The unprecedented severity of the 2023 Canadian wildfires highlights growing concerns about the vulnerability of global peatlands—key ecosystems storing substantial amounts of terrestrial carbon. Peatlands, traditionally resistant to burning, are increasingly at risk due to climate-induced warmer and drier conditions. This study specifically investigates the extent and characteristics of peat burning in the 2023 Canadian wildfires based on available remote sensing data. The primary objective is to determine whether fires on peatlands demonstrate distinct fire behavior compared to fires on non-peatland areas. To achieve this goal, this study utilized statistical tools and machine learning algorithms, including power-law relationship estimates, the Mann-Whitney U test, K-means clustering, and a generalized additive model (GAM), to identify the contribution of peat presence to fire behavior. Key findings demonstrate that fires on peatland are significantly more intense, longer-lasting, and associated with higher carbon emissions. Even though peat combustion cannot be confirmed without field validation, these results underscore the critical importance of the potential impact of peat on wildfire growth and management. By highlighting the disproportionate impact of peat burning, this study provides a foundation for future research aimed at developing targeted remote sensing techniques and policy responses to mitigate peatland vulnerability and preserve vital carbon stores in the context of global climate change.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable Engineering of Polyethylene Fiber Materials: Advancing Functional Properties of Diverse Textile-Based Structures</title>
<link href="https://hdl.handle.net/1721.1/162434" rel="alternate"/>
<author>
<name>Huynh, Amy</name>
</author>
<id>https://hdl.handle.net/1721.1/162434</id>
<updated>2025-08-22T03:06:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sustainable Engineering of Polyethylene Fiber Materials: Advancing Functional Properties of Diverse Textile-Based Structures
Huynh, Amy
This thesis explores pathways to circularity for polyethylene-based textiles through an integrated framework that combines material experimentation, systems-level policy analysis, and cultural innovation. Focusing on olefin block copolymer (OBC) filaments—engineered with semicrystalline polyethylene hard segments and elastomeric soft blocks—the study evaluates their mechanical behavior across a range of stitch-based textile geometries. Cyclic and postfatigue tensile testing reveals how formulation and structure shape energy dissipation and durability, informing design strategies for high-performance applications such as intra-vehicular spacesuits and wearable technologies. To understand the broader systems context, the thesis analyzes barriers to integrating recycled polyethylene (rPE) into textile supply chains, identifying economic, legal, institutional, technological, firm-level, and societal constraints. It proposes targeted strategies based on global policy trends, EU case studies, and a geospatial analysis of U.S. recycling infrastructure. Finally, the work explores how generative AI can revitalize traditional craft practices—such as bobbin lace—by co-creating patterns designed for both aesthetic and functional performance in new materials. Together, these efforts propose a model for advancing sustainable textile innovation that bridges material science, circular design, and policy transformation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fueling Conflict: A Global Dataset of Energy Protests</title>
<link href="https://hdl.handle.net/1721.1/162432" rel="alternate"/>
<author>
<name>Harrison, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/162432</id>
<updated>2025-08-22T03:06:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fueling Conflict: A Global Dataset of Energy Protests
Harrison, Ethan
How do popular grievances about (the lack of) access to energy lead to political violence and instability? I use a mixed-methods approach to answer this question, based on a qualitative case study in Sri Lanka and a quantitative framework for tracking energy protests worldwide. Specifically, through an analysis of the 2022 Aragalaya protest movement in Sri Lanka, I elaborate on how breakdowns in state capacity to provide energy to its citizens can trigger civilian unrest. Building on this case study, as well as insights from the empirical literature on the drivers of instability related to energy access, I then pilot a machine learning (ML) framework to identify energy-related protest events in the Armed Conflict Location and Event Data (ACLED) dataset based on context-specific keywords, which results in the creation of the first global dataset on energy protests. This novel source of evidence, in turn, will open new avenues for research on the conflict-energy nexus, particularly on the impact of market shocks on civilian unrest and instability in low- and middle-income countries – a topic for which current empirical work is limited. I show how the ML framework I develop here can be used to enable continuous monitoring of protest activity related to energy access, as well as how the framework can be extended to other forms of political violence, offering a promising tool for peace-building initiatives across contexts. Therefore, such a framework could inform key evidence to support policymakers, practitioners, and researchers in the design of strategic policies that facilitate the provision of energy while mitigating the risk of conflict and instability worldwide, particularly in "energy-poor" countries.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Circular Lunar Economy: Incentivizing the Design of Multi-Purpose Reusable Lunar Landers and Rovers</title>
<link href="https://hdl.handle.net/1721.1/162431" rel="alternate"/>
<author>
<name>Khan, Nadia Rehman</name>
</author>
<id>https://hdl.handle.net/1721.1/162431</id>
<updated>2025-08-22T03:06:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards a Circular Lunar Economy: Incentivizing the Design of Multi-Purpose Reusable Lunar Landers and Rovers
Khan, Nadia Rehman
Both NASA and ESA have committed to establishing a lasting presence on the Moon by 2030. However, lunar surface debris has already exceeded 200,000 kg, prompting concerns about the environmental, operational, and economic viability of future missions. This thesis proposes that circular economy principles—particularly reusability, modularity, and interoperability—must be embedded in early mission architecture to reduce waste and improve system longevity. To evaluate these goals, this thesis introduces a novel decision-support framework, the Lunar Exploration Impact Assessment (LEIA), alongside a policy-informed set of Lunar Surface Sustainability Guidelines (LSSG). Both decision-support tools are designed to help mission designers and space policy stakeholders incentivize the design of resilient, reusable lunar landers and rovers. The LEIA framework was applied to two case studies, NASA JPL’s Endurance-A autonomous lunar sample-return rover and ESA’s multi-purpose Argonaut lander, to evaluate the sustainability of each spacecraft after the EOL/M phase of each mission. Scores were computed using a Multi-Criteria Decision Analysis (MCDA) approach. Seven Impact Assessment Indicators (IAIs) were considered to assign a sustainability rating for each mission: cost-effectiveness, environmental impact, science value, redundancy, resilience, strategic value, and technological feasibility. The Endurance-A mission achieved a sustainability score of 66.4%, based on a post-primary-mission sample collection scenario, indicating moderate sustainability across some categories such as cost-effectiveness (18.9%) and technological feasibility (12%). However, the environmental impact score was limited to 7.7%, due to the outgassing and launch emissions associated with the SpaceX Starship lander. 
The rover’s redundancy and maintainability ratings also constrained the overall sustainability rating – highlighting a gap in the availability of tools suitable for EVA-based repairs on the lunar surface. Subsystems most at risk of degradation—mobility, thermal, and power—require enhanced design for long-term reuse scenarios. Each of these factors was made salient through the Argonaut case study, indicating that, in the short to medium term, lunar rovers and landers must be designed to be more resilient to the conditions of the lunar environment in order to prevent the accumulation of lunar surface debris. To supplement the LEIA framework, a set of policy recommendations was developed to address the lack of End-of-Life (EOL) procedures in place to manage lunar surface debris – in the form of retired lunar missions. The guidelines detail how economic policy mechanisms adopted in circular economy systems could be leveraged to incentivize the design of sustainable lunar surface missions and operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beam Mechanism Failure in Multistory Steel Frame Structures</title>
<link href="https://hdl.handle.net/1721.1/162430" rel="alternate"/>
<author>
<name>Hashbarger, Brad</name>
</author>
<id>https://hdl.handle.net/1721.1/162430</id>
<updated>2025-08-22T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Beam Mechanism Failure in Multistory Steel Frame Structures
Hashbarger, Brad
Engineers must ensure that building structures are not in danger of collapse, so analyses always include safety factors that create redundant yet materially inefficient buildings. This has been common practice for most of structural history, but today, growing concerns over carbon emissions force designers to cut material usage while retaining the same level of safety. Designers opt for one of two approaches: an overall lighter structure or stiffening specific internal systems to encourage a load path. The problem with either of these options lies in progressive collapse in the event of structural damage. If one column is lost, stresses propagate until either equilibrium is reached or a larger collapse occurs. Progressive collapse remains a popular research area to identify specific vulnerabilities, often with numerical models for a visualization of each stress state and redundant capacity. Previous studies used analytical and experimental performance to observe the critical effects of losing an external versus internal column and the role of other components, such as joints, joists, and composite slabs, in carrying additional loads. However, designs and analyses are bound by assumptions that govern model behavior. To understand the sensitivity and limits of these assumptions, this thesis predicts the performance of steel moment-frame structures of varying bay geometries, proposing deflection fields to inform modern practice in all phases of project development. Instead of numerical simulations, the process follows an analytical approach based on the fundamental methods of equilibrium and the conservation of work and energy. By designing sections for their elastic capacity, their operational performance is directly linked to their failure response. This suggests the dominance of design preferences in stability, even with changes in beam spans or floor loading. 
Results support an optimal span ratio for plasticity under two-way load distributions that favors bay geometry ratios (L1/L2) between 1 and 2 but varies based on failure locations and how many columns have been lost. This also emphasizes the weaknesses out of plane as span ratios range from 0.5 to 1. Project layouts can utilize the free strength provided by bay geometries as part of the structural design process. If large deflections or span lengths are expected, beam depth and section thickness should increase together to ensure beams utilize their full plastic capacity to achieve additional redundancy from catenary action. Overall, the thesis demonstrates that such considerations in the early design stage can enable steel structures to achieve greater safety with less material.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Aboveground Biomass (AGB) Throughout the Pacific</title>
<link href="https://hdl.handle.net/1721.1/162428" rel="alternate"/>
<author>
<name>Domingo-Kameʻenui, Joy P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162428</id>
<updated>2025-08-22T03:06:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Estimating Aboveground Biomass (AGB) Throughout the Pacific
Domingo-Kameʻenui, Joy P.
Aboveground biomass (AGB) is a significant carbon pool in forests, making AGB a good indicator of forest health and carbon storage. AGB has been studied on multiple scales, in which allometric equations were developed to find relationships between AGB and tree parameters. However, despite the presence of AGB studies for specific sites in the Pacific Islands, there is a lack of AGB comparative studies or data syntheses focused on the Pacific Islands as a whole. This study synthesized data on AGB, tree height H, land cover, and Pacific Island forest community to develop allometric equations using linear and polynomial regression models for trees in the Pacific based on H as the main parameter. This study found polynomial relationships between AGB and H for shrub and herbaceous covers. Specifically, AGB = 1.76 H^2 - 51.01 H + 346.53 for shrub cover (adjusted R^2 = 0.94, n = 39), and AGB = 1.11 H^2 - 81.97 H + 1167.20 for herbaceous cover (adjusted R^2 = 0.71, n = 79). However, future research and data collection would be necessary to develop allometric equations for tree cover and barren land cover. No significant correlation was found between AGB and H for Pacific Island forest community. This study may help with forest management and conservation practices, along with carbon sequestration and storage practices in forests, in the Pacific Islands. This study may also contribute to Pacific-led climate change mitigation and adaptation methods and initiatives.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-powered Data Mining for the Development of Sustainable Concrete Materials</title>
<link href="https://hdl.handle.net/1721.1/162427" rel="alternate"/>
<author>
<name>Duan, Yifei</name>
</author>
<id>https://hdl.handle.net/1721.1/162427</id>
<updated>2025-08-22T03:06:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI-powered Data Mining for the Development of Sustainable Concrete Materials
Duan, Yifei
Data mining has become essential to contemporary industrial and scientific research, playing a pivotal role in uncovering insights from large-scale industrial datasets and literature collections. The sustainable transition of the concrete industry, a major contributor to global CO₂ emissions, demands both operational optimization and scientific innovation. This thesis presents comprehensive data mining frameworks for both industrial and literature source data to support the development of more sustainable concrete materials. Focusing on concrete manufacturing, we develop AI-powered methodologies tailored to real-world industrial data and complex scientific literature. For industrial data mining, we propose to incorporate interpretability and realistic engineering design scenarios to enhance the reliability of both predictive and prescriptive modeling of concrete mixes containing supplementary cementitious materials (SCMs). A domain-informed amortized Gaussian process and a shallow multi-layer perceptron (MLP) are shown to possess superior scientific consistency in predicting time-varied compressive strength, and time-invariant slump and air content properties, respectively. The explainable surrogate property models are applied in mix design optimization under a variety of realistic scenarios considering different engineering design requirements and SCM costs and densities. The importance of the comprehensive property constraint set is demonstrated in comparison against a baseline using only the 28-day strength constraint, which results in unreasonable property values. The necessity to differentiate realistic scenarios is also highlighted through the differences among optimized mixes and their production costs and climate impacts. Higher design strength, higher design slump, lower design air content, higher SCM density, and higher SCM unit cost can drive up the production costs. 
Though stratification patterns in the production costs of optimized mixes are observed across different scenarios, the mix-wise climate impacts are not clearly stratified, indicating that substantial emission reduction can be achieved without significantly increasing costs, regardless of the realistic scenarios. For literature mining, a novel method that finetunes lightweight large language models (LLMs) (pythia-2.8B) with multichoice instructions is developed. With the multifaceted linguistic complexity of communication within the domain rendering it infeasible to adopt the conventional named-entity-recognition approach, the new method successfully achieves great information inference accuracy in a time-, cost-, and computation-efficient manner, outperforming the GPT-3.5 in-context learning baseline by over 20%. A knowledge graph is constructed with the literature-mined data, offering insights to promote alternative material substitution strategies in concrete production as the current commercial SCMs are not comprehensively sustainable in the longer term. Statistical summary and temporal trend analyses are adopted to provide both static and dynamic insights into the research landscape. Although SCMs have remained a research hotspot, results revealed a systematic shift in recent studies from commercial SCMs to other materials. Geopolymer and fine aggregate studies have surged in the recent period, while clinker feedstock and filler studies have declined. A node similarity metric is modified to develop a model-free link prediction algorithm, enhanced with random graph perturbation for robustness and uncertainty quantification. Through link prediction, the currently underexplored lime-pozzolan cement application emerges as a potentially promising future research direction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metabolic Scaling Analysis of Building Energy Efficiency: A Case Study of Massachusetts Institute of Technology</title>
<link href="https://hdl.handle.net/1721.1/162426" rel="alternate"/>
<author>
<name>Hsu, Yu-Hsuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162426</id>
<updated>2025-08-22T03:06:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metabolic Scaling Analysis of Building Energy Efficiency: A Case Study of Massachusetts Institute of Technology
Hsu, Yu-Hsuan
The building sector plays a critical role in global energy consumption and carbon emissions, accounting for 21% of global GHG emissions (12 GtCO₂-eq) and 31% of global final energy demand (128.8 EJ) in 2019 (Cabeza et al. 2022). This reality underscores the urgent need to enhance energy efficiency within the sector. This research applies ecological metabolic scaling principles to building energy analysis, utilizing the Massachusetts Institute of Technology (MIT) campus as a case study. Analogous to biological systems, where an animal’s metabolic rate scales with the 3/4 power of its mass, our findings indicate that larger buildings, similar to larger organisms, are inherently more energy efficient.&#13;
Furthermore, an analysis of overall energy consumption at MIT from 2009 to 2020 reveals a steady decline, though not proportionally, as the scaling exponent fluctuated with a decreasing trend (&lt;3/4), indicating improved efficiency in larger buildings. However, the COVID-19 pandemic in 2020 acted as a major shock, disrupting this trend. This disruption was likely driven by operational and behavioral changes, including reduced occupancy, increased remote work, and adjustments to ventilation and heating systems to ensure health and safety. These shifts highlighted the system’s tendency to return to the baseline scaling exponent of 3/4, demonstrating regression to the mean and ultimately pushing efficiency back to its prior baseline level of 25%.&#13;
Additionally, the study includes case analyses of specific buildings on the MIT campus to provide deeper insight into comparative energy performance. While several guidelines for energy systems have been proposed, certain limitations remain. Future research should focus on expanding the dataset to help validate the applicability of these findings to other contexts while also accounting for variations in building types. Ultimately, this study aims to facilitate the development of more effective policies and innovations in building energy management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases</title>
<link href="https://hdl.handle.net/1721.1/162424" rel="alternate"/>
<author>
<name>Grewal, Darshdeep</name>
</author>
<id>https://hdl.handle.net/1721.1/162424</id>
<updated>2025-08-22T03:06:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases
Grewal, Darshdeep
The urgent global transition to renewable energy is constrained by the intermittent nature of solar and wind sources, highlighting the critical need for scalable energy storage solutions. This thesis presents a comprehensive investigation into the development of structurally integrated supercapacitors based on carbon-doped cement composites, known as EC3 cells. These multifunctional materials combine structural performance with electrochemical energy storage capabilities, enabling integration directly into civil infrastructure. The research focuses on three essential challenges for real-world deployment: (1) replacing laboratory acrylic casings with hydrophobic sealants compatible with cementitious systems, (2) quantifying and mitigating shrinkage and swelling in nanocarbon cement matrices under electrolyte exposure, and (3) identifying corrosion-resistant current collectors that maintain conductivity and mechanical durability under harsh conditions. Bitumen-based coatings were found to be promising sealants for moisture containment. Shrinkage studies are ongoing. Meanwhile, corrosion testing of various collector materials revealed that graphene sheets and stainless steel–reinforced graphitic papers offered optimal trade-offs between conductivity, corrosion resistance, and mechanical performance. The thesis concludes with two field-implementation design proposals—a vertical column and a vaulted arch—both of which leverage compression to improve electrochemical contact and stability. Altogether, this work establishes a foundational framework for embedding energy storage directly into the built environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preserving Human Autonomy in AI-Mediated Negotiations</title>
<link href="https://hdl.handle.net/1721.1/162422" rel="alternate"/>
<author>
<name>Chen, J. Alvin</name>
</author>
<id>https://hdl.handle.net/1721.1/162422</id>
<updated>2025-08-22T03:05:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Preserving Human Autonomy in AI-Mediated Negotiations
Chen, J. Alvin
The rapid integration of generative artificial intelligence (AI) into negotiation and conflict resolution processes raises critical ethical concerns about the erosion of human autonomy, particularly when AI systems navigate irreconcilable “sacred” values (non-negotiable moral principles) alongside transactional “mundane” interests. This thesis investigates whether generative AI can be designed to recognize and respect important values and beliefs while preserving human agency in decision-making. Drawing on datasets from a repository of large language model (LLM) prompts tested in simulated negotiation scenarios, this study employs a mixed-methods approach to evaluating AI’s efficacy in balancing efficiency with ethical imperatives in negotiation. Quantitative metrics (enumerating the outcomes of two-party negotiations) are analyzed alongside qualitative assessments of values such as transparency and consent, drawn from Kantian ethical frameworks.&#13;
&#13;
My analysis reveals that while AI negotiating bots excel in trades across mundane, tradable interests, they struggle to navigate beliefs and values without oversimplifying moral reasoning or obscuring cultural considerations. These findings inform policy recommendations, including a call for human-in-the-loop validation and technical safeguards for protecting important values in efforts to incorporate AI assistance into negotiations. By bridging technical analysis and ethical theory, I hope this research contributes to improvements in designing autonomy-preserving AI systems for use in a range of negotiating settings, prioritizing human dignity alongside computational efficiency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Effective System Architectures for Cislunar&#13;
Space Situational Awareness</title>
<link href="https://hdl.handle.net/1721.1/162417" rel="alternate"/>
<author>
<name>Rude, Connor D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162417</id>
<updated>2025-08-22T03:06:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterizing Effective System Architectures for Cislunar&#13;
Space Situational Awareness
Rude, Connor D.
Achieving Space Situational Awareness (SSA) in the Cislunar region—the area between the geosynchronous belt and the Moon's gravitational boundary—poses significant technological and organizational challenges. Instead of proposing new theoretical systems, this thesis employs the Architecting Innovative Enterprise Strategy (ARIES) Framework to evaluate existing SSA architectures and previously suggested solutions. ARIES provides a structured assessment through its elements (strategy, information, infrastructure, products, services, processes, organizations, and knowledge), identifying infrastructure, acquisition strategies, policy-driven timelines, and communication structures as key areas for improvement. Stakeholder objectives, current initiatives, and operational needs guide the characterization of an ideal SSA architecture.&#13;
&#13;
Four prior system proposals for cislunar SSA are assessed using qualitative analysis of existing literature and first-order physics-based simulations. These evaluations correlate specific design features with enhanced system suitability. Particularly beneficial are constellation proximity to targets, strategic constellation placement and phasing, sensor orbital diversity, and orbital stability. Additionally, certain design strategies consistently yield higher suitability, including focusing on underserved SSA regions, leveraging heritage technology, and optimizing designs for ride-share launch compatibility.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LLM-Supported Natural Language to Bash Translation</title>
<link href="https://hdl.handle.net/1721.1/162415" rel="alternate"/>
<author>
<name>Westenfelder, Finnian Ellis</name>
</author>
<id>https://hdl.handle.net/1721.1/162415</id>
<updated>2025-08-22T03:05:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">LLM-Supported Natural Language to Bash Translation
Westenfelder, Finnian Ellis
The Bourne-Again Shell (Bash) command-line interface for Linux systems has complex syntax and requires extensive specialized knowledge. Using the natural language to Bash command (NL2SH) translation capabilities of large language models (LLMs) for command composition alleviates these issues. However, the NL2SH performance of LLMs is difficult to assess due to inaccurate test data and unreliable heuristics for determining the functional equivalence of Bash commands. We present a manually verified test dataset of 600 instruction-command pairs and a training dataset of 40,939 pairs, increasing the size of previous datasets by 441% and 135%, respectively. Further, we present a novel functional equivalence heuristic that combines command execution with LLM evaluation of command outputs. Our heuristic can determine the functional equivalence of two Bash commands with 95% confidence, a 16% increase over previous heuristics. Evaluation of popular LLMs using our test dataset and heuristic demonstrates that parsing, in-context learning, in-weight learning, and constrained decoding can improve NL2SH accuracy by up to 32%. Additionally, we consider military use cases for NL2SH models and discuss the limitations of current Department of Defense documentation standards for LLMs. We write and publish documentation for our models and datasets to promote safe use. Our findings emphasize the importance of dataset quality, execution-based evaluation, translation method, and proper documentation for advancing NL2SH translation and enabling responsible use. Our code is available at https://github.com/westenfelder/NL2SH.
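The execution-based half of the functional equivalence heuristic described above can be sketched as follows (a simplified illustration, not the authors' implementation; the full heuristic additionally has an LLM judge whether differing command outputs are still functionally equivalent, and the function name here is illustrative):

```python
import subprocess

def outputs_match(cmd_a: str, cmd_b: str, timeout: int = 5) -> bool:
    """Run two Bash commands in separate shells and compare stdout.

    A minimal execution-based equivalence check; when outputs differ,
    the heuristic described above falls back to LLM evaluation of the
    two outputs rather than declaring the commands non-equivalent.
    """
    def run(cmd: str) -> str:
        result = subprocess.run(
            ["bash", "-c", cmd],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout

    return run(cmd_a) == run(cmd_b)

print(outputs_match("echo hello", "printf 'hello\\n'"))  # True
```

Execution alone misses commands whose outputs are equivalent but not byte-identical (e.g., differing orderings), which is what motivates the combined execution-plus-LLM heuristic.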
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redefining “Space Sustainability” for Launch Vehicles: Forecasting the Atmospheric Impact of the Commercial Space Launch Industry in 2050</title>
<link href="https://hdl.handle.net/1721.1/162406" rel="alternate"/>
<author>
<name>Ma, Clara Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/162406</id>
<updated>2025-08-22T03:06:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Redefining “Space Sustainability” for Launch Vehicles: Forecasting the Atmospheric Impact of the Commercial Space Launch Industry in 2050
Ma, Clara Z.
Discussions on “space sustainability” have largely centered on orbital debris, the burnup of vehicles during atmospheric reentry, and the resulting emissions. However, few studies have examined emissions from the launches themselves. Rocket launches are, along with reentry burnup, the only sources of high-altitude anthropogenic emissions. At such high altitudes, emitted particles can remain in circulation for years. With the annual growth rate of the commercial launch industry averaging 14.6% over the past four years and more than 211 launches in 2023 alone, our research on the atmospheric impact of launch vehicles comes at a crucial point in the policy debate on space sustainability.&#13;
&#13;
This thesis outlines several potential future scenarios of the launch industry in 2050, with all the vehicles in each scenario using the same fuel type. We examine these four launch scenarios—a kerosene (RP-1) launch industry, a methane (CH4) launch industry, a hydrogen (H2) launch industry, and a control or “baseline” scenario without launches. For each scenario, we estimate the number of launches for a distribution of heavy-lift launch vehicles across origin spaceports. We simulate the chemical interactions of the launch plumes with the atmosphere using the global atmospheric chemistry model GEOS-Chem High Performance (GCHP). Finally, we quantify the steady state impact of launch emissions on stratospheric ozone and surface air quality.&#13;
&#13;
We find that the black carbon emitted by kerosene and methane rockets causes an indirect increase in stratospheric ozone due to the removal of NOx, with ozone column change averaging 5.07 Dobson Units (DU) and 1.26 DU respectively; hydrogen rockets cause a net decrease in ozone column averaging -0.11 DU. The population-weighted average surface ozone impact is -0.286 ppb, -0.068 ppb, and 0.023 ppb for RP-1 rockets, CH4 rockets, and H2 rockets respectively. The population-weighted average surface PM2.5 impact is -0.031 μg/m3, -0.004 μg/m3, and 0.002 μg/m3 for RP-1, CH4, and H2 rockets respectively. Although RP-1 and CH4 rockets decrease surface ozone and surface PM2.5, H2 rockets have the smallest magnitude impacts on the atmosphere overall. Our findings have important implications for commercial launch providers, research institutions, and policymakers including the Federal Aviation Administration (FAA) and NASA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Trust and Technology Optimism in the Workforce: Data-Driven Insights into Regional Variation</title>
<link href="https://hdl.handle.net/1721.1/162404" rel="alternate"/>
<author>
<name>Velonia Bellonia, Maria Eleni</name>
</author>
<id>https://hdl.handle.net/1721.1/162404</id>
<updated>2025-08-22T03:05:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI Trust and Technology Optimism in the Workforce: Data-Driven Insights into Regional Variation
Velonia Bellonia, Maria Eleni
Automation and AI systems are reshaping the workplace. How these technologies make a difference varies according to local contexts. Workers’ willingness to trust and embrace these technologies is shaping how this transformation unfolds in practice. Some workers trust AI more than others, and interestingly, trust levels differ from one region to another. Drawing on a far-reaching 2024 worker survey spanning different countries, and on a rich body of literature on technology, trust, and change, this work examines how key factors influencing workers’ AI trust and technology optimism interweave, shaping their perspectives on new technologies and automation. The focus is on understanding how the industrial and regulatory landscape in which workers operate, combined with their personal experiences with AI, shapes their AI optimism, with a particular emphasis on the US and Europe. While external market innovation indicators provide limited understanding of workers’ technology optimism, individual interaction and familiarity with AI, alongside organizational AI adoption and a worker’s industry of employment, emerge as key factors shaping AI trust. Additionally, the regulatory environment, encompassing technology governance, social safety nets, and workers’ institutional trust, all seem connected with how workers think about the impact of new technologies on society, the economy, and their jobs. Interpersonal trust propensity contributes to AI trust formation, though its relevance exhibits regional variation. By offering insights into the critical factors shaping the relationship between workers and AI, this study aims to provide evidence that supports societies in unlocking the value of emerging technologies, while empowering the workforce to confidently embrace and excel alongside them.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Ridership and Travel Time Impacts of Bus Service Changes Using Sketch Planning Methods</title>
<link href="https://hdl.handle.net/1721.1/162329" rel="alternate"/>
<author>
<name>Lim, Tiffany M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162329</id>
<updated>2025-08-12T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predicting Ridership and Travel Time Impacts of Bus Service Changes Using Sketch Planning Methods
Lim, Tiffany M.
Bus service changes range in scale, and understanding their impacts on ridership and travel times can inform decision-making as changes are considered for the bus network. Budgetary limitations are at the heart of service change decisions, resulting in the need for analysts to assess different scenarios and accommodate quick turnarounds. This thesis provides a sketch planning framework for predicting ridership and travel time impacts of bus service changes, with a focus on direct demand models and the use of an open-source multimodal routing algorithm. The framework is designed to be streamlined with the use of data sources and capabilities, such as exporting a General Transit Feed Specification (GTFS) feed of a given bus network scenario, that agencies may have access to through existing transit planning tools.&#13;
&#13;
Direct demand models are developed to estimate bus ridership at the level of approximately one-mile route-segments and time-of-day periods. This level of analysis provides a more disaggregated evaluation of bus ridership than past direct demand models. The models are sensitive to both route and network improvements. New variables designed to capture the relationship between bus routes, including the competitive and complementary nature of routes, are introduced and incorporated in the model development process. These models are developed for the Washington Metropolitan Area Transit Authority (WMATA). A case study analyzing two scenarios in WMATA's Better Bus Network Redesign (BBNR) is presented, with selected route examples to illustrate how the models capture different types of service changes. These routes fall under three categories: routes with no major service changes, routes with improvements in frequency, and routes with re-routing and other improvements.&#13;
&#13;
An open-source multimodal routing algorithm, available through an R package called r5r, is used for travel time analysis. r5r calculates a distribution of door-to-door travel times for a given origin-destination (OD) matrix and returns a selected percentile value from the distribution for each OD pair. The percentile parameter is calibrated through a comparison of estimated travel times and actual travel times recorded in origin-destination-interchange inference (ODX) data. Low percentile values were found to provide travel times close to actual travel times. Additional guidance is provided for interpreting travel times from r5r, and use cases related to calculating travel time impacts between scenarios and evaluating rail competitiveness for a given bus network are explored.
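The percentile-selection step described above can be illustrated with a short sketch (hypothetical travel times; r5r itself is an R package, so this merely mimics its summary step in Python): for each OD pair, a chosen percentile of the simulated door-to-door travel-time distribution is reported, and low percentiles were found to track observed times most closely.

```python
import numpy as np

# Hypothetical door-to-door travel times (minutes) for one OD pair,
# one value per simulated departure time within the analysis window.
travel_times = np.array([22, 24, 25, 27, 30, 34, 41, 48])

# r5r-style summary: report a selected percentile of the distribution.
# Calibration against observed (ODX) times favored low percentiles.
for p in (5, 25, 50):
    print(f"p{p}: {np.percentile(travel_times, p):.1f} min")
```

The percentile parameter thus acts as a single calibration knob: lower values down-weight the slow tail of simulated departures.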
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing and Predicting Urban Rail Platform Crowding Using Emerging Data Sources</title>
<link href="https://hdl.handle.net/1721.1/162328" rel="alternate"/>
<author>
<name>Fiorista, Riccardo</name>
</author>
<id>https://hdl.handle.net/1721.1/162328</id>
<updated>2025-08-12T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensing and Predicting Urban Rail Platform Crowding Using Emerging Data Sources
Fiorista, Riccardo
Rail platform crowding poses serious challenges to passenger safety, operational performance, and service quality in urban rail transit systems. This thesis investigates the short-term forecasting of platform-level crowding, focusing on enhancing prediction accuracy, spatial granularity, and operational interpretability through multi-source data integration. We first employ a gradient-boosted tree regression model (LightGBM) to leverage fare card transaction, vehicle location, weather, and public event data from the Washington Metropolitan Area Transit Authority (WMATA) to forecast platform-level occupancies 15–60 minutes ahead of time. Our results show significant improvements over a WMATA-internal baseline while providing a robust data preparation and prediction pipeline. Subsequently, we explore integrating platform-level CCTV data to overcome the lack of real-time crowding estimates. Using a custom-collected image dataset and three computer vision methods, namely object detection (YOLOv11, RT-DETRv2) and head counting (APGCC), crowd-level classification (Crowd-ViT), and semantic image segmentation (DeepLabV3), we demonstrate that estimated counts from calibrated image segmentation maps enable accurate real-time estimation of platform crowding. Additionally, we show that these estimates can correct and improve 15-minute horizon predictions when incorporated with a stochastic gradient-boosted tree learner such as LightGBMLSS. Finally, we extend the time series modeling framework by incorporating network-wide causal influences through an analysis driven by Empirical Dynamic Modeling and Convergent Cross Mapping. We show that accounting for network effects improves predictive performance, particularly for platforms characterized by regular low-occupancy patterns, improving the prediction of anomalies. 
The work presented in this thesis extends the existing literature on short-term platform crowding prediction, offering new methodologies to incorporate emerging CCTV data and causal network effects for increased prediction accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Thinking as an Analytical Lens for Bilateral International Development: Lessons from the Harbor Reconstruction Project in Jamestown, Accra</title>
<link href="https://hdl.handle.net/1721.1/162327" rel="alternate"/>
<author>
<name>Avis, Victoria</name>
</author>
<id>https://hdl.handle.net/1721.1/162327</id>
<updated>2025-08-12T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Spatial Thinking as an Analytical Lens for Bilateral International Development: Lessons from the Harbor Reconstruction Project in Jamestown, Accra
Avis, Victoria
This thesis examines transcalar tensions that emerge from urban infrastructure development projects funded through bilateral foreign assistance mechanisms. Using a mixed-methods case study approach to gather data from a wide variety of historical and contemporary primary and secondary sources, this research centers a harbor revitalization and port reconstruction project in Jamestown, a historic fishing community in Accra, Ghana. Having coordinated plans with the Ghanaian national government, a Chinese state-owned construction firm began working on the port in 2020. In 2024, the revitalized harbor and expanded port were officially handed over to the government of Ghana in a widely attended ceremony. The spatial implications of this physical urban infrastructure project across international, national, municipal, and local levels are complex and interrelated. Therefore, this case study is especially relevant at a historical moment when the nature of bilateral engagement may be undergoing significant transformation. &#13;
&#13;
This thesis argues that spatial thinking, a foundational concept in urban planning, is a necessary analytical lens to incorporate within international development practice. Despite its relevance, spatial thinking has not been meaningfully incorporated into international development policy or implementation. Therefore, this thesis seeks to bridge epistemic gaps between urban planning and international development by advancing a spatial thinking framework, adapted for use in international development contexts. In doing so, this thesis envisions a future for bilateral development assistance that delivers equitable and sustainable development outcomes across scales of engagement. This approach, rooted in spatial thinking, intends to respond to local community needs and aspirations, capacitate municipal governments, align with national priorities, and accommodate geopolitical dynamics that facilitate bilateral project implementation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solid-State NMR Characterization of PET Ligand Binding Sites in AD Tau Fibrils</title>
<link href="https://hdl.handle.net/1721.1/162319" rel="alternate"/>
<author>
<name>Angehrn Rodas, Frida Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/162319</id>
<updated>2025-08-12T03:07:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solid-State NMR Characterization of PET Ligand Binding Sites in AD Tau Fibrils
Angehrn Rodas, Frida Nicole
Aggregation of the tau protein into fibrils is a key feature of Alzheimer's disease (AD) and many other neurodegenerative disorders. Developing small molecules that bind these tau fibrils is important for the diagnosis and treatment of tauopathies. This thesis examines the binding sites of a positron emission tomography (PET) ligand, PI-2620, on a recombinant tau construct that adopts the C-shaped AD fold. Solid-state NMR experiments, combined with techniques such as transmission electron microscopy (TEM) and docking simulations, provided a better understanding of the binding sites of this PET agent. Specifically, 13C-19F REDOR experiments were used to identify residues near the ligand. PI-2620 was found to bind two primary sites within the C-shaped structure. The docking simulations suggested several possible binding poses. Additional 2D NMR experiments suggest that PI-2620 alters the protofilament interfaces. The stoichiometry of PI-2620 binding to tau fibrils was determined to be approximately 20 mol%, with varying degrees of ligand mobility. These findings offer insights into the interaction of this PET tracer with tau fibrils and have implications for the design of improved imaging agents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Demand‑Driven Decarbonization: Impact of Voluntary 24/7 Low-Carbon Power Procurement</title>
<link href="https://hdl.handle.net/1721.1/162318" rel="alternate"/>
<author>
<name>Ali, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/162318</id>
<updated>2025-08-12T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Demand‑Driven Decarbonization: Impact of Voluntary 24/7 Low-Carbon Power Procurement
Ali, Adam
This thesis examines the impact of voluntary 24/7 (hourly) low-carbon power procurement on grid-wide emissions and investment strategies in generation technologies. Recognizing the growing number of businesses and government agencies making voluntary commitments to reduce greenhouse gas emissions (GHGs) through increased procurement of low-carbon power, this study investigates the effectiveness of these commitments, particularly those aiming for hourly matching of low-carbon energy with consumption. &#13;
&#13;
This study employs GenX, an open-source capacity expansion model, to simulate an electricity market with two classes of buyers. Buyers in one class commit to reduce the carbon intensity of their electricity procurement by some amount, while buyers in the other class procure electricity at minimum cost without any regard to carbon emissions. This setup allows for a detailed examination of how different levels of ambition in voluntary hourly low-carbon commitments influence the electricity system and investment strategies. The study tests both a simpler model without storage and demand-response capabilities and a more complex model that incorporates these elements to assess their impact on meeting hourly clean energy targets.&#13;
&#13;
Our findings suggest that at low to moderate ambition levels of hourly low-carbon electricity procurement, the buyers with voluntary commitments can primarily "reshuffle" already-built low-carbon generation without incentivizing new clean capacity additions or achieving measurable reductions in system-wide emissions. Significant shifts in generation investments and decreases in total carbon emissions are observed only when commitments exceed a critical threshold, ranging from approximately 70% to 96% depending on system characteristics, as reflected in the different model set-ups. Even then, cost-minimizing behavior in voluntary procurement can distort investment, spurring excessive wind and solar builds that exceed what a least‑cost, socially-optimal zero‑carbon portfolio would require.&#13;
&#13;
In conclusion, for voluntary 24/7 procurement to cut emissions materially—and avoid misallocating capital—either ambition must be extremely high or participation must broaden enough to share costs and benefits. Otherwise, committed buyers bear steep costs, non‑participants enjoy spill‑over gains, and the system drifts toward a sub‑optimal technology mix.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clarifying Decision Making Processes: Tools for Interdependency Modeling</title>
<link href="https://hdl.handle.net/1721.1/162317" rel="alternate"/>
<author>
<name>Baker, Ellie F.</name>
</author>
<id>https://hdl.handle.net/1721.1/162317</id>
<updated>2025-08-12T03:07:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Clarifying Decision Making Processes: Tools for Interdependency Modeling
Baker, Ellie F.
Tools for problem specification in AI decision making are underdeveloped at present. I propose two new tools for this purpose: first, a model of AI decision making that supports problem identification and mitigation; second, a Bill of Assumptions for Data Production. Data is an important component of AI decision-making systems, and data is necessarily produced by making a series of assumptions. My Bill of Assumptions for Data Production is a new approach to communicating these assumptions that facilitates collaboration, data transparency, and reduction of harmful bias. I illustrate this new approach by developing a dataset that estimates the distribution of government education spending in the US across income deciles. My dataset informs existing Distributional National Accounts (DINA), which are a primary measure of income inequality in the US (Piketty et al., 2018). My estimate shows government education spending is more progressive than assumed in current DINA. Furthermore, I show that removing federal education funding to postsecondary institutions would produce substantial harm.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transforming geospatial textual data into narrative storytelling visualization</title>
<link href="https://hdl.handle.net/1721.1/162315" rel="alternate"/>
<author>
<name>Ma, Ruixian</name>
</author>
<id>https://hdl.handle.net/1721.1/162315</id>
<updated>2025-08-12T03:06:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transforming geospatial textual data into narrative storytelling visualization
Ma, Ruixian
Current large language models (LLMs) often struggle to integrate geospatial data into dynamic, interactive visualizations, relying instead on text-based outputs. This limitation hinders the full potential of geospatial data to convey complex information through narrative-driven communication, making it difficult for users to interpret the data easily. Meanwhile, existing data visualization tools typically depend on static dashboards and rigid scientific formats, which have a steep learning curve and lack engagement through narrative elements. Audiences, however, are increasingly drawn to story-driven presentations, as seen in platforms pioneered by the MIT Senseable City Lab, and widely popularized by The New York Times and the Washington Post, which use narrative data visualization formats to attract and immerse readers. This gap between the capabilities of current LLM-based tools and users’ preferences presents a unique opportunity to develop a narrative-based geospatial visualization tool that meets these needs. This tool could transform how we communicate spatial data, particularly in fields such as journalism, travel planning, and urban planning, where the ability to convey complex patterns in an engaging manner is essential.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Post-Pandemic Urban Activity &amp; Mobility Regime: Implications for Adaptation and Future Planning of Cities and Public Transit Systems</title>
<link href="https://hdl.handle.net/1721.1/162314" rel="alternate"/>
<author>
<name>Leong, Chee Weng Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/162314</id>
<updated>2025-08-12T03:06:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Quantifying the Post-Pandemic Urban Activity &amp; Mobility Regime: Implications for Adaptation and Future Planning of Cities and Public Transit Systems
Leong, Chee Weng Michael
Between 2019 and 2022, a pattern break during the COVID-19 pandemic introduced consequential changes to the trajectory of urban activity and mobility patterns. This thesis advances both theoretical and practical understandings of this evolving post-pandemic regime of activity and mobility, as well as its implications for the future of cities and public transit systems, using high-resolution location-based services data and a case study within the Washington, DC metropolitan area. First, a custom analysis framework is developed where geographical units - subcenters and neighborhoods - are designed to provide insight at an interpretable scale that corresponds to policy and business decision making. Second, a custom suite of twelve mobility metrics is curated to distill the applicability of post-pandemic changes in travel patterns to business problems (site selection, network planning, and operations planning) and societal outcomes (social fabric, quality of life, and environmental sustainability). To complement spatial analysis, these metrics are also regressed on socio-economic attributes to provide greater explanatory power. Lastly, key trends in post-pandemic activity and mobility are distilled into eight mega-trends, and their implications for the adaptation of public transportation systems and future urban development are discussed, including complexity from divergent definitions of success among different stakeholders.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Small Stores, Big Obstacles: Understanding Constraints and Opportunities for Micro-retail Firms</title>
<link href="https://hdl.handle.net/1721.1/162313" rel="alternate"/>
<author>
<name>Cervantes Gil, Sergio Yael</name>
</author>
<id>https://hdl.handle.net/1721.1/162313</id>
<updated>2025-08-12T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Small Stores, Big Obstacles: Understanding Constraints and Opportunities for Micro-retail Firms
Cervantes Gil, Sergio Yael
Micro and small enterprises (MSEs), particularly informal micro-retailers known as nanostores, play a vital role in developing economies but remain largely underserved by traditional financial institutions and overlooked in economic policy. In Mexico, nanostores account for more than 95% of businesses and over 10% of national employment, yet face high closure rates, low productivity, and limited access to formal credit. This thesis asks: What structural and contextual factors determine the survival and performance of nanostores, and how can policy better support high-potential firms within this segment? To answer this, the study constructs a longitudinal panel of nanostores using microdata from the Mexican Economic Census (2009, 2014, and 2019), and combines it with municipality-level contextual data including crime, infrastructure, unemployment, electricity costs, and business regulations. It applies survival models to estimate firm closure dynamics and implements a misallocation framework to quantify distortions in capital and labor usage. The results reveal that misallocation—particularly of capital—is pervasive and systematically linked to institutional weaknesses and credit access constraints. In response to the limited real-time data available for this sector, the thesis proposes the LIFT Performance Index, developed by the MIT Low-Income Firms Transformation Lab (MIT LIFT Lab), as a diffusion-based tool for monitoring micro-retailers’ business sentiments using structured operational surveys. A pilot implementation in Argentina demonstrates the index’s potential to generate timely and actionable insights for policymakers and private stakeholders. Overall, this work contributes a novel empirical foundation for understanding heterogeneity within the micro-retail sector and offers a scalable framework for designing targeted, data-driven interventions to support inclusive economic development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maximizing Flexibility and Efficacy of Undersea Wireless Power Transfer Systems</title>
<link href="https://hdl.handle.net/1721.1/162309" rel="alternate"/>
<author>
<name>Gess, Derek</name>
</author>
<id>https://hdl.handle.net/1721.1/162309</id>
<updated>2025-08-12T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Maximizing Flexibility and Efficacy of Undersea Wireless Power Transfer Systems
Gess, Derek
Autonomous underwater vehicles (AUVs) are an increasingly essential tool for ocean-based applications, whether scientific, economic, or military. To advance the capabilities of AUVs, it is crucial to improve the duration and range of their missions. One proposed way to achieve this is with remote undersea wireless power transfer (WPT) systems that allow AUV charging from remote areas of the ocean floor. While there has been significant research in WPT system design, these projects often tailor the design specifications toward a specific AUV shape, size, or power requirement. These point designs have wildly different power outputs, efficiencies, coupling coefficients, sizes, and more, making it difficult to understand how the design parameters affect each of these properties. This paper aims to address this knowledge gap in current undersea WPT systems by designing an equivalent circuit framework for a WPT system with a targeted power output of ~1 kW to show how design parameters such as input voltage, coil size, transfer gap, coupling coefficient, and load resistance affect the power output and efficiency of the charger. Furthermore, the effects of misalignment in vertical and lateral directions for two separate compensation networks – series-series (SS) and series-parallel (SP) – are compared to determine which compensation network would perform best under specified circumstances. The paper then addresses the losses associated with a conductive environment by coupling the circuit model with an electric field model in seawater. The impact of undersea losses on system metrics is quantified, showing a 3% decrease in efficiency compared to operation in air. Finally, the study investigates the use of magnetic cores in WPT systems for their EM shielding and field-shaping characteristics. A design methodology is introduced to rank material properties based on the desired system performance characteristics. 
Suggested materials are then chosen according to this ranking and tested using the models derived in the study. By mapping both electrical and magnetic-core design spaces in a conductive seawater environment, this thesis delivers a unified methodology for designing scalable, efficient undersea wireless chargers.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City as Seed: The Urban Resonance Field and the Case for Sonic Awareness in Ecological Renewal</title>
<link href="https://hdl.handle.net/1721.1/162306" rel="alternate"/>
<author>
<name>Navarro, Cadine</name>
</author>
<id>https://hdl.handle.net/1721.1/162306</id>
<updated>2025-08-12T03:06:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">City as Seed: The Urban Resonance Field and the Case for Sonic Awareness in Ecological Renewal
Navarro, Cadine
Seeds, the “abominable mystery” (Darwin, 1897), hold our past and potential future. They also hold sound. Much like cities, they are sites of growth, transformation, and resilience. This thesis draws parallels between laboratory research on the sensing capacities of seeds and embodied experiences of sensing within urban landscapes, exploring how living systems interact with sound and vibration. Through both scientific and poetic approaches, it examines how seeds respond to sonic environments and how this sensitivity can inform human engagement with acoustics in the urban context. The investigation of intangible forces, vibration, resonance, and sound reveals a shared responsiveness between seeds and cities, documented through graphs, sound spectra, and reflective narratives that bridge science and art. Focusing on sound as a strategic lens, this work brings attention to often-overlooked sensory domains, inspiring a more ecologically and socially responsive urbanism. Ultimately, it advocates for practices of deeper listening as a method to engage openly and imaginatively with human and nonhuman worlds, and to reimagine urban environments as spaces of attunement, dialogue, and co-existence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Vulnerabilities of AI in Latin America</title>
<link href="https://hdl.handle.net/1721.1/162299" rel="alternate"/>
<author>
<name>Dobles Camargo, Claudia</name>
</author>
<id>https://hdl.handle.net/1721.1/162299</id>
<updated>2025-08-12T03:06:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Critical Vulnerabilities of AI in Latin America
Dobles Camargo, Claudia
Artificial Intelligence (AI) is rapidly reshaping societies, economies, and governance systems worldwide. While it offers tools for addressing critical challenges—such as climate change, health care, and educational inequity—it also risks deepening historical inequalities, undermining democratic institutions, and exacerbating global technological dependencies if not ethically governed. Latin America faces unique vulnerabilities in the development and use of AI that remain underexplored in existing scholarship—such as informal data work or the territorial principle and its implications for AI law enforcement. This study investigates AI's critical vulnerabilities within the Latin American context in order to identify regional and national policies that advance an inclusive, strategic, and ethical approach to developing and deploying AI systems in Latin America. The study seeks to answer this question through a cross-analysis and comparative case study of six countries (Brazil, Chile, México, Costa Rica, El Salvador, and Honduras), drawing on existing and recent global and regional benchmarks, including the Stanford HAI AI Index (2024), UNESCO’s Recommendation on the Ethics of AI (2021), and the Latin American AI Index (ILIA 2024). The countries were selected to represent a broad range of AI readiness levels, focusing on mapping institutional, regulatory, and socio-political contexts as well as metrics and input from relevant sources. The analysis shows structural inequality as the core vulnerability shaping AI’s impact in Latin America, alongside governance gaps, limited regional cooperation, and minimal public participation. The analysis identifies ten critical vulnerabilities—including the use of AI in surveillance, increases in inequality and disinformation, AI use in organized crime, and environmental exploitation—that, if unaddressed, may accelerate democratic erosion and technological dependency. 
Ethical principles are shown to be deeply interconnected and grounded in human rights, yet their implementation remains aspirational. This research underscores a call for action toward regional coordination, inclusive education strategies prioritizing gender policies and rural areas, and aligned industrial policies in the countries of the region. A Latin American context-specific, collective approach ensures that AI serves the public interest, strengthens sovereignty, and supports equitable development in Latin America.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurements on contact potentials of metals</title>
<link href="https://hdl.handle.net/1721.1/162239" rel="alternate"/>
<author>
<name>Zisman, William A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162239</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1928-01-01T00:00:00Z</published>
<summary type="text">Measurements on contact potentials of metals
Zisman, William A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1928; Includes bibliographical references (leaves 59-60).
</summary>
<dc:date>1928-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A department store for the Hudson's Bay Company, Edmonton, Alberta, Canada</title>
<link href="https://hdl.handle.net/1721.1/162238" rel="alternate"/>
<author>
<name>Thrift, Eric W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162238</id>
<updated>2025-08-07T03:07:29Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">A department store for the Hudson's Bay Company, Edmonton, Alberta, Canada
Thrift, Eric W.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Community Retrofit Trust: Incentivizing Deep Energy Retrofits in Massachusetts' Triple Deckers</title>
<link href="https://hdl.handle.net/1721.1/162155" rel="alternate"/>
<author>
<name>Chuttani, Milan</name>
</author>
<id>https://hdl.handle.net/1721.1/162155</id>
<updated>2025-07-30T03:06:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Community Retrofit Trust: Incentivizing Deep Energy Retrofits in Massachusetts' Triple Deckers
Chuttani, Milan
To meet its 2050 net-zero carbon emissions goals, Massachusetts must rapidly retrofit its aging stock of three-story multi-family homes, also known as “Triple Deckers.” However, high upfront capital costs, disparities between subsidized gas and electric energy rates, complex eligibility criteria, and misaligned incentives for landlords and renters constrain the widespread adoption of deep energy retrofits (DERs) in small multi-family homes. &#13;
&#13;
Drawing on energy democracy and reparative planning theory, this thesis reframes Triple Decker retrofits as a pathway to social and spatial transformation that empowers residents through cooperative participatory processes. This project proposes a practical framework for a “Community Retrofit Trust” which uses systems of distributed energy savings, community ownership of DER assets, and cooperative governance to ensure tenants, building owners, and neighbors in environmental justice communities share benefits from DERs while maintaining rental affordability. A proposed values-based decision-making process also helps community cooperatives adapt the Retrofit Trust’s framework to their unique social contexts.&#13;
&#13;
Descriptive case studies of two community solar initiatives illustrate how cooperative approaches that build trust, bundle projects and local expertise, and expand opportunities for participation can efficiently distribute energy benefits across a community while increasing investment and lowering costs. A feasibility analysis of a Community Retrofit Trust in Boston examines the strengths, challenges, and contradictions of incentivizing Triple Decker DERs through a cooperative approach.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rebuilding Civic Infrastructure for Equitable Development: Intermediary Solutions for Transforming Resource-Extractive Economies in Rural Southwest Arkansas</title>
<link href="https://hdl.handle.net/1721.1/162154" rel="alternate"/>
<author>
<name>Bradford, Mo</name>
</author>
<id>https://hdl.handle.net/1721.1/162154</id>
<updated>2025-07-30T03:08:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Rebuilding Civic Infrastructure for Equitable Development: Intermediary Solutions for Transforming Resource-Extractive Economies in Rural Southwest Arkansas
Bradford, Mo
Southwest Arkansas, a rural and mineral-rich region, is entering a new wave of resource-driven economic activity fueled by lithium extraction. While local leaders are pushing for rapid industry development to counter long-standing socioeconomic decline, this research asks a critical question: Can these pro-industry strategies truly deliver equitable and lasting public benefits, or will they repeat historical patterns of extraction that have sidelined local communities?&#13;
This study critiques neoliberal development schemes and neoconservative, sectionalist ideologies that deprioritize equity-driven agendas and prioritize deregulation and private sector efficiency, arguing that such approaches often weaken institutional civic organizing and reduce responsiveness to public needs. As an alternative, it proposes civic infrastructure as a strategic solution, one that strengthens the networks of community institutions, local governments, and intermediary organizations essential for advancing equity in extractive economies.&#13;
The research further explores the role of intermediary organizations in bridging institutional and capacity gaps in Southwest Arkansas. These organizations can support under-resourced communities by providing convening power, technical assistance, and financial resources. &#13;
Through policy analysis, case studies, and field interviews, this work examines how civic infrastructure and intermediary support can work together to shift economic development toward more just and inclusive outcomes in resource-extractive economies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Girls Just Wanna Have Fun, How Do They Go? A Mixed Methods Study of Nighttime Leisure Travel in Boston</title>
<link href="https://hdl.handle.net/1721.1/162153" rel="alternate"/>
<author>
<name>Dy, Raelene Ina Bianchi Louise Mendez</name>
</author>
<id>https://hdl.handle.net/1721.1/162153</id>
<updated>2025-07-30T03:08:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Girls Just Wanna Have Fun, How Do They Go? A Mixed Methods Study of Nighttime Leisure Travel in Boston
Dy, Raelene Ina Bianchi Louise Mendez
When we think of urban living and its depictions in popular culture, many shows and movies depict characters in leisure activities, such as meeting friends, going on dates or pursuing hobbies, often at night. Despite the prominence of the night as a key theme in depictions of urban leisure, transportation planners have rarely focused on nighttime leisure travel as an area of intensive study beyond the lens of safety. This thesis investigates the nighttime leisure travel patterns of residents and students in Greater Boston through statistical analysis and data sculpture with a focus on how these vary by gender. To create a baseline understanding of travel patterns, I focused on the Boston Metropolitan Area and used the most recent version of the Massachusetts Department of Transportation’s Household Travel Survey from 2011. I limited my analysis to a fixed set of leisure activities during a fixed nighttime period to understand associated travel behaviors. I also implemented a data sculpture method to investigate how a subset of MIT students made decisions around their travel modes. I found that women travelled differently from men, in that they spent more time walking and were more likely to be passengers in a car. In contrast, men were more likely to be behind the wheel and travel further. Both men and women showed a preference for walking over all other modes when leaving an activity.  Together, these findings indicate that nighttime leisure travel is not a simple extension of daytime patterns. To better design nighttime transportation that accommodates gender differences, planners need to respond to the special qualities of the city after dark.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Economic Reevaluation of Navi Mumbai and the Indian Satellite City</title>
<link href="https://hdl.handle.net/1721.1/162150" rel="alternate"/>
<author>
<name>Thomas, Archer</name>
</author>
<id>https://hdl.handle.net/1721.1/162150</id>
<updated>2025-07-30T03:08:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Economic Reevaluation of Navi Mumbai and the Indian Satellite City
Thomas, Archer
Navi Mumbai, a municipality in the Mumbai Metropolitan Region, is the largest satellite city project in India. Nevertheless, it has been seen within the planning discipline as underperforming its original ambitions. Drawing upon the goals enumerated in the city’s original development plan, this thesis proposes a series of quantitative metrics corresponding to said goals and then utilizes data drawn from surveys, censuses, official reports, financial statements, and remote sensing datasets to propose an updated evaluation of Navi Mumbai’s performance over the past half-century. This thesis argues that, contrary to earlier perceptions, Navi Mumbai has largely succeeded in fulfilling its ambitions, and that this can be attributed to shifting suburbanization patterns in India, the prescient decision to prioritize office-based service industries over manufacturing, and the ongoing reconfiguration of transportation and logistics networks within the Mumbai region. Reflecting on the history of urban and economic planning in India, this thesis then suggests the implications of Navi Mumbai’s apparent success for satellite city projects in India and across the Global South, focusing on questions of financing and governance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Private Sector in Public Transit: Evaluating Early US Experience in P3s</title>
<link href="https://hdl.handle.net/1721.1/162149" rel="alternate"/>
<author>
<name>Farabow, Web</name>
</author>
<id>https://hdl.handle.net/1721.1/162149</id>
<updated>2025-07-30T03:08:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Private Sector in Public Transit: Evaluating Early US Experience in P3s
Farabow, Web
Problems in US public transit are well documented: transit providers struggle to develop new infrastructure, face high project costs and long implementation timelines, pursue designs that prioritize ease of delivery over value to the public, and struggle to sustain their operations. In response to these challenges, Public-Private Partnerships (“P3s” or “PPPs”) have been promoted as a way to deliver more infrastructure on faster timelines at lower cost and higher quality. As P3s have been increasingly considered for major transit projects, this thesis investigates their ability to deliver on promotional claims, and their ability to address key challenges in American public transportation.&#13;
&#13;
First, the thesis contextualizes contemporary P3s within a history of private sector involvement in US public transit. In addition to detailing how existing infrastructure came to be, this history intends to sharpen an understanding of contemporary P3s by considering how forms of private involvement have changed over time. It proceeds to develop detailed case studies for three major infrastructure projects that have proceeded under a P3 model: RTD’s Eagle P3 in Denver, Maryland MTA’s Purple Line in Southern Maryland, and Los Angeles Metro’s Sepulveda Transit Corridor Project. Combining historic research and contemporary case study analysis, the thesis seeks to understand the circumstances under which contemporary P3s have emerged, and to draw lessons from early experience.&#13;
&#13;
American transit providers have considered P3s for a variety of reasons, but have been primarily motivated by limited administrative and financial capacity, and by a perceived ability of private firms to deliver projects on faster timelines. Early P3s have facilitated provision, enabling projects that otherwise may not have been built, and have demonstrated their potential to ensure sustainable operations over long-term contract periods. But P3s have achieved mixed results in accelerating project timelines, and their ability to reduce lifecycle project costs remains unclear. While P3s seek to increase private involvement in transit provision, the model places a higher burden on upfront public planning compared to conventional delivery strategies. Public infrastructure owners can design P3s to leverage private sector resources and capacity, but the model comes with tradeoffs that should be carefully weighed against likely benefits. Ultimately, P3s can address a number of acute challenges in American public transit, but are unlikely to provide a workaround to fundamental political and financial challenges that limit transit development more broadly.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reparative Preservation through Immersive 3D Documentation: Cultural Memory, Spatial Justice, and Gullah Geechee Futures on Daufuskie Island</title>
<link href="https://hdl.handle.net/1721.1/162147" rel="alternate"/>
<author>
<name>Jones, Wil</name>
</author>
<id>https://hdl.handle.net/1721.1/162147</id>
<updated>2025-07-30T03:08:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reparative Preservation through Immersive 3D Documentation: Cultural Memory, Spatial Justice, and Gullah Geechee Futures on Daufuskie Island
Jones, Wil
This thesis advances a reparative framework for cultural preservation by combining immersive documentation with co-authored digital storytelling to support Black spatial memory and community sovereignty. Grounded in fieldwork on Daufuskie Island, South Carolina—a historic Gullah Geechee community confronting dispossession and cultural enclosure—the project co-creates Daufuskie3D (https://daufuskie3d.org/), an interactive website that presents annotated 3D scans, oral histories, ambient videos, and symbolic interface design rooted in Gullah epistemologies.&#13;
&#13;
It is guided by two research questions: How can immersive documentation support reparative preservation for communities at risk of spatial erasure? And what frameworks—technical, ethical, and political—ensure digital practices reflect Black cultural values, descendant authorship, and community control? Drawing from Black geographies, wake work, vernacular cartography, and speculative design, the thesis introduces a conceptual distinction between visualization and analysis tools to examine how different modes of spatial capture shape visibility and authority. The project finds that immersive tools, when grounded in ethical design and descendant authorship, can function not simply as representational media but as reparative infrastructure—supporting visibility, stewardship, and spatial return in communities confronting erasure.&#13;
&#13;
The Daufuskie3D website serves as both platform and method. Its spatial interface draws on Gullah visual language, including Underground Railroad quilt codes and spiritual symbolism, while its non-linear navigation resists conventional heritage taxonomies. Rather than flattening culture into content, the site embraces ambiguity, withheld spatial detail, and narrative restraint as ethical design principles. Developed in partnership with Ms. Sallie Ann Robinson, a sixth-generation Gullah cultural steward, the project repositions preservation as participatory, situated, and future-facing. It offers Daufuskie3D as both a working prototype and a methodological contribution toward reparative immersive practice—centering digital preservation as a strategy of memory, sovereignty, and cultural regeneration within the Black diaspora.&#13;
&#13;
Keywords: Immersive Documentation, 3D Scanning / LiDAR / Photogrammetry, Cultural Preservation, Gullah Geechee, Daufuskie Island, Reparative Preservation, Black Geographies, Digital Heritage, Speculative Design, Counter Cartography, Counterpublic, Spatial Justice, Oral History, Afrofuturism, Digital Public, Digital/Web Archive, Cultural Stewardship, Ethical Design, Participatory Design, Underground Railroad, Return
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a political economy of the power sector: green capitalism, eco-socialism, and co-operative power in decarbonized climate policy</title>
<link href="https://hdl.handle.net/1721.1/162146" rel="alternate"/>
<author>
<name>Jin, Brooke</name>
</author>
<id>https://hdl.handle.net/1721.1/162146</id>
<updated>2025-07-30T03:08:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward a political economy of the power sector: green capitalism, eco-socialism, and co-operative power in decarbonized climate policy
Jin, Brooke
The political economy of the power sector has been characterized by a putative transition from fossil capitalism to green capitalism in an attempt to mitigate the worst effects of anthropogenic climate change on nature and society. In recent years the rise of green industrial policy, such as the passage of the Inflation Reduction Act of 2022, has sought to stimulate domestic economic development of green-technology projects and implement protectionist trade policies with the normative intent of protecting the geopolitical hegemony of U.S. industry. Yet the objectives of such industrial policies, which function less to reduce carbon emissions than to increase resource- and carbon-intensive consumption patterns, run antithetical to putative state objectives of the decarbonization of the power grid and industrial operations, and in fact green capitalism does not exist without the continued influence of fossil capital.&#13;
In this thesis I look to Marxist theories of the state, capital, labor, and nature to illustrate the crises of capitalism that have been occurring due to the exponential increase in power demand by data centers and large technology companies. In reshaping the governance of power markets, electricity generation, and transmission and distribution infrastructure through this increase in demand, called load growth, I show the illusion of sustainability under a green-capitalist political economy that purports to advance decarbonization goals, yet which in actuality facilitates conditions for the centralization and monopolization of private capital, as well as the continued destruction of nature and exploitation of workers. However, this crisis of load growth and the issue of governance that it raises open a window for experimentation into new state systems, socialized modes of production, and labor and environmental solidarity in the creation of a new climate policy: one that prioritizes equity, welfare, ecological preservation, and a truly decarbonized society. I propose a socialization of the power sector to increase community autonomy over their energy needs and to begin to dismantle the technocratic influence of fossil-fuel and large technology companies over electricity generation and access.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization at the Neighborhood Scale: &#13;
Challenges, Learnings and Opportunities in an Emerging Model</title>
<link href="https://hdl.handle.net/1721.1/162145" rel="alternate"/>
<author>
<name>Cina-Sklar, Zoë</name>
</author>
<id>https://hdl.handle.net/1721.1/162145</id>
<updated>2025-07-30T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization at the Neighborhood Scale: &#13;
Challenges, Learnings and Opportunities in an Emerging Model
Cina-Sklar, Zoë
Decarbonizing residential buildings in the United States is critical for reaching climate goals and has significant public health and energy justice benefits if accessible to all. To date, building electrification has been individual-level and market-driven, with some financial incentives at the state and federal level. This model is generally inaccessible to low-income homeowners and renters who are unable to afford the upfront costs of building improvements and new electric appliances. Neighborhood-scale building decarbonization has been proposed as an alternative in which new developments would be built all-electric or existing buildings would be electrified at the block or neighborhood scale. In the latter use case, neighborhood-scale building decarbonization is often tied explicitly to decommissioning gas lines. Specifically, proponents posit that these projects could be funded through avoided gas line repair and replacement costs. Investor-owned utilities are seen by some experts in the space as key to the success of neighborhood-scale building decarbonization because of their financing capabilities and existing role in providing heating and/or electric service to customers. In recent years, a number of state policymakers have passed legislation approving utility-funded neighborhood-scale building decarbonization and state utility commissions have promulgated regulations approving cost recovery for these projects. Utilizing desk research and informant interviews, this paper analyzes what has enabled and hindered existing utility-funded neighborhood-scale building decarbonization pilot projects in California, Massachusetts, and New York. I diagnose strong and specific climate goals, the passage of enabling legislation, an engaged state utility commission, and strong advocacy ecosystems as key factors for initiating neighborhood-scale pilot projects. 
Through informant interviews, I identify costs, financing, community buy-in and planning as central determinants for the success of pilot projects and the future of the model. I close by offering recommendations and outstanding research areas for planners interested in pursuing future neighborhood-scale building decarbonization projects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can planning, a tool for colonization, be decolonized?&#13;
MIT’s funding at the expense of Indigenous Peoples through the Morrill Act</title>
<link href="https://hdl.handle.net/1721.1/162144" rel="alternate"/>
<author>
<name>Barrera Gonzalez, Devora</name>
</author>
<id>https://hdl.handle.net/1721.1/162144</id>
<updated>2025-07-30T03:08:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Can planning, a tool for colonization, be decolonized?&#13;
MIT’s funding at the expense of Indigenous Peoples through the Morrill Act
Barrera Gonzalez, Devora
This thesis questions whether planning, and the activities the profession’s umbrella covers, is beneficial or harmful. The project analyzes the role of planning in the colonization of Turtle Island, showing how it materialized and legitimized the seizure of Indigenous Land through practices like urbanization, enclosure, and the creation of Indian reservations, and through tools like cartography, lawfare, and landscape architecture and design. I argue that there is no such thing as sustainable or beneficial urbanization because urbanization equals death; that planning is inherently harmful because it was born as a tool of colonization; and that there is no way to decolonize the profession, given that the profession upholds the current land system. The only solution to reverse and undo the harm done by planning and urbanization, I argue, is to give Land Back to Indigenous Peoples. In building this argument, I walk through the narrative constructed to dispossess land, the concept of imaginary geography, how planning enabled and legitimized different means of land dispossession, and finally, the modification of land (urbanization). A chapter is dedicated to a closer look at one piece of lawfare in particular, the Morrill Act, revealing the history of the foundation of MIT at the expense of Indigenous Peoples and the role that universities play in the maintenance and strengthening of the systems of oppression in place. Using that information to answer the calls for decolonization of the profession, this thesis underscores that, because planning was born as a tool of colonization, the profession cannot be decolonized, and it demands Land Back as the only solution. 
The thesis presents information on two parcels belonging to the Confederated Tribes of Coos, Lower Umpqua, and Siuslaw Indians, located in the state of Oregon, that were seized and, through the Morrill Act, resold with the proceeds benefitting MIT, and it calls for the restitution of the parcels and giving Land Back.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equity and Climate Resilience in Bogotá's Public Space Policy: A Critical Policy Review</title>
<link href="https://hdl.handle.net/1721.1/162143" rel="alternate"/>
<author>
<name>Duque Añez, Silvia</name>
</author>
<id>https://hdl.handle.net/1721.1/162143</id>
<updated>2025-07-30T03:08:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Equity and Climate Resilience in Bogotá's Public Space Policy: A Critical Policy Review
Duque Añez, Silvia
In Bogotá, where long-standing spatial and social inequalities intersect with growing climate risks, public space policy holds the potential to either reinforce exclusion or promote resilience and justice. Decisions about parks, plazas, and green corridors are not neutral; they reflect political priorities, embedded values, and power dynamics. This thesis asks: To what extent, and in what ways, does Bogotá’s public space policy framework incorporate criteria of equity and climate resilience? Through this question, the research examines how policies define and implement these concepts, what types of interventions they promote, and what limitations may emerge.&#13;
While prior research has emphasized the importance of inclusive and adaptive public spaces, there is limited analysis of how these principles are embedded in policy instruments in Latin American cities. Addressing this gap, this thesis develops an analytical framework informed by literature on urban environmental justice and climate adaptation. This framework serves as both an evaluative tool and a resource for policymakers seeking to move beyond vague commitments and toward actionable pathways for equity and climate resilience. &#13;
The framework is used to analyze two key policy instruments: the District Public Space Policy (Política Pública Distrital de Espacio Público 2019-2038) and the Master Plan (Plan de Ordenamiento Territorial: Bogotá Reverdece 2022-2035). The evaluation reveals that both instruments perform well against the framework’s criteria, reflecting a genuine political effort to prioritize these issues. However, the findings also show that narrow or inconsistent interpretations of equity and climate resilience can lead to unintended consequences, and that significant implementation challenges remain. By grounding its analysis in a Global South context, this thesis contributes to international conversations on urban sustainability, offering both a critical lens and a practical tool. Ultimately, this research advocates for a shift in public space governance, one that treats equity and resilience not as aspirational ideals, but as measurable, structural commitments to a more just and climate-ready urban future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interest Group Politics in U.S. "Social Housing" Experiments</title>
<link href="https://hdl.handle.net/1721.1/162142" rel="alternate"/>
<author>
<name>Davidson, Zak</name>
</author>
<id>https://hdl.handle.net/1721.1/162142</id>
<updated>2025-07-30T03:08:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interest Group Politics in U.S. "Social Housing" Experiments
Davidson, Zak
The rising cost of housing has renewed interest in public sector-led models of mixed-income housing production. Advocates, local governments, and state lawmakers are exploring strategies to involve the public sector more directly in the residential development process by capitalizing revolving loan funds, leveraging public land, and creating new public authorities. While a universal definition for “social housing” remains elusive, most policymakers and supporters agree that social housing is permanently affordable for economically and racially diverse households and includes elements of resident self-governance. This research analyzes how key interest groups—including affordable housing developers, tenant advocates, labor unions, market-rate developers, and pro-housing coalitions—shape and respond to emerging social housing initiatives. Drawing on interviews and case studies of Seattle, Montgomery County (MD), California, New York, Atlanta, and Chattanooga between 2019 and 2025, this thesis examines how political context, institutional constraints, and coalition dynamics influence how social housing proposals are framed, negotiated, and either supported or resisted by key stakeholders. Four key themes emerge from these case studies. First, existing affordable housing developers often interpret new mixed-income, permanently affordable proposals as competition, particularly amidst resource scarcity and institutional constraints. This constitutes a substantial roadblock for the social housing movement. Second, proponents’ theory of change, initiative branding, and their ability to participate in multi-issue bargaining notably impact how affordable housing interest groups respond. Third, private sector actors’ support appears dependent on the public sector’s willingness to partner and how proponents describe the problem they are solving. 
Fourth, while collaborations around social housing may trigger fault lines between YIMBYs and tenant justice groups regarding revenue neutrality and the value of new market-rate supply, social housing represents an opportunity for bridge-building and collaboration across the housing movements. As interest in these models grows, this research offers practical insights for advocates and policymakers seeking to design locally tailored, politically viable approaches to public-led, mixed-income housing production.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shifting Spaces: Housing and Urban Change in Kabul</title>
<link href="https://hdl.handle.net/1721.1/162141" rel="alternate"/>
<author>
<name>Ghanizada, Bibi Khadija</name>
</author>
<id>https://hdl.handle.net/1721.1/162141</id>
<updated>2025-07-30T03:07:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Shifting Spaces: Housing and Urban Change in Kabul
Ghanizada, Bibi Khadija
This thesis explores the evolution of Kabul’s housing landscape with a focus on the emergence of Shahraks (planned townships) after 2001. Drawing on historical research, four case studies (Aria City, Khwaja Rawash Township, Khushal Khan Mena Blocks, and Omid-e-Sabz Township), and interviews with residents and experts, it analyzes how Shahraks have reshaped urban development in a rapidly growing city. Inspired by Soviet-era Mikrorayons, Shahraks introduced formal infrastructure, legal recognition, modern amenities, and opportunities for new economic activity. They helped expand Kabul’s formal housing stock and created pockets of urban community identity. However, the research finds that Shahraks also deepen spatial and socioeconomic inequalities. Largely built through private investment and targeting wealthier residents and civil servants, they remain inaccessible to the majority of Kabul’s population. Many Shahraks were developed on contested or illegally grabbed land, raising concerns about tenure security and governance. Despite improved infrastructure compared to informal settlements, Shahraks often suffer from poor climate responsiveness, environmental degradation, limited green spaces, and energy-intensive designs. Their weak integration with Kabul’s broader urban fabric further exacerbates spatial fragmentation. Looking ahead, the thesis argues that Kabul must learn from both the partial successes and the profound shortcomings of Shahraks as it plans future projects like Kabul New City. Their model is not inherently unsustainable or inaccessible, but without deliberate reforms, Kabul risks reproducing a cycle in which contemporary urban development becomes synonymous with exclusion, fragmentation, and missed opportunity. Key recommendations include prioritizing affordable and expandable housing models, enforcing transparent land governance, promoting climate-adaptive design, strengthening connections between housing and employment centers, and carefully structuring public-private partnerships to align private investment with public goals. The challenge is not simply to build new cities, but to build a more inclusive, adaptable, and sustainable urban future for all Kabulis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Public Space Goes Digital: Rethinking Urban Planning with Insights from Letra Ese</title>
<link href="https://hdl.handle.net/1721.1/162137" rel="alternate"/>
<author>
<name>Chiappero, Sofia Belen</name>
</author>
<id>https://hdl.handle.net/1721.1/162137</id>
<updated>2025-07-30T03:07:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Public Space Goes Digital: Rethinking Urban Planning with Insights from Letra Ese
Chiappero, Sofia Belen
Digital public spaces have become vital for organizing, belonging, and community-building, particularly for marginalized groups such as the LGBTQ+ community, who are increasingly excluded from both physical and online public spaces. Yet, the design of these digital spaces is largely shaped by profit-driven interests rather than the needs of the communities that rely on them. This thesis addresses this gap by asking: What if we treated digital spaces with the same care and intention we demand from our physical public spaces?&#13;
&#13;
To explore this question, the thesis brings together frameworks from urban planning, LGBTQ+ advocacy, and digital design. It proposes a reframing of “urban planning” to include “digital urban planning,” grounded in principles of rights, care, safety, and collective memory. Through a feminist urbanist lens and systems thinking, the work challenges the separation between physical and digital cities.&#13;
&#13;
Methodologically, the project moves beyond traditional research approaches, incorporating Conversational Design and the Relational User Framework to co-create knowledge with activists. The resulting contributions include both a prototype and a roadmap for a digital public space that supports and amplifies LGBTQ+ advocacy, not as a technical fix, but as a speculative and participatory framework for reimagining digital public infrastructure.&#13;
&#13;
This research is grounded in a case study of Letra Ese, an activist-led LGBTQ+ organization in Mexico. The case illustrates how such groups navigate systemic neglect while leveraging technology to document violence and sustain community. Ultimately, the thesis offers a starting point for rethinking the design of digital public spaces and argues for the inclusion of digital environments within the domain of urban planning, recognizing that for many, especially marginalized communities, much of life is already lived online.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flooding as Remembering: A Trickster’s Guide to Fugitive Ecology, Revolutionary Recall, and Speculative Worldbuilding Beyond the Plantationocene</title>
<link href="https://hdl.handle.net/1721.1/162136" rel="alternate"/>
<author>
<name>Delaney, Simone Hope</name>
</author>
<id>https://hdl.handle.net/1721.1/162136</id>
<updated>2025-07-30T03:07:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Flooding as Remembering: A Trickster’s Guide to Fugitive Ecology, Revolutionary Recall, and Speculative Worldbuilding Beyond the Plantationocene
Delaney, Simone Hope
Since the early days of conquest, Black, Indigenous, and Afro-Indigenous peoples of the Lower Mississippi River Delta have survived recurrent processes of settler colonial un-worlding by re-worlding sovereign lifeways rooted in reciprocal relationships to other colonized peoples and the environment. Un-worlding occurred to Black and Indigenous peoples through dispossession of land, capture into enslavement, and genocide. This process was intertwined with the un-worlding of the landscape’s agency, which was captured and enclosed into property by arresting waterways’ movements through constrictive engineering using coercive labor. In the Bas de Fleuve swamps (today known as the Louisiana Central Wetlands), self-emancipated fugitives who had escaped enslavement formed autonomous inner worlds in the unenclosed territories between the Mississippi River and Lake Borgne. Known as Maroons, they were led by Juan San Malò and forged interdependent networks that extended to Indigenous settlements, enslaved Africans on plantations, and free Blacks in New Orleans. By living outside eurosettler logics of property and re-establishing reciprocity with the more-than-human web of life, they demonstrated that the liberation of captive people is bound to the liberation of captive landscapes. Their re-worlding was also reminiscent of the pan-African trickster figure: anarchistic heroes who overturn the dominant oppressive world order for more liberatory realities. Today, the destruction of wetlands across Southeast Louisiana means that descendants are facing an un-worlding of the sovereign livelihoods their ancestors re-established generations before. This is due to anthropogenically induced land loss, flooding, storm surge, and saltwater intrusion influenced by extractivist industries. 
Through revolutionary recall, reclaiming the logics of re-worlding established by Juan San Malò’s band of Maroons offers pathways to resist the intensifying threats of climate change that represent afterlives of slavery. Common Ground Relief is one collective that has drawn from Maroon legacies to lead bottom-up disaster response, mutual aid initiatives, and citizen-led wetland restoration. Drawing from creative land reclamation projects led by Utē Petit, Monique Verdin, the Nanih Bvlbancha Builders, and the Descendants Project, a constellation of small, site-specific projects are also presented to demonstrate how revolutionary recall can become a form of speculation for broader land-based liberation in the Lower Mississippi Delta.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Mass Timber Adoption in Greater Boston, Massachusetts: A Practical Study for Local Real Estate Developers</title>
<link href="https://hdl.handle.net/1721.1/162135" rel="alternate"/>
<author>
<name>Cerny, Faith W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162135</id>
<updated>2025-07-30T03:07:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accelerating Mass Timber Adoption in Greater Boston, Massachusetts: A Practical Study for Local Real Estate Developers
Cerny, Faith W.
Today’s real estate development strategy must incorporate decarbonization to mitigate the built environment’s detrimental impact on climate change. Beyond required climate action, developments are increasingly seen as responsible for improving occupant health and wellbeing. Furthermore, industry stakeholders are tasked with efficiently delivering sustainable, high-quality, and affordable housing in dense, urban areas to meet growing demand. As the stakes intensify and the demands of real estate development increase, projects face multiple barriers to implementation. This thesis explores mass timber construction as a viable solution to modern development challenges. While the research content derives from multiple geographies within North America, a particular focus on the relevance and utility for Greater Boston, MA, USA is maintained. The thesis comprises five chapters. Following an introduction, the second chapter provides an overview of mass timber as an evolving building technology, with an emphasis on how and why it is gaining momentum as a viable and preferred alternative to traditional building materials. The chapter also discusses commonly cited drawbacks that delay industry acceptance. The third chapter explores mass timber adoption at multiple scales, including studies of innovative projects that demonstrate achievement of development objectives despite challenges. Guided by insights from interviews, this chapter discusses stakeholders’ current understanding of the material and motivations for its use, as well as perceived feasibility constraints and opportunities for its incorporation and proliferation, with a focus on Greater Boston. The fourth chapter considers methods to accelerate the rate of mass timber adoption, including facilitation of local development strategy. 
The chapter builds on research and interview findings to establish key considerations when evaluating a mass timber project and to propose an analytical framework for real estate developers to holistically assess the value of incorporating the material in their projects. The concluding chapter speculates on the local arc of adoption and the subsequent impacts of widespread mass timber project implementation for the city and region.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Micromobility in New York City: An Examination of Vehicle Type Use and User Behavior in Protected Bicycle Facilities</title>
<link href="https://hdl.handle.net/1721.1/162134" rel="alternate"/>
<author>
<name>Boeri, Jake</name>
</author>
<id>https://hdl.handle.net/1721.1/162134</id>
<updated>2025-07-30T03:06:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding Micromobility in New York City: An Examination of Vehicle Type Use and User Behavior in Protected Bicycle Facilities
Boeri, Jake
A shift towards the use of micromobility vehicles (MMVs), specifically motorized two-wheeled vehicles in urban mobility networks, has gained significant attention over the past decade. Many have commented on a perceived increase in MMV use in New York City (NYC) in particular, a trend that appears to have accelerated in the wake of the COVID-19 pandemic and in response to the expansion of high-quality bicycle facilities across the city. However, the extent to which different types of MMVs are used and related rider behavior is poorly understood, forcing policymakers, planners, elected officials, and community members to develop policies and infrastructure with inadequate information. Through direct observation of 9,629 vehicles across five locations, this thesis provides a degree of ground truth and an initial understanding of the prevalence of different MMV types used in protected bicycle facilities in NYC and related user behavior, including commercial application of these vehicles, helmet use, and passenger presence. The findings of this study point to a surprisingly high use rate of motorized MMVs in protected bicycle facilities in NYC, with motorized vehicles comprising nearly three-quarters (73.96%) of all vehicles observed. E-bikes were the largest class of vehicles observed (63.85%), followed by conventional, non-motorized bicycles (25.76%), e-scooters (6.69%), and mopeds (1.96%). Commercial-use vehicles made up nearly one-quarter (23.20%) of observations. A very small proportion of observations were cargo vehicles (2.89%), indicating their limited use for both personal and commercial purposes. Users were significantly more likely to wear a helmet when using a non-motorized vehicle than a motorized one, with helmet use varying substantially across vehicle classes. Modal split of MMV types, commercial use, and cargo vehicle use varied by both location and time of day, pointing to uneven distribution across the mobility network. 
There were substantial differences between the manual count from this study and the automated bicycle counts generated by the New York City Department of Transportation over the same period, indicating a systematic undercounting of MMV use by the automated count system. In response to these findings, a series of recommendations are provided for how NYC and other cities with both developed and developing MMV networks can promote and guide safe, equitable, and sustainable mode shift as micromobility use expands. These proposals include policy and spatial planning improvements that should be part of a response to widespread MMV adoption and the ongoing transformation of how protected bicycle facilities are used.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing in European Metropolises: supply dynamics and planning frameworks in large Urban Areas of the EU</title>
<link href="https://hdl.handle.net/1721.1/162133" rel="alternate"/>
<author>
<name>Berra Sandin, Mikel</name>
</author>
<id>https://hdl.handle.net/1721.1/162133</id>
<updated>2025-07-30T03:07:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Housing in European Metropolises: supply dynamics and planning frameworks in large Urban Areas of the EU
Berra Sandin, Mikel
Europe’s housing affordability crisis presents significant territorial challenges, particularly as housing demand increasingly spills over from inner cities to surrounding municipalities at the metropolitan scale. This study addresses key policy questions regarding the coordination of housing supply and planning instruments in large urban areas of the European Union. &#13;
Focusing on 23 large Functional Urban Areas (FUAs), the research follows a three-part approach: a quantitative analysis of municipal-level housing production and demographic growth between 2011 and 2021 based on Census data; an analysis of the effects of housing supply on housing prices; and an AI-powered quantitative examination of urban plans at municipal, metropolitan, and regional scales to observe whether they establish housing supply goals. This methodology generates evidence on the spatial dynamics of housing development by creating an EU-wide database at municipal granularity, while providing a novel focus and analytical approach to institutional urban plans as drivers of housing supply.&#13;
Findings reveal mixed alignment between housing supply and demographic growth, with Southern and coastal urban areas falling short on housing supply. In most cases, there is a pronounced metropolitan effect, whereby peripheral municipalities experience larger housing and population growth. The plan analysis shows that more frequent planning is associated with larger housing provision. In addition, the research highlights that housing goals are usually set in local plans, revealing a mismatch between planning efforts and housing dynamics, which tend to be metropolitan or regional. The research thereby deepens the understanding of European housing provision and the planning of urban territories, highlighting the need for stronger housing policy mechanisms at the metropolitan level.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Miles Matter: Demographics, Distance, and Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/162132" rel="alternate"/>
<author>
<name>El-Sisi, Kareem H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162132</id>
<updated>2025-07-30T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Miles Matter: Demographics, Distance, and Decision-Making
El-Sisi, Kareem H.
In this thesis, I investigate which variables have the strongest influence on an individual's travel mode choice depending on the purpose and level of urgency (leisure, essential, emergency) of the trip. I analyze how spatiotemporal costs, conditioned by demographic segmentation, shape mode choice, using data on population mobility patterns in auto-centric Los Angeles and multimodal New York City. Through a synergistic three-pronged methodology consisting of spatial (time and distance analysis complemented by a spatial interaction model), statistical (multinomial logistic regression model), and machine learning-based (graph neural networks and extreme gradient boosting) analysis, I explore the multifaceted nature of decision-making processes in different urban environments. The patterns revealed by these models show that distance is the key determinant of mode choice, modulated by the urban form of the city and its adaptation to multimodality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Producing a Black Oeuvre: Narratives of Black Grassroots Cultural Organizing in Boston</title>
<link href="https://hdl.handle.net/1721.1/162131" rel="alternate"/>
<author>
<name>Hunsen, Alula</name>
</author>
<id>https://hdl.handle.net/1721.1/162131</id>
<updated>2025-07-30T03:07:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Producing a Black Oeuvre: Narratives of Black Grassroots Cultural Organizing in Boston
Hunsen, Alula
Amidst a bevy of nonprofits and governmental actors that support and facilitate cultural and aesthetic production in the City of Boston, a vanguard of Black artists and cultural organizers are developing structures and organizations to help local members of Boston’s Black communities steer their own cultural production. This thesis develops an understanding of actions being taken by these organizers and organizations through interviews, and builds a set of participatory action research frameworks by partnering with these organizations (specifically: Thrill, Black Cotton Club, and 5Thou), to conduct further research as to how Black Bostonians can continue to self-determine in the realms of arts and culture. Drawing from a lineage most directly traceable to the Black Arts Movement of the late 1960s, and to hip-hop cultural production in ensuing decades, these organizers are furthering Black-led, community-controlled arts, and fostering community-building. Borrowing theorist Henri Lefebvre’s conception and declaration of a right to creative expression and participation, characterized as oeuvre and as a critical aspect of a “right to the city,” I hypothesized that these actions toward cultural self-determination could be seen as the establishment of a Black oeuvre. This assertion was expanded upon by research partners, to include a broader array of strategies and conceptual frameworks for producing Black place, community, and culture in Boston.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Envisioning Regional Futures in Southeast Los Angeles: Understanding Barriers to Implementing Transit-Oriented Communities along the Forthcoming Southeast Gateway Line</title>
<link href="https://hdl.handle.net/1721.1/162130" rel="alternate"/>
<author>
<name>Martinez, Alejandra A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162130</id>
<updated>2025-07-30T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Envisioning Regional Futures in Southeast Los Angeles: Understanding Barriers to Implementing Transit-Oriented Communities along the Forthcoming Southeast Gateway Line
Martinez, Alejandra A.
The first 14.5-mile phase of the Southeast Gateway Line (SEGL), a planned light rail project through Southeast Los Angeles and the Gateway Cities region, is expected to be completed by 2035. The rail line aims to improve transit access while being complemented by a regional planning framework and station area planning that seeks to promote transit-oriented communities around station areas and drive equitable community development along the corridor. However, it remains uncertain whether the frameworks and governing bodies responsible for implementing the rail project, including the Los Angeles County Metropolitan Transportation Authority (LA Metro), the Gateway Cities Council of Governments (GCCOG), and cities along the corridor, will effectively align the transit investment with these land use and development goals.&#13;
&#13;
Given these uncertainties, this thesis focuses on the Southeast Los Angeles (SELA) subregion, where a history of structural challenges underscores both the urgency and the complexity of realizing visions for transit-oriented communities tied to the forthcoming rail investment. Drawing on semi-structured interviews with LA Metro and GCCOG staff, along with officials and staff from cities hosting future stations, this research explores the emerging political, economic, and structural barriers to implementing transit-oriented land use around two future SEGL stations: Florence/Salt Lake Station in Huntington Park and Firestone Station in South Gate, both of which have multi-jurisdictional spheres of influence. This thesis also proposes a collaborative framework that encourages SELA stakeholders to engage in incremental, low-stakes planning and establish accountability mechanisms before the rail arrives, laying the foundation for sustained stewardship over the vision of transit-oriented communities and broader equitable community development goals throughout the rail's lifespan.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Path Forward: Gentrification Management Strategies in Rural Trail-Based Outdoor Recreation Economies</title>
<link href="https://hdl.handle.net/1721.1/162127" rel="alternate"/>
<author>
<name>Smith, Mistaya</name>
</author>
<id>https://hdl.handle.net/1721.1/162127</id>
<updated>2025-07-30T03:07:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Path Forward: Gentrification Management Strategies in Rural Trail-Based Outdoor Recreation Economies
Smith, Mistaya
Rural communities in the United States face economic challenges due to a combination of factors including the decline of the extractive sector, the departure of manufacturing, the agglomeration of farmland, and the regionalization of key public services. To some policymakers, this economic decline, in combination with the nation’s rural-urban political stratification, serves as reason to further abandon rurality and promote migration to urban areas. These policies overlook the interdependence between rural and urban ecosystems and ignore rural America’s unique assets. In capitalizing on rurality’s existing natural beauty and land access, the trail-based outdoor recreation economy functions as a form of asset-based economic development in rural communities. In connecting recreators to the land, serving as the setting of social connection, and creating place-based connections across time, trails further benefit rural communities through the construction of place attachment. Investment in trails as a form of economic development, however, commodifies nature so as to attract external interest in rural places. Externally-driven population increases and wealth influxes in rural communities can cause physical gentrification in the form of rising property values and resident displacement. This gentrification process also contains a cultural component as the commodification of nature and the demographic shift in rural places erodes place attachment between longtime residents and the land through the displacement of local place-based knowledge, changes in traditional land access, and disruption to recreational use patterns. Research suggests that those with deeper place attachments exhibit greater civic engagement, a deeper sense of community and belonging, and more care for their community and environment. Therefore, cultural gentrification can also lead to a decline in community care and a risk to rural vitality. 
This thesis examines five rural Northeastern towns with trail-based outdoor recreation economies to discern how each community approaches the risks of physical and cultural gentrification.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wildfire Risk Management for Informal Settlements in Chile</title>
<link href="https://hdl.handle.net/1721.1/162126" rel="alternate"/>
<author>
<name>Sakai, Yuri</name>
</author>
<id>https://hdl.handle.net/1721.1/162126</id>
<updated>2025-07-30T03:07:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wildfire Risk Management for Informal Settlements in Chile
Sakai, Yuri
This thesis explores the critical intersection of wildfire risk and informal settlement development in Chile, focusing on the municipality of Viña del Mar. This city experienced the deadliest wildfires in the nation’s history in 2024 and holds the nation’s highest concentration of informal settlements. Despite this double vulnerability, the city has inadequately integrated wildfire resilience into its disaster risk management (DRM) framework, creating an urgent need for policy reform.&#13;
&#13;
Through combined statistical and geospatial analyses, the author documents informal settlements’ expansion trajectories, especially between 2011 and 2024, and systematically assesses their wildfire exposure. Utilizing unregularized community datasets, wildfire risk classifications, and municipal planning documents, the analyses revealed that the growth of informal settlements outpaces regularization interventions. They also showed that all of the informal communities in the city, including their wildland-urban interface zones, face significant fire risk.&#13;
&#13;
These findings further led the research to evaluate current Chilean wildfire governance under Law 21.364 (enacted in 2021), which aims to provide comprehensive DRM across national, regional, and municipal administrative levels. Additionally, the study examines the disaster response mechanisms for the 2024 Chile Wildfires. These policy and evidence-based analyses identify persistently reactive approaches to disasters even four years after the policy transition, and reveal a systematic marginalization of informal settlements.&#13;
&#13;
Based on these findings, the research culminates in phase-specific actionable policy recommendations addressing the compound vulnerabilities of informal communities through: 1) enhanced shelter capacity estimation methodologies; 2) formalized private sector involvement; 3) integrated tsunami-wildfire warning systems; 4) periodic intergovernmental learning opportunities; and 5) technical support in reconstruction. Given the 2024 tragedy and Chile’s transition toward comprehensive DRM, these interventions are particularly crucial to accelerating and consolidating that transition.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Still Working: Re-examining America’s Urban Working Waterfronts</title>
<link href="https://hdl.handle.net/1721.1/162124" rel="alternate"/>
<author>
<name>Zhang, Mabelle</name>
</author>
<id>https://hdl.handle.net/1721.1/162124</id>
<updated>2025-07-30T03:07:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Still Working: Re-examining America’s Urban Working Waterfronts
Zhang, Mabelle
While American urban waterfronts once served as critical sites of production, they are now disappearing, reflecting larger de-industrialization trends. This thesis argues for a critical re-examination of the continued and evolving role that waterfronts play as sites of work. It expands the definition of urban working waterfronts to include sites of industry, production, and economic activity, thereby aligning with these sites’ historic and ongoing uses. &#13;
&#13;
This thesis examines four working waterfronts in the Northeastern United States, a region with over 400 years of urban development driven by and around its waterfronts. It does so through four case studies: Central Waterfront in Portland, ME; Waterfront District in New Bedford, MA; Waterfront at Port Morris, NY; and Waterfront at Sunset Park, NY. &#13;
&#13;
Through analyzing these cases, this thesis proposes a typology of working waterfronts: the Traditional Working Waterfront, the Industrial Working Waterfront, and the Hybrid Working Waterfront, based on key differences in uses, forms, and governance. &#13;
&#13;
This thesis argues that the central issue is not merely protecting working waterfronts, but understanding how they are adapting to new realities. State and community-driven protections through zoning help protect existing working waterfronts; however, these sites are not stagnant relics of historic working waterfronts. Rather, they are ever-evolving in response to new economic realities, incorporating new industries, technologies, and public access into their sites.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pedestrian Accessibility and Individual’s Subjective Happiness</title>
<link href="https://hdl.handle.net/1721.1/162123" rel="alternate"/>
<author>
<name>Shikida, Aika</name>
</author>
<id>https://hdl.handle.net/1721.1/162123</id>
<updated>2025-08-25T18:54:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pedestrian Accessibility and Individual’s Subjective Happiness
Shikida, Aika
Cities in many countries are taking steps to use happiness as a formal policy measure of well-being, in addition to more commonly used economic indicators such as Gross Domestic Product. Economists and public policy and public health scholars have researched the factors that are associated with happiness, linking higher self-reported happiness outcomes with financial status, gender, social interactions, personal health, and sense of security. However, the link between happiness and the built environment around one’s home or workplace has been understudied and remains poorly understood. While location quality — particularly pedestrian accessibility to commercial, recreational, institutional, educational, and transportation facilities — is known to affect home location values, how the same set of location attributes that affect housing prices may have a relationship with happiness remains unclear. In theory, more convenient home locations offer individuals the capacity for independent living (e.g., walking access to destinations), social interactions (e.g., chance encounters with community members), and a sense of belonging (e.g., through self-sufficient neighborhood amenities) — qualities that should also contribute to happiness. This thesis reports on an exploratory analysis of location quality and self-reported happiness in the United States and Japan. Using a customized pedestrian accessibility metric, this thesis examines how access to daily destinations is related to individuals’ subjective happiness, controlling for socio-demographic variables. In the U.S. data, we found that people living in areas with higher pedestrian accessibility to destinations were not necessarily more likely to report being happier, on average. In fact, there was a small tendency for individuals in these areas to report slightly lower happiness levels, on average, after accounting for other influences such as age, income, and marital status. 
Note that the relationship between pedestrian accessibility and happiness may be more complex than expected and may involve other factors (e.g., presence or absence of greenery). We conducted an additional analysis by dividing the Census tracts into two groups based on population density. In areas with lower population density, the relationship between pedestrian accessibility and happiness remained negative and statistically significant and showed the same strength as the overall analysis. For Nagasaki, Japan, there was not a statistically significant relationship between happiness and pedestrian accessibility, but this might be due to a problem in the street network data, so further investigation is required. In addition, a qualitative analysis of Nagasaki reveals that residents report that problems with the walking environment (e.g., narrow sidewalks, slopes and stairs, darkness at night, road surface differences, distance to facilities) influence their travel behavior and happiness. Nevertheless, although the results of this thesis have limitations, as described above, promoting pedestrian accessibility should remain an important consideration for policy makers when setting public policy goals, since pedestrian accessibility could, for instance, lead to improved physical and mental health, as well as other benefits. For both the U.S. and Japan, future work is necessary to understand the complex experiences of individuals that include spatial, psychological, and environmental factors related to the built walking environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Listeria monocytogenes crosses host cell barriers</title>
<link href="https://hdl.handle.net/1721.1/162120" rel="alternate"/>
<author>
<name>Hanna, Ruth</name>
</author>
<id>https://hdl.handle.net/1721.1/162120</id>
<updated>2025-07-30T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">How Listeria monocytogenes crosses host cell barriers
Hanna, Ruth
Listeria monocytogenes is a bacterial pathogen that causes listeriosis, a food-borne illness that can lead to severe complications and mortality in immunocompromised or pregnant people. Listeria is able to cross several host barriers to cause severe disease, including the intestinal barrier, the blood-brain barrier, and the placental barrier. This crossing is mediated by a diverse range of bacterial factors. In this review, I outline the key host barriers encountered by Listeria during infection and the mechanisms by which Listeria crosses each barrier.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oakland's Preservation Park: Planning for the Future</title>
<link href="https://hdl.handle.net/1721.1/162117" rel="alternate"/>
<author>
<name>Kaufman, Samantha</name>
</author>
<id>https://hdl.handle.net/1721.1/162117</id>
<updated>2025-07-30T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Oakland's Preservation Park: Planning for the Future
Kaufman, Samantha
Preservation Park in Oakland is an anomaly. It is neither a green park nor strictly an office park: 16 historic homes, carefully renovated and maintained, are arranged around an internal way and anchored by a central fountain in the Victorian style. Seeds for the park were planted by the city's Landmark Preservation Advisory Board in 1976, and after fits and starts, it opened in 1991. As Interstate Highway 980 was built, the park was created as a way to save a few of the most beautiful homes threatened by the Oakland Redevelopment Authority's urban renewal clearance and highway construction. Interstates 580, 880, and 980 were lashed across Oakland to bring suburban commuters over the bridge to San Francisco, cutting up a city of neighborhoods and destroying thousands of homes and small businesses. Oakland envisioned this acre and a half as a permanent site for community-based organizations and non-profits to revitalize the edge of downtown and West Oakland. &#13;
 &#13;
Since 1991, the office space has been rented to dozens of non-profits and has hosted hundreds of weddings, conferences, and other public and private events. In 2004, the community development corporation East Bay Asian Local Development Corporation (EBALDC) purchased the park from the city and continued to manage the property as a successful office park and event space. The COVID-19 pandemic irrevocably changed how many people work, and for the first time, Preservation Park vacancies increased and occupancy has remained substantially below 100%, presenting a challenge to EBALDC and its portfolio. This thesis seeks to provide the client with a framework to assess possible redevelopment and reprogramming schemes that is sensitive to the community goals of EBALDC and the requirement for the property to sustain itself. By considering financial feasibility and partnerships, a multi-phase roadmap with a 20-year time horizon is presented for EBALDC to consider. This also provides a potential framework for more non-profit firms to pursue commercial real estate management and redevelopment as a strategy for community wealth-building and neighborhood stability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Governing Care in Astoria, Queens: The Role and Responsibility of the City in Supporting Community-Led Solidarity Networks</title>
<link href="https://hdl.handle.net/1721.1/162116" rel="alternate"/>
<author>
<name>Kleinbock, Yvette</name>
</author>
<id>https://hdl.handle.net/1721.1/162116</id>
<updated>2025-07-30T03:07:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Co-Governing Care in Astoria, Queens: The Role and Responsibility of the City in Supporting Community-Led Solidarity Networks
Kleinbock, Yvette
In the spring of 2020, as COVID-19 spread across New York City and the United States, an inadequate government response and an overburdened social safety net left millions facing unemployment, eviction, and food insecurity with limited institutional support. Yet alongside these systemic failures, mass acts of solidarity emerged, as unprecedented numbers of people mobilized mutual aid efforts to help their neighbors survive. While many mutual aid groups have since disbanded or experienced burnout, others have sustained the work, helping to establish alternative infrastructures of collective care. Taking Astoria, Queens as a case, this thesis examines the political lessons that have emerged in the aftermath of the COVID-19 pandemic, focusing on what it takes to sustain community-led solidarity networks and considering the City’s role and responsibility in supporting urban infrastructures of care more broadly. To conceptualize this relationship between local community efforts and the City, I further consider the possibilities of co-governance as a framework for community care. This research utilizes a community-centered, relational, qualitative approach that draws on oral history and ethnographic traditions, including thematic analysis of key informant interviews, document review, and participant observation. Tracing the trajectory of mutual aid and other community-led efforts in Astoria and exploring the possibilities and challenges of collaborative governance, this research imagines how planning, policy, and governance strategies in New York City can deepen collective capacity, foster resilience, and advance more just and caring urban futures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing a Digital Common Application for Affordable Housing in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/162115" rel="alternate"/>
<author>
<name>Moss, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162115</id>
<updated>2025-07-30T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Implementing a Digital Common Application for Affordable Housing in Massachusetts
Moss, Emily
The need for affordable housing in Massachusetts is immense, with fragmented housing application processes further compounding barriers for low-income residents to access stable housing. To address these challenges, the Massachusetts Executive Office of Housing and Livable Communities (EOHLC) initiated the development of a digital common application (Common App) in 2024 to streamline tenant application and selection processes for privately owned publicly subsidized housing opportunities throughout the state. This client-based thesis offers an implementation roadmap for EOHLC to successfully operationalize the Common App within the agency.&#13;
&#13;
The roadmap is structured around three topics as requested by EOHLC: (1) organizational design considerations as the Common App scales, including internal staffing models, external vendor relationship management, and budget planning; (2) long-term technical integration opportunities, including identifying relevant data systems likely to interact with the Common App and potential areas for alignment; and (3) compliance mechanisms to ensure housing providers’ participation in the Common App, including a review of Massachusetts fair housing regulations as one possible strategy to require or incentivize providers to use the platform.&#13;
&#13;
Each topic draws from a review of state policies as well as academic literature in organization studies, information systems, and public administration; stakeholder interviews; and case study research on digital affordable housing search and application platforms in Massachusetts, Detroit, San Francisco, and the Bay Area—culminating in a series of recommendations for EOHLC to effectively administer the Common App over the long term.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ensuring Equitable Tenant Outcomes: Case Studies of Building Decarbonization Initiatives in Greater Boston, Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/162114" rel="alternate"/>
<author>
<name>Wong, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/162114</id>
<updated>2025-07-30T03:07:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ensuring Equitable Tenant Outcomes: Case Studies of Building Decarbonization Initiatives in Greater Boston, Massachusetts
Wong, Nicole
U.S. cities are ramping up building decarbonization initiatives to reduce greenhouse gas emissions from buildings. However, these programs and policies generate complex challenges at the intersection of housing, climate, and environmental justice, especially for cities that face barriers to adopting strong renter protections. This thesis offers two case studies regarding tenant-related equity concerns that emerged during the implementation of building decarbonization initiatives in greater Boston, Massachusetts: Boston’s building performance standard, the Building Emissions Reduction and Disclosure Ordinance (BERDO), and Everett’s energy efficiency incentive program, Electrify Everett. This thesis also identifies strategies that residents, community organizations, and city officials highlight as important to advance building decarbonization without generating unintended consequences for tenants. &#13;
Key equity concerns include the potential impacts of building decarbonization on rental affordability, displacement, and energy burden, whereas strategies include broad tenant protections such as rent control, renter protections attached to building decarbonization subsidies, and robust enforcement mechanisms. This research illuminates the need to build power to win essential tenant protections, focus decarbonization on housing with existing affordability protections, and advance alternative, decommodified forms of housing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neutronic Performance and Thermal Hydraulic Analysis of the MIT Reactor Fission Converter Experimental Facility Using High-Density U-10Mo Low-Enriched Uranium Fuel Elements</title>
<link href="https://hdl.handle.net/1721.1/162113" rel="alternate"/>
<author>
<name>Sears, Caroline Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/162113</id>
<updated>2025-07-30T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Neutronic Performance and Thermal Hydraulic Analysis of the MIT Reactor Fission Converter Experimental Facility Using High-Density U-10Mo Low-Enriched Uranium Fuel Elements
Sears, Caroline Julia
The MITR fission converter (FC) is a core-driven subcritical assembly at the MIT Nuclear Reactor Laboratory, located on the MIT campus in Cambridge, MA. The assembly is made of eleven partially-depleted MITR-II fuel elements in a separate cooling tank attached to the side of the core-tank graphite reflector. The FC serves to boost the thermal flux from the core and send a hardened neutron spectrum to an irradiation target, providing a fission energy flux spectrum without the need to put a sample inside the core tank. It was previously used for boron-neutron capture therapy clinical trials before its decommissioning in the 2010s. Recently, it has been modified from a medical beamline to a general-use engineering and materials testing facility. The new FC-based experimental facility has roughly one cubic meter of empty space downstream intended to contain large experiments, called the m³. This work is a safety and performance study aimed at quantifying the impact of modifying the facility’s geometry as part of the FC’s recommissioning, as well as the impact of changing its fuel from HEU to LEU fuel as part of the MITR LEU conversion project. Neutronics and thermal hydraulics analyses of the renovated facility have been performed using the codes MCNP5 and STAT7, respectively. This analysis quantified the FC’s k_eff, power distribution, multi-group neutron flux, and conditions which cause onset of nucleate boiling (ONB). It was determined that the FC assembly will remain subcritical (k_eff &lt; 0.9) and low power (≤200 kW) under a wide range of performance conditions, including with both types of fuel and a variety of materials on the target side of the FC tank. The HEU-fueled FC is expected to require no changes to the limiting safety system settings (LSSS) outlined in the original technical specifications document. The LEU fuel is expected to increase the FC performance but, as a tradeoff, will require minor changes to the LSSS setpoints to maintain margin to ONB under the most limiting thermal-hydraulic conditions. Additionally, this study evaluates the feasibility of using the FC for in-assembly fuel experiments, particularly as a pathway for testing the new LEU fuel elements at low power. This study indicated that the proposed FC configuration with one LEU and ten HEU elements is feasible and maintains wide safety margins.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Supervisory Control System as a Transition Technology Towards Autonomous Reactor Plant Operations</title>
<link href="https://hdl.handle.net/1721.1/162112" rel="alternate"/>
<author>
<name>Fortier, Lauren G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162112</id>
<updated>2025-07-30T03:06:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of a Supervisory Control System as a Transition Technology Towards Autonomous Reactor Plant Operations
Fortier, Lauren G.
The economic viability of small and microreactors depends on reducing energy generation costs. The implementation of autonomous reactor control systems provides an avenue for reducing operations and maintenance expenses. Advanced reactor designs with enhanced passive safety features, reduced source terms, and digital instrumentation and control systems, directly support autonomous controllers. In these plants, where the need for human operators is already reduced, the introduction of supervisory control systems (SCS) for dynamic operations further lessens operator dependence while building trust in these systems, laying a solid foundation for the transition to fully autonomous reactor control.  &#13;
&#13;
Finite state automata (FSA) provide a framework for engineering fully verifiable and validatable supervisory controllers, and thereby facilitate the transformation to autonomous operations in nuclear power plant operations. FSA serve as a foundational mathematical tool for modeling discrete event systems (DES). Properties such as nonblocking and controllability can be formally demonstrated and verified by leveraging the extensive set of mathematical proofs within the scope of regular languages. Furthermore, a DES can be directly linked to reactor plant systems and operational procedures within a hierarchical architecture by using a graded functionalization approach analogous to that of complex dynamic systems, such as self-driving vehicles. In this scheme, feedback controllers can regulate low-level actuation functions while a supervisory controller can govern high-level plant state transitions. &#13;
&#13;
A generic supervisory controller was developed as a transition technology toward autonomous reactor operations. This controller was then tailored for application on a limited feedback model, for initial proof-of-concept testing, and then was scaled for use on light water reactor (LWR) simulators. In the absence of advanced reactor simulators for operational testing, LWR simulators were used because they provide realistic feedback and controls within a more conservative operating margin than advanced reactors. These supervisory controllers successfully executed operational procedures within a fully verifiable framework, establishing the foundation of this modeling approach and laying the groundwork for its implementation in advanced reactor designs. This scalable model thus facilitates a smooth transition from functioning as an operator aid to fully autonomous operation as a comprehensive plant controller, increasing the economic viability of nuclear power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Water, Wastewater, and Thermal Infrastructure Development for a Resilient Neighborhood in War-Affected Ukraine</title>
<link href="https://hdl.handle.net/1721.1/162106" rel="alternate"/>
<author>
<name>Gendler, Isaac A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162106</id>
<updated>2025-07-30T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Critical Water, Wastewater, and Thermal Infrastructure Development for a Resilient Neighborhood in War-Affected Ukraine
Gendler, Isaac A.
The Central Ukrainian municipality of Tetiiv is experiencing an influx of migrants due to its relatively safe position amid the Russian invasion. Tetiiv, in collaboration with the Ukrainian NGO Vid Sertsya Budova, is building a new neighborhood to accommodate internally displaced people, refugees, war veterans, and local residents. The neighborhood will require water, wastewater, and thermal infrastructure that satisfies European Union requirements given Ukraine’s ambition to join the economic bloc. This thesis performs a pre-feasibility study to help Tetiiv and Vid Sertsya Budova create an optimal configuration of water, wastewater, and thermal infrastructure for the new neighborhood. For water infrastructure, the report calculates water consumption using the BREEAM framework, quantifies storage requirements, analyzes water quality, estimates rainwater harvesting potential, and identifies optimal water source locations within 30 km using the DRASTIC methodology combined with geospatial analysis. For wastewater infrastructure, the study estimates wastewater generation, analyzes different wastewater treatment options, and uses a decision matrix to identify the optimal wastewater system for the site: a moving bed biofilm reactor system. The thermal infrastructure study develops a conceptual heating system for the new neighborhood, incorporating ground-source heat pumps in each row house and single-family home, vertical boreholes, a thermal energy network, and a wastewater heating system for the multifamily co-living units. This study offers a blueprint for Ukraine and other regions recovering from urbicidal conflict and disaster to rebuild in alignment with the new climate paradigm.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affect in Resiliency Planning: A Conversation with Broad Channel</title>
<link href="https://hdl.handle.net/1721.1/162105" rel="alternate"/>
<author>
<name>Fiol, Olivia</name>
</author>
<id>https://hdl.handle.net/1721.1/162105</id>
<updated>2025-07-30T03:06:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Affect in Resiliency Planning: A Conversation with Broad Channel
Fiol, Olivia
Planning for climate change is more relevant than ever as the earth continues to warm, sea levels rise, and no global policy response or political will is in sight. In order to plan under hostile circumstances, it is of the utmost importance that planners turn our attention to the hyper-local scale, continuing momentum in our personal and professional relationships. In this thesis, I argue that centering affective experiences of place is essential in conversations about the future of places under climate change, especially in communities and neighborhoods resistant to the conversation about climate change’s impacts on their futures in the first place. This project focuses on Broad Channel, the only inhabited island community in New York City’s Jamaica Bay, which is on the front lines of sea level rise and tidal flooding in the city. I interviewed city leaders, community members, artists, planners, and activists to understand how we can move through and with affect when considering the future of a place. Doing so can open up conversations about climate change that were previously inaccessible. These conversations also surfaced the need for planners to regroup and understand how their own affective positions shape difficult conversations about climate change. I offer these insights and recommendations for future resiliency planning work, reflecting both inward and outward.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Between Fields and Cities: The Politics of Land Use Changes in Punjab, India</title>
<link href="https://hdl.handle.net/1721.1/162104" rel="alternate"/>
<author>
<name>Kodzis, Trevor Quigley</name>
</author>
<id>https://hdl.handle.net/1721.1/162104</id>
<updated>2025-07-30T03:08:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Between Fields and Cities: The Politics of Land Use Changes in Punjab, India
Kodzis, Trevor Quigley
This thesis examines the urbanization of agricultural lands in the State of Punjab, looking for patterns that explain the type of development that is occurring while embedding these transformations in a larger political and economic context. The study focuses on both transportation infrastructure and the real estate developments surrounding it, as a way of situating Punjab within a larger discourse on infrastructure and urbanization in the Global South. Through case studies of three Punjabi cities (Mohali, Bathinda, and Ludhiana), this paper employs remote sensing to analyze recent transformations from agricultural to developed land across different land use zones, revealing two primary patterns. First, highway infrastructure projects have been delayed because of land acquisition problems and a contentious political environment. Second, with the exception of Ludhiana, most of the real estate in Punjab is concentrated in the residential sector. This apparent stagnation of manufacturing growth in Punjab results from a wide range of political and economic factors including high land prices, protest movements, emigration, fiscal policies, geography, and competition with other states. In contrast to the rest of the state, Ludhiana has successfully attracted industrial growth, illustrating how cities that urbanized earlier follow a different path of economic development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ozarkitecture: Shaping the Sense of a Region</title>
<link href="https://hdl.handle.net/1721.1/162103" rel="alternate"/>
<author>
<name>Jones, Rubin</name>
</author>
<id>https://hdl.handle.net/1721.1/162103</id>
<updated>2025-07-30T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ozarkitecture: Shaping the Sense of a Region
Jones, Rubin
Contemporary planning often invokes a “sense of place,” yet the deeper work of placemaking remains largely unfulfilled. In its absence, cities and regions fracture into landscapes that appear whole but feel hollow. These are spaces stripped of the sensory depth and symbolic meaning that make dwelling possible. This thesis thus returns to the concept of the genius loci—the spirit of place—not as a nostalgic embellishment, but as an ethical and practical imperative. It traces the philosophical and historical foundations of place, examines how contemporary practice has diluted its meaning, and explains why a new approach is necessary. From this foundation, the project engages Kevin Lynch’s operational models and develops a reframed approach—shifting from a visual image to an embodied experience—to ground planning practice in the textures of memory, movement, and belonging. Five new concepts—anchor, patch, joint, seam, and trail—offer a vocabulary for cultivating places that hold meaning across time and transformation. This framework is applied in Northwest Arkansas, a region where rapid growth threatens to outpace the character of its communities. By strengthening sensory experience, rooted memory, and collective authorship, this project aims to offer a different way forward through regional transit—where planning not only shapes space, but safeguards access to the ongoing, unfinished project of place itself.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Identity and Place: The Role of Displacement Camps in Community Rebuilding and Identity Preservation in Sudan</title>
<link href="https://hdl.handle.net/1721.1/162102" rel="alternate"/>
<author>
<name>Sati, Maysaa</name>
</author>
<id>https://hdl.handle.net/1721.1/162102</id>
<updated>2025-08-06T18:54:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Navigating Identity and Place: The Role of Displacement Camps in Community Rebuilding and Identity Preservation in Sudan
Sati, Maysaa
Displacement camps are often framed as zones of impermanence: spaces of waiting designed to contain crises, not cultivate futures. Yet, in Kalma Camp in South Darfur, displacement has given rise to a self-organized, complex urban environment shaped by collective labor, cultural resilience, and everyday acts of spatial and political agency. This thesis explores how communities in Kalma have remade space, redefined home, and preserved identity in the face of prolonged uncertainty. Drawing on ethnographic fieldwork, spatial analysis, and critical urban theory, it situates Kalma not as an exception, but as a generative urban formation—an emergent city born from the margins.&#13;
Through chapters that trace the camp’s spatial evolution, intergenerational understandings of belonging, informal governance, cultural production, and political expression, this research challenges dominant humanitarian paradigms that treat camps as temporary and peripheral. It argues that residents are not passive recipients of aid, but planners, builders, and cultural producers who contest displacement through care, memory, and infrastructure. By threading together theoretical insights from scholars such as Malkki, Bhabha, Roy, and Simone with grounded narratives from Kalma, the study reveals how displacement can also be a site of urban possibility.&#13;
In reframing camps like Kalma as sites of urban life, not despite the crisis, but through it, this thesis calls for a fundamental shift in how urban planners, humanitarian actors, and scholars engage with protracted displacement. It invites us to see resilience as planning, care as governance, and the camp not as a space of suspension, but as a place where new urban futures are already being forged.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of a Diamond Proton Recoil Telescope for DT Neutron Measurements in the LIBRA Experiment</title>
<link href="https://hdl.handle.net/1721.1/162092" rel="alternate"/>
<author>
<name>Edwards, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162092</id>
<updated>2025-07-30T03:08:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterization of a Diamond Proton Recoil Telescope for DT Neutron Measurements in the LIBRA Experiment
Edwards, Emily
The LIBRA project investigates tritium breeding using beam-target style DT neutron generators to irradiate molten salt vessels. A critical aspect of understanding this process is the characterization of the energy and flux anisotropies within the neutron environment, which are inherent to the beam-target neutron generation method. These spectral and flux characteristics directly impact tritium production and the interpretation of experimental results, which makes the neutron field characterization essential for a complete understanding of the tritium breeding system. This paper presents the use of an sCVD diamond detector and an sCVD diamond proton recoil telescope to characterize the neutron environment produced by the DT neutron generator employed in the LIBRA experiments. The results of these measurements provide insight into the neutron flux and energy distributions incident on the breeding salt, enabling a more complete understanding of the neutron input in the LIBRA experimental tritium breeding process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contested Values of Eco-Developments: Leveraging Private Finance to Integrate Biodiversity into Nusantara’s City Development Framework</title>
<link href="https://hdl.handle.net/1721.1/162090" rel="alternate"/>
<author>
<name>Leung, Yu Hang (Hannah)</name>
</author>
<id>https://hdl.handle.net/1721.1/162090</id>
<updated>2025-07-30T03:08:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Contested Values of Eco-Developments: Leveraging Private Finance to Integrate Biodiversity into Nusantara’s City Development Framework
Leung, Yu Hang (Hannah)
Rapid expansion of urban populations has spurred the construction of new cities, contrasted with the heightened urgency to adopt climate risk mitigation and disaster resilience strategies. Along with the global need for Nature-based Solutions (NbS), new eco-developments that are planned within biodiversity hotspots should adopt resilient climate adaptation strategies for long-term benefits. However, these projects are often not financially justified or positioned to sustain long investments and holding periods. This thesis examines the development of Ibu Kota Nusantara (IKN) in Indonesia as an evolving eco-development case study on how biodiversity could be repositioned as a key aspect in investment frameworks.&#13;
New cities and eco-developments tend to rely on external investments, as internal structures navigate the challenges of rapid growth while seeking a self-sustainable equilibrium. For IKN, private investors hesitate to invest in a project that is situated in an unstable political landscape, while low government expenditure and poor governance structures have marred development progress. Based on the inherent need to build to support a growing urban population, this multidisciplinary thesis explores three components that are needed to design an eco-development project: namely, a consistent way to value biodiversity in comparison to development values, proper environmental governance, and sustainable financial instruments to support the initial and operational expenditures of a project. Measurement approaches such as GBS-FI and S&amp;P NBS are able to streamline corporations’ dependency value of biodiversity, based on valorization models developed by SEEA-EA and the United Nations’ Integrated National Financing Framework. A mixed-methods approach of qualitative case study analysis and in-depth review of existing and potential financial instruments is used to understand the demand and supply side of eco-developments. A&#13;
Contingent Valuation Method of assessing buyers’ Willingness-To-Pay (WTP), together with a qualitative questionnaire on perceived values of biodiversity, provides insights into local understanding of, and WTP for, premiums in support of the elevated costs of eco-developments. The intention of this research is to explore how biodiversity could be recentered as a foundational element of the sustainable development of cities. More broadly, this research seeks to synthesize the interdisciplinary discussions around development, environmental policy and ecological planning, while evaluating the feasibility of innovative financial mechanisms to mobilize capital for large-scale eco-development projects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Assessment of Digital Age Inclusion: Topic Modeling Seoul’s Digital Governance Platform to Evaluate Elderly Representation</title>
<link href="https://hdl.handle.net/1721.1/162089" rel="alternate"/>
<author>
<name>Lim, Sungmoon</name>
</author>
<id>https://hdl.handle.net/1721.1/162089</id>
<updated>2025-07-30T03:07:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Assessment of Digital Age Inclusion: Topic Modeling Seoul’s Digital Governance Platform to Evaluate Elderly Representation
Lim, Sungmoon
This paper examines the intersection of population aging and digital civic government in Seoul, South Korea. As cities worldwide digitize and age simultaneously, understanding elderly citizens' representation in digital governance platforms becomes critical for inclusive urban governance. As a leader in both aging and urban technologization, Seoul serves as an ideal case study. Combining computational analysis of civic queries with qualitative interviews, this study investigates whether elderly residents' concerns are adequately represented in Seoul's e-government platform. Comparing these datasets reveals significant disparities in how elderly concerns are represented digitally: despite Seoul's technological sophistication and digital inclusion efforts, substantial gaps remain in representing elderly citizens' concerns in governance forums, signaling gaps that may undermine age-inclusive development. This research contributes to theoretical understandings of digital democracy and urban aging while offering practical insights for designing more inclusive systems that address the realities of dual urban phenomena—aging and digitization—as they coalesce in cities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Fluid Dynamics Modeling of Compact&#13;
Steam Generators</title>
<link href="https://hdl.handle.net/1721.1/162088" rel="alternate"/>
<author>
<name>Jiragoontansiri, Witiwat</name>
</author>
<id>https://hdl.handle.net/1721.1/162088</id>
<updated>2025-07-30T03:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Fluid Dynamics Modeling of Compact&#13;
Steam Generators
Jiragoontansiri, Witiwat
Compact Steam Generators (CSGs) are vital components in Small Modular Reactors (SMRs), particularly within Integral Pressurized Water Reactor (iPWR) configurations where compactness and high performance are essential. This thesis explores the use of Multiphase Computational Fluid Dynamics (M-CFD) to simulate two-phase flow boiling in CSGs based on Printed Circuit Heat Exchanger (PCHE) technology. Using the commercial CFD code STAR-CCM+, two modeling approaches—the Volume of Fluid (VOF) model and the Two-Phase Thermodynamic Equilibrium (TPTE) model—are applied to simulate both adiabatic and heat transfer conditions within mini-channels. The simulations are validated against experimental data from two sources: an R-134a-based vertical test loop developed at MIT’s Greenlab and a water-based PCHE test section from Kromer’s prior work. Key two-phase flow parameters such as void fraction, pressure drop, and heat duty are evaluated and compared to experimental benchmarks. Calibration methodologies are implemented to improve predictive accuracy. The validated models are then used to simulate realistic CSG operating conditions based on Babcock &amp; Wilcox and NuScale reactor designs. Results indicate that PCHE-based CSGs, despite being smaller, are capable of delivering favorable thermal and hydraulic performance, with slightly better results compared to the existing steam generator design. Overall, the study demonstrates the potential of M-CFD tools to support the design and optimization of CSGs for next-generation nuclear applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Farebox Freedom: An analysis of centralized fare policy interventions relative to the suburbanization of poverty</title>
<link href="https://hdl.handle.net/1721.1/162087" rel="alternate"/>
<author>
<name>Chachra, Vir</name>
</author>
<id>https://hdl.handle.net/1721.1/162087</id>
<updated>2025-07-30T03:07:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Farebox Freedom: An analysis of centralized fare policy interventions relative to the suburbanization of poverty
Chachra, Vir
The United States is witnessing a shift in its geography of poverty, with suburban communities experiencing greater increases in poverty rates relative to urban cores. However, transit service and fare policies have not kept pace with this demographic shift, inadequately meeting the needs of a growing population of lower-income riders in the suburbs, particularly those served by higher-cost modes like commuter rail.&#13;
&#13;
This thesis confronts this evolving dynamic, bridging a research gap between transit fare policy and the suburbanization of poverty, analyzing seven transit systems across the US through a Spatial Difference in Differences research approach, revealing mode specific shifts in transit cost burdens from 2019 to 2021 and impacts of these shifts on social vulnerability as defined by the CDC. The thesis also explores federal policy pathways to create greater fare equity in light of this dynamic, either through supporting operations costs for transit agencies or through a flat-fare national transit pass for riders, akin to Germany's Deutschlandticket (D-ticket) program.&#13;
&#13;
Focusing on suburban commuter rail communities across the sampled networks, the analysis finds that in 2021, communities with only commuter rail access and higher-than-average social vulnerability scores were associated with approximately an 11% additional increase in transit cost burdens compared to all other groups, while also experiencing an increase in transit cost burdens overall. Furthermore, a two-fold increase in transit costs as a share of median income in 2021 was correlated with an additional 7.4% rise in social vulnerability index scores for commuter rail communities, relative to those with access to other modes that are closer to the urban core. While these communities have a 38% lower social vulnerability score, the analysis estimated a 60% increase from 2019 to 2021, highlighting a disproportionate increase and challenging the assumption of the wealthy commuter rail suburb.&#13;
&#13;
This increasing sensitivity to transit cost burdens points to a significant ongoing interaction between national trends of suburbanization of poverty and fare policy. Given that many transit agencies face funding constraints and are nationally inconsistent in their low-income fare programs, they may be structurally limited in their ability to address these disparities on their own. This analysis considers lessons from historical policies such as the National Mass Transportation Assistance Act of 1974 and recent international programs like Germany’s D-ticket, to suggest that federal support for transit operations—paired with inclusive, mode-agnostic fare programs—would help address these emerging inequities in transit affordability amid the suburbanization of poverty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolving Concepts of the Public Interest in Comprehensive Planning</title>
<link href="https://hdl.handle.net/1721.1/162085" rel="alternate"/>
<author>
<name>Tagliani, Jessie</name>
</author>
<id>https://hdl.handle.net/1721.1/162085</id>
<updated>2025-07-30T03:07:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evolving Concepts of the Public Interest in Comprehensive Planning
Tagliani, Jessie
The public interest is an important, yet contested, concept in the field of planning. On the one hand, it offers a normative criterion against which planning decisions can be evaluated and is traditionally viewed as the source from which planners derive their authority. However, the precise nature of the concept is fiercely debated by both planning practitioners and theorists, with some going so far as to denounce its existence. Today, the increasingly pluralist and complex nature of communities leads to questions over the concept’s relevance and applicability. In the second half of the twentieth century, planning theoreticians began assembling a body of literature surrounding this concept, mostly in the form of typologies of the definitions that have been ascribed to the public interest. However, my review of the literature revealed that the study of the public interest as a normative criterion for planning has almost entirely taken place in the realm of planning theory. Therefore, I sought to add to the empirical scholarship concerning the public interest by analyzing it from two angles: first, I sought to understand how the public interest as a historical concept has changed and evolved alongside the field of planning throughout the twentieth century. Second, I chose the field of comprehensive planning as my analytical lens due to its longevity across the history of the planning profession and its close affiliation to the concept of the public interest. Specifically, I sought to analyze how the public interest is manifested in a series of comprehensive plan documents and thereby illustrate how the concept’s operationalization has evolved over the course of the past half century of planning. I began my analysis by drawing on over fifty years of scholarship to construct my own typology of the main definitions of the public interest. I then applied these definitions to four different models of comprehensive planning that were developed between 1962 and 2012.
I also obtained a second perspective on the evolution of the concept of public interest by examining a series of comprehensive plans adopted by the City of Annapolis between 1964 and 2022. The two analyses revealed very different trajectories in the evolution of the public interest as an empirical concept. On the one hand, the four models demonstrate a fairly linear evolution in what is understood to be the substance and process of constituting the public interest, which can be broadly classified as achieving social equity, the responsible stewardship of natural resources, and authentic citizen involvement. By contrast, the five Annapolis comprehensive plans did not neatly follow the same evolution. Instead, a recurring concern for many of the Annapolis plans is the conservation of the physical city through the control of the city’s growth, the careful maintenance of its economy, and the preservation of its urban fabric. However, the more recent plans demonstrate a stronger commitment to the social values and processes espoused by the four planning models, indicating that there is growing consensus in the field of planning today regarding an empirical understanding of the public interest.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prioritizing Sidewalk Accessibility Improvements for the Aging Population and Individuals with Disabilities: A Case Study of Bandung, Indonesia</title>
<link href="https://hdl.handle.net/1721.1/162084" rel="alternate"/>
<author>
<name>Kurniaputri, Aulia</name>
</author>
<id>https://hdl.handle.net/1721.1/162084</id>
<updated>2025-07-30T03:07:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prioritizing Sidewalk Accessibility Improvements for the Aging Population and Individuals with Disabilities: A Case Study of Bandung, Indonesia
Kurniaputri, Aulia
While walking is fundamental to inclusive urban mobility, major cities in Indonesia continue to face challenges in providing barrier-free pedestrian infrastructure, even for individuals without physical impairments. As the population of older adults in Indonesia continues to grow, the risk of disability within this demographic will increase, contributing to the overall number of individuals with disabilities. In Bandung City, there is a rising awareness across various sectors of society regarding the rights of older adults and individuals with disabilities to navigate sidewalks safely. These trends highlight the importance of improving inclusivity on city streets, where people travel daily to reach their essential and desired destinations.&#13;
&#13;
This thesis explores an evidence-based methodology to prioritize sidewalk accessibility improvements for older adults and individuals with physical disabilities, aiming to develop a prioritization strategy that targets maximum impacts. Accessibility scores and pedestrian flow counts are calculated with the Urban Network Analysis (UNA) toolbox. Three types of user groups—non-disabled individuals, cane or crutch users, and wheelchair users—were assigned penalties for each type of barrier on a sidewalk segment, resulting in varying perceived distances. Those with physical mobility limitations perceived longer distances than those without. To identify priority locations, a system-selection ranking was applied that considered sidewalk segments with both high-frequency usage and significant discrepancies between actual and perceived lengths. The methods outlined in this thesis are scalable for use in other neighborhoods and cities, thereby supporting data-driven decision-making in pedestrian infrastructure improvements.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relationality and Reciprocity in Civic Design: Public Engagement and Offshore Wind Development in the Gulf of Maine</title>
<link href="https://hdl.handle.net/1721.1/162083" rel="alternate"/>
<author>
<name>Bendixen, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/162083</id>
<updated>2025-07-30T03:08:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Relationality and Reciprocity in Civic Design: Public Engagement and Offshore Wind Development in the Gulf of Maine
Bendixen, Amanda
Offshore wind projects are inherently complex, requiring the integration of social, environmental and technical planning. Meaningful engagement with communities is critical to ensuring procedural fairness, trust and equity throughout the development process. Yet, the role of civic design in shaping these outcomes remains unexplored. This thesis investigates how relationality and reciprocity are fostered through the civic design of public engagements for offshore wind development in the Gulf of Maine. Through qualitative analysis of public meeting transcripts – using thematic coding and memo writing in Atlas.ti – this study identifies civic design elements and recurring engagement themes. &#13;
&#13;
The findings highlight relational accountability as a mechanism for building trust, transparency and procedural fairness. They also explore how civic design can support reciprocity, while revealing how structural barriers can undermine relationality. This research demonstrates the possibilities and limitations of civic design in fostering relational and reciprocal public engagements. It concludes with recommendations for incorporating civic design elements that promote sustained, reciprocal relationships, accountability and long-term community involvement in offshore wind development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Safety and Surveillance: New Possibilities for Public Light After Dark</title>
<link href="https://hdl.handle.net/1721.1/162082" rel="alternate"/>
<author>
<name>Corlett, Lucy</name>
</author>
<id>https://hdl.handle.net/1721.1/162082</id>
<updated>2025-07-30T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Beyond Safety and Surveillance: New Possibilities for Public Light After Dark
Corlett, Lucy
As cities refocus planning and design goals in response to evolving global standards for urban well-being, sustainability, and spatial equity, research on best practices and innovative considerations for the public realm has expanded. As a result, a new movement in research and guidance on public light has emerged. Rather than continuing to view lighting as a punitive means of enforcing surveillance and public safety, this movement in research and practice advances radically inclusive, responsive design methods that use light to redress inequality in the built environment. This thesis builds on a growing body of research that establishes the powerful influence of light on human experience and perception, initiating a dialogue between different models for place-based approaches to lighting design in shared public spaces. Drawing on in-depth studies of these models, interviews with stakeholders, scholarship, policy, and design and planning practice, this thesis recommends that city planners serve as the bridge between ideation and implementation in a new era of urban illumination.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Community Benefits Agreements for Equitable Renewable Energy Siting: The Importance of Negotiation Power and Stakeholder Engagement</title>
<link href="https://hdl.handle.net/1721.1/162081" rel="alternate"/>
<author>
<name>Paul, Sanjana</name>
</author>
<id>https://hdl.handle.net/1721.1/162081</id>
<updated>2025-07-30T03:08:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Community Benefits Agreements for Equitable Renewable Energy Siting: The Importance of Negotiation Power and Stakeholder Engagement
Paul, Sanjana
As renewable energy development accelerates across the United States, conflicts over project siting have become increasingly common; often rooted not in opposition to clean energy itself, but in concerns over fairness, community inclusion, and long-term accountability. This thesis investigates how Community Benefits Agreements (CBAs) can serve as tools to address these challenges, focusing on how negotiation dynamics, mediation, and stakeholder engagement shape the equity and enforceability of CBAs in renewable energy siting. Using a mixed-methods approach, this research draws on qualitative case studies, stakeholder interviews, and legal-policy analysis, alongside a limited quantitative assessment of CBA implementation outcomes. The study examines both the procedural and structural conditions that influence how benefits are negotiated, formalized, and monitored. By analyzing cases that include third-party facilitation, amendment mechanisms, and diverse stakeholder participation, the thesis identifies best practices for designing CBAs that move beyond performative engagement and toward genuine community empowerment. Ultimately, this research offers a multidimensional understanding of CBAs as emergent governance instruments situated at the intersection of infrastructure planning, environmental justice, and public accountability. It concludes by proposing a model state-level regulatory framework to support equitable CBA development and embed principles of justice into the future of renewable energy siting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Public Health Governance at the Watershed Scale: Exploring Opportunities for Multi-sector Governance to Advance Planetary Health in Northeastern Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/162080" rel="alternate"/>
<author>
<name>Morales, Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/162080</id>
<updated>2025-07-30T03:07:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Public Health Governance at the Watershed Scale: Exploring Opportunities for Multi-sector Governance to Advance Planetary Health in Northeastern Massachusetts
Morales, Daniela
Many health and environmental regulations apply only within specific political or administrative boundaries, creating a mismatch between the spatial scale of natural systems which impact health and the spatial extents of relevant regulations. For example, in Massachusetts, local Boards of Health govern specific public health and environmental issues through spatialized regulatory powers that carry significant weight in both local and larger geopolitical contexts. Despite the fact that watershed management influences regional public health outcomes through impacts to water quality, water quantity, and climate resilience measures, the organizations focused on watershed management do not have influence that matches the power of public health entities. This thesis explores how watershed management decisions could have similar weight to other public health governance decisions by exploring the specific speculative case of what interest there is in, and what barriers there are to, watershed management organizations in Northeastern Massachusetts working as public health governing units, such as local Boards of Health. Using a mixed methods approach, combining organizational and policy analyses with semi-structured key informant interviews and surveys, I assessed the opportunities, barriers and interest for multi-sector watershed and health governance to advance Planetary Health in Northeastern MA. The findings showed low receptiveness towards adopting a new regional governance system due to both perceived and actualized legal, organizational and social barriers. The findings also highlighted an interest towards strengthening existing regional partnerships and building new collaborations across the fields of public health and watershed management for more effective approaches towards environmental health decision making. 
These results suggest a need for additional interdisciplinary training for both sectors, and the creation of new spaces and relationships for collaboration between actors involved in public health, watershed management, and related fields.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Silence to Sankofa: The Role of Archives in Addressing Urban Renewal’s Displacement History</title>
<link href="https://hdl.handle.net/1721.1/162079" rel="alternate"/>
<author>
<name>Mohamed, Menatalla</name>
</author>
<id>https://hdl.handle.net/1721.1/162079</id>
<updated>2025-07-30T03:07:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Silence to Sankofa: The Role of Archives in Addressing Urban Renewal’s Displacement History
Mohamed, Menatalla
In the post-World War II era, urban renewal was designed as a path towards the revitalization of American cities through public investment into the redevelopment of ‘blighted’ areas. Through eminent domain takings, urban renewal projects led to the forced relocation of residents from their homes and neighborhoods, with a disproportionate impact on Black, immigrant, and low-income communities across the country. The archives of the renewal period hold the story of this widespread displacement and are of significant value for contemporary planning practice. Through the lens of two case studies, this thesis explores how and why urban renewal archives are being revisited today to address this displacement history through institutional and community approaches to memorialization. In Cambridge, MA, the Cambridge Redevelopment Authority (CRA) is an example of an agency drawing on its own archive to publicize its role in past forced relocation through its use of eminent domain. In Rochester, NY, Clarissa Uprooted is a public history and community building project centered around the story of Clarissa Street, a historically Black neighborhood that was demolished for renewal in the 1960s. Through document analysis and interviews, I examine how these efforts to activate urban renewal archives and better understand the scope and impact of forced relocation provide avenues for planners and community members to remember the past, acknowledge systemic harms, and reflect on repair. Despite the different positionalities of the CRA and Clarissa Uprooted, a comparative approach also highlights how both organizations have created opportunities to unearth histories of dissent to urban renewal, more fully recognize the legacy of commercial displacement, and imagine avenues to planning, policy, and institutional change. 
This research demonstrates the significance of local archival initiatives that draw upon the past to better position planners and communities to face the urban challenges and inequities of the present and future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis and oxidation behavior of Cr alloyed uranium borides at high temperatures</title>
<link href="https://hdl.handle.net/1721.1/162078" rel="alternate"/>
<author>
<name>Moeykens, Riley S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162078</id>
<updated>2025-07-30T03:07:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Synthesis and oxidation behavior of Cr alloyed uranium borides at high temperatures
Moeykens, Riley S.
Following the nuclear accident at Fukushima Daiichi Power Station in 2011, an urgent need for safer, more economical, and versatile nuclear fuels has arisen. In recent years, uranium boride (as a tetraboride and diboride) has been further investigated as a candidate fuel form for its high thermal conductivity, high melting point, high uranium loading, and potential for dual use as a fuel and burnable absorber. In this work, the synthesis, structural behavior, and oxidation behavior of uranium borides and chromium- and yttrium-alloyed uranium borides are investigated. The structures of the synthesized uranium borides and chromium- and yttrium-alloyed uranium borides were probed using synchrotron X-ray Powder Diffraction (XRD) and Pair Distribution Function (PDF) analysis with in-situ heating. The methods and challenges in synthesizing uranium boride and chromium- and yttrium-alloyed uranium boride, as well as the consequential thermophysical and oxidation properties of these potential fuel forms, are elucidated in this work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Semi-Autonomous, Highly Automated, and Remotely Operated (SAHARO) Nuclear Reactors</title>
<link href="https://hdl.handle.net/1721.1/162077" rel="alternate"/>
<author>
<name>Hallinan, Aidan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162077</id>
<updated>2025-07-30T03:07:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Semi-Autonomous, Highly Automated, and Remotely Operated (SAHARO) Nuclear Reactors
Hallinan, Aidan M.
In the United States, comprehensive reactor design certification, site permitting, and operating licensing processes exist to ensure the safe and reliable operation of nuclear power plants (NPPs). Most of these plants have belonged to the same design class: large, centrally located Light Water Reactors (LWRs). Thus, our regulatory processes were tailored for their phenomenology and the unique challenges associated with their operation and maintenance. However, these types of plants may be impractical for specific energy markets, where smaller, non-LWR, highly flexible, and multi-faceted NPPs can be more optimal. The novelty of these designs and their use cases has further inspired new operating paradigms, which will be referred to as Semi-Autonomous, Highly Automated, or Remote Operations (SAHARO) in this thesis. While some of these new reactors have seen limited progress in design certification and licensing efforts under current regulatory practices, there remains little precedent for these novel operating approaches. To facilitate discussion, guide designers, and inspire regulatory progress, I begin by looking at existing regulations, licensing practices, technical guidelines, and other rules that govern the NPP design and operations. I then dive into current applications and discussions of the sub-components of SAHARO, across different technical domains as well as nuclear power, to gather technical, operational, and regulatory insights. To provide reactor design evaluators with an additional tool, I define a Risk-Complexity Score (RCS), which couples simple system complexity quantification with existing risk measures and can support risk-informed system analyses. 
I then conduct an internet network Quality of Service (QoS) test to demonstrate one of the many important considerations for remote operations stress-testing, and I propose an approach for its evaluation within the SAHARO licensing process: the “SAHARO Coping and Minimum Inventory Assessment Strategy.” Lastly, based on my literature and industry reviews, I have constructed a framework that informs reactor designers on how to iterate through the SAHARO-based design process, while also enabling vendor-regulator collaboration and shared learning. Ultimately, I aim to help designers and regulators in the nascent fields of autonomous, automated, and remote NPP operations identify the key questions these technologies and systems must address to ensure safe, effective, and practical application.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralizing Power: Enabling Local Energy Resilience and Equity in Accra</title>
<link href="https://hdl.handle.net/1721.1/162074" rel="alternate"/>
<author>
<name>Kulkarni, Nikita</name>
</author>
<id>https://hdl.handle.net/1721.1/162074</id>
<updated>2025-07-30T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralizing Power: Enabling Local Energy Resilience and Equity in Accra
Kulkarni, Nikita
Over 600 million people in Sub-Saharan Africa lack access to electricity. While Ghana is projected to achieve universal access by 2030, this national milestone obscures lived experiences of energy insecurity—particularly in urban centers like Accra. Despite a reported 91% grid connection rate, only 17% of Accra’s households consider their electricity supply reliable (Afrobarometer, 2022). Traditional, binary metrics—focused solely on grid connection—fail to capture essential social dimensions such as reliability, affordability, equity, and resilience, particularly under intensifying climate and urban pressures. My thesis investigates persistent energy insecurity in Accra, Ghana’s capital, through the lens of dumsor—a term used to describe recurring power outages that disrupt daily life and expose the fragility of the centralized electricity system. Drawing on the frameworks of splintered urbanism and the techno-politics of infrastructure failure, the thesis explores how dumsor reflects institutional fragmentation, political contestation, and inequality in the energy infrastructure space. In response to dumsor, I examine whether decentralized energy systems, particularly solar, can offer a pathway to local energy resilience—defined here as the place-based capacity to withstand dumsor through cleaner, more affordable alternatives for sustainable and reliable power. The study combines a technical assessment of Accra’s solar potential with a critical analysis of policy frameworks, climate finance mechanisms, and political agendas. Grounded in fieldwork and interviews with stakeholders across the energy value chain—from regulators and municipal actors to utilities, solar providers, financiers, residents, and advocacy groups—my thesis identifies on-the-ground barriers to and opportunities for the energy transition. While distributed solar presents a promising alternative with broad reach, persistent challenges in affordability, coordination, and delivery capacity threaten its scalability. Without targeted policy interventions, there is a risk of reinforcing a new form of energy infrastructure splintering—where only the affluent benefit. My thesis concludes that addressing energy insecurity in Accra requires strategic institutional and policy reforms to reconfigure governance, empower municipalities, and enable inclusive financing and policy at the most local level to enable solar alternatives. Energy decentralization offers a promising path forward, but the thesis underscores the ongoing role of the state as a critical enabler of an energy transition that is sustainable and just.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Vacant to Valuable: Building Community Wealth through Brownfield Redevelopment in Legacy Industrial Cities</title>
<link href="https://hdl.handle.net/1721.1/162073" rel="alternate"/>
<author>
<name>Jex, Sara Lynn</name>
</author>
<id>https://hdl.handle.net/1721.1/162073</id>
<updated>2025-07-30T03:07:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Vacant to Valuable: Building Community Wealth through Brownfield Redevelopment in Legacy Industrial Cities
Jex, Sara Lynn
Recent federal investments in domestic manufacturing have renewed economic interest in legacy industrial cities across the United States. As these places attract new development, it is critical to safeguard against repeating the harms of the 20th-century exodus of industry and manufacturing jobs—when offshoring, suburbanization, and discriminatory housing policies deepened spatialized racial and economic inequalities. How can communities retain the wealth generated by new industrial investments, even if companies leave? This thesis explores how industrial brownfield redevelopment might utilize community wealth-building (CWB) strategies to advance equitable economic development. Focusing on the work of the Site Readiness for Good Jobs Fund in Cleveland, Ohio—a nonprofit preparing long-vacant industrial land for job-dense uses—it examines the potential for mission-driven organizations to use brownfield redevelopment to anchor wealth locally and proactively resist displacement. By analyzing case studies in Buffalo, Milwaukee, Chicago, and Philadelphia, the research tackles three questions: How do mission-driven organizations deliver community benefits through industrial brownfield redevelopments? In what ways do CWB models reshape how capital flows through redevelopment projects? And, what questions and decisions must the Site Readiness Fund consider to build lasting community wealth in Cleveland? Findings suggest that industrial brownfield redevelopment, when paired with strategic partnerships, site control, and a clear vision, offers a unique opportunity to implement CWB models. These strategies can help mission-driven organizations redistribute the risks and rewards of necessary public investments in brownfields and build trust with the community, ensuring that residents surrounding these reactivated sites benefit not just from new jobs, but from ownership and long-term economic power over their futures. 
The thesis concludes by applying these lessons to the Site Readiness Fund, outlining potential paths forward that embed economic democracy in the redevelopment of Cleveland’s legacy industrial areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Burning S(e)oul: A Body for Cremation</title>
<link href="https://hdl.handle.net/1721.1/162072" rel="alternate"/>
<author>
<name>Kwun, Namhi</name>
</author>
<id>https://hdl.handle.net/1721.1/162072</id>
<updated>2025-07-30T03:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Burning S(e)oul: A Body for Cremation
Kwun, Namhi
Every year, there are over 70,000 fatalities around Seoul, with only two operating crematoria in the city; that is over 100 bodies a day that each institution needs to process efficiently. By May 26, it will have been six years since my grandfather was gone in those flames. Threading the remnants of mourning, Burning S(e)oul, in the form of a short film, is a dialogue between “absences” of bodies and architecture. It is presented as a triptych along three parallel timelines divided into five tableaux. Narrating the aftermaths of death, it reflects the bereaved, the deceased, and the workers’ perspectives along three mandatory days of grieving. Absence in this paradigm is not solely physical or emotional but rather phenomenological—what appears a quotidian existence of oneself is stripped of its corpse, reaffirming that the inherent genius loci of the crematorium instead reflects a broader influence that institutions have experienced since post-war Korea. It argues that the systematized practice of death processing is an apparatus used to sever the genealogy of individual bodies from their role in affirming personal and communal kinships. Embedded within its architectural design, this alienation dismantles time by shifting the condition of death processes into an engineered state, rather than a historical or material one. This detachment is emblematic of the country’s postwar trajectory, where rapid modernization prioritized efficiency over continuity, severing longstanding rituals that once bound personal grief to communal memory. The friction between an engineered present and an inherited past manifests as a form of cultural desynchronization—one where the ostensibly modern remains haunted by the traditional. This shift extends beyond mere technical or practical concerns; it represents a deliberate method of assimilating a nonlinear societal modernization—one that, in its pursuit of progress, distances itself from historical trauma. 
Yet this tension does not merely mark a transition; it accumulates as a generational melancholy, where the urgency of progress leaves grief suspended in an unresolved state, neither fully severed nor meaningfully preserved.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Risks in Voluntary Forest Carbon Offsets Using Open Data: A Hybrid Framework Integrating Retrieval-Augmented Generation in LLMs and Geospatial Analytics</title>
<link href="https://hdl.handle.net/1721.1/162071" rel="alternate"/>
<author>
<name>Xu, Ziqing (Becky)</name>
</author>
<id>https://hdl.handle.net/1721.1/162071</id>
<updated>2025-07-30T03:07:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Risks in Voluntary Forest Carbon Offsets Using Open Data: A Hybrid Framework Integrating Retrieval-Augmented Generation in LLMs and Geospatial Analytics
Xu, Ziqing (Becky)
The credibility of voluntary carbon markets hinges on the quality of carbon offset projects, particularly in forestry and land-use sectors where claims of additionality and emissions reductions are often disputed. This paper introduces a novel, open-source approach to evaluating carbon offset projects by integrating open datasets, satellite-based remote sensing, and large language models (LLMs). Focusing on additionality and baseline integrity, the study examines existing challenges—including inflated baselines, inconsistent standards, leakage risks, and limited transparency—and proposes a system to automate early-stage project assessment. The platform combines AI-driven document analysis and geospatial data processing to evaluate risk factors such as additionality, leakage, and policy compliance, offering stakeholders an accessible, scalable tool to identify high-integrity carbon credits and mitigate greenwashing. This work aims to enhance transparency, accountability, and trust in the voluntary carbon market through data-driven, user-friendly decision support.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction of Radiation Produced in Ion Implantation Devices, and Measurement of Some Relevant Cross-Sections</title>
<link href="https://hdl.handle.net/1721.1/162067" rel="alternate"/>
<author>
<name>Zangi, Arthur S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162067</id>
<updated>2025-07-30T03:07:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reduction of Radiation Produced in Ion Implantation Devices, and Measurement of Some Relevant Cross-Sections
Zangi, Arthur S.
Ion implantation devices, machines which can very precisely dope semiconductors using beams of accelerated charged particles, have in recent years begun to be used in implanting high-energy light ions, with energies greater than 1 MeV. This has caused unprecedented production of neutron and gamma radiation, particularly of neutrons from the ¹³C(α,n)¹⁶O reaction, creating an unacceptable radiation hazard. To address this issue, we undertake dose mapping and modeling efforts to create simulation tools in Geant4 which can accurately predict dose rates on the Axcelis VXE LT. &#13;
&#13;
Existing physics tools for modeling nuclear reactions have been shown to produce non-physical results at incident particle energies of 1–2 MeV, as these tools are frequently used for modeling reactions which may have energies into the GeV or even TeV range. To address these deficiencies, we construct a new drop-in physics model which uses relativistic kinematic equations to precisely predict the energy and angular distributions of secondary particles produced in Geant4 at low energies. This model relies on accurate cross-section data to describe the reaction; to address gaps in the literature on the two neutron-producing reactions of interest to this work, we measure the angle-dependent cross-section of the ¹³C(α,n)¹⁶O reaction at 7 angles, at the 2.605 and 2.670 MeV resonances, and we measure the total cross-section of the ²⁹Si(α,n)³²S reaction at 2.6 and 2.7 MeV.&#13;
&#13;
By implementing the new physics model and adding new cross-section data to the model of the ion implantation device, we are able to produce a high-fidelity simulation of radiation production and transport in ion implantation devices. Using this tool, we then propose solutions to mitigate radiation production within the ion implanter, reducing the radiation hazards of high energy ion implantation devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiation Effects on Thermal Properties of Advanced Nuclear Materials</title>
<link href="https://hdl.handle.net/1721.1/162066" rel="alternate"/>
<author>
<name>Johnston, Maren</name>
</author>
<id>https://hdl.handle.net/1721.1/162066</id>
<updated>2025-07-30T03:07:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Radiation Effects on Thermal Properties of Advanced Nuclear Materials
Johnston, Maren
Understanding the effects of irradiation on critical thermophysical properties is fundamental for the advancement of next-generation nuclear systems operating in high-flux neutron and gamma environments. Zirconium hydride (ZrH) and yttrium hydride (YH) have emerged as promising neutron moderating materials due to their exceptional hydrogen density leading to superior moderating power. Yet, the radiation-induced microstructural evolution and its correlation to macroscopic thermal transport phenomena remain insufficiently characterized.&#13;
&#13;
In this work, ZrH and YH specimens were characterized pre- and post-irradiation via laser flash analysis, high-resolution dilatometry, and differential scanning calorimetry. Comparative analysis revealed that even low-fluence neutron irradiation induced complex defect clusters that degraded thermal diffusivity, while the crystallographic lattice parameters, vibrational energy states (inferred from thermal expansion measurements), and heat capacity exhibited an inconclusive response to radiation damage.&#13;
&#13;
To address limitations in current characterization methods for large-scale, anisotropic composite nuclear materials, we developed an advanced thermal transport measurement facility using infrared photothermal excitation. This platform enables spatially-resolved thermal diffusivity mapping of silicon carbide (SiC) composites—materials with complex three-dimensional fiber arrangements being evaluated for accident-tolerant fuel cladding applications. Complementary Thermal Conductivity Microscopy (TCM) measurements conducted at Idaho National Laboratory provided microscale resolution of constituent thermal properties, establishing a multi-scale characterization approach that bridges microscopic thermal transport mechanisms with bulk composite performance. These findings advance the qualification of advanced nuclear materials, enabling more accurate thermomechanical modeling and performance prediction under the extreme conditions of next-generation reactors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bragg Coherent Diffraction Imaging of Metal Microcrystals Using a Multipurpose In Situ Cell Design</title>
<link href="https://hdl.handle.net/1721.1/162065" rel="alternate"/>
<author>
<name>Hultquist, Riley J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162065</id>
<updated>2025-07-30T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bragg Coherent Diffraction Imaging of Metal Microcrystals Using a Multipurpose In Situ Cell Design
Hultquist, Riley J.
Structural materials are a key limiting factor in the safety, longevity, and efficiency of nuclear power plants. Advanced metal alloys show great promise for use in reactor environments, but ensuring their reliability requires a fundamental understanding of their microstructural evolution under extreme conditions. In situ X-ray experiments offer a powerful means to investigate nanoscale defect evolution under reactor-relevant conditions. Bragg coherent diffraction imaging (BCDI), a synchrotron X-ray technique, enables high-resolution 3D imaging of degradation processes. Combined with an experimental electrochemical cell, BCDI is a promising tool for providing insight into the problems facing advanced materials in next-generation reactor designs. In this work, a custom-designed electrochemical cell, successfully adapted for use at four beamlines, was developed and used to demonstrate in situ corrosion and hydrogen embrittlement (HE) of nickel (Ni) and copper (Cu) microcrystals. HE experiments confirmed the hydrogen evolution reaction (HER) at Cu surfaces and bulk embrittlement, using a removable silver/silver chloride (Ag/AgCl) electrode to maintain a stable reference potential. The cell’s chemical durability was demonstrated during more than 30 hours of operation, wherein Ni microcrystals were subjected to boric acid (B(OH)₃) and lithium hydroxide (LiOH) to simulate the corrosive coolant chemistry of pressurized water reactors (PWRs). BCDI revealed the evolution of phase and dislocations in a Ni microcrystal under these conditions, affirming its power as a nanoscale measurement tool. Furthermore, BCDI provided direct evidence of lattice expansion in Cu in response to cathodic reduction of hydrogen. Additional analysis reveals a selective beam relaxation effect on Ni microcrystals, providing further insight into radiation-material interactions. 
The findings of this work lay important groundwork for future advanced alloy development utilizing user-friendly in situ experimental cells.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Car-Free Living: Shared Micromobility and Public Transit Interactions in Chicago</title>
<link href="https://hdl.handle.net/1721.1/162064" rel="alternate"/>
<author>
<name>Joyce-Johnson, Seamus C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162064</id>
<updated>2025-07-30T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Car-Free Living: Shared Micromobility and Public Transit Interactions in Chicago
Joyce-Johnson, Seamus C.
Shared micromobility/bikeshare services and public transit both offer travel alternatives to the automobile in urban areas. While these services might be viewed as competitors in the urban mobility space, this thesis argues that each benefits from the other as part of a “package of options” available to the car-free or car-lite urban resident that together provide a comprehensive replacement for auto-mobility. This work centers on the Chicago mobility context. It compares shared micromobility systems in Chicago, Los Angeles, Austin, Pittsburgh, and Washington, D.C., each of which has varying levels of transit integration, ridership, ownership models, and fares. It finds that transit agency ownership of shared micromobility systems appears not to be a panacea and that truly integrated fares are not present even in agency-owned systems. It also finds that lower fares are present in systems with greater levels of public subsidy, regardless of the ownership model. The second part of the thesis characterizes the specific interactions between Divvy, Chicago’s main scooter- and bikeshare system, and the Chicago Transit Authority (CTA). It tests the suitability of novel data sources, including CCTV footage and CTA farecard transactions, for inferring transfers between the two systems and finds that existing spatiotemporal inference methods do not capture the wide heterogeneity in transfer rates among rail stations. Although Divvy has stations near most CTA rail stations, there is room for improvement in the rapidity of these transfers. Using GIS and open-source routing tools, the thesis finds an average walk time of 2.1 minutes from CTA entrances to the nearest Divvy station and suggests high-priority relocations. The third part of the thesis presents preliminary results from a survey of Chicago-area residents probing their attitudes and behaviors regarding shared micromobility and public transit. 
The survey results showed some evidence of complementary use between the two modes. The thesis concludes with a set of recommendations for the CTA regarding improvements in its integration with Divvy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subaltern Spaces in the Ancient City: Cultural Identity, Spatial Memory, and Networks of Meaning in Roman Pompeii</title>
<link href="https://hdl.handle.net/1721.1/162062" rel="alternate"/>
<author>
<name>Dufour, Curtis</name>
</author>
<id>https://hdl.handle.net/1721.1/162062</id>
<updated>2025-07-30T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Subaltern Spaces in the Ancient City: Cultural Identity, Spatial Memory, and Networks of Meaning in Roman Pompeii
Dufour, Curtis
This thesis is about subaltern spaces and identities in the Roman colony of Pompeii—an ancient city notably destroyed and preserved by the eruption of Vesuvius in 79 CE; one that has been widely studied for its preservation of a Roman urban environment that was ‘frozen in time’. The excellent preservation of the site reveals a colonial material record that has long encouraged terminal narratives of Roman acculturation, so-called Romanization, which have devalued the plurality of identities and meanings found in the dispersed spaces and imageries of the ancient city. Rejecting this unilinear narrative of colonization, this thesis instead examines the networks of meaning tied to subaltern spaces, architectures, and imageries of Pompeii under Roman colonial rule. &#13;
&#13;
In doing so, this thesis adopts a middle-range approach to the study of Pompeii’s spaces—giving attention to the distinct elements of the material record while acknowledging their interrelations that form networks of meaning stretching across time, space, and culture. These networks shaped and collated the distinctive spatial and imagistic elements constructed in the city under Roman rule—creating cohesive and legible spaces that recursively engaged with the diverse population of the city. Engaging in a ‘peopling’ of the past—that is, reimagining the lived experiences of subaltern Pompeian residents within the ancient colonial city—this thesis explores how networks of meaning led to the persistence, subsidence, and emergence of subaltern identity spaces within the ancient colonial city—spaces that were erased, appropriated, and peripheralized under Roman colonial rule. &#13;
&#13;
Through a detailed analysis of the networked spaces in the city—employing methodological frameworks from urban planning, social geography, and urban ethnography—this thesis tracks the presence of the proposed networks of meaning attached to subaltern spaces within the spatial and imagistic environment of the Colonia Cornelia Veneria Pompeianorum. In doing so, this thesis finds that the plurality of identity spaces in Pompeii cannot be understood through top-down, unilinear narratives of domination and erasure; rather, they must be apprehended as dynamic social and spatial features wherein subaltern Pompeian identities persisted within the very frameworks intended to marginalize them—producing hybridized spaces, syncretized architectural forms, and alternative discourses of place defined by the networked meanings that made the city legible to the diverse individuals who inhabited it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Parking to Parcels: The Potential for Microhubs in New York City’s Parking Garages</title>
<link href="https://hdl.handle.net/1721.1/162061" rel="alternate"/>
<author>
<name>Fabris-Green, Sarafina</name>
</author>
<id>https://hdl.handle.net/1721.1/162061</id>
<updated>2025-07-30T03:06:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Parking to Parcels: The Potential for Microhubs in New York City’s Parking Garages
Fabris-Green, Sarafina
This thesis employs a site planning and policy perspective to explore how parking garages can serve as last-mile microhubs for e-commerce package deliveries in New York City. During the COVID-19 pandemic, deliveries accelerated, prompting a proliferation of “last-mile facilities,” the destination where parcels go just prior to final delivery. This surge of activity has led residents to raise complaints about trucks and vans driving through their neighborhoods and blocking streets or sidewalks when unloading their goods. In response, New York City government has been forced to think more proactively about the freight supply chain and its impact on the urban environment. New York and other cities have begun experimenting with the use of microhubs. Microhubs are small spaces in which packages are unloaded from vans and trucks onto smaller, more sustainable modes such as cargo bikes and handcarts. A commonly identified but understudied location for microhubs is the parking garage. London stands out as a city with this form of hub. This thesis employs three primary research methods—site observations, interviews, and case studies—to argue that parking garages could provide a solution to better utilize space in dense cities and improve quality of life for residents by reducing the negative impacts of existing last-mile warehouses and delivery vehicles, all while requiring minimal funding. This is shown through an analysis of existing microhub sites in London and how they relate to their urban surroundings. These findings are then applied to two distinct contexts and garage designs in New York City. Finally, the thesis offers site planning criteria that connect land use policy to the design of the facilities and the surrounding public realm through the concept of “planning at the interface.”
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Financing Inclusive Resilience: Beyond the Economics of Infrastructure in Accra, Ghana</title>
<link href="https://hdl.handle.net/1721.1/162060" rel="alternate"/>
<author>
<name>Goyal, Shubhi</name>
</author>
<id>https://hdl.handle.net/1721.1/162060</id>
<updated>2025-07-30T03:07:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Financing Inclusive Resilience: Beyond the Economics of Infrastructure in Accra, Ghana
Goyal, Shubhi
Global infrastructure losses from disasters now exceed an estimated US$700–845 billion annually, disproportionately affecting cities in the Global South (CDRI, 2023). Accra, as a rapidly urbanizing coastal city, faces recurring floods, coastal erosion, and rising vulnerabilities that erode development gains and entrench existing socio-economic inequalities. Climate-related disasters alone cost the city US$118 million in annual losses (CDRI, 2023), disproportionately affecting informal settlements. Infrastructure remains underfunded: the city requires US$37.9 billion annually to meet its infrastructure needs by 2047 (GNIP, 2018), while a US$900 million gap undermines its Climate Action Plan (AMA, 2025). &#13;
&#13;
Despite increased national investment and emerging global climate finance mechanisms, Accra struggles to attract and equitably deploy resources for inclusive resilience (CPI, 2023). Projects like the Greater Accra Resilient and Integrated Development (GARID) project expose systemic issues – prioritizing asset protection over community-centered design, with inadequate participation and social co-ownership (GARID PAD, 2019).&#13;
&#13;
This thesis critically examines how infrastructure financing mechanisms in Accra shape the potential to build inclusive resilience. Mapping the city’s financing landscape, it analyzes how institutional, financial, and governance arrangements influence the selection, distribution, and implementation of investments. Using GARID as a case study, the thesis applies a critical justice framework – drawing on distributive justice (who benefits and who bears the costs), procedural justice (who has voice and decision-making power), and epistemic justice (whose knowledge systems are valued in infrastructure planning) (Carolini, 2022) – to evaluate current infrastructure financing practices and explore opportunities to embed these justices in efforts to build resilience. Findings reveal that infrastructure financing decisions are dominated by centralized donor-driven and ministerial priorities, constrained by fiscal austerity, and evaluated through technocratic frameworks that marginalize community participation and local knowledge. &#13;
&#13;
Ultimately, the thesis argues that building inclusive resilience in climate-vulnerable cities like Accra requires transforming infrastructure financing systems to prioritize social inclusion, participatory governance, and knowledge pluralism – alongside, not subordinate to, economic efficiency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Loop: Climate-Driven Urbanism for America's Climate Migration Hubs</title>
<link href="https://hdl.handle.net/1721.1/162059" rel="alternate"/>
<author>
<name>Wagner, Cale</name>
</author>
<id>https://hdl.handle.net/1721.1/162059</id>
<updated>2025-07-30T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Breaking the Loop: Climate-Driven Urbanism for America's Climate Migration Hubs
Wagner, Cale
As sea level rise and other climate impacts force millions across the U.S. to relocate in the coming decades, how receiving cities accommodate this growth will significantly impact future emissions trajectories. This thesis examines the climate migration feedback loop, where climate migrants relocate to urban areas with carbon-intensive development patterns, inadvertently accelerating the climate change driving their displacement.&#13;
&#13;
Through analysis of three contrasting metropolitan areas—Atlanta, Portland, and Buffalo—this research demonstrates how different development approaches could either perpetuate or disrupt this feedback loop. Using a spatial methodology based on the urban transect model, the study compares Business-as-Usual scenarios that follow current development trends with Climate-Driven Reform scenarios that redirect growth toward transit-accessible, walkable locations.&#13;
&#13;
The research reveals that Climate-Driven Urbanism can meaningfully reduce both land consumption and emissions compared to conventional development patterns. These reductions stem not from technological advancement or behavioral change, but from strategic spatial reorganization of the same migrating population, with each metropolitan area demonstrating unique implementation pathways. By connecting regional migration flows to metropolitan development scenarios and neighborhood design interventions, this thesis offers planners, designers, and communities a framework for evaluating alternative futures that transform population growth from a spatial challenge and emissions liability into a catalyst for sustainable urbanism.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining the Role of City Owned Assets as Multifunctional Infrastructure: Serving Community Needs Through Collaboration</title>
<link href="https://hdl.handle.net/1721.1/162058" rel="alternate"/>
<author>
<name>Smith, Alessandra</name>
</author>
<id>https://hdl.handle.net/1721.1/162058</id>
<updated>2025-07-30T03:06:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reimagining the Role of City Owned Assets as Multifunctional Infrastructure: Serving Community Needs Through Collaboration
Smith, Alessandra
This thesis investigates how city governments can reconceptualize infrastructure to reshape value creation for communities, using the City of Atlanta as a case study. By examining various departments and executive offices within Atlanta’s municipal structure, the research highlights the complexities of urban governance, where value is not uniformly defined or understood even within a single city. The central question guiding this work is: How can Atlanta’s city agencies collaborate across departments to identify opportunities to create more value through city-owned assets?&#13;
&#13;
Through stakeholder interviews and a mapping of publicly owned assets, this thesis explores an alternative, strategic approach to infrastructure: one that supports not only urban planners but also city practitioners seeking to enhance residents’ quality of life through a value-based lens. The study also acknowledges the often overlooked, expanded value of built assets, which remains difficult to capture through conventional metrics. In doing so, it argues for a broader, more inclusive understanding of infrastructure’s role in urban life.&#13;
&#13;
This research offers a framework to view and explore infrastructure and values in a more comprehensive and holistic way compared to traditional methods. The framework centers strategy around prioritizing infrastructure planning, its relative outcomes, the spatial relationships and function of infrastructure, and the relationships that influence how people interact with infrastructure from a value-based lens.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Whose Bronx?” Regime Politics and the Evolution of Community Power at the Kingsbridge Armory</title>
<link href="https://hdl.handle.net/1721.1/162057" rel="alternate"/>
<author>
<name>Phillips, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/162057</id>
<updated>2025-07-30T03:07:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">“Whose Bronx?” Regime Politics and the Evolution of Community Power at the Kingsbridge Armory
Phillips, Natalie
This thesis traces the 30-year history of redevelopment activities at the Kingsbridge Armory in the Northwest Bronx, as community groups have mounted an expanding challenge to development-as-usual in New York City. Using urban regime theory as a lens, I deploy archival research and interviews to assess the tensions that emerge when regime politics collide with a building movement of community power at the Kingsbridge Armory over time. I argue that New York City’s predominant urban economic development regime is not structured to accommodate an organization that is both a grassroots leader and a developer, and that as community power continues to evolve, the regime’s traditional arrangements become increasingly untenable. I ultimately assert that the increasingly structural movement of community power at the Kingsbridge Armory requires a reimagining of the informal processes, logics, and roles that have defined New York economic development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>China Dispossession Watch: Making Visible the Human Costs of Forced Land Expropriation in Urbanizing China</title>
<link href="https://hdl.handle.net/1721.1/162056" rel="alternate"/>
<author>
<name>Wu, Franny Xi</name>
</author>
<id>https://hdl.handle.net/1721.1/162056</id>
<updated>2025-07-30T03:07:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">China Dispossession Watch: Making Visible the Human Costs of Forced Land Expropriation in Urbanizing China
Wu, Franny Xi
This thesis critically examines China's land expropriation regime through a mixed-methods approach that integrates ethnographic investigation, quantitative economic analysis, and practical interventions developed in collaboration with affected communities. Drawing on extensive fieldwork in the Yangtze Delta Region, including 50 in-depth interviews with dispossessed residents, the research documents how China's urbanization strategy systematically captures land value through a dispossession machinery operating at the intersection of state power, market mechanisms, and contested citizenship. The ethnographies reveal a sophisticated system of dispossession enabled by a network of actors whose complementary roles maintain procedural appearances while facilitating extralegal tactics. Quantitative analysis demonstrates systemic under-compensation and value capture that leaves dispossessed households with livelihood disruption and housing insecurity. The research examines how affected communities navigate severe constraints through adaptive resistance strategies to overcome power asymmetries and institutional manipulation, and documents their economic, social, and health outcomes. Moving beyond analysis to practice, the thesis introduces two pragmatic interventions developed through collaborative design with affected communities: a digital humanities platform hosting multimedia ethnographic archives and a quantitative data dashboard; and an anti-displacement handbook which operationalizes research findings into actionable guidance calibrated to the specific challenges identified by community partners. These practical outputs, established as the China Dispossession Watch social venture, reflect a theory of change focused on addressing information asymmetries while building horizontal knowledge networks and long-term movement capacity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An evaluation of the main harbors of Puerto Rico as to their potential for the location of port industries, with special reference to Jobos Harbor</title>
<link href="https://hdl.handle.net/1721.1/161774" rel="alternate"/>
<author>
<name>Martinez-Sandin, Owen.</name>
</author>
<id>https://hdl.handle.net/1721.1/161774</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">An evaluation of the main harbors of Puerto Rico as to their potential for the location of port industries, with special reference to Jobos Harbor
Martinez-Sandin, Owen.
Thesis: M.C.P., Massachusetts Institute of Technology, Department of City and Regional Planning, 1960; Includes bibliographical references.
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theoretical electron work functions of film coated metals</title>
<link href="https://hdl.handle.net/1721.1/161773" rel="alternate"/>
<author>
<name>Levine, Jules David.</name>
</author>
<id>https://hdl.handle.net/1721.1/161773</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">Theoretical electron work functions of film coated metals
Levine, Jules David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1961; Includes bibliographical references (leaves 47-48).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat transfer from immersion heaters to boiling liquids</title>
<link href="https://hdl.handle.net/1721.1/161772" rel="alternate"/>
<author>
<name>Simpson, H. C.</name>
</author>
<id>https://hdl.handle.net/1721.1/161772</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">Heat transfer from immersion heaters to boiling liquids
Simpson, H. C.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1951; Includes bibliographical references (leaves 161-163).
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of degradation rate and crosslink density of artificial skin on wound contraction</title>
<link href="https://hdl.handle.net/1721.1/161761" rel="alternate"/>
<author>
<name>Lee, Elaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/161761</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Effects of degradation rate and crosslink density of artificial skin on wound contraction
Lee, Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1986; Bibliography: leaves 93-94.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Continual Learning for Engineering: Benchmarking and Exploring Strategies for 3D Engineering Problems</title>
<link href="https://hdl.handle.net/1721.1/159944" rel="alternate"/>
<author>
<name>Samuel, Kaira M.</name>
</author>
<id>https://hdl.handle.net/1721.1/159944</id>
<updated>2025-07-08T03:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Continual Learning for Engineering: Benchmarking and Exploring Strategies for 3D Engineering Problems
Samuel, Kaira M.
Engineering applications of machine learning often involve high-dimensional, computationally intensive simulations paired with limited and evolving datasets. As new designs and constraints emerge, models must adapt to incoming data without frequent retraining, which is often infeasible due to the cost of generating engineering data. Continual learning (CL) offers a promising alternative by enabling models to incrementally learn from sequential data while mitigating catastrophic forgetting, in which performance on previously seen examples is lost. This thesis investigates the application of continual learning to regression-based engineering tasks, with an emphasis on surrogate modeling. We begin by benchmarking several foundational CL strategies, including regularization-based and rehearsal-based methods, across five diverse engineering datasets. To support this analysis, we construct nine new regression-focused continual learning benchmarks designed to reflect practical engineering scenarios. Results show that Experience Replay, a simple rehearsal method, consistently achieves strong performance, approaching the "joint training" baseline of retraining from scratch, while substantially reducing computational cost. To further explore how rehearsal strategies can be made more efficient and effective, we propose two adaptive replay methods that prioritize memory samples based on forgetting dynamics. These methods extend previous adaptive replay strategies by using input clustering and representations from TabPFN, a foundation model for tabular data, to guide more informed sample selection without knowledge of experience boundaries. We evaluate their performance on both complex engineering datasets and controlled synthetic tasks. In scenarios where forgetting is unevenly distributed, the adaptive methods offer clear advantages, highlighting the potential for more intelligent replay under constrained resources.
This work positions continual learning as a practical and effective strategy for handling dynamic engineering datasets, and offers new insights into how adaptive replay can enhance efficiency in data-limited, high-cost learning environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Order and Wavelet-Adaptive Immersed Methods for&#13;
PDEs on Complex Domain Geometries</title>
<link href="https://hdl.handle.net/1721.1/159943" rel="alternate"/>
<author>
<name>Shen, Changxiao Nigel</name>
</author>
<id>https://hdl.handle.net/1721.1/159943</id>
<updated>2025-07-08T03:06:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-Order and Wavelet-Adaptive Immersed Methods for&#13;
PDEs on Complex Domain Geometries
Shen, Changxiao Nigel
The development of immersed methods offers a promising solution to the numerical simulation of interface-coupled multi-physics problems, such as multi-phase flows and fluid-structure interactions. This necessitates the design of novel high-order and efficient solvers based on immersed methods. This thesis examines two pivotal aspects of these methods: firstly, the acceleration of computational processes via adaptive resolution strategies; and secondly, the enhancement of accuracy order while sustaining numerical stability. To achieve the former, we develop a novel wavelet transform algorithm applicable to computational domains with arbitrary geometries. This wavelet transform maintains the order of the wavelet and serves as an indicator for local truncation error (LTE), resulting in an adaptive resolution strategy with explicit error control. To address the latter, we introduce a fifth-order upwind finite difference (FD) scheme that sustains numerical stability across any immersed interface discretization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Constrained and High-dimensional Bayesian Optimization with Transformers</title>
<link href="https://hdl.handle.net/1721.1/159942" rel="alternate"/>
<author>
<name>Yu, Rosen Ting-Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/159942</id>
<updated>2025-07-08T03:06:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Constrained and High-dimensional Bayesian Optimization with Transformers
Yu, Rosen Ting-Ying
This thesis advances Bayesian Optimization (BO) methodology through two novel algorithms that address critical limitations in handling constraints and high-dimensional spaces. First, we introduce a constraint-handling framework leveraging Prior-data Fitted Networks (PFNs), a foundation transformer model that evaluates objectives and constraints simultaneously in a single forward pass through in-context learning. This approach demonstrates an order of magnitude speedup while maintaining or improving solution quality across 15 test problems spanning synthetic, structural, and engineering design challenges. Second, we propose Gradient-Informed Bayesian Optimization using Tabular Foundation Models (GIT-BO), which utilizes pre-trained tabular foundation models as surrogates for high-dimensional optimization (exceeding 100 dimensions). By exploiting internal gradient computations to identify sensitive optimization directions, GIT-BO creates continuously re-estimated active subspaces without model retraining. Empirical evaluation across 23 benchmarks demonstrates GIT-BO’s superior performance compared to state-of-the-art Gaussian Process-based methods, particularly as dimensionality increases to 500 dimensions. Together, these approaches establish foundation models as powerful alternatives to Gaussian Process methods for constrained and high-dimensional Bayesian optimization challenges.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Starting Material-Oriented Strategies in Computer-Aided Synthesis Planning With a Bidirectional Search Algorithm</title>
<link href="https://hdl.handle.net/1721.1/159923" rel="alternate"/>
<author>
<name>Yu, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/159923</id>
<updated>2025-07-08T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Starting Material-Oriented Strategies in Computer-Aided Synthesis Planning With a Bidirectional Search Algorithm
Yu, Kevin
Retrosynthesis, in which one proposes a reaction pathway towards a target molecule from simpler starting materials, is a fundamental task in synthetic chemistry. Current computational search methods assume the sufficiency of reaching arbitrary building blocks but fail to address the common real-world constraint where the use of specific starting materials is desirable. To this end, this thesis reformulates computer-aided retrosynthesis as a starting material-constrained problem, in which one or more starting materials are given as input in addition to the target structure. Under this formulation, we are able to apply novel strategies to more efficiently navigate the combinatorial explosion of reactions to consider during synthesis planning. First, we demonstrate how training on multi-step synthesis routes inferred from a reaction database allows a neural network to predict the number of steps needed to synthesize targets from other specified building blocks. Using this learned value function in combination with recent advances in bottom-up synthesis planning, this thesis proposes a novel bidirectional CASP algorithm, DESP (Double-Ended Synthesis Planning). We demonstrate the utility of DESP through a number of empirical benchmarks and case studies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Putting Lipstick on a PIG: Modeling Pine Island Glacier (PIG) Shear Margin Collapse with Compressive Arch Failure and Observations</title>
<link href="https://hdl.handle.net/1721.1/159919" rel="alternate"/>
<author>
<name>Wells-Moran, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/159919</id>
<updated>2025-07-08T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Putting Lipstick on a PIG: Modeling Pine Island Glacier (PIG) Shear Margin Collapse with Compressive Arch Failure and Observations
Wells-Moran, Sarah
Pine Island Glacier (PIG) drains 10% of the West Antarctic Ice Sheet and has undergone rapid change in the observational record, contributing to uncertainty in sea level rise projections. The Pine Island Ice Shelf (PIIS), which provides a key buttressing force that slows the flux of ice across the grounding line, has accelerated 800 m/yr (an approximate 20% increase in speed) between 2015 and 2024, accompanied by a visible increase in damage in the southern shear margin, indicating a partial loss of buttressing. We examine this loss of buttressing to determine the mechanisms through which ice shelves collapse. Buttressing allows an ice shelf to increase in thickness to a point at which the stresses within the ice would exceed the tensile yield strength without the compression provided by buttressing. Following the Compressive Arch Theory proposed by Doake et al. (1998), we hypothesize that when a calving event decouples the ice shelf from a buttressing region, the thicker ice shelf is thrown into tension and rapidly collapses, as happened with the Larsen B Ice Shelf in 2002. We use the Ice-sheet and Sea-level System Model to investigate the instantaneous stress response to loss in buttressing on an idealized glacier, with the goal of finding the changes in shear margin buttressing that most accurately recreate observed changes. In our model, we are only able to replicate observed changes in stress regime by decoupling both shear margins, suggesting the PIIS is currently providing negligible buttressing, allowing PIG to accelerate, thin, and retreat. We construct a timeline of shear margin evolution and collapse over the PIIS from 2015 to 2024 using model outputs of stress field response to changes in buttressing, coupled with observed changes in velocity, effective and principal strain rates, and calving events. Despite losing buttressing from both shear margins, the PIIS is still intact, contrary to our initial hypothesis on compressive arch failure. 
We re-frame Compressive Arch Theory to better capture the timescales involved in loss of buttressing. We posit that compressive arch failure from loss of buttressing on short time scales leads to rapid ice shelf disintegration, whereas compressive arch failure occurring on longer time scales allows the ice to viscously relax, leading to ice shelf thinning instead of collapse. This new framework for investigating loss of buttressing allows us to better assess the stability of ice shelves and more accurately model future Antarctic contributions to sea level rise.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of Zip-formwork and conventional formwork systems for shape-optimized concrete in large scale construction</title>
<link href="https://hdl.handle.net/1721.1/159918" rel="alternate"/>
<author>
<name>Zhuang, Yingjia</name>
</author>
<id>https://hdl.handle.net/1721.1/159918</id>
<updated>2025-07-08T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integration of Zip-formwork and conventional formwork systems for shape-optimized concrete in large scale construction
Zhuang, Yingjia
Cast-in-place concrete production plays a dominant role in the architecture, engineering and construction (AEC) industry, particularly in large-scale projects, contributing significantly to global material consumption, construction costs, and embodied carbon emissions. Shape-optimized concrete has been developed as a solution for more affordable and sustainable construction, using less material to create efficient structures that meet structural demands. Although extensive research and development has focused on applying shape optimization to prismatic concrete beams, these beams are often limited by the constraints of available formwork and are primarily designed as pre-cast components. This paper presents the results of optimizing the Zip-Form, a digitally fabricated formwork system made from mild steel, designed for forming shape-optimized concrete beams, and its integration with conventional formwork equipment. The study evaluates the structural performance, embodied carbon, and cost of the Zip-Form integrated system in comparison to a traditional formwork platform used for prismatic beams. The findings highlight the Zip-Form’s potential for forming shape-optimized concrete beams using cast-in-place methods, making it a viable solution for sustainable large-scale construction projects in the current industry. The methodology outlined in this thesis provides a comprehensive design process, beginning with the structural design of the shape-optimized concrete beams, followed by the design of the Zip-Form integrated formwork system to cast the beams, and concluding with an embodied carbon and cost analysis to evaluate the environmental and financial benefits. This thesis aims to bridge academic research and innovation with practical, real-world applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>First Visible Wavelength Lightcurves for the Northern Hemispheres of Titania and Oberon</title>
<link href="https://hdl.handle.net/1721.1/159899" rel="alternate"/>
<author>
<name>Colclasure, Abigail M.</name>
</author>
<id>https://hdl.handle.net/1721.1/159899</id>
<updated>2025-07-08T03:06:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">First Visible Wavelength Lightcurves for the Northern Hemispheres of Titania and Oberon
Colclasure, Abigail M.
The most recent lightcurves of the large Uranian satellites were published in 1989, and there have been no published lightcurves of the satellites’ northern hemispheres. In this work, I present the first visible-wavelength lightcurves of the northern hemispheres of Titania and Oberon. Observations of the Uranian satellites are inherently difficult given their proximity to Uranus. Contamination from stray Uranian light is a major challenge, and the background near the satellites must be well characterized. I mitigated the effects of stray Uranian light using point spread function photometry. I modeled Uranus with a Lorentzian with the same full width at half maximum as the stellar point spread function. I also determined that Uranus’s profile is poorly modeled with a Gaussian or with the stellar empirical point spread function. After accounting for Uranian light in this way, there remains significant correlation between the photometric measurements of Titania and Oberon. I considered what may be causing this correlation and suggest several paths forward.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Traversing Rugged Domains: Explorations in Non-convex&#13;
Optimization Theory and Software</title>
<link href="https://hdl.handle.net/1721.1/159895" rel="alternate"/>
<author>
<name>Dixit, Vaibhav Kumar</name>
</author>
<id>https://hdl.handle.net/1721.1/159895</id>
<updated>2025-07-08T03:06:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Traversing Rugged Domains: Explorations in Non-convex&#13;
Optimization Theory and Software
Dixit, Vaibhav Kumar
This thesis introduces theoretical and computational frameworks for nonlinear, nonconvex optimization problems in statistics, machine learning, and optimal control. Disciplined Geodesically Convex Programming (DGCP) extends convexity verification to Riemannian manifolds, enabling optimization on curved spaces with global optimality guarantees. We develop rules and atoms for Cartan-Hadamard manifolds, particularly symmetric positive definite matrices, transforming non-convex problems into tractable ones through Riemannian geometry. We also present Optimization.jl, a unified interface for diverse optimization methods that supports specialized implementations for specific problem classes. Its modular architecture integrates automatic differentiation with an extensible plugin system. The framework’s capabilities are demonstrated through a GPU-accelerated hybrid method combining Particle Swarm Optimization with L-BFGS, and an augmented Lagrangian approach with stochastic inner optimizers that connects constrained optimization with machine learning techniques. Our work combines theoretical foundations with practical implementation, providing researchers tools to use advanced optimization methods without specialized mathematical knowledge.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accurate Protein Function Prediction with Graph Transformer-Based Function Localization</title>
<link href="https://hdl.handle.net/1721.1/159880" rel="alternate"/>
<author>
<name>Mitra, Shania</name>
</author>
<id>https://hdl.handle.net/1721.1/159880</id>
<updated>2025-07-08T03:06:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accurate Protein Function Prediction with Graph Transformer-Based Function Localization
Mitra, Shania
Protein function prediction is a fundamental challenge in biology, crucial for understanding biological processes, disease mechanisms, and accelerating drug discovery. While computational methods leveraging sequence or structural information have advanced, accurately translating protein structure to function and pinpointing the specific residues responsible remain significant hurdles. Many existing deep learning approaches fall short, often relying on post-hoc analyses that lack specificity or fail to directly integrate functional site identification into the prediction process. In this study, we introduce the Protein Region Proposal Network (ProteinRPN), a novel graph-based deep learning framework designed to address these limitations. ProteinRPN is the first model to integrate the proactive identification of functional regions within the Gene Ontology term prediction pipeline. The core of the model is a Region Proposal Network module that processes protein structure graphs (residues as nodes, contacts as edges) to identify potential functional regions, termed anchors. These anchors are subsequently refined using a multi-stage process involving a novel differentiable node drop pooling layer that incorporates domain knowledge. A functional attention layer further enhances the representations of predicted functional nodes, and a Graph Multiset Transformer aggregates this localized information into a comprehensive graph-level embedding for final prediction. The model is optimized using a combination of cross-entropy classification loss, supervised and self-supervised contrastive learning losses (SupCon and InfoNCE) for robust representation learning. 
Evaluated on standard benchmarks derived from the DeepFRI/HEAL datasets, ProteinRPN demonstrates state-of-the-art performance, consistently outperforming existing sequence-based and structure-based methods across all three Gene Ontology domains (Molecular Function, Biological Process, Cellular Component) based on standard CAFA metrics (Fmax, AUPR, Smin). Notably, ProteinRPN achieves significant improvements over strong baselines like HEAL, with AUPR (Area under Precision Recall curve) gains of approximately 15.4% (BP), 8.5% (CC), and 1.3% (MF). Furthermore, ablation studies validate the contribution of each key component, particularly the region proposal mechanism. Qualitative analysis confirms the model’s ability to accurately localize known functional residues within protein structures, offering enhanced interpretability. By directly modeling and identifying functionally relevant structural regions, ProteinRPN presents a robust, interpretable, and high-performing approach to structure-based protein function prediction. This work contributes a novel framework that bridges the gap between structural information and functional annotation, offering potential for deeper biological insights and advancing computational tools for understanding the proteome.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Axial vibration of steam turbine buckets</title>
<link href="https://hdl.handle.net/1721.1/159852" rel="alternate"/>
<author>
<name>Ewert, Richard H.</name>
</author>
<id>https://hdl.handle.net/1721.1/159852</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">Axial vibration of steam turbine buckets
Ewert, Richard H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1938; Includes bibliographical references (leaf 56).
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The deflection of steam turbine diaphragms</title>
<link href="https://hdl.handle.net/1721.1/159851" rel="alternate"/>
<author>
<name>Prohl, Melvin Albert.</name>
</author>
<id>https://hdl.handle.net/1721.1/159851</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">The deflection of steam turbine diaphragms
Prohl, Melvin Albert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1938
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new railway labor plan</title>
<link href="https://hdl.handle.net/1721.1/159850" rel="alternate"/>
<author>
<name>Gilman, Jonathan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/159850</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">A new railway labor plan
Gilman, Jonathan C.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1963; Includes bibliographical references (leaves 119-121).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transient analysis of marine steam turbine, propeller and ship dynamics.</title>
<link href="https://hdl.handle.net/1721.1/159848" rel="alternate"/>
<author>
<name>Stang Lund, Emil.</name>
</author>
<id>https://hdl.handle.net/1721.1/159848</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Transient analysis of marine steam turbine, propeller and ship dynamics.
Stang Lund, Emil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1965; Bibliography: leaves 79-91.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strengthening Value Chains for Developing and Deploying Batteries in the Global South</title>
<link href="https://hdl.handle.net/1721.1/159832" rel="alternate"/>
<author>
<name>Munjal, Mrigi</name>
</author>
<id>https://hdl.handle.net/1721.1/159832</id>
<updated>2025-07-01T03:05:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Strengthening Value Chains for Developing and Deploying Batteries in the Global South
Munjal, Mrigi
This thesis presents an integrated assessment of the elements required to strengthen the battery industry in emerging markets. It articulates a synergistic approach to fostering resilient battery value chains that are critical for the sustainable energy transition in the Global South. The first part argues that building a more diversified and secure raw material base is essential for robust battery value chains in developing economies. It establishes the groundwork by proposing a potential pathway to diversify the global lithium supply chain, examining the potential of lithium mining in Arkansas through stakeholder analysis and policy recommendations. The second part underscores the importance of technology adaptation and process innovation in developing cost-effective battery chemistries suitable for the distinct conditions of the Global South. This part of the thesis addresses the technological challenges in scaling up battery production, focusing on sodium-ion batteries (SIBs) as a promising alternative to lithium-ion systems. Through an innovative application of natural language processing, this analysis distills the vast landscape of SIB research to identify scalable solutions for electrode design and manufacturing. The final part of the thesis converges on the deployment aspect of batteries, scrutinizing the role of Battery Energy Storage Systems (BESS) in three distinct emerging markets: India, South Africa, and Malawi. It offers a granular perspective on the application of BESS within varied energy landscapes, advocating for the customization of storage solutions to local market realities. This illuminates the transformative potential of BESS for enhancing grid stability and enabling renewable energy integration, thereby empowering the Global South to leapfrog to a resilient and green energy paradigm.
This thesis coalesces into a comprehensive framework that underscores the multifaceted aspects of value chain enhancement—from mineral sourcing and battery chemistry innovation to end-use applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A computer program for assessing the economics of hie[r]archical systems : an application in engineering project evaluation</title>
<link href="https://hdl.handle.net/1721.1/159437" rel="alternate"/>
<author>
<name>Hagen, Arnulf.</name>
</author>
<id>https://hdl.handle.net/1721.1/159437</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">A computer program for assessing the economics of hie[r]archical systems : an application in engineering project evaluation
Hagen, Arnulf.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1990; Title as it appears in the M.I.T. Graduate List, Feb. 1990: A computer model for assessing the economics of hierarchical systems.; Includes bibliographical references (leaves [73]-85).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BIOSENTERO: Bioinspired Soft Enteroscopic Robot for Facilitating Locomotion, Steering, and Intervention in the Deep Small Intestine</title>
<link href="https://hdl.handle.net/1721.1/159373" rel="alternate"/>
<author>
<name>Jebran, Ahmad Mujtaba</name>
</author>
<id>https://hdl.handle.net/1721.1/159373</id>
<updated>2025-07-09T03:14:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">BIOSENTERO: Bioinspired Soft Enteroscopic Robot for Facilitating Locomotion, Steering, and Intervention in the Deep Small Intestine
Jebran, Ahmad Mujtaba
Diagnosing and treating small intestinal disorders such as bleeding, inflammatory bowel disease, and tumors pose significant challenges due to limitations in accessing this anatomical compartment. To address these challenges, we develop BIOSENTERO, a bioinspired soft enteroscopic robot, to facilitate deep small intestine procedures, which addresses challenges associated with locomotion, steering, and intervention faced by existing soft robotic systems. BIOSENTERO features a hollow-cylinder design consisting of a linearly deformable soft pneumatic actuator as the robotic body, two radially expandable soft pneumatic actuators wrapped with Kirigami sleeves as the robotic head and tail units, a central hollow channel for housing accessory endoscopic tools, and a control box and joystick for navigation. The robot's body is a fiber-reinforced actuator with four inflatable chambers, enabling versatile movements, including axial expansion and contraction and bending over 90 degrees for 360-degree planar access. The dynamic Kirigami sleeve design achieves clinically acceptable friction force on intestinal mucosa with radial expansion, while minimizing tissue distention. A reinforced central channel supports the passage of tools to facilitate diagnostic and therapeutic interventions. A control box supports efficient locomotion and steering, achieving autonomous speeds of ~100 mm/min in vitro and ~43 mm/min in ex vivo intestinal tissue, and an assisted speed of ~200 mm/min in pig studies, without overdistention. Through in vivo pig studies, we demonstrated BIOSENTERO's potential for tissue biopsies, localized drug delivery, and real-time visualization in the deep intestinal region, without causing tissue overdistention and damage.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Wireless Delamination Sensor</title>
<link href="https://hdl.handle.net/1721.1/159372" rel="alternate"/>
<author>
<name>Ghosh, Aniruddha</name>
</author>
<id>https://hdl.handle.net/1721.1/159372</id>
<updated>2025-07-09T03:13:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Structural Wireless Delamination Sensor
Ghosh, Aniruddha
Composite materials, particularly laminated fibre reinforced polymer composites, have gained widespread acceptance in various industries due to their superior strength-to-weight ratio and corrosion resistance. The phenomenon of cracking between plies/laminae of such a layered composite is commonly referred to as delamination and occurs due to various reasons, such as corrosion and fatigue of the structure. Structural integrity can be enhanced by monitoring delamination, ideally with a sensor that can continuously monitor the delamination extent. The delamination sensor proposed in this thesis (termed the Wireless Interlaminar Nano Sensor, or WINS) is an LC resonant circuit (resonant frequency fₛ = 1 MHz) and, unlike prior sensors, consists solely of structural materials: a structural epoxy and carbon nanotubes (CNTs). The delamination crack causes a change in capacitance of the sensor, leading to a change in its resonant frequency. The wireless sensor operation was demonstrated using an LC resonant circuit implemented on a printed circuit board, which is termed the sensor emulator (SE). A wireless sensing circuit and reader provided by Analog Devices Inc. was used for the initial measurements using the SE and, later, the proof-of-concept (PoC WINS) devices. The PoC WINS device is a CNT-polymer nanocomposite based parallel plate capacitor, adhesively bonded between two composite laminates, and connected in parallel to the capacitor of the SE. The PoC WINS device was subjected to loading in the Mode-I configuration to induce delamination crack growth. The quality factor Q of the SE was varied (Q = 18, 3.2, 1.6, 0.8) by adding different external resistors, and a signal was acquired wirelessly for each value of Q as the delamination crack propagated. The wirelessly acquired signal was also sampled (sampling frequency Fₛ = 100 MHz) and analyzed to estimate the resonant frequency of the sensor.
The effect of low sampling frequency was studied by downsampling the acquired signal by a factor of 100. When Q was large (Q = 18), a change of ∼2 kHz in the resonant frequency could be detected, corresponding to a change in capacitance of ∼100 pF. At smaller values of Q ∼ 1, the challenges encountered in wireless signal acquisition were the too-rapid decay of the sensor signal and low signal-to-noise ratio (SNR). A wireless sensing circuit was designed and developed to enable signal acquisition at Q ≤ 1. The SE was used in the feedback system of a modified Armstrong oscillator (MAO) to obtain a sinusoidal signal of constant amplitude (∼1 V, SNR ∼100 dB) even at Q = 0.8. The frequency (f_AO) of the signal wirelessly acquired from the MAO is a non-linear function of the capacitance and the quality factor Q of the sensor and was observed to be in the range of 2 MHz. The MAO was tested for its performance using PoC WINS devices. It was observed that capturing the output signal for a duration of ∼100 µs was sufficient for the accurate estimation of frequency (standard deviation ∼3 Hz). At Q = 0.8 of the sensor, the MAO was able to detect a change in capacitance of 100 pF. To enable the use of a low sampling rate (Fₛ = 1 MHz) for wireless signal acquisition, enhance the sensitivity of detecting a change in capacitance, and provide direct readout of the change in capacitance of the sensor, the MAO was made part of another circuit termed MAO+. In the MAO+, mixer and filter circuits were used to modulate f_AO from ∼2 MHz to ∼180 kHz and then to ∼25 kHz, allowing the use of a sampling frequency as low as 50 kHz to estimate the frequency. A phase-locked loop was made part of the MAO+, which enabled direct readout of the change in capacitance of the sensor through a 4½-digit digital display. The MAO+ was independently tested using PoC WINS devices and was able to detect a change in capacitance (at Q = 0.8 of the SE) of ∼10 pF, corresponding to ∼200 µm of crack advance.
This thesis presents the design, implementation, and operation of a wireless sensing circuit that allows signal acquisition at a low quality factor (Q ≤1) without compromising the SNR, demonstrating the first practical (wireless, made out of structural materials) delamination sensor for advanced composites.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Efficiency Soft-Switched Pulsed Plasma Bias Supply System</title>
<link href="https://hdl.handle.net/1721.1/159371" rel="alternate"/>
<author>
<name>Estrin, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/159371</id>
<updated>2025-07-09T03:14:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">High-Efficiency Soft-Switched Pulsed Plasma Bias Supply System
Estrin, Julia
Radio Frequency (RF) generators play a crucial role as bias voltage sources in plasma-enhanced semiconductor manufacturing processes. Employing pulsed waveforms to generate plasma offers significant improvements in manufacturing precision. However, producing these waveforms is challenging due to the need for high voltages (kilovolt range), high frequencies (hundreds of kilohertz to low megahertz), precise timing, and broadband frequency content. Traditional methods to generate these waveforms are limited by semiconductor voltage ratings, leading to either low-voltage waveforms or complex circuits to achieve higher pulse voltages. This work presents a simple, compact, and efficient method for generating a pulsed bias voltage for plasma processing. The approach involves synthesizing the pulsed waveform at a low, convenient voltage and then using a transformer to step up the voltage to the desired level. A low-leakage inductance coaxial cable-based transformer is developed to provide scaling with sufficient fidelity across a wide frequency range. Zero voltage switching (ZVS) is achieved on all devices, ensuring highly efficient operation. The proposed system is validated through a lab bench prototype that generates pulses of 2.1 kV at a frequency of 400 kHz. Additionally, this system allows for adjustments in pulse duty ratio and slew rate, offering enhanced control and versatility for various applications.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of a Potentially Wearable Device for Circulating Cell Monitoring</title>
<link href="https://hdl.handle.net/1721.1/159370" rel="alternate"/>
<author>
<name>Jang, Kyuho</name>
</author>
<id>https://hdl.handle.net/1721.1/159370</id>
<updated>2025-07-09T03:13:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of a Potentially Wearable Device for Circulating Cell Monitoring
Jang, Kyuho
Monitoring circulating cells is crucial for assessing cancer metastasis and evaluating the efficacy of chimeric antigen receptor (CAR) T-cell therapies. Traditional blood-draw methods face challenges such as discontinuous monitoring and potential cell degradation, leading to inaccurate estimations. In vivo flow cytometry (IVFC), which measures real-time cellular response to laser illumination such as fluorescence, presents a viable alternative. However, its application in humans has been limited by the bulky design of existing devices and configurations unsuitable for larger organisms. This thesis introduces a novel, wearable fluorescence IVFC device tailored for human use, featuring a compact laser diode and silicon photomultiplier (SiPM) to enhance portability and functionality. The device includes a specialized optical system similar to a fluorescent microscope, which optimizes the signal-to-noise ratio by maximizing cellular fluorescence and minimizing background interference. Experimental determination of the limit of detection (LOD) for the SiPM and device establishes their detection capabilities and operational stability. Theoretical evaluations confirm that while the device can detect individual fluorescent cells in vitro, its current configuration does not support this sensitivity in vivo. The thesis also proposes strategies to improve the device’s sensitivity, aiming for reliable in vivo detection of single fluorescent cells.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator</title>
<link href="https://hdl.handle.net/1721.1/159368" rel="alternate"/>
<author>
<name>Lee, Giho</name>
</author>
<id>https://hdl.handle.net/1721.1/159368</id>
<updated>2025-07-09T03:13:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator
Lee, Giho
Despite the transformative advances in artificial intelligence (AI), AI processing hardware has not matched the speed and power-efficiency requirements, restricting the realization of the full potential of AI and requiring innovation in AI hardware. The data transmission bottleneck between memory and processor has been identified as the main source of poor computing speed and power efficiency. By embedding neural weights in hardware to minimize data transmission, non-volatile memory (NVM)-based in-memory computing has been expected to deliver several orders of magnitude in speed and power-efficiency gains. However, its practical implementation as next-generation AI hardware has not been successful due to non-idealities in NVMs, including instability, poor state resolution, challenges in programming, and system-on-a-chip (SoC) incompatibility. This thesis introduces an ultra-accurate and ultra-robust geometrically programmed nano-resistor (GPNR) that can overcome NVM non-idealities and enable a commercial AI accelerator based on analog in-memory computing. The state-of-the-art 6-bit conductance state resolution and 8-bit stability of the nano-resistor were realized by channel geometry optimization and a thermodynamically stable material, while the SoC-incompatible programming required by NVM devices is omitted. To evaluate the computing performance, experimental vector-matrix multiplication (VMM) operations were performed, showing 5-bit accuracy with a 28x28 GPNR array without selectors. Finally, an AI inference simulation was performed with a simplified 5x5 cropped MNIST digit image classification task. The GPNR-based final classification layer demonstrates 91.0% accuracy, comparable to the software limit of 93.2%. The outcomes of this research not only bolster the feasibility of GPNR technology in practical applications but also highlight the potential for future advancements in AI accelerators that can fully harness the capabilities of analog in-memory computing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proton Exchange Membrane Electrolysis Applied to the Dehydration of Cow Milk</title>
<link href="https://hdl.handle.net/1721.1/159366" rel="alternate"/>
<author>
<name>Morice, Peter G.</name>
</author>
<id>https://hdl.handle.net/1721.1/159366</id>
<updated>2025-07-09T03:13:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Proton Exchange Membrane Electrolysis Applied to the Dehydration of Cow Milk
Morice, Peter G.
The dehydration of cow milk to powder form extends product shelf life and reduces product shipping costs and emissions. However, the existing thermal methods commonly employed by the dairy industry produce harmful emissions in the combustion of fossil fuels. This work explores the potential role of an electrochemical alternative method of proton exchange membrane (PEM) electrolysis in the process of concentrating milk solids. Although the thermodynamic specific energy of electrolysis at [mathematical notation] is high compared to existing thermal methods around [mathematical notation], experimental results for PEM electrolysis assisted by mechanical centrifugation suggest a specific energy closer to [mathematical notation] is possible. The energy-competitive PEM electrolysis method has the additional benefit of zero emissions when supplied by renewable energy sources. Analysis of milk solids processed by the electrolysis-assisted method shows promising levels of high fat, mineral, and total protein content, with liquid chromatography quantifying both casein and whey protein types retained in the solid product.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design/System Technology Co-optimization of Gallium Nitride High Electron Mobility Transistors for Next-G 3DIC Heterogeneous Integration of Gallium Nitride and Si CMOS</title>
<link href="https://hdl.handle.net/1721.1/159364" rel="alternate"/>
<author>
<name>Yadav, Pradyot Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/159364</id>
<updated>2025-07-09T03:14:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design/System Technology Co-optimization of Gallium Nitride High Electron Mobility Transistors for Next-G 3DIC Heterogeneous Integration of Gallium Nitride and Si CMOS
Yadav, Pradyot Singh
With data rates pushing into the Tbps range, there is an urgent need for mmWave and sub-terahertz RF front ends and transistors. Gallium Nitride (GaN) transistors have continued to push the limits of high-power-density, high-frequency semiconductor devices. The future of GaN radio frequency (RF) circuit technology is at the intersection of device engineering, advanced packaging, and circuit design. Currently, these are three separate fields with little-to-no communication between them, resulting in critical limitations to today’s technology. These fields need to collaborate, cross-pollinate, and intersect in order to modernize and advance innovation for the next generation of RF front ends. To design the most efficient W-G Band devices and systems, we must embrace a design/system-technology co-optimization (DTCO/STCO) approach that combines innovative GaN transistors with engineered linearity, novel heterogeneous integration with state-of-the-art Silicon (Si) bias and control circuitry, and advanced physics-based modeling. This thesis presents the development of a 3DIC consisting of GaN HEMTs and Si CMOS BEOL, in particular W-band GaN HEMTs, Si CMOS BEOL circuits in Intel16, and advanced packaging of dielets. The full chip continuum is investigated and innovated upon.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Al-Ni Nanofilm Powered Miniature Linear Actuator for Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/159363" rel="alternate"/>
<author>
<name>Cotey, Samuel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159363</id>
<updated>2025-07-09T03:13:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Al-Ni Nanofilm Powered Miniature Linear Actuator for Medical Devices
Cotey, Samuel A.
A medical device is sought to improve the drug delivery options available to healthcare providers and patients; our initial focus is to develop a piston that can provide the power necessary to perform an injection from an ingestible device. While many methods to administer drugs currently exist, the administration method in many cases is largely driven by factors that supersede ease, convenience, or comfort for the patient [1]. Many patients are saddled with cumbersome drug regimens that expose them to the risk of complex and painful drug administration paths and dependence on medical sharps [2, 3]. For these patients, being able to take injectable drugs orally allows them to use what appear to them to be simple, traditional drug delivery methods in lieu of injections that are painful and inconvenient. In order to perform an injection with a device that fits within an ingestible form factor, a novel piston is required. A concept design for an Al-Ni nanofilm powered miniature linear actuator has been developed in order to perform jet injections from within the gastrointestinal anatomy of a patient. This actuator consists of a small pressure vessel filled with liquid alcohol that undergoes a phase change to gas and generates pressure that can be used to cycle a piston in a drug-loaded cylinder. Via an exothermic reaction, the nanofilm deposits thermal energy into the alcohol-filled pressure vessel in order to generate the pressure needed to perform a jet injection. Cylindrical pressure vessel chambers with a diameter of 7 mm and heights ranging from 3 mm to 7.5 mm were 3D printed and used to measure peak internal pressure as well as work output. The piston was used to push incompressible fluid through a nozzle in order to characterize the actuator’s work output. By using Bernoulli’s equation, the pressure on the piston head as a function of piston location along the stroke length was determined to characterize actuator performance as a function of pressure vessel size.
The pressure vessel and the piston were modeled theoretically and empirically in order to identify the relevant design parameters so the piston can be effectively incorporated into the overall injection device.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Droplet Based Microalgae Photobioreactor for Biofouling Prevention</title>
<link href="https://hdl.handle.net/1721.1/159361" rel="alternate"/>
<author>
<name>Callan, Tess A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159361</id>
<updated>2025-07-09T03:14:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Droplet Based Microalgae Photobioreactor for Biofouling Prevention
Callan, Tess A.
Microalgae have a wide variety of applications aiding in sustainability, yet during the cultivation process, photobioreactor biofouling remains an issue. It blocks light from entering the reactor and necessitates reactor cleaning, ultimately reducing overall reactor productivity and increasing cultivation costs. Here we investigate a new type of reactor that removes the possibility of biofouling by growing the algae in aqueous droplets surrounded by oil that preferentially wets the reactor surface. We first look into growing the algae in droplets and discuss major parameters that will be impacted. Then, we show a droplet-based reactor that demonstrates the potential to scale the system with similar growth rates to industry. Finally, we investigate the impact on major costs to confirm the economic viability of transitioning to this reactor. Overall savings in the cultivation process, mainly from power reduction and biofouling prevention, are shown.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston</title>
<link href="https://hdl.handle.net/1721.1/158849.2" rel="alternate"/>
<author>
<name>Proman, Zachary D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158849.2</id>
<updated>2025-06-17T03:13:05Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston
Proman, Zachary D.
This development and business plan considers the neighborhood context and current market conditions characterizing the subject site’s redevelopment potential. The subject site, further defined in this thesis, is a prime parcel of land in the South Boston neighborhood of Boston, MA currently improved and used for quick-serve restaurant operations. Proximate to the Seaport, Fort Point, and Dorchester, South Boston is surrounded by demand drivers resulting in explosive growth that make it one of the most desirable and expensive housing submarkets in the entire City of Boston. Development considerations are fully defined in the report including zoning, equity, financial projections, ground lease, and market-level factors. A conclusion is made on the feasibility of the proposed project with recommendations for next steps resulting from the modeled base-case scenario. Market assumptions and any unresolved development issues are clearly identified and discussed.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of four independently-actuated, complementary-sized control valves and nozzle groups to improve steam turbine efficiency</title>
<link href="https://hdl.handle.net/1721.1/159328" rel="alternate"/>
<author>
<name>Yeaple, Thomas L.</name>
</author>
<id>https://hdl.handle.net/1721.1/159328</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Optimization of four independently-actuated, complementary-sized control valves and nozzle groups to improve steam turbine efficiency
Yeaple, Thomas L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaf 79.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An experimental investigation of magnet geometry and hysteresis on simultaneous lift and guidance ferromagnetic suspensions</title>
<link href="https://hdl.handle.net/1721.1/159325" rel="alternate"/>
<author>
<name>Farley, Holt Leonard.</name>
</author>
<id>https://hdl.handle.net/1721.1/159325</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">An experimental investigation of magnet geometry and hysteresis on simultaneous lift and guidance ferromagnetic suspensions
Farley, Holt Leonard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The impacts of labor-intensive roads in Colombia : a framework for analysis, research design and preliminary test.</title>
<link href="https://hdl.handle.net/1721.1/159323" rel="alternate"/>
<author>
<name>Borrero Mutis, Santiago.</name>
</author>
<id>https://hdl.handle.net/1721.1/159323</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The impacts of labor-intensive roads in Colombia : a framework for analysis, research design and preliminary test.
Borrero Mutis, Santiago.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1978; Bibliography: leaves 150-153.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winter weather types of the eastern North Pacific and adjacent coastal and island areas</title>
<link href="https://hdl.handle.net/1721.1/159313" rel="alternate"/>
<author>
<name>Kosco, George Francis.</name>
</author>
<author>
<name>Dorsett, John O. F.</name>
</author>
<id>https://hdl.handle.net/1721.1/159313</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1940-01-01T00:00:00Z</published>
<summary type="text">Winter weather types of the eastern North Pacific and adjacent coastal and island areas
Kosco, George Francis.; Dorsett, John O. F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1940; Includes bibliographical references (leaves [44]-[45]).
</summary>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aircraft leasing and airline corporate strategy</title>
<link href="https://hdl.handle.net/1721.1/159302" rel="alternate"/>
<author>
<name>Setyopurnomo, Rudy.</name>
</author>
<id>https://hdl.handle.net/1721.1/159302</id>
<updated>2025-12-06T03:20:32Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Aircraft leasing and airline corporate strategy
Setyopurnomo, Rudy.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 130-131).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An interactive statistics package for the social sciences.</title>
<link href="https://hdl.handle.net/1721.1/159299" rel="alternate"/>
<author>
<name>Lebling, Peter David.</name>
</author>
<id>https://hdl.handle.net/1721.1/159299</id>
<updated>2025-12-06T03:20:35Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">An interactive statistics package for the social sciences.
Lebling, Peter David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1973; Bibliography: leaf 92.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>International political behavior: historical analysis of Scandinavia and the Netherlands.</title>
<link href="https://hdl.handle.net/1721.1/159298" rel="alternate"/>
<author>
<name>Deber, Raisa Rebecca Sarah Berlin.</name>
</author>
<id>https://hdl.handle.net/1721.1/159298</id>
<updated>2025-12-06T03:20:34Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">International political behavior: historical analysis of Scandinavia and the Netherlands.
Deber, Raisa Rebecca Sarah Berlin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1971; Bibliography: leaves 176-185.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of dietary protein deficiency during different stages of pregnancy on fetal development and maternal body composition and behavior</title>
<link href="https://hdl.handle.net/1721.1/159296" rel="alternate"/>
<author>
<name>Zartarian, Gary Michael.</name>
</author>
<id>https://hdl.handle.net/1721.1/159296</id>
<updated>2025-12-06T03:20:37Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Effects of dietary protein deficiency during different stages of pregnancy on fetal development and maternal body composition and behavior
Zartarian, Gary Michael.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An examination of private capital available to the railroad industry.</title>
<link href="https://hdl.handle.net/1721.1/159295" rel="alternate"/>
<author>
<name>Wait, Barbara Rust.</name>
</author>
<id>https://hdl.handle.net/1721.1/159295</id>
<updated>2025-12-06T03:20:36Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">An examination of private capital available to the railroad industry.
Wait, Barbara Rust.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1979; Bibliography: leaves 103-106.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of eddies on fCO₂ in the North Pacific surface ocean</title>
<link href="https://hdl.handle.net/1721.1/159266" rel="alternate"/>
<author>
<name>Padalino, Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/159266</id>
<updated>2025-12-06T03:20:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The effect of eddies on fCO₂ in the North Pacific surface ocean
Padalino, Christine
We investigate the impact of mesoscale eddies in the North Pacific on surface ocean fCO₂ using the in-situ measurements from the Surface Ocean CO₂ Atlas (SOCAT) to inform the importance of the mesoscale dynamics on global CO₂ fluxes. We sort SOCAT measurements from 2000-2019 by whether or not they are in an eddy, perform basin scale analysis, and present case studies. The results show lower fCO₂ in both anticyclones and cyclones compared to the background ocean, with the magnitude of the anomaly varying seasonally and spatially. Due to the many potential mechanisms of the eddy impacts, we analyze a temperature normalized fCO₂ to tease apart the impact of altered temperature from a biological response or mixing. With this method, we find evidence that eddies are increasing the background biological activity. To further attempt to separate the different effects eddies could have on surface fCO₂ and CO₂ fluxes, we identify two long-lived eddies with many measurements over their lifetimes to use as case studies. We find that both the anticyclonic and cyclonic eddy initially increase fCO₂, but at the end of the lifetime mixing likely plays a role in counteracting temperature effects. The investigation of the varying effects the mesoscale can have on CO₂ fluxes not only allows for a better understanding of how eddies will affect surface fCO₂ but also provides insight into the potential impact on global scale estimates. Our analysis shows that on average, while mesoscale eddies modulate surface ocean fCO₂, they do not have a detectable enhancement of the CO₂ flux in the North Pacific.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nothing Unwanted: Prototyping Matter out of Place</title>
<link href="https://hdl.handle.net/1721.1/159265" rel="alternate"/>
<author>
<name>Wang, Yiqing</name>
</author>
<id>https://hdl.handle.net/1721.1/159265</id>
<updated>2025-12-06T03:20:29Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Nothing Unwanted: Prototyping Matter out of Place
Wang, Yiqing
What we discard never truly disappears.  &#13;
&#13;
Accompanying the societal shift from post-war scarcity to a consumerist culture, the contemporary building industry relies on abundant virgin materials, machinery, and a global transportation network. Immersed in this culture of convenience, architecture has limited agency to engage responsibly and intimately with reclaimed materials. The design of waste, inevitably, often symbolizes the separation between society and its waste, marked by an intention to remove, re-form, and re-standardize. Zero-waste systems and circular economy often inadvertently create hidden wastes, labor, and carbon footprints, leading to an uneven distribution of environmental harms.  &#13;
&#13;
The thesis explores the unique materiality of municipal waste, linking human living with their unwanted with an architectural prototype. The new "unwanted" architecture integrates local waste into an adaptive inventory, avoiding over-precision, over-purification, and over-modularization. Based on the characteristics of US municipal waste, local-sourced garbage, including e-waste, plastics, wood, paper, metal, dust, and food waste, is studied, calibrated, and assembled to create building components and rooms. The bottom-up approach offers a way to compute heterogeneous materials with digital methods and low-tech on-site operations to minimize environmental impact. The richness of space blurs the boundaries between domesticity and abjection and between the sublime and the disgusting. &#13;
&#13;
The prototypes aim to rebuild both the Functional and Emotional Unwanted and re-imagine a scalable and operable building system. The design contrasts the previously visible waste in architectural design with today's invisible waste stream due to sophisticated waste management. It demonstrates an intimate approach to the gigantic amount of urban waste, emphasizing its cultural, personal, and collective dimensions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mobile Multi-Bounce LiDAR</title>
<link href="https://hdl.handle.net/1721.1/159263" rel="alternate"/>
<author>
<name>Somasundaram, Siddharth</name>
</author>
<id>https://hdl.handle.net/1721.1/159263</id>
<updated>2025-12-06T03:20:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mobile Multi-Bounce LiDAR
Somasundaram, Siddharth
Single-photon avalanche diodes (SPADs) are emerging sensors that can measure the propagation of light in a scene, capturing higher-order reflections, shadows, and light transport that ordinary cameras are unable to capture. Measurement of these multi-bounce light paths is especially useful for non-line-of-sight (NLOS) imaging. The increasing availability of SPAD sensors on mobile devices (e.g. iPhone Pro LiDAR) raises the potential to enable NLOS capabilities on consumer devices in the future. Currently, these sensors are primarily employed for LiDAR-based depth estimation, with untapped potential in other applications. In light of recent advances in SPAD device development, the timing is opportune to revisit the applicability of multi-bounce LiDAR techniques on consumer-grade mobile devices.&#13;
&#13;
This thesis extends the applicability of multi-bounce LiDAR techniques from research-grade SPAD hardware to consumer-grade mobile LiDARs. First, we enable single-shot capture of two-bounce signals and remove the need for laser scanning by developing a tomographic formulation for two-bounce non-line-of-sight imaging. Second, we enable real-time non-line-of-sight capture at eye-safe laser power under object and camera motion. Our approach is inspired by principles from burst photography. &#13;
&#13;
We implement and evaluate the proposed algorithms in simulations and on experimental SPAD hardware. We also demonstrate real-time non-line-of-sight tracking on a consumer-grade smartphone LiDAR. Potential future applications of our results include "X-ray vision" in AR/VR, full-body tracking for AR headsets, room scanning for hard-to-reach areas, collision avoidance for autonomous vehicles, and robotic navigation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward the Understanding of Brain’s Molecular Language</title>
<link href="https://hdl.handle.net/1721.1/159206" rel="alternate"/>
<author>
<name>Zoghi Tavana, Sara</name>
</author>
<id>https://hdl.handle.net/1721.1/159206</id>
<updated>2025-11-20T03:14:57Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Toward the Understanding of Brain’s Molecular Language
Zoghi Tavana, Sara
What underlies the extraordinary capacity of neurons to process information, form memories, and orchestrate complex behaviors? Over a century of research has established that proteins are the central functional molecules of the cell, yet translating this knowledge into an understanding of emergent neural phenomena and effective treatments for neurological disorders remains elusive. We argue that this paradox stems from studying proteins in isolation, overlooking how their function is fundamentally shaped by spatial context and interactions with DNA, RNA, other proteins, lipids, carbohydrates, and metabolites. This coordinated molecular interplay, we posit, ultimately gives rise to the complex neural circuits and behaviors observed in higher organisms. Intriguingly, Alfred Binet foreshadowed this perspective as early as 1889 when he suggested that even simple, single-celled organisms—lacking anatomically defined nervous systems—might harbor a "diffuse nervous system" of molecular interactions within their cytoplasm enabling complex behaviors. However, the historical progression of neuroscience, largely dictated by available methodologies and oscillating between siloed reductionist molecular approaches and systems-level analyses, has not yet been able to fully capture this intricate molecular choreography underlying neural function. In this review, we examine how studying molecular species in isolation, while yielding important insights, has ultimately proven insufficient for understanding emergent neural functions. We propose that recent technological advances in expansion microscopy, molecular anchoring, machine learning-enabled protein detection, and cryo-fixation now make it possible to map molecular networks in their native context. This integrative approach promises to illuminate the molecular "language" of the brain, shedding light on how collective interactions among biomolecules give rise to neuronal emergent abilities—and guide future therapeutic innovations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanism of hydrolysis of triphenylsilyl fluoride</title>
<link href="https://hdl.handle.net/1721.1/159195" rel="alternate"/>
<author>
<name>Esteve Campderá, Ramón María.</name>
</author>
<id>https://hdl.handle.net/1721.1/159195</id>
<updated>2025-11-20T03:14:58Z</updated>
<published>1948-01-01T00:00:00Z</published>
<summary type="text">Mechanism of hydrolysis of triphenylsilyl fluoride
Esteve Campderá, Ramón María.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1948; Includes bibliographical references (leaves 32-33).
</summary>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extractive and azeotropic distillation</title>
<link href="https://hdl.handle.net/1721.1/159194" rel="alternate"/>
<author>
<name>Hughes, Richard R.</name>
</author>
<id>https://hdl.handle.net/1721.1/159194</id>
<updated>2025-11-20T03:14:59Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Extractive and azeotropic distillation
Hughes, Richard R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1947; Bibliography: leaves 94-95.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Railroad reliability and freight car utilization : an assigned fleet model.</title>
<link href="https://hdl.handle.net/1721.1/159190" rel="alternate"/>
<author>
<name>Assarabowski, Richard John.</name>
</author>
<id>https://hdl.handle.net/1721.1/159190</id>
<updated>2025-11-20T03:15:01Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Railroad reliability and freight car utilization : an assigned fleet model.
Assarabowski, Richard John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 132-133.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Brewing Resilience: A Case Study in Adapting Small Business Strategy with Systems Thinking</title>
<link href="https://hdl.handle.net/1721.1/159152" rel="alternate"/>
<author>
<name>Jones, Andrew C.</name>
</author>
<id>https://hdl.handle.net/1721.1/159152</id>
<updated>2025-11-12T05:01:55Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Brewing Resilience: A Case Study in Adapting Small Business Strategy with Systems Thinking
Jones, Andrew C.
This thesis explores how systems thinking—a methodology often reserved for large organizations—can be effectively applied to small businesses facing complex challenges. Using Lamplighter Brewing Co., an independent microbrewery in Cambridge, Massachusetts, as a case study, the research examines how the brewery adapted to the disruptions of the COVID-19 pandemic and the evolving economic landscape that followed. It documents the iterative application of systems thinking principles to identify root causes, leverage points, and actionable solutions to address issues such as declining revenue, rising costs, and misaligned organizational structures.&#13;
Lamplighter's interventions ranged from restructuring its management and marketing teams to pivoting its sales and production strategies. By leveraging tools such as causal loop diagrams and stock-and-flow models, the brewery uncovered systemic dynamics driving its performance. The research highlights the importance of iterative learning, targeted interventions, and holistic analysis in fostering resilience and sustainability in resource-constrained environments.&#13;
While focused on the craft brewing industry, the findings offer transferable insights for small businesses in similarly dynamic sectors, demonstrating that systems thinking can empower smaller organizations to navigate complexity, adapt strategically, and thrive amidst uncertainty.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning-Based Classification of Phonotraumatic Vocal Hyperfunction Severity from Stroboscopic Images</title>
<link href="https://hdl.handle.net/1721.1/159150" rel="alternate"/>
<author>
<name>Balaji, Purvaja</name>
</author>
<id>https://hdl.handle.net/1721.1/159150</id>
<updated>2025-11-12T05:01:41Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Deep Learning-Based Classification of Phonotraumatic Vocal Hyperfunction Severity from Stroboscopic Images
Balaji, Purvaja
Phonotraumatic vocal hyperfunction (PVH) is a vocal disorder characterized by damaged vocal folds from excessive or abusive voice use. Clinical assessment of PVH relies on time-consuming videostroboscopy examination, which poses challenges for large-scale clinical studies. We address the need for more efficient clinical assessment tools by proposing deep learning approaches for automatically detecting PVH severity from stroboscopic images. One of the main challenges in building deep learning models for this task is a lack of labeled stroboscopy data. Motivated by this challenge, we explore two approaches: direct classification and segmentation-then-classification. In the segmentation-then-classification approach, we first train a model to segment the glottis, a clinically relevant part of the vocal fold anatomy. Then, we use the predicted segmentation along with the stroboscopic image as inputs into a classification model. This approach helps to guide the model towards key anatomical features. We achieve up to 0.53 accuracy in four-class PVH severity prediction with the direct classification approach. Incorporating glottal segmentations improves the accuracy to 0.64, underscoring the value of providing anatomically-informed segmentations when assessing PVH severity. By creating an automated PVH severity tool, our work has the potential to help clinicians more efficiently monitor disease progression and to facilitate large-scale screening, thereby contributing to improved patient care.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intuitive Audio Interaction and Control in Multi-Source Environments</title>
<link href="https://hdl.handle.net/1721.1/159149" rel="alternate"/>
<author>
<name>Oduniyi, Erick O.</name>
</author>
<id>https://hdl.handle.net/1721.1/159149</id>
<updated>2025-11-12T05:01:34Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Intuitive Audio Interaction and Control in Multi-Source Environments
Oduniyi, Erick O.
In an increasingly noisy world, managing auditory focus is a persistent challenge. This thesis explores how embodied interactions—primarily head tracking, alongside experiments with gaze tracking, speech commands, and audio-visual segmentation—can enhance user control over complex auditory environments. By linking head orientation to volume adjustments, we investigated whether natural, instinctive movements could serve as intuitive, hands-free mechanisms for isolating and amplifying relevant sounds. User studies revealed that head tracking is effective in structured audio contexts, such as music, where distinct sources are easily separable. However, its utility diminishes in dense, overlapping conversations, highlighting the need for finer control mechanisms. While gaze and segmentation offer promising refinements, cognitive load and system responsiveness remain key challenges. These findings underscore that embodied audio interaction must be adaptive, content-aware, and seamlessly integrated with user intent. This research contributes to human-computer interaction by demonstrating both the potential and limitations of movement-based audio control. Future work should refine multimodal fusion, improve segmentation accuracy, and enhance accessibility to create systems that dynamically respond to users’ natural behaviors—reducing cognitive strain and enabling more fluid, user-centric auditory experiences.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Thread Maturity in Manufacturing: A Cross-Industry Study Using the Model-Based Enterprise Capability Assessment Framework</title>
<link href="https://hdl.handle.net/1721.1/159146" rel="alternate"/>
<author>
<name>Peters, Michael Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/159146</id>
<updated>2025-11-12T05:01:12Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Digital Thread Maturity in Manufacturing: A Cross-Industry Study Using the Model-Based Enterprise Capability Assessment Framework
Peters, Michael Scott
Modern-day manufacturing organizations find themselves in volatile and competitive markets with increasing pressure to deliver products faster, at lower cost, and with increased quality. In response to this pressure, many organizations are considering how technological advancements may improve the efficiency of their product development operations. Leading organizations have digitally transformed their businesses by shifting away from manual processes, static documents, and siloed operations toward automation, model-based data, and interconnectivity enabled by a digital thread. Accordingly, organizations pursuing the competitive edge offered through the digitalization of their business operations have often used different assessment tools to benchmark their current capabilities and define their vision for the future of their organizational operations.&#13;
&#13;
This thesis proposes a set of model-based and digital thread capabilities that are central to the long-term success of product development operations, along with a corresponding maturity model that may be used to identify gaps between current- and future-state capability implementation. Using the proposed capability maturity model, known as the Model-based Enterprise Capability Assessment Framework (MECAF), this study evaluated and compared capability maturity across various organizations in the Aerospace and Defense, Automotive, and Heavy Machinery industries. Through interviews with each participating organization, this thesis also explores the expected benefits, common challenges, and anticipated value of implementing model-based capabilities. Additionally, this thesis proposes an approach to bridging the gap from strategy to implementation based on the lessons learned and best practices of the organizations studied.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Productivity in the Workplace for Product Development Teams</title>
<link href="https://hdl.handle.net/1721.1/159145" rel="alternate"/>
<author>
<name>Farfan Perdomo, Jorge</name>
</author>
<id>https://hdl.handle.net/1721.1/159145</id>
<updated>2025-11-12T05:01:01Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Productivity in the Workplace for Product Development Teams
Farfan Perdomo, Jorge
Productivity is a measure of the value generated for every hour worked. In a product development team, productivity can be affected by endogenous and exogenous factors, such as biological rhythms, work style, availability, work interruptions, team size, location, and the management strategies taken in a project. These factors will have an effect on the amount of effective work value generated in a workweek.&#13;
&#13;
A mathematical model and a Monte Carlo simulation were used to quantitatively assess the impact of these factors on the estimated cost and duration of a product development project. Based on the model results, we determined that workweek capacity and interruptions in the workplace are central to productivity. In addition, we demonstrated that combining different management strategies could be used to bring the project back on schedule and within budget to reduce the effects of these inefficiencies due to diverse endogenous and exogenous factors.&#13;
&#13;
For these reasons, this case study on a product development project will provide insight to engineering managers and project leaders about the effects of these inefficiencies in the workplace. The findings will help pave the way toward a more accurate project estimation and better modeling of project dynamics to reduce the amount of uncertainty in product development teams.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of a Nonblocking Randomized Work Stealing Scheduler</title>
<link href="https://hdl.handle.net/1721.1/159144" rel="alternate"/>
<author>
<name>Ali, Sabiyyah</name>
</author>
<id>https://hdl.handle.net/1721.1/159144</id>
<updated>2025-11-12T05:00:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Implementation of a Nonblocking Randomized Work Stealing Scheduler
Ali, Sabiyyah
This thesis presents FLCN (Free of Locks, Cilk is Now), a nonblocking work-stealing runtime scheduler that supports Cilk multithreaded programming. The existing OpenCilk runtime system uses lock-based synchronization and thus suffers from lock contention, does not provide progress guarantees, and can experience performance degradation with high worker counts and in multiprogrammed scenarios. FLCN leverages the existing runtime system’s provably efficient scheduling algorithm and introduces several new data structures and concurrency protocols to form a correct and performant lock-free system. In addition to enabling fork-join task parallelism, FLCN supports other Cilk features such as reducer hyperobjects. Through analyzing the performance of FLCN on various canonical benchmark programs, I find that for programs with low amounts of work, FLCN performs worse than the existing runtime. However, for most programs, I find that FLCN is either competitive with or marginally outperforms the existing runtime. Additionally, FLCN consistently exhibits higher scalability than the existing runtime, performing especially better when using hyperthreads and in multiprogrammed environments. I also outline future work that could make FLCN a more comprehensive and performant system, including ideas for improving FLCN’s work efficiency that would in turn better its performance on programs with low amounts of work.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Role of Foundation Models for Training Generalist Robot Learning Policies</title>
<link href="https://hdl.handle.net/1721.1/159143" rel="alternate"/>
<author>
<name>Feng, Eugenia Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/159143</id>
<updated>2025-11-12T05:00:31Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Exploring the Role of Foundation Models for Training Generalist Robot Learning Policies
Feng, Eugenia Y.
Numerous methodologies for solving goal-conditioned short-horizon tasks require hundreds of expert demonstrations, but these demonstrations are effort-intensive to collect, reducing the scalability of these approaches. Even with approaches that do work, they may have difficulty generalizing to slightly different settings. In this work, we explore two approaches to training generalist robot learning policies using large-scale foundation models. &#13;
&#13;
The first approach aims to use a video foundation model to generate task-conditioned synthetic demonstrations at scale from a single expert demonstration. The objective is to leverage these synthetic demonstrations as a proxy for expert demonstrations to train models that learn rewards from expert videos for solving complex visual RL problems. &#13;
&#13;
The second approach seeks to improve upon the generalization ability of behavior cloning policies. Moving away from the use of videos for training, we explore using privileged representations such as keypoints or object-poses learned using open-set foundation models. By tracking pose or keypoint correspondences, the aim is to minimize the required number of demonstrations to achieve task completion and improve generalization within classes of objects.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prompt Injection Generation Using Small Language Models with Reinforcement Learning with Artificial Intelligence Feedback</title>
<link href="https://hdl.handle.net/1721.1/159142" rel="alternate"/>
<author>
<name>Gupta, Aneesh</name>
</author>
<id>https://hdl.handle.net/1721.1/159142</id>
<updated>2025-05-20T12:38:32Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Prompt Injection Generation Using Small Language Models with Reinforcement Learning with Artificial Intelligence Feedback
Gupta, Aneesh
Large language models (LLMs) have become an integral part of many fields from customer support automation to research assistants. However, despite their growing adoption, they face significant challenges, particularly when it comes to safety in sensitive contexts. Existing methods like Reinforcement Learning with Human Feedback (RLHF) and keyword filtering have contributed to improving the robustness of these models, but these approaches are very resource-intensive and the models can still be vulnerable to malicious attacks like prompt injections and jailbreaking. One notable limitation in testing defenses against such attacks is the scarcity of appropriate datasets. This thesis investigates the use of small language models (SLMs) to generate goal hijacking messages, a subset of prompt injection messages. Techniques such as LoRA fine-tuning and full fine-tuning of even smaller models are employed for this short-form text generation task. We also introduce a fine-tuned SLM enhanced with Reinforcement Learning with Artificial Intelligence Feedback (RLAIF), which removes reliance on slow human feedback by using faster AI-generated feedback instead. By optimizing the reference model and reward functions, we improve alignment with ground truth prompt injection messages while addressing issues such as mode collapse and overfitting. These findings show promise, and further research is necessary to determine how well the approach can generalize to other domains and perform in real-world scenarios. Future work is likely to focus on multilingual datasets and distributed computation to further extend the applicability and efficiency of the method.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Diffusion Models to Enable Efficient Sampling for Task and Motion Planning on a Panda Robot</title>
<link href="https://hdl.handle.net/1721.1/159141" rel="alternate"/>
<author>
<name>Johnson, Quincy</name>
</author>
<id>https://hdl.handle.net/1721.1/159141</id>
<updated>2025-11-12T05:00:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Learning Diffusion Models to Enable Efficient Sampling for Task and Motion Planning on a Panda Robot
Johnson, Quincy
A search-then-sample approach to bilevel planning in the context of task and motion planning is one method of effectively solving multi-step robotics problems. In this planning framework, high-level plans of abstract actions are refined into low-level continuous transitions by sampling controller parameters associated with each action. Efficiently sampling these parameters remains a significant challenge, as exhaustive searches often become computational bottlenecks, especially for tasks requiring complex or multimodal parameter distributions. Moreover, relying on samplers hand-designed by humans is both impractical and limiting. To address these challenges, we propose using diffusion models to learn efficient sampling distributions from demonstrations. By avoiding the limitations of hand-specified and naïve sampling methods, our approach enhances planning efficiency and achieves superior performance across diverse tasks that require learning multimodal parameter distributions to solve successfully.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, Design, and Assembly of Spring Tires</title>
<link href="https://hdl.handle.net/1721.1/159140" rel="alternate"/>
<author>
<name>Lu, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/159140</id>
<updated>2025-11-12T05:00:17Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Modeling, Design, and Assembly of Spring Tires
Lu, Michael
With a renewed interest in the Moon and the need for autonomous lunar rovers that drive longer distances and operate over extended durations, designing efficient and robust mobility systems is paramount. Created by NASA Glenn Research Center, the spring tire is a compliant airless tire engineered for planetary rover missions in lunar and Martian environments. It consists of hundreds of coiled springs woven together to create a toroidal-shaped mesh wheel that deforms to conform to uneven terrain, providing additional durability and traction. This work aims to apply this technology to two robotic testbeds: ERNEST, an autonomous lunar traversal rover built at NASA Jet Propulsion Laboratory, and IPEx, a lunar regolith mining robot built at Kennedy Space Center. This thesis discusses the modeling of these spring tires with numerical methods along with the design of two spring tire prototypes for use on the aforementioned rover platforms. A streamlined assembly process for these compliant wheels is also outlined, as well as the results of compression testing, rough terrain driving, and drawbar pull testing to assess their performance.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of High Harmonic Fast Waves Interactions in the Scrape off Layer of NSTX-U</title>
<link href="https://hdl.handle.net/1721.1/159138" rel="alternate"/>
<author>
<name>De Levante Rodriguez, Ricardo Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/159138</id>
<updated>2025-05-20T12:38:29Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Study of High Harmonic Fast Waves Interactions in the Scrape off Layer of NSTX-U
De Levante Rodriguez, Ricardo Antonio
High-harmonic fast wave (HHFW) heating experiments in the National Spherical Torus Experiment (NSTX) at Princeton Plasma Physics Laboratory (PPPL) have shown that up to 60% of the injected power can be lost in the Scrape-Off Layer (SOL) when the fast wave is able to propagate in front of the antenna [Hosea, Phys. Plasmas 15, 056104 (2008)]. This work discusses progress in modeling HHFW propagation and losses in the divertor region using more realistic SOL plasmas in the NSTX-U SOL 2D geometry. Previous RF studies assume density is a function only of magnetic flux, decaying exponentially, which may be insufficient to accurately determine the wavefield, especially in the divertor and high-field side plasma regions. In this work, the temperature profile is first evaluated by solving the non-linear heat conduction equation using a finite element approach in the Petra-M workbench assuming axisymmetry. A 2D density profile is then obtained from a prescribed outer midplane radial profile assuming pressure is uniform on a flux surface. This approach results in density and temperature profiles in which the strong asymmetric nature of diffusion is successfully captured. In particular, it is shown that for a parallel to perpendicular heat conduction anisotropy ratio of up to 10⁸ the expected exponentially decaying temperature profile is obtained using a non-linear iterative solver with proper mesh refinement conditions. Furthermore, this work focuses on investigating the effect of the SOL plasma density profile on the fast-wave propagation at different antenna phasings. The simulation results show that the gradient of the midplane density profile affects the wavefield pattern. As the density profile broadens, the wavefield intensity is reduced in the SOL and increased in the core. Finally, HHFW power in the plasma was studied by adding electron-ion collision power dissipation as a proxy for HHFW power deposition. 
The simulation results show that increasing the density gap width between the antenna and the core results in more power deposited in the SOL relative to the core.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable and Sustainable Microwave Power Beaming to Mobile Lunar Surface Assets</title>
<link href="https://hdl.handle.net/1721.1/159137" rel="alternate"/>
<author>
<name>Ng, Chu Pang Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/159137</id>
<updated>2025-11-12T05:00:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Scalable and Sustainable Microwave Power Beaming to Mobile Lunar Surface Assets
Ng, Chu Pang Alex
Lunar missions are hindered by the challenges of maintaining continuous operation, especially during the 14-day lunar night, when solar power sources may be unavailable, causing significant mission delays and limiting efficiency. Frequent returns to charging stations supplied by fixed lunar surface power plants further disrupt workflows and restrict the operational range of lunar vehicles. To address these issues and enhance lunar mission performance, a continuous, secure, and shareable power source is essential. While nuclear power and larger battery systems are viable options for continuous lunar energy supply, they pose challenges such as safety risks, complex deployment, and limited scalability. This thesis focuses on exploring microwave-beamed power systems as a flexible and scalable solution for sustained lunar operations. Ideally, the power source would enable 24/7 operations without requiring vehicles to return to base stations, allowing for unrestricted navigation across the lunar surface, including in permanently shadowed regions (PSRs). In addition, it would support the construction of critical infrastructure, accelerating the development of the lunar economy. This thesis aims to support sustained lunar exploration and infrastructure development by exploring the design space for microwave-beamed power systems under three different demand use cases of increasing scale, loosely corresponding to the three phases of the Artemis program: Local (Shackleton Crater), Regional (navigation between equatorial regions and the South Pole), and Global (entire lunar surface). A case study focused on the YUTU-2 lunar rover investigates alternative architectures for each use case, comparing power beaming from tall towers vs. satellites. Evaluation reveals that the most effective solution for the Local use case is a tower-based approach featuring a single 100 m tower, &gt;10,000 solar modules, and a 1 GHz operating frequency, at a cost of $3.4M/W. 
For the Regional use case, a satellite-based solution is preferred, utilizing 6-7 satellites per plane, 210,000 solar modules, and a frequency of 1.0 GHz, at a cost of $1.7M/W - $1.8M/W. The Global use case also favors a satellite-based approach, employing 6 satellites per plane across 5 polar planes, with varying numbers of solar modules and utilizing a frequency of 1 GHz, at a cost of $0.8M/W. The trade studies showed that larger receiver antenna areas and lower frequencies improve performance and cost-effectiveness. Furthermore, larger microwave-beamed power systems leverage economies of scale, lowering the cost per watt by an average of $1M/W when scaling from the Regional to the Global power system, with potential for further reductions through future expansions.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Expertise Influence on Teamwork in Sustainable Urban Design Workshops through a System Model</title>
<link href="https://hdl.handle.net/1721.1/159136" rel="alternate"/>
<author>
<name>Li, Chen</name>
</author>
<id>https://hdl.handle.net/1721.1/159136</id>
<updated>2025-05-20T12:38:28Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Detecting Expertise Influence on Teamwork in Sustainable Urban Design Workshops through a System Model
Li, Chen
The design of sustainable urban communities near transportation hubs, such as train stations, may play a vital role in enhancing neighborhoods by fostering new jobs, encouraging mixed-use developments, and promoting a cleaner environment. The engagement of experts and non-experts is often promoted as part of the urban planning process, yet workshops, while motivating, do not necessarily affect the systems design and long-term sustainability of the neighborhood in a substantive way.&#13;
 &#13;
Prior studies present methods for detecting teamwork during the design of complex systems, including model-based co-creation and urban design workshops. While interactive model-based workshops promote increased engagement of non-experts, the traditional role of experts in framing the design options and the workshop dialogue remains. This thesis research seeks to examine how expertise shapes decision-making in urban sustainability contexts using enhanced system models. &#13;
 &#13;
The research approach focuses on sustainable urban design workshops for compact city development, following three key steps.  First, a neighborhood system model incorporating a commute flow simulator is developed to support collaborative exploration and design decision-making processes. Second, during a pilot experimental workshop, participants are divided into control and treatment groups, challenged to design a vibrant community with economic, social, and environmental benefits. The treatment group receives an expert-proposed, advocated solution to assess its impact on exploration and decision-making. Finally, results are analyzed using Large Language Models (LLMs) and statistical methods to assess how expert-driven solutions impact teamwork collaboration, decision-making speed, and final design alignment with the advocated solution.&#13;
&#13;
While the pilot workshop primarily serves to validate the approach and test the methodology, conclusive results cannot be drawn due to its exploratory nature. Nevertheless, this research successfully developed a robust urban design system model, enabling stakeholders to generate innovative solutions that foster a thriving community. Additionally, it established a methodology to advance the understanding of expertise in teamwork dynamics, laying a strong foundation for future studies in teamwork analysis and urban design challenges.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Multi-Z Impurity Transport in Tokamaks using Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/159133" rel="alternate"/>
<author>
<name>Johnson, Jamal</name>
</author>
<id>https://hdl.handle.net/1721.1/159133</id>
<updated>2025-05-20T12:38:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Investigation of Multi-Z Impurity Transport in Tokamaks using Neural Networks
Johnson, Jamal
Achieving clean, sustainable energy at scale is a pressing global challenge. Fusion of light elements holds significant potential to address this critical need. While only experimental fusion reactors are currently operational, significant progress is being made in the research and design of near-future tokamak fusion power plants. Reactor success will depend on a comprehensive understanding of heat and particle transport, including the role of impurities. This thesis focuses on the development of machine-agnostic neural network surrogates for TGLF, designed to predict impurity transport coefficients alongside heat and electron particle fluxes in DD plasmas. Training data are derived from synthetic fluxes generated for L, H, and I confinement modes in Alcator C-Mod, DIII-D, and ASDEX-Upgrade. To reduce training complexity, shot data are discretized by radius, and networks are developed at six ρ coordinates: 0.2, 0.4, 0.6, 0.7, 0.8, and 0.9. Fifteen plasma parameters are selected as inputs to the neural networks after examining TGLF flux sensitivities across all five output channels. Predicted impurity fluxes for arbitrary charge states and masses, ranging from 4He to 184W, are used to derive diffusive and convective transport coefficients. Three types of synthetic TGLF data are created and applied to network training to produce accurate models. The primary synthetic data type approximates experimental data by sampling within a perturbation range of ±10% around a given shot. Supporting data types enhance network performance by improving trends in single-parameter (1D) scans and addressing areas of highest network uncertainty. Hyperparameter optimization and testing resulted in highly accurate networks. Testing set relative errors averaged over ρ = 0.4–0.7 and 0.9 show approximate deviations of 0.12 ± 0.029 for heat flux and 0.42 ± 0.095 for particle flux channels. 
However, error metrics at ρ = 0.2 and 0.8 require location-specific tuning and potentially more data to match the accuracy achieved at other radii. The networks are used to analyze boron and carbon impurity peaking within machine-specific H-modes. Their predictions are then compared to published results. Qualitative results for boron peaking correlations in ASDEX-Upgrade are clearly reproduced, while carbon peaking trends in DIII-D are weaker. Sparse DIII-D data, which also include atypical advanced modes, are believed to have contributed to reduced accuracy in these cases. Using H-mode shots spanning low to high local collisionality, impurity diffusion trends with charge state (Z) in ITG- and TEM-dominated plasmas were examined, showing good agreement with published studies. Additionally, analysis of network-derived convective transport shows that Z-sensitivity increases with collisionality. Network scans of the ion and electron heat flux responses to temperature gradients also reveal the clear presence of a critical gradient at all radii. These results demonstrate that the neural networks developed in this work can reliably reproduce TGLF results and deliver fast predictions of heat, electron particle, and impurity transport in tokamaks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Mining &amp; Regenerative E-Waste Ecosystems: Visions towards Sustainable Entrepreneurial Futures for Informal Settlements and Recycling Communities</title>
<link href="https://hdl.handle.net/1721.1/159131" rel="alternate"/>
<author>
<name>Pierre, Georine</name>
</author>
<id>https://hdl.handle.net/1721.1/159131</id>
<updated>2025-05-20T12:38:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Urban Mining &amp; Regenerative E-Waste Ecosystems: Visions towards Sustainable Entrepreneurial Futures for Informal Settlements and Recycling Communities
Pierre, Georine
In the face of the growing challenge of urban waste, especially within rapidly expanding informal settlements projected to house over 45% of the global population by 2050 (United Nations Department of Economic and Social Affairs, 2022), innovative solutions are imperative. The thesis proposes a paradigm shift towards urban mining, emphasizing the significant value embedded in discarded electronics—where a tonne of circuit boards can hold ten times more precious metals than traditional ore (Minnesota Center for Environmental Advocacy, 2022). The global distribution of off-shored e-waste has led to the emergence of informal settlements that depend on e-waste recovery to support livelihoods and income generation. These communities have become prime examples for urban mining, embracing circular economic strategies to find adaptive ways to repurpose e-waste. Accra, Ghana’s Old Fadama, home to one of the largest e-waste sites in the world, has become a vital economic hub for informal e-waste processing.  With a population of over 100,000 dwellers, local and migrant workers have built resilient communities through innovative recycling practices, tech repairs, and DIY digital fabrication methods. However, they face imminent environmental risks, health hazards, and displacement threats.&#13;
&#13;
Focusing on Old Fadama, the thesis will address the narratives of urban mining communities and look toward a systematic sympoiesis between economic, environmental, and social realities. By doing so, the thesis seeks to answer how we can foster nurturing and circular relationships for informal settlements and develop regenerative ecosystems for urban mining in the city environment. Integrating field research, case study, and implementation, the thesis will: conduct key urban analysis for understanding e-waste sites and urban mining communities; identify technology interventions and policy recommendations that can improve local conditions; and utilize data-driven communication to advocate for new opportunities for urban systems tied to e-waste extraction through immersive multimedia as part of a public exhibition.&#13;
&#13;
Using a novel methodology, the thesis adopts the learnings from the economic, physical, and community-based interventions observed in informal e-waste recovery processes. The thesis combines quantitative data from satellite imagery and remote sensing with qualitative insights gathered through crowdsourced GIS mapping, films, interviews, and creative capacity-building workshops. These combined insights aim to enhance urban models, nurturing the innovation potential already present within urban mining communities. The thesis research will contribute to the previous work of MIT City Science Group’s “Power of Without” initiative, a comprehensive roadmap for understanding and collaborating with informal settlements and proposing non-Western decentralized infrastructure solutions. The thesis aims to provide practical insights for implementing innovations in urban mining communities by developing sustainable e-waste recovery strategies and supporting micro-industries in cities, which could serve as a model for similar contexts globally.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Blockchain Technology for Enhancing Genomic Data Management: A Multidisciplinary Framework for Privacy, Trust, Identity Protection, and Equity</title>
<link href="https://hdl.handle.net/1721.1/159129" rel="alternate"/>
<author>
<name>Niu, Yuner A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159129</id>
<updated>2025-05-20T12:38:18Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Leveraging Blockchain Technology for Enhancing Genomic Data Management: A Multidisciplinary Framework for Privacy, Trust, Identity Protection, and Equity
Niu, Yuner A.
The effective adoption of blockchain technology in genomic data management is influenced not only by its technical advantages but also by external factors such as regulatory conditions and the demands of consumers and patients. This thesis explores the critical factors required for blockchain platforms to thrive in managing genomic data, focusing on how these systems can be structured to address the high-priority needs of various stakeholders, including patients, healthcare providers, regulators, and researchers. Through a comprehensive examination of privacy, security, regulatory compliance, and equity concerns, the research develops a multidisciplinary framework that balances technological innovation with real-world stakeholder expectations. By conducting an in-depth stakeholder analysis and analyzing existing blockchain platforms used for genomics, the thesis presents a roadmap for creating blockchain solutions that are both technologically viable and aligned with the complex social, legal, and ethical landscape of genomic data management. This framework aims to maximize value for all stakeholders while mitigating associated risks, positioning blockchain as a viable tool in the future of personalized medicine.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Representation Learning for Predicting Genetic Perturbation Effects on Single Cells</title>
<link href="https://hdl.handle.net/1721.1/159128" rel="alternate"/>
<author>
<name>Liu, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/159128</id>
<updated>2025-05-20T12:38:17Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Causal Representation Learning for Predicting Genetic Perturbation Effects on Single Cells
Liu, Emily
Advances in sequencing technologies have significantly deepened our understanding of gene regulation in cells. Among these, Perturb-seq has emerged as a powerful technique, enabling high-resolution profiling of transcriptomic responses to genetic perturbations at the single-cell level. Such insights have profound implications for functional genomics and the identification of therapeutic targets. This thesis investigates the efficacy of mechanistic computational models for predicting the effects of previously unseen genetic perturbations on cellular expression profiles. While existing deep learning approaches excel at interpolating within observational data, they often struggle to extrapolate to novel perturbations. To address this limitation, this study introduces a hybrid framework that integrates a linear causal model, grounded in the gene regulatory network, with variational deep learning techniques.&#13;
&#13;
The proposed mechanistic model utilizes a learned gene regulatory network to represent perturbational effects as shift interventions that propagate through the network. This approach operates within a low-dimensional gene space, effectively capturing the essential information needed to reconstruct full transcriptomic profiles. By incorporating this mechanistic causal model into a variational autoencoder (VAE), the framework generates detailed and comprehensive transcriptomic responses while maintaining the capacity to handle noisy, large-scale single-cell data.&#13;
&#13;
Two deep variational architectures are explored within this framework, corresponding to different output distributions. The single-cell variational inference (SCVI) architecture, employing a zero-inflated negative binomial output distribution, demonstrates challenges in learning perturbational data distributions. In contrast, a standard VAE architecture with a Gaussian output distribution on normalized gene expressions, when paired with the structural causal model, achieves superior performance compared to current state-of-the-art methods. This hybrid approach, termed the Single-Cell Causal Variational Autoencoder (SCCVAE), demonstrates robust capabilities in both interpolation and extrapolation.&#13;
&#13;
For observed perturbations, the SCCVAE framework reveals latent representations that identify functional perturbation modules and simulate single-gene knock-down experiments across varying penetrance levels. These findings highlight SCCVAE as a powerful tool for interpreting and predicting perturbational responses at the single-cell level, advancing the integration of causal and variational approaches in computational biology.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Economic Advantage Calculator: An Extension of the Quantum Tortoise and Classical Hare Framework</title>
<link href="https://hdl.handle.net/1721.1/159127" rel="alternate"/>
<author>
<name>Mejia, Frederick</name>
</author>
<id>https://hdl.handle.net/1721.1/159127</id>
<updated>2025-05-20T12:38:16Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Quantum Economic Advantage Calculator: An Extension of the Quantum Tortoise and Classical Hare Framework
Mejia, Frederick
For some algorithmic problems, quantum computation has the potential to provide enormous speedups over classical computers. However, the drastic slowdowns associated with running error-free quantum hardware make achieving these theoretical advantages challenging. Researchers and industry leaders planning for the future would benefit from understanding when it will be both feasible and advantageous to switch to quantum computing platforms. This thesis builds on the framework by Choi, Moses, and Thompson (2023) to evaluate the feasibility and timeline for achieving Quantum Economic Advantage (QEA)—the point at which quantum hardware can outperform comparably-priced classical machines for specific computational tasks. This thesis substantially extends and deepens this framework and introduces a calculator to make these analyses accessible. The model incorporates parameters from quantum hardware vendors, such as physical-logical qubit ratios and overall connectivity, alongside the computational complexities of specific problems, to estimate the year of QEA. Most of the parameters in the tool are freely adjustable, allowing users to explore how varying assumptions about quantum improvement and technological advancement influence the projected timeline for QEA.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structure, Function, and Interaction in Protein Language Models</title>
<link href="https://hdl.handle.net/1721.1/159126" rel="alternate"/>
<author>
<name>Zheng, Jared</name>
</author>
<id>https://hdl.handle.net/1721.1/159126</id>
<updated>2025-05-20T12:38:15Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Structure, Function, and Interaction in Protein Language Models
Zheng, Jared
In recent years, transformer architectures have shown remarkable capabilities in learning meaningful representations from text and images. This approach has been extended to the realm of protein sequences through pretrained protein language models, which have excelled in various protein engineering tasks. In this thesis, we investigate a pretrained protein language model's ability to predict protein structure and the effects of mutations. For many advanced protein understanding tasks, such as predicting protein function and protein-protein interactions, fine-tuning of the model is essential. We explore methods to fine-tune the Evolutionary Scale Modeling (ESM2) model, a pretrained protein language model, for predicting protein functions structured as Gene Ontology (GO) terms and predicting protein-protein interactions. Notably, we develop a novel method of modeling the hierarchy constraint in GO term prediction that improves training convergence and test performance while making the model hierarchically consistent with GO. This research aims to enhance our understanding of protein language models in decoding complex biological information, thereby contributing to advancements in computational biology.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Healthcare Agents: Large Language Models in Health Prediction and Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/159124" rel="alternate"/>
<author>
<name>Kim, Yubin</name>
</author>
<id>https://hdl.handle.net/1721.1/159124</id>
<updated>2025-05-20T12:38:13Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Healthcare Agents: Large Language Models in Health Prediction and Decision-Making
Kim, Yubin
Large Language Models (LLMs) are transforming healthcare, yet utilizing them for clinical applications presents significant challenges. In this thesis, we explore two critical aspects of healthcare AI: (1) leveraging LLMs for multimodal health prediction from wearable sensor data and (2) developing a collaborative AI framework for medical decision-making. We first introduce a Health-LLM framework that performs multimodal fusion of temporal physiological signals from wearable devices with contextual metadata to predict health outcomes. By implementing novel context enhancement strategies, our framework demonstrates significant improvements in prediction accuracy across multiple health domains compared to existing benchmarks. Furthermore, we present MDAgents, an adaptive framework that optimizes multi-agent LLM collaboration for complex medical reasoning tasks. MDAgents dynamically configures agent roles and interaction patterns based on task complexity, implementing a hierarchical consensus mechanism that emulates clinical team dynamics. Through comprehensive evaluation on medical diagnosis and reasoning tasks, MDAgents exhibits superior performance in multimodal medical reasoning compared to single-agent approaches. Our findings demonstrate that LLMs, when architected for multimodal integration and strategic collaboration, can serve as robust agents in healthcare systems, advancing both preventive medicine through continuous health monitoring and clinical decision support through distributed AI reasoning.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategizing against online learners in normal form repeated games</title>
<link href="https://hdl.handle.net/1721.1/159121" rel="alternate"/>
<author>
<name>Assos, Angelos</name>
</author>
<id>https://hdl.handle.net/1721.1/159121</id>
<updated>2025-05-20T12:38:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Strategizing against online learners in normal form repeated games
Assos, Angelos
With the advent of machine learning and AI, learning algorithms are becoming more and more prevalent in online learning settings, where sequential decision-making is required. In such settings, the decisions of each agent can affect the utilities (or losses) of the other agents, as well as influence the decisions made by other agents later on in the interaction. Therefore, if an agent is good at anticipating the behavior of the other agents, in particular how they will make decisions in each round as a function of their experience thus far, he could try to judiciously make his own decisions over the rounds of the interaction so as to influence the other agents to behave in a way that ultimately benefits his own utility. In this thesis, we study repeated two-player games involving two agents: a learner, which employs an online learning algorithm to choose his strategy in each round; and an optimizer, which knows the learner's utility function, parameters, and online learning algorithm. The optimizer wants to plan ahead to maximize his own utility while taking into account the learner's behavior. We study this setting in zero-sum and general-sum games. In zero-sum games, we provide algorithms for the optimizer that can efficiently exploit a learner that employs a specific online learning algorithm in discrete and continuous-time dynamics. Specifically, the learner employs the Multiplicative Weights Update (MWU) algorithm for the discrete-time games, and the Replicator Dynamics in the continuous-time games. In general-sum games, we provide a negative result. Our negative result shows that, unless P=NP, there is no Fully Polynomial Time Approximation Scheme (FPTAS) for maximizing the utility of an optimizer against a learner that best responds to the history in each round. 
We additionally provide exponential-time algorithms for strategizing against a learner that uses MWU, as well as a new way of thinking about strategizing against online learners via the calculus of variations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Semantically Grounded, Long Horizon Planning&#13;
and Execution for Autonomous Agents</title>
<link href="https://hdl.handle.net/1721.1/159120" rel="alternate"/>
<author>
<name>Covarrubias, Lucian</name>
</author>
<id>https://hdl.handle.net/1721.1/159120</id>
<updated>2025-05-20T12:38:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enabling Semantically Grounded, Long Horizon Planning&#13;
and Execution for Autonomous Agents
Covarrubias, Lucian
Robots have been playing an ever-increasing role in complex environments, often in coordination with teams of systems or humans. Autonomous systems of the future will need to be tightly grounded in the real world, drawing information directly from their environment to develop an understanding of the world. They will need to maintain a semantic understanding of their environment, including the kinds of objects they observe and their relationships to each other. At the same time, they must be able to reason over diverse constraints related to their tasks, such as time limits and resource usage. While there are existing approaches which enable robots to execute tasks with semantic goals, such as finding a certain type of object in a room, they often fail to consider the multitude of task-specific constraints which are vital to robust performance. On the other hand, planners which consider task-specific constraints require a human to provide all information about the environment manually. These systems are too cumbersome to model complex tasks, requiring hours of manual effort that is prone to errors. This thesis presents an architecture for semantically grounded planning which leverages the strengths of constraint-based planners while automating the environmental modeling step with an advanced semantic perception engine. By automating environmental modeling, we are able to create a system which executes complex semantically grounded tasks, such as navigating to certain objects within a certain room, without the major user input typically required of these systems.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformers as Empirical Bayes Estimators The Poisson Model</title>
<link href="https://hdl.handle.net/1721.1/159119" rel="alternate"/>
<author>
<name>Jabbour, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/159119</id>
<updated>2025-05-20T12:38:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Transformers as Empirical Bayes Estimators The Poisson Model
Jabbour, Mark
We study the ability of transformers to perform In Context Learning (ICL) in the setting of Empirical Bayes for the Poisson model. On the theoretical side, we demonstrate the expressibility of transformers by formulating a way to approximate the Robbins estimator, the first empirical Bayes estimator for the Poisson model. On the empirical side, we show that transformers pre-trained on synthetic data can generalize to unseen priors and sequence lengths, outperforming existing methods like Robbins, NPMLE, and ERM monotone in efficiency and accuracy. By studying the internal behavior of the representations of the intermediate layers of these transformers, we found that the representation converges quickly and smoothly over the layers. We also demonstrate that it’s unlikely transformers are implementing the Robbins or NPMLE estimators in context.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lifting 2D Vision Models into Structured Scene Representations</title>
<link href="https://hdl.handle.net/1721.1/159118" rel="alternate"/>
<author>
<name>Tang, George</name>
</author>
<id>https://hdl.handle.net/1721.1/159118</id>
<updated>2025-05-20T12:38:09Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Lifting 2D Vision Models into Structured Scene Representations
Tang, George
Intelligent agents can leverage structured scene representations capable of capturing object compositionality, affordances, and semantics as a world emulator. However, 3D scene data is limited, rendering supervised and self-supervised methods ineffective. Recent advances in 2D foundation models exhibit remarkable performance and generalization. Concurrently, several works have demonstrated lifting feature maps produced by these models into a 3D feature representation. This thesis further explores how lifting can be effectively employed to construct structured scene representations with pixel-level fidelity.&#13;
&#13;
Learned scene representations such as NeRF and Gaussian Splatting do not support additional functionality besides novel view rendering. The world is compositional: a scene can be described in terms of objects. Correspondingly, we present a lifting solution for efficient open-set 3D instance segmentation of learned scene representations. Compared to previous approaches, our solution is more than an order of magnitude faster and can handle scenes with orders of magnitude more instances.&#13;
&#13;
Toward identifying affordances, we tackle the problem of zero-shot mesh part segmentation. Learning-based mesh segmentation does not generalize due to a lack of diverse mesh segmentation datasets, while traditional shape analysis methods are overfitted to previous benchmarks. We present a lifting solution for mesh part segmentation that overcomes these limitations, showing comparable performance to top-performing shape-analysis methods on traditional benchmarks while exhibiting much better generalization on a novel mesh dataset curated from an image-to-3D model.&#13;
&#13;
Beyond feature fields, lifting can be used for a variety of applications, including scene understanding and editing. However, current lifting formulations are inefficient and often exhibit additional unintended modifications. To address these deficiencies, we generalize lifting to semantic lifting, which incorporates per-view masks indicating relevant areas. These masks are determined by querying corresponding per-view feature maps derived from feature fields. However, it is impractical to store per-view feature maps, and the scene representations can be expensive to store and query. To enable lightweight, on-demand retrieval of pixel-aligned relevance masks, we introduce a Vector Quantized Feature Field. We demonstrate the effectiveness of semantic lifting with our method on complex indoor and outdoor scenes from the LERF dataset.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Affordance-Based Generation for 3D Generative AI</title>
<link href="https://hdl.handle.net/1721.1/159117" rel="alternate"/>
<author>
<name>Wang, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/159117</id>
<updated>2025-05-20T12:38:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Toward Affordance-Based Generation for 3D Generative AI
Wang, Sean
Recent advances in 3D content creation with generative AI have made it easier to generate 3D models using text and images as input. However, translating these digital designs into usable objects in the physical world is still an open challenge. Since these 3D models are generated to be aesthetically similar to their inputs, the resulting models tend to have the visual features the user desires but often lack the functionality required for their use cases. This thesis proposes a novel approach to generative AI in 3D modeling, shifting the focus from replicating specific objects to generating affordances. We trained models that allow users to create point clouds that satisfy affordances: physical properties that describe how an object should behave in the real world. By ensuring that the generated objects have the expected affordances, we explore how existing tools can be augmented to generate 3D objects whose functionality is consistent with their appearance.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Fine-Tuning Techniques for Removing&#13;
Tamper-Resistant Safeguards for Open-Weight LLMs</title>
<link href="https://hdl.handle.net/1721.1/159116" rel="alternate"/>
<author>
<name>Zhang, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/159116</id>
<updated>2025-05-20T12:38:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Exploring Fine-Tuning Techniques for Removing&#13;
Tamper-Resistant Safeguards for Open-Weight LLMs
Zhang, Sarah
Open-source models present significant opportunities and risks, especially in dual-use scenarios where they can be repurposed for malicious tasks via adversarial fine-tuning. In this paper, we evaluate the effectiveness of Tampering Attack Resistance (TAR), a safeguard designed to protect against such adversarial attacks, by exploring its resilience to full-parameter and parameter-efficient fine-tuning. Our experiments reveal that while TAR enhances tamper resistance compared to models without safeguards, it remains susceptible to variability. Specifically, we observe inconsistencies where the same adversarial attack can succeed under some initializations and fail under others. This is a critical security risk as even a single instance of failure can lead to models being exploited for harmful purposes. These findings highlight the limitations of current tamper-resistant safeguards and emphasize the need for more robust safeguards to ensure the safe and ethical deployment of open-source models.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The SpaseCroissant Oven: Automatic Metadata Generation For Open-Source Space Weather Datasets</title>
<link href="https://hdl.handle.net/1721.1/159115" rel="alternate"/>
<author>
<name>Chen, Edenna H.</name>
</author>
<id>https://hdl.handle.net/1721.1/159115</id>
<updated>2025-05-20T12:38:06Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The SpaseCroissant Oven: Automatic Metadata Generation For Open-Source Space Weather Datasets
Chen, Edenna H.
The rise of machine learning (ML) algorithms has led to a parallel rise in ML-ready datasets. A novel metadata schema called Croissant, released by OpenAI and MLCommons and specifically designed for ML-ready datasets, aims to increase data accessibility, user understanding of data, and the accuracy of claims based on data. However, current methods for automatically generating Croissant metadata present difficulties, such as requiring manual entries. This can be especially difficult when attempting to preserve information about large ML-ready datasets, which are often derived from large scientific repositories belonging to organizations such as the National Aeronautics and Space Administration (NASA). These major scientific repositories provide their own metadata standards, such as NASA’s Space Physics Archive Search and Extract (SPASE) schema, but context from this metadata can often be lost during data processing. This thesis presents a novel, improved approach to Croissant metadata generation that combines parsing logic with Large Language Model (LLM) inference, along with recommendations for future Croissant standards and for SPASE-to-Croissant schema metadata conversion, aiming to retain this lost context.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applied Plankton Image Classification for Imaging FlowCytobot Data</title>
<link href="https://hdl.handle.net/1721.1/159114" rel="alternate"/>
<author>
<name>Duckworth, Barbara R.</name>
</author>
<id>https://hdl.handle.net/1721.1/159114</id>
<updated>2025-10-20T03:17:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Applied Plankton Image Classification for Imaging FlowCytobot Data
Duckworth, Barbara R.
As the ability to gather vast quantities of data from oceanographic bioimaging sensors increases, so too does the need to process, analyze, and store that data in a consistent, standard way that enables replicability and accessibility for future studies. The Imaging FlowCytobot (IFCB), an automated submersible flow cytometer, produces high-resolution images of plankton at rates up to 10 Hz for months or years, resulting in billions of images. This project compares various methods to categorize incoming images of plankton gathered by the IFCB: Convolutional Neural Nets (CNNs), Vision Transformers (ViT), and self-supervised learning (MAE). The benefits and downsides of each model are analyzed and discussed so that future IFCB operators can process their data using the methods that best align with their research questions, along with step-by-step explanations of the pros and cons of each method depending on the use case.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Tsirelson's Theorem for All Compiled Nonlocal Games</title>
<link href="https://hdl.handle.net/1721.1/159113" rel="alternate"/>
<author>
<name>Falor, Chirag</name>
</author>
<id>https://hdl.handle.net/1721.1/159113</id>
<updated>2025-05-20T12:38:05Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Computational Tsirelson's Theorem for All Compiled Nonlocal Games
Falor, Chirag
Nonlocal games, defined as cooperative tasks between spatially separated players, have been a foundational tool in the study of quantum advantage and have been useful in classically verifying quantum computations. To address the challenge posed by the spatial separation assumption, Kalai et al. (STOC ’23) introduced a compilation procedure that compiles any nonlocal game into an interactive game between a classical verifier and a computationally bounded quantum prover. This compilation preserves classical soundness and quantum completeness, though quantum soundness has been established only in the asymptotic limit of the security parameter or for specific classes of games. In this work, we advance towards a concrete framework to bound the quantum value of compiled nonlocal games. Building on the notion of nice sum-of-squares certificates, introduced by Natarajan and Zhang (FOCS ’23) to bound the value of the compiled CHSH game, we extend the niceness framework and construct a hierarchy of semidefinite programs that searches exclusively over nice certificates. We show that this hierarchy converges to the optimal quantum value of the game. Additionally, we present a transformation to make any degree-1 sum-of-squares certificate nice. This approach provides a systematic method to reproduce known bounds for special classes of games and showcases the general applicability of the framework to low-degree certificates. Source code: https://github.com/chiragfalor/Nice-SoS-SDP
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Instructify: Demystifying Metadata to Visual Instruction Tuning Data Conversion Supplementary Materials</title>
<link href="https://hdl.handle.net/1721.1/159112" rel="alternate"/>
<author>
<name>Hansen, Jacob A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159112</id>
<updated>2025-10-20T03:16:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Instructify: Demystifying Metadata to Visual Instruction Tuning Data Conversion Supplementary Materials
Hansen, Jacob A.
Visual Instruction Tuning (VisIT) data, commonly available as human-assistant conversations with images interleaved in the human turns, are currently the most widespread vehicle for aligning strong LLMs to understand visual inputs, converting them to strong LMMs. While many such VisIT datasets are available, most of them are constructed via ad hoc techniques, separately proposed by different groups, commonly poorly documented, without available (reproducible) code, and employing paid closed-source model APIs like GPT-4, Gemini, or Claude to convert image metadata (labels) to VisIT instructions. This incurs significant cost and makes it difficult to scale, improve quality, or produce VisIT data for new datasets. In this work, we address these challenges and propose an open and unified recipe and approach, Instructify, for converting available metadata to VisIT instructions using open LLMs. Our multi-stage Instructify features an efficient framework for metadata grouping, quality control, data and prompt organization, and conversation sampling. We show that our approach can reproduce or improve the data quality of the available VisIT datasets when applied to the same image data and metadata sources, improving GPT-4 generated VisIT instructions by ∼3% on average and up to 21% on individual benchmarks using open models, such as Gemma 2 27B and LLaMa 3.1 70B. We further show that our approach enables effective performance scaling (in terms of resulting LMM performance on a large variety of benchmarks) of the produced VisIT data both in terms of quantity and quality. In addition, we explore the impact of multiple factors, including conversation format, base model selection, and resampling strategies.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Single-Cell ATAC-Seq for Genomic Language&#13;
Models and Multimodal Foundation Models</title>
<link href="https://hdl.handle.net/1721.1/159110" rel="alternate"/>
<author>
<name>Kim, Dong Young</name>
</author>
<id>https://hdl.handle.net/1721.1/159110</id>
<updated>2025-10-20T03:16:43Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Leveraging Single-Cell ATAC-Seq for Genomic Language&#13;
Models and Multimodal Foundation Models
Kim, Dong Young
Single-cell Assay for Transposase-Accessible Chromatin using sequencing (scATAC-seq) has emerged as a powerful tool for profiling chromatin accessibility at single-cell resolution. By capturing epigenomic landscapes, scATAC-seq provides critical insights into the regulatory elements that govern gene expression. However, the sparsity of scATAC-seq data, resulting from its low sequencing depth relative to the genome’s potential complexity, poses significant challenges for effective and accurate modeling. To advance the utility of scATAC-seq in modern biology, we explore its integration into deep learning frameworks through two innovative applications. First, we demonstrate how incorporating scATAC data enhances the performance of existing genomic language models by providing complementary context about chromatin accessibility. Specifically, we introduce scATAC to improve SegmentNT, a DNA segmentation model that leverages the Nucleotide Transformer (NT) to predict 14 types of genomic and regulatory elements from DNA sequences up to 30kb at single-nucleotide resolution. Second, we introduce a novel multimodal foundation model that extends existing scRNA-seq foundation models by integrating scATAC-seq data. This model captures crossmodal relationships between gene expression and chromatin accessibility, establishing a unified framework that can be fine-tuned for diverse downstream tasks, including cell type classification and cross-modal imputation. Our work highlights the potential of incorporating scATAC-seq data into existing genomics deep learning strategies, providing a framework for integrating regulatory DNA analysis more seamlessly into genomic modeling.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Time Series Anomaly Detection Using Time Series Foundational Models</title>
<link href="https://hdl.handle.net/1721.1/159109" rel="alternate"/>
<author>
<name>Nguyen, Linh K.</name>
</author>
<id>https://hdl.handle.net/1721.1/159109</id>
<updated>2025-10-20T03:16:14Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Unsupervised Time Series Anomaly Detection Using Time Series Foundational Models
Nguyen, Linh K.
The rapid generation of time series data across a wide array of domains—such as finance, healthcare, and industrial systems—has made anomaly detection a critical task for identifying irregular patterns that could signal significant events like fraud, system failures, or health crises. Traditional approaches to time series anomaly detection, including statistical models like ARIMA and deep learning methods, have proven effective but often require an extensive training phase, which can be both data- and time-consuming. In recent years, the emergence of foundational models, including large language models (LLMs) and specialized time series models, has opened up new possibilities for anomaly detection. These models, pre-trained on vast and diverse datasets, offer the potential to perform tasks with minimal task-specific training. This thesis investigates the feasibility of leveraging these foundational models for time series anomaly detection, with the aim of determining their effectiveness in detecting anomalies without the traditional training requirements. We also aim to investigate whether foundational models pretrained specifically on time series data yield better results compared to large language models (LLMs) that were not pretrained for time series tasks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>First-Person Teleoperation of a Bimanual Robotic System</title>
<link href="https://hdl.handle.net/1721.1/159108" rel="alternate"/>
<author>
<name>Thakur, Nandini</name>
</author>
<id>https://hdl.handle.net/1721.1/159108</id>
<updated>2025-10-20T03:16:01Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">First-Person Teleoperation of a Bimanual Robotic System
Thakur, Nandini
First-person teleoperation of robots is a large field of research that could offer many benefits for automation. Teleoperation is a popular method to collect demonstrations for imitation learning that are easily learned by the robot, and thus it’s important to create teleoperation systems that are intuitive and enable human-like perception of a scene. Adding a first-person component to basic teleoperation systems is key to improving operators’ visual perception and making teleoperation possible for extended periods of time. Existing teleoperation systems do not integrate elements that provide the operator with a good perception of the task space, such as a first-person VR view and the ability to leverage the neck to search around the space. They rely on techniques such as a third-person view of the space, or provide a first-person view but without the ability to move the neck to look around. This thesis proposes a VR-based teleoperation system with an actuated 5-DoF neck for enabling human-like perception and improving the ability to perform high-quality demonstrations for use in imitation learning.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable Embedded Tiny Machine Learning (SETML): A General&#13;
Framework for Embedded Distributed Inference</title>
<link href="https://hdl.handle.net/1721.1/159107" rel="alternate"/>
<author>
<name>Vidal, Justice</name>
</author>
<id>https://hdl.handle.net/1721.1/159107</id>
<updated>2025-10-20T03:16:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Scalable Embedded Tiny Machine Learning (SETML): A General&#13;
Framework for Embedded Distributed Inference
Vidal, Justice
The growth of machine learning applications has increased the necessity of lightweight, energy-efficient solutions for resource-constrained devices such as the STM32C011F6 microcontroller. However, such devices struggle with supporting larger models even after miniaturization techniques such as quantization and pruning. To facilitate machine learning inference on such devices, this work introduces Scalable Embedded Tiny Machine Learning (SETML), a general framework for distributed machine learning inference on microcontrollers. Furthermore, the framework is designed to be compatible with sensor-based applications that can take advantage of small hardware, such as gesture recognition, by testing binary size constraints with an accelerometer and its supporting library. This work evaluates the latency, power consumption, and cost trade-offs of using multiple small and efficient devices versus a larger device. The STM32C011F6 microcontroller is used as the primary hardware in the tested device network, while evaluation of the system is done in comparison with a device using a similar core processing element, the Seeeduino XIAO SAMD21.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of the energy transfer network in upconverting nanoparticles</title>
<link href="https://hdl.handle.net/1721.1/159106" rel="alternate"/>
<author>
<name>Zheng, Yuxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/159106</id>
<updated>2025-10-20T03:15:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Investigation of the energy transfer network in upconverting nanoparticles
Zheng, Yuxuan
Upconverting nanoparticles (UCNPs) have emerged as promising luminescent materials for a wide range of applications, including bioimaging, drug delivery, and photovoltaics. The intricate network of energy transfer processes within UCNPs enables their unique ability to convert low-energy infrared (IR) radiation into higher-energy visible light through photon upconversion, presenting significant challenges for accurate modeling. Despite their broad applications, theoretical models of UCNPs remain incomplete, and current models fail to accurately reproduce all experimental results. This thesis presents a comprehensive comparison of prevalent modeling approaches with the aim of developing improved models that more faithfully reproduce experimental observations. Using the Judd-Ofelt theory, we calculated essential transition rate parameters, including electric dipole (ED), magnetic dipole (MD), multiphonon relaxation (MPR), and energy transfer (ET), using constants sourced from the literature. We implemented both Monte Carlo models and Ordinary Differential Equation (ODE) models. Using the calculated rate parameters, we simulate the energy transfer pathways in Yb³⁺-Er³⁺ and Yb³⁺-Tm³⁺ UCNPs. Simulation results from all models were compared with experimental data to evaluate their effectiveness in capturing key luminescent properties such as population evolution, lifetime, saturation curves, and spectral purity.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Blood-Based Laboratory Diagnostics for Alzheimer’s Disease: A Systems Approach</title>
<link href="https://hdl.handle.net/1721.1/159103" rel="alternate"/>
<author>
<name>Peralta Walker, Stephanie Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/159103</id>
<updated>2025-10-12T03:17:23Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Assessing Blood-Based Laboratory Diagnostics for Alzheimer’s Disease: A Systems Approach
Peralta Walker, Stephanie Christine
This thesis adopts a systems approach to analyze the complex network of stakeholders involved in adopting blood-based laboratory screening tests for Alzheimer’s disease (AD). Traditional diagnostic methods, including cerebrospinal fluid (CSF) testing and positron emission tomography (PET) brain imaging, are invasive, costly, and inaccessible to many. Blood-based tests offer a less invasive and more cost-effective alternative, yet they remain underutilized in clinical practice. By conducting a literature review, stakeholder interviews, and a Kano analysis, the thesis identifies and evaluates the key stakeholder needs to support the widespread adoption of these tests, such as demonstrated clinical performance, reimbursement, broader education of patients and health care professionals, and safe, effective medicines to treat AD. The research highlights two emerging tests that have published studies demonstrating clinical validation, a key parameter of clinical performance. A stakeholder tension analysis is included with proposed tension resolutions using stakeholder saliency to guide prioritization. Addressing these stakeholder needs could facilitate broader implementation, improve early diagnosis, and support emerging therapeutic interventions for AD, thus reshaping the diagnostic landscape for this increasingly prevalent disease.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Coast Guard Infrastructure Management: A Multi-Criteria Framework for Prioritizing Maintenance Projects</title>
<link href="https://hdl.handle.net/1721.1/159102" rel="alternate"/>
<author>
<name>Ballard, Zachary N.</name>
</author>
<id>https://hdl.handle.net/1721.1/159102</id>
<updated>2025-10-12T03:17:20Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enhancing Coast Guard Infrastructure Management: A Multi-Criteria Framework for Prioritizing Maintenance Projects
Ballard, Zachary N.
The United States Coast Guard is currently transforming its decision-making process for prioritizing shore infrastructure maintenance and repair projects. Current decision-making subjectivity appears to be generating inadequate project prioritizations. Stakes are high for an aging infrastructure portfolio in harsh coastal conditions, with increased national reliance on the Coast Guard in a fiscally constrained budgetary environment. Data availability, quality, and fidelity continue to increase, supporting the rationale for more robust and data-informed decision-making frameworks. &#13;
&#13;
The research begins with examining Coastal and Shore Operations (CSO) funding history, along with a thorough description of the current Centralized Planned Obligation Prioritization (C-POP) process. The complex, sociotechnical nature of the problem is highlighted by identifying all involved stakeholders and categorizing them through the leading view of stakeholder theory and salience. A detailed review of the governing asset management literature is conducted, gradually narrowing from a broad, international, and asset-type neutral perspective to more tailored infrastructure cross-asset prioritization material. Requisite framework data substance, collection, and analyses are described, and recommendations for data processing improvements are made. &#13;
&#13;
Two leading prioritization models are examined: the Importance and Urgency Quadrant Model and the Value Focused Multi-Criteria Decision Model. Their respective data visualizations are generated and analyzed. Using the multi-criteria analysis rooted in multi-attribute utility theory, four portfolios of measurably increasing value are constructed, compared with a baseline portfolio reflecting actual project selections in December 2023. These portfolio iterations include a linear programming solution to the Knapsack Problem of selecting projects that maximize overall portfolio utility within a budget limit while incorporating some of the more social and qualitative system properties. &#13;
&#13;
A traceable, adaptable, defendable, and objective data-informed multi-criteria framework is proposed, which aims to facilitate the effectiveness of the overall Coast Guard shore infrastructure portfolio in the long term.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems-Theoretic Approach to Organizational Design and Analysis</title>
<link href="https://hdl.handle.net/1721.1/159101" rel="alternate"/>
<author>
<name>Gutierrez, Lauren E.</name>
</author>
<id>https://hdl.handle.net/1721.1/159101</id>
<updated>2025-10-12T03:17:18Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Systems-Theoretic Approach to Organizational Design and Analysis
Gutierrez, Lauren E.
A significant challenge for large organizations lies in organizational design, particularly for public sector bureaucracies and the largest of industry’s private firms. Organizations tend to turn to organizational design improvements when facing effectiveness and efficiency issues. Unfortunately, these large organizations struggle with organizational design because of their sheer size and complexity, which results in a fragmented and oftentimes faulty approach to improving the organization. Organizations, at their core, are a special type of system: a set of components that operate or work together to achieve some common purpose. Organizations are purely social systems in that their elements are not technical or engineered. &#13;
&#13;
Systems Theory provides a lens through which these types of social systems can be studied. Just as in engineered systems, an organization's emergent behavior is determined by the complex interactions of its internal elements. Traditional organizational design and analysis methods focus on optimizing these internal elements in the hope of re-integrating the optimized elements to achieve organizational-level optimal behavior. But just as in traditional systems engineering, component-level optimization does not yield system-level optimal behavior. &#13;
&#13;
This thesis codifies a systems-theoretic approach to organizational design and analysis using the language of Systems Theory and the semantics of Systems-Theoretic Accident Model and Processes. By extending traditional Systems-Theoretic Process Analysis (STPA), a hazard analysis tool used primarily for engineered systems, this work refines STPA’s concepts and terminology to be more accessible for analyzing social systems. Building on this extension, this thesis leverages a contemporary Department of Defense reorganization effort as a case study, illustrating Systems-Theoretic Organizational Design and Analysis (STAODA) as a tool to assess organizational design options.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Deep Learning Systems for Visual Perception on&#13;
the Edge</title>
<link href="https://hdl.handle.net/1721.1/159099" rel="alternate"/>
<author>
<name>Yang, Shang</name>
</author>
<id>https://hdl.handle.net/1721.1/159099</id>
<updated>2025-10-12T03:17:06Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Efficient Deep Learning Systems for Visual Perception on&#13;
the Edge
Yang, Shang
Deep learning for visual perception on edge devices has become increasingly critical, driven by emerging applications in autonomous driving and AR/VR. In particular, sparse convolution on 3D point clouds and Visual Language Models (VLMs) for image processing are two important methods for visual understanding and reasoning. However, the limited compute resources and memory on edge devices pose significant challenges, necessitating specialized system support for deep learning models. Specifically, the efficiency challenges for edge visual perception are twofold: First, the sparsity and inherent irregularity of point cloud data introduce substantial complexity for parallel processing. Second, the colossal model sizes and computational demands of LLMs and VLMs render edge deployment particularly challenging. In this thesis, we aim to address the efficiency issues of on-device deep learning via system-algorithm co-design. We first introduce TorchSparse++, a high-performance inference engine for sparse convolution on GPUs. Unlike existing sparse convolution systems, TorchSparse++ balances efficiency and implementation simplicity, achieving the best performance across different application scenarios. Specifically, we first create a highly efficient Sparse Kernel Generator that generates performant sparse convolution kernels at less than one-tenth of the engineering cost of the current state-of-the-art system. On top of this, we design the Sparse Autotuner, which extends the design space of existing sparse convolution libraries and searches for the best dataflow configurations for training and inference workloads. Consequently, TorchSparse++ achieves 2.9×, 3.3×, 2.2×, and 1.7× measured end-to-end speedup on an NVIDIA A100 GPU over the state-of-the-art MinkowskiEngine, SpConv 1.2, TorchSparse, and SpConv v2 in inference; and is 1.2-1.3× faster than SpConv v2 in mixed-precision training across seven representative autonomous driving benchmarks. 
It also seamlessly supports graph convolutions, achieving 2.6-7.6× faster inference speed compared with state-of-the-art graph deep learning libraries. Furthermore, to democratize the power of large foundation models in edge AI, we propose AWQ and TinyChat, a hardware-friendly full-stack solution for efficient on-device LLM and VLM deployment. AWQ is a novel quantization method based on the insight that not all weights in an LLM are equally important. Protecting only 1% salient weights can greatly reduce quantization error. Specifically, AWQ employs an equivalent transformation and scales up the salient weight channels to reduce the weight quantization error, during which the scale is determined by collecting the activation statistics offline. Alongside AWQ, we further introduce TinyChat, an efficient and flexible inference framework tailored for 4-bit on-device LLM/VLMs. With on-the-fly dequantization, extensive kernel fusion and platform-aware weight packing, TinyChat offers 2.7-3.7× speedup over the Huggingface FP16 implementation on both desktop and mobile GPUs. It also enables the deployment of the 70B Llama-2 model on mobile GPUs. Together, these techniques significantly reduce the computational and memory costs for deploying deep learning models on edge devices, increasing the accessibility of deep learning for practical application. We hope that this thesis can inspire future research on efficient edge AI across diverse modalities.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diagnosing Supply Chain Threats to Defense Innovation</title>
<link href="https://hdl.handle.net/1721.1/159098" rel="alternate"/>
<author>
<name>Schneider, Donald E.</name>
</author>
<id>https://hdl.handle.net/1721.1/159098</id>
<updated>2025-10-12T03:16:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Diagnosing Supply Chain Threats to Defense Innovation
Schneider, Donald E.
As the U.S. Department of Defense (DoD) shifts focus to an era of global power competition, the demand for rapid innovation and disruptive technologies has grown significantly. Prototyping remains a vital tool for advancing technological innovation, enabling early learning and risk reduction in developing complex systems. However, persistent supply chain challenges threaten the success of defense prototyping projects, causing schedule delays and diminished effectiveness. &#13;
This research identifies the underlying causes of supply chain disruptions specific to prototyping efforts governed by the Federal Acquisition Regulation (FAR), offering a socio-technical systems analysis that accounts for stakeholder relationships, market dynamics, and regulatory frameworks. Through extensive data collection, including stakeholder interviews across agencies, organizations, and supply chain roles, 181 issues were identified and analyzed, revealing over 500 contributing factors. The disciplined analysis of these factors identified three systemic root causes: (1) the misapplication of production management strategies that focus on efficiencies at scale and low tolerance for risk; (2) pooled supply chain management functions, which marginalize prototyping’s unique demands and create inefficiencies; and (3) regulatory and organizational barriers to entry that deter non-traditional suppliers, hindering innovation.&#13;
To address these systemic challenges, the thesis recommends restructuring organizations to better align with the unique demands and risks of prototyping while simultaneously creating pathways to reduce barriers for new suppliers. Resolving these issues will require a coordinated effort across the prototyping ecosystem. By addressing these root causes, the DoD can improve the efficiency and effectiveness of prototyping programs, ultimately sustaining U.S. technological superiority in an increasingly competitive global environment.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A framework for determining remote sensing capabilities for ecosystem services valuation</title>
<link href="https://hdl.handle.net/1721.1/159097" rel="alternate"/>
<author>
<name>Sampath, Aparajithan</name>
</author>
<id>https://hdl.handle.net/1721.1/159097</id>
<updated>2025-10-12T03:16:46Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A framework for determining remote sensing capabilities for ecosystem services valuation
Sampath, Aparajithan
Nature provides vital services—clean water, air purification, and climate regulation—to human societies thanks to "natural capital" such as the forests and lakes on our planet. Accurately measuring and valuing these ecosystem services is crucial for informed economic and development decisions. Remote sensing (RS) technology offers a powerful way to monitor natural capital (e.g., mapping forest cover, assessing water quality). However, current data lack the accuracy and precision needed to robustly monitor the value of these services. This deficiency has impeded the use of natural capital assessment data in economic decision-making. This research partly addresses this challenge by developing a new framework to investigate the sensor characteristics (spectral, radiometric, temporal, spatial) necessary for effectively monitoring natural capital and quantifying ecosystem services. The framework first identifies the different types of services provided by an ecosystem, then uses a physics-based approach to identify crucial physical parameters and determine the measurements a sensor must make to quantify them. The sources of uncertainty impacting quantification and value estimation are also analyzed in detail. The approach is integrated to formulate a system utility function that is used to compare the performance of existing and proposed RS systems, and the overall results are subsequently used to propose required capabilities for future remote sensing systems for natural capital monitoring. The framework is demonstrated on a case study focused on the flood attenuation function (service) provided by wetlands. Water budget models are utilized to identify essential parameters for monitoring water storage by wetlands. 
Using a study area encompassing the Fall Lake Creek reservoir (Oregon, USA), water storage capacity is measured and monitored by integrating USGS digital elevation models with Sentinel-1 synthetic aperture radar, Sentinel-2 optical data, and PlanetScope optical data. Results are validated against USGS published ground truth measurements. A strong correlation (r² of 0.95) was observed with all three datasets. An uncertainty analysis was conducted using the random fields method, in which synthetic spatially autocorrelated errors were added to the RS datasets. Radiometric uncertainties were studied through the addition of Gaussian noise as a percentage of reflectance values, and results showed effects of &lt; 2.5% on estimated water volume. Elevation data uncertainties (approximated to simulate uncertainties in globally available DEMs) showed larger effects, and errors in estimated storage volumes increased proportionally. A study of inundation (for a case study over Miami, FL) revealed that as the root mean square error of the DEMs increased from 2 m to 7 m, the risk of flooding (defined as water depth accumulation of greater than 90 cm) increased more than threefold. A utility function was developed to evaluate sensors based on their ability to estimate wetland water volumes. This function considers sensor characteristics such as spatial, radiometric, and temporal resolution. Notably, the function estimates that a future optical system with 2x improved spatial and 4x improved temporal resolution (compared to Sentinel-2) can increase utility 7-fold.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images</title>
<link href="https://hdl.handle.net/1721.1/159096" rel="alternate"/>
<author>
<name>Kishnani, Deepali</name>
</author>
<id>https://hdl.handle.net/1721.1/159096</id>
<updated>2025-10-12T03:16:34Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images
Kishnani, Deepali
This thesis explores how the uncanny valley phenomenon—historically tied to near-human robots—applies to text-based AI interactions and AI-generated images. While the concept has been predominantly studied in the context of robotics, the advent of generative AI reveals that text and visuals that are 'almost, but not quite' human can also provoke unease. &#13;
&#13;
Two experiments structure the study. The first examines GPT-4 Turbo (GPT-4o) text conversations. Sixty participants engaged with one of three “chatbots”: an “Uncanny-Valley Bot” (prompt-engineered to fall in the uncanny valley), a “Human-Like Bot” (prompt-engineered to converse like a human), or a human control. Godspeed Questionnaire results indicate that the uncanny valley effect surfaces in text-only form: participants consistently rated the “Uncanny-Valley Bot” lowest in anthropomorphism, animacy, likeability, and perceived intelligence. Furthermore, the experiment revealed that the distinction between GPT and humans is becoming increasingly blurred, with 60% of participants mistaking a human for GPT and 40% mistaking GPT for a human. Lastly, results highlighted a strong user preference for naturalness, human imperfections, and vulnerability. While human flaws enhance relatability, deviations that disrupt perceived humanity trigger the uncanny valley.&#13;
&#13;
The second experiment investigates AI-generated images produced by Stable Diffusion XL at varying degrees of realism. Fifty-six participants ranked each image’s “strangeness,” revealing that highly realistic or clearly stylized outputs raise fewer concerns. By contrast, images that inhabit the uncanny valley elicited discomfort. To quantify these findings, recognized metrics like Frechet Inception Distance (FID) and Kernel Inception Distance (KID) were used to compare real and AI-generated images. Both metrics strongly correlated with human perceptions, suggesting that distance metrics can be used to determine realism. The study also shows that image generation models can detect visual features associated with the uncanny valley. However, performance drops when the prompt calls for subtle, “mid-range” realism, indicating the model’s difficulty in maintaining comfort and believability at intermediate levels.&#13;
&#13;
Collectively, the two experiments confirm that uncanny valley responses are not confined to physical robots but persist in text-based dialogue and AI-synthesized images. Yet challenges remain. Short interaction windows, small participant samples, and reliance on selected AI models call for studies of the generalizability of these findings. Future work should adopt longitudinal designs, larger samples, and multiple AI systems. Addressing the uncanny valley in both textual and visual content is essential for advancing user trust and comfort in AI.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery Pack Design and Transient Performance Modeling&#13;
for High-Power Legged Robots</title>
<link href="https://hdl.handle.net/1721.1/159094" rel="alternate"/>
<author>
<name>Evagora, Christopher K.</name>
</author>
<id>https://hdl.handle.net/1721.1/159094</id>
<updated>2025-10-12T03:16:09Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Battery Pack Design and Transient Performance Modeling&#13;
for High-Power Legged Robots
Evagora, Christopher K.
Legged robotics has recently shifted toward advanced optimization-based control methods, such as Model Predictive Control (MPC), to generate agile and energy-efficient locomotion. By casting the control problem as an optimization task, robotic systems can account for complex robot dynamics and operational constraints, including joint limits and actuator capabilities. However, high-performance maneuvers also demand rigorous consideration of onboard battery constraints. This work presents an empirically derived lithium-ion battery model that captures transient voltage sag and time-dependent internal battery state, enabling more accurate prediction of feasible power delivery. Additionally, a custom high-power battery pack was designed to meet the power demands of the MIT Humanoid, emphasizing power density, safety, and maintainability. Although the work presented in this thesis does not integrate the battery model into a trajectory optimization framework, it establishes the foundation for future research that aims to couple battery and robot dynamics in robot control. Ultimately, this approach will facilitate safer and more capable legged robots by ensuring that planned trajectories respect both physical and electrochemical constraints.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Graphical User Interface for 3D Model&#13;
Fabrication Through Generative AI</title>
<link href="https://hdl.handle.net/1721.1/159092" rel="alternate"/>
<author>
<name>Báez Alicea, Isabel</name>
</author>
<id>https://hdl.handle.net/1721.1/159092</id>
<updated>2025-05-20T12:37:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Multimodal Graphical User Interface for 3D Model&#13;
Fabrication Through Generative AI
Báez Alicea, Isabel
In recent years, three-dimensional model generation and manipulation through generative AI have seen significant developments. Current projects enable the generation of three-dimensional assets from natural language prompts and input images, as well as functionality-aware model manipulation through mesh segmentation and categorization. However, all these workflows lack a coherent, unified platform that caters to users’ needs and each method’s technologies. Programs that rely on terminal-based commands lack the graphics needed for model interactions, and plugin extensions for 3D modeling applications are unintuitive and hard to extend with new functionality. Additionally, both approaches require users to have prior computer engineering and/or 3D graphics knowledge. For this thesis, I propose the creation of a web-based, multimodal graphical user interface that consolidates all these different technologies in a single platform. By supporting model stylization and model generation (both from text prompts and input images), users can utilize combined workflows and expand the range of output possibilities for 3D asset creation. Other features in our interface include model uploading, saving, and downloading to enable a continuous stream of work on a single 3D asset. Apart from all this, we expand the current capabilities of existing image-to-3D generation programs by enabling users to combine up to six images and create a merged 3D object. Each of these images corresponds to a view angle from which the outputted mesh will be built.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Convergence of the Arnoldi Iteration for Estimating Extreme Eigenvalues</title>
<link href="https://hdl.handle.net/1721.1/159091" rel="alternate"/>
<author>
<name>Chen, Cecilia</name>
</author>
<id>https://hdl.handle.net/1721.1/159091</id>
<updated>2025-05-20T12:37:46Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Convergence of the Arnoldi Iteration for Estimating Extreme Eigenvalues
Chen, Cecilia
Krylov subspace methods, like the Arnoldi iteration, are a powerful tool for efficiently solving high-dimensional linear algebra problems. In this work, we analyze the convergence of Krylov methods for estimating the numerical range of a matrix. Prior bounds on approximation error often depend on eigenvalue gaps of the matrix, which lead to weaker bounds than observed in practice, specifically in applications where these gaps are small. Instead, we extend a line of work proving gap-independent bounds for the Lanczos method, which depend only on the matrix dimensions and number of iterations, to the more general Arnoldi case.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GIM: Guidance as Initialization Method</title>
<link href="https://hdl.handle.net/1721.1/159090" rel="alternate"/>
<author>
<name>Duitama Cortes, Juan Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/159090</id>
<updated>2025-10-04T03:17:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">GIM: Guidance as Initialization Method
Duitama Cortes, Juan Sebastian
This work makes two contributions: the evaluation of early stop guidance for deep Fully Connected Networks (FCNs) and the introduction of guidance as an initialization method (GIM). Network initialization has been a meaningful and challenging topic in the field of machine learning (ML) for a long time. Many initialization methods exist, ranging from data-independent to data-dependent approaches. Initializations allow for a better understanding of model behavior and improvements in model performance. The novel guidance tool enabled us to propose GIM, a new technique that initializes a model by leveraging representational similarity with respect to models of different architectures. A model with an architecture that performs poorly in a specific task can be initialized with guidance from a model with an architecture that performs well in the respective task. We focus on the case of FCNs in the task of image classification and provide experimental results to validate our approach.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Weather For A Mixed Reality Platform</title>
<link href="https://hdl.handle.net/1721.1/159089" rel="alternate"/>
<author>
<name>Ni, Hao</name>
</author>
<id>https://hdl.handle.net/1721.1/159089</id>
<updated>2025-10-04T03:17:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Simulating Weather For A Mixed Reality Platform
Ni, Hao
Complex systems are inherently difficult to teach in a traditional classroom setting. The We’re In This Together (WIT) project aims to provide a different teaching strategy by using AR/VR headsets to situate the students directly inside the system. WIT’s first game attempts to tackle common weather concepts including precipitation and fronts; however, the most recent version fails to demonstrate and model the concepts in an accurate and comprehensible way. This project focuses on developing a brand-new simulation layer for the game that better captures the causes behind common weather phenomena. The new simulation uses a particle-based approach to model the movement of air in the atmosphere and creates a more thorough and interactive experience to help students explore the various aspects of weather.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Realistic Tactile Stylization for Digital Fabrication using Enhanced UV Unwrapping Method</title>
<link href="https://hdl.handle.net/1721.1/159088" rel="alternate"/>
<author>
<name>Wong, Zoe</name>
</author>
<id>https://hdl.handle.net/1721.1/159088</id>
<updated>2025-10-04T03:16:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Realistic Tactile Stylization for Digital Fabrication using Enhanced UV Unwrapping Method
Wong, Zoe
While recent advances in Generative AI enable visual stylization of 3D models using image prompts, they typically neglect tactile properties. TactStyle addresses this limitation by enabling creators to enhance 3D models with both visual and tactile properties derived from texture images. Using a fine-tuned image-generation model, TactStyle generates highly accurate heightfields that faithfully replicate the tactile properties of input visual textures and applies them to 3D models. However, applying textures to 3D models presents challenges, such as ensuring even texture resolution, avoiding texture warping, and minimizing visible seams. TactStyle’s current implementation often struggles with significant texture stretching and distortion caused by poor UV mapping, compromising the accuracy of the heightfields and diminishing the tactile fidelity of printed models. Our research systematically evaluates various UV unwrapping methods, including alternative UV projections and an optimization-based neural UV mapping, to improve the realism and accuracy of texture application on 3D models in digital fabrication. Building on these findings, we will release a Blender plugin that integrates the optimal UV unwrapping methods with TactStyle, enabling creators to easily customize their 3D models with accurate tactile properties using only reference texture images. This work enhances the practicality and accessibility of tactile 3D model customization, bridging the gap between visual and tactile design elements.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying the Role of Transcription Factor RFX3 in 9p Deletion Syndrome</title>
<link href="https://hdl.handle.net/1721.1/159087" rel="alternate"/>
<author>
<name>Edwards, Lilly</name>
</author>
<id>https://hdl.handle.net/1721.1/159087</id>
<updated>2025-10-04T03:16:39Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Identifying the Role of Transcription Factor RFX3 in 9p Deletion Syndrome
Edwards, Lilly
9p deletion (9p-) syndrome is primarily characterized by intellectual disability, developmental delays, and autism. This project investigated how much of the neuronal phenotypes of 9p- syndrome could be attributed to RFX3, a transcription factor and autism risk gene. Bulk RNA-seq data of iPSC-derived neurons from patients with 9p- syndrome and CRISPR-engineered cell lines was analyzed using Principal Component Analysis, Differential Gene Expression analysis, and Functional Enrichment analysis. The findings indicate that RFX3 plays a significant role but is not the sole driver of the neuronal phenotypes. SMARCA2, a gene linked to intellectual disability and part of the SWI/SNF complex, was identified as a direct target of RFX3 in the commonly deleted region of chromosome 9p. Notably, the combined deletion of RFX3 and SMARCA2 led to greater dysregulation of SMARCA2 expression and SWI/SNF complex components than the deletion of either gene alone. These findings highlight the potential synergistic effects of RFX3 and SMARCA2 in 9p- syndrome and suggest their combined disruption may underlie the neuronal phenotypes observed.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Model Editing for Unlearning in Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/159086" rel="alternate"/>
<author>
<name>Hossain, Shariqah</name>
</author>
<id>https://hdl.handle.net/1721.1/159086</id>
<updated>2025-10-04T03:16:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Investigating Model Editing for Unlearning in Large Language Models
Hossain, Shariqah
Data regulations on the Right to be Forgotten, such as that in the General Data Protection Regulation (GDPR) of the European Union, protect the right of users to have private information removed from organizations. With the increasing usage and influence of large language models (LLMs) trained on personal data, the question arises of how to implement the removal of information from these models. In addition, LLMs are trained on large corpora of data usually scraped from the Web, so a current challenge in ensuring reliable and safe outputs is the false, toxic, harmful, or biased information from Web data that is captured in the knowledge of the model. Machine unlearning aims to remove unwanted information from a model, but many methods are inefficient for models with large numbers of parameters or fail to remove the entire scope of the information without harming performance on the knowledge that is to be retained. Model editing algorithms solve a similar problem of changing information in LLMs, but they focus on redirecting inputs to a new target rather than removing that information altogether. Despite the parallels between model editing and unlearning, there has yet to be a thorough investigation of the potential of model editing approaches in this setting. In this work, we explore the ROME, IKE, and WISE editing algorithms and design new editing targets for an unlearning setting. To evaluate the potential of the model editing algorithms, we focus on unlearning fictitious information using the Task of Fictitious Unlearning (TOFU) benchmark. Through this investigation, we show that model editing approaches can exceed the performance of current unlearning methods at removing information, depending on the setting. They share traditional unlearning's limitation of being unable to encapsulate the scope of what is to be unlearned without damage to overall model performance. 
We hope to leverage this information to improve methods for unlearning model knowledge and therefore improve the reliability of LLMs.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward An Explainable Electric Power Grid Operation Assistant Using Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/159085" rel="alternate"/>
<author>
<name>Ravichandran, Anish</name>
</author>
<id>https://hdl.handle.net/1721.1/159085</id>
<updated>2025-10-04T03:16:23Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Toward An Explainable Electric Power Grid Operation Assistant Using Large Language Models
Ravichandran, Anish
This thesis explores potential applications of Large Language Models (LLMs) for assisting the analyses and decision-making of operators of complex electric power grids. The power grid is a critical piece of infrastructure currently challenged by increased electrification, the integration of renewable energy sources, and distributed energy resources (DERs). Human operators struggle to process the massive amounts of data produced by modern smart grids and need innovative solutions to handle the increased complexity of operational decisions. This thesis investigates the potential role of LLMs in grid operation tasks, focusing on interpretability and generalizability while exploring how LLMs can assist operators by providing actionable insights and recommendations. Multiple versions of LLM agents were developed, including naive and tool-assisted designs, and were evaluated on the Learn to Run a Power Network (L2RPN) benchmark for steady-state and cascading failure scenarios. While the LLM agents performed better in scenarios requiring exploratory decision-making, they struggled in steady-state operation and were constrained by their integration with tools and the testing environment. This work was limited by compute constraints, which affected the choice of model and the length of evaluation scenarios, and future work is needed toward seamless interaction between LLMs and power systems simulators. Nevertheless, LLMs have the potential to transform future grid operation, paving the way for a more resilient and sustainable energy sector in the 21st century.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs</title>
<link href="https://hdl.handle.net/1721.1/159084" rel="alternate"/>
<author>
<name>Skelić, Lejla</name>
</author>
<id>https://hdl.handle.net/1721.1/159084</id>
<updated>2025-10-04T03:16:16Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs
Skelić, Lejla
The role of Large Language Models (LLMs) has not been extensively explored in analog circuit design, which could benefit from a reasoning-based approach that transcends traditional optimization techniques. In particular, despite their growing relevance, there are no benchmarks to assess LLMs’ reasoning capability about circuits. Therefore, we created the CIRCUIT dataset consisting of 510 question-answer pairs spanning various levels of analog-circuit-related subjects. The best-performing model on our dataset, GPT-4o, achieves 48.04% accuracy when evaluated on the final numerical answer. To evaluate the robustness of LLMs on our dataset, we introduced a unique dataset design and evaluation metric that enable unit-test-like evaluation by grouping questions into unit tests. In this case, GPT-4o can only pass 27.45% of the unit tests, highlighting that the most advanced LLMs still struggle with understanding circuits, which requires multi-level reasoning, particularly when involving circuit topologies. This circuit-specific benchmark introduces a scalable and reliable automatic evaluation method, transferable to other reasoning domains, and highlights LLMs' limitations, offering valuable insights for advancing their application in analog integrated circuit design.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALFA-Chains: An Artificial Intelligence Approach to Exploit Chain Discovery in Networks</title>
<link href="https://hdl.handle.net/1721.1/159083" rel="alternate"/>
<author>
<name>Tulla Lizardi, Miguel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159083</id>
<updated>2025-10-04T03:15:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">ALFA-Chains: An Artificial Intelligence Approach to Exploit Chain Discovery in Networks
Tulla Lizardi, Miguel A.
Exploit chains play a crucial role in advanced persistent threats (APTs) and other malicious cyber campaigns. Sophisticated attackers can navigate across a network, escalate their privileges, and compromise valuable targets by executing the right exploits in the right order. However, finding these exploit chains is a challenging task requiring a broad knowledge of the vulnerabilities present in computer systems and the exploits that take advantage of them. Networks can be complex, with many hosts and intricate software stacks. Moreover, the range of known exploits and vulnerabilities is constantly growing, complicating the process of determining how they can be linked. This thesis introduces a solution, ALFA-Chains, that automates the discovery of exploit chains by leveraging classical AI planning, Large Language Models (LLMs), and existing exploit/vulnerability databases. ALFA-Chains describes networks and exploits using the Planning Domain Description Language (PDDL), a formal language to represent planning problems. This allows us to use optimized off-the-shelf planners that have been developed by the AI planning community over many years. Our system takes natural language descriptions of exploits and classifies them into categories based on their preconditions and effects. From this intermediary representation, we can programmatically generate PDDL that captures the requirements needed to run the exploit and the access gained by the attacker. Due to this automated approach, ALFA-Chains is able to consider a vast set of exploits when determining if a network is susceptible to exploit chaining. We show how ALFA-Chains can process 1,880 Metasploit exploits and their corresponding 2,002 CVEs to detect exploit chains in a variety of realistic network configurations. We proceed to discuss potential applications of ALFA-Chains, including automated penetration testing and vulnerability prioritization.
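The precondition/effect chaining that ALFA-Chains encodes in PDDL can be sketched in miniature as a forward search over (host, privilege) facts. Everything below (the reachability map, the exploit names, the fact encoding) is a hypothetical illustration of the chaining idea, not the thesis's actual representation or planner:

```python
from collections import deque

# Hypothetical network reachability (which host can talk to which) and a
# hypothetical exploit catalog in the spirit of PDDL preconditions/effects.
REACH = {("internet", "web"), ("web", "db")}

EXPLOITS = [
    # (name, (source host, privilege needed), target host, privilege gained)
    ("web_rce", ("internet", "foothold"), "web", "user"),
    ("web_esc", ("web", "user"), "web", "root"),
    ("db_sqli", ("web", "root"), "db", "root"),
]

def find_chain(start, goal):
    """Breadth-first search over sets of (host, privilege) facts."""
    init = frozenset([start])
    queue = deque([(init, [])])
    seen = {init}
    while queue:
        facts, chain = queue.popleft()
        if goal in facts:
            return chain  # shortest chain of exploits reaching the goal fact
        for name, pre, dst, gain in EXPLOITS:
            src = pre[0]
            reachable = src == dst or (src, dst) in REACH
            if pre in facts and reachable and (dst, gain) not in facts:
                nxt = facts | {(dst, gain)}
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [name]))
    return None  # no chain exists under this catalog

print(find_chain(("internet", "foothold"), ("db", "root")))
```

An off-the-shelf PDDL planner plays this search role in the actual system, at a scale (1,880 exploits) where hand-rolled search would not suffice.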
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Inductive Biases of Conditional Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/159081" rel="alternate"/>
<author>
<name>Yu, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/159081</id>
<updated>2025-09-03T03:35:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">On the Inductive Biases of Conditional Diffusion Models
Yu, Christina
Diffusion models have achieved remarkable progress in recent years across various domains and applications, but how diffusion models generalize is still not well understood. While prior work predominantly focuses on unconditional diffusion models, in this thesis we focus on understanding generalization for conditional diffusion models, which is especially relevant for modern text- or observation-conditioned applications. In particular, we are interested in the inductive biases of conditional diffusion models which predispose them to certain forms of interpolation in regions outside the support of the training data. We observe that neural networks are capable of learning qualitatively different forms of interpolation, which may be influenced by the architecture and capacity of the network and other aspects of neural network training. We develop a potential framework to model the interpolation behavior of neural networks via nonparametric estimation, which happens to have the property of being schedule consistent, or truly denoising at every time step. We find that, assuming a neural network with sufficient capacity, conditional diffusion models are biased towards smoothing, which can lead to non-schedule-consistent behavior away from the training data and has a number of interesting consequences.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>All Pass Readout With Ring Resonators for Qubit Measurement</title>
<link href="https://hdl.handle.net/1721.1/159080" rel="alternate"/>
<author>
<name>Zang, Alicia</name>
</author>
<id>https://hdl.handle.net/1721.1/159080</id>
<updated>2025-09-03T03:35:57Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">All Pass Readout With Ring Resonators for Qubit Measurement
Zang, Alicia
Quantum computers may advance computing by efficiently solving certain classically hard problems, such as factoring and simulating quantum systems. Superconducting qubits, configurable artificial atoms composed of circuit elements, are a leading platform for creating quantum computers. Many schemes for superconducting qubit readout include a weakly coupled port as a capacitor in the feedline, which allows for directionality in the readout signal. However, this impedance mismatch creates problems with resonator linewidth variation, standing waves, and voltage nodes in the feedline, leading to challenges in scaling to larger frequency-multiplexed systems. This thesis proposes an all-pass readout scheme that utilizes ring resonators that do not require a weakly coupled port, allowing for more modular qubit readout architectures.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verification of Go Channels</title>
<link href="https://hdl.handle.net/1721.1/159079" rel="alternate"/>
<author>
<name>Zhang, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/159079</id>
<updated>2025-09-03T03:35:46Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Verification of Go Channels
Zhang, Jessica
Goose is a tool for translating a subset of the Go programming language into Perennial/Iris, which is an extension of Coq. However, Goose did not support channels, which are an important synchronization tool that Go is well known for.&#13;
&#13;
This thesis presents an extension to Goose to support channels, including a model to represent Go channels and operations in GooseLang, the language defined in Perennial/Iris that Goose translates into, an extension to the Goose translator to support channels, and a library of separation logic specifications that define the expected behavior of channel operations on open channels. Finally, this thesis evaluates how effective this model and library are for verifying Go code containing channels, and discusses some limitations and potential future work.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory testing and design of a mill for the treatment of a gold ore from Porcupine, Ontario</title>
<link href="https://hdl.handle.net/1721.1/159006" rel="alternate"/>
<author>
<name>Loo, Pang Chieh.</name>
</author>
<id>https://hdl.handle.net/1721.1/159006</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1917-01-01T00:00:00Z</published>
<summary type="text">Laboratory testing and design of a mill for the treatment of a gold ore from Porcupine, Ontario
Loo, Pang Chieh.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1917
</summary>
<dc:date>1917-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of industry financing of a new jet transport for U.S. domestic airline service</title>
<link href="https://hdl.handle.net/1721.1/159001" rel="alternate"/>
<author>
<name>Evani, Sunder Rayma Murthy.</name>
</author>
<id>https://hdl.handle.net/1721.1/159001</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A study of industry financing of a new jet transport for U.S. domestic airline service
Evani, Sunder Rayma Murthy.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Review of intervention programs for pre-schoolers in Venezuela.</title>
<link href="https://hdl.handle.net/1721.1/159000" rel="alternate"/>
<author>
<name>Eskenasy, Sandra Patricia.</name>
</author>
<id>https://hdl.handle.net/1721.1/159000</id>
<updated>2025-12-17T03:47:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Review of intervention programs for pre-schoolers in Venezuela.
Eskenasy, Sandra Patricia.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1978; Bibliography: leaf 147.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Multi-query Planning in Graphs of Convex Sets</title>
<link href="https://hdl.handle.net/1721.1/158967" rel="alternate"/>
<author>
<name>Morozov, Savva</name>
</author>
<id>https://hdl.handle.net/1721.1/158967</id>
<updated>2025-04-07T09:05:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Fast Multi-query Planning in Graphs of Convex Sets
Morozov, Savva
Planning in Graphs of Convex Sets (GCS) is a recently developed optimization framework that seamlessly integrates discrete and continuous decision making. It naturally models and effectively solves a wide range of challenging planning problems in robotics, including collision-free motion planning, skill chaining, and control of hybrid systems. In this thesis, we study the multi-query extension of planning through GCS, motivated by scenarios where robots must operate swiftly within static environments. Our objective is to precompute optimal plans between predefined sets of source and target conditions, in an effort to enable fast online planning and reduce GCS solve times. Our solution consists of two stages. Offline, we use semidefinite programming to compute a coarse lower bound on the problem’s cost-to-go function. Then, online, this lower bound is used to incrementally generate feasible plans by solving short-horizon convex programs. We demonstrate the effectiveness of our approach through a variety of experimental domains: collision-free motion planning for a warehouse robot arm, item sorting for a top-down suction gripper, and footstep planning for a bipedal walker. In particular, in a warehouse-like scenario involving a seven-joint robot arm, our method generates higher-quality paths up to 100 times faster than existing motion planners.
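The two-stage structure of the method, an offline lower bound on the cost-to-go guiding fast online search, can be pictured on a toy discrete graph. The actual approach solves semidefinite and short-horizon convex programs over convex regions; this hypothetical sketch replaces those with fixed edge costs and a hand-written bound table used as an A*-style heuristic:

```python
import heapq

# Toy stand-ins: a hypothetical region-adjacency graph with edge costs,
# and V, a precomputed ("offline") lower bound on cost-to-go per region.
EDGES = {
    "s": [("a", 2.0), ("b", 5.0)],
    "a": [("t", 4.0)],
    "b": [("t", 2.0)],
    "t": [],
}
V = {"s": 5.5, "a": 4.0, "b": 2.0, "t": 0.0}

def plan(start, target):
    """Online search: expand cheapest (cost-so-far + lower bound) first."""
    frontier = [(V[start], 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        f, g, v, path = heapq.heappop(frontier)
        if v == target:
            return path, g
        for u, c in EDGES[v]:
            if g + c < best.get(u, float("inf")):
                best[u] = g + c
                heapq.heappush(frontier, (g + c + V[u], g + c, u, path + [u]))
    return None, float("inf")

print(plan("s", "t"))
```

The point of the precomputation is visible even here: a tight lower bound steers the online search toward the optimal plan without exhaustively exploring alternatives.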
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structuring Representation Geometry in Self-Supervised Learning</title>
<link href="https://hdl.handle.net/1721.1/158966" rel="alternate"/>
<author>
<name>Gupta, Sharut</name>
</author>
<id>https://hdl.handle.net/1721.1/158966</id>
<updated>2025-04-07T09:25:49Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Structuring Representation Geometry in Self-Supervised Learning
Gupta, Sharut
The central promise of deep learning is to learn a map &#119891; : &#119987; → ℝ_&#119889; that transforms objects &#119987;—represented in their raw perceptual forms, such as images or molecular strings—into a representation space ℝ_&#119889; where everything that is hard to do with raw perceptual data becomes easy. For instance, measuring the similarity between two objects &#119909;₁ and &#119909;₂ expressed as tensors of pixel intensities is non-trivial in their raw form, but becomes straightforward if &#119891; maps these objects to a space where simple Euclidean distances ‖&#119891;(&#119909;₁) − &#119891;(&#119909;₂)‖₂ are meaningful measures of similarity. While this simple recipe has shown standout success in a range of tasks, certain applications require representations that encode richer structural relationships beyond pairwise similarity. For instance, tasks that encode relational information, such as “&#119883; is a parent of &#119884;” or “&#119860; is a treatment for &#119861;”, require embedding spaces that capture richer structural relationships. In this thesis, we explore what &#119891; should encode in order to be useful for a range of unknown downstream tasks, from the point of view of the geometric structure of representation space. We investigate this question in the context of self-supervised learning, a paradigm that extracts meaningful representations by leveraging the structure of the data itself without relying on explicit labels. Specifically, we propose adding additional geometric structure to the embedding space by enforcing transformations of input space to correspond to simple (i.e., linear) transformations in the embedding space. To this end, we introduce an equivariance objective and theoretically prove that its minima force transformations on input space to correspond to rotations on the spherical embedding space. 
Our proposed method significantly improves performance on downstream tasks, and ensures sensitivity in embedding space to important variations in data (e.g., color, rotation) that existing contrastive methods do not achieve.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable and Modular Manufacturing of Insect-Scale Aerial Robots Towards Swarm Flight Demonstrations</title>
<link href="https://hdl.handle.net/1721.1/158965" rel="alternate"/>
<author>
<name>Hsiao, Yi-Hsuan</name>
</author>
<id>https://hdl.handle.net/1721.1/158965</id>
<updated>2025-04-08T04:50:00Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Scalable and Modular Manufacturing of Insect-Scale Aerial Robots Towards Swarm Flight Demonstrations
Hsiao, Yi-Hsuan
Insects demonstrate remarkable capabilities in navigating complex environments and executing tasks such as pollination and coordinated object transport. Inspired by these biological feats, insect-scale micro aerial vehicles (MAVs) have been developed with advanced flight functionalities, including collision resilience and aerial acrobatics. Despite these advancements, MAVs weighing less than a gram continue to face critical challenges in design, assembly, and repair. Additionally, limitations in sensing and control have prevented the realization of swarm-like behaviors, thereby constraining research on collective actions and potential applications such as distributed sensing. To overcome these obstacles, this work introduces a scalable and modular fabrication method for sub-gram MAVs. A parametric design algorithm automatically generates laser cutting templates from a minimal set of design parameters, while stereolithographic 3D printing is employed to fabricate static components such as airframes and connectors, significantly streamlining the production process. This modular approach improves assembly efficiency and repairability, reducing fabrication time by more than half. Using this methodology, two sub-gram MAVs successfully demonstrated controlled hovering and coordinated payload transport. These results represent a significant step toward enabling insect-inspired robotic swarms, providing a platform for future studies on collective flight behaviors and swarm robotics.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Goal Inference from Open-Ended Dialog</title>
<link href="https://hdl.handle.net/1721.1/158960" rel="alternate"/>
<author>
<name>Ma, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/158960</id>
<updated>2025-04-07T09:13:54Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Goal Inference from Open-Ended Dialog
Ma, Rachel
Embodied AI Agents are quickly becoming important and common tools in society. These embodied agents should be able to learn about and accomplish a wide range of user goals and preferences efficiently and robustly. Large Language Models (LLMs) are often used because they enable rich, open-ended dialog between the human and the agent to accomplish tasks according to human preferences.&#13;
&#13;
In this thesis, we argue that for embodied agents that deal with open-ended dialog during task assistance:&#13;
&#13;
1. AI Agents should extract goals from conversations in the form of Natural Language (NL) to be better at capturing human preferences as it is intuitive for humans to communicate their preferences on tasks to agents through natural language.&#13;
&#13;
2. AI Agents should quantify/maintain uncertainty about these goals to ensure that actions are being taken according to goals that the agent is extremely certain about.&#13;
&#13;
We present an online method for embodied agents to learn and accomplish diverse user goals. While offline methods like RLHF can represent various goals but require large datasets, our approach achieves similar flexibility with online efficiency. We extract natural language goal representations from conversations with Large Language Models (LLMs). We prompt an LLM to role play as a human with different goals and use the corresponding likelihoods to run Bayesian inference over potential goals. As a result, our method can represent uncertainty over complex goals based on unrestricted dialog. We evaluate in a text-based grocery shopping domain and an AI2Thor robot simulation. We compare our method to ablation baselines that lack either explicit goal representation or probabilistic inference.
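The probabilistic core of the method, a Bayesian update over natural-language goal hypotheses, can be sketched as follows. In the thesis the per-utterance likelihoods come from an LLM role-playing a human pursuing each candidate goal; here a hand-written likelihood table stands in for the LLM, and all goals and numbers are hypothetical:

```python
# Candidate natural-language goals and a uniform prior (hypothetical).
GOALS = ["buy vegan groceries", "buy groceries for a barbecue"]
PRIOR = {g: 1.0 / len(GOALS) for g in GOALS}

# P(utterance | goal): stand-in numbers for what an LLM role-playing a
# human with each goal would assign to producing this utterance.
LIKELIHOOD = {
    ("skip the steak, please", "buy vegan groceries"): 0.9,
    ("skip the steak, please", "buy groceries for a barbecue"): 0.1,
}

def update(posterior, utterance):
    """One Bayesian update step over the goal hypotheses."""
    scores = {g: posterior[g] * LIKELIHOOD[(utterance, g)] for g in posterior}
    z = sum(scores.values())  # normalizing constant
    return {g: s / z for g, s in scores.items()}

post = update(PRIOR, "skip the steak, please")
print(post)  # the vegan goal now carries most of the probability mass
```

Because the posterior is maintained explicitly, the agent can defer acting (or ask a clarifying question) whenever no goal hypothesis is sufficiently certain.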
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Subject Image Generation</title>
<link href="https://hdl.handle.net/1721.1/158959" rel="alternate"/>
<author>
<name>Yin, Tianwei</name>
</author>
<id>https://hdl.handle.net/1721.1/158959</id>
<updated>2025-04-08T04:17:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Multi-Subject Image Generation
Yin, Tianwei
Diffusion models excel at text-to-image generation, especially in subject-driven generation for personalized images. However, existing methods are inefficient due to the subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation as they often blend identity among subjects. In this thesis, we present FastComposer which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity blending problem in the multi-subject generation, FastComposer proposes cross-attention localization supervision during training, enforcing the attention of reference subjects localized to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting. FastComposer proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300×–2500× speedup compared to fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, and high-quality multi-subject image creation.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Genetic algorithm gradient ascent (GAGA) optimization&#13;
of compact symmetry-breaking photonic crystals</title>
<link href="https://hdl.handle.net/1721.1/158958" rel="alternate"/>
<author>
<name>Gold, Hannah T.</name>
</author>
<id>https://hdl.handle.net/1721.1/158958</id>
<updated>2025-04-08T04:37:02Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Genetic algorithm gradient ascent (GAGA) optimization&#13;
of compact symmetry-breaking photonic crystals
Gold, Hannah T.
Fundamental limits of thermal radiation are imposed by Kirchhoff’s law, which assumes the electromagnetic reciprocity of a material or material system. Thus, breaking reciprocity can enable breaking barriers in thermal efficiency engineering¹. This thesis presents 1D photonic crystals composed of Weyl/Dirac semimetal and dielectric layers, whose structures are optimized to maximize the nonreciprocity of infrared radiation absorptance/emittance in planar and compact designs. Two different mechanisms to enable nonreciprocal infrared absorbers/emitters are simulated and compared – anomalous Hall effect in Weyl semimetals² and electric-current-induced Fizeau drag in either Dirac or Weyl semimetals³. To engineer an ultra-compact absorber structure that does not require gratings or prisms to couple light, a genetic algorithm (GA) was used to maximize nonreciprocity in the design globally, followed by the application of the numerical gradient ascent (GAGA) algorithm as a local optimization to further enhance the design. The first absorber design takes advantage of the intrinsic nonreciprocity of time-reversal symmetry (TRS) breaking Weyl semimetals due to their pseudomagnetic field in momentum space. GAGA methodology is then applied to design and optimize a flat absorber using inversion symmetry (IS) breaking Weyl/Dirac semimetals as active layers, in which tunable nonreciprocity is induced through an applied DC current bias. This momentum bias imparts plasmon Fizeau drag, the drag of an electrical current on propagating surface plasmon polaritons (SPPs). A recently developed semi-classical theory is used to model SPP transport along interfaces of 3D semimetals under Fizeau drag³.
Lastly, in both cases the optimization algorithm accounts for both s- and p-polarized absorptance spectra to create a final design suitable for thermal applications, which maximizes the nonreciprocal absorptance of p-polarized light and simultaneously minimizes the parasitic, reciprocal absorptance of s-polarized light.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailored Mechanical Response of 3D Microgranular Crystals with Hierarchical Architecture</title>
<link href="https://hdl.handle.net/1721.1/158956" rel="alternate"/>
<author>
<name>Figueroa, Samuel D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158956</id>
<updated>2025-04-08T04:36:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Tailored Mechanical Response of 3D Microgranular Crystals with Hierarchical Architecture
Figueroa, Samuel D.
Granular media exhibit extraordinary impact-mitigating properties due to their nonlinear grain-to-grain interactions, enabling efficient energy dissipation and wave perturbation under dynamic loading—behaviors unattainable in conventional monolithic materials. Recent efforts have sought to engineer granular systems with tunable mechanical responses, though few have begun to realize them as functional architected materials. Here, we introduce a two-level architected granular framework that programs spherical microgranular media across both grain-level (ellipsoidal microvoids) and bulk granular packing-level architectures, offering surprising control over static and dynamic properties. Using nanoindentation experiments, we reveal tunable quasi-static stiffness behavior, where hollow architected granular packings can exhibit superior mass-normalized energy dissipation compared to their fully dense counterparts. Finite element simulations uncover a structurally engineered Poisson effect, enabling nonlocal contact mechanisms that enhance load-bearing capacity across different packing structures. Future custom direct impact experiments offer a potential route to demonstrating the effectiveness of our multi-scale design in dynamically programming energy dissipation. Our findings demonstrate that hierarchical granular crystals exhibit enhanced specific energy absorption at a fraction of the weight of their fully dense counterparts and unique nonlocal stress redistribution, surpassing classical granular mechanics through architectural design. This work establishes a path toward lightweight, tunable, and impact-resistant metamaterials, with broad applications in nonlinear waveguiding, energy dissipation, and protective systems.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Variational Lower Bound to Mitigate Batch Effect in&#13;
Molecular Representations</title>
<link href="https://hdl.handle.net/1721.1/158954" rel="alternate"/>
<author>
<name>Wang, Chenyu</name>
</author>
<id>https://hdl.handle.net/1721.1/158954</id>
<updated>2025-04-08T04:13:36Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Variational Lower Bound to Mitigate Batch Effect in&#13;
Molecular Representations
Wang, Chenyu
High-throughput drug screening – using cell imaging or gene expression measurements as readouts of drug effect – is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug. Since large-scale screens have to be divided into multiple experiments, a key difficulty is dealing with batch effects, which can introduce systematic errors and non-biological associations in the data. We propose InfoCORE, an Information maximization approach for COnfounder REmoval, to effectively deal with batch effects and obtain refined molecular representations. InfoCORE establishes a variational lower bound on the conditional mutual information of the latent representations given a batch identifier. Experiments on drug screening data reveal InfoCORE’s superior performance in a multitude of tasks including molecular property prediction and molecule-phenotype retrieval. Additionally, we show how InfoCORE offers a versatile framework that also addresses general distribution shifts and issues of data fairness by minimizing correlation with spurious features or removing sensitive attributes.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Congestion Control for DNN training clusters</title>
<link href="https://hdl.handle.net/1721.1/158950" rel="alternate"/>
<author>
<name>Narang, Sanjoli</name>
</author>
<id>https://hdl.handle.net/1721.1/158950</id>
<updated>2025-04-07T08:56:14Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Congestion Control for DNN training clusters
Narang, Sanjoli
Modern DNN workloads generate network traffic that differs strikingly from conventional data-center traffic. DNN training jobs generate periodic traffic patterns in which all subsequent flows depend on the completion of the currently running flow. Although this periodic behavior calls for a new, non-conventional congestion control protocol for DNN training clusters, it also creates an unprecedented opportunity to approximate the optimal schedule for DNN jobs in a distributed manner without requiring priority queues, centralized information, or switch hardware support. Prior work on MLTCP proposed updates to existing congestion control algorithms to make them capable of minimizing network congestion when DNN jobs compete for the network. In this thesis, we propose several techniques to expand the scope of prior work to support DNN jobs with more complex communication patterns or parallelization strategies, and further improve the performance speedup over TCP. With two straightforward ideas of updating the congestion control parameters, we expand the performance benefits of MLTCP to a wider set of periodic DNN jobs. Augmenting existing congestion control algorithms with MLTCP provides an effective guiding mechanism to a random search to find the optimal interleaved schedule for competing DNN jobs. Our contributions boost this guided search to improve performance further. We provide detailed theoretical analysis and extensive flow-level simulations to take a deep dive into the convergence, performance speedup, and fairness of MLTCP with the proposed changes.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Input Adaptive Allocation of Language Model Computation</title>
<link href="https://hdl.handle.net/1721.1/158949" rel="alternate"/>
<author>
<name>Damani, Mehul</name>
</author>
<id>https://hdl.handle.net/1721.1/158949</id>
<updated>2025-04-07T09:13:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Input Adaptive Allocation of Language Model Computation
Damani, Mehul
Computationally intensive decoding procedures—including search, reranking, and self-critique—can improve the quality of language model (LM) outputs in problems spanning code generation, numerical reasoning, and dialog. Existing work typically applies the same decoding procedure for every input to an LM. But not all inputs require the same amount of computation to process. Can we allocate decoding computation adaptively, using more resources to answer questions whose answers will be harder to compute? We present an approach that predicts the distribution of rewards given an input and computation budget, then allocates additional computation to inputs for which it is predicted to be most useful. We apply this approach in two decoding procedures: first, an adaptive best-of-k procedure that dynamically selects the number of samples to generate as input to a reranker; second, a routing procedure that dynamically responds to a query using a decoding procedure that is expensive but accurate, or one that is cheaper but less capable. Across a suite of programming, mathematics, and dialog tasks, we show that accurate computation-allocation procedures can be learned, and reduce computation by up to 50% at no cost to response quality, or improve quality by up to 10% at a fixed computational budget.
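One way to picture the adaptive best-of-k idea: if a predictor estimates the probability p that a single sample solves a given input, the smallest sample budget that clears a target success probability follows from 1 - (1 - p)^k. The closed-form rule, target, and cap below are hypothetical stand-ins for illustration, not the thesis's learned allocation procedure:

```python
import math

# Hypothetical allocation rule: if one sample solves the input with
# probability p (in the thesis, difficulty is predicted by a learned
# model), pick the smallest k with 1 - (1 - p)**k >= target.
def choose_k(p, target=0.9, k_max=32):
    if p <= 0.0:
        return k_max  # hopeless-looking input: spend the full budget
    if p >= 1.0:
        return 1      # trivial input: one sample suffices
    k = math.ceil(math.log(1.0 - target) / math.log(1.0 - p))
    return max(1, min(k, k_max))

print(choose_k(0.5))   # easy input -> 4 samples
print(choose_k(0.05))  # hard input -> capped at 32 samples
```

Easy inputs get small k and hard inputs get large k, which is how compute savings arise without lowering aggregate response quality.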
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Robotic Manipulation of Liquid Using a Digitally Fabricated Intelligent Wearable Device</title>
<link href="https://hdl.handle.net/1721.1/158941" rel="alternate"/>
<author>
<name>Lee, Young Joong</name>
</author>
<id>https://hdl.handle.net/1721.1/158941</id>
<updated>2025-04-07T08:57:31Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enhancing Robotic Manipulation of Liquid Using a Digitally Fabricated Intelligent Wearable Device
Lee, Young Joong
Despite recent exponential advances in computer vision and reinforcement learning, it remains challenging for robots to interact with liquids due to visual obstructions, transparent liquids, and fine-grained splashes. Yet, a substantial opportunity exists for robotics to excel in liquid identification and manipulation, given its potential role in chemical handling in laboratories and various manufacturing sectors such as pharmaceuticals or beverages. Recent advancements in electronic wearables, designed to replicate or surpass the functions and attributes of human skin, and their convergence with machine learning have provided opportunities to enhance the capabilities of robotic systems. Here, we present a novel approach for liquid class identification and position estimation with a robotic wearable device that can ‘see through’ the container, leveraging electrical impedance sensing. We design and mount a digitally embroidered electrode array to a commercial robotic gripper. Coupled with a customized impedance sensing board, we collect data on liquid manipulation with a swept frequency sensing mode and a frequency-specific impedance measuring mode. Our developed learning-based models achieve an accuracy of 93.33% in classifying 9 different types of liquids (8 liquids + air) and 97.65% in estimating the liquid position in the cup without any vision system present. We investigate the effectiveness of our system with a series of ablation studies. These findings highlight our work as a promising solution for enhancing robotic manipulation in liquid-related tasks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Tunneling Nanoelectromechanical Switches</title>
<link href="https://hdl.handle.net/1721.1/158940" rel="alternate"/>
<author>
<name>Dang, Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/158940</id>
<updated>2025-04-08T04:11:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Tunneling Nanoelectromechanical Switches
Dang, Tong
As silicon complementary metal-oxide-semiconductor (CMOS) technology nears its scaling limits, nanoelectromechanical (NEM) switch relays have emerged as promising candidates for complementing CMOS technology due to their superior characteristics, including zero leakage, steep subthreshold swings, high on-off current ratios, and robustness in harsh environments. However, the practical integration of NEM switches still faces challenges such as high actuation voltages, stiction, and slower switching speeds compared to CMOS. One promising strategy to mitigate these issues is the integration of a self-assembled monolayer (SAM) to create tunneling NEM switches. Such switches could achieve nanometer-scale mechanical modulation of gaps between electrodes, showing the potential to overcome the limitations of a conventional NEM switch by exhibiting low actuation voltages, high switching speeds, and minimal stiction. Nevertheless, the tunneling NEM switches reported to date still show limited performance and require intricate fabrication processes. Additionally, the functional tunneling NEM switches demonstrated so far are limited to two-terminal architectures. This thesis explores innovative designs, fabrication techniques, and material choices to address these limitations and to develop tunneling NEM switches with enhanced performance and reliability for next-generation NEM logic applications. To this end, switches with various structures have been fabricated and investigated, and their respective characteristics are analyzed. In a three-terminal lateral structure fabricated using entirely conventional nanofabrication techniques, switching is demonstrated in both contact and tunneling modes. While operation in direct contact mode shows a high on-off ratio, the integration of the SAM leads to a significantly reduced actuation voltage of 2 V and lower hysteresis.
Further, two-terminal vertical-structured devices are studied in tunneling mode, and they consistently demonstrate operation cycles exceeding 100, with a maximum of over 7000, which demonstrates the reliability prospects of the SAM. The trends in I-V characteristics indicate that the SAM might have experienced physical deformation due to compression, highlighting a potential area for future research in the molecular engineering of the self-assembled monolayer.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Near-Optimal Learning and Planning in Separated Latent MDPs</title>
<link href="https://hdl.handle.net/1721.1/158934" rel="alternate"/>
<author>
<name>Chen, Fan</name>
</author>
<id>https://hdl.handle.net/1721.1/158934</id>
<updated>2025-04-08T04:08:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Near-Optimal Learning and Planning in Separated Latent MDPs
Chen, Fan
We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of δ-separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp statistical threshold for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis</title>
<link href="https://hdl.handle.net/1721.1/158933" rel="alternate"/>
<author>
<name>Hoopes, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/158933</id>
<updated>2025-04-07T08:52:58Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis
Hoopes, Andrew
We present VoxelPrompt, an agent-driven vision-language framework that tackles diverse radiological tasks through joint modeling of natural language, image volumes, and analytical metrics. VoxelPrompt is multi-modal and versatile, leveraging the flexibility of language interaction while providing quantitatively-grounded image analysis. Given a variable number of 3D medical volumes, such as MRI and CT scans, VoxelPrompt employs a language agent that iteratively predicts executable instructions to solve a task specified by a natural language input prompt. These instructions communicate with a vision network to encode image features and generate volumetric outputs (e.g., segmentations). VoxelPrompt interprets the results of intermediate instructions and plans further actions to compute discrete measures (e.g., tumor growth across a series of scans) and present relevant outputs to the user. We evaluate this framework on diverse neuroimaging tasks and show that the single VoxelPrompt model can delineate hundreds of anatomical and pathological features, measure many complex morphological properties, and perform open-language analysis of lesion characteristics. VoxelPrompt carries out these objectives with accuracy similar to that of fine-tuned, single-task models for segmentation and question-answering, while facilitating a large range of tasks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sub-Bottom Profiling Using an Autonomous Underwater Vehicle Equipped With a Sound Source and Towed Hydrophone Array</title>
<link href="https://hdl.handle.net/1721.1/158932" rel="alternate"/>
<author>
<name>Pfenninger, Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/158932</id>
<updated>2025-04-07T08:41:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Sub-Bottom Profiling Using an Autonomous Underwater Vehicle Equipped With a Sound Source and Towed Hydrophone Array
Pfenninger, Paige
Sub-bottom profiling using an autonomous underwater vehicle equipped with a source and a towed array is an excellent method to finely survey large areas of the ocean bottom with minimal interference from the water column. This approach has the benefit of being able to determine the range dependence of the sub-bottom on a meter-by-meter scale rather than assuming constant sub-bottom properties over a large range. This thesis conducts theoretical and experimental studies to investigate the feasibility of using the arrival times of acoustic signals from an autonomous underwater vehicle source to a short, 16-element towed hydrophone array to determine the sound speed and layer thickness of the seabed through Bayesian geoacoustic inversion. This method provides range-dependent geoacoustic parameters with a resolution on the order of 10 meters. Numerical studies indicate that, for timing data with low variance, arrival times can be used to accurately estimate seabed properties. However, the performance of the Bayesian inversion model deteriorates as the variance of the timing data increases. Experimental data were collected during the Seabed Characterization Experiment at the New England Mud Patch and the New England Shelf Break. This thesis attempts to improve the arrival times through the use of sub-array focusing but concludes that this method is not feasible due to the experimental data exhibiting a high level of variance in the sub-bottom timing returns, likely due to the presence of scatterers in the sediment layer. Therefore, the mean and variance of the direct path, bottom, and sub-bottom timing returns were calculated using Gaussian process regression. Furthermore, the results show that layer thickness and sound speeds are highly coupled, making it challenging to uniquely determine seabed properties.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design-technology Co-optimization for Sub-2 nm Technology Node Based on 2D Materials</title>
<link href="https://hdl.handle.net/1721.1/158931" rel="alternate"/>
<author>
<name>Yao, Aijia</name>
</author>
<id>https://hdl.handle.net/1721.1/158931</id>
<updated>2025-04-07T09:20:09Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design-technology Co-optimization for Sub-2 nm Technology Node Based on 2D Materials
Yao, Aijia
Emerging disruptive technologies such as Artificial Intelligence (AI) and 6G communications have driven stringent demands for hardware components that enable faster and more energy-efficient computation. With the diminishing returns of traditional silicon-based scaling and the escalating complexity of advanced semiconductor processes, two-dimensional (2D) materials offer promising opportunities when developed through Design-Technology Co-Optimization (DTCO). This thesis presents a comprehensive study of DTCO with a novel framework tailored for 2D material-based electronics that addresses critical challenges in material synthesis, device design, and circuit integration. In this framework, experimental material and device data are integrated into the design and optimization of MoS₂-based multichannel transistors (MCTs). With the help of DTCO, we have achieved record performance for double-gate, single-channel MoS₂ transistors as well as the first demonstration of high-performance, functional double-channel MoS₂ transistors. Based on the results of MCTs, a Process Design Kit (PDK) is developed to facilitate circuit-level integration. These advancements constitute a promising foundation for the development of next-generation electronics beyond the sub-2 nm technology node.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Encoder-Agnostic Learned Temporal Matching for Video Classification</title>
<link href="https://hdl.handle.net/1721.1/158930" rel="alternate"/>
<author>
<name>Ho, Darryl</name>
</author>
<id>https://hdl.handle.net/1721.1/158930</id>
<updated>2025-04-08T04:30:54Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Encoder-Agnostic Learned Temporal Matching for Video Classification
Ho, Darryl
In recent years, large transformer-based video encoder models have greatly advanced state-of-the-art performance on video classification tasks. However, these large models typically process videos by averaging embedding outputs from multiple clips over time to produce fixed-length representations. This approach fails to account for a variety of time-related features, such as variable video durations, chronological order of events, and temporal variance in feature significance. While methods for temporal modeling do exist, they often require significant architectural changes and expensive retraining, making them impractical for off-the-shelf, fine-tuned large encoders. To overcome these limitations, we propose DejaVid, an encoder-agnostic method that enhances model performance without the need for retraining or altering the architecture. Our framework converts a video into a variable-length temporal sequence of embeddings, which we call a multivariate time series (MTS). An MTS naturally preserves temporal order and accommodates variable video durations. We then learn per-timestep, per-feature weights over the encoded MTS frames, allowing us to account for variations in feature importance over time. We introduce a new neural network architecture inspired by traditional time series alignment algorithms for this learning task. Our evaluation demonstrates that DejaVid substantially improves the performance of a state-of-the-art large encoder, achieving leading Top-1 accuracy of 77.2% on Something-Something V2, 89.1% on Kinetics-400, and 88.6% on HMDB51, while adding fewer than 1.8% additional learnable parameters and requiring less than 3 hours of training time.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reducing Transformer Key-Value Cache Size with Cross-Layer Attention</title>
<link href="https://hdl.handle.net/1721.1/158929" rel="alternate"/>
<author>
<name>Brandon, William</name>
</author>
<id>https://hdl.handle.net/1721.1/158929</id>
<updated>2025-04-07T08:57:54Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
Brandon, William
Key-value (KV) caching plays an essential role in accelerating decoding for transformer-based autoregressive large language models (LLMs). However, the amount of memory required to store the KV cache can become prohibitive at long sequence lengths and large batch sizes. Since the invention of the transformer, two of the most effective interventions discovered for reducing the size of the KV cache have been Multi-Query Attention (MQA) and its generalization, Grouped-Query Attention (GQA). MQA and GQA both modify the design of the attention block so that multiple query heads can share a single key/value head, reducing the number of distinct key/value heads by a large factor while only minimally degrading accuracy. In this work, we show that it is possible to take Multi-Query Attention a step further by also sharing key and value heads between adjacent layers, yielding a new attention design we call Cross-Layer Attention (CLA). With CLA, we find that it is possible to reduce the size of the KV cache by an additional factor while maintaining nearly the same accuracy as unmodified MQA. In experiments training 1B- and 3B-parameter models from scratch, we demonstrate that CLA provides a Pareto improvement over the memory/accuracy tradeoffs which are possible with traditional MQA, potentially enabling future models to operate at longer sequence lengths and larger batch sizes than would otherwise be possible.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precision Pointing for the CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link href="https://hdl.handle.net/1721.1/158924" rel="alternate"/>
<author>
<name>Forester, Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/158924</id>
<updated>2025-04-07T08:48:19Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Precision Pointing for the CubeSat Laser Infrared CrosslinK (CLICK) Mission
Forester, Paige
Advances in Free Space Optical Communications have led to numerous missions that have demonstrated optical space-to-ground links; however, fewer missions have demonstrated optical space-to-space links. NASA’s CubeSat Laser Infrared CrosslinK (CLICK) Mission aims to be the first to demonstrate optical space-to-space communication on a CubeSat scale using Commercial Off the Shelf (COTS) components that include a micro electromechanical system (MEMS) fine steering mirror for precision pointing. The first phase of the CLICK mission, CLICK-A, launched in September 2022 to demonstrate optical downlink. The second phase, CLICK-B/C, aims to demonstrate optical crosslink between two spacecraft: CLICK-B and CLICK-C. Optical crosslink communication requires precision pointing for both spacecraft to close the link. The development of the CLICK-B/C Fine Pointing, Acquisition, and Tracking (PAT) system is presented in this thesis, as well as the analysis of disturbance rejection and evaluation of expected spacecraft disturbances. This thesis also assesses the slewing required for differential drag control, which is used to maintain the crosslink range between the two CubeSats. Preliminary results are presented from the CLICK-B/C flight hardware integration and testing phases, as well as findings from simulation of the lasercom payload’s performance.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Solving Larger Games: Designing New Algorithms Adaptable to Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/158922" rel="alternate"/>
<author>
<name>Liu, Mingyang</name>
</author>
<id>https://hdl.handle.net/1721.1/158922</id>
<updated>2025-04-07T09:27:19Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">On Solving Larger Games: Designing New Algorithms Adaptable to Deep Reinforcement Learning
Liu, Mingyang
In this thesis, we explore the design of algorithms capable of handling large games where the state space is too large to store strategies in a tabular format from a theoretical perspective. Specifically, we focus on developing algorithms suitable for deep reinforcement learning in two-player zero-sum extensive-form games. There are three critical properties for effective deep multi-agent reinforcement learning: (last/best) iterate convergence, efficient utilization of stochastic trajectory feedback, and theoretically sound avoidance of importance sampling corrections. Chapter 3 introduces Regularized Optimistic Mirror Descent (Reg-OMD), which provably converges to the Nash equilibrium (NE) linearly in last-iterate. Chapter 4 shows that algorithms based on regret decomposition enjoy best-iterate convergence to the NE. Chapter 5 proposes Q-value based Regret Minimization (QFR), which achieves all three properties simultaneously.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polymer Deconstructability and Recyclability via Introduction of Cleavable Si−O Bonds</title>
<link href="https://hdl.handle.net/1721.1/158921" rel="alternate"/>
<author>
<name>Johnson, Alayna</name>
</author>
<id>https://hdl.handle.net/1721.1/158921</id>
<updated>2025-04-07T08:46:20Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Polymer Deconstructability and Recyclability via Introduction of Cleavable Si−O Bonds
Johnson, Alayna
The synthesis of a new polysilylether via entropy-driven ring-opening metathesis polymerization (ED-ROMP) of cyclic bifunctional silyl ether-based monomers is reported. High molecular weight polymers (up to 100 k) with narrow dispersities were achieved at modest temperature. These polymers display excellent thermal stability and ultra-low T_g (–88 ºC). The polymers are both rapidly deconstructable via the cleavage of the labile silicon-oxygen linkages with either acid or fluoride triggers and partially depolymerizable by the addition of exogenous metathesis catalyst. Analysis of the deconstructed polymer products provided insight into the polymer microstructure, showing that the ED-ROMP process was regiorandom. Altogether, this work offers a new class of deconstructable polymers with a range of potential applications. Incorporation of these bifunctional silyl ether-based monomers into copolymers could aid in the triggered deconstruction of otherwise nondegradable hydrocarbon backbones.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Analysis of Voltage Feasibility Problems for&#13;
Cost-Effective Microgrids</title>
<link href="https://hdl.handle.net/1721.1/158920" rel="alternate"/>
<author>
<name>Jones, Aaron Jerome</name>
</author>
<id>https://hdl.handle.net/1721.1/158920</id>
<updated>2025-04-07T09:12:49Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Modeling and Analysis of Voltage Feasibility Problems for&#13;
Cost-Effective Microgrids
Jones, Aaron Jerome
Global efforts to mitigate climate change have led to a significant increase in the integration of renewable energy resources into the electricity grid. This transition not only necessitates the adoption of renewable energy technologies but also requires rethinking and redesigning existing power grid infrastructures to accommodate the unique characteristics of these resources. This research focuses on modeling techniques which can assist in analyzing the feasibility of microgrid topologies. Microgrids have emerged as a flexible and efficient approach to implementing novel grid topologies that support higher levels of renewable energy penetration. They also support the integration of distributed energy resources (DERs), such as photovoltaic (PV) systems, thereby promoting a more sustainable and efficient energy grid design. This thesis utilized sanitized load and system topology data from a real world microgrid located in Illinois to test the feasibility of increasing the number of PV units the system can utilize for reactive power support. &#13;
&#13;
In these systems, ensuring feasibility is a crucial concern due to power mismatches caused by the inherent variability of renewable resources. This work focuses on maintaining voltage within the constraints while increasing PV penetration on the system. We simulate the implementation of microgrids with PV generation using Alternating Current Optimal Power Flow (AC-OPF). The results of this thesis show the limits of feasible reactive power support from distributed PV units on a utility-disconnected microgrid based on our voltage constraints. The study shows that there exists a limit to the reactive power support provided by distributed PV units; beyond this limit, voltage collapse appears as infeasibility of the power flow solutions. To avoid this problem, we optimize the reactive power support from PV so that a solution exists within the constraints. The practical lesson of this result is that operators should use AC-OPF to compensate for reactive power using PV. Future research will explore the challenges and opportunities associated with the widespread adoption of microgrids, such as dynamic voltage instabilities that can occur with high levels of PV integration and complexities in inverter control strategies.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Engineering of Protected Superconducting&#13;
Qubits</title>
<link href="https://hdl.handle.net/1721.1/158919" rel="alternate"/>
<author>
<name>Kim, Junghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158919</id>
<updated>2025-04-07T09:16:28Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Engineering of Protected Superconducting&#13;
Qubits
Kim, Junghyun
Building extensible quantum information processors becomes increasingly promising as the qubits exhibit longer coherence times. To this end, realizing protected qubits, whose Hamiltonians are inherently resilient to both relaxation and dephasing, has attracted strong interest. In this thesis, we primarily explore the soft 0 − π qubit, a leading candidate for implementing superconducting qubit protection with current fabrication techniques. To enhance protection, the soft 0 − π qubit requires its two major modes, the charge-mode (θ) and the flux-mode (ϕ), to satisfy an asymmetric condition: maximizing charge-mode capacitance while minimizing flux-mode capacitance. The main challenge is therefore reducing stray capacitance from the large charge-mode capacitor, which hinders the reduction of flux-mode capacitance. To address this challenge, we depart from the conventional coplanar interdigitated capacitor design and use parallel-plate capacitors (PPC) with small footprints, achieving the desired large charge-mode capacitance while reducing unwanted stray capacitances. By reducing the capacitor area by a factor of approximately 50, the PPC 0−π qubit has achieved an estimated Eᵠ_C /Eᶿ_C ratio of 30–50, placing it among the highest reported. Additionally, we propose enhanced mode-selective control of the soft 0−π qubit using these parallel-plate capacitors. Finally, we discuss the remaining challenges of the soft 0−π qubit and introduce alternative parameter regimes that can potentially improve Raman-based control and qubit readout.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating the Discovery of Novel Metal Organic Chalcogenolates: A Computational and Machine Learning-Driven Approach</title>
<link href="https://hdl.handle.net/1721.1/158916" rel="alternate"/>
<author>
<name>Ladera, Adriana J.</name>
</author>
<id>https://hdl.handle.net/1721.1/158916</id>
<updated>2025-04-07T08:38:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Accelerating the Discovery of Novel Metal Organic Chalcogenolates: A Computational and Machine Learning-Driven Approach
Ladera, Adriana J.
Metal Organic Chalcogenolates (MOChas) are a class of robust, self-assembling, and hybrid materials featuring inorganic metal-chalcogen frameworks that are scaffolded by organic ligands. These low-dimensional structures exhibit tunable optoelectronic properties, making them promising candidates for various applications, including optical sensors and nanotechnology. This tunable relationship between MOCha structural arrangements and targeted properties opens up a vast yet challenging search space for novel MOCha structures. Density Functional Theory (DFT) can predict properties of materials with good accuracy, making it a powerful choice for even hypothetical materials. However, the discovery of novel MOCha structures is constrained by poor scalability of DFT relaxation times for large systems and a lack of high-throughput design methods that can capture the complex geometries of MOChas. In this work, we employ DFT calculations to investigate the energetic and electronic properties of various MOChas, and provide insight into the optical behavior and kinetic favorability of such structures. To address the computational bottlenecks of high-throughput design and DFT workloads, we discuss the use of machine-learned interatomic potentials and various generative models that can enable rapid prototyping of novel MOCha structures.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Dual Extruder Biomaterial 3D Printer</title>
<link href="https://hdl.handle.net/1721.1/158914" rel="alternate"/>
<author>
<name>de Alva, Jesse P.</name>
</author>
<id>https://hdl.handle.net/1721.1/158914</id>
<updated>2025-04-07T08:24:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Development of Dual Extruder Biomaterial 3D Printer
de Alva, Jesse P.
This research presents the design and fabrication of a novel dual-extruder biotic 3D printer for the precise deposition of natural biocomposites using organic materials such as pectin, chitosan, and cellulose. Unlike traditional FDM printers that rely on thermoplastic extrusion, this printer employs a syringe-based mechanical extruder capable of depositing viscous biomaterial hydrogels. The integration of a first-of-its-kind dual-extruder system enables the fabrication of multi-material prints and the exploration of biomaterial composites and complex geometric structures, thereby advancing sustainable, bio-inspired manufacturing.&#13;
This thesis emphasizes the machine engineering aspects of the printer's development, including project motivation, systematic design methodology, component design and fabrication, testing, and exploration of future work. Notable features of the system include user-friendly operation for non-experts, open-source accessibility, and compatibility with a wide range of biomaterials. By addressing existing limitations in biomaterial 3D printing technology, this work provides a robust platform to support future research in biomaterials, sustainable additive manufacturing, and bio-inspired design. Furthermore, the open-source nature of the printer fosters innovation and collaboration, accelerating the adoption of sustainable materials and manufacturing methods.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Annealing Techniques for Color Center Formation</title>
<link href="https://hdl.handle.net/1721.1/158913" rel="alternate"/>
<author>
<name>Christen, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/158913</id>
<updated>2025-04-08T04:31:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Annealing Techniques for Color Center Formation
Christen, Ian
Color centers in diamond have emerged as leading atom-like quantum systems for applications spanning from quantum repeaters to sensors. However, the optical and spin properties of engineered diamond color centers are limited by crystal damage produced during ion implantation, crystal irradiation, and annealing. In this thesis, we develop advanced material processing methods and characterization techniques to address critical challenges in the formation of high-performance diamond color centers, advancing towards the efficient creation of desired dopant-vacancy centers with minimal formation of deleterious multi-vacancy clusters.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Responsible Computational Text Generation: AI Content Classification and Policy Framework</title>
<link href="https://hdl.handle.net/1721.1/158904" rel="alternate"/>
<author>
<name>Jung, Minseok</name>
</author>
<id>https://hdl.handle.net/1721.1/158904</id>
<updated>2025-04-07T09:23:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Responsible Computational Text Generation: AI Content Classification and Policy Framework
Jung, Minseok
Recent advances in generative AI, particularly in producing human-like text, have blurred the lines between human and AI authorship. Since these AI tools rely on stochastic generation rather than traditional scientific reasoning, concerns about misinformation and reliability have emerged, highlighting the need for AI detection tools and policy guidelines. In response, this study proposes a dual approach: (1) the application of adaptive thresholds to improve the use of AI text detectors and (2) an AI policy framework based on user patterns and opinions. To enhance detector performance, we present a threshold optimization algorithm that adapts to diverse subgroups, such as those based on text lengths and stylistic features, thereby reducing discrepancies in error rates. The commonly used method relies on a single universal threshold, which has led to inconsistent results across various text types because of different probability distributions. Our approach addresses these shortcomings by tailoring thresholds to the specific characteristics of each group. In parallel, the study examines the pressing need for comprehensive AI guidelines, given the rise of misinformation and academic integrity issues. While a few institutions have introduced comprehensive policies, many institutes lack approaches grounded in user patterns and opinions. To remedy this problem, we propose a policy framework based on a user study. The findings of this research will provide practical solutions for more effective AI text classification and a reliable framework for the necessity of AI writing policies.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncovering the link between twin-twin interactions and damage nucleation in an (α+β) Ti alloy</title>
<link href="https://hdl.handle.net/1721.1/158903" rel="alternate"/>
<author>
<name>Cooper, Megan F. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/158903</id>
<updated>2025-04-07T08:32:41Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Uncovering the link between twin-twin interactions and damage nucleation in an (α+β) Ti alloy
Cooper, Megan F. L.
Recently, an (α+β) Ti alloy was developed with an outstanding combination of both high strength and high ductility; however, the plasticity micromechanisms that lead to damage nucleation for this alloy had not yet been investigated in detail. In this work, post-mortem analysis and an in-situ SEM-EBSD tensile experiment were conducted to determine where damage was nucleating most frequently in the microstructure, and what deformation modes were associated with damage nucleation. Damage within primary α grains was found to be the most common, with most of these damage incidents occurring along {10̅12} twin-twin boundaries with a ~60° misorientation. The {10̅12} twinning mode is only activated in the localized neck, and twin activation is strongly dependent on initial crystallographic texture. The twinned domains are rotated such that prismatic slip is easier to activate, but prismatic slip transfer is unlikely across ~60° twin-twin boundaries due to geometric incompatibilities. The in-situ test revealed that a crack formed along a ~60° twin-twin boundary where slip was blocked. These findings provide new insights into how twin-twin interactions in Ti alloys can lead to damage nucleation and impact overall ductility.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Concepts for High-Acceleration Linear Actuators for Precision Motion</title>
<link href="https://hdl.handle.net/1721.1/158901" rel="alternate"/>
<author>
<name>Kim, Adam K.</name>
</author>
<id>https://hdl.handle.net/1721.1/158901</id>
<updated>2025-04-08T04:28:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design Concepts for High-Acceleration Linear Actuators for Precision Motion
Kim, Adam K.
Advances in semiconductor photolithography scanners have made it possible to produce smaller, more affordable chips with higher throughput. Some of the key lithographic scanner components supporting these advancements are electromagnetic actuators responsible for positioning the long-stroke (LS) and short-stroke (SS) stages of the reticle stage in its scan direction. Such actuators need to provide the highest thrust at the deceleration and reacceleration phases when the stages turn around at the ends of the scanning trajectory. Thus, enhancing their acceleration capability and force output is essential for boosting chip throughput. However, the improved performance may demand large current densities that are unsustainable in terms of the associated power dissipation generated by ohmic losses in the copper coils. In this thesis, we continued a previous study conducted in our lab that explored the use of mechanical contact forces managed by a piezoelectric stack actuator (PEA). In this configuration, intermittent contact by the PEA can be used to apply forces to decelerate and reaccelerate the SS stage with respect to the LS stage during turnaround events. With such force assist, the non-contact precision actuators responsible for positioning the SS stage with respect to the LS stage no longer need to generate large thrusts for the deceleration and reacceleration. As a result, we can in principle decrease the weight and power loss of the SS-stage precision actuators, which thus lowers the thrust requirements for the LS-stage actuators responsible for accelerating both the LS and SS stages, resulting in lowered power consumption. Using the single degree-of-freedom experimental setup previously built in our lab, we conducted several characterization experiments to develop a PEA position feedback controller augmented by a hysteresis-compensated feedforward trajectory to shape the contact compression and forces. We find that introducing a viscoelastic contact interface is essential for stabilizing the PEA controller and slowing the contact dynamics to remain within the controller bandwidth. Our feedforward trajectory successfully brings a 0.84 kg mass moving towards the PEA with an initial speed of 60 mm/s to zero velocity in approximately 1.5 ms using 36 µm of PEA stroke length.
These results demonstrate the feasibility of using PEAs as mechanical assist devices for high-acceleration turnaround events in lithography tools.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting the lift of a randomly maneuvering airfoil under dynamic stall conditions, Re ∼ 10⁵</title>
<link href="https://hdl.handle.net/1721.1/158900" rel="alternate"/>
<author>
<name>Kim, Donghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158900</id>
<updated>2025-04-07T08:28:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Forecasting the lift of a randomly maneuvering airfoil under dynamic stall conditions, Re ∼ 10⁵
Kim, Donghyun
Dynamic stall is the abrupt flow separation from airfoils rapidly changing their orientation. This phenomenon, characterized by a delayed stall followed by a sharp drop in lift, has prompted efforts to prevent or delay it. This study aims to predict the lift of an airfoil randomly maneuvering under dynamic stall conditions by utilizing sparse surface pressure measurements, which we believe can maximize the effectiveness of various dynamic stall suppression techniques. Using data from large eddy simulations, we demonstrate that a long short-term memory network, fed with raw surface pressures, delivers accurate predictions. Also, a new method introduced here, IdDM, conclusively links the characteristic frequency range of pressure fluctuations that emerges during the dynamic stall to the chord-lengthscale vortex dynamics. However, further analysis suggests that the forecast predominantly relies on the lower frequency components tied to the airfoil motion, possibly because the vortex dynamics are dependent on and sensitive to the airfoil motion. Meanwhile, specific sensor locations are proven to be more informative than others in this random, unsteady flow, and we show that optimal sensor placement can be quickly determined using mutual information alone. It reveals that two pressure sensors positioned near the leading edge, one on each side of the airfoil, capture most of the information needed to predict lift. The lift can be predicted with sparse sensors because surface pressures are strongly correlated across the airfoil, with large-scale flow structures dominating the forces.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Visual Intelligence from Photons to Action</title>
<link href="https://hdl.handle.net/1721.1/158899" rel="alternate"/>
<author>
<name>Young, Aaron</name>
</author>
<id>https://hdl.handle.net/1721.1/158899</id>
<updated>2025-04-08T04:20:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Designing Visual Intelligence from Photons to Action
Young, Aaron
For embodied agents to perceive and effectively act within their environment, they must sense the world around them and translate this information into meaningful and safe actions, a process fundamental to both biological and human-engineered systems. Nature has evolved highly attuned visual systems, resulting in diverse and efficient eyes capable of facilitating complex behaviors. Conversely, roboticists have engineered sophisticated cameras and sensors, enabling robots to perform tasks beyond the capabilities of natural systems. This thesis explores the design of visual intelligence by integrating insights from both biology and engineering in two complementary parts. In Part I, we computationally recreate the evolution of vision within simulated embodied agents. By evolving the physical and neural aspects of vision in simulation, and training these visually capable agents with deep reinforcement learning, we demonstrate that task-specific environmental pressures lead to distinct eye morphologies and behaviors, mirroring observations in biological evolution. This in silico approach enables us to investigate the fundamental principles underlying the emergence of animal eyes and provides a framework for exploring novel sensor designs subject to both biological (e.g., survival) and engineering constraints (e.g., manufacturability). In Part II, we leverage visual cues not typically used in nature (i.e., active illumination and multi-bounce light) to demonstrate enhanced robotic navigation via non-line-of-sight imaging. Using single-photon LiDARs, we capture the temporal propagation of individual photons, enabling the detection of objects around corners. This sensing capability allows us to develop robots that effectively anticipate and avoid hidden obstacles, reducing navigation time by 50% and overall trajectory length by 33%.
Together, these works demonstrate how the synthesis of biologically-inspired design principles with advanced sensing modalities can enhance embodied agents' capabilities, while providing insights into both natural vision evolution and robotic perception.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Archean origin of assimilatory sulfate metabolisms provides novel insight into redox conditions of early Earth environments</title>
<link href="https://hdl.handle.net/1721.1/158898" rel="alternate"/>
<author>
<name>Payette, Jack G.</name>
</author>
<id>https://hdl.handle.net/1721.1/158898</id>
<updated>2025-04-07T09:26:51Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Archean origin of assimilatory sulfate metabolisms provides novel insight into redox conditions of early Earth environments
Payette, Jack G.
Dissimilatory sulfur metabolisms recording differing biological isotopic fractionation are well studied, important components of sulfur cycling (Mateos et al., 2023). Assimilatory sulfur metabolisms and genes across life provide a complementary window into sulfur biogeochemistry, with individual pathways having specific isotopic fractionations acting on distinct redox states (e.g. sulfate, sulfide, sulfite) for anabolism (Liu et al., 2012). One assimilation pathway starts with sulfate adenylyltransferase (sat/ATP sulfurylase) catalyzing a reaction of adenosine triphosphate (ATP) and sulfate (SO42-) resulting in adenosine 5’-phosphosulfate (APS), followed by incorporation of more reduced sulfur into biomolecules. This sat/ATP sulfurylase enzyme represents the first step required by life to incorporate sulfate and informs our understanding of biological processes performing this fundamental chemical reaction. A phylogenetic and molecular clock analysis of the sat/ATP sulfurylase protein family (E.C. 2.7.7.4) was performed to determine the age of sulfate assimilation proteins. Extant diversity of sat proteins was estimated to have a last common ancestor ~3.24 Ga (95% CI 3.52–3.06 Ga) using relaxed molecular clocks calibrated with eukaryotic and cyanobacteria age ranges from previously published fossil-calibrated investigations. These results suggest sulfate cycling in Paleoarchean environments, despite extensive evidence of low marine sulfate concentrations (Crowe &amp; Canfield, 2014). Archean sulfate biogeochemical cycling could result from microbial sulfur oxidation, and sources could include abiotic oxidation of volcanic sulfur, hydrothermal processes, or pyrite (Canfield, 2001; Lyons et al., 2024). This phylogenomic evidence of sulfate during Archean times provides an independent complement to geochemical records and indicates that sulfur redox chemistry during the Archean was likely more complex than previously described.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Falling isn't the End: Reimagining Demolition as a Creative Practice</title>
<link href="https://hdl.handle.net/1721.1/158896" rel="alternate"/>
<author>
<name>Lee, So Jung</name>
</author>
<id>https://hdl.handle.net/1721.1/158896</id>
<updated>2025-04-08T04:12:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Falling isn't the End: Reimagining Demolition as a Creative Practice
Lee, So Jung
This thesis investigates resilience not as an endpoint but as a condition of continuous transformation. It critiques the shortcomings of current architectural discourse in addressing climate disasters, waste, and carbon footprints. While these crises are widely acknowledged, architecture often operates within restrictive economic, legal, and cultural systems, relegating resilient design to the periphery or diminishing its potential impact.
Collapse, traditionally perceived as failure, is reimagined here as a generative moment—an opportunity to rethink materials, systems, and the narratives that shape them. Central to this exploration is the concept of assembly, where materials are designed with deliberate life spans—some transient, others enduring. By anticipating the gaps and shifts that arise when permanence is no longer assumed, this thesis proposes new possibilities for adaptive design and architectural resilience within the evolving rhythms of life.
To articulate these ideas, the thesis employs speculative scenarios and temporal media. These tools position architecture as a system in flux, evolving in tandem with societal and environmental changes. Through narrative-driven methodologies, this work seeks to expand architectural discourse, prompting reflection on the discipline’s foundational assumptions while connecting it to broader cultural and systemic challenges.
Ultimately, this thesis redefines resilience—not as resistance or mere survival but as a dynamic and imaginative practice. It advocates for architecture’s leadership within the broader zeitgeist of sustainability, transforming pressing global challenges into opportunities for creative agency and systemic reinvention.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>American (Ise): On the Lifecycle of Stadiums in the United States</title>
<link href="https://hdl.handle.net/1721.1/158895" rel="alternate"/>
<author>
<name>Wang-Xu, Mackinley</name>
</author>
<id>https://hdl.handle.net/1721.1/158895</id>
<updated>2025-04-08T04:40:25Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">American (Ise): On the Lifecycle of Stadiums in the United States
Wang-Xu, Mackinley
When the Kingdome in Seattle was completed in 1976, it was celebrated as a marvel of modern engineering, expected to last for centuries. Yet, in an ironic twist, it was demolished by implosion in 2000, surviving only twenty-four years. The Kingdome epitomizes the issue of short lifespans that has plagued American stadiums since the post-war era. A broad survey of these structures reveals an average lifespan of just three decades—a startlingly brief tenure for buildings of their scale and significance. These stadiums also follow a distinctive model of renewal. Similar to the Shikinen Sengu ritual at the Ise Shrine, a new stadium is often constructed adjacent to its predecessor. However, unlike Ise, where materials from the old shrine are reused and disseminated throughout Japan’s network of shrines, old stadiums are almost always demolished and discarded. This thesis seeks to superimpose Ise as a model onto American stadiums, envisioning an architecture that embraces both impermanence and longevity through circularity. Investigations into the barriers to circularity specific to stadiums serve as the foundation for design proposals, spanning scales from the detail to the site. The project ultimately imagines a stadium in a constant process of disassembly and renewal, where its spatial and programmatic potential challenge paradigms of completeness. In the context of a climate crisis demanding waste reduction, and for a typology notorious for its excess, stadiums can learn to do more with less.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Insurance</title>
<link href="https://hdl.handle.net/1721.1/158891" rel="alternate"/>
<author>
<name>Janson, Charles Perot</name>
</author>
<id>https://hdl.handle.net/1721.1/158891</id>
<updated>2025-04-07T08:25:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Building Insurance
Janson, Charles Perot
Over the past 350 years, the building insurance industry has been shaped by a series of major urban fires, each incrementally standardizing risk assessment and property valuation as financial products of risk management. In recent years, however, climate change has introduced unprecedented weather events that challenge the fine-tuned models of insurance; in particular, the rise of wildfires in California and the Pacific Northwest has led to local withdrawal of insurance altogether. Within these contexts, the spatial conditions inherited from a highly insured past continually sustain separation, individual prosperity, and standard assemblies as inheritances of expansionist agendas. At this juncture of system failure, this thesis asks: how can architecture rethink more cooperative forms of building and living together that localize risk sharing, responsibility, and stewardship? While wildfire defense strategies put forth by insurance companies and building codes armor the stick-frame American single-family home and its aesthetic traditions, this thesis proposes a new building typology entirely: a neighborly cooperative of adjoined homes. Under a single roof, property lines are transformed into sites of mutual stewardship, manifesting insurance no longer as an abstract response to risk, but as a series of social and spatial relationships between neighbors.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Chongqing Tiandi Project: An Asset Management Perspective</title>
<link href="https://hdl.handle.net/1721.1/158890" rel="alternate"/>
<author>
<name>Yang, Junsi</name>
</author>
<id>https://hdl.handle.net/1721.1/158890</id>
<updated>2025-04-08T04:31:31Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Evaluating Chongqing Tiandi Project: An Asset Management Perspective
Yang, Junsi
This thesis uses the Chongqing Tiandi project as a case study to analyze the entire process of development and asset management for large-scale urban renewal projects in China's second-tier cities. It focuses on the motivations and outcomes of Shui On Land's transition from an asset-heavy to an asset-light model. Based on theoretical analysis (Chapter 2), corporate-level financial analysis (Chapter 3), and project-level in-depth studies and interviews (Chapter 4), the thesis explores the logic and impact of this strategic transformation from multiple perspectives. The theoretical analysis summarizes real estate lifecycle management theory, portfolio theory, and corporate strategic transformation theory, providing a framework to examine Shui On Land's strategic decisions. The financial analysis reveals that, from 2015 to 2017, Shui On Land faced significant financial pressure with high debt ratios and cash flow constraints, necessitating systematic asset disposals. While the company disposed of multiple assets during this period, Chongqing Tiandi's 79.2% equity disposal was particularly strategic due to its position as a high-risk, low-return asset within the company's portfolio. The project-level analysis and interviews demonstrate that replicating successful development models from first-tier cities in second-tier markets faces unique challenges. In Chongqing Tiandi's case, these challenges manifested in multiple ways: limited residential price premiums due to local land supply policies, substantial investment requirements for super high-rise developments exceeding $1 billion, and persistently low office rental rates in the local market. These factors compromised the project's financial self-sustainability and made it particularly vulnerable in Shui On's portfolio, especially when compared to projects in other second-tier cities like Wuhan. 
The development and subsequent equity sale of Chongqing Tiandi not only provided essential financial support for Shui On Land but also reflected a strategic decision to divest from a project where market conditions created both immediate challenges and future uncertainties. This research provides valuable references for the development of large-scale projects in China's second-tier cities, emphasizing the need for developers to utilize funds efficiently, adapt flexibly to market changes, and focus on achieving long-term value. These insights hold significant implications for sustainable development in complex market environments.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooling Innovation and Circularity: Addressing Water Stress in the Age of AI-Driven Data Centers</title>
<link href="https://hdl.handle.net/1721.1/158889" rel="alternate"/>
<author>
<name>Kseibati, Reem</name>
</author>
<id>https://hdl.handle.net/1721.1/158889</id>
<updated>2025-04-07T09:18:33Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Cooling Innovation and Circularity: Addressing Water Stress in the Age of AI-Driven Data Centers
Kseibati, Reem
This thesis examines the growing demand for data centers and the critical challenges posed by their water and energy consumption. As artificial intelligence (AI) technologies expand, the infrastructure supporting these systems has become essential. The study highlights the projected increase in data center capacity driven by AI workloads and focuses on the impact in water-stressed regions across the United States. Given the resource-intensive nature of data centers, the research explores cooling technologies aimed at reducing environmental impact. Traditional air cooling is compared with innovative liquid and evaporative cooling techniques. Additionally, the thesis promotes circular economy principles, emphasizing resource efficiency, reuse, and regeneration as a pathway to sustainable operations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Living in Seoul: Addressing Housing Needs and Redefining Rental Market Trends</title>
<link href="https://hdl.handle.net/1721.1/158888" rel="alternate"/>
<author>
<name>Park, Suhyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/158888</id>
<updated>2025-04-07T08:26:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Co-Living in Seoul: Addressing Housing Needs and Redefining Rental Market Trends
Park, Suhyeon
Co-living emerged as a novel asset class in the mid-2010s, addressing the housing needs of urban residents affected by rising housing costs, increasing urban migration, and the growing prevalence of single-person households. In South Korea, co-living has gained attention as a viable alternative to traditional housing, driven by unique local dynamics, including the decline of the dominant Jeonse system and a significant shortage of housing tailored to single-person households. With a growing preference for monthly rental systems over the Jeonse system, both local conglomerates and start-ups have capitalized on the opportunity to offer company-operated co-living spaces. As the market grows, major international investors and global co-living providers have also entered, reflecting a unique market environment where institutionalized housing options are expanding alongside a notable shift in rental transaction systems. In this new era of urban housing, co-living is rapidly expanding and gaining popularity. This thesis seeks to answer the following question: What factors have driven the emergence and growth of the co-living market in Seoul, and what is its growth potential? To address this, it starts with an analysis of market drivers, provider strategies, and regulatory developments, followed by projections of market potential and an assessment of potential threats and mitigation strategies for the long-term viability of co-living in Seoul. The goal is to offer insights for co-living providers to optimize their spaces and services. The findings suggest that while co-living addresses unmet housing demand, its long-term success depends on balancing operational efficiency with tenant satisfaction. While these strategies are applicable in other cities, they are particularly critical in Seoul, where the Jeonse system remains a strong and historically preferred alternative.
In Seoul, co-living serves a dual mission: introducing an innovative housing model and reshaping the paradigm of the Wolse rental housing system. To succeed, co-living operators must clearly articulate their unique value proposition, addressing both the housing needs of urban residents and the broader evolution of the rental market.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ending Well, Making the Harvest-Paths of Our Values</title>
<link href="https://hdl.handle.net/1721.1/158886" rel="alternate"/>
<author>
<name>Kpodo, Courage Dzidula Kwaku</name>
</author>
<id>https://hdl.handle.net/1721.1/158886</id>
<updated>2025-04-07T09:22:58Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Ending Well, Making the Harvest-Paths of Our Values
Kpodo, Courage Dzidula Kwaku
Any single story shrinks all others. In a place historically cultivated for the cocoa cash crop, this thesis proposes reorienting architectural practice towards a plural valuing of land and its constituent spirits. The journey begins in 2022 with my acquisition of a 99-year lease on a 5-acre plot of land in Ghana. Prior to the conception of an academic proposal, this was to preserve and grow ecological and financial value through time.
Located on a hill-cluster in the Eastern Region, this place is crucial as the birthplace of Ghana’s cocoa industry, which became the world’s largest exporter by 1911. Spurred by economic and colonial incentives, farmer-settlers acquired and cultivated forest land including the one I presently steward. They forged communities that live on despite a subsequent decline of cocoa production in the region. Five centuries of colonial influence in West Africa reduced a plural landscape into singular extractive narratives, creating place-names like the Gold Coast, renamed Ghana after independence. The capitalist framework of monocultural extraction, one reliant on a colonial government and its land survey department, continues under contemporary African states. Architecture and planning—a practice historically tied to power and capital—remains instrumental in this system, often overlooking other ways of valuing land.
This thesis confronts the dispositions of an inherited profession by foregrounding the practices and materials of a socio-cultural paradigm. It is epitomized by the tree called Newbouldia laevis (African boundary tree) and its plural meanings in West Africa. It follows a cocoa harvest-path from a community named after a farmer-settler, Yaa-Aso, and ascends the hills, crossing the land limits of 7 farmers. It ends on the land I hold, with a lease ending in CE 2122.
In July 2024, I led a convocation of the farmers along the path in the defunct cocoa distribution building, toward framing futures based on other values apart from capital. 3 languages were spoken in that gathering - Twi, Anlo-Eʋe and English. It resulted in a 7-foot expansion of the path, and the pacification of a seasonal spirit-stream that crosses it. They set the context for imagining a series of 5 moments, herein recorded, that explore a value system of things spiritual and communal, offered by the transgressions of a widened path and the land I hold at its end.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Sustainable Recommender Systems</title>
<link href="https://hdl.handle.net/1721.1/158881" rel="alternate"/>
<author>
<name>Huang, Lei</name>
</author>
<id>https://hdl.handle.net/1721.1/158881</id>
<updated>2025-04-07T09:14:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Designing Sustainable Recommender Systems
Huang, Lei
Recommender systems are widely deployed to serve users with content they like. However, content must be created and insufficient demand dampens a creator’s production incentive. We argue that the canonical recommender system may not be sustainable if, by promoting the content each user likes the most, it suppresses the creation incentive of the less popular but still valuable content. We propose a “sustainable recommender system” solution – subsidize creators with demand according to their “sensitivity,” which measures how easily a creator can be incentivized by demand, and their “contribution,” which measures how important a creator is to users overall. Theoretically, we prove that this algorithm maximizes long-term user utility by internalizing the externality of user choice on other users. Computationally, our main innovation is to estimate creator contribution using computer vision, where we train a deep-learning model to compute how creator distribution affects system-wide user utility. Analyzing data from a large content platform, we show that our algorithm incentivizes valuable creators and sustains long-term user experience.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Place: Unlocking Value for Investors by Integrating Indigenous Values in Luxury Hospitality</title>
<link href="https://hdl.handle.net/1721.1/158878" rel="alternate"/>
<author>
<name>Peragallo, Nadra Alia</name>
</author>
<id>https://hdl.handle.net/1721.1/158878</id>
<updated>2025-04-07T09:02:36Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Empowering Place: Unlocking Value for Investors by Integrating Indigenous Values in Luxury Hospitality
Peragallo, Nadra Alia
The luxury hospitality industry has long been attuned to shifting consumer preferences, particularly as travelers increasingly seek unique, meaningful experiences. In today’s global market, trends centered on personalization, wellness, authenticity, and regeneration—further accelerated in the post-pandemic travel era—present both challenges and opportunities for real estate investors. This shift raises a critical question: How and where can value be unlocked in this evolving landscape?

This thesis explores how real estate investors can maximize value creation in the luxury hospitality sector by leveraging traditional performance metrics alongside a complementary framework designed to uncover underexplored opportunities and enhance collaboration among stakeholder groups. Through the analysis of two case studies—Salterra Resort &amp; Spa in South Caicos, Turks &amp; Caicos Islands, British West Indies, and Puntacana Resort and Club in the Dominican Republic—the study demonstrates the practical application of this framework in tropical, coastal, and island regions, where the interaction between tourism, local communities, and fragile ecosystems is particularly pronounced. By showcasing its success, this research provides adaptable stakeholder rubrics and qualitative system dynamics causal loop diagrams as templates, while broadening the scope for innovation and inspiring further exploration of sustainable, value-driven approaches in luxury hospitality.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CO₂ Capture with Lithium Oxide in Molten Salt Media : A Case Study of CO₂ Capture via Electrochemically Produced Metal Oxide</title>
<link href="https://hdl.handle.net/1721.1/158875" rel="alternate"/>
<author>
<name>Byun, Gi Hyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158875</id>
<updated>2025-04-07T09:19:18Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">CO₂ Capture with Lithium Oxide in Molten Salt Media: A Case Study of CO₂ Capture via Electrochemically Produced Metal Oxide
Byun, Gi Hyun
As the unprecedented temperature rise originating from anthropogenic carbon dioxide (CO₂) emissions intensifies, the development of post-combustion carbon capture technologies has become urgent. Despite their maturity, conventional thermal swing processes using aqueous amines suffer from significant limitations, including high energy requirements and sorbent degradation. Electrochemical CO₂ capture technologies, which use electrical energy instead of thermal energy, have emerged as an energy-efficient way to capture CO₂. This shift not only improves energy efficiency but also reduces reliance on fossil fuels, further contributing to the reduction of CO₂ emissions. This work explored the potential of electrochemical metal oxide formation for CO₂ capture, a promising alternative to amine-based systems due to its exceptional sorbent (i.e., metal oxide) stability. Li₂O in a eutectic mixture of potassium nitrate (KNO₃) and lithium nitrate (LiNO₃) was chosen as a case study due to the relatively well-understood chemistry of the system and the potential synergistic effects between the metal oxide and the molten salt. First, we investigated the synergistic effect of Li₂O in nitrate molten salt via thermogravimetric analysis. Next, Li₂O produced electrochemically by the reduction of oxygen gas was tested as a CO₂ sorbent while investigating the parameters affecting its conversion to lithium carbonate (Li₂CO₃). Through this study, we identified a dissolution model as a crucial pathway for conversion. Lastly, we explored the effect of adding nitrite ions (NO₂⁻) to the molten salt. An irreversible side reaction between NO₂⁻ and CO₂ was confirmed with X-ray diffraction and NOₓ measurements. This thesis demonstrates the feasibility of electrochemical metal oxide-based CO₂ capture, highlighting key considerations in the capture step.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost Optimized Logistics for Commercial Operations in Low Earth Orbit and Cislunar Space</title>
<link href="https://hdl.handle.net/1721.1/158866" rel="alternate"/>
<author>
<name>Brown, Ireland</name>
</author>
<id>https://hdl.handle.net/1721.1/158866</id>
<updated>2025-04-08T04:38:57Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Cost Optimized Logistics for Commercial Operations in Low Earth Orbit and Cislunar Space
Brown, Ireland
Designing profitable mission and logistics architectures is necessary to establish a commercial market and support a robust space economy. It is the goal of the National Aeronautics and Space Administration (NASA) to establish such an economy in low Earth orbit (LEO) through the implementation of commercial LEO destinations and to commission self-sustaining lunar infrastructure through the Artemis missions. The ISS and the Apollo lunar landers demonstrated the ability to provide safe and reliable habitation, but the cost to support these missions has been on the order of billions of United States dollars (USD). Minimizing the operational costs of commercial space systems will be required if commercial companies expect to generate a profit from their services. To address this, this thesis derives and demonstrates a manual cost optimization method for space system mission architectures with respect to logistics and system design. In tandem, a computational tool called the Cost model for Space system Operations (COST-O) was developed. The demonstration included iterating a logistics and system design vector for two cases: a commercial LEO space station and a commercial lunar in-situ resource utilization (ISRU) liquid oxygen generation system. These mission architectures were modelled and simulated in SpaceNet, which first analyzed them for feasibility; the results were then processed by COST-O. These data were used to make financial forecasts and were analyzed for cost sensitivity. The results suggest that for a commercial LEO space station, a closed-loop environmental control and life support system (ECLSS), a large stockpile of resources, a reduced resupply cadence, and a combination of tourists and visiting crew would be a profitable architecture at a crew capacity of at least three paying customers present on the station per day, with an annual operational cost of 1,129,731,710 USD.
Profits would be achieved by the end of ten years of steady-state operations at the current market price of 3.12 million USD per crew member per day. Attempts to minimize this cost should first be made in the cadence of funded astronaut technician flights, as crew launches contribute most to the overall operational cost. Future work should address ways to minimize this, such as reducing the number of astronaut technicians that must be present at any given time. For a commercial lunar ISRU liquid oxygen generation system, an architecture supporting a closed-loop system, using Starship as the launch and landing vehicle, a prepositioned stockpile of resources on the lunar surface, and a hydrogen reduction agent is most cost-optimal, with an annual operating cost of 19,275,486,559 USD and profitability achieved at the design rate of twenty metric tons of liquid oxygen produced and sold per year. At the current market price of 1.2 million USD per kilogram, the system would be profitable by the end of the first year of steady-state operations. Attempts to further minimize this operational cost should improve the recyclability of the system. Future work should evaluate adding robustness to the architecture by delivering multiple systems and should model deliberate cargo packing decisions.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>For and Beyond the Plaques: Sustainable Certification Adoption&#13;
 and Its Impact on Real Estate Decision-Making in the Boston-Cambridge Market</title>
<link href="https://hdl.handle.net/1721.1/158865" rel="alternate"/>
<author>
<name>Huang, Shenglin</name>
</author>
<id>https://hdl.handle.net/1721.1/158865</id>
<updated>2025-04-08T04:09:19Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">For and Beyond the Plaques: Sustainable Certification Adoption&#13;
 and Its Impact on Real Estate Decision-Making in the Boston-Cambridge Market
Huang, Shenglin
As demand for green and healthy buildings grows, real estate developers face complex decisions regarding building certification adoptions, which have become influential in real estate market dynamics. This thesis investigates how developers in the competitive Boston-Cambridge area navigate the sophisticated certification landscape—focusing on LEED, ENERGY STAR, WELL, Fitwel, and WiredScore/SmartScore—to gain competitive advantages, attract and retain tenants, maximize financial performance, and align with regulatory requirements and ESG goals.&#13;
Using a mixed-methods approach, including quantitative analysis of certification overlaps and trends, along with qualitative insights from industry interviews, the study provides a comprehensive understanding of how real estate developers strategically use certifications to influence asset value while meeting tenant and investor expectations. Findings offer potentially actionable insights into how certifications shape market positioning and inform the decision-making process in real estate development.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using AI to Improve Price Transparency in Real Estate Valuation</title>
<link href="https://hdl.handle.net/1721.1/158862" rel="alternate"/>
<author>
<name>Xu, Cunjia</name>
</author>
<id>https://hdl.handle.net/1721.1/158862</id>
<updated>2025-04-08T04:14:36Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Using AI to Improve Price Transparency in Real Estate Valuation
Xu, Cunjia
This thesis explores the integration of artificial intelligence (AI) into real estate valuation, focusing on visual property attributes to enhance traditional hedonic models. By incorporating Vision Language Models (VLMs) and generative AI, the research evaluates the potential of these technologies to assess non-standard variables such as the aesthetic appeal, condition, and cohesiveness of interior and exterior property photos. The study contrasts traditional hedonic regression models, which rely on quantifiable factors such as square footage and location, with a new approach that includes AI-generated scores derived from property photos. The study employs three distinct models: the No_Rubric Model, the Composite Model, and the Verbose Model, with the Hedonic model serving as the baseline for evaluating their performance. The results demonstrate that incorporating visual data significantly improves model accuracy, aligning valuations more closely with buyer preferences and sold prices. This shift addresses the industry's need for price transparency and highlights how developers can design properties that better meet market demands.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microfluidic Platform for Vascularized Tissue Models</title>
<link href="https://hdl.handle.net/1721.1/158859" rel="alternate"/>
<author>
<name>Johnson, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/158859</id>
<updated>2025-04-07T09:05:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Microfluidic Platform for Vascularized Tissue Models
Johnson, Matthew
This thesis presents a microfluidic platform designed to support 3D vascularized tissue models for microphysiological systems. The platform delivers pneumatic pressure and vacuum signals to drive fluid flow and pressure on tissue culture devices with integrated pumps and back-pressure regulators. The mechanical performance of the pumps and back-pressure regulators is characterized. Tissue compartments in each device contain endothelial and stromal cells suspended in a hydrogel during culture. An oxygenating reservoir stores and replenishes oxygen in circulating cell culture media. During assembly, screws are used to compress an elastomeric membrane, forming a seal and transmitting pneumatic pressure signals from the connection manifold to actuate the fluidic control elements. After a biological experiment, the tissue culture devices can be disassembled, cleaned, and re-used, thus enabling cost-effective experimentation and prototyping. Each of the four layers of the tissue culture devices is made of thermoplastic polymers, and their design is translatable to injection molding for future production at scale. The design and manufacturing methods for the platform and individual device features are discussed. Two major biological experiments are presented to demonstrate the platform's ability to support emergent vascularization in the tissue culture device over 7 days. Microscope images show the development of perfusable microvessel networks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Engineered Skeletal Muscle Rings as Actuators Using Strain Sensing Methods</title>
<link href="https://hdl.handle.net/1721.1/158858" rel="alternate"/>
<author>
<name>Rosado, Laura M.</name>
</author>
<id>https://hdl.handle.net/1721.1/158858</id>
<updated>2025-04-07T08:26:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Characterizing Engineered Skeletal Muscle Rings as Actuators Using Strain Sensing Methods
Rosado, Laura M.
A novel instrument was designed to characterize a force exertion model of engineered skeletal muscle rings. The instrument uses strain gauges to transduce muscle ring contractions and has a verified resolution of 5 μN and 1.4 μm over ranges of 5 μN and 1400 μm, respectively. Experiments were carried out with four muscle ring specimens at six different structural stiffnesses. Each ring was excited at 1 Hz for 30 seconds while force and displacement were monitored. It was determined that the relationship between muscle contractile distance and force is described by a negative power function.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Use of System Theoretic Process Analysis (STPA) on Novel Tiltrotor Aircraft to Prevent Mode Confusion</title>
<link href="https://hdl.handle.net/1721.1/158856" rel="alternate"/>
<author>
<name>Basnight, Natalie Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/158856</id>
<updated>2025-04-08T04:26:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Use of System Theoretic Process Analysis (STPA) on Novel Tiltrotor Aircraft to Prevent Mode Confusion
Basnight, Natalie Ann
Initiatives are underway to develop tiltrotor and vertical take-off and landing (VTOL) aircraft that enhance commercial and military aviation’s autonomy, capability, and survivability. These designs integrate rotary- and fixed-wing elements, introducing distinct safety considerations. These safety concerns stem largely from the differing mental models of operators trained in either rotary- or fixed-wing aviation, alongside the rising reliance on autonomy. Traditional hazard analysis techniques (e.g., Fault Tree Analysis and Failure Modes and Effects Criticality Analysis) do not adequately account for system component interactions or human factors in complex new aircraft designs. System Theoretic Process Analysis (STPA) is a powerful new hazard analysis technique for novel tiltrotor aircraft that accounts for their unique safety requirements. It is a top-down system hazard analysis technique that identifies loss scenarios (N. G. Leveson and J. Thomas, March 2018), and it satisfies the tasks described in MIL-STD-882E (Department of Defense 2023). This research demonstrates the use of STPA to identify and mitigate potential instances of mode confusion between the operator’s mental model and the autonomy’s decision logic in the uniquely dynamic tiltrotor environment. Two previous tiltrotor aircraft accidents are analyzed using Causal Analysis based on System Theory (CAST) to help frame the importance of human and machine collaboration in such systems. These accidents show a trend in the dangers of aircraft system mismanagement between various controllers. The CAST results for these accidents provide information about how to prevent such incidents in the future, setting the stage for the use of STPA on novel tiltrotor aircraft, as demonstrated in this thesis. STPA can be used before design, implementation, and fielding, allowing for better early design of systems and reducing the cost of later redesign or modification.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Testing of a Hovercraft with Electroaerodynamic Propulsion</title>
<link href="https://hdl.handle.net/1721.1/158851" rel="alternate"/>
<author>
<name>Quiram, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/158851</id>
<updated>2025-04-08T04:31:42Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Testing of a Hovercraft with Electroaerodynamic Propulsion
Quiram, Matthew
Electroaerodynamic (EAD) multistaged ducted (MSD) thrusters are a novel solid-state thruster architecture that has been shown to provide order-of-magnitude improvements in thrust density compared to single-stage EAD thrusters. This makes MSD thrusters well-suited for use in EAD hovercraft, where generating sufficient pressure is crucial for hovering. This study explored the feasibility of a hovercraft powered by wire-to-airfoil corona discharge MSD thrusters through a scaled-down prototype and a final design. To limit the scope of the project, the hovercraft was tethered to a ground-based power supply and carried a payload mass to simulate on-board power electronics. The design of an EAD hovercraft involved applying the principles of hovercraft lift in a design optimization that implements the recently developed EAD MSD thruster model. A hovercraft prototype was designed and constructed to validate the models applied during the design phase and to test hovering capabilities without a payload. Using the manufacturing lessons and insights gathered in prototype testing, a full-scale model was designed and built to hover while carrying an additional payload capacity representative of a set of power electronics.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston</title>
<link href="https://hdl.handle.net/1721.1/158849" rel="alternate"/>
<author>
<name>Proman, Zachary D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158849</id>
<updated>2025-06-09T15:22:12Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston
Proman, Zachary D.

</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Exploration of a Miniaturized Stirling Engine</title>
<link href="https://hdl.handle.net/1721.1/158848" rel="alternate"/>
<author>
<name>Hee, Ryann</name>
</author>
<id>https://hdl.handle.net/1721.1/158848</id>
<updated>2025-04-07T09:04:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design Exploration of a Miniaturized Stirling Engine
Hee, Ryann
Increased interest in long-term space exploration has increased demand for small yet powerful energy sources, especially for remote and harsh environments where traditional power sources may be impractical. In such scenarios, space probes and high-reliability systems necessitate innovative solutions to meet their growing power and thermal management requirements while maintaining small form factors. Presently, micro power systems fall short of achieving the desired efficiencies for these applications, typically hovering around 2% [1]. Stirling engines, with their proven capability to attain high thermodynamic efficiency (30-40%), offer a promising solution if this efficiency can be maintained in a miniaturized form [2]. This study delves into the design space of a miniaturized Stirling engine with a target input of 2 Wth, which could be tailored for small-scale (mesoscale, ~cm³) high-efficiency power generation or micro-cooling. Previous research has laid the groundwork for understanding the thermodynamics of miniaturized Stirling engines, exposing substantial challenges, including overwhelming parasitic losses at this scale. The current study endeavors to mitigate these losses and explore the path to optimal efficiencies through Simulink modeling. Simulations have demonstrated design spaces capable of producing mechanical efficiencies as high as 14% with a 2 Wth input, marking significant progress in addressing the limitations of current micro power systems. The research's innovative approach has significant implications for enabling the power generation required for small space probes, particularly those operating for long durations and needing self-sustaining power over extended periods [3], [4]. As the study advances, it holds the promise of developing a physical prototype using the findings from the design space study, helping push the field forward for future power generation and micro-cooling in small-scale space technology. 
This thesis aims to map the design space of a miniaturized Stirling engine, focusing on mitigating parasitic losses to achieve markedly greater efficiency compared to existing technologies.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Missing Megawatts Problem: Improving Modelling Practices to Prepare for an Uncertain Future</title>
<link href="https://hdl.handle.net/1721.1/158844" rel="alternate"/>
<author>
<name>Bhatt, Nirmal K.</name>
</author>
<id>https://hdl.handle.net/1721.1/158844</id>
<updated>2025-04-07T08:49:38Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Missing Megawatts Problem: Improving Modelling Practices to Prepare for an Uncertain Future
Bhatt, Nirmal K.
Long-term energy system planning is one of the most pressing challenges for the power sector, which must maintain reliability while decarbonizing. Currently, no unified regulatory, modelling, or market framework exists in the United States to facilitate planning in pursuit of a clean and reliable grid. Variable renewable energy (VRE) generation can produce cheap power, but it increases the grid's exposure to interannual variability in demand and VRE output. This raises questions about how grid planners will value VRE and clean firm power (such as nuclear power). This thesis evaluates the importance of considering interannual variability and clean firm power in long-term energy system planning. I use GenX, an open-source capacity expansion model, to model the U.S. New England region in 2050, assuming a high degree of electrification and various technology availability and emissions reduction pathways. I find that clean firm power will reduce the cost of decarbonizing the New England grid, but that grid planners must consider decades of weather and demand data if they are to make appropriate investments. I also present a novel outputs-based timeseries clustering method that allows models like GenX to optimize grids using longer timeseries of weather and demand data. Based on my work, I recommend that policymakers, grid operators, and market designers establish rigorous standards around energy modelling for long-term planning that include multiple scenarios and appropriately value technologies such as firm power.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>If These Hills Could Speak</title>
<link href="https://hdl.handle.net/1721.1/158841" rel="alternate"/>
<author>
<name>Bayowa, Tejumola</name>
</author>
<id>https://hdl.handle.net/1721.1/158841</id>
<updated>2025-04-08T04:18:01Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">If These Hills Could Speak
Bayowa, Tejumola
If these hills could speak, what would they reveal, and how would they express it? This central question guides this thesis, which examines three hills in the heart of Ibadan, Southwest Nigeria— each occupied by the ruins of colonial monuments. Before the construction of these structures, the hills served as sanctuaries, providing water, food, and safety. However, under British colonial rule, architecture was utilized to disrupt this harmonious relationship. Over the course of 50 years, three monuments were erected that mark Britain’s colonial imprint on the city: a neoclassical courthouse (1925), built to assert control over the central market; a 60-foot tower (1936), which displaced the surrounding forests; and a theater (1977), built during a time of national struggle for unity and identity. Today, at the foot of these hills, a community has forged a way of life within a broken system. By repurposing and subverting structures in ways their creators never intended, this community embodies a praxis and poiesis of adaptive creativity within the built environment. This process represents a transformative act of pidginization—a collective tactic for repair, resistance, and reappropriation in response to an ongoing, imposed socio-political order. For these hills to speak again, the ruins must be transformed. This thesis begins that process by applying acts of pidginization learned from below to the three ruins. It proposes their conversion through deconstruction and de-monumentalization, with the aim of fostering economic development, ecological restoration, and cultural production in the city.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Hing Travel Agency Fictional Archive of Disappearing Hong Kong</title>
<link href="https://hdl.handle.net/1721.1/158840" rel="alternate"/>
<author>
<name>Wu, Ina</name>
</author>
<id>https://hdl.handle.net/1721.1/158840</id>
<updated>2025-04-07T09:27:37Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">On Hing Travel Agency Fictional Archive of Disappearing Hong Kong
Wu, Ina
Hong Kong, shaped by rapid transformation and precarious land ownership, is a city where erasure defines its urban landscape. Amid this flux, a place I once called home was demolished, prompting the question: “How can one return to a place that no longer exists?” This thesis explores the transformative potential of disappearance, reframing it as a generative force that creates space for imagination, resistance, and continuity. Through On Hing Travel Agency (OHTA), demolished buildings "travel" into fictional worlds, becoming vessels of memory and imagination. Rooted in Hong Kong’s literary tradition—where fiction resists erasure and archives aspirations—the project employs fiction as both a tool of preservation and a site for belonging. Fictional destinations, inspired by Hong Kong novels such as The Permanent City (1959), The Floating City (1986), and The Vanished Cities (2010), reflect pivotal historical moments while offering pathways to reconcile personal loss and master alternative spatial logics. The project culminates in the Lost Traveler’s Guide to Hong Kong, a publication curating maps, brochures, and layered narratives to immerse travelers in speculative thinking. By bridging past and future, real and imagined, OHTA attempts to demonstrate how fiction can reclaim agency within the politics of disappearance, transforming loss into a catalyst for new narratives and creative engagement. Even in absence, Hong Kong’s disappearing spaces retain their resonance, generating new narratives and underscoring the creative potential of loss.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sweating Details: Labor of “Los Constructores del Valle”</title>
<link href="https://hdl.handle.net/1721.1/158839" rel="alternate"/>
<author>
<name>Andrade, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/158839</id>
<updated>2025-04-07T08:43:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Sweating Details: Labor of “Los Constructores del Valle”
Andrade, Gabriel
“You should always be grateful for the work you can find, so make sure you prove you deserve it.”- Commonly heard growing up amongst the Builders of the Valley in Orange, NJ. The necessary attitude that fuels the built environment.&#13;
&#13;
This thesis proposes a dialogical method of tectonics through exploring the embodied experiences of those who physically build the city and its architecture, positioning architectural design as fundamentally tied to the labor that makes buildings possible. It centers on two primary questions: “Who builds this architecture?” and “How does this design impact a builder’s occupational livelihood?”&#13;
&#13;
To challenge professional standards that perpetuate a disconnection between designers and builders, this thesis reconnects me, as a designer, with my educators from Orange, NJ. These individuals—professional construction workers—shaped my earliest understanding of the built environment and how to navigate it socially and professionally. Through this process, I learn more about who they are, how they entered construction, and how the work has affected them over the years.&#13;
&#13;
This education, carried forward through ongoing dialogue, points toward future opportunities to work together, focusing on designing better for the act of building by prioritizing the physical, mental, and financial longevity of my Educators. The culmination of this research and communication is materialized through four architectural details within a workspace, designed to showcase my Educators’ expertise and affinities as professionals. These details reimagine occupational choreography, opening up future workflows that think through both lessening and healing the musculoskeletal disorders that many builders face after years of laboring across the tristate area.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Instrument for the Measurement of Soft Material Nonlinear Mechanical Response</title>
<link href="https://hdl.handle.net/1721.1/158836" rel="alternate"/>
<author>
<name>Unikewicz, Brendan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/158836</id>
<updated>2025-04-08T04:52:50Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">An Instrument for the Measurement of Soft Material Nonlinear Mechanical Response
Unikewicz, Brendan M.
Soft material research has seen significant growth in recent years, with emerging applications in robotics, electronics, and healthcare diagnostics where understanding material mechanical response is crucial for precision design. Traditional methods for measuring nonlinear mechanical properties of soft materials require specially sized samples that are extracted from their natural environment to be mounted on the testing instrument. This has been shown to compromise data accuracy and precision in various soft and biological materials. To overcome this, the Volume Controlled Cavity Expansion (VCCE) method was developed. This technique tests soft materials by controlling the formation rate of a liquid cavity inside the materials at the tip of an injection needle, and simultaneously measuring the resisting pressure which describes the material response. Despite VCCE’s early successes, expansion of its application beyond academia has been hindered by cost, size, and expertise. In response to this, the first portable, bench-top instrument utilizing VCCE is presented here. This device, built with affordable, readily available components and open-source software, streamlines VCCE experimentation without sacrificing performance or precision. It is especially suitable for space-limited settings and designed for use by non-experts, promoting widespread adoption. The instrument’s efficacy was demonstrated through testing Polydimethylsiloxane (PDMS) samples of varying stiffness. This study not only validates instrument performance, but also sets the stage for further advancements and broader applications in soft material testing. All data, along with acquisition, control, and post-processing scripts, are made available on GitHub.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Dynamics of Diversity, Equity, and Inclusion Practice Adoption</title>
<link href="https://hdl.handle.net/1721.1/158834" rel="alternate"/>
<author>
<name>Yadama, Aishwarya Pandey</name>
</author>
<id>https://hdl.handle.net/1721.1/158834</id>
<updated>2025-04-07T09:12:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Dynamics of Diversity, Equity, and Inclusion Practice Adoption
Yadama, Aishwarya Pandey
Despite the widespread adoption of Diversity, Equity, and Inclusion (DEI) initiatives in corporate America, significant disparities persist in the representation, compensation, and treatment of women and racial minorities. This paper investigates why well-intentioned DEI efforts often fail to achieve their intended outcomes and identifies managerial barriers to progress. This research employs a qualitative dynamic modeling approach to analyze the complexities of DEI practice implementation within organizations. I conducted a scoping review, focusing on longitudinal and experimental designs to identify key mechanisms influencing the outcomes of DEI practices. The interplay between organizational processes and individual cognitive and behavioral responses can be illustrated via reinforcing and balancing feedback loops that I map onto a causal loop diagram, which reveals how DEI initiatives interact with existing organizational processes and cultural dynamics. This paper introduces a dynamic perspective on DEI practice implementation, highlighting the feedback mechanisms that can either hinder or facilitate progress toward diversity goals. The model reveals that certain DEI practices may inadvertently trigger reinforcing loops that perpetuate inequality. By mapping DEI practices and their effects, this study provides a framework for understanding how DEI outcomes can diverge significantly depending on different implementation strategies. It underscores the importance of considering the endogenous feedback effects of DEI initiatives and offers insights into strategic interventions that can disrupt undesirable reinforcing cycles and promote progress toward organizational diversity, equity, and inclusion.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precisely Loose: Unraveling the Potential of Particles</title>
<link href="https://hdl.handle.net/1721.1/158833" rel="alternate"/>
<author>
<name>Yoon, Jeonghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158833</id>
<updated>2025-04-07T08:25:28Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Precisely Loose: Unraveling the Potential of Particles
Yoon, Jeonghyun
Random, irregular, erratic, arbitrary, unspecifiable, and unpredictable—particles. In a post-extractive future, our reliance on standardized materials, continuously sourced through the exploitation of raw resources, will no longer be sustainable. Instead, architecture will increasingly contend with materials that defy standardization. This thesis focuses on these non-normative materials—particles, encompassing construction demolition debris, manufacturing defects, naturally occurring gravels, and locally sourced mineral waste. Ubiquitous yet underutilized, these materials hold potential not only for use, but also for reuse. However, they are often dismissed as rigid and unpredictable ingredients that require precise manipulation and cumbersome processing in order to achieve predictable results. What kind of architecture could emerge if we embraced the inherent nature of these particles, not as rigid materials to be controlled, but as dynamic, fluid entities? By embracing their uncertainty as a generative design agent, how would design approaches and construction processes transform? This thesis presents a catalogue of precisely loose methods for engaging with particles. These methods offer an alternative design approach that moves beyond the obsession with refinement and control over material behavior. By pouring, pushing, reconfiguring, and containing—in lieu of identifying, cutting, placing, and stacking—this series of interactions explores the potential of plurality, investigating how loosely controlled particles can adapt to collaborative construction processes. In doing so, this thesis redefines architectural material culture rooted in rubble, offering a framework to reimagine our relationship with the irregular, the unpredictable, and the overlooked.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Hybrid Prediction in Autonomous Driving</title>
<link href="https://hdl.handle.net/1721.1/158832" rel="alternate"/>
<author>
<name>Yau, Tiffany Yee Kay</name>
</author>
<id>https://hdl.handle.net/1721.1/158832</id>
<updated>2025-04-07T09:20:29Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Multi-Agent Hybrid Prediction in Autonomous Driving
Yau, Tiffany Yee Kay
In autonomous driving, the hybrid task of predicting both high-level actions and low-level trajectories of human behaviour is fundamental to safe downstream decision-making. Many existing approaches to behaviour prediction tackle this problem without sufficiently modelling agent-agent interactions, limiting their ability to capture the full range of possible joint outcomes. Another key challenge in multi-agent prediction is the prediction space, which grows exponentially with the number of agents and the duration of the prediction horizon, making scalability a central concern. This thesis presents two approaches to address these challenges in multi-agent hybrid prediction. In our first approach, we model interactions and address scalability by learning to factor the joint prediction distribution. We observe that agents do not interact with all other agents in the scene, but rather, there are groups that strongly interact. Therefore, we group agents and represent the high-level interaction outcomes of groups with discrete variables. We additionally assume that inter-group interactions are sparse and can be sufficiently represented with a directed acyclic graph. These assumptions enable us to factor the distribution into a product of factors, effectively reducing the prediction space and providing an order in which to easily sample discrete values. We evaluate the performance of this method on a large-scale autonomous driving dataset and show that it exceeds prior methods in coverage of possible interaction outcomes by 24% to 48% on various multi-agent validation data splits, while maintaining state-of-the-art prediction error. Our second approach represents agents in a traffic scene as a set of concurrent hybrid models and assumes a collision avoidance model of interactions, rather than learning the model from data as in the first approach. Our method begins enumeration based on a simpler collision-agnostic prior distribution. 
Based on our factored representation, we determine the next best assignment to the prior. We extract bounding conflicts to correct the prior and increasingly reduce the error between the distribution used by enumeration and our collision-aware posterior distribution. Our experiments show that enumeration using A* with bounding conflicts (A*BC) is faster than A* and is therefore better at addressing scalability. In terms of prediction metrics, we find that our collision-aware posterior performs worse than the collision-agnostic prior and suggest future directions for improvement.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Environmental Regulation on Data Center Valuation</title>
<link href="https://hdl.handle.net/1721.1/158830" rel="alternate"/>
<author>
<name>Lee, Donghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158830</id>
<updated>2025-04-07T08:26:29Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Impact of Environmental Regulation on Data Center Valuation
Lee, Donghyun
Artificial intelligence has become one of the defining trends of modern society, with applications spanning virtually every industry. This societal shift has also influenced the real estate landscape. While data centers have existed for decades, it is only in recent years that they have garnered significant attention, demonstrated by their strong rent growth and compressed cap rates. Along with the attention over data centers, there has also been extensive research on how data centers impact the environment, such as "Quantifying the Sustainability Impact of Data Center Availability" by Manish Marwah et al., which presents how data center power architecture may impact the environment, and "The Environmental Footprint of Data Centers in the United States" by Md Abu Bakar Siddik, Arman Shehabi, and Landon Marston. These studies quantify the environmental impacts of data centers, focusing in particular on carbon and water footprints. However, what remains unexplored is how environmental regulations influence the valuation of data centers as a distinct real estate property type. This thesis examines how data center valuations could be impacted if existing environmental regulations were applied to regions where data centers are concentrated. The findings reveal a complex dynamic: while penalties under these regulations would reduce net operating income (NOI), potentially devaluing these assets, the same regulations would discourage new development, exacerbate the already constrained supply, and ultimately drive up market rents for these properties. As a result, these opposing forces create ambiguity regarding the net impact of such regulations on data center valuations, with the outcome depending on which force prevails. What is clear, however, is that tenants would bear the brunt of these regulations, as landlords are likely to pass on increased costs through higher rents. 
On the other hand, while addressing the environmental impacts of data centers and AI applications is critical to achieving sustainability goals, the societal benefits of AI solutions—ranging from advancements in healthcare to increased operational efficiencies—must also be considered. Balancing these competing priorities presents a unique challenge for policymakers and investors, with significant implications for the future of real estate and the digital economy.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examining the Economic Impact of Anti-Warehouse Development Policies in California: A Case Study of the San Diego Market</title>
<link href="https://hdl.handle.net/1721.1/158829" rel="alternate"/>
<author>
<name>Ghasemlou, Peggy</name>
</author>
<id>https://hdl.handle.net/1721.1/158829</id>
<updated>2025-04-07T08:58:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Examining the Economic Impact of Anti-Warehouse Development Policies in California: A Case Study of the San Diego Market
Ghasemlou, Peggy
This thesis conducts a detailed examination of the implications of anti-warehouse development policies in San Diego, focusing on their impacts on key economic indicators from 2024 to 2034. The research provides an overview of the U.S. industrial market, addressing crucial topics such as logistics market size, job creation, and the growth of e-commerce, while also exploring the NIMBY phenomenon and its influence on community opposition to developments, including a discussion of Bill 98 and its legislative implications. A specific focus on the industrial market in Southern California reveals important insights into job growth, rental rates, and market dynamics in San Diego. Through a comprehensive analytical approach, the study addresses the effects of development policies by presenting ten distinct scenarios that project delivery volumes, uncovering potential reductions ranging from 10% to 90% compared to a baseline scenario without restrictions. The analysis anticipates vacancy rates and job losses across various years, utilizing the LINEST function for forecasting key market indicators, including asking rents and asset valuations. Additionally, the research highlights the critical importance of logistics categories and decarbonization strategies to meet net-zero goals, as well as contemporary warehouse design trends and transportation innovations. The conclusions drawn from this research emphasize the complexities of balancing community interests with economic growth and sustainability in the region, as well as the broader economic implications of restrictive development policies on San Diego's warehouse industry, which could adversely affect the economic vitality of the warehouse sector.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Markers</title>
<link href="https://hdl.handle.net/1721.1/158828" rel="alternate"/>
<author>
<name>Ortiz, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/158828</id>
<updated>2025-04-08T04:44:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Dynamic Markers
Ortiz, Evan
When I was a child, I was certain that all clouds came from New Jersey. After passing through the Lincoln Tunnel, I-95 would gradually ascend, lifting our car to eye level with the billowing clouds emerging from beneath us. These clouds rose from the Meadowlands, a great marsh just two miles west of Manhattan, a landscape that has become defined by the infrastructure that occupies it. Nearly equal in land mass and opportunity to Manhattan, this landscape managed to resist holistic transformation due to our inability to control its water. Rather than becoming a prosperous site for agriculture in the 19th century, or the next metropolis in the early 20th, the Meadowlands fell out of focus and became a site to absorb the infrastructural networks needed to uphold rapid development at its edges.&#13;
&#13;
The Meadowlands was sutured shut by the networks interlaced through it in an attempt to erase the failures of the past. Utilizing this landscape as an urban sponge neglected that the marsh hosted a series of ecological infrastructures of its own. The Meadowlands' soft, uncertain ground once managed variations in the water level, but the draining of the ground that came with development reduced its capacity, making pump stations essential for managing water in inhabited areas. Unlike the other forms of infrastructure in the Meadowlands, the presence of the pump station is subdued; its invisibility upholds the illusion that the developments within this landscape are not threatened by their surroundings. However, steady sea level rise and an increase in storm surges have caused these pumps to fail, pulling back the veil on their existence and, more importantly, the essential role they play in our continued occupation of this landscape. The urgent need to increase the capacity of these pump stations provides an opportunity to reconsider their agenda.&#13;
&#13;
This thesis proposes the Dynamic Marker, a new type of infrastructure that redefines the relationship between human systems and ecological flows. Grafted onto existing pump stations in the Meadowlands, it releases water as mist from 800 feet in the air, transforming the hidden mechanics of water management into a moment of wonder. The Dynamic Marker fosters microclimates and ecological connections, transforming infrastructure into a dynamic process that evolves with its surroundings. Over time, it becomes both a memorial to the marsh and a provocation for the future, inviting a rethinking of infrastructure as a participatory and adaptive force that responds to its surrounding ecology.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Green Herrings in a Yellow Room: A Counter Production of The Yellow Wallpaper</title>
<link href="https://hdl.handle.net/1721.1/158824" rel="alternate"/>
<author>
<name>Aulgur, Leanah Sloan</name>
</author>
<id>https://hdl.handle.net/1721.1/158824</id>
<updated>2025-04-07T09:04:12Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Green Herrings in a Yellow Room: A Counter Production of The Yellow Wallpaper
Aulgur, Leanah Sloan
Charlotte Perkins Gilman’s The Yellow Wallpaper is a designer’s work of critical fabulation. Published in 1892, the short story follows an unnamed woman prescribed a “rest cure” by her husband, John. Confined to a room wrapped in gothic yellow wallpaper, the narrator becomes obsessed with its patterns. As her mind deteriorates, she sees a woman trapped behind the paper. This production reimagines Charlotte’s bedroom as not yellow, but green—a rich, vibrant green laced with the medium responsible for its provocative coloration: arsenic. The toxic pigment, invented in the late 18th century, induces bodily ailments, mental instability, and even death when used in textiles. Interiors threatened tenants with toxins as this green spread through 19th-century Europe before reaching New England and our narrator. Though known as an author and suffragette, Charlotte was first a designer. As a student in the inaugural class of the Rhode Island School of Design, she studied the arts just miles from the ports where the green pigment began its early residence. Her writing draws from arsenic publications, her scenes mimic medical case studies, and archives suggest she was aware of these toxic walls. This theatrical table reading positions the authoring of The Yellow Wallpaper within the simultaneous stories of the arsenic wallpaper. Why does the author mimic material traces of the green while redirecting her readers to the yellow? When does the color transition from literal to abstract? This work recontextualizes the foundational feminist text by unfabulating the story through design—questioning Charlotte’s literary misdirections and the public discourse surrounding the toxic color.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Marketplace Multiculturalism</title>
<link href="https://hdl.handle.net/1721.1/158823" rel="alternate"/>
<author>
<name>Chowdhary, Harris</name>
</author>
<id>https://hdl.handle.net/1721.1/158823</id>
<updated>2025-04-07T09:21:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Marketplace Multiculturalism
Chowdhary, Harris
Picture Texas. No longer simply cowboys, footballs, and firearms, this land today is sustained by a daily choreography of cross-border commerce, managed by entertainment media turned handheld surveillance, and peppered with enclaves of immigrants from the world over. A contact zone where logistical and legislative apparatuses warp to serve consumer comfort, Texas today is the world tomorrow: forget the Alamo, it’s highways, tax incentives, and backyard barbecue on the 21st century frontier. This thesis responds to a call for roadside service stations along a planned international tourist corridor in the Texas-Mexico borderlands with six interventions: a panoramic viewing tower disguised as a billboard, a sunken stadium for athletic agonism, a photovoltaic drive-in charging cinema, an international culinary incubator, a showroom for automated fulfilment, and a customs and border patrol welcome center. These structures are testing grounds for modes of relation and value exchange that edge beyond the outdated positivisms of globalization. They ask how architecture might produce new possibilities and publics by working within and taking advantage of contemporary systems of control. As tourist destinations, the stops suggest the nation’s true mythos lies not in static symbols but in choreographies of transaction and contact. Articulating in built form the dynamic processes that define a territory of sprawl, this proposal suggests that Texas’s most authentic monuments are the stops we make along the way.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved Complexity Analysis for the Proximal Bundle Algorithm Under a Novel Perspective</title>
<link href="https://hdl.handle.net/1721.1/158820" rel="alternate"/>
<author>
<name>Fersztand, David</name>
</author>
<id>https://hdl.handle.net/1721.1/158820</id>
<updated>2025-04-07T08:35:40Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Improved Complexity Analysis for the Proximal Bundle Algorithm Under a Novel Perspective
Fersztand, David
The proximal bundle algorithm (PBA) is a fundamental and computationally effective algorithm for solving optimization problems with non-smooth components. We investigate its convergence rate in two settings. We first focus on a composite setting where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the Moreau envelope of the dual problem. In light of this correspondence, we first extend the linear convergence of Kelley's method on convex piecewise linear functions from the positive homogeneous to the general case. Building on this result, we propose a novel complexity analysis of PBA and derive an O(epsilon^-4/5) iteration complexity, improving upon the best known O(epsilon^-2) guarantee. This approach also unveils new insights on bundle management. We then present the first variant of the PBA for smooth objectives, achieving an accelerated convergence rate of O(epsilon^-1/2 log(epsilon^-1)), where epsilon is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of the PBA, which was previously posed in two recent papers. We interpret the PBA as a proximal point algorithm and base our proposed algorithm on an accelerated inexact proximal point scheme. Our variant introduces a novel null step test and oracle while maintaining the core structure of the original algorithm. The newly proposed oracle substitutes the traditional cutting planes with a smooth lower approximation of the true function. We show that this smooth interpolating lower model can be computed as a convex quadratic program. We finally show that Nesterov acceleration can be effectively applied when the objective is the sum of a smooth function and a piecewise linear one.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Introducing Technical Design Elements in Makerspace&#13;
Trainings</title>
<link href="https://hdl.handle.net/1721.1/158817" rel="alternate"/>
<author>
<name>Barakat, Layal A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158817</id>
<updated>2025-04-07T08:40:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Impact of Introducing Technical Design Elements in Makerspace&#13;
Trainings
Barakat, Layal A.
Makerspaces are used as a tool in higher education to support curricular, hands-on projects and encourage student extracurricular and personal projects. Because access to making is more self-driven, there is a gap between what makerspace trainings teach students and what students are expected to know by the time they reach capstone courses in engineering. To test the effects of introducing a technical makerspace training to students, several steps were taken. First, known barriers to making were explored and organized into categories. Second, Design Expertise was defined as a means to combat these barriers: it is a combination of (1) knowledge, (2) skill, (3) perspective, and (4) motivation. Third, a rigorous framework, the Design-Fabrication-Performance (DFP) matrix, was created to break down design expertise into manageable chunks. Next, existing makerspace trainings at MIT were characterized using the DFP matrix. Afterwards, the DFP matrix was used to design a new, experimental training which would incorporate engineering design thinking and expertise with the typical makerspace machine training structure. Finally, 23 student participants were recruited, surveyed using a Likert scale (1 = strongly disagree, 5 = strongly agree), and interviewed to understand the impact of the training on participant perspectives, engineering identity, and maker motivation. Initial results suggest that student self-efficacy increases as a result of the training. This outcome is shown by the highest average differential of all survey responses (M = 0.78, SD = 0.85) for question 15: “I am confident in my ability to use GIR level knowledge to design and make things that perform as intended”. The maker training reinforced the motivation to make things for a majority of students, with the average score for the associated question being 4.48 (SD = 0.85). The training also positively impacted some traditionally marginalized groups in STEM. 
For the statement "I feel comfortable in engineering at MIT", women averaged 3.27 and men 3.90 before the training. The average differentials between the pre- and post-training scores on this question for these groups were 0.4 and 0.91 respectively. The training also appears to level the playing field for students with less advanced backgrounds in engineering and science. For the question “I am confident in my ability to solve GIR level problems on my own”, students with parents with graduate degrees or higher averaged 4.44 before the training, while those with parents with undergraduate degrees or lower averaged 3.57. The average differentials are 0.22 and 0.64 respectively. Although students saw the value in modeling systems before design and fabrication, several questions demonstrated that students found modeling to be tedious and preferred to test and iterate on their designs in the makerspace; further work is needed to eliminate barriers to sustain student interest and participation in the long term. A longitudinal study following these students would also be needed to reveal long term outcomes such as STEM retention and long-term makerspace usage.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication and Characterization of Horizontally Aligned&#13;
Carbon Nanotube Thermoplastic Bulk Nanocomposite&#13;
Laminates</title>
<link href="https://hdl.handle.net/1721.1/158815" rel="alternate"/>
<author>
<name>Lin, Yuying</name>
</author>
<id>https://hdl.handle.net/1721.1/158815</id>
<updated>2025-04-07T09:14:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Fabrication and Characterization of Horizontally Aligned&#13;
Carbon Nanotube Thermoplastic Bulk Nanocomposite&#13;
Laminates
Lin, Yuying
Carbon nanotubes (CNTs) have advantageous mass-specific mechanical properties and excellent thermal and electrical conductivity, making them an attractive reinforcement for composite systems. Due to an increasing need for more sustainable materials, incorporation of CNTs into thermoplastic matrices presents a promising solution for recyclable and repairable polymer nanocomposites (PNCs). This thesis presents an approach to fabricating and characterizing thermoplastic PNCs that incorporate ultra-high volume fractions of horizontally-aligned carbon nanotubes (HA-CNTs). An MIT-developed bulk nanocomposite laminating (BNL) process was adapted to fabricate multi-ply, unidirectional composites with poly(methyl methacrylate) (PMMA) and acrylonitrile butadiene styrene (ABS) matrices. For the HA-CNT/PMMA system, the BNL process was tailored to fabricate 4-ply and 8-ply laminates with fiber volume fraction v_f &gt; 45 vol.%, using a 9 wt.% PMMA in anisole solution. Through characterization via X-ray microcomputed tomography (µCT), scanning electron microscopy (SEM), thermogravimetric analysis (TGA), Fourier transform infrared (FTIR) spectroscopy, and polarized Raman spectroscopy, HA-CNT/PMMA laminates were shown to be free of micro-scale voids with weak or non-existent process-structure interactions, i.e., the CNTs had negligible effect on the polymer structure. TGA and FTIR helped demonstrate that the BNL process did not lead to decomposition or chemical changes to neat PMMA, and FTIR also revealed that the fabrication process did not induce covalent bonding between CNTs and PMMA. The crystalline behavior of PMMA was studied via differential scanning calorimetry (DSC) as well as X-ray diffraction (XRD), which demonstrated that BNL processing temporarily lowers neat PMMA glass transition temperature T_g by 4 °C with no permanent change after removal of thermal history. 
However, CNT inclusion leads to higher laminate T_g by 11 °C as shown through both DSC and dynamic mechanical analysis (DMA), which can be explained by CNT constraints on polymer chain movement as opposed to any crystallinity changes in the PMMA. Storage modulus of 8-ply HA-CNT/PMMA laminates was shown to be more than 600% of neat PMMA via DMA, while a decrease in tan(δ) of the laminate compared to neat PMMA indicates an increase in elastic behavior due to CNT inclusion. 4-ply laminates were subjected to a minimum radius of curvature test showing a ∼ 50% increase in yield strain compared to neat PMMA. Electrical properties of 4-ply HA-CNT/PMMA laminates were measured via 4-point probe testing, which demonstrated good Ohmic contact between CNTs, with conductivity of ∼ 2 × 10⁴ S m⁻¹ and anisotropy ratio of 1.2. A preliminary investigation was completed to evaluate the feasibility of using the BNL process for the HA-CNT/ABS system. Uniform suspensions of ABS in anisole were developed to use the BNL polymer infiltration method of spin-coating and vacuum-assisted infusion. It was shown that the nature of the ABS suspension led to uneven polymer distribution over the HA-CNTs. This work has demonstrated the successful incorporation of high volume fractions of aligned CNTs into PMMA thermoplastic matrices as well as the electrical conductivity of such composites, opening an avenue to the development of other high v_f thermoplastic PNCs and exploration into additional multifunctional capabilities.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remembering Energic Connectivities: Appropriate Technology and Domestic Infrastructure in the Energy Crisis</title>
<link href="https://hdl.handle.net/1721.1/158811" rel="alternate"/>
<author>
<name>Adornetto, Turner Day</name>
</author>
<id>https://hdl.handle.net/1721.1/158811</id>
<updated>2025-04-07T09:09:32Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Remembering Energic Connectivities: Appropriate Technology and Domestic Infrastructure in the Energy Crisis
Adornetto, Turner Day
The electric grid is a large, complex machine. And yet, it represents but one narrow framework for energic relations. Visions for just and sustainable futures – for social and ecological repair – should wander further afield. One place they could go is home. In this essay, the Appropriate Technology Small Grants Program, an oft-forgotten chapter of U.S. energy history, shows us how small-scale, place-based inventors transformed homes and neighborhoods into converters and conductors of nearby flows and potentials. At the height of the energy crisis of the 1970s, these inventors pursued a distributed solution to shortage. Along the way, they re-wired the material and conceptual strictures of the modern dwelling and broke into a vast reserve of low-cost, renewable power. Home, they showed, was a workshop to understand and design energic connectivities. But tracing the effects of home-based appropriate technology leads us somewhere else – to the frontiers of energy extraction, where social justice activists proved that small-scale, place-based energy systems could replace unjust mines and dams. What emerged, then, through renewed attention to the possibilities for home and energy, was a powerful counter to the logics of sacrifice at both ends of the energy continuum. Today, as we chart our own response to crisis, it helps to remember how others tried to create solidarities and resist tradeoffs with small-scale, place-based infrastructures. We can, I think, do more with energy.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mining Multifaceted Customer Opinions from Online Reviews</title>
<link href="https://hdl.handle.net/1721.1/158810" rel="alternate"/>
<author>
<name>Mao, Chengfeng</name>
</author>
<id>https://hdl.handle.net/1721.1/158810</id>
<updated>2025-04-07T08:44:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Mining Multifaceted Customer Opinions from Online Reviews
Mao, Chengfeng
Online reviews are a valuable source for studying customer needs and preferences. Previous studies focus on extracting a set of a priori defined constructs such as product attribute perception or explicit customer needs from reviews. Such a priori focus circumvents the limitations of certain natural language processing algorithms but discards valuable information in reviews that is not in the scope of the predefined constructs. This study proposes a new method of extracting customer opinions and opinion targets from reviews with the Aspect Sentiment Triplet Extraction (ASTE) algorithm and then identifying theoretical constructs critical for product development with an a posteriori interpretation method. We demonstrate the value of our proposed method by identifying granular opinion targets and expressions to find infrequent but important phenomena such as user innovations and delights.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geographies of Selective Surveillance: Analyzing the Lived Experiences of Street-Level Trans Sex Workers and Muslims in India through the Matrix of Domination</title>
<link href="https://hdl.handle.net/1721.1/158809" rel="alternate"/>
<author>
<name>Radhakrishnan, Radhika</name>
</author>
<id>https://hdl.handle.net/1721.1/158809</id>
<updated>2025-04-07T09:17:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Geographies of Selective Surveillance: Analyzing the Lived Experiences of Street-Level Trans Sex Workers and Muslims in India through the Matrix of Domination
Radhakrishnan, Radhika
In this paper, I present a study of public and private CCTV surveillance of urban public spaces in India, which I term ‘geographies of selective surveillance’ — areas where state power is discretionarily exercised and abused, and the presence of the state is experienced principally through police pickets and everyday violence unleashed on marginal occupants, rather than by access to civic amenities and systems of justice. I analyze these experiences of surveillance from the standpoint (Harding, 1992) of minoritized communities of street-level trans sex workers in Kolkata and Muslims in Mumbai. I then situate these experiences within the Matrix of Domination (Collins, 1990), a theoretical framework that explains how systems of power are configured. Defining empowerment as the power to gain control of and/or benefit from a scenario by weakening the Matrix of Domination, I analyze the structural determinants that make surveillance empowering or disempowering for these communities. I find that on the one hand, surveillance can be an empowering tool for minoritized communities as evidence of harm and innocence in cases of false accusations or when police officials typically refuse to believe their experiences due to discriminatory attitudes. On the other hand, surveillance also offers new opportunities for the private exploitation of the instruments of state power through corruption as well as community-based moral policing to be done with greater success and efficiency. I argue that what ultimately determines how surveillance is experienced is not laws and policies, but rather how power is discretionarily exercised on the ground, refracted through the influence of cultural and political beliefs, and discourse.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commodifying and Consuming Endocrine Drugs in Republican China (1920s–1940s)</title>
<link href="https://hdl.handle.net/1721.1/158808" rel="alternate"/>
<author>
<name>Wang, Thelma Yuanzhi</name>
</author>
<id>https://hdl.handle.net/1721.1/158808</id>
<updated>2025-04-07T08:58:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Commodifying and Consuming Endocrine Drugs in Republican China (1920s–1940s)
Wang, Thelma Yuanzhi
After hormone pharmaceuticals were introduced into China in the early twentieth century, these substances became objects of fascination for a growing urban elite class. Drawing on newspapers, medical journals, and advertisements, this article examines the unique trajectories of hormone medicine in China. In conversation with previous scholarship on the dynamics of advertising and consuming hormones in China, this article examines specifically the discourses around the production and science of hormones. The circulation of hormones was informed by ideas from traditional Chinese medical cosmologies and enrolled in a nationalist movement encouraging the consumption of hormones produced by emerging Chinese medical entrepreneurs. This article provides a case study in a postcolonial context that problematizes historiographies depicting a linear transition of global hormone science from backward to scientific, from traditional to modern.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling AI Copilots for Engineering Design With Parametric, Graph, And Component Inputs</title>
<link href="https://hdl.handle.net/1721.1/158805" rel="alternate"/>
<author>
<name>Zhou, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/158805</id>
<updated>2025-04-07T08:25:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enabling AI Copilots for Engineering Design With Parametric, Graph, And Component Inputs
Zhou, Rui
Engineering design demands the synthesis of multimodal and often incomplete data, ranging from detailed parametric specifications and assembly graphs to visual references and textual descriptions. Despite growing interest in generative models for design ideation and exploration, state-of-the-art approaches struggle with incomplete inputs, lack of support for modalities other than text and image, and limited controllability. This thesis addresses these gaps by unifying two complementary advances:&#13;
&#13;
First, we introduce a graph-guided diffusion approach for parametric data completion. By coupling Graph Attention Networks with a diffusion-based imputation mechanism, our method acts as a highly accurate and creative auto-completion system for incomplete designs. On a dataset of 12,500 bicycles, this design imputation framework achieves a root mean square error (RMSE) of approximately 0.92 on numerical features and an error rate of around 0.18 for categorical attributes, outperforming both classical imputation methods such as MissForest, hotDeck, and PPCA and advanced diffusion-based baselines such as TabCSDI. Moreover, it achieves a Diversity Score of 3.10, surpassing all baselines and illustrating that the imputation process transforms incomplete data into multiple creative designs.&#13;
&#13;
Second, we develop a multimodal control architecture that extends foundation models to condition their generation processes on all or a subset of parametric inputs, assembly graphs, component images, and textual constraints. This model substantially enhances both the controllability and precision of the generation process of foundation models, enabling control through modalities that were not previously possible. We first show that our model excels at tasks that state-of-the-art models struggle with. We further validate the performance of our model with surrogate models that investigate individual features. Our model achieves R^2 scores of 95% or greater on different continuous parameters. Further, we show that our model is able to generate creative and novel designs while maintaining a high level of precision. This enables engineers to guide generative outputs toward precise dimensional, aesthetic, and functional targets. Across numerous trials in different settings, we observe that our pipeline robustly fuses tabular parametric information, assembly graphs, and reference component images to produce results aligned with both specification precision and creativity. &#13;
&#13;
Together, these contributions establish a coherent framework for AI-augmented design exploration. By viewing missing parameters as an opportunity for data-driven design autocompletion and by tightly integrating multimodal control over foundation models, this work elevates generative AI from a niche conceptual tool to a reliable design copilot. The implications of this thesis are profound: we show the possibilities and the pathways to AI copilot systems that can reduce data bottlenecks, broaden design spaces, and offer more thorough, constraint-adherent design candidates. As engineering problems grow in complexity and scale, the synergy of high-fidelity parametric imputation and multimodal control promises to accelerate innovation, cut development cycles, and guide human designers toward more inventive and manufacturable solutions.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from Past Market Outcomes: Evidence from the Music Industry</title>
<link href="https://hdl.handle.net/1721.1/158804" rel="alternate"/>
<author>
<name>Du, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/158804</id>
<updated>2025-04-08T04:42:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Learning from Past Market Outcomes: Evidence from the Music Industry
Du, Jason
We leverage unique features of music albums to investigate how musicians learn from current products when developing new products. We find that songs on a musician’s next album tend to be more similar to the songs that are more successful on that musician’s current album. This effect is stronger when the musician has less experience, and when the song on the current album is more novel (for that musician). Our findings suggest that musicians learn from the success of previous songs when developing new songs, and that learning is stronger if the musician has more need to learn, and when the song contains more new information.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shut Up and Dribble? Exploring the Real Estate Strategies and Trends of NBA Teams</title>
<link href="https://hdl.handle.net/1721.1/158797" rel="alternate"/>
<author>
<name>Nguyen, Viet</name>
</author>
<id>https://hdl.handle.net/1721.1/158797</id>
<updated>2025-04-07T09:06:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Shut Up and Dribble? Exploring the Real Estate Strategies and Trends of NBA Teams
Nguyen, Viet
NBA teams have always had to think about real estate through one particular lens: the arena they play their 41 home games in (plus any subsequent playoff games). But NBA teams have now evolved beyond thinking only about the arena, becoming increasingly involved in real estate development. This thesis seeks to explore the impact of real estate as a revenue driver for NBA teams, the trends observed, and the strategic decisions that teams must consider. This thesis will explore the current real estate activities of all 30 NBA teams and will examine the choices that teams must make regarding arenas, real estate development, and practice facilities. The findings will help teams and municipalities understand best practices for team-driven real estate, and how strategies can vary from team to team based on each team's situation.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating RAD Conversions: Suggestions for Public Housing Rehabilitation</title>
<link href="https://hdl.handle.net/1721.1/158788" rel="alternate"/>
<author>
<name>Yan, Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/158788</id>
<updated>2025-04-08T04:38:17Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Navigating RAD Conversions: Suggestions for Public Housing Rehabilitation
Yan, Yu
Public housing in the United States, a critical resource for nearly 1.7 million residents, faces significant challenges due to aging infrastructure and chronic operating funding shortfalls. The Rental Assistance Demonstration (RAD) program, authorized by Congress in 2012, aims to address these issues by leveraging private financing to rehabilitate and modernize public housing properties. Although the RAD program has existed for more than a decade and has leveraged over $18.5 billion in construction investment, close to 75% of the more than 2,500 eligible local public housing authorities (PHAs) have yet to benefit from it. This thesis examines the evolution of RAD programs, including the two newer tools, the RAD/Section 18 Blend and Faircloth-to-RAD, and their adoption by PHAs.&#13;
The research incorporates a review of HUD programs and policies, RAD implementation data, and interviews with industry practitioners, including PHAs, developers, and consultants, to understand the hurdles preventing adoption of the program and the characteristics of successfully structured projects. This thesis offers insights into how specific strategies are used to overcome those hurdles and provides practical recommendations for PHAs seeking to leverage RAD for public housing preservation and development. Key findings highlight the importance of utilizing available funding sources to achieve financial feasibility and of enhancing organizational skills and capacity.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Systems Architecture and the EVDT Framework&#13;
for Monitoring Methane Emissions in Rio de Janeiro</title>
<link href="https://hdl.handle.net/1721.1/158786" rel="alternate"/>
<author>
<name>Ajisafe Jr., Frederick Henry Oladimeji</name>
</author>
<id>https://hdl.handle.net/1721.1/158786</id>
<updated>2025-04-08T04:31:52Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Using Systems Architecture and the EVDT Framework&#13;
for Monitoring Methane Emissions in Rio de Janeiro
Ajisafe Jr., Frederick Henry Oladimeji
Methane is a powerful greenhouse gas with important implications for climate change. Over the past decade, satellites have rapidly improved their ability to detect this gas from above the atmosphere. This thesis uses two systems engineering frameworks, Systems Architecture and EVDT, to examine a case study of methane monitoring in Rio de Janeiro, Brazil. Data from one of these novel satellite systems, GHGSat, are collected over the Seropédica landfill near the city and compared to Rio’s own IPCC- and GPC-derived greenhouse gas inventory. This is followed by participant observation in the summer of 2024 involving interviews, discussions, and site visits. A near-doubling of methane was observed over Seropédica, raising questions about the cause of this increase. The direct engagement with stakeholders provided by this study addresses a literature gap in satellite monitoring of urban landfills in southeastern Brazil.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Sparse Representations for Efficient Planning&#13;
in Uncertain Environments</title>
<link href="https://hdl.handle.net/1721.1/158518" rel="alternate"/>
<author>
<name>Veys, Yasmin</name>
</author>
<id>https://hdl.handle.net/1721.1/158518</id>
<updated>2025-04-07T09:19:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Designing Sparse Representations for Efficient Planning&#13;
in Uncertain Environments
Veys, Yasmin
We would like to enable robots to navigate efficiently in large, outdoor environments, where the traversabilities of many regions are unknown prior to planning. If we reason about the uncertainty in the environment instead of assuming that all unknown space is free to move through, we can generate policies that result in, on average, more efficient navigation. However, designing models that enable intelligent and efficient reasoning about environmental uncertainty is challenging. We would like our model to capture the underlying navigation problem and accurately represent the relevant uncertainty, yet remain as sparse as possible, so that planning remains tractable. Higher model expressiveness improves plan quality but reduces computational efficiency in planning, whereas higher model sparsity improves efficiency at the cost of plan quality. Balancing model expressiveness and model sparsity, thus, is crucial for generating high quality plans efficiently. In this thesis, we describe several useful models for planning under uncertainty and justify our decision to use weighted stochastic graphs with probabilistically traversable edges. We then present a novel method of efficiently generating sparse stochastic graphs given coarse information derived from overhead images of our environments. We test our approach in several simulated environments, demonstrating that our graphs effectively trade off between plan quality and planning efficiency for uncertainty-aware agents navigating in the graph. We then deploy our algorithms in a real-world environment on real-world hardware for single-agent and multi-agent teams. We discuss the challenges associated with using our approach in the field and the implications of our model assumptions not matching the real world. Finally, we present preliminary results for adding cost uncertainty to our graph-based representation.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Power Efficient Analog Front End for Continuous&#13;
Ultrasound Imaging of the Bladder</title>
<link href="https://hdl.handle.net/1721.1/158517" rel="alternate"/>
<author>
<name>Manohara, Mohith</name>
</author>
<id>https://hdl.handle.net/1721.1/158517</id>
<updated>2025-04-07T08:30:15Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Power Efficient Analog Front End for Continuous&#13;
Ultrasound Imaging of the Bladder
Manohara, Mohith
Continuous bladder monitoring is important for the care of bedridden patients. One method is to capture ultrasound images and use machine learning to measure the bladder volume from those images. Circuits for implementing these functions can be integrated onto a wearable device, and each of these functions can be integrated onto a single chip. In this thesis, we analyze ultrasound imaging in the context of the bladder to develop algorithms and hardware for continuous bladder monitoring. We first assemble a discrete setup that can form ultrasound images. Using this setup, we describe a new algorithm for generating an ultrasound image that power-gates the hardware during the imaging process to save additional power when capturing the image. We combine these concepts into a single Analog Front End (AFE) chip that can capture images in a power-efficient manner.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum free games</title>
<link href="https://hdl.handle.net/1721.1/158516" rel="alternate"/>
<author>
<name>Zhang, Tina</name>
</author>
<id>https://hdl.handle.net/1721.1/158516</id>
<updated>2025-04-07T08:30:02Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Quantum free games
Zhang, Tina
The complexity of free games with two or more classical players was essentially settled by Aaronson, Impagliazzo, and Moshkovitz [AIM14]. In the quantum world, there are two complexity classes that can be considered quantum analogues of classical free games: (1) AM*, the multiprover interactive proof class corresponding to free games with entangled players, and, somewhat less obviously, (2) BellQMA(2), the class of quantum Merlin-Arthur proof systems with two unentangled Merlins, whose proof states are separately measured by Arthur. In this work, we make significant progress towards a tight characterization of both of these classes. &#13;
1. We show a BellQMA(2) protocol for 3SAT on n variables, where the total amount of communication is Õ(√n). This answers an open question of Chen and Drucker [CD10] and also shows, conditional on ETH, that the algorithm of Brandão, Christandl and Yard [BCY10] for optimizing over separable states is tight up to logarithmic factors. &#13;
2. We show that AM*[provers = 2, q = O(1), a = polylog(n)] = RE, i.e. that free entangled games with constant-sized questions are as powerful as general entangled games. (In contrast, [AIM14] shows that classical free games are much weaker than general classical games.) We show this using a question “hyper-compression” theorem that iteratively applies the introspection technique of Ji et al. [JNV⁺20]. Our result is a significant improvement over the headline result of Ji et al., whose MIP* protocol for the halting problem has poly(n)-sized questions and answers. &#13;
3. By the same techniques, we obtain a zero-gap AM* protocol for a Π₂⁰-complete language with constant-size questions and almost logarithmically (O(log n · log* n)) large answers, improving on the headline result of Mousavi, Nezhadi and Yuen [MNY21]. &#13;
4. Using a connection to the nonuniform complexity of the halting problem, we show that any MIP* protocol for RE requires Ω(log n) bits of communication. It follows that our results in item 3 are optimal up to an O(log* n) factor, and that the gapless compression theorems of [MNY21] are asymptotically optimal. We conjecture that these bounds can be saturated in the gapped case as well.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Navigation of Unknown Environments with Distant Visual Cues</title>
<link href="https://hdl.handle.net/1721.1/158514" rel="alternate"/>
<author>
<name>Fahnestock, Ethan Kendall</name>
</author>
<id>https://hdl.handle.net/1721.1/158514</id>
<updated>2025-04-07T09:13:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Guiding Navigation of Unknown Environments with Distant Visual Cues
Fahnestock, Ethan Kendall
While navigating unknown environments, robots rely primarily on proximate features to guide decision making, such as depth information from lidar or stereo used to build a costmap, or local semantic information from images. The limited range over which these features can be used can result in poor robot behavior when assumptions made by motion planning about the cost of the map beyond the range of proximate features misguide the robot. Integrating “far-field” image features that originate beyond these proximate features into the mapping pipeline promises to enable more intelligent and aware navigation through unknown terrain. To navigate with far-field features, key challenges must be overcome. Because far-field features are typically too distant to localize precisely, they are difficult to place in a map. Additionally, the large distance between the robot and these features makes connecting them to their navigation implications more challenging. In this thesis we propose FITAM, an approach that learns from previous experience, in a self-supervised manner, to use far-field features to predict navigation costs for guiding navigation through unknown environments. Unlike previous work, our approach does not rely on flat-ground-plane assumptions or range sensors to localize observations. We demonstrate the benefits of our approach through simulated trials and real-world deployment on a Clearpath Robotics Warthog navigating through a forest environment.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Visible-Light Liquid-Crystal-Based Modulators and&#13;
Grating-Based Antennas</title>
<link href="https://hdl.handle.net/1721.1/158513" rel="alternate"/>
<author>
<name>Garcia Coleto, Andres</name>
</author>
<id>https://hdl.handle.net/1721.1/158513</id>
<updated>2025-04-07T08:27:26Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Integrated Visible-Light Liquid-Crystal-Based Modulators and&#13;
Grating-Based Antennas
Garcia Coleto, Andres
Current developments in integrated visible-light photonics have led to advancements in applications such as augmented-reality displays and quantum systems. However, the development of crucial integrated-photonics devices such as integrated grating-based antennas and integrated optical modulators has predominantly focused on the infrared spectrum, leaving a gap in visible-light technologies. This thesis addresses this gap by designing and experimentally demonstrating integrated visible-light liquid-crystal-based (LC-based) modulators and grating-based antennas. First, we provide a thorough design guide for integrated visible-light grating-based antennas and experimentally demonstrate five antennas with varying advanced capabilities, including the first visible-light unidirectionally-emitting grating-based antennas for integrated optical phased arrays (OPAs), facilitating the use of integrated OPAs for new visible-light applications. Second, we discuss the fabrication processes, considerations, and evaluation techniques for successful packaging of integrated LC modulators, supporting the broader integration of LCs into silicon-photonics platforms and enabling more compact and efficient on-chip modulation. Third, we experimentally demonstrate the first integrated visible-light LC-based variable-tap amplitude modulators, enabling a compact and low-power solution to integrated visible-light amplitude modulation for high-density integrated visible-light systems. Fourth, we experimentally demonstrate the first 300-mm wafer-scale platform and fabrication process that results in mechanically-flexible photonic wafers and chips, enabling the field of integrated photonics to advance into new application areas that require flexible photonic chips.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contact Free Monitoring of Cell Density in a Bioreactor with Magnetic Resonance Relaxometry</title>
<link href="https://hdl.handle.net/1721.1/158512" rel="alternate"/>
<author>
<name>Gaensbauer, Hans</name>
</author>
<id>https://hdl.handle.net/1721.1/158512</id>
<updated>2025-04-07T08:24:50Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Contact Free Monitoring of Cell Density in a Bioreactor with Magnetic Resonance Relaxometry
Gaensbauer, Hans
Frequent, low-latency measurements of bioreactor culture growth are critical for achieving maximum culture efficiency and productivity. Typical cell density and viability measurements are made by removing a sample from the culture, but this approach is both slow and unsuitable for small culture volumes that cannot support frequent destructive sampling. In this work, magnetic resonance relaxometry measurements taken through the walls of the bioreactor tubing are used to monitor the cell density in near real-time. Using intracellular iron as the marker, the system detects variations in cell density in minutes, enabling rapid intervention to save the culture that would be impossible with the once-daily measurements taken by a traditional sampling-based culture analysis system. Given the biochemical importance of intracellular iron, these measurements have the potential to provide phenotypic information on cells without disrupting the bioreactor culture.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational and Statistical Detection of High-Dimensional Latent Space Structure in Random Networks</title>
<link href="https://hdl.handle.net/1721.1/158510" rel="alternate"/>
<author>
<name>Bangachev, Kiril</name>
</author>
<id>https://hdl.handle.net/1721.1/158510</id>
<updated>2025-04-07T08:44:36Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Computational and Statistical Detection of High-Dimensional Latent Space Structure in Random Networks
Bangachev, Kiril
A probabilistic latent space graph PLSG(n, Ω, D, σ) is parametrized by its number of vertices n, a probability distribution D over some latent space Ω, and a connection function σ : Ω × Ω → [0, 1] such that σ(x, y) = σ(y, x) almost surely with respect to D. To sample from PLSG(n, Ω, D, σ), first, for each node i ∈ {1, 2, . . . , n}, an independent latent (feature) vector xᵢ is drawn from Ω according to D. Then, for each pair of vertices i and j, an edge is drawn independently with probability σ(xᵢ, xⱼ). Interest in settings of high-dimensional latent spaces Ω has surged in recent years due to the rise of high-dimensional data and powerful compute.&#13;
&#13;
The features x₁, x₂, . . . , xₙ are oftentimes hidden due to privacy considerations or absence of measurement. This gives rise to many challenging statistical tasks. A prerequisite for nearly any more sophisticated inference and estimation task is the following simple hypothesis testing question. When can we even test for the presence of high-dimensional latent space structure? When is there a computationally efficient test and what could this computationally efficient test be? We address the following aspects of these questions in the thesis.&#13;
&#13;
Chapter 2: We focus on the canonical geometric setting in which latent vectors are distributed uniformly over the sphere and σ(x, y) = 1{⟨x, y⟩ ≥ τₚ}, where τₚ is such that the expected graph density is p. A conjecture that has witnessed continuous interest and progress in the past 15 years is that the information-theoretically optimal test for detecting the spherical random geometric graph is the signed triangle count. We contribute to the existing literature by confirming that the signed triangle count is computationally optimal among low-degree polynomial tests. Our main technical ingredient is a strategy for bounding Fourier coefficients of random geometric graphs based on a representation of spherical random geometric graphs as Erdős-Rényi graphs with few planted edges. This part of the thesis is based on [BB24b].&#13;
&#13;
Chapter 3: The conjectured optimality of the signed triangle count and the relevance of triangle-based statistics to the axiomatic triangle inequality of metric spaces have led to the conventional wisdom that triangle-based statistics are optimal for monotone random geometric graphs. We break this intuition by showing that in the case of a sup-norm geometry over the torus, the signed 4-cycle count is strictly stronger than the signed triangle count and is, furthermore, optimal among low-degree tests. Our main technical contribution is a novel strategy for bounding Fourier coefficients of random geometric graphs mimicking the cluster-expansion formula from statistical physics. This part of the thesis is based on [BB24a].&#13;
&#13;
Chapter 4: While random geometric graphs over the sphere with Euclidean geometry and the torus with sup-norm geometry are interesting mathematically, they are perhaps too simplistic to describe real-world networks. Hence, one should ask to what extent the results and techniques used for these models generalize to other probabilistic latent space graphs. We introduce a new family of probabilistic latent space graphs which we call random algebraic graphs. In random algebraic graphs, Ω is an algebraic group and σ is compatible with the group structure. This family captures the aforementioned random geometric graphs as well as instances of the stochastic block model and random subgraphs of Cayley graphs. We have two sets of results. First, we develop a general criterion, based solely on the magnitudes of the Fourier coefficients of σ, for the statistical hardness of detecting a random algebraic graph when the underlying group is the Boolean hypercube. We use this result to provide a uniform approach to many previously known results in the literature, but also highlight that certain structural properties of the connection function, such as non-trivial symmetries and non-monotonicity, yield novel behavior. Second, we exhibit a universal behavior for the impossibility of detecting a random algebraic graph based solely on the group size but not on the group structure. The result can be equivalently phrased in terms of the local structure of typical Cayley graphs. This part of the thesis is based on [BB23].
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factorization and Compositional Generalization in Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/158507" rel="alternate"/>
<author>
<name>Liang, Qiyao</name>
</author>
<id>https://hdl.handle.net/1721.1/158507</id>
<updated>2025-04-08T04:30:30Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Factorization and Compositional Generalization in Diffusion Models
Liang, Qiyao
One of the defining features of human intelligence is compositionality—the ability to generate an infinite array of complex ideas from a limited set of components. This capacity allows for the creation of novel and intricate combinations of arbitrary concepts, enabling potentially infinite expressive power from finite learning experiences. A likely prerequisite for the emergence of compositionality is the development of factorized representations of distinct features of variation in the world. However, the precise mechanisms behind the formation of these factorized representations in the human brain, and their connection to compositionality, remain unclear. Diffusion models are capable of generating photorealistic images that combine elements not co-occurring in the training set, demonstrating their ability to compositionally generalize. Yet, the underlying mechanisms of such compositionality and its acquisition through learning are still not well understood. Additionally, the relationship between forming factorized representations of distinct features and a model’s capacity for compositional generalization is not fully elucidated. In this thesis, we explore a simplified setting to investigate whether diffusion models can learn semantically meaningful and fully factorized representations of composable features. We conduct extensive controlled experiments on conditional diffusion models trained to generate various forms of 2D Gaussian data. Through preliminary investigations, we identify three distinct learning phases in the model, revealing that while overall learning rates depend on dataset density, the rates for independent generative factors do not. Moreover, our findings show that models can represent continuous features of variation with semi-continuous, factorized manifolds, resulting in superior compositionality but limited interpolation over unseen values. 
Based on our investigations, we propose a more data-efficient training scheme for diffusion models and suggest potential future architectures for more robust and efficient generative models.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication and Testing of A Middle-Ear Implanted Microphone</title>
<link href="https://hdl.handle.net/1721.1/158504" rel="alternate"/>
<author>
<name>Wawrzynek, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/158504</id>
<updated>2025-04-08T04:19:30Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Fabrication and Testing of A Middle-Ear Implanted Microphone
Wawrzynek, Emma
Cochlear implants are devices that can restore hearing to people with sensorineural deafness. Despite their name, cochlear implants rely on an external unit that contains components such as a microphone. This work presents the design, fabrication, and testing of an implantable middle-ear microphone called the “UmboMic” that measures the displacement of the tympanic membrane at the umbo. Particular consideration is paid to the biocompatibility of the microphone and its long-term durability in the body. The work discusses biocompatible materials, methods of encapsulation, and techniques for testing device robustness. &#13;
&#13;
The UmboMic is a piezoelectric displacement sensor that is implanted in the middle ear cavity and contacts the umbo. As the umbo moves, it displaces the UmboMic, resulting in a charge that is amplified with a custom amplifier. The active area of the UmboMic is a triangular shaped cantilever made from two layers of piezoelectric thin film called polyvinylidene fluoride (PVDF). The bimorph design reduces common mode noise as compared to our previous microphone designs. &#13;
&#13;
Extensive bench testing and experiments in fresh human cadavers demonstrate excellent microphone performance despite the use of biocompatible materials. The UmboMic sensor is well shielded against electromagnetic interference, tolerant to implantation variations, and can be fabricated repeatably with little variation in performance between sensors. It demonstrates high sensitivity from 100 Hz to above 8 kHz, with a sensitivity of 58 fC/Pa at 1 kHz and 230 fC/Pa at 2 kHz when including the outer ear. The noise floor of the UmboMic normalized over 1/3 octave bins is 10⁻² fC, and the A-weighted equivalent input noise of the UmboMic with the outer ear is 82.4 dB SPL from 100 Hz to 7 kHz. When tested in five different human cadavers, the UmboMic sensors work reliably despite anatomical differences. &#13;
&#13;
Internalizing the entire cochlear implant would greatly improve the quality of life of wearers. In their current form, cochlear implants cannot be used during sleep or vigorous activity, are susceptible to wind noise, and function poorly in loud environments. Implanting the device would mitigate these problems and provide users with the discretion of an invisible device. Our prototype demonstrates the feasibility of an implanted microphone and is an important step towards developing a totally implantable cochlear implant.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structured Handwritten Input for Dementia Classification</title>
<link href="https://hdl.handle.net/1721.1/158498" rel="alternate"/>
<author>
<name>Flores, Gerardo</name>
</author>
<id>https://hdl.handle.net/1721.1/158498</id>
<updated>2025-04-07T08:57:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Structured Handwritten Input for Dementia Classification
Flores, Gerardo
We explore the use of deep learning to score the Digit Symbol Substitution Test (DSST), a paper-and-pencil behavioral test useful in diagnosing Alzheimer’s. We train a model to classify Alzheimer’s based on the subject’s responses to any one of the 108 queries in the test. We then combine predictions across the test to produce a new classifier that is considerably stronger. We also make an extensive search of architectures and optimization techniques that have proved useful in other settings. The ultimate result is a very strong classifier, with an AUC of 86% for a response to a single question and 97.25% for an overall patient.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatially-Adaptive LiDAR and Underwater Communications Using Integrated Optical Phased Arrays</title>
<link href="https://hdl.handle.net/1721.1/158495" rel="alternate"/>
<author>
<name>DeSantis, Daniel Markus</name>
</author>
<id>https://hdl.handle.net/1721.1/158495</id>
<updated>2025-04-07T08:27:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Spatially-Adaptive LiDAR and Underwater Communications Using Integrated Optical Phased Arrays
DeSantis, Daniel Markus
Silicon-photonics microsystems have enabled advanced optoelectronic capabilities in applications spanning from sensors to communication systems. In particular, integrated optical-phased-array-based (OPA-based) technologies, such as solid-state LiDAR and free-space optical communications (FSOC) systems, show promise to revolutionize the way we sense and communicate. This thesis enables new integrated-OPA-based solid-state beam-steering capabilities for these existing applications, as well as emerging spatially- and spectrally-demanding applications. First, we develop and experimentally demonstrate a novel multi-beam solid-state OPA-based LiDAR system capable of detecting and ranging multiple targets simultaneously, passively, and without rastering. Through this work, we demonstrate a new spatially-adaptive sensing modality for solid-state LiDAR that promises to reduce the data deluge associated with LiDAR sensing for autonomous systems. Second, we show the first, to the best of our knowledge, spiral integrated OPAs, enabling emission of focusing beams with tunable focal heights for the first time. This work introduces a first-of-its-kind integrated OPA architecture and, as such, enables new functionality for emerging applications of OPAs that require focusing operation, such as biophotonic optical tweezers and chip-based 3D printers. Third, we show the first visible-light integrated-OPA-based FSOC transmitter and use it to experimentally demonstrate the first integrated-OPA-based underwater-wireless-optical-communication (UWOC) link. This integrated OPA transmitter chip can reduce the size, weight, and mechanical complexity of apparatus for UWOC systems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributional Private Information Retrieval</title>
<link href="https://hdl.handle.net/1721.1/158492" rel="alternate"/>
<author>
<name>Lehmkuhl, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/158492</id>
<updated>2025-04-07T09:02:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Distributional Private Information Retrieval
Lehmkuhl, Ryan
A private-information-retrieval (PIR) scheme lets a client fetch a record from a remote database without revealing which record it has fetched. Classic PIR schemes treat all database records the same but, in practice, some database records are much more popular (i.e., commonly fetched) than others. We introduce distributional private information retrieval, a new type of PIR that can run faster than classic PIR—both asymptotically and concretely—when the popularity distribution is heavily skewed. Distributional PIR provides exactly the same cryptographic privacy notion as classic PIR. The speedup comes from providing a relaxed form of correctness: distributional PIR guarantees reliable retrieval for PIR queries that follow the popularity distribution, but only “best-effort” retrieval for out-of-distribution queries. We give several constructions of distributional-PIR schemes that make black-box use of existing standard PIR protocols. On a popularity distribution drawn from real-world Twitter data, distributional PIR reduces compute costs by 5.1–77× compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately streaming social-media posts, and show that our PIR schemes reduce the end-to-end server cost by 8×.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Enhancing Robustness and Generalization in Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/158491" rel="alternate"/>
<author>
<name>Schechter, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/158491</id>
<updated>2025-04-07T09:24:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Methods for Enhancing Robustness and Generalization in Machine Learning
Schechter, Amit
We propose two methods for improving subgroup robustness and out-of-distribution generalization of machine learning models. First, we introduce a formulation of Group DRO with soft group assignment. This formulation can be applied to data with noisy or uncertain group labels, or when only a small subset of the training data has group labels. We propose a modified loss function, explain how to apply it to data with noisy group labels as well as data with missing or few group labels, and perform experiments to demonstrate its effectiveness. In the second part, we propose an invariant decision tree objective that aims to improve the robustness of tree-based models and address a common failure mode of existing methods for out-of-domain generalization. We demonstrate the benefits of this method both theoretically and empirically. Both these approaches are designed to enhance machine learning models’ performance under distribution shift.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Epistemic Uncertainty of Predictive Action Models and Sampling-Based Motion Planners for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/158490" rel="alternate"/>
<author>
<name>Shaw, Seiji A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158490</id>
<updated>2025-04-07T08:44:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Characterizing the Epistemic Uncertainty of Predictive Action Models and Sampling-Based Motion Planners for Robotic Manipulation
Shaw, Seiji A.
We derive methods to represent the epistemic uncertainty of models used in long-horizon robot planning problems in autonomous manipulation. We develop a representation of epistemic uncertainty for two types of models: uncertainty over the physical parameters of a model that predicts the observed outcome of a manipulation action, and uncertainty over the geometric graph that a sampling-based motion planner builds as a representation of the configuration space when answering a motion-planning query. We propose a simple planning system that integrates these uncertainty characterizations to reason about the informational value of executing a manipulation action or allocating a number of samples to a sampling-based motion planner.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Burst Imaging with Learned Continuous Kernels</title>
<link href="https://hdl.handle.net/1721.1/158488" rel="alternate"/>
<author>
<name>Biscarrat, Camille</name>
</author>
<id>https://hdl.handle.net/1721.1/158488</id>
<updated>2025-04-07T08:59:17Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Burst Imaging with Learned Continuous Kernels
Biscarrat, Camille
Burst imaging is a technique that consists of taking multiple images in quick succession and merging them into one output image. By aligning and combining data from multiple frames, we can increase resolution, attenuate noise, reduce motion blur, and expand the dynamic range to obtain a higher-quality image. In this thesis, we propose a method that learns continuous kernels to process and merge burst frames. We show that the learned kernels adapt to local image information and take advantage of sub-pixel sample location information to demosaic, denoise, and merge the burst into a high-quality output.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magnetic Weyl Semimetals for Spintronic Applications</title>
<link href="https://hdl.handle.net/1721.1/158487" rel="alternate"/>
<author>
<name>He, Zhiping</name>
</author>
<id>https://hdl.handle.net/1721.1/158487</id>
<updated>2025-04-07T08:41:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Magnetic Weyl Semimetals for Spintronic Applications
He, Zhiping
Magnetic Weyl semimetals are a category of topological materials that hold promise for spintronic applications due to their unconventional transport properties, which arise from both bulk and surface topological states, as well as the rich interplay between band topology and magnetism. Among the family of semimetallic materials, the antiferromagnetic Weyl semimetals Mn₃X (X=Sn, Ge, etc.) and the ferromagnetic Weyl semimetal Co₂MnGa have attracted significant interest. So far, despite extensive theoretical and experimental investigations, the magnetic dynamics of Mn₃X and the spin-polarized tunneling in Co₂MnGa-based spintronic devices remain not fully explored.&#13;
&#13;
In this thesis, I establish a theoretical framework to describe the low energy dynamics of strained Mn₃X. Using perturbation theory, I identify three distinct dynamic modes and derive a Landau-Lifshitz-Gilbert (LLG)-like equation to describe uniform mode dynamics. I also analyze the excitation of dissipative spin waves and the spin superfluidity state in Mn₃X by extending the model to include spatial inhomogeneity. The analytical results are validated against numerical simulations based on fully coupled LLG equations, where good agreement is achieved. In addition, I study fully epitaxial magnetic tunnel junctions (MTJs) composed of Co₂MnGa. By growing Co₂MnGa/MgO/Co₂MnGa stacks under different conditions, I develop a series of MTJs with varying degrees of chemical ordering in the Weyl semimetal electrodes and compare their tunneling magnetoresistance (TMR). I find that the TMR is enhanced with the improvement of the chemical ordering in Co₂MnGa. Our results reveal the relationship between the spin tunneling in MTJs and the chemical order of Co₂MnGa electrodes, offering insights into further enhancing TMR through Weyl semimetal engineering.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superparamagnetic Tunnel Junctions for Reliable True Randomness and Efficient Probabilistic Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/158486" rel="alternate"/>
<author>
<name>Koh, Dooyong</name>
</author>
<id>https://hdl.handle.net/1721.1/158486</id>
<updated>2025-04-07T08:50:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Superparamagnetic Tunnel Junctions for Reliable True Randomness and Efficient Probabilistic Machine Learning
Koh, Dooyong
Physical devices exhibiting stochastic functions with low energy consumption and high device density have the potential to enable complex probability-based computing algorithms, accelerate machine learning tasks, and enhance hardware security. Recently, superparamagnetic tunnel junctions (sMTJs) have been widely explored for such purposes, leading to the development of limited-scale sMTJ-based systems. Existing sMTJs face significant scalability and reliability issues, however, because their intrinsically low energy barrier and correspondingly small device area result in high sensitivity to external perturbations, as well as large variations from device to device. Here, we present an experimental demonstration of three-terminal sMTJs as reliable and potentially scalable sources of true randomness in the field-free regime. By leveraging dual-current controllability and incorporating feedback, we stabilize the switching operation of superparamagnets and achieve cryptographic-quality random bitstreams. The realization of controllable and robust true random sMTJs underpins a general hardware platform for computing schemes exploiting the stochasticity in the physical world, as demonstrated by the generative artificial intelligence example in our experiment. Furthermore, we experimentally demonstrate a novel method of utilizing sMTJs as stochastic analog-to-digital converters (sADCs) in a crossbar array architecture for neural network acceleration, showing performance comparable to software implementations. This work highlights the potential of sMTJs to revolutionize energy-efficient computing and provides a foundation for future advancements in probabilistic computing and hardware security.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Model Tools for Project-based Learning</title>
<link href="https://hdl.handle.net/1721.1/158485" rel="alternate"/>
<author>
<name>Ravi, Prerna</name>
</author>
<id>https://hdl.handle.net/1721.1/158485</id>
<updated>2025-04-07T09:24:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Large Language Model Tools for Project-based Learning
Ravi, Prerna
Project-Based Learning (PBL) has emerged as a prominent educational approach that immerses students in meaningful, real-world tasks, fostering deep and lasting learning experiences. Unlike traditional instructional methods, PBL emphasizes a student-centered pedagogy, where learners actively construct knowledge through exploration, collaboration, and reflection. This approach not only nurtures a love of learning but also encourages students to form personal connections to their academic experiences, making education more relevant and impactful. However, while PBL offers significant educational benefits, it also presents challenges for educators, including the complexities of designing and managing projects, assessing student learning, and balancing student autonomy with guided instruction. The advent of artificial intelligence (AI), particularly large language models (LLMs), holds promise for addressing these challenges by enhancing personalized learning, automating administrative tasks, and providing real-time feedback. To ensure that these AI tools are sustainable and conducive to diverse classroom contexts, it is crucial to involve educators in the design process from the outset.&#13;
&#13;
This thesis contributes to the intersection of PBL and generative AI by documenting a co-design process with interdisciplinary K-12 teachers aimed at integrating AI into PBL pedagogy. Through need-finding interviews, collaborative workshops, and iterative tool design, this research explores how AI can support teachers in implementing high-quality PBL while maintaining the integrity of student-centered learning. We also investigate how this technology can augment the current roles of teachers without replacing them, and support their professional growth.&#13;
&#13;
The thesis is structured around three key objectives: exploring the challenges educators face with PBL, co-designing AI tools that address these challenges, and proposing design guidelines for future AI tools in PBL classrooms. By refining the design of AI-powered PBL tools, enhancing teacher professional development resources, and ensuring these tools are accessible and equitable, educators will be better equipped to foster engaging, student-centered learning environments. These contributions not only encourage future research and development of AI educational tools, but also aim to foster a more immersive and constructionist learning approach in classrooms.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpretable and Automated Bias Detection for AI in Healthcare</title>
<link href="https://hdl.handle.net/1721.1/158474" rel="alternate"/>
<author>
<name>Alexiev, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/158474</id>
<updated>2025-04-07T09:05:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Interpretable and Automated Bias Detection for AI in Healthcare
Alexiev, Christopher
Biases in artificial intelligence systems and the data they operate over are a major hurdle to their application in clinical and biomedical settings. Such systems have frequently been shown to fail to generalize from their training data to the real world environment and often display differing levels of accuracy over different population subgroups, which has detrimental effects on patients' quality of care and on healthcare equality. Here, we introduce an automated framework for identifying and understanding nontrivial sources of bias in healthcare datasets and AI models. Our framework is data- and model-agnostic and does not rely on human-developed heuristics or assumptions to uncover bias. We demonstrate its effectiveness by uncovering serious and nontrivial sources of bias in three widely used clinical datasets and one biomedical dataset, over the diverse tasks of diabetes risk prediction, lung cancer risk prediction, and biomolecular toxicity prediction. Our framework is used to uncover biases caused by patient BMI and computed tomography (CT) scanner type in the data used by a cutting-edge lung cancer risk prediction AI model, causing AUC drops on the order of ten percent.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical Characterisation of Strain and Defects in 2D Photonic Materials</title>
<link href="https://hdl.handle.net/1721.1/158473" rel="alternate"/>
<author>
<name>Mukherjee, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/158473</id>
<updated>2025-04-07T08:37:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Optical Characterisation of Strain and Defects in 2D Photonic Materials
Mukherjee, Abhishek
Strain and defect engineering have been shown to be powerful tools for modifying the optoelectronic properties of semiconductors. This thesis aims to advance the fundamental understanding of electronic and optical properties in material systems with broken inversion symmetries and to use this understanding to engineer in-situ, localized strain fields for tailoring photonic responses at the nanoscale. We will address the fundamental question: How can we characterize the effect of strain and defects in two-dimensional photonic materials? To this end, we open with a review of current strategies in strain engineering, its fundamental consequences on electronic, optical, and magnetic properties, and the state-of-the-art applications of this technology in achieving band-gap-engineered straintronic devices. Touching on the advent of strain engineering for flexoelectricity (a spontaneous material polarization produced by a strain gradient that lifts inversion symmetry and can enable a bulk photogalvanic effect), we posit that metavalent bonding in materials plays a key role, showing that the majority of prime material candidates known to exhibit a large photogalvanic response share this bonding characteristic. The rest of the thesis focuses on characterizing layered metal thio(seleno)phosphates, a family of materials known for their magnetic, electronic, and nonlinear optical properties. We show how the optical properties of these materials can be modulated via different types of defects and strain. These photoactive materials can be pivotal to a future comprising strain-engineered flexoelectric devices, which take advantage of the bulk photogalvanic effect, to develop a new family of practical, deployable, self-powered, and low-cost photodetectors, and integrated arrays with limits-breaking performance in the UV-to-LWIR spectral bands.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-accuracy, speed-optimized positioning system for electron beam lithography</title>
<link href="https://hdl.handle.net/1721.1/158470" rel="alternate"/>
<author>
<name>Dadok, Luděk.</name>
</author>
<id>https://hdl.handle.net/1721.1/158470</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">High-accuracy, speed-optimized positioning system for electron beam lithography
Dadok, Luděk.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1982; Vita.; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transfer function of heavy duty gas turbine combustor components.</title>
<link href="https://hdl.handle.net/1721.1/158468" rel="alternate"/>
<author>
<name>Farrell, Thomas Dominic.</name>
</author>
<id>https://hdl.handle.net/1721.1/158468</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Transfer function of heavy duty gas turbine combustor components.
Farrell, Thomas Dominic.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of iron tricarbonyl complexes.</title>
<link href="https://hdl.handle.net/1721.1/158467" rel="alternate"/>
<author>
<name>Fanelli, Joseph John.</name>
</author>
<id>https://hdl.handle.net/1721.1/158467</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A study of iron tricarbonyl complexes.
Fanelli, Joseph John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of braking on automobile vehicle dynamics.</title>
<link href="https://hdl.handle.net/1721.1/158466" rel="alternate"/>
<author>
<name>Evans, David Gordon.</name>
</author>
<id>https://hdl.handle.net/1721.1/158466</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The effect of braking on automobile vehicle dynamics.
Evans, David Gordon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fractographic investigation of crack-closure.</title>
<link href="https://hdl.handle.net/1721.1/158465" rel="alternate"/>
<author>
<name>Faral, Michel.</name>
</author>
<id>https://hdl.handle.net/1721.1/158465</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Fractographic investigation of crack-closure.
Faral, Michel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An accident and seismic containment reliability study including statistical uncertainty</title>
<link href="https://hdl.handle.net/1721.1/158464" rel="alternate"/>
<author>
<name>Fardis, M. N.
            (Michael N.)</name>
</author>
<id>https://hdl.handle.net/1721.1/158464</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">An accident and seismic containment reliability study including statistical uncertainty
Fardis, M. N.
            (Michael N.)
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1978; Bibliography: leaves 180-183.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A piezoelectric force measuring system for human mobility analysis.</title>
<link href="https://hdl.handle.net/1721.1/158463" rel="alternate"/>
<author>
<name>Estey, Paul Norman.</name>
</author>
<id>https://hdl.handle.net/1721.1/158463</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A piezoelectric force measuring system for human mobility analysis.
Estey, Paul Norman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Bibliography: leaves 178-182.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of methods of determining flood damages and of evaluating flood control benefits</title>
<link href="https://hdl.handle.net/1721.1/158456" rel="alternate"/>
<author>
<name>Lampert, James B.
            (James Benjamin),
            1914-1978.</name>
</author>
<id>https://hdl.handle.net/1721.1/158456</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">A study of methods of determining flood damages and of evaluating flood control benefits
Lampert, James B.
            (James Benjamin),
            1914-1978.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1939; Includes bibliographical references (leaf 101).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal shock resistance of ceramics.</title>
<link href="https://hdl.handle.net/1721.1/158452" rel="alternate"/>
<author>
<name>Goodof, Robert Steven.</name>
</author>
<id>https://hdl.handle.net/1721.1/158452</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Thermal shock resistance of ceramics.
Goodof, Robert Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovative Structural and Mechanical Satellite Systems</title>
<link href="https://hdl.handle.net/1721.1/158321" rel="alternate"/>
<author>
<name>Thomas, Annika</name>
</author>
<id>https://hdl.handle.net/1721.1/158321</id>
<updated>2025-04-08T04:46:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Innovative Structural and Mechanical Satellite Systems
Thomas, Annika
This thesis covers two topics within the field of satellite mechanical engineering. The first topic covered is the structural and thermal design and validation of the BeaverCube 2 Earth-imaging CubeSat. The second topic covered is the electromagnetics modeling and simulation of an inductive spin drive for a novel magnetically levitated spherical control moment gyroscope for satellite attitude control.&#13;
&#13;
For the first topic on BeaverCube 2, the key tasks were to design and assemble the structure of the CubeSat, ensure that subsystems maintain their operating temperatures on orbit, and validate the structural integrity of the CubeSat structure during launch. We design and manufacture 24 components that integrate all subsystems of BeaverCube 2 and meet the size requirements of a 3U (3 x (100cm3)) CubeSat, including a chassis, panels, payload structure, and connectors for the stack of boards. Next, we ensure that all subsystems of the satellite do not exceed their temperature limits through analytical and simulated thermal analysis, showing that during worst-case hot (70∘ beta angle) and worst-case cold (70∘ beta angle) orbits, no subsystem reaches within 5 ∘C of its operating temperature limits. Finally, we analyze the structure of BeaverCube 2 to validate that the components can structurally withstand the 4-7 G linear accelerations, 13.5 rad/s radial accelerations, 1200 N side rail loads, and random vibration environment that may be experienced during launch [1]. The design is shown to be robust in these conditions, with margins of safety in stress ranging from 19.97 to 37.56 and deformation of the stack of circuit boards not exceeding 0.05 mm. The minimum frequencies of modes of vibration throughout the structure occur at 623 Hz, which is well above the allowed minimum mode of 100 Hz.&#13;
&#13;
For the second topic of modeling the spherical control moment gyroscope, the key tasks were to design an actuation method using inductive drive and to experimentally validate a closed-loop controller for suspension of a prototype. For the actuation method, we present the electromagnetics modeling of an inductive spin drive, including analytical derivations of a bulk conductivity model and a skin current model. The analytical skin model shows that inductive drive with a rotating dipole magnetic field can generate a peak torque of 130 &#120583;Nm. We simulate both models with a rotating dipole and a rotating quadrupole stator drive configuration. Next, we successfully magnetically levitate a permanent magnet rotor prototype. We develop an analytical plant model for the system and a controller for closed-loop suspension with 40 Hz crossover and 20∘ phase margin, then we present preliminary experimental results.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Screen Time</title>
<link href="https://hdl.handle.net/1721.1/158318" rel="alternate"/>
<author>
<name>Landman, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/158318</id>
<updated>2025-04-08T04:41:06Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">Screen Time
Landman, Jeffrey
In Times Square, architecture is inextricable from mediated representations. The place is dislocated by the screens that envelop its buildings and the other screens, around the world, upon which its image is ceaselessly presented. The neighborhood itself is named after the Times Tower, which was opened in 1905 as the office and printing press of The New York Times, and remains at the center of the square today, entirely empty, voided by the advertising value of its screens. But this condition is not a contemporary anomaly. If the screens, flowed through by consumer desire, currently vaporise the building’s edge, in 1904, before it was even occupied, the building summoned the city with the results of the general election, broadcast to the metropolis via searchlight. The building has always extended its edge, projecting public messages while concealing private concerns.&#13;
&#13;
This thesis understands the building as one actor in a media apparatus: a network of interconnections between broadcasting devices and media, infrastructure, public and political events, development policy and financial systems. The Tower indexes 20th century architecture’s participation in this media apparatus, telling a story in which communication and the distribution of power predate and outlast inhabitation, a story in which occupation is not part of the program. The thesis tracks the tower through six innovative broadcasting devices which the building sponsored, including the world’s first moving electric sign, the New Year’s Eve Ball, the world’s first changeable architectural screen, and the world’s largest open architectural competition. &#13;
&#13;
The form of the thesis is a short movie that uses found footage and computer generated animations to apprehend the Tower amid its myriad images. In designing for animated representation the thesis is positioned in a lineage of paper architectures, proposing a form of architectural production which embraces and redirects the forces of the media apparatus. The movie reconfigures, misaligns and misuses its historical sources to reproduce and subvert the Screen Time from which architecture can now never be distinct.
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Product Purity Prediction and Anomaly Detection for an Automated Peptide Manufacturing Platform</title>
<link href="https://hdl.handle.net/1721.1/158317" rel="alternate"/>
<author>
<name>Yang, Liudi</name>
</author>
<id>https://hdl.handle.net/1721.1/158317</id>
<updated>2025-04-08T04:46:19Z</updated>
<published>2020-09-01T00:00:00Z</published>
<summary type="text">Product Purity Prediction and Anomaly Detection for an Automated Peptide Manufacturing Platform
Yang, Liudi
This thesis aims to develop and deploy a method of predicting product purity and automating anomaly detection for Mytide Therapeutics’ peptide manufacturing platform. A baseline study revealed how early purity prediction and anomaly reporting could decrease the production cycle time, manual data review, and chemical waste produced by the synthesis process. The most important tool for making purity predictions is UV absorption measured on the byproducts and excess reagents leaving the reactor where the peptides are made. A large part of this thesis involved improving the quality of the UV data so that purity predictions could be made from the improved UV traces. Sensor data from historical runs, including pressure, temperature, and flow rates, were analyzed to characterize several common anomalies. The reporting system takes in live data and alerts the relevant parties when limits are reached, so that corrective action can be implemented quickly. The anomaly tracking code also generates a report to either be viewed on the user interface or stored in the backend database with the run’s historical data. Implementation of the described system improvements had several positive impacts on the workflow. The live anomaly alerts allowed issues to be reported to the relevant parties upon occurrence, which increased the uptime of the system. The anomaly report, which is tagged to each peptide synthesis run, allows for historical data evaluation and easy decision-making for advancing the peptide to the next step of the process. The purity prediction allowed for earlier identification of certain poor-purity peptides by 27% of the production time. Together, these system improvements helped advance the company’s peptide manufacturing platform towards fully automated decision-making.
</summary>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Biopolitics from below?” — Lessons of Emergent Urban Governance Trend Under Covid-19 in China</title>
<link href="https://hdl.handle.net/1721.1/158308" rel="alternate"/>
<author>
<name>Shao, Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/158308</id>
<updated>2025-04-08T04:49:38Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">“Biopolitics from below?” — Lessons of Emergent Urban Governance Trend Under Covid-19 in China
Shao, Yu
This thesis interrogates emergent urban governance trends in China in response to the COVID-19 crisis, with a particular focus on the narratives of epidemic and state emergency, as well as the governance strategies deployed during the pandemic and in the so-called post-COVID era. More importantly, this thesis investigates people’s responses to emergency policies: the compliance and the creative strategies that people have adopted to demonstrate their resistance. Using a combination of ethnographic data and archival research, this thesis covers five major themes: a) the impacts of the different outbreak narratives perpetuated on the Internet; b) left-wing scholars’ view (or hope) of a rise of socialism, and how the Chinese state has used the socialist narrative to build up its international image; c) the strong comeback of capitalist practices and how the pandemic exacerbated the precariousness of work; d) how the pandemic has been used as a justification to impose panoptic surveillance and control on Chinese citizens and to demand absolute obedience to government policies, as well as how formulaic practices dominated the post-COVID landscape; and finally, e) people’s responses and sentiments towards government policies such as lockdowns and social distancing, as displayed on social media platforms. It concludes by arguing that even in an autocratic state with increasingly tightened control justified by the epidemic, people are not passive recipients of such policies. They have come up with creative strategies to express their resistance and to negotiate with the policies. It further argues that in China, COVID-19 has aroused a new wave of active civil participation, with citizens openly discussing politics, moving from pandemic-related topics to freedom of speech at large.
Complicating what Panagiotis Sotiris terms biopolitics from below, it suggests that the creative posts on social media platforms are a savvy means of claiming back our bodies.
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Statistical Analysis and Machine Learning to Improve the Ice Sensing Algorithm</title>
<link href="https://hdl.handle.net/1721.1/158268" rel="alternate"/>
<author>
<name>Herron, Lucas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158268</id>
<updated>2025-04-08T04:16:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Applying Statistical Analysis and Machine Learning to Improve the Ice Sensing Algorithm
Herron, Lucas A.
The detection of sea ice is a major problem faced by Argo floats operating in polar regions. In these areas, the presence of sea ice threatens to damage or destroy floats in the event of an impact at the surface. While methods have been proposed and implemented to combat this danger, the most successful of which is the Ice Sensing Algorithm (ISA), further work is necessary to fully mitigate the risks, particularly in the Arctic. In this analysis, past CTD profiles from the Arctic are compiled and matched with sea ice data to examine the performance of the ISA and recommend potential changes and new methods to further improve its accuracy. This is accomplished by fitting the data to statistical and machine learning models to predict the presence of ice and analyzing the results. Results show that both modifications to current methods and the inclusion of new variables may increase the predictive power of the ISA. Specifically, the analysis shows that the use of point measurements (as opposed to a metric over a pressure range) at the shallowest allowable depth provides the best performance. The additional inclusion of practical salinity and time of year as predictive variables also increases the performance of the algorithm. Results and statistics on the performance of the algorithm are provided and analyzed in various regions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Robotic Manipulation in Remote Environments with Shared Autonomy</title>
<link href="https://hdl.handle.net/1721.1/158267" rel="alternate"/>
<author>
<name>Phung, Amy</name>
</author>
<id>https://hdl.handle.net/1721.1/158267</id>
<updated>2025-04-07T09:18:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Robotic Manipulation in Remote Environments with Shared Autonomy
Phung, Amy
The evolution of robotics technology continues to facilitate exploration and scientific study in remote environments, enabling research in areas that were previously impossible to reach. Robots operating in space and marine environments encounter similar operational challenges, as both face high operational costs, bandwidth-limited conditions, and natural, unstructured environments where dynamic obstacles might be present. Within the oceanographic domain, conventional deep-sea sampling operations involve remotely operated vehicles (ROVs) equipped with robotic manipulator arms to complete dexterous tasks at depth. While effective, deep-sea ROV operations require specialized instrumentation, highly trained shipboard personnel, and large oceanographic vessels, which make deep-sea samples inaccessible to most.&#13;
This thesis presents the SHared Autonomy for Remote Collaboration (SHARC) framework, and evaluates its utility within an oceanographic context. By leveraging shared autonomy, SHARC enables shore-side operators to collaboratively carry out underwater sampling and manipulation tasks, regardless of their prior manipulator operations experience. With SHARC, operators can conduct manipulation tasks using natural language and hand gestures through a virtual reality (VR) interface. The interface provides remote operators with a contextual 3D scene understanding that is updated according to bandwidth availability.&#13;
Evaluation of the SHARC framework through controlled lab experiments indicates that SHARC’s VR interface enables novice operators to complete manipulation tasks in framerate-limited conditions (i.e., &lt;0.5 frames per second) faster than expert pilots using the conventional topside controller. For both novice and expert users, the VR interface also increased the task completion rate and improved sampling precision. During sea trials, SHARC enabled collection of an underwater in-situ X-ray fluorescence (XRF) measurement at more than 1000 meters water depth in the Eastern Pacific with centimeter-level precision by remote scientists with no prior piloting experience. This demonstration provides compelling evidence of SHARC’s utility for conducting delicate operations in unstructured environments across bandwidth-limited communications, which holds relevance for improving operations in other sensitive domains where dexterity is required. SHARC’s ability to relax infrastructure requirements and engage novice shore-side users provides a promising avenue for democratizing access to deep-sea research.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of compensators for double integrator plants</title>
<link href="https://hdl.handle.net/1721.1/158231" rel="alternate"/>
<author>
<name>Schwartz, Adam L.</name>
</author>
<id>https://hdl.handle.net/1721.1/158231</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">Comparison of compensators for double integrator plants
Schwartz, Adam L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1989; Includes bibliographical references (leaves 186-189).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Issues in new product development--the introduction of tape automated bonding technology</title>
<link href="https://hdl.handle.net/1721.1/158228" rel="alternate"/>
<author>
<name>Maggs, Virginia Loop.</name>
</author>
<id>https://hdl.handle.net/1721.1/158228</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">Issues in new product development--the introduction of tape automated bonding technology
Maggs, Virginia Loop.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1990; Includes bibliographical references (leaves 142-144).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characteristics of electric strain gages at low temperatures</title>
<link href="https://hdl.handle.net/1721.1/158225" rel="alternate"/>
<author>
<name>Sevand, Ali H.</name>
</author>
<author>
<name>Day, Emmett E.</name>
</author>
<id>https://hdl.handle.net/1721.1/158225</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Characteristics of electric strain gages at low temperatures
Sevand, Ali H.; Day, Emmett E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1946; Bibliography: leaf 21.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the reaction of sulfur vapor with a metallic oxide</title>
<link href="https://hdl.handle.net/1721.1/158223" rel="alternate"/>
<author>
<name>Hard, Robert A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158223</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">An investigation of the reaction of sulfur vapor with a metallic oxide
Hard, Robert A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1949; Bibliography: leaf 59.
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A compressible, high frequency numerical model of helicopter noise due to blade/vortex interaction</title>
<link href="https://hdl.handle.net/1721.1/158221" rel="alternate"/>
<author>
<name>Lima, Luiz Hamilton de Resende.</name>
</author>
<id>https://hdl.handle.net/1721.1/158221</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">A compressible, high frequency numerical model of helicopter noise due to blade/vortex interaction
Lima, Luiz Hamilton de Resende.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of production smoothing in a job shop environment</title>
<link href="https://hdl.handle.net/1721.1/158219" rel="alternate"/>
<author>
<name>Cruickshanks, Allan Benjamin.</name>
</author>
<author>
<name>Drescher, Robert D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158219</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">A study of production smoothing in a job shop environment
Cruickshanks, Allan Benjamin.; Drescher, Robert D.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Vitae.; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of multi position letter sorting machine operation in the United States Postal Service</title>
<link href="https://hdl.handle.net/1721.1/158218" rel="alternate"/>
<author>
<name>Cruce, A. C., 1858-1919.</name>
</author>
<author>
<name>Lee, Jerry Kenneth.</name>
</author>
<id>https://hdl.handle.net/1721.1/158218</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">A study of multi position letter sorting machine operation in the United States Postal Service
Cruce, A. C., 1858-1919.; Lee, Jerry Kenneth.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the air bleeds and several typical idling systems of the carburetors</title>
<link href="https://hdl.handle.net/1721.1/158211" rel="alternate"/>
<author>
<name>Ding, Qinghua.</name>
</author>
<id>https://hdl.handle.net/1721.1/158211</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Analysis of the air bleeds and several typical idling systems of the carburetors
Ding, Qinghua.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1946; Bibliography: leaf 59.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Double trouble: Predicting new variant counts across two heterogeneous populations</title>
<link href="https://hdl.handle.net/1721.1/158206" rel="alternate"/>
<author>
<name>Shen, Yunyi</name>
</author>
<id>https://hdl.handle.net/1721.1/158206</id>
<updated>2025-04-07T08:53:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Double trouble: Predicting new variant counts across two heterogeneous populations
Shen, Yunyi
Collecting genomics data across multiple heterogeneous populations (e.g., across different cancer types) has the potential to improve our understanding of disease. Despite sequencing advances, though, resources often remain a constraint when gathering data. So it would be useful for experimental design if experimenters with access to a pilot study could predict the number of new variants they might expect to find in a follow-up study: both the number of new variants shared between the populations and the total across the populations. While many authors have developed prediction methods for the single-population case, we show that these predictions can fare poorly across multiple populations that are heterogeneous. We prove that, surprisingly, a natural extension of a state-of-the-art single-population predictor to multiple populations fails for fundamental reasons. We provide the first predictor for the number of new shared variants and new total variants that can handle heterogeneity in multiple populations. We show that our proposed method works well empirically using real cancer and population genetics data.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Influence of Root Geometry on Soil Cohesion and Anchoring Ability through Geologic Time</title>
<link href="https://hdl.handle.net/1721.1/158205" rel="alternate"/>
<author>
<name>Colicci, Vittorio</name>
</author>
<id>https://hdl.handle.net/1721.1/158205</id>
<updated>2025-04-07T08:33:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Influence of Root Geometry on Soil Cohesion and Anchoring Ability through Geologic Time
Colicci, Vittorio
Vegetation has become ubiquitous across most modern landscapes. However, for much of the Earth’s history, land plants were absent. Their rapid diversification throughout the Devonian and Carboniferous brought about a massive shift in geomorphology and landscape evolution. Complex rooting structures were the principal agents of change, mechanically reinforcing their substrates and generating cohesive sediments through weathering. This work examines the root systems of three major tree genera from these periods: Calamophyton, Lepidodendron, and Calamites. Simplified reconstructions were designed, 3D printed, and uprooted from a sand testbed to explore the effects of root geometry on anchoring ability. Force and displacement data were gathered for each model and used to calculate anchoring strength and uprooting work. Force laws were then derived to approximate the anchoring contributions of root weight, sediment weight, static friction, and shear strength. This analysis revealed a strong dependence on the span, surface area, and volume of the root system, which were used to normalize values across different geometries. The Calamophyton model required the greatest uprooting force per unit length, whereas the Lepidodendron model required the greatest uprooting force per unit area and volume. These results were interpreted within the environmental context of each genus alongside particular features of root geometry. Calamophyton contributed less to soil cohesion due to its simple unbranched architecture; however, it likely increased wetland habitability for subsequent species. Meanwhile, Lepidodendron would have bolstered cohesion on account of its densely packed dichotomous rootlets. Calamites is unique in its clonal reproductive habit and nodal branching architecture, which could have helped it colonize particularly unstable environments. 
We maintain that the earliest trees played a key role in surface stabilization within their ecosystems and likely paved the way for species that followed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares</title>
<link href="https://hdl.handle.net/1721.1/158204" rel="alternate"/>
<author>
<name>Min, Youngjae</name>
</author>
<id>https://hdl.handle.net/1721.1/158204</id>
<updated>2025-04-07T09:26:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares
Min, Youngjae
While deep neural networks are capable of achieving state-of-the-art performance in various domains, their training typically requires iterating for many passes over the dataset. However, due to computational and memory constraints and potential privacy concerns, storing and accessing all the data is impractical in many real-world scenarios where the data arrives in a stream. In this thesis, we investigate the problem of one-pass learning, in which a model is trained on sequentially arriving data without retraining on previous datapoints. Motivated by the increasing use of overparameterized models, we develop Orthogonal Recursive Fitting (ORFit), an algorithm for one-pass learning which seeks to perfectly fit every new datapoint while changing the parameters in a direction that causes the least change to the predictions on previous datapoints. By doing so, we bridge two seemingly distinct algorithms in adaptive filtering and machine learning, namely the recursive least-squares (RLS) algorithm and orthogonal gradient descent (OGD). Our algorithm uses the memory efficiently by exploiting the structure of the streaming data via an incremental principal component analysis (IPCA). Further, we show that, for overparameterized linear models, the parameter vector obtained by our algorithm is what stochastic gradient descent (SGD) would converge to in the standard multi-pass setting. Finally, we generalize the results to the nonlinear setting for highly overparameterized models, relevant for deep learning. Our experiments show the effectiveness of the proposed method compared to the baselines.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Precision Needle for Injection of Fluid into the Suprachoroidal Space of the Eye for the Treatment of Retinal Detachment</title>
<link href="https://hdl.handle.net/1721.1/158203" rel="alternate"/>
<author>
<name>Rutherford, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/158203</id>
<updated>2025-04-08T04:40:34Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Design of a Precision Needle for Injection of Fluid into the Suprachoroidal Space of the Eye for the Treatment of Retinal Detachment
Rutherford, Emma
Rhegmatogenous retinal detachment (RRD) is a vision-threatening condition that affects 10 to 18 per 100,000 people in the United States annually [1]. The current standard for treatment is pars plana vitrectomy (PPV), an invasive and expensive surgical procedure that leaves patients unable to perform usual activities for four to six weeks. In addition, current methods tend to produce distorted vision upon recovery. In-office Suprachoroidal Viscopexy™ (SCVEXY™) is a minimally invasive technique recently developed by Dr. Rajeev Muni for treating RRD which has been performed on a handful of people [2]. This procedure has the potential to greatly reduce the cost and recovery time of RRD treatment while also improving the quality of the repair. It can be performed with no incision, no tamponade agent, and no patient post-op positioning requirements [2]. SCVEXY works by injecting viscous fluid into the suprachoroidal space, a “potential space” between the sclera and choroid, creating a “bleb” of fluid underneath the tear that pushes the choroid towards the retina and allows it to reattach. However, difficulty in safely injecting into this space at the location of the retinal tear currently limits the widespread utilization of the technique. If this procedure were made reliably safe, it could greatly change how retinal detachments are treated and improve patient outcomes. The primary difficulty lies in precisely locating the suprachoroidal space in order to inject the viscous fluid. The thickness of the sclera varies from patient to patient and between locations on the eye. Additionally, the scleral and choroidal tissues are very thin, leaving little room for positional error. Hemorrhage may occur if the needle punctures through the choroid and into the subretinal space, which could lead to poor clinical outcomes. 
This work presents a device developed to minimally invasively reach posterior segments of the eye, deploy an injection needle in-situ with high resolution, sense when the needle tip has passed into the suprachoroidal space (SCS), and inject a viscous fluid. Not only will this device be used to treat retinal detachment in a minimally invasive manner, but it could also be used for drug injection or fluid aspiration via the suprachoroidal and subretinal spaces for treatment of a variety of posterior ocular diseases.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonrigid single-axis space integrator dynamics</title>
<link href="https://hdl.handle.net/1721.1/158115" rel="alternate"/>
<author>
<name>Shaw, Edward Eugene.</name>
</author>
<id>https://hdl.handle.net/1721.1/158115</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Nonrigid single-axis space integrator dynamics
Shaw, Edward Eugene.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1964; Includes bibliographical references (leaves 63-64).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stratospheric radiance</title>
<link href="https://hdl.handle.net/1721.1/158113" rel="alternate"/>
<author>
<name>Schweickart, Rusty.</name>
</author>
<id>https://hdl.handle.net/1721.1/158113</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Stratospheric radiance
Schweickart, Rusty.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaves 68-70).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A comparison of the existing methods of studying the stability of earth slopes</title>
<link href="https://hdl.handle.net/1721.1/158111" rel="alternate"/>
<author>
<name>La Casta-Sanchez, Salvador.</name>
</author>
<id>https://hdl.handle.net/1721.1/158111</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">A comparison of the existing methods of studying the stability of earth slopes
La Casta-Sanchez, Salvador.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1959; Includes bibliographical references (leaf 15).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning models</title>
<link href="https://hdl.handle.net/1721.1/158103" rel="alternate"/>
<author>
<name>Crooks, Lawrence.</name>
</author>
<id>https://hdl.handle.net/1721.1/158103</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Planning models
Crooks, Lawrence.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Bibliography: leaves 121-127.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An econometric/engineering model of United States demand for semi-fabricated copper products disaggregated by shape and end-use sector : and an econometric/engineering model of world demand for semi-fabricated copper products disaggregated by major consuming area</title>
<link href="https://hdl.handle.net/1721.1/158101" rel="alternate"/>
<author>
<name>Cummings, Mary Rowena.</name>
</author>
<id>https://hdl.handle.net/1721.1/158101</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">An econometric/engineering model of United States demand for semi-fabricated copper products disaggregated by shape and end-use sector : and an econometric/engineering model of world demand for semi-fabricated copper products disaggregated by major consuming area
Cummings, Mary Rowena.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1982; Bibliography: leaves 174-177.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Levels, layers, and planes : the framework of a system of knowledge representation semantics</title>
<link href="https://hdl.handle.net/1721.1/157977" rel="alternate"/>
<author>
<name>Smith, Brian Cantwell.</name>
</author>
<id>https://hdl.handle.net/1721.1/157977</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Levels, layers, and planes : the framework of a system of knowledge representation semantics
Smith, Brian Cantwell.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 199-203.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decision making for energy conservation in existing commercial buildings.</title>
<link href="https://hdl.handle.net/1721.1/157973" rel="alternate"/>
<author>
<name>Chertow, Richard Philip.</name>
</author>
<id>https://hdl.handle.net/1721.1/157973</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Decision making for energy conservation in existing commercial buildings.
Chertow, Richard Philip.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conical flow modeling for polygonal cross section bodies at off design conditions.</title>
<link href="https://hdl.handle.net/1721.1/157972" rel="alternate"/>
<author>
<name>Kamkar, Hamid.</name>
</author>
<id>https://hdl.handle.net/1721.1/157972</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Conical flow modeling for polygonal cross section bodies at off design conditions.
Kamkar, Hamid.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Cost-Effectiveness as a Novel Metric for Steering Emerging Medical Technology</title>
<link href="https://hdl.handle.net/1721.1/157969" rel="alternate"/>
<author>
<name>Richards, Daniel Herndon</name>
</author>
<id>https://hdl.handle.net/1721.1/157969</id>
<updated>2025-04-08T04:08:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Clinical Cost-Effectiveness as a Novel Metric for Steering Emerging Medical Technology
Richards, Daniel Herndon
Background: Steering an emerging medical technology involves making decisions under uncertainty. Localized drug delivery (LDD) is an emerging medical technology that may be useful in treating epilepsy, which is burdensome and difficult to clinically manage. Cost-effectiveness analysis (CEA) is a model-based, problem-oriented framework for determining whether a treatment should be prescribed and reimbursed, though it is typically used to compare treatment alternatives that are already clinically available. Two research questions were posed: How can a clinical CEA be constructed for an emerging medical technology to enhance its steering? And, under what conditions would an emerging technology, LDD, be prescribed in place of resective surgery for drug-resistant epilepsy? Methods: A CEA was constructed with the clinical decision point defined as pediatric patients with drug-resistant epilepsy of focal origin. A new treatment alternative, LDD, was proposed as a solution-neutral, generalized concept, and technological factors were posited that influence parameters in the CEA. A one-way sensitivity analysis was conducted to verify the model and observe its most sensitive parameters. A probabilistic sensitivity analysis was conducted to observe P10 and P90 values for clinical effectiveness. Results: The most sensitive driver of incremental effectiveness of LDD over surgery was, per the model, the potential of LDD to reduce systemic side effects. The potential clinical benefit of LDD over surgery was estimated, probabilistically, as between P10 and P90 values of 0.081 and 0.339 QALYs, respectively. Limitations of the model were discussed. A ‘utopia point’ was calculated. The relationship of the CEA to a total addressable market (TAM) calculation was discussed. The CEA modeling process enhanced learning about the problem and solution spaces. Conclusions: Despite its limitations, CEA modeling can enhance steering activities for emerging medical technologies. 
Insights from CEA may also help to assess trade-offs in capabilities and cost, as well as observe trends in clinical performance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding the structural diversity of discrete polymers accessible through iterative exponential growth</title>
<link href="https://hdl.handle.net/1721.1/157967" rel="alternate"/>
<author>
<name>Khokhlov, Khrystofor</name>
</author>
<id>https://hdl.handle.net/1721.1/157967</id>
<updated>2025-04-08T04:07:46Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Expanding the structural diversity of discrete polymers accessible through iterative exponential growth
Khokhlov, Khrystofor
Iterative exponential growth is a powerful method for the synthesis of atomically defined macromolecules. However, preparation of enantiopure IEG-ready monomers can be challenging, which may limit the attractiveness of IEG as a tool for the study of structure-property relationships in discrete macromolecules, both in materials and in biological systems. Here, we present a new strategy for the synthesis of orthogonally protected monomers, suitable for IEG through cycles of azidation, alkyne deprotection, and CuAAC, in fewer steps and from readily available and affordable building blocks. This monomer synthesis was achieved through the development of a novel allylation methodology. Using alkynylation of epichlorohydrin, LiBr Finkelstein, and TfOH-promoted allylation, we have been able to prepare a monomer for 3A (number of carbons in each polymer repeat unit, excluding alkyne) IEG in just three steps. Furthermore, the same reactions can be integrated in the synthesis of other IEG architectures (2A/4A/5A), thus expanding the structural diversity and readily accessible substrate scope for atomically defined macromolecules. The configurations of stereogenic centers in IEG-mer backbones are defined by the starting material (R or S epichlorohydrin) and can be further controlled by combining different stereoisomers in a desired fashion. This work outlines a conceptual strategy to diversify and expand the chemical space of discrete macromolecules and enable efficient and quick access to a variety of IEG-mer scaffolds.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Abrupt change of load on a synchronous machine</title>
<link href="https://hdl.handle.net/1721.1/157909" rel="alternate"/>
<author>
<name>Edgerton, Harold E.
            (Harold Eugene),
            1903-1990.</name>
</author>
<id>https://hdl.handle.net/1721.1/157909</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1927-01-01T00:00:00Z</published>
<summary type="text">Abrupt change of load on a synchronous machine
Edgerton, Harold E.
            (Harold Eugene),
            1903-1990.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1927; Includes bibliographical references (leaves [102]-[103]).
</summary>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Void formation in copper and selenium ion irradiated molybdenum.</title>
<link href="https://hdl.handle.net/1721.1/157908" rel="alternate"/>
<author>
<name>Chernock, Richard Steven.</name>
</author>
<id>https://hdl.handle.net/1721.1/157908</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Void formation in copper and selenium ion irradiated molybdenum.
Chernock, Richard Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consolidation circuit for an MHD channel.</title>
<link href="https://hdl.handle.net/1721.1/157907" rel="alternate"/>
<author>
<name>Cheng, Rowley Lop Wah.</name>
</author>
<id>https://hdl.handle.net/1721.1/157907</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Consolidation circuit for an MHD channel.
Cheng, Rowley Lop Wah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A fast approximate solution to the electrical power generation rescheduling and load shedding problem</title>
<link href="https://hdl.handle.net/1721.1/157906" rel="alternate"/>
<author>
<name>Chan, Sherman Man.</name>
</author>
<id>https://hdl.handle.net/1721.1/157906</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A fast approximate solution to the electrical power generation rescheduling and load shedding problem
Chan, Sherman Man.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coal nitrogen conversion to NOₓ during simultaneous oxidation and pyrolysis.</title>
<link href="https://hdl.handle.net/1721.1/157905" rel="alternate"/>
<author>
<name>Cheng, Irene Teresa.</name>
</author>
<id>https://hdl.handle.net/1721.1/157905</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Coal nitrogen conversion to NOₓ during simultaneous oxidation and pyrolysis.
Cheng, Irene Teresa.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1978; Bibliography: leaves 127-129.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Figuring the Middle Ground: A Search for Authorship in Perceiving China's COVID-19 Lockdowns</title>
<link href="https://hdl.handle.net/1721.1/157883" rel="alternate"/>
<author>
<name>Zhang, San</name>
</author>
<id>https://hdl.handle.net/1721.1/157883</id>
<updated>2024-12-19T03:33:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Figuring the Middle Ground: A Search for Authorship in Perceiving China's COVID-19 Lockdowns
Zhang, San
Witnessing and attempting to comprehend China’s controversial response to COVID-19 over the past three years from a geographically distant yet culturally and emotionally intimate standpoint, I have grappled with multiple perspectives, sometimes as an insider, sometimes as an outsider, and most of the time as an impostor to both. As I continually query the incoherence of my positionality, I find myself in an obscure middle ground where my voice is filtered as inauthentic and unheeded. I ask myself: What should I do? What can I do?&#13;
&#13;
This project is an effort to give myself a voice in the process of figuring out the “middle ground”—a gradient of unsettled propositions stretching between cultural identities, negotiating with constructed collective memories, and discursively evolving over a three-year-long uncanny journey trying to perceive the COVID-19 lockdowns in China. By accepting the “middle ground” as a valid stance, I was able to devise a set of methods for navigating the complexity of materials gathered at various times and locations. In addition, utilizing architectural representation tools, I curated a collection of works that reproduce the research process and exhibit the processed information.&#13;
&#13;
This endeavor is not intended to rationalize pandemic control. Rather, it cultivates a ground for reflection that deconstructs a dichotomous perception of right or wrong, drawing attention to individual lived experiences that provide a nuanced interpretation of the COVID-19 pandemic as an international health emergency that affected everyone. Although somewhat fuzzy and uneasy, the “middle ground” position indicates the possibility that a personal desire to develop one’s authorship can lead to a means of making sense of a global crisis.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topics in Marma (မာရမာ)</title>
<link href="https://hdl.handle.net/1721.1/157882" rel="alternate"/>
<author>
<name>Marma, Rani Ukhengching (ဦး ချမ်း စိန် မာရမာ)</name>
</author>
<id>https://hdl.handle.net/1721.1/157882</id>
<updated>2024-12-19T03:32:34Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Topics in Marma (မာရမာ)
Marma, Rani Ukhengching (ဦး ချမ်း စိန် မာရမာ)
Marma¹, an endangered indigenous language of Bangladesh, is spoken by approximately 200,000 Marma individuals residing in Bangladesh’s southern region called the Chittagong Hill Tracts (CHT). The Marma language is closely related to Rakhine and Burmese, and many lexical items are almost identical to those in Burmese and Rakhine, “although Marma exhibits a more conservative phonological profile than Burmese in the grammatical particles” (Keisuke, 2011). This study analyzed several morphemes and their roles in shaping Marma information structure (topic-focus articulation). Marma has “agglutinative morphology”, meaning words are formed by stringing together morphemes in specific sequences. We observed prefixation, suffixation, and infixation in Marma. We analyzed the multifunctionality of these selected morphemes [“က=ga/ka, ကိ ု =go/ko, စာ=cha, ရာ=ra, ယည်=yi”] within Marma discourse and explored their implications for a better understanding of information structure in the Marma language. At the end of this paper, through instrumental analysis, we proposed three tones in Marma (i. high and creaky, ii. low, and iii. falling).&#13;
 &#13;
Key words: Marma, indigenous language, information structure, topic and focus, morphology, and tone.&#13;
&#13;
¹“According to Bradley (1985:180), the Marma group would have first migrated from Arakan to&#13;
the Chittagong Hill Tracts by the early sixteenth century and then after the Burmese conquest in&#13;
1785. They live mainly in the Chittagong Hill Tracts where they form one of the main Indigenous&#13;
groups (Htin, 2015)”
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Not Function but Function Conquered: Against a Functionalist Theory of Directives</title>
<link href="https://hdl.handle.net/1721.1/157880" rel="alternate"/>
<author>
<name>Hill, John</name>
</author>
<id>https://hdl.handle.net/1721.1/157880</id>
<updated>2024-12-19T03:05:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Not Function but Function Conquered: Against a Functionalist Theory of Directives
Hill, John
Ordering, requesting, and inviting are examples of directive speech acts. Philosophers have offered different accounts of what it is to perform a directive, which they have developed using different theoretical resources. Attitudinal theories of speech acts try to explain what it is to perform a directive in terms of a speaker’s beliefs, desires, and intentions. Nonattitudinal theories of speech acts try to explain directives in terms of something else.&#13;
&#13;
This thesis is concerned with functionalism, a nonattitudinal theory of speech acts. According to functionalism, performing a directive is making an utterance with the etiological function of causing hearers to act in response to one’s utterance. I argue that functionalism is false. I develop counterexamples that show functionalism is too permissive about the kinds of causation suitable for generating directives. I argue further that the most plausible way to address these counterexamples is to become more attitudinal: rather than be permissive, functionalism should hold that directives and hearers’ responses to them are caused by specific internal processes.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Know Thy Cell-Free DNA: Early Detection of Microsatellite Instability Using Ultra-Low-Pass Cell-Free DNA Sequences</title>
<link href="https://hdl.handle.net/1721.1/157879" rel="alternate"/>
<author>
<name>Lu, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/157879</id>
<updated>2024-12-19T04:35:16Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Know Thy Cell-Free DNA: Early Detection of Microsatellite Instability Using Ultra-Low-Pass Cell-Free DNA Sequences
Lu, Nicole
Microsatellites are short segments of repeated DNA motifs (i.e., base pair patterns) that are widespread in our genomes. Microsatellites are inherently more mutable than other genomic locations, and since cancer cells undergo many more cell divisions, microsatellites are useful for distinguishing tumor DNA from normal (non-cancerous) DNA.&#13;
&#13;
Microsatellite instability (MSI) arises as a result of mismatch repair deficiency (MMRD), wherein a patient loses function of both copies of certain genes related to mismatch repair.&#13;
&#13;
Current MMRD diagnostics rely on deep sequencing of tumor tissue samples, which can be expensive and overly invasive to perform for early or routine screening. Less expensive sequencing methods such as ultra-low pass (ULP) sequencing exist, but thus far have not been utilized for detection of microsatellite instability. In this thesis, we focus on 0.1× ULP sequences, in which about 10% of the genomic locations have one read in expectation. Having so few reads makes it difficult to differentiate experimental noise from true mutations. Similarly, cell-free DNA (cfDNA) consists of DNA fragments from cells all over the body, which circulate in the blood. Collecting and sequencing cfDNA is much less invasive than collecting tissue samples, but presents another challenge in that the fraction of DNA fragments from any particular cell (or group of cells) is low. Thus, if cancerous cells exist within the body, their representation in a given cfDNA sample is likely low. Together, these challenges present an obvious trade-off between signal strength and cost/invasiveness for screening and detection of MSI.&#13;
&#13;
This thesis focuses on the implementation, validation, and additional research of a computational tool to detect microsatellite instability in ultra-low pass cell-free DNA samples.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustically Controlled Remotely Operated Undersea Vehicles: A Quantitative Analysis</title>
<link href="https://hdl.handle.net/1721.1/157876" rel="alternate"/>
<author>
<name>Stites, Corwin Wesley</name>
</author>
<id>https://hdl.handle.net/1721.1/157876</id>
<updated>2024-12-19T03:01:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Acoustically Controlled Remotely Operated Undersea Vehicles: A Quantitative Analysis
Stites, Corwin Wesley
This thesis topic stems from a U.S. Navy effort to alter an existing remotely operated vehicle (ROV) system. The existing vehicle, reliant on a tethered connection to an operator, requires adaptation into an untethered, acoustically controlled vehicle. This project provides a simulation-based tradespace exploration of the factors that limit untethered ROV performance. Factors which promote the use of an untethered system over a tethered system are also explored. A MATLAB simulation has been constructed to analyze a hypothetical ROV grid search mission across multiple parameters relating to the vehicle specifications, the mission layout, the acoustic communication system, and the operating environment. This simulation can then be used to generate a wide range of data regarding ROV performance by use of the Monte Carlo method. The performance metrics output by the simulation, along with an automated analytical tool created to process simulation data, provide quantitative insight into the viability of an ROV utilizing an acoustic communication system across a variety of scenarios.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revealing SEI Formation and Evolution at the Li Anode/Liquid Electrolyte Interface in Li-ion Batteries by in situ Fourier Transform Infrared Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/157871" rel="alternate"/>
<author>
<name>Wang, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/157871</id>
<updated>2024-12-19T04:14:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Revealing SEI Formation and Evolution at the Li Anode/Liquid Electrolyte Interface in Li-ion Batteries by in situ Fourier Transform Infrared Spectroscopy
Wang, Daniel
A novel in situ FTIR method is developed to probe the Li anode/liquid electrolyte interface. Three different conventional electrolyte systems were tested: 1.2 M LiPF₆ in EC, 1.0 M LiPF₆ in EMC, and LP57 (1.0 M LiPF₆ in EC:EMC (3/7 vol %)). Using the spectroelectrochemical cell, FTIR measurements for the first plating step and cycled cells (up to 50 cycles) were collected to look for new species formation. In the case of 1.2 M LiPF₆ in EC, LEMC formation was observed when the potential was brought below 1.50 VLi. LEMC growth accelerated when the potential was reduced below 0.0 VLi, upon contact with freshly plated Li metal. When 1.0 M LiPF₆ in EMC was used for the same study, either lithium methyl carbonate or lithium ethyl carbonate was formed. Upon switching to LP57, Li₂CO₃ became the dominant SEI component. When the three electrolytes were cycled in the spectroelectrochemical cell, the SEI peaks continued to grow for the first 10 cycles. After the first 10 cycles, LEMC and Li₂CO₃ growth plateaued, indicating SEI stabilization. On the other hand, LRC signal diminished, indicating an unstable SEI formed by EMC. Additionally, anion decomposition was observed to be more pronounced under high concentrations of EC. Since anion decomposition can be used as a proxy for LiF formation, high concentration electrolytes perform better possibly due to larger amounts of LiF formation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Machine Learning to Discover Biochemical Determinants of Physical Fitness</title>
<link href="https://hdl.handle.net/1721.1/157864" rel="alternate"/>
<author>
<name>Nawaz, Hesham</name>
</author>
<id>https://hdl.handle.net/1721.1/157864</id>
<updated>2024-12-19T03:34:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Causal Machine Learning to Discover Biochemical Determinants of Physical Fitness
Nawaz, Hesham
Identifying the key pathways relevant to cardiorespiratory fitness is of great importance for both predicting exercise responsiveness and potentially finding which interventions are likely to affect it. While contemporary deep learning models have demonstrated great success in pattern recognition and generation for various data modalities, their ability to decipher the causal mechanisms underlying these patterns is limited. This work proposes and evaluates a methodology using state-of-the-art causal discovery and causal inference methods to uncover the relationships between different proteins and their impact on changes in individuals’ maximal oxygen consumption (a proxy for physical fitness).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracing the Precursors and Amplifiers of Conflict in the Information Age: An NLP Inquiry of Tensions, Political Communication, and Misinformation</title>
<link href="https://hdl.handle.net/1721.1/157862" rel="alternate"/>
<author>
<name>Zimmer, Philipp</name>
</author>
<id>https://hdl.handle.net/1721.1/157862</id>
<updated>2024-12-19T03:24:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Tracing the Precursors and Amplifiers of Conflict in the Information Age: An NLP Inquiry of Tensions, Political Communication, and Misinformation
Zimmer, Philipp
Violent conflicts, in their varied and complex forms, have long been a subject of research and political discourse. Despite increased attention to the field, various nuances and dynamics are yet to be explored. This thesis seeks to study three aspects of the multifaceted nature of conflicts through the lens of natural language processing (NLP), thereby not only offering new insights but also advancing the field's methodological landscape.&#13;
&#13;
First, the study delves into the identification of causal predictors of conflicts. By showcasing the potential of a frame-semantic parser, I am able to quantify the precursors that contribute to conflict and examine the potential for enhancing prediction models with greater qualitative depth. This chapter utilizes a rich but under-examined data source, news articles, which can aid closing the data gap in conflict studies.&#13;
&#13;
In the second chapter, the communication strategies of political leaders during crises are scrutinized to understand the rationale behind their messaging and the impact thereof. I argue that the frequency and style of leaders' engagement with their citizens depend on the characteristics of the political system, and that this engagement matters for societal conceptions.&#13;
&#13;
The final chapter addresses the spread of misinformation, such as in times of crisis, investigating which themes are prone to the widespread propagation on social media and presenting a novel ensemble method for the detection of misleading and false content.&#13;
&#13;
By integrating computational techniques with political theory, this work contributes to a nuanced understanding of conflict dynamics and offers rich potential for anticipatory actions of policymakers.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coevolving Cybersecurity Adversaries for Industrial Control Systems in Failure-Prone Environments</title>
<link href="https://hdl.handle.net/1721.1/157861" rel="alternate"/>
<author>
<name>Wicks, Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/157861</id>
<updated>2024-12-19T04:14:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Coevolving Cybersecurity Adversaries for Industrial Control Systems in Failure-Prone Environments
Wicks, Kathryn
As industrial control systems become universally integrated with software and connected to the internet, they have become targets for cyberattacks and sabotage. Detecting cyberattacks on these networks is difficult because existing datasets on attacks are minimal and the bulk of intrusion detection systems are designed for enterprise environments rather than industrial environments. In industrial environments, mechanical failures, stress states, and electrical problems are expected, with repairs included in daily operations. In enterprise environments, such failures are rarer and, as a result, higher-impact. We investigate the extent to which this mismatch in the impact of physical stressors and failures degrades the ability of traditional intrusion detection algorithms to perform in the industrial environment. In the sub-area that this thesis focuses on, power microgrids, such disturbances can come in the form of line-line faults, line-ground faults, lack of generation capacity to meet demand, and unintentional islanding, among many others. Microgrids must be resilient to these events, and this thesis investigates to what extent they currently are and whether they can be improved. Specifically, this thesis asks: do traditional IDSs cause false alarms when placed in a failure-prone environment? How do these intrusion detectors perform overall? Can they be improved with additional training? And finally, can intrusion detection systems be tricked by attacks which appear to be "benign" failure modes? This thesis answers these questions by comparing the performance of different anomaly detection methods on cyberattack datasets with varying levels of stressor complexity and severity, and finds that stress on an industrial system can degrade anomaly-based intrusion detector performance. Expanding on this idea, an attacker is then trained to adversarially mask a dataset, and a detector is co-evolved alongside it to detect the attacks. 
Finally, the coevolution is brought into the hardware-in-the-loop simulation environment, where attackers and defenders act in real time to change the state of a realistic microgrid simulation. From these experiments, it is found that attackers can leverage grid disturbances to hide their actions, and that accurate real-time simulations are highly useful for identifying vulnerabilities in a cyber-physical system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Existence and Analysis of a Rotating Stall Inception Continuum&#13;
&amp; Development of Concept Questions in Fluid Dynamics</title>
<link href="https://hdl.handle.net/1721.1/157834" rel="alternate"/>
<author>
<name>Cherry, Maranda F.</name>
</author>
<id>https://hdl.handle.net/1721.1/157834</id>
<updated>2024-12-12T03:16:06Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Existence and Analysis of a Rotating Stall Inception Continuum&#13;
&amp; Development of Concept Questions in Fluid Dynamics
Cherry, Maranda F.
This thesis presents two projects: an analysis of rotating stall inception for axial compressors in turbomachinery, and a description of the creation of Concept Questions for a text on internal flows. The first part of this thesis identifies flow behavior that defines two routes to rotating stall, known as modal and spike type rotating stall inception. It continues previous studies by MIT and the University of Cambridge surrounding unification of these two stall types under a dynamical system framework. Calculations were carried out for an isolated rotor, with a high hub-to-tip radius ratio, using TBLOCK, a Reynolds-averaged Navier-Stokes solver. The results show (i) the dependence of stall inception on the compressor axisymmetric pressure rise characteristic and the characterization of modal and spike stall inception as two paths, located at the ends of a continuum of possible paths to stall; (ii) the effect of blade passage accelerations and asymmetry in the onset process; and (iii) the divergence of stall inception from two-dimensionality as a function of the slope of the total-to-static compressor pressure rise characteristic. The calculations show that compressor pressure rise characteristic slopes, dψ/dϕ, less than 0.3 have a stall cell growth rate, σ, that agrees with two-dimensional theory. The divergence of stall inception from two-dimensionality is suggested as a distinguishing feature of spike type stall inception compared to modal type stall inception. The second part of this thesis encompasses the creation, editing, and compilation of Concept Questions for seven book chapters in a new text that describes the use of Concept Questions in teaching (and learning) fluid mechanics. The composition and qualities of a good concept question are defined, and the process of generating and editing questions for the intended audience is discussed.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tools for Mapping the Links Between Stimuli, Affective States, and Behavior through Whole-Brain Imaging in Zebrafish Larvae</title>
<link href="https://hdl.handle.net/1721.1/157832" rel="alternate"/>
<author>
<name>Zhang, Caroline Lige</name>
</author>
<id>https://hdl.handle.net/1721.1/157832</id>
<updated>2024-12-12T03:52:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tools for Mapping the Links Between Stimuli, Affective States, and Behavior through Whole-Brain Imaging in Zebrafish Larvae
Zhang, Caroline Lige
Affective states, often referred to as emotional states, exert substantial influence on behavior and decision-making processes. Traditionally, researchers have turned to functional imaging to delve into the neural mechanisms that drive both behavior and decision making. However, functional imaging of behaving animals often focuses on a singular brain region. Whole-brain imaging, on the other hand, has the capacity to significantly advance our understanding of the brain's functional architecture. In this pursuit, zebrafish larvae emerge as an ideal model for whole-brain imaging due to their transparency, small size, genetic manipulability, rapid development, and high reproducibility. Recent advances in protein engineering and fluorescence microscopy have empowered researchers to observe neural activity across extensive neuronal populations. Genetically Encoded Calcium Indicators (GECIs) and Genetically Encoded Voltage Indicators (GEVIs) provide the means to probe brain dynamics with single-cell precision. The advent of light-sheet microscopy technologies has further enriched our capabilities, enabling the recording of brain activity at remarkable frame rates, ranging from several hundred to several thousand frames per second, all while the animal is exposed to precise visual, auditory, and/or olfactory stimulation. Leveraging these experimental advancements in conjunction with machine learning and computer vision techniques, our study aims to forge connections between stimulation, neural activity, and behavior through a larval zebrafish model.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Tip Clearance and Surface Roughness on Small-Scale Turbopump Impeller Performance</title>
<link href="https://hdl.handle.net/1721.1/157828" rel="alternate"/>
<author>
<name>Ruecker, Kinjal A. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/157828</id>
<updated>2024-12-12T03:43:09Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Effects of Tip Clearance and Surface Roughness on Small-Scale Turbopump Impeller Performance
Ruecker, Kinjal A. L.
Centimeter-scale turbopump impellers typically used in liquid rocket engines of small launch vehicles suffer from reduced performance due to manufacturing challenges and nonuniform geometric scaling. This thesis aims to characterize the impact of impeller blade tip clearance and surface roughness on the performance of small-scale turbopump impellers by assessing the dominant flow features, quantifying the underlying loss mechanisms, and determining the sensitivity of performance losses to changes in tip clearance and surface roughness. The study identifies the primary flow features governing impeller performance to be blade tip leakage flow and secondary flow. The analysis identified two distinct flow regimes based on tip clearance: above 5% of tip clearance, the losses are predominantly due to blade tip leakage flow, whereas below this threshold, losses are governed by both secondary flow and blade tip leakage flow. For tip clearances above 5% of the blade span, blade tip leakage flow is estimated to contribute more than 80% of total impeller loss. A 1% change in tip clearance is estimated to result in a 0.8% loss in efficiency. The calculations suggest increasing surface roughness reduces the effective tip clearance due to increased viscous effects in the tip gap, but strengthens the secondary flow. This lowers the effective tip clearance that separates the flow regimes. The contribution of blade tip leakage loss to total impeller loss decreases by up to 22% for surface roughness increased from an Rₐ value of 1 µm to 10 µm. The strengthened secondary flow at higher surface roughness increases mixing of the blade tip leakage flow with the blade passage flow, leading to larger regions of blockage. Increasing the surface roughness from an Rₐ value of 1 µm to 10 µm results in a 4% loss in impeller efficiency. 
This study demonstrates that surface roughness is more impactful on small-scale impeller performance than blade tip clearance, and so manufacturing for smooth surfaces should be prioritized over reducing the blade tip clearance gap.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Authenticity in the Workplace:  What does it really mean?</title>
<link href="https://hdl.handle.net/1721.1/157826" rel="alternate"/>
<author>
<name>Pervaaz, Viquar A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157826</id>
<updated>2024-12-12T03:09:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Authenticity in the Workplace:  What does it really mean?
Pervaaz, Viquar A.
Recently, the word authenticity has been used quite prevalently in organizations, specifically as an attribute needed in leaders. During the pandemic, the use of the word became even more prominent and organizationally universal. While the term is appealing in concept, the meaning of the word “authenticity” remains nebulous. This poses a potential problem for organizations and teams, as it presents the risk of not delivering on this commitment if the elements of authenticity are not defined and understood. Making a promise of authenticity without delivering on it may have a negative impact on individual and organizational morale and culture, and a longer-ranging impact on employee engagement and retention. Using the lens of cognitive dissonance theory as a construct to view authenticity as a “product” from a marketing perspective, one has a framework to postulate that if expectations are not clear and the perceived performance (delivery on the promise of specific elements of authenticity) is not optimal, there will be ramifications in terms of satisfaction (e.g., employee engagement). This paper explores why defining this word in an organizational context is important, what the macro dimensions of authenticity are that help frame and define it, and what variables contribute to bringing authenticity to life.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limits to extreme event forecasting in chaotic systems</title>
<link href="https://hdl.handle.net/1721.1/157825" rel="alternate"/>
<author>
<name>Yuan, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/157825</id>
<updated>2024-12-12T03:24:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Limits to extreme event forecasting in chaotic systems
Yuan, Yuan
Predicting extreme events in chaotic systems, characterized by rare but intensely fluctuating properties, is of great importance due to their impact on the performance and reliability of a wide range of systems. Examples include weather forecasting, traffic management, power grid operations, and financial market analysis. Methods of increasing sophistication have been developed to forecast events in these systems. However, the boundaries that define the maximum accuracy of forecasting tools are still largely unexplored from a theoretical standpoint. Here, we address the question: What is the minimum possible error in the prediction of extreme events in complex, chaotic systems? We derive the minimum probability of error in extreme event forecasting along with its information-theoretic lower and upper bounds. These bounds are universal for a given problem, in that they hold regardless of the modeling approach for extreme event prediction: from traditional linear regressions to sophisticated neural network models. The limits in predictability are obtained from the cost-sensitive Fano’s and Hellman’s inequalities using the Rényi entropy. The results are also connected to Takens’ embedding theorem using the “information can’t hurt” inequality. Finally, the probability of error for a forecasting model is decomposed into three sources: uncertainty in the initial conditions, hidden variables, and suboptimal modeling assumptions. The latter allows us to assess whether prediction models are operating near their maximum theoretical performance or if further improvements are possible. The bounds are applied to the prediction of extreme events in the Rössler system and the Kolmogorov flow.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Urban Building Energy Modeling</title>
<link href="https://hdl.handle.net/1721.1/157824" rel="alternate"/>
<author>
<name>Le Hong, Zoe</name>
</author>
<author>
<name>Wolk, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/157824</id>
<updated>2024-12-12T03:55:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accelerating Urban Building Energy Modeling
Le Hong, Zoe; Wolk, Samuel
Enabling data-driven decision-making in the built environment is critical to achieving ambitious and urgent decarbonization goals. In the building sector, urban building energy models (UBEMs) have become a valuable tool for jurisdictions to develop evidence-based retrofitting policies, but dynamically exploring solutions is hampered by the computational expense and organizational overhead of physics-based building energy models. To address these challenges, we present a fast, flexible, and comprehensive UBEM methodology that can be used to reduce identified barriers to time-sensitive decision-making in building stock decarbonization spheres. The methodology combines the speed of current data-driven approaches with the flexibility of computationally intensive, but accurate, engineering models. Identifying machine learning methods as a viable approach, we implement convolutional neural networks (CNNs) which embed timeseries from hourly weather data and building schedules; the embeddings are then combined with static building characteristics and projected to monthly heating and cooling loads. The proposed approach allows for programmatic flexibility and robustness to unique hourly weather conditions globally, while contextual abstraction enables geometric independence. A dataset of over 1 million detailed thermodynamics-based simulations was constructed to train and validate the surrogate model. Model results at the individual shoebox, building, and urban scales compare favorably to traditional numerical methods and meet accepted error bounds under national energy simulation standards. Additional validation at the urban and national scales is performed using public building simulation datasets. We then demonstrate expanded applications, which leverage the reduced computational cost of the framework to make traditionally infeasible analysis modes tractable and deployable. 
The methodology presented is intended to be utilized for both very-large-scale systematic analysis and near-real-time interactive explorations. In developing this framework, we aim to provide new mechanisms for key stakeholders in the decarbonization effort to quickly generate actionable insights and engage in iterative discussions to develop evidence-based policy across global building stocks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relative Robot Localization and Frame Alignment for Multi-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/157823" rel="alternate"/>
<author>
<name>Peterson, Mason B.</name>
</author>
<id>https://hdl.handle.net/1721.1/157823</id>
<updated>2024-12-12T04:11:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Relative Robot Localization and Frame Alignment for Multi-Robot Collaboration
Peterson, Mason B.
The growing field of collaborative robotics has the potential to enable and improve the execution of many challenging robot applications. For instance, with teamwork between multiple agents, dynamic object tracking can more completely cover an environment and trajectory planning becomes safer. However, for robots to share the quickly changing spatial information involved in these tasks, they must be able to transform information originally sensed or planned in their own frame into the frame of neighboring agents. This can be challenging in cases where robots have no global pose information, resulting in a steady accumulation of error, or drift, in their local pose estimates. To mitigate the effects of drift, neighboring agents must maintain up-to-date estimates of the alignment between their frames, which can be difficult due to ambiguous alignments and the presence of outlier measurements. To address these issues, the first contribution of this thesis is a method for performing fast incremental frame alignment between pairs of robots, enabling collaborative multiple object tracking (MOT), the task of monitoring the locations of dynamic objects in an environment. To perform frame alignment, robots build up maps of recently seen static objects and use these maps, along with detections of tracked dynamic objects, to correct for frame drift. Using frame alignment estimates, agents share object detection information and account for the additional uncertainty associated with the alignment estimate. The second contribution of this thesis is a method to perform frame alignment with no initial guess. Many potential frame alignments are computed, and we develop a filter that uses temporal consistency to reject outlier alignments and accept only a series of alignments that are consistent over time. We demonstrate in hardware experiments our ability to perform frame alignment in difficult scenarios and improve the quality of collaborative object tracking onboard real robots.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Database and Application Programming Interface Development for Rotational Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/157820" rel="alternate"/>
<author>
<name>Cheung, Jasmine So Yee</name>
</author>
<id>https://hdl.handle.net/1721.1/157820</id>
<updated>2024-12-12T03:38:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Database and Application Programming Interface Development for Rotational Spectroscopy
Cheung, Jasmine So Yee
The Species-agnostic Automated Gas Analyzer (SAAGA) project aims to automate the detection and characterization of chemical compounds in a complex chemical mixture in the gas phase through experimental rotational spectroscopy and computational tools. A database of spectroscopic data serves as the foundation of the automation pipeline for assigning spectral lines to species. While there are existing databases available for use, we developed our custom database, named SAAGAdb, and an application programming interface (API) to access the database to fulfill the needs of SAAGA. SAAGAdb is designed to store structured, high-quality spectroscopic data for all species, not limited to astrochemically relevant ones, enabling convenient data manipulation, integration into future automation pipelines, deployment, and maintenance. We implemented software development best practices, including a software development life cycle, continuous integration/continuous delivery, and version control, to develop a PostgreSQL database with a Python API built on Django with RDKit integration. The product passed all unit tests and was successfully seeded with data. With the flexibility provided by the Django framework as well as detailed documentation of the software, SAAGAdb and its API can be easily improved and expanded in the future to suit the needs of the SAAGA project.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A GPU-Enabled Building Block Flow Model for Computational Fluid Dynamics</title>
<link href="https://hdl.handle.net/1721.1/157810" rel="alternate"/>
<author>
<name>Costa, Samuel Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/157810</id>
<updated>2024-12-12T03:04:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A GPU-Enabled Building Block Flow Model for Computational Fluid Dynamics
Costa, Samuel Thomas
Computational Fluid Dynamics (CFD) is a key tool in the design of aircraft, allowing engineers to predict the performance of a configuration without having to conduct expensive physical tests. However, in order to move to a greater reliance on CFD, the industry requires a high level of accuracy and fast turnaround time, which current methods cannot deliver. In recent years, the rapid development of the GPU industry has led to an explosion of computational power on the GPU architecture. This has allowed wall-modeled large eddy simulation (WMLES), a higher-fidelity simulation technique, to become practical for industry use. WMLES requires both a sub-grid scale (SGS) model and a wall model in order to close the system of equations for integration. Although WMLES delivers an improvement over previous methods, classical SGS and wall models do not deliver the accuracy required by the aviation industry. To help close this gap, we introduce a GPU-compatible version of the Building-Block Flow Model (BFM), a machine learning-based unified sub-grid scale and wall model for LES introduced in [1]. In this thesis, we discuss the implementation of the BFM for GPU, timing of the BFM versus other closure models for WMLES, a variety of tests designed to evaluate the BFM's performance, and possible avenues of improvement.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relationship between synoptic scale meteorology, aircraft parameters, and observable contrails</title>
<link href="https://hdl.handle.net/1721.1/157807" rel="alternate"/>
<author>
<name>Barbosa, Maria Paula</name>
</author>
<id>https://hdl.handle.net/1721.1/157807</id>
<updated>2024-12-12T03:44:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Relationship between synoptic scale meteorology, aircraft parameters, and observable contrails
Barbosa, Maria Paula
Long-lasting or "persistent" contrails are line-shaped clouds that form when airplanes fly through cold and humid parts of the atmosphere that are ice-supersaturated. Various studies have shown that persistent contrails may be responsible for more than half of aviation’s radiative forcing [1]. Efforts to mitigate persistent contrail formation include operational contrail avoidance. Current research suggests that minor (∼ 2000 ft) deviations in altitude of flights during cruise, in conjunction with advancing engine technologies, have the potential to reduce contrail climate forcing by approximately 90% [2]. Identifying and attributing observed contrails to specific individual flights is necessary to demonstrate the success of flight deviations. Reliable flight attribution, therefore, is critical in verifying large-scale implementation of contrail avoidance strategies. Flight attribution leverages both Earth-observation methods, such as satellite images and weather data, and flight data. However, temporal and spatial "blindspots" in satellite instruments, coupled with uncertainties in wind fields, have hindered reliable flight attribution. In this work, we consider eight different probabilistic flight attribution algorithms. All algorithms rely on the use of "similarity measures" which we define as the differences in distance, heading, and altitude between a contrail and flight line segment candidates. We define two-dimensional (2D) algorithms as those that use only distance and heading difference measures and the ones that additionally include altitude as three-dimensional (3D) algorithms. The probabilistic aspect of all eight algorithms is intended to account for errors in wind data and relies on the calculation of a Gaussian probability density function for each similarity measure. 
In an attempt to mitigate wind and positional errors that compound over time, four of the algorithms feature the inclusion of contrails from previous timestamps as potential match candidates. To account for the changes in flight path due to temporal factors, four of the algorithms include the use of time-dependent Gaussian parameters. The inputs to all algorithms include contrail detections, weather data, and flight data. To perform this analysis, a dataset of 180 manually-attributed, unique contrails was created that captures regional (across the continental United States) and diurnal variation. Each contrail was tracked for part of its lifetime, which results in the generation of 1980 total attributions. These attributions were created by seven labelers, with some overlapping scenes. A parameter sweep was performed on the four 2D algorithms to determine locally optimal Gaussian parameters. This sweep was performed on a reduced dataset that consists of 32 unique contrails and 218 total labels. The results of this sweep show that the performance of the algorithms, when using optimal Gaussian parameters, range from 79.7% to 83.6% accuracy. Accuracy is defined as the percentage of contrails that were attributed to the correct flights. These results are solely for the 2D algorithms that were analyzed on the reduced dataset. We then applied the "locally" optimal Gaussian parameters from the four 2D algorithms to the respective 3D algorithms and ran all eight algorithms on the remaining 148 contrails (1762 labels). We find that the optimal performance for all eight algorithms ranges from 68.2% to 76.2%. A deeper analysis is also conducted to evaluate the scene conditions that affect algorithm performance.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Site-Selective Anion Exchange in a Palladophosphorane</title>
<link href="https://hdl.handle.net/1721.1/157805" rel="alternate"/>
<author>
<name>Khuichad, Nichakan</name>
</author>
<id>https://hdl.handle.net/1721.1/157805</id>
<updated>2024-12-12T03:30:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Site-Selective Anion Exchange in a Palladophosphorane
Khuichad, Nichakan
Reported here are studies on chemoselective ligand substitution at a palladophosphorane possessing two potential sites of chloride substitution. Ligation of palladium(II) chloride with a tridentate chelating ligand (L, P(N(o-N(2-pyridyl)C₆H₄)₂) results in formation of a complex comprising a d⁸ square planar palladium center supported by a geometrically constrained chlorophosphorane (PdClL^Cl). The complex thus formed was studied for ligand substitution reactions of the chloro ligand at Pd and P, respectively. Treatment with phenol resulted in substitution of the chloride at the P center while the chloride at Pd stayed intact, giving complex PdClL^OPh. Relatedly, treatment with AgF provided a compound whose NMR spectra are consistent with formation of a P–F-containing palladophosphorane PdClL^F. However, attempts to recrystallize the fluoride complex instead resulted in the formation of a cationic, fluoride-bridged species, although the fluoride still resided between the two phosphorus centers. Overall, substitution experiments on this palladophosphorane indicated a preference for P–Cl substitution over Pd–Cl. The driving force for this preference for exchange at phosphorus has not been extensively explored, but hypotheses have been made which may entail the concept of hard-soft acid-base chemistry and the strength of the bonds involved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The development of the side-rod locomotive</title>
<link href="https://hdl.handle.net/1721.1/157771" rel="alternate"/>
<author>
<name>Voelcker, J. Westgarth.</name>
</author>
<id>https://hdl.handle.net/1721.1/157771</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1923-01-01T00:00:00Z</published>
<summary type="text">The development of the side-rod locomotive
Voelcker, J. Westgarth.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1923; Includes bibliographical references (leaf [86]).
</summary>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical evaluation and correlation of tool-life data</title>
<link href="https://hdl.handle.net/1721.1/157770" rel="alternate"/>
<author>
<name>Colding, Bertil N.</name>
</author>
<id>https://hdl.handle.net/1721.1/157770</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">Critical evaluation and correlation of tool-life data
Colding, Bertil N.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Bibliography: leaves 46-47.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A case study of two autopilot design methodologies : linear quadratic and H-infinity for a tail controlled missile</title>
<link href="https://hdl.handle.net/1721.1/157769" rel="alternate"/>
<author>
<name>Edeburn, Mark Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/157769</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">A case study of two autopilot design methodologies : linear quadratic and H-infinity for a tail controlled missile
Edeburn, Mark Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1993; Includes bibliographical references.
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal pricing for peak loads and joint production : theory and applications to diverse conditions.</title>
<link href="https://hdl.handle.net/1721.1/157768" rel="alternate"/>
<author>
<name>Chernick, Paul Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/157768</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Optimal pricing for peak loads and joint production : theory and applications to diverse conditions.
Chernick, Paul Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 222-234.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toeplitz operators</title>
<link href="https://hdl.handle.net/1721.1/157766" rel="alternate"/>
<author>
<name>Gencarelli, Frank Thomas.</name>
</author>
<id>https://hdl.handle.net/1721.1/157766</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Toeplitz operators
Gencarelli, Frank Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1977; Bibliography: leaf 45.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tomorrow's Typography</title>
<link href="https://hdl.handle.net/1721.1/157735" rel="alternate"/>
<author>
<name>van de Seyp, Vera</name>
</author>
<id>https://hdl.handle.net/1721.1/157735</id>
<updated>2024-12-03T03:50:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Tomorrow's Typography
van de Seyp, Vera
This thesis is an exploration of new tools for typography, investigating how emerging (AI) technologies can contribute to the type design practice in a meaningful way. I created computational design experiments focusing on three areas: (A) design automation, (B) interfacing, and (C) creative exploration. A lot of care has been put into understanding the current scene through expert interviews, workshops, talks, and surveys. With pose estimation, generative visual AI, and large language models that operate on text, I explore whether typographic shapes can be created and manipulated with different modes of expression, in a playful, intuitive, and collaborative way.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Single Bio-molecule Detector Based on CMOS Nanofluidic Platform</title>
<link href="https://hdl.handle.net/1721.1/157733" rel="alternate"/>
<author>
<name>Zikrallah, Ahmed S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157733</id>
<updated>2024-12-03T03:46:49Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Towards a Single Bio-molecule Detector Based on CMOS Nanofluidic Platform
Zikrallah, Ahmed S.
Cytokine secretion is a core component of the function of many cell therapy products: it affects the tissue repair capacity of induced Pluripotent Stem Cells (iPSCs) and Mesenchymal Stem Cells (MSCs) and the tumorigenicity of Chimeric Antigen Receptor (CAR) T-cell therapies. Ideally, we would be able to continuously monitor the secretome of these cell therapies as they are transformed and expanded in manufacturing. However, state-of-the-art techniques for monitoring the typically low concentrations of cytokines require either Mass Spectroscopy (MS) or immunoassays like the Enzyme-Linked Immunosorbent Assay (ELISA). We propose the use of CMOS technology to build a proteomic platform with single-biomolecule resolution. A prototype chip has been designed and fabricated using a standard foundry process, incorporating a new implementation of a Solid State Nanopore (SSN) of size 55 nm × 162 nm × 100 nm (w × l × h) with nanofluidic access channels that bridge the buffer solution between the assay space in the packaging structure (a polycarbonate/polydimethylsiloxane (PDMS) package) and the nanopore on the chip. A silicon Single Photon Avalanche Detector (SPAD) was also implemented and placed near the nanochannels to enable fluorescence-labeling imaging techniques. In addition, a read-out amplifier that achieves a midband gain of 36.2 dB over a 3 dB bandwidth of 0.1–3.6 MHz is implemented on the same silicon die, paving the way to superior performance compared to the ionic current read-out systems used earlier for electrical biomolecule detection, thanks to the low parasitics that result from integration. The aforementioned modalities, integrated on a single chip, open the space for the use of CMOS platforms in the electrical and optical interrogation of biomolecules, opening a new horizon for near real-time biomarker assays. 
The following thesis builds on earlier work that was performed in [1][2] with the objective of expanding on different techniques to interface and characterize the performance of these modalities, especially after post-processing the chips with the aid of tools at MIT.nano. The thesis explores the further deployment of integrated SPAD in a Fluorescence Lifetime Imaging (FLIM) system to image fluorescence-labeled molecules, showcasing the capabilities of the CMOS nanofluidic platform to detect biomarkers such as cytokines.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating cofactor transfer for a B₁₂-dependent enzyme</title>
<link href="https://hdl.handle.net/1721.1/157731" rel="alternate"/>
<author>
<name>Duong, Alexander T.</name>
</author>
<id>https://hdl.handle.net/1721.1/157731</id>
<updated>2024-12-03T03:07:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating cofactor transfer for a B₁₂-dependent enzyme
Duong, Alexander T.
The metallocofactors utilized by enzymes range in complexity from single metal ions to organometallic cofactors well over 1000 Da. These cofactors enable metalloenzymes to accomplish a diverse set of unique and challenging chemical transformations that are critical to core life functions. One of these metallocofactors, adenosylcobalamin (AdoCbl), has only one cognate enzyme in humans: methylmalonyl-CoA mutase (MCM), which is involved in the catabolism of several amino acids, cholesterol, and odd-chain fatty acids. MCM relies on two other proteins, a G-protein metallochaperone called methylmalonic aciduria type A protein (MMAA) and a protein called adenosyltransferase (ATR), to load and off-load cofactor. Mutations or deletions of the gene for MCM, or in any of the genes corresponding to accessory proteins which interfere with cofactor delivery and removal, can lead to a potentially lethal inborn error in metabolism. If the cofactor becomes damaged in the active site of MCM, ATR unloads the cofactor, repairs it, and reloads the regenerated AdoCbl onto the mutase. A molecular understanding of this process has been challenging to obtain due to the difficulty of structurally characterizing a three-protein MCM-MMAA-ATR complex that is transient in nature. An orthologous protein from C. metallidurans, in which the G-protein metallochaperone is naturally fused to its target mutase isobutyryl-CoA mutase (IcmF), provides an alternative two-protein IcmF-ATR system for structural and biochemical characterization. Recent work has shown that the IcmF system utilizes a mechanism of active site opening similar to non-fused systems like that of humans. However, the mechanisms by which ATR recognizes the presence of damaged cofactor and then removes it remain unclear. In this thesis, we discuss the development of an assay based on UV-Vis spectroscopy to monitor cofactor transfer between IcmF and ATR. 
We also discuss efforts to substitute histidine residues in IcmF suspected of serving as intermediate binding sites during cofactor transfer, with the goal of using the developed assay as a means of observing potential changes in transfer efficiency by perturbing these histidine residues. This work seeks to improve our understanding of AdoCbl-dependent enzyme maturation, and inform our ability to harness their unique reactivity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Listening by Synthesizing</title>
<link href="https://hdl.handle.net/1721.1/157728" rel="alternate"/>
<author>
<name>Cherep, Manuel</name>
</author>
<id>https://hdl.handle.net/1721.1/157728</id>
<updated>2024-12-03T03:31:08Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Listening by Synthesizing
Cherep, Manuel
Generative audio models offer a scalable solution for producing a rich variety of sounds. This can be useful for practical tasks, like sound design in music, film, and other media. However, these models overwhelmingly rely on deep neural networks, and their massive complexity hinders our ability to fully leverage them in many scenarios, as they are not easily controllable or interpretable. In this thesis, I propose an alternate approach that relies on a virtual modular synthesizer: a computational model with modules for controlling, generating, and processing sound that connect together to produce diverse sounds. This approach has the advantage of using only a small number of physically motivated parameters, each of which is intuitively controllable and causally interpretable in terms of its influence on the output sound. This design takes inspiration from devices long used in sound design and combines it with state-of-the-art machine learning techniques. In this thesis, I present three projects that use this formulation. The first is SynthAX, an accelerated virtual modular synthesizer that implements the core computational elements in an accelerated framework. The second, CTAG, combines the synthesizer with an audio-language model into a novel method for text-to-audio synthesis via parameter inference. This method produces more abstract, sketch-like sounds that are distinctive, perceived as artistic, and yet similarly identifiable to recent neural audio synthesis models. The third is audio doppelgängers: sounds generated by randomly perturbing the parameters of the synthesizer to create positive pairs for contrastive learning, encompassing more of the variety found in real-world recordings, with controlled variations in timbre, pitch, and temporal envelopes. This method offers an efficient alternative to collecting real-world data, producing robust audio representations that compete with real data on established audio classification benchmarks. 
This thesis contributes tools for understandably generating rich and diverse sounds, using them and their parameters for sound design and understanding at scale.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Piezoelectric single crystal based one-dimensional phased array for breast tissue imaging</title>
<link href="https://hdl.handle.net/1721.1/157727" rel="alternate"/>
<author>
<name>Du, Wenya</name>
</author>
<id>https://hdl.handle.net/1721.1/157727</id>
<updated>2024-12-03T03:14:20Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Piezoelectric single crystal based one-dimensional phased array for breast tissue imaging
Du, Wenya
Ultrasound is widely used in clinical practice because it is safe, non-invasive, non-ionizing, low-cost, and provides real-time imaging, monitoring, and therapy. However, conventional ultrasound probes are rigid, require applied pressure, and are operator-dependent. Replacing rigid transducers with conformable ultrasound transducer arrays can allow image acquisition on curved body parts, improve image quality, and enable functions such as long-term monitoring. In this thesis, I propose a conformable ultrasound breast patch (cUSBr-Patch) consisting of a one-dimensional (1D) phased array and a nature-inspired patch design, which offers large-area, deep tissue scanning and multi-angle, repeatable breast imaging while avoiding the drawbacks of conventional ultrasound imaging technologies. I used a Yb/Bi-doped PIN-PMN-PT single crystal as the active element due to its superior piezoelectric properties (d33 = 2,800 pC/N, εr = 7,000, k33 = 0.93). I then fabricated a 1D phased array transducer consisting of 64 elements with an operational frequency of 7.0 MHz. The 1D array exhibits promising acoustic performance with i) a maximum imaging depth of 80 mm, ii) contrast sensitivity of 3 dB, iii) axial/lateral resolutions of 0.25/1.0 mm at 30 mm depth, and iv) a larger field of view than the commercial handheld linear probe at depths of approximately 30 mm or deeper, indicating a potentially reliable capability to detect early-stage breast tumors. Beyond this, comprehensive in vitro experimental studies establish that the cUSBr-Patch can provide accurate and reproducible imaging of different phantoms. The clinical trials reveal that the patch exhibits a sufficient contrast resolution (~3 dB) and axial/lateral resolutions of 0.25/1.0 mm at 30 mm depth, allowing the observation of small cysts (~0.3 cm) in the breast. 
This research develops a first-of-its-kind ultrasound technology for breast tissue scanning and imaging which offers a non-invasive method for tracking real-time dynamic changes of soft tissue.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Last-Meter Delivery: Solving the Unattended Delivery Challenge from Streets to Doorsteps</title>
<link href="https://hdl.handle.net/1721.1/157723" rel="alternate"/>
<author>
<name>Xiao, Wen-Xin</name>
</author>
<id>https://hdl.handle.net/1721.1/157723</id>
<updated>2024-12-03T03:04:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Last-Meter Delivery: Solving the Unattended Delivery Challenge from Streets to Doorsteps
Xiao, Wen-Xin
The rise of e-commerce has led to a surge in package deliveries, resulting in the proliferation of unattended delivery methods to address the "last-meter" problem – the challenge of delivering packages from the roadside or sidewalk to the customer's front door. This thesis proposes a methodology for applying Large Language Models (LLMs) and Vision Language Models (VLMs) to enable delivery robots to identify the final delivery target and navigate the complex terrain from the curb to the front door. The proposed solution aims to enhance the autonomy and safety of last-mile delivery systems, addressing the "last-meter" challenge and improving the customer experience.&#13;
&#13;
This thesis presents a comprehensive overview of the last-meter delivery concept, aiming to bridge the gap between the roadside/sidewalk and the customer's front door. It begins by introducing the significance of last-meter delivery in the growing e-commerce industry and the challenges posed by unattended deliveries. The thesis then reviews the existing literature on autonomous and unmanned delivery systems, multimodal delivery approaches, and the application of large language models and vision language models in robotics. This research identifies the advancements and gaps in the field that the proposed methodology aims to address.&#13;
&#13;
The thesis primarily focuses on leveraging Large Language Models, the Segment Anything Model, and the open-source Florence-2 vision foundation model to enable the transmission of customers' delivery instructions to the final delivery target in the context of last-meter delivery. It outlines the methodology for data preparation, object detection and labeling, as well as the integration of Large Language Models to handle customer instructions and identify delivery target coordinates. It also describes the experimental design and methodologies employed to validate the effectiveness of the proposed system. This includes the use of a last-meter dataset and the evaluation of last-meter scene and target coordinate identification.&#13;
&#13;
The thesis concludes by summarizing the key findings and contributions, discussing the broader implications of the proposed methodology, and suggesting directions for future work, such as enhancing system robustness and scalability.&#13;
&#13;
KEYWORDS: Last-Mile Delivery, Last-Meter Delivery, Large Language Models (LLM), Vision Language Models (VLM), Robotics, Segment Anything Model (SAM), Open-Vocabulary Object Detection (OVD).
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imagine Yourself: Explorations in Fostering Personal Expression with Generative AI</title>
<link href="https://hdl.handle.net/1721.1/157722" rel="alternate"/>
<author>
<name>Chadha, Karishma</name>
</author>
<id>https://hdl.handle.net/1721.1/157722</id>
<updated>2024-12-03T03:13:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Imagine Yourself: Explorations in Fostering Personal Expression with Generative AI
Chadha, Karishma
Generative Artificial Intelligence (AI) technology has been promoted with many exciting promises to enhance human creativity. However, it has also been shown to amplify human bias and perpetuate harmful stereotypes. In the new age being ushered in by this technology, this thesis explores how educators and designers can use this technology to support young people in exploring and expressing aspects of their unique identities. In particular, I use a design based research methodology to iteratively create Imagine Yourself, a new digital experience adapting off-the-shelf text-to-image generation technology to support young people creating personal representations and stories.&#13;
Imagine Yourself combines OpenAI’s Dall-E 3 image generation technology with Scratch, a rich environment for young people to imagine and create interactive multimedia stories, animations, and more. Guided by a core value of designing for belonging, this project explores how experiences with generative AI can be designed to foster young people’s creative process in creating personally meaningful stories reflecting their own unique identities, experiences, and cultures. I discuss the iterative design process of creating Imagine Yourself in tandem with creative workshops, aiming to support more diverse representation within the image generation output and invite a tinkerable and iterative process of creating. I discuss observations and feedback from creative workshops with young people and adults creating with Imagine Yourself. Finally, I conclude with reflections on the design process as well as a discussion of challenges, limitations, opportunities, and open questions for future work incorporating generative AI into young people’s creative learning experiences.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Finite Element Methods and Satellite InSAR for Monitoring Deformations of a Large Tailings Dam</title>
<link href="https://hdl.handle.net/1721.1/157720" rel="alternate"/>
<author>
<name>Fetell, Robert Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/157720</id>
<updated>2024-12-03T03:48:14Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Comparison of Finite Element Methods and Satellite InSAR for Monitoring Deformations of a Large Tailings Dam
Fetell, Robert Henry
Following the recent catastrophic failure of several mine tailings dams, there has been much interest in the use of numerical modeling and remote sensing for monitoring the safety and stability of these structures. This thesis presents a case study that investigates the accuracy of InSAR measurements and the predictive capabilities of finite element models using ground truth surface and sub-surface monitoring data applied to the Zelazny Most (SW Poland) copper tailings storage facility. This site has a well-documented history of lateral deformations in a critical section (XVIE) of the East dam that have been attributed to a deep-seated translation mechanism of shearing through the underlying Pliocene glacial clays. Since 2014, operators of the facility have constructed a series of stabilizing berms at this critical section. We investigated the accuracy of InSAR over this period, ending in 2019, by analyzing 186 ascending Sentinel-1 C-band images and 219 descending images using Persistent Scatterer Interferometry and SARProz™ software, comparing results with two surface geodetic benchmarks. Finite element analyses of the structure required a 2D model of section XVIE. We developed and integrated a stratigraphic model for the foundation soils, the complete construction history of the dam (since 1975), and selected input parameters for constitutive models to represent the soil behavior (foundation soils, tailings, dyke and berm materials) using Plaxis™ software. Our results show that InSAR achieves very consistent agreement with geodetic measurements for vertical (Up-Down) and lateral (E-W) surface deformations, over a time period where construction was limited to raising of the dyke near the crest of the dam and berm construction at the toe. The InSAR data are also insightful in showing relatively uniform lateral deformations occurring over the face of the dam, consistent with the interpreted translational failure mechanism. 
In contrast, it has proved much more challenging to predict subsurface deformations by FE analyses. The computed movements reflect accumulation of deformations over multiple stages of construction and involve shearing through the complex foundation stratigraphy.  We were able to achieve credible estimates of lateral deformations within the range of laboratory shear strength properties published in the literature and using the Hardening Soil (HS) model for non-linear shear stress-strain properties. However, the predictions of surface settlements and lateral deformation are much less reliable and depend on undocumented properties of the tailings, phreatic conditions in the tailings and details of the construction history.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for Connection with Inner Processes</title>
<link href="https://hdl.handle.net/1721.1/157719" rel="alternate"/>
<author>
<name>Mindel, Jessica Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/157719</id>
<updated>2024-12-03T03:44:33Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Designing for Connection with Inner Processes
Mindel, Jessica Rachel
At a time of division, it is more important than ever that we help each other feel truly understood. Today's online ecosystems offer us many new ways to communicate personal stories, often through fast-paced, reactive channels, but few if any technologies enable us to share what I posit to be a crucial component of how we implicitly understand each other: our inner processes, e.g., how we form our values and identities, navigate unspoken tensions in a community, or feel that something resonates with us.&#13;
&#13;
This thesis explores inner processes as a resource for the design of systems that support human connection, interpersonal understanding, and reflection. Through a series of design iterations, I weigh approaches to eliciting inner processes, choosing media to externally, evocatively represent them, and encouraging perspective-taking behavior by guiding users through each other's inner processes. I approach this topic through three streams of projects, grounded in literatures that outline guidelines for successful perspective-taking and the development of interpersonal closeness, and that assert the value of creative play in surfacing and communicating inner processes, supporting perspective-taking, making room for new social norms, and enabling reframing.&#13;
&#13;
First, I present our collaborative work on Closer Worlds, a two-player, AI-assisted game in which players generate a world they might both want to live in in order to scaffold an emotionally intimate conversation about their memories and shared values. Next, to better understand inner processes entangled with creative practice, I conduct interviews with creative practitioners about the relationships they build through their practice, and design and develop prototypes for implicitly retracing inferred versions of one's own or another person's creative process, capitalizing on room for interpretation. Prototypes include Sjuzet, a compass that anchors the latent space of a user's creative writing to a local map in order to prompt reflection as a user physically wanders through memories, and Pull It Together, a material speculation on textile swatches whose wear and tear modulates to correspond to invisible sociocultural tensions. Finally, I shift my focus to explicitly, informatively trading inner processes in my design of Metaswap, an asynchronous, written activity in which strangers compare annotations about inner processes that arise as they tell personal stories about an uncertainty they are working to resolve in their lives.&#13;
&#13;
Making inner processes explicit and prompting revisitation of them offered both benefits and drawbacks for connection and reflection, but revealed important questions. A mixed-methods analysis across this work presents tensions in the human and machine instinct to make inferences and assumptions about others, and offers opportunities for interpersonally insightful, vulnerable, and trusting conversation when computer-mediated communication and sense-making systems produce deep content rather than deep interactions. Through this work, I hope to lay the foundation for future research on technology's role in supporting interpersonal understanding at a time when so many subjectivities collide and are summarized at the speed of data.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Matters of Illuminance - Transforming Light into Material Artifacts</title>
<link href="https://hdl.handle.net/1721.1/157713" rel="alternate"/>
<author>
<name>Callender III, Dexter</name>
</author>
<id>https://hdl.handle.net/1721.1/157713</id>
<updated>2024-12-03T03:37:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Matters of Illuminance - Transforming Light into Material Artifacts
Callender III, Dexter
This research explores a process to transform light into physical artifacts. It develops a series of custom software systems to capture images of sunlight moving through a building and transform them into three-dimensional forms. It uses digital manufacturing methods to create the three-dimensional forms out of glass. The aim of this work is 1) to construct a methodology for recording light’s interaction with architecture as three-dimensional forms and 2) to produce glass sculptures that exist in a fine art setting and contribute to the lineage of 21st-century light artists. The academic contribution of this research builds upon the autographic design framework defined by Dietmar Offenhuber. Offenhuber describes the autographic design process as “the practice of shaping the conditions that allow traces to emerge and guiding their interpretation to demonstrate causality and evidence”. The technique I use to transform light into three-dimensional forms follows the four steps of the autographic design process. The goal of this technique is to provide a repeatable process and data format that captures information about light’s interaction with architecture at specific locations. The process produces three-dimensional forms, physical glass sculptures, and media that guide their interpretation, which can be interpreted to provide insight on the design and history of the building. The artistic contribution of this research produces glass sculptures that physicalize the shapes of light I observed and recorded at the location. The goal of these sculptures is to create meaningful physical artworks that reflect the nuanced shapes and subtle aesthetic qualities of natural light. Exhibiting the sculptures in spaces that are abundant with natural light creates new interactions between the glass and the light, offering unique visual experiences that change over time. I bolster these artworks with experiential accounts of my time spent in the building. 
The artwork I produced as part of this research was exhibited at the Wiesner Gallery at MIT and aims to exist in a fine arts setting, contributing to the lineage of Light &amp; Space artists such as Larry Bell and Robert Irwin.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond-the-Ice: Designing Games for Facilitating Deeper Conversations</title>
<link href="https://hdl.handle.net/1721.1/157712" rel="alternate"/>
<author>
<name>Lee, Cassandra</name>
</author>
<id>https://hdl.handle.net/1721.1/157712</id>
<updated>2024-12-03T04:01:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beyond-the-Ice: Designing Games for Facilitating Deeper Conversations
Lee, Cassandra
In this age of constant communication, we’ve never been more connected, yet all of our numerous, fast, and convenient connections lack the depth and intimacy we truly crave. The desire for more authentic social experiences necessitates vulnerability, honesty, and risk, but introducing such dynamics presents a great challenge in the context of the wider landscape of public discourse. Designers across disciplines have suggested using games to facilitate stronger social connection, since the structures within games can expose players to alternate social norms and encourage risk-taking. However, few have designed games that specifically foster more intimate forms of dialogue or offer scaffolding for players to see the act of sharing authentically and listening deeply as ways to play. In this thesis, I explore the novel intersection between play, intimate conversation, and technology by presenting a variety of prototypes and fully developed games that employ innovative mechanics designed to facilitate authenticity, vulnerability, complexity, and subjectivity. This work builds on formal knowledge from the social sciences, HCI, and game design, as well as informal knowledge from facilitation, gathering practices, party games, and Tarot, by presenting five distinct design principles aligned with theories grounded in past work: 1) Make emotional disclosure special; 2) Scaffold responsiveness; 3) Approach depth through fun; 4) Empower “the work” through constraints and permissions; 5) Center objects to feel with. Following a thorough Research through Design (RtD) method, I designed 15 unique prototypes and proofs-of-concept which explore various aspects of the five principles. 
Two of the games were designed, developed, playtested, and evaluated: Analogia, a card game that uses generative images to inspire emotion-rich conversations, and Crossroads, a digital game where players are guided to unlock a secret insight by co-creating generative images inspired by one another’s real experiences. This work contributes two well-tested games that evoke five compelling principles; a series of mechanics for stimulating dialogue (dual-stimulus, bridge-and-tunnel, image scrying, listener roles); and pilot data from playtests that demonstrate the ability and challenge of these mechanics to create conversational outcomes. Additionally, both spotlighted games creatively employ generative artificial intelligence (AI) to help mediate player interactions through image interpretation and co-creation. Although this is a thesis about conversation games, it critically engages with the current social zeitgeist, provides widely applicable insights, and presents nuanced ways to think about the future of social-technical systems that seek to encourage deeper, more authentic ways of connecting.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Perceptual Augmentation</title>
<link href="https://hdl.handle.net/1721.1/157710" rel="alternate"/>
<author>
<name>Chin, Sam</name>
</author>
<id>https://hdl.handle.net/1721.1/157710</id>
<updated>2024-12-03T03:01:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Towards Perceptual Augmentation
Chin, Sam
This thesis explores the concept of perceptual augmentation, focusing on expanding human sensory capabilities beyond their biological limitations. It challenges traditional approaches to sensory enhancement by emphasizing the importance of perception over mere sensory input. Drawing inspiration from the diverse sensory abilities found in nature, the research aims to develop methods for meaningful augmentation of human perception that can impact daily life. The study adopts an ecological approach to perceptual augmentation, grounded in Gibsonian ecological psychology. Key principles include providing correct mental models of augmentation devices, leveraging environmental training and natural tasks, emphasizing multisensory interfaces with sensorimotor feedback, and creating affordances that mimic the natural world. This approach seeks to facilitate perceptual learning through natural interaction with the environment, rather than relying on extensive explicit training.&#13;
The thesis presents early work in exploring and evaluating individual principles of this ecological framework for perceptual augmentation. While acknowledging the gap between the proposed theoretical approach and current research outcomes, the studies conducted focus on augmenting perception for specific tasks such as pitch interval perception, pilot situation awareness, and sleep staging.  The research does not yet demonstrate a generalized, "all-purpose" augmented sense, but lays groundwork for future investigations, including a proposed experiment to mitigate age-related hearing loss using the developed principles.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temporal Telepresence: Immersive Interfaces for TeleAbsence</title>
<link href="https://hdl.handle.net/1721.1/157709" rel="alternate"/>
<author>
<name>Pillis, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157709</id>
<updated>2024-12-03T03:30:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Temporal Telepresence: Immersive Interfaces for TeleAbsence
Pillis, D.
To store the past in a simulation may enable greater understanding of ourselves, our stories, and our histories. The urge to capture our past into networks of photographic, written, filmed, and object-based narratives has long been a means for individuals to identify change, growth, and gain perspective on themselves. Using a dataset of human narratives derived from records and ephemera, this thesis explores a novel approach to preserving and interacting with memories. We present an interactive system of objects and applications that supports intergenerational memory preservation by enabling individuals to actively explore the relationship between personal artifacts, photographs, the spaces of their past, and their memories. This system integrates personal digital twins, photogrammetry, Gaussian splatting, and tangible interfaces to create a new way of experiencing the past, based on interactivity with architectural artifacts and simulations from an individual’s life. Using an iterative participatory design process, we developed a set of multisensory interaction experiences that allow individuals to explore their relationship to autobiographical memory. The system dynamically links autobiographical memories with the environments where they took place, responding to text, photo, and object-based interactions. This experience invites individuals to modify their recollections by exploring how photo, video, and 3D space relate to the experience of revisiting narratives from the past. Applications of this system include assisting with dementia, aging, memory loss, and Alzheimer’s. Our initial studies were promising. When using the simulation system, individuals spent more time reminiscing, discussed more memories, and experienced greater presence in their recollections than without the interactive paradigm. The system also encouraged family members to reinforce their memories by actively re-encoding them through the simulation interfaces. 
Results demonstrated that presence in memories seemed more vivid, detailed, and spatially accurate than before the intervention. The result is a new memory-sharing experience that benefits individuals and families by allowing them to understand how their interactions with the past can be enriched through the integration of artifacts and simulations that impact the development of autobiographical memory.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk-Benefit Assessment of Pandemic Virus Identification</title>
<link href="https://hdl.handle.net/1721.1/157708" rel="alternate"/>
<author>
<name>Jeyapragasan, Geetha</name>
</author>
<id>https://hdl.handle.net/1721.1/157708</id>
<updated>2024-12-03T03:36:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Risk-Benefit Assessment of Pandemic Virus Identification
Jeyapragasan, Geetha
Pandemic Virus Identification (PVI) aims to assess unknown viruses for their pandemic potential in immunologically naive human populations. While proponents argue that PVI could facilitate targeted spillover prevention and accelerate medical countermeasure development, critics raise concerns about biosafety and biosecurity risks. This thesis presents a comprehensive mathematical framework to evaluate the benefits, biosafety risks, and biosecurity risks associated with PVI research.&#13;
&#13;
Using a combination of mathematical modeling and expert elicitation, we developed a structured approach to estimate the potential impacts of PVI. Our framework suggests that identifying a single pandemic-capable virus through PVI could potentially save lives by reducing natural pandemic risks. However, this benefit is substantially outweighed by the estimated anthropogenic risks from potential accidental pandemic events and deliberate misuse scenarios. The overall expected value of identifying a single pandemic-capable pathogen was estimated to be strongly negative. &#13;
&#13;
Significant uncertainty exists in many key parameters estimated through surveys, with wide confidence intervals reflecting the lack of consensus among experts. Expert opinions varied considerably on topics such as the likelihood of funding for medical countermeasures and the potential for deliberate misuse of pandemic agents. This modeling work primarily aims to provide exploratory estimates to guide future work. &#13;
&#13;
Our findings underscore the urgent need for improved governance of research involving potential pandemic pathogens. This study provides a quantitative basis for ongoing discussions about the balance between scientific advancement and public safety in high-risk areas of life sciences research.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-bounce Returns for Specular Surface Mapping from Consumer-grade Flash LiDAR</title>
<link href="https://hdl.handle.net/1721.1/157707" rel="alternate"/>
<author>
<name>Lin, Tsung-Han</name>
</author>
<id>https://hdl.handle.net/1721.1/157707</id>
<updated>2024-12-03T03:01:53Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Multi-bounce Returns for Specular Surface Mapping from Consumer-grade Flash LiDAR
Lin, Tsung-Han
This thesis proposes an approach to leverage multi-bounce returns of a flash LiDAR on portable smartphones for 3D specular surface reconstruction. This is an important research problem as most traditional LiDAR systems fail to detect specular surfaces. As mirrors and glass are everywhere, vision systems that fail to detect specular surfaces can be detrimental. Applications like mapping may become inaccurate, and more critically, robots could crash into undetected windows during navigation, leading to potentially fatal outcomes. We believe this work can meaningfully enhance the robustness of specular surface detection, with LiDAR complementing any kind of vision system, particularly image-based ones.&#13;
&#13;
Traditional LiDAR systems typically assume that all returns are single-bounce, which can lead to inaccurate representations of specular surfaces like mirrors or glass, often causing them to appear as though there is a hole. In contrast, this approach models the multi-bounce paths, providing a more accurate reconstruction of these specular surfaces.&#13;
&#13;
We operate with a consumer-grade LiDAR that does not require manual calibration and can be operated in real-time on an affordable and portable smartphone. Consumer-grade multi-beam flash LiDAR is challenging due to its coarse resolution, co-located sensors, and multiplexing setup. In the face of these challenges, we propose to solve the association problem with the ‘reciprocal pair’ algorithm, which can discern different types of bounces from the multi-bounce returns.&#13;
&#13;
The algorithm is shown to detect specular surfaces over multiple consecutive frames for dense mirror mapping. In addition to 3D reconstruction, we show that multi-bounce returns help to enhance performance on applications such as segmentation and novel view synthesis. Our method can be combined with these state-of-the-art learning-based models, enhancing their robustness by discerning ambiguous scenarios. In general, this approach can map various specular surfaces like mirrors and glass, without making assumptions about particular specular surface shapes, and can operate on non-perpendicular specular-diffuse surface pairs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Timbral Transformations</title>
<link href="https://hdl.handle.net/1721.1/157706" rel="alternate"/>
<author>
<name>Shand, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/157706</id>
<updated>2024-12-03T03:23:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Timbral Transformations
Shand, Jessica
From folk songs to festivals, cafes to concert halls, and religious rituals to recording studios, the flute has long had a shapeshifting, cross-cultural presence. This thesis leverages 21st-century technologies not only to explore and extend the timbral versatility of flutes, but also to underscore the performative, fluid, and ever-evolving nature of timbre more generally. At the core of the project is the creation of sequences of discrete sounds that interpolate between semantic categories and a collection of fixed media compositions based on those sequences, both of which consist entirely of flute sounds that have undergone varying degrees of electronic manipulation. By means of digital signal processing techniques, the flute wavers in and out of a multitude of sonic identities. Sometimes, it masquerades as another familiar object or interface (e.g., a ticking clock) or abstractly evokes a concept or phenomenon (e.g., a storm); at other times, it beckons toward the ethereal or ineffable, resisting indexical identification altogether. With source materials warped, layered, and splayed across the frequency spectrum, such concerns as “the real” and “the true” begin to move out of focus, making way for attention to embodied phenomenological experiences of sound. As this thesis positions compositional practice as a form of research, its outputs range from the conceptual to the creative and the computational. In addition to the music at its core, the project interfaces with gender studies in its original exposition on timbre and timbral identity, includes a rigorous set of experiments with human and machine listeners, and makes original applications of multimodal language models not before seen in musicology or music theory. A live performance incorporating each of these project vectors and an audience discussion following the event offer further opportunities for reflection and critique.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practice of consulting firms in corporate strategic planning.</title>
<link href="https://hdl.handle.net/1721.1/157653" rel="alternate"/>
<author>
<name>Chapman, Beverly Jean.</name>
</author>
<id>https://hdl.handle.net/1721.1/157653</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Practice of consulting firms in corporate strategic planning.
Chapman, Beverly Jean.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Bibliography: leaves 82-84.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Derived distribution of water volume above a given threshold discharge.</title>
<link href="https://hdl.handle.net/1721.1/157652" rel="alternate"/>
<author>
<name>Chan, Siu-On.</name>
</author>
<id>https://hdl.handle.net/1721.1/157652</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Derived distribution of water volume above a given threshold discharge.
Chan, Siu-On.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 138-139.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wear studies of abrasive particles</title>
<link href="https://hdl.handle.net/1721.1/157640" rel="alternate"/>
<author>
<name>Distel, Joseph William.</name>
</author>
<id>https://hdl.handle.net/1721.1/157640</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Wear studies of abrasive particles
Distel, Joseph William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1956; Bibliography: leaf 50.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wear studies of single aluminum oxide grains during grinding</title>
<link href="https://hdl.handle.net/1721.1/157639" rel="alternate"/>
<author>
<name>Cole, John M. (John Martin)</name>
</author>
<id>https://hdl.handle.net/1721.1/157639</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1955-01-01T00:00:00Z</published>
<summary type="text">Wear studies of single aluminum oxide grains during grinding
Cole, John M. (John Martin)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1955; Includes bibliographical references (leaf 48).
</summary>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forces in internal grinding</title>
<link href="https://hdl.handle.net/1721.1/157638" rel="alternate"/>
<author>
<name>Reichenbach, George S. (George Sheridan)</name>
</author>
<id>https://hdl.handle.net/1721.1/157638</id>
<updated>2024-11-22T03:52:12Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">Forces in internal grinding
Reichenbach, George S. (George Sheridan)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1952; Includes bibliographical references (leaves 28-29).
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The mechanics of dry surface grinding</title>
<link href="https://hdl.handle.net/1721.1/157637" rel="alternate"/>
<author>
<name>Marshall, Earle Robert, 1919-</name>
</author>
<id>https://hdl.handle.net/1721.1/157637</id>
<updated>2024-11-22T03:42:16Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">The mechanics of dry surface grinding
Marshall, Earle Robert, 1919-
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1949
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of adhesion mechanisms</title>
<link href="https://hdl.handle.net/1721.1/157630" rel="alternate"/>
<author>
<name>Yee, Geary Yee.</name>
</author>
<id>https://hdl.handle.net/1721.1/157630</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">An investigation of adhesion mechanisms
Yee, Geary Yee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3-D Topology Optimization of Spatially Averaged Surface-Enhanced Raman Devices</title>
<link href="https://hdl.handle.net/1721.1/157601" rel="alternate"/>
<author>
<name>Hammond, Ian M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157601</id>
<updated>2024-11-19T03:16:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">3-D Topology Optimization of Spatially Averaged Surface-Enhanced Raman Devices
Hammond, Ian M.
Numerous nanophotonics applications necessitate designs that enhance distributed incoherent emission. Representative applications include light-emitting diodes, thermal emitters, and Raman sensing. Previous efforts in full-scale topology optimization for Surface-Enhanced Raman Sensing (SERS) have predominantly focused on single-particle emissions or two-dimensional systems, which are impractical for actual fabrication. An objective function represented by ∫|E|⁴ dV effectively approximates Raman enhancement. This function tends to diverge near sharp tips and other singular geometries in three-dimensional spaces for relevant materials. This thesis delves into methodologies for regularizing the optimization process to preclude the formation of such problematic geometries. Additionally, it integrates lithography constraints to ensure that the optimized SERS substrates are viable for fabrication. To align with computational limits, various strategies are employed to make the system manageable. The techniques developed in this study facilitate the practical design of 3-D systems that enhance incoherent emission through topology optimization.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Control for Visually Interactive Decision Support Tools in Supply Chain Management</title>
<link href="https://hdl.handle.net/1721.1/157594" rel="alternate"/>
<author>
<name>Guter, Willem J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157594</id>
<updated>2024-11-19T03:55:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Natural Language Control for Visually Interactive Decision Support Tools in Supply Chain Management
Guter, Willem J.
Supply chains are complex networks where changing one variable can have unforeseen effects on the entire chain. Interactive supply chain visualizations are useful for understanding these effects and can lead to decreased cost. However, these interactive visualizations can require technical and domain expertise to operate and understand. One solution is a natural language interface, which allows users to control the visualization with natural language commands. However, natural language interfaces can themselves be difficult to implement, often requiring application-specific programming or training. This thesis proposes integrating a pre-trained large language model as the natural language interface. An example application is created using an existing supply chain network visualization application. Various large language models are then evaluated for usability, functionality, and accuracy. We find that a state-of-the-art commercial model is able to practically fulfill the role of a natural language interface, but that open-source large language models are not currently capable of functioning in this way.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Fine-Tuning of Language Models for Multiple-Choice Questions</title>
<link href="https://hdl.handle.net/1721.1/157591" rel="alternate"/>
<author>
<name>Wang, Ivy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157591</id>
<updated>2024-11-19T03:46:08Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Investigating Fine-Tuning of Language Models for Multiple-Choice Questions
Wang, Ivy A.
This thesis investigates the positional and contextual bias of large language models (LLMs) when used to answer multiple-choice questions (MCQs). Given the increasing use of generative language models in fields ranging from cybersecurity to biomedical research, it is important to understand the causes of their behavior in order to mitigate biases and prevent errors. One known method of improving the performance of LLMs is fine-tuning, wherein a model is additionally trained on data from a specified distribution or subject area. We specifically investigate training data properties related to positional bias in fine-tuned language model performance on correctly answering MCQs. To improve model efficiency, we used parameter-efficient fine-tuning, specifically LoRA (Low-Rank Adaptation), which reduces the dimensionality of weight matrices used in the model’s layers. We verify that if the training data for the model possesses the same qualities and distributions as the test data, the LLM will achieve the best performance. In our experiments, we scaled and balanced our fine-tuning datasets and learned that both processes improve the accuracy on test sets of MCQs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiation Shielding Design and Radioactive Waste Assessment of Horizontal Compact High Temperature Gas-Cooled Reactor</title>
<link href="https://hdl.handle.net/1721.1/157583" rel="alternate"/>
<author>
<name>Kudriavtseva, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/157583</id>
<updated>2024-11-19T03:57:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Radiation Shielding Design and Radioactive Waste Assessment of Horizontal Compact High Temperature Gas-Cooled Reactor
Kudriavtseva, Anna
With the objective that nuclear power plants utilizing small High Temperature Gas-Cooled Reactors (HTGRs) can provide economic, environmentally favorable and reliable electricity and heat for community and industrial purposes, Boston Atomics LLC initiated the design of Horizontal Compact HTGR (HC-HTGR). This work addresses shielding, activation analysis and the decommissioning cost assessment as an integrated part of the design process.&#13;
Reinforced regular and borated concrete were considered as shielding materials for the reactor building and Reactor Cavity Cooling System (RCCS) tanks. It was found that for locations of the reactor building where the dose rates during normal operation were greater than the Nuclear Regulatory Commission (NRC) limit of 0.1 rem/hr, 175 cm of borated concrete is required. The shielding concerns motivated the decision to separate RCCS tanks from the reactor room with a 75 cm borated concrete wall to ensure that the radiation levels do not exceed the NRC limit. Additionally, several shielding options were proposed to protect steam generator modules from radiation-induced activation.&#13;
The activation analysis was performed for the key equipment and graphite reflector components of the HC-HTGR design. The core barrel made of Incoloy 800H was characterized as a class C waste component after 40 years of reactor operation. It was proposed that 2.25Cr-1Mo alloy be considered as barrel material to decrease activity levels. The reactor pressure vessel (RPV) and RCCS tubes made of carbon steel were characterized as a class A waste component. The graphite reflector components are characterized as class C waste.&#13;
Furthermore, this work discusses the neutron irradiation effects and their impact on the integrity of the barrel, RPV, and graphite reflector against material property changes. It was found that 2.25Cr-1Mo alloy has a higher radiation resistance due to the higher iron content in the composition. Based on the results, the reactor vessel is safe from radiation damage for 32 years of operation. The data evaluated for the graphite reflectors indicate that the components should be replaced after 20 years before they pass the turnaround point. &#13;
The concentrations of radionuclides computed during activation analysis were used to predict the radiation levels from beta and gamma sources that could be encountered during the disposal of the core barrel and RPV. Based on the obtained data, it is clear that if the barrel is not replaced during operation, the radiation dose rate will remain above acceptable levels, requiring a more rigorous disposal approach. The radiation levels are reduced for the reactor vessel as it was exposed to a lower flux and radiation-induced activation. A similar analysis was performed to derive the exposure dose rate from gamma and beta rays that can be detected by a sensor of a refueling camera. Beta particles will deposit most of the energy in a graphite layer, and the camera will register negligible dose rates. The gamma ray estimates indicate that a more enduring refueling machine is required. &#13;
The results of this work provide the disposal costs for immediate dismantlement of the HC-HTGR and for dismantlement after a given decay period. Overall, the disposal costs of the core barrel, RPV and graphite reflector are $13 million for the HC-HTGR design after 40 years of full operation if the billable charge limits are set on radioactivity levels. If this option is not considered, the total disposal costs grow to $225 million. However, extending the storage up to 10 years would decrease the activity, reducing the cost of disposal.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic Studies on the Chelating Ligand Effects of Novel Borafluoronium Ions</title>
<link href="https://hdl.handle.net/1721.1/157568" rel="alternate"/>
<author>
<name>Allen, Marissa D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157568</id>
<updated>2024-11-19T04:14:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Systematic Studies on the Chelating Ligand Effects of Novel Borafluoronium Ions
Allen, Marissa D.
This study explores the synthesis and characterization of borafluoronium ions via a ligand-based strategy using bidentate amine and phosphine bases as chelating agents to cationic boronium ions. The borafluoronium complexes A–C were synthesized in high yields (80%–95%) and characterized using NMR spectroscopy and single-crystal X-ray diffraction. Further investigations into the coordination of other bisphosphine ligands, such as dppe, rac-BINAP, and Xantphos, resulted in the formation of Lewis adducts rather than the desired borafluoronium ions. The challenges in isolating these species are attributed to steric and chelate effects inherent to the ligands, with NMR analysis providing insights into the coordination chemistry and stability of these complexes. This work advances the understanding of borafluoronium ion formation and the impact of ligand structure on their properties.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization under ecological realism reproduces signatures of human speech perception</title>
<link href="https://hdl.handle.net/1721.1/157565" rel="alternate"/>
<author>
<name>Magaro, Annika K.</name>
</author>
<id>https://hdl.handle.net/1721.1/157565</id>
<updated>2024-11-19T03:12:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Optimization under ecological realism reproduces signatures of human speech perception
Magaro, Annika K.
Recent advances in machine learning have made real-world perception tasks feasible for computers, in many cases approaching levels of performance similar to those of humans. In particular, optimizing models for ecologically realistic training datasets has helped to yield more human-like model results. In the field of speech recognition, models trained under realistic conditions with simulated cochlear input reproduce some characteristics of human speech recognition. However, it is unclear how similar the behavior of these models is to that of humans across the many ways in which speech can be manipulated or degraded, since human and model behavior have not been extensively compared. In this paper, we address this question by comprehensively testing a neural network model trained in ecological conditions across a large set of speech manipulations, comparing its behavior to that of humans. We find that training in ecological conditions yields a fairly good overall match to human behavior, with some discrepancies that can be largely resolved by training specifically on these conditions. The results support the idea that the phenotype of human speech recognition can be understood as a consequence of having been optimized for the problem of speech recognition in natural conditions.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of competition in freight transportation to and from Boston, Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/157488" rel="alternate"/>
<author>
<name>Luykx, H. M. C.</name>
</author>
<author>
<name>McHugh, G. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157488</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1931-01-01T00:00:00Z</published>
<summary type="text">A study of competition in freight transportation to and from Boston, Massachusetts
Luykx, H. M. C.; McHugh, G. E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1931; Appendix contains numerous pamphlets.
</summary>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The use of discriminators in the linear detection of F-M signals</title>
<link href="https://hdl.handle.net/1721.1/157485" rel="alternate"/>
<author>
<name>Lu, Pao-Wei.</name>
</author>
<id>https://hdl.handle.net/1721.1/157485</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">The use of discriminators in the linear detection of F-M signals
Lu, Pao-Wei.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1944; Includes bibliographical references (leaf 51).
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Valuation model for Less Developed Countries' Debt in the secondary market</title>
<link href="https://hdl.handle.net/1721.1/157480" rel="alternate"/>
<author>
<name>Carballo, Carlos Federico.</name>
</author>
<id>https://hdl.handle.net/1721.1/157480</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">Valuation model for Less Developed Countries' Debt in the secondary market
Carballo, Carlos Federico.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1989; Includes bibliographical references (leaves 75-79).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban space heating with a heat pump-condenser temperature water system</title>
<link href="https://hdl.handle.net/1721.1/157478" rel="alternate"/>
<author>
<name>Yee, Wee Tong.</name>
</author>
<id>https://hdl.handle.net/1721.1/157478</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Urban space heating with a heat pump-condenser temperature water system
Yee, Wee Tong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Average frequency trajectory control : normal mode.</title>
<link href="https://hdl.handle.net/1721.1/157475" rel="alternate"/>
<author>
<name>Yared, Khaled Ibrahim.</name>
</author>
<id>https://hdl.handle.net/1721.1/157475</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Average frequency trajectory control : normal mode.
Yared, Khaled Ibrahim.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The implementation of a joint disaggregate demand model in an urban simulation</title>
<link href="https://hdl.handle.net/1721.1/157474" rel="alternate"/>
<author>
<name>Worms, Vincent Robert.</name>
</author>
<id>https://hdl.handle.net/1721.1/157474</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">The implementation of a joint disaggregate demand model in an urban simulation
Worms, Vincent Robert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 114-115.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Echoes From the Stone: Reframing Preservation in Syria Through Haurani Folklore</title>
<link href="https://hdl.handle.net/1721.1/157368" rel="alternate"/>
<author>
<name>Alrifai, Hajar</name>
</author>
<id>https://hdl.handle.net/1721.1/157368</id>
<updated>2024-10-17T03:12:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Echoes From the Stone: Reframing Preservation in Syria Through Haurani Folklore
Alrifai, Hajar
Partially buried in the landscape of Hauran in southern Syria, my family’s 1500-year-old house, Alali—formerly a Byzantine church—further erodes with each passing year. Throughout the decades, the house has been subjected to various forms of destruction: from development, demolition, and rocket strikes to violent reconstruction. Its crumbling stones are laden with the memories of four generations and echo with a way of life that is disappearing. At the heart of Hauran are the fellahin, farmers who permanently settled in its villages in the late 19th century. As they settled, the fellahin reclaimed, inhabited, dismantled, and rebuilt the Byzantine structures, often rearranging or reimagining the original programs: chapels, houses, and cemeteries. In my family’s border village of Nasib—a place both liminal and at the margin—this rich local history lives not in formal archives but in scattered material like architectural ruins, oral poems, folk songs, diasporic transcripts, and 8mm video cassettes, many of which resonate as sonic artifacts. What began as a project of documenting the decay of our old house evolved into a meditation and manifesto on preservation outside the purview of top-down institutions. Through creative writing and cinematic intervention, Echoes from the Stone asks: what does it mean to preserve a place, and preservation for whom? In this proposed paradigm, ‘story’ becomes integral to architectural preservation. This story of Alali interweaves my journal entries with the encounters of my great-great-grandfather, Hassan Ali, an oral poet who founded the village. I further draw from my grandfather Faisal’s diaries, our family’s archival videos, and interviews with Nasib’s elders, including my grandmother Um Ghazi, an olive farmer, and Um Saado, a Bedouin matriarch and shepherd who once lived in the old home with her family.
By foraging for this counter-archive of living memories, I reveal intergenerational intersections which complicate and reimbue the colonial history of the village—and of Syria—with voices that echo from the stone, voices that persist and whisper from the ground, from across borders and oceans, and from within. This interdisciplinary chronicle draws from architecture, agriculture, literature, anthropology, and film, to reconstruct a social history of the village and speculate on alternate ways of dwelling, building, and preserving— reclaiming the archive, reinserting narrative, and reframing heritage through the folklore of Hauran.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wave Mechanics in Constructed Oyster Reefs and the Design of Nature-Based Coastal Adaptation</title>
<link href="https://hdl.handle.net/1721.1/157367" rel="alternate"/>
<author>
<name>Brice, James Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/157367</id>
<updated>2024-10-17T03:22:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Wave Mechanics in Constructed Oyster Reefs and the Design of Nature-Based Coastal Adaptation
Brice, James Vincent
There has been great interest in the potential of constructed oyster reefs (CORs) to function as nature-based coastal protection infrastructure, but most projects to date are designed primarily for wave attenuation and fail to consider both the environmental conditions necessary for long-term oyster reef sustainability and the importance of education and outreach in fostering environmental stewardship. Realizing the promise of nature-based coastal adaptation means building physical, ecological and social infrastructure simultaneously, requiring a design-research methodology that combines an understanding of biological design constraints, physical analysis and community engagement. &#13;
&#13;
Physical and numerical wave flume experiments were conducted to investigate mechanisms of wave energy loss in oyster shell gabion-type CORs that place oyster biology in the foreground, particularly the influence of across-shore width, spacing and structure porosity on wave attenuation under non-breaking wave conditions. Gabion widths of O(1) wavelength were found to attenuate waves by 40%. These losses were driven primarily by internal drag, which was characterized experimentally and accurately modeled with the modified Ergun equations and the waves2Foam library of the open-source CFD software OpenFOAM. &#13;
&#13;
This research was then translated into a suite of interactive design activities, featuring a tabletop wave flume, scale models of coastal features, and a set of coastal community member cards. Through design and creative inquiry, these tools seek to communicate complex biophysical processes in coastal ecosystems while empowering communities to reimagine what it really means to "build with nature".
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When the Earth Breathes: An Anthology of Volcanic Urbanism</title>
<link href="https://hdl.handle.net/1721.1/157366" rel="alternate"/>
<author>
<name>Carucci Alvarez, Maria Gabriela</name>
</author>
<id>https://hdl.handle.net/1721.1/157366</id>
<updated>2024-10-17T04:01:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">When the Earth Breathes: An Anthology of Volcanic Urbanism
Carucci Alvarez, Maria Gabriela
Malpaís. A Spanish word used in volcanically active landscapes to refer to the new basalt terrain that solidifies after an eruption. It translates literally to “bad country”, and it is defined as a “sterile, arid surface”. This thesis looks at the Tajogaite volcano, the most recent eruption in La Palma, one of the youngest of eight islands in the oceanic volcanic arc formation of the Canary Islands. It positions this event not as a unique site but as a manifestation of a network of bureaucratic colonial imaginaries that still operate within a disaster relief framework that exists in volcanic landscapes throughout the world. Together, these imaginaries draw an unyielding binary narrative about volcanoes as purely destructive entities, and further dismiss the porosity that exists between the geos, the bios and the polis. Igneous landscapes, through the production of new basalt floors, rich soils and ocean intrusions, traverse and redefine property boundary lines and national coastlines, which extends beyond plan views and into sectional shifts. This project aspires to spatialize the temporal moments of one volcanic eruption, questioning, ultimately, how the ownership of materials in flux, along with their transformations, can reframe our imagination of a city-volcano production that frames both as ephemeral, ever-changing entities. Through ten allegories, cities are positioned inside of the geological realm, and are de-centered to contextualize them within a volcano’s lifespan. The first five stories describe the current framework, while the other half become allegories through which architecture and urbanism are leveraged as tools through which to understand the earth’s movements at different scales, temperatures and states of matter, in order to provide an alternative imaginary to current answers to the question of volcanic urbanism.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven Home Workspace Design: Interactive DIY Platform Mediating the User and Expert Literature</title>
<link href="https://hdl.handle.net/1721.1/157365" rel="alternate"/>
<author>
<name>Yi, Wangli</name>
</author>
<id>https://hdl.handle.net/1721.1/157365</id>
<updated>2024-10-17T03:12:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Data-driven Home Workspace Design: Interactive DIY Platform Mediating the User and Expert Literature
Yi, Wangli
After COVID-19, some employees have opted to continue working from home (WFH) or have chosen a hybrid working mode. Previous research has shown that satisfaction with the physical environment and characteristics of home workspaces are directly related to mental health, which can affect productivity and well-being. This underscores the need for better-designed WFH environments. This study explores the use of data-driven tools in interior design to enhance WFH setups. It posits that these tools can transcend traditional design limitations by incorporating professional expertise and facilitating user-driven design processes.&#13;
The tool's backend is built on a comprehensive collection and classification of research literature on WFH environments, creating an interactive platform where users can engage directly in the design process. This is achieved through real-time, machine-mediated suggestions that enhance well-being without the need for professional human designers. Employing a user-centered design framework, the study develops and tests a prototype to assess its effectiveness in empowering users to intentionally and sensitively redesign their home workspaces.&#13;
Results show that participating graduate students became more aware of their WFH environment during the design process, but it largely did not change their existing workspace decisions. This observation indicates the potential benefit of this interactive machine-mediated system as a design education tool. Further testing on other demographic groups, such as those who need to focus for long hours professionally at home and those who are specifically concerned with mental health issues, is anticipated as the next step in evaluating this platform.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Waste to Structure: A Deep Reinforcement Learning Approach to Circular Design</title>
<link href="https://hdl.handle.net/1721.1/157362" rel="alternate"/>
<author>
<name>Sørensen, Karl-Johan I.</name>
</author>
<id>https://hdl.handle.net/1721.1/157362</id>
<updated>2024-10-17T03:51:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Waste to Structure: A Deep Reinforcement Learning Approach to Circular Design
Sørensen, Karl-Johan I.
The design-to-construction process of buildings predominantly follows a top-down linear workflow, where a design is drawn and subsequently refined to determine the required materials and components. This approach assumes an infinite material supply or the capability to manufacture what is needed for the design. Constructing in this manner is resource-intensive and wasteful, making it incompatible with our global climate goals. One way to significantly reduce our material and environmental footprint is by extending the lifespan of building materials through circular design practices. In this approach, the available materials define the architecture, inverting the process from top-down to bottom-up. This method, known as Inventory-Constrained Design, enables the creation of new buildings using materials sourced from construction and demolition waste streams. These inventories, characterized by their non-standard and uniquely varied elements, are hard to design with due to the enormous quantity of possible combinations of even a few discrete elements. Identifying a feasible design that aligns with the designer's intent and meets functional requirements becomes an overwhelmingly time-consuming task, heavily reliant on manual trial and error. Computational optimization has been implemented to automate the process, but state-of-the-art algorithms still require manually pre-defining a parametric target design-space or take too long to compute when applied to larger problems.&#13;
&#13;
This thesis proposes a new method for circular design utilizing Deep Reinforcement Learning (RL) to design structures, requiring only a design gesture and the inventory as input. It works by training an artificial neural network to sequentially assemble a structure from inventory elements, following the gesture while meeting a structural goal. Hence, the design layout directly arises from available inventory. After training, the neural net can be employed instantaneously to design new structures with new inventories without any significant computational expense. To evaluate the effectiveness of the RL method, it is applied to the specific problem of inventory-constrained design of planar roof trusses and demonstrated in a realistic example of assembling a long-span roof from a disassembled transmission tower.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exœrcising a Haunted City</title>
<link href="https://hdl.handle.net/1721.1/157361" rel="alternate"/>
<author>
<name>Wong, Bryan Hon Ting</name>
</author>
<id>https://hdl.handle.net/1721.1/157361</id>
<updated>2024-10-17T04:04:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exœrcising a Haunted City
Wong, Bryan Hon Ting
With the looming threat of cultural erasure posed by Hong Kong’s repatriation to China no later than 2047, rituals emerge as the last resource sustaining the collective identity of the city. This thesis documents, through the study of local Taoist-Buddhist practices, the choreographies of rituals as a reparative tool to resist the disappearance of local culture. It draws on findings from everyday domestic offerings to ancestors, annual festive performances of traumatic cleansing, and the booming clientele businesses of precautionary rites, all of which demonstrate their spatial and temporal qualities as methods to resist modern state control.&#13;
&#13;
To retain the residue of pre-modern practices as a critique of socio-political turmoil, this thesis suggests an alternative design that preserves and promotes the annual ghost festival for public participation. By revising the festival’s pilgrimage route and ritual sheds, this thesis transforms the traditional nature of ephemeral scaffoldings into permanent poles and follies. Situated along the city’s most haunted public estate, these structures are programmed as public facilities for fitness training and children’s playscapes. During the festival, they will be activated into ritual sheds, demonstrating a formal and functional contrast between the everyday and the ritual—from form to formlessness, exposure to closure, and lightness to heaviness.&#13;
&#13;
Designed to evade institutional surveillance, these clandestine transformations preserve solidarity and identity not by emphasizing the significance of priests exorcising in rituals, but by highlighting the quotidian motor memories developed from locals exercising within. The duality of ritual and everyday movements shall exercise the ghosts of a haunted city.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Common Grounds in Shared Waters: Integrated Design for Negotiating Equitable Development in Gosabara-Mokarsagar</title>
<link href="https://hdl.handle.net/1721.1/157360" rel="alternate"/>
<author>
<name>Mehta, Dhwani</name>
</author>
<id>https://hdl.handle.net/1721.1/157360</id>
<updated>2024-10-17T03:39:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Common Grounds in Shared Waters: Integrated Design for Negotiating Equitable Development in Gosabara-Mokarsagar
Mehta, Dhwani
Along the west coast of India, in the waters of Gosabara-Mokarsagar, conflicting visions for the landscape mix and muddle. In 2016, the Muslim fisherfolk of Gosabara, 100 families already marginalized by religious, caste, and class distinctions, were banned, on environmental protection claims, from fishing, their sole traditional livelihood. This led the community to file a petition for mass euthanasia to protest the loss of their rights. Despite their protests, the Government of India announced the Kerly Recharge Reservoir Ecotourism project in 2022, which overlooked their needs, threatened their cultural identity linked to fishing, and exacerbated their traumatic history of displacement that dates back to India and Pakistan’s 1947 partition. &#13;
&#13;
Although many groups’ contested visions map onto the shared waters of Gosabara-Mokarsagar, the fisherfolk are particularly excluded from decision-making processes. Finding a singular common ground among the contesting groups is challenging due to vast differences in power, position, and privilege. This thesis, therefore, aims to ensure equitable representation for all stakeholders, particularly disempowered fisherfolk, through an integrative design approach that forges a network of multiple ‘common grounds.’ The term ‘common grounds’ defines partnerships of two or three stakeholders, instead of all, based on mutual understanding and shared objectives like sustainable livelihoods, economic development, ecotourism, and avian conservation. &#13;
&#13;
First, I established a common ground with a local NGO, Mokarsagar Wetland Conservation Committee, by using photography, videography, and drawings to raise public awareness about this unique landscape. Initially intuitive and later strategic, I represented the lush waters as a shared home for both the fisherfolk and the birds. Second, I present a network of localized design strategies to enable partnerships that position the NGO as a mediator between the government and local communities, especially the fisherfolk, enabling it to foster alternative models of environmental stewardship. Through these partnerships, rooted in figurative ‘common grounds,’ the fisherfolk become primary, active collaborators in development processes. This thesis creates the conditions for a more equitable development model for this landscape by using design to enable grassroots partnerships that integrate communities into ecological conservation and economic growth projects.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Office of Back of House</title>
<link href="https://hdl.handle.net/1721.1/157359" rel="alternate"/>
<author>
<name>Bilal, Ekin</name>
</author>
<id>https://hdl.handle.net/1721.1/157359</id>
<updated>2024-10-17T03:38:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Office of Back of House
Bilal, Ekin
Office of Back of House (OoBoH, pronounced “ooh-boo”), is an architectural practice that operates at the intersection of ducts, conduits, scaffolding, custodial carts, mechanical rooms and sheds. OoBoH conducts design experiments in and around these maintenance objects and spaces typically separated from “architecture-proper.” By looking at the regulations, funding initiatives, zoning amendments and energy consumption routines that rule these spaces, OoBoH questions the boundaries that separate them from the “front of house” to begin with.&#13;
These “back of house” spaces exist right inside the thick poché line that bounds what is thought to be the domain of design. Back of house (BoH) is dictated by an obscured regime of maintenance processes, and by leveraging these currently unexamined spaces, OoBoH believes that they can become the site for tactical design interventions and new visions of maintenance culture. OoBoH is an attempt at entering architecture from the back door, re-characterizing existing buildings as dependent on the spaces and labor often hidden behind pastiche and façade.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tectonics of the semi-permanent: Reassembling fit-out architecture</title>
<link href="https://hdl.handle.net/1721.1/157358" rel="alternate"/>
<author>
<name>Schnitzler, Jenna</name>
</author>
<id>https://hdl.handle.net/1721.1/157358</id>
<updated>2024-10-17T04:14:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tectonics of the semi-permanent: Reassembling fit-out architecture
Schnitzler, Jenna
In New York engineer Reginald Pelham Bolton’s 1911 obsolescence study “Building for Profit: Principles Governing the Economic Improvement of Real Estate”, he foretold a truth that remains today: that “the useful or economic existence of all classes of buildings, in the rapid march of modern conditions, is constantly shortening” (Bolton, 68). He details how the parts of buildings lose value at different rates—as they physically deteriorate, materials wear and things fall out of style, but even more quickly, he notes, do our structures become economically obsolete. Then and still today, the durability of building materials is the least of our concerns when considering functional obsolescence. The physical durability is almost certain to exceed the economic durability of a building as a whole.&#13;
&#13;
Designers and developers recognize this gap between physical and economic obsolescence, and in response have called for a moratorium on new construction—opting instead to convert existing structures to meet changing programmatic demands. Yet in these conversions, we use the same extractive methods as new construction, filling existing frames and envelopes with non-structural light framing to differentiate the space inside. In this paradigm, to build inside an existing frame still relies first on the tool of demolition.&#13;
&#13;
The uneven wearing that Bolton wrote about in 1911 appears again in the iconic shearing layers diagram from Frank Duffy and Stewart Brand, who make a very similar economic argument, demonstrating that the economically fast-wearing interior layer accumulates the most investment over time, rebuilt on a cycle of every 5-10 years. We are facing a turning point in building; as of 2020, over 35% of total construction activity is renovation work, and we are making increasingly rapid changes to building function. This creates a paradigm of fit-out architecture that answers unpredictability and shifting values with indeterminacy, perpetuating a cycle of repetitive building. This project takes the converted structure as its starting point, experimenting with disassembly, reassembly, and the boundaries between fit-out and frame, sited within a larger material and economic framework that expands the definition of “value” beyond the monetary to include material resources embodied by a given structure.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Engineering Design for Reusable Concrete Building Structures</title>
<link href="https://hdl.handle.net/1721.1/157357" rel="alternate"/>
<author>
<name>Wongsittikan, Pitipat</name>
</author>
<id>https://hdl.handle.net/1721.1/157357</id>
<updated>2024-10-17T04:15:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automated Engineering Design for Reusable Concrete Building Structures
Wongsittikan, Pitipat
Concrete contributes to 8% of global CO2 emissions, largely through the reinforced concrete (RC) structural system. Unlike steel and timber structures, RC components are rarely reused because the concrete and steel phases are inseparable. This results in downcycling of the components into aggregates or landfill material. The Pixelframe structural system [1] was proposed to facilitate the reusability of concrete components by applying the external post-tensioning systems established in bridge structures, together with fiber reinforcement, to the design of building beams and columns. This work presents an automated engineering design workflow for Pixelframe, including an engineering mechanics model of the system that conforms to ACI 318-19 [2] and fib Model Code 2010 [3], half-scale tests to verify the preliminary behavior of the system, and a scalable design algorithm for minimum embodied carbon designs. The workflow also uncovers new insights on choosing ranges of concrete strengths based on element lengths, and on the potential carbon reduction from refining the number of different concrete strengths in a building. This work demonstrates the use of existing building systems in the context of reusability and the potential of automated computational structural design to aid design decisions and facilitate the circular economy of concrete structures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Salt to Scale: The Seasoning of Buildings</title>
<link href="https://hdl.handle.net/1721.1/157356" rel="alternate"/>
<author>
<name>Battikha, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/157356</id>
<updated>2024-10-17T03:02:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Salt to Scale: The Seasoning of Buildings
Battikha, Christina
We exist in thick layers of ancient minerals and material formations that perform to shape human architectural practices. Yet, with a continuous desire to force materials into designs, humanity has never ceased to disregard the active strength of a material to perform with time. The next twenty years align with a future of salt in the form of a dynamic, preservative, and corrosive mineral that shall never expire from Earth’s crust. Nevertheless, aspiring to mine, build, maintain, and preserve, humanity remains in constant search of other more durable materials designed with the presumption to last forever.&#13;
&#13;
Salt is certainly not the neutral product of a chemical reaction. It actively performs to preserve, corrode, accumulate, or maintain humanity’s creations. Embracing its ability to expand and reduce timescales, I investigate salt as a material that provides both corrosive and preservative properties offering current architectural practices the choice and responsibility of building for eternity or for a finite moment.&#13;
&#13;
I explore ancient salt cycles shaping the last human activities remaining on the Eastern coast of the Mediterranean, in Anfeh, Lebanon. Molded into a series of geo-cultural objects, salt containers embrace their materiality and escape the dullness of a mold to acknowledge the continuous cultural cycles that exist between time, salt, and its people.&#13;
&#13;
This thesis invites current design and construction practices to think across new intervals of time that reflect the building and un-building capacities of salt as a scalable mineral contributing to a salty architectural ritual that passes from one generation to the next; a source of luck amidst a time of ongoing crisis. Providing recipes from a salty kitchen, the work integrates seasonal practices to mine and craft salt into animate typologies, embracing the forces of salt to challenge standard architectural practice with one that thinks with the durations of salt.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In Tension: Computational exploration of the design space of tensile network structures</title>
<link href="https://hdl.handle.net/1721.1/157354" rel="alternate"/>
<author>
<name>Burke, Adam T.</name>
</author>
<id>https://hdl.handle.net/1721.1/157354</id>
<updated>2024-10-17T03:43:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">In Tension: Computational exploration of the design space of tensile network structures
Burke, Adam T.
Cable and rope net structures are lightweight tensile systems and generally cannot resist compression or bending. Tensile network structures are often used to span long distances without intermediate supports and have found applications in art, architecture, and structural engineering due to their physical and visual lightness. However, the design of tensile net structures is generally challenging since their form cannot be arbitrarily defined. Instead, a process of form-finding must be used to establish a geometry where all edges of the network carry only tensile forces.&#13;
&#13;
Physical models and computational methods can be used for the form-finding of tensile network structures; however, the primary challenge in the design process is the adjustment of the network parameters to achieve a specific design. Recent work has shown that automatic differentiation software packages can be used to efficiently design funicular structures (that is, those that work in pure tension or pure compression) with additional designer-driven objectives, but these techniques remain largely inaccessible to general designers, architects, and engineers due to the involved process of problem setup and limited interactivity of existing tools.&#13;
&#13;
To address this limitation, I introduce a new tool set consisting of two main components, Ariadne and Theseus. These components take advantage of automatic differentiation of objective functions for efficient tensile network simulation and provide a user interface for architects, engineers, and other designers as a plugin for a commonly used 3D modeling software. In this thesis, I outline the structure and features of this tool set, show results of networks optimized with different composable objectives, and show some fabricated examples. Next, I explore the generation of more complex 3D network topologies through a procedural shape grammar. Finally, I explore the use of differentiable simulation in conjunction with machine learning techniques to optimize the geometry of tensile networks using semantic input and to develop an implicit representation of the space of equal-edge-length tensed network poses. Together, this new tool set and additional methods enable a more expansive exploration of the design space of tensile networks where design intent and practical constraints are respected.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing frameworks for an equitable future: from building decarbonization to generative modeling.</title>
<link href="https://hdl.handle.net/1721.1/157353" rel="alternate"/>
<author>
<name>De Simone, Zoe</name>
</author>
<id>https://hdl.handle.net/1721.1/157353</id>
<updated>2024-10-17T03:20:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing frameworks for an equitable future: from building decarbonization to generative modeling.
De Simone, Zoe
In this thesis I develop computational frameworks to understand equity under two perspectives: building decarbonization policy and generative modeling.&#13;
&#13;
Part 1 - Equitable building decarbonization&#13;
Buildings significantly contribute to global carbon emissions, necessitating urgent decarbonization to meet 2050 climate targets. The U.S. strives for net-zero emissions by 2050, supported by federal incentives promoting building upgrades. However, financing deep retrofits for all U.S. homes exceeds available public funds. This chapter proposes a model that examines long-term carbon reduction trajectories under various incentive policies, focusing on fairness and equity. Using Oshkosh, WI, as a case study, it explores the philosophical, economic, political, and mathematical dimensions of creating just and effective decarbonization policies that ensure healthy, low-carbon homes for all.   &#13;
&#13;
Part 2 - Equitable diffusion models&#13;
Generative Text-to-Image (TTI) models, while capable of producing high-quality images, often replicate training data biases. Traditional fairness views in machine learning, which consider fairness as binary, are challenged. This section introduces DiffusionWorldViewer, a novel framework with a Web UI that enables users to analyze the underlying worldviews of diffusion models and edit model outputs to align with their personal fairness perspectives, thus promoting a diverse understanding of fairness in AI technologies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Masonry for the Design of Barrel-Vaulted Flooring Systems</title>
<link href="https://hdl.handle.net/1721.1/157352" rel="alternate"/>
<author>
<name>Haile, Nebyu Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/157352</id>
<updated>2024-10-17T03:09:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Low-Cost Masonry for the Design of Barrel-Vaulted Flooring Systems
Haile, Nebyu Samuel
The world's population is projected to grow rapidly in urban areas, with a projected 2.5 billion more urban dwellers by 2050 (UN-DESA, 2019). This urban growth will notably concentrate in Less Economically Developed Countries (LEDCs), where 16 of the top 20 most populous cities are anticipated to be situated by 2100 (Hoornweg &amp; Pope, 2017). LEDCs face a critical challenge in meeting the demand for affordable housing due to various factors, notably high material costs, which can account for up to 90% of residential construction expenses (Meikle, 2011). Most multi-story housing in LEDCs relies on reinforced concrete frames with flat slabs. This structurally inefficient system depends heavily, in many locations, on imported cement and steel. Compounding this issue, in LEDCs the construction sector contributes significantly to annual carbon emissions, sometimes doubling the global average and exacerbating the climate crisis (Yokoo et al., 2016). Addressing the pressing need for affordable housing requires alternative, more efficient structural systems that utilize affordable and environmentally conscious materials.&#13;
&#13;
This thesis aims to address the challenge of affordable housing by proposing the implementation of unreinforced barrel-vaulted earthen floor systems as an alternative to conventional concrete flat slabs, which are often cost-prohibitive in LEDCs. While existing research predominantly focuses on thin concrete shells for vaulted floors, this study emphasizes earthen vaulted floor systems, utilizing locally available and cost-effective materials. Specifically, it analyzes the maximum spanning capacity of three shallow unreinforced earthen barrel-vaulted floor typologies, examining their associated costs and carbon footprints. Furthermore, the thesis investigates the feasibility of one of these typologies by constructing and evaluating a physical 3m span prototype subjected to international building code loads. The outcomes highlight the structural integrity, cost-effectiveness, and reduced carbon footprint of earthen vaulted floor systems, offering insights into a more environmentally conscious and economically feasible floor system typology for building construction in LEDCs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Matter of the Hold: Housing futures and the paradigm of the ship</title>
<link href="https://hdl.handle.net/1721.1/157351" rel="alternate"/>
<author>
<name>Donovan, Inge</name>
</author>
<author>
<name>Pankhurst, David</name>
</author>
<id>https://hdl.handle.net/1721.1/157351</id>
<updated>2024-10-17T03:44:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Matter of the Hold: Housing futures and the paradigm of the ship
Donovan, Inge; Pankhurst, David
Many of the port cities of North America are built upon ballast stones, discarded by ships after their transit across the Atlantic. Oftentimes, this material was sourced from waste, such as stone offcuts from quarrying, and transported across space and time, slipping through value systems; from waste, to weight, to commodity. In time, structures across the continent boasted chimneys or foundations that had begun their life in the distant granite quarries of Cornwall, and from bricks that had rounded Cape Horn - their material transience obscured by a perceived stability of form.&#13;
Buildings are usually seen as the endpoint of material flows, where they remain in intractable, fused assemblies until they reach obsolescence. This familiar pattern is currently playing out in the phased demolition of the Bunker Hill Public Housing Development, the largest affordable housing community on the East Coast. The BHHD can be seen in contrast to the Charlestown Navy Yard, an adjacent shipyard where centuries of investment have established a robust infrastructure of maintenance. We ask: how could the paradigm of the ship, and the creation of material strategies for large, complex assemblages funded by public spending be applied to housing in a resource constrained world?&#13;
In The Matter of the Hold, the demolition waste from Bunker Hill is inherited as ballast and transformed, a process made possible by the concept of the “building as hold.”&#13;
In light of the increasing shift towards buildings as storehouses of material to be held for future reuse, and as vessels of carbon sequestration, our thesis explores how design for the uneven, yet cyclical ebbs and flows of renewable resources erodes architecture’s traditionally rigid temporal boundaries of planning, construction, and occupancy, and produces temporally dynamic regimes of figure and form. The collection, administration and reconfiguration of waste materials results in the creation of new, regenerative forms of collective living that challenge the boom-and-bust logic of investment in public infrastructures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stories of the Sky</title>
<link href="https://hdl.handle.net/1721.1/157349" rel="alternate"/>
<author>
<name>Chen, Zhanyi</name>
</author>
<id>https://hdl.handle.net/1721.1/157349</id>
<updated>2024-10-17T03:01:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stories of the Sky
Chen, Zhanyi
My art practice probes how soft science fiction provides intervals to contemplate the tension among the relentless advancement of infrastructural technologies, their environmental and psychological repercussions, and the metaphors and culture in weather and environments. In this thesis, I explore such tension with a specialized focus on the sky via a series of artworks that engage with clouds, weather satellites, and human feelings. My experience receiving image signals from the Russian weather satellite Meteor-M2 has led me to understand the pervasive presence of satellites and their silent integration into, and control over, various environments—similar to numerous other contemporary infrastructures. The sky has never been merely a smooth surface but is striated with all kinds of machines, politics, and power dynamics. My thesis can be seen as exploring methods of coping as responses from an individual caught in such an intermingled environment, and as an inquiry into how we perceive things that are distant from us. Referring to soft science fiction approaches, I strategically misuse technologies to prioritize human subjectivity over technological functionality. In moments where the misused technologies cease to function and instead obscure, resist, complicate, and affect, I put the current dynamics between the self and technologies into play. Parallel to my artistic practice, I also take inspiration from elemental media studies for their broader theoretical discourse on the interplay between the environment and media. Media historian John Durham Peters argues for a more encompassing definition of media that includes environmental elements, including the sky, challenging the traditional dichotomy between nature and culture and the previous academic emphasis on culture over nature.
This perspective allows for the exploration and appreciation of the sky’s cultural, emotional, and historical values, which are just as important, if not more so, than any other conventional media, resonating with the intentions behind my artworks. Thus, “media” becomes a term that is semantically richer than it already is and requires a nuanced interpretation embracing all its connotations, and my thesis provides ways to explore this materially. By focusing on the sky as a juncture where nature and culture collide, my thesis advocates for a synthesized view that recognizes the multifaceted narratives woven through the sky—stories of technology, of culture, of grand dreams and of small melancholy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Parameter Efficient Finetuning Techniques (PEFT) using Datamodels</title>
<link href="https://hdl.handle.net/1721.1/157345" rel="alternate"/>
<author>
<name>Chamdal, Harshal</name>
</author>
<id>https://hdl.handle.net/1721.1/157345</id>
<updated>2024-10-17T03:21:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Comparing Parameter Efficient Finetuning Techniques (PEFT) using Datamodels
Chamdal, Harshal
Advances in machine learning, particularly through algorithmic innovations and large datasets, have led to models with hundreds of billions of parameters. Deploying these models is challenging and costly, especially due to the extensive finetuning required. Parameter-efficient finetuning techniques (PEFT) have been proposed to address this issue by significantly reducing the number of trainable parameters, achieving comparable results to full-parameter finetuning. Despite widespread adoption, PEFT methods are often used interchangeably without considering their qualitative differences and performance under various data distributions. This thesis extensively compares three PEFT methods: LoRA, BitFit, and (IA)³, using the ModelDiff framework to identify and apply data interventions. Our analysis reveals that the performance of these methods varies widely with different interventions, with BitFit showing the most variance, while LoRA and (IA)³ demonstrate greater resilience. This study informs the selection and optimization of PEFT techniques based on specific NLP task requirements, balancing performance, computational efficiency, and robustness to text variations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media</title>
<link href="https://hdl.handle.net/1721.1/157344" rel="alternate"/>
<author>
<name>Akdoğan, Merve</name>
</author>
<id>https://hdl.handle.net/1721.1/157344</id>
<updated>2024-10-17T04:10:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media
Akdoğan, Merve
Situated at the intersection of digital media studies, queer theory, and glitch art, this thesis critically examines the normative biases and centralization in artificial intelligence (AI) and, more specifically, machine learning systems as they relate to marginalized identities. Unlike conventional approaches that prioritize optimization and polishing of AI, this thesis introduces the notion of a glitch—a short-lived digital error—as both a metaphorical and an artistic technique that critically subverts societal norms. The thesis interrogates AI’s structure, dissecting it to reveal “black box” complexities to question the vulnerability of computational systems. It proposes an alternative approach that embraces error as a means of resistance, developing a critical commentary on technology production through artistic interventions. Grounded in Judith Butler’s “Matrix of Intelligibility,” the artistic interventions introduced in this thesis aim to craft a glitch aesthetic that integrates queer theoretical perspectives with practical machine learning applications. This thesis interrogates how AI models can potentially propagate entrenched societal norms about gender, what political errors AI systems make, and what activist potential technology holds in challenging these cisheteronormative renderings. Aiming to develop and test machine learning models for identifying bias in digital media, this research is organized into four sections, beginning with the development of a theoretical framework and a review of relevant literature on AI errors and glitch art. Subsequently, the thesis explores the design of glitch prototypes through training and testing machine learning models. Finally, through experiments using these methodologies, including archival work, media manipulation, and attribution studies with AI models, this thesis reveals the AI systems’ deficiencies as they relate to queer identities. 
This work underscores the transformative potential of integrating artistic techniques to subvert and reveal technological development. It envisions technology not merely as a mechanism for perfecting systems but as a powerful conduit for advocating a more inclusive future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Between City and Self: Reading Beirut in Mohamed Soueid’s Tango of Yearning</title>
<link href="https://hdl.handle.net/1721.1/157343" rel="alternate"/>
<author>
<name>Anouti, Ghida</name>
</author>
<id>https://hdl.handle.net/1721.1/157343</id>
<updated>2024-10-17T03:57:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Between City and Self: Reading Beirut in Mohamed Soueid’s Tango of Yearning
Anouti, Ghida
Set in Beirut in the aftermath of the Lebanese Civil War (1975-1990), the pseudo-documentary film Tango of Yearning (1998) follows the lives of several subjects who speak of love, loss, dreams, and cinema as they navigate their fragmented postwar city. Directed by underground Lebanese filmmaker Mohamed Soueid (b. 1959) and shot purely on video, the film is saturated with cinematic references, images of urban sites, sensual and religious symbols, and sociopolitical intimations. Soueid sees Tango of Yearning – the first in a trilogy titled Civil War – as an ‘obituary’ of his life prior to making this film. Hence, for him, the film is rooted in the past, yet I argue that it is a significant augury of Beirut itself as a palimpsest of urban memories sublimated by Soueid. This argument is nestled between Soueid’s assessment of his film as a personal work of cinema, and my own reception of it as symptomatic of Beirut’s history in the periods prior to, during, and after the Civil War.&#13;
Tango of Yearning is, at its core, a meditation on the city of Beirut as it transformed throughout various periods governed by the traumatic event of the Civil War. Through a close reading of the film, I reveal how an ostensibly private essay is also a medium for archiving memories either forgotten or suppressed by the nation’s contested amnesia of the war, while also investigating how the postwar city’s history intertwines with the filmmaker’s biography. A largely unrecognized yet significant contributor to the Arab world’s video and cinema scene, Soueid – an agent, actor, and narrator of the city – is one of the most sensitive chroniclers of life in Beirut during the 1990s and early 2000s. Weaving historical realism with fabulation to fill or distort representational lacunae, his film offers doubled lenses – one of his life and another of Beirut’s contemporary history. Through a chronological reading of an otherwise nonlinear film, I extract a history of Beirut in three stages: its cosmopolitan yet polarized 1960s with a brimming arts, film, and literature scene; its violent war characterized by sectarianism and fragmented nationalism; and its amnesic postwar era in which the film was created. Accordingly, I ask how Soueid’s private image-making apparatus draws an image of Beirut through his own autobiographical narration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Damp Skin: Portraits of Taiwanese Domesticity, Resilience, and Otherness</title>
<link href="https://hdl.handle.net/1721.1/157342" rel="alternate"/>
<author>
<name>Chan, Cheng-Hsin</name>
</author>
<id>https://hdl.handle.net/1721.1/157342</id>
<updated>2024-10-17T03:48:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Damp Skin: Portraits of Taiwanese Domesticity, Resilience, and Otherness
Chan, Cheng-Hsin
This thesis is an intricate exploration of Taiwanese life under the constant dampness, weaving together the present with historical threads and personal memories of home and motherhood alongside broader socio-historical narratives. It examines Taiwanese domesticity through the dual prisms of “dampness” and “enclosure failure” to reveal how these elements influence or fail to meet Taiwanese people’s physical comfort and needs. Central to this research is exploring the historical marginalization of the Taiwanese body in domestic spatial development under the influence of external powers.&#13;
&#13;
Damp Skin unfolds through three intertwined registers that offer diverse materials and perspectives spanning time and space, providing a layered understanding of Taiwanese history and contemporary experiences: I. Home, Memory, and Motherhood, II. Planetary Climate and Body, and III. Domesticity and Architectural Enclosure in Taiwan. This thesis argues for the continuous repositioning of our bodies (ourselves and family) in response to external factors — climate, society, and power. It serves to revisit the past, document the present, and speculate on the future, enhancing our understanding of everyday life in Taiwan and exploring potential cultural adaptations. Each thread collects materials and offers distinct perspectives on the historical and contemporary shaping of Taiwanese identity and space. Together, they form portraits of the complexities and nuances of Taiwanese domesticity, resilience, and otherness, framed through the intimate and expansive lens of dampness and enclosure.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Cycles of aMaízing Things</title>
<link href="https://hdl.handle.net/1721.1/157341" rel="alternate"/>
<author>
<name>del Busto, Juan Manuel Chávez Fernández</name>
</author>
<id>https://hdl.handle.net/1721.1/157341</id>
<updated>2024-10-17T03:40:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Cycles of aMaízing Things
del Busto, Juan Manuel Chávez Fernández
Throughout this thesis, maíz becomes a trans-scalar agent of exchange across time, cultures, and territories. Maíz, as both a symbol and a subject, is intensely charged with tradition and disruption, operating within a jumbled feedback state that transcends myths and industry. The work situates my reading of the artwork Río Revuelto (1949) by the Mexican artist José Chávez Morado as a guiding framework to approach a kaleidoscopic entanglement of different narratives. Considering maíz under four different lenses (the cosmological, national identity, resistance, and the product), I argue for constant feedback among them across the re-transforming cycles of maíz. The crucial concern driving this exploration is how maíz and humans are ingrained into each other's systems — re-configuring methods, spaces, and forms of display. The display refers not only to maíz as a ‘product’ but as a continuous entity in transition, transforming and adapting to the social and cultural conditions where it circulates — whether through myth, ritual, portrayal, strategy of preservation, building typology, commodity, by-product, or history. The design approach is presented through performative artifacts that symbolize the systems through which maíz circulates. They are further represented in an essay film. Whether referencing myths, projections, displays, or products, the artifacts become mnemonic objects to think with — depicting the cycles of maíz as a world-building exercise. Maíz becomes the point that traces simultaneity in the history of humanity, representing a symbol eternally under construction. Acknowledging this monumental scale requires my work to be only a grain-sized glimpse of speculative potentials in design.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alive Scene: Participatory Multimodal AI Framework for Collective Narratives in Dynamic 3D Scene</title>
<link href="https://hdl.handle.net/1721.1/157340" rel="alternate"/>
<author>
<name>Cheng, Chi-Li</name>
</author>
<id>https://hdl.handle.net/1721.1/157340</id>
<updated>2024-10-17T03:17:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Alive Scene: Participatory Multimodal AI Framework for Collective Narratives in Dynamic 3D Scene
Cheng, Chi-Li
This thesis introduces "Alive Scene," an online participatory platform for recording dynamic 3D environments and building collective interpretations of objects, events, and atmospheres within them. For instance, a user can browse a recording of a room and describe objects or events to locate them; or select a time frame, adjust the camera angle, and add a comment to share a new narrative of the scene with others. Unlike traditional digital formats such as simple videos or 3D models, this platform is both three-dimensional and temporal, and its views are searchable using natural language sentences and sorted by relevance. By building the platform and testing it with human subjects, this thesis demonstrates that such a new participatory medium of dynamic 3D environments fosters communal knowledge and enhances the spatial understanding of individual users. Alive Scene produces rich, semantic-level communication among users, akin to the dynamic propagation of cultural memes. The Alive Scene System integrates two advanced techniques: 3D scene reconstruction using Gaussian splatting, and semantic linking of human perceptions through the Contrastive Language-Image Pretraining (CLIP) model. These methods are currently among the most popular and efficient. The platform continually enriches its collection of users' views and interpretations through interactions with this semantic AI system, enabling the archiving of user inputs and suggesting new avenues for exploring diverse perspectives. The streamlined interaction interface promotes user engagement and facilitates the discovery of related views and perceptions. The user test employs a dynamic 3D scene of a student lounge, recorded at four different times, and involves 20 participants generating a total of 235 inputs. Four types of interactive behaviors were observed regarding users' views and interpretations: Disagreement, Simple Agreement, Sharing Perception by adding comments, and Adjusting Views.
The analysis indicates evolutionary trends: Initially, users express disagreements and provide objective, general comments. As the platform gathers these inputs, a transition occurs where users begin sharing more subjective information and reinterpreting others' views. Eventually, users adjust camera angles when the captions are agreeable. Visualizations of this analysis illustrate that these dynamic behavioral changes facilitate the development of collective perception. Future investigations could benefit from incorporating more elaborate 3D scenes, additional recording times, and a larger number of participants.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond the Bioclimatic Chart: An Automated Simulation-Based Method for Assessing Natural Ventilation and Passive Design Potential</title>
<link href="https://hdl.handle.net/1721.1/157339" rel="alternate"/>
<author>
<name>Herb, Svenja</name>
</author>
<id>https://hdl.handle.net/1721.1/157339</id>
<updated>2024-10-17T04:09:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond the Bioclimatic Chart: An Automated Simulation-Based Method for Assessing Natural Ventilation and Passive Design Potential
Herb, Svenja
Technological advancements in the building industry have significantly transformed climate and comfort control in buildings. This allows for air conditioning in deserts and heating in the Arctic, ensuring occupant comfort. This innovation, however, has contributed to a homogenization in architectural designs globally, from the hot climates of Mumbai to the cold environments of Boston, and moderate settings like London. Such uniformity often overlooks local climatic conditions, resulting in increased energy consumption and elevated greenhouse gas emissions. Climate-responsive design, on the other hand, creates solutions that leverage local climates—such as through natural ventilation and optimal solar gain management—to reduce energy consumption. Depending on climate and program, the coordinated use of these passive design strategies may or may not lead to indoor thermal comfort conditions without the need for an air-conditioning system. There are two primary approaches to explore the passive design potential of a building during schematic design: the bioclimatic chart and building energy modeling (BEM). The former method is a key feature in building science textbooks and is solely based on widely available local weather data. It provides general design advice without requiring previous knowledge or the need to describe the building program. BEMs facilitate detailed testing of how a building is operated and how the above-listed passive design techniques can be combined to obtain the highest possible comfort conditions and energy savings. However, BEM has traditionally been more complex and time-consuming to use, as it requires significant knowledge of the underlying building physics and the numerical methods used to mimic them. This thesis evaluates the bioclimatic chart's accuracy in predicting overheating hours associated with various passive design strategies through comparison with BEM data. Furthermore, it introduces a new simulation-based approach called “ECOmpass”.
ECOmpass automates early-stage design simulations and offers design recommendations for passive strategies with just one click.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alternate Imaginaries for the Kinara: River Ravi’s Edge as a Threshold</title>
<link href="https://hdl.handle.net/1721.1/157338" rel="alternate"/>
<author>
<name>Khalil, Mahwish</name>
</author>
<id>https://hdl.handle.net/1721.1/157338</id>
<updated>2024-10-17T03:47:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Alternate Imaginaries for the Kinara: River Ravi’s Edge as a Threshold
Khalil, Mahwish
In the lower riparian landscape of Punjab, Pakistan, various communities confront the challenges of living within the active floodplain of river Ravi as it flows alongside the city of Lahore. These communities navigate the dissonances of the river’s edge—its Kinara, marked and molded by persistent colonial (mis)representations rooted in practices of erasure and division. Stepping away from historical depictions that have reduced the river to a mere resource for acquisition, this thesis engages with design and the oral tradition of storytelling, known as Qissa Khwani, to propose new modes of knowing, witnessing, and ultimately, cultivating alternative imaginaries for Ravi. This thesis seeks to illuminate the overlooked narratives of a river and its communities by drawing inspiration from, and centering the voices and legacies of, those most impacted by regressive depictions of a linear floodplain. It stages newer encounters and engagements with Ravi and its communities by stitching together stories of numerous community members, the dwellers, the boatmen, and the civil defense divers, actively defying and transforming the seemingly static Kinara—their home—through cultural and economic production. These pluralistic alternatives serve as a deliberate departure from the current large-scale, mega-urban development projects planned for the riverfront, which not only overlook the communities living along its banks but also employ idealized depictions of Ravi to attract capital. Finally, this thesis questions how the river's edge can be remapped to allow for the dismantling of top-down visions while addressing an urgency embodied within the shallow, receding flows of a polluted river, whose uncertain future remains contingent on distinct lines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disrupting Monocultural Tendencies through&#13;
Multimodal Montage</title>
<link href="https://hdl.handle.net/1721.1/157337" rel="alternate"/>
<author>
<name>Singha, Mrinalini</name>
</author>
<id>https://hdl.handle.net/1721.1/157337</id>
<updated>2024-11-12T18:32:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Disrupting Monocultural Tendencies through&#13;
Multimodal Montage
Singha, Mrinalini
This thesis contends with the pervasive impact of monocultural tendencies as manifested in the political, cultural, and media landscapes of contemporary India, particularly focusing on the unfolding context of 2024. Amidst an intensifying crisis marked by polarization, historical erasure, and the rise of hegemonic nationalism, this thesis posits art, particularly through the framework of ‘multi-modal montage,’ as an agent of political disruption for ‘redistributing the sensible.’ Tracing the aesthetic and political evolution of montage from its early 20th-century origins in Soviet cinema to its contemporary forms, the thesis outlines the transition from montage defined by collision and conflict to the soft, spatial, and interactive practices of figures such as Nam June Paik and Harun Farocki. It further investigates how ‘surface tension’ and ‘unquiet objects’ manifest within the multi-modal montage in the works of artists like Nalini Malani, Krzysztof Wodiczko, Shilpa Gupta, and Nida Sinnokrot.&#13;
&#13;
As an Indian artist, the author situates her own practice within this discourse, highlighting projects such as ‘The Whistleblower’ (2023), a tangible archive within an everyday object, and ‘A Mystery for You’ (2023-24), a fact-checking game that merges a tangible interface with a large language model (LLM). These works exemplify the thesis's argument that artistic interventions can critically challenge and reframe dominant sociopolitical narratives, offering new perspectives and resistances against monocultural hegemonies. Extending this analysis, the author discusses her exhibition ‘Forensic Artifacts of a Democracy in Crisis’ (2023) as an operative space. Through a curated assemblage of works, the exhibition provided a physical space for interaction, reflection, and conversation, enabling audiences to engage viscerally with the themes of the thesis. In all, this thesis argues for the critical role of art in challenging memory and forgetting, from fabricated histories to the fall and rise of monuments, and from the polarization of media to the flattening of identities, echo chambers, absences, and grand narratives.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing (with) Trees: Active Agents in Architectural Production</title>
<link href="https://hdl.handle.net/1721.1/157335" rel="alternate"/>
<author>
<name>Garinois, Laura-India</name>
</author>
<id>https://hdl.handle.net/1721.1/157335</id>
<updated>2024-10-17T03:37:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing (with) Trees: Active Agents in Architectural Production
Garinois, Laura-India
This thesis embarks on a multifaceted exploration of the relationship between urban trees, architectural representation, and the legal framework governing their existence, with a particular focus on tree hearings in Boston as a platform for this study. Against the backdrop of capitalist influences shaping urban landscapes, standardized modes of representation often prioritize economic interests, relegating urban trees to two-dimensional depictions in architectural drawings. Such representations obscure the rich complexity and ecological significance of trees, thereby shaping design choices that threaten their vitality. Amidst these challenges, Massachusetts has initiated efforts towards granting public trees legal recognition, providing a foundation upon which this study builds to advocate for further improvements in tree rights and protections. This encompasses tree hearings, where developers and residents seek permission for the removal of healthy public trees, involving municipal authorities, tree wardens, and local communities. Through extensive dialogue with experts and stakeholders dedicated to this cause, the thesis identifies loopholes within existing laws and institutional frameworks, leading to the development of a tree appraisal system that employs alternate representations of trees, encouraging new ways of valuing their role within architectural thinking and production. The exploration examines how a more nuanced collaboration with trees in design processes can enhance the value of architecture, and how design can in turn contribute to the protection of trees. Ultimately, the goal is to enrich tree hearing conversations by recognizing them as reflections of a larger climate conversation around trees and nature. By intervening in their legal site and imagination, the thesis fosters a more inclusive dialogue that transcends the binary decision of whether to cut down a tree or not.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What is Ecology?</title>
<link href="https://hdl.handle.net/1721.1/157334" rel="alternate"/>
<author>
<name>James, Aubrie R. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157334</id>
<updated>2024-10-17T03:03:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">What is Ecology?
James, Aubrie R. M.
There are many ways to try to make sense of that which is. Ecology, which deals with organisms in relation to their environments, makes sense of that which is through the study of relations among and between organisms and their environments. Modern ecology is predominantly understood as a scientific enterprise. However, science as a methodology is too often aligned and entangled with extractive, capitalist logics: the cycle of enclosure–dispossession–scientific practice–imperial expansion not only undergirds and defines the ecological crises of our times but forecloses our ability to conceive of the diverse ways in which life is configured. For ecology, this is a predicament of ethics, yes, but also of a clear-eyed understanding of what is (and our relationship to it). The urgent question for ecologists, given this predicament, is how to break out of this cycle. This thesis explores the potential of building an artistic practice to question the forms of ecology: how it is conducted, how it is communicated, and what it produces. Drawing inspiration variously from feminist, postcolonial, and ecosocial art, media theory, and philosophy, this thesis probes the limits of ecology under the suspicion that the point of leverage for change is to differently enact how we think, make, and do in relation to the world in, around, and constituting us.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Much Does It Really Cost? A Dynamic Approach to Building Retrofit Costs for Decarbonization Pathways</title>
<link href="https://hdl.handle.net/1721.1/157333" rel="alternate"/>
<author>
<name>Kirkeby, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/157333</id>
<updated>2024-10-17T03:36:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Much Does It Really Cost? A Dynamic Approach to Building Retrofit Costs for Decarbonization Pathways
Kirkeby, Amanda
Carbon emissions are driving the planet out of its delicate Goldilocks balance. Evidence and the call to action date back to 1896 with Swedish scientist Svante Arrhenius and his seminal paper that first predicted the effect of carbon dioxide on global temperatures. With the Intergovernmental Panel on Climate Change (IPCC) goal of global net zero emissions by 2050, the urgency is stronger than ever. An ever-growing number of municipalities are setting pledges to do their part, often without a concrete plan. With buildings accounting for 40% of total global emissions, building retrofits are a key component of these pathways to zero carbon. Urban building energy modeling (UBEM) research efforts have developed physics-based decision-making tools to define city-scale technology pathways to reach climate goals. However, a crucial question in making these pathways actionable has been largely neglected: how much does it really cost? The scarcity of contemporary cost data and methods for cost prediction at the urban scale makes this question difficult, and further questions around equitable incentive programs nearly impossible to answer. This work demonstrates the concept and relevance of implementing a dynamic cost model in the UBEM context. Several cost models are applied to a case study of 13,000 residences in Oshkosh, WI, to predict costs for homeowners to retrofit their homes with three different upgrade packages. A willingness-to-pay analysis is then performed with upfront cost predictions from different models, illustrating the impact a more robust cost model may have in providing more realistic predictions of an upgrade strategy’s techno-economic success. Through its compatibility with existing UBEM frameworks and local input costs, the dynamic building upgrade cost model holds the potential to further support municipalities in developing economically feasible building retrofit strategies for decarbonization pathways.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pathways to Net Zero: Financing Strategies For Low-Income Homeowners</title>
<link href="https://hdl.handle.net/1721.1/157332" rel="alternate"/>
<author>
<name>Moore, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/157332</id>
<updated>2024-10-17T03:48:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pathways to Net Zero: Financing Strategies For Low-Income Homeowners
Moore, Lauren
Housing retrofits are crucial for accomplishing national housing sector decarbonization goals. Single-measure retrofit improvements are not sufficient for low-income homes, which are often in less-than-optimal condition and are consequently uncomfortable and expensive to operate. Comprehensive retrofit approaches are necessary to achieve the energy efficiency targets for the aging housing stock. Historical educational and economic barriers pose challenges for incentivizing low-income homeowners to retrofit their homes. Proactive strategizing that considers both educational and economic factors is needed to increase retrofit adoption amongst these groups. Policy makers need an understanding of retrofit impact for more effective resource allocation, and homeowners need better incentives and tools to conceptualize the benefits, time commitment, and cost associated with deep retrofits. To address this problem, we present a retrofit pathway modeling framework to accurately predict the time required for a homeowner to achieve comprehensive retrofits. Taking retrofit cost and annual energy savings into account, we propose a new government-sponsored and government-led financing program, inspired by the successful 401(k) retirement plans and 529 savings programs, which offers either a 2x or 3x match to the annual amount the homeowner commits to saving each year, ensuring that low-income homeowners are accounted for in the journey to building-sector decarbonization by 2050 and beyond. For a case study home in the Grove Park neighborhood of Atlanta, Georgia, hot water heat pump retrofits are the most impactful on annual building energy use, but retrofits with low cost and short payback periods, such as installing LED light fixtures and low-flow showerheads, have the largest potential for shortening the years required to achieve comprehensive retrofits and are therefore recommended for policy makers to incentivize in the community.
Strategic financing can be used to ensure financially feasible pathways for homeowners with varying annual budget amounts. For the example home, the program allows homeowners who invest only $50 annually to achieve comprehensive retrofits four times faster than if they only utilize existing incentive programs. Individual building energy simulation combined with socioeconomic analyses is needed to meet the needs of diverse low-income communities across the United States.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Liquid to Stone: Reimagining the design of concrete structures for reuse</title>
<link href="https://hdl.handle.net/1721.1/157330" rel="alternate"/>
<author>
<name>Donovan, Inge</name>
</author>
<author>
<name>Schnitzler, Jenna</name>
</author>
<id>https://hdl.handle.net/1721.1/157330</id>
<updated>2024-10-17T03:01:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Liquid to Stone: Reimagining the design of concrete structures for reuse
Donovan, Inge; Schnitzler, Jenna
Every year, 360 million metric tons of concrete construction waste are sent to landfill in the United States, in large part originating from the demolition of economically obsolete buildings. Meanwhile, global demand for new concrete is accelerating – in 2021, the production of new concrete was responsible for up to 9% of global CO2e emissions, and our dependence on concrete is only expected to rise over the next 50 years.&#13;
Concrete’s ubiquity is reinforced by its liquidity; it is simultaneously invisible and ever-present, undergirding global modernization through its cheap, local nature and its ability to take on any form in short order. However, design with concrete has remained mostly unchanged, with inefficient, irreversibly fused structures cast in place to meet quickly changing programmatic needs, few of which survive longer than 30-50 years. Due to its careless application, concrete is perceived as a low-value material, and is therefore used wastefully, discarded quickly, and usually downcycled. The monolithic and inflexible nature of reinforced concrete structures perpetuates concrete’s culture of obsolescence and demolition.&#13;
To meet emissions targets and demand for building, we need to close the loop by developing a circular economy of structural materials. Instead of reusing salvage materials that have already entered the waste stream, this thesis confronts the design of new concrete structures directly, presenting the design of and methodology behind Pixelframe, a precast kit of parts for reconfigurable concrete structures. In a future where buildings are increasingly seen as stockpiles for subsequent reuse, the reinvention of concrete structures is an imperative that presents an opportunity for a new tectonic – concrete is no longer a liquid poured once and cured on site, but instead is a material more akin to stone, retaining value across multiple lifespans.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric PAINTOVER: Generating Design Models via Image Encoders and Latent Trajectories</title>
<link href="https://hdl.handle.net/1721.1/157329" rel="alternate"/>
<author>
<name>Tas, Demircan</name>
</author>
<id>https://hdl.handle.net/1721.1/157329</id>
<updated>2024-10-17T03:49:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Parametric PAINTOVER: Generating Design Models via Image Encoders and Latent Trajectories
Tas, Demircan
Design is an iterative process in which physical or virtual prototypes are created, rendered, evaluated, and modified repeatedly. Sketches and direct manipulations are made on the rendered or fabricated mediums to create and communicate intended changes. Parametric design is a prominent paradigm in design and architecture in which hand-crafted functions map input parameters to a design space to rapidly generate samples. Direct modifications often lead to novel states outside the design space of a parametric model. Moreover, parametric models are not cyclic: their input and output spaces are not interchangeable without human intervention. Models must be reconfigured to accommodate out-of-domain changes, preventing parametric design tools from being integrated into the early phases of design where changes are commonplace. We propose the latent spaces of large pre-trained auto-encoders as shared design spaces for translating states of design among mediums and dimensions. We implement rendering and image encoding to use images as an interface between the outputs and inputs of the model, enabling users to make direct modifications by painting over. We use sketches, renderings, and 3D models for sampling latent spaces. We share experiment results acquired through linear interpolation and a custom spline implementation in latent spaces. We present samples from found latent trajectories matched to samples from ground-truth parametric design models. We find that trajectories exist in latent spaces that approximate axes in parameter spaces. Using images and 3D models as input and output, we provide a cyclic, software-agnostic tool for design generation with parameter approximation capabilities that generalize. We provide findings from experiments and present a software repository for Parametric PAINTOVER, including our sketch augmentation model Inverse Drawings and our many-dimensional latent spline implementation L-NURBS.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Machine Connectivity in Low-Volume High Variety Manufacturing Line</title>
<link href="https://hdl.handle.net/1721.1/157327" rel="alternate"/>
<author>
<name>Pal, Kanishk</name>
</author>
<id>https://hdl.handle.net/1721.1/157327</id>
<updated>2024-10-17T03:09:15Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Implementation of Machine Connectivity in Low-Volume High Variety Manufacturing Line
Pal, Kanishk
This thesis provides a comprehensive analysis and implementation plan for enhancing machine connectivity within a manufacturing facility at SLB. The study investigates the existing limitations of the facility's connectivity infrastructure and proposes an advanced connectivity software suite as a solution, presenting a compelling business case for its implementation. The software’s scope involved DNC (direct numerical control), allowing for line-by-line feeding of CNC code to machine controllers, as well as machine data collection for real-time shop floor monitoring. The research emphasizes the development and implementation of an advanced network infrastructure designed to improve efficiency, security, and data handling capabilities. There is discussion regarding cybersecurity practices, specifically those related to industrial control systems that leverage CNC machining processes. The software implementation process is detailed, highlighting the necessary steps and information required for successful integration. These include: 1) securing a connection to critical CNC machine controllers, 2) acquiring hardware, including a local server and network switch, 3) bringing up the server through remote imaging and installing standard monitoring tools, and 4) implementing software on edge devices for CNC file transfer and machining data collection. Additionally, the thesis discusses the limitations encountered during implementation and outlines future steps to address these challenges.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capitalization of electric railways</title>
<link href="https://hdl.handle.net/1721.1/157307" rel="alternate"/>
<author>
<name>Zee, J. Zohn.</name>
</author>
<author>
<name>Zi, Su.</name>
</author>
<id>https://hdl.handle.net/1721.1/157307</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1915-01-01T00:00:00Z</published>
<summary type="text">Capitalization of electric railways
Zee, J. Zohn.; Zi, Su.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1915
</summary>
<dc:date>1915-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dietary fatty acids, prostaglandins and infectious and endotoxin challenges in guinea pigs</title>
<link href="https://hdl.handle.net/1721.1/157304" rel="alternate"/>
<author>
<name>Mascioli, Edward A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157304</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Dietary fatty acids, prostaglandins and infectious and endotoxin challenges in guinea pigs
Mascioli, Edward A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1984; Vita.; Includes bibliographical references.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telecommunication industry in Japan : comparative study of telephone market history between U.S. and Japan.</title>
<link href="https://hdl.handle.net/1721.1/157302" rel="alternate"/>
<author>
<name>Yamanouchi, Ichiro.</name>
</author>
<id>https://hdl.handle.net/1721.1/157302</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Telecommunication industry in Japan : comparative study of telephone market history between U.S. and Japan.
Yamanouchi, Ichiro.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1976; Bibliography: leaves 153-155.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steady spinning of synthetic silk-like fibers and transient filament stretching of semi-dilute and concentrated polymeric fluids</title>
<link href="https://hdl.handle.net/1721.1/157300" rel="alternate"/>
<author>
<name>Brauner, Octavia Flora, 1975-</name>
</author>
<id>https://hdl.handle.net/1721.1/157300</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">Steady spinning of synthetic silk-like fibers and transient filament stretching of semi-dilute and concentrated polymeric fluids
Brauner, Octavia Flora, 1975-
Thesis: S.M., Massachusetts Institute of Technology, Department of Chemical Engineering, 2001; Includes bibliographical references (p. 111-115).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inosine-containing mRNA induces an innate immune response and is translated with lower efficiency</title>
<link href="https://hdl.handle.net/1721.1/157259" rel="alternate"/>
<author>
<name>Bao, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/157259</id>
<updated>2024-10-10T03:01:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inosine-containing mRNA induces an innate immune response and is translated with lower efficiency
Bao, Caroline
Inosine is a nucleoside formed by deamination of adenosine by adenosine deaminases acting on RNA (ADAR). ADAR editing activity is known to play a key role in modulating the host cell’s immune response to RNA. Here, we specifically study the effect of the presence of inosine in RNA by generating an inosine-containing reporter mRNA sequence. We also generated mRNA sequences that contained pseudouridine, an RNA modification known to decrease immune response to in vitro transcribed (IVT) mRNA and elevate the expression of the encoded gene, to examine the interaction between pseudouridine and inosine modifications. &#13;
While A-to-I editing activity is required for endogenous RNA to evade the innate immune response, our results show that inosine-containing IVT RNA induces an elevated immune response and is translated at a lower efficiency. This effect is dominant over pseudouridine modification, such that mRNAs containing both pseudouridine and inosine modifications still potently activate the innate immune response and exhibit a loss of translation. These results point to the potent immunostimulatory effects of inosine in transfected IVT mRNA. This elevated immune response is likely receptor-specific, and we have demonstrated that it cannot be attributed to the sensors RIG-I, MDA5, TLR3, or PKR.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Technology Adoption Process in Accounting and Finance Using Systems Thinking Methods</title>
<link href="https://hdl.handle.net/1721.1/157257" rel="alternate"/>
<author>
<name>Chun, Albert Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157257</id>
<updated>2024-10-10T04:07:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Improving Technology Adoption Process in Accounting and Finance Using Systems Thinking Methods
Chun, Albert Y.
In the era of digital transformation, Accounting and Finance (A&amp;F) functions face the challenge of making well-informed decisions about which technologies to adopt, which processes to prioritize, and why. These decisions require stakeholders to carefully evaluate available options, assess their implications and tradeoffs, and align diverse preferences to make well-supported investment choices. Conducting this process in a siloed and unstructured manner can lead to inefficiencies.&#13;
This study explores the application of Systems Thinking (ST) and Systems Engineering (SE) methods, developing an integrated framework that combines Rich Picture, Object-Process Diagram (OPD), Design Structure Matrix (DSM), and Multi-attribute Tradespace Exploration (MATE) to enhance the technology adoption decision-making process within A&amp;F functions. The focus is on Internal Audit (IA) as a case study for a simplified model and demonstration. While empirical data collection and hypothesis testing were not conducted due to data and time constraints, qualitative insights were gathered from industry practitioners.&#13;
Key findings suggest that the integrated framework can potentially reduce the time and effort needed to reach technology adoption decisions. Providing a structured and comprehensive approach ensures that the decision-making process is more holistic, unbiased, and quantifiable. This can also offer post-implementation benefits, as the technologies adopted align better with the organization’s requirements and preferences, resulting in improved efficiency and effectiveness.&#13;
This study extends the practical application of ST methodologies into A&amp;F. By presenting this integrated framework, it contributes to the foundation for future research on applying ST to improve the technology adoption decision-making in A&amp;F.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Red Teaming Language Conditioned Robotic Behavior</title>
<link href="https://hdl.handle.net/1721.1/157255" rel="alternate"/>
<author>
<name>Abhangi, Nishant</name>
</author>
<id>https://hdl.handle.net/1721.1/157255</id>
<updated>2024-10-10T04:00:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Red Teaming Language Conditioned Robotic Behavior
Abhangi, Nishant
Natural language instruction-following capabilities are important for robots executing tasks specified by human commands. Hence, many language-conditioned robots have been trained on a wide variety of datasets with tasks annotated by natural language instructions. However, these datasets are often limited in size, so the distribution and nature of the instructions given by real-world users may differ from those in the datasets. This makes it unclear how these robots will perform in real-world environments. Hence, a large-scale evaluation with diverse instructions is needed to benchmark the performance of these robots. However, using humans to collect more annotations is prohibitively expensive. We show that recent large language models provide a scalable and inexpensive way to perform such an evaluation. Moreover, we find a large performance drop when robots are evaluated on this larger set of instructions. We also show that we can use different prompts to LLMs to control properties such as the diversity of the generated instructions.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Instrumenting Observability in a Decentralized Microservice Architecture</title>
<link href="https://hdl.handle.net/1721.1/157254" rel="alternate"/>
<author>
<name>Liu, Helen X.</name>
</author>
<id>https://hdl.handle.net/1721.1/157254</id>
<updated>2024-10-10T03:01:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Instrumenting Observability in a Decentralized Microservice Architecture
Liu, Helen X.
Software systems have increased in complexity over time, and with this increased complexity has come an increased need to keep these systems organized and functioning efficiently. Observability is closely tied to ensuring this correct and effective system function. Without system monitoring, it is difficult to pinpoint when errors occur and correct them at their sources. Monitoring also helps developers understand a system from the outside, allowing them to ask questions about the system’s state and function without needing to know the details of its internal behavior. While there are existing solutions for observability frameworks, these solutions do not target microservice architectures, which are increasingly used with expansive code bases, such as those likely to be employed in an industry environment. They also require extensive configuration to be fully integrated with a pre-existing system. As such, the challenge lies primarily in adapting observability solutions to a decentralized microservice architecture found in an industry setting. The existing solutions also come with advantages and disadvantages for different situations, so they are often incomplete in addressing an entire system’s needs. The integrated system created here satisfies our system’s requirements for a consolidated observability platform while also enabling future customizations, thereby allowing problems to be identified more quickly and proactively.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SongGen: Framework for Controllable AI Song Generation through Interactive Songwriting and Artist Emulation</title>
<link href="https://hdl.handle.net/1721.1/157249" rel="alternate"/>
<author>
<name>Arora, Ajay</name>
</author>
<id>https://hdl.handle.net/1721.1/157249</id>
<updated>2024-10-10T03:36:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">SongGen: Framework for Controllable AI Song Generation through Interactive Songwriting and Artist Emulation
Arora, Ajay
We propose SongGen, an AI-based song-writing and song co-creation framework. Building upon existing AI tools like Suno.ai, SongGen features a chat interface with a trained AI songwriter assistant, emulating the traditional back-and-forth of human collaboration. The system offers enhanced capabilities for greater control over the songwriting process, including concept ideation, lyric generation and editing, real-time song generation, and granular instrumental specification. Comparative evaluations demonstrate SongGen’s superiority in key metrics such as steerability, expressiveness, personalization, and user satisfaction. We also present an extension of the SongGen framework for artist emulation and on-demand song generation. Future development aims to incorporate voice-based interaction and real-time voice conversion, enabling music artists to guide fans in creating personalized songs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hofstadter Physics and Composite Fermionic Phase in Moiré Systems</title>
<link href="https://hdl.handle.net/1721.1/157248" rel="alternate"/>
<author>
<name>Ding, Shuhan</name>
</author>
<id>https://hdl.handle.net/1721.1/157248</id>
<updated>2024-10-10T04:07:10Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Hofstadter Physics and Composite Fermionic Phase in Moiré Systems
Ding, Shuhan
This thesis explores the intricate electronic phenomena in Moiré systems, particularly focusing on twisted bilayer transition metal dichalcogenides (TMD). These systems, with their unique superlattice structures and strong electron correlations, provide fertile ground for investigating novel quantum states. A key focus is on understanding Hofstadter physics and the emergence of composite fermion phases in these materials. In this work, we first develop a continuum model to describe the low-energy electronic structure of twisted TMD bilayers, emphasizing the role of the Moiré superlattice in modifying the band structure and introducing non-trivial topological properties. We analyze the resulting Hofstadter spectrum under an external magnetic field, revealing the rich fractal pattern and the impact of valley polarization induced by the magnetic field. Building on this framework, we delve into the concept of composite fermions, particularly in the context of the fractional quantum Hall effect (FQHE). We extend Jain’s composite fermion theory and the Chern-Simons field theory to Moiré TMD systems, proposing the existence of an anomalous composite fermion liquid state at half-filling. Through a detailed mean-field analysis, we demonstrate that this state, characterized by a strong valley polarization and an effective magnetic field arising from Berry curvature, could be energetically favored under certain conditions. Our findings suggest that Moiré TMDs are promising candidates for realizing fractional Chern insulators and other exotic quantum phases, opening up new avenues for experimental exploration and potential applications in quantum technology.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Condensed Buck-Boost Switched Capacitor Converter for Efficient Voltage Distribution in Electrified Aircraft</title>
<link href="https://hdl.handle.net/1721.1/157247" rel="alternate"/>
<author>
<name>Aron, Aklilu</name>
</author>
<id>https://hdl.handle.net/1721.1/157247</id>
<updated>2024-10-10T03:30:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Condensed Buck-Boost Switched Capacitor Converter for Efficient Voltage Distribution in Electrified Aircraft
Aron, Aklilu
Switched capacitor converters are a category of power electronic converters that harness the significantly improved energy density of capacitors, as opposed to that of their conventional, inductor-based counterparts, to reap benefits in terms of efficiency, size, and utilization. This work presents the analysis, design, construction, and evaluation of one such converter, inspired by the flying capacitor multilevel topology and referred to as a condensed buck-boost converter. This converter is designed and built for an application as the interface between the battery voltage and the DC bus on partially electrified aircraft, where the advantages of its ability to step voltage up and down in an efficient and lightweight fashion can be fully realized. To implement this topology in hardware for the first time, this work utilizes new monolithic, bidirectional GaN FETs, whose reverse-voltage-blocking capabilities open new possibilities for a converter design that wastes less power and occupies less board area. This converter is compared with others that perform similar functions to showcase the benefits that this topology has to offer.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cellulose Nanofoams: 3D Printing and Characterization</title>
<link href="https://hdl.handle.net/1721.1/157245" rel="alternate"/>
<author>
<name>Padia, Vineet</name>
</author>
<id>https://hdl.handle.net/1721.1/157245</id>
<updated>2024-10-10T03:44:40Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cellulose Nanofoams: 3D Printing and Characterization
Padia, Vineet
In recent years, the advancement in cellulosic nanofoams has been considerable. Yet, their customization potential for diverse application requirements has been constrained by reproducibility challenges. Our research, therefore, focused on two primary objectives: enhancing the thermal regulation capabilities and mechanical properties of cellulose nanofibrils (CNF) nanofoams, and developing a reproducible methodology for printing customized three-dimensional (3D) structures using direct-ink-write (DIW) technology and molding.&#13;
&#13;
We developed composite nanofoams using TEMPO-modified cellulose nanofiber (TCNF). The resultant composite nanofoams showcased remarkable properties such as ultra-low thermal conductivity, low density, outstanding flexibility, and infrared shielding capabilities.&#13;
&#13;
In a bid to create robust and environmentally friendly nanofoams, we employed a crosslinking process with CaCl2. The crosslinked nanofoams were extraordinarily lightweight yet boasted superior mechanical properties, significantly amplified by the crosslinker. Remarkably, these freeze-dried TCNF/CaCl2 nanofoams maintained their form and demonstrated admirable flexibility, even when subjected to weights exceeding thousands of times their own. Furthermore, transient characterization confirmed their excellent thermal insulation capabilities.&#13;
&#13;
In conclusion, our research has pioneered the fabrication of sustainable, high-stability cellulose nanofoams. We have significantly enhanced the thermal management capabilities and mechanical performance of these nanofoams, marking a remarkable advancement in the field. The demonstrated sustainability, biocompatibility, ultra-light weight, high porosity, and deformability of the resultant nanofoams suggest considerable potential for diverse applications, including thermal insulation, shock and vibration damping, as well as tissue engineering.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Machine Connectivity Guidelines for Production Floor</title>
<link href="https://hdl.handle.net/1721.1/157244" rel="alternate"/>
<author>
<name>Sehnawi, Kenan Hayel</name>
</author>
<id>https://hdl.handle.net/1721.1/157244</id>
<updated>2024-10-10T03:01:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development of Machine Connectivity Guidelines for Production Floor
Sehnawi, Kenan Hayel
This thesis introduces and uses a standardized method for assessing machine connectivity at manufacturing facilities and develops a roadmap for an organization looking to implement connectivity at its facilities. As technology rapidly advances and Industry 4.0 takes hold of manufacturing worldwide, it is essential for manufacturing companies to utilize the latest technology to maintain a competitive advantage by optimizing operations, improving productivity, and increasing throughput. In this work, an overview of machine connectivity and its benefits is presented, and the technologies and security measures used for connectivity are explored. Upon compilation of this information, a comprehensive rubric was developed with six weighted connectivity criteria, each scored from 0 (no progress) to 4 (fully complete), from which a total connectivity score can be computed. The rubric serves as a guiding tool for gauging a manufacturing facility’s level of maturity with regard to connectivity, and it helps identify areas of need both within a facility and within the organization as a whole. The connectivity levels of six different manufacturing facilities were assessed using the rubric. The results were compiled to understand the development of connectivity at different facilities across the organization. The learnings from this analysis are used to develop guidelines as the organization continues its push towards full connectivity across all of its facilities. The next steps in this initiative are to: 1) utilize the developed rubric to assess connectivity at all of the organization's manufacturing facilities, 2) identify the facilities most in need of resources in order to plan and execute connectivity, and 3) encourage collaboration between facilities to expedite the connectivity implementation process.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cycle Time Reduction for CNC Machining Workcells in High-Mix Low-Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/157243" rel="alternate"/>
<author>
<name>Sun, Brandon Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/157243</id>
<updated>2024-10-10T04:03:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cycle Time Reduction for CNC Machining Workcells in High-Mix Low-Volume Manufacturing
Sun, Brandon Christopher
The demand for the product under investigation exceeds the available manufacturing capacity, with the CNC milling workcell identified as the bottleneck operation. This research, conducted in an active, high-mix, low-volume production environment, focuses on evaluating and implementing improvements to CNC machining parameters to enhance the workcell's capacity. Key areas of investigation include machining speeds and feeds, depth of cut, machine settings, toolpath strategies, stepover percentages, and alternative tooling. The study specifically targeted the initial roughing operation, which uses a feed mill and is the longest milling process. Addressing the challenges of high mix and low volume, the research successfully optimized machining and CNC programming parameters, reducing total machining cycle times by 25% and resulting in a 33% increase in throughput. Additionally, the methodologies and findings from this work have provided a framework for implementing further milling process improvements outside of the roughing operation, demonstrating their applicability to similar production scenarios.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Execution of a Testing Strategy for Omnidirectional Wheels</title>
<link href="https://hdl.handle.net/1721.1/157242" rel="alternate"/>
<author>
<name>Donnellan, Michael J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157242</id>
<updated>2024-10-10T03:10:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development and Execution of a Testing Strategy for Omnidirectional Wheels
Donnellan, Michael J.
Omnidirectional wheels enable robots to achieve holonomic motion; however, this often comes at the cost of increased rolling resistance compared to traditional caster wheels. The rolling resistance in omnidirectional wheels is higher than in many other wheels due to several factors, including an irregular tread shape, material compliance, and friction in the bushing-like cross rollers during lateral motion. Testing standards exist for characterizing the rolling resistance, compressive strength, and other attributes of commonly used wheels such as caster wheels. However, there are no comprehensive testing standards or research that broadly characterize the performance of omnidirectional wheels. Here, test methods are described for characterizing the load relaxation, stiffness, and rolling resistance of omnidirectional wheels, and the results from these tests are presented. Test apparatuses for static loading and rolling resistance were created. Test results were analyzed to identify the factors most important in determining the ultimate compressive strength under static loading and the rolling resistance coefficient across an array of omnidirectional wheels; the results indicate that wheel manufacturing methods and materials are the most important factors determining these responses.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior</title>
<link href="https://hdl.handle.net/1721.1/157241" rel="alternate"/>
<author>
<name>Lee, Eunhae</name>
</author>
<id>https://hdl.handle.net/1721.1/157241</id>
<updated>2024-10-10T04:08:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior
Lee, Eunhae
This thesis investigates the psychological factors that influence belief in AI predictions, comparing them to belief in astrology- and personality-based predictions, and examines the "personal validation effect" in the context of AI, particularly with Large Language Models (LLMs). Through two interconnected studies involving 238 participants, the first study explores how cognitive style, paranormal beliefs, AI attitudes, and personality traits impact perceptions of the validity, reliability, usefulness, and personalization of predictions from different sources. The study finds a positive correlation between belief in AI predictions and belief in astrology- and personality-based predictions, highlighting a "rational superstition" phenomenon where belief is more influenced by mental heuristics and intuition than by critical evaluation. Interestingly, cognitive style did not significantly affect belief in predictions, while paranormal beliefs, positive AI attitudes, and conscientiousness played significant roles. The second study reveals that positive predictions are perceived as significantly more valid, personalized, reliable, and useful than negative ones, emphasizing the strong influence of prediction valence on user perceptions. This underscores the need for AI systems to manage user expectations and foster balanced trust. The thesis concludes with a proposal for future research on how belief in AI predictions influences actual user behavior, exploring it through the lens of self-fulfilling prophecy. Overall, this thesis enhances understanding of human-AI interaction and provides insights for developing AI systems across various applications.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Emissions and Costs of Geologic Hydrogen: An Integrated Lifecycle Emissions and Techno-economic Approach</title>
<link href="https://hdl.handle.net/1721.1/157240" rel="alternate"/>
<author>
<name>Blackford, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/157240</id>
<updated>2024-10-10T04:10:06Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Quantifying Emissions and Costs of Geologic Hydrogen: An Integrated Lifecycle Emissions and Techno-economic Approach
Blackford, Timothy
In the pursuit of sustainable energy solutions, this thesis explores the lifecycle emissions and economic feasibility of geologic hydrogen production. This research extends Brandt's 2023 study of 'prospective' lifecycle assessment (LCA), enhancing the underlying open-source LCA model used in this work and adding a preliminary techno-economic analysis (TEA). The findings demonstrate that geologic hydrogen developments should have emissions intensities that compare favourably to all other hydrogen production pathways. The value of lifetime emissions intensity for Brandt’s Baseline case is estimated at 0.40 kgCO2e/kgH2, representing an increase of ~6% over Brandt’s estimation. The study also highlights the potential for geologic hydrogen to achieve competitive levelized costs (estimated at $1.45/kg), making it a promising candidate in the hydrogen economy. It finds that to achieve the best possible emissions and economic results, proponents of geologic hydrogen developments should seek to maximise the productivity of each well. It also studies the impact of the United States regime of production tax credits for hydrogen, finding that the fivefold increase in the magnitude of credits for meeting employment conditions is generally more impactful than lowering emissions intensity. The thesis underscores the importance of continued refinement of LCA and TEA models to understand geologic hydrogen resources better and ensure they are developed appropriately.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Shipyard to Sea: A Flexible System Design Approach to the Transition from Shipbuilding to Operations: A Case Study Using the United States Coast Guard Offshore Patrol Cutter Program</title>
<link href="https://hdl.handle.net/1721.1/157239" rel="alternate"/>
<author>
<name>Kime, Jeremy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157239</id>
<updated>2024-10-10T03:49:53Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Shipyard to Sea: A Flexible System Design Approach to the Transition from Shipbuilding to Operations: A Case Study Using the United States Coast Guard Offshore Patrol Cutter Program
Kime, Jeremy A.
The United States Coast Guard faces significant challenges transitioning new ships from shipbuilding to operations. Historically, the low volume and irregular pace of major ship deliveries, combined with diverse homeporting factors, have resulted in anomalous post-delivery requirements. Today, a growing fleet, personnel shortages, and sweeping technological advancements are amplifying the complexity of post-delivery activities. At the same time, the Coast Guard is engaged in its largest shipbuilding effort since World War II, with seven acquisition programs scheduled to deliver 134 new ships over the next 15 years. In light of these factors, the current approach, which places significant strain on crews, escalates costs, and delays operational use of the Coast Guard’s newest assets, warrants thorough examination. This thesis examines the issue through case study analyses using the Offshore Patrol Cutter (OPC) Program. The Coast Guard’s challenges are driven by three primary factors: the inherent uncertainty in ship construction, the sociotechnical system dynamics associated with organizational management of pre-commissioning crews, and the ongoing evolution of technology. To address these challenges, this analysis employs an integrated approach, synthesizing principles and techniques from Architecting Innovative Enterprise Strategy (ARIES), Flexible Engineering Design (FED), and System Design and Management (SDM). This systems thinking approach aims to develop opportunities to reduce costs, improve schedules, and optimize workforce outcomes. The analysis recommends a three-phased strategy that could yield cost savings on the order of $400 million over the OPC Program’s lifespan, significantly mitigate risks associated with unforeseen shipbuilding developments, and enhance organizational outcomes regarding workforce, operational availability, and life cycle sustainment.
The staffing of pre-commissioning crews is pinpointed as a pivotal discretionary event that triggers an exponential increase in system complexity and a surge in scope by introducing interdependent yet organizationally disparate requirements. Consequently, major personnel activities are decoupled from highly variable ship construction milestones. This paves the way for a paradigm shift from fixed to flexible approaches, replacing fragmented, ad hoc approaches with a flexible system architecture capable of continuous enterprise learning and improvement. Dynamic post-delivery activities are reimagined as a continuous business line, to professionalize the transition of new ships from shipbuilding to operations.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Intangible Reverberations Following Mergers &amp; Acquisitions</title>
<link href="https://hdl.handle.net/1721.1/157238" rel="alternate"/>
<author>
<name>Warren, Laura N.</name>
</author>
<id>https://hdl.handle.net/1721.1/157238</id>
<updated>2024-10-10T03:05:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Intangible Reverberations Following Mergers &amp; Acquisitions
Warren, Laura N.
This study preliminarily investigates how merger and acquisition (M&amp;A) activities affect employees as stakeholders of the company system - specifically in the areas of leadership, communication, company direction, project autonomy, and path for career growth.&#13;
Interviews of 14 employees supporting the oil and gas industry were conducted to determine the effect (if any) that M&amp;A activities had on their careers and any similarities in their experiences. This data was evaluated against research completed by Steigenberger &amp; Mirc and Schweizer &amp; Patzelt.&#13;
While the hypotheses presented cannot be proven, recommendations for future research are provided to gain and evaluate additional information.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Evaluation of Underwater Semantic SLAM</title>
<link href="https://hdl.handle.net/1721.1/157234" rel="alternate"/>
<author>
<name>Song, Thomas Jeongho</name>
</author>
<id>https://hdl.handle.net/1721.1/157234</id>
<updated>2024-10-10T03:07:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Experimental Evaluation of Underwater Semantic SLAM
Song, Thomas Jeongho
Autonomy is crucial for underwater vehicles due to the challenging and inaccessible nature of underwater environments. These environments pose significant difficulties for human-operated systems because of limited visibility, high pressure, and vast areas that are costly and risky to explore manually. Implementing autonomy in underwater vehicles presents unique challenges due to the marine environment's harsh and complex nature. Underwater communication is severely limited as water absorbs and scatters most electromagnetic signals used in terrestrial communications. This necessitates the use of acoustic communication, which has a lower bandwidth and is prone to delays and signal distortion. Similarly, GPS signals do not penetrate water, complicating navigation and creating dependence on inertial and sonar sensors, which suffer from noisy measurements that are guaranteed to drift over time. The unpredictable dynamics of underwater environments, including varying currents, lighting conditions and obstacles, further complicate autonomous navigation. As such, data collection while moving through a preplanned course is the traditional mission of the Autonomous Underwater Vehicle (AUV), defining the limitation of current technology. Higher-level missions such as search, surveillance, maintenance and manipulation require greater situational awareness, decision-making and navigation abilities, facilitated by processing semantic visual information and applying it to map generation and localization. To address the limited autonomy of current AUVs and enhance their capability for complex missions, this thesis presents the development and evaluation of a real-time, monocular visual-inertial semantic Simultaneous Localization and Mapping (SLAM) system for underwater environments, implemented on the cost-effective BlueROV2 platform. The research aims to enhance AUV autonomy and enable complex underwater missions through improved navigation and semantic mapping capabilities. 
Key contributions include the integration of a custom-trained object detector for underwater environments, adaptation of a hybrid SLAM algorithm combining Gaussian and Non-Gaussian landmarks for underwater operation, preliminary assessment of the SLAM system's accuracy using motion capture-based ground truth measurements, and comparative evaluation of the developed semantic SLAM system against state-of-the-art alternatives in an indoor pool experiment using the BlueROV2. This work addresses the challenges of underwater navigation and semantic mapping, offering a potential solution to extend the operational capabilities and mission complexity of affordable AUV platforms.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CAD-Based Geometry Representations for Monte Carlo Fusion Neutronics Methods and CSG vs. DAGMC Performance Tradeoffs in OpenMC</title>
<link href="https://hdl.handle.net/1721.1/157233" rel="alternate"/>
<author>
<name>Du, Katelin</name>
</author>
<id>https://hdl.handle.net/1721.1/157233</id>
<updated>2024-10-10T03:02:13Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">CAD-Based Geometry Representations for Monte Carlo Fusion Neutronics Methods and CSG vs. DAGMC Performance Tradeoffs in OpenMC
Du, Katelin
Fusion reactors utilizing deuterium and tritium fuel produce high-energy 14.1 MeV neutrons, necessitating a thorough understanding of their behavior for effective reactor design. Neutron transport codes play a critical role in determining key parameters such as tritium breeding ratio, neutron wall loading, and heat deposition, vital for assessing operational considerations. Monte Carlo (MC) radiation transport methods have become standard in fusion neutronics due to their ability to handle energy and angular variables continuously. However, manual modeling of complex fusion geometries with traditional constructive solid geometry (CSG) methods remains labor-intensive, prompting the integration of computer-aided design (CAD) models into MC radiation transport. This thesis investigates the integration of CAD-based geometry representations into MC radiation transport, focusing on computational performance implications of the Direct Accelerated Geometry Monte Carlo (DAGMC) approach. This work examines different neutronics model representations, including CSG, Unstructured Mesh (UM), and DAGMC for the practical solutions they can provide for fusion neutronics needs. Tracking algorithms associated with each representation are explored, highlighting UM and DAGMC’s versatility in the way they integrate with CAD-based design processes. Performance comparison between CSG and DAGMC geometries in OpenMC is analyzed by evaluating particle simulation rates and memory usage across four progressively complex fusion-like models. Performance results reflect positively on DAGMC transport, but areas of future work are identified for more comprehensive results. From the lens of computational performance, this study contributes to determining the viability of CAD-based geometry representations for use in fusion-relevant MC radiation transport.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility of Vector Instruction-Set Semantics Using Abstract Monads</title>
<link href="https://hdl.handle.net/1721.1/157232" rel="alternate"/>
<author>
<name>De Belen, Arthur Reiner</name>
</author>
<id>https://hdl.handle.net/1721.1/157232</id>
<updated>2024-10-10T03:53:10Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Feasibility of Vector Instruction-Set Semantics Using Abstract Monads
De Belen, Arthur Reiner
Formalizations of instruction-set semantics help establish formal proofs of correctness of both hardware designed to implement these instruction sets and the software implemented against this specification. One such prior work¹ formalizes a specification of a subset of the RISC-V instruction-set architecture using a general-purpose language, Haskell, using its monad and typeclass support to abstract over effects. Another member of the same family is the RISC-V V extension, which specifies instructions for operating on multiple data elements in a single instruction, which is useful for domains with high levels of data parallelism, such as graphics rendering and machine learning. In this work I examine the question of whether the same prior work can be extended to formalize the semantics of the vector extension. I answer this question with a tentative “yes”, backed by a partial specification in Haskell of a small but nontrivial subset of this vector extension, a translation of the same specification into Coq using hs-to-coq², and work towards demonstrating the utility of this specification.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Labeling Schemes for Improving Cilksan Performance</title>
<link href="https://hdl.handle.net/1721.1/157231" rel="alternate"/>
<author>
<name>Holla, Satya</name>
</author>
<id>https://hdl.handle.net/1721.1/157231</id>
<updated>2024-10-10T03:21:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Labeling Schemes for Improving Cilksan Performance
Holla, Satya
While race detection algorithms like SP-bags have provably good theoretical properties, large overheads exist in practice, which urges the need for performance optimization. In this thesis, I propose labeling schemes as a method of circumventing many of the expensive operations in Cilksan, an implementation of the SP-bags algorithm. The proposed labeling schemes give strands of a parallel program labels during the execution of Cilksan, allowing Cilksan to shortcut the processing of certain memory accesses if the label comparison allows. I describe and prove correctness for two labeling schemes, the procedure labeling scheme and the prefix labeling scheme, implement both in Cilksan, and measure their performance. While the results show that the overhead of maintaining labels is too high in my implementation, the labeling schemes manage to circumvent many of the memory access operations, suggesting the merit of a more performant implementation of the same schemes.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-Inclusive Contrastive Learning for Leveraging&#13;
Synthetic Images</title>
<link href="https://hdl.handle.net/1721.1/157230" rel="alternate"/>
<author>
<name>Cai, Fiona X.</name>
</author>
<id>https://hdl.handle.net/1721.1/157230</id>
<updated>2024-10-10T03:33:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Uncertainty-Inclusive Contrastive Learning for Leveraging&#13;
Synthetic Images
Cai, Fiona X.
Recent advancements in text-to-image generation models have sparked a growing interest in using synthesized training data to improve few-shot learning performance. Prevailing approaches treat all generated data as uniformly important, neglecting the fact that the quality of generated images varies across different domains, datasets, and methods of generation. Using poor-quality images can hurt learning performance. In this work, we present Uncertainty-Inclusive Contrastive Learning (UniCon), a novel contrastive loss function that incorporates uncertainty weights for synthetic images during training. Extending the framework of supervised contrastive learning, we add a learned hyperparameter that weights the synthetic input images per class, adjusting the influence of synthetic images during the training process. We evaluate the effectiveness of UniCon-learned representations against traditional supervised contrastive learning, both with and without synthetic images. Across three different fine-grained classification datasets, we find that the learned representation space generated by the UniCon loss function leads to significantly improved downstream classification performance in comparison to supervised contrastive learning baselines.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Remote Sensing-Derived Normalized Difference Vegetation Index to Predict Coastal Protection by Spartina alterniflora</title>
<link href="https://hdl.handle.net/1721.1/157228" rel="alternate"/>
<author>
<name>Garber, Samantha C.</name>
</author>
<id>https://hdl.handle.net/1721.1/157228</id>
<updated>2024-10-10T03:23:20Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Analyzing Remote Sensing-Derived Normal Difference Vegetation Index to Predict Coastal Protection by Spartina alterniflora
Garber, Samantha C.
Coastal vegetation can provide protection to the coastline through its root structures, which reduce soil erosion, and its stem structures, which dissipate wave energy. The drag a plant induces could be used to quantify the amount of coastal protection that is provided. This study combined field measurements and drone surveys to develop methods for quantifying vegetation drag, focusing on Spartina alterniflora (S. alterniflora), a smooth cordgrass native to the study site: Waquoit Bay National Estuarine Research Reserve. The drag of a single plant is proportional to frontal area. The drag per bed area is proportional to the drag of a single plant and the number of stems per bed area. This study collected plant samples over the growing season to generate allometric relationships between tiller height and individual plant biomass and frontal area, which provides a way to translate remotely-sensed measures of biomass into stem count and frontal area per bed area. The frontal area was measured through digital imaging of individual plants. The elastic modulus of the stem was also measured using an Instron testing machine. For sixteen 1m x 1m test plots, Normalized Difference Vegetation Index (NDVI) extracted from drone multispectral imagery was compared to measured stem count and estimated biomass. The study compared two different years and three time points within a growing season [August 2022; June, August, October 2023]. In addition, at three plots the stem count was manually altered by cutting out 50% and 100% of the plants. This study found that while NDVI can be used to determine the abundance of S. alterniflora, there are several limitations that cause the correlations to be case-specific. Limitations to NDVI-S. alterniflora correlations included: (1) saturation, (2) species inhomogeneity of the area tested, (3) shoot density inhomogeneity of the area tested, and (4) environmental conditions.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat Pipes for the Thermal Management of High&#13;
Frequency Transformers in the Navy integrated Power&#13;
Electronics Building Block</title>
<link href="https://hdl.handle.net/1721.1/157227" rel="alternate"/>
<author>
<name>Hernandez, David</name>
</author>
<id>https://hdl.handle.net/1721.1/157227</id>
<updated>2024-10-10T03:02:08Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Heat Pipes for the Thermal Management of High&#13;
Frequency Transformers in the Navy integrated Power&#13;
Electronics Building Block
Hernandez, David
The development of the integrated Power Electronics Building Block (iPEBB) is key to the full electrification of future United States Navy ships. The creation of this modular, universal power converter takes full advantage of modern electronics; however, the high heat generation of these components, 9.6 kW from the MOSFET switches and 624 W from the transformer, makes thermal management crucial to their successful implementation. As a result of additional requirements, indirect liquid cooling using a detached cold plate is being studied; however, preliminary analysis revealed concerns regarding the hot spot temperatures of the transformer using this approach. This thesis explored the feasibility of using heat pipes to supplement the cooling provided by the cold plate to maintain iPEBB transformer core and coil temperatures below 100°C and 155°C respectively. First, experiments and analytical solutions were used to provide accurate estimates for the thermal conductivity values of the 3F36 ferrite and litz wire in the transformer. Then, a standalone thermal model of the transformer was built in StarCCM+ and used to test various cooling solutions, including forced airflow and heat pipe configurations. The proposed design utilized 16 copper-water heat pipes configured to provide alternative paths of heat flow for the regions of the transformer furthest from the cold plate. Shapal HiM Soft Machinable AlN ceramic was utilized to provide high voltage insulation, and electromagnetic simulations were used to estimate the induced losses in the heat pipes as a result of high frequency coil operations. Using a half-iPEBB thermal model, the final configuration, coupled with the cold plate cooled by 22°C deionized water at a flow rate of 0.37 kg/s, achieved a core maximum temperature of 99.7°C, coil maximum of 93.2°C, and MOSFET maximum of 144.6°C, all within their respective limits, while only adding a net weight of 0.29 kg to the iPEBB. 
The thermal results of this study showcase the effectiveness of heat pipes in the iPEBB and invite further analysis and experimentation to validate the electromagnetic implications of the concept. These results also contribute to the general ongoing study of heat pipe usage near high-frequency electronics.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploiting irregular parallelism to accelerate FPGA routing</title>
<link href="https://hdl.handle.net/1721.1/157224" rel="alternate"/>
<author>
<name>Zhu, Alan Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157224</id>
<updated>2024-10-10T04:12:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Exploiting irregular parallelism to accelerate FPGA routing
Zhu, Alan Y.
In the era of hardware specialization, field-programmable gate arrays (FPGAs) provide a promising platform for computer architects, combining the programmability of software with the speed and performance of hardware. Despite this, compiling hardware programs onto FPGAs can be incredibly time-consuming, making it hard to develop and iterate on complex FPGA programs. Of particular relevance is the routing phase, which takes a circuit’s technology-mapped netlist and routes its signals using the switches and wires present on a given FPGA architecture, often with a target of minimizing critical path delay. This optimization problem is known to be NP-hard, and existing algorithms for approximating it exhibit very little regular parallelism.&#13;
This thesis accelerates the routing phase of VTR 8.0, a commonly used, open-source research tool for FPGA CAD flow. We show that despite the lack of regular parallelism, routing still exhibits significant irregular parallelism. This parallelism can be exploited on parallel architectures that provide hardware support for ordered tasks and fine-grained speculation, such as the Swarm architecture. Using Swarm, we exploit the parallelism present at the core of VTR’s algorithm, achieving a 35.9x speedup on a single routing iteration of a large benchmark (cholesky_mc) on 256 cores.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calculation of Zakat on Financial Assets for American Muslims: A Financial and Jurisprudential Approach</title>
<link href="https://hdl.handle.net/1721.1/157223" rel="alternate"/>
<author>
<name>Arsalan, Naveed</name>
</author>
<id>https://hdl.handle.net/1721.1/157223</id>
<updated>2024-10-10T03:46:14Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Calculation of Zakat on Financial Assets for American Muslims: A Financial and Jurisprudential Approach
Arsalan, Naveed
This thesis presents a comprehensive framework for calculating Zakat on modern financial assets specifically tailored for American Muslims. As one of the five pillars of Islam, Zakat is an obligatory form of charity for those who meet specific wealth criteria. However, applying traditional Zakat principles to contemporary financial instruments poses significant challenges, particularly within the context of the U.S. financial system.&#13;
&#13;
The research addresses these complexities by developing methodologies that consider diverse financial instruments, valuation challenges, tax implications, accessibility issues, and Shariah compliance. The framework covers a wide range of assets, including cash and bank accounts, stocks, mutual funds, bonds, cryptocurrencies, retirement accounts (401(k)s, Traditional and Roth IRAs), Health Savings Accounts (HSAs), employee stock options, precious metals and jewelry, and real estate investments.&#13;
&#13;
Bridging classical Islamic jurisprudence with modern financial realities, this thesis provides detailed calculation methodologies for each asset class, incorporating U.S.-specific considerations such as tax-deferred accounts and capital gains implications. The framework is designed to be adaptable to evolving financial markets and balances various scholarly opinions on contentious issues. To enhance accessibility, both comprehensive and simplified calculation methods are offered, catering to users with different levels of financial literacy.&#13;
&#13;
In conclusion, this thesis makes a significant contribution to Islamic finance by offering a structured, principle-based approach to Zakat calculation that is both Shariah-compliant and applicable in the modern American financial context. It provides a valuable resource for American Muslims striving to fulfill their religious obligations amidst the complexities of the U.S. financial system and lays the groundwork for future research in Islamic finance in Western contexts.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty Quantification in Deep Learning Models of&#13;
G-Computation for Outcome Prediction under Dynamic&#13;
Treatment Regimes</title>
<link href="https://hdl.handle.net/1721.1/157222" rel="alternate"/>
<author>
<name>Deng, Leon</name>
</author>
<id>https://hdl.handle.net/1721.1/157222</id>
<updated>2024-10-10T03:41:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Uncertainty Quantification in Deep Learning Models of&#13;
G-Computation for Outcome Prediction under Dynamic&#13;
Treatment Regimes
Deng, Leon
G-Net is a neural network framework that implements g-computation, a causal inference method for making counterfactual predictions and estimating treatment effects under dynamic and time-varying treatment regimes. Two G-Net models have been successfully implemented: one that uses recurrent neural networks (RNNs) as its predictors, and one that uses transformer encoders (G-Transformer). However, one limitation of G-Net is that its counterfactual predictive density estimates do not take into account uncertainty about model parameter estimates. These uncertainty estimates are necessary for establishing confidence intervals around the effect estimation, enabling a robust assessment of whether the effects of two treatment options exhibit statistically significant differences. An important area of work is adding support for quantification of model uncertainty for conditional effect estimation. This thesis aims to add uncertainty quantification to both the RNN-based G-Net and the G-Transformer. To achieve this, we use two well-known techniques in uncertainty modeling, namely variational dropout and deep ensembling. We evaluate our methods using two simulated datasets based on mechanistic models. We demonstrate that G-Net and G-Transformer models with uncertainty quantification are better-calibrated and perform better for individual-level clinical decision making than their baseline counterparts.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>HYPERION: A HYdrogen PERmeatION Experiment to Quantify Hydrogen Transport in Fusion-Relevant Molten Salts</title>
<link href="https://hdl.handle.net/1721.1/157221" rel="alternate"/>
<author>
<name>Cota, Jaron F.</name>
</author>
<id>https://hdl.handle.net/1721.1/157221</id>
<updated>2024-10-10T03:01:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">HYPERION: A HYdrogen PERmeatION Experiment to Quantify Hydrogen Transport in Fusion-Relevant Molten Salts
Cota, Jaron F.
The measurement of hydrogen transport properties of molten salts like FLiBe is crucial for the development of advanced nuclear technologies like lithium-bearing liquid immersion breeding blankets for fusion reactors. Tritium production and the quantification of its mobility in these materials is necessary for efficient operation of these technologies. A common method of measuring these properties is with hydrogen permeation experiments. Hydrogen permeation experiments involve measuring the flux of hydrogen permeating through a substance, and from this flux transport properties like the diffusivity and solubility of hydrogen in the molten salt can be derived with various models of the experimental setup. This thesis describes the process of fabricating and assembling a HYdrogen PERmeatION (HYPERION) experiment and provides preliminary results of the functionality as well as some issues and troubleshooting of the experiment. Using the code Finite Element Simulation of Tritium In Materials (FESTIM), the experiment was also modeled. The models were used to explore the design parameter space of the experiment to determine the experiment’s effectiveness in producing the desired result of accurately calculating the hydrogen transport properties of the molten salt. Through the process of modeling, the assumptions that were normally made when performing these experiments were called into question and their validity was quantified, suggesting that the experiments that have been previously conducted might have been significantly affected by these assumptions. Using these models could eventually improve the accuracy of measured transport properties for molten salts like FLiBe and other nuclear fusion-relevant molten salts and inform the design of hydrogen permeation experiments moving forward.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Interface for Prescriptive AI Solutions&#13;
in Enterprise</title>
<link href="https://hdl.handle.net/1721.1/157220" rel="alternate"/>
<author>
<name>Orderique, Piero</name>
</author>
<id>https://hdl.handle.net/1721.1/157220</id>
<updated>2024-10-10T03:58:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Natural Language Interface for Prescriptive AI Solutions&#13;
in Enterprise
Orderique, Piero
Despite advancements in causal inference and prescriptive AI, its adoption in enterprise settings remains hindered primarily due to its complexity and lack of interpretability. This work at the MIT-IBM Watson AI Lab focuses on extending upon the proof-of-concept agent, PrecAIse, by designing a domain-adaptable conversational agent equipped with a suite of causal and prescriptive tools. The objective is to make advanced, novel causal inference and prescriptive tools widely accessible through natural language interactions. The presented Natural Language User Interface (NLUI) enables users with limited expertise in machine learning and data science to harness prescriptive analytics in their decision-making processes without requiring intensive compute. We present an agent capable of function calling, maintaining faithful, interactive, and dynamic conversations, and supporting new domains.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geo-UNet: A Geometrically Constrained Neural Framework for Clinical-Grade Lumen Segmentation in Intravascular Ultrasound</title>
<link href="https://hdl.handle.net/1721.1/157219" rel="alternate"/>
<author>
<name>Chen, Yiming</name>
</author>
<id>https://hdl.handle.net/1721.1/157219</id>
<updated>2024-10-10T04:09:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Geo-UNet: A Geometrically Constrained Neural Framework for Clinical-Grade Lumen Segmentation in Intravascular Ultrasound
Chen, Yiming
Precisely estimating lumen boundaries in intravascular ultrasound (IVUS) is needed for sizing interventional stents to treat deep vein thrombosis (DVT). Unfortunately, current segmentation networks like the UNet lack the precision required for clinical adoption in IVUS workflows. This arises due to the difficulty of automatically learning accurate lumen contour from limited training data while accounting for the radial geometry of IVUS imaging. We propose the Geo-UNet framework to address these issues via a design informed by the geometry of the lumen contour segmentation task, building anatomical constraints directly into the architecture. We first convert the input data and segmentation targets from Cartesian to polar coordinates. Starting from a convUNet feature extractor, we propose a two-task setup, one for conventional pixel-wise labeling and the other for single boundary lumen-contour localization. We directly combine the two predictions by passing the predicted lumen contour through a new activation (named CDFeLU) to filter out spurious pixel-wise predictions. Our unified loss function carefully balances area-based, distance-based, and contour-based penalties to provide near clinical-grade generalization in unseen patient data. We also introduce a lightweight, inference-time technique to enhance segmentation smoothness. The efficacy of our framework on a venous IVUS dataset is shown against state-of-the-art models. We will make the code repository for this project available soon after approval from industry collaborators.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Machine Learning-Based Methods for Narrowband Blind Adaptive Beamforming</title>
<link href="https://hdl.handle.net/1721.1/157218" rel="alternate"/>
<author>
<name>Shonkwiler, Lara</name>
</author>
<id>https://hdl.handle.net/1721.1/157218</id>
<updated>2024-10-10T03:35:06Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Comparison of Machine Learning-Based Methods for Narrowband Blind Adaptive Beamforming
Shonkwiler, Lara
There are many different approaches to beamforming and interferer cancellation. The earliest methods of beamforming assumed prior knowledge of the receive array geometry and of the incoming signal directions. This information is normally found via array calibration. Blind source separation methods do not require this information and therefore are more robust to array calibration errors. Traditional blind source separation methods generally leverage some intrinsic characteristic of the signal, such as constant envelope properties or second or higher order statistics. Traditional blind source separation methods such as CMA, SOBI, JADE, and FastICA tend to be highly effective at beamforming datasets with moderate to large sample supports, but they do not perform well when they only have access to a limited number of data samples. They also bear the disadvantage that the appropriate algorithm must be selected based on the properties of the expected signal. Machine learning-based methods are of interest because they show promise in low sample support regimes, and because they offer the possibility of a ‘one size fits all’ solution that can adaptively recognize and exploit different signal features. This thesis describes the performance of two machine learning-informed beamforming methods — Classification-Based Transfer Learning (CBTL) [1] and Denoising-Based Transfer Learning (DBTL). CBTL and DBTL are evaluated with respect to each other and with respect to traditional blind beamforming methods across a variety of signal detection environments, and are found to offer superior or equivalent performance in a majority of environments.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Speech Motor Pattern in Minimally Verbal Adults with Autism Spectrum Disorder via Surface Electromyography</title>
<link href="https://hdl.handle.net/1721.1/157217" rel="alternate"/>
<author>
<name>Protyasha, Nishat Fahmida</name>
</author>
<id>https://hdl.handle.net/1721.1/157217</id>
<updated>2024-10-10T03:34:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Characterizing Speech Motor Pattern in Minimally Verbal Adults with Autism Spectrum Disorder via Surface Electromyography
Protyasha, Nishat Fahmida
Minimally verbal adults with Autism Spectrum Disorder (mvASD) experience significant speech production challenges linked to impaired motor skills. Despite the prevalence of these speech difficulties, the underlying motor mechanisms remain poorly understood. This thesis investigates the neuromuscular activity associated with speech motor movement in mvASD using surface electromyography (sEMG). By capturing and analyzing sEMG signals with 8 electrodes from key facial muscles during speech production tasks, this study provides insights into the distinct motor patterns exhibited by mvASD individuals compared to neurotypical controls. The sEMG data were collected while 25 participants, including 10 mvASD individuals and 15 neurotypical controls, performed a series of carefully designed speech tasks. Features such as Root Mean Square (RMS) values, Pearson correlation coefficients, and eigenvalues from auto- and cross-correlation matrices were extracted to measure muscle activation and coordination complexity. The results reveal that mvASD individuals exhibit higher RMS values and greater synchronization between sEMG channels, indicating stronger muscle activation and tighter coupling among facial muscles. Furthermore, the analysis of eigenvalues suggests lower complexity in motor coordination among mvASD participants, reflecting fewer degrees of freedom in muscle control. These findings were supported by classification models, which demonstrated that features from diadochokinetic tasks were more effective in distinguishing mvASD from neurotypical individuals.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biometric and Biomechanical Sensing for Violin Performance Analysis</title>
<link href="https://hdl.handle.net/1721.1/157216" rel="alternate"/>
<author>
<name>Kydd, Aria</name>
</author>
<id>https://hdl.handle.net/1721.1/157216</id>
<updated>2024-10-10T03:41:18Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Biometric and Biomechanical Sensing for Violin Performance Analysis
Kydd, Aria
Expressive violin performance demands the coordination of multiple physical and physiological processes. Students, especially those engaged in infrequent private lessons, often struggle to manage these demands. Outside of lessons, they lack access to the resources and external feedback that technology has made readily available in other learning settings. In this study, we propose the Expressive Violin Performance Sensing (EVPS) system as a solution to this issue. The EVPS system uses low-cost and accessible electronic sensors to provide objective, quantitative insights into the physical and physiological aspects of a violinist’s performance. Results from experimental trials reveal that the EVPS system provides relatively reliable data on expressive violin performance. While the general measures of physicality did not reveal significant differences between players of distinct skill levels, physiological and specific physical measurements aligned well with predictions. The successful utilization of low-cost sensors in the EVPS system highlights their potential for use in future performance analysis studies, challenging the precedent of relying on expensive, medical-grade systems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivity Analysis of Self-Loosening Behavior for Mesoscale Bolt Assemblies Under Cyclic Lateral Loading</title>
<link href="https://hdl.handle.net/1721.1/157214" rel="alternate"/>
<author>
<name>Martinez, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/157214</id>
<updated>2024-10-10T04:13:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Sensitivity Analysis of Self-Loosening Behavior for Mesoscale Bolt Assemblies Under Cyclic Lateral Loading
Martinez, Alejandro
This study aims to enhance the understanding of self-loosening in mesoscale bolt assemblies, specifically those with characteristic dimensions ranging from 100 to 3,000 micrometers. These bolts pose unique design challenges due to the small difference between their nominal dimensions and manufacturing tolerances. This work discusses the design of new instrumentation to test mesoscale multi-bolt assemblies under various loading conditions, an area of testing previously focused only on larger bolts. A case study was conducted on a mesoscale multi-bolt system that was experiencing self-loosening failures. This system was tested to determine its susceptibility to the self-loosening failure mode. An experimental study was conducted to identify the sensitivities of the system to geometric and loading environment parameters. A set of hypotheses was proposed to guide new findings about the system’s sensitivities to four different parameters. The findings from the experimental study provide valuable insights into how different geometric configurations and types of loading conditions contribute to the performance of mesoscale multi-bolted systems. Through these investigative efforts, the study successfully identified the existence of a critical displacement threshold for self-loosening in mesoscale multi-bolted systems that is sensitive to factors such as clamp length, amplitude of input displacement load, and screw position.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Development of an Accelerated Material Synthesis Platform for Automated Materials Research</title>
<link href="https://hdl.handle.net/1721.1/157213" rel="alternate"/>
<author>
<name>Aissi, Eunice I.</name>
</author>
<id>https://hdl.handle.net/1721.1/157213</id>
<updated>2024-10-10T03:50:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Design and Development of an Accelerated Material Synthesis Platform for Automated Materials Research
Aissi, Eunice I.
Materials development is the foundation for innovation in many industries and fields; however, this process is traditionally slow and resource-intensive. Most often, new materials are developed and characterized on the time scale of years, which can limit the pace of scientific and industrial innovation. I address the material synthesis and characterization bottleneck by presenting a framework that I believe is suitable for smaller labs: self-built, low-cost automation. The design philosophy is to de-risk the lab automation process by keeping costs low, failing fast, and leveraging common resources in electronic systems and additive manufacturing. I present an improved version of a low-cost but high-throughput inkjet material printer developed by Siemenn et al., adapted to operation in glovebox, hood, and benchtop environments. The tool is capable of depositing gradients of droplets with unique compositions at a rate of up to 1000 materials per minute, is self-built, and costs around $500. I also present a computer-vision-enabled high-throughput material characterization algorithm for stability quantification through color degradation. The synthesis and characterization methods are validated on a methylammonium lead iodide (MAPbI3) and formamidinium lead iodide (FAPbI3) perovskite material system. X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and hyperspectral imaging measurements show equivalence between high-throughput synthesis and more traditional spin-coating methods. Results obtained through the high-throughput stability characterization method align with stability trends reported in the literature and have an accuracy of 96.9% when compared to ground-truth degradation as measured by a domain expert.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FrED Manufacturing - A Study in Affordable Manufacturing to Scale using Desktop Sized Fiber Extrusion Device</title>
<link href="https://hdl.handle.net/1721.1/157212" rel="alternate"/>
<author>
<name>Rosko, Rachael S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157212</id>
<updated>2024-10-10T03:04:21Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">FrED Manufacturing - A Study in Affordable Manufacturing to Scale using Desktop Sized Fiber Extrusion Device
Rosko, Rachael S.
FrED (Fiber Extrusion Device) Factory is a manufacturing facility at MIT which educates its students on fundamental and advanced manufacturing principles. The factory produces multiple FrED devices, which are "desktop fiber extrusion systems that mimic continuous fiber draw process for hands-on learning and/or laboratory experience on data acquisition, control system, and smart manufacturing. It allows learners to perform experiments, vary manufacturing parameters and control system, collect data, and perform analysis." [1] This year’s thesis work builds on the progress from 2023, which aimed to produce a low-cost variant of earlier versions of the FrED. In 2024, the aim for the lab was to implement design refinements, design for manufacturing, design the assembly line, design packaging, develop the supply chain using Tulip, develop educational content, perform user testing, and execute pilot runs. The focus of this thesis is on design refinements related to the graphical user interface (GUI), the inclusion of threading to improve program speed, and the characterization of performance related to diameter control, as well as advancements in educational content development, user testing, production-level assembly, and pilot runs. The results of this thesis include significant improvements to the FrED device, such as a user-controlled GUI and closed-loop control. Furthermore, key characteristics of the device were quantified, such as the frame rate of the USB camera and motor stability, which aided in understanding how diameter control and modulation can be implemented in future work. At the time of submission, there were inherent complications still not understood about the FrED that limited its potential as an end-user product.
Some complications included the reliability of the diameter reading from the USB camera, the physics of the hot glue preform, and motor speed assumptions which did not perform well under closed-loop testing (spool speed going to 0 in order to make the diameter larger prevents the camera from reading any future diameter measurements, which is problematic). In terms of pilot runs, user testing, and educational content development, the results were promising. 78.3% of the 23 user testing respondents at Venture Cafe said they were interested in receiving a FrED and getting access to more learning content. Suggestions were made by the users for future work and implementation. Educational content was developed for mass flow and data acquisition; however, a formal pilot run session where this could be tested for feedback was not performed.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Carbon Capture Efficiency in Natural Gas Combined Cycle Power Plants: Analyzing the Effects of Variable Load Operations</title>
<link href="https://hdl.handle.net/1721.1/157211" rel="alternate"/>
<author>
<name>Knight, Caleb M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157211</id>
<updated>2024-10-10T03:45:22Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Carbon Capture Efficiency in Natural Gas Combined Cycle Power Plants: Analyzing the Effects of Variable Load Operations
Knight, Caleb M.
Natural gas power generation retrofitted with carbon capture technology is poised to play a crucial role in ensuring energy reliability amidst the transition to variable renewable energy resources. While natural gas generation is used primarily for baseload power, it is expected to transition towards intermittent generation, serving as a load-following resource during periods of low renewable energy availability. It will be critical to understand how start-up, shutdown, and load-following behavior may impact system performance and influence future grid design.

This thesis performs a comprehensive literature review to establish context on various techniques of carbon capture technology. Post-combustion carbon capture, specifically absorption-based technology, remains the preferred candidate for retrofitting natural gas plants due to its technical maturity, scalability, relatively high capture efficiencies, and ease of retrofitting. The literature highlights that absorption-based carbon capture units exhibit degraded performance during non-steady-state operating conditions. Specifically, cold start-ups result in lower capture efficiencies and higher heat rates, although hot start-ups incur significantly less performance reduction.

The literature review findings are integrated into GenX, a grid optimization tool, to evaluate natural gas combined cycle power plants equipped with carbon capture technology. The modified optimization models are run using the ISO New England grid system, and results suggest that incorporating advanced start-up penalties for natural gas plants reduces operational flexibility in an emissions-constrained environment. As capture efficiencies decrease and heat rates increase during start-ups, utilizing natural gas plants becomes more expensive due to the additional emissions and reduced thermal efficiency. Comparing models with different levels of performance degradation during start-up suggests that installing less gas capacity could be optimal, with those units operating at higher capacity factors to mitigate start-up penalties. Under modest emissions constraints, natural gas units may be operated continuously even during periods of renewable energy surplus. Harsher start-up penalties applied to natural gas plants likely increase the incremental value of alternative energy technologies, although natural gas retains a critical role in the energy mix.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cheaper Than A Funeral: Considering Ibogaine’s Psychedelic Journey and Therapeutic Potential</title>
<link href="https://hdl.handle.net/1721.1/157210" rel="alternate"/>
<author>
<name>Daly, Noah</name>
</author>
<id>https://hdl.handle.net/1721.1/157210</id>
<updated>2024-11-14T17:03:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cheaper Than A Funeral: Considering Ibogaine’s Psychedelic Journey and Therapeutic Potential
Daly, Noah
The past decade has seen a surge of interest in psychedelic compounds as therapeutic medicine. Ibogaine, an indole alkaloid extracted exclusively from an endangered family of shrubs found in the Central African nations of Gabon and Cameroon, is a psychedelic currently being studied for its unique therapeutic potential. It is also considered the most extreme of the psychedelic drugs currently known to researchers. For the past fifty years, it has been used to treat severe substance use disorders, particularly those involving highly addictive opioids and stimulants. In the past ten years, American special operations forces veterans have begun to take ibogaine to treat traumatic brain injuries (TBI). Anecdotal evidence has suggested that the persistent downstream symptoms TBI patients experience after these injuries are effectively managed after a single ibogaine treatment. Advocacy from the special operations veterans community prompted Stanford University researchers to embark on the first-ever U.S.-based clinical trial of ibogaine to treat TBI. The study, published in January 2024, added to decades of evidence of ibogaine’s clinical potential. Yet questions remain about whether ibogaine’s cardiac toxicity can be effectively managed in human patients, as well as the true therapeutic utility of the prolonged period of dreamlike consciousness ibogaine produces in patients. This thesis examines the cases of three patients–all United States military veterans–undergoing ibogaine therapy, examining how the biological impacts of ibogaine, as well as their psychedelic experiences, may have saved their lives.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Thinking Approach to Hispanic Engineer’s Involvement in Corporate Diversity Networks</title>
<link href="https://hdl.handle.net/1721.1/157208" rel="alternate"/>
<author>
<name>Chambe, Enoch</name>
</author>
<id>https://hdl.handle.net/1721.1/157208</id>
<updated>2024-10-10T03:51:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Systems Thinking Approach to Hispanic Engineer’s Involvement in Corporate Diversity Networks
Chambe, Enoch
Affinity networks, also known as Employee Resource Groups (ERGs), are increasingly essential in today’s corporate world as they play a crucial role in fostering diversity, equity, and inclusion within organizations. These groups provide a platform for employees from underrepresented or marginalized communities to connect, share experiences, and find support. ERGs geared towards Hispanic employees are promoted not only as a means to connect with others and find a sense of belonging, but also as avenues toward successful professional development and growth for underrepresented employees. This research explores the perspectives of a group of experienced engineers from various technical backgrounds and industries to understand if there is a correlation between generational status for Hispanic Americans and their overall perceived benefits from participating in ERGs. The study provides a detailed literature review of relevant existing research on this subject, followed by semi-structured interviews of ten participants, and a thematic analysis approach used to organize the data into the following five themes: diversity considerations for school and job selections, employee perspective on ERGs, sense of belonging and generational differences, the meaning of inclusiveness, and continued participation. Finally, a research conclusion and a series of recommendations are provided.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Women Nobel Laureates in STEM (2000-2023): Life Stories, Challenges, and How They Achieved Impact for Success</title>
<link href="https://hdl.handle.net/1721.1/157207" rel="alternate"/>
<author>
<name>Wu, Kedi</name>
</author>
<id>https://hdl.handle.net/1721.1/157207</id>
<updated>2024-10-10T03:08:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Women Nobel Laureates in STEM (2000-2023): Life Stories, Challenges, and How They Achieved Impact for Success
Wu, Kedi
Science, Technology, Engineering, and Math (STEM) are the critical growth engines that develop the economy and society and improve our lives overall. However, women are underrepresented in STEM, which means 50% of the world's brain power is untapped. We know that, in general, women face barriers and challenges that men do not, such as gender bias and stereotypes. However, we know less about the unique obstacles and challenges women face in STEM, and even less about how to overcome them. This research aims to identify the challenges faced by women in STEM and to gain a practical understanding of what women can do to evolve as leaders. As STEM is extremely broad, this thesis focused on studying the 11 female Nobel laureates who won the prize after 2000 under the three STEM-related Nobel categories: physics, chemistry, and medicine or physiology.

First, a comprehensive literature review was conducted to understand existing findings on the barriers faced by women in STEM and the enablers that can increase the likelihood of women's success in STEM. Next, data were collected about the 11 women STEM Nobel laureates, including their biographies, life stories, newspaper reports, and interview transcripts. Thematic analysis was then adopted to analyze the collected data, in which four themes are identified and presented: 1) Overcome Barriers and Challenges; 2) Qualities of a Good Scientist; 3) Supportive Systems; 4) Impactful, Humanity, Innovative. Finally, the findings are summarized in relation to the research objectives to provide insights for women who want to pursue a STEM career.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms and Implementation of Thermo-Optical Annealing in Silica Fiber Sensors for Radiation-Induced Attenuation Mitigation</title>
<link href="https://hdl.handle.net/1721.1/157206" rel="alternate"/>
<author>
<name>Legoupil, Aurelien Y. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157206</id>
<updated>2024-10-10T03:37:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Mechanisms and Implementation of Thermo-Optical Annealing in Silica Fiber Sensors for Radiation-Induced Attenuation Mitigation
Legoupil, Aurelien Y. M.
In the context of quench detection systems for fusion superconducting magnets, temperature sensors based on optical fibers provide an effective solution for rapid, distributed measurement, with low sensitivity to electromagnetic interference. At the cryogenic temperatures and high radiation doses associated with this application, however, optical fibers undergo radiation-induced attenuation (RIA): light-absorbing point defects form within the silica glass structure, reducing the longevity and effectiveness of these sensors. In this work, we investigate the underlying microscopic defects and mechanisms of RIA and assess strategies for mitigation, namely, annealing via heat treatment (thermal annealing) and annealing via light propagation through the fiber (optical annealing, or “photobleaching”). We design a white light absorption spectroscopy setup with in-situ irradiation and optical annealing, working at liquid nitrogen temperature and different post-irradiation warm-up rates. For the pure silica core and F-doped cladding fibers studied, the RIA spectrum obtained is decomposed into known radiation-induced defect absorption bands, highlighting the key role of self-trapped holes in RIA at telecommunication wavelengths. Furthermore, absorption spectroscopy experiments are performed to show that thermal annealing at liquid nitrogen temperature is negligible, validating the transferability of the experimental results obtained at 77 K to 20 K applications. The decomposition of RIA into different defect contributions is supported by cold post-irradiation electron paramagnetic resonance (EPR) spectroscopy of fiber preform fragments, which reveals the presence of two types of paramagnetic centers: self-trapped holes and E'_gamma centers. The post-irradiation transient grating spectroscopy (TGS) technique is adapted to glass samples with continuous cooling at liquid nitrogen temperature and in-situ optical annealing. 
With this technique, we could observe the changes in thermal and acoustic properties resulting from the evolution of defect populations, with the potential to complement other experimental techniques to better understand RIA build-up and annealing kinetics. To improve the modeling of thermo-optical annealing, we propose future experiments including isothermal annealing tests and a larger exploration of optical annealing parameters. Our RIA build-up and annealing tests can help companies that aim to operate optical fibers under irradiation at cryogenic temperatures optimize their heat treatments to restore fiber transmission and prevent RIA during operation.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Energy and Area Estimation Plugin for Accelerator Architecture Simulation</title>
<link href="https://hdl.handle.net/1721.1/157205" rel="alternate"/>
<author>
<name>Wu, Wendy</name>
</author>
<id>https://hdl.handle.net/1721.1/157205</id>
<updated>2024-10-10T03:56:36Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">An Energy and Area Estimation Plugin for Accelerator Architecture Simulation
Wu, Wendy
Development of domain-specific hardware accelerators has been an important focus for high performance computing research in recent years, enabling significant gains in a variety of practical applications. Of particular interest is accelerator design for applications involving sparse data. Such accelerators inherently tend towards a diverse array of architecture designs, and often rely on custom simulators for evaluation. In addition to raw performance, energy consumption and chip area are both important considerations for evaluating accelerators. Accelergy is a tool that provides a good general framework for fine-grained energy and area estimation. However, output from simulation tools may not be compatible with Accelergy’s expected input format, which is the case for the custom simulator Accelsim. To address this gap, this work presents a streamlined plugin for processing Accelsim simulator output into Accelergy input, for the purpose of generating accurate and explainable energy consumption and area models for accelerator architectures. We demonstrate the plugin’s flexibility by performing energy and area estimates for two state-of-the-art hardware accelerators, ISOSceles and Trapezoid. Overall, this plugin is easy to use, self-contained, and supports a wide variety of configurable functionalities, making it an excellent general tool for running Accelergy on Accelsim output.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recovery of Herschel-Bulkley Fluid Parameters from Video via Differentiable Simulations</title>
<link href="https://hdl.handle.net/1721.1/157204" rel="alternate"/>
<author>
<name>Eastman, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157204</id>
<updated>2024-10-10T03:04:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Recovery of Herschel-Bulkley Fluid Parameters from Video via Differentiable Simulations
Eastman, John M.
Recreating the physical behavior of fluids from real-world footage remains a significant challenge, particularly for non-Newtonian fluids. This work introduces a novel method that combines neural radiance fields (NeRF), which map 3D scene coordinates to color and density using deep neural networks, with the material point method (MPM), a simulation technique that represents materials as moving points capable of large deformation. Our approach aims to accurately recover physical parameters and achieve high-fidelity 3D reconstructions from single-view videos of fluids, even those with complex rheological behaviors like shear thinning and thickening. In this study, we apply our method to a Herschel-Bulkley fluid, namely ketchup, under two different real-world conditions: a 50mm column collapse and being squeezed from a bottle. By leveraging the differentiable nature of NeRF and the fluid simulation capabilities of MPM, our approach extracts parameters from real-world footage after initially training on approximate geometry derived from virtual models. The actual video footage is then used to estimate initial velocities and retrieve constitutive parameters, including modulus, yield stress, and viscosity. The iterative optimization process, which integrates continuous feedback between the NeRF-MPM simulation and the video data, enables us to extract constitutive parameters from real footage and perform predictive simulations that closely reflect the behavior observed in the training videos. Key results include the retrieval of constitutive parameters, such as modulus, yield stress, and viscosity, as well as reconstructed videos that reflect the fluid behavior observed in the training video. The results demonstrate that our method can reconstruct the fluid’s flow behavior from limited perspectives, accurately enough to visually reproduce the flow, showcasing its flexibility and robustness. 
This work not only validates the approach through a series of experiments but also highlights the potential for differentiable rendering and simulation techniques to advance our understanding and simulation of complex material dynamics, particularly in cases where direct measurements are challenging or impossible.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Adaptive Parsing to Integrate Dialogue Scripts in Game Development</title>
<link href="https://hdl.handle.net/1721.1/157203" rel="alternate"/>
<author>
<name>Taylor, Temi</name>
</author>
<id>https://hdl.handle.net/1721.1/157203</id>
<updated>2024-10-10T03:32:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Using Adaptive Parsing to Integrate Dialogue Scripts in Game Development
Taylor, Temi
For people without programming experience, integrating their work into the main project forms a common bottleneck in video game development. Particularly for dialogue writing, existing approaches for moving the text into the codebase are either highly tedious or excessively heavyweight for faster-paced projects. Given that writers often initially produce loosely-formatted scripts, this thesis describes Game-DAP, an adaptive parsing system that accounts for the variation in individual dialogue writing styles. Examinations of pre-existing systems and a survey of developers form a basis for a syntactic model of the information commonly encapsulated by dialogue scripts. This model lends itself to a design for the parsing process used by Game-DAP, which aims to provide as much flexibility to writers as possible with those assumptions as a baseline. User testing results informed the evaluation of the system, focusing on its accuracy, flexibility, and accessibility from the perspective of various authors. Although this analysis revealed several classes of inputs that Game-DAP struggles to process with full correctness, the more successful cases and instances of positive feedback suggest that a refined approach to this kind of domain-specific parsing could provide great value in the creative writing process of game dialogue.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Optoelectronic Properties of Twisted and Intercalated Niobium Oxide Dihalides</title>
<link href="https://hdl.handle.net/1721.1/157202" rel="alternate"/>
<author>
<name>Luo, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/157202</id>
<updated>2024-10-10T03:54:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Exploring Optoelectronic Properties of Twisted and Intercalated Niobium Oxide Dihalides
Luo, Ashley
2D materials, or layers of one-atom-thick crystalline solids, offer a flexible solution for a variety of applications that require certain characteristics. As a result of modifications in the physical and chemical design of 2D materials, such as stacking, twisting, and ion intercalation, properties such as electrical conductivity, spin diffusion length, thermal conductivity, and mechanical strength exhibit more degrees of freedom than in their bulk material counterparts. Currently, small optical systems consist of passive devices that are rigid in their light-pathing design and require modulators to control light post-fabrication for use. These systems are confined by the material used to fabricate the device and its associated effective indices, which are determined pre-fabrication by the ultimate desired optical effect. However, 2D materials can exhibit tunable band structures that yield the optimal optical response, even post-fabrication. This thesis will discuss the properties of mechanically and chemically manipulated niobium oxydichloride (NbOCl₂) and niobium oxydiiodide (NbOI₂) ultrathin structures that have the potential to integrate into flexible optical systems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Elastic Resistive Force Theory &amp; Applications to Uprooting</title>
<link href="https://hdl.handle.net/1721.1/157201" rel="alternate"/>
<author>
<name>Yilmaz, Lale</name>
</author>
<id>https://hdl.handle.net/1721.1/157201</id>
<updated>2024-10-10T03:04:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development of Elastic Resistive Force Theory &amp; Applications to Uprooting
Yilmaz, Lale
Granular intrusion processes such as sand locomotion, uprooting, and digging are ubiquitous. While these phenomena can be accurately modeled via discrete element methods and continuum models, this accuracy comes at a great computational cost, especially for large systems. Granular Resistive Force Theory (RFT) is a reduced-order, rate-independent model that has been shown to successfully capture the motion of rigid intruders in granular media at a reduced computational cost. RFT is based on a rate-independent theory that calculates the force experienced by a body using its direction of velocity. This makes it difficult to handle near-stagnant scenarios, which occur frequently in the uprooting of plants. To overcome this limitation, we introduce elastic RFT (eRFT), which is based on a rate-independent plasticity flow-rule–like criterion, and pair it with deformable intruders. We focus on modeling uprooting processes, which inherently involve flexible intruders and are often dynamically controlled. This allows us to address both previously mentioned shortcomings of RFT (stagnancy and flexible intruders) at once. By combining eRFT with a nonlinear beam theory to represent slender, inextensible roots, we create a fast computational tool. Using MATLAB, we simulate various uprooting scenarios to better understand the anchoring mechanisms of different root geometries. We showcase the validity of eRFT results by comparing them to experimental data. To implement eRFT in ABAQUS, we make use of an existing user subroutine, which allows the study of a broader range of intruder materials and shapes. While the subroutine has its limitations, initial comparisons to computational and experimental results are encouraging.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of the US Capitol Attack on political views in Argentina, Brazil, and Chile</title>
<link href="https://hdl.handle.net/1721.1/157200" rel="alternate"/>
<author>
<name>Garcia III, George Reuben</name>
</author>
<id>https://hdl.handle.net/1721.1/157200</id>
<updated>2024-10-10T03:49:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Effects of the US Capitol Attack on political views in Argentina, Brazil, and Chile
Garcia III, George Reuben
Is it possible for major political events, such as the U.S. Capitol insurrection on Jan. 6, 2021, to influence political attitudes in other countries? Such events may act as framing devices that influence individuals to think somewhat differently about democracy and populism, primarily by reminding them of domestic shortcomings. Some previous literature has found international attitude effects from major events like terrorism or environmental disasters. In this study, I take advantage of the fact that the insurrection took place in the middle of a set of surveys administered to bureaucrats in Argentina, Brazil, and Chile. The events of Jan. 6 thus act as a type of exogenous shock, allowing for an interrupted time series analysis. I find that satisfaction with democracy generally declined across all three countries, but only in Chile did support for democracy and elections fall and populist attitudes rise.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailoring the angular and spectral reflectance characteristics of color-dynamic films by modifying their photonic texture and topcoat roughness</title>
<link href="https://hdl.handle.net/1721.1/157199" rel="alternate"/>
<author>
<name>Blair, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157199</id>
<updated>2024-10-10T03:09:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Tailoring the angular and spectral reflectance characteristics of color-dynamic films by modifying their photonic texture and topcoat roughness
Blair, Andrew D.
Controlling nano- and microscale morphology is essential for tailoring the appearance of structurally colored stretchable films. An effective approach for controlling the optical properties of such color-dynamic photonic films, which are manufactured holographically, is demonstrated using two simple control handles: the texture of the photonic structure and the surface roughness of a transmissive topcoat. The texture of the photonic structure affects the spectral signature and angular distribution of reflected light. The surface roughness of the topcoat affects the angular distribution of incident and reflected light. Fourier optics concepts are harnessed for modeling and predicting the optical characteristics of the materials as a function of their photonic texture and topcoat roughness. The model is verified with data obtained by imaging the angular scattering distribution and spectroscopic analysis of four representative combinations of photonic texture and topcoat roughness. The findings presented in this thesis validate the hypothesis that controlling the texture of the photonic film and the roughness of its topcoat allows for tailoring the visual appearance of structurally colored materials. This approach provides access to a rich design space of different appearances, including strong iridescence, color constancy with collimated light sources at small angles of incidence, pure and muted colors, and specular and highly diffuse reflections.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanics of Three-Dimensional Micro-Architected Interpenetrating Phase Composites</title>
<link href="https://hdl.handle.net/1721.1/157198" rel="alternate"/>
<author>
<name>Chen, Andrew Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157198</id>
<updated>2024-10-10T03:03:12Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Mechanics of Three-Dimensional Micro-Architected Interpenetrating Phase Composites
Chen, Andrew Y.
The design of modern composite materials, as used in a wide range of engineering applications, is largely derived from a traditional framework based on laminates. While resulting in desirable strength and stiffness properties, the laminate-based structure leads to a high degree of anisotropy and unique failure modalities like interlaminar failure, limiting the performance of these composites under complex loading conditions. Meanwhile, recent work in the field of architected materials has yielded a thorough understanding of geometry-dependent material behavior, enabling the development of highly robust architectures with tunable (an)isotropy. However, such advances have focused primarily on describing the response of lightweight architected geometries composed mostly of air. The effect of adding a load-bearing matrix is not well understood. Here we investigate the effect of geometry and constituent material properties on the mechanics of 3D-architected interpenetrating phase composite (IPC) materials, i.e., two-phase materials consisting of an architected structure surrounded by a matrix. Using computational homogenization, we first predict how resultant coupled stress states in the composite change with the material properties of each individual phase and contextualize the results within traditional stiffness scaling laws. We then demonstrate two robust fabrication pathways for realizing polymer- and carbon-based centimeter-scale architected IPCs with micro-scale features. Using these prototypes, we study the mechanical behavior of the fabricated composites under uniaxial compression, with particular emphasis on the non-linear and failure regimes. We show that independent of the material system, the presence of a load-bearing matrix distributes the stress in the composite, contributing to a high-strength, globally stretching-dominated failure behavior, regardless of nodal connectivity.
Moreover, the development of a 3D, highly tortuous pathway for stress delays or prevents catastrophic failure of the traditionally brittle architecture phase, resulting in energy dissipation performance of the composite that exceeds the sum of its individual constituents. Finally, we demonstrate that the composite stress state can be architected using geometric design of the IPC and introduce an example of tunable mechanical response in an architected composite inspired by traditional auxetic metamaterials. Altogether, this work broadens our established understanding of the link between architecture and mechanical performance by considering the framework of interpenetrating phase composites, creating the foundation for a new class of strong, resilient, and programmable materials with architected stress states.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>State and Dynamics Estimation in an Outdoor Multi-Drone Slung Load System</title>
<link href="https://hdl.handle.net/1721.1/157197" rel="alternate"/>
<author>
<name>Merton, Harvey</name>
</author>
<id>https://hdl.handle.net/1721.1/157197</id>
<updated>2024-10-10T03:01:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">State and Dynamics Estimation in an Outdoor Multi-Drone Slung Load System
Merton, Harvey
Over the past decade, aerial drones have been used to address problems in areas such as sensing and measurement, inspection, delivery, security, and defense. Adding a load attached to one or more drones using a flexible cable can significantly enhance the capabilities of these platforms. This work aims to develop a multi-drone platform, built on open-source tools such as PX4 and ROS2, that can be used to lift a general slung load in an outdoor environment. Simulators of varying fidelity, including a pseudo-photorealistic Gazebo simulator, are developed alongside a functional real-world platform for testing load pose estimation methods. A novel cable-based testing apparatus that enables drone translation is used to facilitate stability testing of a quasi-static formation control method for lifting a slung load. This work aims to be the first to use visual feedback to estimate a load’s pose in a multi-drone slung load system operating without external motion capture devices. In simulation, perspective-n-point-based visual estimation achieves position errors of 0.1 m and geodesic-distance attitude errors around 0°. Real-world testing shows errors of 0.2 m and 5°, respectively. Applying extended Kalman filter and unscented Kalman filter formulations, simulated position estimates average near 0 m error, with an error noise magnitude of 0.06 m, only 6% of the cable length. Achieving accurate load pose estimates without an inertial measurement unit mounted to the load requires a good cable dynamics model. This work concludes by presenting a novel model for the effect of cables in a drone-slung-load system. A method based on universal differential equations shows promising early results.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Hurdles to Highways: Overcoming Barriers to Robotics Adoption in Supply Chains</title>
<link href="https://hdl.handle.net/1721.1/157194" rel="alternate"/>
<author>
<name>Hegarty, Bartholemew</name>
</author>
<id>https://hdl.handle.net/1721.1/157194</id>
<updated>2024-10-10T03:34:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Hurdles to Highways: Overcoming Barriers to Robotics Adoption in Supply Chains
Hegarty, Bartholemew
Macroeconomic events are putting unprecedented pressure on the warehouse industry. Among these are labor shortages, increased operating costs, and the desire for greater customization and higher throughput from these facilities. Focused on these challenges and strategic issues for warehouse applications, this thesis investigates the obstacles to implementing robotic automation in supply chains. The thesis explores this environment through the lens of three common integration methods: the traditional purchase, the lease, and the emerging robotics-as-a-service (RaaS) model. With these methods in scope, the study incorporates a multi-criteria decision-making (MCDM) framework built on the analytic hierarchy process (AHP) and combined with the technique for order of preference by similarity to ideal solution (TOPSIS). From this framework, the research identifies key decision criteria and their impact on selecting the most suitable integration strategy for automation.&#13;
&#13;
Through a literature review, the study identified the essential criteria for the project design decision. These include infrastructure requirements, system capabilities, usability, provider reputation, project duration, and the total cost of ownership. We then gained insight from industry professionals familiar with automation integration through a focused field study, which surfaced practical issues and general opinions on the criteria and how well they correspond to practitioners' integration plans. The results highlight notable trade-offs in the decision criteria, emphasizing the need for a more tailored strategy to make automation adoption more efficient.&#13;
&#13;
This thesis provides an effective decision support system to guide the choice of appropriate automation solutions. It helps clarify how decision makers give the most importance to different criteria when implementing robotic automation. The research findings offer helpful details for practitioners navigating the challenging warehouse automation environment. This, therefore, encourages better informed and more efficient decision-making procedures.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Bayesian Inference of Reaction Networks via Guiding</title>
<link href="https://hdl.handle.net/1721.1/157193" rel="alternate"/>
<author>
<name>Arya, Gaurav</name>
</author>
<id>https://hdl.handle.net/1721.1/157193</id>
<updated>2024-10-10T04:00:21Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Automatic Bayesian Inference of Reaction Networks via Guiding
Arya, Gaurav
Jump process models based on chemical reaction networks are ubiquitous, especially in systems biology modeling. However, performing inference on the latent variables and parameters of such models is challenging, particularly when the observations of the system state are noisy and incomplete. This thesis presents CatalystFitting, a system for inferring the latent variables and parameters of stochastic reaction network models given observational data. CatalystFitting provides primitives for performing changes of measure on jump processes. Building on top of these primitives, CatalystFitting further provides a library of strategies for guiding a jump process to match an observation set. These strategies exploit the form of the underlying symbolic reaction network to automatically produce guides optimized to the particular reaction network structure of interest to the modeler, accelerating otherwise costly Bayesian inference procedures. We present inference results on a bistable switch system and a repressilator system.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning Multimodal Extraction of Reaction Data</title>
<link href="https://hdl.handle.net/1721.1/157191" rel="alternate"/>
<author>
<name>Wang, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/157191</id>
<updated>2024-10-10T03:32:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Deep Learning Multimodal Extraction of Reaction Data
Wang, Alex
Automated extraction of structured information from chemistry literature is vital for maintaining up-to-date databases for use in data-driven chemistry. However, comprehensive extractions require reasoning across multiple modalities and the flexibility to generalize across different styles of articles. Our work on OpenChemIE presents a multimodal system that reasons across text, tables, and figures to parse reaction data. In particular, our system is able to infer structures in substrate scope diagrams as well as align reactions with their metadata defined elsewhere. In addition, we explore the chemistry information extraction potential of vision-language models (VLMs), which allow powerful large language models to leverage visual understanding. Our findings indicate that VLMs still require additional work to match the performance of our bespoke models.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Scalable Electrification Infrastructure in Logistics</title>
<link href="https://hdl.handle.net/1721.1/157190" rel="alternate"/>
<author>
<name>Alam, Muhammad Ashhad</name>
</author>
<id>https://hdl.handle.net/1721.1/157190</id>
<updated>2024-10-10T03:46:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Building a Scalable Electrification Infrastructure in Logistics
Alam, Muhammad Ashhad
The transportation sector in the US contributes to about a third of all greenhouse gas emissions, about a quarter of which stems from road freight. A major driver of this environmental footprint remains a heavy reliance on trucking—the least fuel-efficient mode of transportation. A key pathway toward freight decarbonization, therefore, involves shifting from internal combustion engines (ICE) to electric powertrains in truck fleets. This work develops analytics-based solutions to support and assess the electrification of long-haul logistics operations, by applying the methods to PepsiCo’s operations in Texas.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verifying Correctness of the Number Theoretic Transform and Fast Number Theoretic Transform in F⋆</title>
<link href="https://hdl.handle.net/1721.1/157189" rel="alternate"/>
<author>
<name>Ono, Rick R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157189</id>
<updated>2024-10-10T03:15:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Verifying Correctness of the Number Theoretic Transform and Fast Number Theoretic Transform in F⋆
Ono, Rick R.
As engineers continue to develop more sophisticated optimizations of cryptographic algorithms, the often simple mathematical specifications become convoluted in the implementations, and a class of correctness bugs arises. Because cryptographic algorithms often secure sensitive information, their correctness, and in turn their security, is a top priority. The Number Theoretic Transform (NTT) is an algorithm that enables efficient polynomial multiplication and has recently gained importance in post-quantum cryptography. This thesis presents a proof of correctness of the NTT in F⋆, a proof-oriented programming language that extracts to OCaml, and shows that we can use the NTT to perform polynomial multiplications. We provide an implementation of the Cooley-Tukey fast NTT algorithm and a proof that it matches the original NTT specification. This thesis also presents a representation of polynomials in the F⋆ subset Low*, which extracts to performant C code.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous UAV Navigation using Millimeter Wave Radar</title>
<link href="https://hdl.handle.net/1721.1/157188" rel="alternate"/>
<author>
<name>Herrera, Joshua I.</name>
</author>
<id>https://hdl.handle.net/1721.1/157188</id>
<updated>2024-10-10T03:03:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Autonomous UAV Navigation using Millimeter Wave Radar
Herrera, Joshua I.
We present the design, implementation, and evaluation of MilliNavigator, an autonomous navigation system for drones capable of mapping, path-planning, self-localizing, and navigating in indoor environments by leveraging strategically placed millimeter wave anchors. Autonomous drones are an increasingly relevant tool for completing and automating hard-to-reach tasks. State-of-the-art navigation systems rely primarily on cameras and GPS for environmental perception and self-localization. These solutions impose restrictions that limit the navigable environment to well-lit, outdoor, unobstructed paths. This thesis presents MilliNavigator, the first system to use millimeter wave radar and anchor-aware path planning to achieve high-accuracy, 6DOF, online localization. By generating a localization precision score map from known anchor deployments, the system jointly optimizes travel distance and localization performance. We implemented and evaluated MilliNavigator on a drone built with commercial, off-the-shelf parts. We ran over 165 successful missions across 7 different tag deployments. Our system achieved an overall median error of 7.9 cm and a 90th percentile error of less than 21 cm.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practical Exocompilation for Performance Engineers in User-Schedulable Languages</title>
<link href="https://hdl.handle.net/1721.1/157187" rel="alternate"/>
<author>
<name>Qian, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/157187</id>
<updated>2024-10-10T03:26:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Practical Exocompilation for Performance Engineers in User-Schedulable Languages
Qian, Kevin
High performance computing libraries provide efficient implementations of common computational kernels. Traditionally, such libraries are written in C or assembly. User-schedulable languages provide performance engineers a productive way to optimize these kernels, with well-designed interfaces that give users control over performance-relevant decisions and automate unnecessary concerns. Often, this is a tradeoff: too much control with too little automation is tedious to program, and too much automation with too little control will hinder obtaining peak performance. The principle of exocompilation advocates for one end of the extreme: giving performance engineers maximal control over code execution so they can maximize performance. However, its current implementation in existing systems is impractical to use. This thesis broadly explores ways to make exocompilation a practical solution for performance engineers. We show that providing more control does not necessitate sacrificing automation, as long as the language is designed so that users can build their own automation. We explore the necessary design features to enable such a system, demonstrate the types of automation users can build in the system, and brainstorm ways to further increase the amount of control user-schedulable languages expose to the user.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>satdatagen: a Python Library for Satellite Sensor Task Scheduler Support</title>
<link href="https://hdl.handle.net/1721.1/157185" rel="alternate"/>
<author>
<name>Golden, Adina H.</name>
</author>
<id>https://hdl.handle.net/1721.1/157185</id>
<updated>2024-10-10T03:02:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">satdatagen: a Python Library for Satellite Sensor Task Scheduler Support
Golden, Adina H.
The number of objects in Earth’s orbit is increasing rapidly, raising urgency for intensified observations of satellites and other resident space objects (RSOs) to manage space traffic and prevent collisions. Current methods for RSO detection and tracking rely on ground-based and space-based observatories with optical or radar sensors, but these telescopes require complex scheduling to achieve surveillance of all objects. Previous works have implemented scheduling algorithms and machine learning models that optimize the assignment of observation tasks to sensors. However, prior methodologies rely on different datasets, making comparisons across methods difficult. This paper presents satdatagen: a software package that generates datasets intended as baseline inputs to satellite sensor task schedulers. The datasets contain information about every satellite that passes in view of the sensor, such as its altitude angle and its brightness. Additionally, actual cloud cover data is included for optical telescopes that need to take visibility into account while scheduling observations. satdatagen is simple to use and does not require extensive outside knowledge from developers of scheduling tools.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion Phantom Development for MRI</title>
<link href="https://hdl.handle.net/1721.1/157184" rel="alternate"/>
<author>
<name>Liu, Kerlina</name>
</author>
<id>https://hdl.handle.net/1721.1/157184</id>
<updated>2024-10-10T03:48:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Motion Phantom Development for MRI
Liu, Kerlina
The development of magnetic resonance imaging (MRI) has enabled health care professionals to non-invasively visualize subjects' soft tissue for medical diagnosis. Since its inception, artifacts due to patients' movement have been an issue, and an assortment of tools and methods has been developed to help mitigate the effect of motion on MRI. However, such mitigation methods are generally applicable only on a case-by-case basis, depending on the specific type of motion. As such, additional research is required to develop novel methods and a standardized way of testing, validating, and ultimately comparing mitigation strategies.&#13;
&#13;
This work provides a design for a motion stage, as well as build instructions, for the Martinos head phantom, which moves in four degrees of freedom (linear translation in the plane parallel to the floor, a head-shaking "no" motion, and a head-nodding "yes" motion) independently of one another, with limited success. Only the translation along the z-axis (into and out of the bore) worked as expected, while the translation perpendicular to it (x-axis) did not. The total range of motion of the head phantom in the head-shaking "no" motion was approximately 19 degrees, though the torque required is on the higher end (on the order of 0.06 N*m) and the position of the rotational actuator needs some reexamination. The head-nodding "yes" mechanism is more promising, allowing for a tilt downward of 1 degree and upward of 2 degrees, but requires actuators capable of exerting 6 N of force or more.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamoscope: A Low-Cost, Open-Source, USB-3-Capable Streaming Data Acquisition System for Low-Field MRI</title>
<link href="https://hdl.handle.net/1721.1/157183" rel="alternate"/>
<author>
<name>Feld, Joseph W.</name>
</author>
<id>https://hdl.handle.net/1721.1/157183</id>
<updated>2024-10-10T03:45:14Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Streamoscope: A Low-Cost, Open-Source, USB-3-Capable Streaming Data Acquisition System for Low-Field MRI
Feld, Joseph W.
Magnetic Resonance Imaging (MRI) is a powerful, safe imaging technique based on using magnetism to provide contrast between soft tissues. Portable, low-field MRI is a growing area that has already demonstrated value in both educational and clinical domains. Low-field MRI systems need to acquire data with sample rates in the tens of megahertz, which can make the data acquisition system the bulk of the overall cost of low-cost systems. This work presents the Streamoscope: an open-source data acquisition system designed for low-field MRI that streams two 14-bit resolution channels at 60 megasamples per second over USB-3 into Python. It is approximately $300 in parts, about a quarter of the price of the cheapest data acquisition system on the market that would work in our case study. The Streamoscope can stream full-sample-rate raw MRI data into a computer to be processed in Python, enabling real time imaging. The system has been validated by generating 2D images of a phantom on a system with an 8 MHz Larmor frequency.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Health Divide: Achieving Equitable Healthcare Access in Kenya through Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/157182" rel="alternate"/>
<author>
<name>Nyakiongora, Geoffrey Mosoti</name>
</author>
<id>https://hdl.handle.net/1721.1/157182</id>
<updated>2024-10-10T03:00:52Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Bridging the Health Divide: Achieving Equitable Healthcare Access in Kenya through Artificial Intelligence
Nyakiongora, Geoffrey Mosoti
This research explores the innovative application of Artificial Intelligence (AI), particularly Generative Pre-trained Transformer (GPT) models, in designing culturally sensitive hospitals for rural Kenya. The research addresses the critical need for improved healthcare infrastructure in underserved areas, focusing on the potential of AI to create efficient, adaptable, and contextually appropriate hospital designs. The study employs a mixed-methods approach, combining qualitative analysis of cultural practices and healthcare needs with quantitative data on environmental factors and health statistics. A GPT model is developed and fine-tuned on a comprehensive dataset of Kenyan cultural information, healthcare data, and architectural knowledge. This AI model is then used to generate hospital design concepts that are evaluated against newly developed cultural sensitivity metrics. Key findings demonstrate the potential of AI to significantly reduce design time, improve space utilization, and enhance cultural appropriateness in hospital designs. The thesis also highlights the importance of human-AI collaboration, with local experts and community representatives playing crucial roles in refining and implementing AI-generated concepts. Challenges identified include data quality and availability in rural settings, the need for ongoing model refinement, and the importance of establishing ethical guidelines for AI use in healthcare design. The thesis concludes with a set of recommendations for implementing AI-driven, culturally sensitive hospital design processes in rural Kenya, including the development of specialized AI models, and establishment of collaborative design methodologies. These findings have significant implications for improving healthcare infrastructure in resource-constrained settings and offer a model for culturally sensitive, AI-driven architectural design in developing contexts globally.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Shape of Kubler: George A. Kubler in Peru, 1948-49</title>
<link href="https://hdl.handle.net/1721.1/157181" rel="alternate"/>
<author>
<name>Schweig, Johann</name>
</author>
<id>https://hdl.handle.net/1721.1/157181</id>
<updated>2024-10-10T03:10:50Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Shape of Kubler: George A. Kubler in Peru, 1948-49
Schweig, Johann
Yale art history professor George Kubler’s seminal 1962 publication The Shape of Time is, in his own words, representative of a “crossroads between the history and anthropology of art.” This work does not stand alone, but is rather part of a larger corpus of study through which Kubler turned to disciplines, methods, and tools outside of what is traditionally considered art historical—including anthropology, architectural representation, and biology—in order to generate new readings and understandings of the history of South and Central American art. This thesis examines a year of Kubler’s life, 1948-49, spent in Peru conducting archival research and field work on culture change with the Institute for Social Anthropology at the Smithsonian Institution and teaching a seminar on the use of archival sources in ethnology at Universidad Nacional Mayor de San Marcos in Lima; during this time, Kubler also engaged in the construction of an archive of his own. Drawing from correspondence and other records from the period in question, a series of lost episodes resurface, providing a reconstruction of various strata of 1940s Peruvian society: an increasingly cosmopolitan Lima stands in stark contrast to the underdeveloped, feudal Andean world, evidencing its colonial underpinnings. I contend that witnessing the coexistence of various temporalities within a single geographic territory had a significant impact on Kubler’s later theories on spatialized historical time.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beans to Bytes: Grey-Box Nonlinear System Identification Using Hybrid Physics-Neural Network Models</title>
<link href="https://hdl.handle.net/1721.1/157179" rel="alternate"/>
<author>
<name>Pronk, Morgen</name>
</author>
<id>https://hdl.handle.net/1721.1/157179</id>
<updated>2024-10-10T03:51:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beans to Bytes: Grey-Box Nonlinear System Identification Using Hybrid Physics-Neural Network Models
Pronk, Morgen
The advancement of neural networks in the last several years has yielded some astonishing results. However, their applicability to system identification and the modelling of dynamical systems still leaves considerable room for exploration. This thesis reviews different neural network architectures and their application to complex non-linear dynamic system identification. In particular, it uses the intricate process of coffee roasting as a case study to explore and demonstrate these techniques. Coffee roasting is a complex process that requires precise control to achieve the desired coffee quality. The ability to develop models that represent a system, i.e. system identification, is of great value to industry. Coffee roasting poses several challenges for system identification, from the complex chemical reactions occurring inside the bean to temperature trajectories that depend on several states that cannot be explicitly measured, such as moisture content or reaction rate, making it an ideal candidate for exploring the application and limitations of neural networks. The primary contributions of this study are a proposed "grey-box" model that augments previously established physics-based models, as well as illustrating the limits of LSTM and Deep NARX models using "one-step" forward prediction techniques. Although the study focuses explicitly on coffee roasting, the conclusions drawn are applicable to other similarly complex industrial and manufacturing processes.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Reinforcement Learning for Autonomous Robotics</title>
<link href="https://hdl.handle.net/1721.1/157178" rel="alternate"/>
<author>
<name>Vincent, Caroline R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157178</id>
<updated>2024-10-10T03:42:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Multi-Agent Reinforcement Learning for Autonomous Robotics
Vincent, Caroline R.
Technological advancements in autonomous robotics, including autonomous vehicles, have created new opportunities for innovative solutions to many everyday challenges. The impact of integrating robotic agents into real-world applications may be significantly enhanced by leveraging advancements in multi-agent autonomous systems. However, the coordination required in multi-agent systems demands complex motion planning to deconflict actions and prevent collisions of vehicles moving at increasingly high speeds. This thesis explores the application of multi-agent reinforcement learning (MARL) to autonomous robotics by teaching a central controller to navigate multiple agents across various environments without collisions. The simulated scenarios range from simple, obstacle-free environments to complex environments with obstacles configured to form narrow passageways or represent other complexities in dense urban environments. The findings demonstrate the potential of MARL to achieve high accuracy in navigating these different environments, highlighting the method's flexibility and adaptability across diverse settings and the resulting implications for applying MARL to real-world scenarios.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study on Deploying Large Language Models as Agents</title>
<link href="https://hdl.handle.net/1721.1/157177" rel="alternate"/>
<author>
<name>Cao, Jiannan</name>
</author>
<id>https://hdl.handle.net/1721.1/157177</id>
<updated>2024-10-10T03:04:37Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Study on Deploying Large Language Models as Agents
Cao, Jiannan
This thesis investigates the deployment and utilization of Large Language Models (LLMs) as agents, exploring their potential in automating workflows and enhancing user interactions. The study begins with an in-depth analysis of language models, tracing their evolution from pure statistical models to advanced neural network architectures like Transformers and their bidirectional variants. It then delves into the operational framework of LLM agents, detailing user interactions, environmental considerations, memory management, task planning, and tool use. The study addresses critical limitations in LLM inputs, such as the context window, and introduces Retrieval-Augmented Generation (RAG) as a solution to extend the model’s capability. Key APIs provided by OpenAI for deploying GPT models are discussed, highlighting their functionalities and applications. Finally, the practical application of LLMs in creating Robotic Process Automation (RPA) workflows is demonstrated through a divide-and-conquer methodology, showcasing the efficiency, scalability, flexibility, and accuracy of this approach. This comprehensive study underscores the transformative impact of LLMs in automating complex processes and enhancing user experiences through intelligent agent deployment.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cross-Shelf Exchange Driven by Dense Flow Down a Canyon</title>
<link href="https://hdl.handle.net/1721.1/157175" rel="alternate"/>
<author>
<name>Mier, Christian M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157175</id>
<updated>2024-10-10T03:52:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cross-Shelf Exchange Driven by Dense Flow Down a Canyon
Mier, Christian M.
Laboratory experiments investigated the dynamics controlling the cross-shelf exchange in a prograde sloping canyon induced by dense shelf water descending into the canyon. This thesis is motivated by the dispersal of dense water generated by polynyas on the Arctic and Antarctic continental shelves. Laboratory results corroborate prior numerical results suggesting that canyons are hotspots of cross-shelf exchange. When the dense water descends a canyon, it induces an onshore return flow of offshore water into the canyon. This return flow is initially driven by the dense water eddies descending the canyon and acting like a bucket brigade. At later times, another mechanism may also be at play, in which large dense cyclonic (anticlockwise) eddies on the northern continental shelf may pull more dense water out of the canyon, producing a region of low pressure near the canyon head which induces an increase in ambient flow into the canyon. The Burger number (Rossby radius of deformation/canyon width) and the dense water source location with respect to the canyon head affect the offshore ambient water velocity up the canyon. Additionally, as the offshore water reaches the canyon head, the offshore water volume flux becomes larger than the dense water volume flux, possibly due to the low pressure region described above. Understanding these dynamics in the Antarctic region is of global significance for two main reasons: 1. The offshore flowing dense water forms Antarctic Bottom Water and thus affects the global meridional circulation; 2. The onshore heat transport induced by the return flow drives glacial ice melt and therefore contributes to sea level rise.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding tumor cell plasticity in spatial transcriptomics with graph attention networks and walk-based pseudotime analysis</title>
<link href="https://hdl.handle.net/1721.1/157173" rel="alternate"/>
<author>
<name>Zamora, Izabella</name>
</author>
<id>https://hdl.handle.net/1721.1/157173</id>
<updated>2024-10-10T03:52:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding tumor cell plasticity in spatial transcriptomics with graph attention networks and walk-based pseudotime analysis
Zamora, Izabella
Tumor cell plasticity in cancer is a key driver of tumor progression, heterogeneity, metastasis, and treatment resistance. Tumor cells change states from the conventionally easier-to-treat epithelial state to the more resistant mesenchymal state. Understanding the transition dynamics of these states and the extrinsic factors influencing them is crucial for improving therapeutic strategies and patient outcomes. Utilizing spatial transcriptomics, extrinsic driving factors of plasticity can be probed. We introduce PlastiNet, which uses a graph attention-based network to create a spatially aware embedding. The utility of our approach is validated in model systems, specifically in the brain and colon, where it successfully identifies biologically relevant neighborhoods and maps differentiation pathways. When applied to pancreatic ductal adenocarcinoma (PDAC), it identifies distinct, conserved neighborhoods within the tissue, including diverse immune and cancer clusters. By estimating a differentiation path from epithelial to mesenchymal-like cells, we can identify intermediate states despite a limited set of tumor marker genes. This cellular differentiation path shows enrichment and depletion of certain cell types within local neighborhoods, aligning with known correlations, and by leveraging inferred ligand-receptor interactions, we can pinpoint potential drivers of plasticity to test in vitro. PlastiNet effectively generates hypotheses directly from patient-derived spatial transcriptomics samples, offering insights into the cellular mechanisms driving tumor plasticity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Origins of the East Greenland Coastal Current on the Northeast Greenland Shelf: a Comparison of Two Reanalysis Products</title>
<link href="https://hdl.handle.net/1721.1/157172" rel="alternate"/>
<author>
<name>Vianco, Sara L.</name>
</author>
<id>https://hdl.handle.net/1721.1/157172</id>
<updated>2024-10-10T03:06:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Origins of the East Greenland Coastal Current on the Northeast Greenland Shelf: a Comparison of Two Reanalysis Products
Vianco, Sara L.
The East Greenland Coastal Current (EGCC) carries some of the freshest outflow from the Arctic southward along the East Greenland Shelf and into the Nordic Seas and subpolar North Atlantic. How this fresh water initially flows onto the Northeast Greenland Shelf (NEGS) and feeds the EGCC is not well known due in part to the lack of observations in the region. In this thesis, I use two ocean reanalyses, the Regional Arctic Ocean/sea-ice Reanalysis (RARE) and Global Ocean Physics Reanalysis (GLORYS) to explore the structure and dynamics of the ocean circulation on the NEGS. To validate the use of these products in the region, I compare the reanalysis products to the Fram Strait Arctic Outflow Observatory for the period of 2003-2019. In the mean, RARE is too warm and salty compared to the moorings, while the properties in GLORYS track more closely to the observations. However, the observed velocity field is better represented in RARE than GLORYS. From there, I analyze the cross-shelfbreak flow from 74°N to 81.5°N in the two reanalysis products, and conclude that transport onto the NEGS of waters fresher than 34 salinity is driven by an Ekman circulation that arises from along-shelfbreak winds and a widening shelf south of 81.5°N. The enhanced transport of fresh water also shifts the isohalines across the shelfbreak, directing a geostrophic flow onshelf between 81°N and 79°N. The convergence of fresh water on the NEGS initiates the EGCC as an identifiable and distinct feature around 80°N in RARE, uniting the EGCC along the southwest coast of Greenland and its northern counterpart, the Polar Surface Water (PSW) Jet. In GLORYS, the EGCC is not present throughout the domain, though there is a weak net southward flow on the NEGS. The EGCC in RARE is primarily buoyancy-driven, though the along-coast winds likely play a major role in maintaining the density front that supports the EGCC. 
Results from this thesis have implications for the transport and fate of Arctic and Greenland-sourced fresh water, and stratification in the high latitude North Atlantic and Nordic Seas.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Stack Replacement Across User-Kernel Boundaries</title>
<link href="https://hdl.handle.net/1721.1/157170" rel="alternate"/>
<author>
<name>Mohr, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/157170</id>
<updated>2024-10-10T03:27:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">On-Stack Replacement Across User-Kernel Boundaries
Mohr, Katherine
In large, distributed computations with small amounts of work done at each node, networking latencies quickly add up, especially in comparison to the time taken to execute small tasks. As such, lowering network latencies is crucial to getting good performance. Previous research has shown that often the largest contributors to network latencies are data copies between kernel and application buffers. Conventional wisdom argues that to solve this problem, one should move the networking stack out of the kernel and into the user space or networking hardware. Instead, we build upon an alternative approach, known as LakePlacid. LakePlacid mitigates the kernel-user boundary overhead issue by moving the most important application logic out of the user space and into the kernel. This thesis proposes and implements a key improvement to LakePlacid. Because only part of the application logic is migrated to the kernel, some packets necessarily must be resolved in the standard user space application. The system discussed in this thesis allows packets which cannot be handled in the kernel to seamlessly continue in user space via on-stack replacement, thus preventing side effects from being executed erroneously. This system for on-stack replacement is very general, allowing execution to switch between code versions at any conditional, and it is novel in its ability to switch stacks across the user-kernel boundary. With this change, LakePlacid is able to better maintain the semantics of user applications, making it more feasible in practice.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models</title>
<link href="https://hdl.handle.net/1721.1/157169" rel="alternate"/>
<author>
<name>Figueroa, Reinaldo</name>
</author>
<id>https://hdl.handle.net/1721.1/157169</id>
<updated>2024-10-10T03:48:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models
Figueroa, Reinaldo
Language models are initially trained on large datasets, enabling them to extract patterns and establish rich contextual connections. When dealing with data scarcity, transfer learning has become the go-to method for applying these models to specialized downstream tasks via fine-tuning. However, fine-tuning on small datasets can lead to overfitting and a lack of generalization. Generalization is crucial when deploying models that perform sensitive tasks in a real-world environment, as it dictates how well the model performs on unseen data. Conversely, overfitting is highly likely to occur when training on small datasets. This thesis proposes and evaluates a new method for fine-tuning language models by adaptively choosing specific learning rates for each transformer layer that provide higher performance on in-domain low-volume datasets. Additionally, we explore which layers inside the models usually hold more contextual information from pre-training that might be valuable to keep ‘frozen’ when fine-tuning on small datasets. This analysis provides insights into fine-tuning approaches during initial experiments when data is limited. Our results demonstrate limited performance gains on certain models while achieving more significant gains on others when fine-tuning using our proposed method. Finally, our work provides valuable insight into the per-layer importance of language models by showing that certain layers have a stronger direct correlation with overall model accuracy.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Efficacy of Different Analysis Algorithms for Summarizing Online Deliberations</title>
<link href="https://hdl.handle.net/1721.1/157168" rel="alternate"/>
<author>
<name>Venkat, Naveen</name>
</author>
<id>https://hdl.handle.net/1721.1/157168</id>
<updated>2024-10-10T03:39:43Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Efficacy of Different Analysis Algorithms for Summarizing Online Deliberations
Venkat, Naveen
For the past decade, online deliberation platforms like Polis have expanded the reach of deliberative democracy, which calls for political decisions to be based on the results of fair and balanced discussions among citizens, by enabling larger deliberations. However, because these discussions often generate more comments than policymakers can feasibly review in full, these platforms include analysis algorithms that distill the conversation into a small set of comments, which policymakers can use as the basis of citizen input into decision-making. While Polis currently provides a clustering-analysis summary of the discussion, two newer aggregation algorithms, inspired by computational social choice theory and abstract argumentation theory, have recently been proposed. These algorithms seek to provide more representative (i.e. portraying all perspectives) and consistent (i.e. comments within a perspective do not oppose each other) summaries of the discussion, respectively. Still, though these newer algorithms may have theoretical advantages over Polis’s current methods, they have yet to be evaluated in a real-world application. Through a randomized controlled trial of all three approaches using a nationally representative sample, we compare their practical effectiveness, as measured by participants’ subjective experiences regarding how well these summaries represent their concerns. We find that the computational social choice-inspired algorithm consistently outperforms Polis’s current methods in this regard, though future theoretical work is still needed to fully adapt this approach to a real-world setting.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Prompt Transformation for Systematic Jailbreaks of LLMs</title>
<link href="https://hdl.handle.net/1721.1/157167" rel="alternate"/>
<author>
<name>Awoufack, Kevin E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157167</id>
<updated>2024-10-10T03:16:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Adversarial Prompt Transformation for Systematic Jailbreaks of LLMs
Awoufack, Kevin E.
The rapid integration of Large Language Models (LLMs) like OpenAI’s GPT series into diverse sectors has significantly enhanced digital interactions but also introduced new security challenges, notably the risk of "jailbreaking", where inputs cause models to deviate from their operational guidelines. This vulnerability poses risks such as misinformation spread and privacy breaches, highlighting the need for robust security measures. Traditional red-teaming methods, involving manually crafted prompts to test model vulnerabilities, are labor-intensive and lack scalability. This thesis proposes a novel automated approach using Reinforcement Learning from Human Feedback (RLHF) to transform unsuccessful adversarial prompts into successful jailbreaks. The system thus learns a policy, informed by relation to existing jailbreak prompts, that tells the generator LLM what makes an adversarial prompt successful. This was implemented using Proximal Policy Optimization (PPO) and tested with both a classifier and a judge reward model, attaining at best a 16% attack success rate on a target model. The approach can be applied to any prompt at the word level and further analyzed for toxicity characteristics. This work contributes to advancing LLM security measures, ensuring their safer deployment across various applications.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Computational Tool for Simplifying Engineering Tradeoff Analysis for the Design of Cost-Optimized, Time-Variant, Electrodialysis Reversal Desalination Systems</title>
<link href="https://hdl.handle.net/1721.1/157166" rel="alternate"/>
<author>
<name>Costello, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/157166</id>
<updated>2024-10-10T03:34:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development of a Computational Tool for Simplifying Engineering Tradeoff Analysis for the Design of Cost-Optimized, Time-Variant, Electrodialysis Reversal Desalination Systems
Costello, Jeffrey
This study presents an analytical tool for characterizing a wide swath of the design space for time-variant electrodialysis reversal brackish water desalination (TEDR) while avoiding the computation time often required by mechanistic models of electrodialysis reversal (EDR) and time-variant processes. In place of explicit computation, this paper proposes simplifying assumptions to simulate the desalination power and production rate of a TEDR process, enabling rapid year-long simulation and system optimization. The output of the model is compared to experimental data from a pilot TEDR system and shows good agreement in desalination power and production rate. Disagreement between the modeled and experimental pressure losses suggests additional losses in the experiment, which may be accounted for in future work. Two case studies, one case for potable water in the American Southwest and another case for irrigation water in the Middle East and North Africa (MENA) region, compare the results from 54 optimized systems. The results illustrate the complexity of system design and selection, elucidating tradeoffs between different models of electrodialysis (EDR) stacks, operating modes, and system configurations. The output of this model will enable system designers to confidently design and implement cost-effective TEDR systems to combat rising global freshwater scarcity.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Process Substitution on Manufacturing Costs: A Comparative Analysis of Sheet Metal Forming versus Extruded Steel Cutting</title>
<link href="https://hdl.handle.net/1721.1/157164" rel="alternate"/>
<author>
<name>Talal, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/157164</id>
<updated>2024-10-10T03:59:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Impact of Process Substitution on Manufacturing Costs: A Comparative Analysis of Sheet Metal Forming versus Extruded Steel Cutting
Talal, Omar
Sheet metal manufacturers continuously seek methods to enhance automation and reduce costs. This thesis explores process substitution and design standardization through a parameter-driven cost model and case studies applying Design for Manufacturability &amp; Assembly (DFMA) principles. Specifically, it evaluates substituting conventional sheet metal components with extruded steel profiles and replacing manual press brake operations with automated tube laser cutting. The findings show that tube laser adoption across a broad range of channels can reduce costs by 49% to 79%, with a payback period of under two years, even in scenarios with fluctuating raw material prices. The study proposes strategies for maximizing tube laser utilization through product mix analysis, redesign for compatibility, and designing with tube laser as the primary method. A developed automation tool using clustering aids profile identification, though the study highlights the need for improved data management around C-channel dimensions to enhance process standardization. The investigation confirms that extruded steel can be a cost-effective alternative to large-scale channel products, providing solutions for industry transition through direct replacement, compatibility-focused redesign, or design guidelines optimized for extruded steel.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Koopman Operator Theory to Legged Locomotion</title>
<link href="https://hdl.handle.net/1721.1/157163" rel="alternate"/>
<author>
<name>Terrones, Jasmine G.</name>
</author>
<id>https://hdl.handle.net/1721.1/157163</id>
<updated>2024-10-10T03:45:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Application of Koopman Operator Theory to Legged Locomotion
Terrones, Jasmine G.
Nonlinearities from complicated robot systems and harsh contact dynamics have long impeded the effectiveness of optimal control strategies for legged robots. In this work, we present a linearized simple walking model using Koopman Operator Theory, and its usage in Linear Model Predictive Control (L-MPC). Various walking and contact models were evaluated, but ultimately the rimless wheel was selected due to its inherent stability and low dimensionality, and a nonlinear viscoelastic model was used to accurately capture floor contact and impact dynamics. Koopman models were developed using both Radial Basis Functions (RBFs) and neural network-generated observables for the passive rimless wheel. A novel actuation method with linear actuators, combined with the Control Coherent Koopman methodology, resulted in accurate linear models that effectively enabled L-MPC to control the wheel on flat ground. This model outperformed those created using the more traditional Dynamic Mode Decomposition with Control method. This work demonstrates the power of Koopman linearization to produce a unified set of linear dynamical equations that encompass various contact and non-contact configurations and demonstrates the effectiveness of the Control Coherent Koopman methodology in generating an accurate input matrix across these different contact modes.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, simulation, and testing of a low cost laser micromachining system for flexible and rapid tissue-on-chip fabrication.</title>
<link href="https://hdl.handle.net/1721.1/157161" rel="alternate"/>
<author>
<name>Nin, Jorge A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157161</id>
<updated>2024-10-10T03:09:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Design, simulation, and testing of a low cost laser micromachining system for flexible and rapid tissue-on-chip fabrication.
Nin, Jorge A.
This study introduces a novel approach to tissue-on-chip device fabrication using low-cost picosecond laser ablation, addressing critical limitations in current manufacturing methods such as soft lithography, particularly in terms of material compatibility, feature resolution, and scalability. We developed a comprehensive finite element method (FEM) model for the laser ablation process, incorporating key physical phenomena including laser-material interactions, heat transfer, and material removal dynamics. This model, validated against experimental results, accurately predicts ablation depths within 20% of measured values across a range of laser parameters. Our experimental setup, utilizing a cost-effective 10 kHz picosecond laser system, demonstrates superior capabilities in creating high-aspect-ratio microchannels exceeding 20:1, surpassing traditional manufacturing techniques. We achieve precise control over channel dimensions, with widths ranging from 20 to 500 micrometers and depths up to 1 mm, while maintaining sub-micron surface roughness (Ra &lt; 0.8 &#120583;m). The system’s versatility is showcased through the fabrication of complex structures such as Tesla valves and high-resolution text features, with a minimum feature size of 20 &#120583;m. We present practical techniques for component selection and process parameter optimization using our simulation, reducing expensive and time-consuming experimentation. This work establishes low-cost picosecond laser ablation as a viable and advantageous method for tissue-on-chip manufacturing. With fabrication times of 6-8 minutes for small features and less than an hour for a full chip, our method represents a significant advancement in rapid prototyping capabilities. These findings demonstrate that laser ablation is a powerful technique for manufacturing tissue-on-chip devices, offering high resolution, flexibility, and scalability.
This approach has the potential to overcome the limitations of traditional methods, enabling the next generation of sophisticated, physiologically relevant in vitro models for biomedical research and drug development. The successful development and validation of the FEM model, coupled with practical demonstrations, provide a solid foundation for further advancements in laser-based fabrication of tissue-on-chip devices, potentially accelerating drug discovery processes and enabling more accessible production of personalized medicine platforms.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Design Study Using Simulation Techniques in Roll Form Production</title>
<link href="https://hdl.handle.net/1721.1/157160" rel="alternate"/>
<author>
<name>Lee, Joo Won</name>
</author>
<id>https://hdl.handle.net/1721.1/157160</id>
<updated>2024-10-10T03:43:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Design Study Using Simulation Techniques in Roll Form Production
Lee, Joo Won
Sheet metal roll forming is a continuous bending process where metal strips are fed through a sequence of rolls to achieve a specific cross-sectional profile. This method is vital in the automotive industry for producing high-strength, lightweight components with precision, consistency, and cost-efficiency. This project focuses on optimizing Novelis’s aluminum roll forming process using Computer-Aided Engineering (CAE) techniques, including UBECO Profil, AutoCAD, and Finite Element Analysis (FEA) tools such as Ansys and LS-Dyna. Initial simulations on a square tube profile were key in identifying critical stations, leading to performance improvements through targeted adjustments. Stress and strain analyses revealed how operational factors, such as roll adjustments, affect the section shapes and angles, facilitating the refinement of roll forming station settings. With a Design of Experiment (DOE) framework, the study identified key variables to enhance simulation output accuracy and optimize roll forming settings. The team successfully built a digital twin of the new roll forming line, which accurately predicted the final product's geometry and provided precise recommendations for machine settings to achieve the desired shape. Novelis can apply these insights to enhance their software, thereby potentially increasing production efficiency. This approach not only supports current operations but also lays the foundation for future research and development advancements.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affordable Fiber Extrusion Device for Educational Purposes: Design Improvements, Controls Development, and Manufacturing Scale-up</title>
<link href="https://hdl.handle.net/1721.1/157159" rel="alternate"/>
<author>
<name>Zhang, Yiqian</name>
</author>
<id>https://hdl.handle.net/1721.1/157159</id>
<updated>2024-10-10T04:12:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Affordable Fiber Extrusion Device for Educational&#13;
Purposes: Design Improvements, Controls Development,&#13;
and Manufacturing Scale-up
Zhang, Yiqian
The Fiber Extrusion Device (FrED) is an affordable desktop tool intended for engineering education. It mimics the fiber draw process, allowing students to study topics such as data acquisition, control systems, computer vision, data analytics, and smart manufacturing. As an educational tool, the goal of the device is to replicate the practical laboratory experience in remote learning scenarios. FrED has gone through multiple iterations, yet several outstanding issues remain. Building on the 2023 team’s progress, the 2024 project objectives include refining the design, developing controls, scaling up manufacturing, designing the assembly line, managing inventory, creating educational content, and conducting user testing and pilot runs. This thesis specifically details the author’s contributions to enhancing mechanical designs, advancing control systems, increasing production capacity, and planning educational materials. Mechanical components in the frame, the cooling system, and the diameter measurement system were redesigned to improve stiffness and stability. Local PID controllers were implemented for the DC motor and heater, effectively closing the feedback loop for fiber diameter control. The production target of manufacturing 35 FrED units was successfully achieved within the planned timeframe, with the packaging design optimized for efficient shipping. Additionally, an assembly manual, a graphical user interface, and control activities were developed as part of the educational content. Three user testing sessions were conducted to gather feedback.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology Performance Curves to Inform Government and Private Investment</title>
<link href="https://hdl.handle.net/1721.1/157158" rel="alternate"/>
<author>
<name>Roberts, Matthew R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157158</id>
<updated>2024-10-10T03:23:34Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Technology Performance Curves to Inform Government and Private Investment
Roberts, Matthew R.
Forecasts of technological progress are used to inform decisions in the public and private sectors that shape the modern technology landscape on a global scale. Technology performance curves are the quantitative, model-based representations of technological change employed in industrial, economic, and integrated assessment models to inform decision-making processes. Technology performance curves have evolved from their origins in the 1920s modeling of airframe manufacturing labor cost to consider mechanisms of technological progress, including learning-by-doing, learning-by-searching, economies of scale, and exogenous improvement. Examining changes to the performance and prevalence of technologies can provide insight that is relevant for product strategy and market forecasts. This knowledge can also help estimate the potential impact of government market policy and funding for research and development. This thesis seeks to consolidate the available literature on the various models of technology performance curves into a conceptual framework that can be used to understand the features and limitations of models, and their potential use cases.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Government Policies in Middle Eastern&#13;
Countries on Digital Platform Startups</title>
<link href="https://hdl.handle.net/1721.1/157157" rel="alternate"/>
<author>
<name>Ali Osman, Mohamed Mamdouh</name>
</author>
<id>https://hdl.handle.net/1721.1/157157</id>
<updated>2024-10-10T03:39:37Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Impact of Government Policies in Middle Eastern&#13;
Countries on Digital Platform Startups
Ali Osman, Mohamed Mamdouh
In the last decade, the financial sector has changed significantly. The introduction of new technologies and mobile applications transformed the entire industry, leading to the rise of financial technology (fintech) startups. Fintech startups offer a wide range of products and services, such as digital payments, Buy Now, Pay Later (BNPL), crowdfunding, peer-to-peer lending, etc. Middle East and North African (MENA) countries have seen significant growth in the number of fintech startups and the total investment value in these companies. For example, in Egypt, Fawry is the biggest payment service provider; it covers nearly 25% of Egyptian customers and handles more than 3 million daily operations. Some fintech companies in MENA have also become unicorns, such as Tabby of Saudi Arabia and MNT-Halan of Egypt. The increased penetration of fintech in MENA countries has consistently raised concerns about the data security, consumer protection, and financial stability risks these companies can pose. This raises a central question for financial sector authorities and regulators: how can they increase the number of these companies to support financial inclusion and the growth of the financial sector while, at the same time, mitigating the risks that these fintech companies present? This thesis provides a comprehensive analysis of the growth of fintech startups in the MENA region, focusing on four countries: Egypt, Saudi Arabia, the UAE, and Jordan. The study then investigates the fintech regulations in these countries, aiming to understand how recent regulations have impacted the growth of fintech startups through qualitative insights and case studies from the four countries. The study reveals the following: First, Jordan's fintech regulations are still in their early stages. Despite having some fintech regulations, key regulations such as data protection and cybersecurity laws are still missing. 
The absence of some fintech regulations may discourage investors and entrepreneurs from launching or expanding their fintech businesses in Jordan. Second, in Egypt, the fintech regulations align with investors' and entrepreneurs' expectations; however, the economic conditions (budget deficit and currency fluctuations) might hinder the growth of the fintech sector. Third, in Saudi Arabia and the UAE, the fintech ecosystem and regulations have encouraged entrepreneurs to start and grow their businesses and customers to adopt fintech products and services. The development of regulations, laws, and guidelines in both countries contributed to the growth of the fintech sector while, at the same time, safeguarding customers.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Sandbar Effects on Nearshore Waves and Morphological Change using SWAN</title>
<link href="https://hdl.handle.net/1721.1/157153" rel="alternate"/>
<author>
<name>Murman, Charles E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157153</id>
<updated>2024-10-10T03:28:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Modeling Sandbar Effects on Nearshore Waves and Morphological Change using SWAN
Murman, Charles E.
Numerical model simulations (Delft3D SWAN) are used to examine the impact of small alongshore variations in the bathymetry of an outer sandbar (in about 5-m water depth) on the nearshore wave field as the shallow (&lt; 3 m) bathymetry changes from nearly alongshore uniform to strongly spatially variable, in order to understand wave-driven morphologic evolution. Waves were observed at Duck, NC, with an array of 14 pressure gages between 1- and 3-m water depth spread over 250 meters alongshore. Bathymetry was measured between the dune toe and about 8-m water depth on September 26 and October 2, 2013. The bathymetry evolved from roughly alongshore uniform on September 26 to strongly alongshore variable on October 2. Between these dates incident significant wave heights ranged from 0.5 meters to 2.3 meters, with incident angles from 20 degrees north to 5 degrees south of shore normal. Simulations were run with observed bathymetry for both the outer bar and inner shallow bathymetry, with smoothed outer bar and observed shallow bathymetry, and with digital elevation model bathymetry to determine the effects of outer bar and shallow bathymetry on wave evolution.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Illusion of Wetness: Cold Dry Stimuli in Sensory Perception</title>
<link href="https://hdl.handle.net/1721.1/157152" rel="alternate"/>
<author>
<name>Ozor-Ilo, Ozioma</name>
</author>
<id>https://hdl.handle.net/1721.1/157152</id>
<updated>2024-10-10T04:00:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Investigating the Illusion of Wetness: Cold Dry Stimuli in Sensory Perception
Ozor-Ilo, Ozioma
Humans lack specialized receptors for perceiving wetness; instead, wetness is a compound sensation based on changes in skin temperature and contact pressure that are sensed by thermoreceptors and mechanoreceptors in the skin. In addition to perceiving the wetness of damp fabrics in contact with the skin or the presence of sweat on the skin, humans can perceive wetness in the absence of any moisture, a phenomenon known as illusory wetness. The illusion has been shown to arise when the skin is in contact with a surface and is cooled. This thesis is focused on understanding the variables that contribute to illusory wetness by first determining the difference threshold for perceiving the rate of skin cooling and relating this to perceived wetness. The results from the first two experiments showed that the difference threshold averaged 0.9 °C/s to 1.06 °C/s at a reference value of 0.5 °C/s. For perceiving wetness, the threshold averaged 1.08 °C/s to 1.41 °C/s. The latter finding indicates that the rate at which the skin cools must exceed some threshold value before the contact is perceived as wet. A third experiment explored the role of temperature and surface material in the perception of illusory wetness. The results showed that temperature was the more critical variable, with ratings of perceived wetness increasing as the temperature decreased further below the baseline skin temperature. These experiments have demonstrated the effect that rates of cooling have on perceiving illusory wetness and have contributed to a better understanding of the role of surface material and temperature on perceiving wetness during static contact. These findings are relevant to simulating wetness in prosthetic devices and virtual reality environments.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Process Replacement on Sheet Metal&#13;
Product Design: The Use of Steel Extrusions Versus&#13;
Formed Sheet Metal</title>
<link href="https://hdl.handle.net/1721.1/157151" rel="alternate"/>
<author>
<name>Yuan, Chenyu</name>
</author>
<id>https://hdl.handle.net/1721.1/157151</id>
<updated>2024-10-10T03:36:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Impact of Process Replacement on Sheet Metal&#13;
Product Design: The Use of Steel Extrusions Versus&#13;
Formed Sheet Metal
Yuan, Chenyu
The sheet metal manufacturing industry, with its rich history and legacy, continues to seek innovative methods to enhance automation and reduce costs in an increasingly competitive market. Design for Manufacturability &amp; Assembly (DFMA) has emerged as a strategy to simplify product designs, thereby improving manufacturing efficiency and reducing production costs. This research suggests the use of extruded steel profiles as an alternative to traditional sheet metal components that pose challenges for automation, particularly heavy gauge narrow channels. Additionally, it advocates for replacing manual press brake operations with advanced automated tube laser technology. The proposed shift not only simplifies the manufacturing process but also aligns with the broader goal of global cost reduction and process standardization, which are essential for enhancing New Product Introduction (NPI) efficiencies. The findings demonstrate that maximizing the application of tube laser technology across a diverse range of channels and products can lead to significant cost savings, ranging from 49% to 79%, with a payback period of less than two years. Even under fluctuating raw material prices, the tube laser method remains economically advantageous. Moreover, redesigning products to enhance compatibility with tube laser technology has been shown to increase the automation compatibility of an example product to 100%, underscoring the importance of incorporating DFMA principles from the early stages of product design.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Roll Form Bending Processes through Experimentation and Informed Predictive Analysis: A Strategic Approach to Optimize Tooling</title>
<link href="https://hdl.handle.net/1721.1/157150" rel="alternate"/>
<author>
<name>Kompella, Sarvagnya</name>
</author>
<id>https://hdl.handle.net/1721.1/157150</id>
<updated>2024-10-10T03:45:54Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Enhancing Roll Form Bending Processes through Experimentation and Informed Predictive Analysis: A Strategic Approach to Optimize Tooling
Kompella, Sarvagnya
Sheet metal roll forming is a continuous bending process where metal strips pass through a series of rolls to achieve a specific cross-sectional profile. This technique is crucial in the automotive industry for producing high-strength, lightweight components with precision, consistency, and cost-effectiveness. This project aims to optimize Novelis’s aluminum roll forming process by employing Computer-Aided Engineering (CAE) tools, including UBECO Profil, AutoCAD, and Finite Element Analysis (FEA) software such as LS-DYNA. Initial simulations of a square tube profile identified key stations and led to performance enhancements through targeted adjustments. Stress and strain analyses demonstrated how operational factors, such as roll settings, influence section shapes and angles, facilitating the fine-tuning of roll forming station parameters. Using a Design of Experiments (DOE) framework, the study pinpointed critical factors to improve simulation accuracy and optimize roll forming settings. The results indicated that optimized stand height settings significantly improved the accuracy of the desired angles. These insights can be integrated within Novelis’ production line to boost production efficiency and roll performance. This research not only supports current operations, but also provides a foundation for future advancements in roll forming technology.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refining Hardware of Desktop Fiber Extrusion Devices&#13;
for Affordable Manufacturing and Novel Fiber Prototyping</title>
<link href="https://hdl.handle.net/1721.1/157149" rel="alternate"/>
<author>
<name>Glasser, Kaili</name>
</author>
<id>https://hdl.handle.net/1721.1/157149</id>
<updated>2024-10-10T03:01:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Refining Hardware of Desktop Fiber Extrusion Devices&#13;
for Affordable Manufacturing and Novel Fiber Prototyping
Glasser, Kaili
The Fiber Extrusion Device (FrED) is a hands-on desktop tool designed to facilitate the teaching of manufacturing engineering concepts through remote laboratory experiences. FrED simulates the continuous fiber draw process used in various industries, including fiber optics, synthetic textiles, medical devices, aerospace, and construction. This device translates industrial-scale fiber draw towers into a compact version, allowing users to experiment with different parameters to understand their effects on manufacturing processes. Over the past three years, successive groups of MEng students have refined FrED’s design with the goal of creating a robust, functional, and affordable device for in-house manufacturing at the MIT FrED Factory. While the 2023 model achieved significant cost reduction, it required further hardware and electronics refinement for stable and repeatable performance. This thesis encompasses two main objectives: enhancing the hardware design and assembly processes for the final 2024 educational FrED model, and developing an alternative design for an advanced FrED version suitable for academic lab settings to rapidly prototype synthetic fibers. The first objective was met by improving the two most dynamic sub-assemblies—the gearbox and extrusion system—to ensure smooth and consistent operation. Additionally, conjoining part tolerances and hardware insert locations and geometries within manufactured parts were verified and adjusted according to manufacturing standards. Multiple jigs were also designed and fabricated to facilitate the assembly process of the gearbox and extrusion sub-assemblies, and two new parts were created to enhance user operation of FrED. For the second objective, an enhanced version of FrED capable of handling a wider range of preform materials was developed by upgrading the extrusion sub-assembly to operate at temperatures over three times higher than the educational version. 
This feature had been previously attempted with older, more expensive versions of FrED but had not been pursued with the recent, more affordable iteration. The new high-temperature FrED successfully drew fibers from PLA, a biodegradable thermoplastic, using 3D printed preforms with distinctive geometries, demonstrating its potential for providing an affordable solution for rapid synthetic fiber prototyping in academic labs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Internet Celebrity City: Social Media and Urban Space in China</title>
<link href="https://hdl.handle.net/1721.1/157148" rel="alternate"/>
<author>
<name>Chen, Yufei</name>
</author>
<id>https://hdl.handle.net/1721.1/157148</id>
<updated>2024-10-10T03:13:39Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Exploring the Internet Celebrity City: Social Media and Urban Space in China
Chen, Yufei
“Internet celebrity space” offers a fresh perspective for studying urban spaces in the mobile Internet era as a new visual consumption space. The term "Internet celebrity," or wanghong in Chinese, is used in modern Chinese media to refer to celebrities and the specific cultural and consumption trends linked to them. The concept has surfaced alongside the growth of e-commerce platforms, with the recognition that wanghong often engage in promoting products, services, or lifestyles to their followers. Internet celebrity spaces, or wanghong spaces, can elevate the popularity of certain areas and influence local neighborhoods, communities, and economies. Internet celebrity urbanism broadens this trend from individual locations to larger scales, encompassing entire districts or even whole cities. This thesis explores the impact of internet celebrity spaces in China. It is divided into three parts. First, it demonstrates the phenomenon and its background: the study investigates how internet celebrity spaces are represented on social media. Second, it reviews the latest research, analyzing research perspectives and methods to anchor the author’s research questions in appropriate approaches. Lastly, the influence of internet celebrity spaces is examined through a case study in Shanghai, observing their effect on street activity. Based on the analysis and conclusions, suggestions for future development are given.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Target Design and Optimizations for Spent Fuel Transmutation</title>
<link href="https://hdl.handle.net/1721.1/157147" rel="alternate"/>
<author>
<name>Tukharyan, Grigor</name>
</author>
<id>https://hdl.handle.net/1721.1/157147</id>
<updated>2024-10-10T03:04:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Target Design and Optimizations for Spent Fuel Transmutation
Tukharyan, Grigor
There are six long-lived fission products (LLFPs) identified in nuclear spent fuel, which account for at least 99% of the long-term radiotoxicity once actinide recycling is completed. This thesis examines the feasibility of using proton beams to transmute LLFPs into shorter-lived or stable isotopes. While long-term storage for high-level waste would still be necessary, transmuting the LLFPs can reduce the volume of waste material that needs to be stored. The objectives of this research are to explore the design of a proton transmutation facility, as well as to determine the optimal LLFP target-blanket material configuration for maximizing the transmutation efficiency. This thesis analyzes the use of intermediate energy beams of 18-70 MeV from commercial cyclotrons for transmutation. This thesis also analyzes the use of 1000 MeV proton beams to generate a substantial number of secondary neutrons through spallation interactions with target materials. The secondary neutrons produced from the spallation process are utilized by the LLFP materials, while surrounding blanket materials are selected to enhance the transmutation efficiency. PHITS, a Monte Carlo transport code, is employed to computationally model the interactions between LLFP materials and the proton beam. In this thesis, PHITS is used to estimate the flux-energy spectrum and the number of atoms irradiated in the LLFP target during beam interaction. This data is then post-processed using a 0-dimensional analysis in FISPACT to estimate the transmutation rate for each LLFP. PHITS is also used to find the depletion rate of the LLFPs for the 18-70 MeV beam case and for spallation-induced transmutations in the 1000 MeV case. Geant4, a Monte Carlo transport toolkit, is used to calculate the production rate of particles attributed to the spallation process. Analysis of the performance of commercial cyclotrons with energies of 18-70 MeV indicates that transmutation rates increase with higher proton beam energy. 
A cyclotron with a beam current of 10 mA and beam energy of 70 MeV running continuously can transmute 15.401 ± 0.069 g/year of Tc-99. However, Tc-99 is produced at a rate of approximately 8.54 kg/year in a 1 GW reactor, suggesting that a single commercial cyclotron beam is currently not viable for transmutation purposes. A proposed tank design with a lead/Tc-99 target that is surrounded by LLFP pins and heavy water is considered for the spallation study. Although using Tc-99 as a target directly transmutes 0.893 ± 0.002 kg/year from transmutation attributed to spallation, using lead as a target instead approximately doubles the transmutation rates in the LLFP regions for almost all of the LLFP isotopes. In both cases, the depletion rate of the LLFPs is greatly increased compared to using a commercial cyclotron of 70 MeV. A proton spallation source with a beam current of 10 mA and beam energy of 1000 MeV, using a Tc-99 target, achieves a transmutation rate of approximately 10.9 kg/year of Tc-99 in the LLFP pins through secondary neutrons produced by the spallation process. In contrast, using a lead target achieves a higher transmutation rate of around 20.0 kg/year of Tc-99 in the LLFP pins. This work was supported by the DOE ARPA-E Project under the award number DE-AR0001578.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Singular Value Decomposition Through&#13;
Least Squares</title>
<link href="https://hdl.handle.net/1721.1/157145" rel="alternate"/>
<author>
<name>Zhao, Freddie</name>
</author>
<id>https://hdl.handle.net/1721.1/157145</id>
<updated>2024-10-10T03:04:26Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Distributed Singular Value Decomposition Through&#13;
Least Squares
Zhao, Freddie
Singular value decomposition (SVD) is an essential matrix factorization technique that decomposes a matrix into singular values and corresponding singular vectors that form orthonormal bases. SVD has wide-ranging applications, from principal component analysis (PCA) to matrix completion and approximation. Methods for computing the SVD of a matrix are extensive and involve optimization algorithms with some theoretical guarantees, though many of these techniques are not scalable in nature. We show the efficacy of a distributed stochastic gradient descent algorithm by implementing parallelized alternating least squares, proving theoretical guarantees for its convergence, and presenting empirical results, which together allow for the development of a simple framework for computing the SVD in a correct, scalable, and easily optimizable manner.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Survey Techniques to Examine Morphological Evolution of Coastal Regions</title>
<link href="https://hdl.handle.net/1721.1/157143" rel="alternate"/>
<author>
<name>Ammons, Seth N.</name>
</author>
<id>https://hdl.handle.net/1721.1/157143</id>
<updated>2024-10-10T03:01:26Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Survey Techniques to Examine Morphological Evolution of Coastal Regions
Ammons, Seth N.
Beaches are dynamic, changing with tides, winds, and waves. Here, a beach was mapped daily for 3 weeks from the dune to the low-tide water line on the Outer Banks of North Carolina at the US Army Corps of Engineers Field Research Facility in Duck. The 22,500 m2 area of interest was surveyed daily by a walker carrying a GPS-equipped backpack and occasionally with a lidar-equipped drone. Surveys of the northern region of interest were also collected with a stationary terrestrial lidar mounted on the dune. The observed morphological events include the destruction and formation of a cusp field, during which there was 1.4 m of erosion and accretion associated with bays and horns, and the formation over 7 days of a ~1-m high ridge and runnel system. The GPS-equipped backpack apparatus was used as ground truth for estimates made with the lidar systems. Along both cross- and alongshore transects the lidar elevations were within approximately 0.05 m of those estimated by the backpack surveys, with RMS errors less than 0.11 m.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Engineering for Carbon Capture and Storage</title>
<link href="https://hdl.handle.net/1721.1/157140" rel="alternate"/>
<author>
<name>Zhang, Tiantian</name>
</author>
<id>https://hdl.handle.net/1721.1/157140</id>
<updated>2024-10-10T03:32:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Systems Engineering for Carbon Capture and Storage
Zhang, Tiantian
Carbon Capture and Storage (CCS) is a crucial technology in the mission to achieve net-zero carbon emissions by midcentury. By capturing and storing CO2 from large industrial sources and power plants, CCS mitigates the impact of existing industrial activities while maintaining energy security and economic stability. The study underscores the necessity of a systematic approach to CCS system design and development to meet stakeholder requirements. It highlights the versatility of CCS in addressing emissions across various sectors, its ability to be retrofitted to existing infrastructure, and its potential for immediate emissions reduction compared to the longer timelines required for integrating renewable energy sources.&#13;
This study analyzes CCS systems holistically, identifying primary components and alternative options for capture, transport, storage, and utilization. It reveals that the transport type significantly impacts system utility, with pipelines being the most effective. The analysis also indicates that CCS systems capturing CO2 from power plants, ammonia, and chemical production facilities and utilizing onshore pipelines and saline aquifers offer high utility and low cost. The Gulf Coast and Permian &amp; Midcontinent regions show better performance due to existing infrastructure and storage capacity. The study emphasizes the benefits of staged CCS development for broader deployment, technology maturation, and cost recovery. Sensitivity analyses suggest that future technology advances could further improve CCS system performance and economic viability.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redefining Urban Landscapes: A Methodological Approach to Transforming Underused Parking Spaces with Dynamic Urban Functions</title>
<link href="https://hdl.handle.net/1721.1/157139" rel="alternate"/>
<author>
<name>Fan, Jie</name>
</author>
<id>https://hdl.handle.net/1721.1/157139</id>
<updated>2024-10-10T03:41:52Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Redefining Urban Landscapes: A Methodological Approach to Transforming Underused Parking Spaces with Dynamic Urban Functions
Fan, Jie
This study presents an approach to identifying underutilized urban spaces, focusing on parking areas, and explores potential reutilization strategies in Greater Boston. In the milieu of the information age, global urbanization, and technological development, the abundance of urban data offers a new way to approach urban proposals. The city, as a multifaceted artifact, is examined through the lens of advanced data-driven techniques, particularly deep learning. Using a computer vision model, underused surface parking lots are automatically detected from historical satellite imagery, highlighting a misalignment between the current infrastructure and actual urban needs. The study then leverages various urban factors to analyze parking patterns. In conjunction with the multimodal transportation system, there are opportunities to repurpose redundant surface parking. Given the high rents and housing situation, these spaces could be transformed into housing units or even mixed-use districts to alleviate the housing crisis in Greater Boston.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Capture to Storage: Understanding the Viability&#13;
and Challenges of Carbon Capture and Sequestration&#13;
Initiatives</title>
<link href="https://hdl.handle.net/1721.1/157138" rel="alternate"/>
<author>
<name>James, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/157138</id>
<updated>2024-10-10T04:09:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Capture to Storage: Understanding the Viability&#13;
and Challenges of Carbon Capture and Sequestration&#13;
Initiatives
James, Lauren
This thesis explores the implementation of Carbon Capture and Sequestration (CCS) technologies, focusing on the stages of capture, transportation, and sequestration. Utilizing a system dynamics model, the research evaluates CCS's effectiveness and economic viability across various scenarios, including those outlined by the International Energy Agency (IEA). The baseline model suggests that even under favorable assumptions, CCS permanently sequesters only a small fraction of total global emissions.&#13;
&#13;
The economic analysis reveals a slight decrease in total costs, attributed to the learning curve, but offset by increasing costs as more complex projects are undertaken. The model also highlights the energy penalty associated with high energy requirements for capture. Additionally, the alignment of capacities across capture, transportation, and sequestration phases is important because discrepancies can lead to inefficiencies and bottlenecks.&#13;
&#13;
This research acknowledges limitations, including the use of aggregated data and assumptions across many parameters. These limitations emphasize the need for further research to refine these estimates and enhance the model's accuracy. Despite these challenges, the model serves as a beneficial tool for testing policy interventions and assessing the potential of CCS as a component of global climate strategy.&#13;
&#13;
Overall, the findings highlight the complexities and challenges of deploying CCS technologies at scale, emphasizing the importance of coordinated policy, technological innovation, and infrastructure development. This research provides a foundation for future studies and policy discussions to better understand CCS's role in achieving climate goals.&#13;
&#13;
Disclosure: The following content is the author’s, and the author takes responsibility for all of it. It was generated by the author with the assistance of an AI-based system to augment the effort.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic Analysis of Geothermal District Heating&#13;
in the Boston, MA area.</title>
<link href="https://hdl.handle.net/1721.1/157136" rel="alternate"/>
<author>
<name>Estep, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/157136</id>
<updated>2024-10-10T03:27:22Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Technoeconomic Analysis of Geothermal District Heating&#13;
in the Boston, MA area.
Estep, Joseph
This study conducts a comprehensive technoeconomic analysis of geothermal district heating (GDH) in the Boston, MA area, with a specific focus on the MIT campus. The research begins by reviewing the evolution of district energy systems, highlighting various use cases, technologies, and policy developments. It then defines the system problem and establishes a framework for implementing a geothermal district heating system at MIT. The analysis examines the economic viability and decarbonization potential of the GDH system, identifying various system architectures and phased campus sector implementation scenarios. These scenarios are compared to a 'business as usual' reference case. The study reveals that the recommended implementation scenario, MG-E-N-W, not only offers the lowest cost but also achieves the lowest emissions. Over a 30-year period, this scenario presents a net present value (NPV) savings of more than $700 million and 2 million MTCO2e compared to the reference case, making it the most economically and environmentally favorable option for MIT's campus energy system transformation.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Location, Location, Substation? How Battery Energy Storage Systems (BESS) Can Create Value in Unexpected Places</title>
<link href="https://hdl.handle.net/1721.1/157127" rel="alternate"/>
<author>
<name>Schutt, Neal</name>
</author>
<id>https://hdl.handle.net/1721.1/157127</id>
<updated>2024-10-03T03:53:09Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Location, Location, Substation? How Battery Energy Storage Systems (BESS) Can Create Value in Unexpected Places
Schutt, Neal
The transition to renewable energy is a critical step in reducing global carbon emissions, yet it introduces new challenges for the aging electrical grid, particularly in urban areas. Battery Energy Storage Systems (BESS) are emerging as key infrastructure in this transition, capable of enhancing grid resiliency, managing peak loads, and facilitating the integration of renewable energy sources. Federal and state incentives and a recent sharp decline in the cost of battery cells have made BESS development economically viable. This thesis explores the potential of BESS to create public and economic value in underutilized urban spaces through the exploration of a hypothetical redevelopment proposal for the Alewife MBTA Complex in Cambridge, Massachusetts.&#13;
&#13;
The Alewife MBTA Complex presents significant challenges for redevelopment due to the high cost of demolishing the decaying existing structure. However, its proximity to a major substation and the increasing local demand for electricity make it an ideal candidate for a BESS project. This thesis demonstrates how integrating energy storage into the redevelopment of the site can enable an otherwise financially infeasible project.&#13;
&#13;
The paper provides an overview of the BESS development process, detailing each phase from creating a business strategy to disposition. It offers insights into the common challenges encountered, and how these might be navigated to optimize project outcomes. By breaking down the development timeline and key decision points, this thesis serves as a practical guide for real estate professionals to gain familiarity with Battery Energy Storage Systems. &#13;
&#13;
Through detailed financial modeling and analysis, including sensitivity testing, this research quantifies the expected financial performance of a BESS project at the Alewife site. The study concludes that BESS can unlock ‘found value’ in sites with little other economic potential. The findings suggest that incorporating BESS into real estate development projects can provide substantial public benefits, including enhanced grid resilience, lower energy costs, and increased property values, making it a strategic tool for urban planners and developers.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Severity of a Cybersecurity Incident for Incident Reporting</title>
<link href="https://hdl.handle.net/1721.1/157124" rel="alternate"/>
<author>
<name>Conard, Chelsea Foushee</name>
</author>
<id>https://hdl.handle.net/1721.1/157124</id>
<updated>2024-10-03T03:47:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Quantifying the Severity of a Cybersecurity Incident for Incident Reporting
Conard, Chelsea Foushee
In the field of cybersecurity, the lack of standardized data collection and incident reporting methods poses significant challenges in addressing and responding to incidents affecting critical infrastructure. Various initiatives aim to resolve this issue by mandating the collection of data on cyber incidents; however, there is often a lack of clear guidelines on how the collected data will be utilized effectively.&#13;
This paper introduces the Cyber Incident Severity Scale (CISS), a framework designed to guide the selection of relevant data for analysis and to communicate the severity of a cybersecurity incident. Drawing insights from established scales in other fields, such as natural disasters and public health, this research produces a single score per reporting entity that can be aggregated to determine the overall severity of an incident. The ability to swiftly assess and score an incident is critical for quantifying incident severity, prioritizing response, supporting policy development, and bolstering the overall security of critical infrastructure.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond the Ovaries: Renaming a common yet neglected hormonal condition could be the key to unlocking better care for patients</title>
<link href="https://hdl.handle.net/1721.1/157123" rel="alternate"/>
<author>
<name>Stewart, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/157123</id>
<updated>2024-11-14T17:04:22Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beyond the Ovaries: Renaming a common yet neglected hormonal condition could be the key to unlocking better care for patients
Stewart, Lily
PCOS is a common hormonal condition found in 10 to 19 percent of people with ovaries. It frequently causes irregular periods and ovulation and is one of the most common forms of female infertility. However, the effects do not stop there. People with PCOS are at higher risk for a slew of health complications: insulin resistance, sleep apnea, depression, and anxiety. They are also more likely to develop metabolic syndrome—a combination of high cholesterol, high blood pressure, diabetes, and high waist-to-hip ratios. Together, many of these symptoms are risk factors for fatty liver disease or heart attacks and strokes. &#13;
&#13;
Despite the commonness and potential seriousness of the condition, many patients go undiagnosed, and those with diagnoses frequently go under-treated. The reasons for this are many. PCOS’s cause is unknown. It has no known cure. It looks different from patient to patient. Its research is underfunded. Physicians do not learn much about it in medical school. &#13;
&#13;
But one reason at the root of it all, some experts say, is how tightly this condition has been intertwined with reproduction and fertility. Over the past decade, researchers and physicians who specialize in the condition have been pushing for everyone to recognize PCOS for what it is: a full-body endocrine syndrome with wide-reaching effects on health and quality of life. And one way to combat these is to change something fundamental about the condition: its name.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trouble on the Range: When Does a National Park Become a Bison Zoo?</title>
<link href="https://hdl.handle.net/1721.1/157119" rel="alternate"/>
<author>
<name>Hartley, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/157119</id>
<updated>2024-11-14T16:59:43Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Trouble on the Range: When Does a National Park Become a Bison Zoo?
Hartley, Sophia
Yellowstone National Park is often credited with bringing American bison back from the brink of extinction. In 1902, there were merely 25 individual bison in the park, but now, Yellowstone’s herd fluctuates between 3,000 and 5,500 animals. Over the past century, the national park’s conservation effort pushed bison into the public spotlight. The animal has become a symbol of the great American West, and recently, bison were named the US national mammal.&#13;
&#13;
Many of Yellowstone National Park’s bison reside in the park’s northern range, a 380,000-acre network of valleys, mountains, and river basins. One of these valleys, Lamar, is a hotspot for bison viewing, but, unbeknownst to many casual tourists, the area has also long been the center of an intense scientific debate. &#13;
&#13;
Before thousands of bison covered the floor of Lamar Valley, a different hooved mammal stood in their place. Over the 19th and 20th centuries, hunting pressure, federal policy, and unnatural predator-prey relationships made Yellowstone’s northern range a haven for elk herds. As they proliferated in peace, elk chewed through the northern range’s preexisting ecosystems. Their appetites took a severe toll on native flora, which in turn, shrank habitats for other wildlife. Debates about park management and range science broke out between independent scientists and Yellowstone officials. The disagreements lasted for decades. But in the late 1990s, a whirlwind of decisions reduced (and maintained) elk herds to a more manageable level. Scientists thought that finally, the northern range’s native flora and fauna might have a chance to recover. &#13;
&#13;
For many years, it seemed like an ecological revival was beginning. But not everywhere: regrowth in regions of the northern range where bison heavily grazed was lagging behind. A growing body of research suggests that bison are having a similar adverse effect on Yellowstone’s ecosystems as the historic overabundance of elk. In Lamar Valley, many riverbanks are still devoid of trees, beavers are few and far between, and non-native species are increasingly prevalent. &#13;
&#13;
Yellowstone officials disagree with this consensus. Instead, they point to research showing how bison positively impact the landscape. In 2023, the park released a bison management proposal that has only intensified the debate. The proposal dismissed a large body of research as insignificant, going on to suggest increasing the size of the park’s bison herd. In addition to concerns about ecological degradation, many independent researchers are perplexed as to why Yellowstone — the world’s first national park — is seemingly intent on diminishing or ignoring the significance of legitimate scientific research.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Astrophysical Simulations with GPUs: A Case Study of Radiative Transfer in arepo-rt</title>
<link href="https://hdl.handle.net/1721.1/157117" rel="alternate"/>
<author>
<name>Verbeek, Erkin Emiel</name>
</author>
<id>https://hdl.handle.net/1721.1/157117</id>
<updated>2024-10-03T03:02:52Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Accelerating Astrophysical Simulations with GPUs: A Case Study of Radiative Transfer in arepo-rt
Verbeek, Erkin Emiel
Radiative transfer (RT) is a crucial ingredient for self-consistent modelling of numerous astrophysical phenomena across cosmic history. However, on-the-fly integration into radiation-hydrodynamics (RHD) simulations is computationally demanding, particularly due to the stringent time-stepping conditions and increased dimensionality inherent in multifrequency collisionless Boltzmann physics. The recent emergence of exascale supercomputers, equipped with large numbers of CPU cores and GPU accelerators, offers new opportunities for enhancing RHD simulations. We present the first steps towards optimizing the RHD solver AREPO-RT for such high-performance computing environments. We implement a novel node-to-node communication strategy that utilizes shared memory to substitute intranode communication with direct memory access. Furthermore, combining multiple internode messages into a single message substantially enhances network bandwidth utilization and performance for large-scale simulations on modern supercomputers. The single-message node-to-node approach also improves performance on smaller-scale machines with less optimized networks. Additionally, by transitioning all RT-related calculations to GPUs, we achieve a significant computational speedup of around 15x for standard benchmarks compared to the original CPU implementation. As a case study, we perform cosmological RHD simulations of the Epoch of Reionization, employing a setup similar to the THESAN project. In this context, RT becomes sub-dominant such that even without modifying the core AREPO codebase, there is an overall threefold improvement in efficiency. The advancements presented here have broad implications, potentially transforming the complexity and scalability of future simulations for a wide variety of astrophysical studies. This work serves as a blueprint for porting similar simulation codes based on unstructured resolution elements to GPU-centric architectures.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kent Kiehl’s Search for the Criminal Brain America’s self-proclaimed “psychopath whisperer” says he can predict criminality in incarcerated people. Is the legal system buying it?</title>
<link href="https://hdl.handle.net/1721.1/157116" rel="alternate"/>
<author>
<name>Hopkins, Sarah Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/157116</id>
<updated>2024-11-14T17:02:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Kent Kiehl’s Search for the Criminal Brain America’s self-proclaimed “psychopath whisperer” says he can predict criminality in incarcerated people. Is the legal system buying it?
Hopkins, Sarah Rebecca
Since the 19th century, researchers have attempted to uncover the biological roots of criminality. The process has been both scientifically dubious and ethically fraught. While biological theories of criminal behavior faded after World War II, they arose again in the 1990s and early 2000s, when new brain imaging techniques collided with a growing interest in understanding how biological drivers of crime, if they exist, could be analyzed to understand, and even predict, criminal behavior. This thesis examines the research and claims of a prominent neuropsychologist within that historical context. He claims to have conducted promising brain research on incarcerated people that could uncover biological markers of criminal behavior, or even predict future criminality. Yet methodological and ethical questions have been raised about his research. Is it scientifically valid to have a brain-based view of criminal behavior? Is it ethically valid to assume that criminal behavior can be decoded from the brains of people incarcerated in a system that disproportionately impacts people of color and those from low socio-economic backgrounds? His critics are doubtful.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nipah: The history, and future, of one of the world’s most lethal viruses</title>
<link href="https://hdl.handle.net/1721.1/157115" rel="alternate"/>
<author>
<name>Viveros, Alex Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/157115</id>
<updated>2024-11-14T17:02:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Nipah: The history, and future, of one of the world’s most lethal viruses
Viveros, Alex Gabriel
The Nipah virus kills around three quarters of the people who contract it, making it one of the most lethal viruses known to infect humans. The virus first emerged in 1998, when hundreds of pig farmers in Malaysia fell ill with fevers and encephalitis, or brain inflammation. Nipah has caused smaller outbreaks in nearby Bangladesh nearly every year since then. The Malaysian farmers appeared to have been infected directly by their pigs, rather than by each other. For a time, there was no clear evidence that Nipah could spread from human to human. That changed in April of 2004, when investigators responding to a Nipah outbreak in a remote district of Bangladesh discovered that the virus was spreading person to person. Pteropus fruit bats, which are native to South Asia, were identified as the natural reservoirs of the Nipah virus. Researchers have spent the last two decades studying the virus’ transmission in bats and how the virus spills over into humans. Institutions across the world have even recently started developing Nipah vaccines. Scientists believe the Nipah strains that currently circulate in humans are likely not transmissible enough to ignite a pandemic in people. That could change. Officials worry about what could happen if the virus one day evolves to spread better within humans, or hits a particularly susceptible place and thrives, and Nipah comes to affect larger populations. The Nipah virus is just one of many zoonotic pathogens that scientists are studying to understand how humanity can prepare for future deadly pathogens.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Contours of the Cloud: Dissecting the Real Estate Investment Decisions of Data Center Operators</title>
<link href="https://hdl.handle.net/1721.1/157114" rel="alternate"/>
<author>
<name>Fawcett, Robert Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/157114</id>
<updated>2024-10-03T03:22:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Contours of the Cloud: Dissecting the Real Estate Investment Decisions of Data Center Operators
Fawcett, Robert Logan
This thesis investigates the real estate investment decisions of data center operators, with a focus on how key infrastructure characteristics influence data center development. Using a sequential econometric approach, the research applies both a logit and a hedonic model to evaluate the importance of various factors. The logit model explores the likelihood of data center development at the county level, highlighting geographical characteristics. The hedonic model examines the impact of specific site attributes, such as proximity to power infrastructure and fiber, on the scale of data center facilities in megawatts. The findings suggest that colocation data centers prioritize connectivity, electrical infrastructure, and urban proximity, while the location of hyperscale facilities is more variable and less predictable. This study enhances our understanding of how modern technological demands, particularly in the AI era, shape real estate strategies and offers insights into future trends in digital infrastructure investments.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis of Isotopically Labeled Fe and S-alkylated Iron-Sulfur Clusters</title>
<link href="https://hdl.handle.net/1721.1/157112" rel="alternate"/>
<author>
<name>Linn, Brittany</name>
</author>
<id>https://hdl.handle.net/1721.1/157112</id>
<updated>2024-10-03T03:04:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Synthesis of Isotopically Labeled Fe and S-alkylated Iron-Sulfur Clusters
Linn, Brittany
Radical S-adenosylmethionine (SAM) enzymes (RS enzymes) use a 3:1 site-differentiated [Fe₄S₄]⁺ cluster to reductively cleave the SAM cofactor and generate a 5’-deoxyadenosyl radical intermediate (5’-dAdo•) that regio- and stereospecifically abstracts an H-atom from the target substrate. It has been proposed that 5’-dAdo• binds to the unique Fe site before abstracting an H-atom from the substrate. However, due to the transient nature of captured reaction intermediates, their precise structures have yet to be fully elucidated and, therefore, their role in the mechanism of RS enzymes remains unclear. Our group has established reliable methods of synthesizing alkylated [Fe₄S₄] clusters that can serve as models of organometallic intermediates in RS enzyme catalysis. These clusters are competent for radical release and, upon oxidation, undergo an alkyl migration process to yield S-alkylated clusters. A cluster species containing a unique alkylated Fe site with a coordination number greater than four is likely generated in these processes, although a stable cluster of this type has yet to be isolated and crystallographically characterized. This work reports the synthesis of α-²H and α-¹³C isotopically labeled Fe- and S-ethyl ligated [Fe₄S₄] clusters to determine their electron-nuclear hyperfine parameters by ENDOR spectroscopy. These parameters will aid in the identification of alkylated [Fe₄S₄] cluster intermediates generated in biological studies. Additionally, in an attempt to synthesize an [Fe₄S₄]³⁺ cluster with a five-coordinate, Fe-alkylated site, a series of benzyl and phenyl ligated clusters were prepared and analyzed by NMR and EPR spectroscopies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Drivers of Deforestation using Games on Spatial Networks</title>
<link href="https://hdl.handle.net/1721.1/157109" rel="alternate"/>
<author>
<name>Seby, Jean-Baptiste</name>
</author>
<id>https://hdl.handle.net/1721.1/157109</id>
<updated>2024-10-03T03:01:59Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Understanding Drivers of Deforestation using Games on Spatial Networks
Seby, Jean-Baptiste
As the impacts of climate change become more extensive and intense, effective actions for mitigation and adaptation become imperative. Since deforestation is a key driver of CO₂ emissions and forests constitute a crucial carbon sink, mitigating deforestation is an essential policy lever for governments. However, much of tropical deforestation results from the actions of private entities that use the cleared land for activities such as palm oil tree cultivation, timber plantation, and agriculture. Often, the incentives to engage in (often illegal) deforestation within a forest concession are coupled with these activities and are also shaped by the activities in neighboring concessions. In this thesis, we focus on the problem of modeling these strategic interactions using game theory. We analyze a class of games in which agents engage in coupled activities over a spatial network and study a policy intervention to limit illegal deforestation.&#13;
&#13;
Firstly, we conduct an equilibrium analysis of a game in which each agent decides the production levels of her coupled activities in the presence of network effects. Practically, these network effects are induced by the spatial arrangements of concessions and their ownership structures. We consider the general case where network effects are heterogeneous, i.e. the network effects influencing palm oil tree cultivation and timber logging are described by different graphs. We provide a sufficient condition for the existence and uniqueness of a Nash equilibrium. This result follows by leveraging the potential function of the game or via a general variational inequality argument. &#13;
&#13;
Secondly, we analyze how the spatial structure of concessions impacts the equilibrium outcome. In addition to the basic game in which each agent simultaneously engages in two activities, we consider a variation in which agents engage in one of the activities (but not both). We show that in both cases the equilibrium structure can be expressed as a linear combination of weighted Bonacich centrality vectors -- a node-centrality measure that depends on the total number of walks that depart from a node (concession). Our analysis provides new insights on the drivers of illegal logging in forest regions where palm oil cultivation and timber logging are coupled.&#13;
&#13;
Thirdly, we evaluate the impact of an "edge removal" intervention policy in which the boundary between two neighboring concessions is monitored or a buffer is created between them. We characterize the policy of a social planner who is interested in maximally reducing the illegal production of timber. Interestingly, we identify a regime shift (or phase transition) as the local network effect and the level of coupling between activities vary. This result identifies conditions under which the social planner should incentivize specialization (enforcing cultivation of either palm oil trees or timber) versus diversification (allowing cultivation of both).
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Do High Street Retail Rents Align with the Economy? An Analysis of Retail Real Estate Pricing Dynamics Based on Macroeconomic Trends</title>
<link href="https://hdl.handle.net/1721.1/157106" rel="alternate"/>
<author>
<name>Xu, Yujian</name>
</author>
<id>https://hdl.handle.net/1721.1/157106</id>
<updated>2024-10-03T03:19:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Do High Street Retail Rents Align with the Economy? An Analysis of Retail Real Estate Pricing Dynamics Based on Macroeconomic Trends
Xu, Yujian
This study closely examines the correlation between high street retail rents and key economic indicators, specifically Consumer Price Index (CPI) and Gross Domestic Product (GDP). Utilizing data on rent levels from prominent high streets globally, the analysis incorporates these macroeconomic indicators to discern patterns and relationships. Through methodologies such as multiple linear regression and Error Correction Model (ECM), the paper aims not only to analyze how high street retail rents align with CPI and GDP but also to explore the primary factors influencing these rents. In studying high street retail properties or considering the acquisition of such properties, this methodology can be used to determine whether a high street is susceptible to macroeconomic fluctuations. If not, it may be necessary to consider the uniqueness of the area or potential risks involved.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Social Information on Reliance and Efficacy&#13;
in AI-assisted Prediction</title>
<link href="https://hdl.handle.net/1721.1/157105" rel="alternate"/>
<author>
<name>Alsobay, Mohammed</name>
</author>
<id>https://hdl.handle.net/1721.1/157105</id>
<updated>2024-10-03T03:44:54Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Effect of Social Information on Reliance and Efficacy&#13;
in AI-assisted Prediction
Alsobay, Mohammed
This work addresses an under-explored aspect of people's utilization of algorithmic decision support systems: How do people perceive and use these systems under social influence? Through a pre-registered randomized human-subject experiment, I study the effect of two forms of social information (direct conversations and summarized peer decisions) on users' reliance and effectiveness in leveraging algorithmic advice across a series of decision-making tasks, and how the availability of local model explanations and performance feedback moderates this effect. I find that, on average, neither form of social information affects trust directly, yet they both moderate the extent to which feedback and model explanations influence trust in the algorithm. However, while social information can influence trust in the algorithm, I detect no effect on how effectively people utilize algorithmic advice. By describing this interplay between social information, algorithmic transparency, and user behavior, this work contributes to recent research on collective intelligence and sociotechnical approaches to human-AI interaction.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Curve of Inflation Expectations and Firms’ Investments</title>
<link href="https://hdl.handle.net/1721.1/157104" rel="alternate"/>
<author>
<name>Perinelli, Giuditta</name>
</author>
<id>https://hdl.handle.net/1721.1/157104</id>
<updated>2024-10-03T03:35:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Curve of Inflation Expectations and Firms’ Investments
Perinelli, Giuditta
Using rich survey data on Italian firms, this paper studies the formation mechanisms of inflation expectations at different forecasting horizons. Starting from empirical evidence embedded in firms’ inflation expectation curve, we obtain three main findings: (1) firms extrapolate at long forecasting horizons, (2) inflation forecasts overreact (underreact) at long (short) forecasting horizons, and (3) long-term inflation expectations impact investment decisions. Specifically, we find that a 1% wedge between the 4-year-ahead and 1-year-ahead expected inflation is associated with a 0.8% increase in the probability of investing. What motivates this result? After ruling out the alternative channels of (1) an increase in expected demand, (2) a decrease in the supply of input goods, and (3) an improvement in financing conditions, we claim that a decrease in the perceived cost of capital is the main driver.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Seoul Apartment Prices during Population Decline Era</title>
<link href="https://hdl.handle.net/1721.1/157103" rel="alternate"/>
<author>
<name>Cho, Moohyun</name>
</author>
<id>https://hdl.handle.net/1721.1/157103</id>
<updated>2024-10-03T03:50:21Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Analysis of Seoul Apartment Prices during Population Decline Era
Cho, Moohyun
Since the early 2020s, South Korea has faced population decline driven by the lowest birth rates globally, yet apartment prices in the capital region, covering Seoul, the capital city of South Korea, and Gyeonggi-do, have ironically shown a consistent upward trend. This thesis explores the persistent rise in apartment prices despite Seoul's diminishing population, providing insights into the economic and social factors behind this trend. Through an analysis of the characteristics of Seoul apartments, including the unique Jeonse system, and the impacts of population trends by region, this research demonstrates the broader implications of single-person household trends and an aging population. Furthermore, comparative case studies from Japan and France support the relationship between aging populations and housing markets. By applying various indices related to apartment prices, this study demonstrates the correlations between apartment prices and demographic changes, and explores potential future scenarios for the housing market in Seoul.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>No One Wants To Be A Parasitologist: The Shrinking Field of America's Least Favorite Animals</title>
<link href="https://hdl.handle.net/1721.1/157101" rel="alternate"/>
<author>
<name>Richter, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/157101</id>
<updated>2024-11-14T18:24:40Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">No One Wants To Be A Parasitologist: The Shrinking Field of America's Least Favorite Animals
Richter, Hannah
Parasites have a bad rap. Most people think of them as scary, gross, or both, but they are also diverse creatures that have evolved in and on every animal and ecosystem on the planet. Parasitism is the most successful way of life for an animal — representing more than 40% of all species — and the wormy and crawly creatures it encompasses are vastly understudied. An increasing volume of research shows that parasites play important ecological functions, from keeping animal populations in check to stabilizing food chains to driving evolution and biodiversity. While parasites can cause horrible human suffering, especially in countries without reliable clean water or sanitation systems, only a fraction of parasites affect humans, with estimates as low as 0.1%. &#13;
&#13;
As climate change and habitat loss threaten animals, so too do they endanger the parasites that live on and inside them. At the same time that parasite biodiversity faces decline, the field of parasitology is reckoning with its own crisis: membership in the American Society of Parasitologists has declined by 76% in the past 50 years, and many of the world’s most important parasitologists are elderly or dead. To revitalize the field, parasitologists are charming younger generations with parasite Pokémon cards and stuffed animals and attempting to integrate parasites into global conservation programs. One main question is on parasitologists’ minds: How can they convince people to discover, catalog, and understand the world's parasite biodiversity before parasites, the field’s leaders, and their valuable knowledge die off?
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Looking at the Map, Together: Modeling Treatment Center Location Selection and its Effects on Access to Gene Therapy in Brazil</title>
<link href="https://hdl.handle.net/1721.1/157099" rel="alternate"/>
<author>
<name>Wertheimer, Sarah R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157099</id>
<updated>2024-10-03T03:34:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Looking at the Map, Together: Modeling Treatment Center Location Selection and its Effects on Access to Gene Therapy in Brazil
Wertheimer, Sarah R.
Choosing how many and which treatment centers should offer a gene therapy to patients is a crucial decision that affects how far the treatment must be transported and how far patients must travel to receive it. Many gene therapies are for patients with severe diseases that make it difficult to travel. On the other hand, cold chain requirements make shorter transportation preferable for gene therapies, and few centers have prior experience handling them. Using multi-criteria optimization modeling paired with local input, this thesis explores different approaches to the gene therapy treatment center location selection decision and how these approaches would affect patients’ geographic accessibility to treatment.
We focus on Brazil and a specific gene therapy product as our case study. We interview local pharmaceutical company employees to understand the stakeholders involved in this decision and the approaches being considered. We model how these approaches would affect patients’ geographic accessibility to treatment and discuss potential modifications to our model. Finally, by means of an interactive workshop, we explore the decision-making discussion between stakeholders in choosing which approach to follow.
We find that the approaches under consideration result in a wide range of geographic accessibility for patients. Early-stage decisions have impacts across stages, and even therapies, due to a reluctance to select new locations. Patients in the northwest of Brazil would need stakeholders to consider candidate locations beyond government reference centers or those with gene therapy experience in order to have a treatment center nearby. Regarding facilitation, we find that quick, low-stakes modeling and joint discussion could allow stakeholders to consider approaches they might not otherwise consider.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Wildfire Suppression: A branch-and-price-and-cut approach</title>
<link href="https://hdl.handle.net/1721.1/157098" rel="alternate"/>
<author>
<name>Wachspress, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/157098</id>
<updated>2024-10-03T03:25:12Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Optimizing Wildfire Suppression: A branch-and-price-and-cut approach
Wachspress, Jacob
In periods of intense, synchronous wildfire activity, fire system managers must make rapid fire prioritization decisions over a dispersed geographic area with limited suppression resources. This thesis defines the Wildfire Suppression and Crew Assignment Problem, which optimizes resource allocation to triage fires based on damage risk, crew availability, and spatiotemporal dynamics.
We formulate a two-sided set partitioning model on time-space-rest networks for crew assignments and time-state networks for fire damage, with linking constraints between both; this representation can encode a broad class of non-linear wildfire spread models and diverse suppression objectives. To solve it, we develop a two-sided column generation algorithm that generates fire suppression plans and crew routes iteratively. We embed it into a branch-and-price-and-cut algorithm to retrieve an optimal integer solution, using novel special-purpose cuts that augment generalized-upper-bound cover cuts and a novel branching rule that leverages dual information from the linking constraints. Extensive computational experiments show that the algorithm scales to practical problems that remain otherwise intractable. The optimization methodology can provide high-quality solutions by jointly optimizing wildfire triaging and crew assignments, resulting in enhanced wildfire suppression effectiveness.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analysis Using State Space Global Coherence of Brain Dynamics in a Young Child Under Sevoflurane General Anesthesia</title>
<link href="https://hdl.handle.net/1721.1/157097" rel="alternate"/>
<author>
<name>Gallo, Sebastian A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157097</id>
<updated>2024-10-03T03:04:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">An Analysis Using State Space Global Coherence of Brain Dynamics in a Young Child Under Sevoflurane General Anesthesia
Gallo, Sebastian A.
The dynamics of brain states under general anesthesia in infants are complex and exhibit significant developmental changes, particularly in the context of neurophysiological responses. Traditional EEG analysis has been valuable in tracking these changes, but there is a critical need for more precise, quantitative methods to assess neural synchrony and coherence in this vulnerable population. This thesis explores advanced state-space modeling techniques, specifically State Space Global Coherence (SSGC), to estimate global coherence (GC) during sevoflurane general anesthesia in an infant. Two different SSGC approaches were employed: one directly estimated GC from the data, while the other first estimated the covariance matrix and then used this matrix to compute GC. The SSGC approaches were first applied to a validation dataset that had previously been analyzed using SSGC for covariance estimation, verifying the analysis pipeline against a dataset with known outcomes before proceeding to exploratory analysis. Once validated, the pipeline was applied to EEG data from a 10-month-old infant, a dataset where SSGC had not been previously utilized. Following this, both the validation dataset and the infant dataset were used to compare the effectiveness of SSGC for covariance estimation versus direct GC estimation. The infant dataset, in particular, provided an opportunity to explore the utility of SSGC in a new context. Both datasets to which the SSGC methods were applied had a low signal-to-noise ratio. This revealed that direct GC estimation provided improved temporal resolution for GC and the ability to capture dynamic changes in coherence over time. In contrast, SSGC for covariance estimation produced results nearly identical to empirical GC, suggesting that it is more susceptible to noise. 
The resilience of direct GC estimation to noisy data highlights its potential as a robust tool for capturing the spatiotemporal dynamics of neural synchrony under anesthesia. This thesis emphasizes the importance of advanced modeling techniques in enhancing neurophysiological monitoring, with significant implications for improving pediatric anesthetic care and outcomes.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>State Estimation in Dynamical Robotic System with Non-Gaussian Noise</title>
<link href="https://hdl.handle.net/1721.1/157096" rel="alternate"/>
<author>
<name>Jin, David</name>
</author>
<id>https://hdl.handle.net/1721.1/157096</id>
<updated>2024-10-03T03:28:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">State Estimation in Dynamical Robotic System with Non-Gaussian Noise
Jin, David
State estimation is critical for robot operation. Most estimation algorithms assume that robotic sensor measurements are contaminated by Gaussian noise. However, in practical applications, the noise is often non-Gaussian, heavy-tailed, or even multi-modal. In this thesis, we develop algorithms that perform state estimation in dynamical systems with arbitrary noise and prove their theoretical guarantees. We tackle two challenging state estimation problems: multi-model point cloud registration and state estimation in polynomial dynamical systems, both contaminated by non-Gaussian noise. In the multi-model 3D registration problem, we are given two point clouds picturing a set of objects at different poses (and possibly including points belonging to the background) and we want to simultaneously reconstruct how all objects moved between the two point clouds. We propose a simple approach based on Expectation-Maximization (EM) and establish theoretical conditions under which the EM approach recovers the ground truth. We evaluate the approach on simulated and real datasets ranging from table-top scenes to self-driving scenarios and demonstrate its effectiveness. For state estimation in polynomial systems corrupted by arbitrary noise, we develop a new filtering approach called the Generalized Moment Kalman Filter (GMKF). The GMKF formulates the prediction and update steps as polynomial optimization problems (POP) and solves them using moment relaxations, carrying over a possibly non-Gaussian belief. In the linear-Gaussian case, the GMKF reduces to the standard Kalman Filter. We demonstrate that the GMKF performs well under highly non-Gaussian noise and outperforms common alternatives, including the Extended and Unscented Kalman Filters and their variants on matrix Lie groups. We also showcase applications to challenging landmark-based and lidar-based robot localization problems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Phight for Phage: Understanding Bacteriophage Therapy in Aquaculture and Human Health</title>
<link href="https://hdl.handle.net/1721.1/157095" rel="alternate"/>
<author>
<name>Cornman, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/157095</id>
<updated>2024-11-14T17:01:20Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Phight for Phage: Understanding Bacteriophage Therapy in Aquaculture and Human Health
Cornman, Eva
In the wake of the antibiotic resistance crisis, alternative options to prevent and treat bacterial infections are desperately needed. Researchers across the world are turning to the most abundant biological particle on our planet: bacteriophage. Often called phage, these microscopic viruses infect bacteria, and their high specificity and incredible abundance may make them viable treatment options. Scientists have known about phage for over a century, but renewed interest over the past few decades has spurred a wide variety of research into the biology and applications of these viruses. The benefits, and some of the challenges, of phage therapy for both aquaculture and human health are discussed here.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing a Tiled Singular Value Decomposition: A Framework for Tiled Linear Algebra in Julia</title>
<link href="https://hdl.handle.net/1721.1/157092" rel="alternate"/>
<author>
<name>Ringoot, Evelyne</name>
</author>
<id>https://hdl.handle.net/1721.1/157092</id>
<updated>2024-10-03T03:07:39Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Implementing a Tiled Singular Value Decomposition: A Framework for Tiled Linear Algebra in Julia
Ringoot, Evelyne
High-performance computing (HPC) is essential for scientific research, enabling complex simulations and analyses across various fields. However, the specialized knowledge required to utilize HPC effectively can be a barrier for many scientists. This work introduces a hardware-agnostic, large-scale tiled linear algebra framework in Julia designed to enhance accessibility and usability without compromising performance. By providing a flexible abstraction layer, the framework simplifies the development and testing of new algorithms across diverse computing architectures. The Julia language’s multiple dispatch and type inference facilitate the development of type-agnostic, hardware-agnostic, and multi-use frameworks by allowing composability. Utilizing a tiled approach, the implemented framework improves data locality, parallelism, and scalability, making it well-suited for modern heterogeneous environments. Its practical benefits are demonstrated through the implementation of a tiled QR-based singular value decomposition (SVD), showing how it streamlines the development process and accelerates scientific discovery. The developed framework is used to implement an in-GPU tiled SVD and an out-of-core GPU-accelerated SVD. Furthermore, its extensibility is demonstrated by implementing a tiled QR algorithm. This work aims to democratize HPC resources by bridging the gap between advanced computational capabilities and user accessibility, empowering a broader range of scientists to fully leverage modern computing technologies.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Opinion Dynamics to Collective Action: How Identity-Based Tolerance Leads to Political Extremism</title>
<link href="https://hdl.handle.net/1721.1/157090" rel="alternate"/>
<author>
<name>Liang, Chen E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157090</id>
<updated>2024-10-03T03:41:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Opinion Dynamics to Collective Action: How Identity-Based Tolerance Leads to Political Extremism
Liang, Chen E.
Current sociological theories attribute the recent surge in political extremism to mechanisms of opinion “homophily” (i.e., like-minded individuals interact more while dissimilar ones might distance) and “assimilation” (i.e., interactions homogenize opinions), which collectively suggest a social world dominated by extreme views. Yet, this view contradicts empirical evidence showing that extremists still represent a minority and individual opinions remain largely stable. We resolve this apparent paradox by illustrating how extreme collective action can arise from a moderate majority that retains moderate opinions yet responds positively to recruitment by extremists. We break down this task into three steps. First, we theoretically distinguish between opinion homophily and identity homophily (i.e., individuals who share the same identity interact more). Second, we develop an agent-based model to manipulate the strength of identity homophily relative to opinion homophily, while excluding the effect of assimilation (i.e., holding opinions constant). Our model reveals that strong identity-based tolerance can create a “radicalized” structure, which allows extremists and moderates, who disagree in opinion but share an identity, to maintain stable relationships in emergent clusters. Further, the structure concentrates extremists at the center of the clusters, enabling them to form a critical mass that enlists a broader population. Finally, beyond confirming our expectations, we uncover unexpected model behaviors by exploring how the “radicalized” structure can transition between three other distinct structures the model generates. We show that homogeneous groups, often seen as indicators of polarization, could paradoxically be key to reducing organized extremism when dominated by moderates who can effectively mobilize collective action while marginalizing extremists.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Social Science: Language Models as Scientist and Subjects</title>
<link href="https://hdl.handle.net/1721.1/157089" rel="alternate"/>
<author>
<name>Manning, Benjamin S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157089</id>
<updated>2024-10-03T03:42:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Automated Social Science: Language Models as Scientist and Subjects
Manning, Benjamin S.
We present an approach for automatically generating and testing, in silico, social scientific hypotheses. This automation is made possible by recent advances in large language models (LLMs), but the key feature of the approach is the use of structural causal models. Structural causal models provide a language to state hypotheses, a blueprint for constructing LLM-based agents, an experimental design, and a plan for data analysis. The fitted structural causal model becomes an object available for prediction or the planning of follow-on experiments. We demonstrate the approach with several scenarios: a negotiation, a bail hearing, a job interview, and an auction. In each case, causal relationships are both proposed and tested by the system, finding evidence for some and not others. We provide evidence that the insights from these simulations of social interactions are not available to the LLM purely through direct elicitation. When given its proposed structural causal model for each scenario, the LLM is good at predicting the signs of estimated effects, but it cannot reliably predict the magnitudes of those estimates. In the auction experiment, the in silico simulation results closely match the predictions of auction theory, but elicited predictions of the clearing prices from the LLM are inaccurate. However, the LLM's predictions are dramatically improved if the model can condition on the fitted structural causal model. In short, the LLM knows more than it can (immediately) tell.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Central Bank Real Estate Purchases on Asset Prices</title>
<link href="https://hdl.handle.net/1721.1/157083" rel="alternate"/>
<author>
<name>Batista, Quentin</name>
</author>
<id>https://hdl.handle.net/1721.1/157083</id>
<updated>2024-10-03T03:46:32Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Impact of Central Bank Real Estate Purchases on Asset Prices
Batista, Quentin
This paper estimates the impact of central bank real estate purchases on asset prices, demonstrating an increase of 0.1% to 0.2% of Real Estate Investment Trust (REIT) prices in the hours following a typical intervention of 0.014% of market capitalization. At longer horizons, the purchases do not appear to have a significant aggregate effect. The primary identification strategy exploits the nature of the Bank of Japan’s (BoJ) policy rule, which triggers purchases when the Tokyo Stock Exchange Real Estate Investment Trust index falls below a certain threshold. Alternative research designs that exploit the counter-cyclical nature of the BoJ’s policy rule and cross-sectional variation in the eligibility of REITs for BoJ purchases are also considered. Overall, these findings are inconsistent with the predictions of canonical and recent models of asset pricing.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lessons From President Moon Jae In’s Housing Policy and The Road to Affordable Home Ownership in Seoul, South Korea</title>
<link href="https://hdl.handle.net/1721.1/157082" rel="alternate"/>
<author>
<name>Cho, Kibong</name>
</author>
<id>https://hdl.handle.net/1721.1/157082</id>
<updated>2024-10-03T03:42:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Lessons From President Moon Jae In’s Housing Policy and The Road to Affordable Home Ownership in Seoul, South Korea
Cho, Kibong
A fundamental goal of housing policy is to provide a safe and quality place to live for the population. This thesis studies the provision of affordable homeownership in Seoul, South Korea, particularly for non-homeowners and first-time buyers who did not have an opportunity to participate in the housing boom that previous generations experienced. In Seoul, 58% of the population are non-homeowners. First, this thesis provides a brief introduction to Korean housing history. Second, it discusses the housing policy under President Moon Jae In and how housing prices soared under his administration due to misguided efforts. Finally, it describes the necessary path towards mitigating the housing affordability crisis that has been created in Seoul, using both supply- and demand-side arguments.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case Study in Marketing a Real Estate Debt Fund through the Design and Preparation of a Private Placement Memorandum (PPM) and Investor Presentation</title>
<link href="https://hdl.handle.net/1721.1/157081" rel="alternate"/>
<author>
<name>Poirier, Richard Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/157081</id>
<updated>2024-10-03T03:38:49Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Case Study in Marketing a Real Estate Debt Fund through the Design and Preparation of a Private Placement Memorandum (PPM) and Investor Presentation
Poirier, Richard Scott
Private equity-backed real estate debt funds play a crucial role in providing capital to borrowers seeking financing for construction projects. These funds raise capital from investors, deploy it strategically, and actively manage debt investments to generate returns for their limited partners. The appeal lies in the potential for attractive yields and risk management strategies in a complex investment landscape. There are countless potential fund structures to address a range of investment strategies, risk profiles, investor appetites, geographic considerations, and manager experience and deal access. This study delves into the dynamics of capital raising for a real estate debt fund specializing in private construction loans. It covers the essential elements of the Private Placement Memorandum (PPM), including legal disclosures, investment terms, risk factors, and fund-specific details. This research aims to provide a real-world example of a fund designed according to current trends and market terms for use by a real-life investment manager, ProBuilder Financial LLC. The PPM and the associated investor presentation utilize best practices for presenting complex financial information in a clear and concise manner. Bridging theory and practice sheds light on the strategies, risk-reward trade-offs, and market implications associated with this capital-raising channel.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Airline operating cost reduction through enhanced engine health analytics</title>
<link href="https://hdl.handle.net/1721.1/119307.2" rel="alternate"/>
<author>
<name>Luu, Henry H. T</name>
</author>
<id>https://hdl.handle.net/1721.1/119307.2</id>
<updated>2024-10-02T03:59:43Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">Airline operating cost reduction through enhanced engine health analytics
Luu, Henry H. T
Engine Health Management (EHM) is a comprehensive maintenance service offered by engine manufacturer Pratt &amp; Whitney (PW) to its airline customers. In its current form, engine performance is monitored through recorded physical metrics, such as gas temperature, pressure, and altitude, taken as single snapshots at various phases of flight. The advent of the Enhanced Flight Data Acquisition, Storage and Transmission (eFAST™) system, which allows for near-continuous recording of engine metrics, provides Full-Flight Data Analytics (FFDA) that may proactively alert and recommend maintenance activity to airlines. Adopting eFAST™ may help avoid Adverse Operational Events (AOE) caused by unexpected engine failures and the associated cost burdens. With respect to operating cost, airlines standardly report Cost Per Available Seat Mile (CASM) and Cost Per Block Hour (CBH). EHM services that prevent operational disruptions can help airlines reduce these unit-cost metrics, whose scrutiny by industry analysts affects investment guidance, stock performance, and overall business outlook. In this study, the value of FFDA services to airlines is investigated on the International Aero Engines V2500, a mature engine with customers' operational histories well-documented. Using a Poisson distribution to model the occurrence of six operational disruption types (Inflight Shutdown, Aircraft-On-Ground, Aborted Takeoff, Air Turn-Back, Ground Turn-Back, and Delay/Cancellation), the cost savings potential is quantified as a function of events avoided by a hypothetical FFDA service. Airline Form 41 financial data from the Bureau of Transportation Statistics is then used to estimate the magnitude of savings on CASM and CBH retroactively for 2012-16. 
Results show that unit cost reductions of 0.5% to 1.5% are possible through engine event avoidance, representing savings of up to $104M annually, but outcomes are highly dependent on assumptions about the cost of operational disruptions for each individual carrier. Overall, a baseline model and procedure are developed for valuing FFDA and associated EHM services. Further collaboration between airlines and Pratt &amp; Whitney on data availability and accuracy will help refine this model, which is the first to bridge publicly available airline costs with engine history data, helping stakeholders transition to an eFAST™ ecosystem that promises greater operational efficiency and safety.
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fifty Million Dollar Piece of Dirt: Somerville as a Case Study in Development</title>
<link href="https://hdl.handle.net/1721.1/157055" rel="alternate"/>
<author>
<name>Aizman, Asya</name>
</author>
<id>https://hdl.handle.net/1721.1/157055</id>
<updated>2024-09-27T03:51:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Fifty Million Dollar Piece of Dirt: Somerville as a Case Study in Development
Aizman, Asya
In May 2023, the City of Somerville achieved the highest S&amp;P Global Ratings credit rating, AAA. The accompanying report, citing one gentrifying neighborhood as a “notable contributor to increased market value,” signaled the city’s “attractiveness” to potential investors by promising low interest rates on local real estate development projects. But while the city increasingly appeared to be a sure bet for investors, life became more strenuous for residents, with steep and climbing rents, failing infrastructure, and fewer reasons to stay in a changing city that they no longer recognized. This is a case study of twenty years in Somerville real estate development, spanning 2004 to 2024. Through interviews with residents, activists, and senior city officials, I present a story of a city attempting to reconcile its progressive values with the forces of neoliberalism, which it seems unable, and unwilling, to stop.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric nonlinearities in guyed towers</title>
<link href="https://hdl.handle.net/1721.1/157046" rel="alternate"/>
<author>
<name>McClure, Ghyslaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/157046</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Geometric nonlinearities in guyed towers
McClure, Ghyslaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1984; Vita.; Bibliography: leaves 110-114.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing United States Energy Poverty Policy: Regulatory Design Alternatives and Resource Allocation</title>
<link href="https://hdl.handle.net/1721.1/157037" rel="alternate"/>
<author>
<name>Heller, Peter J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157037</id>
<updated>2024-09-25T04:05:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessing United States Energy Poverty Policy: Regulatory Design Alternatives and Resource Allocation
Heller, Peter J.
Guaranteeing sufficient and affordable access to energy services is increasingly critical as climate change continues to worsen, energy costs increase due to the need to meet decarbonization goals, and general inequality among citizens persists. To ensure the affordability of energy services, in this thesis, I analyze the design of policies and programs addressing energy poverty according to the four strategy decisions that I argue must be made during their ideation: assistance, targeting, funding, and governance. I focus on the strategies designed and implemented in the US and the EU and discuss the benefits and disadvantages of the different approaches followed in both contexts. Based on this comparative analysis, I find there are changes to US federal policy design that should be implemented to better serve households living in energy poverty. Specifically, current allocations to states under the US Low Income Home Energy Assistance Program (LIHEAP) have been nearly static since 1984, while the distribution of energy poverty is dynamic in location and time. To improve the allocation of federal resources, I develop a novel machine learning approach based on sociodemographic and geographical information to estimate energy burden in each US census tract for 2015 and 2020. This analysis reveals that average household energy burdens increased and the range of households experiencing energy poverty broadened. To improve the targeting strategy of LIHEAP, I design an optimized allocation structure that illustrates a shift in funding from northern states to the southern US. To better match household assistance needs, this analysis urges policymakers to revise the distribution of resources to reflect where concentrations of energy poverty exist in the US.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NBA Sleep Tracking Data Imputation</title>
<link href="https://hdl.handle.net/1721.1/157036" rel="alternate"/>
<author>
<name>Licht, Joseph D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157036</id>
<updated>2024-09-25T03:34:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">NBA Sleep Tracking Data Imputation
Licht, Joseph D.
This thesis investigates imputation methods for nights of missing sleep wearable data from NBA Academy athletes. Sparsity in sleep tracking data arises as a result of behavioral non-compliance or device malfunction, hindering the NBA Academy's ability to provide actionable insights that improve player sleep, a crucial component for player development. Motivated by existing work on time series data imputation, four main techniques are evaluated: K-Nearest Neighbors Regression, Linear Interpolation, Linear Regression, and Quadratic Regression. Each technique is applied and evaluated on key sleep metrics such as sleep duration, rMSSD (Root Mean Square of the Successive Differences between Heartbeats), and average heart rate. Results indicate that K-Nearest Neighbors Regression and Linear Interpolation, which have access to both past and future data (offline imputation), are the best-performing sleep imputation methods. Furthermore, this thesis utilizes the NBA Academy's shooting and jumping datasets in conjunction with the sleep dataset to explore the relationship between sleep and athletic performance, finding a generally weak correlation between sleep and athletic performance data, regardless of the time lag. This research has applications in all areas of sport and performance as well as in domains where data sparsity is problematic.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Farm-Scale Water Storage in Morocco: Low-Carbon Design with Parametric FEA Optimization</title>
<link href="https://hdl.handle.net/1721.1/157035" rel="alternate"/>
<author>
<name>Trézarieu, Raphaël</name>
</author>
<id>https://hdl.handle.net/1721.1/157035</id>
<updated>2024-09-25T04:03:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Farm-Scale Water Storage in Morocco: Low-Carbon Design with Parametric FEA Optimization
Trézarieu, Raphaël
Morocco faces increasing water scarcity with an anticipated decline in rainfall. Rising temperatures have resulted in drier and denser soil, causing water to be trapped on the surface and evaporate. One solution is to shift water management from large-scale to farm-scale. Underground water reservoirs allow the catchment of sparse rainfall events and the resultant overland flows before their evaporation. This research develops a methodology to design such rectangular reinforced concrete water reservoirs using a parametric approach in Python coupled with Finite-Element Analysis (FEA) software. The aim is to offer designs that are both low in embodied carbon and affordable for an individual farmer to build. The first method section is used to identify a small region of the design space containing the Pareto front before running FEA on a limited set of geometries in the second section. In the first section, the global shape of the reservoir and the local structural elements are simultaneously designed using analytical expressions of the Eurocodes on multi-dimensional arrays. One key added value of the method lies in the framework developed to handle numerous arrays of different dimensions, while monitoring the indices of each combination of design variables.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Differential Programmable Gain Chiplet for Integrated Data Acquisition Systems</title>
<link href="https://hdl.handle.net/1721.1/157034" rel="alternate"/>
<author>
<name>Liu, Monica</name>
</author>
<id>https://hdl.handle.net/1721.1/157034</id>
<updated>2024-09-25T03:35:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Fully Differential Programmable Gain Chiplet for Integrated Data Acquisition Systems
Liu, Monica
Chiplets have risen in popularity since their intermediate level of chip integration allows for high performance, low cost, and greater flexibility. There are currently programmable gain instrumentation amplifier chips on the market, which are widely used in industrial and instrumentation data acquisition systems. However, with built-in operational and fully differential amplifiers, these products cannot be easily upgraded as new and improved amplifiers are released to the market. To address this issue, this thesis proposes the design of a programmable gain chiplet that will offer the desired flexibility in changing a system’s gain while adding the ability to interface with various amplifiers without sacrificing significant performance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of Gradient Flow with Contrastive Learning</title>
<link href="https://hdl.handle.net/1721.1/157033" rel="alternate"/>
<author>
<name>Tepe, Cem</name>
</author>
<id>https://hdl.handle.net/1721.1/157033</id>
<updated>2024-09-25T04:08:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dynamics of Gradient Flow with Contrastive Learning
Tepe, Cem
Contrastive learning (CL), in different forms, has been shown to learn discriminatory representations for downstream tasks without the need for human labeling. In the representation space learnt via CL, each class collapses to a distinct vertex of a simplex on a hypersphere during training. This property, also seen in other types of learning tasks, might explain why CL works as well as it does. Having class collapse on the test distribution, which determines how well the model generalizes to new samples and new classes, is tied to class collapse on the training distribution under certain conditions as studied by Galanti et al. (2022). In the case of CL, minimizing the contrastive loss has been shown to lead to collapse during training by Graf et al. (2021). In a recent study, Xue et al. (2023) show that minimizing the contrastive loss is not enough to observe class collapse in the representation space for a single layer linear model and that we need minimum norm minimizers for the collapse to happen. However, their results don't explain how class collapse can occur without adding an explicit bias. The implicit bias of gradient descent is a likely candidate to explain this phenomenon. Here, we investigate the gradient flow of the spectral contrastive loss and give a theoretical description of the learning dynamics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Material Recovery Potential from Solar Photovoltaics: Predictive Modeling and Characterization to Advance the Circular Economy</title>
<link href="https://hdl.handle.net/1721.1/157031" rel="alternate"/>
<author>
<name>Bakker, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/157031</id>
<updated>2024-11-15T20:29:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Material Recovery Potential from Solar Photovoltaics: Predictive Modeling and Characterization to Advance the Circular Economy
Bakker, Nicole
In the next two decades, an exponentially growing quantity of waste will be generated as solar panels reach their end-of-life. Meanwhile, demand for new solar capacity will increase the value of key raw materials, underscoring the importance of recycling and movement toward a “circular economy”. However, uncertainties over the quantity and the exact material composition of solar panel waste hamper investments by recyclers, manufacturers, and governments. In this study, I construct a Material Flow Analysis model to forecast the global quantity of recoverable materials through 2100, informed by an experimental characterization of representative solar panels from the 1930s to 2020s. To account for potential changes in future demand, I develop two distinct scenarios: one explores the growing electricity demand from artificial intelligence use (‘Artificial Intelligence Boom’), while the other features renewable hydrogen production for steelmaking, shipping and the chemical industry (‘Green Hydrogen Takes Off’). The combined model predicts a lower material demand for silicon than previously anticipated in the base case, with a cumulative installed solar PV capacity of 50 TW and a waste volume of 3,600 megatonnes by 2100. This will require 45 megatonnes of solar-grade silicon by 2100, while 18 megatonnes could theoretically be obtained from recovered material. Achieving a circular economy for silicon is possible by the mid-2040s, but will require recovery rates above 70% and continued improvements in material efficiency as observed in the retrospective analysis. Recovery would suffice for all silicon demand through the mid-2060s, but not through 2100, because the demand for new solar panels and replacements outpaces secondary supply. Of specific concern for material recovery is the material composition: results from characterization indicate the presence of toxic materials, including lead, and scarce elements in solar cells.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meat Me for Supper? Envisioning the Future of Protein Food</title>
<link href="https://hdl.handle.net/1721.1/157030" rel="alternate"/>
<author>
<name>Maynard, Christopher Coleman</name>
</author>
<id>https://hdl.handle.net/1721.1/157030</id>
<updated>2024-09-25T04:02:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Meat Me for Supper? Envisioning the Future of Protein Food
Maynard, Christopher Coleman
This report investigates future challenges associated with protein food and explores two proposed mitigation strategies for overcoming them: dietary change and cultivated meat. Utilizing IMPACT, this report assesses the food security dimensions of availability and economic access for protein food relative to the EAT-Lancet recommendations, projected to 2050, under various shared socioeconomic pathways. This work reveals a near universal over-supply of red meat as well as an under-supply in plant protein across UN member states, even as animal sources of protein far exceed their plant counterparts on a price per kilocalorie basis. Additionally, this report conducts a high-level SWOT analysis of key issues in cultivated meat, finding that the technology platform could deliver meaningful environmental and health benefits, but that, without overcoming important technical and political barriers, it will remain unavailable and inaccessible for the foreseeable future. Together, these findings offer insights for food and agricultural policymakers interested in planning and preparing for protein-related issues in the next quarter-century. This report concludes with policy recommendations, intended primarily for the United States.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NeuralMOVES: Extracting and Learning Surrogates for&#13;
Diverse Vehicle Emission Models</title>
<link href="https://hdl.handle.net/1721.1/157029" rel="alternate"/>
<author>
<name>Ramirez Sanchez, Edgar</name>
</author>
<id>https://hdl.handle.net/1721.1/157029</id>
<updated>2024-09-25T03:56:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">NeuralMOVES: Extracting and Learning Surrogates for&#13;
Diverse Vehicle Emission Models
Ramirez Sanchez, Edgar
Technological advancements and interventions in the transportation sector play a crucial role in addressing climate change, given its major contribution to greenhouse gas emissions. The industry actively explores electrification, automation, and Intelligent Infrastructure to mitigate emissions. However, the successful design and implementation of these solutions require accurate and representative emission models. The Motor Vehicle Emission Simulator (MOVES) serves as the gold standard emission software provided by the Environmental Protection Agency (EPA). Despite its prominence, using MOVES presents challenges, including a steep learning curve and technical complexities. This makes it cumbersome for macroscopic analysis and unsuitable for microscopic analyses like eco-driving, which demands emissions estimation for individual steps. To address these issues, we present a comprehensive family of high-performance and lightweight CO₂ emission models devised through reverse engineering MOVES and surrogate learning. Our models show a promising 6% end-to-end error relative to MOVES, exhibit significant differences from alternative reduced-order models, and offer improved precision. The implications of our work are twofold: our models simplify GHG emission evaluation in transportation-related analyses by providing a faster, programmatic alternative to MOVES and improve control-based approaches by offering microscopic and environment feature-rich models compared to alternative models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation into Practical Aluminum Scrap for Emergency Power Fuel in Disaster Response Situations</title>
<link href="https://hdl.handle.net/1721.1/157028" rel="alternate"/>
<author>
<name>Blanks, Lauren J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157028</id>
<updated>2024-09-25T04:07:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Investigation into Practical Aluminum Scrap for Emergency Power Fuel in Disaster Response Situations
Blanks, Lauren J.
As natural disasters become more frequent and severe, pitfalls of emergency logistics are exacerbated. Protracted time between the disaster and the restoration of critical infrastructure, like the power grid, can extend beyond hours or days. In the meantime, communities are left without critical resources like electricity. To address this gap, this research seeks to investigate the possibility of a system that would leverage the debris fields of a disaster to a community's advantage. Building on MIT researchers' activation of high purity aluminum to produce heat and hydrogen in a reaction with water, aluminum scrap from the field could be used to generate hydrogen for fuel cell power systems. Therefore, practical aluminum scrap, specifically the used beverage can, was investigated for its ability to react efficiently and produce hydrogen under the constraints of expeditionary equipment and techniques. Moreover, a preliminary characterization of the reaction's gas output informed the potential for fuel cell contamination. Finally, the proposed system's feasibility within the disaster policy framework is discussed. Together, these findings underscore the potential to harness aluminum scrap as a post-disaster energy source, encouraging further research.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Documentation as a Tool for Algorithmic Accountability</title>
<link href="https://hdl.handle.net/1721.1/157026" rel="alternate"/>
<author>
<name>Curtis, Taylor Lynn</name>
</author>
<id>https://hdl.handle.net/1721.1/157026</id>
<updated>2024-09-25T03:16:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Documentation as a Tool for Algorithmic Accountability
Curtis, Taylor Lynn
This thesis argues that civil liability should rest on the deployer's understanding of system behaviour, and that documentation is the necessary tool to accomplish this goal. This work begins by establishing the “hole” in current approaches to AI risk regulation: the lack of a civil liability regime. It also highlights that civil liability is an already existing and effective regulatory tool that can be applied to AI. The rest of this thesis develops this argument by looking at what is necessary for such a framework to exist. It argues that an understanding of system behaviour is essential and achievable through documentation. It is divided into two substantive chapters. Firstly, Chapter 2 outlines how system behaviour can inform policy through documentation, linking the necessity of documentation to liability and proposing a concrete liability scheme based on documenting system understanding. Secondly, Chapter 3 discusses how documentation can alter a person's understanding of system behaviour, presenting a user study that demonstrates how system understanding can be achieved through documentation and structured data interaction. It argues that testing and system understanding are not insurmountable challenges and that by engaging in a relatively simple process, AI deployers can better understand the behaviour of their models. Overall, this thesis provides a methodical guide to understanding AI system behaviour and establishes a new pathway for effective regulation, arguing for the understanding of system behaviour and documentation at deployment as the path forward to achieve civil liability in AI.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-Augmented Interface for Incremental App Development in MIT App Inventor</title>
<link href="https://hdl.handle.net/1721.1/157025" rel="alternate"/>
<author>
<name>Granquist, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/157025</id>
<updated>2024-09-25T03:43:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AI-Augmented Interface for Incremental App Development in MIT App Inventor
Granquist, Ashley
The recent revolutionary advancements in Artificial Intelligence (AI) have presented immense opportunities and challenges in computer science education. This thesis presents the development of an AI-powered tool built on top of MIT App Inventor to help students incrementally design mobile applications. The tool allows students to describe desired changes to their MIT App Inventor mobile applications in natural language and have those changes be implemented automatically. Students can alternate between manually editing their app and using this tool, enabling them to collaborate with AI and incrementally develop apps with a degree of assistance from AI that meets their needs and is appropriate for their skill level and workflow preferences. This thesis also explores the benefits and detriments of such a tool, as well as observations and lessons learned from studying the ways students interact with the tool during a pilot study.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resiliency Oriented Scenario Generation Framework for Natural Gas Infrastructure</title>
<link href="https://hdl.handle.net/1721.1/157024" rel="alternate"/>
<author>
<name>Lahogue, Malo</name>
</author>
<id>https://hdl.handle.net/1721.1/157024</id>
<updated>2024-09-25T03:06:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Resiliency Oriented Scenario Generation Framework for Natural Gas Infrastructure
Lahogue, Malo
Traditionally, natural gas's (NG) impact on power supply has been studied from a reliability perspective, focusing on frequent and low-impact events. Furthermore, power-NG interdependence has been considered at a local scale, with few possibilities for extension to future climate impacts. Our work contributes to a framework for scenario-based resilience quantification of regional power systems under power-NG interdependencies. Specifically, we develop a scenario generation approach to model disruptions in the intra-regional transmission infrastructure as well as supply restrictions due to contingencies in inter-regional NG supply chains. To account for the interregional interdependencies through the import capacity of NG into the regional system, we implement a Long Short-Term Memory (LSTM) model that predicts NG import capacity probability density based on weather conditions along transregional supply pipelines. Our ML model does not require detailed modeling of gas extraction rates and flows along pipelines since such information is not readily available. Furthermore, we develop a sampling procedure to capture low-probability but potentially severe disruption scenarios within the regional transmission infrastructure. To compute the corresponding probabilities, we utilize a physically-based structural reliability model for pipelines. &#13;
 &#13;
Crucially, by sampling the scenarios first and then estimating the corresponding probabilities, we account for low-probability “rare” events that can negatively impact the reliability of power supply. The resulting scenario set enables more precise quantification of power system resilience to correlated transmission and supply disruptions in the NG infrastructure. Since we utilize weather data to forecast NG import capacities as well as compute pipeline disruption probabilities, our work is well-suited for the integration of future climate projections in the risk-sensitive planning and resilient operations of power-NG systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contractor Learning and Home Energy Efficiency in Heat Pump Installations</title>
<link href="https://hdl.handle.net/1721.1/157022" rel="alternate"/>
<author>
<name>Ontiveros, Johnattan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/157022</id>
<updated>2024-09-25T03:37:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Contractor Learning and Home Energy Efficiency in Heat Pump Installations
Ontiveros, Johnattan H.
The displacement of fossil-fuel based heating is essential for achieving decarbonization in the building sector, which represents about a third of national emissions in the United States. Electric heat pumps are the primary technology needed to do so, but widespread adoption is hindered by a variety of factors including higher upfront costs and a shortage of experienced labor to fulfill installations. This work examines the role of learning on the cost and size of heat pump installations throughout the Massachusetts Clean Energy Center (MassCEC) rebate program. We find that as contractors gain experience, heating systems are downsized at the cost of fewer hours of displaced fossil-fuel based heating. This learning impact is strongest for homes with a natural gas backup heater, which is the cheapest source of heating in Massachusetts, followed by electric heat pump heating. We then analyze the structure of the MassCEC rebate and its potential influence on the benefits of the program.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Community-Driven Determination of Values for Language Models</title>
<link href="https://hdl.handle.net/1721.1/157021" rel="alternate"/>
<author>
<name>Raman, Deepika</name>
</author>
<id>https://hdl.handle.net/1721.1/157021</id>
<updated>2024-09-25T03:02:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Empowering Community-Driven Determination of Values for Language Models
Raman, Deepika
Emerging technologies like Artificial Intelligence and Large Language Models are often developed in Western contexts and carry implicit values, from developer choices or underlying training data, that are not adequately representative of the diverse contexts in which they are deployed. The resultant misalignment from the lack of engagement with non-Eurocentric value paradigms results in inadequate and potentially harmful outcomes that impact these unconsidered communities. Codifying fundamentally subjective human values therefore necessitates eliciting these nuances through the inclusion and involvement of these very communities.&#13;
&#13;
This thesis argues that participants’ lack of familiarity with new technologies like Artificial Intelligence impacts their engagement and contribution to participatory processes of AI development. This thesis also helps demonstrate how grounded theory approaches can be leveraged to contextualize awareness-building efforts that can potentially empower community participation by addressing such familiarity gaps.&#13;
&#13;
This two-fold objective of (i) eliciting community-relevant attributes for language model alignment (ii) through the necessary familiarization of the technology in question is demonstrated through the means of sample case studies. A grounded participatory process, CALMA (Community-aligned Axes for Language Model Alignment), is designed and evaluated through these cases to illustrate this contextualized alignment exercise. Lessons from this comparative case study are then extended to explore avenues for communities and institutions to adopt similar techniques that center the voices of the final users.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Energy Conservation: Low-Cost Interventions for Commercial and Residential Settings</title>
<link href="https://hdl.handle.net/1721.1/157020" rel="alternate"/>
<author>
<name>Ha, Lan L.</name>
</author>
<id>https://hdl.handle.net/1721.1/157020</id>
<updated>2024-09-25T03:57:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Empowering Energy Conservation: Low-Cost Interventions for Commercial and Residential Settings
Ha, Lan L.
This thesis aims to investigate the effectiveness of low-cost interventions in promoting energy conservation in commercial and residential environments. The first chapter employs social norms to design and analyze three behavioral change programs in a large biopharmaceutical company, with a focus on reducing electricity consumption and plastic waste. The second chapter evaluates the effectiveness of a new behavioral initiative that aims to reduce residential electric and gas consumption. We employ econometric and machine learning techniques to measure average and heterogeneous treatment effects, as well as to identify disparities in households with the highest versus lowest reductions. Covering the process from design to evaluation, these chapters collectively offer a holistic perspective on the application of low-cost behavioral nudges in both workplace and residential energy usage. The implications drawn from our findings hold significant relevance for corporations, utilities, households, policymakers, and researchers alike, offering invaluable insights in promoting sustainable practices in both the workplace and the home.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Structure of the Registry Hall at Ellis Island</title>
<link href="https://hdl.handle.net/1721.1/157019" rel="alternate"/>
<author>
<name>Wilson, Ruth Hodin</name>
</author>
<id>https://hdl.handle.net/1721.1/157019</id>
<updated>2024-09-25T03:56:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Structure of the Registry Hall at Ellis Island
Wilson, Ruth Hodin
This thesis presents the historical and structural analysis of the Guastavino barrel vault at the Registry Hall on Ellis Island. The Guastavino Construction Company's innovative tile structures from the late 19th and early 20th centuries, characterized by their efficiency in material use and formwork, are not fully understood by many engineers, especially in terms of their structural behavior as unreinforced masonry structures. The unique aspect of the Registry Hall vault is its construction below a steel truss framed ceiling system, a configuration that has not been previously studied.&#13;
&#13;
The primary objective of this study is to provide structural engineers with techniques for analyzing an unreinforced masonry structure in conjunction with a steel frame. Additionally, it aims to provide historical context by exploring how the Registry Hall structure fits into the history of the Guastavino Company. The structural behavior of the system is analyzed through three separate cases:&#13;
&#13;
1. Graphical analysis for the vault alone (Case 1)&#13;
2. Finite element analysis for the truss carrying the entire system (Case 2)&#13;
3. Analysis of the combined system (Case 3)&#13;
&#13;
Case 1 demonstrates the vault is stable on its own and the thrust forces are resolved in the columns. Case 2 demonstrates the truss has the capacity to support all loads, including the weight of the vault. Case 3 presents a third solution where the truss carries half the weight of the vault, indicating the two systems can work together effectively. &#13;
&#13;
This study offers three structural solutions for the complex ceiling at Registry Hall, demonstrating that there are infinite solutions for Guastavino structures. This improved understanding of a Guastavino barrel vault's structural behavior not only aids in evaluating the current state of Registry Hall, but also lays a foundation for analyzing historic masonry structures that incorporate a steel system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overturning of No-Tension Towers</title>
<link href="https://hdl.handle.net/1721.1/157018" rel="alternate"/>
<author>
<name>Moir, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/157018</id>
<updated>2024-09-25T03:37:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Overturning of No-Tension Towers
Moir, Katherine
This study investigates the overturning behavior of leaning masonry towers on a rigid foundation. Unreinforced masonry is assumed to be incapable of withstanding tension, thus anticipating a progressive fracturing to occur outside the compressive zone of masonry towers as they incline under the force of self-weight alone. A theoretical model for the analysis of rectangular towers is extended to cylindrical towers, where overturning is assumed to occur when the fracture reaches through the entire width of the tower. The results of the theoretical model offer an approximate prediction for the critical angle of inclination that may be reached by a leaning no-tension cylindrical tower of variable slenderness and hollowness. A comparison of the predictions for each of the two tower geometries shows that the predicted critical angles of overturning are very close, while the cylinder is likely to begin cracking at lower inclinations compared to rectangular towers. The theoretical predictions for both rectangular and cylindrical towers are validated experimentally by tilting masonry model towers until failure. The experimental results are found to have reasonable agreement with the predictions, though overturning occurs earlier than predicted in all cases, which is attributed to imperfections in the models and scaling effects. As such, the theoretical predictions are unconservative for the critical angle of overturning of the models in the experiment. Furthermore, two case studies are conducted for existing leaning masonry towers in Italy, where theoretical predictions for their critical angles of overturning are put forth. The results of the case studies indicate that the Garisenda tower in Bologna is relatively close to its theoretical critical inclination, while the Leaning Tower of Pisa is not close. Both towers are found to be very close to their predicted angle of first cracking. 
However, the assumption of a rigid foundation does not account for the possibility of soil failure which remains a risk for leaning towers on compressible soils. Overall, the research guides further understanding of the failure conditions of masonry towers, which is useful in assessing their safety and preventing catastrophic collapses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tree-based Data Replay for More Efficient LLM Continual Learning</title>
<link href="https://hdl.handle.net/1721.1/157017" rel="alternate"/>
<author>
<name>Bailey, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/157017</id>
<updated>2024-09-25T03:58:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tree-based Data Replay for More Efficient LLM Continual Learning
Bailey, Brian
As Large Language Models (LLMs) gain popularity, they face a crucial challenge: effectively updating their knowledge bases with new data while retaining knowledge of prior information. This challenge is compounded by the considerable computational resources and time required to do so. This problem has been previously addressed using multiple approaches, including data replay, Elastic Weight Consolidation (EWC), and others. This study introduces an evolutionary tree-based data replay method designed to enhance the efficiency of LLMs’ continual training. It leverages the evolutionary relationships among domain-specific data to inform the replay strategy, selectively excluding similar data from the training of current subdomains to optimize efficiency. Initial experiments identified Mistral-7B as the appropriate model for this analysis. Subsequent tests assessed its performance under different data replay configurations, focusing on perplexity as the primary performance measure. The results indicate that focused data replay maintains model performance and enhances training efficiency. Models trained under restrictive replay conditions (excluding data from parent nodes) achieved perplexity scores within 1.5% of the baseline and reduced training time by up to 20%. Moreover, an ablation study established that a minimum replay ratio of 0.4:1 is essential to keep performance within 8.2% of the baseline. The findings suggest significant potential for structured data replay in improving continual learning processes for LLMs. Future research should explore data selection based on similarity metrics or automatic data categorization to enhance scalability and applicability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimally invasive neuromodulation using mechanically-sensitive ion channels and magnetically-actuated nanotransducers</title>
<link href="https://hdl.handle.net/1721.1/157016" rel="alternate"/>
<author>
<name>Malkin, Elian</name>
</author>
<id>https://hdl.handle.net/1721.1/157016</id>
<updated>2024-09-25T03:04:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Minimally invasive neuromodulation using mechanically-sensitive ion channels and magnetically-actuated nanotransducers
Malkin, Elian
Traditional methods of neuronal activity modulation, like pharmacological interventions and noninvasive techniques such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), have limitations in specificity and penetration depth. Deep brain stimulation (DBS), while effective, is invasive and carries surgical risks. This thesis advances the approach of utilizing magnetic nanoparticles as mechanical force transducers to achieve minimally invasive, wireless neuromodulation using magnetic fields as the stimulation modality. By leveraging magnetic fields and mechanically sensitive ion channels, this method aims to provide precise neuronal activation of deep neural circuits without surgery. We describe the molecular biology behind conferring mechanosensation to neurons, the design of a membrane targeting mechanism via SNAP-tags expressed on neuronal membranes, and the observed neuromodulatory effects for a gamut of mechanoreceptors and stimulation conditions. Calcium imaging results demonstrate that this method of nanotransducer targeting can elicit neuronal responses at 40 mT even via endogenous ion channels, and that greater amplitudes of response can be achieved through mechanosensitive ion channel expression and increased stimulation strength. We also develop data analysis code that is highly automated and employs advanced curve-fitting techniques to isolate the calcium imaging signal from background noise and fluorescence decay. The findings described in this thesis suggest that minimally invasive mechanical neuromodulation can offer a safe and precise alternative to DBS for both clinical and research applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New Parallel Algorithms for Planarity Testing</title>
<link href="https://hdl.handle.net/1721.1/157015" rel="alternate"/>
<author>
<name>Hu, Amelia Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157015</id>
<updated>2024-09-25T03:56:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">New Parallel Algorithms for Planarity Testing
Hu, Amelia Y.
Planar graphs (defined as graphs in which no edges cross) have special properties and are often used in applications such as circuit design or transportation networks. While many linear work implementations of planarity testing algorithms exist, to the best of our knowledge there is no practical implementation of a parallel planarity testing algorithm. In this thesis, we describe and analyze two new parallel algorithms for planarity testing, both derived from the Boyer-Myrvold algorithm. First, we present a divide-and-conquer approach, where the graph's edges are evenly distributed among worker threads. Each thread independently executes the sequential Boyer-Myrvold algorithm on its designated subgraph. Then, pairs of subgraphs are merged by embedding the edges between subgraphs with modified Boyer-Myrvold methods. The primary challenge of the divide-and-conquer approach is the merge step, as determining the relative positions of subgraphs is a complicated process. Next, we describe the design and implementation of a new and simpler parallel algorithm. This algorithm modifies the Boyer-Myrvold algorithm by processing vertices in layers from the bottom up (rather than sequentially by reverse DFI order). The computation in each layer is parallelized. On planar graphs, this algorithm achieves a 2.4 to 2.7 times speedup over the sequential algorithm when run on 16 cores. On non-planar graphs, the performance gain is even more significant, with speedups ranging from 9 to 22 times on 16 cores.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Data Heterogeneity on Distributed Linear System Solvers</title>
<link href="https://hdl.handle.net/1721.1/157014" rel="alternate"/>
<author>
<name>Velasevic, Boris</name>
</author>
<id>https://hdl.handle.net/1721.1/157014</id>
<updated>2024-12-24T14:40:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effects of Data Heterogeneity on Distributed Linear System Solvers
Velasevic, Boris
We focus on the fundamental problem of solving a system of linear equations. In particular, we are interested in distributed linear system solvers, where one taskmaster coordinates any number of workers to attain a solution. There are two predominant and fundamentally different ways of doing this: optimization-based and projection-based solvers. Although there is extensive literature on both classes of algorithms, a rigorous analytical comparison of their performance is lacking. Consequently, there is no concrete understanding of why numerical experiments show that projection-based solvers tend to perform better in many real and synthetic scenarios. In this work, we develop a framework for such analysis, and we use that framework to investigate the comparison of optimization-based and projection-based solvers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification</title>
<link href="https://hdl.handle.net/1721.1/157013" rel="alternate"/>
<author>
<name>Xu, Muhua</name>
</author>
<id>https://hdl.handle.net/1721.1/157013</id>
<updated>2024-09-25T03:56:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification
Xu, Muhua
Motivated by the significant progress in NLP prompt learning, there has recently been great research interest in adopting the prompting mechanism for graph machine learning. Despite the prior success of prompting methods applied in node-level and graph-level learning tasks, subgraph-level tasks are highly underexplored, and the potential of prompting remains unclear. This thesis fills this gap by exploring the prompting mechanism for subgraph classification, a much more challenging task as it requires understanding both global and local graph structures. In this work, we build upon state-of-the-art self-supervised graph learning models to develop a subgraph-specific prompting scheme, Membership Prompt (MPrompt), based on traditional graph neural networks (GNNs). Our proposed prompting scheme relies on node membership knowledge to help the GNN distinguish between border and local connections, which increases its expressive power while maintaining the prompt’s independence from any specific dataset or model architecture. Additionally, we present Subgraph Reconstructive Pretraining (SRP), which can provide MPrompt with better structural embeddings during pretraining. Experiments are conducted on both synthetic and real-world datasets, including protein function prediction and social network analysis. Our method demonstrates performance improvements in the few-shot experimental setting and maintains comparable performance in full-shot settings while requiring less computation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of a Transformer-Based Solid-State Relay</title>
<link href="https://hdl.handle.net/1721.1/157012" rel="alternate"/>
<author>
<name>Mondal, Neelambar</name>
</author>
<id>https://hdl.handle.net/1721.1/157012</id>
<updated>2024-09-25T04:04:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Analysis of a Transformer-Based Solid-State Relay
Mondal, Neelambar
Automatic Test Equipment (ATE) systems require relays to perform complex high-speed tests on semiconductor devices. However, existing relays all come up short in some aspect. Electromechanical reed relays have a limited lifetime and slow switching speeds, while solid-state photoMOS relays have high on-resistance and low bandwidth. This thesis presents the design, simulation, and analysis of a new solid-state relay tailored for ATE applications. We use Analog Devices’ iCoupler technology to design this relay, relying on on-chip transformers to provide reliable input-to-output isolation. In Cadence simulations, the iCoupler relay achieves a 100 mΩ on-resistance, a 7.5 µs turn-on time, and a 4.8 GHz output 3 dB bandwidth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Question-Answering over Distributed EHR Data</title>
<link href="https://hdl.handle.net/1721.1/157011" rel="alternate"/>
<author>
<name>Jiang, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/157011</id>
<updated>2024-09-25T03:02:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Clinical Question-Answering over Distributed EHR Data
Jiang, Emily
Electronic health records (EHRs) have become standard in US clinical practice. However, the distributed, dynamic, private, and jargon-dense nature of medical data is a barrier to harnessing Large Language Models (LLMs) for the domain. Retrieval-augmented generation (RAG), in which an LLM is provided with both the question and context returned by an external retriever, is a promising technique for addressing the unique qualities of clinical text. LLMs using RAG can answer questions about patient records without training on privacy-sensitive data; updated records can also be queried immediately without finetuning. By exposing the source documents that inform the model response, RAG enables greater physician interpretability as well as reduced hallucination, both of which are crucial for safe deployment in healthcare. This thesis presents FedRAG, a retrieval-augmented clinical question-answering (QA) system for clinicians to explore trends in patient data across distributed storage. We introduce a novel hierarchical design for federated document retrieval, in which leaf nodes perform local similarity search while non-leaf nodes route queries based on access policies and aggregate documents returned by their children. We also create a dataset on clinical trends over the MIMIC-IV database for the evaluation of QA systems on EHR data. FedRAG is implemented in Python as a federation of Flask servers using LangChain, the Qdrant vector database for retrieval, and GPT-3.5 Turbo for generation. We present a case study of three medical organizations, and find that the federation scheme results in no loss of quality against a centralized baseline. We explore the impact of resource accessibility among users with varying access permissions, observing that retrieval and generation quality degrade reasonably as document access is restricted. Finally, we evaluate performance in the key abilities required of RAG systems.
We conclude that despite remaining challenges in achieving high retrieval quality and noise robustness, FedRAG is effective at synthesizing clinical trends through information integration across EHR documents.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rethinking the Evaluation of Compositional Reasoning for Modern VLMs</title>
<link href="https://hdl.handle.net/1721.1/157010" rel="alternate"/>
<author>
<name>Huang, Irene Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157010</id>
<updated>2024-09-25T03:31:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Rethinking the Evaluation of Compositional Reasoning for Modern VLMs
Huang, Irene Y.
Recent advancements in modern Vision-Language Models (VLMs), comprising a visual encoder coupled with a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in Compositional Reasoning (CR). CR entails grasping the significance of attributes, relations, and word order. This prompts a crucial question: have VLMs effectively tackled the CR challenge? We conjecture that existing CR benchmarks may not adequately push the boundaries of modern VLMs due to their reliance on a negative text generation pipeline. Consequently, the negatives produced often deviate either as outliers from the natural language distribution learned by VLMs’ LLM decoders or as improbable within the corresponding image context. To redress these limitations, we propose a novel pipeline integrating GPT-4V alongside a suite of contemporary open-source VLMs. Through the application of in-context-learning and prompt engineering methodologies, our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions to establish a robust CR benchmark, which is subsequently validated manually. The meticulously curated dataset shows a noteworthy decrease in CR performance of up to 45% compared to preceding benchmarks, thereby reinstating the CR challenge even for state-of-the-art VLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Implementation of the U.S. Hydrogen Production Tax Credit</title>
<link href="https://hdl.handle.net/1721.1/157008" rel="alternate"/>
<author>
<name>Giovanniello, Michael A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157008</id>
<updated>2024-09-25T04:05:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling and Implementation of the U.S. Hydrogen Production Tax Credit
Giovanniello, Michael A.
Low-carbon hydrogen (H2) could contribute to achieving long-term climate goals by supporting the decarbonization of several hard-to-abate industries. The U.S. Inflation Reduction Act includes a tiered hydrogen production tax credit (PTC) awarded for producing H2 below certain emissions thresholds. One pathway for producing PTC-eligible H2 is water electrolysis supplied with low-carbon electricity. But assessing the systems-level emissions associated with electrolytic H2 is challenging, not only because instantaneous power flows from a particular producer cannot be directly associated with a particular user, but also because of the risk that electrolyzers might divert clean electricity away from the grid. Following the passage of the IRA, there has been a vigorous debate focusing primarily on the time-matching requirements — that is, the period over which electricity use must match production from contracted generators — for grid-connected H2 production to receive the PTC.&#13;
&#13;
Applying a macro-energy systems model to case studies of Texas and Florida, we show that divergent results in the literature, which presented a conundrum for regulators trying to pick between policy options, are explained by different interpretations of the proposed “additionality” requirement. Specifically, the emissions associated with H2 production under different “time-matching” requirements are conditional on how additionality is modeled. We further show that the interaction of these qualifying time-matching requirements with other energy system policies could reduce the merits of more stringent time-matching requirements. For instance, if a region has relatively high renewable portfolio standards (RPSs) to enable grid decarbonization, we show that less stringent (and therefore less costly) time-matching requirements are sufficient to avoid any increases in system-level emissions. &#13;
&#13;
Building on this analysis, we explore how uncertainty in inter-annual variable renewable energy (VRE) generation complicates the implementation of stringent PTC requirements. We confirm that a system design that accounts for inter-annual VRE uncertainty comes at a cost premium, a reality ignored by the existing literature. In addition, we show that inter-annual VRE uncertainty will necessitate the formation of markets for hourly electricity attribution certificates (EACs) to make up for inevitable shortfalls in contracted VRE electricity supply under an hourly time-matching requirement. &#13;
&#13;
We recommend that the Treasury adopt a phased and regionally differentiated approach to implementing the PTC: regions without RPS policies could transition to an hourly time-matching requirement in the mid-term (e.g., by 2030), whereas regions with sufficient RPS policies could continue with looser requirements. In addition to PTC implementation, these results are relevant to the broader field of Scope 2 emissions accounting for voluntary (e.g. corporate net-zero goals) and regulatory purposes. As more private enterprises, such as data center owners, pursue voluntary measures to reduce their electricity-related emissions, our work provides a foundation for further research into clean energy procurement standards (voluntary or mandated) that support power sector decarbonization.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tradeoffs Between Aboveground and Soil Carbon Accumulation Following Forestation</title>
<link href="https://hdl.handle.net/1721.1/157007" rel="alternate"/>
<author>
<name>Schug, Jennifer Lin</name>
</author>
<id>https://hdl.handle.net/1721.1/157007</id>
<updated>2024-09-25T03:54:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tradeoffs Between Aboveground and Soil Carbon Accumulation Following Forestation
Schug, Jennifer Lin
Recent decades have seen a rapid increase in global warming due to anthropogenic greenhouse gas emissions. One prevalent climate change mitigation strategy is tree planting, as trees sequester large amounts of carbon in their aboveground biomass. However, there is emerging evidence that under some conditions, soil carbon decreases following forestation, offsetting the carbon accumulated aboveground and rendering carbon sequestration efforts ineffective. The factors driving these changes in net ecosystem carbon are currently unknown. Here, we conducted a global meta-analysis on the factors affecting aboveground biomass versus soil carbon (SOC) accumulation following forestation in grasslands and croplands. We considered the effects of prior land use, regrowth strategy, mycorrhizal associations, and environmental factors on total ecosystem carbon and SOC accumulation over time. Results indicate that while there is a tradeoff between SOC and aboveground carbon accumulation, the loss of SOC does not negate the increase in aboveground carbon following forestation. Sites with low initial SOC before forest establishment accumulate more SOC than sites with high SOC, regardless of prior land use. Overall, forest stand age, prior land use, regrowth strategy, and mycorrhizal associations drive carbon accumulation over time and should be considered in the context of future forestation projects implemented for carbon sequestration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Segment Anything on the Edge</title>
<link href="https://hdl.handle.net/1721.1/157006" rel="alternate"/>
<author>
<name>Stiles, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/157006</id>
<updated>2024-09-25T03:02:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Efficient Segment Anything on the Edge
Stiles, Nicole
The Segment-Anything Model (SAM) is a vision foundation model facilitating promptable and zero-shot image segmentation.  SAM-based models have a wide range of applications including autonomous driving, medical image segmentation, VR, and data annotation.  However, SAM models are highly computationally intensive and lack a flexible prompting mechanism.  On an NVIDIA A100 GPU, SAM runs at 11 frames/second, missing the mark for real-time performance and preventing the usage of SAM on edge devices.  To tackle both the latency constraint and the prompt flexibility constraint, we introduce GazeSAM, a new real-time gaze-prompted image segmentation model.  GazeSAM uses face and gaze detection to determine the direction of a user's gaze, object detection to find candidate objects of interest, depth estimation to perform background detection, and image segmentation to generate masks.  The final output is a mask segmenting the object at the focus of the user's gaze.  By performing algorithmic optimizations, employing inference engines, and applying FP16 and INT8 quantization, we achieve a 24x speedup relative to the baseline FP32 PyTorch implementation.  GazeSAM runs at a speed of over 30 FPS, enabling real-time performance on an RTX 4070 GPU.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soil moisture-based drought monitoring using remote sensing over Africa</title>
<link href="https://hdl.handle.net/1721.1/157005" rel="alternate"/>
<author>
<name>Lu, Catherine S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157005</id>
<updated>2024-09-25T04:10:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Soil moisture-based drought monitoring using remote sensing over Africa
Lu, Catherine S.
Agricultural droughts, or persistent deficits in soil moisture, can have severe consequences on crop production and can result in economic crisis and widespread food insecurity. The impacts of drought are especially relevant in Africa, where agriculture is largely supported by rainfall. Currently, drought monitoring systems for Africa are not as prevalent on the continental scale and are limited in the number of in-situ observations for model validation, in contrast to developed regions. In this study, we use soil moisture data gathered from the Soil Moisture Active Passive (SMAP) mission with dates ranging from April 2015 to December 2023, in order to develop a drought monitoring system that incorporates seasonality and climatology. Monthly drought thresholds are developed based on percentiles of soil moisture found in previous literature, creating location-specific thresholds of drought for each month. This data was applied at the continental, regional, and country level to reconstruct historical records of drought throughout the SMAP time record (time series) and localities of drought intensities for a given time period (drought maps). Additionally, a methodology of exponential time filtering is explored to convert surface soil moisture from SMAP into root-zone soil moisture, which can be more relevant for agricultural production. The reconstructed historical drought results align with literature on drought events in regions of Africa (e.g. 2017-18 drought anomalies). For future events, this study could inform drought monitoring through remote sensing and allow for measures of drought response to improve overall food security.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-economic Analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants</title>
<link href="https://hdl.handle.net/1721.1/157003" rel="alternate"/>
<author>
<name>Araiinejad, Layla</name>
</author>
<id>https://hdl.handle.net/1721.1/157003</id>
<updated>2024-09-25T03:09:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Techno-economic Analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants
Araiinejad, Layla
This thesis presents a techno-economic analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants (FPPs), tailored to enhance the economic viability and scalability of FPPs in response to global energy challenges and climate change. Amidst a backdrop of substantial investments in fusion technology, totaling $6.2 billion to date, this study critically assesses the overnight capital costs of an FPP that hosts ARAI, a 350 MWe tokamak reactor based on the MIT ARC fusion concept. This research evaluates the economic viability of constructing an Nth-of-a-kind ARAI-FPP. The overnight capital costs for the ARAI-FPP are estimated to range between $8,800/kW and $22,200/kW, with this variation largely driven by differing regulatory and manufacturing assumptions. The overall cost breakdown is found to be similar to past and recent fusion literature, where the direct cost of fusion reactor equipment is the largest cost driver. The Levelized Cost of Electricity is estimated to be between $140/MWh and $550/MWh. The findings aim to deepen the understanding of absolute and relative cost drivers in fusion energy and suggest strategies to improve its economic feasibility. The analysis highlights the significant role of fabrication costs and regulatory frameworks in influencing cost dynamics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Force Dynamics of the Rat Lateral Gastrocnemius Muscle after Undergoing Sensory Protection</title>
<link href="https://hdl.handle.net/1721.1/157001" rel="alternate"/>
<author>
<name>Gutierrez Arango, Samantha</name>
</author>
<id>https://hdl.handle.net/1721.1/157001</id>
<updated>2024-09-25T03:35:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Force Dynamics of the Rat Lateral Gastrocnemius Muscle after Undergoing Sensory Protection
Gutierrez Arango, Samantha
The sensory protection procedure, involving the reinnervation of a motor-denervated muscle with a sensory nerve, has shown promise in preserving muscle function and structure. This thesis investigates the impact of sensory protection on the force dynamics and muscle architecture of the lateral gastrocnemius muscle in a rat animal model. Using a within-subjects experimental design, this preliminary study compared Sensory Protected and contralateral Intact muscles within a cohort of four rats. In situ ergometry experiments suggest that normalized Force-Velocity-Power (FVP) properties may be largely preserved after sensory protection, with small percent differences in normalized FVP curves between the Sensory Protected muscles and contralateral muscle controls. Key FVP parameters such as peak velocity and specific peak power exhibited higher percent differences for the Sensory Protected muscles, but lower percent differences in pennation angles and physiological cross-sectional area, suggesting that sensory reinnervation may influence muscle structure and fundamental force dynamics. Despite limitations, such as the small sample size, the study lays the groundwork for future research investigating the cellular and molecular mechanisms underlying the observed changes. The findings highlight the potential of Sensory Protected muscles as biological actuators in prosthetic devices, and suggest that sensory reinnervation may be a promising strategy to maintain or restore muscle function in individuals with motor impairment.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Translations: Designing Restorative Listening Experiences in the Age of Social Fragmentation</title>
<link href="https://hdl.handle.net/1721.1/156998" rel="alternate"/>
<author>
<name>Obeng-Marnu, Naana</name>
</author>
<id>https://hdl.handle.net/1721.1/156998</id>
<updated>2024-09-25T03:48:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Translations: Designing Restorative Listening Experiences in the Age of Social Fragmentation
Obeng-Marnu, Naana
This thesis builds on a body of sociotechnical research at the MIT Center for Constructive Communication that draws upon "ancient wisdoms" of dialogue and listening and harnesses the power of technology to inform the design of dialogue spaces that promote deep, meaningful, and authentic conversations. Our approach hinges on the belief that society functions best when we hear and understand each other, an outcome that our work strives to advance by exposing people to the personal stories of others in ways that connect rather than divide. I take inspiration from anthropological practices and recent Data Humanism and Activism epistemologies to derive a set of design considerations for restorative interfaces. These principles inform the development of Translations, an interactive experience that invites audiences to more deeply engage with a curated collection of stories surfaced during small group facilitated conversations. The design of this visual and auditory experience allows audiences to explore stories they may otherwise not hear through websites that center thematic summaries and high-level insight visualizations. The selected stories are curated using AI emotion analysis and sensemaking, which are leveraged to draw the user’s attention to moments of interest across conversations, such as moments of affirmation. The efficacy of this curation method in engendering empathy and emotional disruption, precursors to restorative listening, is evaluated, and the results from user tests of, and interviews about, the overarching interface are discussed. Ultimately, this thesis presents both a framework for automatic curation of audio narratives and an interactive interface to present these selected stories, both of which have wide-ranging applications in the media and civic space.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy-Efficient Real-Time Hardware Acceleration for Gaussian Fitting</title>
<link href="https://hdl.handle.net/1721.1/156997" rel="alternate"/>
<author>
<name>Wojtyna, Adrianna D.</name>
</author>
<id>https://hdl.handle.net/1721.1/156997</id>
<updated>2024-09-25T03:10:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Energy-Efficient Real-Time Hardware Acceleration for Gaussian Fitting
Wojtyna, Adrianna D.
Micro-robots play an important role in numerous tasks, including search and rescue, exploration, and navigation. A significant challenge to their deployment is their limited energy capacity, which constrains the computation such systems can complete. Specifically, 3D mapping algorithms contribute significantly to the compute power footprint as a result of repeated memory accesses. A promising approach involving Gaussian Mixture Models (GMMs), the Single-Pass Gaussian Fitting (SPGF) algorithm, enables real-time 3D mapping with minimal memory and energy requirements due to its single-pass processing of input data. To further reduce energy consumption, we propose an FPGA (Field Programmable Gate Array)-based hardware accelerator that performs Gaussian fitting based on the SPGF algorithm with 10.4× lower energy per image (based on post-implementation power analysis) compared to the original software implementation. By using fixed-point numerical representation and concurrent processing of data inputs, our proposed hardware accelerator, when operating at 100 MHz, is capable of processing depth images at an average rate of 303.09 frames per second (fps), a 7.97× improvement over the original software implementation of SPGF (32 fps). We also demonstrate 46.1× lower average FPGA resource utilization compared to a previously proposed hardware accelerator for GMMs. Our proposed design was demonstrated as part of the complete subsystem, allowing for visualization of the constructed map in real time. The design was shown to perform at 100 MHz in isolation and was verified with a 50 MHz subsystem on an AMD Virtex UltraScale+ VCU118 FPGA.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimation of County-Level Evapotranspiration and Irrigation using High-Resolution Planet Satellite Data</title>
<link href="https://hdl.handle.net/1721.1/156996" rel="alternate"/>
<author>
<name>Wickman, Sydney</name>
</author>
<id>https://hdl.handle.net/1721.1/156996</id>
<updated>2024-09-25T03:54:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Estimation of County-Level Evapotranspiration and Irrigation using High-Resolution Planet Satellite Data
Wickman, Sydney
Increased agricultural production has spurred the need for irrigated land in areas that may not be supported by surface water. Instead, groundwater is primarily used for irrigation in states such as Kansas to supplement the water needed for this land. The increase in groundwater use for irrigation may be contributing to areas of accelerating groundwater decline, and more precise tracking of irrigation should take place on a larger, regional scale. This would allow for more effective tracking of irrigation trends and their possible effects. This thesis tests the challenges and possibilities of applying the Backward-Averaged Iterative Two-Source Surface temperature and energy balance Solution (BAITSSS) model with high-resolution PlanetScope (Planet) satellite data to Cheyenne County, Kansas. The drop in reflectance observed in fields from Planet satellite data was used as a signal for the first irrigation event, and the model was run forward from that point. The results demonstrate that BAITSSS evapotranspiration (ET) is comparable to the OpenET model, though BAITSSS overall estimates higher ET in agricultural areas than OpenET. The irrigation results are underestimated, but many limiting factors could be adjusted with further consideration. More research should be conducted toward running the BAITSSS model efficiently and effectively over a larger region.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Smartwatch App for Automated Targeted Memory Reactivation</title>
<link href="https://hdl.handle.net/1721.1/156994" rel="alternate"/>
<author>
<name>Podrug, Anita</name>
</author>
<id>https://hdl.handle.net/1721.1/156994</id>
<updated>2024-09-25T03:48:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing a Smartwatch App for Automated Targeted Memory Reactivation
Podrug, Anita
Targeted Memory Reactivation (TMR) experiments have shown potential to enhance learning and memory by pairing sensory stimuli with specific memories during learning and reintroducing these stimuli during slow-wave sleep. This process aids memory consolidation, in which recent neural representations are reactivated and transferred to long-term storage. Traditionally, TMR has been limited to laboratory settings. For my thesis, I developed a TMR system usable at home and investigated its effectiveness on memory recall of a nature documentary, using vibration as a stimulation cue. I developed a machine-learning model that performs sleep stage classification from heart rate and motion data that can be collected from a smartwatch in real time. Using this model, the smartwatch was programmed to deliver TMR cues when participants entered stage N3 (slow-wave) sleep. This TMR system was found to improve recall 24 hours and 1 week after the initial learning, but the results were not statistically significant due to an insufficient amount of data; further studies would be required to confirm them. This advancement of at-home TMR can be extremely useful for further understanding sleep's role in memory and can provide a system the general public can use to improve their learning and memory. Additionally, the automated real-time sleep-stage classification model can enable more reliable, higher-quality experiments in a variety of future sleep studies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Extracting and Analyzing Political Content on TikTok</title>
<link href="https://hdl.handle.net/1721.1/156993" rel="alternate"/>
<author>
<name>Fadel, Marie Diane</name>
</author>
<id>https://hdl.handle.net/1721.1/156993</id>
<updated>2024-09-25T04:09:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Methods for Extracting and Analyzing Political Content on TikTok
Fadel, Marie Diane
In this thesis, I investigate the dynamics of political discourse on TikTok, with a focus on crafting a comprehensive methodology for extracting and analyzing political content related to the 2024 U.S. Presidential Election. This research utilizes a blend of advanced computational tools and crowd-sourced evaluations to delve into the mechanisms through which political influence is both exerted and perceived on the platform. For data collection, the study employed TikAPI, a tool designed for systematic scraping of TikTok videos, which targeted specific political hashtags to amass a substantial dataset. This dataset was analyzed using a variety of innovative methods, including snowball sampling to ensure a representative range of political engagement, and integration with Python to automate the data collection process. Additionally, I utilized Large Language Models (LLMs) to evaluate the relevance and persuasive impact of the content, and these machine-generated insights were then benchmarked against human judgments. Overall, the findings indicate a slight preference for Republican discourse on TikTok. Moreover, I demonstrate that OpenAI’s GPT can effectively classify videos by topic, although human input remains essential for more nuanced tasks such as stance detection and evaluation of persuasive effect. This exploration into the political landscape of TikTok represents one of the first of its kind, with the primary aim of this thesis being to develop a methodology that will support future research in this field.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Developmental Change in Ego-Motion Experience Across Infancy</title>
<link href="https://hdl.handle.net/1721.1/156992" rel="alternate"/>
<author>
<name>Fuchs, Ariel</name>
</author>
<id>https://hdl.handle.net/1721.1/156992</id>
<updated>2024-09-25T04:05:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring Developmental Change in Ego-Motion Experience Across Infancy
Fuchs, Ariel
Humans flexibly and intuitively use vision to plan and guide navigation through the local environment. How does this ability develop in infancy? One possibility is that the development of visual representations for navigation is driven by passive exposure to the visual statistics of scenes. Another possibility is that active navigation experience using vision to plan and guide locomotion is the driving factor. In order to distinguish between these two hypotheses, it is necessary to understand the nature of infants’ early visual scene experience itself. Surprisingly little prior work has characterized infants’ early experiences with ego-motion through scenes, before and after learning to locomote. We use ecological momentary assessments to quantify infants’ exposure to ego-motion through scenes, and how that changes with locomotor experience. We found that pre-crawling infants who have never independently navigated already experience significant passive visual exposure to forward-facing ego-motion through scenes. Nevertheless, this experience increases substantially with age and locomotor status.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contextual Predictability and Phonetic Reduction</title>
<link href="https://hdl.handle.net/1721.1/156991" rel="alternate"/>
<author>
<name>Martin, Kinan R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156991</id>
<updated>2024-09-25T03:29:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Contextual Predictability and Phonetic Reduction
Martin, Kinan R.
Phonetic reduction is a process that alters the acoustic quality of a sound, often a vowel or word, toward a perceptibly weaker or shorter state. Previous research suggests that the degree of reduction of a word is influenced by its contextual predictability. However, how the direction and size of the context govern phonetic reduction has not been thoroughly explored. The advancement of self-supervised language models provides a means to assign meaningful estimates of word predictability conditioned on different contexts. This paper explores the effect of contextual predictability on phonetic reduction using such models. We train instances of GPT-2 on different context directions (past, future, and bidirectional) and context sizes (bigram vs. sentence) to provide measures of conditional word predictability, then use linear regression to quantify their correlation with a measure of phonetic reduction (word duration). Our results provide evidence that the contextual probability of a word given the following context correlates with word duration more strongly than the past and bidirectional contexts for both context sizes, suggesting that phonetic reduction may be a reliable indicator of reduced cognitive load in a speaker’s planning of the rest of an utterance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generation, Detection, and Evaluation of Role-play based Jailbreak attacks in Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156989" rel="alternate"/>
<author>
<name>Johnson, Zachary D.</name>
</author>
<id>https://hdl.handle.net/1721.1/156989</id>
<updated>2024-09-25T04:01:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Generation, Detection, and Evaluation of Role-play based Jailbreak attacks in Large Language Models
Johnson, Zachary D.
While directly asking a Large Language Model (LLM) a harmful request (e.g., "Provide me instructions on how to build a bomb.") will most likely yield a refusal to comply due to ethical guidelines laid out by developers (e.g., OpenAI), users can trick the LLM into providing this information using a tactic called a role-play based jailbreak attack. This attack consists of instructing the LLM to take on the role of a fictional character that does not adhere to the model developer’s ethical guidelines and will comply with any request. Role-play based jailbreak attacks remain a critical safety issue and an open research question due to their success in getting an LLM to comply with harmful requests, as well as their ability to be generated without a formal technical background. Companies such as OpenAI employ manual tactics like red-teaming to enhance an LLM’s robustness against these attacks; however, these tactics may fail to defend against all role-play based jailbreak attacks due to their limited ability to anticipate unseen attacks. In this work, we aim to better understand the landscape of role-play based jailbreak attacks so that we can precisely detect attack attempts in the wild before they yield a harmful output from an LLM. Specifically, we focus on three main tasks: generating synthetic examples of role-play based jailbreak attack prompts; testing these role-play prompts on a target LLM to evaluate whether they successfully jailbreak it, and labeling our prompts accordingly; and training a robust detection model that can predict whether a role-play prompt will successfully jailbreak an LLM before it is fed any malicious requests. Through these processes, we learn the following, respectively. 1) Out-of-the-box models such as GPT-4 are effective at generating successful role-play jailbreak attack prompts when given just a few examples via few-shot prompting. 2) We can automatically classify LLM responses as jailbroken or not with high accuracy using statistical methods including Principal Component Analysis (PCA) and Support Vector Machines (SVMs). 3) Most classification architectures are unable to perform the complex task of accurately predicting whether a role-play prompt will successfully yield a jailbreak attack. By better understanding the nature of role-play based jailbreak attacks, we hope to contribute to the research area of jailbreak attack detection in LLMs so that they can be robustly defended against in the future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking Graph Transformers Toward Scalability for Large Graphs</title>
<link href="https://hdl.handle.net/1721.1/156988" rel="alternate"/>
<author>
<name>Lim, Katherine S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156988</id>
<updated>2024-09-25T03:05:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Benchmarking Graph Transformers Toward Scalability for Large Graphs
Lim, Katherine S.
Graph transformers (GTs) have gained popularity as an alternative to graph neural networks (GNNs) for deep learning on graph-structured data. In particular, the self-attention mechanism of GTs mitigates the fundamental limitations of over-squashing, over-smoothing, and limited expressiveness that GNNs face. Furthermore, like transformers used for natural language processing and computer vision, GTs have the potential to become foundation models that can be used for various downstream tasks. However, current GTs do not scale well to large graphs, due to computational cost. Here, we formulated a GT architecture as part of a larger scheme to build a GT made scalable through hierarchical attention and graph coarsening. Specifically, our goal was to optimize the GT building block of the scalable GT. By adding GraphGPS-inspired message-passing neural network (MPNN) layers to a modified version of the Spectral Attention Network (SAN) and performing hyperparameter tuning, we built a GT architecture that performs comparably to GraphGPS on the node classification task on the Cora and CiteSeer datasets. Compared to the modified version of SAN that we started with, our architecture is faster to train and evaluate, and also obtains higher node classification accuracies on the Cora and CiteSeer datasets. Our results demonstrate how message passing can effectively complement self-attention in GTs such as SAN to improve node classification performance. With further architectural improvement, we expect our model to serve as an effective building block for scalable GTs. Such scalable GTs may be used for node classification on large graphs, a common task for industrial applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Learning for Generative Scene Editing and Motion</title>
<link href="https://hdl.handle.net/1721.1/156987" rel="alternate"/>
<author>
<name>Fang, David S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156987</id>
<updated>2024-09-25T03:53:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Unsupervised Learning for Generative Scene Editing and Motion
Fang, David S.
Unsupervised learning for images and videos is important for many applications in computer vision. While supervised methods usually achieve the best performance, the data curation and labeling that supervised datasets require makes them difficult to scale. Unsupervised learning, on the other hand, is more scalable and generalizable and requires far less data curation, but it is harder because it lacks a clear target objective. In this thesis, we propose two distinct lines of unsupervised learning work with generative applications: 1) BlobGSN and 2) optical flow estimation and flow generation with diffusion models. BlobGSN explores the unsupervised learning of spatially disentangled mid-level latent representations for 3D scenes in a generative context. Within this generative framework, we show that BlobGSN facilitates novel scene generation and editing. In a different vein, current state-of-the-art optical flow learning models rely on ground-truth data collection for sequences of video frames. Unsupervised learning of optical flow, which does not require ground-truth data, could in principle leverage any publicly available video data for training. We explore different frameworks for unsupervised optical flow learning that tackle problems such as photometric error, occlusion handling, and flow smoothness. Additionally, we propose a generative framework for generating optical flow from a single frame that can be trained in an unsupervised manner.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery Blueprint: Saudi Arabia’s Strategic Foray into the Battery Value Chain</title>
<link href="https://hdl.handle.net/1721.1/156985" rel="alternate"/>
<author>
<name>Alhakbani, Alanoud</name>
</author>
<id>https://hdl.handle.net/1721.1/156985</id>
<updated>2024-09-25T04:00:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Battery Blueprint: Saudi Arabia’s Strategic Foray into the Battery Value Chain
Alhakbani, Alanoud
This thesis evaluates Saudi Arabia’s potential to establish a foothold in the global battery industry, an industry that would be pivotal for its energy transition and economic diversification goals. Key enablers such as Saudi Arabia’s commitment to renewable energy and industrial growth in adjacent sectors, including automotive and refinery, provide a foundation for entry into the battery value chain. However, the Kingdom must navigate barriers such as market competition and the need for technological expertise in advanced battery production, a market led by heavyweights like China and innovators across the globe. This study assesses the viability of a bottom-up technology catch-up approach for industrial competency in battery technology—a contrast to the top-down models employed by established players. The research comprises an in-depth analysis of enablers and barriers for technology catch-up utilizing a proposed assessment framework, and strategies for effectively localizing different parts of the battery value chain. The outcome aims to offer a strategic blueprint for Saudi Arabia to capitalize on the burgeoning demand for battery technology and enhance its global economic stature.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cracking Common Notions Relating Egg Strength to Impact Orientation</title>
<link href="https://hdl.handle.net/1721.1/156984" rel="alternate"/>
<author>
<name>Sutanto, Antony</name>
</author>
<id>https://hdl.handle.net/1721.1/156984</id>
<updated>2024-09-25T03:19:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Cracking Common Notions Relating Egg Strength to Impact Orientation
Sutanto, Antony
The chicken egg possesses a shell structure that is conventionally thought to be strongest when loaded on its vertical poles, particularly the sharp end, which resembles a structural arch. This notion has influenced educational activities such as the "egg drop challenge", where participants typically orient the egg with its sharp end facing downwards to improve its chances of resisting fracture upon impact. This study tests this conventional wisdom by investigating the egg's strength, or the energy it sustains before rupture, as a function of orientation. First, static compression tests were conducted to determine the maximum energy absorbed by the egg along its compression axes. Eggs yielded greater deformations and absorbed more energy before rupture when compressed horizontally rather than vertically, suggesting potential advantages under dynamic loading conditions. To validate that these trends also held under dynamic loading, drop tests from varying heights were performed to assess the kinetic energy required to fracture the egg. Contrary to intuition, eggs dropped on their equators withstood greater drop heights without rupturing than those dropped on their vertical poles. This unexpected finding challenges the prevailing notion of the egg's structure and suggests a new perspective on its impact behavior.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enforcing Identification and Authentication Policies at Scale in a Cloud Microservices Architecture</title>
<link href="https://hdl.handle.net/1721.1/156983" rel="alternate"/>
<author>
<name>Sinha, Varnika</name>
</author>
<id>https://hdl.handle.net/1721.1/156983</id>
<updated>2024-09-25T03:59:34Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enforcing Identification and Authentication Policies at Scale in a Cloud Microservices Architecture
Sinha, Varnika
As cloud adoption increases, cloud providers are competing to build more robust and secure platforms to keep growing and attract more users by ensuring their data is highly available yet not susceptible to malicious attacks. Many cloud platforms are distributed systems based on a microservices architecture in which many services communicate with one another. Communication among services should be authenticated to implement defense in depth rather than relying solely on the security of networks and infrastructure. However, these services can number in the hundreds or thousands, which multiplies the number of specialized secrets needed to provide authentication. Such large numbers of secrets are hard to manage and track in the case of exposure, which leads to a risk of misconfiguration and leaks. We implement a framework that accounts for these secrets by managing their creation, rotation, and deletion in accordance with the existing architecture of the platform, using a Kubernetes custom resource and controller, and ensures that a secret with the correct permissions is always present when needed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hollywood Workers vs Tech: In Theory and In the News</title>
<link href="https://hdl.handle.net/1721.1/156982" rel="alternate"/>
<author>
<name>Cmehil-Warn, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/156982</id>
<updated>2024-09-25T03:28:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hollywood Workers vs Tech: In Theory and In the News
Cmehil-Warn, Christian
The 2023 SAG-AFTRA and WGA strikes in Hollywood were notable for their explicit ties to the changing relationship between technology and labor. In particular, disputes around the use of generative AI in the workplace were widely reported in the news. This thesis examines the Hollywood strikes in two parts. The first part takes a political economy approach to examine the underlying causes of these changes in technology-labor relations. In particular, the thesis argues that an industry shift to distribution via streaming services, alongside increased vertical integration, brought new imperatives to production and exponentially increased levels of data capture, enabling the labor conditions that led to the strike. Theories of creative labor and technology-labor relations are used to describe the tensions. The resulting SAG-AFTRA and WGA collective bargaining agreements are then examined within these framings. The second part of the thesis quantitatively explores the relationship between news media (which has its own complex relationship with technology) and the Hollywood strikes using natural language processing techniques. Sentiment analysis and sentence embeddings are used to quantify and compare news articles across different characteristics. The results of the analysis are inconclusive.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Data Markets</title>
<link href="https://hdl.handle.net/1721.1/156981" rel="alternate"/>
<author>
<name>Lu, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/156981</id>
<updated>2024-09-25T03:30:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Decentralized Data Markets
Lu, Charles
Acquiring access to massive amounts of data has become fundamental to state-of-the-art artificial intelligence systems. However, as data value increases, data owners have challenged current norms and practices of data acquisition. Data marketplaces have been promoted to fairly compensate data producers and incentivize greater data sharing. In this thesis, I describe a decentralized model of data markets to overcome privacy concerns in siloed, data-limited domains such as healthcare. I propose two federated techniques to automatically select a subset of data sellers and datapoints for a buyer given some sample data. I also examine the socio-technical implications of emerging data markets for medical data and synthesize ethical principles for medical data marketplaces. Decentralized data markets have the potential to enable new AI economies through more robust, transparent, and participatory data sharing platforms. Through the contributions in this thesis, I hope to make a positive step towards realizing a future where transformative data-enabled technologies such as general-purpose machine learning systems are developed more responsibly and the benefits are distributed more equitably.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Policy, People, and Place Impacts of Mining for the Clean Energy Transition in the US</title>
<link href="https://hdl.handle.net/1721.1/156978" rel="alternate"/>
<author>
<name>Randall, Abigail Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/156978</id>
<updated>2024-09-25T03:29:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Policy, People, and Place Impacts of Mining for the Clean Energy Transition in the US
Randall, Abigail Marie
To meet the growing demands of the energy transition, we need to rapidly deploy mines to supply the minerals for clean energy technologies. This presents a set of challenges, or tensions, at the energy transition level, the policy level, and the mine level. This thesis seeks to answer two questions: What are the tensions for mining in the US? How do we decide where to permit these mines given the realities of environmental and community impacts? To address the tensions at the energy transition level, I establish copper, cobalt, nickel, and lithium, the energy transition minerals, as the focus of this thesis. Then, to address policy tensions, I conducted a geospatial analysis and found that 38% of the US's energy transition mineral resources are on or near difficult-to-permit lands, with 92.7% of those resources being copper. To understand how these tensions play out in practice, I created three case studies through a series of interviews and a review of public comments. The first case study covers the East Boulder and Stillwater Mines. In this case, stakeholders came together to form a Good Neighbor Agreement, a legally binding contract between the mine owner and grassroots community organizations. The agreement is an adaptable framework for mine decision making, which shows how stakeholders can work creatively within the tensions of mining for the energy transition. The second case, the Twin Metals Minnesota case study, shows how political tensions can introduce risk and uncertainty into the mine permitting process and prevent a mine from moving forward. The third is an Indigenous lands case study centered on the Thacker Pass lithium mine, which illustrates how a tensions framing is critical where the tradeoff framing has historically risked Indigenous sovereignty over their lands. The identified tensions flow into the policy recommendations, which are to: 1. Replicate solutions that maximize gains to stakeholders, 2. Rely on currently underutilized policy options to increase transparency and consolidate review in the permitting process, and 3. Look downstream in the energy transition to learn from newer industries. Taken all together, this thesis tells a story of what types of mines need to be deployed in the US to meet the needs of the clean energy transition, whether and where mines can be deployed under current policy constraints in the US, and how mines are deployed in practice.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unlocking Collective Intelligence in Decentralized AI</title>
<link href="https://hdl.handle.net/1721.1/156977" rel="alternate"/>
<author>
<name>Gupta, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/156977</id>
<updated>2024-09-25T03:47:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Unlocking Collective Intelligence in Decentralized AI
Gupta, Gauri
In the current evolving digital landscape, vast repositories of data and knowledge often remain siloed and untapped due to privacy concerns and centralized control. Thus, despite the transformative potential of artificial intelligence, its utilization in societal sectors lags behind other industries. For example in healthcare, data privacy and lack of incentives and trust in the system prevent collaboration on a large scale. This necessitates the development of efficient methods for decentralized learning while preserving privacy to generate wisdom whose quality is on par with the case of data centralization. It involves first identifying and creating essential building blocks that encourage collaboration while preserving the decentralized nature of these critical digital paradigms. A key challenge here is to facilitate collaboration among distrustful, disconnected, and disincentivized entities possessing distinct assets such as data, models, and computation resources. Harnessing the collective wisdom latent within decentralized networks will unlock new avenues for innovation and human collaboration. Therefore, the primary aim of this thesis is to expedite AI adoption in decentralized systems by introducing novel algorithms and systems capable of extracting collective intelligence while preserving privacy. &#13;
&#13;
This thesis addresses the following research questions. First, it delves into methods for training machine learning models collaboratively while simultaneously protecting the privacy of raw data and the proprietary nature of individual models. Second, it explores the coordination mechanisms among system nodes in the absence of a central authority or trusted server to ensure orderly collaboration; specifically, it answers questions such as whom a node should talk to and when random collaborator selection works. Finally, it investigates strategies for conducting crowd-sourced decision-making to obtain population-level predictive results, scaling efficiently to encompass millions of agents.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking Pavement Environmental Performance Using Data-Driven Modeling and Policy</title>
<link href="https://hdl.handle.net/1721.1/156976" rel="alternate"/>
<author>
<name>Vaidyanath, Varsha</name>
</author>
<id>https://hdl.handle.net/1721.1/156976</id>
<updated>2024-09-25T03:59:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Benchmarking Pavement Environmental Performance Using Data-Driven Modeling and Policy
Vaidyanath, Varsha
Recently, federal and state governments have been implementing policy to reduce the embodied emissions from the production of materials. However, pavement materials impact emissions throughout the pavement lifecycle, not just during production. This paper addresses how a new pavement evaluation system and policy framework might drive better solutions to reduce carbon emissions from a climate change standpoint. The main components include: establishing why current pavement rating systems and current policy are not sufficient; performing a data-driven analysis with a grading and scorecard system to assess, compare, and summarize pavement design quality; and proposing an effective policy framework to implement the system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Mechanical Behavior of a Traditional Japanese Joint for Flexible Structural Design</title>
<link href="https://hdl.handle.net/1721.1/156975" rel="alternate"/>
<author>
<name>Ortea Varela, Ines</name>
</author>
<id>https://hdl.handle.net/1721.1/156975</id>
<updated>2024-09-25T03:13:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring the Mechanical Behavior of a Traditional Japanese Joint for Flexible Structural Design
Ortea Varela, Ines
This research examines the mechanical behavior of a traditional Japanese joint, the Mortised Rabbeted Oblique (MRO) splice. Through computational simulations employing Finite Element Analysis (FEA), the study analyzes a continuous beam and an unmodified MRO splice, revealing expected behavior in the beam and unexpected stress concentration and displacement asymmetry in the splice. Topology optimization of the splice’s end sections yields iterations with varying volume reductions (50%, 70%, and 90%), showing significant topology differences between the two ends. Subsequently, all iterations were fabricated through 3D printing using PLA and subjected to three-point bending testing. Experimental results confirm the computational findings, demonstrating reduced strength in the MRO splice compared to the continuous beam. A surprising increase in ductility and maximum load resisted by the iterations with 50% and 70% volume reductions is observed. This finding underscores how modifying the end beams significantly influences the overall behavior of the splice mechanism.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Fairness of Artificial Intelligence Models for Radiology Image Classification</title>
<link href="https://hdl.handle.net/1721.1/156974" rel="alternate"/>
<author>
<name>Sandadi, Varsha</name>
</author>
<id>https://hdl.handle.net/1721.1/156974</id>
<updated>2024-09-25T03:26:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluating Fairness of Artificial Intelligence Models for Radiology Image Classification
Sandadi, Varsha
With the increasing prevalence of AI-assisted decision-making in the healthcare domain, evaluating the fairness of machine learning models is more central than ever. Measuring the fairness of medical decision-support systems has enormous impacts on patients of different backgrounds and can influence how clinicians make decisions. In this study, we conduct a fairness analysis of the top 8-10 performing machine learning and artificial intelligence models from the Radiological Society of North America cervical spine fracture detection challenge and abdominal trauma detection challenge. Seven metrics are used for a more comprehensive assessment of fairness. Our findings indicate that cervical spine fracture detection models exhibit overall fairness, while abdominal trauma detection models demonstrate some unfairness in specific injury regions, possibly due to limited sample size. We also explore the performance of top models from the intracranial hemorrhage detection challenge across clinician-labeled "easy," "medium," and "hard" cases, revealing a lower accuracy rate on hard cases. This study underscores the need for additional model testing and comprehensive data representation to ensure fairness before real-world deployment in healthcare systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Sim-to-Real Robot Parkour from RGB Images</title>
<link href="https://hdl.handle.net/1721.1/156972" rel="alternate"/>
<author>
<name>Jenkins, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/156972</id>
<updated>2024-09-25T03:26:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning Sim-to-Real Robot Parkour from RGB Images
Jenkins, Andrew
Advancements in quadrupedal robot locomotion have yielded impressive results, achieving dynamic maneuvers like climbing, ducking, and jumping. These successes are largely attributed to depth-based visual locomotion policies, known for their robust transferability between simulated and real-world environments (sim-to-real). However, depth information inherently lacks the semantic information present in RGB images. This thesis investigates the application of an RGB visual locomotion policy for navigating complex environments, specifically focusing on extreme parkour terrain. While RGB data offers a deeper understanding of the scene through semantic cues, it presents challenges in sim-to-real transfer due to large domain gaps. This work proposes a novel approach for training an RGB parkour policy and demonstrates that it achieves performance comparable to depth-based approaches in simulation. Furthermore, we successfully deploy and evaluate our RGB policy on real-world parkour obstacles, signifying its potential for practical applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Multi-Stage Machine Learning Pipelines for Extracting Structured Key-Value Pairs from Documents</title>
<link href="https://hdl.handle.net/1721.1/156971" rel="alternate"/>
<author>
<name>Pyo, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/156971</id>
<updated>2024-09-25T03:28:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Leveraging Multi-Stage Machine Learning Pipelines for Extracting Structured Key-Value Pairs from Documents
Pyo, Bryan
In the rapidly growing field of information extraction, the ability to automatically and accurately extract structured data from sources has become increasingly important across several industries. This need has arisen largely due to the vast quantity of data currently available and still being actively collected by these industries for various purposes. As data grows in both quantity and importance, the ability to parse it into usable information becomes an ever more essential endeavor. Although information extraction has traditionally been a relatively labor-intensive task, with the rising sophistication and applicability of machine learning and computer-aided document analysis, automatic and more generalized methods of extracting relevant data from documents have become a major focus of research. This thesis discusses several pipelines developed to extract data in the form of key-value pairs from specification sheets describing mechanical parts, achieving accuracies ranging from 80% to 100% depending on the pipeline, the target documents, and the key-value pairs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Achieving Secure and Performant Databases with Minimal Resource Overhead</title>
<link href="https://hdl.handle.net/1721.1/156970" rel="alternate"/>
<author>
<name>Lim, Darren</name>
</author>
<id>https://hdl.handle.net/1721.1/156970</id>
<updated>2024-09-25T03:05:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Achieving Secure and Performant Databases with Minimal Resource Overhead
Lim, Darren
Modern cloud databases run in virtualized environments, which are typically implemented with Linux virtual machines (VMs). However, this poses two main risks. Typically, trusted database code runs alongside stored procedure code, which means that user-inputted stored procedure code can pose a security risk to the database and the data itself if that code contains vulnerabilities. Additionally, because Linux has such a large codebase, Linux-based VMs are subject to complex latency concerns as well as a large attack surface. Using a low-level shared memory protocol, it is possible to create a secure and performant communication channel between a database VM and the VMs of its stored procedures. This protects the database from vulnerabilities in the stored procedure code. Furthermore, by using unikernels instead of Linux VMs, the machines running the VMs can minimize the CPU/memory overhead per VM while also improving security for the DBMS. Overall, these changes allow cloud-hosted machines to utilize resources more efficiently.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Undercutting Attacks: A Study on Mining and Transaction Fee Behavior</title>
<link href="https://hdl.handle.net/1721.1/156969" rel="alternate"/>
<author>
<name>Bao, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/156969</id>
<updated>2024-09-25T03:40:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mitigating Undercutting Attacks: A Study on Mining and Transaction Fee Behavior
Bao, Claire
With block rewards dwindling in Bitcoin, a miner’s revenue will become increasingly reliant on transaction fees. However, these transaction fees are highly variable, which could result in undercutting attacks. Undercutting attacks occur when miners intentionally fork the blockchain in an attempt to steal transactions from an already-mined block. These attacks could cause repeated forking of the blockchain, thereby rendering Bitcoin unstable and less secure long-term. The original paper by Carlsten et al. proposing these attacks made assumptions about the future mining environment. For instance, it assumed that block size limits were large relative to the number of transactions and that all transactions had the same fee. &#13;
&#13;
This thesis aims to examine whether undercutting attacks would still be a threat under different mining dynamics. Specifically, we examine two important mempool characteristics that have changed since the original paper was written: the block size limit and the fee gradient. By investigating what happens as these characteristics and factors change, our research is able to not only generate a holistic view of whether undercutting attacks are a threat for a wide variety of possible mempool dynamics, but it also provides guidelines on what range each of these measurable characteristics must fall within in order for the blockchain to be secure and stable long-term. Our research found that the blockchain is safe from undercutting attacks when the block size limit is small relative to the number of transactions, but the blockchain becomes more susceptible to undercutting attacks if transactions with much higher fees enter the mempool infrequently even for smaller block size limits. Moreover, we extend the logic of undercutting attacks from the original paper to show that, if the mempool dynamics are such that the undercutting occurs long-term, the tangible impact on users is that very little progress will be made as fully rational miners will end up only including one transaction per block, regardless of the total amount of available transactions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long-range Genomics Benchmark Technology and More</title>
<link href="https://hdl.handle.net/1721.1/156968" rel="alternate"/>
<author>
<name>Polen, McKinley</name>
</author>
<id>https://hdl.handle.net/1721.1/156968</id>
<updated>2024-09-25T03:44:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Long-range Genomics Benchmark Technology and More
Polen, McKinley
The transformer architecture has emerged as a popular choice in various domains, owing to its ability to capture long-range dependencies and its parallel processing capabilities. In the context of genomics, where dependencies often span over 100,000 base pairs, the quadratic computational complexity of the attention mechanism, a core feature of the transformer architecture, poses a significant bottleneck. With the goal of creating a genomics foundation model (FM), this paper aims to address challenges associated with long-range dependencies in genomics. Our survey encompasses modifications to the attention mechanism, the creation of a genomics long-range benchmark (GLRB), and the evaluation of various transformer and non-transformer architectures. These efforts collectively lay the groundwork for the development of a robust genomics foundation model, opening new possibilities for genomics research and applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Modal Protein Function Prediction using a Joint Embedding Space from Two Graph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/156967" rel="alternate"/>
<author>
<name>Tysinger, Emma P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156967</id>
<updated>2024-09-25T03:55:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-Modal Protein Function Prediction using a Joint Embedding Space from Two Graph Neural Networks
Tysinger, Emma P.
In bioinformatics and proteomics, determining protein functions experimentally is expensive and slow. There is a growing need for precise and quick computational prediction methods to fill the gap between sequence discovery and functional understanding. In recent years there has been an influx of deep-learning protein folding algorithms used for predicting function by transfer learning. However, protein function is only partially captured by any single modality, including structure; in isolation, each modality gives only a partial understanding of function. Uniting these modalities is an important step toward understanding function more holistically. We present a multi-modal framework using two graph neural networks to infer a joint embedding space that captures many properties of a protein, including structure, disease associations, drug interactions, protein interactions, biological processes, and more. We evaluate the embedding space on downstream prediction tasks including enzyme commission (EC) numbers and gene ontology (GO) terms. Experimental results on protein function prediction, as well as a qualitative visual analysis of the protein embedding space, show that our framework successfully captures both the structure and the biomedical context of proteins, and outperforms structure-only encoders.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inertial Navigation System Drift Reduction Using Scientific Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/156966" rel="alternate"/>
<author>
<name>McManus, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/156966</id>
<updated>2024-09-25T03:49:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inertial Navigation System Drift Reduction Using Scientific Machine Learning
McManus, Matthew
Inertial Navigation Systems (INS) are crucial for accurate navigation in GPS-denied environments, but they suffer from drift errors that accumulate over time. This thesis introduces Scientific Machine Learning (SciML) as an innovative approach to mitigate INS drift by integrating physical models with machine learning algorithms. The proposed SciML architecture leverages neural networks to learn complex error patterns and relationships from simulated IMU data, outperforming conventional techniques like Kalman filtering. Utilizing a simulation-focused approach with the Julia programming language and the High-Performance Inertial Navigation Development Repository (HIDR) library, the research generates realistic datasets encompassing diverse trajectories, sensor errors, and operational conditions. The SciML methodology incorporates data generation, INS mechanization, error modeling using neural networks, and a filtering framework that integrates the Extended Kalman Filter (EKF) with batch filtering techniques. Experimental results demonstrate the superior performance of the SciML-based INS in reducing position, velocity, and attitude errors compared to a baseline Kalman filter. This pioneering approach of fusing SciML with INS physical models holds promise for revolutionizing drift error mitigation and advancing the field of navigation systems, paving the way for more accurate, reliable, and resilient navigation in GPS-denied environments, with potential applications in aviation, robotics, and autonomous vehicles.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Interfaces for Augmenting Episodic Memory</title>
<link href="https://hdl.handle.net/1721.1/156965" rel="alternate"/>
<author>
<name>Zulfikar, Wazeer Deen</name>
</author>
<id>https://hdl.handle.net/1721.1/156965</id>
<updated>2024-09-25T03:29:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AI Interfaces for Augmenting Episodic Memory
Zulfikar, Wazeer Deen
Episodic memory, the memory of personal experiences, is a core component of human cognition. It functions within the neural substrate to store progress towards personal goals. Thus, it influences human behavior by enriching social interactions, forming a personal narrative, and facilitating personal growth. With the rise of challenges such as poor sleep, aging and dementia, and fragmented attention, people experience difficulties with episodic memory retrieval. These difficulties range from momentary lapses, such as forgetting previous interactions during conversations, to trouble recalling multiple events while reminiscing and making decisions. &#13;
&#13;
In this work, we explore artificially intelligent (AI) systems that augment episodic memory by enabling people to interact with their memories effectively. We design, develop, and evaluate two systems: (i) Memoro, a wearable audio-based memory assistant that presents concise suggestions in real-time while minimizing disruption to the user’s primary task, and (ii) Resonance, a web-based reflective memory assistant that offers actionable suggestions to help users savor their past, present, and future experiences for mental health benefits. By conducting an in-person user study for Memoro and a longitudinal online user study for Resonance, we investigate the effects of these systems on users, measure their technical efficacy, and gather feedback on user experiences. Recent advances in artificial intelligence offer novel opportunities to enhance episodic memory; exploring interfaces that seamlessly integrate with human behavior is therefore crucial to ensure that AI-based systems enrich everyday experiences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedding engineering intuition into computational design through interactive topology optimization</title>
<link href="https://hdl.handle.net/1721.1/156964" rel="alternate"/>
<author>
<name>Schiffer, Gillian</name>
</author>
<id>https://hdl.handle.net/1721.1/156964</id>
<updated>2024-09-25T03:01:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Embedding engineering intuition into computational design through interactive topology optimization
Schiffer, Gillian
With increasing pressure to generate low environmental impact designs, topology optimization presents a flexible, material-efficient solution. Topology optimization is a computational design method that produces lightweight, high-performing designs uniquely suited to a user’s objective function and constraints. However, major obstacles to topology optimization’s widespread use remain, including increased complexity and computational time for advanced, nonlinear optimization formulations such as buckling or stress, lack of geometric control, and difficulty in manufacturing. Interactive topology optimization algorithms overcome these obstacles by prompting users to directly modify the geometry of the design as the optimization runs. By embedding their engineering intuition into the design, users address concerns for complex failure modes, manufacturability, or alternative engineering performance metrics. This work presents two interactive approaches: HiTop 2.0, which empowers users to selectively enforce minimum and/or maximum solid and/or void feature size controls, and interactive infill topology optimization, which incorporates user-drawn infill patterns into regions of the optimized design. The interactive methods are demonstrated on numerical 2D examples, HiTop 2.0 is extended to a numerical 3D example, and interactive infill is experimentally validated with 2.5D additively manufactured test beams.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Electricity Distribution Network Tariffs for Beneficial Electrification</title>
<link href="https://hdl.handle.net/1721.1/156963" rel="alternate"/>
<author>
<name>Turk, Graham</name>
</author>
<id>https://hdl.handle.net/1721.1/156963</id>
<updated>2024-09-25T03:24:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing Electricity Distribution Network Tariffs for Beneficial Electrification
Turk, Graham
Decarbonizing the transportation and residential building sectors will require rapid electrification through the uptake of electric vehicles (EVs) and cold climate heat pumps (CCHPs), respectively. There is broad consensus that the flat volumetric electricity tariffs currently in place for residential customers in most of the US discourage electrification and do not reflect the underlying marginal costs of electricity delivery. Under flat volumetric tariffs, utilities are projecting sharp rises in distribution-level peak demand, which will necessitate network upgrades whose costs are recovered from all grid users. Alternative rate designs can help mitigate the need for these upgrades by shifting new demand away from peak periods. However, there is an emerging narrative that electricity tariff design is a zero-sum game: regulators can either protect vulnerable households or encourage electrification, but not both. In this thesis, we challenge that perception by asking whether well-designed distribution network tariffs can deliver a win-win in the long run, reducing operating costs for EVs and/or CCHPs and average network costs for households that cannot yet afford to electrify. We answer this question by running a series of bottom-up optimizations to simulate households’ responses to alternative network tariff designs in two distinct geographies, then assessing cost impacts on different household groups. We use open-source data on household electricity consumption and travel behavior. We find that beyond very low adoption levels, time-of-use per-kWh network tariffs, which several states have adopted as the default, perform poorly on all metrics and lead to large increases in local peak demand. Per-kW capacity tariffs (subscription and demand charges) are effective at mitigating EV-driven peaks, especially when paired with TOU energy tariffs. 
We recommend that regulators separate network charges from energy charges and introduce a per-kW subscription network tariff to collect a portion of the network revenue requirement. This approach will reduce the total cost of ownership of electrified devices while mitigating the network upgrades needed to maintain reliability. Our recommendations offer a path towards rapid electrification that benefits all grid users.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Misalignment in Language Model Deployments through Context-Specific Evaluations</title>
<link href="https://hdl.handle.net/1721.1/156962" rel="alternate"/>
<author>
<name>Soni, Prajna</name>
</author>
<id>https://hdl.handle.net/1721.1/156962</id>
<updated>2024-09-25T03:38:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Addressing Misalignment in Language Model Deployments through Context-Specific Evaluations
Soni, Prajna
Language model-based applications are increasingly being deployed in the real world across a variety of contexts. While their rapid success has realized benefits for society, ensuring that they are trained to perform according to societal values and expectations is imperative given their potential to shape societal values, norms, and power dynamics. Evaluation plays a key role in language model (LM) alignment and policy-making. Presently, LM alignment and evaluations are based on developer- and researcher-prescribed attributes, with many benchmarks focusing on performance as dictated by generalized or primarily Western datasets that may not accurately reflect the deployment context. This results in an inevitable misalignment where a model trained on human preference proxies in context A is deployed in context B. &#13;
&#13;
Existing evaluation measures and alignment techniques are heavily biased towards the values and perspectives of model developers. In this thesis, I argue that in order to ensure that alignment efforts are specific to their deployment contexts, it is necessary and feasible to design open-ended and participatory methods to elicit a broader range of context-specific axes. I demonstrate the viability of this through CALMA, a non-prescriptive and grounded participatory process that successfully elicits distinct and context-specific alignment axes for evaluation datasets through in-context studies with two different communities. I further explore the ways in which broader participation can enable more effective adaptive AI regulation, given the crucial role of evaluations in addressing the technology-policy lag.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Garabit Viaduct: A Historical and Structural Study</title>
<link href="https://hdl.handle.net/1721.1/156961" rel="alternate"/>
<author>
<name>Harlin, Anne-Sixtine</name>
</author>
<id>https://hdl.handle.net/1721.1/156961</id>
<updated>2024-09-25T03:44:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Garabit Viaduct: A Historical and Structural Study
Harlin, Anne-Sixtine
This thesis investigates the Garabit Viaduct, providing a historical study and structural analysis of its truss arch. It aims to unravel the ingenuity behind the arch's elegant shape and design process. By examining historical plans and the memoirs of engineers such as Gustave Eiffel and Léon Boyer, this research uncovers the evolution of the viaduct's design and shape, revealing that the geometry of the arch was form-found using graphic statics. This study sheds light on the structural design hypotheses employed by Gustave Eiffel and Maurice Koechlin in sizing the members, providing insights into design practices of the late 19th century. Additionally, the study of the primary source documents left behind by the engineers suggests the method used for the arch's design may have influenced the shaping of the supporting piers, opening avenues for future research into the broader implications for Eiffel's later iconic tower.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peripheral Nervous System Modulation with Wireless Cellular Sized Freestanding Injectable Devices</title>
<link href="https://hdl.handle.net/1721.1/156960" rel="alternate"/>
<author>
<name>Patel, Preet</name>
</author>
<id>https://hdl.handle.net/1721.1/156960</id>
<updated>2024-09-25T03:35:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Peripheral Nervous System Modulation with Wireless Cellular Sized Freestanding Injectable Devices
Patel, Preet
Designing novel neural interfaces is essential for various medical applications, scientific research, and human augmentation. One of the foundations of neural interfaces and bioelectronic medicine is the electrical stimulation of excitable cells to interface the body with electronics and treat a variety of diseases. Current technologies, while efficacious, are limited by their bulkiness, require highly invasive surgeries, are unable to target at single-cell resolution, and are prone to foreign body reactions. Optogenetics can address these issues but fundamentally requires genetic modification, which makes it difficult to implement in vivo and raises issues of muscle atrophy and toxicity, specifically in the Peripheral Nervous System (PNS).&#13;
&#13;
This work aims to advance bioelectronic medicine by developing efficient, wireless, cellular-sized electronic devices that can be administered in a drug-like fashion. These innovative, substrate-free nanoelectronic devices, termed injectable electronics, can be activated and controlled using near-infrared (NIR) light, enabling minimally invasive, targeted neuromodulation deep within the peripheral nervous system (PNS). By overcoming the limitations of current implantable devices, this groundbreaking approach has the potential to transform the way we diagnose and treat a wide range of neurological disorders.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Prime Factorization of Proteins</title>
<link href="https://hdl.handle.net/1721.1/156959" rel="alternate"/>
<author>
<name>Radev, Simeon</name>
</author>
<id>https://hdl.handle.net/1721.1/156959</id>
<updated>2024-09-25T03:14:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards a Prime Factorization of Proteins
Radev, Simeon
A classical problem in machine learning is the interpretability of a model’s latent information processing. This is particularly the case in the richly complex field of protein analysis, where unique and novel insights into the structural organization of proteins can help illuminate their functional space and, in particular, lead toward a factorization of the structural space into a set of motif building blocks that completely span this universe. This thesis creates a new inference interface for performing such analysis by leveraging the sequential learning process of a neural autoencoder to construct a decomposition of proteins as a hierarchical sequence of embedded representation vectors. Further development of this work could lead to a greater understanding of the organizational complexity of natural phenomena, in particular as it relates to the uniquely complex relationship between protein structures and their function.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Deep Learning Algorithms in Predicting Seismic Response of a Reinforced Concrete Structure</title>
<link href="https://hdl.handle.net/1721.1/156958" rel="alternate"/>
<author>
<name>Morgan, Jacob A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156958</id>
<updated>2024-09-25T03:58:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Deep Learning Algorithms in Predicting Seismic Response of a Reinforced Concrete Structure
Morgan, Jacob A.
This thesis presents an evaluation of the performance of three well-established deep learning algorithms in predicting the response of a six-story instrumented reinforced concrete hotel in California to seismic excitation. Given the increasing availability of strong-motion data and expanded usage of deep learning in structural health monitoring, this thesis seeks to evaluate the predictions of purely data-driven and physics-informed architectures using processed instrumentation data in order to more accurately predict structural response for use in structural health monitoring and performance-based design applications.&#13;
&#13;
By employing a variety of results metrics previously used in the literature, including correlation coefficients, normalized error distributions, and peak errors, this thesis examines different components of the models’ capabilities to learn more about the patterns in the data learned by the computational mechanisms of each architecture, and explores the feasibility of a generalized approach for further application in structural response prediction. &#13;
&#13;
Findings from the work show the data-driven Long Short-Term Memory (LSTM) network performing most accurately, though not consistently outperforming the other algorithms. Some trends in the data could be evidence of how different architectures may be better equipped to predict different mode shapes and frequency contents. For example, the data-driven and physics-guided LSTM models predicted the third floor’s response more accurately than the roof’s, whereas the physics-guided convolutional neural network (CNN) showed the opposite pattern, revealing a contrast between the two base architectures. This thesis also contributes to this growing field by documenting the experimental setup in detail to allow for the replication of results and to facilitate future application by structural engineers.&#13;
&#13;
As deep learning research in structural engineering continues to gain popularity, this thesis provides an experimental basis and a case study that can be followed and replicated to motivate future experimentation, and it offers compelling directions for future work to further the use of deep learning in structural response prediction and structural health monitoring as a whole.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Psychometric Tool to Measure the Emotional Impact of Visual Content</title>
<link href="https://hdl.handle.net/1721.1/156957" rel="alternate"/>
<author>
<name>Cucu, Theodor</name>
</author>
<id>https://hdl.handle.net/1721.1/156957</id>
<updated>2024-09-25T03:52:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing a Psychometric Tool to Measure the Emotional Impact of Visual Content
Cucu, Theodor
This thesis investigates the human valence response to sequences of visual images. We first use crowd-sourcing and a novel nine-point psychometric scale to estimate human valence responses to individual images from the OASIS image set with high reliability (split-half Spearman rank-correlation ρ = 0.95). In a separate group of human participants, we then estimate valence responses following short, random sequences of those images (of length ≤ 10). Our key finding is that these sequence-contingent valence responses can be closely predicted by a simple linear combination of the estimated human valence responses to individual images (held-out ρ = 0.94). The combination weights are largest for the final image in the sequence; intuitively, this means the final image by itself can make predictions with high goodness-of-fit (ρ = 0.87). In summary, this research shows new evidence for a simple relationship between valence responses to individual images and valence responses to image sequences, with implications for future studies and practical applications in psychological assessment and beyond.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-omic Analysis of Neurodegeneration in Alzheimer’s Disease and Related Dementias</title>
<link href="https://hdl.handle.net/1721.1/156956" rel="alternate"/>
<author>
<name>Howe, Stephanie Pui-kay</name>
</author>
<id>https://hdl.handle.net/1721.1/156956</id>
<updated>2024-09-25T03:32:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-omic Analysis of Neurodegeneration in Alzheimer’s Disease and Related Dementias
Howe, Stephanie Pui-kay
The advent of single-cell sequencing has revolutionized the granularity at which we can understand genetics and underlying cell biology. It enables us to analyze both the transcriptome and the epigenome of various tissues, offering new insights into the molecular mechanisms that underlie diseases such as neurodegeneration. This study focuses on neurodegenerative disease at single-cell resolution across the following proteinopathies: Alzheimer’s Disease (AD), Frontotemporal Dementia (FTD), Lewy Body Dementia (LBD), and Vascular Contributions to Cognitive Impairment and Dementia (VCID). We utilize both single-cell RNA sequencing (scRNA-seq) and single-cell ATAC sequencing (scATAC-seq) to perform a joint analysis of these conditions, examining both modalities holistically. Our research characterizes a multi-omic data set comprising 2,820,565 cells from 491 prefrontal cortex samples across the aforementioned conditions, with all samples subjected to scRNA-seq and 63 to scATAC-seq. Leveraging this data, we conduct a multi-omic analysis of Alzheimer’s Disease and Related Dementias (ADRD) by exploring differences in the transcriptome and epigenomic erosion profile across conditions, shedding light on the intricacies of cortical aging. Ultimately, we identify potential molecular and genetic markers that drive the heterogeneous relationship between pathology, epigenetic erosion, and cognition in individuals affected by these conditions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Cortex-Hippocampus Interactions During Language Processing</title>
<link href="https://hdl.handle.net/1721.1/156955" rel="alternate"/>
<author>
<name>Lee, Jiachen Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/156955</id>
<updated>2024-09-25T03:29:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Characterizing Cortex-Hippocampus Interactions During Language Processing
Lee, Jiachen Elizabeth
The role of medial temporal lobe structures, including the hippocampus, in language processing remains largely unknown. In patients with hippocampal damage, language is left largely intact [Vargha-Khadem et al., 1997], suggesting that the hippocampus is likely not necessary for language processing. Recent evidence, however, has shown that the hippocampus may serve functions outside its traditional roles in episodic memory and spatial navigation, and may generally aid in the encoding of relationships across time and space [Cohen and Eichenbaum, 1993]. Hence, the hippocampus may be involved in processes that are also implicated in language processing. Indeed, some patients with hippocampal damage show deficits in resolving ambiguous discourse referents [Rubin et al., 2011; Duff et al., 2011] and in reconstructing narratives [Race et al., 2011a], and display limited linguistic flexibility in engaging in "verbal play" [Duff et al., 2009]. Here we leverage a large-scale fMRI dataset (n=790) and identify a region in the anterior portion of the left hippocampus that responds to meaningful language. We then characterize its response profile and show that it is responsive to semantically meaningful material but is not engaged during cognitively demanding spatial working memory and arithmetic tasks. Next, we examine the relationship between hippocampal and cortical language processing, starting with the neural correlates of word- and sentence-memorability in both the hippocampal and cortical language areas. Lastly, we leverage an encoding-model-guided procedure to search through a large set of sentences and identify those predicted to maximally differentiate responses in the cortical and hippocampal language areas. We find that cortical language areas are largely driven by surprisal, while hippocampal language areas display preferences for more imageable and concrete sentences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accurate and Fast Approximate Graph Mining at Scale</title>
<link href="https://hdl.handle.net/1721.1/156954" rel="alternate"/>
<author>
<name>Arpaci-Dusseau, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/156954</id>
<updated>2024-09-25T03:53:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accurate and Fast Approximate Graph Mining at Scale
Arpaci-Dusseau, Anna
Approximate graph pattern mining (A-GPM) is an important data analysis tool for numerous graph-based applications. Sampling-based A-GPM systems exist to provide automation and generalization over a wide variety of use cases. Despite improved usability, two major obstacles prevent existing A-GPM systems from being adopted in practice. First, the termination mechanism that decides when to stop sampling lacks theoretical backing on confidence, and is significantly unstable, and thus slow, in practice. Second, these systems suffer particularly poor performance on “needle-in-the-hay” cases, because a huge number of samples is required to converge given the extremely low hit rate of their lazy-pruning strategy and fixed sampling schemes. We build ScaleGPM, an accurate and fast A-GPM system that removes both obstacles. First, we propose a novel on-the-fly convergence detection mechanism that achieves stable termination and provides a theoretical guarantee on confidence, with negligible online overhead. Second, we propose two techniques to deal with the “needle-in-the-hay” problem: eager-verify and hybrid sampling. Our eager-verify method drastically improves the sampling hit rate by pruning unpromising candidates as early as possible. Hybrid sampling further improves performance by automatically choosing the better of fine-grained and coarse-grained sampling schemes. Experiments show that our online convergence detection mechanism precisely detects convergence and results in stable, rapid termination with theoretically guaranteed confidence. We also show the effectiveness of eager-verify in improving the hit rate, and of the scheme-selection mechanism in correctly choosing the better scheme for various cases. Overall, ScaleGPM achieves a geomean speedup of 565× (up to 610,169×) over the state-of-the-art A-GPM system, Arya.
ScaleGPM is also four orders of magnitude faster than the state-of-the-art exact GPM system, GraphZero. In particular, ScaleGPM handles billion-scale graphs in seconds, where existing systems either run out of memory or fail to complete within hours.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation on ImageNet Remaining Errors with TRAK</title>
<link href="https://hdl.handle.net/1721.1/156953" rel="alternate"/>
<author>
<name>Ma, Lingyi</name>
</author>
<id>https://hdl.handle.net/1721.1/156953</id>
<updated>2024-09-25T03:12:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigation on ImageNet Remaining Errors with TRAK
Ma, Lingyi
The ImageNet dataset is an important benchmark and test bed for computer vision models. Two of its most important characteristics are its size and difficulty, which motivated the breakthrough deep learning model AlexNet a decade ago. As research progresses and computational power grows, the best models can now achieve accuracy as high as 90% on ImageNet. At such high accuracy, model predictions are usually of high precision, and the causes of the remaining long tail of errors are unknown. Many studies reassessing ImageNet have found a nontrivial amount of label error and noise, and effort has been made to fix this label noise in the test set, mainly through manual review. However, few studies have attempted to fix labels in the training set, largely due to its scale. This thesis aims to understand the remaining errors that models still make on the ImageNet dataset and to investigate labeling problems in the ImageNet training set, utilizing TRAK, a recently developed efficient data attribution method, to help identify problematic images among the 1.4 million images in the ImageNet training set.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stakeholder views on the uptake of sustainable and responsible nickel mining and processing supply chains for electric vehicles in Indonesia</title>
<link href="https://hdl.handle.net/1721.1/156950" rel="alternate"/>
<author>
<name>Malik, Rameen Hayat</name>
</author>
<id>https://hdl.handle.net/1721.1/156950</id>
<updated>2024-09-25T04:00:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stakeholder views on the uptake of sustainable and responsible nickel mining and processing supply chains for electric vehicles in Indonesia
Malik, Rameen Hayat
This thesis explores the evolution and contemporary challenges of Indonesia’s nickel industry within the context of the electric vehicle (EV) supply chain. It critically examines sustainability and ethical considerations as Indonesia positions itself as a key player in the global transition to clean energy. The study provides a comprehensive analysis of Indonesia’s strategic moves to enhance the value derived from its extensive nickel reserves, underscored by the implementation of policies such as the raw export ban aimed at fostering local processing industries. Central to this examination is the dual role of nickel as both a critical and a contentious resource, reflecting its classification as a critical mineral by multiple countries due to its indispensability in EV battery production, alongside the substantial environmental and social challenges associated with its extraction and processing. Employing a policy mobility framework, this thesis navigates the trans-local dynamics of policy making in Indonesia, juxtaposing these with global, economy-wide pursuits of transportation decarbonization via the EV industry. Through a mixed-methods approach combining literature review, stakeholder interviews, and field observations, the research unveils the multifaceted perspectives of various stakeholders, including industrial entities, government bodies, and civil society organizations. The findings highlight the significant influence of international investment, mainly Chinese investment, in shaping Indonesia’s nickel processing capabilities, while also noting the ethical dilemmas and environmental hazards posed by the industry’s expansion. Indonesia’s strategy to escalate value addition locally is critically assessed, revealing both progress and persistent ethical and environmental challenges.
Strategies are proposed to leverage the myriad resources, influence, and authority of actors along the EV supply chain to spur the growth of a sustainable and responsible supply of Indonesian nickel. The thesis contributes to the discourse on sustainable mineral supply chains by proposing policy recommendations aimed at reconciling economic ambitions with environmental and social imperatives. These recommendations advocate for enhanced governance structures, transparent supply chains, and international collaboration to achieve ethical sourcing practices. The research underscores the need for a balanced approach that not only caters to the economic aspirations of resource-rich nations but also adheres to global sustainability standards.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Impact of the Inflation Reduction Act and Hydrogen Storage in Salt Caverns in the Mid-Atlantic United States</title>
<link href="https://hdl.handle.net/1721.1/156949" rel="alternate"/>
<author>
<name>Armstrong, Les Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/156949</id>
<updated>2024-09-25T03:11:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling the Impact of the Inflation Reduction Act and Hydrogen Storage in Salt Caverns in the Mid-Atlantic United States
Armstrong, Les Gabriel
Hydrogen is widely understood to be critical for decarbonizing hard-to-abate sectors such as heavy industry and long-distance transportation, as well as for balancing a power grid dominated by variable renewable energy.&#13;
In this thesis, we first propose a methodology for evaluating the potential for hydrogen storage in geological salt resources. Our results show that the Michigan and Appalachian Salina basins are promising locations for hydrogen storage in salt caverns. After applying a coarse techno-economic filter, the storage potential of the remaining high-value caverns is 9.7 × 10⁸ metric tons of H2 or 32.4 PWh in Michigan and 1.6 × 10⁷ metric tons of H2 or 0.54 PWh in the Appalachian region.&#13;
We then perform a techno-economic analysis of these salt cavern resources, which we include as hydrogen storage options in Macro, an open-source energy system optimization model that couples the power, hydrogen, and carbon sectors. We then analyze the impact of the Inflation Reduction Act (IRA) and the presence of salt caverns on the United States Mid-Atlantic region in the year 2035. We find that salt caverns do not have a significant impact on the overall coupled energy system dynamics unless we force a 100% decarbonization constraint. We also uncover a perverse behavior induced by the IRA’s hydrogen production tax credit within the model. Further work is required to understand whether this behavior is likely in practice or can be attributed to difficulties modeling real-world interactions and internal frictions between actors in the energy sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of turbine motion on floating offshore wind turbine aerodynamics</title>
<link href="https://hdl.handle.net/1721.1/156948" rel="alternate"/>
<author>
<name>Tignol, Bo Junior</name>
</author>
<id>https://hdl.handle.net/1721.1/156948</id>
<updated>2024-09-25T03:30:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effect of turbine motion on floating off shore wind turbine aerodynamics
Tignol, Bo Junior
The quest to meet renewable energy targets and anticipate future growth in energy consumption has driven the continuous development of wind energy system design and its push for larger and more efficient wind turbines, especially in offshore environments. Floating Offshore Wind Turbines (FOWTs) are a promising alternative for capturing high wind energy potential in more difficult offshore environments that pose challenges to traditional bottom-fixed turbines. Yet the behaviour of FOWTs under the translational and rotational degrees of freedom induced by dynamic floating motion remains poorly understood, and there is considerable inconsistency in the interpretation of FOWT behaviour under floating motion. This thesis aims to evaluate the influence of surge and pitch motions on the aerodynamic behaviour of FOWTs through the interpretation of several modeling approaches and their differences. Various ranges of surge and pitch amplitude and frequency are considered, and two large eddy simulation (LES) approaches, along with a simplified analytical model, are assessed with regard to their predictions of axial induction, induced velocity, power production, and wake velocities. It was found that there is generally close agreement between surging-inflow and surging actuator disk LES simulations, with a difference in time-averaged power production no larger than 1.8% for any of the investigated cases, confirming the hypothesized similarity between these two methods of simulating turbines in kinematic motion. Furthermore, although the simplified analytical model performed well at low-frequency surge motions, it exhibited increasing underprediction of power production with increasing frequency. As for the pitch cases, the model exhibited low error compared to LES simulations across the amplitudes investigated.
Moreover, unlike the variability in the surging data, the pitching LES exhibited less variation across frequencies, suggesting that the analytical model maintains better predictive capability across a diverse range of pitching motions. Looking forward, the results of this study suggest the need for continued in-depth evaluation of additional LES parameters such as the tip-speed ratio and thrust coefficient, along with the development and validation of an analytical model that can capture the observed frequency dependence. Finally, future work should also home in on LES at different combinations of freestream wind, surge, and pitch to explore the potential formation of complex wake states, as well as investigate in-sync and out-of-sync joint pitch-and-surge cases to explore the occurrence of any nonlinear aerodynamic interactions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PCBleed: Fuzzing for CPU Bugs Through Use of Performance Counters</title>
<link href="https://hdl.handle.net/1721.1/156944" rel="alternate"/>
<author>
<name>Muradyan, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/156944</id>
<updated>2024-09-25T03:37:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">PCBleed: Fuzzing for CPU Bugs Through Use of Performance Counters
Muradyan, Natalie
In recent years, the increasing complexity of hardware designs has given rise to a growing array of vulnerabilities and security threats, as exemplified by instances such as Spectre, Microarchitectural Data Sampling, and Zenbleed. The inherent permanence of hardware vulnerabilities poses a significant threat, making early identification crucial for preventing security compromises once a device is manufactured. However, identifying hardware vulnerabilities is challenging due to the large and complex design of current CPUs, resulting in a substantial search space and numerous unknowns. This thesis proposes leveraging software fuzzing methods for hardware testing, focusing on the automated generation of instruction sequences that reveal hardware vulnerabilities. Unlike software fuzzing, hardware fuzzing faces challenges such as a lack of visibility into the microarchitectural processor states and difficulty in directing the search for test case generation. To address these challenges, this research draws inspiration from software fuzzers that use insights into the internal workings of the software for effective test case generation. We propose PCBleed, a coverage-guided mutational hardware fuzzer that enhances CPU fuzzing by using hardware performance counters as insight into the CPU’s behavior to improve test case generation. Since performance counters measure architectural events relevant to CPU performance, they provide insights that we use to estimate coverage, marking instruction sequences as novel. This approach aims to maximize the functionality exercised during hardware fuzzing, ultimately identifying interesting, bug-triggering behavior. Our methodology is distinctive, utilizing performance counters for hardware fuzzing enhancement, and aligns with recent research findings that highlight the versatility of performance counters in debugging, dynamic software profiling, CPU power modeling, malware detection, and cache side-channel attack detection. 
By incorporating performance counters into the hardware testing paradigm, this research seeks to contribute to the proactive fortification of hardware security through insightful analyses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Image Recognition Difficulty in Artificial and Biological Visual Processing</title>
<link href="https://hdl.handle.net/1721.1/156943" rel="alternate"/>
<author>
<name>Cummings, Jesse E.</name>
</author>
<id>https://hdl.handle.net/1721.1/156943</id>
<updated>2024-09-25T03:30:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Characterizing Image Recognition Difficulty in Artificial and Biological Visual Processing
Cummings, Jesse E.
In recent years, computational models trained to do object recognition have become increasingly capable. Models have demonstrated significant improvements and have achieved saturated performance on many standard image classification benchmarks, sparking discussion of whether these models have achieved parity with human object recognition ability and whether the problem can be considered solved. However, these models continue to fail in real-world applications, and in un-human-like ways, creating a disparity between the performance that benchmarks report and the performance that users experience. In this thesis, we investigate why standard datasets are misaligned with real-world performance by exploring image recognition difficulty as defined by human psychophysics. Using behavioral experiments with humans, we identify images that humans struggle to recognize and investigate the prevalence of these images in datasets and their effect on model performance. To shed light on how humans are able to recognize these images, we conduct preliminary neuroimaging analysis to take the first steps toward identifying the neural signature of image difficulty.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Classification of Pharmaceutical and Biotechnology Companies</title>
<link href="https://hdl.handle.net/1721.1/156942" rel="alternate"/>
<author>
<name>Xu, Angelina</name>
</author>
<id>https://hdl.handle.net/1721.1/156942</id>
<updated>2024-09-25T03:33:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Classification of Pharmaceutical and Biotechnology Companies
Xu, Angelina
This study presents a novel approach for classifying biopharmaceutical companies from 2000 to 2023. We use fundamental financial data, 10-K filings, and company drug development data to develop this new classification scheme. Return correlations are used to measure the similarity of companies within a cluster, and our analysis demonstrates that this data-driven scheme improves upon industry standards. Additionally, we evaluate the risk-return characteristics of the clusters developed from this classification scheme as a basis for assessing investment opportunities in these industries.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing an eCommerce Pricing Model Using Rank Centrality</title>
<link href="https://hdl.handle.net/1721.1/156941" rel="alternate"/>
<author>
<name>Tong, Kevin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156941</id>
<updated>2024-09-25T03:17:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing an eCommerce Pricing Model Using Rank Centrality
Tong, Kevin C.
In recent years, eCommerce websites have become a popular alternative to traditional marketplaces, offering customers the convenience of ordering products from home and having them shipped. As a result, competition between sellers on eCommerce websites has intensified, making a pricing strategy necessary to perform well in this marketplace.&#13;
&#13;
This paper attempts to model eCommerce competition between different sellers using the principle of Rank Centrality, and uses neural networks to accurately predict the winning seller on eCommerce websites, such as Amazon, based on factors including pricing, seller rating, and shipping guarantees for each seller. Using this prediction, a pricing strategy is formed to maximize sales volume and profits on these sites. This strategy is then implemented and evaluated as part of a 6-month internship with Spero Goods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Health Centered Drug Policy: An Analysis of Past and Developing Drug Policy</title>
<link href="https://hdl.handle.net/1721.1/156940" rel="alternate"/>
<author>
<name>Lewis, Benjamin B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156940</id>
<updated>2024-09-25T03:26:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Health Centered Drug Policy: An Analysis of Past and Developing Drug Policy
Lewis, Benjamin B.
Drug criminalization has disproportionately impacted communities of color and has insufficiently addressed substance use disorder and its associated risk of death by overdose. Decriminalization has the potential to restore justice to communities decimated by traditional U.S. drug policy and could shift public focus towards medical approaches to treating addiction. However, inertia in drug policy persists, influenced by America’s popular political beliefs about illicit substances. A long-standing narrative in the United States views marijuana as a “gateway drug” that introduces users to harder substances, which then have adverse effects on their health and livelihood. As a result, many argue that policies which decriminalize marijuana exacerbate the problem of drug addiction. Seemingly in line with this argument, overdose-related deaths – largely driven by increases in opioid consumption – have soared in recent years, while an increasing number of states have decriminalized marijuana. Little work, however, has examined the extent to which marijuana legalization has caused an increase in overdose deaths. Here, we address this question. To examine the causal effect of marijuana legalization on overdose deaths, we combine state-year level data on marijuana policy and overdose deaths with state-of-the-art techniques from the field of causal inference, namely Two-Way Fixed Effect Difference-in-Differences analysis with Synthetic Control. We include data from all states that enacted one of five marijuana legalization policies between 2010 and 2020. We estimate the causal effect of each policy separately for each state, and then use meta-analysis to calculate the overall effect of each policy intervention. We find that the passage of medical marijuana legalization laws, the opening of recreational dispensaries, and the implementation of medical marijuana patient ID programs had no significant effect on annual state overdose death rates.
The opening of medical marijuana dispensaries and the passage of recreational marijuana legalization laws also had no significant overall effect on overdose death rates, but the effect of these policies varied significantly across states, with significant increases in some states and significant decreases in others. Overall, these findings contradict the popular claim that marijuana decriminalization leads to increased use of more dangerous drugs (and thus overdose deaths) in most cases, and more generally question the characterization of marijuana as a gateway drug.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Securing the Future: Critical Materials Policies for the US Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156939" rel="alternate"/>
<author>
<name>Concordel, Adrien</name>
</author>
<id>https://hdl.handle.net/1721.1/156939</id>
<updated>2024-09-25T03:41:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Securing the Future: Critical Materials Policies for the US Energy Transition
Concordel, Adrien
As the U.S. pushes forward industrial policies, such as the Inflation Reduction Act (IRA), to support its energy transition and develop domestic green-tech supply chains, it overlooks the crucial need for a sustainable and secure supply of critical materials. This oversight threatens the success of the nation’s sustainable transition due to limited resilience and dependencies on geopolitically, environmentally, and socially sensitive international sourcing, particularly from China. This thesis examines the key considerations for the U.S. to secure a sustainable supply of these materials, hypothesizing that a comprehensive policy framework integrating sustainable practices, domestic production incentives, and international cooperation can effectively reduce risks and externalities. Methods include empirical and case studies that highlight specific challenges, such as permitting delays and dependency on foreign minerals, alongside economic models analyzing the impacts of these dependencies and market dynamics. Industry roundtables provide insights into prospective innovations and recent trends in the industry. Findings indicate significant uncertainty in the market outlook, critical dependence on imports, and significant limitations and inertia in developing new domestic resources. The thesis proposes a policy framework aimed at addressing these deficiencies to support the U.S. in leading the global transition to sustainable technologies. Recommendations focus on enabling increased domestic production through better regulation and innovation, adopting sustainable practices, and diversifying supply chains to enhance resilience. This framework is crucial for policymakers, industry stakeholders, and academics involved in shaping a resilient U.S. energy strategy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structure Function Relation of Porous 2D Material via SGCMC Simulation and Statistical Models</title>
<link href="https://hdl.handle.net/1721.1/156938" rel="alternate"/>
<author>
<name>Wanichkul, Athikom</name>
</author>
<id>https://hdl.handle.net/1721.1/156938</id>
<updated>2024-09-25T03:02:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Structure Function Relation of Porous 2D Material via SGCMC Simulation and Statistical Models
Wanichkul, Athikom
To improve design for structural resilience and reduced environmental impact, we need to make the structure-function relation of concrete more accurate, accessible, and cost-effective. First, we formulate and implement the Semi-Grand Canonical Monte Carlo (SGCMC) simulation for fracture mechanics, a stochastic method capable of capturing both the initiation and the propagation of fractures in a medium. We then optimize the performance of our SGCMC simulation to reduce its time complexity from O(n²·³⁸) to O(n¹·²⁴) and its space complexity from O(n²) to O(n). The key step in performance optimization is exploiting the sparsity of the stiffness matrix. We also deploy our code to run multiple simulations concurrently on a supercomputing infrastructure to achieve scalability. Then, we pursue an even more accessible and cost-effective structure-function relation by applying statistical modeling to predict the strength of a two-dimensional porous material without running the simulation. We generate samples by randomly placing circular pores with radii drawn from a log-normal distribution until we reach the target porosity, and run our SGCMC simulations on the generated samples to create a data set to train our statistical models. We define several parameters, including the two-point correlation function, the multi-scale disorder index, the distribution of pore radii as recovered by the Circle Hough Transform (CHT), and the area moments of the pores, to parameterize the porous geometry of the samples beyond the porosity, which is a well-known and very important parameter. We find our best model to be a Gradient Boosting Decision Trees (GBDT) regression model, whose out-of-sample R² is 0.904, as opposed to the baseline model of linear regression on the porosity, whose out-of-sample R² is 0.752.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Indexing Efficiency for Approximate Nearest Neighbor Search in High-dimensional Vector Databases</title>
<link href="https://hdl.handle.net/1721.1/156935" rel="alternate"/>
<author>
<name>Qin, Yuting</name>
</author>
<id>https://hdl.handle.net/1721.1/156935</id>
<updated>2024-09-25T03:03:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding Indexing Efficiency for Approximate Nearest Neighbor Search in High-dimensional Vector Databases
Qin, Yuting
Deep learning has transformed almost all types of data (e.g., images, videos, documents) into high-dimensional vectors, which in turn form vector databases as the data engines of various applications. As a result, queries on vector databases have become the cornerstone of many important online services, including search, eCommerce, and recommendation systems. In a vector database, the major operation is to search for the &#119896; closest vectors to a given query vector, known as &#119896;-Nearest-Neighbor (&#119896;-NN) search. Due to the massive data scale in practice, Approximate Nearest-Neighbor (ANN) search, which builds a search index offline to accelerate search online, is often used instead. One of the most promising ANN indexing approaches is the graph-based approach, which first constructs a proximity graph on the dataset, connecting pairs of vectors that are close to each other, then traverses the proximity graph for each query to find the closest vectors to the query vector. The search performance, in terms of the scope of traversal that leads to convergence, is highly dependent on the quality of the graph. There is much prior work on improving graph quality with various heuristics. However, no analysis or modeling work has been done to quantitatively evaluate the heuristics and their impact on performance. Hence, it is unclear how to pick or combine the right heuristics to build a high-quality graph. This thesis aims to establish this connection to fill the gap. The key challenge in quantifying the heuristics is the complex tradeoff between search accuracy and search speed, which makes it almost impossible to establish an analytical model. To this end, we propose to leverage machine learning as the modeling tool. We first build a unified framework to characterize various graph-building heuristics by decoupling the graph construction and search phases. 
We then extract graph attributes (e.g., diameter), and collect ground-truth performance data (e.g., search speed and accuracy) within our framework, across multiple datasets and graph configurations. Based on the collected data, we train a linear regression model to predict the search performance. We show experimental results on our model performance, and also discuss the implications on selecting heuristics that improve the quality of the indexing graphs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming the Expressivity-Efficiency Tradeoff in Program Induction</title>
<link href="https://hdl.handle.net/1721.1/156932" rel="alternate"/>
<author>
<name>Acquaviva, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/156932</id>
<updated>2024-09-25T04:03:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Overcoming the Expressivity-Efficiency Tradeoff in Program Induction
Acquaviva, Samuel
People are incredibly flexible and efficient inductive reasoners. On the other hand, current approaches in program synthesis show strong domain-specific performance, but are both less sample-efficient and less flexible. Large language models improve upon this sample-efficiency and domain-generality, but lack robustness and still fall far short of people and traditional approaches on difficult induction tasks. In this thesis, we propose two hypotheses for how people seemingly overcome this trade-off between flexibility and efficiency. In the first, we propose that people may operate over an incredibly vast language which is made tractable via a strong, bottom-up proposal model. In the second, we propose that, alternatively, people may relax the necessity of such a strong proposal model by learning task-specific reasoning languages through experience. We build models operationalizing both hypotheses and show that they can improve the generality and efficiency of previous models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An initial design procedure for the motion analysis of flexible marine risers</title>
<link href="https://hdl.handle.net/1721.1/156858" rel="alternate"/>
<author>
<name>Jones, Hobart Todd.</name>
</author>
<id>https://hdl.handle.net/1721.1/156858</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">An initial design procedure for the motion analysis of flexible marine risers
Jones, Hobart Todd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1987; Bibliography: leaves 262-264.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic stability testing with a wind tunnel magnetic model suspension system</title>
<link href="https://hdl.handle.net/1721.1/156852" rel="alternate"/>
<author>
<name>Tilton, Edward Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/156852</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Dynamic stability testing with a wind tunnel magnetic model suspension system
Tilton, Edward Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooperative research--improving university-industry joint efforts</title>
<link href="https://hdl.handle.net/1721.1/156850" rel="alternate"/>
<author>
<name>Jones, Ruth J.
            (Ruth Jiling)</name>
</author>
<id>https://hdl.handle.net/1721.1/156850</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Cooperative research--improving university-industry joint efforts
Jones, Ruth J.
            (Ruth Jiling)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Bibliography: leaves 69-70.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal evaluation of selected ablative materials in transient, low-heat flux environments</title>
<link href="https://hdl.handle.net/1721.1/156849" rel="alternate"/>
<author>
<name>Marques, Joseph Peter.</name>
</author>
<id>https://hdl.handle.net/1721.1/156849</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Thermal evaluation of selected ablative materials in transient, low-heat flux environments
Marques, Joseph Peter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1983; "CSDL-T-809."; Includes bibliographical references.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An examination of surgical scheduling policies.</title>
<link href="https://hdl.handle.net/1721.1/156846" rel="alternate"/>
<author>
<name>Hill, Claire Louise.</name>
</author>
<id>https://hdl.handle.net/1721.1/156846</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">An examination of surgical scheduling policies.
Hill, Claire Louise.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gradability in Count Nouns: Categorizing and Counting Part and Whole Objects in Children and Adults</title>
<link href="https://hdl.handle.net/1721.1/156840" rel="alternate"/>
<author>
<name>Sanchez, Karissa</name>
</author>
<id>https://hdl.handle.net/1721.1/156840</id>
<updated>2024-09-17T03:06:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Gradability in Count Nouns: Categorizing and Counting Part and Whole Objects in Children and Adults
Sanchez, Karissa
Some of the first words that children can comprehend and produce are nouns like ball and fork. Despite this apparent early command, children deviate from adult-like behavior when categorizing and quantifying objects falling under noun descriptions. Even beyond four years of age, when they are asked to count the Xs given a set of objects that includes whole objects that fall under the noun and detached parts of objects that do, they have a tendency to count the individual partial objects as if they were wholes. Prior accounts attribute this difference to either a child's nascent numerical and quantificational abilities or to their semantic and pragmatic understanding of nominal label usage. These accounts are informed by experiments which varyingly probe categorization, counting, and quantification. However, no account can fully explain the data across all experiments, making it difficult to adjudicate between them. In this thesis, I propose a new approach to analyzing the deviation in child and adult behavior by considering how both nominal and quantificational abilities could influence it. We design a novel paradigm that examines the same children’s categorization of partial objects under noun labels and their numerical judgements about the items they had just categorized. This paradigm allows us to pinpoint where the cause of the deviation in child-like and adult-like behavior lies. Is it due to a difference in understanding nominal usage, their ability to quantify items, or both? Ultimately, we find evidence that both nominal usage and quantificational abilities could be contributing to the deviation in behavior. We also suggest that in addition to an overly flexible standard of application for count nouns, children's lack of granularity in numerical measurements could be causing them to count partial objects as wholes.
For instance, children might be less adept than adults at accessing measurements between 0 and 1 such as half an X, causing them to count partial objects under a noun label as one such object.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithmic Design and Optimization for Quantum Computation with a Qubit-Oscillator System</title>
<link href="https://hdl.handle.net/1721.1/156839" rel="alternate"/>
<author>
<name>Mintzer, Gabriel L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156839</id>
<updated>2024-09-17T03:19:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Algorithmic Design and Optimization for Quantum Computation with a Qubit-Oscillator System
Mintzer, Gabriel L.
Quantum computation has long been dominated by a digital approach using the qubit, which exists in a two-dimensional vector space, as its basic unit.  More recently, there has been increasing interest in an analog approach, which uses as its basic unit a qudit in an infinite-dimensional vector space.  Alongside these two approaches is a third less-studied approach, that of combining digital and analog quantum computation.  This approach is perhaps best exemplified by, and most researched via, the system of a qubit coupled to a quantum harmonic oscillator, which has been realized with many of the leading platforms for quantum computation.  In this thesis, we ask how machine learning and other high-level computational techniques can be employed in the design of applications of a qubit-oscillator system to implementing fundamental components of quantum technology.  In order to begin to answer this question and lay the groundwork for future investigation, both with this system and with others, we demonstrate the application of such high-level computational techniques toward addressing the problems of quantum compilation, quantum sensing, and quantum error-correction with the qubit-oscillator system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidating Targetable Genetic Vulnerabilities in Relapsed/Refractory Diffuse Large B-cell Lymphoma</title>
<link href="https://hdl.handle.net/1721.1/156838" rel="alternate"/>
<author>
<name>Li, Audrey</name>
</author>
<id>https://hdl.handle.net/1721.1/156838</id>
<updated>2024-09-17T03:45:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Elucidating Targetable Genetic Vulnerabilities in Relapsed/Refractory Diffuse Large B-cell Lymphoma
Li, Audrey
Diffuse large B-cell lymphoma (DLBCL), the most prevalent form of non-Hodgkin lymphoma, is marked by significant heterogeneity in its morphology, genetic irregularities, and clinical behavior. Current prognostic tools, including the International Prognostic Index and cell-of-origin transcriptional classifications such as germinal center B-cell-like and activated B-cell-like, do not adequately reflect DLBCL’s complex nature. Front-line standard-of-care treatment predominantly consists of a regimen of cyclophosphamide, doxorubicin, prednisone, rituximab, and vincristine (R-CHOP); however, the relapse rate remains high, underscoring the need for improved diagnostic and therapeutic methods. In this comprehensive analysis, we investigated the genetic substructure of DLBCL in both newly diagnosed and relapsed/refractory cases, focusing on genetic abnormalities pertinent to relapsed settings and the immune microenvironment’s influence on therapy response. Our findings revealed significant enrichment of specific genetic clusters, notably clusters 2 and 5, which are associated with an inferior prognosis and high relapse rates following R-CHOP therapy. These clusters were characterized by distinct genetic alterations, including prevalent mutations in TP53, BCL2, and MYD88. The results of this study suggest that integrating detailed genetic profiling into the clinical management of DLBCL could significantly refine therapeutic approaches, tailoring them to the unique genetic backdrop of each patient’s disease. This approach promises to enhance the precision of prognostic assessments and the efficacy of subsequent therapeutic interventions, paving the way for personalized medicine in the treatment of DLBCL.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Causal Inference and Attribute Prediction Through Visual Information</title>
<link href="https://hdl.handle.net/1721.1/156837" rel="alternate"/>
<author>
<name>Chau, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/156837</id>
<updated>2024-09-17T03:18:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving Causal Inference and Attribute Prediction Through Visual Information
Chau, Eileen
Causal inference is an active area of research in computer science and statistics, as it is used to draw causal conclusions that traditional statistics cannot. A naive way to conclude the cause of an outcome is by using correlations, but this is not always accurate because there may be other variables that indirectly affect the outcome. Causal inference aims to find the root cause by accounting for those variables, called confounders. Frequently, confounding variables are attributes in existing data, but sometimes they are missing from it. In those cases, data analysts have to look for confounders in outside sources such as tables, knowledge graphs, and text. Our focus is to look for confounding variables in visual data such as videos and images. Discovering confounders from visual data is a challenge because videos and images are unstructured, unlike tables and graphs. Thus, it is difficult to identify features and also extract them from visual data. Additionally, the identified and extracted features must be relevant to the causal question being studied. With the recent advancement of visual language models (VLMs) such as GPT-4V(ision), VLMs can provide a versatile solution to the confounder discovery and feature extraction problem when using visual data. This thesis proposal investigates confounder discovery, feature extraction, and causal inference from visual data by utilizing the power of VLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Distributed Transaction Processing System Using DARQ</title>
<link href="https://hdl.handle.net/1721.1/156836" rel="alternate"/>
<author>
<name>Zhu, Ophelia Min</name>
</author>
<id>https://hdl.handle.net/1721.1/156836</id>
<updated>2024-09-17T03:41:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building a Distributed Transaction Processing System Using DARQ
Zhu, Ophelia Min
Building distributed transaction processing systems in the context of cloud microservices poses challenges related to fault tolerance, resilience, and composability. Composable Resilient Steps (CReSt) and its implementation, Deduplicated Asynchronously Recoverable Queues (DARQ), provide an abstraction to address these challenges by separating application logic from resilience mechanisms. This thesis explores the performance and usability of DARQ through the development of a distributed transaction processing system. DARQ is evaluated by its performance on the YCSB and TPCC benchmarks and by the ease of programming with it. The abstraction of CReSt and DARQ, while requiring additional setup, simplifies the programming of fault-tolerant applications and provides performance optimizations out of the box compared to a standard baseline implementation, enabling a 6.89x speedup for TPCC. The abstraction reduced the amount of logic needed in components that required persistence, namely the write-ahead log and two-phase commit protocol. As complex systems compose on one another, DARQ can be a useful abstraction for developers to simplify their application logic whilst providing fault-tolerance and performance optimizations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Optimized Architected Reef Design in Random Oscillatory Motion for Maximized Wave Energy Dissipation and Coastal Preservation</title>
<link href="https://hdl.handle.net/1721.1/156835" rel="alternate"/>
<author>
<name>Sinha, Anjali</name>
</author>
<id>https://hdl.handle.net/1721.1/156835</id>
<updated>2024-09-17T03:58:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Optimized Architected Reef Design in Random Oscillatory Motion for Maximized Wave Energy Dissipation and Coastal Preservation
Sinha, Anjali
The mitigation of exacerbated coastal erosion and reef degradation warrants thorough examination and enhancement of existing coastal defense strategies. Severe threats to ecosystems, communities, and infrastructure from climate change, including rising sea levels and intensified weather events, necessitate the development of new technologies for protection and damage prevention. The focus of this research is to inform optimization efforts for the design of an architected reef structure aimed at maximizing wave energy dissipation when placed under various real-world environmental conditions. By testing reef structures in sea storm conditions with random oscillatory motion, this study aims to assess the effectiveness of the architected reefs in mitigating the adverse effects of wave energy. Validating the performance of reef structures in random wave motion, as compared to regular, sinusoidal motion, will improve testing efficiency, advancing the development of sustainable and resilient solutions for future coastal preservation efforts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System to Exploit Symmetry in Common Tensor Kernels</title>
<link href="https://hdl.handle.net/1721.1/156832" rel="alternate"/>
<author>
<name>Patel, Radha</name>
</author>
<id>https://hdl.handle.net/1721.1/156832</id>
<updated>2024-09-17T03:10:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A System to Exploit Symmetry in Common Tensor Kernels
Patel, Radha
Symmetric tensors arise naturally in many domains including linear algebra, statistics, physics, chemistry, and graph theory. Symmetry arises through both mathematical properties and scientific phenomena. Taking advantage of symmetry in matrices saves a factor of two, but taking advantage of symmetry in a tensor of order n can save a factor of n! in memory accesses and operations. However, implementing this symmetry by hand significantly increases complexity; for instance, leveraging symmetry in 2D BLAS nearly doubles the implementation burden, and this burden escalates further in the case of higher-dimensional tensors. Existing compilers for these kernels either do not take advantage of symmetry or do not exploit it to the extent possible. My thesis identifies and categorizes methods to exploit symmetry in common and uncommon tensor kernels. We present a methodology to systematically generate and optimize symmetric code, along with a compiler in Julia that automates this process. Our symmetric implementation demonstrates significant speedups ranging from 1.36x for SSYMV to 7.95x for a 4-dimensional MTTKRP over the naive implementation of these kernels.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing SCRAM: Privacy-Centric Approaches in Cyber Risk Measurement</title>
<link href="https://hdl.handle.net/1721.1/156831" rel="alternate"/>
<author>
<name>Magrefty, David S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156831</id>
<updated>2024-09-17T03:29:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Advancing SCRAM: Privacy-Centric Approaches in Cyber Risk Measurement
Magrefty, David S.
The Secure Cyber Risk Aggregation and Measurement (SCRAM) framework allows multiple parties to compute aggregate cyber-risk measurements without publicly disclosing any information about their identity or their personal data. The framework, through the use of Multi-Party Computation (MPC) and Homomorphic Encryption (HE), guarantees each party that their participation in the computation is confidential and that the aggregated results will not be decrypted without their authorization [1]. However, the system provides no guarantee about what the output of the aggregated computations reveals about their identity, their security posture, and their losses.&#13;
&#13;
In this work, we tackle the challenging problem of preserving privacy in small datasets while maximizing utility, a critical issue in the context of the SCRAM framework. We first construct a linear programming problem that demonstrates how the aggregate outputs of SCRAM do not provide adequate privacy, revealing sensitive information about individual parties. Then, we establish new privacy guarantees for the framework based on the concepts of Predicate Singling Out (PSO) and Differential Privacy (DP). These guarantees aim to protect the identity and data of the participating parties while still allowing for meaningful aggregate measurements.&#13;
&#13;
We then demonstrate the inadequacy of existing privacy solutions for small datasets and propose two novel techniques specifically designed for small datasets: integer-binary randomized response and clustering-based output perturbation. The integer-binary randomized response transforms integer inputs into binary questions, enabling the application of randomized response techniques while minimizing the impact on data utility. The clustering-based approach aggregates similar values into clusters and reports summary statistics, effectively obfuscating individual data points while preserving the overall distribution and relative magnitudes. These techniques offer a balance between privacy and utility, demonstrating the feasibility of privacy-preserving computation on small datasets.&#13;
&#13;
Our work highlights the limitations of existing privacy solutions for small datasets and the necessity of developing specialized techniques to address this challenge. The proposed methods not only enhance the privacy guarantees of the SCRAM framework but also contribute to the broader field of privacy-preserving computation, providing a foundation for future research and applications involving sensitive data aggregation and analysis in small dataset scenarios.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Emotion Vectorization Algorithm (EVA): Automated Music&#13;
Generation from Imaging and Emotion Inputs</title>
<link href="https://hdl.handle.net/1721.1/156829" rel="alternate"/>
<author>
<name>Liu, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/156829</id>
<updated>2024-09-17T03:53:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Emotion Vectorization Algorithm (EVA): Automated Music&#13;
Generation from Imaging and Emotion Inputs
Liu, Dylan
Generative AI tools for the creative arts have become increasingly popular over the past few years. Several well-known models, such as ChatGPT and DALL-E, can even produce writing and artwork comparable to those created by human professionals. Thus, it is no surprise that many technology firms, such as OpenAI and Google, have trained models that can create music as well. These state-of-the-art models usually take in an artist or genre, and they output a song corresponding to the received inputs. However, none of these models are designed to generate music according to an emotional input, nor are they able to generate their own styles of music (i.e., they are all trained on well-known works).&#13;
&#13;
Because music is designed to target and evoke specific feelings within the listener, we aim to produce a tool that accounts for this emotional aspect. To this end we create EVA, a new type of generative music model. EVA is the first model that takes in a quantitative representation of an emotion as input and returns an instrumentalized musical performance that evokes that emotion as output. Furthermore, without relying on the past works of well-known composers for training data, EVA produces a unique style of music that is dissimilar to any particular artist.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Engineering of Modular Symbols</title>
<link href="https://hdl.handle.net/1721.1/156828" rel="alternate"/>
<author>
<name>Boonsiriseth, Krit</name>
</author>
<id>https://hdl.handle.net/1721.1/156828</id>
<updated>2024-09-17T03:08:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Performance Engineering of Modular Symbols
Boonsiriseth, Krit
We present a new program, MFSplit, which computes information about newform subspaces for modular forms of weight 2 and trivial character. Modular forms are certain functions in mathematics that appear in many different subfields, including number theory and complex analysis; newform subspaces are spaces spanned by a special type of modular form and are, in some sense, building blocks of spaces of modular forms. Our program MFSplit is based on modular symbols, a formalism commonly used to compute modular forms. Existing computer algebra systems such as Sage and Magma include implementations of modular symbols. Our implementation applies the principles of performance engineering to this computational number theory problem, and MFSplit is at least 3 times faster than existing implementations. Consequently, we were able to compute information about newform subspaces for levels N ≤ 50000, extending previous efforts that computed this information up to N ≤ 16000. Based on this computation, we analyze the performance characteristics of our program and generate more data related to certain conjectures in mathematics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer Vision Techniques for Drill Bit Identification and Mechanical Wear Detection</title>
<link href="https://hdl.handle.net/1721.1/156827" rel="alternate"/>
<author>
<name>Darby, Brady J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156827</id>
<updated>2024-09-17T03:50:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Computer Vision Techniques for Drill Bit Identification and Mechanical Wear Detection
Darby, Brady J.
Developments in computer vision techniques over the past decade have rapidly accumulated, enabling the application of vision systems to use cases that were once out of reach. In conjunction with standard image processing techniques, deep learning models for vision tasks have received increasing attention, and both see considerable utility in space exploration. Specifically, real-time obstacle detection and motion planning require advanced vision logic. However, retroactive data analysis is an area with less emphasis but promising applications for computer vision. This thesis project explores how both image processing and deep learning-based computer vision methods can be leveraged to analyze drill bits on board the Mars 2020 Perseverance Rover, a Jet Propulsion Laboratory (JPL) mission. The effectiveness of thresholding and segmentation on two critical tasks, drill bit identification and mechanical wear detection, is demonstrated. Then, transfer learning of convolutional neural networks (CNNs) is applied to the same tasks, allowing comparison of results. This thesis also explores a means of presenting processed image outputs to non-technical operators in order to assist manual analysis of drill bit wear state.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expectation-based comprehension of linguistic input: facilitation from visual context</title>
<link href="https://hdl.handle.net/1721.1/156826" rel="alternate"/>
<author>
<name>Pushpita, Subha Nawer</name>
</author>
<id>https://hdl.handle.net/1721.1/156826</id>
<updated>2024-09-17T03:03:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Expectation-based comprehension of linguistic input: facilitation from visual context
Pushpita, Subha Nawer
Context fundamentally shapes real-time human language processing, creating linguistic expectations that drive efficient processing and accurate disambiguation (Kuperberg and Jaeger, 2016). In naturalistic language understanding, the visual scene often provides crucial context (Ferreira et al., 2013; Huettig et al., 2011). We know that visual context guides spoken word recognition (Allopenna et al., 1998), syntactic disambiguation (Tanenhaus et al., 1995), and prediction (Altmann and Kamide, 1999), but much about how visual context shapes real-time language comprehension remains unknown. In this project, we investigate how visual information penetrates the language processing system and real-time language understanding. Here we show that relevant visual context significantly facilitates reading comprehension, with the amount of facilitation modulated by a word’s degree of grounding in that visual context (an image, in our case). Our results also demonstrate that the facilitation is largely mediated by the effect of multimodal surprisal (the relative entropy induced by the word between the distributions over interpretations of the previous words in the sentence and the image). We also found that the errors that people are prone to make in reading comprehension tasks can be largely predicted by the amount of multimodal surprisal. The results also highlight the strong correlation between a word’s degree of grounding and the reduction in surprisal when an image is present. Our work offers new possibilities for how multimodal large language models may be used in psycholinguistic research to investigate how visual context affects language processing.
This work will also pioneer questions about how information processed in different modalities such as audio, video, or structured visuals like graphs and diagrams shape our upcoming linguistic comprehension or even language generation, providing fundamental theoretical insights into the understanding of the way we use language to navigate in a complex world.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Further Hardness Results for Stephen’s Sausage Roll</title>
<link href="https://hdl.handle.net/1721.1/156825" rel="alternate"/>
<author>
<name>Liu, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/156825</id>
<updated>2024-09-17T04:00:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Further Hardness Results for Stephen’s Sausage Roll
Liu, Jason
Stephen’s Sausage Roll is a relatively unstudied puzzle game with a fascinating set of mechanics for computational hardness problems. The only past results are from a class project in MIT’s 6.5440 class of Fall 2023, which dealt only with two specific subsets of the mechanics restricted to two-dimensional forms of the game [1]. This project presents a more complete characterization of problems based on Stephen’s Sausage Roll, and provides solutions for a significant portion. In particular, both variants of Stephen’s Sausage Roll considered in prior work can be solved by one of these results.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmenting Inputs using a Novel Figure-to-Text Pipeline to Assist Visual Language Models in Answering Scientific Domain Queries</title>
<link href="https://hdl.handle.net/1721.1/156824" rel="alternate"/>
<author>
<name>Gupta, Sejal</name>
</author>
<id>https://hdl.handle.net/1721.1/156824</id>
<updated>2024-09-17T03:42:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Augmenting Inputs using a Novel Figure-to-Text Pipeline to Assist Visual Language Models in Answering Scientific Domain Queries
Gupta, Sejal
Recent advancements in visual language models (VLMs) have transformed the way we interpret and interact with digital imagery, bridging the gap between visual and textual data. However, these models, like Bard, GPT4-v, and LLaVA, often struggle with specialized fields, particularly when processing scientific imagery such as plots and graphs in scientific literature.

In this thesis, we discuss the development of a pioneering reconstruction pipeline to extract metadata, regenerate plot data, and filter out extraneous noise such as legends from plot images. Ultimately, the collected information is presented to the VLM in a structured, textual manner to assist in answering domain-specific queries. The efficacy of this pipeline is evaluated using a novel dataset composed of scientific plots extracted from battery domain literature, alongside existing benchmark datasets including PlotQA and ChartQA. Results on component accuracy, task accuracy, and question-answering with augmented inputs to a VLM show promise for the future capabilities of this work.

By assisting VLMs with scientific imagery, we aim not only to enhance the capabilities of VLMs in specialized scientific areas but also to transform the performance of VLMs in domain-specific areas as a whole. This thesis provides a detailed overview of the work, encompassing a literature review, methodology, results, and recommendations for future work.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adapting Transformer Encoder Architecture for Continuous Weather Datasets with Applications in Agriculture, Epidemiology and Climate Science</title>
<link href="https://hdl.handle.net/1721.1/156822" rel="alternate"/>
<author>
<name>Hasan, Adib</name>
</author>
<id>https://hdl.handle.net/1721.1/156822</id>
<updated>2024-09-17T03:17:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adapting Transformer Encoder Architecture for Continuous Weather Datasets with Applications in Agriculture, Epidemiology and Climate Science
Hasan, Adib
This work introduces WeatherFormer, a transformer encoder-based model designed to robustly represent weather data from minimal observations. It addresses the challenge of modeling complex weather dynamics from small datasets, which is a bottleneck for many prediction tasks in agriculture, epidemiology, and climate science. Leveraging a novel pretraining dataset composed of 39 years of satellite measurements across the Americas, WeatherFormer achieves state-of-the-art performance in crop yield prediction and influenza forecasting. Technical innovations include a unique spatiotemporal encoding that captures geographical, annual, and seasonal variations, input scalers to adapt the transformer architecture to continuous weather data, and a pretraining strategy to learn representations robust to missing weather features. This thesis demonstrates for the first time the effectiveness of pretraining large transformer encoder models for weather-dependent applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benthic: Designing Relational Traversal Structures to Enhance Diagram Accessibility</title>
<link href="https://hdl.handle.net/1721.1/156821" rel="alternate"/>
<author>
<name>Mei, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/156821</id>
<updated>2024-09-17T03:52:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Benthic: Designing Relational Traversal Structures to Enhance Diagram Accessibility
Mei, Catherine
Diagrams are data structures for problem-solving and communication because they allow users to formalize and analyze complex concepts through spatial relations. However, their visual nature presents significant accessibility challenges for blind and low-vision users who rely on screen readers. Existing methods for making diagrams accessible often fall short, providing only superficial overviews and lacking detailed, navigable structures. This paper introduces Benthic, a system for generating intermediate representations and depicting relational information in diagrams. Benthic provides an interface that allows screen reader users to navigate the diagram data structure. Benthic uses a hypergraph traversal structure, where diagram nodes are grouped by hyperedges that represent diagram relations. These relations are presented in the screen reader interface according to their priority (or visual salience), allowing screen reader users to traverse the information similarly to how sighted users might view the diagram. Additionally, users can explore diagrams at various levels of detail by choosing to navigate high-level relations or more detailed relations based on their needs. We evaluate Benthic’s effectiveness through three comparative case studies with existing diagram accessibility systems. Benthic aims to create a design space of traversal structures that will allow blind and low-vision users to leverage the same affordances available to sighted users, enabling intuitive interaction and comprehensive understanding of diagrams.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Soft-Rigid Robots: Investigating Series and Parallel Configurations</title>
<link href="https://hdl.handle.net/1721.1/156820" rel="alternate"/>
<author>
<name>Sologuren, Emily R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156820</id>
<updated>2024-09-17T03:51:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hybrid Soft-Rigid Robots: Investigating Series and Parallel Configurations
Sologuren, Emily R.
The diverse set of traits that soft-rigid robots possess has the potential to be applied to a multitude of applications that require both strength and flexibility. This thesis looks at two kinds of soft-rigid robotic systems: the first is a series assembly of soft-rigid modules with stiffness modulation to form a soft-rigid robotic arm, and the second is a parallel assembly of rigid bones cast into silicone to form a passive soft-rigid flipper for a robotic sea turtle. We first introduce a new class of soft-rigid modules that can modulate their stiffness on a continuum through tendon-driven actuation and the integration of "soft" and "rigid" components. Their serial assembly forms a self-standing, soft-rigid robotic arm (SRRA). When coupled with an adapted soft PD+ controller, we generate trajectories that demonstrate the manipulator’s ability to deform for maneuvering tasks and stiffen for load-bearing tasks. The robotic sea turtle’s parallel, soft-rigid flippers emulate those of its animal counterpart. To leverage this structure for underwater locomotion, we look at a CPG-coupled reinforcement learning framework to optimize for a forward swimming gait.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Trade Off Performance and Safety in Mixed Autonomy Traffic</title>
<link href="https://hdl.handle.net/1721.1/156819" rel="alternate"/>
<author>
<name>Ding, Jessica H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156819</id>
<updated>2024-09-17T03:02:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning to Trade Off Performance and Safety in Mixed Autonomy Traffic
Ding, Jessica H.
With the advent of autonomous vehicles (AVs), and with the slow but steady consumer adoption of AVs on road networks, there is a newfound need to study the interactions between efficient traffic flow and driving safety in mixed autonomy traffic. Extending reinforcement learning methods from robotic control and learning methods for location-based actuators such as traffic lights, this thesis considers control strategies afforded by individual AVs, which have recently seen potential for direct optimization of singular system objectives, such as traffic smoothing and emission reduction, and introduces a reinforcement learning-based methodological framework to facilitate a study of the trade-offs between performance and safety at a fleet level. This investigation automatically produces Pareto frontier curves for four diverse traffic scenarios based on established mixed traffic benchmarks. The results of this study will inform decision-makers regarding inherent trade-offs in traffic control systems, and this framework can be extended to study arbitrary objectives in complex control systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modulated Frequency Multiplier Inverter</title>
<link href="https://hdl.handle.net/1721.1/156818" rel="alternate"/>
<author>
<name>Coston, Sarah M.</name>
</author>
<id>https://hdl.handle.net/1721.1/156818</id>
<updated>2024-09-17T03:38:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modulated Frequency Multiplier Inverter
Coston, Sarah M.
Many industrial applications such as plasma generation and wireless power transfer require high frequency power inverters (or rf power amplifiers) that are able to output a wide power range despite highly variable load reactances, while also maintaining high efficiency. Previous approaches to this problem, such as switched-mode inverters combined with tunable matching networks, provide adequate, albeit bulky, costly, and complex solutions at lower HF frequencies, while at higher frequencies inefficient linear amplifiers dominate. This thesis introduces an efficient inverter (or switched-mode power amplifier) approach that can provide efficient wide-power-range control into a variable load, while being scalable to increased output frequencies compared to conventional designs. We introduce a wide-range power amplifier that uses frequency control to manage reactive load variations, phase modulation to control output power, and frequency multiplication to achieve high output frequency, all while maintaining soft switching. This thesis provides a preliminary development of this modulated frequency multiplier inverter, analyzing and demonstrating its functionality and effectiveness through simulation, showing its ability to achieve high output frequencies, manage wide load reactances, control power over a wide range, and maintain high efficiency.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recognizing Brain Regions in 2D Images from Brain Tissue</title>
<link href="https://hdl.handle.net/1721.1/156817" rel="alternate"/>
<author>
<name>Lohawala, Sabeen Imtiyaz</name>
</author>
<id>https://hdl.handle.net/1721.1/156817</id>
<updated>2024-09-17T03:45:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Recognizing Brain Regions in 2D Images from Brain Tissue
Lohawala, Sabeen Imtiyaz
Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. Structural MRI provides a clear, high-resolution visualization of the anatomy of the brain, capturing physical characteristics like the size and shape of different regions of the brain or the presence of abnormalities such as tumors. Whereas sMRI scans are more commonly acquired in vivo, the neuropathology of many neurodegenerative disorders, like Alzheimer’s, requires analysis of the brain post-mortem through techniques like brain dissection, necessitating the use of other imaging modalities. Various tools and deep learning models have been developed to automatically identify different anatomical structures in 3D MRI volumes. However, the only method that exists to segment the anatomical structures in 2D brain slices, whether they be 2D slices extracted from an MRI or photographs of slices from a physically dissected brain, is manual labeling by a trained neuroanatomist, which is costly, resource-intensive, and time-consuming. In this project, we develop a new deep learning method to automatically segment 50 different regions in 2D photographs of the brain. Because a supervised image and segmentation map dataset does not exist for the photographs, we train the state-of-the-art SegFormer model on a supervised dataset of 2D MRI slices. We employ multiple data augmentation techniques to increase the variability of the training data to more closely resemble the variability seen in brain photographs, so that the model is robust enough to segment the anatomical regions in brain photographs. In this project, the SegFormer model achieved test Dice scores between 0.6-0.75 on the segmentation of 50 different anatomical regions in 2D MRI slices, depending on which augmentations were incorporated during training.
Additionally, the project demonstrated that incorporating complex augmentations that forced the model to learn the segmentation task with reduced contextual information as well as those that decoupled the tissue and background by manipulating them independently helped improve the robustness of the model, allowing it to better segment 2D photographs of the brain. Although there is much room for improvement, this project provides a set of techniques that can be extended to further improve the model’s robustness so that it can be applied to other imaging modalities as well in the future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Genomic Language Models for Protein Function and Property Prediction</title>
<link href="https://hdl.handle.net/1721.1/156816" rel="alternate"/>
<author>
<name>Boshar, Sam T.</name>
</author>
<id>https://hdl.handle.net/1721.1/156816</id>
<updated>2024-09-17T03:48:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Genomic Language Models for Protein Function and Property Prediction
Boshar, Sam T.
In the field of natural language processing (NLP), large language models (LLMs) trained on enormous corpora of unlabeled sequence data have demonstrated state-of-the-art performance on a variety of downstream tasks. This approach is appealing because one model can be easily adapted to do well in many modalities, rather than requiring many specialized models. This same architecture has found great success modeling biological data, including protein, mRNA and genomic sequences. Representations from biological language models have also outperformed highly specialized models, especially in data-scarce scenarios. However, since the genome contains all of the information encoding proteins, genomic language models (gLMs) have the potential to model DNA, RNA and proteins. In spite of this, the performance of gLMs on proteins is largely unknown due to the lack of datasets pairing proteins with their true coding sequences. In this work, we curate five such coding sequence datasets and use them to study gLM and protein language model (pLM) performance on protein function and property prediction. We show that gLMs are competitive with and even outperform their pLM counterparts on some tasks, and that they perform best using the curated true coding sequences over alternative codon sampling strategies. We perform a series of experiments to find interpretable explanations for gLM performance, and investigate architecture changes to address their shortcomings and improve the ability of gLMs to represent proteins. We found that a joint genomic-proteomic architecture outperforms each individual approach, showing that they capture different, but complementary sequence representations. We identify examples of such distinct representations in a detailed analysis of their respective embedding spaces. In studying the application of gLMs to proteomics, we look to encourage further research into a unified and synergistic approach to many biological modalities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Intermediate Representation for Expressing and Optimizing Computations in Lattice Quantum Chromodynamics</title>
<link href="https://hdl.handle.net/1721.1/156815" rel="alternate"/>
<author>
<name>Sollee III, Richard P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156815</id>
<updated>2024-09-17T03:22:34Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Intermediate Representation for Expressing and Optimizing Computations in Lattice Quantum Chromodynamics
Sollee III, Richard P.
The field of Lattice Quantum Chromodynamics faces massive scaling problems because of the large iteration spaces of the required sums, which scale with the factorial of the number of atoms represented. The LQCD IR and rewrite system from this thesis allows these scaling problems to be tackled more quickly and effectively. The IR can represent both mathematical concepts such as products and sums and algorithmic concepts such as precomputations. Our system requires minimal code to initialize the naive algorithm and apply effective rewrites to increase performance. This development-time speedup allows trying various approaches with ease. The rewrite system allows correctness to be maintained at each step while drastically changing the algorithmic approach in search of better asymptotic bounds. Our approaches lead to up to 5x speedups and at worst 2x slowdowns for our most important problem, but with a better development cycle, requiring only 100s of SLOC compared to 1000s of SLOC.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Video Games for Empathy and Understanding Towards Human Migration</title>
<link href="https://hdl.handle.net/1721.1/156814" rel="alternate"/>
<author>
<name>Casillas, Enrique</name>
</author>
<id>https://hdl.handle.net/1721.1/156814</id>
<updated>2024-09-17T03:38:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Video Games for Empathy and Understanding Towards Human Migration
Casillas, Enrique
Video games have recently started playing a more important role in education, though there is limited research on how they can be used to generate empathy and understanding towards their subject matters. To address this limitation, we present Vida Migrante, an online interactive simulation game about the struggles of Venezuelan migrants living in Ecuador, and analyze whether or not the game can foster empathy and understanding towards the migrant experience. This study uniquely looks at how the game can communicate the findings from real migrant data in such a way that users can empathize with them. A set of 52 students at the Massachusetts Institute of Technology were surveyed and asked a series of Likert-style and open-ended questions to determine whether or not this game generated empathy and understanding towards the topic. An in-depth quantitative and qualitative analysis reveals that although respondents already had high levels of empathy and understanding, the game was able to increase those levels rather significantly. This work shows that video games like these can be used not only to increase familiarity with and understanding of a humanitarian issue, but also to foster empathy towards the data and the presented human experiences. This paper lastly contributes a discussion of the specific features of this game that allow empathy generation to occur, which may help motivate future work to create effective games that allow their players to empathize with important issues in today’s technology-driven world.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy-Based Access Control in Federated Clinical Question Answering</title>
<link href="https://hdl.handle.net/1721.1/156813" rel="alternate"/>
<author>
<name>Chen, Alice</name>
</author>
<id>https://hdl.handle.net/1721.1/156813</id>
<updated>2024-09-17T03:44:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Policy-Based Access Control in Federated Clinical Question Answering
Chen, Alice
Retrieval augmented generation (RAG) has recently expanded large language model versatility in answering domain-specific questions using dynamic external knowledge bases, particularly demonstrating promise in assisting clinical settings. However, due to its sensitive nature, patient medical data often requires retrieval to be federated across a decentralized network of hospital institutions, each maintaining internal databases and access control policies. Applying standard RAG to clinical question-answering tasks is complicated by the lack of an interface for hospital resource owners to regulate and restrict access to sensitive clinical documents during retrieval, which is essential for model feasibility in practice. We propose to leverage federated RAG retrieval for clinical trends inference across distributed medical records while adding authorization security mechanisms during retrieval to guarantee security of patient data. We propose (i) user identity authentication administered through a trusted federation of per-hospital OpenID Connect servers, (ii) a framework for integrating policy-based access control (PBAC) security mechanisms at flexible granularity into a federated RAG system to restrict medical data access based on user role attributes, and (iii) ClinicalTrendQA, a novel dataset to evaluate model performance for synthesizing clinical trends grounded on decentralized patient EHR information. To facilitate evaluation of our authorization PBAC framework on protecting information leakage during retrieval, we additionally present a federated 3-hospital case study and demonstrate that the same ClinicalTrendQA query under different user profiles holding varying degrees of access privileges observes the expected EHR information reduction. We also analyze metrics concerning the impact of this retrieval loss on end-to-end response quality against federated insecure and centralized RAG baselines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager</title>
<link href="https://hdl.handle.net/1721.1/156812" rel="alternate"/>
<author>
<name>Gerszberg, Nina R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156812</id>
<updated>2024-09-17T03:39:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager
Gerszberg, Nina R.
The growing importance of large language models (LLMs) in daily life has heightened awareness and concerns about the fact that LLMs exhibit many of the same biases as their creators. In the context of hiring decisions, we quantify the degree to which LLMs perpetuate biases originating from their training data and investigate prompt engineering as a bias-mitigation technique. Our findings suggest that for a given resumé, an LLM is more likely to hire a candidate and perceive them as more qualified if the candidate is female, but still recommends lower pay relative to male candidates.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Compressive Enumeration to Solve Algorithmic Tasks With Language Models</title>
<link href="https://hdl.handle.net/1721.1/156810" rel="alternate"/>
<author>
<name>Wang, Annie</name>
</author>
<id>https://hdl.handle.net/1721.1/156810</id>
<updated>2024-09-17T03:52:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Compressive Enumeration to Solve Algorithmic Tasks With Language Models
Wang, Annie
Large language models are useful tools for generating and synthesizing short code snippets that solve straightforward programming problems. However, their performance on more advanced code generation tasks remains limited, due to the complex algorithmic nature of these tasks. Yet, large language models are often capable of crafting nearly-correct answers to such questions; model-generated responses are prone to small errors that may render an otherwise-correct program incorrect. To address this issue, we investigate whether large language models can be combined with enumerative program synthesis techniques to build solutions to difficult algorithmic problems. This thesis presents and evaluates compressive enumeration as a strategy for improving large language model performance on code generation tasks. Given a question q and a corpus P of model-generated responses to q, compressive enumeration isolates shared code components within P; combining these components in novel ways may make it possible to generate a new solution to q. Experimentation with the Stitch library learning algorithm shows that compressive enumeration is able to generate a working solution for a small number of questions. However, its best performance is typically attained on problems that are already solvable by current large language models. This suggests that compressive enumeration has limited practical value as a code generation strategy; however, future improvements to the technique may make it more widely applicable.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Optimality of Several Algorithms on Polynomial Regression of Empirical Bayes Poisson Model</title>
<link href="https://hdl.handle.net/1721.1/156808" rel="alternate"/>
<author>
<name>Kang, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/156808</id>
<updated>2024-09-17T03:01:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On the Optimality of Several Algorithms on Polynomial Regression of Empirical Bayes Poisson Model
Kang, Benjamin
Empirical Bayes estimation for the Poisson mixture model in [1], [2] has been an important problem studied for the past 70 years. In this thesis, we investigate extensions of this problem to estimating polynomial functions of the Poisson parameter rather than just the parameter itself. We generalize three different algorithms for estimation, specifically the Robbins estimator from [2], the NPMLE method from [3], and the ERM method from [4]. For each of these algorithms, we prove upper bounds on the minimax regret. We also prove a general lower bound that applies to any estimation algorithm for this setup. In addition to the theoretical bounds, we empirically simulate the performance of all three algorithms in relation to both the number of samples and the degree of the polynomial function we estimate.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large-scale Trends in Vision Systems: Novel Methods for Identifiability</title>
<link href="https://hdl.handle.net/1721.1/156807" rel="alternate"/>
<author>
<name>Yang, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/156807</id>
<updated>2024-09-17T03:21:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Large-scale Trends in Vision Systems: Novel Methods for Identifiability
Yang, Helen
While the analogy between artificial neural networks (ANNs) and the brain has been well validated in past work, one question without a clear answer is: what causes an ANN to be more or less brain-like? A better understanding of this may lead to the discovery and implementation of more performant and human-like AI systems. However, despite ANNs having been proposed as models of primate visual systems, the success in predicting both neural and behavioral responses of primates by ANNs has not been without contention. Increasing architectural and dataset sizes bring forth concerns of black boxes (artificial systems) explaining other black boxes (human intelligence), leading to our level of understanding of the relationship between artificial and biological visual systems hitting a wall. In addition, there is increasing empirical evidence that the representations learned by artificial vision systems are convergent: artificial vision systems trained on large datasets tend to learn similar representations despite having numerous differences in architecture and training. This lack of identifiability presents a challenge to comparison pipelines commonly used to validate artificial vision systems as models of biological vision—if two artificial vision systems with different architectures have convergent representations, we are limited in our ability to reason about the structural properties of an individual artificial vision system and determine which system provides a better model of the brain. In light of these issues, we provide an analysis of current frameworks for measuring artificial and biological visual system similarity and propose a novel approach toward improving identifiability between artificial vision systems via contrastive stimuli. We show that our approach offers better identifiability between artificial vision systems compared to standard benchmarks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smarter Agents for Agent-Based Models</title>
<link href="https://hdl.handle.net/1721.1/156806" rel="alternate"/>
<author>
<name>Kuru, Nurullah Giray</name>
</author>
<id>https://hdl.handle.net/1721.1/156806</id>
<updated>2024-09-17T03:35:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Smarter Agents for Agent-Based Models
Kuru, Nurullah Giray
Agent-based models (ABMs) are powerful tools for decision-making due to their ability to simulate systems with individual-level granularity. Recent advances have mitigated the computational costs of scaling ABMs to real-world population sizes; however, the potential of ABMs is also constrained by the quality of the underlying data and feedback loops. We introduce two approaches to improving data quality in ABMs. First, we incorporate LLM peers in ABM simulations to guide agent decision-making and thought generation, leveraging the world model learned by LLMs. We analyze both proprietary and open-source LLMs for suitability in ABM use, and find GPT-3.5 to be a strong candidate for distinguishing between agent characteristics and producing plausible isolation decisions in an epidemic. We introduce an effective and scalable system for using LLMs in ABMs by characterizing agents using a small set of characteristics and using LLM peers to guide agent groups. We conduct experiments in a synthetic replica of the Astoria neighborhood of New York City and show that this system achieves better calibration and enables more detailed analysis. Second, we propose privacy-preserving ABMs that can integrate real agents into ABM simulations in a distributed system using cryptographic protocols. We describe algorithms for running simulations, calibration, and analysis of ABMs, and provide a proof of concept. This approach enables adding real human feedback into ABMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inverse Constitutional AI</title>
<link href="https://hdl.handle.net/1721.1/156804" rel="alternate"/>
<author>
<name>Kostolansky, Timothy H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156804</id>
<updated>2024-09-17T03:47:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inverse Constitutional AI
Kostolansky, Timothy H.
The alignment of large language models (LLMs) to human values has become increasingly pressing as their scale and capabilities have grown. One important feature of alignment is understanding the preference datasets that are used to finetune LLMs. Inverse Constitutional AI (ICAI) is presented as a novel interpretability framework to discover the principles underlying preference datasets. Motivated by the Constitutional AI training paradigm of instilling principles in models, ICAI aims to extract a succinct "constitution" of natural language principles from data. This thesis contributes an initial attempt at realizing ICAI through a clustering-based methodology applied to preference datasets. The proposed approach involves embedding preference pairs into vector representations, clustering the embeddings to group related preferences, generating interpretable principles for each cluster using language models, and validating these principles against held-out samples. Empirical evaluation is conducted on the hh-rlhf dataset for training helpful and harmless AI assistants, as well as a synthetic dataset constructed by relabeling hh-rlhf samples with predefined principles. Results demonstrate promising capabilities in clustering semantically coherent topics and generating human-interpretable principles, while also highlighting limitations in achieving fully disentangled, principle-based clustering. Directions for future work are discussed, including soft clustering, bottom-up principle extraction, prompt optimization approaches, and sparse dictionary learning methods. In this work, I argue the following thesis: ICAI shows promise as a strategy to disentangle and explain the preferences represented in preference data. A clustering-based approach to ICAI, though, fails to successfully extract a constitution of principles from preference data, as a result of clustering occurring along the topics in the data instead of the preferences themselves.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of an LLM-Based Tool for Automatically Building Web Applications</title>
<link href="https://hdl.handle.net/1721.1/156803" rel="alternate"/>
<author>
<name>Voronin, Diana Nguyen</name>
</author>
<id>https://hdl.handle.net/1721.1/156803</id>
<updated>2024-09-17T03:55:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of an LLM-Based Tool for Automatically Building Web Applications
Voronin, Diana Nguyen
In this thesis, we present Kodless, a platform that enables users to automatically build web applications from natural language descriptions without requiring them to write, review, or debug the generated code. Kodless structures applications using concept design, a theory that views software as a collection of interacting yet independent units of functionality mapping to human behavior patterns. The platform leverages large language models to generate functional backend code, combining concept design principles with a robust framework for developing concept implementations and integrating them into a standardized application architecture. To evaluate Kodless's performance, we conduct a study in which we use the platform to develop an application through an iterative prompt refinement process. We argue that the case study illustrates the importance of concept-driven prompt engineering and offer guiding principles for designing effective prompts. Furthermore, this thesis contributes improvements to the Kodless platform, including extended support for MongoDB integration and the automatic generation of a frontend testing client. We also introduce a frontend code generation assistant to enable automatic generation of reactive user interfaces. Ultimately, Kodless represents a promising path towards changing how we approach AI-driven software design and development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Utility Libraries for Traversing and Manipulating Tree-like Data Structures with Varying Schemas</title>
<link href="https://hdl.handle.net/1721.1/156802" rel="alternate"/>
<author>
<name>Janicki, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/156802</id>
<updated>2024-09-17T03:31:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Utility Libraries for Traversing and Manipulating Tree-like Data Structures with Varying Schemas
Janicki, Adam
Tree-like data structures are among the most commonly used data types found in the wild across a wide array of JavaScript projects. A specific example of one of these structures is an abstract syntax tree (AST). However, the lack of good libraries for handling trees has led many developers and large-scale code bases to reimplement their own utility functions over and over again. To address these concerns within the JavaScript developer community, we propose Treecle and Vastly: two free, open-source libraries that provide utility functions and operations to help developers work with trees and ASTs, respectively.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Recommendation System for Ideation: Enhancing Supermind Ideator</title>
<link href="https://hdl.handle.net/1721.1/156801" rel="alternate"/>
<author>
<name>Papacica, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/156801</id>
<updated>2024-09-17T03:49:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Recommendation System for Ideation: Enhancing Supermind Ideator
Papacica, Daniel
Recommendation systems are widely utilized across various domains such as e-commerce, entertainment, and social media to enhance user experience by personalizing content and suggestions. Despite their widespread use, these systems are rarely applied to the ideation process, which presents unique challenges due to the inherently creative and complex nature of generating and developing novel ideas. This thesis details the creation and assessment of a recommendation system for the Supermind Ideator platform, aimed at enhancing the creative ideation process. The recommendation system leverages machine learning techniques to dynamically adapt to user input statements based on statement "scope", a sub-task that is thoroughly explored and tested in this paper. "Scope" is then integrated into the recommendation system’s static rules-based algorithm to suggest the next best Supermind Design "move". This work not only contributes a practical tool to the field of ideation but also extends the theoretical understanding of recommendation systems in facilitating complex, subjective cognitive tasks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Analog Integrated Circuit Design through Large Language Models and Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/156800" rel="alternate"/>
<author>
<name>Terpstra, Irene</name>
</author>
<id>https://hdl.handle.net/1721.1/156800</id>
<updated>2024-09-17T03:32:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Empowering Analog Integrated Circuit Design through Large Language Models and Reinforcement Learning
Terpstra, Irene
Analog Integrated Circuit design consists of several complex steps that are difficult to optimize. Automating the transistor sizing process specifically comes with many challenges. The problem has a large design space, requires complex performance trade-offs, and needs to adjust to rapidly advancing semiconductor technology. As a result, the task of sizing transistors is traditionally performed by experts with years of experience. Various optimization and reinforcement learning methods have been proposed to automate this process. While these methods have shown great competency, they must learn complex circuit dynamics from scratch, resulting in black-box solutions. This thesis proposes that the background knowledge contained in Large Language Models (LLMs) can guide the decisions of circuit designers, and that this guidance can be used to improve the exploration efficiency of both mathematical optimizers and reinforcement learning algorithms. This thesis demonstrates that LLMs possess a foundational understanding of analog circuit design, including circuit calculation and netlist comprehension. It also presents a framework to integrate LLMs as heuristic tools with existing optimization methods. This is a first-of-its-kind exploration into linking LLMs with optimization techniques for analog circuit design. While the current experimental results do not show improvements in design quality or speed, this work establishes the groundwork for further advancements with more sophisticated or fine-tuned LLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coherency Loss for Hierarchical Time Series Forecasting</title>
<link href="https://hdl.handle.net/1721.1/156799" rel="alternate"/>
<author>
<name>Hensgen, Michael Lowell</name>
</author>
<id>https://hdl.handle.net/1721.1/156799</id>
<updated>2024-09-17T03:02:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Coherency Loss for Hierarchical Time Series Forecasting
Hensgen, Michael Lowell
In hierarchical time series forecasting, some series are aggregated from others, producing a known coherency relationship between series. We present a new method for enforcing coherency on hierarchical time series forecasts. We propose a new loss function, called Network Coherency Loss, that minimizes the coherency loss of the weight and bias of the final linear layer of a neural network. We compare it against a baseline without coherency and a state-of-the-art method that uses projection to strictly enforce coherency. We find that, by choosing our Network Coherency Loss parameters based on validation data, for four datasets of varying sizes we produce improved accuracy over our two benchmark models. We also find that, when compared to an alternative loss function also designed to produce coherency, our Network Coherency Loss function produces similar accuracies but improves the coherency on the test data.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representation Learning for Extrapolation via Bilinear Transduction</title>
<link href="https://hdl.handle.net/1721.1/156798" rel="alternate"/>
<author>
<name>Spiride, Andrei</name>
</author>
<id>https://hdl.handle.net/1721.1/156798</id>
<updated>2024-09-17T03:01:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Representation Learning for Extrapolation via Bilinear Transduction
Spiride, Andrei
Typical machine learning systems, such as deep neural networks, perform well at predicting on new examples that come from the same distribution as the initial training data. However, these systems are typically not robust to examples that do not come from the same distribution as the training samples. Such testing samples are characterized as out-of-distribution (OOD). Building on a proven bilinear transduction method [1] for accurately predicting on OOD examples, we propose applying this framework to learned representations instead of hand-designed state representations. This work is geared towards enabling the bilinear transduction approach to generalize to a wider range of data types and tasks when such designed representations are not available. We use deep neural networks to learn representations of certain data types, such as images, and apply bilinear transduction to these learned representations. This has the potential to further expand the out-of-support prediction capabilities of the bilinear transduction framework.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Connecting Deep Learning Models to the Human Brain</title>
<link href="https://hdl.handle.net/1721.1/156797" rel="alternate"/>
<author>
<name>Subramaniam, Vighnesh</name>
</author>
<id>https://hdl.handle.net/1721.1/156797</id>
<updated>2024-09-17T03:24:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Connecting Deep Learning Models to the Human Brain
Subramaniam, Vighnesh
In this thesis, we introduce innovative methodologies for connecting new deep learning models, particularly models that integrate vision and language, with human brain processing. These models have shown remarkable advancements in tasks such as object recognition, scene classification, and language processing, achieving near-human accuracy in some cases. This raises intriguing questions about how closely the computations and geometric structure of these models mirror those of the human brain. Our method starts with measuring brain activity in response to vision and language stimuli and then exposes these stimuli to deep learning models to collect their internal activations. We analyze the similarity between these activations and brain activity using a specific representational distance metric. We focus on introducing statistical algorithms to assess whether one model is significantly more similar to the brain than another. Through our novel methodology, we assess whether there is a more significant correlation between brain regions and multimodal models compared to unimodal ones. Our investigation reveals brain areas associated with vision-language integration and models of vision-language integration that are potentially most similar to the brain.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and Optimizing the Networking Stack in Databases</title>
<link href="https://hdl.handle.net/1721.1/156796" rel="alternate"/>
<author>
<name>Kafle, Prabhakar</name>
</author>
<id>https://hdl.handle.net/1721.1/156796</id>
<updated>2024-09-17T03:05:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Characterizing and Optimizing the Networking Stack in Databases
Kafle, Prabhakar
Databases are latency-critical applications, and client-database communication is a significant contributor to end-to-end latency. However, the database community has paid little attention to the networking overhead in databases. This thesis focuses on the overhead from the network stack in the server. I characterize the contributions of different components in the database server to the end-to-end latency, focusing on the networking stack. I observe that in transactions involving a single read query, the server network stack accounts for almost 15% of the total end-to-end latency in VoltDB. Most of this overhead comes from TCP packet processing, interrupt handling, context switches, and I/O multiplexing. Additionally, this work explores avenues to reduce the networking stack overhead. I find that moving networking to userspace by bypassing the kernel can significantly reduce the networking stack overhead. This switch in the network stack can help achieve a significant improvement in throughput and lower latency for both of the benchmarks used. While this thesis focuses on the server networking stack, similar optimizations can be applied to the client side if the necessary hardware (CPU, NIC) is available.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Language Models to Understand Molecular Structures</title>
<link href="https://hdl.handle.net/1721.1/156795" rel="alternate"/>
<author>
<name>Fan, Vincent K.</name>
</author>
<id>https://hdl.handle.net/1721.1/156795</id>
<updated>2024-09-17T03:58:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Language Models to Understand Molecular Structures
Fan, Vincent K.
In data-rich modalities such as text and images, large foundation models have demonstrated remarkable capabilities. However, in the life sciences, datasets of comparable scale are prohibitively costly to assemble, underscoring the need to leverage advances in language modeling to improve machine learning techniques for the life sciences. This thesis details research in two such directions: information extraction and text retrieval. Information extraction from chemistry literature is vital for constructing up-to-date reaction databases. Complete extraction requires combining information across text, tables, and figures, whereas prior work has mainly investigated extracting reactions from single modalities. In this thesis, I present OpenChemIE to address this complex challenge and enable the extraction of reaction data at the document level. OpenChemIE approaches the problem in two steps: extracting relevant information from individual modalities with specialized neural models and then integrating the results via chemistry-informed algorithms to obtain a final list of reactions. I meticulously annotated a challenging dataset of reaction schemes with R-groups to evaluate OpenChemIE, which achieves an F1 score of 69.5%. Additionally, the reaction extraction results of OpenChemIE attain an accuracy score of 64.3% when directly compared against the Reaxys chemical database. OpenChemIE is most suited for information extraction on organic chemistry literature, where molecules are generally depicted as planar graphs or written in text and can be consolidated into a SMILES format. Additionally, I detail preliminary research in developing a tool to retrieve full-text documents that are relevant to specific protein sequences. I describe the dataset, which is currently under construction, as well as experiments pointing at the promise of this approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpreting and Editing Memory in Large Transformer Language Models</title>
<link href="https://hdl.handle.net/1721.1/156794" rel="alternate"/>
<author>
<name>Meng, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/156794</id>
<updated>2024-09-17T04:02:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Interpreting and Editing Memory in Large Transformer Language Models
Meng, Kevin
This thesis investigates the mechanisms of factual recall in large language models. We first apply causal interventions to identify neuron activations that are decisive in a model’s factual predictions; surprisingly, we find that factual recall corresponds to a sparse, localizable computation in the MLP weights of the GPT models we study. Harnessing this insight, we then develop methods for efficiently and surgically inserting up to 10,000 new memories into a transformer; these methods perform well in terms of both generalization and specificity. We conclude with some directions for future work.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analog Underwater Backscatter: Networked Underwater Sensing at Microwatt Power Levels</title>
<link href="https://hdl.handle.net/1721.1/156793" rel="alternate"/>
<author>
<name>Patnaik, Ritik</name>
</author>
<id>https://hdl.handle.net/1721.1/156793</id>
<updated>2024-09-17T03:32:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analog Underwater Backscatter: Networked Underwater Sensing at Microwatt Power Levels
Patnaik, Ritik
We present Analog Underwater Backscatter (AUB), the first technology for microwatt-level underwater wireless sensor networks. AUB departs from past underwater backscatter technologies in that it encodes sensor data directly into the physical layer through analog (frequency) modulation. Our design introduces multiple innovations that enable it to address challenges in practical underwater environments arising from mobility (Doppler shift) and the low-frequency carrier, which makes it vulnerable to small hardware imperfections. AUB’s design also introduces the first ultra-low-power wakeup receiver for underwater backscatter, enabling it to operate for a long time on small batteries. We built an end-to-end prototype of AUB and evaluated it in a river. Our evaluation demonstrates that AUB consumes 5.9 µW, 46× lower power than past state-of-the-art underwater backscatter systems. We also demonstrate AUB’s ability to sense two of the most important oceanographic vitals: temperature and depth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning an Embedding for Vehicle Telematics</title>
<link href="https://hdl.handle.net/1721.1/156792" rel="alternate"/>
<author>
<name>Leonard, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/156792</id>
<updated>2024-09-17T03:59:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning an Embedding for Vehicle Telematics
Leonard, Matthew
Vehicular telematics involves the collection and processing of data about driving behavior; however, mining and modeling this data is difficult due to its large volume. We hypothesize that the data will follow regular patterns of events that occur during drives, and that we can learn these patterns. With this intuition, we design a neural network that effectively embeds sections of accelerometer data into a lower-dimensional space, with low information loss and high embedding accuracy relative to the degree of dimensionality reduction, as well as several other desirable geometric properties for indexing and analysis of the data. We further develop an accurate summary of the distribution of each drive in this lower-dimensional space, which serves as a proxy for the occurrence of events within these drives. From this system, we develop a method of comparison between different drives that highlights whether or not particular events occurred in each drive. This could be used to develop a more robust and nuanced risk model, and to determine which events in a drive are associated with risk, in order to provide feedback to end users on their driving.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Max 2SAT-3, Net, Euclidea: Techniques and Results in Computational Inapproximability</title>
<link href="https://hdl.handle.net/1721.1/156791" rel="alternate"/>
<author>
<name>Luo, Victor</name>
</author>
<id>https://hdl.handle.net/1721.1/156791</id>
<updated>2024-09-17T03:13:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Max 2SAT-3, Net, Euclidea: Techniques and Results in Computational Inapproximability
Luo, Victor
This Master’s thesis investigates three diverse problem domains through the lens of computational inapproximability: Max 2SAT-3, the Net tile-rotating puzzle family, and the mobile game Euclidea. Max 2SAT-3 is a problem long known to be APX-complete, but finding a clear proof is harder than one might expect. We examine the history of Max 2SAT-3, addressing past misconceptions and clarifying where the reduction chain has been opaque, and present a novel proof of its APX-completeness. Net variants form a wide class of puzzles with lots of potential for future research. We introduce a natural optimization variant of Net and demonstrate its inapproximability, as well as consolidate existing findings and present other new results. Euclidea is a mobile game based on Euclidean straightedge-and-compass constructions. We define the game as an optimization problem and establish its APX-hardness, as well as discuss challenges in upper-bounding its complexity, relating to current knowledge gaps regarding the constructible and algebraic numbers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling the Rust Compiler to Reason about Fork/Join Parallelism via Tapir</title>
<link href="https://hdl.handle.net/1721.1/156790" rel="alternate"/>
<author>
<name>Hilton, Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/156790</id>
<updated>2024-09-17T03:03:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enabling the Rust Compiler to Reason about Fork/Join Parallelism via Tapir
Hilton, Jay
Rust + Cilk is an extension to the Rust language incorporating Cilk’s keywords for language-level parallelism. The Rust + Cilk compiler leverages the Rust compiler’s static verification of data race freedom and the OpenCilk parallelism platform’s strong theoretical guarantees for the performance of parallel programs. I compare Rust + Cilk to existing library-based parallelism solutions in Rust such as Rayon, as well as to C programs parallelized with OpenCilk, based on performance and ergonomics. I find that Rust + Cilk exhibits marginally worse performance than Rayon, although I expect these differences are possible to bridge with further work. Additionally, Rust + Cilk has ergonomic advantages for some kinds of parallel programs. I outline further research that could make Rust + Cilk a more complete and performant system to further take advantage of the benefits language-based parallelism solutions can offer while statically verifying data race freedom.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Control Signals for Reconstruction-based Time Series Anomaly Detection</title>
<link href="https://hdl.handle.net/1721.1/156789" rel="alternate"/>
<author>
<name>Song, Grace Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156789</id>
<updated>2024-09-17T04:07:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling Control Signals for Reconstruction-based Time Series Anomaly Detection
Song, Grace Y.
Automated time series anomaly detection methods can provide insights while reducing the load placed on human experts in a variety of settings. Machine-generated signals, such as those produced by sensors, often contain control signals in addition to the target observation signal. These signals may provide additional insight about the normal vs. abnormal properties of the observation signal. Despite this fact, even recent anomaly detection methods using deep learning give limited consideration to the relationship between observation and control signals, often failing to handle the control signal at all. This work proposes pre-processing, modeling, and evaluation methods for multivariate, heterogeneous time series to examine how using information from the control signal can improve anomaly detection. We develop a deep learning reconstruction-based pipeline and test its performance on the NASA Soil Moisture Active Passive (SMAP) satellite and the Mars Science Laboratory (MSL) Rover, which contain heterogeneous sensing data from exploratory missions. The pipeline follows the Sintel machine learning framework and is accessible through the Meissa library, which builds on the capabilities of the open-source library Orion for end-to-end unsupervised time series anomaly detection pipelines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Mechanistic Interpretability for Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/156787" rel="alternate"/>
<author>
<name>Liao, Isaac C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156787</id>
<updated>2024-09-17T03:36:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automated Mechanistic Interpretability for Neural Networks
Liao, Isaac C.
Mechanistic interpretability research aims to deconstruct the underlying algorithms that neural networks use to perform computations, such that we can modify their components, causing them to change behavior in predictable and positive ways. This thesis details three novel methods for automating the interpretation process for neural networks that are too large to interpret manually. Firstly, we detect inherently multidimensional representations of data; we discover that large language models use circular representations to perform modular addition tasks. Secondly, we introduce methods to penalize complexity in neural circuitry; we discover the automatic emergence of interpretable properties such as sparsity, weight tying, and circuit duplication. Last but not least, we apply neural network symmetries to put networks into a simplified normal form for conversion into human-readable Python; we introduce a program synthesis benchmark for this task and successfully convert 32 of its 62 programs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reed-Relay Switched Tuning Circuit for Stretchable RF Coils in Low Field, Portable MRI</title>
<link href="https://hdl.handle.net/1721.1/156786" rel="alternate"/>
<author>
<name>Nwigwe, Alexandra C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156786</id>
<updated>2024-09-17T03:08:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reed-Relay Switched Tuning Circuit for Stretchable RF Coils in Low Field, Portable MRI
Nwigwe, Alexandra C.
While MRI (Magnetic Resonance Imaging) technology allows us to get detailed images of the inside of a subject’s body, it most commonly requires very expensive, large-scale machinery, which limits the scenarios in which it can be used. These costly MRI systems are usually high-field MRI, which operates at magnetic fields of 1.5T and above and produces images with short scan times and high resolution. Yet because of the drawbacks in accessibility and affordability posed by high-field MRI, there has been an effort to devote more research to portable low-field MRI. Low-field MRI opens doors for low-cost and point-of-care imaging, but it unfortunately comes at the expense of decreased image quality and greater noise interference. An RF head coil that molds to the user’s head would be able to better excite and receive signal from the subject and counteract some of the inherent disadvantages of low-field MRI. My proposed thesis pursues the idea of using flexible, subject-adaptable RF head coils in conjunction with an autotuning circuit as a way to extract better signal from a subject at low magnetic fields.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Algorithms for Mixtures of Linear Dynamical Systems: A Practical Approach</title>
<link href="https://hdl.handle.net/1721.1/156785" rel="alternate"/>
<author>
<name>Kumar, Nitin A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156785</id>
<updated>2024-09-17T03:44:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning Algorithms for Mixtures of Linear Dynamical Systems: A Practical Approach
Kumar, Nitin A.
In this work, we give the first implementation of an algorithm to learn a mixture of linear dynamical systems (LDS’s), and an analysis of algorithms to learn a single linear dynamical system. Following the work of Bakshi et al. ([1]), we implement a recent polynomial-time algorithm based on a tensor decomposition with learning guarantees in a general setting, with some simplifications and minor optimizations. Our largest contribution is giving the first expectation-maximization (E-M) algorithm for learning a mixture of LDS’s, and an experimental evaluation against the Tensor Decomposition algorithm. We find that the E-M algorithm performs extremely well, and much better than the Tensor Decomposition algorithm. We analyze performance of these and other algorithms to learn both a single LDS and a mixture of LDS’s under various conditions (such as how much noise is present) and algorithm settings.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Nonconvex Trajectory Optimization with Hierarchical Graphs of Convex Sets</title>
<link href="https://hdl.handle.net/1721.1/156783" rel="alternate"/>
<author>
<name>von Wrangel, David</name>
</author>
<id>https://hdl.handle.net/1721.1/156783</id>
<updated>2024-09-17T03:11:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Guiding Nonconvex Trajectory Optimization with Hierarchical Graphs of Convex Sets
von Wrangel, David
Collision-free motion planning with trajectory optimization is inherently nonconvex. Some of this nonconvexity is fundamental: the robot might need to make a discrete decision to go left around an obstacle or right around an obstacle. Some of this nonconvexity is potentially more benign: we might want to penalize high-order derivatives of our continuous trajectories in order to encourage smoothness. Recently, Graphs of Convex Sets (GCS) have been applied to trajectory optimization, addressing the fundamental nonconvexity with efficient online optimization over a "roadmap" represented by an approximate convex decomposition of the configuration space. In this thesis, we explore some of the most useful nonconvex costs and constraints and introduce a novel hierarchical GCS structure, composing subgraphs that represent different task phases or alternative paths and enabling efficient planning for complex tasks involving both discrete decision-making and continuous trajectory generation. We investigate the suitability of combining convex "global" optimization using GCS with nonconvex trajectory optimization for rounding the local solutions. Through extensive experiments on diverse robotic systems, we demonstrate that this combination can effectively guide a small number of nonconvex optimizations, ultimately finding high-quality solutions to challenging nonconvex motion planning problems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Test Suite for Saliency Method Evaluation Metrics</title>
<link href="https://hdl.handle.net/1721.1/156781" rel="alternate"/>
<author>
<name>Kaspar, Moulinrouge</name>
</author>
<id>https://hdl.handle.net/1721.1/156781</id>
<updated>2024-09-17T03:34:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Test Suite for Saliency Method Evaluation Metrics
Kaspar, Moulinrouge
This thesis introduces a structured test suite designed to evaluate the input sensitivity of saliency methods, a crucial factor when interpreting machine learning models, particularly in high-stakes environments. Saliency methods, by highlighting essential input features influencing model decisions, serve as a key tool for understanding model behavior. Yet, their effectiveness can vary, often presenting challenges in selection due to their inconsistent reliability and the potential for unfaithful representations of model dynamics. To address these challenges, our work enhances the process of selecting and applying saliency methods by rigorously testing their response to input perturbations, from adversarial modifications to minor variations. This test suite specifically assesses aspects such as completeness, deletion, faithfulness, and robustness across various data types—including textual and image data—and model architectures like convolutional and transformer models. We demonstrate the utility of the test suite by using it to compare how different saliency methods, as well as the same method across different architectures, behave under varied conditions. Our findings reveal significant variations in how these methods respond to changes in input data, providing insights that guide users in choosing more reliable techniques for interpreting model decisions. This facilitates a deeper understanding of which methods are best suited for specific tasks and promotes the selection of techniques that enhance the transparency and accountability of AI systems. Ultimately, this thesis contributes to advancing ethical compliance and fostering trust in automated decision-making processes by providing a comprehensive evaluation platform for saliency methods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extensible Real-Time Sensor and Test Interface for a System-on-Chip</title>
<link href="https://hdl.handle.net/1721.1/156777" rel="alternate"/>
<author>
<name>Studer, Alexandre S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156777</id>
<updated>2024-09-17T03:32:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extensible Real-Time Sensor and Test Interface for a System-on-Chip
Studer, Alexandre S.
This thesis describes the development of a printed circuit board (PCB) that enables connecting external sensors and a host computer to a custom Application-Specific Integrated Circuit (ASIC). The ASIC, previously developed by the Low-Energy Autonomy and Navigation research group, is designed for autonomous navigation on microrobots, such as drones. To enable the real-time data processing required for this application, the ASIC includes a custom Sensor-and-Debug IP block that provides Serial Peripheral Interface (SPI) and First-In/First-Out (FIFO) buses. The custom PCB includes a multiplexer circuit that allows multiple sensors to be connected to the ASIC's single SPI bus. It also includes a USB-to-FIFO interface, developed around the RP2040 microcontroller, which enables connecting a host computer to the ASIC's FIFO bus. Ultimately, the PCB simplifies the connection of external sensors, facilitates debugging of the ASIC, and can be miniaturized for mounting on an autonomous microrobot, such as a drone, in the future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Control-oriented Meta-learning on Hardware</title>
<link href="https://hdl.handle.net/1721.1/156775" rel="alternate"/>
<author>
<name>Sohn, Joshua C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156775</id>
<updated>2024-09-17T03:33:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Implementing Control-oriented Meta-learning on Hardware
Sohn, Joshua C.
Unpredictable weather conditions pose a daunting challenge for the robust control of unmanned aerial vehicles, also known as drones. The control-oriented meta-learning algorithm aims to solve this problem by learning a controller that can adapt to dynamic environments. This algorithm has already been derived and simulated for a two-dimensional model. This project explores the implementation of the control-oriented meta-learning algorithm on a hardware platform. After extending the algorithm to a three-dimensional model, it was tested in a physics-based simulator and deployed on a hexarotor in the real world. Both in simulation and in real life, the learned controller outperformed a traditional controller in the presence of wind.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Multistage Compilation of Machine Learning Computation Graphs</title>
<link href="https://hdl.handle.net/1721.1/156774" rel="alternate"/>
<author>
<name>Dighe, Kaustubh</name>
</author>
<id>https://hdl.handle.net/1721.1/156774</id>
<updated>2024-09-17T04:07:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Fast Multistage Compilation of Machine Learning Computation Graphs
Dighe, Kaustubh
Machine learning applications increasingly demand speed and computational power. Many applications, such as language models, have become so large that they are run in parallel on distributed systems. However, getting into the details of optimally scheduling, or even just running, machine learning models on distributed systems can be a distraction for researchers ideating models. Hence, abstractions have been developed to facilitate running machine learning models in parallel on distributed systems. We present a compiler for the StreamIt language, a language made for abstract signal processing and multicore programming, and use that abstraction as a way to distribute the computation of machine learning models programmed in PyTorch.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soccer Last Touch and Automatic Event Detection with Skeletal Tracking Data</title>
<link href="https://hdl.handle.net/1721.1/156773" rel="alternate"/>
<author>
<name>Bian, George C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156773</id>
<updated>2024-09-17T03:29:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Soccer Last Touch and Automatic Event Detection with Skeletal Tracking Data
Bian, George C.
With the rapid growth of soccer data collection technology worldwide, there is an increasing need for new, efficient methods to analyze match data. This would help soccer stakeholders more easily and efficiently scrutinize game events for strategy improvement and individual player evaluation. Currently, most existing event data is annotated manually by hand, which is an extremely time-consuming task. Recent works in automatic event generation leverage decision tree algorithms to partially identify game events from player center of mass and ball tracking data, but have proven limited in accuracy in practice. New computer vision models have enabled the extraction of player joint data from video broadcast, providing a newer, richer dataset for automatic event detection. The proposed thesis will seek to validate brand-new skeletal joint data, determine the last player to touch the ball at any timestamp during a match, and build a decision tree algorithm for classifying duel-like events and goalkeeping outcomes with the additional context of player joint location.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Patient Outcomes in the EPOCH Clinical Trial</title>
<link href="https://hdl.handle.net/1721.1/156772" rel="alternate"/>
<author>
<name>Parsan, Nithin</name>
</author>
<id>https://hdl.handle.net/1721.1/156772</id>
<updated>2024-09-17T03:27:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Predicting Patient Outcomes in the EPOCH Clinical Trial
Parsan, Nithin
Metastatic colorectal cancer (mCRC) has a poor prognosis and high mortality rate, but innovative therapies such as transarterial radioembolization (TARE) can improve patient outcomes. The EPOCH clinical trial demonstrated that TARE improved hepatic progression-free survival (hPFS) in patients with colorectal liver metastases, and computational methods to analyze the multimodal data collected can identify patient subgroups and predict treatment response for personalized medicine. First, a comprehensive data preprocessing pipeline curated a high-quality dataset of liver-region Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans paired with patient biomarkers. Multi-Dimensional Subset Scanning (MDSS) identified a group of patients with shared biomarkers that exhibited poor response to TARE, and Cox Proportional Hazards (CoxPH) modeling revealed hazard ratios for biomarkers aligning with clinical expectations, albeit with a limited C-index. Augmenting CoxPH modeling with embeddings from a deep learning foundation model pre-trained on liver CT and MRI scans and fine-tuned to predict treatment response resulted in a substantially higher C-index. Interestingly, models fine-tuned to predict one clinical feature had improved predictive accuracy for other features they were not specifically trained on, and Class Activation Mapping (CAM) visualizations showed that salient embedding dimensions focus on the liver region, providing interpretability. The ensemble of computational techniques applied to multimodal clinical trial data successfully identified patient subgroups, extracted predictive biomarkers, and enhanced the accuracy of treatment response predictions, contributing to the development of more effective, personalized treatment strategies for mCRC patients undergoing TARE.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extended Evaluation: Unraveling Medicaid Patient Trajectories and Improving Intervention Candidate Identification</title>
<link href="https://hdl.handle.net/1721.1/156771" rel="alternate"/>
<author>
<name>Joglekar, Natasha</name>
</author>
<id>https://hdl.handle.net/1721.1/156771</id>
<updated>2024-09-17T03:22:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extended Evaluation: Unraveling Medicaid Patient Trajectories and Improving Intervention Candidate Identification
Joglekar, Natasha
We conduct an analysis of the Camden Coalition’s Health Information Exchange (HIE) data to gain deeper insights into the trajectories of Medicaid patients through the health system. Recognizing the complex challenges of social determinants of health, this study seeks to find patterns and opportunities within the Medicaid population’s healthcare journeys. Through time series analysis, we try to understand the utilization trajectories of Medicaid patients over time. Using this insight combined with predictive modeling, we then begin to develop a methodology for identifying persistent high-cost healthcare utilization, and consider how having this information may change program implementation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Learning Genetic Dependencies</title>
<link href="https://hdl.handle.net/1721.1/156769" rel="alternate"/>
<author>
<name>Cai, Cathy</name>
</author>
<id>https://hdl.handle.net/1721.1/156769</id>
<updated>2024-09-17T03:05:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Learning Genetic Dependencies
Cai, Cathy
Synthetic lethality refers to a genetic interaction where the simultaneous perturbation of gene pairs leads to cell death. Synthetically lethal gene pairs (SL pairs) provide a potential avenue for selectively targeting cancer cells based on genetic vulnerabilities. The rise of large-scale gene perturbation screens such as the Cancer Dependency Map (DepMap) offers the opportunity to identify SL pairs automatically using machine learning. We build on a recently developed class of feature learning kernel machines known as Recursive Feature Machines (RFMs) to develop a pipeline for identifying SL pairs based on CRISPR viability data from DepMap. In particular, we first train RFMs to predict viability scores for a given CRISPR gene knockout from cell line embeddings consisting of gene expression and mutation features. After training, RFMs use a statistical operator known as average gradient outer product to provide weights for each feature indicating the importance of each feature in predicting cellular viability. We subsequently apply correlation-based filters to re-weight RFM feature importances and identify those features that are most indicative of low cellular viability. Our resulting pipeline is computationally efficient, taking under 3 minutes for analyzing all 17,453 knockouts from DepMap for candidate SL pairs. We show that our pipeline more accurately recovers experimentally verified SL pairs than prior approaches. Moreover, our pipeline finds new candidate SL pairs, thereby opening novel avenues for identifying genetic vulnerabilities in cancer.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aquaculture Basket Detection and Tracking for Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/156768" rel="alternate"/>
<author>
<name>Gillespie, Fiona J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156768</id>
<updated>2024-09-17T04:05:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Aquaculture Basket Detection and Tracking for Autonomous Surface Vehicles
Gillespie, Fiona J.
With the global population on the rise, there is an increased demand for seafood, underscoring the crucial role of aquaculture, the practice of farming aquatic organisms [1]. In the realm of aquaculture, oyster farming is relatively low maintenance, except for the challenge of manually flipping heavy oyster-laden bags. To address this issue, MIT Sea Grant introduced the Oystermaran, an autonomous catamaran specifically designed for this task. This thesis presents contributions to the electronics, controls, and perception systems of the Oystermaran project. In particular, it presents an oyster basket detection and tracking method using the object detector You Only Look Once (YOLO) [2]. In addition, the electronics system has been updated and new manual controllers were created to enable the use of a new flipping mechanism developed this year. This system is evaluated on data from field testing at Ward Aquafarms, a Cape Cod-based oyster farming business. The results show that oyster baskets can be robustly detected in new environments, despite environmental factors. This marks a significant step towards real-time viability for autonomous oyster farming.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Supervised Audio-Visual Speech Diarization and Recognition</title>
<link href="https://hdl.handle.net/1721.1/156767" rel="alternate"/>
<author>
<name>Wongprommoon, Arun</name>
</author>
<id>https://hdl.handle.net/1721.1/156767</id>
<updated>2024-09-17T03:01:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Self-Supervised Audio-Visual Speech Diarization and Recognition
Wongprommoon, Arun
Many real world use cases of automatic speech recognition (ASR) contain video and multiple speakers, such as TV broadcasts and video conferences. However, state-of-the-art end-to-end multimodal ASR models generally do not support diarization. This thesis extends one such model, AV-HuBERT, to address the diarization problem while maintaining word recognition accuracy. The proposed Audio-Visual Cocktail (AVC) HuBERT model extends video input dimensions, lengthens feature size, and adds projection layers to split outputs into corresponding speakers. A complementary synthesized dataset is constructed by mixing audio and video samples from LRS3 at varying overlap thresholds, resulting in the LRS3Mix dataset. This is used to train the model, whose weights are transferred from AV-HuBERT. Computing several word error rate (WER) metrics to measure recognition and diarization performance of several versions of AVC-HuBERT models demonstrates that the method improves diarization, albeit with a small tradeoff in word recognition. Augmenting the synthesized mixed dataset with the original clean single-speaker dataset boosts recognition ability, and the same effect can be observed when the dataset size increases.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using heterogeneous Graph Neural Networks (hGNN) to predict cell-cell communication</title>
<link href="https://hdl.handle.net/1721.1/156766" rel="alternate"/>
<author>
<name>Yan, Binwei</name>
</author>
<id>https://hdl.handle.net/1721.1/156766</id>
<updated>2024-09-17T03:02:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using heterogeneous Graph Neural Networks (hGNN) to predict cell-cell communication
Yan, Binwei
This thesis investigates diverse computational methodologies for modeling cellular interactions using single-cell RNA sequencing (scRNA-seq) data. We evaluate the performance of Graph Neural Networks (GNNs) both with and without gene-gene edges, Contrastive Learning, and Variational Autoencoders (VAEs) across multiple datasets. Our study compares these methods and establishes benchmarks for assessing their effectiveness beyond traditional case studies. By integrating extensive signaling pathway data, we aim to unveil complex cell-cell communication patterns and regulatory mechanisms that conventional scRNA-seq analysis methods might overlook. Our approach emphasizes the use of spatial data as a crucial indicator, facilitated by the advanced capabilities of heterogeneous GNNs to model physical proximity. We found that our analysis of the functioning genes aligns with previous findings, supporting our model’s effectiveness as a potential method for further analyzing communication mechanisms.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Privacy-Preserving Payments</title>
<link href="https://hdl.handle.net/1721.1/156765" rel="alternate"/>
<author>
<name>Ali, Ayesha</name>
</author>
<id>https://hdl.handle.net/1721.1/156765</id>
<updated>2024-09-17T03:37:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Scaling Privacy-Preserving Payments
Ali, Ayesha
We explore privacy-preserving payments in a centralized setting, such as CBDCs. Specifically, we focus on two classes of designs that hide the transaction graph: Chaumian e-cash and Merkle tree-based systems (e.g., Tornado Cash), which differ both in their security assumptions and scalability. In our work we highlight scalability limitations in Merkle tree-based privacy systems that would be encountered in a network as large as a CBDC, and propose a sharded Merkle tree design to improve scalability while maintaining strong privacy. However, as we analyze, conventional sharding methods pose privacy risks, prompting the introduction of a ’tree of sharded trees’ design that preserves privacy at a modest increase in latency. We describe, implement, and evaluate all three designs, and find that unmodified Tornado Cash indeed suffers from resource-contention-induced scalability bottlenecks. In contrast, our new design achieves throughput that is less than an order of magnitude away from e-cash, despite providing auditability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SAGE: Segmenting and Grouping Data Effectively using Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156764" rel="alternate"/>
<author>
<name>Pedraza Pineros, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/156764</id>
<updated>2024-09-17T04:02:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">SAGE: Segmenting and Grouping Data Effectively using Large Language Models
Pedraza Pineros, Isabella
Grouping is a technique used to organize data into manageable pieces, reducing cognitive load and enabling users to focus on discovering higher-level insights and generating new questions. However, creating groups remains a challenge, often requiring users to have prior domain knowledge or an understanding of the underlying structure of the data. We introduce SAGE, a novel technique that leverages the knowledge base and pattern recognition abilities of large language models (LLMs) to segment and group data with domain awareness. We instantiate our technique through two structures: bins and highlights; bins are contiguous, non-overlapping ranges that segment a single field into groups; highlights are multi-field intersections of ranges that surface broader groups in the data. We integrate these structures into Olli, an open-source tool that converts data visualizations into accessible, keyboard-navigable textual formats to facilitate a study with 15 blind and low-vision (BLV) participants, recognizing them as experts in assessing agency. Through this study, we evaluate how SAGE impacts a user’s interpretation of data and visualizations, and find our technique provides a rich contextual framework for users to independently scaffold their initial sensemaking process.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Genetic Basis of Sex Differences in Human Height</title>
<link href="https://hdl.handle.net/1721.1/156763" rel="alternate"/>
<author>
<name>Aluru, Amulya S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156763</id>
<updated>2024-09-17T03:51:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding the Genetic Basis of Sex Differences in Human Height
Aluru, Amulya S.
Sex differences are prevalent across health, development and disease. Driven by the sex chromosomes, the largest source of genetic variation in the human population, trait differences between males and females can have important implications in treatment response and disease diagnosis. Genes along the X and Y chromosomes encode broadly-expressed regulators of the transcriptome and epigenome that have diverged in function and expression. These sex chromosome-linked gene pairs enforce differences in regulatory landscapes and autosomal gene expression patterns between biological males (XY) and females (XX), which can have far-reaching consequences. Despite this, the field of population genetics has rarely considered the special role of sex-linked loci and sex-biased genetic effectors in establishing sex-dependent trait variation.  In this thesis, I integrate existing tools in statistical genetics for the repurposed goal of understanding the genetic basis of sex differences in complex traits. Through combining genome-wide association study (GWAS) data with gene expression panels and sex-biased gene expression information, previous work in the lab has demonstrated that genes with conserved sex bias contribute to the establishment of sex bias in height. First, to understand the relationship between GWAS power and sex differences, we compared the performance of two differently powered GWAS in their ability to explain sex bias in height, finding a modest increase in genetic insight by the larger GWAS. Second, we assessed functional elements across the genome that may differentially contribute to height between males and females to propose alternative mechanisms alongside gene expression that may establish sex differences in height. 
Altogether, the work presented in this thesis demonstrates the potential of sex differences research to utilize well-powered studies of sex-biased regulators and variant-trait associations to better understand the genetic mechanisms, including but not limited to gene expression, that cultivate and maintain sex differences in complex traits.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo Tree Search Applications to Neural Theorem Proving</title>
<link href="https://hdl.handle.net/1721.1/156761" rel="alternate"/>
<author>
<name>LaBelle, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/156761</id>
<updated>2024-09-17T03:02:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Monte Carlo Tree Search Applications to Neural Theorem Proving
LaBelle, Ethan
A common problem of LLM inference is hallucination, where models generate false information. Another such problem is the tradeoff between model size and computational cost. Larger models use more VRAM, in addition to requiring longer training and inference times. This work explores solutions to these problems, namely search and verification, following Yang’s recent contribution: LeanDojo: Theorem Proving with Retrieval-Augmented Language Models. In their work, Yang et al. introduce LeanDojo, an environment for programmatic interaction with the Lean theorem proving language, alongside ReProver, a ByT5-Small transformer-based ATP fine-tuned using the open source Lean mathlib. The smaller model requires fewer resources, enabling faster inference, which when combined with search, improves the effective performance of the model. We use the language model to generate a space of partial proof trees in Lean. As the core GPT can be interchanged with a larger or more performant model, this work focuses on search algorithms for finding novel proofs given the same computational budget. Three classes of algorithms are explored: best first search, random walk, and Monte Carlo Tree Search. Search algorithms are evaluated on the random split test dataset of the LeanDojo Benchmark. Finally, we present common failure modes of various methods, search results of algorithm variants, and novel proofs discovered relative to the baseline. Across our trials, we show the search space defined by ReProver’s tactic generator contains proofs for approximately 55.0% of theorems in the LeanDojo Benchmark random test split. In Yang’s evaluations, ReProver achieves a 51.2% solve rate Pass@1 on this benchmark.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Changes in Individual Wellbeing Scores: Mixed Effects Models using Sleep Data from Wearables</title>
<link href="https://hdl.handle.net/1721.1/156760" rel="alternate"/>
<author>
<name>Choi, Shelley</name>
</author>
<id>https://hdl.handle.net/1721.1/156760</id>
<updated>2024-09-17T03:41:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Predicting Changes in Individual Wellbeing Scores: Mixed Effects Models using Sleep Data from Wearables
Choi, Shelley
Sleep plays a major role in regulating human cognitive function, performance, mood, and well-being. Despite its significance, the intricate relationship between various sleep components—such as duration, quality, and regularity—and wellbeing outcomes remains inadequately explored. The nature of sleep data poses challenges in capturing and interpreting temporal patterns, but the growing popularity of wearable devices capable of collecting vast multi-modal data presents a promising avenue to bridge this gap. In this thesis, the aim is two-fold: first, identify the impact of different combinations and transformations of sleep regularity (Sleep Regularity Index [SRI], Composite Phase Deviation [CPD], Interdaily Stability [IS]) and duration calculated from wearable devices across varying time frames on self-reported morning wellbeing scores (alertness, happiness, energy, health, calmness); and second, evaluate both linear and nonlinear associations between different sleep metrics and wellbeing. To address the high user variability arising from the personalized nature of sleep and the subjective nature of wellbeing assessments, we employ mixed effects modeling techniques where each individual is treated as their own cluster, including Linear Mixed Effects models (LMM) and Mixed Effects Random Forest (MERF), where the latter is benchmarked against classic machine learning models. The LMM results were most statistically significant for independent regularity (SRI, IS), combined regularity (SRI and IS), total sleep time as duration (TST), and combined regularity and total sleep time (SRI and TST, IS and TST) for alertness and energy over 2-4 nights. MERF outperformed other models in Mean Absolute Error (MAE) for all time split scenarios. This research further emphasizes the importance of addressing data leakage due to the time sensitivity of sleep data and the calculation of regularity spanning multiple days.
By establishing correlations between sleep parameters and wellbeing indicators, this study hopes to provide deeper insights into fluctuations in wellbeing and inform the development of wearables that monitor sleep patterns.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Code Summarization and Program Synthesis with Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156757" rel="alternate"/>
<author>
<name>Lam, Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/156757</id>
<updated>2024-09-17T03:59:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Code Summarization and Program Synthesis with Large Language Models
Lam, Kelly
Automatic source code summarization and generation are naturally complementary operations because they bridge the gap between natural-language text and executable programs, allowing users to flow between the two modes. Even though large language models have become increasingly popular, it is unclear how effective they are at code summarization and generation, especially as we examine longer source code segments or more complicated prompts for generation. In this thesis, we will formalize the automatic code summarization and generation problems, identify some cases where large language models can perform poorly, propose some techniques to correct the initial bad results, and evaluate our results against appropriate baselines using suitable evaluation metrics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Spatial Transcriptomics Data for Cross-Species Molecular Region Comparison</title>
<link href="https://hdl.handle.net/1721.1/156756" rel="alternate"/>
<author>
<name>Li, Bridget</name>
</author>
<id>https://hdl.handle.net/1721.1/156756</id>
<updated>2024-09-17T03:47:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Integrating Spatial Transcriptomics Data for Cross-Species Molecular Region Comparison
Li, Bridget
Comparative analysis of brain patterns across species can advance understanding of different biological processes and functions. Spatially resolved transcriptomics (SRT) technologies present the ability to measure gene expression of single cells within tissues, enabling the detection of unique spatial molecular patterns in the brain. Several computational methods that rely on cellular neighborhood information have been developed for characterizing molecular tissue regions in SRT data. Here, we show that spatial integration (SPIN) improves the performance of existing methods and enables the clustering of molecular tissue regions. Then, we test SPIN and signal-processing approaches on SRT data from mouse and macaque brains. We integrate the brain atlases of these two species to identify shared and distinct spatial molecular patterns. This work offers new insights into spatial molecular features between mouse and macaque brains and proposes a framework for integrating SRT datasets on a large scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Algorithmic Progress in Data Structures and Approximation Algorithms</title>
<link href="https://hdl.handle.net/1721.1/156755" rel="alternate"/>
<author>
<name>Li, Jeffery</name>
</author>
<id>https://hdl.handle.net/1721.1/156755</id>
<updated>2024-09-17T03:29:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On Algorithmic Progress in Data Structures and Approximation Algorithms
Li, Jeffery
In the big data regime, computer systems and algorithms must process large amounts of data, making many traditional exact algorithms too costly to run. To work around this, researchers have developed approximation algorithms, which trade off some accuracy for asymptotic improvements in runtime, and data structures, which can efficiently store and answer multiple queries about a dataset. This naturally leads to the question: how have approximation algorithms and data structures improved over the years? Here, we provide some insight into this question, looking into trends in algorithmic and data structure progress, tradeoffs between speed and accuracy or between runtimes of specific data structure operations, and specific problems of interest. Our analysis is based on a dataset of around 300 approximation algorithms and around 250 data structures. For both fields, we find that research remains fairly active to the present day, even though significant or asymptotic gains for data structures have been slowly declining. Improvements have also been fairly heterogeneous: some problems see a lot of work and improvements put into them, while others have not seen as much progress. In addition, of the problems that have both exact and approximation algorithms, for around 1/6 of the problems approximation algorithms have achieved immensely large average yearly improvement rates compared to exact algorithms, while for around 1/2 of the problems approximation algorithms have shown minimal improvement over exact algorithms. For data structures, we find that only 4 out of the 28 abstract data types in our dataset have ever had a tradeoff between storage requirements and/or runtimes of specific operations, only 2 of which persist today, suggesting that improvements generally build off of each other without increasing space usage or time required for other operations. 
This research helps us understand how approximation algorithms and data structures have progressed through the years and how they are now.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving LLM Long Context Understanding via Synthetic Data and Adaptive Compression</title>
<link href="https://hdl.handle.net/1721.1/156754" rel="alternate"/>
<author>
<name>Li, Jerry</name>
</author>
<id>https://hdl.handle.net/1721.1/156754</id>
<updated>2024-09-17T03:04:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving LLM Long Context Understanding via Synthetic Data and Adaptive Compression
Li, Jerry
Recent innovations in large language models (LLMs) have led to their widespread use, but the long context problem remains a fundamental challenge. Transformer-based LLMs are constrained by the quadratic scaling of the self-attention mechanism, which restricts most popular LLMs to a context length of several thousand tokens. Many methods have been introduced to extend the context of LLMs, including the Activation Beacon approach. In this work, we propose two key advancements to the existing methodology. First, we generate long context synthetic data across a variety of tasks for training context-extended models, which can supplement or even replace expensive human-annotated data. Second, we introduce a novel two-pass, adaptive compression technique for more intelligent compression of long contexts. We find that the two strategies lead to orthogonal performance improvements on real-world long context tasks, resulting in an overall 4.2% increase in accuracy compared to the previous benchmark.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Irreversible Actions in Assistance Games with a Dynamic Goal</title>
<link href="https://hdl.handle.net/1721.1/156753" rel="alternate"/>
<author>
<name>Mayer, Hendrik T.</name>
</author>
<id>https://hdl.handle.net/1721.1/156753</id>
<updated>2024-09-17T03:01:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Irreversible Actions in Assistance Games with a Dynamic Goal
Mayer, Hendrik T.
Reinforcement Learning (RL) agents optimize reward functions to learn desirable policies in a variety of important real-world applications such as self-driving cars and recommender systems. However, in practice, it can be very difficult to specify the correct reward function for a complex problem, in what is known as reward misspecification. Impact measures provide metrics to determine how robust a particular agent’s behavior is to reward misspecification. This thesis analyzes one particular impact measure: the frequency of irreversible actions that an agent takes. We study this impact measure using a time-varying model of the principal’s preferences. This choice was motivated by two primary considerations. First, many real-world scenarios consist of a principal with time-varying preferences. Second, an agent assuming time-varying preferences may be more averse to performing irreversible actions. In this thesis, we examine principal-agent (human-robot) assistance games in toy grid environments inspired by cooperative inverse reinforcement learning [1], where irreversible actions correspond to removing transitions from a POMDP. In these games, we focus on how the frequency of changes in the principal’s preferences and the optimality of the principal influence the agent’s willingness to take irreversible actions. In 2-node and 4-node assistance games, we find two main results. First, in the presence of a random or approximately optimal human, the robot performs more irreversible actions as the goal state changes position more often. Second, in the presence of an optimal human, the robot rarely performs irreversible actions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ExoBLAS: Meta-Programming a High-Performance BLAS via Scheduling Automations</title>
<link href="https://hdl.handle.net/1721.1/156752" rel="alternate"/>
<author>
<name>Droubi, Samir</name>
</author>
<id>https://hdl.handle.net/1721.1/156752</id>
<updated>2024-09-17T03:56:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ExoBLAS: Meta-Programming a High-Performance BLAS via Scheduling Automations
Droubi, Samir
Kernel libraries are designed to support numerical computations and provide efficient implementations of them. The goal of these libraries is to provide many optimized functionalities, which is a challenge because the implementations of those programs are often written in C or assembly. BLAS (Basic Linear Algebra Subprograms) is a famous example of such a library, where the dimensionality of the interface imposes a huge space of functions to implement, which makes it particularly challenging to support. Our work tackles the problem of implementing BLAS in the context of meta-programming, particularly user-scheduling in the Exo programming language. We base our solution on three key ideas to achieve reuse at the level of the meta-program. First, there are similarities in the individual optimizations that are performed on these kernels, which we capture as scheduling operations with which we extend the Exo programming language. Secondly, the end-to-end optimization strategies (or schedules) for groups of these kernels are the same, and we capture them as scheduling automations. Lastly, more complex BLAS operations from higher levels can be transformed into less complex BLAS-like operations similar to operations from lower levels, so we can use the automation of a lower level to build the automation of a higher level. We evaluated our results against industry and open-source implementations of BLAS and show that we achieve competitive performance with a small implementation in terms of lines of code.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Memorization: Exploring the Dynamics of Grokking in Sparse Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/156751" rel="alternate"/>
<author>
<name>Fuangkawinsombut, Siwakorn</name>
</author>
<id>https://hdl.handle.net/1721.1/156751</id>
<updated>2024-09-17T03:49:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Memorization: Exploring the Dynamics of Grokking in Sparse Neural Networks
Fuangkawinsombut, Siwakorn
In the domain of machine learning, "grokking" is a phenomenon where neural network models demonstrate a sudden improvement in generalization, distinct from traditional learning phases, long after the initial training appears complete. This behavior was first identified by Power et al. (2022) [5]. This thesis explores grokking within the context of the (&#119899;, &#119896;)-parity problem, aiming to uncover the mechanisms that trigger such transitions. Through extensive empirical research, we examine how different neural network configurations and training conditions influence the onset of grokking. Our methodology integrates advanced visualization techniques, such as t-SNE, and kernel density estimations to track the evolution from memorization to generalization phases. Furthermore, we investigate the roles of weight decay and network robustness against outliers, focusing on optimizing neural network architectures to achieve effective generalization with fewer computational resources. This study advances our understanding of grokking and proposes practical strategies for designing more efficient neural networks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Accountability Mechanisms in the Judiciary System using Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156750" rel="alternate"/>
<author>
<name>Shastri, Ishana</name>
</author>
<id>https://hdl.handle.net/1721.1/156750</id>
<updated>2024-09-17T03:02:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automating Accountability Mechanisms in the Judiciary System using Large Language Models
Shastri, Ishana
Holding the judicial system accountable often demands extensive effort from auditors who must meticulously sift through numerous disorganized legal case files to detect patterns of bias and systemic errors. For example, the high-profile investigation into the Curtis Flowers case took nine reporters a full year to assemble evidence about the prosecutor’s history of selecting racially-biased juries. Large Language Models (LLMs) have the potential to automate and scale these accountability pipelines, especially given their demonstrated capabilities in both structured and unstructured document retrieval tasks. We present the first work elaborating on the opportunities and challenges of using LLMs to provide accountability in two legal domains: bias in jury selection for criminal trials and housing eviction cases. We find that while LLMs are well-suited for information extraction from eviction forms that have more structure, court transcripts present a unique challenge due to disfluencies in transcribed speech.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Topology for Capacitively Isolated Switched Capacitor Converter</title>
<link href="https://hdl.handle.net/1721.1/156749" rel="alternate"/>
<author>
<name>Jerez, Raiphy</name>
</author>
<id>https://hdl.handle.net/1721.1/156749</id>
<updated>2024-09-17T03:02:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Novel Topology for Capacitively Isolated Switched Capacitor Converter
Jerez, Raiphy
This thesis introduces a novel topology for capacitive isolation in switched-capacitor DC-DC converters, taking inspiration from previous work. The research endeavors to develop a unique switched-capacitor topology that enables isolation between input and output voltages. By integrating elements of the Cockcroft-Walton generator into the Dickson converter framework, the proposed design seeks to leverage the inherent advantages of switched-capacitor converters—such as compactness, lightweight design, and higher efficiency at low to moderate power levels—over traditional magnetic converters. Additionally, the incorporation of isolation in the switched-capacitor converter architecture offers enhanced flexibility, allowing for selective power processing and more precise regulation. This feature is particularly beneficial in applications requiring dynamic power management and improved efficiency in power conversion.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanistic Interpretability for Progress Towards Quantitative AI Safety</title>
<link href="https://hdl.handle.net/1721.1/156748" rel="alternate"/>
<author>
<name>Lad, Vedang K.</name>
</author>
<id>https://hdl.handle.net/1721.1/156748</id>
<updated>2024-09-17T03:32:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mechanistic Interpretability for Progress Towards Quantitative AI Safety
Lad, Vedang K.
In this thesis, we conduct a detailed investigation into the dynamics of neural networks, focusing on two key areas: inference stages in large language models (LLMs) and novel program synthesis methods using mechanistic interpretability. We explore the robustness of LLMs through layer-level interventions such as zero-ablations and layer swapping, revealing that these models maintain high accuracy despite perturbations. As a result, we hypothesize the stages of inference in LLMs. This work suggests implications for LLM dataset curation, model optimization, and quantization. Subsequently, we introduce MIPS, an innovative method for program synthesis that distills the operational logic of neural networks into executable Python code. By transforming an RNN into a finite state machine and applying symbolic regression, MIPS successfully addresses 32 out of 62 algorithmic tasks, outperforming GPT-4 in 13 unique challenges. The work intends to take a step forward in enhancing the interpretability and reliability of AI systems, promising significant advances in our understanding and utilization of current and future AI capabilities. Together, these studies highlight the importance of comprehending the inferential behaviors of neural networks to foster more interpretable and efficient AI.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steerable Alignment with Conditional Multiobjective Preference Optimization</title>
<link href="https://hdl.handle.net/1721.1/156747" rel="alternate"/>
<author>
<name>Manyika, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/156747</id>
<updated>2024-09-17T03:03:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Steerable Alignment with Conditional Multiobjective Preference Optimization
Manyika, Julian
As the scale, capabilities, and use-cases of large language models (LLMs) continue to grow, it is imperative that these systems are aligned with human preferences. Current state-of-the-art strategies for alignment such as Reinforcement Learning from Human Feedback (RLHF) have provided useful paradigms for finetuning LLMs to produce outputs that are more consistent with human preferences. These approaches, however, assume that preferences are formed by a single, underlying reward model, which is likely insufficient for representing an individual’s preferences, certainly unable to represent diverse group preferences, and inflexible for users at inference time. To address these limitations, we propose Conditional Multiobjective Preference Optimization (CMPO), a novel alignment strategy that trains a user-steerable LLM along multiple attributes of text, such as helpfulness and humor. CMPO simulates the Pareto front of multiple single-attribute preference-optimized models through structural plurality and finetuning with Direct Preference Optimization (DPO), and allows users to condition outputs on the predefined attributes at inference time. Experiments show that CMPO generates responses that are preferred to those from separate attribute-specific DPO models and from models trained using SteerLM, an alternate model steering approach. CMPO empirically shows promise as a scalable and flexible finetuning strategy for creating LLMs that are attribute-steerable from parameterized preferences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verifying Hardware Security Modules With True Random Number Generators</title>
<link href="https://hdl.handle.net/1721.1/156746" rel="alternate"/>
<author>
<name>Zhao, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/156746</id>
<updated>2024-09-17T03:54:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Verifying Hardware Security Modules With True Random NumberGenerators
Zhao, Katherine
Hardware security modules (HSMs) are powerful tools in building secure computer systems, allowing developers to factor out security-critical code to separate devices. Because HSMs usually work with sensitive data, it is crucial that we are able to verify that they are secure. Many HSMs today also include true random number generators (TRNGs) as part of their architecture to seed cryptographic functions for generating keys, creating nonces, padding, and more. This thesis presents a definition of Information-Preserving Refinement with Randomness (IPRR) that captures the idea that an HSM with a TRNG is correct and is secure from timing side channel attacks. We additionally construct a strategy to prove IPRR, and develop Karatroc, a tool for verifying that an HSM satisfies IPRR. Through the creation and evaluation of Karatroc, we demonstrate the ability to verify HSMs with TRNGs without incurring significant added cost in performance and proof length as compared to existing proof methods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Standardization of Electronic Component Datasheets to Improve Systematic Data Extraction</title>
<link href="https://hdl.handle.net/1721.1/156745" rel="alternate"/>
<author>
<name>Gustafson, Nicholas F.</name>
</author>
<id>https://hdl.handle.net/1721.1/156745</id>
<updated>2024-09-17T04:09:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Standardization of Electronic Component Datasheets to Improve Systematic Data Extraction
Gustafson, Nicholas F.
This thesis addresses the challenge of standardizing electronic component datasheets to improve systematic data extraction. The absence of uniformity in datasheet design complicates the process of systematically extracting critical information, leading to significant manual effort and potential errors. This research explores the current state of datasheet standardization and examines existing systematic data extraction efforts from semi-structured documents. It highlights the limitations of current methods and emphasizes the need for further standardization to facilitate accurate and efficient data extraction. The thesis proposes a detailed methodology for transitioning electronic component datasheets from semi-structured to structured formats through standardization. By defining common standards and specific structures for different types of datasheets, this approach aims to enhance both human readability and machine processing. The thesis concludes by discussing the broader implications of these standards and their potential applications in other fields. Through this work, the goal is to streamline the datasheet creation process, reduce manual intervention, and ultimately improve the accuracy and efficiency of systematic data extraction in the electronic components industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hybrid Switched-Capacitor Converter for Capacitive Wireless Power Transfer in Biomedical Applications</title>
<link href="https://hdl.handle.net/1721.1/156744" rel="alternate"/>
<author>
<name>Sund, Jade</name>
</author>
<id>https://hdl.handle.net/1721.1/156744</id>
<updated>2024-09-17T03:40:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Hybrid Switched-Capacitor Converter for Capacitive Wireless Power Transfer in Biomedical Applications
Sund, Jade
Rechargeable pulse generators on the market use inductive wireless power transfer (I-WPT), but capacitive wireless power transfer (C-WPT) has the potential to provide safety and size improvements over I-WPT. Current C-WPT research is focused on resonant capacitive coupling methods. Such works have reported power transfer efficiency of less than 40%. In the proposed thesis, a capacitively isolated Dickson converter, a type of hybrid switched capacitor converter, will be investigated to determine if it can be used to deliver power to biomedical implants safely, efficiently, and in a small package.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing 3D Scene Graph Generation with Multimodal Embeddings</title>
<link href="https://hdl.handle.net/1721.1/156743" rel="alternate"/>
<author>
<name>Morales, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/156743</id>
<updated>2024-09-17T03:58:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing 3D Scene Graph Generation with Multimodal Embeddings
Morales, Joseph
3D Scene Graphs are expressive map representations for scene understanding in robotics and computer vision. Current approaches for automated zero-shot 3D Scene Graph generation rely on spatial ontologies that relate objects with the semantic locations they are found in (e.g., a fork is found in a kitchen). While conferring impressive zero-shot performance, these approaches are conditioned on the existence of disambiguating objects in a scene, the expressiveness of the generated spatial ontologies, and knowing during data collection that a robot needs to observe specific objects in the environment. This thesis proposes a method for zero-shot scene graph generation by leveraging Vision-Language Models (VLMs) to construct a layer of Viewpoints in the scene graph, which allow for after-the-fact open-vocabulary querying over the scene. Methods for utilizing different VLM features are explored, which result in improvement over the ontological approach on region segmentation tasks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inductive Biases in Learning Hierarchical Abstractions for Bipedal Locomotion</title>
<link href="https://hdl.handle.net/1721.1/156742" rel="alternate"/>
<author>
<name>Ravichandar, Sanjna</name>
</author>
<id>https://hdl.handle.net/1721.1/156742</id>
<updated>2024-09-17T03:37:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inductive Biases in Learning Hierarchical Abstractions for Bipedal Locomotion
Ravichandar, Sanjna
Bipedal locomotion presents a complex challenge in the field of reinforcement learning (RL), due to the high dimensional state and action space. Hierarchical abstractions and inductive biases emerge as critical components in navigating this complexity, offering pathways for effective learning and adaptation in bipedal locomotion tasks. By leveraging hierarchical structures and inductive biases, RL controllers can distill the inherent complexity of bipedal locomotion into manageable components, facilitating more efficient learning and adaptation processes. This work explores hierarchical abstractions within the context of RL for bipedal locomotion. We investigate three distinct RL locomotion controllers: a baseline controller, an action space abstraction controller, and a novel Hierarchical RL (HRL) controller implemented on velocity tracking tasks. We assess the controllers across various RL metrics, including task performance, learning efficiency, stability, and human-likeness metrics derived from human locomotion studies. We quantify the effectiveness of hierarchical abstractions and inductive biases in enhancing locomotion task performance and aligning RL-generated behaviors with human locomotion patterns. The action space abstraction controller emerges with superior performance, and our investigation underscores the potential of HRL approaches to leverage hierarchical structures for optimized locomotion behaviors, highlighting the importance of selecting appropriate and well-designed abstractions. By analyzing the role of hierarchical abstractions and inductive biases in bipedal RL, our study contributes to advancing the understanding and development of RL algorithms for bipedal locomotion, with implications for the design of more efficient and human-like locomotion behaviors in robotic systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Twofish: Automatic Edit Cascading for Diagrams</title>
<link href="https://hdl.handle.net/1721.1/156741" rel="alternate"/>
<author>
<name>Huang, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/156741</id>
<updated>2024-09-17T03:38:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Twofish: Automatic Edit Cascading for Diagrams
Huang, Grace
Creating and editing diagrams, whether for scientific research, education, or otherwise, is tedious and time consuming. When a user makes a small change to a diagram element, they often have to make additional downstream edits to fully propagate the change to the diagram. This is because these relative positioning constraints are often defined through layout commands, such as alignment, which are viewed by many direct manipulation editors as one-time operations. That is, a layout command enforces spatial relationships between objects by mutating them but does not enforce these relationships when the user makes later edits. While viewing these commands as one-time operations improves the editing flexibility of the editor, it makes editing less efficient. To balance the tradeoff between editing flexibility and efficiency, we present Twofish, a graphical editor that persists relations between elements. In this context, relations, such as alignment or an arrow, associate elements with each other by defining relative spacing constraints between them. Through persisting these relations, we can reapply them automatically to the diagram when corresponding elements are edited. This allows Twofish to automatically cascade edits downstream to fix any positioning constraints that were broken because of a change. This system is built as an extension of an existing graphical editor. In doing so, Twofish makes it easier to create and edit diagrams without sacrificing expressibility. To evaluate Twofish, we compared using Twofish and Figma to edit diagrams in six different scenarios, using three example diagrams. From this comparison, we found that Twofish generally improved editing efficiency but had worse editing flexibility than Figma.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Grit in MLB Batters</title>
<link href="https://hdl.handle.net/1721.1/156740" rel="alternate"/>
<author>
<name>Yang, Angel</name>
</author>
<id>https://hdl.handle.net/1721.1/156740</id>
<updated>2024-09-17T03:42:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantifying Grit in MLB Batters
Yang, Angel
This thesis investigates the quantification of grit in Major League Baseball (MLB) batters, a crucial yet underexplored area in sports analytics traditionally gauged through qualitative assessment. Utilizing 2023 game data from the top 160 most utilized MLB batters, this study develops a Grit Score for each player based on the number of at-bats required to return to average performance after a period of below-average performance. At-bat performance is measured through Delta Runs Expected, and the at-bat group size of the window is selected by testing for correlation and consistency in player grit rankings. Results reveal significant variations in Grit Scores among batters; players identified as the most gritty generally correspond to those with top offensive performance, though grit and performance do not perfectly correlate. Furthermore, gritty batters tend to experience a higher number of hitting slumps but with shorter average lengths, regardless of the at-bat group size used to define the performance window. This research has implications in player valuation and development, team management, and scouting and drafting, suggesting that MLB teams should favor players who recover quickly from poor at-bats due to their more consistent performance and reliable offensive contributions to team success.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Informing decision-making in single-objective, mixed-variable design problems</title>
<link href="https://hdl.handle.net/1721.1/156739" rel="alternate"/>
<author>
<name>Fang, Demi L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156739</id>
<updated>2024-09-17T04:04:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Informing decision-making in single-objective, mixed-variable design problems
Fang, Demi L.
Data-driven decision-making in mixed-variable design problems presents a variety of challenges and opportunities, especially in the increasingly data-rich field of emissions in architectural and structural design. Designers can benefit from underlying knowledge of, for example, whether material choice (discrete) or span (continuous) has more important consequences for structural emissions. This intuition need not be built purely through experience or optimization: data-driven approaches can offer quantitative feedback. However, traditional approaches to sensitivity analysis are limited to continuous variables, while certain types of machine learning models can handle combinations of continuous and discrete variables. In this thesis, a hybrid gradient-based, sampling-based technique for determining the directional importance of mixed variables in a design space is benchmarked against state-of-the-art variable importance methods (also known as feature importance or interpretability methods) from machine learning. The importance evaluations and runtimes are compared across workflows. First, a concise literature review is presented, clarifying and unifying terminology across fields. Tree-based models are identified as a class of machine learning models that readily handles mixed-variable design spaces, and the following variable importance metrics are identified: impurity-based importance metrics (also known as Mean Decrease Impurity), permutation feature importance (PFI, also known as Mean Decrease Accuracy), and Shapley values. These existing workflows are applied to varying sample sizes of three different datasets related to low-carbon structural design. 
The same samples are evaluated using the hybrid technique previously proposed by the author, which trains the data on a conditional variational autoencoder (cVAE), approximates gradients on the model, and summarizes gradients into “influence metrics” using a Gaussian mixture model (GMM) (in contrast to a mean absolute value). Through this comparison, this thesis establishes several findings, including several advantages to using the hybrid cVAE and GMM-to-influence workflow over typical tree-based feature importance approaches. First, the hybrid method’s evaluation of gradients is consistently faster than the evaluation of importance in all other workflows for all sample sizes and datasets. Second, it avoids the known drawback of tree-based models’ tendency to assign higher importance to high-cardinality variables. Third, its definition of performance “gradients” with respect to each category (as opposed to each categorical variable) offers more specific, useful insights. For example, it is more useful to know which structural framing system is associated with large reductions in emissions (gradients by category) than to know that the choice of structural framing system is associated with a range of reductions and increases in emissions (gradients by categorical variable, which is typical in feature importance methods). These advantages come at the expense of more time (in this case, 10-fold) needed to train the model compared to state-of-the-art gradient-boosted tree models and the additional time needed to fit a GMM (as opposed to taking the mean absolute value of importance values across the sample). The hybrid workflow is still 2 to 10 times faster than the random forest workflows. Finally, these comparisons highlight the importance of the cardinality of categorical variables in mixed-variable design spaces, both in selecting a model and in selecting an importance evaluation method.
Key words: variable importance, feature importance, mixed-variable design spaces, gradients, design space exploration, data-driven decision-making
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hybrid Approach for Key-Value Extraction from Technical Specification Documents</title>
<link href="https://hdl.handle.net/1721.1/156738" rel="alternate"/>
<author>
<name>Lee, Samuel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156738</id>
<updated>2024-09-17T03:43:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Hybrid Approach for Key-Value Extraction from Technical Specification Documents
Lee, Samuel S.
As the number of documents processed by businesses across the world increases daily, the demand for streamlined and automated document processing methods grows. However, commercial methods for information extraction from documents do not generalize well across different document formats, as each solution is tailored to specific types of documents. This thesis provides an overview of a hybrid document processing pipeline designed to extract key-value pairs from technical specification documents with high accuracy. Two different phases of the pipeline are introduced, both employing rule-based methods and machine learning to cover a variety of document types. The first is an earlier iteration that extracts information from a simpler collection of documents, and the second is the current iteration designed to handle a much larger dataset containing more complex documents. Lastly, the initial stages of a module designed for key-value extraction from a specific type of technical specification document are also proposed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-Quantum Verifiable Oblivious Pseudorandom Functions</title>
<link href="https://hdl.handle.net/1721.1/156650" rel="alternate"/>
<author>
<name>Propson, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/156650</id>
<updated>2024-09-04T03:41:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Post-Quantum Verifiable Oblivious Pseudorandom Functions
Propson, Helen
This work presents the construction of a post-quantum verifiable oblivious pseudorandom function (VOPRF) with a focus on efficiency and practicality. Leveraging lattice-based cryptographic primitives, particularly the Learning With Errors (LWE) problem, our VOPRF construction aims to address the limitations of existing approaches by reducing proof sizes. The key component in our work is the integration of an efficient zero-knowledge proof of knowledge (ZKPoK) protocol. This ZKPoK is notably more efficient than the proof systems used in prior VOPRF constructions, ensuring the verifiability of PRF outputs while providing smaller proof sizes. Our construction relies on the hardness of the ring-LWE and short integer solution (SIS) problems, and we demonstrate its security in the random oracle model. Overall, our VOPRF construction represents a step towards the development of more practical post-quantum secure cryptographic protocols, highlighting the potential for further improvements in efficiency and real-world applicability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robot Planning in Uncertain, Dynamic Environments</title>
<link href="https://hdl.handle.net/1721.1/156644" rel="alternate"/>
<author>
<name>Cheerla, Anika</name>
</author>
<id>https://hdl.handle.net/1721.1/156644</id>
<updated>2024-09-04T03:08:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Robot Planning in Uncertain, Dynamic Environments
Cheerla, Anika
Many real-world applications require robots to operate in dynamic environments characterized by moving objects or agents whose trajectories are unpredictable. This thesis addresses the challenges posed by such environments by introducing Relative Temporal Probabilistic Roadmaps (Rel-T-PRM), a novel motion planning algorithm that builds upon the Temporal Probabilistic Roadmap (T-PRM) algorithm. Rel-T-PRM allows for variable dynamic obstacle size, enables robustness with respect to minor changes in time and position, and introduces the concept of waiting until obstacles clear. Furthermore, we leverage Rel-T-PRM’s strengths to propose two replanning strategies. The first attempts to rapidly replan on-the-fly by using waiting to modify the trajectory without needing to modify the path. The second identifies and plans to safe locations, where the robot can safely replan under a longer time horizon. We demonstrate Rel-T-PRM through a variety of simulation experiments on a fixed-base robotic manipulator.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering</title>
<link href="https://hdl.handle.net/1721.1/156640" rel="alternate"/>
<author>
<name>Cai, Miranda J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156640</id>
<updated>2024-09-04T03:42:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering
Cai, Miranda J.
Graph sampling extracts representative samples of a graph so that approximate graph algorithms can be used in place of expensive, exact algorithms while still achieving high-quality results. Thus, graph sampling plays an important role in many modern graph-based applications, such as graph machine learning and graph data mining. However, because of unstructured sparsity in the graph data and the randomness in the sampling algorithms, graph sampling is often the computational bottleneck. To accelerate it, parallel graph sampling methods exist for multicore CPUs and GPUs. However, limitations arise on both sides: due to lower throughput, CPU implementations are much slower than GPU ones, while limited GPU memory capacity restricts GPU implementations to small input graphs. We present the idea behind a scalable graph sampling framework, ScaleGPS, to support high-performance graph sampling on huge graphs in a single machine with a CPU and a GPU. The key idea is to cooperatively employ data caching and compression to reduce memory footprint and data movement overhead, and thus achieve high performance and scalability. The challenge in applying caching and compression to graph sampling is twofold. First, the randomness in sampling leads to redundant computation and memory accesses, and thus low work efficiency. Second, real-world graphs often exhibit skewed degree distributions, where a fixed strategy cannot optimally handle all cases. We propose a hybrid and adaptive strategy to address this challenge. First, we split the vertices in the graph into two groups based on their degrees. For each group, we store the neighbor lists in different formats, to make full use of the scarce GPU memory resources. Based on this hybrid compression method, we use the GPU memory as a cache of the CPU memory, and adaptively cache hot data to minimize the data movement overhead between the CPU and GPU. 
We implement our strategy in ScaleGPS and evaluate it on a single machine with a 48-core CPU and an A100 GPU. Our experimental results on various sampling algorithms show that ScaleGPS is able to support billion-edge graphs (up to 84 billion edges) in a single machine. While the performance benefits on these largest graphs are still undetermined, ScaleGPS achieves an average of 33.4× (up to 93×) speedup for smaller graphs over state-of-the-art parallel CPU implementations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MashupMuse: A Web Application for Easier Music Mashup Creation</title>
<link href="https://hdl.handle.net/1721.1/156639" rel="alternate"/>
<author>
<name>Meng, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/156639</id>
<updated>2024-09-04T03:26:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">MashupMuse: A Web Application for Easier Music Mashup Creation
Meng, Julie
The intersection of music and technology enables a form of musical expression known as a music mashup—a creative work that combines elements from multiple existing songs into a new, cohesive piece. The traditional process for creating a mashup with standard music editing software can be time-consuming for experienced mashup creators and intimidating for new creators. This software has a steep learning curve and more functionality than mashup enthusiasts require. Over the last fifteen years, researchers have attempted to simplify this process through solutions with user-friendly interfaces for streamlined mashup creation. With the rise of artificial intelligence, some recent tools automate the mashup process entirely, which strips users of creative control and potentially leads to musically unsatisfying results. Current mashup software falls short either in functionality or user-friendliness, leaving a need for a platform that balances technological assistance and creative freedom. In response to this need, we propose MashupMuse, a web application that simplifies music mashup creation by automating certain parts of the mashup creation process while leaving room for creative freedom. MashupMuse separates each song’s audio into individual tracks, such as vocals, bass, and drums. It allows users to select sections from these tracks and arrange them on a master track while automatically handling beat and key adjustments. This balance of automation and creative freedom offers users a streamlined yet flexible music editing experience. During user testing, we found notable advantages in comparison with a similar mashup creation application. Finally, we outline future work to further improve the user experience.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion-Compensated Viewpoint Shift</title>
<link href="https://hdl.handle.net/1721.1/156638" rel="alternate"/>
<author>
<name>Tao, Julius L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156638</id>
<updated>2024-09-04T03:51:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Motion-Compensated Viewpoint Shift
Tao, Julius L.
Eye contact is an essential social cue that conveys our attention to others but is difficult to maintain during video calls. Many existing methods to synthesize a gaze-corrected view involve estimating a 3D face model and projecting it into the desired camera view, which is too computationally expensive for most personal computers. By drawing inspiration from 2D methods of video frame interpolation, we wish to not only correct eye gaze but also better align the face towards the camera without this expensive 3D modeling. Our findings suggest that adding a second webcam opposite the first and interpolating between the two outer camera views can give realistic, gaze-aligned center views. We conclude that the prevailing approach of 3D modeling is surprisingly not necessary for gaze correction. Not only do 2D techniques suffice, but their synthesized frames can appear more natural than prior results. We believe that this work is a crucial step towards true-to-life viewpoint shift for live video conferences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representation Learning Associates Patients’ Risks for Metabolic Diseases with Features of Their Lipocytes</title>
<link href="https://hdl.handle.net/1721.1/156626" rel="alternate"/>
<author>
<name>Tan, Zipei</name>
</author>
<id>https://hdl.handle.net/1721.1/156626</id>
<updated>2024-09-04T03:07:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Representation Learning Associates Patients’ Risks for Metabolic Diseases with Features of Their Lipocytes
Tan, Zipei
Polygenic risk scores (PRS) estimate an individual’s risk of developing a certain disease, suggesting that differences between cells of individuals with high versus low PRS could give us insight into the cellular disease mechanisms. To study metabolic diseases, we analyze the distribution of cell states of lipocytes of individuals with different PRS for metabolic diseases, thereby associating individual-level genotypes with cell-level features. To accomplish this, we make use of a recent large-scale lipocyte microscopy imaging dataset. By learning a representation of multi-channel lipocyte microscopy images using a convolutional autoencoder, we perform unsupervised clustering on the learnt representations to identify different cell states. We analyze the distribution of these cell states in different individuals and associate their PRS to the observed cell state distributions. Finally, we show that it is possible to generate counterfactual lipocyte images and understand the effect of increased or reduced PRS on cell states through transforming the learnt representations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Photocurrent Spectroscopy Study of Graphene / Hexagonal Boron Nitride Moiré Superlattice In the Far-Infrared Regime</title>
<link href="https://hdl.handle.net/1721.1/156624" rel="alternate"/>
<author>
<name>Yang, Jixiang</name>
</author>
<id>https://hdl.handle.net/1721.1/156624</id>
<updated>2024-09-04T03:31:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Photocurrent Spectroscopy Study of Graphene / Hexagonal Boron Nitride Moiré Superlattice In the Far-Infrared Regime
Yang, Jixiang
Two-dimensional (2D) materials and their heterostructures, especially those with moiré superlattices, have been one of the most fascinating topics in physics in recent years. Much interesting physics, for example the correlated insulating states at half- or quarter-fillings of the moiré band, occurs in the far-infrared energy range. However, there are very few optical spectroscopic studies of these 2D materials due to many intrinsic limitations. In this thesis, I will introduce a method named Fourier-transform infrared (FTIR) photocurrent spectroscopy. I will discuss the advantages of this method and why it is suitable for far-infrared studies of 2D materials. Then I will apply it to monolayer graphene / hexagonal boron nitride (hBN) moiré superlattices, where I accurately measure the gap ∆ opened at the charge neutrality point (CNP) by the moiré superlattice. The relationship between the gap size and the moiré wavelength will also be discussed. Finally, I will discuss the possibility of applying this technique to other novel physical phenomena and other 2D systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Liquid-Crystal-on-Silicon Photonic Integrated Circuits with Millions of Degrees of Freedom</title>
<link href="https://hdl.handle.net/1721.1/156622" rel="alternate"/>
<author>
<name>Wang, Archer</name>
</author>
<id>https://hdl.handle.net/1721.1/156622</id>
<updated>2024-09-04T03:55:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Programmable Liquid-Crystal-on-Silicon Photonic Integrated Circuits with Millions of Degrees of Freedom
Wang, Archer
This thesis proposes a novel approach to photonics, wherein waveguides are formed entirely within a homogeneous liquid crystal layer using Liquid-Crystal-on-Silicon (LCoS) technology. Utilizing the electro-optical properties of LCs, we demonstrate the theoretical feasibility of inducing refractive index variations solely within the LC medium to guide light. This method diverges from traditional waveguiding techniques that rely on solid core and cladding structures, offering a new paradigm in reconfigurable photonic devices. Additionally, we develop and explore the idea of a programmable Multi-Mode Interferometer using LCoS technology, enabling the performance of arbitrary unitary transformations. Future work will focus on developing robust simulations of coupled-mode theory with liquid crystals, paving the way for next-generation photonic technologies that perform universal linear optics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Tropospheric Hydrogen Peroxide Trends from 1950-2014</title>
<link href="https://hdl.handle.net/1721.1/156613" rel="alternate"/>
<author>
<name>Sun, Vanessa</name>
</author>
<id>https://hdl.handle.net/1721.1/156613</id>
<updated>2024-09-04T03:17:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating Tropospheric Hydrogen Peroxide Trends from 1950-2014
Sun, Vanessa
The oxidizing capacity of the atmosphere, or the ability of the atmosphere to clean itself of the pollutants that build up in the troposphere, is determined by oxidants including ozone (O₃), HOx radicals (OH and HO₂), and hydrogen peroxide (H₂O₂). O₃ is the primary source for HOx radicals, while H₂O₂ is a key sink for HOx radicals that terminates the rapid cycling between OH and HO₂. The concentrations of the HOx radicals and H₂O₂ are difficult to measure directly, with scarce long-term data on H₂O₂ primarily available through ice core records. Given the lack of observational data, much of our knowledge of the history of tropospheric oxidants relies on modeling studies. We quantify the global H₂O₂ burden and trends between 1950 and 2014 from the Community Earth System Model - Whole Atmosphere Community Climate Model version 6 (CESM2-WACCM6). This is a global chemistry-climate model, with each of the 13 ensemble members simulating the historical period. Each has a minuscule difference in its initial conditions, and consequently yields a different response to the same external forcing. In this study, we discern where H₂O₂ is increasing in the troposphere, particularly in the Southern Hemisphere and over Antarctica. We quantify a rate of increase for the annual H₂O₂ burden, noting the rise beginning in the 1970s and growing from 14% in the 1970s to 34% in the 2000s, with respect to the burden in the 1950s. We find that changes in globally averaged annual mean H₂O₂ are most strongly correlated with changes in ozone, whereas over Antarctica, the strongest relationships for H₂O₂ trends occur with ozone photolysis rates. This aligns well with previous ice core and modeling studies in the literature. Lastly, using an additional parallel set of simulations holding ozone-depleting substances at 1950 levels, we find no discernible impact of stratospheric ozone depletion on global H₂O₂ burden changes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Characterization of an Open-Source, High-Efficiency, Easily-Reconfigurable Switch-mode Current Driver for Magnetic Resonance Imaging Applications</title>
<link href="https://hdl.handle.net/1721.1/156607" rel="alternate"/>
<author>
<name>Govindarajan, Ishaan</name>
</author>
<id>https://hdl.handle.net/1721.1/156607</id>
<updated>2024-09-04T03:31:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Characterization of an Open-Source, High-Efficiency, Easily-Reconfigurable Switch-mode Current Driver for Magnetic Resonance Imaging Applications
Govindarajan, Ishaan
B₀ shimming and local B₀ field control (collectively referred to as “local field control”) is a process employed by current and proposed Magnetic Resonance Imaging (MRI) techniques to yield faster and more detailed scans with greater diagnostic utility. These techniques require additional scanner hardware, specifically local field control coils and power electronic circuits to drive current into these coils (referred to as “current drivers”). While current driver designs exist today, they typically trade off efficiency and imaging noise. This work demonstrated a proof-of-concept switch-mode current driver with heatsink-free 10 A DC drive capability, &lt;25 µs step-response rise times with multiple loads, and acceptable disturbance rejection, all while maintaining imaging quality comparable to that of a linear driver. Design and source files for the driver are released under open-source licenses. Further areas for performance improvement have been identified, and work will continue to develop this proof-of-concept device into one with greater research and clinical utility.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric Deep Learning for Biomolecules</title>
<link href="https://hdl.handle.net/1721.1/156606" rel="alternate"/>
<author>
<name>Mitnikov, Ilan</name>
</author>
<id>https://hdl.handle.net/1721.1/156606</id>
<updated>2024-09-04T03:56:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Geometric Deep Learning for Biomolecules
Mitnikov, Ilan
Recent advancements in machine learning offer a promising pathway to deeper insights into biological phenomena. This manuscript explores the integration of geometric deep learning techniques to model biological structures. By embedding inductive biases based on geometry and physical laws, we aim to enhance our understanding and predictive capabilities in biomolecular systems. We present methods using equivariant neural networks for geometrical protein representation learning, molecular representation learning for electron density prediction, and scalable molecular dynamics simulations using stochastic interpolants.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Enhanced Signal Processing Toolbox for Electrical Energy Monitoring</title>
<link href="https://hdl.handle.net/1721.1/156601" rel="alternate"/>
<author>
<name>Langham, Aaron William</name>
</author>
<id>https://hdl.handle.net/1721.1/156601</id>
<updated>2024-09-04T03:57:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Enhanced Signal Processing Toolbox for Electrical Energy Monitoring
Langham, Aaron William
A nonintrusive load monitor (NILM) aims to perform power system analysis with a minimally invasive sensor profile. A wealth of literature exists for load identification and energy disaggregation under ideal, healthy conditions. However, a significant value proposition of nonintrusive load monitoring comes from fault detection and diagnostics. Early detection of electromechanical faults aids safety, reduces energy waste, and saves money. However, load identification and energy disaggregation are complicated by faulty or time-varying load operation profiles. This thesis extends previous thesis work by the author that addresses this issue. A new, “multistream” feature extraction approach to nonintrusive power monitoring is presented. This approach enables targeted electrical data analysis on non-stationary electrical systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Generative Agent Social Dilemmas</title>
<link href="https://hdl.handle.net/1721.1/156591" rel="alternate"/>
<author>
<name>Yocum, Julian R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156591</id>
<updated>2024-09-04T03:52:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mitigating Generative Agent Social Dilemmas
Yocum, Julian R.
In social dilemmas, individuals would be better off cooperating but fail to do so due to conflicting interests that discourage cooperation. Existing work on social dilemmas in AI has focused on standard agent design paradigms, most recently in the context of multi-agent reinforcement learning (MARL). However, with the rise of large language models (LLMs), a new design paradigm for AI systems has started to emerge—generative agents, in which actions performed by agents are chosen by prompting LLMs. This paradigm has seen recent success, such as Voyager, a highly capable Minecraft agent. In this work, we perform an initial study of outcomes that arise when deploying generative agents in social dilemmas. To do this, we build a multi-agent Voyager framework with a contracting and judgement mechanism based on formal contracting, which has been effective in mitigating social dilemmas in MARL. We then construct social dilemmas in Minecraft as the testbed for our open-source framework. Finally, we conduct preliminary experiments using our framework to provide evidence that contracting helps improve outcomes for generative agents in social dilemmas.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ECO-LENS Addressing Urban Biodiversity with Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/156590" rel="alternate"/>
<author>
<name>Montas, Enrique B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156590</id>
<updated>2024-09-04T03:03:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ECO-LENS Addressing Urban Biodiversity with Machine Learning
Montas, Enrique B.
The link between global climate change and biodiversity is well recognized. Human-driven destruction and degradation of ecosystems amplify the negative and complex impacts of climate change, increasing the strain on remaining ecosystems and wildlife. Therefore, it is essential for climate change mitigation efforts to include strategies that protect and conserve biodiversity, enhancing ecosystem productivity, resilience, adaptability, and sustainability. Identifying and prioritizing ecosystem functions that support key ecosystem services is crucial for targeted conservation actions, particularly in urban areas. Urban regions have doubled in size since 1992, and relative to 2020, they are expected to expand by 30% to 180% by 2100. Most of this growth will occur in the global south, in regions rich in biodiversity, and will impact global ecosystems through resource demands, pollution, and climate effects. Urban biodiversity management is an emerging discipline, with significant gaps in our understanding that must be closed to improve biodiversity conservation policies and management in urban areas in support of global biodiversity goals. As research on ecosystem services progresses, the importance of urban vegetation in promoting the sustainability of urban ecosystems and environments is increasingly recognized. Recently, remote sensing technology has become a valuable tool for obtaining detailed information and mapping urban vegetation, offering numerous benefits. Leveraging remote sensing tools in the form of satellite imagery and LiDAR enables extensive coverage of urban areas, providing an opportunity to evaluate biodiversity patterns across entire regions without disturbing ecosystems. While remote sensing has significantly improved our capacity to monitor landscape-level biodiversity losses, its application for assessing urban biodiversity has been limited. 
This research paper offers several ways of leveraging remote sensing and machine learning techniques to close the existing data gap. Through this paper, we showcase the potential use of Normalized Difference Vegetation Index (NDVI), satellite imagery, and LiDAR point clouds to provide data for urban biodiversity assessment, management, and conservation. By leveraging technologies and the data they provide, urban planners, policymakers, and conservation practitioners can make more informed decisions to protect and enhance urban biodiversity systematically.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Contextual Annotation Framework for Short Linear Motifs in Proteins</title>
<link href="https://hdl.handle.net/1721.1/156589" rel="alternate"/>
<author>
<name>Nyiam, Nten P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156589</id>
<updated>2024-09-04T03:02:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing a Contextual Annotation Framework for Short Linear Motifs in Proteins
Nyiam, Nten P.
Identifying and validating short linear motifs (SLiMs) is challenging due to their low sequence complexity and high prevalence across the proteome. Many false positives—sequences that match the pattern of the SLiM but are not involved in the biological functions typically associated with SLiMs—complicate this task. Distinguishing functional SLiMs from false positives requires an approach that incorporates not just sequence analysis but also biological, structural, and evolutionary context. This thesis presents a framework designed to annotate candidate SLiM motifs and differentiate true binders from false positives. The proposed framework uses several annotation metrics, including sequence conservation, post-translational modifications (PTMs), structural context derived from AlphaFold model scores, and the proximity of neighboring motifs. We evaluate each of these metrics using a test dataset sampled from the Eukaryotic Linear Motif (ELM) protein database. Our results indicate that sequence conservation has a consistent but moderate ability to differentiate true binders from unverified candidate motifs. Additionally, integrating AlphaFold’s structural data may help reduce false positives arising from predictions of disordered regions when sampling the motif data. We show that the tool currently underestimates the number of PTMs, suggesting a need for integrating additional PTM databases or predictive tools to improve motif annotation accuracy. Finally, we find that known functional SLiMs tend to cluster more closely than potential false positives, indicating that spatial proximity may help identify true SLiMs in motifs that serve specific roles. These findings highlight the importance of a context-based approach in SLiM annotation and open routes for future research and development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Scene and Object Generalization of Neural Policies Trained in Synthetic Environments</title>
<link href="https://hdl.handle.net/1721.1/156571" rel="alternate"/>
<author>
<name>Quach, Alex H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156571</id>
<updated>2024-09-04T04:01:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Robust Scene and Object Generalization of Neural Policies Trained in Synthetic Environments
Quach, Alex H.
Achieving generalization for autonomous robotic systems operating in real-world environments remains a significant challenge. Training robots solely in simulations can be limiting due to the "sim-to-real gap": discrepancies between simulated and real-world conditions. We present two novel approaches to enhance the generalization capabilities of autonomous quadrotor navigation systems when transferring from simulation to the real world. Our first approach integrates a 3D Gaussian Splatting radiance field with a quadrotor flight dynamics engine to generate high-quality, photorealistic training data. We design imitation learning schemes to train liquid time-constant neural networks on this data. Through rigorous evaluations, we demonstrate successful zero-shot transfer of the learned navigation policies from simulation to real-world flight, exhibiting generalization to complex, multi-step tasks in novel indoor and outdoor environments. Notably, we showcase autonomous quadrotor policies trained entirely in simulation that can be directly deployed in the real world without fine-tuning. Our method leverages the complementary strengths of photorealistic rendering and irregularly time-sampled data augmentation for enhancing generalization with liquid neural networks. Additionally, we compose off-the-shelf vision-and-language models with neural policies, enabling real-world generalization to complex objects and instructions unseen during training. To the best of our knowledge, this is the first report of zero-shot sim-to-real transfer and semantic generalization for autonomous quadrotor navigation using imitation learning.
Our key contributions include: (1) a dynamics-augmented Gaussian splatting simulator, (2) implicit closed-loop augmentation via expert trajectory design, (3) robustifying liquid neural networks through irregularly sampled data, (4) extensive simulation and real-world validation, (5) demonstrating zero-shot real-world transfer capabilities, and (6) enabling zero-shot instruction generalization to novel objects using multimodal representations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/156570" rel="alternate"/>
<author>
<name>Gupta, Diptasri</name>
</author>
<id>https://hdl.handle.net/1721.1/156570</id>
<updated>2024-09-04T03:02:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence
Gupta, Diptasri
This thesis investigates the impact of automation and advanced technologies, specifically focusing on Large Language Models (LLMs), on traditional employment structures in the modern workplace. Historically, the master-apprentice model has been integral to vocational training across various industries, facilitating the transfer of knowledge, skills, and professional ethics from one generation to the next. However, the rise of AI and machine learning challenges the viability of this model, raising critical questions about the nature and quality of mentorship and skill acquisition in work environments. Part of a broader research initiative led by Professors Atkin, Li, and Beraja, this study explores the hypothesis that apprentices promoted without foundational mentorship may struggle in their advanced roles, potentially reducing long-term productivity gains from AI. Utilizing a comprehensive dataset from Brazilian Social Security records (RAIS) spanning 2003-2015, the research focuses on industries with a clear apprentice-master dynamic, such as finance, legal, and insurance sectors. By analyzing job code changes and pay adjustments, the study aims to correlate technological influx within companies with the productivity of workers promoted to master roles, using pay as a proxy for productivity. Findings indicate that while technological influx does not significantly affect immediate post-promotion wages, it negatively impacts wages one and two years after promotion, suggesting potential wage stagnation or reduction. Additionally, technological influx initially increases promotion likelihood and stabilizes employee retention, though longer-term effects are less clear. These results imply that apprentices are more likely to be promoted and retained in the short term but face reduced wage growth and potentially diminished performance. 
The study concludes that technological advancements can alter the traditional apprenticeship model, affecting skill acquisition and long-term productivity. Recommendations are provided for educators, industry leaders, and policymakers on optimizing apprenticeship models in an increasingly automated world. Further research will involve AI-focused evaluations to observe the real-world impact of AI integration on team dynamics, productivity, and skill development, aiming to refine our understanding of its effects on employment structures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ICσOS: σOS for Intercloud Environments</title>
<link href="https://hdl.handle.net/1721.1/156569" rel="alternate"/>
<author>
<name>Chen, Kevin S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156569</id>
<updated>2024-09-04T03:42:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ICσOS: σOS for Intercloud Environments
Chen, Kevin S.
The cloud computing market offers myriad service offerings with diverse performance guarantees, yet tenants who want to explore this diversity are often punished for doing so: vendor lock-in and the lack of cross-cloud compatibility make it difficult for tenants to migrate their workloads to other clouds, or to utilize multiple clouds in an interconnected manner. This thesis presents ICσOS, an intercloud operating system that enables tenants to interact with multiple clouds’ infrastructure as a single interconnected system with minimal additional management overhead. ICσOS extends σOS, a cloud operating system that provides per-tenant namespaces via the novel realm abstraction, with intercloud features, and leverages namespaces to allow tenants to perform intercloud communication, service discovery, workload placement, coordination, and more without regard to cluster-level management details. ICσOS also introduces placement policies, a framework for intercloud workload placement that enables tenants to express fine-grained placement criteria that can be dynamically updated as applications run. An evaluation of ICσOS and placement policies on a distributed image-resizing application demonstrates ICσOS’s capabilities as an intercloud platform, as well as its ability to quickly and effectively respond to situations where intercloud placement behavior changes frequently.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining Optimal Halftone Angles for CMYK Printing</title>
<link href="https://hdl.handle.net/1721.1/156568" rel="alternate"/>
<author>
<name>Monsalve Rodriguez, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/156568</id>
<updated>2024-09-04T03:31:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Determining Optimal Halftone Angles for CMYK Printing
Monsalve Rodriguez, Catalina
CMYK halftone prints are all around us, yet the halftone angles used to generate these prints are traditionally set to specific values without substantial documentation explaining why these angles are optimal. Investigating the optimization of these angles is important for enhancing print quality and minimizing visual artifacts, which can significantly impact the visual appeal and accuracy of printed materials, especially in low-resolution printing such as relief and screen printing techniques. This research investigates optimal halftone angles under low-resolution conditions. The algorithm for this system generates low-resolution images from an input image, aiming to cover the full range of permutations of possible angles for each halftone in discrete steps of 15°. We performed this on a varied range of input images and computed a similarity score between each output image and its original input image to assess a specific angle permutation’s performance. The study led to the formulation and validation of two hypotheses: 1) images with distinct halftone angles for each color channel generally achieve higher similarity scores than those with repeated angles; 2) permutations with the black halftone oriented at 0° benefit images with a high prevalence of black pixels. This thesis contributes to understanding halftone angle optimization in CMYK printing, offering practical guidelines for improving print quality and reducing visual artifacts, thus benefiting the printing industry and its diverse applications.
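The exhaustive search described above can be sketched as follows. This is an illustrative outline only: the `score` function here is a placeholder that merely encodes hypothesis 1 (distinct angles score higher), not the image-similarity metric actually used in the thesis.

```python
# Sketch of the angle search: every assignment of a 15-degree-step halftone
# angle to the C, M, Y, K channels is enumerated and scored. The scorer is a
# hypothetical stand-in for the thesis's image-similarity metric.
from itertools import product

ANGLES = range(0, 180, 15)  # candidate screen angles in 15-degree steps

def score(perm):
    # Placeholder: rewards permutations whose four channel angles are all
    # distinct, mirroring hypothesis 1; a real scorer would halftone an image
    # with these angles and compare it to the original.
    return len(set(perm))

candidates = list(product(ANGLES, repeat=4))  # all (C, M, Y, K) assignments
best = max(candidates, key=score)
print(len(candidates))   # 12**4 = 20736 permutations
print(len(set(best)))    # the best placeholder score uses four distinct angles
```

Even this brute-force enumeration stays tractable because the 15° discretization limits each channel to twelve candidate angles.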
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Large Language Models (LLMs) for Automated Extraction and Processing of Complex Ordering Forms</title>
<link href="https://hdl.handle.net/1721.1/156567" rel="alternate"/>
<author>
<name>Daqqah, Bilal H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156567</id>
<updated>2024-09-04T03:09:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Leveraging Large Language Models (LLMs) for Automated Extraction and Processing of Complex Ordering Forms
Daqqah, Bilal H.
Data extraction from business documents is a critical but under-exploited area capable of unlocking significant value from vast document archives. Traditional methods relying on manual intervention or outsourcing are inefficient, error-prone, and costly, and commercial Deep Learning-based and OCR solutions still struggle with highly unstructured documents. This thesis explores the use of Large Language Models (LLMs) to automate the extraction and processing of ordering forms and procurement documents in collaboration with SiliconExperts. These documents contain complex codes used in electronic component procurement, which guide the manufacture and specification of parts. We developed an end-to-end pipeline comprising four key modules: Page Classification, OCR and Table Extraction, LLM Inference, and Code Combination Generation. Two approaches for key-value extraction were compared: one-shot prompting with in-context learning using GPT-4 Turbo with Vision (GPT-4V) and a fine-tuned GPT-3.5 model, in which the GPT-4V approach demonstrated superior performance. The pipeline effectively generated correct code combinations with high accuracy, although data quality issues impacted precision and performance. This research highlights the potential of LLMs to transform document processing workflows, bridging the gap between academic advancements and practical business applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Cycle-Level Verification of Constant-Time Cryptography</title>
<link href="https://hdl.handle.net/1721.1/156566" rel="alternate"/>
<author>
<name>Xu, Jessica Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156566</id>
<updated>2024-09-04T03:22:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Cycle-Level Verification of Constant-Time Cryptography
Xu, Jessica Y.
Cryptographic primitives–hash functions, symmetric key encryption algorithms, asymmetric key exchange algorithms, and more–are used everywhere to achieve security in modern computing. Since these algorithms have complicated, math-heavy implementations, they are typically used through cryptographic library functions. However, many timing side-channel attacks, which leak information when execution time depends on secrets, have been found in popular cryptographic libraries, such as OpenSSL. Formal verification aims to rule out timing side channels in cryptographic software. This thesis presents Quake, a framework for verifying cryptographic library functions are constant-time for a specific hardware implementation, regardless of where the code is located in memory. Quake represents the location of code in memory using symbolic addresses and introduces a ROM model that gets concrete memory data from symbolic addresses. This thesis evaluates Quake and demonstrates that it can detect address-dependent timing behavior and does so in a reasonable amount of time.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Induction Drive for a Magnetically Levitated Control Moment Gyroscope</title>
<link href="https://hdl.handle.net/1721.1/156565" rel="alternate"/>
<author>
<name>Gershon, Levi</name>
</author>
<id>https://hdl.handle.net/1721.1/156565</id>
<updated>2024-09-04T03:55:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hybrid Induction Drive for a Magnetically Levitated Control Moment Gyroscope
Gershon, Levi
In order to support future astronautical missions, in light of the rapid growth of miniaturized small satellites, lower-jitter, higher-torque-density multi-axis attitude control systems (ACS) will be needed [1]. This thesis aims to create a hybrid-drive reaction sphere with a spherical, 1.5”-diameter, diametrically magnetized grade N42 NdFeB rotor. A permanent magnet drive is used for vertical translation control, and an induction drive is used to spin the magnet about its axis of magnetization. Future work can then add more axes of permanent magnet drives to enable control about the other translation and rotation axes, such as was done in [2], setting the stage for full six-axis control of a monolithic rotor.&#13;
In this work, analytic models for both magnetic levitation of the rotor and the dipole-field induction are developed, leveraging previously reported models. Additionally, the gyroscopic precession potentially induced by a rotating dipole field is analyzed and determined to be negligible. A benchtop prototype of the system was designed, fabricated, and assembled, where a solenoid is used to magnetically levitate the rotor using its magnetization, and an induction motor is used to spin the rotor about its axis of magnetization. An optical sensor previously developed for position sensing was adapted for spin measurements at high speeds by creating an optical encoder pattern on the rotor. Speeds up to 401 RPM and a torque up to 3.0 μNm were measured, with no significant nutation observed, indicating such a hybrid drive may be a viable architecture for future reaction sphere ACS designs requiring both rotor simplicity and 6 axes of control.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory-Scale Thermal Energy Grid Storage (TEGS) Prototype</title>
<link href="https://hdl.handle.net/1721.1/156562" rel="alternate"/>
<author>
<name>Buznitsky, Kyle Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/156562</id>
<updated>2024-09-04T04:03:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Laboratory-Scale Thermal Energy Grid Storage (TEGS) Prototype
Buznitsky, Kyle Joseph
Grid-scale long duration energy storage will be necessary to maintain grid reliability in the US and beyond as intermittent renewables become the dominant source of electricity generation. An appealing long duration energy storage technology is thermal energy storage due to its low energy-based cost. One embodiment of thermal energy storage is the thermal energy grid storage (TEGS) concept, which is an envisioned graphite-based thermal energy storage system cycling between 1900-2400°C. Such a system would pump molten tin as a heat transfer fluid and use thermophotovoltaics to convert the thermal energy back to electricity. While many of these individual components have been demonstrated in isolation, there has yet to be a system which combines all these technologies into a working prototype. The focus of this work is creating this prototype and operating it at an intermediate temperature to uncover and overcome any system integration challenges that arise. In this work, a laboratory-scale TEGS prototype was designed and tested at temperatures up to 1000°C, uncovering challenges that are applicable to many high-temperature processes. By doing so, this work hopes to identify design criteria for similar high-temperature systems that must overcome some of the same challenges.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Force Feedback and Tactile Sensing for Robotic Teleoperation of Contact Rich Manipulation Tasks</title>
<link href="https://hdl.handle.net/1721.1/156561" rel="alternate"/>
<author>
<name>Karpoor, Shreya S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156561</id>
<updated>2024-09-04T04:05:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Force Feedback and Tactile Sensing for Robotic Teleoperation of Contact Rich Manipulation Tasks
Karpoor, Shreya S.
Imitation learning has shown promising results in teaching robots new skills. We propose augmenting the ALOHA bimanual teleoperation system with haptic feedback to obtain higher quality expert demonstrations. We add two types of haptic feedback, force feedback and cutaneous feedback, in both a real and a simulated teleoperation system. Additionally, we propose adding tactile sensors to observe the impact of tactile data on imitation learning models for fine manipulation tasks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Volterra System Analysis for an Electrochemical Sensor</title>
<link href="https://hdl.handle.net/1721.1/156552" rel="alternate"/>
<author>
<name>Iqbal, Billal</name>
</author>
<id>https://hdl.handle.net/1721.1/156552</id>
<updated>2024-09-04T03:07:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Volterra System Analysis for an Electrochemical Sensor
Iqbal, Billal
Current biological methods for quantifying bacterial and fungal populations are time and labour intensive, whilst remaining expensive to automate. A potential solution to this problem is an electrochemical sensor, which applies a stochastic voltage across a liquid medium and measures the resultant current flow. This data can then be used to model the liquid’s electrochemical interactions and monitor it for bacterial growth and spoilage. Linear dynamic impedance models have previously been explored for this. However, the ability to capture the nonlinear effects observed at higher voltages can provide greater insights into the liquid’s properties. This is difficult to achieve with neural networks, which offer accurate predictive capability but little insight into the system. A different strategy is to model the liquid using a Volterra series representation. This work documents the integration of Volterra system identification capabilities within the sensor, its performance when modelling different liquid media, and the modifications made to the sensor for the applications tested.
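A second-order discrete Volterra series of the kind referenced above predicts the output as a linear convolution term plus a quadratic kernel term that captures nonlinear voltage effects. The sketch below is illustrative; the kernel values are invented and are not the identified kernels from this work.

```python
# Minimal second-order discrete Volterra series:
#   y[n] = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1][k2] x[n-k1] x[n-k2]
# Kernel values below are illustrative only.

def volterra2(x, h1, h2):
    """Evaluate a truncated second-order Volterra series on input sequence x."""
    M = len(h1)
    y = []
    for n in range(len(x)):
        lin = sum(h1[k] * x[n - k] for k in range(M) if n - k >= 0)
        quad = sum(h2[k1][k2] * x[n - k1] * x[n - k2]
                   for k1 in range(M) for k2 in range(M)
                   if n - k1 >= 0 and n - k2 >= 0)
        y.append(lin + quad)
    return y

h1 = [1.0, 0.5]                  # first-order (linear) kernel
h2 = [[0.1, 0.0], [0.0, 0.0]]    # second-order kernel: a pure x[n]^2 term
y = volterra2([1.0, 2.0, 0.0], h1, h2)
print(y)
```

System identification then amounts to fitting the kernels h1 and h2 from measured voltage and current records, rather than the weights of an opaque network.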
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The synthesis of ethylene urea</title>
<link href="https://hdl.handle.net/1721.1/156420" rel="alternate"/>
<author>
<name>Hansen, Floyd Allan.</name>
</author>
<id>https://hdl.handle.net/1721.1/156420</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">The synthesis of ethylene urea
Hansen, Floyd Allan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1939; Includes bibliographical references (leaves 32-33).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vapor formation of benzene in an immersed orifice</title>
<link href="https://hdl.handle.net/1721.1/156419" rel="alternate"/>
<author>
<name>Yeh, Hsuan.</name>
</author>
<author>
<name>Zhao, Yaodong.</name>
</author>
<id>https://hdl.handle.net/1721.1/156419</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1943-01-01T00:00:00Z</published>
<summary type="text">Vapor formation of benzene in an immersed orifice
Yeh, Hsuan.; Zhao, Yaodong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1943; Includes bibliographical references (leaves 25-26).
</summary>
<dc:date>1943-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the characteristics of small waves</title>
<link href="https://hdl.handle.net/1721.1/156362" rel="alternate"/>
<author>
<name>Allen, John U.</name>
</author>
<author>
<name>Michel, John F.</name>
</author>
<id>https://hdl.handle.net/1721.1/156362</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Measurement of the characteristics of small waves
Allen, John U.; Michel, John F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1947; Bibliography: leaf 50.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial robustness without perturbations</title>
<link href="https://hdl.handle.net/1721.1/156344" rel="alternate"/>
<author>
<name>Rodríguez Muñoz, Adrán</name>
</author>
<id>https://hdl.handle.net/1721.1/156344</id>
<updated>2024-08-22T03:08:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adversarial robustness without perturbations
Rodríguez Muñoz, Adrán
Models resistant to adversarial perturbations are stable around the neighbourhoods of input images, such that small changes, known as adversarial attacks, cannot dramatically change the prediction. Currently, this stability is obtained with Adversarial Training, which directly teaches models to be robust by training on the perturbed examples themselves. In this work, we show that simply regularizing the input-gradients of unperturbed examples achieves surprisingly similar performance. Regularizing the input-gradient norm is commonly believed to be significantly worse than Adversarial Training. Our experiments determine that the performance of Gradient Norm regularization critically depends on the smoothness of the model’s activation functions, and that it is in fact highly performant on modern vision transformers, which natively use smooth GELUs rather than piecewise-linear ReLUs. On ImageNet-1K, Gradient Norm regularization achieves more than 90% of the performance of state-of-the-art Adversarial Training with PGD-3 (52% vs. 56%) with 60% of the training time and without complex inner-maximization. Further experiments shed light on additional properties relating model robustness and input-gradients of unperturbed images, such as asymmetric color statistics. Surprisingly, we also show significant adversarial robustness may be obtained by simply conditioning gradients to focus on image edges, without explicit regularization of the norm.
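The objective described above adds a penalty on the norm of the loss's gradient with respect to the input. The toy sketch below illustrates the idea on a one-layer logistic model, where that input gradient has a closed form; it is not the thesis implementation, and the weights and inputs are invented.

```python
# Toy illustration of input-gradient-norm regularization: the training
# objective is loss(x) + lam * ||d loss / d x||^2, penalizing sensitivity
# of the loss to input perturbations. Model and data are illustrative.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def objective(w, x, lam):
    """Cross-entropy for label 1 plus lam times the squared input-gradient norm."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    loss = -math.log(p)
    # For this model, d loss / d x_i = -(1 - p) * w_i in closed form.
    grad_x = [-(1.0 - p) * wi for wi in w]
    penalty = sum(g * g for g in grad_x)
    return loss + lam * penalty

w = [2.0, -1.0]
x = [0.5, 0.2]
print(objective(w, x, lam=0.0) < objective(w, x, lam=0.1))  # prints True
```

For deep networks the input gradient has no closed form, so frameworks compute it by backpropagation through the input; the penalty term itself is unchanged.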
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Score Distillation via DDIM Inversion</title>
<link href="https://hdl.handle.net/1721.1/156343" rel="alternate"/>
<author>
<name>Lukoianov, Artem S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156343</id>
<updated>2024-08-22T03:55:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Score Distillation via DDIM Inversion
Lukoianov, Artem S.
While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, in this paper we prove that the image guidance used in Score Distillation can be understood as the velocity field of a 2D denoising generative process, up to the choice of a noise term. In particular, after a change of variables, SDS resembles a high-variance version of Denoising Diffusion Implicit Models (DDIM) with a differently-sampled noise term: SDS introduces noise i.i.d. randomly at each step, while DDIM infers it from the previous noise predictions. This excessive variance can lead to over-smoothing and prevent the algorithm from generating realistic outputs. We show that a better noise approximation can be recovered by inverting DDIM in each SDS update step. This modification makes SDS’s generative process for 2D images identical to DDIM, up to our change of variables. In 3D, it removes over-smoothing, preserves higher-frequency detail, and brings the generation quality closer to that of 2D samplers. Experimentally, our method achieves better or similar 3D generation quality compared to other works that improve SDS, all without training additional neural networks or 3D supervision. Our findings bridge the gap between 2D and 3D asset generation.
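For reference, the Score Distillation Sampling update discussed above is commonly written as follows. The notation is the standard one from the score-distillation literature, not taken from this abstract; w(t) is a weighting, g(θ) the differentiable renderer with x = g(θ), and ε_φ the pretrained denoiser.

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\bigl(\epsilon_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta} \right],
\qquad x_t = \alpha_t\, x + \sigma_t\, \epsilon,\quad \epsilon \sim \mathcal{N}(0, I)
```

In this notation, SDS draws a fresh ε at every update; the modification described above instead recovers ε by DDIM inversion, reducing the variance of the noise term in the update.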
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep and Dynamic Metabolic and Structural Imaging in Living Tissues</title>
<link href="https://hdl.handle.net/1721.1/156342" rel="alternate"/>
<author>
<name>Liu, Kunzan</name>
</author>
<id>https://hdl.handle.net/1721.1/156342</id>
<updated>2024-08-22T03:36:34Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Deep and Dynamic Metabolic and Structural Imaging in Living Tissues
Liu, Kunzan
Label-free imaging through two-photon autofluorescence (2PAF) of NAD(P)H allows for non-destructive and high-resolution visualization of cellular activities in living systems. However, its application to thick tissues and organoids has been restricted by a limited penetration depth of under 300µm, largely due to tissue scattering at the typical excitation wavelength (∼750nm) required for NAD(P)H. Here, we demonstrate that the imaging depth for NAD(P)H can be extended to over 700µm in living engineered human multicellular microtissues by adopting multimode fiber (MMF)-based low-repetition-rate high-peak-power three-photon (3P) excitation of NAD(P)H at 1100nm. This is achieved by delivering over 0.5MW peak power in the band of 1100±25nm through adaptively modulating multimodal nonlinear pulse propagation with a compact fiber shaper. Moreover, the 8-fold increase in pulse energy at 1100nm enables faster imaging of monocyte behaviors in the living multicellular models. These results represent a significant advance for deep and dynamic metabolic and structural imaging of intact living biosystems. The modular design (MMF with a slip-on fiber shaper) is anticipated to allow wide adoption of this methodology for demanding in vivo and in vitro imaging applications, including cancer research, autoimmune diseases, and tissue engineering.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Players with Bounded Randomness Capabilities</title>
<link href="https://hdl.handle.net/1721.1/156341" rel="alternate"/>
<author>
<name>Orzech, Edan</name>
</author>
<id>https://hdl.handle.net/1721.1/156341</id>
<updated>2024-08-22T04:00:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Players with Bounded Randomness Capabilities
Orzech, Edan
In this thesis I study the effect of bounded randomness capabilities on the outcomes of games, and their payoffs to the players. I study this subject from two perspectives. The first perspective is the ability to share randomness across team members playing against an opposing team. The second perspective is the capability to store the underlying distribution of the mixed strategy a player intends to play.&#13;
&#13;
The first perspective is the ability to share randomness across team members playing against an opposing team. I consider team zero-sum network congestion games played between a team of n agents and a team of k interceptors over a graph G.&#13;
The agents aim to minimize their collective cost of sending traffic over paths, while the interceptors aim to maximize the collective cost by adding tolls or congestion to road segments. I consider two cases, the correlated case where agents have access to a shared source of randomness, and the uncorrelated case, where each agent has access to only its own source of randomness. I show that the additional cost that the agents have to incur due to being unable to share random bits is bounded by O(min(m_c(G),n)), where m_c(G) is the mincut size of G.&#13;
&#13;
The second perspective is the capability to store the underlying distribution of the mixed strategy a player intends to play. I define a measure of the complexity of finite probability distributions and study the complexity required to play Nash equilibria in finite two-player n × n games with rational payoffs. &#13;
My central results show that there exist games with an exponential vs. linear gap in the complexity of the mixed distributions that the two players play in the Nash equilibrium, which in these games is unique. This gap induces asymmetries in the amounts of space required by players to represent and sample from the corresponding distributions using known state-of-the-art sampling algorithms. I also establish exponential upper and lower bounds on the complexity of Nash equilibria in normal-form games.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peer-to-Peer Group Communication for City-Scale Mesh Networks</title>
<link href="https://hdl.handle.net/1721.1/156340" rel="alternate"/>
<author>
<name>Sussman, William A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156340</id>
<updated>2024-08-22T03:22:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Peer-to-Peer Group Communication for City-Scale Mesh Networks
Sussman, William A.
The Internet has become extremely centralized. The benefits of centralization have thus far outweighed the drawbacks, but users today are much more concerned about privacy, and reachability is increasingly threatened by natural disasters, political repression, cyberattacks, and human error. CityMesh provides an answer to this problem, constructing a decentralized mesh network out of wireless access points. To test our unicast routing protocol, we built a discrete-event network simulator using SimPy. However, we make several simplifying assumptions, and unicast is not sufficient for many applications. In this thesis, I show that our simulator nevertheless achieves 67.4% correlation with real data that we collected, and I generalize our simulator for multicast. Specifically, I compose our unicast primitive into multicast trees using three different topologies, and surprisingly find that Steiner trees perform worse than minimum spanning trees on average.
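One of the multicast topologies compared above composes unicast links into a minimum spanning tree. The sketch below builds an MST with Prim's algorithm over a toy mesh; the graph and edge weights are invented stand-ins for measured unicast link costs, not data from CityMesh.

```python
# Illustrative minimum-spanning-tree multicast topology: Prim's algorithm
# over a toy mesh graph, with edge weights standing in for unicast link costs.
import heapq

def mst_cost(graph, start):
    """Total weight of a minimum spanning tree; graph maps node -> {neighbor: weight}."""
    visited = {start}
    frontier = [(w, v) for v, w in graph[start].items()]
    heapq.heapify(frontier)
    total = 0
    while frontier and len(visited) < len(graph):
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        total += w
        for u, wu in graph[v].items():
            if u not in visited:
                heapq.heappush(frontier, (wu, u))
    return total

mesh = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}
print(mst_cost(mesh, "a"))  # total 4: edges a-b, b-c, c-d
```

A Steiner tree for a subset of receivers can only be cheaper than or equal to the MST over all nodes in theory; the finding above is that, under the simulator's cost model, Steiner trees nevertheless performed worse on average.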
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examining LLMs in Economic Settings</title>
<link href="https://hdl.handle.net/1721.1/156339" rel="alternate"/>
<author>
<name>Ross, Jillian A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156339</id>
<updated>2024-08-22T03:40:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Examining LLMs in Economic Settings
Ross, Jillian A.
Humans are not homo economicus (i.e., rational economic beings). We exhibit systematic behavioral biases such as loss aversion, anchoring, framing, etc., which lead us to make suboptimal economic decisions. Insofar as such biases may be embedded in the text data on which large language models (LLMs) are trained, to what extent are LLMs prone to the same behavioral biases? Understanding these biases in LLMs is crucial for deploying LLMs to support human decision-making. To enable the responsible deployment of LLMs, I propose economic alignment. Economic alignment is a specific form of AI alignment that provides a critical perspective from which to interrogate what human preferences we would like to incorporate into LLM decisions. To illustrate the power of economic alignment, I systematically study the economic decision-making behaviors of LLMs through utility theory, a paradigm at the core of modern economic theory. I apply experimental designs from human studies to LLMs and find that they are neither entirely human-like nor entirely economicus-like. Specifically, I find that LLMs generally exhibit stronger inequity aversion, stronger loss aversion, weaker risk aversion, and stronger time discounting compared to human subjects. I further find that most LLMs struggle to maintain consistent economic behavior across settings. Finally, I present a case study that examines how we can intervene through prompting to better align LLMs with economic goals.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Sleep Assessment from Nocturnal Breathing and Its Applications for Contactless Monitoring</title>
<link href="https://hdl.handle.net/1721.1/156338" rel="alternate"/>
<author>
<name>Li, Chao</name>
</author>
<id>https://hdl.handle.net/1721.1/156338</id>
<updated>2024-08-22T03:21:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automatic Sleep Assessment from Nocturnal Breathing and Its Applications for Contactless Monitoring
Li, Chao
The ability to assess sleep at home, capture sleep stages, and detect the occurrence of apnea (without on-body sensors) simply by analyzing the radio waves bouncing off people’s bodies while they sleep is quite powerful. Such a capability would allow for longitudinal data collection in patients’ homes, informing our understanding of sleep and its interaction with various diseases and their therapeutic responses, both in clinical trials and routine care. In this work, we develop an advanced machine-learning algorithm for passively monitoring sleep and nocturnal breathing from radio waves reflected off people while asleep. Validation results in comparison with the gold standard (i.e., polysomnography) (n=849) demonstrate that the model captures the sleep hypnogram (with an accuracy of 81% for 30-second epochs categorized into Wake, Light Sleep, Deep Sleep, or REM), detects sleep apnea (AUROC = 0.88), and measures the patient’s Apnea-Hypopnea Index (ICC=0.95; 95% CI = [0.93, 0.97]). Notably, the model exhibits equitable performance across race, sex, and age. Moreover, the model uncovers informative interactions between sleep stages and a range of diseases including neurological, psychiatric, cardiovascular, and immunological disorders. These findings not only hold promise for clinical practice and interventional studies but also underscore the significance of sleep as a fundamental component in understanding and managing various diseases.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Offline Reward Learning from Human Demonstrations and Feedback: A Linear Programming Approach</title>
<link href="https://hdl.handle.net/1721.1/156337" rel="alternate"/>
<author>
<name>Kim, Kihyun</name>
</author>
<id>https://hdl.handle.net/1721.1/156337</id>
<updated>2024-08-22T03:30:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Offline Reward Learning from Human Demonstrations and Feedback: A Linear Programming Approach
Kim, Kihyun
In many complex sequential decision-making tasks, there is often no known explicit reward function, and the only information available is human demonstrations and feedback data. To infer and shape the underlying reward function from this data, two key methodologies have emerged: inverse reinforcement learning (IRL) and reinforcement learning from human feedback (RLHF). Despite the successful application of these reward learning techniques across a wide range of tasks, a significant gap between theory and practice persists. This work aims to bridge this gap by introducing a novel linear programming (LP) framework tailored for offline IRL and RLHF. Most previous work in reward learning has employed the maximum likelihood estimation (MLE) approach, relying on prior knowledge or assumptions about decision or preference models. However, such dependencies can lead to robustness issues, particularly when there is a mismatch between the presupposed models and actual human behavior. In response to these challenges, recent research has shifted toward recovering a feasible reward set, a general set of rewards where the expert policy is optimal. In line with this evolving perspective, we focus on estimating the feasible reward set in an offline context. Utilizing pre-collected trajectories without online exploration, our framework estimates a feasible reward set from the primal-dual optimality conditions of a suitably designed LP, and offers an optimality guarantee with provable sample efficiency. One notable feature of our LP framework is the convexity of the resulting solution set, which facilitates the alignment of reward functions with human feedback, such as pairwise trajectory comparison data, while maintaining computational tractability and sample efficiency. Through analytical examples and numerical experiments, we demonstrate that our framework has the potential to outperform the conventional MLE approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Control and Information Exchange for Improved Flight Autonomy of Hybrid Powertrain Drones</title>
<link href="https://hdl.handle.net/1721.1/156336" rel="alternate"/>
<author>
<name>Kosanic, Miroslav</name>
</author>
<id>https://hdl.handle.net/1721.1/156336</id>
<updated>2024-08-22T03:06:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Distributed Control and Information Exchange for Improved Flight Autonomy of Hybrid Powertrain Drones
Kosanic, Miroslav
This work addresses the integration of mechanical dynamics and powertrain energy conversion dynamics in Unmanned Aerial Vehicles (UAVs), focusing on hexacopters with hybrid powertrains. The goal is to maximize fuel savings through powertrain regulation. One of the factors that influences optimal internal combustion engine (ICE) operation is the passively managed battery, which should serve as a fast supplementary power source. When the powertrain faces disturbances, ICE efficiency may decrease. The question is whether coordinated information exchange through distributed or decentralized control of the battery can outperform centralized powertrain control, which treats the battery as a disturbance in a component-isolated approach. The core contributions of this thesis include developing a novel modeling approach that integrates energy conversion dynamics with the mechanical dynamics of the drone. A second contribution estimates the parameters of the nonlinear dynamics from flight-mission data and establishes theoretical conditions under which the system exhibits time-scale separation. Using an average-parameter model, a composite Linear Quadratic Regulator (LQR) policy with predictive control was implemented and simulated during the cruise phase of flight, achieving 4.5% fuel savings by recognizing battery disturbances. This result from the centralized approach is compared to the thesis's third contribution, distributed and decentralized control of the battery; the two differ in that decentralized control is achieved through local information exchange, while distributed components can obtain needed information from components to which they are not directly connected. Both approaches increase the supplementary power drawn from the battery, reducing the demand impact on the generator and ICE and saving fuel.
Distributed control helps aggressively but without proper coordination, resulting in non-cooperative control, since it has no information about the power the generator needs. The decentralized approach receives the supplementary power information from the generator; because coordination is embedded in this information, it achieves cooperative control. For a fully charged battery during the cruise phase of flight, distributed control saved approximately 34.56% of the initial fuel, while decentralized control saved 50.05% of the initial fuel in the reservoir.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mobile Underwater Backscatter Networking</title>
<link href="https://hdl.handle.net/1721.1/156335" rel="alternate"/>
<author>
<name>Wang, Purui</name>
</author>
<id>https://hdl.handle.net/1721.1/156335</id>
<updated>2024-08-22T04:01:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mobile Underwater Backscatter Networking
Wang, Purui
Underwater backscatter is a recently introduced technology for ultra-low-power underwater networking. Despite advances in this technology, existing systems are limited to static environments and cannot operate reliably under mobility. This thesis presents EchoRider, the first system that enables reliable underwater backscatter networking under mobility. EchoRider’s design introduces three new components. The first is a robust, chirp-based downlink protocol that brings the benefits of LoRa wireless networks to underwater backscatter, while accounting for the ultra-low-power nature of the backscatter sensor nodes. The second is a novel NACK-based backscatter retransmission algorithm, which enables reliable and efficient underwater backscatter. The third is a Doppler-resilient backscatter decoding pipeline on the uplink that features adaptive equalization, polar coding, and an equalizer retraining mechanism. We implemented an end-to-end prototype of EchoRider and compared it to a state-of-the-art baseline. Our evaluation across more than 1,200 experimental trials in real-world environments demonstrates that EchoRider outperforms the state-of-the-art baseline by more than 160× in BER under mobility, and that it can sustain typical underwater goodput (around 0.5 kbps) in scenarios where the baseline’s goodput drops to zero at speeds as low as 0.1 m/s. Finally, we demonstrate EchoRider in an example application involving an underwater mobile drone and a backscatter sensor node.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MeMo: Meaningful, Modular Controllers via Noise Injection</title>
<link href="https://hdl.handle.net/1721.1/156334" rel="alternate"/>
<author>
<name>Tjandrasuwita, Megan</name>
</author>
<id>https://hdl.handle.net/1721.1/156334</id>
<updated>2024-08-22T03:15:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">MeMo: Meaningful, Modular Controllers via Noise Injection
Tjandrasuwita, Megan
Robots are often built from standardized assemblies (e.g., arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer. We also show that the modules help in task transfer. On both structure and task transfer, MeMo achieves improved training efficiency compared to graph neural network and Transformer baselines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Next generation tools for smart electron microscopy</title>
<link href="https://hdl.handle.net/1721.1/156333" rel="alternate"/>
<author>
<name>Sawmya, Shashata</name>
</author>
<id>https://hdl.handle.net/1721.1/156333</id>
<updated>2024-08-22T03:48:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Next generation tools for smart electron microscopy
Sawmya, Shashata
Smart Electron Microscopy (SmartEM) is a new generation of EM imaging technology that promises to revolutionize microscopy. In this research, we explore the integration of advanced techniques to enhance this technology further. These include alternative characterization of high-resolution rescanning, cutting-edge vision models, incorporation of 3D information, and vision transformers for improved neuronal segmentation and pipeline speedup. Our goal is to develop tools that improve the existing SmartEM pipeline, making it more versatile and effective for deployment in various practical settings.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Deployment Algorithms for Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156332" rel="alternate"/>
<author>
<name>Xiao, Guangxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/156332</id>
<updated>2024-08-22T03:30:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Efficient Deployment Algorithms for Large Language Models
Xiao, Guangxuan
Large language models (LLMs) have achieved impressive performance on various natural language tasks. However, their massive computational and memory requirements hinder widespread deployment. Additionally, deploying them on extensive inputs presents efficiency and accuracy challenges.
This proposal introduces two techniques to enable efficient and accurate quantization and streaming deployment of LLMs, facilitating their application in real-world systems with limited resources. First, we develop SmoothQuant, an accurate post-training 8-bit quantization method for both weights and activations in LLMs up to 530B parameters. By smoothing outliers in activations, SmoothQuant enables the use of efficient INT8 kernels on all matrix multiplications with negligible accuracy loss. Second, we present StreamingLLM, enabling LLMs to handle arbitrarily long text sequences using a fixed memory budget. It exploits "attention sinks" in LLMs to stably anchor attention computation on lengthy contexts. Experiments show StreamingLLM can model over 4 million tokens with up to 22x speedup compared to recomputation baselines.
Together, these two techniques can significantly reduce the computational and memory costs of large language models, increasing their accessibility for practical usage.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Bayesian Optimization with Asynchronous Batch Selection</title>
<link href="https://hdl.handle.net/1721.1/156331" rel="alternate"/>
<author>
<name>Zuniga, Ane</name>
</author>
<id>https://hdl.handle.net/1721.1/156331</id>
<updated>2024-08-22T03:25:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Bayesian Optimization with Asynchronous Batch Selection
Zuniga, Ane
Multi-objective optimization problems are widespread in scientific, engineering, and design fields, necessitating a balance of trade-offs between conflicting objectives. These objectives often represent black-box functions, which are costly and time-consuming to evaluate. Multi-objective Bayesian optimization (MOBO) offers a valuable approach to guide the search for optimal solutions. To enhance efficiency, batch evaluations are employed to test multiple samples simultaneously, aiming to further reduce evaluation times. However, in scenarios involving varying evaluation times, standard batch strategies often lead to suboptimal resource utilization and inefficiencies. Asynchronous evaluations emerge as a promising solution to optimize resource usage under these conditions. Despite their potential, there has been no prior work or method specifically tailored to address asynchronous evaluations within the MOBO framework. To bridge this critical gap, this thesis proposes a comprehensive adaptation and analysis of existing Bayesian optimization methods for asynchronous MOBO scenarios. It also introduces a novel selection strategy, α-HVI, empirically validated through tests on both synthetic and real-world functions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Structure Learning through Double Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/156327" rel="alternate"/>
<author>
<name>Soleymani, Ashkan</name>
</author>
<id>https://hdl.handle.net/1721.1/156327</id>
<updated>2024-08-22T03:39:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Causal Structure Learning through Double Machine Learning
Soleymani, Ashkan
Learning the causal structure of a system solely from observational data is a fundamental yet intricate task with numerous applications across various fields, including economics, earth sciences, biology, and medicine. This task is challenging for several reasons: i) observational data alone, as opposed to interventional data, do not characterize the properties of the system under interventions on variables, and therefore carry information about correlation rather than cause-effect relationships; ii) unobserved confounders, such as a hidden common confounder, may bias the algorithms, leading to false causal inferences instead of revealing the correct causal structure; iii) the number of potential underlying structures increases super-exponentially with the number of variables, posing significant statistical and computational challenges; and iv) the identifiability problem arises because multiple causal models can yield the same observational distribution, making it impossible to conclusively determine the true structure. In this thesis, we focus on the partial identification of the underlying causal structure from observational data under the minimal assumptions necessary for causal identification. To this end, inspired by the Debiased/Double machine learning machinery, we introduce efficient, practical, doubly robust algorithms enjoying the fast √n-semiparametric convergence rate for three different tasks: (1) finding the direct causes of the target variable under cyclic and unseen-confounded high-dimensional data with nonlinear structures, (2) testing Granger causality, and thereby identifying causal structure, from temporal data, and (3) estimating the counterfactual prediction function in the generalized nonlinear Instrumental Variables regression problem. As a natural use case, we tackle the offline policy evaluation of the confounded contextual bandit problem, where actions, contexts, and rewards have common unobserved confounding.
By matching the upper bounds of the unconfounded contextual bandit setting, our algorithm is proven to achieve optimal sample complexity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward In-Context Teaching</title>
<link href="https://hdl.handle.net/1721.1/156326" rel="alternate"/>
<author>
<name>Ross, Alexis J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156326</id>
<updated>2024-08-22T04:07:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Toward In-Context Teaching
Ross, Alexis J.
When a teacher provides examples for a student to study, these examples must be informative, enabling a student to progress from their current state toward a target concept or skill. Good teachers must therefore simultaneously infer what students already know and adapt their teaching to students’ changing state of knowledge. There is increasing interest in using computational models, particularly large language models, as pedagogical tools. As students, language models in particular have shown a remarkable ability to adapt to new tasks given small numbers of examples. But how effectively can these models adapt as teachers to students of different types? To study this question, we introduce a suite of models and evaluation methods we call AdapT. AdapT has two components: (1) a collection of simulated Bayesian student models that can be used for evaluation of automated teaching methods; (2) a platform for evaluation with human students, to characterize the real-world effectiveness of these methods. We additionally introduce (3) AToM, a new probabilistic method for adaptive teaching that jointly infers students’ past beliefs and optimizes for the correctness of future beliefs. In evaluations of simulated students across three learning domains (fraction arithmetic, English morphology, function learning), AToM systematically outperforms LLM-based and standard Bayesian teaching models. In human experiments, both AToM and LLMs outperform non-adaptive random example selection. Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dialogue-driven Multi-Agent Activity Planning</title>
<link href="https://hdl.handle.net/1721.1/156325" rel="alternate"/>
<author>
<name>Sonar, Anoopkumar S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156325</id>
<updated>2024-08-22T03:53:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dialogue-driven Multi-Agent Activity Planning
Sonar, Anoopkumar S.
A fundamental challenge in robotics is to build a general-purpose system with multiple agents that can perform a wide range of tasks based on specifications provided in natural language. This work presents a novel dialogue-driven activity planning framework for multiagent scenarios. We present a method that accepts commands from a user in natural language and translates them to an intermediate form called a state plan by leveraging large language models. We further experiment with chain-of-thought prompting to improve the translation from natural language to state plans. In conjunction with an action model, this state plan is utilized by a constraint-based generative planner called ctBurton, which outputs a fully grounded plan in the form of a state and control trajectory. We demonstrate the utility of our method across three different scenarios (a presentation system, search-and-rescue, and multi-agent assembly), along with experiments on its scalability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control</title>
<link href="https://hdl.handle.net/1721.1/156324" rel="alternate"/>
<author>
<name>Pfrommer, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/156324</id>
<updated>2024-08-22T03:31:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control
Pfrommer, Daniel
Recent work in imitation learning has shown that having an expert controller that is both suitably smooth and stable enables much stronger guarantees on the performance of the approximating learned controller. Constructing such smoothed expert controllers for arbitrary systems remains challenging, especially in the presence of input and state constraints. We show how such a smoothed expert can be designed for a general class of systems using a log-barrier-based relaxation of a standard Model Predictive Control (MPC) optimization problem. Our principal theoretical contributions include (1) demonstrating that the Jacobian of the barrier MPC controller can be written as a convex combination of pieces arising from the explicit MPC formulation, (2) bounding the Hessian of the barrier MPC as a function of the strength of the barrier function, and (3) presenting new results in both matrix and convex analysis for computing perturbed adjugate matrices and a tight (up to constant) lower bound on the distance of a solution with a self-concordant-barrier to the constraint set. We consider randomized smoothing as a point of comparison and show empirically that, unlike randomized smoothing, barrier MPC yields better performance while guaranteeing constraint satisfaction.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitive Multiplexed MicroRNA Spatial Profiling and Data Classification Framework Applied to Murine Breast Tumors</title>
<link href="https://hdl.handle.net/1721.1/156323" rel="alternate"/>
<author>
<name>Mohd, Omar Nazmi</name>
</author>
<id>https://hdl.handle.net/1721.1/156323</id>
<updated>2024-08-22T03:38:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sensitive Multiplexed MicroRNA Spatial Profiling and Data Classification Framework Applied to Murine Breast Tumors
Mohd, Omar Nazmi
MicroRNAs (miRNAs) are small RNAs that are often dysregulated in many diseases, including cancers. They are highly tissue specific and stable, thus making them particularly useful as biomarkers. As the spatial transcriptomics field advances, protocols that enable highly sensitive and spatially resolved detection become necessary to maximize the information gained from samples. This is especially true of miRNAs, where the location within the tissue where they are expressed can provide prognostic value with regard to patient outcome. Equally important as detection are ways to assess and visualize the miRNAs' spatial information in order to leverage the power of spatial transcriptomics over that of traditional non-spatial bulk assays. We present a highly sensitive methodology that simultaneously quantitates and spatially detects seven miRNAs in situ on formalin-fixed paraffin-embedded tissue sections. This method utilizes rolling circle amplification (RCA) in conjunction with a dual scanning approach in nanoliter well arrays with embedded hydrogel posts. The hydrogel posts are functionalized with DNA probes that enable the detection of miRNAs across a large dynamic range (four orders of magnitude) and a limit of detection of 0.17 zeptomoles (1.7×10⁻⁴ attomoles). We applied our methodology coupled with a data analysis pipeline to K14-Cre Brca1^f/f Tp53^f/f murine breast tumors to showcase the information gained from this approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Charting EDA: Characterizing Interactive Visualization Use in Computational Notebooks with a Mixed-Methods Formalism</title>
<link href="https://hdl.handle.net/1721.1/156322" rel="alternate"/>
<author>
<name>Wootton, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/156322</id>
<updated>2024-08-22T03:06:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Charting EDA: Characterizing Interactive Visualization Use in Computational Notebooks with a Mixed-Methods Formalism
Wootton, Dylan
Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets with Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively coding participant utterances, we introduce a formalism that describes EDA as a sequence of analysis states, where each state comprises either a representation an analyst constructed (e.g., the output of a data frame, an interactive visualization, etc.) or an observation the analyst made with a representation (e.g., about missing data, the relationship between variables, etc.). By applying our formalism to our dataset, we are able to identify that interactive visualizations, on average, lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, by calculating metrics such as revisiting count and representational diversity, we are able to uncover that some representations serve more as "planning aids" during EDA rather than tools strictly for hypothesis-answering.
We show how these measures helped identify other patterns of analysis behavior, such as the "80-20 rule", where a small subset of representations drove the majority of observations. Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Compact Hydraulic Head Auto-Regulating Module (CHARM) for Long-Term Constant Gravity-Driven Flow Microfluidics</title>
<link href="https://hdl.handle.net/1721.1/156321" rel="alternate"/>
<author>
<name>Xue, Fan</name>
</author>
<id>https://hdl.handle.net/1721.1/156321</id>
<updated>2024-08-22T03:00:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Compact Hydraulic Head Auto-Regulating Module (CHARM) for Long-Term Constant Gravity-Driven Flow Microfluidics
Xue, Fan
Gravity-driven flow is a simple microfluidic flow initiation and maintenance mechanism that requires no external power sources and little expertise to use. However, the driving forces created by hydraulic head differences gradually decrease during operation, resulting in unwanted decreased flow rates in many microfluidic applications. The existing methods to maintain a constant flow for gravity-driven mechanisms either require additional bulky control equipment, involve complex fabrication or operation, or introduce interfaces that lack robustness. To solve those problems, a compact hydraulic head auto-regulating module (CHARM) was designed and tested in this thesis. The module was able to maintain the liquid level at the microfluidic inlet port within a small fluctuation range without human intervention over a long operation time. The design's compactness and its compatibility with standard 96-well plates enable high-throughput operations, and the chosen material's biocompatibility allows the devices' use in cell-culture-related applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Congestion Control in Machine Learning Clusters</title>
<link href="https://hdl.handle.net/1721.1/156313" rel="alternate"/>
<author>
<name>Rajasekaran, Sudarsanan</name>
</author>
<id>https://hdl.handle.net/1721.1/156313</id>
<updated>2024-08-22T03:58:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Congestion Control in Machine Learning Clusters
Rajasekaran, Sudarsanan
This paper argues that fair-sharing, the holy grail of congestion control algorithms for decades, is not necessarily a desirable property in Machine Learning (ML) training clusters. We demonstrate that for a specific combination of jobs, introducing unfairness improves the training time for all competing jobs. We call this specific combination of jobs compatible and define the compatibility criterion using a novel geometric abstraction. Our abstraction rolls time around a circle and rotates the communication phases of jobs to identify fully compatible jobs. Using this abstraction, we demonstrate up to 1.3× improvement in the average training iteration time of popular ML models. We advocate that resource management algorithms should take job compatibility on network links into account. We then propose three directions to ameliorate the impact of network congestion in ML training clusters: (i) an adaptively unfair congestion control scheme, (ii) priority queues on switches, and (iii) precise flow scheduling.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploiting Observation Bias to Improve Matrix Completion</title>
<link href="https://hdl.handle.net/1721.1/156312" rel="alternate"/>
<author>
<name>Park, Charlotte</name>
</author>
<id>https://hdl.handle.net/1721.1/156312</id>
<updated>2024-08-22T03:40:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploiting Observation Bias to Improve Matrix Completion
Park, Charlotte
We consider a variant of matrix completion where entries are revealed in a biased manner, adopting a model akin to that introduced by Ma &amp; Chen (2019) [1]. Instead of treating this observation bias as a disadvantage, as is typically the case, the goal is to exploit the shared information between the bias and the outcome of interest to improve predictions. Towards this, we consider a natural model where the observation pattern and outcome of interest are driven by the same set of underlying latent or unobserved factors. This leads to a two-stage matrix completion algorithm: first, recover (distances between) the latent factors by utilizing matrix completion for the fully observed noisy binary matrix corresponding to the observation pattern; second, utilize the recovered latent factors as features and sparsely observed noisy outcomes as labels to perform non-parametric supervised learning. The finite-sample error rate analysis suggests that, ignoring logarithmic factors, this approach is competitive with the corresponding supervised learning parametric rates. This implies the two-stage method has performance comparable to having access to the unobserved latent factors, through exploiting the shared information between the bias and outcomes. Through empirical evaluation on a real-world dataset, we find that with this two-stage algorithm, the estimates have 30x smaller mean squared error compared to traditional matrix completion methods, suggesting the utility of the model and the method proposed in this work.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Fabrication of High Frequency Electromagnetic Coil for Magnetic Particle Imaging</title>
<link href="https://hdl.handle.net/1721.1/156311" rel="alternate"/>
<author>
<name>Whittier, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/156311</id>
<updated>2024-08-22T04:04:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Fabrication of High Frequency Electromagnetic Coil for Magnetic Particle Imaging
Whittier, Elizabeth
Magnetic Particle Imaging (MPI) is a promising modality which uses Magnetic Nanoparticles (MNPs) for tracer-based imaging in biomedical applications. Aside from their use in imaging, MNPs are increasingly being utilized for therapeutics, controlled targeted drug delivery, and diagnostics. These techniques depend on the behavior of MNPs when exposed to an alternating magnetic field of a certain frequency and amplitude. However, the frequency typically used for imaging is 25 kHz, while the transduction behaviors desired for these biomedical applications are seen at low radio-frequencies and higher-amplitude fields than those used for imaging. This work presents a high-frequency electromagnetic coil which fulfills the operational, safety, and geometric parameters necessary for incorporation in a custom MPI system and will allow us to simultaneously image and stimulate at specific locations within the body of a mouse. Optimization of the instrument is done through experimentation and electromagnetic theory, with focuses on parasitic elements and metallurgical phenomena. A resonant tank and direct cooling with a water pump allow for increased field strength while maintaining thermal and radio-frequency energy absorption standards for in vivo experiments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Secure Discovery of Genetic Relatives across Large-Scale and Distributed Genomic Datasets</title>
<link href="https://hdl.handle.net/1721.1/156310" rel="alternate"/>
<author>
<name>Hong, Matthew M.</name>
</author>
<id>https://hdl.handle.net/1721.1/156310</id>
<updated>2024-08-22T03:42:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Secure Discovery of Genetic Relatives across Large-Scale and Distributed Genomic Datasets
Hong, Matthew M.
Finding relatives within a study cohort is a necessary step in many genomic studies. However, when the cohort is distributed across multiple entities subject to data-sharing restrictions, performing this step often becomes infeasible. Developing a privacy-preserving solution for this task is challenging due to the significant burden of estimating kinship between all pairs of individuals across datasets. In this thesis, we introduce SF-Relate, a practical and secure federated algorithm for identifying genetic relatives across data silos. SF-Relate vastly reduces the number of individual pairs to compare while maintaining accurate detection through a novel locality-sensitive hashing approach. We assign individuals who are likely to be related into the same buckets and then test relationships only between individuals in matching buckets across parties. To this end, we construct an effective hash function that captures identity-by-descent (IBD) segments in genetic sequences, which, along with a new bucketing strategy, enables accurate and practical private relative detection. To guarantee privacy, we introduce an efficient algorithm based on multiparty homomorphic encryption (MHE) to allow data holders to cooperatively compute the relatedness coefficients between individuals, and to further classify their degrees of relatedness, all without sharing any private data. We demonstrate the accuracy and practical runtimes of SF-Relate on the UK Biobank and All of Us datasets. On a dataset of 200K individuals split between two parties, SF-Relate detects 94.9% of third-degree relatives, and 99.9% of second-degree or closer relatives, within 15 hours of runtime. Our work enables secure identification of relatives across large-scale genomic datasets, and thus a wide range of downstream privacy-preserving collaborative studies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Broadband single and multimode quantum light generation using optical nonlinearities</title>
<link href="https://hdl.handle.net/1721.1/156309" rel="alternate"/>
<author>
<name>Pontula, Sahil</name>
</author>
<id>https://hdl.handle.net/1721.1/156309</id>
<updated>2024-08-22T04:03:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Broadband single and multimode quantum light generation using optical nonlinearities
Pontula, Sahil
There is a growing effort in many fields of physics to bridge the classical and quantum realms. To the best of our understanding, our world is governed by the laws of quantum mechanics, but some of its most interesting features - such as the ability to morph uncertainty and noise - are washed out when system sizes become too large. Light is the ideal playground to investigate the interplay between the classical and quantum domains, with its well-known particle-wave duality and diverse behaviors at both the classical wave and single photon levels. To this end, there is significant interest in generating quantum states of light that can be harnessed for applications in the classical world we are most familiar with. However, maintaining "quantumness" as the number of photons grows large has proved challenging due to the detrimental effects of loss. In this thesis, I describe two theoretical proposals to make macroscopic quantum light a reality. I focus on bright intensity-squeezed states of light that have intensity noise far below the standard quantum limit. If realized, these states would bring the quantum mechanical phenomenon of squeezing to macroscopic intensities, which in turn could pave the way towards widespread quantum light sources that offer enhanced signal-to-noise ratios. I describe two distinct methods that use tools from nonlinear optics and dissipation engineering to realize broadband squeezing in both single and multiple frequency modes. I show that the squeezing can be tunable across a wide range of the electromagnetic spectrum, spanning frequencies where quantum light has never been generated.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Cell-Specific Nanoparticle Delivery Systems</title>
<link href="https://hdl.handle.net/1721.1/156308" rel="alternate"/>
<author>
<name>Murphy, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/156308</id>
<updated>2024-08-22T03:00:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Toward Cell-Specific Nanoparticle Delivery Systems
Murphy, Sean
The targetable delivery of therapeutic nanoparticles remains a significant challenge in modern medicine, particularly due to the complexity, time, and expense involved in experimental design and optimization for cell-specific applications. To address this, NOCAP (Nanoparticle Optimization and Cell Affinity Prediction) was developed, a computational framework designed to (i) predict the affinities between nanoparticles and gene expression signatures of cancer cells and (ii) optimize nanoparticle formulations for specific targets. NOCAP successfully predicts cellular affinity for previously unseen cancer cell lines. The findings demonstrate the potential of machine learning to streamline the rational selection of target-specific nanoparticle drug delivery systems, paving the way for more efficient and precise therapeutic interventions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity</title>
<link href="https://hdl.handle.net/1721.1/156305" rel="alternate"/>
<author>
<name>Xue, Zi Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/156305</id>
<updated>2024-08-22T03:05:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity
Xue, Zi Yu
Sparse tensor algebra is a challenging class of workloads to accelerate due to few opportunities for data reuse and varying sparsity patterns. Prior sparse tensor algebra accelerators have explored tiling sparse tensors to increase exploitable data reuse and improve throughput, but typically allocate tile size in a given buffer for the worst-case number of nonzero values in a given tile. This severely limits the utilization of available memory resources and reduces data reuse. Other accelerators employ complex tiling during preprocessing or at runtime to determine the exact tile size based on its occupancy.&#13;
&#13;
This thesis proposes a speculative tensor tiling approach, called overbooking, to improve buffer utilization by taking advantage of the distribution of nonzero elements in sparse tensors to construct larger tiles with greater data reuse at the cost of occasional instances where data overflows the buffer. To ensure correctness, it proposes a low-overhead hardware mechanism, Tailors, that can tolerate data overflow by design with reasonable data reuse and demonstrates that Tailors can be easily integrated into the memory hierarchy of an existing sparse tensor algebra accelerator. To ensure high buffer utilization with minimal cost to find a tile size, this thesis introduces a statistical approach, Swiftiles, to pick a tile size so that tiles usually fit within the buffer’s capacity, but can potentially overflow, i.e., it overbooks the buffers. Across a suite of 22 sparse tensor algebra workloads, the proposed overbooking strategy introduces an average speedup of 52.7× and 2.3× and an average energy reduction of 22.5× and 2.5× over ExTensor without and with optimized tiling, respectively.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation</title>
<link href="https://hdl.handle.net/1721.1/156303" rel="alternate"/>
<author>
<name>Vendrow, Joshua L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156303</id>
<updated>2024-08-22T03:03:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation
Vendrow, Joshua L.
Distribution shift is a major source of failure for machine learning models. However, evaluating model reliability under distribution shift can be challenging, especially since it may be difficult to acquire counterfactual examples that exhibit a specified shift. In this work, we introduce the notion of a dataset interface: a framework that, given an input dataset and a user-specified shift, returns instances from that input distribution that exhibit the desired shift. We study a number of natural implementations for such an interface, and find that they often introduce confounding shifts that complicate model evaluation. Motivated by this, we propose a dataset interface implementation that leverages Textual Inversion to tailor generation to the input distribution.&#13;
We then demonstrate how applying this dataset interface to the ImageNet dataset enables studying model behavior across a diverse array of distribution shifts, including variations in background, lighting, and attributes of the objects.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extensible Platforms for Bosonic Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/156302" rel="alternate"/>
<author>
<name>Jha, Shantanu R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156302</id>
<updated>2024-08-22T03:48:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extensible Platforms for Bosonic Quantum Error Correction
Jha, Shantanu R.
Bosonic quantum error correction (QEC) encodes information in the phase space of a quantum harmonic oscillator and offers a hardware-efficient path towards fault-tolerant quantum information processing. With superconducting circuits, bosonic QEC using the Gottesman-Kitaev-Preskill (GKP) encoding has been achieved using the high-Q mode of a macroscopic 3D microwave cavity controlled via fixed-frequency transmon qubits [1, 2, 3, 4, 5, 6]. To date, all previous demonstrations have been limited by bit-flips in the transmon control qubit (with typical T1 lifetimes on the order of 100 microseconds), resulting in logical lifetimes that are upper-bounded by approximately 10 T1. In this thesis, we replace the transmon with a heavy-fluxonium control qubit, which has been shown to possess bit-flip lifetimes in excess of 1 millisecond [7, 8, 9, 10]. Furthermore, we propose using the asymmetrically threaded SQUID as a microwave-activated three-wave mixing coupler to yield faster GKP error-correction rates while suppressing inherited nonlinearity in our bosonic mode. As compared to direct dispersive coupling, this parametric coupling enables us to use a heavier, and therefore more bit-flip-protected, fluxonium qubit. Finally, with an accelerated error correction rate, we can use a lower-Q planar resonator to store logical quantum information in an extensible and fully 2D architecture.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Current Control of Silicon Field Emitter Arrays Using Gate-All-Around MOSFETs</title>
<link href="https://hdl.handle.net/1721.1/156301" rel="alternate"/>
<author>
<name>Sahagun, Alvaro</name>
</author>
<id>https://hdl.handle.net/1721.1/156301</id>
<updated>2024-08-22T03:36:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Programmable Current Control of Silicon Field Emitter Arrays Using Gate-All-Around MOSFETs
Sahagun, Alvaro
Silicon field emitter array (FEA) technology has great potential for applications such as electron microscopy, vacuum electronics, and X-ray sources. However, challenges such as emitter tip burnout and spatial and temporal non-uniformity of emission current impede the adoption of FEAs in these applications. The current approach to address these challenges involves integrating a resistor, nanowire (NW) current limiter, or metal-oxide-semiconductor field-effect transistor (MOSFET) in series with the emitter tips to regulate current flow. The NW current limiter is preferred for its compact integration, which enables high emitter density in FEAs. However, it restricts FEA versatility by constraining the emission current to a fixed maximum value. MOSFETs, in contrast, provide programmable control over emission current, enabling FEA versatility, but integrating planar MOSFETs into FEAs demands significant space, leading to a notable reduction in emitter density and FEA compactness. This thesis investigates the integration of vertical gate-all-around (GAA) MOSFETs with individual emitter tips as a solution to enable programmable emission control while preserving the compactness, high emitter density, and versatility of FEAs. To achieve this, SILVACO, a device simulation platform, was used to model the GAA MOSFET, field emitter, and combined GAA MOSFET-FEA devices. The simulation results provide insight into each device’s current-voltage (I-V) characteristics, identifying performance-limiting challenges such as breakdown, kinks in the I-V characteristics, and quasi-saturation of the current. Various solutions to these challenges are explored through simulations, and the resulting models show the feasibility of using a GAA MOSFET as a voltage-controlled current source in series with individual field emitter tips to program emission current.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GraphPipe: Improving the Performance and Scalability of DNN Training with Graph Pipeline Parallelism</title>
<link href="https://hdl.handle.net/1721.1/156292" rel="alternate"/>
<author>
<name>Kim, Sunghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/156292</id>
<updated>2024-08-22T03:57:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">GraphPipe: Improving the Performance and Scalability of DNN Training with Graph Pipeline Parallelism
Kim, Sunghyun
Deep neural networks (DNNs) continue to grow rapidly in size, making it infeasible to train them on a single device. To address this challenge, current DNN training systems apply pipeline-parallel techniques. They split a DNN into multiple stages, construct a pipeline of them, and assign each stage to a distinct device. Multiple devices, each storing a partial segment of the DNN, perform their respective operations in sequence to train the whole. Applying pipeline-parallel techniques makes it feasible to train large-scale DNNs, yet there is still room for improvement. Existing approaches only consider sequential pipeline stages and thus ignore the inherent topology of the DNN being trained. For example, when the architecture of a DNN has computationally independent parallel branches, their serial execution, mandated by sequential pipeline stages, unnecessarily lengthens the processing time of training data. This shortcoming leaves model-parallel opportunities untapped, resulting in suboptimal training throughput. In this paper, we develop graph pipeline parallelism (GPP), a new pipeline-parallel scheme that partitions a DNN into pipeline stages whose dependencies are identified by a directed acyclic graph. GPP generalizes current sequential pipeline stages. By constructing the pipeline based on the DNN topology, GPP enables concurrent execution of computationally independent DNN segments. GPP then optimizes micro-batch schedules for these stages and parallelizes large-scale DNN training across multiple devices. We show that GPP achieves reduced memory consumption and improved training throughput. We also develop GraphPipe, a distributed system that leverages GPP strategies to enable performant and scalable DNN training. Evaluation on a variety of DNNs demonstrates that GraphPipe outperforms existing pipeline-parallel systems such as PipeDream and Piper by up to 1.6×.
Despite the fact that GPP involves a much larger search space of parallelization strategies, GraphPipe reduces the search time by 9–21× compared to PipeDream and Piper.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LLM-Directed Agent Models in Cyberspace</title>
<link href="https://hdl.handle.net/1721.1/156291" rel="alternate"/>
<author>
<name>Laney, Samuel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156291</id>
<updated>2024-08-22T04:03:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">LLM-Directed Agent Models in Cyberspace
Laney, Samuel P.
Network penetration testing, a proactive method for identifying vulnerabilities in cyberspace, has long been the domain of human experts. However, rapid advancements in machine learning have opened up new possibilities for automating many of these tasks. This thesis aims to explore the application of Large Language Models (LLMs) for automating penetration tests and Cyber Capture the Flag (CTF) challenges, bridging the gap between static tools and dynamic human intuition in cybersecurity.&#13;
This work provides an evaluation framework for assessing the performance of LLMs in autonomously solving CTF challenges, with an emphasis on understanding the capabilities, limitations, and best prompting strategies for LLMs in this domain. Notably, this thesis presents an agent configuration that offers a 102% improvement in challenge completion on a database of PicoCTF challenges compared to the published baseline. By analyzing a variety of agent strategies, response formats, and historical action representations in the context of CTF challenges, this work aims to provide insights into the best practices and limitations of leveraging LLMs for cybersecurity tasks. Additionally, this work proposes a hierarchical architecture to guide an LLM-enabled agent in performing complex, multi-step penetration testing tasks with strategic foresight. This proof-of-concept approach shows success on entry-level challenges. While LLMs exhibit impressive capabilities, they are limited out of the box in their ability to solve complex, multi-step tasks requiring exploration, necessitating approaches such as those described in this work to improve performance in these areas.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Teacher Following and Reward Maximization in Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/156290" rel="alternate"/>
<author>
<name>Shenfeld Amit, Idan</name>
</author>
<id>https://hdl.handle.net/1721.1/156290</id>
<updated>2024-08-22T03:35:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Balancing Teacher Following and Reward Maximization in Reinforcement Learning
Shenfeld Amit, Idan
Learning from rewards (i.e., reinforcement learning or RL) and learning to imitate a teacher (i.e., teacher-student learning) are two established approaches for solving sequential decision-making problems. To combine the benefits of these different forms of learning, it is common to train a policy to maximize a combination of reinforcement and teacher-student learning objectives. However, lacking a principled method, prior work has relied on heuristics and problem-specific hyperparameter searches to balance the two objectives. We present a principled approach, along with an approximate implementation, for dynamically and automatically balancing when to follow the teacher and when to use rewards. The main idea is to adjust the importance of teacher supervision by comparing the agent’s performance to the counterfactual scenario of the agent learning without teacher supervision, from rewards alone. If using teacher supervision improves performance, its importance is increased; otherwise, it is decreased. We investigate the capabilities of this algorithm against strong baselines across diverse domains.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Short-visible-wavelength GHz Display</title>
<link href="https://hdl.handle.net/1721.1/156289" rel="alternate"/>
<author>
<name>Propson, Thomas C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156289</id>
<updated>2024-08-22T03:53:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Short-visible-wavelength GHz Display
Propson, Thomas C.
Applications such as quantum control, manufacturing, biology, and sensing require blue and ultraviolet light modulated in space and time. We propose and implement a strategy for fast, individual, coherent control of spatially multiplexed channels at blue and ultraviolet wavelengths by combining an integrated photonic modulator array with a strong pump beam for sum-frequency generation in a bulk nonlinear crystal. We realize a 4x4 array of amplitude-modulated spots at 420 nm with a 3 dB bandwidth of 2 GHz.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Space-Efficient and Noise-Robust Quantum Factoring</title>
<link href="https://hdl.handle.net/1721.1/156288" rel="alternate"/>
<author>
<name>Ragavan, Seyoon</name>
</author>
<id>https://hdl.handle.net/1721.1/156288</id>
<updated>2024-08-22T03:08:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Space-Efficient and Noise-Robust Quantum Factoring
Ragavan, Seyoon
We provide two improvements to Regev's quantum factoring algorithm (arXiv:2308.06572), addressing its space efficiency and its noise-tolerance. &#13;
    &#13;
Our first contribution is to improve the quantum space efficiency of Regev's algorithm while keeping the circuit size the same. Our main result constructs a quantum factoring circuit using O(n log n) qubits and O(n^(3/2) log n) gates. We achieve the best of Shor and Regev (up to a logarithmic factor in the space complexity): on the one hand, Regev's circuit requires O(n^(3/2)) qubits and O(n^(3/2) log n) gates, while Shor's circuit requires O(n^2 log n) gates but only O(n) qubits. As with Regev, to factor an n-bit integer N, we run our circuit independently ≈ sqrt{n} times and apply Regev's classical postprocessing procedure.&#13;
&#13;
Our optimization is achieved by implementing efficient and reversible exponentiation with Fibonacci numbers in the exponent, rather than the usual powers of 2, adapting work by Kaliski (arXiv:1711.02491) from the classical reversible setting to the quantum setting. This technique also allows us to perform quantum modular exponentiation that is efficient in both space and size without requiring significant precomputation, a result that may be useful for other quantum algorithms. A key ingredient of our exponentiation implementation is an efficient circuit for a function resembling in-place quantum-quantum modular multiplication. This implementation works with only black-box access to any quantum circuit for out-of-place modular multiplication, which we believe is yet another result of potentially broader interest. Additionally, we show how to generalize our reversible exponentiation technique beyond the Fibonacci numbers to obtain constant-factor improvements in the number of qubits and/or gates.&#13;
&#13;
Our second contribution is to show that Regev's classical postprocessing procedure can be modified to tolerate a constant fraction of the quantum circuit runs being corrupted by errors. In contrast, Regev's analysis of his classical postprocessing procedure requires all ≈ sqrt{n} runs to be successful. In a nutshell, we achieve this using lattice reduction techniques to detect and filter out corrupt samples.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sparse Expansion and Neuronal Disentanglement</title>
<link href="https://hdl.handle.net/1721.1/156287" rel="alternate"/>
<author>
<name>Kong, Linghao</name>
</author>
<id>https://hdl.handle.net/1721.1/156287</id>
<updated>2024-08-22T03:01:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sparse Expansion and Neuronal Disentanglement
Kong, Linghao
We show how to improve the inference efficiency of an LLM by expanding it into a mixture of sparse experts, where each expert is a copy of the same weights and one-shot pruned for a specific cluster of input values. We call this approach Sparse Expansion. We show that for models like Llama 2 7B, as we increase the number of experts, Sparse Expansion outperforms all other one-shot sparsification approaches for the same FLOPs budget, and this gap grows as sparsity increases. But why? To answer this, we provide strong evidence that the mixture of sparse experts is effectively disentangling the input-output relationship of every individual neuron. Sparse experts approximate a neuron’s dense output distribution with fewer weights by decomposing the distribution into a collection of simpler ones, each with a separate sparse dot product covering it. Interestingly, we show that the Wasserstein distance between a neuron’s output distribution and a Gaussian distribution is an indicator of its entanglement level and contribution to the accuracy of the model. Every layer of an LLM has highly entangled neurons, and model performance suffers more when these are sparsified as opposed to others. We believe that these neurons may have implications beyond sparsity in understanding the performance of LLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Device Stack Optimization for Protonic Non-Volatile Programmable Resistors</title>
<link href="https://hdl.handle.net/1721.1/156286" rel="alternate"/>
<author>
<name>Shen, Dingyu</name>
</author>
<id>https://hdl.handle.net/1721.1/156286</id>
<updated>2024-08-22T03:11:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Device Stack Optimization for Protonic Non-Volatile Programmable Resistors
Shen, Dingyu
Analog computing could alleviate computational bottlenecks in digital deep learning systems by utilizing local information processing through the physical properties of devices, such as electrochemical ion intercalation in three-terminal devices where channel resistance is modulated by ionic exchange via an electrolyte. Previous work has demonstrated such ionic programmable resistors featuring WO₃ as the channel, phosphorus-doped SiO₂ (PSG) as the electrolyte, Pd as the gate reservoir, and protons as the ions. This thesis aimed to optimize the device stack in four directions and demonstrated a symmetric WO₃-PSG-WO₃ structure in a CMOS-compatible process, with the help of the circular transfer length model (CTLM), which efficiently examines the resistance properties of WO₃. We have explored: (a) device protonation as part of the fabrication process, (b) encapsulation to prevent proton depletion during device fabrication and operation, (c) contact metal optimization to replace gold with a CMOS-compatible material, and (d) a PSG evaluation vehicle for device performance optimization. The symmetric device combining all the stack optimizations features non-volatile and repeatable conductance modulation with voltage pulses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic In-memory Computing Using Magnetic Tunnel Junctions</title>
<link href="https://hdl.handle.net/1721.1/156285" rel="alternate"/>
<author>
<name>Wang, Qiuyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/156285</id>
<updated>2024-08-22T03:21:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stochastic In-memory Computing Using Magnetic Tunnel Junctions
Wang, Qiuyuan
Current computing hardware based on the von Neumann architecture and digital CMOS circuits faces strong challenges in scaling up further for big AI models and data-centric applications. However, while alternatives are being actively studied, it is still not clear which computing paradigm is the best solution considering fabrication maturity, scalability, operating conditions, cost, power/area efficiency, and so on. In this thesis, we propose a new alternative computing framework: stochastic in-memory computing using magnetic tunnel junctions. By introducing thermally stable and unstable magnetic tunnel junctions as CMOS-compatible circuit building blocks, both general-purpose and application-specific in-memory computing accelerators can be synthesized, providing a versatile and highly efficient hardware design framework for multiple applications. A deep learning accelerator is implemented and benchmarked on an FPGA following the proposed stochastic in-memory computing architecture, with stochastic bitstreams sampled from a thermally unstable magnetic tunnel junction fabricated in the lab. Hardware designs for a Bayesian inference accelerator and an Ising machine are also provided. Our results show that magnetic tunnel junctions could open up a rich design space for future computing hardware.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations</title>
<link href="https://hdl.handle.net/1721.1/156284" rel="alternate"/>
<author>
<name>Xu, Haike</name>
</author>
<id>https://hdl.handle.net/1721.1/156284</id>
<updated>2024-08-22T03:48:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations
Xu, Haike
Graph-based approaches to nearest neighbor search are popular and powerful tools for handling large datasets in practice, but they have limited theoretical guarantees. We study the worst-case performance of recent graph-based approximate nearest neighbor search algorithms, such as HNSW, NSG, and DiskANN. For DiskANN, we show that its “slow preprocessing” version provably supports approximate nearest neighbor search queries with a constant approximation ratio and poly-logarithmic query time, on data sets with bounded “intrinsic” dimension. For the other data structure variants studied, including DiskANN with “fast preprocessing”, HNSW, and NSG, we present a family of instances on which the empirical query time required to achieve a “reasonable” accuracy is linear in instance size. For example, for DiskANN, we show that the query procedure can take at least 0.1n steps on instances of size n before it encounters any of the 5 nearest neighbors of the query.
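To make the setting concrete, here is a minimal sketch (our illustration, not the thesis's code) of the greedy beam search that DiskANN- and HNSW-style indexes run over a proximity graph; the worst-case results above bound how many of these neighbor expansions such a search may need.

```python
import heapq

def greedy_search(graph, points, query, start, dist, beam=4):
    """Greedy beam search over a proximity graph (DiskANN/HNSW-style sketch)."""
    visited = {start}
    frontier = [(dist(points[start], query), start)]  # min-heap keyed on distance
    best = list(frontier)                             # current beam of candidates
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > max(b[0] for b in best):
            break  # closest unexplored node is worse than the whole beam
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                dv = dist(points[v], query)
                heapq.heappush(frontier, (dv, v))
                best.append((dv, v))
        best = sorted(best)[:beam]  # keep only the beam's best candidates
    return [v for _, v in sorted(best)]

# Toy instance: a path graph 0-1-...-9 with point i at coordinate i.
line = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
pts = {i: float(i) for i in range(10)}
result = greedy_search(line, pts, 7.3, start=0, dist=lambda a, b: abs(a - b))
# result[0] is node 7, the true nearest neighbor of 7.3
```

The lower-bound instances in the thesis are built so that this kind of search must expand a constant fraction of all nodes before reaching a near neighbor.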
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Capacity of Scalar Gaussian Channels Subject to State Obfuscation</title>
<link href="https://hdl.handle.net/1721.1/156283" rel="alternate"/>
<author>
<name>Lev, Omri Yaacov</name>
</author>
<id>https://hdl.handle.net/1721.1/156283</id>
<updated>2024-08-22T03:45:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On the Capacity of Scalar Gaussian Channels Subject to State Obfuscation
Lev, Omri Yaacov
The problem of communication over multiple variants of the scalar Gaussian fading channel, subject to a state-obfuscation constraint imposed in the form of near independence between the channel outputs and the channel coefficients, is studied. Defining the operational capacity as the maximal achievable rate under the state-obfuscation constraint, an informational counterpart is derived and then proved to coincide with the operational capacity. Conditions for this capacity to be non-zero, and closed-form expressions for it in the high signal-to-noise ratio (SNR) limit, are derived.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization</title>
<link href="https://hdl.handle.net/1721.1/156280" rel="alternate"/>
<author>
<name>Nrusimha, Aniruddha</name>
</author>
<id>https://hdl.handle.net/1721.1/156280</id>
<updated>2024-08-22T03:01:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization
Nrusimha, Aniruddha
We consider the problem of accurate quantization for language models, where both the weights and activations are quantized to 4 bits per parameter with uniform quantization, the lowest bitwidth format natively supported by existing GPU hardware. In this context, the key challenge is activation quantization: it is known that language models contain outlier channels whose values are on average orders of magnitude higher than those of other channels, which prevents accurate low-bitwidth quantization with known techniques. We systematically study this phenomenon and find that these outlier channels emerge early in training, and that they occur more frequently in layers with residual streams. We then propose a simple strategy which regularizes a layer’s inputs via quantization-aware training (QAT) and its outputs via activation kurtosis regularization. We show that regularizing both the inputs and outputs is crucial for preventing a model from "migrating" the difficulty of input quantization to the weights, which makes post-training quantization (PTQ) of weights more difficult. When combined with weight PTQ, we show that our approach can obtain a W4A4 model with integer quantization that performs competitively with the standard-precision W16A16 baseline.
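As a toy illustration of the statistic involved (the thesis's exact regularizer is not reproduced here), an excess-kurtosis penalty grows sharply when an activation distribution is dominated by outlier-channel values:

```python
def kurtosis(xs):
    """Sample kurtosis E[(x - mu)^4] / E[(x - mu)^2]^2 (a Gaussian gives ~3)."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2

def kurtosis_penalty(xs, target=3.0):
    """Illustrative penalty: large when activations are heavy-tailed,
    i.e., when a few outlier channels dominate the dynamic range."""
    return (kurtosis(xs) - target) ** 2

flat = [-1.0, -0.5, 0.0, 0.5, 1.0] * 20   # well-spread activations
outliers = flat + [50.0]                  # one outlier-channel value
# outliers incur a far larger penalty than the well-spread values
```

Keeping kurtosis near the Gaussian value keeps the activation range compact, which is what makes uniform 4-bit quantization viable.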
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Plan and Planning to Learn in Long-Horizon Robotics Tasks</title>
<link href="https://hdl.handle.net/1721.1/156279" rel="alternate"/>
<author>
<name>Kumar, Nishanth Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/156279</id>
<updated>2024-08-22T03:33:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning to Plan and Planning to Learn in Long-Horizon Robotics Tasks
Kumar, Nishanth Jay
A longstanding goal of robotics research has been to produce a single agent capable of solving a variety of useful long-horizon tasks, such as making a cup of tea or tidying up a living room, in multiple different environments (i.e., in any household). In recent years, two dominant paradigms have emerged for constructing such a system: end-to-end model-free learning and model-based planning. Approaches from both paradigms have produced impressive isolated results, but both paradigms themselves are known to have significant limitations. Learning-based approaches often require impractical amounts of data, and struggle to generalize beyond the data and tasks they have been trained on. Planning-based approaches depend on models, which often require significant manual engineering to define, especially as the number and complexity of tasks of interest grow. This thesis proposes a set of approaches that attempt to overcome these limitations by combining aspects of both paradigms. Specifically, we leverage learning to automate the process of designing planning models, and leverage planning to efficiently and autonomously collect data needed for learning. Experiments on a variety of simulated and real-robot domains illustrate that this combination of learning to plan and planning to learn could be a promising approach to enabling robots to solve complex, long-horizon tasks at scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classical Commitments to Quantum States</title>
<link href="https://hdl.handle.net/1721.1/156278" rel="alternate"/>
<author>
<name>Villányi, Ágnes</name>
</author>
<id>https://hdl.handle.net/1721.1/156278</id>
<updated>2024-08-22T04:05:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Classical Commitments to Quantum States
Villányi, Ágnes
We define the notion of a classical commitment to quantum state scheme, which allows a quantum prover to compute a classical commitment to a quantum state and later open each qubit of the state in either the standard or Hadamard basis, while limiting communication with the verifier to a classical channel. Our scheme strengthens the notion of a measurement protocol from [Mah18], which is binding only in the standard basis. We construct our commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and rely directly on any noisy trapdoor claw-free function family that satisfies the adaptive hardcore bit property first introduced in [Bra+18].
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Time Warping Constraints for Semiconductor Processing</title>
<link href="https://hdl.handle.net/1721.1/156276" rel="alternate"/>
<author>
<name>Owens, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/156276</id>
<updated>2024-08-22T03:42:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dynamic Time Warping Constraints for Semiconductor Processing
Owens, Rachel
Semiconductor manufacturing processes have become increasingly complex with the continued growth of chip manufacturing. Monitoring these processes for anomalies is crucial for maintaining quality and yield. However, a notable challenge in monitoring time series signals is the nonlinear variation in signal timing. These small, but acceptable, temporal variations are typically caused by small run-to-run differences that are inherent to the process. Dynamic time warping (DTW) can be used for temporal alignment of signals, but it is computationally expensive and prone to errors.&#13;
&#13;
In this thesis, a new method is presented for preprocessing semiconductor fabrication sensor signals that improves anomaly detection model performance. The new method uses domain knowledge – specifically, process recipe step numbers – to create constraints that better align signals along the time dimension, addressing this problem of nonlinear signal alignment. These constraints are tested on both synthetic and industrial datasets. The new step-constrained DTW is also extended as a distance measure for clustering time series.
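A minimal sketch of constrained DTW follows (illustrative only: the thesis derives its constraints from recipe step numbers, while this generic version uses a simple band limiting |i − j|):

```python
import math

def dtw(a, b, window=None):
    """Dynamic time warping distance with an optional band constraint.

    `window` limits |i - j| along the warping path; it is a generic stand-in
    for the recipe-step-derived constraints described in the thesis."""
    n, m = len(a), len(b)
    w = max(window if window is not None else max(n, m), abs(n - m))
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

a = [0, 0, 1, 2, 1, 0, 0]
b = [0, 1, 2, 1, 0, 0, 0]   # same pulse, shifted by one sample
# DTW warps the shift away: distance 0, though the pointwise L1 distance is 4
```

Tightening the band both cuts the quadratic cost of the dynamic program and prevents pathological alignments, which is the motivation for constraining the warp.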
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combining Channel Sounding and Guessing Random Additive Noise Decoding</title>
<link href="https://hdl.handle.net/1721.1/156274" rel="alternate"/>
<author>
<name>Millward, Jane Avril</name>
</author>
<id>https://hdl.handle.net/1721.1/156274</id>
<updated>2024-08-22T04:00:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Combining Channel Sounding and Guessing Random Additive Noise Decoding
Millward, Jane Avril
This thesis investigates how channel estimation can be used to improve the performance of Guessing Random Additive Noise Decoding (GRAND). The trade-off between devoting resources to channel sounding and to data transmission is investigated for pilot symbol assisted modulation schemes. Using a soft-information variant of the GRAND algorithm called Ordered Reliability Bits Guessing Random Additive Noise Decoding – Approximate Independence (ORBGRAND-AI), it is shown that, by accounting for the correlation between received symbols, bit and block error rate improvements can be obtained. This thesis also considers the achievable communications rate of ORBGRAND-AI when different estimators are used to provide channel estimates. Finally, this thesis investigates the use of ORBGRAND-AI in channels subject to inter-symbol interference (ISI).
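The underlying GRAND principle can be sketched in a few lines (a hard-decision illustration with Hamming-weight ordering; ORBGRAND-AI instead orders guesses by soft reliability information, which this toy omits):

```python
from itertools import combinations

def grand_decode(received, is_codeword, max_weight=None):
    """Hard-decision GRAND sketch: test putative noise patterns in order of
    increasing Hamming weight (most likely first on a binary symmetric
    channel) until removing one yields a codeword."""
    n = len(received)
    if max_weight is None:
        max_weight = n
    for weight in range(max_weight + 1):
        for flips in combinations(range(n), weight):
            guess = list(received)
            for i in flips:
                guess[i] ^= 1  # remove the guessed noise bits
            if is_codeword(guess):
                return guess
    return None  # abandon decoding beyond max_weight

# Toy code: length-4 even-parity code; the received word has one bit error.
even_parity = lambda bits: sum(bits) % 2 == 0
decoded = grand_decode([1, 0, 0, 0], even_parity)
# the first weight-1 guess flips bit 0, giving the codeword [0, 0, 0, 0]
```

Because GRAND only needs a codebook membership test, it is code-agnostic; better channel estimates reorder the noise guesses so the true pattern is tried sooner.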
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formally Verifying a Programmable Network Switch</title>
<link href="https://hdl.handle.net/1721.1/156273" rel="alternate"/>
<author>
<name>Liu, Jiazheng</name>
</author>
<id>https://hdl.handle.net/1721.1/156273</id>
<updated>2024-08-22T03:59:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Formally Verifying a Programmable Network Switch
Liu, Jiazheng
Programmable network switches are complex pieces of hardware that leverage nonobvious optimizations such as pipelining to offer flexible configuration interfaces. In this thesis, we propose a novel formal-verification methodology aimed at establishing strong correctness theorems for synthesizable hardware designs for network functionality, demonstrated through a case-study analysis of a Tofino-like programmable switch that we call VeriSwit. Our approach hinges on modularity, whereby the system is split into interconnected units, each equipped with its specification and proof, oblivious to the internals of other units. We conduct VeriSwit’s modular verification in the Coq theorem prover. Experiments with synthesis for both FPGA and ASIC targets, combined with simulation, show that 100 GB/s line rate is easily achieved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classroom model of an information and computing system.</title>
<link href="https://hdl.handle.net/1721.1/156249" rel="alternate"/>
<author>
<name>Schroeder, Michael David.</name>
</author>
<id>https://hdl.handle.net/1721.1/156249</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">Classroom model of an information and computing system.
Schroeder, Michael David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaves 215-216.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transient currents of polyphase induction motors with a single-phase supply</title>
<link href="https://hdl.handle.net/1721.1/156246" rel="alternate"/>
<author>
<name>Lee, Tze-Chang.</name>
</author>
<id>https://hdl.handle.net/1721.1/156246</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1927-01-01T00:00:00Z</published>
<summary type="text">Transient currents of polyphase induction motors with a single-phase supply
Lee, Tze-Chang.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1927; Includes bibliographical references (leaf 80).
</summary>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the effect of steel reinforcement on the moments in a reinforced concrete cellular type bridge</title>
<link href="https://hdl.handle.net/1721.1/156242" rel="alternate"/>
<author>
<name>Cantono, William Paul.</name>
</author>
<id>https://hdl.handle.net/1721.1/156242</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">An investigation of the effect of steel reinforcement on the moments in a reinforced concrete cellular type bridge
Cantono, William Paul.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1932
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prevention of hydrogen cracking in HY-80 welds</title>
<link href="https://hdl.handle.net/1721.1/156238" rel="alternate"/>
<author>
<name>Biederka, John William.</name>
</author>
<id>https://hdl.handle.net/1721.1/156238</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Prevention of hydrogen cracking in HY-80 welds
Biederka, John William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1983; Includes bibliographical references.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory investigation of the variable nature of the blue clay layer below Boston</title>
<link href="https://hdl.handle.net/1721.1/156236" rel="alternate"/>
<author>
<name>Albin, Pedro.</name>
</author>
<id>https://hdl.handle.net/1721.1/156236</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Laboratory investigation of the variable nature of the blue clay layer below Boston
Albin, Pedro.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1947; Bibliography: leaf 64.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Second approximations to the solution of laminar boundary layer flow along a flat plate</title>
<link href="https://hdl.handle.net/1721.1/156234" rel="alternate"/>
<author>
<name>Alden, Henry L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156234</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Second approximations to the solution of laminar boundary layer flow along a flat plate
Alden, Henry L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1947; Bibliography: leaf 29.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of translation-rotation coupling on helicopter ground resonance</title>
<link href="https://hdl.handle.net/1721.1/156233" rel="alternate"/>
<author>
<name>Amer, Kenneth B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156233</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">The effect of translation-rotation coupling on helicopter ground resonance
Amer, Kenneth B.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1947; Bibliography: leaf 27.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bosonic Quantum Error Correction with a Heavy Fluxonium Control Qubit</title>
<link href="https://hdl.handle.net/1721.1/156166" rel="alternate"/>
<author>
<name>Chowdhury, Shoumik</name>
</author>
<id>https://hdl.handle.net/1721.1/156166</id>
<updated>2024-08-15T03:01:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Bosonic Quantum Error Correction with a Heavy Fluxonium Control Qubit
Chowdhury, Shoumik
Bosonic codes store information in the phase space of a quantum harmonic oscillator and offer a hardware‐efficient path towards quantum error correction (QEC), requiring only an oscillator and an auxiliary qubit for measurement and universal control. Of the many bosonic codes, the so‐called Gottesman‐Kitaev‐Preskill (GKP) code stands out as one of the most robust to dominant physical decoherence mechanisms, but is severely limited by bit‐flip errors in the control qubit. In this thesis, we develop a new approach for implementing GKP QEC in superconducting circuits based on using a heavy fluxonium as the auxiliary control qubit due to its inherent bit‐flip protection. We demonstrate progress towards this in experiment by using a fluxonium in a 3D superconducting cavity architecture, and also propose novel strategies for moving future experiments to a fully 2D platform.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion Planning along Manifolds with Geodesic Convexity and Analytic Inverse Kinematics</title>
<link href="https://hdl.handle.net/1721.1/156165" rel="alternate"/>
<author>
<name>Cohn, Thomas B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156165</id>
<updated>2024-08-15T03:41:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Motion Planning along Manifolds with Geodesic Convexity and Analytic Inverse Kinematics
Cohn, Thomas B.
Collision-free motion planning is a fundamental problem in robotics. Most motion planning algorithms operate in the configuration space of a robot, where each dimension corresponds to an individual degree of freedom. Oftentimes, these configuration spaces can be viewed as Euclidean spaces, and many motion planning algorithms treat them as such. However, many configuration spaces of interest are inherently non-Euclidean, including those of mobile robots, robot arms that have revolute joints without limits or ball joints, and flying robots, as well as the constrained configuration spaces that arise when planning with task-space constraints. In this thesis, we treat the problem of motion planning along Riemannian manifolds, a broader class of spaces that encompasses many of the problems of interest.&#13;
&#13;
In the first chapter, we present a generalization of the graph of convex sets (GCS) planning framework that can handle smooth manifolds. GCS uses convex optimization, and is thus restricted to Euclidean configuration spaces. Our analysis utilizes geodesic convexity to achieve the same guarantees on Riemannian manifolds, and we leverage this to produce motion plans for mobile robots whose arms have unbounded revolute joints.&#13;
&#13;
In the second chapter, we specifically consider the problem of constrained bimanual manipulation, where a robot has to move an object that is being grasped with two hands. The set of kinematically valid configurations is a union of submanifolds, implicitly defined by nonlinear equality constraints. This presents significant challenges for standard unconstrained planning algorithms. We construct a smooth parametrization of the feasible set, recasting the problem without equality constraints. Our approach is algorithm-agnostic, and we demonstrate that unconstrained planners (working through the parametrization) produce favorable results.
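The idea of trading an equality constraint for a smooth parametrization can be illustrated on a toy feasible set (our example, far simpler than the bimanual kinematics treated in the thesis): the unit circle x² + y² = 1 parametrized by an angle, so that any unconstrained plan in parameter space is feasible by construction.

```python
import math

def parametrize_circle(theta):
    """Smooth parametrization of the feasible set {(x, y) : x^2 + y^2 = 1}."""
    return (math.cos(theta), math.sin(theta))

def straight_line_plan(theta_start, theta_goal, steps=10):
    """An unconstrained 'planner' in parameter space: linear interpolation.
    Every waypoint it returns satisfies the circle constraint automatically."""
    return [parametrize_circle(theta_start + (theta_goal - theta_start) * t / steps)
            for t in range(steps + 1)]

path = straight_line_plan(0.0, math.pi / 2)
# each (x, y) waypoint lies exactly on the unit circle
```

The same recasting lets off-the-shelf planners work on constraint manifolds: they never see the equality constraint, only the parameter space.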
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boston Night Owl: A Framework for Introducing Overnight Bus Service That Can Close Significant Spatiotemporal Gaps in Greater Boston's Transit System</title>
<link href="https://hdl.handle.net/1721.1/156164" rel="alternate"/>
<author>
<name>Barrett, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/156164</id>
<updated>2024-08-15T03:31:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Boston Night Owl: A Framework for Introducing Overnight Bus Service That Can Close Significant Spatiotemporal Gaps in Greater Boston's Transit System
Barrett, Gabriel
There are people traveling at every hour of the day. Cities by their nature function throughout the 24-hour day; however, the same is not always true of their transit systems. Just as in the daytime, overnight public transportation exists to provide mobility access to the people who need or choose to travel at night. This thesis explores the first steps in developing an overnight transit service in a region where it does not currently exist, using the Boston area as a case study. This is done through a two-step process: first, identifying where and when the service should be run, and second, learning from existing overnight systems around the world to understand how the service should operate. As part of the method, the thesis proposes a novel approach to identifying areas with acute disparity between transit supply and demand, colloquially known as “transit deserts,” that involves taking into account how these factors change both spatially and temporally. The end result of this thesis is a framework that planners in cities and transit agencies can use when creating a system that can close these gaps. This is an approach that planners will find useful not just in planning night time service, but for planning service at all times of the day.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeding Trust, Sustaining Equity: Funding and Financing Relationships in the Greater Boston Community Land Trust Network</title>
<link href="https://hdl.handle.net/1721.1/156163" rel="alternate"/>
<author>
<name>Aibinder, Sammi</name>
</author>
<id>https://hdl.handle.net/1721.1/156163</id>
<updated>2024-08-15T03:32:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Seeding Trust, Sustaining Equity: Funding and Financing Relationships in the Greater Boston Community Land Trust Network
Aibinder, Sammi
Interest in community land trusts (CLTs) as one tool for stable, affordable housing and local autonomy over urban planning processes is growing rapidly—particularly in the past decade, in the wake of the subprime mortgage crisis and the destruction of wealth and housing security that foreclosure waves wreaked across the United States. This increasing energy for community ownership and stewardship of land and housing spans grassroots organizing networks; local, state, and federal government authorities; and philanthropic and conventional capital. Though such a broad base of interest in CLTs at both local and national levels is encouraging, CLT organizers continue to struggle within dominant affordable housing policies and practices to sustain their work. As CLTs and their advocates push to reshape public budgets and capture private capital in innovative ways, how do funders and lenders relate to their own role in ceding control over land and housing—and the financial wealth they generate—in ways that share power with the residents and organizers at the heart of these housing justice movements? Drawing on interviews with housing and community development finance professionals, ongoing conversations with CLT practitioners and advocates, and policy research, this thesis explores the funding and financing ecosystem surrounding the Greater Boston Community Land Trust Network (GBCLTN) as a descriptive case study.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference Plans for Hybrid Probabilistic Inference</title>
<link href="https://hdl.handle.net/1721.1/156162" rel="alternate"/>
<author>
<name>Cheng, Ellie Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156162</id>
<updated>2024-08-15T03:49:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inference Plans for Hybrid Probabilistic Inference
Cheng, Ellie Y.
Advanced probabilistic programming languages (PPLs) use hybrid inference systems to combine symbolic exact inference and Monte Carlo sampling to improve inference performance. These systems use heuristics to partition random variables within the program into variables that are represented symbolically and variables that are represented by sampled values, and in general, they make no guarantee that the partitioning is optimal. In this thesis, I present inference plans, a programming interface that enables developers to choose a specific partitioning of random variables during hybrid inference. I further present Siren, a new PPL that enables developers to use annotations to specify inference plans. To assist developers with statically reasoning about whether an inference plan can be implemented, I present an abstract-interpretation-based static analysis for Siren for determining inference plan satisfiability, and prove the analysis is sound with respect to Siren's semantics. In our evaluation, the results show that custom inference plans can produce up to ~1000x better accuracy compared to the default heuristics. They further show that the static analysis is precise in practice, identifying all satisfiable inference plans in 6 out of 7 benchmarks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Generative Models for 3D Molecular Structures</title>
<link href="https://hdl.handle.net/1721.1/156161" rel="alternate"/>
<author>
<name>Daigavane, Ameya</name>
</author>
<id>https://hdl.handle.net/1721.1/156161</id>
<updated>2024-08-15T04:03:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving Generative Models for 3D Molecular Structures
Daigavane, Ameya
Generative models have recently emerged as a promising avenue for navigating the high-dimensional space of molecular structures. Such models must be designed carefully to respect the rotation and translation symmetries of molecules. In this thesis, we first provide an overview of existing methods and techniques in this rapidly developing field. Next, we present Symphony, an E(3)-equivariant autoregressive generative model for 3D molecular geometries that iteratively builds a molecule from molecular fragments, improving upon existing autoregressive models for molecule generation and approaching the performance of diffusion models. The material in this thesis is primarily sourced from the publication “Symphony: Symmetry-Equivariant Point-Centered Spherical Harmonics for 3D Molecule Generation” [13], authored by Ameya Daigavane, Song Kim, Mario Geiger, and Tess Smidt, and published at the International Conference on Learning Representations (ICLR), 2024.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AC Optimal Power Flow for Physically and Economically Informed Grid Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/156160" rel="alternate"/>
<author>
<name>Anton, Laurentiu Lucian</name>
</author>
<id>https://hdl.handle.net/1721.1/156160</id>
<updated>2024-08-15T03:31:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AC Optimal Power Flow for Physically and Economically Informed Grid Decarbonization
Anton, Laurentiu Lucian
Current practices for power systems operations and planning rely on an approximate optimal power flow formulation known as DC Optimal Power Flow (DC OPF). Results from DC OPF are not directly implementable, requiring feasibility checks and adjustments from AC power flow analysis. Current methodologies do not guarantee convergence, feasibility, or robustness, and they rely heavily on operator knowledge and intervention. This work uses AC Optimal Power Flow (AC OPF) to directly obtain feasible dispatch signals that guide enhanced grid operations and planning within the context of Puerto Rico’s grid decarbonization efforts. A comprehensive application of AC OPF is used to assess the robustness of the Puerto Rican power grid and explore an array of scenarios involving the retirement of existing generating assets and the integration of solar PV. A public model was assessed by analysing several operational equilibria obtained via economic dispatch and loss minimization. Additionally, a Jacobian-based N-1 screening was performed, identifying critical contingencies requiring corrective actions. These insights, as well as considerations from the Puerto Rico 100 Study and PREPA’s 10-Year Plan, were used to assess the deployment of potential solar assets at various stages of retirement of the San Juan, Palo Seco, and Aguirre assets, in that order. Results provided locational, quantitative, and timely insights into optimal deployment strategies that align with Puerto Rico’s decarbonization goals. The findings confirm Puerto Rico’s ability to transition to a high-renewable deployment scenario, and provide guidance on where to strategically incentivize renewable deployment and reactive power support, in what quantities, and in response to which generator retirements.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Blocks of a Just Transition: Green Banks and Residential Building Decarbonization in New York</title>
<link href="https://hdl.handle.net/1721.1/156159" rel="alternate"/>
<author>
<name>Downing, Lia</name>
</author>
<id>https://hdl.handle.net/1721.1/156159</id>
<updated>2024-08-15T03:11:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building Blocks of a Just Transition: Green Banks and Residential Building Decarbonization in New York
Downing, Lia
The existential threat of climate change has given rise to financial solutions aimed at transitioning global systems away from fossil fuels and towards clean energy. Green banks are one such solution: a specialty finance vehicle aimed at using public funds to induce private investment in clean energy projects such as residential building decarbonization. Given the recent increased investment and policy attention on green banks, we should assess whether the green bank model delivers its professed goals of socially equitable outcomes, market creation, and greenhouse gas emission reductions in line with Net Zero national policy.&#13;
This thesis seeks to understand the political and organizational dynamics of green bank models in the context of the Inflation Reduction Act and identify the existing project deployment gaps remaining for residential building decarbonization projects. Through a case study approach of New York Green Bank and New York Energy Efficiency Corporation, this study investigates green bank 1) additionality; 2) organizational structure; 3) scale; and 4) demand as considerations for green bank formulation to drive building decarbonization investments. These case studies combined with expert interviews provide strategy and programmatic recommendations for policymakers considering whether to create or expand a green bank in the wake of massive federal investment through the Inflation Reduction Act.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shaping the Future Amid Decline: Integrative Strategies for Aging Koreans and Migrant Workers in South Korea’s Shrinking Regions</title>
<link href="https://hdl.handle.net/1721.1/156158" rel="alternate"/>
<author>
<name>Kim, MinJi</name>
</author>
<id>https://hdl.handle.net/1721.1/156158</id>
<updated>2024-08-15T03:41:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Shaping the Future Amid Decline: Integrative Strategies for Aging Koreans and Migrant Workers in South Korea’s Shrinking Regions
Kim, MinJi
This thesis investigates the intricate dynamics between aging Korean populations and foreign migrant workers in South Korea’s shrinking regions. By conducting an in-depth analysis of four cities, each representing a unique aspect of the nation's projected demographic shifts, this study evaluates how urban planning and policy can foster resilient communities amidst significant societal changes. Utilizing a mixed-methods approach, which includes quantitative data alongside interviews and surveys with 81 stakeholders—from local officials to migrants and elderly residents—the research uncovers complex relationships and systemic barriers that impact community cohesion and demographic stability. The findings provide a nuanced perspective on how strategic urban design and innovative policy initiatives can drive transformative growth in these areas, turning demographic challenges into opportunities for development. The analysis highlights the untapped potential within vulnerable populations and recommends a series of interventions, including integrating educational elements into urban infrastructure and promoting cultural inclusivity through diverse partnerships. This approach seeks to reinvigorate shrinking regions, transforming them into vibrant, sustainable communities. Ultimately, the study underscores the critical role of inclusive urban development in revitalizing areas facing demographic and economic decline.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual AI for Sustainable Urban Development Computer Vision and Machine Learning Applications for Climate and Social Impact</title>
<link href="https://hdl.handle.net/1721.1/156157" rel="alternate"/>
<author>
<name>Schrage, Leonard</name>
</author>
<id>https://hdl.handle.net/1721.1/156157</id>
<updated>2024-08-15T03:45:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Visual AI for Sustainable Urban Development Computer Vision and Machine Learning Applications for Climate and Social Impact
Schrage, Leonard
The surge in interest in Artificial Intelligence (AI)—driven by recent advancements—has sparked widespread discourse across various sectors, reflecting mixed reactions of fascination and concern. This thesis focuses on Visual AI, critically analysing the technology’s potential to promote sustainable urban development. Presenting and evaluating three case studies that employ computer vision and machine learning in urban planning contexts, the research highlights the potential of Visual AI to enhance the understanding of urban complexity and decision-making to mitigate the built environment’s immense carbon footprint and social shortcomings, whilst cautioning that the technology can exacerbate current urban development issues. The projects—Urban Ingredients, City Aesthetics, and Million Neighborhoods: Reblocking—demonstrate three different approaches to using Visual AI for climate and social impact. The case studies’ subjects include generating global material stock data, analysing the correlation between facade geometries and urban health, and scaling parcel data generation for informal settlements. The thesis reflects on the limitations, impacts, and risks of the presented projects and offers a vision for future research aimed at achieving circular, regenerative, and equitable urban environments at scale.&#13;
Keywords&#13;
Visual Computing, Artificial Intelligence, Computer Vision, Machine Learning, AI Ethics, Urban Science, Climate Change, Equitable Cities, Urban Mining, Circular Economy, Architectural Neuroaesthetics, Facade Patterns, Parcelization, Reblocking, Informal Settlements
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Confronting Glacial Hazards: A Study of Disaster Impact and Community Adaptation to Glacial Lake Outburst Floods in Hunza, Pakistan</title>
<link href="https://hdl.handle.net/1721.1/156156" rel="alternate"/>
<author>
<name>Shahid, Misha</name>
</author>
<id>https://hdl.handle.net/1721.1/156156</id>
<updated>2024-08-15T03:32:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Confronting Glacial Hazards: A Study of Disaster Impact and Community Adaptation to Glacial Lake Outburst Floods in Hunza, Pakistan
Shahid, Misha
As climate change hastens glacial retreat across the world, high mountain communities face increasing risks from glacial lake outburst floods (GLOFs) with limited capacity to mitigate their impact and recover from repeated cycles of losses. This thesis looks at the Hassanabad settlement in Hunza, Pakistan, which faced five GLOF occurrences between 2019 and 2022, to study the impact of the floods, identify differential vulnerability within the settlement, and evaluate local adaptation strategies. The study uses a combination of spatio-temporal analysis as well as qualitative field research. Findings indicate that areas closest to the Hassanabad ‘nullah’ (ravine) have suffered immensely from land losses through erosion and continue to be vulnerable to potential occurrences in the future. Field research in Hassanabad shows that community-based disaster risk management (CBDRM) efforts have been central to protecting local residents from the impact of these occurrences. In order to find solutions to the risks facing Hassanabad, the thesis presents five approaches to adaptation that link the remote sensing and community-based findings within the region to assess realistic options for the settlement’s future. These include engineering-centric solutions such as lake-level lowering and infrastructural adaptation, non-structural efforts such as the deployment of early warning systems (EWS), and community-centric approaches that emphasize the role of CBDRM and potential relocation of residents to a less risk-prone area.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impacts of Boston’s Fare Free Bus Route on Urban Mobility Behavior: A Framework for Causal Analysis</title>
<link href="https://hdl.handle.net/1721.1/156155" rel="alternate"/>
<author>
<name>Then, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/156155</id>
<updated>2024-08-15T03:09:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating the Impacts of Boston’s Fare Free Bus Route on Urban Mobility Behavior: A Framework for Causal Analysis
Then, Eva
Since the onset of the COVID-19 pandemic, public transit ridership in the US has faced significant challenges in returning to pre-pandemic levels. Nearly $70 billion in federal relief funding has been allocated to transit departments nationwide, with major US cities using these funds to implement free transit programs. Boston, the focal point of the thesis, launched a fare free bus pilot in August 2021 which has been extended multiple times and is set to continue until 2026. The Fare Free Program has seen encouraging results, marked by increased ridership, cost savings for passengers, and reduced dwell times. The following research leverages large-scale mobility data in an effort to gain deeper insights into the impact of the fare free policy, centering its analysis on Route 28. Employing the tools of causal inference, it offers a valuable resource for planners, policymakers, and scientists seeking to analyze the effect of policy interventions through the lens of big data. Drawing on anonymized, large-scale GPS data from mobile phone users in the Boston area, the research introduces a comprehensive framework for evaluating the impact of Boston’s Fare Free Program on urban mobility behavior, expanding research beyond the scope of transit data and surveys used by the City.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Vulnerability: Harmonizing Disaster Risk Reduction and Management with Socio-spatial Construction of Risk in Post-Tsunami Aceh</title>
<link href="https://hdl.handle.net/1721.1/156154" rel="alternate"/>
<author>
<name>Ramadani, Muhammad Rizki Rayani</name>
</author>
<id>https://hdl.handle.net/1721.1/156154</id>
<updated>2024-08-15T03:35:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Navigating Vulnerability: Harmonizing Disaster Risk Reduction and Management with Socio-spatial Construction of Risk in Post-Tsunami Aceh
Ramadani, Muhammad Rizki Rayani
How can a city that once suffered the world's deadliest tsunami prepare for future disasters? This thesis is a collection of stories from those who have historically been considered “unwanted, powerless, and marginalized” due to multi-tiered and differentiated citizenship. It examines the case of Banda Aceh in Indonesia nearly two decades after a devastating earthquake and tsunami that wiped out a third of its population, and a peace agreement that ended three decades of violent conflict. The question posed is: Does the narrative of Build Back “Better” remain relevant in representing the context of long-term development? &#13;
&#13;
This study primarily aims to deconstruct the logic of disaster risk reduction and management (DRRM) and territorial planning, which is rational and techno-scientific, built upon post-colonial relation networks. Through historical comparative analysis, the case of three coastal neighborhoods, also known as “gampong”, reveals the limitations of this approach. It does not necessarily reduce vulnerability. Instead, it intensifies it through a systemic process of “vulnerabilization” (Lamb and Vale, 2024 [forthcoming]), utilizing the logic of sacrifice and necropolitics (Mbembe, 2002), and further reinforcing "quasi-citizenship," where institutions with limited capabilities deny basic rights to marginalized communities. This thesis emphasizes that a disaster is not merely a natural hazard—it is an interaction with vulnerability, a state that is institutionally, historically, politically, ideologically, and spatially produced (Wisner, 2004). &#13;
&#13;
As a result, this study encourages reevaluating disaster risk reduction and management, specifically incorporating post-colonial critiques into theory-building. It proposes shifting away from universal models favoring high modernism or progress and advocates for a balanced approach that genuinely focuses on “the people”. Thus, this thesis advocates for a new methodology for closer relations in addressing affect, lived experience, and historical analysis in planning as legitimate ways of knowing. Acknowledging trauma, collective memory, and spatial expressions of belonging as valid forms of capabilities for disaster risk reduction and management is a crucial step to actualize equitably resilient cities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Deep Learning Models of Metabolism</title>
<link href="https://hdl.handle.net/1721.1/156153" rel="alternate"/>
<author>
<name>Chinn, Itamar</name>
</author>
<id>https://hdl.handle.net/1721.1/156153</id>
<updated>2024-08-15T03:27:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Deep Learning Models of Metabolism
Chinn, Itamar
Enzymes play a critical role in catalyzing the chemical reactions that underpin metabolic processes in living organisms. Despite their importance, a vast majority of enzymes remain uncharacterized, limiting our understanding of their potential roles in metabolism and disease. This thesis aims to address this gap by leveraging recent advancements in protein and molecular modeling to predict the outcomes of enzymatic reactions and identify functions of unannotated enzymes. Two key contributions are highlighted. Firstly, a graph-based forward synthesis prediction model is introduced, which relies only on the molecular structure of the substrates and the enzyme’s primary sequence. By capturing the biochemical interaction between enzyme residues and substrate atoms, the model achieves better generalization to new chemistry, demonstrating significant improvements in predicting unseen products and showcasing its potential for drug metabolism prediction. The second contribution is CLIPZyme, a contrastive learning method for virtual enzyme screening that frames the task of identifying enzymes catalyzing a reaction of interest as a retrieval problem. CLIPZyme outperforms the baseline approach of screening enzymes via their enzyme commission (EC) number. The combination of CLIPZyme with EC prediction consistently yields improved results over either method alone. Both of these contributions aim to provide the initial building blocks to model entire complex metabolic networks with downstream applications including metabolic engineering and drug discovery.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large-Signal Characterization of Piezoelectric Resonators for Power Conversion</title>
<link href="https://hdl.handle.net/1721.1/156152" rel="alternate"/>
<author>
<name>Jackson, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/156152</id>
<updated>2024-08-15T03:33:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Large-Signal Characterization of Piezoelectric Resonators for Power Conversion
Jackson, Amanda
Magnetics are key components of conventional power converters, but they are often the bottleneck to achieving high power density due to their size, weight, and poor performance at small sizes. Piezoelectric devices, when operated in their inductive regime, can serve a purpose similar to that of magnetic components and offer favorable scaling properties as components are miniaturized. Several sources have demonstrated the viability of piezoelectric-based power converters, but selection of the optimal material and component size is limited by a lack of data on the performance of these materials at high drive levels. This work aims to fill that gap by collecting data to more completely characterize the losses in piezoelectric resonators owing to both mechanical and dielectric effects. To account for mechanical losses, the variation in resonator quality factor is examined across a range of drive levels for multiple resonator sizes, frequencies, and materials. By normalizing the collected data, material trends are derived that can predict mechanical losses under high drive levels, offering more insight into realistic converter operation than the currently available small-signal data sheet values. Additionally, a method for measuring high-power dielectric loss is presented, with results showing that the small-signal loss tangent provides a good approximation of losses even at higher drive levels. Based on these trends, implications for converter efficiency and selection of material and dimensions are discussed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perspectives on Power: Characterizing Public Perceptions Towards Large-Scale Renewable Energy Development in the United States</title>
<link href="https://hdl.handle.net/1721.1/156151" rel="alternate"/>
<author>
<name>Chaudhuri, Anushree</name>
</author>
<id>https://hdl.handle.net/1721.1/156151</id>
<updated>2024-08-15T03:51:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Perspectives on Power: Characterizing Public Perceptions Towards Large-Scale Renewable Energy Development in the United States
Chaudhuri, Anushree
With rapid growth in renewable energy development expected in the coming decades, it is crucial to ensure the clean energy transition does not perpetuate past injustices in infrastructure siting. In this thesis, I first contextualize historical renewable energy siting policies and patterns in the U.S., as well as current siting policies and debates. Then, I use a mixed-methods approach to characterize community sentiment towards large-scale renewable energy projects. First, I create a database of online narratives surrounding local renewable energy siting disputes using a large language model (LLM). This method analyzes online media, e.g., newspaper articles, public and legal proceedings, and social media, to quantify types of opposition sentiment. My analysis reveals that both wind and higher capacity projects are correlated with greater quantified measures of opposition, as scored by an LLM. To contextualize these national-level quantitative findings, I also conduct case studies of two ongoing siting disputes in California. I use interviews, focus groups, and participatory methods to better understand local context and analyze how recent state preemption of local siting authority affects public perceptions. Stakeholders focus on place-based factors overlooked in national analysis and express a desire for neutral joint fact-finding processes. Finally, I evaluate a university-based clinical model piloted at MIT for proactive stakeholder assessment and joint problem-solving to improve energy justice outcomes in renewable energy siting. Preliminary findings show the clinical approach increases participation of previously underrepresented groups, builds trust between stakeholders compared to a typical siting process, and expands experiential learning opportunities. Ultimately, this thesis suggests that a combination of large-scale empirical research paired with a site-specific clinical approach could enable a more equitable and efficient energy transition.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Salvemos Barranco: Contested visions for the city and transportation in Barranco, Lima, Peru</title>
<link href="https://hdl.handle.net/1721.1/156150" rel="alternate"/>
<author>
<name>Herndon, Marco Leonardo</name>
</author>
<id>https://hdl.handle.net/1721.1/156150</id>
<updated>2024-08-15T03:30:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Salvemos Barranco: Contested visions for the city and transportation in Barranco, Lima, Peru
Herndon, Marco Leonardo
Densely populated cities like Lima, Peru, face a complex challenge: integrating mass transit into established urban fabrics. This thesis explores this tension through the case of a World Bank-funded Bus Rapid Transit (BRT) system implemented in Lima in 2010. The BRT, built mostly on an exclusive highway corridor, traversed only three neighborhoods–including Barranco, a historic district. Despite promising citywide mobility improvements, the project sparked protests in Barranco due to concerns about reduced pedestrian access, historic preservation, and potential neighborhood segregation. Through historical and spatial analysis, this thesis examines the claims of both residents and stakeholders to understand the root cause of the conflict and propose improved planning processes. The research reveals significant gaps between the planning process and resident concerns, resulting in reduced pedestrian space and unintended traffic impacts. In response, the thesis proposes a three-pronged approach for future World Bank BRT projects: 1) prioritizing local capacity building for meaningful public participation, 2) achieving a balance between city-wide accessibility and neighborhood concerns, and 3) implementing a community-based BRT evaluation framework. The study concludes by offering an opportunity for the World Bank to facilitate a reparative planning process in Barranco, centering residents as decision-makers in shaping their transportation future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redignifying LaVilla: Visualizing and Recentering Black Epistemologies in the Revitalization of LaVilla, Jacksonville, Florida</title>
<link href="https://hdl.handle.net/1721.1/156149" rel="alternate"/>
<author>
<name>Harris, Journee</name>
</author>
<id>https://hdl.handle.net/1721.1/156149</id>
<updated>2024-08-15T03:35:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Redignifying LaVilla: Visualizing and Recentering Black Epistemologies in the Revitalization of LaVilla, Jacksonville, Florida
Harris, Journee
There is a need and desire for planners and designers to atone for racism and white supremacy in the field, and Reparative Planning as a theory and practice is a start. This thesis looks at recent revitalization efforts in LaVilla, a historic African-American neighborhood situated in Downtown Jacksonville, Florida as an example of reparative planning, with specific interest around the upcoming Lift Ev’ry Voice and Sing Park. The creation of Lift Ev’ry Voice and Sing Park signals a pivotal moment for Black Landscapes in the US South in which the City of Jacksonville is looking to use public space to acknowledge and preserve local Black history. As the downtown area transforms, there is a need for grounding revitalization in a reparative process that is informed by lived experience and local expertise. Drawing upon methods such as unstructured interviews, archival research, and visual inquiry, this thesis proposes scrapbooking as an innovative approach to activating archives and visualizing Black Epistemologies within the urban planning context. At the core of this project lies the argument that Black Epistemologies represent a legitimate expertise that is missing from revitalization efforts. Planners and other practitioners engaged in anti-racist, reparative work should embrace these epistemologies as a valuable resource to inform their understanding of the built environment from distinct cultural and historical perspectives.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Story of a Towel: A Comprehensive Approach to Disaster Preparedness: Enhancing Inclusivity and Sustainability in Chile's Emergency Disaster Kits</title>
<link href="https://hdl.handle.net/1721.1/156148" rel="alternate"/>
<author>
<name>Letelier, Ana A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156148</id>
<updated>2024-08-15T03:57:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Story of a Towel: A Comprehensive Approach to Disaster Preparedness: Enhancing Inclusivity and Sustainability in Chile's Emergency Disaster Kits
Letelier, Ana A.
This thesis explores the redesign of disaster relief kits in Chile using a methodology known as the Comprehensive Initiative on Technology Evaluation (CITE). Last updated in 2017, the disaster kits in Chile are set to be revised in 2024 under a new agreement within the government. This presents an opportunity to redesign the kits using the CITE methodology to better meet the needs of the end-users. This thesis collaborates with the Chilean government to demonstrate how the kits should be redesigned to be more gender-inclusive and sustainable, reflecting the views of communities who participated in focus groups and surveys conducted for this study. The thesis underscores the importance of consulting with communities to understand their real needs and challenges, which is crucial for designing kits that truly serve those most in need after a disaster. It also highlights the significance of incorporating a gender perspective into disaster management methodologies and research. Ultimately, the redesigned kits include products that are more sustainable and gender-inclusive, and recommendations are provided on how the government can enhance its inclusivity and waste management practices.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeking Relief in the City: An Examination of Planning in Karachi to Support Internally Displaced People after the 2022 Floods in Pakistan</title>
<link href="https://hdl.handle.net/1721.1/156147" rel="alternate"/>
<author>
<name>Shad, Daud</name>
</author>
<id>https://hdl.handle.net/1721.1/156147</id>
<updated>2024-08-15T03:04:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Seeking Relief in the City: An Examination of Planning in Karachi to Support Internally Displaced People after the 2022 Floods in Pakistan
Shad, Daud
The 2022 monsoon-season floods in Pakistan caused widespread devastation, resulting in millions of internally displaced people (IDPs) facing difficult choices. Karachi, the country’s largest city and capital of Sindh province, was the destination for tens of thousands of evacuees. Many of these rural-urban migrants ended up in relief camps lacking basic facilities and services. Although the government aimed to address the acute crisis of IDPs entering the city and promote rural rehabilitation, there was minimal accounting for those seeking longer-term support such as resettlement. Still, thousands of IDP households have chosen to stay in Karachi as return has seemed neither safe nor economically feasible. My research – based on key stakeholder interviews and site visits – examines the planning process to accommodate the short- and long-term shelter needs of IDPs who arrived in the city after the floods. It considers the impact of uncertainty on the affected population as well as the critical role of civil society in addressing the crisis. As climate change is exacerbating forced migration, how can the humanitarian response to support IDPs in a megacity like Karachi be more equitable and sustainable? This research recommends that key actors in Karachi plan for a comprehensive and flexible array of shelter and settlements programming to meet the various needs of people after disaster displacement. Additionally, IDPs in 2022 could have been better served through more accessible information on housing and coordination across relief sites. Adopting such measures may decrease the uncertainty inherent in humanitarian response and advance urban planning in assisting populations devastated by circumstances beyond their control.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Photonic probabilistic machine learning using quantum vacuum noise</title>
<link href="https://hdl.handle.net/1721.1/156146" rel="alternate"/>
<author>
<name>Choi, Seou</name>
</author>
<id>https://hdl.handle.net/1721.1/156146</id>
<updated>2024-08-15T03:47:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Photonic probabilistic machine learning using quantum vacuum noise
Choi, Seou
Probabilistic machine learning is an emerging paradigm which harnesses controllable random sources to encode uncertainty and enable statistical modeling. The pure randomness of quantum vacuum noise, the fluctuation of electromagnetic fields even in the absence of a photon, has been utilized for high-speed and energy-efficient stochastic photonic elements. Nevertheless, the experimental demonstration of photonic probabilistic computing hardware has remained elusive so far, due to the lack of programmable stochastic optical elements which can implement probabilistic machine learning algorithms. Here, we implement a photonic probabilistic computer consisting of a programmable stochastic photonic element, which we refer to as a photonic probabilistic neuron (PPN). We implement this PPN using a biased optical parametric oscillator, which utilizes quantum vacuum noise to generate a tunable probability distribution controlled by a bias field. We then implement a measurement-and-feedback scheme for time-multiplexed PPNs in electronic processors (FPGA or GPU) to solve certain probabilistic machine learning tasks. We showcase how we can encode probabilistic behavior in two representative models of machine learning, discriminative and generative models, by demonstrating probabilistic inference and image generation of MNIST handwritten digits. While solving these probabilistic machine learning tasks, quantum vacuum noise works as a random source which can encode classification uncertainty in inference and enable probabilistic generation of samples. Furthermore, we propose a path toward an all-optical probabilistic computing platform. We estimate the sampling rate of the PPN as ∼ 1 Gbps and energy consumption as ∼ 5 fJ/MAC. Our work paves the way for scalable, ultrafast, and energy-efficient probabilistic machine learning hardware.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Black Collective Memory as Economic Development Practice: Resistance and Renaissance in Louisiana’s River Parishes</title>
<link href="https://hdl.handle.net/1721.1/156145" rel="alternate"/>
<author>
<name>Allen, Trace</name>
</author>
<id>https://hdl.handle.net/1721.1/156145</id>
<updated>2024-08-15T04:00:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Black Collective Memory as Economic Development Practice: Resistance and Renaissance in Louisiana’s River Parishes
Allen, Trace
Incentivized by federal industrial policy, regional economies around the United States are entertaining transitions to sustainable economies. This thesis investigates the role of a Black collective memory in shaping the past, present, and future of these economies. Utilizing case studies, this thesis profiles two visionary, trailblazing environmental justice organizations, Rise St. James and The Descendants Project. These organizations are situated in two rural, Black towns (St. James and Wallace, respectively) in Louisiana’s River Parishes, known infamously as Cancer Alley due to possessing the highest density of petrochemical infrastructure in the Western Hemisphere, marking Black residents as sacrificial for the sake of “economic development.” These current economic development practices are descended from what Clyde Woods described as “plantation epistemologies” rooted in “...monopoly of land, resources, and capital…and the immobility of Black labor” (Woods, 2017, p. 215). An economic transition rooted in this plantation logic may soon produce heirs promoting “false solutions” to the intertwined environmental justice and climate crises.&#13;
&#13;
Moving beyond standard deficit narratives, these cases assert the agency of these Black descendant organizations (and their ancestors) in leveraging a Black collective memory to both “stop the bad” and to “build the good”. This is denoted by the Black collective memory of the nation’s largest slave rebellion occurring in the River Parishes and in these organizations leading and embodying development rooted in honoring these ancestors. As we embark on this seismic economic transition, what lessons can be learned from these environmental justice leaders to embody Dr. David Pellow’s claim, “these threatened bodies, populations, and spaces are indispensable to building socially and environmentally just and resilient futures for us all” (Pellow, 2016, p. 227)?
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Decarbonization of California’s Transportation Sector: A Comparative Analysis of Aviation, Electric Vehicles, and High-Speed Rail using the ASIF Framework</title>
<link href="https://hdl.handle.net/1721.1/156144" rel="alternate"/>
<author>
<name>Becerril, Kimberly</name>
</author>
<id>https://hdl.handle.net/1721.1/156144</id>
<updated>2024-08-15T03:04:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Deep Decarbonization of California’s Transportation Sector: A Comparative Analysis of Aviation, Electric Vehicles, and High-Speed Rail using the ASIF Framework
Becerril, Kimberly
This thesis explores California's journey towards deep decarbonization in the transportation sector, focusing on the pivotal roles of aviation, electric vehicles, and high-speed rail. Using the ASIF framework, it analyzes the Activities, Share, Intensity, and Fuel of each mode, aiming to illustrate the intricate ways that different transportation options contribute to greenhouse gas emissions. By examining the challenges and opportunities presented by these transportation modes, the thesis underscores the need for comprehensive strategies that transcend incremental technological improvements. With California's ambitious climate goals in mind, this report explores the complex interplay between transportation, urban development, and land-use patterns, highlighting the importance of systemic changes for achieving sustainable mobility. Through a comparative analysis and case study, the thesis offers valuable insights into the impact of different transportation systems on California’s transition towards a carbon-neutral future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wildly Inaccessible: Reaching Public Lands via Public Transit</title>
<link href="https://hdl.handle.net/1721.1/156143" rel="alternate"/>
<author>
<name>O'Connell, Nineveh</name>
</author>
<id>https://hdl.handle.net/1721.1/156143</id>
<updated>2024-08-15T03:27:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Wildly Inaccessible: Reaching Public Lands via Public Transit
O'Connell, Nineveh
Public transit offers a valuable method to sustainably connect people to highly sought amenities, including outdoor spaces. Outdoor recreation has grown in popularity, with a particular uptick in outdoor activity in response to the Covid-19 pandemic. Additionally, a growing body of research has demonstrated the public health benefits of access to open space. Auto-dependency in the United States often requires visitors to arrive at outdoor spaces via personal vehicle, generating carbon emissions and limiting outdoor access for communities without reliable access to cars. Further, high demand for outdoor access has resulted in visitors parking in unauthorized spots along the shoulder of roads when trailhead parking lots reach capacity, creating congestion and unsafe road conditions as people walk between their cars and trailheads alongside moving traffic. City dwellers and the environment would benefit from public transportation services connecting densely populated areas to beloved outdoor spaces. This paper explores how fixed-route public transit has brought urban communities closer to nearby trailheads with two examples in the American West: Trailhead Direct in King County, WA and the Muir Woods Shuttle in Marin County, CA. Both programs were implemented in the twenty-first century in response to unsafe conditions at trailhead parking lots, yet they have grown to operate under very distinct models. Sequencing the evolution of these transit-to-trails programs relative to stated program goals provides insight into the degree to which they have been successful and what further work could be done to improve visitor experience, prioritize ecosystem protection, and increase equitable access to the outdoors. Adaptation to unforeseen circumstances, creative marketing and routing tailored to a clear customer group, and securing funding from relevant stakeholders have constantly influenced both programs.
These case studies showcase the value of partnerships between land managers and transit agencies, and analysis of their history highlights key components to consider when designing sustainable, reliable transit-to-trails service.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Home, Again: Recommendations for strengthening social and financial post-buyout outcomes of the New Jersey Blue Acres Program</title>
<link href="https://hdl.handle.net/1721.1/156142" rel="alternate"/>
<author>
<name>Zhao, Elisha Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/156142</id>
<updated>2024-08-15T03:33:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Home, Again: Recommendations for strengthening social and financial post-buyout outcomes of the New Jersey Blue Acres Program
Zhao, Elisha Rose
The buyout of homes that have undergone significant cumulative flood damage, or are poised to do so, is an increasingly relied-upon tool used by government agencies, including the New Jersey Department of Environmental Protection (NJDEP) Blue Acres Program, to adapt within the arena of accelerating climate change. Buyouts act primarily as a form of climate adaptation; residents voluntarily move away from areas of high flood risk and are generally equipped with the market value of their former homes to find safer housing, while the former homes are demolished and made into open space. The post-buyout process, which is where social and financial consequences crystallize materially, forms the focus of this thesis study. In the case of Blue Acres, much effort is made to guide participants towards eligible incentives or supplemental relocation assistance on top of their appraisal value, which requires relocating outside of a flood zone and/or within the same community. Additionally, Blue Acres has established itself within a larger network of community organizations and other state agencies that it can point participants to for disaster recovery relief and housing counseling.&#13;
&#13;
Nevertheless, its post-buyout process has potential to make concrete many of the improvements that buyout scholars across the U.S. advise, further strengthening its role as a national pioneer in managed retreat. I propose five recommendations based on this literature: establishing a tracking system of outcomes, creating a low-income homeowners relocation incentive, expanding on the Smart Move pilot program, involving former and remaining residents to decide how bought-out land in their neighborhood is used, and collaborating with municipalities to bring buyouts into their long-range adaptation planning. These form the basis of my question: How can Blue Acres strengthen the post-buyout branch of its services to ensure better long-term social and financial outcomes for its participating homeowners?&#13;
&#13;
Keywords: Flood, buyouts, climate adaptation, climate resilience, municipal finance, local government, housing, land use, community engagement
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Factories to Classrooms: The Influence of FDI-Led Industrialization on Educational and Vocational Training Infrastructure in Binh Duong Province, Vietnam</title>
<link href="https://hdl.handle.net/1721.1/156141" rel="alternate"/>
<author>
<name>Trinh, Linh</name>
</author>
<id>https://hdl.handle.net/1721.1/156141</id>
<updated>2024-08-15T03:01:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Factories to Classrooms: The Influence of FDI-Led Industrialization on Educational and Vocational Training Infrastructure in Binh Duong Province, Vietnam
Trinh, Linh
This thesis addresses the gap in the literature concerning the impact of Foreign Direct Investment (FDI)-led industrialisation on educational and vocational training infrastructure in Binh Duong Province, Vietnam. The gap highlights disparities between urban and rural areas and the misalignment between skills provided by local educational and vocational training institutions and those demanded by FDI-driven industries.&#13;
&#13;
The research question guiding this study is: How has Binh Duong Province developed its human resources to meet the demands of its economic and industrial development over the past two and a half decades? This study explores the direct impacts of FDI on the province's economic and industrial landscape, how educational and vocational training systems have responded to industrial demands as exhibited by investments in physical infrastructure, the alignment between schooling and training outputs and industrial requirements, and the challenges and gaps in current human capital development strategies.&#13;
&#13;
Employing a mixed-methods approach, the research combines quantitative data analysis with qualitative insights. Quantitatively, it uses Pearson correlation analysis to examine the relationship between industrial development and educational infrastructure development, alongside geospatial mapping for spatial insights. Qualitative methods include an extensive review of human capital development strategies, legal frameworks, and global educational and vocational training models.&#13;
&#13;
Key findings indicate significant gaps in Binh Duong's educational and vocational training systems. Despite substantial FDI inflows transforming the province into an industrial hub, there is a misalignment between educational and vocational training outputs, and the skills required by industries, especially in high-tech sectors. The study underscores the need for reforms in educational and vocational training programmes, advocating for tailored vocational training, a shift towards a market-driven human development strategy, and stronger partnerships between public and private sectors in both education and industry.&#13;
&#13;
This research concludes that bridging the gap between industrial needs and educational outputs is crucial for sustainable economic growth and enhancing Binh Duong’s competitiveness. It provides actionable insights for policymakers and industry stakeholders to develop integrated strategies ensuring a skilled and adaptable workforce for a modern economy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Random Access Satellite Communication in the Presence of Interference</title>
<link href="https://hdl.handle.net/1721.1/156140" rel="alternate"/>
<author>
<name>Copley, Jonathon H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156140</id>
<updated>2024-08-15T03:01:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Random Access Satellite Communication in the Presence of Interference
Copley, Jonathon H.
This thesis explores a random access service for satellites with the possibility of unintentional or intentional interference. Previous work does not address large bandwidth-delay product systems and interference together. This thesis combines the challenges of each, developing a methodology for modeling and stabilizing the random access protocol, accommodating long delays, and mitigating interference.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear Microscopy for Materials Analysis and Clinical Pathology</title>
<link href="https://hdl.handle.net/1721.1/156139" rel="alternate"/>
<author>
<name>Doshi, Sagar P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156139</id>
<updated>2024-08-15T03:44:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Nonlinear Microscopy for Materials Analysis and Clinical Pathology
Doshi, Sagar P.
From understanding biological systems to characterizing materials, microscopy has facilitated the analysis of micro- and nanoscale systems across scientific disciplines. The optical transparency of different biological features allows pathologists to relate what they see on a microscope slide to fundamental mechanisms of disease. The same notions of micro-nano-sized features and optical transparency make microscopy an extremely effective technique for analyzing material properties. Nonlinear microscopy (two-photon absorption fluorescence) was used to image surgical specimens in a clinical pathology practice. The optical system design of the instrument is explained, and its performance in terms of diagnostic accuracy (sensitivity/specificity) and speed is presented. Exploratory, qualitative studies of imaging histopathologies beyond breast and prostate tissue are also provided. Towards the development of high-efficiency frequency converters for visible-near-infrared light, periodic poling of thin film lithium niobate (TFLN) was conducted. State-of-the-art poling for quasi-phase matching was achieved via an iterative process. Devices were poled in a custom-built high-voltage probing setup and imaged with a second harmonic generation (SHG) microscope to provide feedback on the poling parameters. A select number of samples were also imaged with piezo force microscopy. The effect of poling parameters on grating quality is analyzed, and the effect of the SHG microscope system design on image quality is quantified. Finally, a successful demonstration of SHG in a TFLN device is shown.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing misspecification in contextual optimization</title>
<link href="https://hdl.handle.net/1721.1/156138" rel="alternate"/>
<author>
<name>Bennouna, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/156138</id>
<updated>2024-08-15T03:22:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Addressing misspecification in contextual optimization
Bennouna, Omar
We study the predict-then-optimize framework, which combines machine learning and a downstream optimization task. This approach entails forecasting unknown parameters of an optimization problem and then solving the optimization task based on these predictions. For example, consider an energy allocation problem when the energy cost in different areas is uncertain. Despite the absence of precise energy cost values at the time of problem-solving, machine learning models are employed to predict these costs, and the resulting optimization problem, which consists, for example, of minimizing energy costs while meeting some minimal requirements, is solved using state-of-the-art optimization algorithms. When the chosen hypothesis set is well-specified (i.e., it contains the ground truth predictor), the SLO (Sequential Learning and Optimization) approach performs best among state-of-the-art methods and has provable performance guarantees. In the misspecified setting (i.e., the hypothesis set does not contain the ground truth predictor), the ILO (Integrated Learning and Optimization) approach seems to behave better in practice but does not enjoy theoretical optimality guarantees. We focus on the misspecified setting, for which no known algorithm rigorously solves this prediction problem. We provide a tractable ILO algorithm which successfully finds an optimal solution in this setting. Our approach consists of minimizing a surrogate loss which enjoys theoretical optimality guarantees as well as good behavior in practice. In particular, we show that our approach experimentally outperforms SLO and previous ILO methods in the misspecified setting.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Complex Dynamics of Regenerating Urban Vacancy: A Case Study of Songkhla, Thailand</title>
<link href="https://hdl.handle.net/1721.1/156137" rel="alternate"/>
<author>
<name>Sahacharoenwat, Ponpat</name>
</author>
<id>https://hdl.handle.net/1721.1/156137</id>
<updated>2024-08-15T03:12:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding the Complex Dynamics of Regenerating Urban Vacancy: A Case Study of Songkhla, Thailand
Sahacharoenwat, Ponpat
This thesis investigates the pervasive issue of urban vacancy in Songkhla City, Thailand, characterized by the prevalence of vacant, abandoned, or underutilized properties. Such urban vacancies arise from a complex interplay of factors, including economic downturns, demographic shifts due to urban depopulation or migration, speculative real estate practices, and disparities in urban development and public infrastructure. These vacancies contribute to urban decay, affecting the vitality and functionality of city centers and leading to economic and social issues.&#13;
The thesis employs causal loop analysis to illustrate the complex interactions involved in regenerating urban vacancies. The thesis begins with a comprehensive overview of the urban vacancy crisis in Songkhla City. Following this, the study delves into an analysis of the dynamics involved in regenerating these urban vacancies. It particularly emphasizes the role of private investment and evaluates the impact of existing urban planning tools and policies, as illustrated through causal loop diagrams. Subsequently, the thesis proposes specific strategies and strategic actions aimed at revitalizing these vacant spaces. These proposed measures are integrated into another causal loop diagram to assess their potential impacts on the urban dynamic. Finally, the thesis concludes with a discussion of broader policy implications, reflecting on how the insights gained from Songkhla City could inform and influence national-level policies aimed at revitalizing secondary cities across Thailand.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Agroecological Response to the Militarized Urban in Vieques, Puerto Rico</title>
<link href="https://hdl.handle.net/1721.1/156136" rel="alternate"/>
<author>
<name>Ouadani, Oussama</name>
</author>
<id>https://hdl.handle.net/1721.1/156136</id>
<updated>2024-08-15T03:05:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Agroecological Response to the Militarized Urban in Vieques, Puerto Rico
Ouadani, Oussama
Between 1942 and 1950, the United States Navy forcefully occupied and constructed three military facilities, known collectively as the Atlantic Fleet Weapons Training Facility, on the Puerto Rican island-municipality of Vieques. In the process, the Navy dispossessed 70% of the land and displaced 50% of the population, artificially precipitating Vieques’s shift from a rural to an urban society. After an errant bomb killed a local, intense grassroots mobilizations succeeded in ousting the Navy from Vieques in 2003, but the extensive ecological and social harm it generated was devastating and enduring. This thesis contextualizes within Vieques the production of what Palestinian urban scholar Abreek-Zubiedat (2023) terms militarized urbanism(s) and highlights the island’s contemporary agroecological movement in response to it. The thesis then traces how the militarized urban emerged and operated in Vieques vis-à-vis displacement-resettlement logics, the imposition of spatial prohibitions and ecocide, and the gamification of land and society. Finally, I offer possibilities for reimagining our ecological and urban spaces in Vieques and beyond. Complementing my embodied, archival, and theoretical research methodology is an affective treatment of the island’s militarized history through Pedro Juan Soto’s novel Usmaíl, published and set in mid-20th-century Vieques.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Step By Step: Suburban Active Transportation Planning in Spring Hill, Tennessee</title>
<link href="https://hdl.handle.net/1721.1/156135" rel="alternate"/>
<author>
<name>Tucker, Keili A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156135</id>
<updated>2024-08-15T03:03:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Step By Step: Suburban Active Transportation Planning in Spring Hill, Tennessee
Tucker, Keili A.
Suburban form produces car dependency with its circuitous routes, segregated land uses, and sprawling development. Active Transportation (AT), defined as non-motorized travel modes such as walking and cycling, has the potential to provide suburban residents with alternative mobility options. In 2015, Spring Hill, Tennessee, a city with suburban form and no dense urban core, adopted a Bicycle and Greenway Plan (BGP) to develop an AT network. This thesis seeks to understand how AT network plans are institutionalized, maintained, and expanded through policy and other implementation tools in order to accelerate progress on the expansion of AT infrastructure in Spring Hill. The thesis begins with four case studies: Spring Hill, Tennessee; Jefferson County, Alabama; Apex, North Carolina; and Mississippi Mills, Ontario, Canada. The case studies revealed that infrastructure, policy-making, and social programs must go hand in hand for a successful network. The thesis continues with sixteen one-on-one interviews of municipal staff, elected officials, and local developers in Spring Hill. The interviews addressed perspectives on walkability, experiences with AT implementation, and ideas for improving citywide pedestrian accessibility. The interviews reinforced that separated land uses and sprawling development limit the potential for walkability. Additionally, they revealed that greenfield development has been responsible for the majority of the BGP build-out thus far. BGP implementation would benefit from more buy-in from the city through dedicated funding streams and better use of existing programs that target pedestrian infrastructure. This work contributes to Active Transportation research by investigating the unique challenges of establishing walkability in rapidly growing suburban places.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Place for Arts &amp; Culture: how arts &amp; culture play into interdisciplinary strategies for community development without displacement</title>
<link href="https://hdl.handle.net/1721.1/156134" rel="alternate"/>
<author>
<name>Tolani, Yuvika</name>
</author>
<id>https://hdl.handle.net/1721.1/156134</id>
<updated>2024-08-15T03:35:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Making Place for Arts &amp; Culture: how arts &amp; culture play into interdisciplinary strategies for community development without displacement
Tolani, Yuvika
This thesis seeks to deepen our understanding of arts &amp; culture in the context of community development at a neighborhood scale. Specifically, it asks: how might arts and culture interventions be used as part of interdisciplinary strategies to bolster marginalized communities dealing with systemic disinvestment without exacerbating development-induced displacement? Focusing on el Punto in Salem, MA, it will surface tools used by a Community Development Corporation (CDC) working in a majority-immigrant community in the heart of a city. In doing so, it contemplates key tensions inherent in attempting to align a development strategy with community interests. Intersecting with the work of the Metropolitan Area Planning Council Department of Arts &amp; Culture, it will then turn to Boston’s Chinatown—a deeply different context, with certain shared characteristics—as a site of further inquiry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Rural to Urban: Examining Urbanization and Quality of Life in Hawassa, Ethiopia</title>
<link href="https://hdl.handle.net/1721.1/156133" rel="alternate"/>
<author>
<name>Tesfaye, Bethlehem Fisseha</name>
</author>
<id>https://hdl.handle.net/1721.1/156133</id>
<updated>2024-08-15T03:41:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Rural to Urban: Examining Urbanization and Quality of Life in Hawassa, Ethiopia
Tesfaye, Bethlehem Fisseha
This thesis examines the rapidly urbanizing, secondary city of Hawassa, Ethiopia, studying patterns of urban growth to determine how different models of planning for urban development can influence quality of life for urban residents in Ethiopia. To do so, it proposes its own operational definition of ‘quality of life’ that comprises (1) increased commercial activity, (2) access to affordable housing, (3) general health &amp; well-being, (4) healthy environments, (5) access to affordable transportation, and (6) community involvement &amp; sense of belonging. Hawassa is one of many secondary cities in Ethiopia and across Sub-Saharan Africa that are growing at a pace faster than the city can plan for, leading to unsustainable informal settlements that exacerbate inequity. While recent planning models have been put in place to avoid such settlements in Hawassa, such as the Urban Expansion Initiative, the Urban Local Government Development Project, and Special Economic Zones, the progress of these models and their effect on the newly urbanized population has yet to be evaluated. Furthermore, a successful model of urban planning that is self-sufficient, localized to its community, and accountable to the welfare of its population has yet to be defined. 
This project aims to determine a new standard for the evaluation of future urban development projects in secondary cities that incorporates equitable frameworks for decision-making in the formation of local planning policy and urban design by (1) quantitatively assessing the correlation between urban living and an inherited index for individual wealth, used as a proxy for ‘quality of life’, at a national level; (2) compiling and analyzing existing information and data on Hawassa’s recent urban development; and (3) constructing a narrative of Hawassa’s city development through new data gathered from the affected population of Hawassa on attitudes towards urbanization in three key study areas of the city: BahilAdarash sub-city, Tabor sub-city, and the area around Hawassa Industrial Park.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Pilots to Stable Services: Documenting the Rise and Diversity of Microtransit in the U.S.</title>
<link href="https://hdl.handle.net/1721.1/156130" rel="alternate"/>
<author>
<name>Humann, McKenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/156130</id>
<updated>2024-08-15T03:40:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Pilots to Stable Services: Documenting the Rise and Diversity of Microtransit in the U.S.
Humann, McKenzie
In 2014, the emergence of public on-demand, ride-sharing services, known as microtransit, (re)captured the attention of techno-positive urbanists. Echoing the same arguments for demand-response transit in the 1970s, new transit technology startups like Via, Chariot, and Bridj touted microtransit as a more affordable alternative to private ride-hailing services, while promising greater efficiency and improved customer experiences compared to traditional bus services. Proponents believed this "disruptive transportation innovation" could alleviate traffic congestion and reduce vehicle emissions if scaled successfully.&#13;
&#13;
After five years of mixed results from early pilot programs, it took the truly disruptive Covid-19 pandemic to launch microtransit into an accelerated phase of adoption. Many transit agencies replaced underperforming bus routes with microtransit, while others used federal funding to launch new pilots designed to connect riders to existing transit nodes. Yet the sparsity of public data on microtransit services prevents researchers unaffiliated with any major technology providers from establishing baseline service metrics or comprehensively evaluating the performance of these new programs in relation to each other, let alone assessing any broader effect on travel patterns.&#13;
&#13;
This thesis provides the first comprehensive documentation of microtransit's growth and trends in service design in the U.S. as a first step toward assessing its current state. A newly compiled dataset reveals the diversity and variability of microtransit programs in their service goals, types, and designs. Finally, this thesis proposes a new assessment framework to help microtransit administrators balance competing trade-offs like cost-efficiency, reliability, and flexibility based on their service goals and transit needs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Purpose and Growth: Evaluating Community Land Trusts (CLTs) as an Organizational Model and the Imperative for Strategic Management</title>
<link href="https://hdl.handle.net/1721.1/156129" rel="alternate"/>
<author>
<name>Rosario, Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/156129</id>
<updated>2024-08-15T03:26:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Balancing Purpose and Growth: Evaluating Community Land Trusts (CLTs) as an Organizational Model and the Imperative for Strategic Management
Rosario, Eduardo
The U.S. is experiencing a severe housing affordability crisis affecting both homeownership and rental markets across income levels. Community land trusts (CLTs) have gained popularity as a promising model to help preserve long-term affordable housing. However, while CLTs have been extensively studied conceptually, relatively little research has examined the strategic management choices and internal practices which ultimately impact the CLT's ability to scale its impact within communities.&#13;
&#13;
This thesis explores pivotal strategic considerations faced by CLT leaders as their organizations evolve. Through a review of the origins and philosophies underlying the CLT model, examples of CLTs across the U.S., and in-depth case studies, the research identifies three key areas where management choices are critical: 1) Clearly defining the CLT's vision, mission, and goals to maintain focus; 2) Navigating tradeoffs in organizational setup, housing types, scale, and speed of development; and 3) Aligning leadership capabilities with the CLT's growth stage.&#13;
&#13;
The findings highlight that while CLTs share the singular purpose of providing permanently affordable housing, their management priorities and pathways to impact can diverge significantly based on contextual factors and strategic decisions. This analysis provides a framework for CLT leaders to intentionally guide the trajectory of their organizations based on their specific missions, needs, market conditions, and aspirations for scale. The research aims to inform both emerging and established CLTs to maximize their impact on the housing affordability crisis.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Doing Good By Approaching Well: Enhancing a System Thinking Mindset and Ecosystem for Student Entrepreneurs Tackling Systemic Issues in Growth Markets.</title>
<link href="https://hdl.handle.net/1721.1/156128" rel="alternate"/>
<author>
<name>Briceno Brignole, Raul</name>
</author>
<id>https://hdl.handle.net/1721.1/156128</id>
<updated>2024-08-15T03:24:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Doing Good By Approaching Well: Enhancing a System Thinking Mindset and Ecosystem for Student Entrepreneurs Tackling Systemic Issues in Growth Markets.
Briceno Brignole, Raul
According to the United Nations, if current trends persist, the Sustainable Development Goals (SDGs) will not be met until 2082. The social, environmental, and economic issues behind these goals are inherently systemic and have often been inadequately addressed by governments, companies, and nonprofits. A systemic lens is essential for tackling these root causes and fostering strategic collaboration among stakeholders to achieve a systemic shift. Combining entrepreneurship with systemic thinking emerges as a powerful approach to drive this change. The Legatum Center at MIT has been at the forefront of empowering aspiring student entrepreneurs to address pressing issues in growth markets, fostering innovation and prosperity. This thesis delves into the concepts of system thinking and system change entrepreneurship, and proposes tailored frameworks and recommendations for the Legatum Center. These proposals aim to cultivate a systemic change entrepreneurial environment, equip aspiring student system change entrepreneurs, and further position Legatum as a central force at MIT promoting prosperity and change through a systemic approach to purpose-driven entrepreneurship.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collective Bargaining to Community Benefits: Leveraging Organized Labor to Advance an Equitable Clean Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156127" rel="alternate"/>
<author>
<name>Oh, Sung Eun Sally</name>
</author>
<id>https://hdl.handle.net/1721.1/156127</id>
<updated>2024-08-15T03:56:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Collective Bargaining to Community Benefits: Leveraging Organized Labor to Advance an Equitable Clean Energy Transition
Oh, Sung Eun Sally
This research aims to bridge the gap in understanding how community benefits in the clean energy transition can expand opportunities for workers and communities of color, particularly within the context of Community Benefits Programs (CBP), with a focus on the role of organized labor. Federal climate legislation such as the Infrastructure Investment and Jobs Act (IIJA) and the Inflation Reduction Act (IRA) is expected to propel the growth of the clean energy sector, and it is imperative to ensure that the resulting job creation and wealth-building opportunities are equitably distributed to historically disadvantaged communities. This paper aims to analyze the position of organized labor within the federal framework for addressing equity in the energy transition and its potential to bolster labor-climate movements. Positioned in the discourse on the political economy of energy transition and organized labor's historical role in advancing or impeding environmental justice and racial equity goals, this research examines traditional tools of labor and new directions posed by the community benefits movement. The research conducts a comparative case study using qualitative data to analyze key stakeholder priorities, labor-community engagement, and enforcement mechanisms of CBAs within the auto manufacturing sector in Los Angeles, CA, and Detroit, MI. Findings suggest that organized labor possesses significant leverage in negotiating community benefits but lacks influence in shaping the overall infrastructure for implementation and enforcement. The paper recommends that federal guidelines for the CBP or other funding conditionalities could help fill this gap, providing the coordination and resource allocation needed to shape the legal, political, and civic infrastructure that guides community benefits negotiations, implementation, and enforcement.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Coastal City Resilience and Extreme Heat Action in Zanzibar, Tanzania through Multi-Hazard Risk Assessment (MHRA)</title>
<link href="https://hdl.handle.net/1721.1/156126" rel="alternate"/>
<author>
<name>Shahdadpuri, Anushka</name>
</author>
<id>https://hdl.handle.net/1721.1/156126</id>
<updated>2024-08-15T03:48:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building Coastal City Resilience and Extreme Heat Action in Zanzibar, Tanzania through Multi-Hazard Risk Assessment (MHRA)
Shahdadpuri, Anushka
The Coastal City Resilience and Extreme Heat Action Project (CoCHAP) is an ongoing initiative of the Red Cross Red Crescent Climate Center that aims to build climate resilience in urban areas, particularly addressing extreme heat and coastal threats in Southeast Asia, Latin America, and East Africa. This project is conducted in collaboration with the International Federation of Red Cross and Red Crescent Societies (IFRC), American Red Cross (Am. RC), Global Disaster Preparedness Center, and the National Red Cross Societies. As part of CoCHAP, this thesis investigates the spatial vulnerabilities of compound risks related to heatwaves and flooding in Zanzibar, East Africa, in partnership with the Tanzania Red Cross Society (TRCS). Recent increases in temperature and precipitation have heightened Zanzibar's vulnerability. With one of the highest population densities in Africa, the region's economy relies heavily on climate-sensitive activities such as agriculture, tourism, and fishing, making it the most climate-vulnerable small island region. To understand the region's dichotomous predicament, I analyze the location-dependent climatic, socio-economic, physiological, and environmental parameters using a Multi-Hazard Risk Assessment (MHRA). The assessment evaluates three latent variables — exposure, vulnerability, and hazard — derived from remote sensing and household census survey (HCS) data. Principal component analysis and spatial analysis techniques were employed to assess the weighted vulnerability of over 100 wards (the smallest administrative zones) to both heat and flood risk. I find that while the hazard factor itself does not pose a major risk in Zanzibar, the socio-economic conditions, coupled with inflexible planning under neoliberal frameworks, exacerbate risks, particularly in urban wards.
This is evident in the distribution of flood and heat risk, which is random throughout the island city, although high land surface temperatures and precipitation are concentrated around existing built-up coastal areas. Twenty wards were identified as highly vulnerable to heatwaves and coastal flooding, revealing nuanced variations in multi-risk distribution across urban, suburban, and agrarian areas, influenced by gradients from coastal low-elevation to high-elevation inland zones. Notably, tourism-dependent wards emerge as potential areas for synergistic ecological and economic gains. These findings offer crucial insights for the TRCS, informing tailored adaptation plans as part of the Zanzibar Climate Change Alliance: City Wide Risk Assessment (CWRA).
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pa’ashi National Park: Resiliency, Restoration, and Reparative Planning for California’s Tulare Lake Basin</title>
<link href="https://hdl.handle.net/1721.1/156125" rel="alternate"/>
<author>
<name>O'Neil, Hazel</name>
</author>
<id>https://hdl.handle.net/1721.1/156125</id>
<updated>2024-08-15T03:53:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pa’ashi National Park: Resiliency, Restoration, and Reparative Planning for California’s Tulare Lake Basin
O'Neil, Hazel
In 2023, a series of atmospheric rivers reawakened California's largest sleeping lake, Pa'ashi, in the California Tulare Lake Basin. This thesis supports the proposal of the Tachi Yokut Tribe, one of the watershed's Indigenous communities, to preserve Pa'ashi in the form of a new National Park. I present historical and environmental context that explains how the lake was put to sleep by Manifest Destiny-era agricultural settlement and subsequent consolidation of political control over water. I argue that the Tachi Yokut Tribe's proposal for a National Park is a pragmatic, feasible, and desirable planning response to the region's interwoven challenges of climate change, ecological imbalance, and pervasive environmental injustice. I demonstrate how the community might develop the ideas of the park further through a sample visioning process and landscape design framework for the watershed. This thesis advances a theory of "two-eyed seeing" (Bartlett et al. 2012) planning practice by centering Indigenous values and planning scholarship to articulate how planners and designers might foster stronger connections between people, place, and nature when undertaking landscape-scale climate adaptation projects.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pathways to Equity: Mapping the Impacts of Nairobi’s Urban Form on Pedestrian Mobility</title>
<link href="https://hdl.handle.net/1721.1/156124" rel="alternate"/>
<author>
<name>Kifetew, Yabework Abebe</name>
</author>
<id>https://hdl.handle.net/1721.1/156124</id>
<updated>2024-08-15T03:44:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pathways to Equity: Mapping the Impacts of Nairobi’s Urban Form on Pedestrian Mobility
Kifetew, Yabework Abebe
As urban growth continues to surge in Nairobi, Kenya, many development projects focus on highway and road improvement, with little to no investment channeled into better pedestrian infrastructure. The lack of proper sidewalks and crossings in Nairobi makes walking in the city a risk that residents take every day – often affecting low-income residents who rely on walking as their main mode of transportation. Although there have been improvements to pedestrian infrastructure in recent years, pedestrian crash rates remain high, particularly along highways. By using various statistical and spatial analysis models, this study explores Nairobi’s built environment and how it may impact the patterns and behaviors of pedestrians in order to better understand where and why crashes occur. This work is grounded in an exploration of the social history of Nairobi’s built forms, and how its colonial past has influenced the current policies that favor car-centric mega-infrastructure. It challenges the city’s pursuit of “global” status through these policies at the cost of its residents and uses data analysis as a tool to advocate for a shift in development priorities.&#13;
&#13;
The goal of this study is to create a framework in which the built environment can be studied to identify risk factors for pedestrian safety and to provide insights on how urban design policies can improve infrastructure for pedestrians and marginalized populations. Although focused on Nairobi, the framework is designed to be applicable to other Global Majority cities that face similar urban infrastructure challenges and data scarcity. In a context where cars and highways are prioritized, this work can be leveraged for more equitable design practices in these cities and make them safer and more accessible for captive walkers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Klondike Memory Project: Race, Counter-Memory, and Planning Processes</title>
<link href="https://hdl.handle.net/1721.1/156123" rel="alternate"/>
<author>
<name>Thompson-Smith, Diamond</name>
</author>
<id>https://hdl.handle.net/1721.1/156123</id>
<updated>2024-08-15T03:24:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Klondike Memory Project: Race, Counter-Memory, and Planning Processes
Thompson-Smith, Diamond
This thesis expands on existing community-driven archiving work started by the Klondike-Smokey City Community Development Corporation (KSCCDC) in 2016 to share untold and lesser-known collective histories from the Klondike and Smokey City neighborhoods in Memphis, Tennessee. Using a photo essay and forthcoming cartographic tools as dissemination methods, I aim to support communal healing and reconciliation following long histories of structural racialized disinvestment in these neighborhoods. In this project, we amplify challenges to state narratives that attempt to decontextualize Black history from racist regimes and legacies to subjugate Black and Brown epistemologies. In this thesis, I propose that memory work and acts of truth-telling offer communities that have experienced racial planning and state erasure a pathway toward acquiring justice and repairing structural harm by helping them reaffirm their identities, assert their humanity, hold perpetrators of harm accountable, and envision liberatory futures. I also claim that memory is a tool planners can employ within the reparative framework to help disrupt “rational” planning logic that attempts to discredit the embodied experience and epistemologies of Black people as invalid data or “non-data.” Lastly, I insist that using critical cartographic practices such as counter-mapping further disrupts White supremacy and erasure practices embedded within rational planning logic and archival practices by situating the validity of collective memory in place and landscapes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Jail Is Not Social Housing: making new grounds for Chinatown</title>
<link href="https://hdl.handle.net/1721.1/156122" rel="alternate"/>
<author>
<name>Zhong, Calvin</name>
</author>
<id>https://hdl.handle.net/1721.1/156122</id>
<updated>2024-08-15T03:07:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Jail Is Not Social Housing: making new grounds for Chinatown
Zhong, Calvin
This story begins at the site of The Tombs, a jail in Chinatown that is currently being doubled in size as a part of a distributed alternative to the Rikers Island Jail. The new megajail will have capacity to house up to 886 people in detention and will include space for on-site services and programming, staff facilities, and publicly accessible commercial and community space on the ground floor. This exposes how architecture behaves as a mode of cultural production and acts in service of capitalist and carceral systems. Nowhere is this more evident than in New York City’s Chinatown - often called the final frontier for development in Lower Manhattan. Immigrants, who’ve long come in search of land, green pastures, and single-family homes, find themselves Downtown and within ethnic enclaves, where homeownership is historically and canonically low. At this site, generations of indigenous tribes, freed African communities, and various immigrant communities endure a cycle of settlement, disenfranchisement, and eventually, destruction. The city, rather than invest in its communities, responds each time with a new jail. Under this urban mode, architecture provides few forms of accessible inhabitation beyond the neo-feudal rental system and racialized prison industrial complex. It exists to extend exploitation by selling the dream of homeownership, yet only makes room to support a select few. This thesis is interested in the limited means of shelter that are encapsulated within the architectural imagination - it asks us to reconsider new value systems beyond ownership and incarceration. If architecture were to reimagine how it produces - culturally, tectonically, morally - how could it act in service of the people of Chinatown, and in earnest support of the Dream that the profession has helped to proliferate?
Or better yet, this thesis will reject and reverse the pattern of the site to wholly reimagine Chinatown and its dreams: first, to destroy the jail, then, to facilitate reconstruction, re-enfranchisement, and resettlement of communities lost.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Home Share Program for Asylum Seeking Migrants in New York City</title>
<link href="https://hdl.handle.net/1721.1/156121" rel="alternate"/>
<author>
<name>Mackin-Plankey, Francisco "Pancho"</name>
</author>
<id>https://hdl.handle.net/1721.1/156121</id>
<updated>2024-08-15T03:27:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing a Home Share Program for Asylum Seeking Migrants in New York City
Mackin-Plankey, Francisco "Pancho"
Throughout 2022, 2023, and 2024, increasing numbers of asylum-seeking migrants have sought shelter in New York City through the city’s shelter system. To provide shelter to asylum seekers, New York City expanded shelter capacity by expanding services at congregate shelters, contracting with hotels to provide emergency shelter, and opening Humanitarian Emergency Response and Relief Centers. In late 2022, Enterprise Community Partners was contracted to evaluate the feasibility of operating a home share program for asylum-seeking migrants as an alternative to New York City’s shelter system. By early 2024, Enterprise had completed the design of a home share program for asylum-seeking migrants. In its analysis, Enterprise found that difficulty recruiting hosts was one of the biggest challenges to operating the home share program. Enterprise’s program design focuses on minimizing the level of engagement and effort required of hosts, to lower the bar to entry for participation in the program. This thesis explores Enterprise’s research process and proposes a program structure oriented toward making hosting a more substantive experience and building on the strengths of a potential program operator. Similarly, during the earlier phases of its research, Enterprise considered focusing the program on specific neighborhoods in New York City but ultimately moved on from a neighborhood-specific strategy. This thesis identifies which neighborhoods would be most suitable for a home share program, based on Enterprise’s initial criteria.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooking Together: Form &amp; Function of Community Kitchen as Urban ‘Third Place’ Promoting Community Wellbeing</title>
<link href="https://hdl.handle.net/1721.1/156120" rel="alternate"/>
<author>
<name>Heneine, Emma M.</name>
</author>
<id>https://hdl.handle.net/1721.1/156120</id>
<updated>2024-08-15T03:01:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Cooking Together: Form &amp; Function of Community Kitchen as Urban ‘Third Place’ Promoting Community Wellbeing
Heneine, Emma M.
Global food insecurity has surged in recent years, with nearly one-third of the world’s population experiencing food insecurity between 2020 and 2022. Malnutrition remains a leading cause of death globally, making food access a major determinant of health. Cities are increasingly grappling with challenges as urban populations expand and urban food systems are affected by various systemic factors including geopolitical conflicts, economic crises, environmental anomalies, and epidemics. Urban planning and physical characteristics of the urban built environment also affect food access; single-use zoning, suburbanization, rising food costs, proliferation of processed foods, and food deserts contribute to urban food insecurity, disproportionately affecting low-income communities. As a result, urban populations have seen a rise in the prevalence of both undernourishment and obesity. Dating back centuries and found globally, community kitchens are places where food is prepared en masse by community members to address local food insecurity. During the COVID-19 pandemic, community kitchens (re)gained prominence, offering essential nourishment as well as solace and community amidst widespread hardship and isolation. Research indicates the success of community kitchens in improving nutrition, as well as a number of other benefits including improving mental health, individual and collective empowerment, environmental sustainability, and social cohesion. Despite their effectiveness, reliance on community kitchens to address food insecurity reveals a tension over whether such responsibility should fall on communities, rather than being addressed structurally. Nonetheless, community kitchens represent vital interventions in the absence of adequate public services, showcasing the collective power of communities to address food insecurity and broader social challenges.
Drawing from a sample of nine contemporary community kitchens around the world, this thesis explores how community kitchens’ form and function can evolve into critical urban infrastructures, offering benefits beyond food relief to promote community wellbeing in the aftermath of a community shock. In so doing, community kitchens represent urban ‘third places’ – becoming essential informal gathering spaces for communities through their promotion of the arts and culture, education and skills building, economic development, ecological stewardship, and community development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolution of a Useful Place: The Gas Station in America</title>
<link href="https://hdl.handle.net/1721.1/156119" rel="alternate"/>
<author>
<name>Capozzi, Bennett</name>
</author>
<id>https://hdl.handle.net/1721.1/156119</id>
<updated>2024-08-15T03:25:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evolution of a Useful Place: The Gas Station in America
Capozzi, Bennett
The accelerating transition to EVs in the United States raises questions about the economic viability of fuel retailing in the coming decades; therefore it is necessary to think more deeply about the future of the 150,000 gas stations in the United States, most of which are polluted petroleum brownfields. This thesis uses three research methods to understand the past and present state of gas stations in America and the impact they have had on the built environment: historical research, site visits, and the case study method. First, the thesis explores the way that gas stations in America have adapted their form and program to changes in their political, economic, and technological environments throughout the twentieth century. Then, turning to existing sites, the thesis generates four gas station typologies based on location. These types differ based on key formal and programmatic characteristics, and they are likely to have different reuse futures in a post-gas station world. Photography and site visits capture the way that this process of reuse has already begun; the thesis documents how many former gas stations in the contemporary landscape have been redeveloped, converted to new uses, or abandoned over the past several decades. These adaptations reveal the way that context influences these sites beyond the lifespan of fuel retailing. With the understanding that the transition away from combustion-engine vehicles is likely to continue, the thesis presents a policy framework focused on three scenarios: continued fuel retailing, conversion to EV charging, and industry exit. The framework is designed to help policymakers and planners make informed decisions about how to adapt these sites as the number of gas stations in the United States steadily decreases, leaving a trail of polluted brownfields in its wake.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Claiming Identity through Space: LGBTQ+ Community Building via Commercial Development in West Hollywood and Palm Springs</title>
<link href="https://hdl.handle.net/1721.1/156118" rel="alternate"/>
<author>
<name>Ng, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/156118</id>
<updated>2024-08-15T03:21:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Claiming Identity through Space: LGBTQ+ Community Building via Commercial Development in West Hollywood and Palm Springs
Ng, Jason
Examining the relationship between queer identity and urban space, this thesis focuses on LGBT+ commercial real estate and its role in community building. Through the cities of West Hollywood and Palm Springs in California, it explores historic, contemporary, and forward-looking narratives of LGBT+-oriented commercial development, with an emphasis on retail, hospitality, and multifamily. Key questions address how LGBT+ communities claim and shape space (socially, economically, and physically) within “gayborhoods”, as well as strategies for navigating urban change. By analyzing these narratives with qualitative and quantitative methods, this thesis offers insights for developers, planners, and other stakeholders invested in creating vibrant, inclusive communities.&#13;
&#13;
This interdisciplinary mixed-methods approach includes original GIS and data analysis of historic LGBT+ establishments, demographic study, literature review, site observation, interviews with stakeholders ranging from economic development professionals to mayors, and case studies of a queer women-owned small business and LGBT+ senior living community. The findings underscore the subversive and politically charged origins of gayborhoods, characterized by authenticity, entrepreneurship, and community-centric values. The analysis also reveals challenges to gayborhood identity as West Hollywood and Palm Springs grapple with questions of gentrification vs. preservation, commercialization, and shifting demographics (aging populations, increasing affluence, mainstream audiences, etc.). &#13;
&#13;
Given increased LGBT+ acceptance in the US since the mid-century (generally speaking) and the advent of social media and dating apps, some question whether the gayborhood is dying or even necessary anymore. I argue that the gayborhood as a framework, though evolving, persists in its relevance due to its core commitment to LGBT+ community building. And its resilience is reflective of the historic legacy of the LGBT+ community itself.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Segment Unseen Tasks In-Context</title>
<link href="https://hdl.handle.net/1721.1/156117" rel="alternate"/>
<author>
<name>Butoi, Victor Ion</name>
</author>
<id>https://hdl.handle.net/1721.1/156117</id>
<updated>2024-08-15T03:52:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning to Segment Unseen Tasks In-Context
Butoi, Victor Ion
While deep learning models have become the predominant method for medical image segmentation, they are typically incapable of generalizing to new segmentation tasks---involving new anatomies, image modalities, or labels. For a new segmentation task, researchers will often have to prepare new task-specific models. This process is time-consuming and poses a substantial barrier for clinical researchers who often lack the resources and expertise to train neural networks. &#13;
&#13;
We present UniverSeg, an in-context learning method for solving unseen medical segmentation tasks. Given a new image to segment, and a set of image-label pairs that define the task, UniverSeg can produce accurate segmentation predictions with no additional training. We demonstrate that UniverSeg substantially outperforms existing methods in solving unseen segmentation tasks, and thoroughly analyze important aspects of our proposed data, training, and inference paradigms.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Imperfect Question of Stadium Development: A Typology of Contemporary Development and Strategies for a Sustainable Future</title>
<link href="https://hdl.handle.net/1721.1/156116" rel="alternate"/>
<author>
<name>Hill, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/156116</id>
<updated>2024-08-15T03:01:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Imperfect Question of Stadium Development: A Typology of Contemporary Development and Strategies for a Sustainable Future
Hill, Melissa
This thesis explores the paths forward at the intersection of economic development, public space, and parking. Using OpenStreetMap data and American Community Survey estimates, this project uses GIS analysis to develop a typology of contemporary NFL stadium developments. Using illustrative case studies informed by this analysis, site visits, and pre-existing literature, the thesis evaluates the tradeoffs presented by various approaches to stadium development. Rather than recommend a single path forward, this thesis provides suggestions for working within the constraints of local landscapes to develop strategies to best support the public good in each context.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Fiber Source for Label-free Nonlinear Microscopy</title>
<link href="https://hdl.handle.net/1721.1/156115" rel="alternate"/>
<author>
<name>Cao, Honghao</name>
</author>
<id>https://hdl.handle.net/1721.1/156115</id>
<updated>2024-08-15T03:16:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adaptive Fiber Source for Label-free Nonlinear Microscopy
Cao, Honghao
Nonlinear microscopy enables label-free visualization of biological processes in live samples at sub-cellular spatial resolution and sub-millimeter penetration depth, allowing in-vivo study of the mechanisms underlying several cellular functions. Due to the low absorption cross-section of the two-photon and three-photon excitation processes, especially for endogenous fluorophores, high-peak-power broadband laser sources are important for improving the efficiency of nonlinear signal generation in microscopy. Multimode fibers (MMFs) are regaining interest as light sources due to their high-dimensional spatiotemporal nonlinear dynamics and scalability for high power. MMF sources with effective control of nonlinear processes would enable new possibilities in many areas, such as high-power fiber lasers, biomedical imaging, and chemical sensing, as well as a platform for investigation of intriguing physics phenomena. In this thesis, we present a simple yet effective way of controlling nonlinear effects at high peak power levels in MMFs. This is achieved by leveraging not only the spatial but also the temporal degrees of freedom during multimodal nonlinear pulse propagation using a programmable fiber shaper that introduces time-dependent disorders. We achieve high spectral-temporal-spatial tunability in the output laser pulses of the MMF, resulting in a broadband high-peak-power source. We further demonstrate its potential as a laser source for nonlinear microscopy through widely tunable two-photon and three-photon excitation. This approach provides possibilities for technological advances in a wide range of fields, such as nonlinear optics, biomedical imaging, and spectroscopy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Encouraging reuse in rural Italy: A case study implementing new frameworks to collect local data and understand feasible reprogramming strategies in Guadagnolo</title>
<link href="https://hdl.handle.net/1721.1/156114" rel="alternate"/>
<author>
<name>Consilvio, Annabel</name>
</author>
<id>https://hdl.handle.net/1721.1/156114</id>
<updated>2024-08-15T03:27:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Encouraging reuse in rural Italy: A case study implementing new frameworks to collect local data and understand feasible reprogramming strategies in Guadagnolo
Consilvio, Annabel
This thesis presents a new survey methodology for collecting data on occupancy, building typologies, and building conditions in small, depopulating towns in rural Italy. The survey methodology is split into two phases: one in which granular data is gathered through a series of visual surveys and a second in which this data is analyzed through a series of assessments aimed at identifying the most strategic buildings for reuse to support economic development. With one in three Italian municipalities losing population since 1951, this new framework aims to equip municipalities with critical data that can inform strategic reprogramming efforts and strengthen funding applications (Serico Gruppo Cresme, 2008). The research is built on the prior efforts and knowledge of Liminal, the thesis client and an organization in Italy working to build capacity within these rural communities. By providing tools like this framework, Liminal empowers residents to envision new futures and supports municipalities to realize these visions. &#13;
&#13;
This approach was tested in Guadagnolo, a rapidly depopulating town in the Monti Prenestini region of Lazio, which witnessed a 50% population decline in just two decades (Progetto - Campo Base Guadagnolo, 2022). Through this methodology, a robust and granular spatial database model of Guadagnolo’s built fabric was constructed, permitting analysis of possible sites of reuse to support a university satellite campus and develop a long-term tourism destination. The assessment methodology provided several key buildings for the town to consider adapting to support these two reuse scenarios, while also generating extensive data that the town can utilize in a variety of future initiatives and funding applications. Ultimately, this thesis endeavors to support rural Italian communities by providing a data-driven framework that can unlock funding opportunities and initiate strategic planning efforts, providing a path forward that protects the cultural and ecological richness of these small towns.&#13;
&#13;
Keywords: rural development, strategic reuse, economic revitalization, survey methodology
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Retrofitting Affordable Multifamily Housing: A Survey of Landlords in Cincinnati, Ohio</title>
<link href="https://hdl.handle.net/1721.1/156113" rel="alternate"/>
<author>
<name>Fang, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/156113</id>
<updated>2024-08-15T03:03:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Retrofitting Affordable Multifamily Housing: A Survey of Landlords in Cincinnati, Ohio
Fang, Emily
Building energy efficiency retrofits are a crucial part of decarbonizing the building sector and decreasing residential energy burden—low-income households, renters, and residents of multifamily buildings disproportionately bear this burden. This study serves as a case study on WarmUp Cincy (2020-2022), a local government-led pilot program that provided grants to landlords of affordable multifamily housing to help implement energy efficiency retrofits. In partnership with the City of Cincinnati Office of Environment &amp; Sustainability, I assess results from the pilot program, develop and analyze a survey of affordable housing landlords in Cincinnati, and conduct interviews with key energy stakeholders in the region to answer: 1) what are landlords’ current priorities and understandings of the cost and energy savings of specific upgrades, and 2) what energy efficiency program elements will be most effective in serving these buildings? As the City transitions towards a second phase of WarmUp Cincy to better address its climate and energy equity goals, this study seeks to provide insight on how to approach key program design questions, such as selecting a program administrator and determining a list of eligible technologies. In addition, this study explores WarmUp Cincy’s synergies with other federal and state funding programs, WarmUp Cincy’s continuing role in addressing local planning challenges of outreach and workforce development, and the importance of program evaluation as building technologies, funding opportunities, and community education change over time.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Civic Design Room: Conversations on What It Looks Like To Operationalize Design in Government? With Community, Within Government, and Your Team</title>
<link href="https://hdl.handle.net/1721.1/156112" rel="alternate"/>
<author>
<name>N'Diaye, Mariama</name>
</author>
<id>https://hdl.handle.net/1721.1/156112</id>
<updated>2024-08-15T03:06:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Civic Design Room: Conversations on What It Looks Like To Operationalize Design in Government? With Community, Within Government, and Your Team
N'Diaye, Mariama
The Civic Design Room is a podcast and media thesis project that engages designers in the public sector, primarily in the US, on how they have operationalized design methodologies in the public sector. This podcast is a series of thirteen forty-five-minute to one-hour episodes, each featuring a different guest. These guests range from current or former US federal and local government employees to urban planners and designers working in local US governments and researchers based internationally in Colombia, the United Kingdom, and Finland.&#13;
Each episode covers similar topics of design, politics, and the management skills needed to foster an innovative team in government. This thesis calls for a new mode of design, Caring Systems Design, which seeks to infuse principles of care ethics (attentiveness, responsiveness, competence, and responsibility) throughout the multiple, nested levels of government work: from the individual and team level to cross-departmental collaboration, to engaging with external communities and stakeholders. The project will live on Spotify, and the notes of each episode include supportive materials for those listening. The written thesis represents the breadth of my research, including the methods and processes used to create the podcast, the findings from each podcast, and the implications of my findings and strategies in urban planning and the public sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A New Era for the Old Dominion: Strategies for the Virginia State Government to Lead an Equitable &amp; Ambitious Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156111" rel="alternate"/>
<author>
<name>Jaye, Dyanna</name>
</author>
<id>https://hdl.handle.net/1721.1/156111</id>
<updated>2024-08-15T03:39:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A New Era for the Old Dominion: Strategies for the Virginia State Government to Lead an Equitable &amp; Ambitious Energy Transition
Jaye, Dyanna
In August 2022, President Biden signed the Inflation Reduction Act, which mobilizes nearly $1 trillion in federal investment, primarily for decarbonization and a clean energy transition. Alongside other federal legislation, this marks a transformation in the approach to economic policy in the United States: it is a move away from neoclassical economic policy and towards mission-driven industrial policy. In this era of transition, the emerging clean energy industry faces a particular legal regime where much of the authority over and regulation of our energy system happens at the state level. Recognizing this dynamic, this thesis is a case study on the Virginia state government and aims to analyze and identify effective policy tools to reduce GHG emissions at the state level, including transitioning away from fossil fuel power generation, increasing energy efficiency and load flexibility, and stimulating clean energy generation. This case study is structured in three parts: (1) an institutional analysis and energy profile of Virginia, (2) a history and analysis of energy regulation in Virginia, and (3) a climate and energy policy analysis. I conclude with five recommendations for state leadership to support the emerging clean energy industry and a climate transition that prioritizes the health, wellbeing, and economic gains for Virginia communities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Community Transportation Acts Archive</title>
<link href="https://hdl.handle.net/1721.1/156110" rel="alternate"/>
<author>
<name>Oliver, Elyse</name>
</author>
<id>https://hdl.handle.net/1721.1/156110</id>
<updated>2024-08-15T03:28:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Community Transportation Acts Archive
Oliver, Elyse
The Community Transportation Acts Archive is a planning and design approach for advancing transportation planning based in the experiences of individuals with mobility challenges and/or transit reliance. “Community transportation acts” are actions taken by residents—whether within systems or informally—to address mobility needs not met by existing policy and service. In the pages that follow, I introduce the process for building a place-specific community transportation acts archive for the Greater Portland region of Maine and outline the value that such an archive provides to the public and transportation planners. &#13;
&#13;
The Greater Portland Community Transportation Acts Archive (the Archive) draws attention to residents’ challenges in transportation, their impact and influence on transportation planning, and their visions for transportation in the Greater Portland region of Maine. The Archive comes together through reparative archiving, an archival approach based in critical studies that focuses on the records and stories of individuals and groups with underrepresented perspectives in existing historical narratives. Reparative archiving draws from Black studies, Indigenous studies, and queer studies, among other fields, and encourages expansive and inclusive record collection and interpretation practices. I hypothesize that engagement with the Greater Portland Community Transportation Acts Archive—by the public and planners—will contribute to novel transportation initiatives in and around Portland, ME that better support mobility for those with the greatest transportation barriers. This thesis documents the first test of this hypothesis—my own engagement, as a planner, with the Archive—and presents a prototype archival product ready for further testing as part of upcoming Greater Portland planning efforts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Message From The Grassroots: Exploring Black liberation in grassroots economic practice and planning in the Americas</title>
<link href="https://hdl.handle.net/1721.1/156109" rel="alternate"/>
<author>
<name>Cole, Austin K.</name>
</author>
<id>https://hdl.handle.net/1721.1/156109</id>
<updated>2024-08-15T03:33:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Message From The Grassroots: Exploring Black liberation in grassroots economic practice and planning in the Americas
Cole, Austin K.
Building from theories of underdevelopment and economic warfare on Black peoples (Africans and Afrodescendants) globally, this study brings into the fields of urban planning and local community &amp; economic development the analytic and urgency of the Black Radical Peace Tradition. This involves an exploration of alternatives to traditional paradigms of economic development and planning that might help reclaim and reconstitute “the economy” towards practices and efforts that serve human life and dignity, popular sovereignty, connection to the Earth, and self-determinative capacities of African peoples throughout the Americas. Intent on contributing toward an anti-colonial praxis in this field, the following study is in part an application of the lens of Black political economy to geographic and urban challenges. It is also an exploration of grassroots people-centered efforts, both operating within the spatial-political confines of empire and those revolutionary programs outside of its physical bounds. And finally, it is a reflection on the possible purposes and roles of the “intellectual” and “planner” in supporting the liberation of Black peoples in the Americas, as part of the program of the liberation of all peoples globally.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Mirror Project: A Portrait of Urban Inequality</title>
<link href="https://hdl.handle.net/1721.1/156108" rel="alternate"/>
<author>
<name>Phya, Nolen</name>
</author>
<id>https://hdl.handle.net/1721.1/156108</id>
<updated>2024-08-15T03:46:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Mirror Project: A Portrait of Urban Inequality
Phya, Nolen
Photography has historically played a vital role in highlighting urban inequality, as seen in the work of Jacob Riis documenting late 19th-century New York City. Today, amidst ongoing gentrification, traditional mapping methods often fall short in capturing the lived experiences of communities. To address this, my thesis proposes using photography to document contemporary urban inequality in New York City. By engaging native or local New Yorker photographers and providing them with free black-and-white film rolls, the project aims to create an authentic archive of images reflecting the realities of gentrification. This approach not only offers a nuanced understanding of the phenomenon but also serves as a catalyst for empathy, dialogue, and action among policymakers, activists, and the broader public. Ultimately, the project seeks to empower communities and contribute to more equitable urban development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Power: How Colombia’s National Oil Company Can Support the Country’s Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156107" rel="alternate"/>
<author>
<name>Beron, David</name>
</author>
<id>https://hdl.handle.net/1721.1/156107</id>
<updated>2024-08-15T03:39:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On Power: How Colombia’s National Oil Company Can Support the Country’s Energy Transition
Beron, David
This thesis is organized in two parts. Part I argues that national oil companies, which now own and produce most of the world’s oil, will be protagonists in the transition to low-carbon energy sources. The pathways that these companies take will be distinct from country to country and will define how the transition plays out globally. Part II sites my analysis in Colombia. It is an exercise in memory, reflection, and imagination based on a series of conversations with current and former decisionmakers in the country’s energy sector. I show how the power supply crisis of 1992 revealed inseparable links between climate, energy, capital, and policy. I argue that growing and greening the power sector will require stronger central planning and favoring power purchase agreements over spot transactions. And I envision a country in which Colombia’s state-owned Ecopetrol is no longer an oil company. It contributes to a sovereign wealth fund for the country’s transition, leads R&amp;D efforts, and has become an important player in power transmission and generation. Ecopetrol sells green hydrogen — instead of fossil fuels — to Europe and Asia. It has shifted from geology to geography, from offshore drilling to offshore wind. Is this country inherently different from twenty-first century Colombia?
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How post-pandemic public transit journeys can inform employers’ return to office strategies in Boston, MA and Washington, DC</title>
<link href="https://hdl.handle.net/1721.1/156106" rel="alternate"/>
<author>
<name>Uzoh, Nwakaego</name>
</author>
<id>https://hdl.handle.net/1721.1/156106</id>
<updated>2024-08-15T03:25:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How post-pandemic public transit journeys can inform employers’ return to office strategies in Boston, MA and Washington, DC
Uzoh, Nwakaego
This research focuses on the changes in public transit in Boston, Massachusetts, and Washington, D.C., against the backdrop of companies facing challenges in bringing employees back to the office. These challenges include rolling back official in-office dates due to resistance from remote-capable employees, set against transit systems experiencing significant shifts catalyzed by the pandemic and layered upon decades of transit disinvestment in the United States. The study builds on previous research on work-from-home trends among white-collar workers, leading to the central question of how employers in dense urban areas can manage a return to the office amidst fluctuating public transit service levels and changes in job accessibility.&#13;
&#13;
To address this question, the research analyzes housing affordability and public transit service levels in Boston and D.C. for three design and development companies. It aims to determine the potential success rates for returning to the office for two specific job roles. The findings suggest that an income-informed approach to returning to the office, coupled with strategies to align employee preferences with best practices, can be beneficial.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reflective Planning and Design for Community Resilience: A Case Study in a Vulnerable and Shrinking Japanese Village</title>
<link href="https://hdl.handle.net/1721.1/156104" rel="alternate"/>
<author>
<name>Okai-Yabe, Keiko</name>
</author>
<id>https://hdl.handle.net/1721.1/156104</id>
<updated>2024-08-15T03:29:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reflective Planning and Design for Community Resilience: A Case Study in a Vulnerable and Shrinking Japanese Village
Okai-Yabe, Keiko
This study investigates resilience strategies in rural Japanese areas characterized by population decline, demographic aging, and heightened disaster risk. I particularly examine the approach of relocating communities to safer, higher ground in regions prone to tsunamis. The focus is on Omosu Village in Numazu City, Japan, which was the first community to attempt relocation through the Disaster Prevention Collective Relocation Promotion Project (DPCRPP) in preparation for the anticipated Nankai Trough earthquake and tsunami, expected to occur within the next 20 years with a high probability. The methodology involved developing planning and design proposals, presenting these to officials in Numazu City for feedback, and revising the proposals accordingly, embodying a reflective practice approach. Due to the sensitivity of the subject, direct discussions with residents were not possible; instead, I analyzed recorded materials from a 2012-2013 workshop on hill relocation and responses from 106 residents to a post-workshop questionnaire to gather insights and integrate them into my planning and design.&#13;
&#13;
The findings highlight a disconnect between areas supported by Japan’s Location Optimization Plan (LOP) and Small Hub Development (SHD), which complicates relocation efforts for villages like Omosu, situated in these policy gaps. This study offers policy-related recommendations for addressing the challenges faced by shrinking settlements caught in these gaps and demonstrates the potential of village design to incorporate long-term planning over the next two decades, addressing both disaster prevention and everyday livelihood sustainability. The results underscore the viability of previously considered impossible relocations to higher ground and outline the necessary steps to accomplish this. Furthermore, the study emphasizes the significance of a holistic planning and design approach that safeguards residents’ lives and invigorates community spirit in rural villages enriched with natural resources and cultural heritage.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Jaywalking Index: Visual and Socio-demographic Patterns in London</title>
<link href="https://hdl.handle.net/1721.1/156103" rel="alternate"/>
<author>
<name>de Castro Filho, Fabio Marcel</name>
</author>
<id>https://hdl.handle.net/1721.1/156103</id>
<updated>2024-08-15T03:56:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Jaywalking Index: Visual and Socio-demographic Patterns in London
de Castro Filho, Fabio Marcel
This quantitative research delves into the intricate dynamics of pedestrian safety, urban design, and behavioral analysis within the overarching framework of Vision Zero principles in London, UK. With a specific emphasis on comprehending jaywalking behavior, this study investigates the sociodemographic characteristics of jaywalkers and examines the correlation between urban design features and jaywalking crashes. Employing GIS, the research analyzes 25,732 pedestrian crashes and utilizes Visual Artificial Intelligence to segment 280,000 images obtained from Google Street View. Key findings encompass the sociodemographic profiles of jaywalkers and the formulation of a jaywalking index, which serves as an initial tool for identifying areas warranting further investigation in urban design. This index aids in pinpointing regions with a heightened probability of pedestrian crashes, offering valuable insights for proactive urban planning and safety enhancement measures.&#13;
&#13;
Keywords: Urban Design; Urban Science; Mobility; Visual Artificial Intelligence; Computer Vision.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing In-Garage Charging Schedules to Maximize Electrified Mileage for Electric Bus Fleets</title>
<link href="https://hdl.handle.net/1721.1/156102" rel="alternate"/>
<author>
<name>Wu, Yen-Chu</name>
</author>
<id>https://hdl.handle.net/1721.1/156102</id>
<updated>2024-08-15T03:26:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimizing In-Garage Charging Schedules to Maximize Electrified Mileage for Electric Bus Fleets
Wu, Yen-Chu
Many transit agencies across the US are working towards a zero-emission electric bus fleet in order to reduce petroleum use and carbon emissions. This thesis presents a data-driven approach to optimize short-term in-garage charging schedules of electric buses, aiming to enhance operational efficiency in public transportation systems. We estimate the energy required for each trip using historical data on temperature, ridership, and speeds. The proposed mixed-integer programming (MIP) model maximizes total electrified mileage while considering constraints related to charger configuration, block schedules, energy requirements, and battery capacity. To solve this complex problem in a reasonable timeframe, we further decompose the problem into two phases. The initial phase involves determining which blocks should be serviced by the same bus and establishing a schedule that covers each block exactly once. The subsequent phase focuses on identifying the optimal in-garage charging schedule and deciding which blocks should be electrified, considering the schedule from the first phase. The model’s effectiveness is demonstrated through a case study using real-world data from the Chicago Transit Authority (CTA). Future scenarios and sensitivity analyses, considering variations in available electric buses, charger configurations, and risk tolerance in estimated energy requirements for each block, offer comprehensive and valuable insights for the adoption of electric buses and chargers. 
Key findings include: (a) slow chargers may be more cost-effective than fast ones, given recent block schedules and cost estimates, (b) customizing charging strategies maximizes electrified distance but poses operational challenges, (c) agencies should assess the trade-offs between the electrifiable distances and the risk of running below specified state of charge (SOC) thresholds, (d) lower battery degradation may reduce the required number of buses for the same electrified mileage, and (e) seasonal analyses reveal that significantly more miles can be electrified during summer compared to winter due to the lower energy required for trips on warmer days.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Shared Vulnerabilities: Climate Adaptation on the Split Island of Saint Martin</title>
<link href="https://hdl.handle.net/1721.1/156101" rel="alternate"/>
<author>
<name>Flamme, Emilie</name>
</author>
<id>https://hdl.handle.net/1721.1/156101</id>
<updated>2024-08-15T03:46:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Navigating Shared Vulnerabilities: Climate Adaptation on the Split Island of Saint Martin
Flamme, Emilie
Saint Martin, on the windward side of the Caribbean Sea, is a volcanic island whose low-lying areas are at high risk of flooding and storm surges as a result of exposure to increasingly severe hurricanes, compounded by sea level rise. Saint Martin’s mountainous landscape is split between two governments. The first is the Collectivity of Saint-Martin, a semi-autonomous region of France. The second is the Government of Sint Maarten, an independent island government within the Kingdom of the Netherlands. This thesis examines how both governments on the island of Saint Martin are working to develop climate adaptation strategies within a context of already existing chronic exposure to extreme climate risks. Given the administrative split and the severity of climate change, how can an island with two governments and two different approaches to climate change adapt to common future climate changes? &#13;
&#13;
The work first traces how the construction of climate adaptation expertise is shaped by perceptual biases which originate from outside the Caribbean region, often in countries like the Netherlands and France. From this engagement with the construction of expertise, Chapter 1 traces how hurricanes have shaped how climate and weather events are understood and confronted by islanders and argues that future hurricane models articulate changes to everyday climate conditions that stand to challenge longstanding practices of resilience in the face of extreme climate events. Chapter 2 examines current climate adaptation strategies implemented in the Collectivity of Saint-Martin, and underscores the relationship between risk perception, policy formulation, and historical context by highlighting the need for locally adapted strategies. Chapter 3 examines how the Government of Sint Maarten attempts to address climate change and climate adaptation and considers avenues for community-centered risk assessment and adaptation planning. Chapter 4 engages the limitations of both strategies in Saint-Martin and Sint Maarten, and proposes an alternative vision for climate adaptation given the shared vulnerabilities that exist for both sides of Saint Martin.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable Homes for All: Designing a Clean Energy Incentive for Boston’s Section 8 HCV Landlords to Improve Tenant Quality of Life</title>
<link href="https://hdl.handle.net/1721.1/156100" rel="alternate"/>
<author>
<name>Houston-Read, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/156100</id>
<updated>2024-08-15T03:08:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sustainable Homes for All: Designing a Clean Energy Incentive for Boston’s Section 8 HCV Landlords to Improve Tenant Quality of Life
Houston-Read, Rebecca
There is an urgent need for decarbonization in the residential sector given housing's significant contributions to greenhouse gas emissions. Low-income housing is particularly energy inefficient, contributing to harmful environmental outcomes and health and financial challenges for tenants. The Boston Housing Authority (BHA) can play a central role in residential decarbonization for low-income residents because it owns and controls a substantial portion of the housing stock. While there are significant efforts underway to decarbonize Boston’s public housing stock, there are currently no initiatives aimed at decarbonization in the Section 8 Program. Thus, the BHA can broaden its influence beyond the public sector and incentivize residential decarbonization in the private sector through its relationships with over 15,000 landlords in the Section 8 HCV Program. This thesis develops the BHA Retrofit Rewards (BRR) Program: a program that uses a monthly ‘rent boost’ to financially incentivize Section 8 Housing Choice Voucher (HCV) landlords to implement clean energy upgrades in their units. The BRR Program was created through a two-step process. First, a comparative analysis of similar US programs identified the Atlanta Housing Authority's Energy Efficiency Rent Boost Program (EERB) as viable for replication in Boston. Second, a feasibility analysis was conducted to determine how the BHA’s adaptation of the EERB Program would be financed, administered, and redesigned to fit the Boston context. The results of this analysis outline a framework for a BRR Program financed by leveraging regulatory flexibility that enables higher payments to landlords within federal limits. This thesis contributes to ongoing equity-focused decarbonization initiatives at the BHA and offers a roadmap for public housing authorities and cities more broadly seeking to address the dual challenges of climate change and housing inequity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Databases for healing and justice: Co-design with a grassroots, Indigenous organization</title>
<link href="https://hdl.handle.net/1721.1/156099" rel="alternate"/>
<author>
<name>Shumway, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/156099</id>
<updated>2024-08-15T03:25:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Databases for healing and justice: Co-design with a grassroots, Indigenous organization
Shumway, Hannah
This inquiry presents a grounded case study of a partnership between the Data + Feminism Lab at MIT and Waking Women Healing Institute, a grassroots, Indigenous organization. The partners co-design a case documentation and story gathering database that enables healing and justice for Indigenous women and people. The project reveals: 1) the vital role of trust-building, openness, and constant iteration in co-design practice, 2) the importance of designing for security in aligning the database with a need for Indigenous Data Sovereignty, 3) the practical trade-offs that come with choosing to use and configure commercial off-the-shelf software as opposed to using free and open source software or building custom software, and 4) how other institutional actors, like urban planners, can learn from this collaboration by centering trust-building, by welcoming ongoing revision and feedback rather than just ‘going through the motions’ of community engagement, and by taking tangible steps to enable institutional accountability to grassroots groups. Throughout, this thesis underscores the ways that a collaborative decision making process between institutional and grassroots partners allows the team to prioritize and operationalize grassroots needs and desires in a way that enables a useful technology solution for healing, harm reduction, and justice.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)</title>
<link href="https://hdl.handle.net/1721.1/156098" rel="alternate"/>
<author>
<name>Comiter, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/156098</id>
<updated>2024-08-15T03:55:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)
Comiter, Charles
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns, which can be respectively captured by single-cell profiling methods, such as single-cell RNA-seq (scRNA-seq), and histology imaging data, such as Hematoxylin-and-Eosin (H&amp;E) stains. While single-cell profiles provide rich molecular information, they can be challenging to collect routinely and do not have spatial resolution. Conversely, histological H&amp;E assays have been a cornerstone of tissue pathology for decades, but do not directly report on molecular details, although the observed structure they capture arises from molecules and cells. Here, we develop SCHAF (Single-Cell omics from Histology Analysis Framework), a deep learning framework to generate a tissue sample’s spatially-resolved single-cell omics dataset from its H&amp;E histology image. We demonstrate SCHAF on healthy and diseased—primarily metastatic breast cancer—tissue, training with matched samples analyzed by spatial transcriptomics, sc/snRNA-seq and by H&amp;E staining. SCHAF generated appropriate single-cell profiles from histology images in test data, related them spatially, and compared well to ground-truth scRNA-seq, expert pathologist annotations, and direct MERFISH measurements. SCHAF opens the way to next-generation H&amp;E2.0 analyses and an integrated understanding of cell and tissue biology in health and disease.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Activation frame memory management for the Monsoon processor</title>
<link href="https://hdl.handle.net/1721.1/156060" rel="alternate"/>
<author>
<name>Chiou, Derek.</name>
</author>
<id>https://hdl.handle.net/1721.1/156060</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Activation frame memory management for the Monsoon processor
Chiou, Derek.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Includes bibliographical references (p. 87-89).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of the ability of RELAP5/MOD3 to model natural circulation of high pressure SF6 in the Westinghouse 1/7 scale PWR experimental facility</title>
<link href="https://hdl.handle.net/1721.1/156058" rel="alternate"/>
<author>
<name>Chmielewski, Stefan V.</name>
</author>
<id>https://hdl.handle.net/1721.1/156058</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Investigation of the ability of RELAP5/MOD3 to model natural circulation of high pressure SF6 in the Westinghouse 1/7 scale PWR experimental facility
Chmielewski, Stefan V.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1992; Includes bibliographical references (leaf 45).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and calibration of a simplified heat transmission apparatus for textiles.</title>
<link href="https://hdl.handle.net/1721.1/156057" rel="alternate"/>
<author>
<name>Hodara, Leon Ralph.</name>
</author>
<id>https://hdl.handle.net/1721.1/156057</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Design and calibration of a simplified heat transmission apparatus for textiles.
Hodara, Leon Ralph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1945; Bibliography: leaves 70-71.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strength analysis of a sandwich shell structure subjected to hydrostatic loading</title>
<link href="https://hdl.handle.net/1721.1/156056" rel="alternate"/>
<author>
<name>Cho, Wonjoon.</name>
</author>
<id>https://hdl.handle.net/1721.1/156056</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Strength analysis of a sandwich shell structure subjected to hydrostatic loading
Cho, Wonjoon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1992; Includes bibliographical references (leaves 68-69).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic aspects of materials substitution in horizontal automotive body panels : the issue of SMC surface finish</title>
<link href="https://hdl.handle.net/1721.1/156054" rel="alternate"/>
<author>
<name>Chen, Andrew Chinshun.</name>
</author>
<id>https://hdl.handle.net/1721.1/156054</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Economic aspects of materials substitution in horizontal automotive body panels : the issue of SMC surface finish
Chen, Andrew Chinshun.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1991; Includes bibliographical references (leaves 83-84).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Area minimization techniques of single-output functions</title>
<link href="https://hdl.handle.net/1721.1/156053" rel="alternate"/>
<author>
<name>Chen, Curtis S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156053</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Area minimization techniques of single-output functions
Chen, Curtis S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1991; Includes bibliographical references (leaves 41-42).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The impact of decision support systems on an organization and its planning</title>
<link href="https://hdl.handle.net/1721.1/156051" rel="alternate"/>
<author>
<name>Matteo, Thomas P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156051</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">The impact of decision support systems on an organization and its planning
Matteo, Thomas P.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1984; Bibliography: leaf 66.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Mental Health in the Corporate Sphere: Evaluating Trends, Tools, and Impacts on Organizational Dynamics</title>
<link href="https://hdl.handle.net/1721.1/156050" rel="alternate"/>
<author>
<name>Zou, Yangluyao (Maria)</name>
</author>
<id>https://hdl.handle.net/1721.1/156050</id>
<updated>2024-08-13T03:08:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Digital Mental Health in the Corporate Sphere: Evaluating Trends, Tools, and Impacts on Organizational Dynamics
Zou, Yangluyao (Maria)
The escalating prevalence of mental health issues in the corporate world, exacerbated by the COVID-19 pandemic, has necessitated a reevaluation of traditional wellness programs. This thesis critically examines the integration, effectiveness, and organizational impact of digital mental health tools within corporate environments, with a particular focus on improving employee wellbeing and optimizing organizational dynamics. Grounded in a mixed-methods approach, this research encompasses an extensive literature review and 31 semi-structured interviews with a diverse cohort of stakeholders, including human resources managers, corporate executives, mental health professionals, and employees across various sectors. This methodology facilitated a deep exploration of the perceptions, challenges, and outcomes associated with the adoption of digital tools such as ecological momentary assessments, wearable biosensors, and virtual reality for emotional regulation. Key findings reveal that digital interventions, when appropriately integrated, offer substantial benefits over traditional wellness programs by providing timely, personalized, and data-driven mental health support. These technologies enable continuous monitoring and management of employee stress levels and foster a proactive approach to mental health care. Notably, the success of these digital tools is intrinsically linked to organizational changes, such as work redesign strategies that include flexible working conditions, role restructuring, and enhanced workplace social support systems. Moreover, the research highlights several barriers to the effective implementation of digital mental health tools, including cultural resistance to mental health discussions in the workplace, privacy concerns, and the need for significant shifts in organizational policies and practices. 
Facilitators for successful integration include leadership endorsement, the normalization of mental health conversations, and the strategic alignment of digital tools with organizational health goals. The thesis proposes a comprehensive framework for the effective integration of digital mental health tools within the corporate sector. This framework suggests that true effectiveness is achieved not only through the deployment of advanced technologies but also through fundamental enhancements to the organizational environment that foster an inclusive, supportive, and flexible workplace. This study contributes to academic and practical understandings of how digital innovations can transform corporate mental health strategies. It underscores the need for a synergistic approach that merges technology with significant organizational reforms, advocating for a holistic model that not only addresses immediate mental health needs but also fosters long-term employee wellbeing and productivity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rebalancing the Major League Baseball Through Social Investment</title>
<link href="https://hdl.handle.net/1721.1/156049" rel="alternate"/>
<author>
<name>Perrin, Matthieu</name>
</author>
<id>https://hdl.handle.net/1721.1/156049</id>
<updated>2024-08-13T03:58:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Rebalancing the Major League Baseball Through Social Investment
Perrin, Matthieu
This thesis critically examines the enduring economic disparities within Major League Baseball (MLB) that threaten its competitive balance and the sustainability of its operations. Through an extensive data-driven analysis, this research identifies the foundational causes of the current imbalances that disproportionately favor financially robust teams and disadvantage smaller market franchises. The study leverages advanced financial metrics to cluster MLB teams and employs system dynamics modeling to simulate the impacts of existing and novel rebalancing mechanisms. The comprehensive analysis begins by detailing the financial landscape of MLB, highlighting the disparities in revenue streams between teams and their consequences on competitive balance. Using a clustering approach, teams are categorized based on key financial indicators, revealing distinct economic profiles that correlate strongly with on-field success and market presence. This categorization provides a clearer understanding of the disparities contributing to competitive imbalance. Subsequently, the research employs system dynamics to model the interactions between these financial variables and team performance over time. This model serves as a tool for testing various rebalancing strategies, including refined versions of revenue sharing and luxury taxes, which are currently employed by the league but fail to address the root causes of imbalance adequately. The simulations suggest that while these mechanisms have some impact, they are insufficiently robust in their current forms. To address these shortcomings, the thesis proposes innovative strategies to redistribute financial resources and talent across the league more effectively. These include adjustments to the formulas used for revenue sharing, introducing a more progressive luxury tax system, and implementing minimum spend requirements to prevent underinvestment in team competitiveness. 
Ultimately, this research argues for a holistic approach to reforming MLB’s economic structures, aiming to ensure a fairer competitive environment and enhancing the league’s viability for the future. By ensuring that all teams, regardless of their financial capabilities, have a genuine opportunity to compete for championships, these proposed measures aim to level the playing field and maintain the integrity and excitement of the league, fostering sustained fan engagement and growth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Healthcare with GenerativeAI: A Multifaceted Approach to Reliable Medical Information and Innovation</title>
<link href="https://hdl.handle.net/1721.1/156048" rel="alternate"/>
<author>
<name>Bennani, Taieb</name>
</author>
<id>https://hdl.handle.net/1721.1/156048</id>
<updated>2024-08-13T03:04:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Advancing Healthcare with GenerativeAI: A Multifaceted Approach to Reliable Medical Information and Innovation
Bennani, Taieb
The rapid advancements in Artificial Intelligence (AI) have transformed the healthcare industry, reshaping the way we approach patient care, medical research, and healthcare delivery. This thesis explores the journey of AI in healthcare, from its early beginnings to the current landscape of highly sophisticated conversational AI systems. We first delve into the myriad applications of GenAI and AI in healthcare, including medical imaging analysis, drug discovery, personalized medicine, conversational chatbots and beyond. Through a series of case studies and real-world examples, the thesis illustrates the successes, challenges, and lessons learned from the implementation of AI in various healthcare settings. As we navigate the uncharted territory of AI in healthcare, we critically examine the ethical implications that arise and the regulations needed. Looking towards the future, we explore the bright promise and cautionary tales that lie ahead. While the continued advancements in technology hold the potential to revolutionize disease prevention, personalize treatments, and unlock new frontiers in medical research, we must remain vigilant about the risks and unintended consequences that may arise. Central to this thesis is the introduction of a novel technology and product we developed to address the reliability of large language models (LLMs) in healthcare: Veracity-Health. By enhancing the trustworthiness and accuracy of these models, this innovative approach aims to facilitate the responsible and confident deployment of AI for the benefit of patients and physicians. This thesis aims to provide a rigorous analysis of the applications, innovations, and ethical considerations surrounding AI in healthcare. By contributing to the ongoing discourse, we hope to shape a future where the power of artificial intelligence is harnessed for the greater good, prioritizing reliability and integrity of GenAI implementation in healthcare.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Price Elasticity of Air Travel Demand Using Econometrics and Machine Learning to Scale Up Sustainable Aviation Fuels</title>
<link href="https://hdl.handle.net/1721.1/156047" rel="alternate"/>
<author>
<name>Membreno, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/156047</id>
<updated>2024-08-13T03:49:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Price Elasticity of Air Travel Demand Using Econometrics and&#13;
Machine Learning to Scale Up Sustainable Aviation Fuels
Membreno, Mark
This study seeks to estimate the price elasticity of aviation travel demand. These insights will be used to integrate Sustainable Aviation Fuels (SAF) strategically to aid in decarbonizing the aviation industry. Econometric and machine learning models were applied to historical air travel data to see how airfare prices influence travel demand, focusing on the economy and business passenger segments. Different route segmentations were explored to gather insights into how price affects travel demand in these route segments. The models consider predictors such as GDP, oil prices, population, time of year, and other socio-economic variables to predict passenger count. The econometric models were fitted to data prior to COVID-19 since passenger behavior prior to the disruption is a better indicator for travel today. Two sets of machine learning models were trained using both data before COVID-19 and the available time frame from 2016 to 2023. The predictive accuracy of both models performed well, with the average R2 for economy and business passengers being 0.95 and 0.87, respectively. The 2SLS’s instrumental variable (IV) of oil price has been shown to be weak. For most of the fitted models, the IV’s coefficients do not have a significant relationship with the endogenous variable of price in the first stage. The price elasticity values for this study show how passenger count is affected based on a 1% increase in airfare price. The econometric models can directly interpret price elasticity from their fitted coefficients based on the theory of log transforming the data. The business passenger segment’s price elasticity values ranged between 0 and −1%, indicating they are less price sensitive due to the necessity of their travel or their higher incomes. However, the price elasticity for economy passengers was centered around 0 and even positive in some route segments. 
This is counterintuitive, as economy passengers are typically more price sensitive than business passengers, corresponding to price elasticity values less than −1%. Future recommendations are offered to improve the models’ estimations of price elasticities: applying fixed effects to the data set and conducting more granular data exploration can yield more accurate predictors of the relationship between price and travel demand.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shutdown Dose Rate Modeling for Radiation Requirements Development and Design Trend Analysis in the ARC Fusion Device</title>
<link href="https://hdl.handle.net/1721.1/156046" rel="alternate"/>
<author>
<name>Murphy, Daniel T.</name>
</author>
<id>https://hdl.handle.net/1721.1/156046</id>
<updated>2024-08-13T03:19:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Shutdown Dose Rate Modeling for Radiation Requirements Development and Design Trend Analysis in the ARC Fusion Device
Murphy, Daniel T.
To achieve commercial viability, Commonwealth Fusion Systems’ ARC device must maximize its availability to produce power, thus demanding a rapid maintenance process to replace radiation-damaged components. Designing robotic systems to operate in this radiation environment requires understanding the expected radiation levels and how design decisions impact those levels. This thesis uses the Rigorous Two-Step (R2S) methodology to scope the radiation environment and provide data for those design trade-offs that must be considered in future ARC design iterations. The first trend is Vanadium’s lower dose rate than Eurofer as a Vacuum Vessel and Blanket Tank material in all configurations, making it the preferred candidate from a radiation perspective. Second, the model indicates that the choice in Blanket Tank material contributes non-trivially to the maintenance radiation environment. Third, the trends demonstrate minimal additional reduction in radiation levels from delaying the start of maintenance beyond 14 days after fusion ceases. The final trend shows that the reduction in the radiation field from the removal of the Blanket Tank with the Vacuum Vessel warrants future study. Finally, this thesis incorporates historical nuclear robotics experience to establish an iterative process by which to develop robotic radiation requirements and assess maintenance decision effects on ARC-level optimality.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization and Rule-Based Models for Hospital Inventory Management</title>
<link href="https://hdl.handle.net/1721.1/156045" rel="alternate"/>
<author>
<name>Harihara, Caeley Gaw</name>
</author>
<id>https://hdl.handle.net/1721.1/156045</id>
<updated>2024-08-13T03:09:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimization and Rule-Based Models for Hospital Inventory Management
Harihara, Caeley Gaw
This thesis shows how optimization, rule-based models, and operational analytics can be used to help manage hospital surgical inventory. The models were created for AITA™, a team under Johnson &amp; Johnson’s Ethicon subsidiary. The AITA™ Smart System is an intelligent inventory management solution that stores, organizes, and distributes products via Kiosk, Smart Shelf, and Mobile Hub devices. Every device requires a planogram, or a visual representation of which products to stock and the location of each product. This project focuses on creating models to automatically build and update these planograms. The models presented in this paper have already been adopted by the AITA™ team and have begun to show accuracy and efficiency gains when compared to the current manual process. Model-designed kiosks cover, on average, 7% more historical procedures than hand-made kiosks. Also, model-generated planograms are free from manual product selection and sorting errors. From an efficiency perspective, automatically creating and updating planograms will save the AITA™ team an average of 145 hours annually for every hospital served. These accuracy and efficiency gains will add value across the entire chain of care. The AITA™ team will have more time to grow their business and to develop new features. Meanwhile, providers will save time when managing and retrieving hospital inventory, which will free up more capacity for direct patient care.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation and Strategy in the Food Industry: Trends, Challenges, and Implications for New Entrants</title>
<link href="https://hdl.handle.net/1721.1/156044" rel="alternate"/>
<author>
<name>Shen, Ting</name>
</author>
<id>https://hdl.handle.net/1721.1/156044</id>
<updated>2024-08-13T03:35:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Innovation and Strategy in the Food Industry: Trends, Challenges, and Implications for New Entrants
Shen, Ting
The food industry plays a vital role in human nutrition and health but also presents significant challenges for entrepreneurs due to its highly competitive nature. This thesis aims to provide valuable insights and strategies for new entrants in the food sector by conducting extensive market research to uncover current challenges and future trends, analyzing case studies of both successful and failed food startups using the Entrepreneurial Strategy Framework (Compass &amp; Canvas), and interviewing founders of food companies to gain further insights. The framework is systematically applied to each case study, examining various aspects such as customer identification, technology adoption, competition analysis, organizational structure, value creation and capture hypotheses, and strategic choices, along with the linkages among these elements. By identifying common patterns and deriving insights from these analyses, the thesis offers guidance for food industry startups. The practical application of this research is demonstrated by developing a systematic entrepreneurial strategy using the “Test Two, Choose One” methodology for the author’s Smarnack project, sponsored by the MIT Sandbox Innovation Fund. This example showcases how the framework can be effectively used to guide and advance the progress of a food industry startup. In conclusion, this thesis serves as a comprehensive guide for entrepreneurs seeking to enter and succeed in the competitive food industry. By leveraging market research, case study analysis, and the practical application to the author’s own project, the thesis provides valuable insights and strategies to help new entrants navigate the dynamic and challenging landscape of the food industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Digital Customer Journeys: A Comparative Analysis of Knowledge Retrieval Approaches</title>
<link href="https://hdl.handle.net/1721.1/156043" rel="alternate"/>
<author>
<name>Nicola-Antoniu, Teodor</name>
</author>
<id>https://hdl.handle.net/1721.1/156043</id>
<updated>2024-08-13T03:29:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing Digital Customer Journeys: A Comparative Analysis of Knowledge Retrieval Approaches
Nicola-Antoniu, Teodor
Since its early days in 2003, Amazon Web Services (AWS) has evolved rapidly. From a single service created to support its parent company’s e-commerce business, AWS became a leading cloud services provider. As AWS’s product offerings and customer base expanded, its support knowledge base grew proportionally. Customers looking for self-service support need novel solutions to navigate such a vast repository of information. This study explores a set of knowledge retrieval architectures designed to surface the most relevant content to customers pursuing self-service solutions within the knowledge base of a large technology company. To recommend the best content that a customer should consume next in their journey, we leverage insights about the content already seen by the customer. Our research encompasses three methodologies: semantic search utilizing large language model embeddings, a frequency-based n-gram model, and a hybrid approach integrating semantic search within a deep neural network framework. Simulations on historical data display a significant percentage of scenarios where customers would be accurately directed to the desired solution. Our findings suggest that organizations can adopt these methodologies internally to enhance digital customer journeys and pave the way for further innovations in this domain. This study addresses the immediate challenges of navigating large-scale company knowledge bases and presents the potential for scalable self-service models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Whales &amp; Wind: A Case Study on Misinformation About Renewable Energy Development</title>
<link href="https://hdl.handle.net/1721.1/156042" rel="alternate"/>
<author>
<name>Wright, Sanne Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/156042</id>
<updated>2024-08-13T03:12:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Whales &amp; Wind: A Case Study on Misinformation About Renewable Energy Development
Wright, Sanne Eva
Mis- and disinformation are being increasingly harnessed to influence public opinion and advance agendas across the globe. They have also greatly impacted renewable energy planning and development. This thesis explores misinformation in the context of offshore wind projects. Despite the clear environmental benefits and necessity of transitioning to renewable energy sources like wind, misinformation poses significant barriers to their development. Building on established research about the spread of misinformation and strategies to counteract it, this study examines the approaches adopted by pro-wind stakeholders—government entities, nonprofits/NGOs, and offshore wind developers—to address misinformation. It specifically focuses on a recent case study involving alleged correlations between offshore wind activities and whale strandings in New Jersey. Through interviews with these stakeholders and an analysis of media representations, this thesis delineates how the misinformation spread—namely through unsound claims, emotional appeals, and the collective power of existing local and national interests against offshore wind. It also examines the effectiveness of different approaches to counter these misinformation campaigns, highlighting the challenges faced by pro-wind stakeholders in ensuring accurate public understanding of the impact of offshore wind development on marine life. The thesis concludes with recommendations for improving strategies to combat misinformation and fostering a more transparent and collaborative public discourse on renewable energy development projects. These recommendations aim to be applicable across various planning contexts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamics of Carbon Capture and Storage</title>
<link href="https://hdl.handle.net/1721.1/156041" rel="alternate"/>
<author>
<name>Wilson, Glenn Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/156041</id>
<updated>2024-08-13T03:01:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System Dynamics of Carbon Capture and Storage
Wilson, Glenn Andrew
A techno-economic analysis of carbon capture and storage (CCS) is presented using system dynamics. These models fully couple the CCS subsystems of carbon dioxide (CO2) capture, transport, and storage as an integrated system with feedback and control. Simulations are presented for CO2 captured from and stored proximal to a liquified natural gas (LNG) export facility along the Texas and Louisiana Gulf Coast. The simulations demonstrate that CCS is a dynamic system influenced by disequilibria, such as reservoir injectivity and varying pressures and flow rates, rather than a quasi-static mass balance operation. Key insights reveal that, within the maximum 45Q tax credit value of $85 per ton of CO2 and 12-year qualification period, an LNG-related CCS project at its final investment decision could be economically viable when levelized costs of carbon capture are below about $27 per ton of CO2. This breakeven cost of capture increases to about $36 per ton of CO2 if the 45Q tax credit qualification period is extended from 12 to 20 years. This analysis excludes the impact of any tax strategies utilizing 45Q tax credits. However, economic viability at the projects’ initial investment decision is highly dependent on inflation and the time required for permitting, construction, and post-injection monitoring, as well as the CCS operator’s expected returns. Specifically, modest cost escalation or delays in permitting or construction, common phenomena in major capital projects, significantly reduce the economic viability of CCS even with favorable subsidies under the Inflation Reduction Act.  This work has implications for policymakers and industry stakeholders: it challenges the assumption of CCS as a standalone solution for carbon abatement across all industry sectors and underscores the necessity for systems-level design and operations to maximize CCS efficiency and economics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Middle-Mile Inventory Management Policies Through Simulation and Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/156040" rel="alternate"/>
<author>
<name>Robins, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/156040</id>
<updated>2024-08-13T03:18:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing Middle-Mile Inventory Management Policies Through Simulation and Reinforcement Learning
Robins, Matthew
This thesis explores approaches for enhancing middle-mile inventory management within the global supply chain of a large footwear and apparel company, referred to as "Atlas". The first part discusses the design and implementation of a high-performance, heuristic system to determine stock transfer order (STO) decisions between Atlas’s distribution centers. This system employs a greedy algorithm to match supply to demand while respecting resource constraints. As Atlas’s newly procured third-party solution proved insufficient for testing due to slow performance, this work develops an emulator of the production system that achieves a 30x speedup and integrates with Atlas’s end-to-end supply chain simulation framework. This emulator enabled Atlas to efficiently test different configurations and decision making rules on historical and theoretical data, providing valuable insights prior to deploying the production system. The second part investigates the potential of reinforcement learning (RL) to augment or replace Atlas’s middle-mile decision making. A simplified supply chain environment is modeled as a Markov Decision Process, and an RL agent is trained and benchmarked against optimization-based and heuristic approaches. While the RL policy does not outperform these alternatives in the simplified environment, this work provides a foundation for Atlas to explore RL applications as they scale to more realistic supply chain environments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of thermoplastic composite manufacturing with digital process intelligence</title>
<link href="https://hdl.handle.net/1721.1/156039" rel="alternate"/>
<author>
<name>Haas, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/156039</id>
<updated>2024-08-13T03:50:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimization of thermoplastic composite manufacturing with digital process intelligence
Haas, Evan
Thermoplastic composites are gaining traction in industries such as aerospace and automotive due to their mechanical toughness, recyclability, and scalable manufacturing. However, the relative nascency of thermoplastic composites and their complex production means optimal manufacturing parameters are not well characterized. Processes are often developed through trial-and-error with limited understanding of the underlying drivers of material behavior, reducing yields and stretching development timelines. This work describes a digital intelligence infrastructure built to close this knowledge gap with high-resolution manufacturing data collection. This inexpensive system, composed of a series of Programmable Logic Controllers (PLCs), Raspberry Pi-based telemetry units, and a SQL database, captures high-resolution data across hundreds of shop-floor sensors. Since this effort began, scrap rates for the targeted product dropped 85%. We also describe experiments probing composites behavior during thermoforming; by monitoring parameters including pressure, temperature, cooling rate, and dimensions, the production process is characterized and controlled. A Design of Experiments (DOE) based on this platform identified temperature as the determining factor of outcome quality. Furthermore, controlling temperature by closing the loop with current sensors and infrared imaging effectively sustained high quality. Lastly, we describe the early stages of a digitally-informed New Product Development (NPD) process to reduce development times using data from this system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainability Analytics - Lowering Emissions With Operational Efficiency</title>
<link href="https://hdl.handle.net/1721.1/156038" rel="alternate"/>
<author>
<name>Bhakta, Shivam</name>
</author>
<id>https://hdl.handle.net/1721.1/156038</id>
<updated>2024-08-13T03:55:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sustainability Analytics - Lowering Emissions With Operational Efficiency
Bhakta, Shivam
The intersection of operational efficiency and sustainability is a pressing issue across industries seeking to modernize infrastructure while simultaneously addressing environmental concerns. A key challenge in the telecommunications sector is aging legacy equipment, like D4 Channel Banks, that draw significant electricity demand for a dwindling customer base. There is an opportunity to accelerate the decommissioning of these devices, and thereby lower electricity demand, by consolidating underutilized equipment without sacrificing performance or investing significant capital.&#13;
This study introduces a robust optimization framework using integer linear programming to navigate the complex trade-offs between maintaining operational integrity and decommissioning excess physical capacity at a representative Verizon central office. This technical approach was adopted because it accommodates the binary nature of decommissioning decisions under financial and practical constraints. The model, implemented with the Gurobi optimizer in a Python environment, relies on discrete optimization to evaluate whether to retire or maintain individual components of the network infrastructure.&#13;
The findings illustrate a compelling pathway to diminish the footprint of a representative central office's D4 Channel Banks by up to 40.8%, translating into annual operational cost savings ranging between $16,000 and $41,000. This reduction is primarily attributed to the decreased electricity demand and consequent lowering of CO2e emissions by 22,832 tons, underpinning the potential of such optimization strategies to harmonize the pursuit of operational excellence with environmental stewardship. Through the lens of decommissioning underutilized legacy equipment, this study underscores the strategic imperative of leveraging analytics to integrate sustainability into the operational fabric of the telecommunications industry.
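The binary retire-or-keep structure described above can be illustrated with a toy instance. The channel bank data are invented, and an exhaustive search stands in for the Gurobi integer program used in the study:

```python
from itertools import product

# Hypothetical D4 channel banks: (annual electricity cost in $, active lines served)
banks = [(4000, 10), (3500, 4), (5200, 25), (3900, 6), (4100, 8)]
required_lines = 30  # service capacity that must survive decommissioning

candidates = []
for keep in product([0, 1], repeat=len(banks)):  # 1 = keep, 0 = retire
    lines = sum(n for (_, n), k in zip(banks, keep) if k)
    if lines >= required_lines:  # feasible: enough lines stay in service
        cost = sum(c for (c, _), k in zip(banks, keep) if k)
        candidates.append((cost, keep))

best_cost, best_keep = min(candidates)  # cheapest feasible keep/retire plan
print(best_keep, best_cost)
```

A real instance replaces the enumeration with binary decision variables and a cost-minimizing objective in the solver, which is what makes the approach scale to a full central office.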
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Change and Municipal Bond Ratings</title>
<link href="https://hdl.handle.net/1721.1/156037" rel="alternate"/>
<author>
<name>Zhang, Cindy</name>
</author>
<id>https://hdl.handle.net/1721.1/156037</id>
<updated>2024-08-13T03:19:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Climate Change and Municipal Bond Ratings
Zhang, Cindy
This paper examines whether climate change risks are incorporated into municipal bond ratings. In particular, I investigate whether municipalities with exposure to sea-level rise have lower bond ratings. Using a sample of rated bond issuances from 2011 to 2020, I document a negative relationship between bond ratings and climate risk for municipalities with exposure to sea-level rise. I also test whether there is a difference in ratings between coastal municipalities and a control group of non-coastal municipalities and find mixed results. My preliminary findings suggest that this risk is at least partially incorporated into bond ratings; however, the magnitude of the effect is small.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organizational Culture, Class Values, and Subordination at Work</title>
<link href="https://hdl.handle.net/1721.1/156036" rel="alternate"/>
<author>
<name>Zhang, Victoria Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156036</id>
<updated>2024-08-13T03:07:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Organizational Culture, Class Values, and Subordination at Work
Zhang, Victoria Y.
Using data from online job reviews, I document a gap between blue- and white-collar workers’ evaluations of organizational culture and assess the role of two competing explanations. Worker values – shared beliefs about what workplace culture should be – are commonly thought to influence evaluations of culture, encouraging organizations to recruit workers based upon “cultural fit.” In contrast, workers may not differ on the values that they appreciate, and instead may evaluate companies based on experiences of subordination in the workplace. Contrary to class values theories – which assume that differences in workers’ values drive differences in cultural evaluations – I find that blue- and white-collar workers largely agree about the extent to which they find company culture satisfying and about which aspects of those cultures they find satisfying. Conversely, 40-60% of the class gap can be explained by experienced subordination, which is widely seen as a negative element of culture but unequally distributed by class. Workplaces with more blue-collar workers have more experiences of subordination, characterized by negative relationships of supervision, disrespect, and favoritism. It is the distribution of relationships of subordination, rather than differing class values, which explains class differences in evaluations of organizational culture.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How can Market Making Aid in Developing Financial Markets</title>
<link href="https://hdl.handle.net/1721.1/156035" rel="alternate"/>
<author>
<name>Imbern, Enrique Marcos Müller</name>
</author>
<id>https://hdl.handle.net/1721.1/156035</id>
<updated>2024-08-13T03:20:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How can Market Making Aid in Developing Financial Markets
Imbern, Enrique Marcos Müller
This thesis project examines the crucial role that market makers can play in enhancing liquidity and aiding the development of financial markets, with a particular focus on emerging economies. Market making involves providing buy and sell quotes for financial securities, thereby facilitating trading and contributing to market liquidity. The project delves into the evolution of financial markets, tracing their origins from basic commodity trading to the establishment of formal stock exchanges and the subsequent advancements driven by technological innovations. The analysis highlights the indispensable function of market makers in global financial markets, emphasizing their contributions to liquidity provision, price discovery, and market stability. The project underscores the specific challenges faced by emerging financial markets, such as limited liquidity, high volatility, and underdeveloped regulatory frameworks. It explores the case of Argentina as a representative example, discussing the impact of its 2001 debt default on the country's financial system and the subsequent need for market revitalization. The project presents a detailed analysis of liquidity-enhancing strategies employed by various emerging markets, drawing insights from case studies across different regions. It examines the initiatives undertaken by exchanges and regulators to broaden investor participation, promote financial literacy, reform corporate governance standards, and invest in advanced trading technologies. The transformative potential of market making in emerging markets is highlighted, focusing on its ability to enhance liquidity, reduce information asymmetry, provide market consensus, stabilize prices, facilitate economic growth, and bridge the gap with developed markets through technological adoption. 
The project delves into the critical importance of fostering a diverse investor base, both domestic and international, and the role of market makers in attracting and retaining investors. It discusses strategies for expanding product offerings, such as the introduction of exchange-traded funds (ETFs) and derivatives, as well as the creation of regional market linkages to increase liquidity and investment opportunities. Furthermore, the project emphasizes the need for an enabling market environment, encompassing factors such as advanced trading infrastructure, efficient pre- and post-trade processes, reliable market data, and appropriate regulatory frameworks. It explores the incentives and compensation models for market makers, examining the various schemes employed globally. In conclusion, the thesis project presents a comprehensive analysis of the challenges faced by emerging financial markets and the pivotal role that market makers can play in addressing these challenges. By enhancing liquidity, promoting market efficiency, and fostering investor confidence, market makers have the potential to catalyze the development and growth of emerging financial markets, ultimately contributing to economic prosperity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimizing Total Delivered Cost of Stamped Assemblies Through Sourcing Optimization</title>
<link href="https://hdl.handle.net/1721.1/156034" rel="alternate"/>
<author>
<name>Francis, Branden</name>
</author>
<id>https://hdl.handle.net/1721.1/156034</id>
<updated>2024-08-13T03:13:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Minimizing Total Delivered Cost of Stamped Assemblies Through Sourcing Optimization
Francis, Branden
This thesis presents an optimization model for identifying alternate and cost-competitive assembly sourcing strategies in the automotive industry, focusing on the "Make vs. Buy" decision-making process for a multinational automotive OEM. A “Make vs. Buy” process evaluates the strategic benefits and cost advantages derived from in-sourcing or out-sourcing a production process. Typically, one in-source scenario is evaluated, but capacity constraints may limit the opportunity to in-source. To overcome capacity constraints, the optimization model was developed to evaluate sourcing production processes from other plants within the OEM’s manufacturing network. The sourcing strategy evaluates the production scenarios that multi-process stamped assemblies undergo. Utilizing a mixed integer programming framework derived from the knapsack problem, the model evaluates all production scenarios to minimize total costs while adhering to capacity and capability constraints. Results demonstrate the model's effectiveness in identifying cost-saving and alternate sourcing strategies. Future work may explore extending the model to encompass broader geographical and operational complexities within the automotive sector.
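The knapsack-style structure can be sketched with a tiny instance. The costs, press hours, and capacities below are invented, and exhaustive enumeration stands in for the mixed integer program:

```python
from itertools import product

# Hypothetical total delivered cost of sourcing assembly a at plant p
cost = [[12, 15, 11],
        [20, 18, 24],
        [9, 10, 8]]
hours = [3, 5, 2]      # press hours each assembly consumes wherever it is made
capacity = [6, 6, 4]   # press hours available at each plant

feasible = []
for assign in product(range(3), repeat=3):  # plant chosen for each assembly
    used = [0, 0, 0]
    for a, p in enumerate(assign):
        used[p] += hours[a]
    if any(u > c for u, c in zip(used, capacity)):
        continue  # violates a plant's capacity constraint
    total = sum(cost[a][p] for a, p in enumerate(assign))
    feasible.append((total, assign))

best = min(feasible)  # (minimum total delivered cost, plant per assembly)
print(best)
```

Note that the cheapest plant for each assembly in isolation is infeasible here; the capacity coupling is exactly what makes the integer program necessary at production scale.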
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Intuitions in Complex Media Environments Shape Belief in Misinformation</title>
<link href="https://hdl.handle.net/1721.1/156033" rel="alternate"/>
<author>
<name>Orchinik, Reed</name>
</author>
<id>https://hdl.handle.net/1721.1/156033</id>
<updated>2024-08-13T03:43:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adaptive Intuitions in Complex Media Environments Shape Belief in Misinformation
Orchinik, Reed
Belief in misinformation has been linked in part to digital media environments promoting reliance on intuition -- which in turn has been shown to increase belief in falsehoods. Here, I propose that this apparently irrational behavior may actually result from ecologically rational adaptations to complex environments. In a large survey experiment, I test whether intuitive belief in misinformation may result from these rational adaptations by randomizing participants to be shown either a largely true or largely false news feed. I show that individuals make more frequent and quicker errors on the less common headline type, and less frequent errors on the more common headline type. After seeing many true headlines, a participant is more likely to misidentify a subsequent false headline as true, and vice versa after seeing many false headlines. This pattern is consistent with adaptation to the proportion of true and false content (the veracity base rate).  I use computational modeling to show that these differences are driven by intuitions, which correspond to Bayesian priors, about the veracity of the content -- intuitions which then spill over into new environments. The results, when paired with the observation that the news consumed by most Americans is overwhelmingly true, suggest that belief in misinformation and the intuitions that underlie it are not necessarily a failing of humans in digital environments but can be a byproduct of rational adaptations to them.
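The claimed mechanism -- intuitions acting as Bayesian priors tuned to the feed's veracity base rate -- can be sketched with invented numbers. This is an illustration of Bayes' rule, not the fitted computational model from the study:

```python
def posterior_true(prior_true, p_signal_true, p_signal_false):
    # Bayes' rule: probability a headline is true given a "looks true" signal
    num = prior_true * p_signal_true
    return num / (num + (1 - prior_true) * p_signal_false)

# Hypothetical rates at which the "looks true" signal fires for each type
signal_true, signal_false = 0.7, 0.4

# The same headline and signal, judged under priors learned from different feeds
in_mostly_true_feed = posterior_true(0.9, signal_true, signal_false)
in_mostly_false_feed = posterior_true(0.1, signal_true, signal_false)
print(round(in_mostly_true_feed, 2), round(in_mostly_false_feed, 2))
```

An identical headline is judged far more likely to be true after a mostly-true feed than after a mostly-false one, which is the spillover pattern the experiment documents.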
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harvesting Innovation: Exploring the Potential of an AI-Enabled Platform to Revolutionize Agricultural Labor Markets</title>
<link href="https://hdl.handle.net/1721.1/156032" rel="alternate"/>
<author>
<name>Haywood, Eric Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/156032</id>
<updated>2024-08-13T04:01:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Harvesting Innovation: Exploring the Potential of an AI-Enabled Platform to Revolutionize Agricultural Labor Markets
Haywood, Eric Robert
Labor shortages in agriculture are a global problem that causes revenue losses and resource waste. In the US, immigration is the main source of such laborers, but labor immigration has decreased by 75% in recent years, and losses from crops left unharvested due to labor shortages in agriculture were estimated at USD 3.1 billion per year in 2014.&#13;
&#13;
This thesis investigates the persistent labor shortages in the agricultural sectors of the southern United States and Mexico, exploring the feasibility of alleviating these shortages through a labor matching platform enhanced by artificial intelligence (AI). With a focus on the economic implications and structural deficiencies in agricultural labor markets, the study examines how a digital platform can bridge the gap between supply and demand for agricultural labor. &#13;
&#13;
The research employs a multi-dimensional approach that includes extensive literature review, in-depth interviews with stakeholders, system dynamics modeling, and action research involving the launch of a company and release of a Minimum Viable Product (MVP). The MVP, a foundational component of the proposed digital platform, has been tested in the market to gather quantitative data and insights using web advertising. &#13;
&#13;
The findings highlight the platform’s potential to streamline labor matching processes, improve transparency, and increase efficiency in the agricultural labor market. Additionally, the integration of AI provides intelligent matchmaking capabilities, predicting and aligning labor needs with available workers more effectively. &#13;
&#13;
Not only does this thesis provide a potential business model to tackle a critical economic problem, but it also contributes to the broader discourse on the role of technology in transforming traditional industries in advanced and emerging economies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Private to Public: Why Do Alternative Asset Managers Go Public?</title>
<link href="https://hdl.handle.net/1721.1/156031" rel="alternate"/>
<author>
<name>Chen, Qiwei</name>
</author>
<id>https://hdl.handle.net/1721.1/156031</id>
<updated>2024-08-13T03:56:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Private to Public: Why Do Alternative Asset Managers Go Public?
Chen, Qiwei
The global alternative asset management industry has witnessed a significant trend of firms going public since the IPO of Blackstone in 2007, and the trend has recently returned. Since 2022, firms like PAG, Tiantu Capital, and CVC Capital Partners have announced or completed plans to go public. This study summarizes the post-2000 waves of alternative asset managers going public, including their different pathways and post-IPO developments. Utilizing a multi-case analysis method with public information, this study examines the motives, benefits, and costs associated with alternative asset managers’ decisions to go public. Four primary motives and benefits of going public are identified: (1) enabling founders and strategic investors to liquidate their holdings, (2) incentivizing employees through equity-based compensation, (3) providing permanent capital to fund organic growth and external acquisitions, and (4) enhancing brand and reputation. Although this study acknowledges the costs and potential disadvantages associated with going public, they are deemed less significant compared to the benefits.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Complexity Drives Long Lead Times: A Queueing Theory Space Industry Application</title>
<link href="https://hdl.handle.net/1721.1/156030" rel="alternate"/>
<author>
<name>Murga, Blanca</name>
</author>
<id>https://hdl.handle.net/1721.1/156030</id>
<updated>2024-08-13T03:42:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Complexity Drives Long Lead Times: A Queueing Theory Space Industry Application
Murga, Blanca
The space industry is going through a major transformation. No longer the exclusive domain of government agencies and legacy aerospace giants, today’s space industry sees commercial companies competing in the market for customers and resources. Companies are forced to develop new technology and reduce costs in ways not seen before. A major challenge for the industry is to produce complex reusable systems at higher rates and lower costs.&#13;
Many aerospace companies produce their components in high-mix, low-volume operations known as job shops. Job shops are notorious for long lead times. The research for this thesis was conducted at a manufacturing site of Blue Origin, a privately held space company; the site operates as a job shop. The purpose was to identify the sources of the long lead times observed in the production of machined components.&#13;
The hypothesis the thesis investigates is that long lead times are the result of high variability caused by the complexity of producing space components. Using the method proposed by Factory Physics and queueing theory, this thesis demonstrates via case studies and a queueing simulation that high variability drives long wait times leading to the long lead times experienced in job shop operations.
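The Factory Physics relationship invoked here -- mean queue wait as the product of variability, utilization, and process time (the VUT, or Kingman, approximation) -- is easy to sketch. The utilization and squared coefficients of variation below are hypothetical:

```python
def vut_wait(ca2, ce2, util, te):
    # Kingman / Factory Physics approximation for mean queue wait at one station:
    # V (variability) * U (utilization) * T (effective process time)
    return ((ca2 + ce2) / 2) * (util / (1 - util)) * te

te = 2.0  # hours of effective process time per job
# Same 90% utilization, very different variability:
low_var = vut_wait(0.25, 0.25, 0.9, te)   # well-controlled arrivals and process
high_var = vut_wait(2.0, 2.0, 0.9, te)    # complex, highly variable job shop work

print(low_var, high_var)  # the high-variability queue waits about 8x longer
```

At fixed utilization, wait time scales linearly with the variability term, which is why complex space components queued in a job shop see such long lead times.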
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Transformation Trends within Automobile Supply Chains in the Post-Pandemic Era</title>
<link href="https://hdl.handle.net/1721.1/156029" rel="alternate"/>
<author>
<name>Dong, Wenzhe</name>
</author>
<id>https://hdl.handle.net/1721.1/156029</id>
<updated>2024-08-13T03:17:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Strategic Transformation Trends within Automobile Supply Chains in the Post-Pandemic Era
Dong, Wenzhe
This research delved into the transformation of supply chain strategies among automobile original equipment manufacturers (OEMs) in the post-pandemic era, motivated by the disruptions faced during the COVID-19 pandemic. This study employed qualitative research methods and conducted semi-structured interviews with employees from both supply chain and strategy functions in OEMs and suppliers. This study identified motivations for automobile supply chain strategy transformation, including the electrification trend, geopolitical events, and pandemic impacts, highlighting the need for agile and resilient supply chains. Driven by these factors, OEMs prioritized supply chain resilience through measures such as safety stock increases, dual-sourcing critical materials, and enhanced supplier collaboration. Organizational adaptations further bolstered these transformation initiatives, fostering flexibility and instilling a resilience-centric mindset. Furthermore, this study examined talent management issues and resistance to change as prominent obstacles in supply chain strategy transformation and offered targeted recommendations. The findings provided actionable insights into emerging post-pandemic supply chain transformation trends, serving as a valuable resource for automotive OEMs, suppliers, policymakers, and scholars in shaping future strategies for automobile supply chains.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Supply Chain Resiliency through Solar Panel Delivery Optimization</title>
<link href="https://hdl.handle.net/1721.1/156028" rel="alternate"/>
<author>
<name>Ceballos Mondragón, Regina</name>
</author>
<id>https://hdl.handle.net/1721.1/156028</id>
<updated>2024-08-13T03:02:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving Supply Chain Resiliency through Solar Panel Delivery Optimization
Ceballos Mondragón, Regina
Following NextEra Energy Resources’ accelerated growth and disruptions in the solar panel supply chain, their solar panel allocation process is becoming more complex. This process results in a schedule that determines when to deliver close to 150 million solar panels to more than fifty project sites under development and construction, while balancing requirements from multiple stakeholders. Due to project and contract interdependencies, modifying the equipment delivery schedule leads to costs that have consequential impacts. This thesis presents and implements a novel mixed integer programming model to determine the optimal schedule for delivering solar panels to project sites. The model abstracts impactful and quantifiable costs and minimizes them to propose a realistic solution. It produces a schedule in significantly less time than the current manual approach by finding a feasible solution in less than 15 minutes. The thesis introduces three scenarios of supply chain disruptions that mimic real-world events, demonstrating the model’s flexibility and helping NextEra Energy Resources adapt to future supply chain disruptions.
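A toy version of the scheduling problem illustrates the structure. The sites, months, penalty costs, and supply limits are all invented, with exhaustive search standing in for the mixed integer program:

```python
from itertools import product

# Hypothetical cost of delivering site s's panels in month m (0 = preferred slot)
penalty = [[0, 3, 9],
           [5, 0, 2],
           [8, 4, 0]]
supply = [1, 1, 1]  # panel supply allows one site's shipment per month

options = []
for months in product(range(3), repeat=3):  # delivery month chosen per site
    load = [months.count(m) for m in range(3)]
    if any(l > s for l, s in zip(load, supply)):
        continue  # exceeds that month's panel supply
    total = sum(penalty[s][m] for s, m in enumerate(months))
    options.append((total, months))

best = min(options)  # (minimum total penalty, month assigned to each site)
print(best)
```

Re-running the same model with a tightened supply vector mimics the disruption scenarios explored in the thesis, with the solver re-sequencing deliveries at minimum added cost.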
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Warehouse of The Future</title>
<link href="https://hdl.handle.net/1721.1/156027" rel="alternate"/>
<author>
<name>Severe, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/156027</id>
<updated>2024-08-13T03:24:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Warehouse of The Future
Severe, Stephanie
Competitive pressures require the Amgen Rhode Island (ARI) warehouse to be as efficient and low-cost as possible. An Outside Service Provider (OSP) has led operations within the warehouse since September 2021. The work is considered safe and compliant; however, there are many opportunities to mature processes and make the work more efficient. The goal of this project is to support Amgen as it creates the warehouse of the future. ARI is targeting volume-based growth as it expands, aiming to increase its production of drug substances by 130% by 2026. Ensuring that the warehouse can support the site's long-term growth is key.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How can brands try to influence social norms?</title>
<link href="https://hdl.handle.net/1721.1/156026" rel="alternate"/>
<author>
<name>Robinet-Duffo, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/156026</id>
<updated>2024-08-13T03:59:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How can brands try to influence social norms?
Robinet-Duffo, Richard
This thesis explores how brands can influence social norms. Through detailed case studies of the American Tobacco Company, Nike, Patagonia, Yves Saint Laurent, and Viagra, it examines the various tactics employed by these brands to challenge and reshape societal perceptions and behaviors related to gender roles, athleticism, sustainable consumption, fashion, and sexual health. Key strategies include leveraging the brand's legitimacy and authenticity within its industry and previously set social norms, creating aspirational figures and personas, employing multifaceted messaging across visual, auditory, and semantic channels, and targeting individual consumers to effect collective change. The thesis also explores the ethical implications of brand influence on social norms, acknowledging the potential for both positive social change and the promotion of harmful behaviors. Ultimately, this research argues that while brands can indeed ride the waves of existing social trends, they also possess the power to actively shape the direction and pace of norm evolution through their actions and messaging. As such, the thesis underscores the importance of critical reflection on the role of brands in society and the need for responsible wielding of their influence.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Establishing Inventory Maturity in a Make-To-Order Manufacturing Environment</title>
<link href="https://hdl.handle.net/1721.1/156025" rel="alternate"/>
<author>
<name>Vignaroli, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/156025</id>
<updated>2024-08-13T03:30:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Establishing Inventory Maturity in a Make-To-Order Manufacturing Environment
Vignaroli, Adam
Accelevation, LLC experienced rapid growth in its field of data center containment manufacturing. While sales and manufacturing grew quickly, the processes supporting them did not mature at the same rate. The researcher reviewed the existing operating processes related to inventory management to understand the initial state of operations. Following this review, the researcher diagnosed three major focus areas for maturing Accelevation's operations: analytical inventory management policies, comprehensive material demand forecasting, and the achievement of sustainably high inventory accuracy. Actions were taken in each of these areas, resulting in varying levels of success and improvement. The results in inventory policy and demand forecasting should prepare Accelevation for future growth with more robust processes. While the researcher's actions yielded modest improvements in inventory accuracy, Accelevation must make major strides in operational execution to alleviate these problems and fully mature its inventory management processes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Strategy for Reshoring</title>
<link href="https://hdl.handle.net/1721.1/156024" rel="alternate"/>
<author>
<name>Easley, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/156024</id>
<updated>2024-08-13T03:04:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Engineering Strategy for Reshoring
Easley, Jack
With the resiliency of extended global supply chains tested by COVID-19 and the increase in geopolitical risks driven by events such as armed conflicts in various parts of the globe, the idea of reshoring manufacturing capabilities has gained momentum both in the popular press and in studied business decisions. In theory, reshoring decisions may be based either on new grounds of competition, such as automation to achieve lower manufacturing costs, or on a desire to reduce risk exposure, such as moving away from sole-source and/or geographically distant suppliers. For domestic industrial businesses looking for new growth opportunities or to re-evaluate strategic sourcing decisions, there is interest in looking broadly at the prime candidates for reshoring, and such analysis is more useful when viewed in the context of established strategy frameworks.&#13;
&#13;
Recognizing the risks of over-reliance on offshore production, and seeing opportunities to support manufacturing with the latest breakthroughs in advanced technology such as in sustainability and mobility, Re:Build Manufacturing is a private company founded with the mission to help revitalize the American industrial base over the coming decades. Since 2020, the company has grown quickly through mergers and acquisitions, assembling a family of engineering and manufacturing businesses and mounting a platform of capabilities. As a part of the company's strategic goals, the topic of reshoring is front and center.&#13;
&#13;
This research, therefore, serves to inform strategic decision making for reshoring by taking a practical view on the subject through the lens of a company looking to grow domestic manufacturing -- Re:Build Manufacturing. The study performs detailed data analysis of reshoring opportunities and proposes a unified framework for assessing promising ones by comparing market intelligence against company strengths and capabilities. The approach builds an independent, data-driven model that addresses: out of everything that could be reshored or built, how should a company evaluate what to focus on from a technical and competitive landscape standpoint, at least to start with? Objective criteria for the characteristics of "good" reshoring candidates are established based on a literature review, paired with the application of competitive strategy frameworks. A simplified narrative would be that an ideal candidate to reshore has a big market or is considered advanced technology, exhibits a rewarding financial risk/return profile, and is exposed to an above-average level of supply chain risk from offshore operations. The competitive strengths and goals of the company serve to bound the scope of product selection. Considering macro indicators, the thesis of the study centers on the creation of a new decision-support model for reshoring assessment, proposing that publicly available data may be leveraged to drive reshoring attractiveness assessments quickly, at scale, and at product-type level detail. Broadly speaking, the study steps through macro-economic data search and analysis, reshoring ranking model construction, company capabilities inventory, and synthesis of reshoring opportunities.&#13;
&#13;
Analysis of the model's results suggests that, in aggregate and absent company-specific considerations, the model provides a reasonable approximation of general reshoring attractiveness across product types. Specifically, of the six product types selected for the verification study, 67% retained their relative ranking under additional scrutiny. It is worth noting that, given macro-data as inputs, the model does not capture nuanced competitive information. As such, detailed case studies should dictate specific reshoring considerations. Further, the true performance of the model will only become apparent over time, as the extended life cycle of manufacturing decisions takes years to materialize. Nevertheless, the results offer a holistic starting point to help guide manufacturing businesses in strategic positioning for product portfolio planning and in opportunity screening as they scale, informing and shaping strategies for long-term growth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Intersection of Management Practices and Bottom of the Pyramid Economics in Developing Countries</title>
<link href="https://hdl.handle.net/1721.1/156023" rel="alternate"/>
<author>
<name>Lavda, Aliki</name>
</author>
<id>https://hdl.handle.net/1721.1/156023</id>
<updated>2024-08-13T03:17:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring the Intersection of Management Practices and Bottom of the Pyramid Economics in Developing Countries
Lavda, Aliki
This thesis explores the intersection of management practices and Bottom of the Pyramid (BoP) economics within developing countries. Utilizing a comprehensive case study approach, it evaluates several multinational corporations' ventures in these regions, focusing on their strategies to leverage BoP markets for both business innovation and socio-economic upliftment. The research highlights how these companies integrate core business strategies with local economic conditions to create value for both the company and the local communities. Through detailed analysis of ventures by companies such as DuPont, SC Johnson, VisionSpring, and Procter &amp; Gamble, the study identifies critical factors that influence the success or failure of BoP initiatives, such as product-market fit, community integration, governance partnerships, and sustainable business model innovation. To critically assess these ventures, two theoretical frameworks are deployed: the Specified Analytical Criteria framework, which emerged from existing BoP venture literature, and the Sustainability-Oriented Innovation framework, adapted from sustainable business practices. Additionally, this study hypothesizes that ventures incorporating a dual-entity structure—combining for-profit and non-profit elements—may increase their chances of success by effectively balancing economic and social goals. This hypothesis is assessed through an in-depth case study of Sanergy Collaborative, a venture operating in Nairobi's informal settlements that transforms waste into valuable resources. By aligning empirical findings with theoretical insights, this work provides nuanced understandings of hybrid business models and offers refined models for future BoP ventures that aim to achieve scalable social impact alongside financial sustainability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging Digital Transformation Gaps in Southeast Asia</title>
<link href="https://hdl.handle.net/1721.1/156022" rel="alternate"/>
<author>
<name>Roman, Francisco Matthew Guevarra</name>
</author>
<id>https://hdl.handle.net/1721.1/156022</id>
<updated>2024-08-13T03:30:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Bridging Digital Transformation Gaps in Southeast Asia
Roman, Francisco Matthew Guevarra
This thesis explores the Digital Transformation Gaps in Southeast Asia, specifically focusing on the challenges faced by companies in the region when adopting Western-based business systems such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) systems. While these systems are technically equipped to meet the business requirements in Southeast Asia, there is a notable disconnect in their practical application due to distinct regional differences. These disparities include variations in business culture, workforce competency, geographical constraints, economies of scale, as well as distinct best practices and processes, and workforce power dynamics. The research methodologically examines case studies and interviews of Southeast Asian companies and their experiences with Western systems, highlighting the nuances that lead to inefficiencies and operational challenges. The study delves into the cultural and structural aspects of Southeast Asian business environments, contrasting them with Western models. It argues that the one-size-fits-all approach of Western business systems fails to accommodate these unique regional characteristics, leading to a digital transformation gap. This thesis proposes a framework for adapting Western business systems to better align with Southeast Asian contexts. It emphasizes the importance of localizing these systems to bridge the digital transformation gap, ensuring that they are not only technically sound but also culturally and operationally relevant. The conclusion offers strategic recommendations for companies and system developers, aimed at fostering more effective and sustainable digital transformations in Southeast Asia. This work contributes to the broader understanding of global digitalization, emphasizing the need for regional customization in global business solutions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unearthing Regulatory Influences on Climate Risk Adaptation: Exploring Asset Stranding and Regulatory Shortcomings in the US Housing Market</title>
<link href="https://hdl.handle.net/1721.1/156021" rel="alternate"/>
<author>
<name>Spiller, Matteo</name>
</author>
<id>https://hdl.handle.net/1721.1/156021</id>
<updated>2024-08-13T03:09:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Unearthing Regulatory Influences on Climate Risk Adaptation: Exploring Asset Stranding and Regulatory Shortcomings in the US Housing Market
Spiller, Matteo
Financial institutions have not yet exhaustively assessed the implications that ESG risk may pose to the financial industry, despite anthropogenic temperature change having been formally and scientifically described in the Paris Agreement in December 2015 (Schellnhuber et al., 2016). Both banks and insurance companies will be impacted, especially in their respective real estate portfolios, and for this reason, current risk management practices should evolve in order to exhaustively embed these scenarios in their stress testing methodologies (Jung et al., 2021). On top of this, several studies identified robust evidence of long-term growth losses for both poor and rich countries driven by natural disasters. These future climate change implications have been estimated at roughly $9.7 trillion (Hsiang &amp; Jina, 2014). &#13;
&#13;
Economic growth and climate change are two closely interconnected variables whose interplay will become more and more important in the future since, as an example, higher temperatures considerably reduce economic growth (Dell et al., 2012) and political stability (Hsiang et al., 2013).&#13;
&#13;
This thesis delves into the complex regulatory frameworks that will shape the financial sector, seeking to understand how politics shape and influence resilience and sustainability while exposing financial institutions to a new set of risks (Buhr, 2016), such as stranded assets and new crisis scenarios that could undermine the stability of the entire financial system. As of today, the lack of unified definitions and consensus has led financial institutions to implement ad hoc methodologies, creating discrepancies among them and undermining the comparability of the underlying results.&#13;
&#13;
Europe has made considerable improvements to its regulatory framework and is moving toward a homogeneous regulatory landscape (Baumuller &amp; Grbenic, 2021). Meanwhile, US political discourse has slowed the implementation of essential regulations that are required not only by financial institutions but also by multiple stakeholders, including investors, regulatory bodies, local entities, and supranational organizations (Dunlap &amp; McCright, 2010). Numerous non-binding guidelines have emerged, setting the stage for a more comprehensive and detailed Climate Act of similar magnitude to the Dodd-Frank Act. &#13;
&#13;
This study’s conclusion highlights the need for additional regulations and guidelines from supervisory authorities on top of recommending key approaches and areas of study not only for financial institutions but also for future research. As such, these will need to provide the foundation for the next regulatory developments considering both a systematic shift toward a low-carbon economy and a delayed abrupt transition to mitigate the potential implications that could undermine financial stability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Inventory Simulations for High Velocity Garment Retail Stores</title>
<link href="https://hdl.handle.net/1721.1/156020" rel="alternate"/>
<author>
<name>Qi, Davy</name>
</author>
<id>https://hdl.handle.net/1721.1/156020</id>
<updated>2024-08-13T03:25:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building Inventory Simulations for High Velocity Garment Retail Stores
Qi, Davy
To facilitate agility in store inventory planning for a brick-and-mortar retail business with high sales velocity and product portfolio complexity, this project created a Monte Carlo tool that simulates how upstream shipment decisions impact capacity utilization and product complexity. The simulation model was built in two steps: first, a Monte Carlo model for aggregated store inventory, followed by machine learning models that predict the display inventory and the number of store and display unique articles based on Monte Carlo outputs. In the process of building the Monte Carlo model, the project examined methods to model inventory trends, developed a quantification technique for daily demand stochasticity, and explored possibilities to control the simulation stochasticity. These methods and techniques, novel to retail inventory modeling, were able to model store inventory with little systematic bias and store daily mean absolute inventory deviations within 2-4%. Meanwhile, for the machine learning models, the project systematically examined the efficacy of linear regression, tree, and fully connected neural network models at making time series predictions using two time series as inputs. It also rigorously examined the limitations and advantages of various model architectures, including the selection of variables, treatment of multiple time series, order of predictions, and the scope of loss functions. The final machine learning model results showed some systematic biases, with daily mean absolute deviation ranging from 3-10% for display inventory and up to 10-20% for unique articles.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging digital tools and analytics for temperature management in cold chain systems for gene therapies</title>
<link href="https://hdl.handle.net/1721.1/156019" rel="alternate"/>
<author>
<name>Lee, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/156019</id>
<updated>2024-08-13T03:08:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Leveraging digital tools and analytics for temperature management in cold chain systems for gene therapies
Lee, Jessica
Emerging advanced therapies at Johnson &amp; Johnson Innovative Medicine, such as a new retina gene therapy, require maintaining ultra-low temperatures within the cold supply chain from the manufacturing plant and throughout distribution to the customer. In comparison to traditional cold chain medicines such as most vaccines, gene therapies are high-value, low-volume products, and assurance of product quality requires visibility into the full time-temperature history. This thesis describes the requirements for an end-to-end, digitally-enabled temperature management system for gene therapies. First, we establish a baseline understanding of the location, incidence, and severity of temperature excursions across the cold chain, based on current practices managing traditional drugs, through descriptive statistics on real-time temperature data, historical excursion records, and product complaints. While J&amp;J has digital temperature monitoring solutions in place today, tracing the temperature history of a product across multiple legs of the supply chain, as required for a gene therapy, has to be done through manual review of disparate temperature records. To fill this gap in the existing infrastructure, we define the requirements for integrating temperature data across 6 enterprise data systems, including sensor data, ERP systems for shipments and warehouse management, and serialization records. Lastly, we build a Monte Carlo simulation to inform performance requirements for the system by modeling the trade-offs in system reliability and the cost of product loss.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Analysis of Line Haul and Switcher Locomotive Propulsion by Diesel, Battery, and Hydrogen Fuel Cell Technologies</title>
<link href="https://hdl.handle.net/1721.1/156018" rel="alternate"/>
<author>
<name>Lerman, Benjamin D.</name>
</author>
<id>https://hdl.handle.net/1721.1/156018</id>
<updated>2024-08-13T03:29:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Techno-Economic Analysis of Line Haul and Switcher Locomotive Propulsion by Diesel, Battery, and Hydrogen Fuel Cell Technologies
Lerman, Benjamin D.
This thesis examines the critical challenges of reducing greenhouse gas (GHG) emissions within the freight rail industry of the United States transportation sector. The transportation sector, being a significant contributor to the nation’s GHG emissions, requires urgent attention to mitigate environmental and public health impacts. This thesis presents the emissions profile of the U.S. freight rail system and explores potential strategies for decarbonization. Previous research has established the freight rail system as a relatively more efficient mode of cargo transport in terms of emissions; however, to attain national goals set for GHG emissions, further reduction of its carbon footprint is required. Through a detailed analysis of the current propulsion technologies, ranging from conventional diesel-electric locomotives to emerging alternatives such as battery electric, hydrogen fuel cell, and electrified rail, the paper evaluates their potential to reduce emissions within the freight rail sector. The use of a Total Cost of Ownership (TCO) and Environmental Impact Analysis quantifies the financial and environmental implications of adopting these technologies. The findings reveal significant opportunities for reducing GHG emissions through the adoption of cleaner propulsion technologies. Challenges associated with their implementation include infrastructure requirements and technological readiness. A strategic roadmap for the decarbonization of freight rail is proposed, segmented into short-term (0-5 years), medium-term (5-15 years), and long-term (15+ years) objectives. Emphasis is placed on the importance of regulatory frameworks, technological advancements, and stakeholder collaboration in achieving a sustainable transition. The study aims to inform policymakers, industry stakeholders, and researchers about the pathways towards a sustainable and efficient freight rail system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Strategic Framework for Evaluating Next-Generation Technologies in Biocatalysis</title>
<link href="https://hdl.handle.net/1721.1/156017" rel="alternate"/>
<author>
<name>Creta, Alec</name>
</author>
<id>https://hdl.handle.net/1721.1/156017</id>
<updated>2024-08-13T03:47:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Strategic Framework for Evaluating Next-Generation Technologies in Biocatalysis
Creta, Alec
The emergence of a new wave of biocatalysis innovation is rapidly transforming the pharmaceutical industry. This next generation of techniques, characterized by metagenomics, artificial intelligence, and computational modeling, is reshaping approaches to process development for companies operating in this space. However, significant challenges exist in fully harnessing the potential of this new technology due to limitations in internal capabilities, including time constraints and knowledge gaps. To overcome these obstacles and unlock the true growth potential of biocatalysis, pharmaceutical companies must strategically leverage external supply organizations to tap into the next wave of biocatalysis innovation and bridge their existing capability gaps.&#13;
This thesis proposes a comprehensive framework for the site selection of a next-generation technology contract development and manufacturing organization (CDMO) in biocatalysis. This framework adopts a tiered approach, with a primary focus on the use of real options analysis to facilitate quantitative decision-making in emerging technology site selection. Following the framework's establishment, its application challenges the initial high-cost assumptions associated with emerging technology CDMOs, revealing a significant 20% reduction in expected costs. Overall, this de-risks the emerging technology investment and drives the implementation of novel and innovative processes in early-phase biocatalysis.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Mingling: Supporting Ad Hoc, Private Conversations at Virtual Conferences</title>
<link href="https://hdl.handle.net/1721.1/156016" rel="alternate"/>
<author>
<name>Song, Jaeyoon</name>
</author>
<id>https://hdl.handle.net/1721.1/156016</id>
<updated>2024-08-13T03:24:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Online Mingling: Supporting Ad Hoc, Private Conversations at Virtual Conferences
Song, Jaeyoon
Even though today’s videoconferencing systems are often very useful, these systems do not provide support for one of the most important aspects of in-person meetings: the ad hoc, private conversations that happen before, after, and during the breaks of scheduled events, the proverbial hallway conversations. Here we describe our design of a simple system, called Minglr, which supports this kind of interaction by facilitating the matching of conversational partners. We describe two studies of this system’s use at two virtual conferences with over 450 total participants. Our results provide evidence for the usefulness of this capability, showing that, for example, 81% of people who successfully used the system thought that future virtual conferences should include a tool with similar functionality. We believe that similar functionality is likely to be widely implemented in many videoconferencing systems and to increase the feasibility and desirability of many kinds of remote work and socializing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private Equity Secondaries: A Comparative Analysis on the US &#13;
and Chinese Markets</title>
<link href="https://hdl.handle.net/1721.1/156015" rel="alternate"/>
<author>
<name>Han, Weizong</name>
</author>
<id>https://hdl.handle.net/1721.1/156015</id>
<updated>2024-08-13T03:26:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Private Equity Secondaries: A Comparative Analysis on the US &#13;
and Chinese Markets
Han, Weizong
Since its inception in the early 2000s, China's private equity secondary market has evolved dramatically. Despite its fast growth, exit strategies in China’s private equity market lag, especially as tighter IPO regulations complicate exits, undermining liquidity and returns. In contrast, the U.S. benefits from a robust secondary market, underscoring its critical role in the maturity of the private equity industry.&#13;
This thesis employs a mixed-methods approach to explore the U.S. private equity secondary market's evolution and to assess the status and challenges of China's private equity secondary market. The U.S. market enjoys a well-established regulatory environment, professional intermediaries, and advanced trading platforms, contributing to its efficiency and liquidity. Conversely, China's market, despite its growth, grapples with regulatory insufficiencies, professional gaps, and opaque transactions. Enhancements to China's legal framework, professional services, and trading platform functionalities are proposed to foster market development and global integration, aiming to enrich both academic discourse and provide practical guidance for stakeholders.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Paths to Achieving Scope 1 Carbon Neutrality in Building Utilities</title>
<link href="https://hdl.handle.net/1721.1/156014" rel="alternate"/>
<author>
<name>Willette, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/156014</id>
<updated>2024-08-13T03:58:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Paths to Achieving Scope 1 Carbon Neutrality in Building Utilities
Willette, Daniel
Corporate entities are increasingly adopting sustainable practices and working towards climate commitments. This study seeks to provide a practical, actionable guide for corporations seeking to navigate the complexities of Scope 1 carbon abatement. Specifically, a framework is developed to determine paths to achieve Scope 1 carbon neutrality in building utilities for a large global biotechnology company. The framework combines data analysis, extensive stakeholder engagement, financial evaluation, and expert consultation with the application of optimization modeling. This multi-dimensional approach is designed to navigate the complex landscape of carbon abatement, identifying viable technologies and strategies that pave the way to achieving Scope 1 carbon neutrality while balancing operational efficiency, cost-effectiveness, and strategic priorities.&#13;
The research evaluates 11 categories of decarbonization solutions, encompassing energy efficiency measures, alternative and renewable energy sources, with an emphasis on their technical viability, implementation feasibility, and financial impacts. Through this assessment, the study zeroes in on 9 solutions deemed most appropriate for the biotechnology industry, incorporating them into an optimization model. This model serves as a strategic tool, guiding the selection of decarbonization projects and the appropriate volume of carbon offsets required to achieve carbon neutrality. The optimization model is a flexible platform for evaluating various scenarios and constraints, thereby facilitating informed decisions that align with a company’s environmental, financial, and strategic objectives.&#13;
The developed framework and insights can serve as a blueprint for other corporations grappling with similar challenges in reducing Scope 1 emissions from their building utilities. The research underscores the potential for significant environmental impact through the adoption of targeted decarbonization strategies, contributing to the broader goal of mitigating climate change.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Digitalization: 3D Deep Learning in Manufacturing Applications</title>
<link href="https://hdl.handle.net/1721.1/156013" rel="alternate"/>
<author>
<name>Kochert, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/156013</id>
<updated>2024-08-13T03:59:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Process Digitalization: 3D Deep Learning in Manufacturing Applications
Kochert, Ryan
The surge in artificial intelligence (AI) popularity and investment has significantly impacted various sectors, including automotive, aerospace, and defense. Smaller companies at the base of these supply chains often lack the resources and knowledge for AI implementation compared to larger original equipment manufacturers, creating a unique opportunity for these smaller companies to leverage AI for growth. However, many AI initiatives in these smaller firms stall at the prototyping phase. This research outlines, from planning to execution, steps and considerations for implementing an AI initiative at a small- to medium-sized manufacturing company. Given the importance of 3D data in the industry, the research also takes a deep dive into working with, analyzing, and integrating 3D data into an AI model using various techniques, from statistical analysis to 3D deep learning. Discussion of the different data representations, including point clouds, voxels, polygon meshes, depth maps, and boundary representations, and their trade-offs helps determine which representation is best for different use cases. Most of the techniques apply to various unstructured data types to enable multi-modal inputs to a descriptive, predictive, or prescriptive AI model. Additionally, beyond the technical requirements, an entire section is dedicated to the human element in this whole process, focusing on a company’s personnel and cultural aspects, which is often where initiatives succeed or fail.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Financial Inclusion in Sub-Saharan Africa: A Multidimensional Index</title>
<link href="https://hdl.handle.net/1721.1/156012" rel="alternate"/>
<author>
<name>Diallo, Aïda Sadio</name>
</author>
<id>https://hdl.handle.net/1721.1/156012</id>
<updated>2024-08-13T04:02:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Financial Inclusion in Sub-Saharan Africa: A Multidimensional Index
Diallo, Aïda Sadio
Financial inclusion has emerged as a crucial enabler for sustainable development, with significant implications for poverty reduction, economic growth, and gender equality. Despite the growing recognition of its importance, measuring financial inclusion remains a complex challenge, particularly in the context of Sub-Saharan Africa, where countries face unique challenges and opportunities. This thesis aims to contribute to the literature by developing a comprehensive, multidimensional financial inclusion index specifically tailored to the Sub-Saharan African context. Building upon previous methodologies, the index incorporates an expanded set of both demand-side and supply-side indicators across key dimensions of financial inclusion. The insights generated by this research have important policy implications, providing a valuable tool for policymakers to diagnose bottlenecks, prioritize reforms, and track progress over time. By contributing to the evidence base on financial inclusion measurement and its implications, this thesis aims to support the development of more efficient, equitable, and inclusive financial systems across Sub-Saharan Africa.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Can We Promote Impact/ESG Investing? - Clarifying the Skeptical Reasons and Benefits of Addressing Impact &amp; ESG Investing in the Age of Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/156010" rel="alternate"/>
<author>
<name>Tamura, Yosuke</name>
</author>
<id>https://hdl.handle.net/1721.1/156010</id>
<updated>2024-08-13T03:04:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Can We Promote Impact/ESG Investing? - Clarifying the Skeptical Reasons and Benefits of Addressing Impact &amp; ESG Investing in the Age of Artificial Intelligence
Tamura, Yosuke
Drawing upon professional experiences in Impact/ESG consulting and investment, this thesis explores the efficacy of Impact and ESG investments in enhancing corporate value. Chapter 1 introduces the complex landscape of these investments, outlining common misconceptions and the diverse definitions that prevail across different stakeholders. Chapter 2 delves into the metrics and standards used to assess these investments, highlighting the confusion caused by multiple rating systems and the impact on stakeholder decisions. Chapter 3 presents an event study focusing on the stock market reactions to ESG ratings changes, revealing that while negative ratings significantly influence market behavior, positive changes do not. This suggests that investors primarily use ESG ratings for negative screening. Chapter 4 extends the discussion to the role of artificial intelligence (AI) in impact investment, assessing both its potential and risks within the context of future societal impacts. Chapter 5 explores the practical applications of impact investments, particularly how they can address global health challenges through initiatives like the Triple I. The conclusion synthesizes these insights, arguing for a redefinition of ESG and impact investment frameworks that align with corporate strategies. It proposes that blending these investments with robust business models and transparent metrics can lead to sustainable corporate growth and greater stakeholder satisfaction. This thesis provides a roadmap for companies and investors aiming to genuinely enhance corporate value and societal welfare through impact and ESG investment practices.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resilient by Design: A Supply Chain Digitalization Journey</title>
<link href="https://hdl.handle.net/1721.1/156009" rel="alternate"/>
<author>
<name>Vela González, Carlos David</name>
</author>
<id>https://hdl.handle.net/1721.1/156009</id>
<updated>2024-08-13T03:34:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Resilient by Design: A Supply Chain Digitalization Journey
Vela González, Carlos David
In an era where supply chain disruptions have become increasingly relevant due to geopolitical and environmental factors, resilience has emerged as a critical focus for organizations worldwide. This is particularly true in the pharmaceutical sector, where ensuring an uninterrupted supply of medical products is not only a business necessity but also a moral imperative, given the direct impact on patients’ health and well-being.&#13;
&#13;
This thesis presents the development of a digital tool designed to enhance the resilience of AstraZeneca’s supply chain, employing a design thinking approach. The tool leverages simulation and business intelligence, providing a versatile platform for conducting stress tests and evaluating response mechanisms across a spectrum of scenarios. This capability is instrumental in refining business continuity plans and informing strategic decisions on disruption response and capacity investments.&#13;
&#13;
While the tool was initially conceived to address the specific needs of AstraZeneca, its architecture is inherently generic and modular. This deliberate design choice ensures that the tool can be seamlessly adapted and scaled for use across various industries, transcending the initial scope of application. Additionally, the tool lays a solid foundation for future developments in the realm of supply chain digital twins.&#13;
&#13;
The thesis also contributes a comprehensive framework for boosting supply chain resilience through the lens of digitalization. It offers a strategic blueprint that organizations can adopt to proactively navigate and mitigate the intricacies of global supply chain disruptions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oil &amp; Gas Regional Operations Electrification Estimations</title>
<link href="https://hdl.handle.net/1721.1/156008" rel="alternate"/>
<author>
<name>Cohen, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/156008</id>
<updated>2024-08-13T03:56:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Oil &amp; Gas Regional Operations Electrification Estimations
Cohen, Rebecca
Emissions from Petroleum and Natural Gas systems accounted for 11.7% of greenhouse gas emissions reported to the EPA in 2022 [23]. Oil &amp; Gas companies in the United States are exploring ways to reduce their greenhouse gas footprint. One avenue being explored for emissions reductions is operation electrification. NextEra Energy is a leading renewable energy developer in the United States with a goal of helping high-emitting industries with their decarbonization goals. This paper explores the potential for NextEra Energy Resources, a segment of NextEra Energy, to support Oil &amp; Gas companies in their emission reduction efforts. This potential partnership is explored through estimating a specific opportunity size for Oil &amp; Gas operations electrification and identifying how NextEra Energy Resources can support these goals.&#13;
To develop an estimate of potential electric need, Oil &amp; Gas combustion emissions were analyzed for specific industry segments in specific regions. This evaluation resulted in a megawatt opportunity for selected industry segments in selected regions. Within one selected region, company-specific opportunities were identified and wind mapping was conducted to determine the potential for wind development in the area. With the insights developed from this project, NextEra Energy Resources can create a targeted approach for supporting Oil &amp; Gas companies as they pursue their emissions reduction goals.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Comparison of Solar Racking Options to Decarbonize Florida Power &amp; Light’s System</title>
<link href="https://hdl.handle.net/1721.1/156007" rel="alternate"/>
<author>
<name>Aguiar, Marcelo</name>
</author>
<id>https://hdl.handle.net/1721.1/156007</id>
<updated>2024-08-13T03:32:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Economic Comparison of Solar Racking Options to Decarbonize Florida Power &amp; Light’s System
Aguiar, Marcelo
With the continued decline of the cost of solar photovoltaics, the importance of optimizing this resource in decarbonization efforts is increasing. In this thesis, we compare different solar racking options to minimize total system cost. We focus this analysis on the flat, tracking, and fixed racking options. We then estimate the Threshold Cost Ratio between each option and Tracking Solar, which dominates utility-scale solar projects in the U.S. For this analysis, we use the Florida Power &amp; Light (FPL) system as a case study, basing our study exclusively on publicly available information. Using the LCOE and a Capacity Expansion Model to compare the different racking options, we conclude that Flat Solar would be preferred to Tracking Solar if its cost were 72-77% of the cost of Tracking Solar or lower. For Fixed Solar, this ratio is between 79-84%. Utilities can then use these ratios by estimating the expected Cost Ratio and comparing it to the Threshold Cost Ratio. For example, if FPL estimated that Flat Solar cost 70% of the cost of Tracking Solar per WDC, this analysis indicates that it should mostly build Flat Solar, but if it cost more than 77%, Tracking Solar would be preferred. In addition to lowering costs, evaluating other racking options can significantly reduce the total land needed for decarbonizing FPL since Tracking Solar is the racking option that needs the most land per unit of energy produced.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-imagining Drug Discovery with Quantum Computing: A Framework and Critical Benchmark Analysis for achieving Quantum Economic Advantage</title>
<link href="https://hdl.handle.net/1721.1/156006" rel="alternate"/>
<author>
<name>Galatsanos-Dueck, Johannes</name>
</author>
<id>https://hdl.handle.net/1721.1/156006</id>
<updated>2024-08-13T03:57:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Re-imagining Drug Discovery with Quantum Computing: A Framework and Critical Benchmark Analysis for achieving Quantum Economic Advantage
Galatsanos-Dueck, Johannes
Quantum computing’s (QC) promise of solving computationally hard problems has captured public attention and imagination, leading to significant private and public capital investments in recent years. At the same time, we are at the cusp of a biomedical revolution powered by computer-aided drug discovery (CADD). Drug discovery companies are rapidly transitioning to the use of artificial intelligence to expedite and enhance research and development. However, many of the classical AI use cases scale exponentially fast and face computational power ceilings. QC can potentially accelerate these processes by several orders of magnitude in the future. As such, an open question for drug discovery companies is when and how to adopt QC. &#13;
This thesis summarizes quantum CADD methods and useful applications in drug discovery. The current state and trajectory of quantum computing is critically analyzed based on multiple benchmarks and manufacturer roadmaps. Furthermore, 11 industry decision-makers were interviewed to identify the current behaviors of end customers in investing in QC. To answer the question of correct timing and sizing of investments for a drug discovery company, the concept of net quantum economic advantage is introduced, considering all direct and indirect costs and benefits. A framework for drug discovery companies to monitor and invest in QC to reach a net quantum economic advantage is provided. &#13;
The most useful QC algorithms for CADD, Quantum Phase Estimation and Quantum Machine Learning, will provide practical value only beyond &gt;2000 logical qubits and circuit sizes of &gt;10¹¹ gates, a far cry from today’s performance of single-digit logical qubits. Based on manufacturer timelines, these benchmarks may be achieved in the mid-2030s. However, other use cases might become interesting in the coming years, and preparing a company to take advantage of QC has a long lead time. As such, drug discovery companies should move to an active quantum-monitoring phase soon.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonizing the Shipping industry through Innovative Technologies, Artificial Intelligence and New Regulations</title>
<link href="https://hdl.handle.net/1721.1/156005" rel="alternate"/>
<author>
<name>Sarantopoulos, Fotis</name>
</author>
<id>https://hdl.handle.net/1721.1/156005</id>
<updated>2024-08-13T03:02:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Decarbonizing the Shipping industry through Innovative Technologies, Artificial Intelligence and New Regulations
Sarantopoulos, Fotis
The shipping industry, responsible for approximately 3% of global CO2 emissions, plays a pivotal role in the global economy, handling over 90% of world trade. This thesis addresses the urgent need for decarbonization within the maritime sector by examining innovative technologies, regulatory frameworks, and the potential of artificial intelligence to enhance operational efficiencies. The research delves into various sustainable practices including the use of alternative fuels like ammonia, hydrogen, methanol, and biofuels, as well as advancements in onboard carbon capture and wind-assisted propulsion systems. Additionally, the study assesses the impact of AI in optimizing shipping routes, predictive maintenance, and energy management, which are pivotal in reducing emissions. By integrating technological innovation with stringent regulatory compliance, this thesis highlights the challenges and transformative potential of the maritime industry's journey towards sustainability. The findings suggest that while the path to decarbonization is fraught with complexity, strategic integration of technology and policy offers a viable route to reducing the maritime sector's environmental impact and leading global efforts in combating climate change.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scheduling in a High-Mix Low-Volume Job Shop</title>
<link href="https://hdl.handle.net/1721.1/156004" rel="alternate"/>
<author>
<name>Holmes, Nicholas J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156004</id>
<updated>2024-08-13T03:15:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Scheduling in a High-Mix Low-Volume Job Shop
Holmes, Nicholas J.
This research explores the intricate challenges and strategies involved in the scheduling operations of high-mix low-volume manufacturing environments. It discusses the complexities of managing diverse production requirements while optimizing resource utilization and minimizing lead times. Through a thorough analysis of scheduling methodologies and case studies, the research offers valuable insights into enhancing operational efficiency and meeting customer demands in a job shop manufacturing setting.&#13;
This project is still ongoing, as further research and implementation learnings have not been fully realized. However, the learnings and suggestions in this research can be used to achieve a more effective and efficient scheduling process in the job shop manufacturing setting.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inventory Optimization and Simulation Analysis for Supply Chain Disruption Events</title>
<link href="https://hdl.handle.net/1721.1/156003" rel="alternate"/>
<author>
<name>Kleinemolen, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/156003</id>
<updated>2024-08-13T03:45:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inventory Optimization and Simulation Analysis for Supply Chain Disruption Events
Kleinemolen, Ian
Increasing volatility in the global supply chain following the Covid-19 pandemic has led to a challenge in reliably managing inventory, especially for high-complexity medical devices. An optimization and simulation-based inventory management model was developed to augment the decision making of supply planners in these networks. The model supports supply planners in safety stock allocation decisions by quantifying inventory cost and stockout probability risk for products with multi-stage, converging supply networks. Components of the model include iterative multi-echelon inventory optimization, Monte Carlo simulation of a custom base-stock inventory model, and cycle service level modelling. An application of the model is explored in a case study of the J&amp;J Ethicon surgical stapler supply chain. In addition, operational considerations for implementing inventory models are discussed, including data architecture, standardization, and centralization for complex supply chains.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Roadmap for Last Mile Sustainability</title>
<link href="https://hdl.handle.net/1721.1/156002" rel="alternate"/>
<author>
<name>Vaidya, Sajiree Vivek</name>
</author>
<id>https://hdl.handle.net/1721.1/156002</id>
<updated>2024-08-13T04:01:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Data Roadmap for Last Mile Sustainability
Vaidya, Sajiree Vivek
The final leg of e-commerce deliveries, often referred to as the "last mile," carries a significant environmental impact. While carbon data analysis tools, such as carbon emission forecasting tools, offer meaningful insights into understanding and mitigating this impact, their effectiveness hinges on the quality, availability, and granularity of data. This research project proposes data recommendations for last-mile sustainability, acknowledging the nuances inherent in such initiatives. By integrating transportation-related metrics and operational data from delivery facilities, the project seeks to enhance the accuracy and availability of last mile carbon emission forecasts.&#13;
The research consists of three primary components: data source analysis, development of a carbon emission forecasting tool, and drafting last mile sustainability data recommendations. We developed tools for carbon data analysis to assess the impact of last mile activity variables and predict carbon emission using both process and business-level data. Through this approach, we aim to provide actionable insights to support sustainability efforts within the last mile delivery sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Prohibition to International Recognition: Key Factors Driving the Development of California as a Premier Wine Region</title>
<link href="https://hdl.handle.net/1721.1/156001" rel="alternate"/>
<author>
<name>Mathy, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/156001</id>
<updated>2024-08-13T03:42:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Prohibition to International Recognition: Key Factors Driving the Development of California as a Premier Wine Region
Mathy, Anna
This thesis explores the transformation of California's wine industry from its struggles following Prohibition to becoming a leading global wine producer by 1976. Focusing on the period between 1933 and 1976, the study examines the critical factors that contributed to the industry's revival and success. Key elements identified include the recreation of market demand, significant technical innovations, and marketing strategies that aligned with consumer preferences. By integrating case studies of influential stakeholders with business strategy literature, particularly on the dynamics of clusters and ecosystems, the analysis demonstrates how California's wine industry emerged as a cohesive and competitive cluster. The findings highlight the broader applicability of these strategies, suggesting how similar approaches can be employed in other regions aiming for transformative growth, while highlighting the limits of replicability. This research underscores the synergy between strategic marketing, technological advancement, and cluster development in revitalizing industries on a global scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>US Green Hydrogen Production: Strategic Approaches to Enhancing Economic Viability and Market Development</title>
<link href="https://hdl.handle.net/1721.1/156000" rel="alternate"/>
<author>
<name>Meehan, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/156000</id>
<updated>2024-08-13T03:59:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">US Green Hydrogen Production: Strategic Approaches to Enhancing Economic Viability and Market Development
Meehan, Brandon
As the global imperative for sustainable energy solutions intensifies, green hydrogen emerges as a potential player in a sustainable energy future. This thesis explores the viability and economic landscape of green hydrogen production within the United States. It places emphasis on the pivotal role of renewable energy credit (REC) matching criteria and strategic operational adjustments in enhancing its economic feasibility. Through a detailed examination of the effects of hourly and annual REC matching, this study illuminates the complex interplay between public policy, business strategies, and the inherent variability of renewable energy sources.&#13;
&#13;
Central to this investigation is the assessment of two primary levers which may change the underlying economics of green hydrogen: REC matching criteria, which dictate the temporal alignment between renewable energy generation and hydrogen production, and strategic electrolyzer curtailment, a novel operational strategy designed to optimize the sale of both hydrogen and electricity. The analysis utilizes robust datasets including 4 years of hourly wind and solar resource availability in the U.S. at a 3km resolution, 4 years of hourly nodal power prices, and infrastructural cost data.&#13;
&#13;
The findings reveal significant regional disparities in the cost-effectiveness of green hydrogen production. The middle regions of the U.S., particularly Texas, emerge as optimal locations. These disparities are further nuanced by the chosen REC matching criteria, where less stringent annual matching notably reduces regional cost disparities by accommodating the variability of solar energy production. Moreover, strategic electrolyzer curtailment emerges as a critical mechanism for cost reduction, offering substantial savings, especially in regions characterized by high electricity price volatility.&#13;
&#13;
This research contributes to the burgeoning field of green hydrogen studies by providing a comprehensive analytical framework that integrates technical, economic, and policy dimensions. It offers actionable insights for policymakers and industry stakeholders, suggesting pathways to enhance the competitiveness of green hydrogen. By meticulously balancing the imperative of sustainability with economic considerations, this thesis charts a course towards establishing green hydrogen as a significant contributor to the hydrogen market, poised to catalyze a profound shift in the U.S. decarbonization effort.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework for Enhancing Decision-Making Capabilities in the Decarbonization of the Airline Industry</title>
<link href="https://hdl.handle.net/1721.1/155999" rel="alternate"/>
<author>
<name>Tsay, Allison Chang</name>
</author>
<id>https://hdl.handle.net/1721.1/155999</id>
<updated>2024-08-13T03:35:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Framework for Enhancing Decision-Making Capabilities in the Decarbonization of the Airline Industry
Tsay, Allison Chang
In 2008, the aviation industry became the first to adopt sector-wide sustainability targets, including carbon-neutral growth by 2020, a 50 percent reduction in net CO2 emissions by 2050 (relative to 2005 levels), and an annual improvement in fuel efficiency of 1.5 percent from 2009 through 2020.[6] Over a decade later, the path to net zero emissions has remained elusive; as of 2023, the industry has not made sufficient progress towards these targets, let alone the new 2050 net-zero targets formulated in 2021. Tackling decarbonization in the aviation industry has proven to be challenging for various reasons: the industry faces obstacles such as long development cycles for commercial aircraft, the highly regulated nature of the sector, and uncertainties in sustainable technology advancements. From an airline’s perspective, planning for fleet sustainability is an extremely unstructured problem, demanding flexibility and adaptability. Adding to the complexity is the intense competition in the airline industry, a dynamic that seldom affords decision-makers the luxury of time. Effective planning for a sustainable future requires the interconnection and critical consideration of short-term, medium-term, and long-term goals. The question arises: How can we develop tools that offer adequate fidelity and granularity to enable airlines to plan for and execute on net zero goals? Sprague and Carlson (1982) define Decision Support Systems (DSS) broadly as interactive computer-based systems that help decision-makers use data and models to solve ill-structured, unstructured, or semi-structured problems. With the uncertainty of novel aircraft technologies, sustainable aviation fuel, and renewable energy sources, a DSS developed to support scenario analysis for airlines will prove valuable for airline executives and fleet planners, and will inform OEMs of short-term and long-term aircraft needs.
The Cascade Climate Impact Model is Boeing’s response to increasing industry demand for clarity on strategies to reduce aviation emissions. However, the underlying model focuses on macro-level analysis. In order to be considered a useful DSS for an airline stakeholder, Cascade will need to be further developed to provide granularity and fidelity sufficient for airline fleet planning and evolution decision-making. The project involves a thorough requirements development for a new version of Cascade tailored to support sustainable airline fleet planning. We delve into the specific needs and criteria that such a system must meet to effectively guide airlines in achieving their sustainability objectives. Then, a case study on a large capacity airline is conducted to evaluate the efficacy of the identified requirements. Furthermore, an analysis is undertaken to assess the current state of Cascade and the feasibility of implementing the requirements outlined for a sustainable airline fleet planning DSS. This evaluation aims to bridge the theoretical framework established through requirements analysis with the practical considerations of implementing such a system, with a specific focus on Boeing’s Cascade model. Through comparison of multiple fleet planning scenarios, the airline in question can remove up to 6 MtCO2 of future emissions by 2030; however, fleet evolution alone will not guarantee net zero emissions by 2050. Through analysis of current MOUs and SAF purchases, the airline is not on track to meet SAF uptake goals by 2030 and will need to reevaluate the current status of SAF purchase volumes with suppliers. Results from the case study indicate the capability of the newly-developed fleet planning workflows for Cascade to deliver actionable insights to airline decision-makers in their path towards decarbonization.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Democratizing Performance: Impact of The Data Revolution on Recreational Running</title>
<link href="https://hdl.handle.net/1721.1/155998" rel="alternate"/>
<author>
<name>Allouch, Maxime</name>
</author>
<id>https://hdl.handle.net/1721.1/155998</id>
<updated>2024-08-13T03:12:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Democratizing Performance: Impact of The Data Revolution on Recreational Running
Allouch, Maxime
In recent years, recreational running has experienced significant growth, with millions of individuals participating in the sport worldwide. This growth has highlighted the need for accessible and effective training tools and methodologies tailored specifically to recreational runners. While elite athletes benefit from high-end performance labs, personalized coaching, and advanced training camps, these resources are often too costly and specialized to be scalable for the average runner. This thesis investigates how recent innovations in wearable devices and data science can democratize access to such elite-level resources. Employing a critical analysis, this study examines the evolution, accuracy, and real-world application of such technologies through case studies and a comprehensive review of existing literature. Additionally, the thesis discusses future technological directions, exploring potential advancements and their implications for the recreational running community. We highlight the urgent need for rigorous and independent research to validate the efficacy of these innovations. It is crucial to quantify their impact on running performance and injury prevention, challenging the often overstated claims found in marketing materials. This research could enable runners to make more informed decisions about their training methods. By making high-quality training more accessible, we aim to improve both the performance and experience of runners at all levels.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Secrets of the Aluminati: Bottleneck Assessment within an Aluminum Rolling Mill</title>
<link href="https://hdl.handle.net/1721.1/155997" rel="alternate"/>
<author>
<name>Long, Evan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155997</id>
<updated>2024-08-13T04:01:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Secrets of the Aluminati: Bottleneck Assessment within an Aluminum Rolling Mill
Long, Evan C.
This work demonstrates how heuristic-based capacity estimation techniques were used to determine the utilization of the preheat process of Commonwealth Rolled Products (CRP), an aluminum rolling mill. CRP wished to evaluate its processes to determine whether its present capacity was compatible with its long-term strategic plan. The preheat process in particular presented a challenge because of its parallel work cells, interdependent finish times, and variable runtimes. This analysis was used to determine whether the present preheat plant could support the future state volume and product mix.&#13;
We will summarize the CRP process and the business circumstances before moving to the modeling approach that was used to solve this problem without relying on a high-fidelity simulation. Given the outputs of that model, we will conclude with next steps for CRP, including the operational levers used to ease the capacity situation following the capital decision.&#13;
Note that in order to protect company confidential information, sensitive values and information are masked in this document.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identification of The Steel Decarbonization Options for Different Regions</title>
<link href="https://hdl.handle.net/1721.1/155996" rel="alternate"/>
<author>
<name>Mai, Chao-Lun</name>
</author>
<id>https://hdl.handle.net/1721.1/155996</id>
<updated>2024-08-13T03:50:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Identification of The Steel Decarbonization Options for Different Regions
Mai, Chao-Lun
Iron and steel manufacturing stands as a leading contributor to global CO2 emissions and ranks as the second-largest energy consumer within heavy industries. Over the last decade, this industry alone has accounted for over 7% of global greenhouse gas emissions. Consequently, there is an urgent imperative to identify practical pathways for substantial decarbonization. This research endeavors to identify such pathways through comprehensive modeling. We evaluate the impact of technology replacement, fuel switching, and carbon capture and storage (CCS) on energy demand, costs, and emissions in crude steel production. The analysis is underpinned by two fundamental approaches: Techno-economic Analysis (TEA) and Life Cycle Analysis (LCA). Technology replacement explores alternatives such as the state-of-the-art blast furnace-basic oxygen furnace (BF-BOF-SOA) and direct reduced iron with electric arc furnace (DRI-EAF) to replace the current blast furnace-basic oxygen furnace (BF-BOF) based on iron ores, as well as the state-of-the-art electric arc furnace (EAF-SOA) to replace the current electric arc furnace (EAF) based on recycled steels; fuel switching involves renewable electricity, renewable natural gas, biochar, and hydrogen; CCS options focus on monoethanolamine (MEA) for BF-BOF based methods. Through this comprehensive analysis, the research aims to illuminate the most pragmatic and region-specific strategies for the deep decarbonization of the steel industry, making a critical contribution to addressing the urgent global need for sustainable steel production.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Product SKU Analysis, Rationalization, and Optimization</title>
<link href="https://hdl.handle.net/1721.1/155995" rel="alternate"/>
<author>
<name>Hatteberg, Heidi</name>
</author>
<id>https://hdl.handle.net/1721.1/155995</id>
<updated>2024-08-13T03:42:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Product SKU Analysis, Rationalization, and Optimization
Hatteberg, Heidi
Optimizing a portfolio product mix to balance customer demand and company strategy is an ongoing challenge for various industries, often resulting in product proliferation. LFM Capital, a private equity firm, acquired IronCraft, a tractor attachment company in Eastern Tennessee, in 2021. Since its founding in 2014, IronCraft has faced rapid growth and change, which has challenged it to meet high market demands, resulting in a very large product portfolio of roughly 1,000,000 variations in Stock Keeping Units (SKUs) of both manufactured and sourced products. Beyond the burden of managing such a large portfolio, the current mix also adds variability and complexity to its manufacturing operations. This thesis employs IronCraft as a practical example to perform a SKU rationalization project using a deterministic model for strategic decision making, which resulted in a “Standard Offerings List” of just 230 product offerings (130 manufactured by IronCraft and 100 sourced through its partner company, CID). When modeled alongside future orders for the 2024 pre-season, this list fulfilled 75% of those orders, showing that the pruned product mix can meet demand (assuming upselling of new products). These offerings were then a focus for production improvement and Lean/5S efforts to reduce safety hazards, reduce setup/changeover times by 80%, improve cycle times for assembly by 10-15%, establish a metric tracking system, and derive quality metrics from the existing system. All tools developed and implemented through this thesis were designed to drive productivity, growth, and integration within IronCraft.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data Driven Approach to Uncovering Energy Consumption&#13;
Reduction Opportunities Within Industrial Operations</title>
<link href="https://hdl.handle.net/1721.1/155994" rel="alternate"/>
<author>
<name>Correa Núñez, Juan Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/155994</id>
<updated>2024-08-13T03:11:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Data Driven Approach to Uncovering Energy Consumption&#13;
Reduction Opportunities Within Industrial Operations
Correa Núñez, Juan Fernando
Rising operating costs and environmental pressures are compelling industrial companies to reduce energy consumption without affecting output. Although various tools to identify energy reduction opportunities exist, they often fall short, being overly theoretical, too generic, or primarily focused on capital-intensive initiatives. Consequently, companies frequently end up relying on energy audits and benchmarks that yield minimal practical reductions. This thesis introduces a methodology designed to identify and implement operational changes that lead to energy reductions in industrial settings. By integrating data-driven analytics with continuous improvement principles, this methodology is able to uncover tangible operational improvements without substantial capital expenditure. Central to the proposed methodology is the identification of the core physical and operational principles of the system being analyzed to then develop a theoretical ideal operation against which to compare the current operation. This thesis also aims to describe the application of this framework at the pre-heating furnaces of Aluminum Duffel, an aluminum rolling mill in Duffel, Belgium, where it proved successful in reducing energy consumption by 23% within six months.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Wages and Employment: Do Minimum Wages Affect Management Practices?</title>
<link href="https://hdl.handle.net/1721.1/155993" rel="alternate"/>
<author>
<name>Tong, Di</name>
</author>
<id>https://hdl.handle.net/1721.1/155993</id>
<updated>2024-08-13T03:16:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Wages and Employment: Do Minimum Wages Affect Management Practices?
Tong, Di
Extensive research has examined the impact of minimum wages on employment. Yet less explored is whether and how these mandatory wage increases affect the broader spectrum of management practices and job quality. Compensating differentials theory posits that low-wage employers will diminish non-wage amenities to counteract the added labor costs. Conversely, the high-road strategy literature anticipates firms to enhance crucial aspects of job quality to optimize worker productivity. To assess these contrasting hypotheses, I used matched U.S. employee-employer job reviews and ratings to measure management quality in both general terms and across three specific dimensions: schedule quality, investment in employees (training, career opportunity, and relational investment), and employee input (autonomy and voice). I conducted difference-in-differences analyses based on multiple state-mandated minimum wage hikes spanning 2015-2021. The analyses show that as firms comply with mandates to raise wages, they, on average, neither compromise job quality in non-wage aspects nor undergo a thorough management system upgrade in the high-road direction. These findings align with organizational inertia theories and provide evidence of the barriers to high-road diffusion. Specifically, economic and policy pressure can be insufficient to cause strategic adoption of high-road employment systems. This study carries significant policy implications as the first comprehensive evaluation of minimum wage mandates on low-wage job quality. On one hand, it alleviates concerns regarding a negative spillover effect of mandatory wage increases on overall job quality. On the other hand, it highlights the limitations of minimum wage mandates in fostering systematic enhancements in working conditions beyond mere wage adjustments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Integrated Continuous Biomanufacturing Throughput: Resource Constraints and Process Scheduling</title>
<link href="https://hdl.handle.net/1721.1/155992" rel="alternate"/>
<author>
<name>Haddad, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/155992</id>
<updated>2024-08-13T03:27:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimizing Integrated Continuous Biomanufacturing Throughput: Resource Constraints and Process Scheduling
Haddad, Ana
The purpose of this thesis is to understand the available capacity of a biomanufacturing facility with respect to a product of interest. Further, the thesis aims to find opportunities to increase the system’s throughput and to determine whether current labor resources are sufficient to enable production at this level. To address these questions, the system will first be understood at a high level with preliminary analysis of its available capacity and of resource capacity utilization. A more robust available capacity analysis will then be performed by accounting for resource constraints. To this end, makespan minimization models will be created to evaluate optimal process scheduling given resource constraints. The analysis results showed that, at this time, labor is not a constraint on the system’s available capacity and that improvements to process scheduling can increase the system’s throughput at current labor levels. Finally, the thesis will evaluate new operating strategies, based on the newfound system understanding, which strive to decrease volatility of system throughput. The methods used in this thesis aim to cut through daily variability to understand fundamental production requirements. While this study was performed at a biomanufacturing facility, the methods are applicable to a wide range of industries.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Box Jumping: Portfolio Recompositions to Achieve Higher Morningstar Ratings</title>
<link href="https://hdl.handle.net/1721.1/155991" rel="alternate"/>
<author>
<name>Kim, David Sunghyo</name>
</author>
<id>https://hdl.handle.net/1721.1/155991</id>
<updated>2024-08-13T03:24:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Box Jumping: Portfolio Recompositions to Achieve Higher Morningstar Ratings
Kim, David Sunghyo
I show that actively managed mutual funds often pursue higher Morningstar ratings at the expense of lower investment returns. Funds achieve higher ratings by changing their holdings to induce Morningstar to reclassify them into size/value style boxes with worse average performance. This practice, which I label 'box jumping', sacrifices fund performance but nonetheless attracts large inflows of capital because funds are rated based on their relative performance within style boxes. Box jumping funds also take advantage by charging higher fees, which investors pay despite the ratings upgrade reversing on average within three years. These patterns emerge after 2002, when Morningstar ratings became based on relative performance within style boxes, and are predictably absent beforehand. I also show that pervasive box jumping creates negative spillover effects on other funds. Together, my findings highlight portfolio recomposition as a novel lever that funds employ to manipulate Morningstar ratings, and show that funds box jump despite sacrificing returns because investors fixate on ratings when allocating capital.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Evergrande: Rethinking Real Estate Market in China and the USA for Long Term Growth and Sustainability</title>
<link href="https://hdl.handle.net/1721.1/155990" rel="alternate"/>
<author>
<name>Yang, Jing</name>
</author>
<id>https://hdl.handle.net/1721.1/155990</id>
<updated>2024-08-13T03:36:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Evergrande: Rethinking Real Estate Market in China and the USA for Long Term Growth and Sustainability
Yang, Jing
The Evergrande crisis has exposed vulnerabilities in China's real estate market, prompting a critical reassessment of financing models, investment strategies, and regulatory frameworks. This thesis conducts a comparative analysis of the real estate markets in China and the United States, drawing insights from the Evergrande debacle and the US subprime mortgage crisis. By examining the evolution of financing mechanisms, investment approaches, land-use policies, and socio-economic factors influencing demand and supply, the research offers a holistic understanding of challenges and opportunities. Through synthesizing lessons from crises in both markets, the study provides recommendations for stakeholders, addressing financing strategies, regulatory reforms, risk management practices, and the pursuit of long-term sustainability. The findings contribute to the discourse on sustainable real estate development, offering valuable guidance for informed decision-making and resilient strategies amidst evolving market conditions and future challenges.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analytical Framework for Planogram Portfolio Optimization</title>
<link href="https://hdl.handle.net/1721.1/155989" rel="alternate"/>
<author>
<name>Habel, Mathew</name>
</author>
<id>https://hdl.handle.net/1721.1/155989</id>
<updated>2024-08-13T03:02:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Analytical Framework for Planogram Portfolio Optimization
Habel, Mathew
While considerable research has been conducted into the construction of optimal planograms (POGs) within a given store, the existing approaches have not been rigorously tested at scale across a network chain of retail stores. Moreover, current industry practices to design planograms are often ad hoc and anecdotal, and lead to proliferation of many different planograms that add complexity but not necessarily value. This thesis proposes an analytical framework for an end-to-end optimization of portfolios of planograms within Target, focusing on the optimal trade-off between planogram-store personalization and standardization. The study utilizes retail data from Target to develop mathematical frameworks partly based on machine learning and optimization techniques to address the challenge of managing planograms across Target’s national network chain of stores.&#13;
A four phase approach is proposed. Phase 1 develops a descriptive mathematical modeling framework that informs the identification of product categories for which reduction of POG design proliferation has promising potential. Phase 2 develops machine learning models to estimate revenue generation for any given POG design and Target store combination. Phase 3 estimates the performance of novel POG deployments in stores across Target’s network chain. Lastly, Phase 4 utilizes a knapsack formulation to find the optimal number of planograms within a category as measured by the expected revenue generation minus the planogram management costs.&#13;
This approach was assessed by applying it to the category of spice products on a 6-month time horizon. It yields an estimated 46% reduction in operational costs, which comes directly from reducing the total number of planogram designs active within Target’s store network. Moreover, the estimated revenue of the new planogram portfolio shows a 3% improvement over the existing portfolio, obtained by replacing the planogram designs in several stores with more favorable designs that are assessed to generate higher sales. These results suggest the optimization approach can yield meaningful operational and cost savings across categories in the organization and improve Target’s operating margin.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Stochastic Simulation for Inventory Management</title>
<link href="https://hdl.handle.net/1721.1/155988" rel="alternate"/>
<author>
<name>Olaleye, Ololade</name>
</author>
<id>https://hdl.handle.net/1721.1/155988</id>
<updated>2024-08-13T03:41:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Machine Learning and Stochastic Simulation for Inventory Management
Olaleye, Ololade
This thesis explores the use of advanced data-driven techniques for dimensioning safety stock and optimizing inventory in a supply chain. The thesis is based on data and insights for raw material inventory at Amgen, a biotech company. Resilient inventory management is important in the biopharma and biotech sector because the repercussions of a drug shortage are dire. However, the complexity of biomanufacturing processes creates significant variability and uncertainty around lead times and demand. Amgen currently holds high raw material inventories across thousands of materials to mitigate risks of stockouts that could delay production, but this policy has increased holding costs and tied up working capital. To address this challenge and find a sustainable method for managing raw materials in the company and, by extension, other stages of production, a novel methodology is developed. Machine learning models such as CatBoost, Extreme Gradient Boosting (XGBoost), and Random Forest are proposed to forecast lead times and demand. The models are trained on datasets of 10,000+ materials, incorporating unique patterns based on factors such as suppliers’ historical delivery performance, historical demand patterns, and material characteristics. A segmentation framework is also developed to properly allocate service levels based on risk tolerance for different categories of materials. Stochastic simulation then applies the learned predictive distributions to quantify optimal safety stock levels under uncertainty. This considers desired service levels, holding costs, risk tolerance, cost-risk tradeoffs, and potential disruptions in what-if scenario cases to support resilience. The methodology is validated on sample materials with both short and long lead times.
Results indicate potential inventory reductions of over 25% while still preventing stockouts, enabling multimillion-dollar savings in procurement and holding costs. A phased implementation plan is also proposed to ensure a smooth transition to this new data-driven approach in the organization, taking change management into consideration. This solution fuses predictive analytics with simulation and optimization to transform safety stock calculation from a cost burden into a competitive advantage. The dynamic data-driven framework significantly enhances supply chain resilience and efficiency in the vitally important biopharmaceutical industry, where patient outcomes are at stake. The methodologies developed could be applied across various production stages and tailored to other sectors.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Mold: Using Automated Design to Accelerate Composites Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155987" rel="alternate"/>
<author>
<name>Sweet, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/155987</id>
<updated>2024-08-13T03:31:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Breaking the Mold: Using Automated Design to Accelerate Composites Manufacturing
Sweet, Mark
Re:Build Manufacturing’s member company, Composite Resources (CR), is a thermoset composites manufacturer primarily serving aerospace and defense customers. Composites manufacturing is an industry primarily differentiated on quality and lead times. Customers are innovative and value rapid prototyping capabilities. CR is challenged by the difficulty of finding experienced labor and a high-mix, low-volume workflow with frequent, non-recurring, low-value-added engineering work. In pursuing aggressive growth goals, CR must decouple revenue growth from headcount. &#13;
&#13;
A significant component of lead time is the engineering design and toolpath generation for the molds used to manufacture the composite parts. This research seeks to automate mold design and toolpath generation, allowing CR to eliminate the labor bottleneck and establish short lead times as a competitive advantage. &#13;
&#13;
This research studied existing manual mold design and toolpath generation processes to distill the key engineering decisions. A tiered system was developed to characterize parts suitable for automation. Algorithms were developed that automated mold design and toolpath generation for 12% of CR’s historical parts. Automation is projected to decrease engineering mold design times by 87% and overall lead times by 33% for in-scope parts.&#13;
&#13;
Several areas for algorithm improvement are explored to increase the impact of design automation and further reduce lead times. Use cases for design automation are more broadly considered, and the implications for small manufacturers are explored.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Environmental Impact Assessment of 3D Printed&#13;
Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/155986" rel="alternate"/>
<author>
<name>Gerzeghier, Abraham</name>
</author>
<id>https://hdl.handle.net/1721.1/155986</id>
<updated>2024-08-13T03:41:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Environmental Impact Assessment of 3D Printed&#13;
Medical Devices
Gerzeghier, Abraham
As the push to incorporate sustainable practices has found widespread adoption in the corporate world, much of the responsibility has fallen to those industries and companies that manufacture physical products. Stryker has set corporate-wide carbon-neutral and 100% renewable energy goals, requiring an adjustment of current processes and the incorporation of sustainability practices into processes still in development. To that end, this thesis assesses the environmental impact of additive technologies at one of Stryker’s manufacturing facilities in the form of a numerical metric, giving leadership a new way to incorporate environmental consequences into its decision-making. By mapping out the additive manufacturing processes at the company’s primary facility and incorporating a tool to model these processes, two main metrics were produced for three additive technologies versus traditional milling. First, the carbon footprint per part due to the raw material, production processes, and consumed inputs was quantified. Second, the energy consumed by each manufacturing platform, from raw material extraction to finishing, was measured. In addition, a separate tool was developed to streamline the use of the model and increase adoption by the additive team. A case study was conducted using this tool on one of the company’s products, and the results were compared to an external consultancy’s analysis. The discrepancies between the two analyses point to future work in further customizing the tool’s parameters to mirror the specific conditions of the medical device facility.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling System Efficiency in Mixed-Model Assembly Lines</title>
<link href="https://hdl.handle.net/1721.1/155985" rel="alternate"/>
<author>
<name>Hoffman, Cameron</name>
</author>
<id>https://hdl.handle.net/1721.1/155985</id>
<updated>2024-08-13T03:48:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling System Efficiency in Mixed-Model Assembly Lines
Hoffman, Cameron
This thesis details the development of a system efficiency model at the Nissan Smyrna Vehicle Assembly Plant. System efficiency at Nissan is one measure of performance used to allocate new business to plants, and in pursuit of this new business, leaders at the Smyrna Plant maintain a continuous improvement culture where teams are regularly engaged in plant production improvement efforts.&#13;
&#13;
Production improvements at the Smyrna Plant typically focus on fault reduction and line balancing. These efforts leverage either vehicle or process data, but none incorporate both, as no combined data system exists. One can overcome this disconnect by generating an integrated model that links the production sequence with assembly jobs using vehicle model and feature relationships. What results is a repository of work content on produced vehicles containing real and ideal production times, which can be used to measure system efficiency. Creation of such a system greatly enhances existing capabilities to identify bottlenecks in the plant, to improve system health, and to optimize the production sequence.&#13;
&#13;
The completed research demonstrates the modeling capability to integrate product and process data and the use cases of such an integration in enhancing production improvements. The research also demonstrates how internal innovation can happen through the novel use of existing resources to unlock new capabilities. The recommendations focus on implementing the integrated system into stakeholder workflows, creating new data architectures to simplify data management and model development, and re-thinking plant performance models to incorporate current production data.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The study of ESG strategies  on the development and financial performance  of traditional energy enterprises using System Dynamics - A case study on one oil and gas company</title>
<link href="https://hdl.handle.net/1721.1/155984" rel="alternate"/>
<author>
<name>Wang, Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/155984</id>
<updated>2024-08-13T03:27:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The study of ESG strategies  on the development and financial performance  of traditional energy enterprises using System Dynamics - A case study on one oil and gas company
Wang, Wei
This thesis investigates the impact of Environmental, Social, and Governance (ESG) strategies on the development and financial performance of traditional energy companies, specifically focusing on the oil and gas industry. This study constructs a comprehensive simulation framework based on System Dynamics modeling to analyze the dynamics of ESG strategies and company operations. The research applies this model to a case study of BP PLC, a major player in the oil and gas sector, to evaluate the effectiveness of ESG strategies at a realistic scale.&#13;
The model demonstrates that strategic implementation of ESG initiatives leads to improved environmental performance, operational efficiency, and financial performance. The findings suggest that companies and policymakers should take firm and prompt action, investing in production efficiency and renewable energy to mitigate future regulatory risk and market uncertainty. This thesis contributes to the perspective of achieving sustainability goals while maintaining profitability. It also provides insights into the challenges and opportunities faced by these companies as they navigate the transition toward a more sustainable landscape.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering of Similar Incident Tickets Using Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/155983" rel="alternate"/>
<author>
<name>Chen, Jackie</name>
</author>
<id>https://hdl.handle.net/1721.1/155983</id>
<updated>2024-08-13T03:25:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Clustering of Similar Incident Tickets Using Natural Language Processing
Chen, Jackie
As businesses increasingly rely on digital tools for operational efficiency and value creation, Software Asset Management (SAM) becomes an important business practice. This thesis explores the use of natural language processing (NLP) and clustering algorithms to identify recurring issues affecting software applications, with the objectives of assessing the technical health of applications and identifying opportunities to address software issues that repeatedly plague users. Using a dataset of incident tickets from a business unit of a pharmaceutical company, various machine learning models were designed and tested to identify recurring issues affecting the business’ applications. Through a dashboard that visualizes the outputs of the models, the business is provided with insights into recurring issues affecting its digital tools. As validated through user feedback and visual inspection, the model outputs indicate promising results in the clustering of incident tickets, offering valuable insights that help users understand and address recurrent software problems. However, it is important to acknowledge the inherent challenges of unsupervised machine learning. While the results can help enhance business operations, caution is advised regarding the implications for users and the business when models produce unexpected results. This project is another example of the balance between leveraging machine learning for problem-solving and understanding the limitations of the models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preemptive variation reduction in biologic drug substance manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155982" rel="alternate"/>
<author>
<name>Güereca Valdivia, Ismael</name>
</author>
<id>https://hdl.handle.net/1721.1/155982</id>
<updated>2024-08-13T03:35:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Preemptive variation reduction in biologic drug substance manufacturing
Güereca Valdivia, Ismael
Biomanufacturing processes are characterized by high natural complexity and variability, which present significant challenges to achieving consistent product quality and operational efficiency. This thesis proposes that integrating digital twins and soft sensors into these processes can substantially improve decision-making workflows. By simulating the biomanufacturing process and enabling real-time monitoring and estimation of critical parameters, organizations can reduce variation in their manufacturing processes by identifying, predicting, and mitigating emerging issues before they become disruptive. To validate this hypothesis, a digital twin for a generic Integrated Continuous Bioreactor (ICB) operation was developed based on first principles. Additionally, soft sensors were used for the real-time estimation of biomass concentration (a critical parameter in mammalian cell culture). Combining mechanistic and data-driven modeling approaches and leveraging historical production data from Sanofi’s operations, both approaches were built and tested, demonstrating their effectiveness in real-world scenarios. The results show the potential of these technologies to improve process monitoring and control. On the one hand, the digital twin of the ICB operation allowed for the simulation of various scenarios, which presents the opportunity to adjust parameters to ensure adequate operating conditions. On the other hand, soft sensors utilizing multiple linear regression and Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors (SARIMAX) models accomplished precise real-time estimations of biomass concentration. Both results support the optimization of large-scale biomanufacturing processes by highlighting the potential of digital twins and soft sensors in reducing variation and driving continuous improvement.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining the Impact of Large Financial Deals: Toward a Holistic Evaluation of Economic and Societal Consequences</title>
<link href="https://hdl.handle.net/1721.1/155981" rel="alternate"/>
<author>
<name>Dupont, Apolline</name>
</author>
<id>https://hdl.handle.net/1721.1/155981</id>
<updated>2024-08-13T03:07:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reimagining the Impact of Large Financial Deals: Toward a Holistic Evaluation of Economic and Societal Consequences
Dupont, Apolline
This thesis reimagines the assessment of large financial deals, such as mergers and acquisitions (M&amp;A), by proposing a holistic evaluation framework that considers economic, societal, and environmental consequences. Traditionally, these deals have been assessed primarily based on financial metrics, overlooking their broader impact on stakeholders and sustainability.&#13;
&#13;
Through a mixed-method approach combining literature review and qualitative interviews with professionals, this research develops a theoretical framework integrating multiple dimensions into the analysis of M&amp;A deals. The framework is applied to a case study of the contentious merger between French utility giants Veolia and Suez, highlighting the complexities and trade-offs involved in evaluating deals in the water and waste management sector.&#13;
&#13;
The findings underscore the importance of comprehensive impact assessments, robust stakeholder engagement, and long-term value creation strategies. The Veolia Suez case reveals the need for effective risk management and the potential for synergies and unintended consequences in large financial deals.&#13;
Ultimately, this thesis argues that a holistic approach to impact assessment enables informed decision-making, promoting sustainable growth and safeguarding societal and environmental interests. The proposed framework offers a roadmap for enhancing practices and fostering a more responsible approach to financial transactions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating New Business Opportunities for Interregional Transmission</title>
<link href="https://hdl.handle.net/1721.1/155980" rel="alternate"/>
<author>
<name>Okoye, Don</name>
</author>
<id>https://hdl.handle.net/1721.1/155980</id>
<updated>2024-08-13T03:55:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluating New Business Opportunities for Interregional Transmission
Okoye, Don
The merchant transmission investment model is conducive to addressing interregional and long-range transmission needs as it provides a pathway to circumvent localized regional and state transmission planning processes and focus directly on interregional development. Furthermore, merchant transmission investments in accordance with comprehensive (multi-value) benefits planning provide a favorable benefit-to-cost ratio for transmission customers and support positive returns for investors. However, evaluating the comprehensive benefits of proposed transmission projects is computationally expensive and infeasible to execute for early-stage, exploratory analysis of multiple projects. Therefore, this thesis focuses on the development and use of a computationally reduced transmission business evaluation tool that heuristically evaluates critical components of comprehensive benefits and assesses merchant-based cost recovery viability of five interregional and long-range transmission projects on a forward-looking basis.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Site Selection, Vendor Evaluation, and Deployment of Nuclear Microreactors in Remote Mining Operations</title>
<link href="https://hdl.handle.net/1721.1/155979" rel="alternate"/>
<author>
<name>Chew Ming Chang, Matthew Dominic</name>
</author>
<id>https://hdl.handle.net/1721.1/155979</id>
<updated>2024-08-13T03:17:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Site Selection, Vendor Evaluation, and Deployment of Nuclear Microreactors in Remote Mining Operations
Chew Ming Chang, Matthew Dominic
This thesis presents a comprehensive framework for site selection and vendor evaluation for deploying nuclear microreactors in remote mining areas to facilitate decarbonization efforts. The methodology involves utilizing data gathered through various internal sources and employing software tools such as HOMER Pro Grid Optimization software and Python for analysis. The framework aims to optimize site configurations based on economic, carbon-emission, and capacity considerations by simulating various energy generation and storage components. The study also incorporates data from publicly available sources on micromodular reactor (MMR) companies to create MMR models for optimization calculations. Through a detailed analysis of simulated data and questionnaire scenarios, the framework evaluates factors such as power requirements, high-temperature processes, charging stations, baseload size, peak electricity demand, peaking factor, proximity to town, and rail infrastructure. The proposed framework offers a systematic approach to identifying suitable pilot sites for MMRs in remote mining locations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative AI in Higher Education Academic Assignments: Policy Implications from a Systematic Review of Student and Teacher Perceptions</title>
<link href="https://hdl.handle.net/1721.1/155977" rel="alternate"/>
<author>
<name>Li, Zixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/155977</id>
<updated>2024-08-13T03:34:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Generative AI in Higher Education Academic Assignments: Policy Implications from a Systematic Review of Student and Teacher Perceptions
Li, Zixuan
This study systematically investigates students’ and teachers’ perceptions of using Generative AI in higher education assignments. Through a comprehensive systematic review of 37 papers, the study identifies common perspectives, differences, major ethical concerns, and the need for policy development and regulation. The systematic review reveals the potential benefits of AI tools, including improved efficiency and personalized learning experiences. However, it also highlights significant challenges and ethical concerns, such as the risk of academic dishonesty, over-reliance on technology, and the need for transparency in data processing and privacy. &#13;
&#13;
Additionally, a policy review is conducted to assess the extent to which policies at international, national, and institutional levels address the major ethical concerns identified in the systematic review. The study finds notable gaps between the significant ethical concerns perceived by students and teachers and the existing rules and guidance available. The UNESCO guidance provides valuable recommendations, but national and institutional policies need further development to effectively address the unique challenges posed by AI in educational settings. &#13;
&#13;
The study underscores the importance of collaboration, capacity building, and ongoing evaluation in navigating the challenges of integrating Generative AI in higher education. Policymakers and educational institutions should prioritize providing training and support for educators, fostering a culture of academic integrity, and promoting the development of AI literacy skills. Future research should address the limitations identified in this review, such as conducting studies with larger, more diverse samples and employing longitudinal designs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hong Kong's Transformative Journey Under 'One Country, Two Systems': Processes, Trends and Reflections at the Midpoint</title>
<link href="https://hdl.handle.net/1721.1/155976" rel="alternate"/>
<author>
<name>Zhu, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/155976</id>
<updated>2024-08-13T03:09:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hong Kong's Transformative Journey Under 'One Country, Two Systems': Processes, Trends and Reflections at the Midpoint
Zhu, Rui
Since the 1997 handover, Hong Kong has reached the midpoint of its unprecedented constitutional experiment under the 'One Country, Two Systems' principle. In the twenty-six years since Hong Kong's return to China, the region has achieved remarkable success within this unique political framework. From the 1960s through today, Hong Kong has transformed into one of the wealthiest, most economically developed regions with the highest living standards worldwide. As Asia's financial center and a global hub for business, shipping, and trade, Hong Kong has made significant contributions to Asia's development and progress. However, despite its rise as a global financial powerhouse, Hong Kong faces numerous challenges. These include limited industrial diversification, a lack of technological innovation, the gradual erosion of civil liberties, and diminishing geopolitical neutrality amidst the escalating U.S.-China rivalry. The complex interplay of these factors poses significant risks to Hong Kong's long-term prosperity and stability.&#13;
&#13;
This thesis chronicles Hong Kong's transformative journey since 1997, examining key development trends and its current predicaments. It aims to capture the insights and lessons learned, providing a basis for thoughtful consideration of how to enhance Hong Kong's unique strengths and characteristics, address its vulnerabilities, and thereby launch a new phase in its developmental journey.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Metal Additive Manufacturing from R&amp;D to Production</title>
<link href="https://hdl.handle.net/1721.1/155975" rel="alternate"/>
<author>
<name>Weißbach, Reimar</name>
</author>
<id>https://hdl.handle.net/1721.1/155975</id>
<updated>2024-08-13T03:28:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Scaling Metal Additive Manufacturing from R&amp;D to Production
Weißbach, Reimar
Metal additive manufacturing (AM) has been successfully commercialized, yet widespread adoption has not been achieved so far. This is partly because companies struggle to operate AM factories profitably and efficiently at industrial scale.&#13;
This thesis proposes a data strategy to address this challenge and support the rapid growth and successful operation of an additive manufacturing factory – from R&amp;D to production. The central idea is to connect relevant data to the central unit of a build. A build is proposed as one unit of manufacturing in AM. Connecting commercial data, information about geometry, processing, materials, post-processing, and testing to a build makes it possible to gain a system-level understanding while still being able to dive into details where needed.&#13;
After implementation, the framework can be used to (i) qualify processes and certify materials, (ii) improve quoting quality and efficiency, (iii) support engineering and R&amp;D, (iv) derive critical operations KPIs such as revenue per build, builds per week, and days per build, which can be used for budgeting and capacity planning as well as business control, (v) make strategic decisions on capital expenses and headcount planning, and (vi) ensure traceability of materials and parts. Together, these applications support decision makers as well as commercial and technical staff in their work, both strategically and during day-to-day operations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating the Integration of Low-Volume, High-Mix Production Organizations</title>
<link href="https://hdl.handle.net/1721.1/155974" rel="alternate"/>
<author>
<name>Chacko, Priya</name>
</author>
<id>https://hdl.handle.net/1721.1/155974</id>
<updated>2024-08-13T03:50:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accelerating the Integration of Low-Volume, High-Mix Production Organizations
Chacko, Priya
In private equity, the buy-and-build strategy may be used to perform horizontal acquisitions of targets that operate in the same industry and interact with similar customers and suppliers. This strategy increases the buyer’s market share in the field, diversifies its customer base, provides opportunities for the realization of synergies, and may even add new capabilities to its offerings. The consolidated platform company can then achieve a value that is significantly higher compared to that of the individual portfolio companies alone. This increased value from the combination of portfolio companies, however, is dependent on their successful integration into the platform company.&#13;
&#13;
This research investigates the unique challenges of aligning and integrating two independent production organizations that operate in the low-volume, high-mix (LVHM) metal fabrication sector. The research strategy used in this thesis begins with defining objectives and establishing the initial states of the portfolio companies. Then, a gap analysis and strategic benchmarking are performed to identify integration opportunities. Finally, proposals to accelerate integration in operations are provided: the first proposes increasing automation in production data management, and the second proposes a method to allocate indirect costs and better understand total costs during billing in the quote creation process.&#13;
&#13;
Though time and resource constraints prevented the proposed recommendations from being implemented during this research period, these recommendations have the potential for substantial positive impact on both platform and portfolio company operations. While the proposals are tailored to the organizations studied in this research, the broader concepts on which they are based suggest wider applicability to similar LVHM production environments. This thesis offers a framework for organizations to assess their initial and goal states, define objectives, and develop strategies to accelerate integration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance of the Private Equity industry during depressed Macroeconomic conditions</title>
<link href="https://hdl.handle.net/1721.1/155973" rel="alternate"/>
<author>
<name>Ginolhac, Gaspard</name>
</author>
<id>https://hdl.handle.net/1721.1/155973</id>
<updated>2024-08-13T03:11:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Performance of the Private Equity industry during depressed Macroeconomic conditions
Ginolhac, Gaspard
This thesis aims to understand the performance of private equity funds during economic crises and unfavorable macroeconomic conditions. To introduce the topic, I first review how private equity funds create value and how we can assess the performance of this industry. Then, I focus my analysis on the behavior of the industry during past economic crises to draw similarities with the current situation. Finally, using a large sample of private equity funds, I conduct my own assessment of the industry's performance and examine what makes PE funds successful.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semi-Automatic Nesting and Lean Problem Solving in a High-Mix, Low-Volume Production Environment</title>
<link href="https://hdl.handle.net/1721.1/155972" rel="alternate"/>
<author>
<name>Davis, G. Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/155972</id>
<updated>2024-08-13T03:44:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Semi-Automatic Nesting and Lean Problem Solving in a High-Mix, Low-Volume Production Environment
Davis, G. Alexander
Nesting in manufacturing involves arranging parts to be cut in the most efficient way possible to minimize the material left over after cutting. While many commercial software solutions have optimization algorithms that can do this efficiently, complex manufacturing processes in high-mix, low-volume (HMLV) environments make the software difficult and time-consuming to implement. This paper describes a solution built for an HMLV company to automate significant portions of the nesting process while maintaining enough human input to deal with complexity, reducing the time to nest jobs by 83% and the time to re-nest jobs after a production schedule change by 95%. We focused on using lean principles as a time-saving strategy rather than a direct cost-cutting strategy in order to improve quality of life for operators while improving customer service. Initial iterations of the solution focused on complete automation of the nesting process with one click by the operator, but variability and complexity in the manufacturing system required a more semi-automatic solution that allowed for operator input in a much easier and faster way than the initial state. This solution is an example of using the A3 lean problem-solving process to align stakeholders and rapidly experiment and iterate on a solution until it achieves the desired performance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Greenhouse Gas Optimization Across a Multi-Echelon Manufacturing and Distribution Network</title>
<link href="https://hdl.handle.net/1721.1/155971" rel="alternate"/>
<author>
<name>Rosenzweig, Theo</name>
</author>
<id>https://hdl.handle.net/1721.1/155971</id>
<updated>2024-08-13T03:46:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Greenhouse Gas Optimization Across a Multi-Echelon Manufacturing and Distribution Network
Rosenzweig, Theo
Emissions from the industrial sector are a major contributor to climate change around the world. Many of these industrial emissions are attributable to the supply chain and will need to be drastically reduced to meet the emission goals set forth by the United Nations Paris Agreement. Possibilities such as renewable energy technologies for manufacturing and sustainable vehicles for transportation already exist and can help reduce emissions across the supply chain, but few solutions have been evaluated that reorganize supply chains as a whole to minimize carbon footprint. This thesis focuses on adapting sourcing strategies in a multi-echelon supply chain network to minimize greenhouse gas emissions. A multi-objective mixed-integer linear program that balances emission reduction against other objectives, such as sourcing cost, lead time, and supply risk, is developed to test the feasibility of the strategy in a business context. Opportunities for improving the model and possibilities for implementation in other organizations are evaluated.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extracting Coronary Lesion Information from Angiogram Reports for Patient Screening Applications</title>
<link href="https://hdl.handle.net/1721.1/155970" rel="alternate"/>
<author>
<name>Gaffney, Leah Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/155970</id>
<updated>2024-08-13T03:16:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extracting Coronary Lesion Information from Angiogram Reports for Patient Screening Applications
Gaffney, Leah Paige
Dramatic improvements in the management of heart disease over the past 60 years (&gt;70% reduction in mortality) may be plateauing, and there are challenges ahead for achieving cardiovascular health objectives, with heart disease still the leading cause of death in the US. One group of heart disease patients, those with coronary artery disease (CAD), is now presenting with increased clinical complexity and higher risk profiles due to increases in lifespan and comorbid disease states. Percutaneous coronary interventions (PCI) are the best treatment options for a subset of CAD patients, but patients are increasingly deemed ineligible due to their higher risk of procedural complications. A new option, protected PCI, makes PCI safer for those patients.&#13;
&#13;
Abiomed developed and manufactures the Impella pump, a temporary support option for the heart that provides the “protection” in protected PCI. Our work aims to ensure that the protected PCI option is available to these patients. This work supports the development of patient screener tools that identify patients with high-risk CAD who have not been offered PCI but should be eligible for protected PCI. &#13;
&#13;
Specifically, we tackle one of the eligibility requirements by extracting coronary lesion location and severity information from clinical records. Natural language processing (NLP) tools are enabling more advanced electronic health record (EHR) based patient research. We collected and curated a dataset of 72 diagnostic coronary angiogram reports from health systems which contributed data to the Abiomed cVAD registry. Of these, 39 reports from 6 sites were used as a training set and 13 reports from the same 6 sites as a development set for a data processing pipeline to extract coronary lesion information. This work expands on the existing solutions for extracting ejection fraction information from echocardiogram reports. The ejection fraction extraction task has been solved with regular expressions, a simple and somewhat inflexible pattern-matching approach. &#13;
&#13;
Our coronary lesion extraction followed a two-step architectural approach that is common in NLP: Named Entity Recognition (NER) followed by Relation Extraction (REL). We compare a machine learning based NER approach and a dictionary and regular expression ("matching") NER approach. Our REL implementation is rules-based. On entities alone, an intermediate outcome of the initial stage (NER), we achieve 92.1% recall and 93.9% precision with the machine learning based model and 95.1% recall with 52.6% precision for the matching-based model (on 370 total entities of types location, vessel, and severity in the development set). The machine learning (ML) approach overcomes the imprecision of the matching approach. This difference may not affect the final prediction performance, depending on the second-stage implementation. We achieve 89.7% recall and 84.5% precision on the second stage independently. (This is a conservative figure, as 7 of 103 relations in the development set come from types of sentences that are explicitly not yet handled by our second-stage model. Recall increases to 92.6% and precision to 90.6% when those cases are ignored.)&#13;
&#13;
Each stage independently achieves reasonable performance. We analyze errors to recommend the next steps of development for both stages. With the two stages together, we achieve 79.6% recall and 71.8% precision with the ML-based NER model and 76.2% recall and 77.7% precision with the matching-based NER model (without correcting for expected future improvements in performance). Non-ML approaches can solve at least three-quarters of this text extraction problem. We recommend advanced methods, including grammatical dependency rules for relations and improving ML-based entity prediction with more training examples from specific contexts.&#13;
&#13;
This work provided a roadmap and the first pipeline to leverage data from the cVAD registry for algorithm development for patient screening applications. We developed structured data models and an annotated dataset for coronary lesion description extraction from coronary angiograms. We present the results of the entire algorithm and its component parts and propose advanced methods to refine the approach for implementation in future patient screening tools.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin-Driven Supply Chain Enhancement to Support Direct-to-Consumer Growth</title>
<link href="https://hdl.handle.net/1721.1/155969" rel="alternate"/>
<author>
<name>Agrawal, Siddhant</name>
</author>
<id>https://hdl.handle.net/1721.1/155969</id>
<updated>2024-08-13T03:33:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Digital Twin-Driven Supply Chain Enhancement to Support Direct-to-Consumer Growth
Agrawal, Siddhant
In response to the rising trend of Direct-to-Consumer (D2C) sales, many traditional retailers, which have historically relied on wholesale business models, are now undertaking significant supply chain transformations. This thesis explores the strategic shift of a large retailer in the footwear and apparel sector, pseudonymously referred to as Iota in this study, as it transitions towards a D2C-focused supply chain. This transition, emblematic of a broader industry transformation, is aimed at enhancing alignment with the evolving expectations of customers in terms of service, cost-effectiveness, and sustainability.&#13;
&#13;
Central to this research are the proposed enhancements by Iota’s leadership to decentralize Iota’s supply chain. These enhancements include adding both physical infrastructure, with the planned establishment of a cross-dock facility, and digital infrastructure, through the development of a decision engine that aids in efficiently routing products within the new decentralized supply chain network. The cross-dock facility is envisioned to provide an opportunity for decision postponement in the inventory flow from Asian factories to US distribution centers. Meanwhile, the decision engine, leveraging a heuristic-based algorithm, is set to unlock new inventory flows and enhance inventory distribution.&#13;
&#13;
With the new infrastructure to decentralize the supply chain yet to be fully operational, a retrospective study was conducted using a digital twin of Iota’s supply chain. Various push and pull-based inventory deployment strategies were simulated in the digital twin with the goal of alleviating pressure on the primary distribution center and increasing fulfillment from regional distribution centers. In the simulation process, challenges with forecast data and lumpiness of supply are discovered and subsequently addressed through the use of synthetic datasets, which emulate improved forecast coverage and smooth supply.&#13;
&#13;
The key findings from simulations highlight that despite achieving a modest performance in meeting the goals for the decentralized network, valuable insights were obtained that could drive future supply chain enhancements. The research underscores the benefits of smoothing supply for network performance, the critical role of comprehensive and reliable forecast data, and the necessity for supplementary storage solutions to complement the cross-dock facility. For example, one pull-based scenario using a synthetic dataset to emulate enhanced forecast coverage and smoother supply tripled network performance while reducing network costs by 1% compared to the baseline pull-based scenario. Such cost savings could be substantial for a large-scale retailer.&#13;
&#13;
Concluding with recommendations, the thesis advises Iota to re-evaluate purchasing practices, consider integrating multiple internal sources of forecast data into a single source, and continue with simulation analyses. These recommendations are designed to support Iota, and by extension, similar retailers, in their transition towards a robust and agile D2C supply chain, ensuring competitive advantage in the dynamic retail sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Diagnostic and Prescriptive Conformal Prediction Framework: Applied to Sleep Disorders</title>
<link href="https://hdl.handle.net/1721.1/155919" rel="alternate"/>
<author>
<name>Khalif, Faduma</name>
</author>
<id>https://hdl.handle.net/1721.1/155919</id>
<updated>2024-08-02T03:01:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Diagnostic and Prescriptive Conformal Prediction Framework: Applied to Sleep Disorders
Khalif, Faduma
We propose a novel predictive framework for the future diagnoses and treatments of patients with neurological conditions, specifically patients with sleep disorders, given their clinical history. Via the use of a conformal algorithm with a classifier as its base model, we are able to utilize a patient's history of diagnoses, pharmacy dispensing, and other features to produce a set of possible final sleep disorder diagnoses and/or treatments with a definitive level of confidence and bounded level of uncertainty. We also utilize selective classification in order to allow the model to abstain from generating a prediction in cases where the algorithm's predictive confidence does not meet a given confidence threshold, and we further investigate variables that correlate with "abstain" model outcomes. In addition, we experiment with the use of additional machine learning methods such as no-regret learning to better address issues that arise in clinical decision-making. We find that even in cases where there is a limited level of accuracy produced by our base classifier, we are able to use minimal data and selective prediction to establish highly accurate predictive outcomes for certain subsets of our cohort. In developing and testing this framework, we attempt to propose a new standard for predictive algorithms that target clinical use cases and to better understand uncertainty quantification in a multitude of dimensions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Developmental Trajectories of Loophole Behavior in Autistic and Neurotypical Children</title>
<link href="https://hdl.handle.net/1721.1/155918" rel="alternate"/>
<author>
<name>Broski, Annalisa</name>
</author>
<id>https://hdl.handle.net/1721.1/155918</id>
<updated>2024-08-02T03:19:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Comparing Developmental Trajectories of Loophole Behavior in Autistic and Neurotypical Children
Broski, Annalisa
Loophole behavior is a common strategy used by neurotypical children to avoid trouble. The use of loopholes requires pattern recognition, language understanding, rational planning, and goal alignment. A major marker of autism is difficulty with Theory of Mind and language tasks, making autistic children's engagement with loophole behavior, which has clear patterns in neurotypical development, particularly interesting. We surveyed parents of autistic children (N = 202) and neurotypical children (N = 431) about their children's engagement with loophole behavior. We found that loophole behavior is common in both populations, and while the onset of this behavior was significantly later among autistic children than among neurotypical children, the peak and offset ages were not. This could point to a developmental trajectory that occurs later for autistic children, but it overall demonstrates that autistic individuals have the ability to engage with loophole behavior.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Markerless Motion Capture and Principal Component Analysis to Classify BMX Freestyle Tricks</title>
<link href="https://hdl.handle.net/1721.1/155914" rel="alternate"/>
<author>
<name>Nates, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/155914</id>
<updated>2024-08-02T03:59:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Markerless Motion Capture and Principal Component Analysis to Classify BMX Freestyle Tricks
Nates, Eva
This thesis presents a novel Bicycle Motocross (BMX) Freestyle (FS) trick classification technique developed for the Australian Cycling Team. The first step is tracking six key points on the athlete and their bike using DeepLabCut, an open-source markerless motion capture software package. Next, a Principal Component Analysis (PCA) is applied to the tracking data to calculate metrics that identify each trick type. Finally, a classifier is trained on these metrics. The dataset used in this paper focused on three common BMX Freestyle tricks: 360, backflip, and flair. The Logistic Regression model achieved the highest accuracy among the classifiers, correctly predicting the trick for 94.2% of the instances. This thesis discusses other ways to apply this data, such as novel trick generation. It also examines the robustness and the cost-benefit trade-off of the classifier.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Data Augmentation with Attention Masks for Context Aware Transformations</title>
<link href="https://hdl.handle.net/1721.1/155913" rel="alternate"/>
<author>
<name>Marquez, Sofia M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155913</id>
<updated>2024-08-02T03:28:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Evaluating Data Augmentation with Attention Masks for Context Aware Transformations
Marquez, Sofia M.
Transfer learning from large, pre-trained models and data augmentation are arguably the two most widespread solutions to the problem of data scarcity. However, both methods suffer from limitations that prevent more optimal solutions to natural language processing tasks. We consider that transfer learning benefits from fine-tuning on an increased target dataset size, and that data augmentation benefits from applying transformations in a selective, rather than random, manner. Thus, this work evaluates a new augmentation paradigm that uses the attention masks of pre-trained transformers to more effectively apply text transformations in high-importance locations, creating augmentations which can be used for further fine-tuning. Our comprehensive analysis points to limited success of this context-aware augmentation method. By shedding light on its strengths and limitations, we offer insights that can guide the selection of optimal augmentation techniques for a variety of models, and lay groundwork for further research in the pursuit of effective solutions for natural language processing tasks under data constraints.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the impact of automaker strategies on lithium price elasticity using a novel bottom-up demand model</title>
<link href="https://hdl.handle.net/1721.1/155912" rel="alternate"/>
<author>
<name>Sullivan, Luke Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/155912</id>
<updated>2024-08-02T03:40:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of the impact of automaker strategies on lithium price elasticity using a novel bottom-up demand model
Sullivan, Luke Robert
A global transportation paradigm shift towards electrification is underway that is rapidly redefining how billions travel. To reduce possible disruptions to the electric vehicle transition, an understanding of the demand and supply of key critical materials, materials with a high risk of supply chain disruption, is essential. There are key gaps in understanding how automaker electrification strategies will influence materials demand over time. This presents materials suppliers with risks when making decisions on new mine openings, a process that can take many years before new ore is extracted. As a result, materials prices experience significant volatility, as with lithium, which has seen 6x price swings in the past five years. Informed by semi-structured interviews with major automakers, this research applies technical insights on current and emerging battery chemistries to bottom-up economic demand modelling to generate forecasts of lithium demand and the price elasticity of that demand. Detailed analysis of automaker electrification strategies, regional breakdown, vehicle class composition, and selected battery chemistries creates an industry-wide evaluation of the possible short- and long-run impacts of high lithium prices. This research provides insights for decision-makers in industry and government to optimize electrification strategies that minimize vulnerability to lithium price disruptions. I present three recommendations to automakers, suppliers, and policymakers: (1) accelerate investment in new battery technology, (2) adopt aggressive and flexible rollout strategies that offer wide options for range and drivetrain, and (3) improve strategic communication between suppliers and automakers to narrow forecasts of supply and demand.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Multi-Objective Genetic Optimization in PCB Component Placement</title>
<link href="https://hdl.handle.net/1721.1/155911" rel="alternate"/>
<author>
<name>Ngô, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/155911</id>
<updated>2024-08-02T03:57:52Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Application of Multi-Objective Genetic Optimization in PCB Component Placement
Ngô, Thomas
Designing a printed circuit board (PCB) is a complex process that involves creating a schematic, placing components, ensuring that every component is routable, and performing simulations to predict the behavior of the PCB before it is manufactured. With the rise of technological innovations, the demand for chips will increase, putting pressure on the electronic design automation (EDA) industry to innovate in PCB design. As part of Cadence's Allegro X AI team, which aims to develop AI technology to automate PCB designers' tasks, we explored multi-objective genetic optimization as an alternative method for automating component placement. More specifically, we applied genetic optimization to a two-sided PCB. We discovered that employing multiple objectives, such as half-perimeter wirelength and routability, produces promising component placements.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Multiple Objective Optimization for Autonomous Sailing Vessels</title>
<link href="https://hdl.handle.net/1721.1/155909" rel="alternate"/>
<author>
<name>Webb, Jason B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155909</id>
<updated>2024-08-02T03:31:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Multiple Objective Optimization for Autonomous Sailing Vessels
Webb, Jason B.
This research addresses using multiple objective optimization, via the established open-source Mission Oriented Operating Suite-Interval Programming (MOOS-IvP) platform, to meet the unique navigational demands and operational constraints of autonomous sailing vessels. Recognizing a gap in the existing IvP Helm framework's ability to accommodate the intricate dynamics of wind-powered navigation, this thesis begins with the development of a sailing behavior. The core contribution of this work is the novel introduction of a sine wave-based approach for defining upwind tacking maneuvers. Building from a foundation in mathematical analysis, an algorithm was developed that employs the sine function to model the vessel's tack plan. Furthermore, the thesis explores the integration of this behavior within the MOOS-IvP architecture, detailing the modifications necessary to support wind-powered navigation. Evaluation of the proposed navigation behavior encompasses simulated environments. The assessments highlight the algorithm's adaptability to changing wind conditions. Through a combination of theoretical development and simulation, this study not only demonstrates the viability of integrating traditional sailing methods with contemporary autonomous systems but also contributes to advancing the capabilities of the standard MOOS-IvP toolkit and its continued use in various maritime applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capacitor Ladder Circuitry for Improving Electrical Energy Transfer Efficiency to Electromechanical Actuators</title>
<link href="https://hdl.handle.net/1721.1/155908" rel="alternate"/>
<author>
<name>Murphy, Trevor</name>
</author>
<id>https://hdl.handle.net/1721.1/155908</id>
<updated>2024-08-02T03:27:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Capacitor Ladder Circuitry for Improving Electrical Energy Transfer Efficiency to Electromechanical Actuators
Murphy, Trevor
This paper focuses on the general process of taking electrical energy from a power source to a mechanical system to do mechanical work, specifically focusing on a ladder circuit that delivers energy to a capacitive-type actuator modeled as a mass-spring-damper (MSD) system. The whole chain of power conversion involves an electrical transfer efficiency and a mechanical energy conversion efficiency. The first chapter walks through the motivation for, and failure of, an attempt to build a dielectric elastomer as an MSD test system. The second chapter details the electrical problem, defining what a capacitive-type actuator is, what the electromechanical actuation process entails from an abstract perspective, and an efficiency metric. The third chapter reviews how inductors offer a solution and how, at certain size scales, they lose efficacy. The fourth chapter introduces the ladder circuit as a solution to the electrical energy transfer problem. Chapters 5 and 6 detail electrical experiments and modeling of the circuit to characterize the efficiency of the electromechanical process. Lastly, chapter 7 concludes with a discussion of the applicability of the ladder circuit solution.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Picture Book for the Roboticist— Why we Should Start with Hardware, and How to Teach so it Sticks</title>
<link href="https://hdl.handle.net/1721.1/155907" rel="alternate"/>
<author>
<name>Mehrotra, Aditya (Adi)</name>
</author>
<id>https://hdl.handle.net/1721.1/155907</id>
<updated>2024-08-02T03:55:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Picture Book for the Roboticist— Why we Should Start with Hardware, and How to Teach so it Sticks
Mehrotra, Aditya (Adi)
This thesis explores why and how to teach hardware design in relation to building intelligent systems. We focus on the concepts of modeling, embedded systems, and actuation, and develop a series of hands-on exercises to teach specific concepts based on previous work. We identify and explain the concept of the translation layer, which we define as the interface between high-level controls and the hardware system. We explain the importance of hardware engineering to its operation and explore the role of the hardware engineer in building this layer. We use these ideas to build an undergraduate curriculum in robotics, the syllabi of four core classes, and hands-on exercises for their associated lab components. Along the way, we focus on the science of learning that often doesn’t make its way into engineering education. We present a summary of key concepts surrounding how our students learn and use this to explain why hardware engineering is a good medium for teaching. We use this to build a loose design paradigm for what ‘works’ in engineering teaching. And we use that design paradigm to build the aforementioned hands-on exercises. Additional discussions include topics that should be considered when building a curriculum, including providing space for low-stakes curiosity, teaching our students about the application of their work to global problems, and including narratives on learning in our teaching.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal Interaction of Inert Additives in Energetic Materials</title>
<link href="https://hdl.handle.net/1721.1/155906" rel="alternate"/>
<author>
<name>Tsai, Gwendolyn</name>
</author>
<id>https://hdl.handle.net/1721.1/155906</id>
<updated>2024-08-02T04:02:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Thermal Interaction of Inert Additives in Energetic Materials
Tsai, Gwendolyn
Energetic materials are used for a variety of applications, including airbag deployment and solid rocket fuels, that require high energy density and various energy release rates. The energy release rate, determined by how fast the material burns, is often thought to be proportional to the bulk thermal diffusivity of the material. However, the inclusion of insulating inert particles in energetic materials has shown burning rate enhancement in certain cases. Flame front corrugation, which increases the reaction front area and is observed at micron to sub-millimeter scales, was previously proposed to explain this phenomenon. However, a recent simulation study observed a significant temperature gradient within the inert particle, implying that the residence time of the inert particle in the flame front could play a role in the thermal interaction between additives and surrounding energetic materials. In this work, we tested these hypotheses by employing a high-speed microscopic imaging system to quantify the burning rate and flame morphology of Al/CuO nanothermites with various SiO2 particle sizes and mass loadings. Additionally, we performed flame propagation simulations to quantify the thermal interactions between the energetic materials and a single embedded inert particle. The experimental results show that the burning rate depends on the particle size as well as mass loading. Specifically, as the SiO2 particle size increases from 100 nm to 100 μm, the burning rate is enhanced by 26% at a mass loading of 7.5%. Further computational studies reveal that flame corrugation may not be the sole factor altering the burning rate. Non-dimensional analyses show that energy absorption and temperature non-uniformity in inert particles have strong correlations with particle diameter.
When the characteristic time of heating the inert particle is shorter than the flame residence time, the inert particle acts as a heat sink, leading to a negative impact on burning rates due to heat removal from the surrounding energetic materials. Experimental studies reveal that additive particle size has an impact on the nanothermite burn rate. Insight into why this may occur is provided by computational studies of a single particle inclusion, as well as images captured during the burn rate experiments, showing that flame front morphology and particle size effects on heat transfer may play a key role in burn rate alteration by inert additives.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovative Floating Wind Turbine with Synthetic Mooring System and Feasibility Analysis of a Solar-Wind-Battery Hybrid System</title>
<link href="https://hdl.handle.net/1721.1/155904" rel="alternate"/>
<author>
<name>Gkiokas, Christos</name>
</author>
<id>https://hdl.handle.net/1721.1/155904</id>
<updated>2024-08-02T03:58:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Innovative Floating Wind Turbine with Synthetic Mooring System and Feasibility Analysis of a Solar-Wind-Battery Hybrid System
Gkiokas, Christos
Synthetic mooring lines, characterized by their neutral buoyancy and high strength, are crucial for maintaining the station-keeping of Floating Wind Turbines (FWTs) by providing the necessary restoring forces while minimizing the vertical loads on the platform. This thesis explores the evolution of mooring systems from traditional catenary chains to taut synthetic fiber ropes, using the VolturnUS-S semi-submersible platform as a case study. The investigation delves into the viscoelastic properties of synthetic ropes and the challenges in accurately modeling their stiffness characteristics. Detailed analysis of the mooring system for the VolturnUS-S platform includes configuration, inclination, and composition of the mooring lines. Environmental conditions at the prospective mooring site are analyzed to evaluate the platform’s responses. A mesh sensitivity study determines the optimal balance between computational efficiency and accuracy. Various stiffness models of polyester mooring ropes are compared, highlighting the impact of rope diameter and inclination on mooring system performance, examining pretension, static and dynamic tensions, and safety margins. The major conclusions of this study are discussed, emphasizing the key findings. A comprehensive feasibility analysis and preliminary economic assessment of a solar-wind-battery hybrid system designed to supply power to a remote island is presented. Multiple configurations are evaluated to identify the most cost-effective and efficient system. The findings indicate that a hybrid system is both technically viable and economically feasible, with wind energy contributing significantly during winter months and solar energy during summer, yielding a reliable power supply throughout the year. Additionally, an overview of offshore wind submarine cabling is provided, focusing on types of cables, route planning, installation, operational considerations, and environmental impacts.
Comprehensive planning for cable routes is covered, including site assessments, hydrographic surveys, and regulatory requirements.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolving User Needs Identification through AI Augmented Approaches</title>
<link href="https://hdl.handle.net/1721.1/155903" rel="alternate"/>
<author>
<name>Schelhaas, Booker B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155903</id>
<updated>2024-08-02T03:39:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evolving User Needs Identification through AI Augmented Approaches
Schelhaas, Booker B.
In a human-centered design approach to the product design cycle, conducting a user needs analysis is critical to the long-term success of the project. Designers are routinely tasked with engaging in stakeholder studies in an effort to identify the needs that then drive the design. Sometimes users are aware of their needs, but often they are not conscious of some important yet hidden needs, called latent needs, which are particularly difficult to identify. The identification process can be laborious and resource intensive, including interviews and in-depth observations by experts to extract workarounds and pain points that suggest the highest potential for product success. This thesis aims to first explore the current status quo for user need extraction through observation and interviews, and then presents a preliminary and novel AI-based method for augmenting designers’ abilities to aid in the process. The first chapter demonstrates what can be done with traditional methods. We conducted video recordings and observations of older adults to understand their ability to stand and their opinions on devices to aid them. After conducting many interviews and observations, we identified that the use of stand assist devices is in itself a latent need, as there exists a perception gap among the older adults between their perceived ability to stand and their actual ability to stand, as diagnosed by a trained physical therapist. The following chapter, in response to some of the difficulties observed in the first study, presents a novel AI tool to augment designers’ abilities to identify user needs from observational videos. Our tool utilizes pose estimation to calculate the ergonomic risk of users as they engage in a task, as well as object segmentation to identify objects that could be affecting the user’s behavior. These are then compiled into a computer interface for designers to use when watching an observational video of a user.
Methods, experimental design, and future work are discussed for the study, which is pending completion.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-throughput Bandgap Mapping for Perovskite-inspired Materials</title>
<link href="https://hdl.handle.net/1721.1/155902" rel="alternate"/>
<author>
<name>Sheng, Fang</name>
</author>
<id>https://hdl.handle.net/1721.1/155902</id>
<updated>2024-08-02T03:27:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">High-throughput Bandgap Mapping for Perovskite-inspired Materials
Sheng, Fang
In recent years, lead halide perovskites have gained attention as promising candidates for photovoltaic devices due to their superior performance. However, issues with stability and toxicity have hindered their widespread application. As a result, perovskite-inspired materials, which are stable and lead-free, have attracted attention. Like their lead halide counterparts, these perovskite-inspired materials possess a vast compositional space, presenting a challenge in finding materials with the desired optoelectronic properties. To address this challenge, there has been recent interest in developing high-throughput materials synthesis techniques capable of exploring large materials spaces, culminating in the development of materials-printing platforms capable of synthesizing dozens of candidate materials per minute. However, despite the acceleration of sample synthesis, significant delays are faced during sample characterization, due to time-intensive data acquisition and analysis. Additionally, issues concerning poor or unquantified synthesis reproducibility can affect the quality of information gained. In this thesis, I used a home-built high-throughput combinatorial printer to synthesize perovskite-inspired materials and developed a novel high-throughput technique to map local bandgaps with pixel-level resolution. This characterization technique utilizes spatially-resolved reflectance spectra and automated data analysis. In total, I collected approximately a million optical bandgap measurements from the compositional space of Cs₃(BiₓSb₁₋ₓ)₂(BrᵧI₁₋ᵧ)₉ perovskite-inspired materials. The bandgap mapping results revealed nonlinear bandgap variations along six compositional gradient sequences. I was able to identify phase separation within samples by detecting the presence of multiple bandgaps, utilizing extensive spatial and optical data.
Finally, I worked with colleagues to obtain transient absorption spectroscopy data, which indicated that carrier depletion from ground states to excited states occurred at distinct energy levels, exhibiting unique carrier dynamics that correspond with the observed bandgap variations. Anomalies within quasi-binary systems may indicate phase separation. In conclusion, this approach enables rapid screening of quasi-binary phase spaces on the basis of bandgap.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stability and Dynamics of Resource Consumer Ecosystems</title>
<link href="https://hdl.handle.net/1721.1/155901" rel="alternate"/>
<author>
<name>Liu, Yizhou</name>
</author>
<id>https://hdl.handle.net/1721.1/155901</id>
<updated>2024-08-02T03:28:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stability and Dynamics of Resource Consumer Ecosystems
Liu, Yizhou
Natural ecosystems, ranging from microbiomes to forests, significantly influence humanity by affecting individual health and promoting the sustainable growth of society as a whole. Understanding collective properties such as diversity, stability, and diversity-stability relationships in large, complex ecosystems with real-world structure has been a significant challenge, yet it is important for better ecosystem management. In this thesis, we investigate the stability and dynamics of resource-consumer ecosystems (ecosystems with two trophic levels). Beginning with geometric analyses of small systems, we uncovered a critical instability arising from a mismatch between resources that promote growth (defined as niches) and those predominantly consumed. This instability emerges when the discrepancy between consumption and growth exceeds the differences among nearest niches, indicating that species are more likely to encroach upon the niches of others rather than their own. After losing stability, the extent to which species encroach upon their neighbors’ niches can predict the diversity and sizes of attractor basins. We further develop a stability criterion with statistical properties of consumption and growth, employing random matrix theory. This criterion hinges on the correlation between growth-promoting resources and those primarily consumed, with the critical level of discrepancy being influenced by the ratio of species to resources. This result is consistent with the geometric interpretation, giving an analytic estimation of maximum niche overlaps. Additionally, we uncover fundamental symmetries in system stability, enhancing our stability criterion through geometric insights and extending its applicability to realistic situations. Later, by integrating mechanisms such as cross-feeding, toxin production, and species autoregulation, our expanded model framework accommodates scenarios where consumers outnumber resources, thereby refining our stability criterion.
Notably, we identified a re-entrant stability phenomenon, where increased diversity within trophic levels initially destabilizes but subsequently stabilizes the community. This leads to the conclusion that the difference in diversity between trophic levels is crucial for ecosystem stability, with the least stable ecosystems being those with comparable numbers of species across levels. Our work establishes a mechanistic understanding of ecosystem instability through niche encroachment and shows that stability hinges on diversity differences across trophic levels rather than total diversity, therefore emphasizing the significance of mechanistic structures in predicting large ecosystem behaviors.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanical Design and Learned Control System Development of Fiber Extrusion Device on Industrial Programmable Logic Controller (PLC) Platform.</title>
<link href="https://hdl.handle.net/1721.1/155900" rel="alternate"/>
<author>
<name>Sakib, Gazi S.</name>
</author>
<id>https://hdl.handle.net/1721.1/155900</id>
<updated>2024-08-02T03:33:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mechanical Design and Learned Control System Development of Fiber Extrusion Device on Industrial Programmable Logic Controller (PLC) Platform.
Sakib, Gazi S.
Optical fibers are ubiquitous in the 21st century as they form the backbone of the internet and electronic communication and enable a global village to exist. Optical fibers play a pivotal role in modern technology and communication for several reasons. They enable high-speed data transmission over long distances while minimizing data interception. In addition, they are also used in fields like medicine (fiber-optic imaging and endoscopy), sensing technologies (temperature, pressure, and strain sensors), and industrial settings (data transmission and control systems). Therefore, it is of utmost importance that the manufacturing process of optical fibers is better controlled by developing advanced control algorithms that enhance the state-of-the-art PID (Proportional–Integral–Derivative) controllers. This thesis showcases the work done to establish a framework and a “digital twin” for deploying advanced learned control algorithms on industrial platforms such as Programmable Logic Controllers (PLCs), based on machine learning models such as DDPG (Deep Deterministic Policy Gradient). To develop and train such control algorithms, a desktop version of a fiber draw tower was designed, manufactured, and controlled via a PLC. System dynamics data was collected using a readily available preform substitute, and the manufactured desktop Fiber Extrusion Device (FrED) was used to train the DDPG-based control algorithms/model. The model was then tested and compared to state-of-the-art PID algorithms. To that effect, this thesis establishes a framework and enables the path to further develop advanced control algorithms to better control the manufacturing process of fiber optics. This pivotal step promises to significantly enhance the precision and efficacy of optical fiber manufacturing processes, amplifying their impact across industries and technological frontiers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Directional Recrystallization of an Additively Manufactured Oxide Dispersion-Strengthened Nickel-Base Superalloy</title>
<link href="https://hdl.handle.net/1721.1/155899" rel="alternate"/>
<author>
<name>Carter, Christopher P.</name>
</author>
<id>https://hdl.handle.net/1721.1/155899</id>
<updated>2024-08-02T03:46:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Directional Recrystallization of an Additively Manufactured Oxide Dispersion-Strengthened Nickel-Base Superalloy
Carter, Christopher P.
This thesis investigates the recrystallization behaviors of an additively manufactured (AM) oxide dispersion-strengthened (ODS) NiCoCr medium entropy alloy, focusing specifically on the effects of a liquid phase, annealing twins, and aluminum microalloying additions on recrystallization kinetics. Conventional wrought ODS alloys achieve coarse columnar grains through directional recrystallization (DRX) heat treatments. The main goal of this study was to assess whether directional recrystallization can achieve a similar effect in AM ODS alloys. Gas-atomized NiCoCr powders were decorated with oxide dispersoids using resonant acoustic mixing, then consolidated with laser powder bed fusion. The as-printed ODS materials were fully dense with retained nanoscale Y2O3 dispersoids and a small grain size on the order of 10 μm. The as-printed materials were subjected to isothermal recrystallization and directional recrystallization heat treatments at soak temperatures between 800 and 1419 °C. During isothermal annealing, the material only recrystallized when the soak temperature exceeded the solidus or when Al alloying additions accelerated the coarsening kinetics of the oxide dispersoids. Directional recrystallization experiments on the non-ODS alloy did not result in the formation of columnar grains, likely due to the propensity of recrystallized NiCoCr to form annealing twins, which are less mobile than grain boundaries. Directional recrystallization in the ODS NiCoCr could not be achieved without surpassing the solidus temperature of the alloy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Litigation Financing Disclosures on Patent Litigation</title>
<link href="https://hdl.handle.net/1721.1/155897" rel="alternate"/>
<author>
<name>Han, Yuxin (Zoe)</name>
</author>
<id>https://hdl.handle.net/1721.1/155897</id>
<updated>2024-08-02T03:10:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Impact of Litigation Financing Disclosures on Patent Litigation
Han, Yuxin (Zoe)
This paper investigates the impact of mandatory litigation financing disclosures on litigation outcomes, particularly in patent litigation. Despite the increasing importance of litigation funding, transparency regarding funders’ involvement remains limited. Using a difference-in-differences model, the study examines the effects of recent disclosure mandates implemented in federal courts. The findings unveil a notable reduction in the volume of cases instigated by Non-Practicing Entities (NPEs) following the mandate, alongside indications of strategic forum shopping aimed at circumventing disclosure requirements. Furthermore, the study finds reductions in settlement time for cases filed by likely financially constrained plaintiffs after the introduction of mandatory funding disclosures. In summary, this paper illuminates the complex relationship between disclosure regulations and NPE activities, highlighting the potential unintended consequences arising from seemingly well-intentioned reforms within the legal system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Baby Gym: Bridging the Gap between Reinforcement Learning and Human Infant Locomotor Development</title>
<link href="https://hdl.handle.net/1721.1/155896" rel="alternate"/>
<author>
<name>Patel, Nikasha G.</name>
</author>
<id>https://hdl.handle.net/1721.1/155896</id>
<updated>2024-08-02T03:05:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Baby Gym: Bridging the Gap between Reinforcement Learning and Human Infant Locomotor Development
Patel, Nikasha G.
Learning how to move is one of the most fundamental milestones humans achieve during their development, through complex interactions between neural control, biomechanics, and the environment. However, not every human learns to locomote the same way: babies exhibit remarkable variance in the stages they undergo before crawling and walking. While years of empirical research exist quantifying and qualifying developmental stages in infant locomotion, we lack a computational model to understand how variations during the developmental stages affect overall crawling and walking behavior, which would allow us to test hypotheses in simulation. To better understand how infants learn to move, a testable model of infant locomotion would complement experimental studies, allowing for model-guided interpretations of observed phenomena. This thesis work fills this gap by introducing Baby Gym, a library for probing emerged behavior through reinforcement learning (RL) on an infant-like agent with the capacity to crawl and walk, compatible with both the OpenAI Gymnasium and DM Control APIs. Baby Gym will serve as a first step in enabling a cross-disciplinary open-source ecosystem of computational models to understand infant motor development.&#13;
&#13;
The work consists of the following: an extensive literature review that justifies the foundations for a baby RL environment; a Python-based infrastructure for cross-compatibility between Gymnasium and DM Control; a reproducible RL environment with several new reward functions that yield human-like locomotor development stages; and initial methods for evaluating the "human-likeness" of the emerged locomotion.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Human Memory Processes via Bio-Signals</title>
<link href="https://hdl.handle.net/1721.1/155895" rel="alternate"/>
<author>
<name>Abdelrahman, Mona Magdy</name>
</author>
<id>https://hdl.handle.net/1721.1/155895</id>
<updated>2024-08-02T03:40:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Detecting Human Memory Processes via Bio-Signals
Abdelrahman, Mona Magdy
Bio-signals, such as eye movement data, photoplethysmography (PPG), and electrodermal activity (EDA), can provide insight into various cognitive states. Previous work has shown that eye movements, along with other bio-signals, differ when viewing familiar versus unfamiliar faces. Signals such as heart rate (derived from PPG) and skin conductance (derived from EDA) have also previously been shown to correlate with different states of memory. In this study, we collected simultaneous pupillary, PPG, and EDA signals while participants (n=32) transitioned between several cognitive states (learning, recognition, and recall). Using this data, we propose multi-modal machine learning methods to predict and evaluate whether a user is in a cognitive state of learning, recognition, or recall. We will discuss the differences observed in the data between these cognitive states, as well as next steps and applications for this model.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Sensors, Data Analysis, and Non-Intrusive Load Monitoring: Foundations for Reliability-Centered Maintenance on Ships</title>
<link href="https://hdl.handle.net/1721.1/155894" rel="alternate"/>
<author>
<name>Skimmons, Jacob Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/155894</id>
<updated>2024-08-02T03:40:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Distributed Sensors, Data Analysis, and Non-Intrusive Load Monitoring: Foundations for Reliability-Centered Maintenance on Ships
Skimmons, Jacob Daniel
Advances in computing and sensing technology have brought powerful new tools within reach of shipboard engineers. With the right setup, operators can leverage statistics and digital signal processing tools to gain physical insight previously obscured by the sheer amount of work and specialized knowledge it once took to do the same. This thesis explores several applications of non-intrusive load monitoring (NILM) tools aboard a U.S. Coast Guard Fast-Response Cutter (FRC) patrol boat, novel analysis methods of the corrosion protection systems on the FRC, and practical ways of making smart data approachable. Once implemented, these methods will reduce the effort needed to safely operate a modern, high-tech ship by giving operators greater insight into how their systems perform in real-time.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular Self-Assembly of Carbon Nanosheets via AFM&#13;
Nanoprinting</title>
<link href="https://hdl.handle.net/1721.1/155893" rel="alternate"/>
<author>
<name>Ibrahim, Malek M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155893</id>
<updated>2024-08-02T03:52:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Molecular Self-Assembly of Carbon Nanosheets via AFM&#13;
Nanoprinting
Ibrahim, Malek M.
Traditional nanofabrication methods are currently enabled by top-down and, more recently, bottom-up approaches. The former involves highly specialized equipment and processes, such as photolithography, electron beam lithography, and focused ion beam milling, to etch or deposit materials at the nanoscale. These methods are well-established and widely used in the semiconductor industry, but they often require expensive equipment and complex processes, and employ environmentally harmful chemicals. The latter approach, bottom-up nanofabrication, has recently gained popularity due to its potential for low-cost, highly customizable, and environmentally friendly fabrication of nanoscale structures, though many challenges still exist in developing a scalable manufacturing method. As such, a variety of techniques have been investigated to enable bottom-up nanofabrication, including two-photon polymerization (2PP), electrohydrodynamic jet printing, dip-pen nanolithography, and solid-state polymerization, among others. In this thesis, we propose a new bottom-up nanofabrication approach combining molecular self-assembly with atomic force microscopy (AFM), which we believe has the potential to create devices with unprecedented properties and functionalities in both the technological and biological domains. To this end, we first present the development of a proof-of-concept custom AFM nanoprinter for the molecular self-assembly of carbon nanosheets, and subsequently, we explore the design, fabrication, and initial testing protocols of custom 2PP-printed FluidFM cantilevers as an alternative to traditional FluidFM probes for more general AFM nanoprinting applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, Development, and Testing of an Unmanned Surface Vessel (USV) for Oyster Aquaculture</title>
<link href="https://hdl.handle.net/1721.1/155892" rel="alternate"/>
<author>
<name>Dapoz, Annemarie</name>
</author>
<id>https://hdl.handle.net/1721.1/155892</id>
<updated>2024-08-02T03:22:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design, Development, and Testing of an Unmanned Surface Vessel (USV) for Oyster Aquaculture
Dapoz, Annemarie
To sustainably feed the growing worldwide population, development of aquaculture technology is necessary. However, it heavily lags behind that for terrestrial agriculture. The Oystermaran team at MIT Sea Grant is working on developing a vehicle to address a section of this need. Close-quarters flip-bag oyster farming, common in Massachusetts, is a physically demanding job which is done entirely manually, as there is no existing technology that fits into the crowded oyster field. The team developed the Oystermaran, an unmanned surface vessel designed specifically to maneuver through the crowded farm and flip the baskets. This thesis covers the complete mechanical design, development, and initial testing of the second Oystermaran vehicle. Built as a flexible design to allow adaptations and tuning on-site, the Oystermaran V2 featured interchangeable bows and adjustable frame and mechanism dimensions, and added mechanisms and capabilities that aquafarmers requested. Multiple rounds of testing and adjustment were conducted, and the Oystermaran V2 proved to be a complete platform that the team can continue to test and develop toward a fully autonomous vehicle.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Manufacturability Assessment of the Navy Integrated Power and Energy Corridor (NiPEC)</title>
<link href="https://hdl.handle.net/1721.1/155891" rel="alternate"/>
<author>
<name>Curran, Emily Alice</name>
</author>
<id>https://hdl.handle.net/1721.1/155891</id>
<updated>2024-08-02T04:03:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Manufacturability Assessment of the Navy Integrated Power and Energy Corridor (NiPEC)
Curran, Emily Alice
The growing electrical demands of sophisticated naval vessels necessitate the development of advanced power distribution methods. With the U.S. Navy’s shift towards fully electric ships, exemplified by the Zumwalt class destroyer and the forthcoming DDG(X), the demand for electrical power on future ships is projected to exceed 100 megawatts. To meet this challenge, the Massachusetts Institute of Technology (MIT) Sea Grant Program’s Design Laboratory, in collaboration with the Electric Ship Research and Development Consortium (ESRDC), is developing the Navy Integrated Power and Energy Corridor (NiPEC). This innovative system is designed to transform power management in all-electric warships through the use of modular units for energy management and power electronic building block (PEBB) technology. &#13;
Substantial groundwork has been established on the components and initial configurations of NiPEC. The collaborative team is working to develop not only a more robust power distribution system, but also an infrastructure that is simpler to construct, install, and maintain onboard. A next step of development focuses on evaluating the design’s manufacturability and the feasibility of manufacturing and installing the system aboard ships. This study explored the principles of Design for Manufacturability (DFM) and Design for Production (DFP) and then defined how these concepts apply to the Power Electronic Power Distribution Systems (PEPDS) and the NiPEC project. By leveraging the principles of DFM and DFP, this thesis proposes criteria for assessing the overall manufacturability of the NiPEC and its subsystems. By establishing criteria based on the principles of DFM as it pertains to NiPEC and naval applications, system designs may be objectively evaluated throughout the design phase. This thesis applies the proposed evaluation criteria to current NiPEC cooling system designs to illustrate the application of these criteria. This evaluation also highlights the trade-offs between manufacturability and other key metrics such as cost, reliability, and maintainability. These criteria may be useful in evaluating the design and functionality of systems and subsystems, steering design choices towards solutions that are not only technically sound, but also practical for manufacturing and installation. This approach ensures the alignment of the NiPEC system with the evolving needs of naval power management, and further enables its successful implementation on future all-electric warships. With this evaluation, this thesis begins to bridge the gap between the current state of research and the practical deployment of a next-generation shipboard power distribution system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity Sensors for a High-Bandwidth, Low-Latency Robotic Manipulation Object Avoidance Controller</title>
<link href="https://hdl.handle.net/1721.1/155889" rel="alternate"/>
<author>
<name>Han, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/155889</id>
<updated>2024-08-02T03:17:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Proximity Sensors for a High-Bandwidth, Low-Latency Robotic Manipulation Object Avoidance Controller
Han, Jessica
Robotics holds the promise of transforming industries, from automating recycling to managing household chores, by enabling machines to perform tasks with human-like dexterity. However, current robotic manipulation systems struggle to achieve the real-time responsiveness required for such tasks. Traditional systems rely on cameras, which slow down control loops with dense and difficult-to-process data. This thesis addresses the need for real-time control in robotic manipulation by utilizing proximity sensors in a high-bandwidth, low-latency object avoidance reflex controller on the Biomimetic Robotics Lab’s dexterous robotic manipulation platform. The research focuses on the two most viable proximity sensors for robotic manipulation: the STMicroelectronics VL6180X Time-of-Flight sensor and the Thinker Phase-Modulated-Light sensor. These sensors are characterized based on their measurement range, error, variance, field-of-view, and convergence time to determine their usability in an object avoidance reflex. Following characterization, a study on the integration of these sensors into the manipulation platform is performed to assess sensing latency and bandwidth implications. Finally, validation of the optimal sensor-controller configuration for the object avoidance reflex—averaging two time-of-flight sensors with a linear virtual force—shows an improvement in bandwidth from 33 Hz to 115 Hz, enhancing the reactivity and stability of the object avoidance reflex. Overall, this research provides a comprehensive study on the individual sensor and sensor-integration levels of proximity sensors for object avoidance reflexes. It enables future researchers to be confident in the manipulation platform’s performance for further controls-level research.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Dynamic Manipulation on an Anthropomorphic Robotic Table Tennis Platform</title>
<link href="https://hdl.handle.net/1721.1/155886" rel="alternate"/>
<author>
<name>Cancio, Kendrick D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155886</id>
<updated>2024-08-02T03:36:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Dynamic Manipulation on an Anthropomorphic Robotic Table Tennis Platform
Cancio, Kendrick D.
Specialized robots whose morphologies address narrow tasks are capable of super-human precision, speed, and accuracy. However, with generalized anthropomorphic designs coming to the forefront of robotics, we have yet to achieve parity with the best human performance on these platforms, particularly in dynamic manipulation, which encompasses tasks such as throwing, catching, and striking. This thesis documents work toward a robotic table tennis platform that enables the development of planning and control algorithms for dynamic manipulation. Specifically, a fully integrated hardware platform is presented with two candidate vision systems and a 5 DOF anthropomorphic robotic arm. A dynamics model of the ball is introduced and validated for predicting the trajectory of the ball. To strike the ball, a nonlinear trajectory optimization problem is formulated for the arm and shown to be capable of generating various types of swings. This formulation is applied to the lower 4 DOF case by additionally considering the timing of the strike. Finally, nominal ball striking is demonstrated on hardware for the case of planar ball trajectories.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Neuronal Cell Classes and their Role in Cognition</title>
<link href="https://hdl.handle.net/1721.1/155884" rel="alternate"/>
<author>
<name>Huang, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/155884</id>
<updated>2024-08-02T03:15:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Investigating Neuronal Cell Classes and their Role in Cognition
Huang, Emily
Classifying neurons into different cell classes is an idea that has existed since the origins of neuroscience and one that is essential to understanding the complex interactions of the brain. While there has been a substantial effort to categorize neurons morphologically, molecularly, and physiologically in in vitro studies, there is a gap in experiments performed on awake, behaving animals. Using data collected from macaque monkeys performing a working memory task, and employing an unsupervised Gaussian mixture model (GMM) clustering algorithm, a number of different cell classes and their defining features were distinguished in area 7A, the lateral intraparietal area (LIP), the dorsolateral and ventrolateral prefrontal cortex (PFC), and the extrastriate visual area (V4). While the number of cell classes found differed across areas, several classes appeared to be correlates across areas. Classes in each area also showed functional differences in information encoding during predictable trials and distributional differences in depth. This signifies both the potential of functionally distinct cell classes involved in prediction and the existence of universal cell classes across different areas.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aging Changes Cell Mechanics and Dynamics with a Backbone of Cytoplasmic Crowding</title>
<link href="https://hdl.handle.net/1721.1/155881" rel="alternate"/>
<author>
<name>Lee, Lani Dakyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/155881</id>
<updated>2024-08-02T03:00:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Aging Changes Cell Mechanics and Dynamics with a Backbone of Cytoplasmic Crowding
Lee, Lani Dakyoung
Aging is a biological process correlated with life-altering and terminal diseases, and while the population of the elderly increases every year, the mechanism behind aging and its physical changes is not fully understood. Many aging studies traditionally focus on molecular-level changes, but much of the physical effect of aging could be better understood in cells, which are the fundamental building blocks of life. In particular, because the cytoplasm comprises approximately 70% of cells and its physical properties strongly influence biological processes, understanding its mechanics provides a more comprehensive description of aging. Using cells from well-established aging mouse models, we investigate the morphological and dynamic changes of aging cells and how they relate to the physical state of the cytoplasm. Using particle fluctuation, optical tweezers, and force spectrum microscopy, we demonstrate that aging halves motion inside the cytoplasm due to doubled stiffness, while active forces remain statistically similar. In addition, we take tomograms of cell refractive index that indicate a denser cytoplasm, and 3D images that show decreased cell volume, hinting that aging causes a more crowded interior, in line with the physical differences we observe. We investigate some key functional differences at the cellular level: confocal images and videos show aged cells spreading larger and rounder, with decreased motion. 3D morphology and ECM structure, as well as contractility measurements from traction force microscopy, indicate a changed cell-environment interaction. We also measure an increased nucleus-to-cell volume ratio, an important marker in cell biology that may indicate changes in cell maturity or even a connection to cancer malignancy. Our results imply a crucial physical mechanism behind cellular-level changes due to aging, helping to reconcile the physical and nonphysical changes investigated in the aging literature.
This study provides an extensive investigation of the cytoplasm and connects its physical state to the changes in cell mechanics and dynamics observed from aging.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrodynamic Analysis of Arrays of General Bodies</title>
<link href="https://hdl.handle.net/1721.1/155880" rel="alternate"/>
<author>
<name>Cotey, Sarah M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155880</id>
<updated>2024-08-02T03:14:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hydrodynamic Analysis of Arrays of General Bodies
Cotey, Sarah M.
Wave energy is one of the world’s largest sources of renewable energy. However, wave energy farms are in an early stage of development and relevant research in this field has not produced a general agreement on design approach. Many research articles have been published analyzing optimal geometry of a single wave energy converter (WEC) or the arrangement of a narrow range of varied geometry. This thesis seeks to expand on this research to study the effects of both WEC arrangement and varied body geometry. An optimal combination of WEC geometry and array configuration to maximize energy absorption from scattered and radiated wave interactions between bodies can be determined using computational methods. In order to lay the groundwork to accomplish this, a partial wave decomposition model was developed to describe wave-body interaction of a body of general shape. Hydrodynamic behavior was modeled using potential flow and linear wave theories, in line with other research in this area. Bodies of varied shapes were modeled using computer aided design (CAD) software. The hydrodynamic response of the isolated body problem was subsequently analyzed using the WAMIT boundary element method (BEM) program. Resulting velocity potentials, excitation forces, and other hydrodynamic quantities were then processed using a partial wave mathematical model to determine each body’s unique diffraction and force transfer matrices. These characteristic quantities were then input into a multiple wave scattering interaction in-house program to analyze the system response in various configurations. The power gain of these arrays was studied to determine the magnitude of power absorption increase relative to the body in isolation. The results were analyzed to determine arrays and body geometry designs that produce improved system response and overall WEC efficiency.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magnetohydrodynamic Induction Pump Jet Propulsor for Undersea Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155879" rel="alternate"/>
<author>
<name>Daus, Jonathan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/155879</id>
<updated>2024-08-02T04:06:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Magnetohydrodynamic Induction Pump Jet Propulsor for Undersea Vehicles
Daus, Jonathan J.
There has been strong interest in Magnetohydrodynamic (MHD) technology for marine propulsion for over 60 years, but progress has been limited by the inability to create strong enough magnetic fields (&gt; 5T). The Defense Advanced Research Projects Agency (DARPA), Defense Sciences Office (DSO), recently released a Broad Agency Announcement (BAA) for their Principles of Undersea Magnetohydrodynamic Pumps (PUMP) program, soliciting the design and prototype development of an MHD propulsor for naval applications. This thesis investigates the design of an induction-based MHD propulsor for use on submarines. The objective of this work was to optimize the propulsor design such that its electrical efficiency ηE &gt; ηD, where ηD is DARPA’s goal efficiency of 70%, while achieving the required thrust for tactical speeds (∼ 10 kts). This research modeled the propulsor using Actuator Disk Theory (ADT) and incorporated inflow boundary layer effects and hydrodynamic drag to develop a total propulsor efficiency. A significant portion of the investigation addressed mitigation of the finite-length effect that has traditionally led to low efficiencies (5%-45%) in MHD liquid metal induction pumps. Analysis showed that the design of the current waveform was essential to achieving relevant efficiencies; the Nuttall window was selected as the optimal waveform for this application. Additionally, this work concluded that the propulsor efficiency is largely dependent upon the selection of the current carrier wavenumber k₀ and the wavenumber slip ε, defined as ε = (ω/V) − k₀, where ω is the angular frequency of the AC current and V is the fluid velocity inside the shroud of the propulsor. Results showed that ηE ≥ ηD was achievable for practical magnetic field strengths (≤ 20T).
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Distributed Simulation Cluster for MOOS-IvP</title>
<link href="https://hdl.handle.net/1721.1/155876" rel="alternate"/>
<author>
<name>Becker, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/155876</id>
<updated>2024-08-02T03:09:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development of a Distributed Simulation Cluster for MOOS-IvP
Becker, Kevin
Batch testing of autonomy software simulations makes verification and optimization much easier and more robust. The ability to verify and optimize code is particularly important for large, expensive assets, such as marine vessels, where the cost of any failure is high. This thesis discusses the architecture, implementation, and use of an expandable simulation toolbox for MOOS-IvP. The toolbox utilizes Monte Carlo simulations, since these are highly flexible across differing scenarios. A distributed architecture improves robustness, since a single failure does not bring the cluster down. Additionally, personal computers may be added to the cluster during off hours, increasing the average computing power.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Exploration for Biological Fluid Sampling Platform</title>
<link href="https://hdl.handle.net/1721.1/155875" rel="alternate"/>
<author>
<name>Higginbotham, Haley O'Hara</name>
</author>
<id>https://hdl.handle.net/1721.1/155875</id>
<updated>2024-08-02T03:30:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design Exploration for Biological Fluid Sampling Platform
Higginbotham, Haley O'Hara
This work explores the design of an implantable, peristaltic pumping platform for chronic sampling of neuropeptides. The project drew inspiration from a pumping platform previously developed in the Cima Lab. Users have reported that the previous platform is difficult to use. The pump is also nearly 47x too large to be implanted in rats, which restricts its use to sedated or tethered animals. The project aimed to improve usability and enable implantation. Alternative fluidic junction methods and alternative actuation modes were investigated. The new junction design improves usability by enabling repeated, reversible attachment to the pump. The new, more efficient pump design reduces the pump volume by 49x, enabling implantation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Participatory Methods in Technical Design: Household Biomass Stoves</title>
<link href="https://hdl.handle.net/1721.1/155871" rel="alternate"/>
<author>
<name>Richmond, Robyn C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155871</id>
<updated>2024-08-02T03:30:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Participatory Methods in Technical Design: Household Biomass Stoves
Richmond, Robyn C.
Participatory Design represents an important methodology focused on involving people who experience problems in the process of defining and solving them. This is especially important in global development, where diverse stakeholders attempt to tackle poverty challenges. In this thesis, I analyze a case study of improving biomass stoves in the Himalaya through the lens of participatory design to inform design practice and research. Biomass cooking and heating cause high levels of indoor air pollution, especially in the Himalaya, where households need accessible and affordable wood fuel for cooking and heating during extreme winters. Prior to fieldwork, I facilitated ideation sessions to generate solutions to these challenges, and we pursued prototyping and testing of a chimney retrofit to a traditional stove. This incremental innovation had increased chances of long-term adoption and impact because it would not require users to change cooking practices or discontinue using their traditional stove. Lab testing resulted in several design guidelines, rather than optimized parameters, to enable fieldwork. In the field, the team co-designed a chimney clay stove with a lead user, trained under a local stove master in constructing improved clay stoves, and designed a one-pot clay chimney stove and modifications to metal chimney stoves using principles of participatory design. The chimney modification reduced indoor PM 2.5 and CO mass concentrations by 32.3% and 78.5%, respectively, while maintaining usability characteristics. Design experiences allowed the team to recognize the technical skills in materials and construction necessary for successful clay stove design and document the cultural value placed on this expertise. The team also documented user innovations on stoves, which are sparse in the literature but further demonstrate the feasibility and value of increased user participation in designing improved stoves.
Inspired by field work, I present a short review of literature on gender in biomass stove technology and recommendations to involve women and gender specialists in designing improvements to traditional stoves. In addition, I propose a new model for calculating thermal efficiency and a method for estimating space heating in biomass stoves used for cooking and heating. With the new model, clay multifunctional stoves can achieve up to 35% efficiency, which raises the standard for new stoves entering the market and better reflects actual usage and fundamentals of thermal efficiency.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local and global numerical analysis of a porous screen in free-stream flow</title>
<link href="https://hdl.handle.net/1721.1/155870" rel="alternate"/>
<author>
<name>May-Varas, Nicholas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155870</id>
<updated>2024-08-02T03:41:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Local and global numerical analysis of a porous screen in free-stream flow
May-Varas, Nicholas A.
A porous screen in a free-stream flow provides a model system for the analysis of nets, as used in fishing and aquaculture. In such applications, the forces on the net inform operational design and safety choices, whereas the flow past the net relates to the mixing of flow and nutrients in the wake. Most existing analyses of the flow past porous screens are based on experiments or simplified potential flow models. While both methods can lead to insightful results, open questions remain regarding the details of the flow field, viscous effects, and the accuracy of the simplified models. To address these questions, we set up and run high-fidelity numerical simulations of the free-stream flow past a two-dimensional model porous screen. The screen is formed by placing a series of solid circular cylinders orthogonal to a free-stream flow. As the number of cylinders is increased, the gaps between them decrease, which increases the solidity of the screen. The use of a free-space domain removes any artificial numerical blockage effects, consistent with a free-stream flow.&#13;
&#13;
Our analysis provides insights into the variation of the mean force coefficients as a function of the screen solidity, as well as their temporal fluctuations and spatial distributions across the screen. Further, we compute the flow rates and mean velocities through the screen gaps, and visualize the local flow field and wake. The results show that the mean value and fluctuations of the drag coefficients increase with the screen solidity. Further, the flow rate through the screen decreases monotonically as the solidity increases. The mean velocity through the screen, however, behaves non-monotonically: it first increases and then decreases with screen solidity. Comparing our results to an existing potential model shows that the model predicts the flow rate well. However, the total drag coefficient is significantly lower in the predictions than in the simulation results, pointing to the need for a better understanding of the pressure jump across the screen.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Engineering Education: Integration of the Desktop Fiber Extrusion Device (FrED) for Hands-On Learning in Smart Manufacturing.</title>
<link href="https://hdl.handle.net/1721.1/155869" rel="alternate"/>
<author>
<name>Jaiswal, Somesh Sunil</name>
</author>
<id>https://hdl.handle.net/1721.1/155869</id>
<updated>2024-08-02T03:26:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing Engineering Education: Integration of the Desktop Fiber Extrusion Device (FrED) for Hands-On Learning in Smart Manufacturing.
Jaiswal, Somesh Sunil
This thesis explores the integration of the Desktop Fiber Extrusion Device (FrED) into smart manufacturing education, emphasizing its transformative potential in engineering curricula. The research focuses on the development and application of educational and research-grade FrED models, designed to provide hands-on learning experiences remotely, which is increasingly pertinent in the evolving landscape of engineering education. Through iterative design and implementation of control systems, including Proportional-Integral-Derivative (PID) and Deep Reinforcement Learning (DRL), the study enhances the operational precision and educational utility of FrED. Furthermore, the introduction of an innovative, low-cost tension sensor in the fiber extrusion process represents a significant enhancement in monitoring and controlling the mechanical properties of extruded fibers, which is critical for understanding manufacturing dynamics. The thesis also proposes a structured coursework framework titled "Remote Monitoring and Control in Smart Manufacturing" that utilizes FrED to teach key concepts of smart manufacturing. This coursework is designed to equip students with the skills to operate advanced manufacturing tools and analyze real-time data for process optimization. The findings demonstrate that FrED not only supports the theoretical and practical education of engineering students but also serves as a bridge to high-tech industrial applications, making it a pivotal tool in the digital transformation of manufacturing education. This work lays the groundwork for future research on the scalability of such educational tools and their integration into different educational settings globally, potentially democratizing access to cutting-edge engineering education.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Design of Resource Limited Genetic Networks Tuning System Parameters to Satisfy Specifications</title>
<link href="https://hdl.handle.net/1721.1/155868" rel="alternate"/>
<author>
<name>Celeste Junior, Carlos Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/155868</id>
<updated>2024-08-02T03:37:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Co-Design of Resource Limited Genetic Networks Tuning System Parameters to Satisfy Specifications
Celeste Junior, Carlos Eduardo
Modular composition is a powerful and widely used tool in engineering disciplines, as it keeps system complexity tractable. Its main idea is that parts of a system can be encapsulated into black-box models characterized only by their input-to-output behavior, eliminating the need to consider the complex dynamics inside the black box. Moreover, this process can be applied iteratively, allowing the design of highly complex systems, such as computer chips. But this powerful tool is not always available. In synthetic biology, engineered systems in cells have very complex and intricate interconnections between subsystems, which makes encapsulating parts of these systems a very challenging endeavor. There are many reasons for this failure of modularity in biological systems, such as load effects (retroactivity), unknown interactions, and resource competition, which is the focus of this work. Recent efforts to achieve modular design in systems with resource competition have focused on adding machinery to the cell to either isolate the subsystems or control the availability of the shared resource. In this work we explore a co-design approach: instead of adding machinery to the cell, we aim to tune system parameters to satisfy a given specification. To this end, we provide conditions on the system parameters under which a network of subsystems meets a given specification, derived using mathematical logic and techniques for tackling similar problems. With this, the work lays the foundations for further development of co-design techniques for genetic networks with production and/or degradation resources, where one may be able to mitigate the effects of one type of resource sharing by tuning the other.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frequency Modulated Continuous Wave Radar Based Fall Risk Monitoring System</title>
<link href="https://hdl.handle.net/1721.1/155865" rel="alternate"/>
<author>
<name>Copeland, Daniel Ilan</name>
</author>
<id>https://hdl.handle.net/1721.1/155865</id>
<updated>2024-08-02T03:38:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Frequency Modulated Continuous Wave Radar Based Fall Risk Monitoring System
Copeland, Daniel Ilan
Falls represent a significant health risk, especially for the elderly. Fortunately, interventions have been shown to decrease falls when clinicians identify at-risk patients. However, factors such as medication changes, illness, and injuries can rapidly increase fall risk, making timely clinical identification and subsequent interventions challenging to implement. Our study introduces a comprehensive approach to assessing fall risk using a frequency-modulated continuous-wave (FMCW) radar system, addressing the need for frequent, low-cost, long-term balance monitoring solutions. This technology is compared with ground-truth contact-based lab sensors like force plates and motion capture systems, establishing a foundation for accurate balance assessments in home settings. In our cross-sectional analysis, participants performed the one-legged stand test (OLST) with simultaneous data collection from FMCW radar, force plates, and motion capture systems. By integrating the FMCW radar with machine learning algorithms, we achieved a 98.4% accuracy in identifying OLST foot movements and an R-squared of 0.70 in predicting force plate patterns, demonstrating the system’s nuanced capability for balance performance evaluation. Additionally, we examine the efficacy of combining radar technology with machine learning to identify movements similar to those performed in fitness, clinical, and rehabilitation settings. We also explore the use of simulations for optimizing radar system configurations. This thesis demonstrates the effectiveness of FMCW radar technology in laboratory settings and its potential for home-based health monitoring. The study highlights the transformative potential of integrating radar technology with machine learning through detailed experimentation and analysis, offering a versatile tool for health monitoring and fall risk assessment.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Data from the U.S. Shipbuilding Industry and Application to Improve Performance Metrics</title>
<link href="https://hdl.handle.net/1721.1/155863" rel="alternate"/>
<author>
<name>Willis, Heather L.</name>
</author>
<id>https://hdl.handle.net/1721.1/155863</id>
<updated>2024-08-02T03:43:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of Data from the U.S. Shipbuilding Industry and Application to Improve Performance Metrics
Willis, Heather L.
The U.S. Navy is seeking to increase the number of ships in the fleet due to growing threats; however, shipyards are facing numerous issues, leading to delays in the delivery of naval warships along with cost overruns. At the same time, there is significant data available from the construction process, creating an opportunity for data analysis with the intention of identifying and hopefully resolving some of these issues. Addressing these concerns, this thesis scrutinizes Earned Value Management (EVM) data from actual shipbuilding projects, capitalizing on the datasets available to help identify the root causes of such delays. The study begins with data cleaning, an essential step that ensures the real-world data’s integrity and relevance. Preliminary data analysis was then conducted to explore cost variance, schedule adherence, and the learning curve effect observed across different hulls, setting the stage for deeper investigative modeling. Following model exploration and selection, the core of the thesis is a predictive model that uses polynomial and linear regression to predict the progression of costs over time, compared against the prediction metrics currently in use. A regression model was chosen over more complex models like a long short-term memory (LSTM) neural network due to its simplicity, interpretability, and ease of retraining with new data, ensuring that stakeholders can readily understand and apply the model’s insights while maintaining its relevance over time. The target prediction metric for this model is the Actual Cost of Work Performed (ACWP); however, similar models could also be leveraged to predict schedule. In creating this model, several features were analyzed, including both the Budgeted Cost of Work Scheduled (BCWS) and the Budget at Completion (BAC), both known metrics at the start of construction. 
After testing various combinations of these features and comparing the mean squared error (MSE), the chosen model uses time and BCWS divided by BAC as input features, the latter serving as a budgeted completion percentage. The model is tailored further to reflect industry-specific cost behaviors, enforcing non-negative, cumulative cost predictions. This model was trained, tested, and validated using EVM data from one key event (KE), a specific subset of the overall ship construction process, with the intent that it could be applied to all key events and aggregated to provide cost predictions for an entire hull. This thesis will ideally serve as a framework for shipyards to improve project cost predictions and identify indicators of large cost overruns early enough to correct them within the ship construction timeline.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, Analysis and Modeling of a Modular Navy Integrated Power and Energy Corridor Cooling System</title>
<link href="https://hdl.handle.net/1721.1/155862" rel="alternate"/>
<author>
<name>Meyers, Wade T.</name>
</author>
<id>https://hdl.handle.net/1721.1/155862</id>
<updated>2024-08-02T03:33:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design, Analysis and Modeling of a Modular Navy Integrated Power and Energy Corridor Cooling System
Meyers, Wade T.
In response to the escalating demand for electricity onboard future naval vessels, the Design Laboratory of the Massachusetts Institute of Technology (MIT) Sea Grant Program, as part of a U.S. Navy research consortium for next-generation all-electric warships, is pioneering the development of the Navy Integrated Power and Energy Corridor (NiPEC). This innovative system is designed to enhance the power distribution capabilities of warships like the forthcoming DDG(X), which is expected to require significant electrical power to support advanced offensive and defensive systems. NiPEC features a network of modular compartments that independently or collectively perform energy storage, conversion, protection, control, isolation, and transfer functions. Central to this system is the integrated Power Electronics Building Block (iPEBB), a self-contained, power-dense converter tailored to manage the ships' stochastic and dynamic loads efficiently. However, realizing the full potential of iPEBB's advanced semiconductor technology presents significant challenges, particularly in thermal management. This aspect is further complicated by the constraints imposed by indirect liquid cooling methods and the necessity for sailor-friendly design considerations. Preliminary analyses by Padilla et al. on heat dissipation strategies, as well as Reyes' and Chaterjee’s subsequent design proposal for a NiPEC liquid cooling system highlight the operational and maintenance challenges in cooling the system's numerous components. &#13;
&#13;
This thesis presents a comprehensive approach to designing a modular, compact, and indirect liquid cooling system for the NiPEC to be deployed across future all-electric Navy destroyer warships. Leveraging a combination of first-principles thermodynamic analysis, multi-physics-based modeling, and numerical analysis, the study builds upon Reyes' and Chaterjee’s preliminary design to propose enhanced cooling system architectures that meet stringent military standards while ensuring robust thermal management. Further, the design and detailed analysis of this compact heat exchanger significantly contribute to enabling the modular construction of the NiPEC cooling system alongside the concurrent assembly of the NiPEC electrical system. This investigation also delves into the extraction and application of response surface models that elucidate the dynamic interdependencies among various response variables—such as the overall heat transfer coefficient and heat transfer rates—arising from changes in explanatory variables like inlet velocities, temperatures, and the specific geometry of the heat exchanger. This multifaceted analysis not only refines the cooling system's efficiency but also aligns it with the modular integration requirements of military naval applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Learning on the Job</title>
<link href="https://hdl.handle.net/1721.1/155860" rel="alternate"/>
<author>
<name>Liu, Jiageng</name>
</author>
<id>https://hdl.handle.net/1721.1/155860</id>
<updated>2024-08-02T03:12:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Measuring Learning on the Job
Liu, Jiageng
I study on-the-job learning at IT firms. Using detailed online activity data of 144,000 employees matched with 25,000 firms across the globe, I measure the intensity and the direction of technology acquisition, a key input to innovation. A standardized measure shows that employee-entrepreneurs who join small, young firms spend more time learning about new software than similar employees who join large incumbent firms. They engage with more diverse and rarely combined topics, behaviors that are found to be associated with more radical innovations. Within firms, more actively learning employees work on more projects and start reviewing others' code sooner. The results are consistent with channels of firm-employee matching and job security at incumbent firms. They complement Akcigit and Goldschlag (2023), which finds that inventors apply for fewer patents and receive higher wages after joining incumbent firms. A heterogeneous supply of unobserved learning cannot explain all results. I also document life-cycle patterns of learning behavior that are consistent with predictions of standard labor theory, predictions that had previously been difficult to test beyond formal education.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Multi-Salt Transport and Salt Leakage Pathways in Bipolar Membrane Electrodialysis for Brine Valorization</title>
<link href="https://hdl.handle.net/1721.1/155859" rel="alternate"/>
<author>
<name>Wegmueller, Jakob Max</name>
</author>
<id>https://hdl.handle.net/1721.1/155859</id>
<updated>2024-08-02T03:23:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of Multi-Salt Transport and Salt Leakage Pathways in Bipolar Membrane Electrodialysis for Brine Valorization
Wegmueller, Jakob Max
Bipolar membrane electrodialysis (BMED), a process which converts a concentrated saline feed into acidic, basic, and desalinated streams, has promising applications across resource recovery and brine valorization. BMED can be used to produce valuable acids and bases from reverse osmosis or nanofiltration concentrate while desalinating the waste brine and reducing disposal costs. In the first part of this thesis, we assess the feasibility of applying BMED to the nanofiltration permeate of groundwater that contains a high concentration of nitrate and sodium chloride. We analyze the transport of different ions in the mixed solution and compare the performance of the mixed-salt permeate to an idealized single-salt feed. BMED was shown to be just as effective at producing acid and base from the polluted groundwater composition as from a single-salt solution. BMED therefore appears to be a feasible means to create value from, and reduce the volume of, waste brine in this application. The second part of this thesis examines the transport of salt impurities in the produced acid and base streams. BMED membranes allow small amounts of salt leakage that lower the purity and value of the acid and base generated. Impurities in the base stream may originate from the feed stream or the acid stream. While the total concentration of impurities in the base stream can be tracked, in conventional BMED operation pinpointing the origin of those impurities is not possible without making assumptions. A novel membrane stack and method are proposed for distinguishing between and measuring the fluxes of salt leakage from the acid and feed streams into the base stream (the same analysis is also done for the acid stream). For feed concentrations between 0.25-2.25 M and current densities from 10-100 mA/cm2, the impurity fluxes from the two sources are always of the same order of magnitude, and neither is negligible. 
Furthermore, lowering the feed stream concentration and operating at a higher current density decreased the net flux of impurities, resulting in a higher acid and base purity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Testing COLREGS Compliance in Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155858" rel="alternate"/>
<author>
<name>Molina, Mikala N.</name>
</author>
<id>https://hdl.handle.net/1721.1/155858</id>
<updated>2024-08-02T03:26:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Methods for Testing COLREGS Compliance in Autonomous Surface Vehicles
Molina, Mikala N.
Globally, there is an increasing number of uncrewed and autonomous surface vessels operating at sea. Preventing collisions at sea is of paramount importance to safeguard lives, protect the marine environment, and maintain smooth maritime operations. Effectively preventing collisions between manned and uncrewed vessels requires that uncrewed vessels maneuver in a manner that is both safe and predictable to human mariners. Consequently, there is a pressing need for a comprehensive testing architecture that rigorously evaluates and verifies compliance with the International Regulations for Preventing Collisions at Sea, or "Collision Regulations" (COLREGS), by autonomous marine vehicles. To address this critical need for COLREGS compliance verification in Autonomous Surface Vehicles (ASVs), this thesis introduces a set of test cases. These test cases are designed to assess the ability of autonomous vessels to respond appropriately to various navigational scenarios and interactions with conventional, manned vessels. The development of the test cases draws upon historical collision data, navigational incidents, and expert knowledge to encompass a wide range of real-world situations, with simplicity of real-world implementation in mind.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Challenges of Volume Controlled Cavity Expansion (VCCE) for In-Vivo Tissue Testing</title>
<link href="https://hdl.handle.net/1721.1/155857" rel="alternate"/>
<author>
<name>Spaeth, Katherine Charlotte</name>
</author>
<id>https://hdl.handle.net/1721.1/155857</id>
<updated>2024-08-02T03:08:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Addressing Challenges of Volume Controlled Cavity Expansion (VCCE) for In-Vivo Tissue Testing
Spaeth, Katherine Charlotte
The prevalence of Traumatic Brain Injuries (TBIs) is a serious health concern for U.S. Military Members. Mild TBIs, some of which have been shown to result from prolonged exposure to repeated artillery blasts, are particularly challenging to identify with existing diagnostic imaging technology. In general, as with other soft-tissue organs, there exists a gap in understanding of how biological tissues deform under extreme loading conditions. Understanding these mechanics has applications beyond diagnosing physical bodily injuries, as diseased tissues have also been shown to exhibit differing mechanical properties. Volume Controlled Cavity Expansion (VCCE) is a novel, needle-based probing methodology developed to capture rate-dependent ex-vivo and in-vivo tissue material properties. In this thesis, the VCCE methodology was performed on numerous animal tissues as well as extracted human thyroids to study some of the challenges related to translating the VCCE lab technique into a medical diagnostics tool. To ensure a successful VCCE test, it was shown that the choice of needle and the insertion protocol must be adapted to the type of biological tissue being tested. Additionally, in a clinical setting, VCCE was demonstrated to successfully differentiate between diseased and healthy tissue. As a mechanics-informed in-vivo tissue probing method, VCCE has applications for improved assessment and diagnostic tools for injured and/or diseased tissues, Personal Protective Equipment (PPE), and casualty transport safety guidelines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automation of In-Bed Repositioning, Assistance to Sitting, and Transfer for Bedridden Patients via Robot Arms and Strap Interface</title>
<link href="https://hdl.handle.net/1721.1/155855" rel="alternate"/>
<author>
<name>Blake, Kaleb</name>
</author>
<id>https://hdl.handle.net/1721.1/155855</id>
<updated>2024-08-02T03:07:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automation of In-Bed Repositioning, Assistance to Sitting, and Transfer for Bedridden Patients via Robot Arms and Strap Interface
Blake, Kaleb
Mobility and immobility are fundamental aspects of a patient’s health. Several factors contribute to mobility impairments, including various medical conditions and injuries. Prolonged immobility has detrimental effects on many of the body’s vital organ systems and decreases quality of life in general. Caregivers work to help patients with different levels of mobility perform necessary tasks; severely immobile or bedridden patients are the most difficult to assist. Caregivers often experience musculoskeletal disorders and lifting injuries in their line of work. Assistive devices have been developed to mitigate this, but their usage in practice is still limited, so caregiver injuries remain prevalent. This thesis presents a new approach that can automate in-bed repositioning, assistance to seated positions, and transfer for patients with severe immobility. Comfortable straps that wrap around the patient’s upper torso and thighs are held by robot arms. The robot arms perform movements that control the torso and thigh angles, the hip position in and out of the bed plane, and the normal force the bed provides at the hip. The control techniques described in this work include closed-loop control of a quasi-static formulation of the system and model-reference adaptive trajectory control. The results show that these methods hold promise for automating assistance of bedridden patients.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Modeling of Offshore Nuclear Platform Fuel and Transfer System</title>
<link href="https://hdl.handle.net/1721.1/155854" rel="alternate"/>
<author>
<name>Allison, Asia</name>
</author>
<id>https://hdl.handle.net/1721.1/155854</id>
<updated>2024-08-02T03:21:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Modeling of Offshore Nuclear Platform Fuel and Transfer System
Allison, Asia
The design and modeling of the fuel and transfer system aboard the Offshore Nuclear Platform (OFNP) aims to integrate Small Modular Reactors (SMRs) into an offshore setting. This endeavor is in line with global initiatives to mitigate temperature rise by 2050 by using nuclear energy for electrical generation ("Key Aspects of the Paris Agreement | UNFCCC"). High-temperature gas reactors, notably pebble bed reactors, will be evaluated with a specific focus on their compact fuel form and their capability for at-power fuel replenishment through recirculation of TRISO fuel pebbles.&#13;
This study will review both historical and current high-temperature gas reactors, focusing on their development, application, and operational efficiencies. Special emphasis will be placed on methods for managing spent fuel, including storage and environmental considerations. The research will develop the platform’s conceptual fuel transfer system, detailing the design of the fuel storage, handling, and at-sea transfer systems and ensuring safety, particularly during at-sea offloading operations.&#13;
Additionally, the thesis will assess the platform's stability, transfer system structural integrity, and shielding design to determine its feasibility for offshore energy generation for the reactor’s lifetime.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Method for Photopolymerization 3D Printing of Recyclable Thermoplastic Polymers</title>
<link href="https://hdl.handle.net/1721.1/155850" rel="alternate"/>
<author>
<name>Tumkur Mahesh, Prajwal</name>
</author>
<id>https://hdl.handle.net/1721.1/155850</id>
<updated>2024-08-02T03:16:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Method for Photopolymerization 3D Printing of Recyclable Thermoplastic Polymers
Tumkur Mahesh, Prajwal
Conventional light-based processes used in additive manufacturing (AM), such as vat polymerization, yield non-recyclable thermoset polymers, which pose sustainability issues at scale. This thesis studies a method for photopolymerization 3D printing of the common polymers polyacrylonitrile (PAN) and polymethyl methacrylate (PMMA) to address the growing demand for low-waste production of high-resolution polymer parts with complex geometries in industrial-scale manufacturing. This new approach not only produces directly recyclable linear thermoplastic polymers but also enables the light-based printing of polymers soluble in their own monomer. &#13;
&#13;
It was previously demonstrated by Chazot et al. that photo-defined layers of polyacrylonitrile (PAN) can be formed at a liquid-liquid interface; this technique was named interfacial photopolymerization (IPP). In this thesis, which focuses on multilayer 3D printing (3D-IPP), the resolution and stability of layers formed by IPP are improved using a light-absorbing dye while incorporating a water-soluble polyethylene glycol binder to improve yield, printing speed, and mechanical properties. Joint initiation using the commercial water-soluble photoinitiators V-50 and LAP, along with the addition of HCl and CaCl2, further enhances printing performance by producing dense layers and reducing voids. Post-processing techniques are devised to preserve part geometry after printing, including controlled air drying, thermal post-processing with PEG infiltration, and the inclusion of compatible polymeric binders in the printing composition to minimize cracking and shrinkage. Additionally, hardware is developed to integrate the IPP process into a commercial projector-based 3D printer, demonstrating compatibility of the proposed chemistry with off-the-shelf hardware. The capability to digitally manufacture high-resolution 3D structures with IPP is demonstrated, and the physical properties of the resulting composite polymer are characterized. While 3D-IPP cannot yet directly rival conventional manufacturing methods, its benign aqueous chemistry, together with the recyclability and circularity of produced parts, offers a promising path towards sustainable and resource-efficient AM as the technology matures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating optical microplastic detection methods using fluorescent staining through Nile Red</title>
<link href="https://hdl.handle.net/1721.1/155849" rel="alternate"/>
<author>
<name>Prasad, Suparnamaaya</name>
</author>
<id>https://hdl.handle.net/1721.1/155849</id>
<updated>2024-08-02T03:12:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating optical microplastic detection methods using fluorescent staining through Nile Red
Prasad, Suparnamaaya
Microplastics (MPs) are small pieces of plastic debris, typically defined as smaller than 5 mm. Given that the global environment faces a growing plastic pollution crisis, an urgent need exists for rapid, low-cost microplastic detection systems to characterize the health and environmental risks posed by MPs. Fluorescent tagging of microplastics using Nile Red (NR) has recently emerged as an accessible and popular detection method. However, robust, standardized methods of using Nile Red to distinguish plastic from organic materials, or to distinguish between polymers, are still being developed. This thesis pursued different optical microplastic detection methods using NR-based fluorescent staining, with the ultimate goal of providing data that could be used to build a polymer identification model for implementation in a low-cost detection system. Three different investigations are presented. First, the fluorescence emission spectra of various plastic and organic samples stained with Nile Red are presented. The motivation behind this study was to identify the strongest fluorescence emission peaks for NR-stained plastics under a series of different excitation wavelengths. The spectral results provide a preliminary basis for distinguishing Nile Red-stained plastics based on their fluorescent emission spectra alone. Second, this thesis presents a low-cost imaging set-up for fluorescent samples. The system applies the same excitation wavelengths and optical filters used to collect the spectral data. The images are then combined with the spectral data to illustrate another basis for rapidly distinguishing between different plastic polymers. Finally, an optical method for detecting microplastics in liquid samples using photodiodes is explored and discussed. Overall, this thesis contributes to the development of accessible microplastic detection technologies by leveraging the fluorescent properties of NR-stained plastics. 
The findings highlight the challenges and potential solutions for distinguishing between plastics and organic materials and distinguishing between different plastic polymers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Relevance for Enhanced Human-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/155847" rel="alternate"/>
<author>
<name>Hernandez-Cruz, Vanessa</name>
</author>
<id>https://hdl.handle.net/1721.1/155847</id>
<updated>2024-08-02T04:03:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Bayesian Relevance for Enhanced Human-Robot Collaboration
Hernandez-Cruz, Vanessa
Intent prediction is a difficult yet critical component of seamless Human-Robot Collaboration (HRC). As robots become increasingly involved in helping humans with a variety of tasks, ranging from part assembly to healthcare and more, it is crucial to model and understand human intention. Many existing works do not take advantage of the inherent relationships between objects, the task, and the human model. Current human intent prediction methods, such as Gaussian Mixture Models and Conditional Random Fields, are generally less interpretable due to their lack of causality between variables. A novel framework called Bayesian Relevance (BR) is presented for human intent prediction in HRC scenarios. The complexity of intent prediction is captured by modeling the correlation between human behavioral conventions and scene data. The proposed method leverages inferred intent predictions to optimize the robot’s response in real-time, ensuring smoother and more intuitive collaboration. In this work, we use a Bayesian Network to predict human intent from a multi-modality information framework. A demonstration of an HRC task, using a UR5 robot, exemplifies BR’s real-time human intent prediction and collision avoidance. Evaluations demonstrate that our multi-modality BR model predicts human intent within 2.69 ms, with a 36% increase in precision, a 60% increase in F1 score, and an 85% increase in accuracy compared to its best baseline method.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavior of a lumpy artificial transmission line as the frequency is indefinitely increased</title>
<link href="https://hdl.handle.net/1721.1/155828" rel="alternate"/>
<author>
<name>Clarke, Edith, 1883-1959.</name>
</author>
<id>https://hdl.handle.net/1721.1/155828</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1919-01-01T00:00:00Z</published>
<summary type="text">Behavior of a lumpy artificial transmission line as the frequency is indefinitely increased
Clarke, Edith, 1883-1959.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1919
</summary>
<dc:date>1919-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two-shock interaction in a region of nonuniform flow</title>
<link href="https://hdl.handle.net/1721.1/155822" rel="alternate"/>
<author>
<name>Miller, Walter Daniel.</name>
</author>
<id>https://hdl.handle.net/1721.1/155822</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Two-shock interaction in a region of nonuniform flow
Miller, Walter Daniel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1964; Includes bibliographical references (leaves 24-25).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics and control of a spatial trade model,</title>
<link href="https://hdl.handle.net/1721.1/155819" rel="alternate"/>
<author>
<name>Hager, William W., 1948-</name>
</author>
<id>https://hdl.handle.net/1721.1/155819</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Dynamics and control of a spatial trade model,
Hager, William W., 1948-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1971; Bibliography: leaf 52.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>European strategic alliances and cross border cross shareholdings</title>
<link href="https://hdl.handle.net/1721.1/155818" rel="alternate"/>
<author>
<name>De Marchi, Edoardo.</name>
</author>
<id>https://hdl.handle.net/1721.1/155818</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">European strategic alliances and cross border cross shareholdings
De Marchi, Edoardo.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1993; Includes bibliographical references (leaves 87-88).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interlaminar fatigue of fiber-reinforced laminates.</title>
<link href="https://hdl.handle.net/1721.1/155817" rel="alternate"/>
<author>
<name>Handy, Rodney Neal.</name>
</author>
<id>https://hdl.handle.net/1721.1/155817</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Interlaminar fatigue of fiber-reinforced laminates.
Handy, Rodney Neal.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1971; Bibliography: leaf 24.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A methodology for manufacturing process improvement</title>
<link href="https://hdl.handle.net/1721.1/155815" rel="alternate"/>
<author>
<name>Chinnaswamy, Mano H. (Mano Haran)</name>
</author>
<id>https://hdl.handle.net/1721.1/155815</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">A methodology for manufacturing process improvement
Chinnaswamy, Mano H. (Mano Haran)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (p. 73-74).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A non-orthogonal gyro configuration</title>
<link href="https://hdl.handle.net/1721.1/155812" rel="alternate"/>
<author>
<name>Gilmore, Jerold Philip.</name>
</author>
<id>https://hdl.handle.net/1721.1/155812</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1967-01-01T00:00:00Z</published>
<summary type="text">A non-orthogonal gyro configuration
Gilmore, Jerold Philip.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1967; Three blank p. included in paging.; Bibliography: p. 199-202.
</summary>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of the line-spring model to cracks with partial closure</title>
<link href="https://hdl.handle.net/1721.1/155810" rel="alternate"/>
<author>
<name>Luz, James John.</name>
</author>
<id>https://hdl.handle.net/1721.1/155810</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1985-01-01T00:00:00Z</published>
<summary type="text">Application of the line-spring model to cracks with partial closure
Luz, James John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1985; Includes bibliographical references.
</summary>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensor Evaluation and Fleet Modeling of Long-Range&#13;
Low-Cost Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155656" rel="alternate"/>
<author>
<name>Nothacker, John S.</name>
</author>
<id>https://hdl.handle.net/1721.1/155656</id>
<updated>2024-07-11T03:06:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sensor Evaluation and Fleet Modeling of Long-Range&#13;
Low-Cost Autonomous Surface Vehicles
Nothacker, John S.
This thesis examines the development and assessment of sensor configurations for Long-Range Low-Cost Autonomous Surface Vehicles (ASVs) with a focus on Maritime Domain Awareness (MDA) applications. Utilizing the Platform for Expanding AUV exploRation (PEARL) as a model, the study systematically evaluates various sensor options to identify optimal suites for MDA operations. Through an analysis of 255 sensor combinations, considering factors such as range, power consumption, field of view, resolution, and cost, this research identifies key sensor configurations that maximize operational utility while minimizing cost. The research found that the sensor suite should include radar, AIS, IR cameras, and visible-light cameras, allowing operation in all lighting and weather conditions. The study further explores fleet modeling for two MDA use cases, the Littorals and Open Ocean scenarios, providing insights into the cost-effectiveness and coverage efficiency of deploying fleets of sensor-equipped PEARL units. The fleet modeling demonstrated that these low-cost ASVs can cover approximately 20 times the area of a Saildrone Voyager for about the same capital cost. The findings contribute to the advancement of low-cost ASV technology for enhanced maritime surveillance and data collection, offering scalable solutions to maritime domain challenges.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Trade-offs and Emergent Properties of Heterogeneous Swarms of Maritime Robot Systems through Empirical Analysis and Application-Driven Experiments</title>
<link href="https://hdl.handle.net/1721.1/155655" rel="alternate"/>
<author>
<name>Hoang, Thinh B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155655</id>
<updated>2024-07-11T03:28:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring Trade-offs and Emergent Properties of Heterogeneous Swarms of Maritime Robot Systems through Empirical Analysis and Application-Driven Experiments
Hoang, Thinh B.
Multi-agent systems present a promising approach to addressing challenges such as searching for and tracking moving targets, offering advantages like robustness and scalability over single-agent solutions. Current maritime searching and tracking strategies typically involve employing predetermined paths for exploring the search space or adopting Particle Swarm Optimization (PSO) algorithms for multi-robot systems (MRS). While these approaches often entail homogeneous deployment of algorithms or behaviors across all agents, the potential benefits of introducing heterogeneity still need to be explored. Specifically, varying agent behavior or capabilities could enhance mission performance, but the trade-offs involved are not thoroughly understood beyond trivial outcomes like adjusting agent speed.&#13;
&#13;
In this thesis, a novel swarming approach is used to tackle two core missions: dynamic target searching and tracking, and isocontour identification. The strategy employs a combination of five distinct algorithms. The innovation lies in introducing heterogeneity among agents by assigning specific roles managed through varying weights tied to each algorithm. Trade-offs between mission performance and cost are quantified by simulating a swarm with diverse roles and behaviors. Key performance metrics include the accuracy of target position estimation, convergence time on target, duration of target tracking, and correlation between the swarm's collective heading and target bearing. The overall energy consumption of the swarm determines the cost metric. Investigation of the impact of different proportions of agent types within the swarm provided valuable insights into how to optimize mission effectiveness while managing resource constraints.&#13;
&#13;
Overall, this research contributes to advancing the understanding of how heterogeneity in multi-agent systems can enhance mission performance and offers practical insights into optimizing resource allocation in complex tasks such as target search and tracking. By comprehensively assessing trade-offs, this thesis aims to pave the way for more efficient and adaptable multi-agent systems in real-world applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Vehicle Platform Architecting Process: Will Model Based Systems Engineering help organizations with the architectural transition from ICE to battery power?</title>
<link href="https://hdl.handle.net/1721.1/155653" rel="alternate"/>
<author>
<name>Melgarejo Oviedo, Carlos Edoardo</name>
</author>
<id>https://hdl.handle.net/1721.1/155653</id>
<updated>2024-07-11T03:32:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Vehicle Platform Architecting Process: Will Model Based Systems Engineering help organizations with the architectural transition from ICE to battery power?
Melgarejo Oviedo, Carlos Edoardo
Regulations requiring automakers to reduce CO₂ emissions led them to develop BEVs on modified Internal Combustion Engine (ICE) platforms between the late 1990s and early 2010s. However, a long-term strategy to be competitive on range in the market demanded the development of BEV-dedicated platforms. Legacy OEMs (Toyota, VW, GM, and others), in theory, had deep process experience architecting vehicle platforms. The challenge from their perspective was to adapt that process to a new architecture and power source and to react to other recent technologies trending in the market. By contrast, the new market entrants (Tesla, Rivian, BYD, etc.) had almost no process experience but were unencumbered by compatibility with legacy platforms. The modules and software required to power BEVs make them more complex than ICEVs despite the fewer pieces in their powertrain. A series of interviews with Systems Engineering experts in the automotive industry were held to understand the differences in the architecting process between ICEVs and BEVs. In the study, 45% of the interviewees claimed that BEVs are more challenging to architect than ICEVs, another 45% stated exactly the opposite, and the remaining 10% stated that the difficulty level is the same for both. Additionally, over 80% of the participants stated that an architectural change in a BEV is as smooth as in an ICEV. The study suggests that the perceived difficulty of architecting ICEVs versus BEVs is linked to the experience of the companies as well as to practices such as module incompatibility tracking and key interface identification that happen during the architecting process.&#13;
Model-Based Systems Engineering (MBSE) is a methodology largely developed in the Aerospace industry, but which presents a potential solution for managing the increased complexity of BEV-dedicated platforms. During an MIT MBSE course, 5,379 professionals have been surveyed from 2017 to 2024. The data from these surveys was analyzed to identify trends in MBSE adoption over time. The study revealed that MBSE adoption has increased at a rate of 4.11% annually across several industries and has been used primarily for the transformation of processes and workflows. The MBSE implementation in the automotive industry is around 15% higher than in other sectors. However, it has not experienced growth over the past four years. The study also suggests that the main challenges to extending the MBSE adoption are the lack of guidelines and the low credibility of models. Nevertheless, survey respondents remain positive about this approach; between 60% and 70% of them think that their companies should implement MBSE, suggesting a future increase in its adoption and an important role of this methodology in managing new BEV-dedicated platforms' complexity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Pharmaceutical Companies Utilize Platform Strategy: A Study of the COVID-19 mRNA Vaccine Development</title>
<link href="https://hdl.handle.net/1721.1/155647" rel="alternate"/>
<author>
<name>Aoki, Tomonoshin</name>
</author>
<id>https://hdl.handle.net/1721.1/155647</id>
<updated>2024-07-11T03:32:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Pharmaceutical Companies Utilize Platform Strategy: A Study of the COVID-19 mRNA Vaccine Development
Aoki, Tomonoshin
This paper employs platform theory to investigate why Moderna and BioNTech were able to develop the COVID-19 vaccine so rapidly and provides new insights into platform theory. The COVID-19 pandemic, which began in China in late 2019, spread globally within a month, causing immense damage. Vaccines were the most critical technology needed to fight the disease. Typically, vaccine developments take more than a decade; however, Moderna and BioNTech/Pfizer successfully developed an mRNA vaccine within approximately 300 days of the pandemic's onset. In contrast, Daiichi Sankyo required around 1,000 days to develop a vaccine using the same mRNA technology. This paper utilizes platform theory as a framework to examine the factors contributing to this disparity. Various internal and external factors from the perspective of a pharmaceutical company can be considered behind the rapid development of vaccines. However, this study focuses mainly on internal factors, especially through the lens of platform theory as a management framework. Platform theory has emerged as a crucial framework for understanding the dynamics of modern businesses and technologies. This theory distinguishes between three primary types of platforms: product-level platforms, industry-level platforms, and digital platforms. In the pharmaceutical industry, mRNA technology can be considered a product- or technology-level platform. This is because, by modifying mRNA sequences, the same technology can yield a wide range of therapeutics targeting different diseases, not only infectious diseases but also cancers and other conditions. Although this study regards mRNA technology as a product- or technology-level platform, we also discuss how it became connected to industry-level and digital platforms through the COVID-19 vaccine development process. 
In regard to the COVID-19 vaccine development story, the question that naturally arises is, 'Why could Moderna and BioNTech develop the vaccine so rapidly?' The answer must be that they executed the necessary steps for vaccine development rapidly. These steps, namely 'Discovery' (development of vaccine candidate substances), 'Development' (conducting clinical trials and obtaining regulatory approval), and 'Manufacturing' (production of vaccines), were all carried out swiftly and in parallel. These steps were executed so rapidly because, at the time of the pandemic, Moderna and BioNTech already had the financial and human resources, knowledge and patents, development experience, digital infrastructure, efficient production facilities, influential partners, and a rational corporate culture for the project. Then, the next question should be: why did Moderna and BioNTech have such organizational capabilities at the outbreak of the pandemic? In this paper, we examine in detail why and how such capabilities were nurtured after these companies were founded. We also examine the academic history from even before the companies were founded, and why and how Moderna and BioNTech were founded as mRNA platform companies. In conclusion, this study demonstrates the importance of the pharmaceutical industry harnessing the "power of the platform" and provides concrete directions for leveraging its potential. The discussion should be expanded to explore how companies and policies can work together to address the health and healthcare challenges facing people around the world, utilizing the power of platforms to drive innovation, collaboration, and, ultimately, better health outcomes for all.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Segmentation and Analysis of High-Speed Video Phase-Detection Data for Boiling Heat Transfer Characterization Using U-Net Convolutional Neural Networks and Uncertainty Quantification</title>
<link href="https://hdl.handle.net/1721.1/155645" rel="alternate"/>
<author>
<name>Maduabuchi, Chika</name>
</author>
<id>https://hdl.handle.net/1721.1/155645</id>
<updated>2024-07-11T03:30:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automated Segmentation and Analysis of High-Speed Video Phase-Detection Data for Boiling Heat Transfer Characterization Using U-Net Convolutional Neural Networks and Uncertainty Quantification
Maduabuchi, Chika
Boiling heat transfer is a complex phenomenon used for cooling and heat management purposes in various industrial applications, such as nuclear reactors. Accurate characterization and understanding of boiling dynamics are essential for the design and optimization of heat transfer systems. High-speed video (HSV) imaging is a powerful tool for capturing the intricate details of boiling processes. However, the manual analysis of HSV data is time-consuming and prone to subjective interpretation. This thesis presents a novel approach for the automated segmentation and analysis of HSV phase-detection images using U-Net Convolutional Neural Networks (CNNs) and uncertainty quantification techniques. The proposed methodology involves the development of specialized U-Net CNN models for segmenting HSV data of boiling phenomena in different fluids, including liquid nitrogen, argon, FC-72, and high-pressure water, under various experimental conditions. The performance of the U-Net models is evaluated and compared with traditional adaptive thresholding techniques. The results demonstrate the superior accuracy and robustness of the U-Net models in identifying and delineating bubbles compared to manual segmentation, particularly in scenarios involving smaller bubbles and complex bubble topologies. To assess the reliability of the calculated boiling metrics, such as contact line density and dry area fraction, a comprehensive uncertainty quantification analysis is also conducted. The impact of discretization errors arising from the pixelation of bubbles is investigated using weighted average percentage relative errors and mean errors under both erosion and dilation conditions. The analysis reveals higher relative uncertainty in contact line density measurements than dry area fraction measurements across all fluids studied. 
The limitations of the U-Net models in generalizing to other HSV datasets are addressed, emphasizing the need for developing more sophisticated image segmentation models, such as foundation models, that are less sensitive to domain shifts. This is crucial for enabling autonomous experimentation and reducing the reliance on specialized models for each fluid and operating condition. Future research directions are outlined, including the investigation of advanced uncertainty quantification techniques, the development of real-time segmentation and analysis algorithms, the evaluation of uncertainty propagation in heat flux reconstruction, and the extension of the methodology to other multiphase flow phenomena. By addressing these recommendations, the understanding, characterization, and modeling of boiling phenomena can be further enhanced, contributing to the advancement of boiling heat transfer research and the development of improved heat transfer models and correlations. Overall, this thesis presents a comprehensive approach for the automated segmentation and analysis of HSV phase-detection images using U-Net CNNs and uncertainty quantification techniques. The proposed methodology demonstrates significant potential for accurate and reliable characterization of boiling dynamics, paving the way for advanced boiling heat transfer research and the optimization of heat transfer systems in various industrial applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Iodine-129 Environmental Releases and Surface Water Concentrations at Nuclear Fuel Recycling Facilities</title>
<link href="https://hdl.handle.net/1721.1/155644" rel="alternate"/>
<author>
<name>Whiteaker, Kate</name>
</author>
<id>https://hdl.handle.net/1721.1/155644</id>
<updated>2024-07-11T03:30:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantifying Iodine-129 Environmental Releases and Surface Water Concentrations at Nuclear Fuel Recycling Facilities
Whiteaker, Kate
Iodine-129 (I-129) is one of the largest long-term dose contributors in high-level nuclear waste disposal models, and an important contaminant at sites currently undergoing remediation like Savannah River and Hanford. This is in part due to its environmental mobility, its 15.7 million-year half-life, and its potential for bio-accumulation. However, over 90% of the I-129 present in used nuclear fuel is regularly discharged to the ocean at used fuel recycling facilities in France and the UK. This work first quantifies the releases of I-129 to the environment per gigawatt-year electrical energy production over the entire nuclear fuel cycle with and without used fuel recycling, then synthesizes a database of I-129 surface water concentrations in waters affected by discharges from current and historical recycling facilities. We find that the environmental releases from current recycling facilities are above U.S. I-129 release limits, indicating a need for innovation in I-129 capture and isolation technologies in order to adapt used fuel recycling to the United States. We also find that the concentrations of I-129 in surface waters affected by discharges from recycling facilities do not correlate with the amount of I-129 discharged by the facilities. Persistent concentrations appear to be more dependent on factors including siting, dilution, and whether or not the facility attempted to isolate their liquid wastes. These results are particularly important in view of a current renewed interest in the commercial recycling of used nuclear fuel in the United States.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Assessment of Adopting the ADDER Computer Code for the MIT Research Reactor Fuel Management</title>
<link href="https://hdl.handle.net/1721.1/155643" rel="alternate"/>
<author>
<name>Garanzini, Maurane</name>
</author>
<id>https://hdl.handle.net/1721.1/155643</id>
<updated>2024-07-11T03:01:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Feasibility Assessment of Adopting the ADDER Computer Code for the MIT Research Reactor Fuel Management
Garanzini, Maurane
The Massachusetts Institute of Technology Reactor (MITR) is a 6 MW research reactor currently operating with highly enriched uranium (HEU) plate-type fuel. Fuel management calculations for this reactor are performed using MCODE, which allows for the coupling of a neutron transport code and a depletion code. As part of the low-enriched uranium (LEU) fuel conversion program, the Advanced Dimensional Depletion for Engineering of Reactors (ADDER) software is being developed at Argonne National Laboratory to provide a more flexible and performant approach to fuel management. This study evaluates the feasibility of transitioning from MCODE to ADDER for MITR fuel management by carrying out a code-to-code comparison. Analyses for a full MITR cycle (70 days) for a 22-element fresh HEU core and fresh LEU core were completed, and the impact of simplified in-core experiments with various materials was also evaluated. Calculations with mid-cycle restart were performed, in which reactor power was reduced to 100 kW for 7 hours to evaluate Xe-135 poison reactivity effects. The parameters selected for comparison include control blade heights, cumulative fission density, integral neutron flux, and nuclide inventory (for selected actinides and neutron poisons). The study showed satisfactory agreement between ADDER and MCODE results, with control blade worth differences within 200 pcm, corresponding to the ±100 pcm critical search tolerance, and U-235 mass differences remaining below 0.5 g per fuel element at end of cycle. Differences for other result types remain low enough to show the potential of transitioning to ADDER, with higher differences located near control blades when using the predictor-corrector method for depletion, since the codes rely on different algorithmic definitions for predictor-corrector as well as different critical blade search schedules. 
Closer agreement between results is obtained when switching to the predictor method, but this still indicates some potential differences in power normalization. The two codes also show good agreement on control blade heights and Xe-135 core inventory for mid-cycle restart calculations. Further study is recommended to assess depletion factors such as neutron flux normalization and predictor-corrector schemes. Before ADDER is implemented for MITR fuel management, future work is required to confirm good agreement for equilibrium cores with depleted HEU fuel element compositions and to analyze fuel element shuffling between cycles.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Photoelectric Fusion Technologies: Market Potential and Strategic Insights from NTT's IOWN Case</title>
<link href="https://hdl.handle.net/1721.1/155642" rel="alternate"/>
<author>
<name>Numa, Kentaro</name>
</author>
<id>https://hdl.handle.net/1721.1/155642</id>
<updated>2024-07-11T03:01:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessing Photoelectric Fusion Technologies: Market Potential and Strategic Insights from NTT's IOWN Case
Numa, Kentaro
This thesis investigates the rapid evolution of technology in response to surging internet traffic, projected to increase from 33 zettabytes in 2018 to 175 zettabytes by 2025, and data processing demands, anticipated to exceed 180 zettabytes by the same year. The escalating requirements for more robust communication and data processing systems are emphasized, especially as AI advances necessitate substantial computational resources, increasing energy consumption. This highlights the challenges of enhancing performance while maintaining energy efficiency, suggesting the limits of Moore's Law and Dennard scaling. The thesis explores the adoption of silicon photonics as a significant innovation, shifting from electrical to optical signals, particularly within Nippon Telegraph and Telephone Public Corporation (NTT)'s Innovative Optical and Wireless Network (IOWN) initiative. It analyzes the strategic, operational, and market engagement approaches of NTT, focusing on competitive threats, potential collaborations, and strategies to foster third-party development. The conclusion underscores NTT’s potential to transform telecommunications and data processing through photonics and AI technologies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Network Scalability of Metaverse-Applicable Use Cases</title>
<link href="https://hdl.handle.net/1721.1/155641" rel="alternate"/>
<author>
<name>Reveron, Daniel E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155641</id>
<updated>2024-07-11T03:29:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluating Network Scalability of Metaverse-Applicable Use Cases
Reveron, Daniel E.
Within the context of scalability for the Metaverse, the network remains a principal limiting factor even if extended reality adoption were to increase, given the large volumes of data needed to support complex use cases. This thesis introduces a systems framework to evaluate the scalability of network architecture within the Metaverse, envisioned as the next generation of 3D-enabled internet. Through two experiments, we developed a model to determine whether various Metaverse use cases could be supported by current network infrastructure. The first experiment utilized Meta's Horizon Worlds platform to assess the throughput scalability of objects. The second experiment constructed a model categorizing use cases and evaluated their expected throughput against current data rates, incorporating data from the first experiment and existing literature. The findings indicate that static objects do not contribute persistent throughput, while moving objects exhibit an approximate throughput of 25 Kbps each. Furthermore, education, entertainment, facility design, product design, and training are identified as the use case categories most constrained by current infrastructure capabilities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective Messaging for Tackling NIMBY to Accelerate Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/155640" rel="alternate"/>
<author>
<name>Singh, Amandeep</name>
</author>
<id>https://hdl.handle.net/1721.1/155640</id>
<updated>2024-07-11T03:01:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effective Messaging for Tackling NIMBY to Accelerate Decarbonization
Singh, Amandeep
This study aims to evaluate the efficacy of communication strategies in changing public perceptions of energy and other facilities that are typically perceived as environmental and health risks. We use existing US-wide survey data in which participants were asked about the risks of living near nuclear power plants and which messages were most effective. Our analysis reveals that (1) providing key facts and well-designed messages can change risk perception, and (2) messages focused on reassurance (e.g., controlling, containing, and monitoring radiation) are more effective than those comparing different risks, across all demographic groups. Effectiveness also varies with demographics: certain groups—Gen Z, the Silent Generation, those with vocational education, and Independents—are more amenable to change, and messages focused on social benefits are also effective for higher-education groups and Democrats. The methodologies offer a framework for improving public perceptions and creating pathways toward public acceptance of these facilities by identifying key facts and designing effective messages.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CEνNS in Natural Zinc Superconductors and its Applications for Nuclear Non-Proliferation</title>
<link href="https://hdl.handle.net/1721.1/155639" rel="alternate"/>
<author>
<name>Ryan, Brianna Noelani</name>
</author>
<id>https://hdl.handle.net/1721.1/155639</id>
<updated>2024-07-11T03:49:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">CEνNS in Natural Zinc Superconductors and its Applications for Nuclear Non-Proliferation
Ryan, Brianna Noelani
The potential of neutrinos for nuclear non-proliferation has been heavily debated due to their abundance and low interaction rates. A newly discovered neutrino detection technique, referred to as Coherent Elastic Neutrino Nucleus Scattering (CEνNS), has the potential to settle this debate due to its abnormally high cross-section. This thesis presents a feasibility study for the use of CEνNS detection using zinc superconductors for nuclear non-proliferation. To address this question, this thesis is broken down into two case studies: identifying the trafficking of Cs-137 and safeguarding nuclear reactors. For each of these case studies, the antineutrino spectrum, CEνNS cross-section, and reaction rates were calculated. Using these resources, multiple statistical analyses were performed assuming a theoretically low recoil threshold, no background noise, and optimal detection parameters. Based on these analyses, I conclude that CEνNS is feasible both for discovering trafficked nuclear materials and for safeguarding nuclear reactors, under ideal detection conditions. This study should help open the door to future, more in-depth studies in less ideal, more realistic detection conditions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pulsed Magnetic Imaging of Broad-Frequency Fields using Nitrogen-Vacancy Centers in Diamond</title>
<link href="https://hdl.handle.net/1721.1/155638" rel="alternate"/>
<author>
<name>Karlson, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/155638</id>
<updated>2024-07-11T04:01:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pulsed Magnetic Imaging of Broad-Frequency Fields using Nitrogen-Vacancy Centers in Diamond
Karlson, Samuel
Wide-field magnetic imaging using nitrogen-vacancy (NV) centers in diamond can yield high-quality images for various applications, including biology, geology, condensed matter physics, and electronics troubleshooting. These quantum sensors yield wide field-of-view images with micron-scale spatial resolution and operate in ambient conditions. Most sensing work with NV centers in diamond has focused on DC and low-frequency AC fields. This thesis demonstrates a wide-field magnetic imager and its capabilities with test structures of varying complexity. We overcome the challenges of measuring MHz-frequency magnetic fields with a quantum frequency-mixing approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Futures: Culture Crates Hybrid Digital-Analog Methodology in the Advancement of Cultural Education and Preservation</title>
<link href="https://hdl.handle.net/1721.1/155637" rel="alternate"/>
<author>
<name>Zaza, Nadine Adel</name>
</author>
<id>https://hdl.handle.net/1721.1/155637</id>
<updated>2024-07-11T03:55:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Integrating Futures: Culture Crates Hybrid Digital-Analog Methodology in the Advancement of Cultural Education and Preservation
Zaza, Nadine Adel
In an era of rapid globalization, the preservation of intangible cultural heritage through education is both crucial and necessary. This thesis explores culturally responsive learning and the innovative combination of hands-on and digital learning techniques introduced through "Culture Crate," an educational technology venture that integrates digital and analog methods to enhance cultural education in K-12 settings. Given that culturally responsive teaching significantly impacts student engagement and learning, there is a clear need for educational tools that are both culturally relevant and engaging. Employing a human-centered design approach, the research investigates the efficacy of merging digital content with physical artifacts to preserve cultural heritage and address educational gaps; prototypes of Culture Crate were tested through this approach.&#13;
&#13;
This thesis underscores Culture Crate's potential to foster empathy and intercultural competence among students through immersive learning experiences. Iterative testing and feedback from educators and students, alongside qualitative interviews, prototyping, and pilot studies, inform the refinement of the product. By aligning with UNESCO's Intangible Cultural Heritage Convention and the United Nations Sustainable Development Goals, Culture Crate aims to preserve endangered cultural practices while enhancing educational outcomes. Culture Crate, a cultural ed-tech solution, addresses the need for culturally responsive teaching by offering a hybrid learning model that combines the best of digital and physical educational resources. The research further examines Culture Crate's scalability within the broader educational market, highlighting its role in integrating cultural heritage into modern education. Ultimately, this study underscores the importance of culturally responsive teaching, offering insights into the development and implementation of educational technologies that ensure future generations remain connected to their cultural roots.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of RANS-Based Turbulence Models for the Propagation of Stratified Fronts in Buoyancy-Driven Flow</title>
<link href="https://hdl.handle.net/1721.1/155636" rel="alternate"/>
<author>
<name>Cummings, Calvin James</name>
</author>
<id>https://hdl.handle.net/1721.1/155636</id>
<updated>2024-07-11T03:02:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluation of RANS-Based Turbulence Models for the Propagation of Stratified Fronts in Buoyancy-Driven Flow
Cummings, Calvin James
Computational fluid dynamics (CFD) is a powerful tool in the design of next-generation nuclear reactors. These reactors are designed to be inherently safe, utilizing physical phenomena such as buoyancy and natural convection to cool the core in the event forced circulation fails. However, the widely implemented Reynolds-averaged Navier-Stokes (RANS) approach to turbulence modeling has previously shown limitations in its ability to adequately predict buoyancy-driven flows, including thermally stratified flow. Inaccuracies in these cases are often attributed to the Reynolds analogy, a simplification of the turbulent heat flux. To evaluate the validity of the Reynolds analogy under thermal stratification, the HiRJet experiment was constructed at the University of Michigan. HiRJet induces stratification and measures the propagation of stratified fronts over time. This work aims to draw conclusions on the validity of RANS-based models and the Reynolds analogy for stratified flows and to develop best practices for CFD applications under these conditions. This is achieved by rigorously assessing the performance of several common turbulence models through comparison to high-resolution results from HiRJet and direct numerical simulation (DNS). Sources of inaccuracy were identified through evaluation of separate components such as the treatment of turbulence production due to buoyancy and the modeling of anisotropic turbulence. The STRUCT-ϵ model was adopted to evaluate the impact of resolving turbulent structures on predictions. The buoyancy production flux model (BPFM) was implemented to explore the advantages and challenges of more completely modeling the buoyancy production term in RANS-based models. Most importantly, this work shows that RANS-based turbulence models accurately reproduce experimental and DNS results, demonstrating that, despite widespread skepticism, the Reynolds analogy is not the primary source of error in modeling stratified flows.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Biomarkers” for Translational Success in Neurodegenerative Diseases: A Comparative Analysis of the Research to Practice Trends in Breast Cancer and ALS to Identify Systematic Indicators of Translational Success</title>
<link href="https://hdl.handle.net/1721.1/155635" rel="alternate"/>
<author>
<name>Shamsie, Maryam A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155635</id>
<updated>2024-07-11T03:07:15Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">“Biomarkers” for Translational Success in Neurodegenerative Diseases: A Comparative Analysis of the Research to Practice Trends in Breast Cancer and ALS to Identify Systematic Indicators of Translational Success
Shamsie, Maryam A.
Translating research findings into practical therapies for neurodegenerative diseases remains a formidable challenge in biomedical research. This challenge arises from the diseases’ inherent heterogeneity and complexity. Interestingly, similar obstacles were historically encountered in breast cancer research, which has since made significant strides in treatment effectiveness. To address this research-practice gap for neurodegenerative diseases, this research proposes identifying key indicators that can be used to estimate and prioritize research efforts, ultimately improving the success rate of translation to clinical practice. A literature review and principles of systems architecture and dependencies were used to qualitatively assess key indicators and their impacts on successful translation. By comparing the history of research and translation between breast cancer and Amyotrophic Lateral Sclerosis (ALS), we identified eight critical indicators for successful translation in heterogeneous diseases. Among these, two stand out: (1) the identification of critical molecular pathways relevant to the disease and (2) the corresponding biomarkers. Discoveries of these two indicators for a particular disease can pave the way for a precision medicine approach, bridging the gap between research and practical applications of therapies for complex illnesses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Blueprinting AI Economics: Cost Assessment Framework for Business Stakeholders to Navigate Key Aspects in Prompt Engineering, Prompt Automation, and Fine-tuning LLMs</title>
<link href="https://hdl.handle.net/1721.1/155634" rel="alternate"/>
<author>
<name>Sulaiman, Azfar</name>
</author>
<id>https://hdl.handle.net/1721.1/155634</id>
<updated>2024-07-11T03:14:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Blueprinting AI Economics: Cost Assessment Framework for Business Stakeholders to Navigate Key Aspects in Prompt Engineering, Prompt Automation, and Fine-tuning LLMs
Sulaiman, Azfar
The rapid proliferation of large language models (LLMs) has led to an intense focus on achieving unprecedented performance benchmarks, often at the expense of considering the substantial computational costs involved. This oversight is compounded by the lack of robust, academically grounded frameworks for comprehensively evaluating these costs, their sources, and strategies for minimization while balancing performance imperatives. To address this critical gap, my research aims to develop a rigorous and systematic framework that enables researchers and industry stakeholders to understand and contextualize the cost implications of fine-tuning, prompt engineering, and prompt automation techniques. By offering a systematic approach to evaluating the trade-offs between performance, cost, and societal impact, this research seeks to advance the practical and sustainable adoption of LLMs across diverse applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative Modeling of Water Demand to Support a Continuous Human Presence on Mars</title>
<link href="https://hdl.handle.net/1721.1/155633" rel="alternate"/>
<author>
<name>Charoenboonvivat, Yana</name>
</author>
<id>https://hdl.handle.net/1721.1/155633</id>
<updated>2024-07-11T03:05:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantitative Modeling of Water Demand to Support a Continuous Human Presence on Mars
Charoenboonvivat, Yana
Establishing a continuous human presence on Mars is a crucial milestone in advancing human capabilities in space and is a high priority for the National Aeronautics and Space Administration. An important step toward establishing a continuous human presence on Mars is identifying landing sites suitable for human and scientific exploration. The quantity of water needed to sustain human life on Mars is a key driver in the selection of landing sites. However, minimal work beyond first-order water demand estimates has been completed to date. To address this gap, this thesis quantitatively estimates how much water is needed to sustain a continuous human presence on Mars. Updates were made to HabNet, a MATLAB simulation tool that incorporates key mission parameters and outputs predictions of resource levels over time, to improve the accuracy and fidelity of water demand estimates. These updates involve creating additional Environmental Control and Life Support (ECLS) technologies and updating the crew model to reflect more recent data. The updated HabNet tool was then used to simulate five discrete cases that collectively represent a Mars surface campaign crew profile showing an increasing and continuous human presence. Results from deterministic modeling of water demand showed that the net total water demands for 4, 8, 12, 16, and 20 crew members on a 790-day mission were 38,669 kg, 76,545 kg, 118,069 kg, 151,617 kg, and 193,134 kg, respectively. For each crew size, 63-65% of the water was needed for generating MAV propellant, 22-23% was needed for crops, and 12-15% was needed for life support. Additionally, the water demand per crew member per day was found to fluctuate between 12.00 kg and 12.50 kg across the five cases. This thesis also demonstrated the ability to perform probabilistic modeling of water demand with HabNet using high-performance computing (HPC).
A Monte Carlo simulation was completed using the MIT SuperCloud supercomputer for the same five discrete cases, which marked the first time HPC was used to produce HabNet simulation results. Gaussian and beta distributions were fitted to the water demand results from the Monte Carlo simulation. However, further work is still needed to determine which probability distribution best represents the data. Opportunities for future work include improving the accuracy and fidelity of HabNet to make resource demand estimates and leveraging HPC for future analyses that may be computationally intensive.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Conception to Connection: A Systematic Approach to Integrating Remote Patient Monitoring in Fertility Management</title>
<link href="https://hdl.handle.net/1721.1/155632" rel="alternate"/>
<author>
<name>Thatcher, Florence Fernandez</name>
</author>
<id>https://hdl.handle.net/1721.1/155632</id>
<updated>2024-07-11T03:57:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Conception to Connection: A Systematic Approach to Integrating Remote Patient Monitoring in Fertility Management
Thatcher, Florence Fernandez
The fertility journey, spanning preconception to postpartum, is critically underserved by traditional healthcare systems, which often fail to provide continuous, personalized care. This deficiency is particularly acute for individuals facing infertility, who must navigate a labyrinth of physiological and emotional challenges at each stage. The need for timely interventions and access to sustained, individualized care is central to addressing these issues.&#13;
&#13;
Amid these challenges, remote patient monitoring (RPM) systems are emerging as a transformative approach in healthcare, facilitating continuous patient care and monitoring beyond the conventional settings of clinics and hospitals. Despite the increased adoption of RPM and telemedicine, a gap persists in integrating such systems within the domain of fertility care.&#13;
&#13;
This thesis undertakes a comprehensive and systemic evaluation of the fertility landscape, examining barriers to effective treatments and outcomes and identifying key health metrics for each phase of the journey. Moreover, the work analyzes existing devices and technologies to determine their ability to measure these metrics and their technological readiness for remote monitoring. The work includes a review of RPM frameworks using system architecture methodologies, analyzing their architectures, technologies, and ecosystems to adapt them for fertility applications. Although numerous devices for remote testing are now available, their full potential in fertility care still needs to be explored, necessitating further development, clinical validation, and resolution of interoperability issues.&#13;
&#13;
A patient-centered, customizable fertility-RPM framework is proposed, integrating the health metrics with essential architectural decisions aligned with stakeholder needs. This thesis offers foundational insights and operational guidelines for fertility institutions considering adopting RPM services, advocating for a holistic, connected, and continuous care model throughout the fertility journey. This work underscores the transformative potential of RPM in enhancing fertility care, paving the way for more integrated and effective fertility treatment solutions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telco 5G and Non-Terrestrial Networks: An architecture trade analysis to select solutions to support socio-technical needs in Sub-Saharan Countries</title>
<link href="https://hdl.handle.net/1721.1/155631" rel="alternate"/>
<author>
<name>Kotane, Jacky L.</name>
</author>
<id>https://hdl.handle.net/1721.1/155631</id>
<updated>2024-07-11T03:37:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Telco 5G and Non-Terrestrial Networks: An architecture trade analysis to select solutions to support socio-technical needs in Sub-Saharan Countries
Kotane, Jacky L.
Sub-Saharan Africa (SSA) has a unique cellular mobile subscriber base of over 490 million, accounting for a mobile penetration of 43%. Despite the growth in mobile telephony over the past few decades, more than 180 million people in Sub-Saharan Africa live without internet access. For those living in areas with internet connectivity, at least 59% of people cannot afford access to the internet and remain unconnected. The Global System for Mobile Communications Association (GSMA) estimates that mobile internet users in Sub-Saharan Africa will increase by over 160 million by 2030, with fifth-generation (5G) coverage accounting for 17%. As 5G technology picks up, satellite operators are deploying megaconstellations to provide ubiquitous broadband communications services. Access to the sub-2 GHz spectrum to support satellite direct-to-device services will likely lead to new business models for mobile operators in Sub-Saharan Africa. The African space industry is anticipated to grow to about USD 23 billion by 2026, with an expected launch of an additional 105 satellites. Fifteen African countries have collectively launched 59 satellites. This thesis applies a systems engineering approach to evaluate a selection of concepts for a 5G Non-Terrestrial Network (5G-NTN) to address coverage of the currently unconnected people across Sub-Saharan Africa. The architecture tradespace exploration framework evaluates feasible 5G-NTN architectures using cost and multi-attribute utility. The analysis suggests a constellation of at least 45 satellites in Low Earth Orbit to provide integrated access and backhaul for 5G networks. Implementing such an architecture would require collaboration between mobile network operators and space agencies in Sub-Saharan Africa to create a shared satellite constellation infrastructure to address coverage and space sovereignty needs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Firm Dynamics under Industrial Policy</title>
<link href="https://hdl.handle.net/1721.1/155630" rel="alternate"/>
<author>
<name>Monden, Yuichiro</name>
</author>
<id>https://hdl.handle.net/1721.1/155630</id>
<updated>2024-07-11T03:42:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Firm Dynamics under Industrial Policy
Monden, Yuichiro
Designing an effective industrial policy is a critical issue for governments. How does the policy's effect on an industry as a whole vary with the attributes of the supported firms and with the nature of the supported industry? To answer this question, this thesis develops a model describing firm dynamics under government support in the form of tax credits and conducts simulation experiments while varying policy scenarios and parameters representing the industry's nature.&#13;
The results show that the impact of government support on an industry varies greatly depending on a parameter representing one aspect of the industry's nature: inertia to past market share. For industries where this inertia is within a certain degree, there is a clear trade-off depending on the target of the support: support for large firms increases the size of the largest firms but reduces competition and widens the gap between firms, while support for small and medium-sized firms increases competition and narrows the gap but reduces the size of the largest firm in the industry. However, in industries where the inertia is greater than a certain level, the effect of such policies disappears: the inertia dominates the growth dynamics of the firms, and the policy becomes unable to change the state of the industry.&#13;
These results highlight the importance of identifying the nature of the industries to be supported when designing industrial policies. They also show that even when targeting the industries that policies can affect, it is difficult to find a single policy scenario that simultaneously improves the state of an industry from all perspectives. Policymakers need to design industrial policies that meet their purposes with an understanding of the benefits and sacrifices that result from different targets of government support.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a new affordable housing approach: A system-thinking set of criteria to assess quality</title>
<link href="https://hdl.handle.net/1721.1/155629" rel="alternate"/>
<author>
<name>Gottdiener Islas, David B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155629</id>
<updated>2024-07-11T03:03:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards a new affordable housing approach: A system-thinking set of criteria to assess quality
Gottdiener Islas, David B.
Considering that the built environment footprint is expected to double by the second half of this century, driven mainly by economic and demographic growth in developing countries, reconciling several tensions related to this expansion is of paramount importance: accommodating growth without sacrificing sustainability, given that the prevalent manufacturing processes enabling the construction sector yield a substantial portion of global GHG emissions, and providing affordable housing without neglecting quality. Thus, a deceptively simple question arises: what is affordable quality housing? The question contains an opportunity, arguably also an obligation, to employ a system-thinking perspective that observes, and is guided by, the relationships between housing and its broader urban system. So far, pervasive affordable housing development models (which typically categorize inert metrics as economic, social, and sustainable) have proven insufficient in several developing countries because they disregard a system-thinking approach. The goal of this work is to build a system-thinking approach that enables a two-way dialogue between further research, which better equips housing development stakeholders with the criteria needed to design for the functions housing is expected to provide and enables performance comparisons between multiple design concepts until desirable results are achieved through iterative improvement, and the empirical observations that reflect the dynamic nature of both housing needs and the methods to analyze and fulfill them.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring the product configuration complexity and cost for mass-customization of automobiles: A qualitative and quantitative study of the product variant complexity, its associated cost</title>
<link href="https://hdl.handle.net/1721.1/155626" rel="alternate"/>
<author>
<name>Vidhate, Chetan</name>
</author>
<id>https://hdl.handle.net/1721.1/155626</id>
<updated>2024-07-11T03:30:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Measuring the product configuration complexity and cost for mass-customization of automobiles: A qualitative and quantitative study of the product variant complexity, its associated cost
Vidhate, Chetan
This thesis presents an integrated model for analyzing product configuration complexity and cost, aiming to provide a comprehensive framework for decision-making in product configuration management. The research begins with a literature review to identify relevant complexity metrics, narrowing down to two primary metrics: structural and organizational complexity. The selected metrics are integrated into a hybrid model that conceptualizes product configuration complexity as a function of these factors. The model incorporates mathematical formulations for assessing structural and organizational complexities, allowing for a nuanced understanding of the challenges inherent in product configuration. Furthermore, a cost model is developed to quantify the financial implications of product configuration decisions, considering factors such as transport, assembly, and quality control costs. The model is applied to hypothetical scenarios, demonstrating utility in informing decision-making processes within original equipment manufacturers (OEMs). Future work is proposed to enhance the model by incorporating risk and uncertainties, conducting cost-benefit analyses, and refining the algorithm for optimal performance. Overall, this thesis contributes to the advancement of product configuration management practices by providing a comprehensive framework for analyzing complexity and cost in product configuration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing the Enterprise Architecture of an Innovative Plant Engineering Company</title>
<link href="https://hdl.handle.net/1721.1/155625" rel="alternate"/>
<author>
<name>Watanabe, Yutaro</name>
</author>
<id>https://hdl.handle.net/1721.1/155625</id>
<updated>2024-07-11T03:39:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing the Enterprise Architecture of an Innovative Plant Engineering Company
Watanabe, Yutaro
Japan faces significant challenges such as a declining workforce due to aging demographics and the need to decarbonize its pivotal industrial sector. Plant engineering companies, which form the core of manufacturing and energy-related businesses, are expected to contribute to addressing these challenges. This thesis analyzes and proposes solutions to overcome the innovation dilemmas faced by major Japanese companies, including plant engineering firms.&#13;
&#13;
Specifically, an ARIES analysis was conducted to design new approaches to foster innovation. The results suggested that traditional business management practices might not be suitable for new venture development, prompting proposals for organizational structures and systems that simultaneously support existing and new business initiatives.&#13;
&#13;
Furthermore, to aid long-term investment decisions, this study developed a unique data-driven method, the Decision-Making Support Model (DMSM). Applying this model to a plant engineering company as a case study confirmed its capability to support data-driven decision-making.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic Assessment of a Gossamer, Planar, Gigawatt-Scale Space-Based Solar Power System</title>
<link href="https://hdl.handle.net/1721.1/155624" rel="alternate"/>
<author>
<name>Althawadi, Mohamed Adel</name>
</author>
<id>https://hdl.handle.net/1721.1/155624</id>
<updated>2024-07-11T03:22:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Technoeconomic Assessment of a Gossamer, Planar, Gigawatt-Scale Space-Based Solar Power System
Althawadi, Mohamed Adel
This thesis aims to analyze space-based solar power (SBSP) from both technical and financial standpoints. While the analysis is mainly purposed to validate the findings of existing literature on SBSP, it also seeks to identify the problems that need to be addressed for SBSP to become technically and financially viable. The technical analysis has been performed using the systems-theory method, a sequential process that involves stakeholder analysis, requirements derivation, preliminary concept generation, system decomposition, metrics formulation, architectural decision-making, and tradespace analysis. As for the financial feasibility, the assessment has been based on two metrics: the net present value (NPV) and the levelized cost of electricity (LCOE). Although SBSP is deemed practical from an engineering perspective, the study concludes that it has yet to make financial sense. Its technical practicality is evidenced by the fact that all the components of an SBSP system are demonstrably operable. This thesis proposes a design that is expected to have a specific power of 91.5 W/kg, comparable to the specific power estimated for Caltech’s SSPP concept (98 W/kg). In contrast, NASA’s SPS-ALPHA concept has reportedly been designed to achieve a specific power of 57 W/kg. The financial infeasibility is demonstrated by the negative NPV and the exorbitant LCOE for all the scenarios considered. The validity of the calculated NPV and LCOE is bound by the accuracy of the estimated costs, especially the cost of satellites, which, contrary to prevailing studies, constitutes most of the system’s cost. For the NPV and LCOE to merit further consideration of SBSP, this thesis recommends boosting the efficiency-to-areal-density ratio of the PV array. This can be achieved by optimizing reflectors that are light enough to improve the efficiency-to-areal-density ratio of the planar structure yet rigid enough to resist deformation.
This thesis also recommends improving the efficiency-to-areal-density ratio of the RF signal generator and transmission antenna by leveraging miniaturization techniques to unify the two components into a compact, cohesive module. Alternatively, a new set of materials should be explored for all three components through ongoing research.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System interfaces to facilitate follow-up pharmaceutical care in the United States</title>
<link href="https://hdl.handle.net/1721.1/155622" rel="alternate"/>
<author>
<name>Faruque, Fahim</name>
</author>
<id>https://hdl.handle.net/1721.1/155622</id>
<updated>2024-07-11T03:38:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System interfaces to facilitate follow-up pharmaceutical care in the United States
Faruque, Fahim
System interfaces are crucial in complex engineered systems but are understudied in follow-up pharmaceutical care. Therefore, this study aimed to develop a working definition of the concept "system interface" within the context of follow-up pharmaceutical care in the US Healthcare System. To achieve this, semi-structured interviews were conducted with various healthcare system stakeholders. The transcripts of these interviews were analyzed to identify the needs expressed by each interviewee, which were then aggregated at the stakeholder level. Overlapping needs were identified to determine which stakeholders needed to interact for a system aiming to fulfill that need. The results revealed that enhancing healthcare operations, enhancing patient engagement, and educating patients required the highest number of interactions, with 98, 95, and 91 interactions across 18, 17, and 18 stakeholders, respectively. In total, the needs overlap analysis yielded 26 additional functions that may be a component of a follow-up pharmaceutical care system to meet multi-stakeholder needs. These findings suggest that system interfaces are presently an ambiguous component of the system design in follow-up pharmaceutical care despite contributing significantly to the system's complexity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing User Data Privacy and Trust through the Implementation of the OTrace Protocol: Development, Challenges, and Impact Assessment</title>
<link href="https://hdl.handle.net/1721.1/155621" rel="alternate"/>
<author>
<name>Wen, Dian</name>
</author>
<id>https://hdl.handle.net/1721.1/155621</id>
<updated>2024-07-11T03:14:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing User Data Privacy and Trust through the Implementation of the OTrace Protocol: Development, Challenges, and Impact Assessment
Wen, Dian
In the digital age, safeguarding user data privacy and rebuilding trust in digital platforms have become critical priorities as data breaches and misuse continue to rise. This thesis explores OTrace, a novel traceability and accountability protocol that tackles these issues by providing users with enhanced control and transparency over their personal data through advanced consent mechanisms and data traceability features. The study thoroughly analyzes current data privacy frameworks, identifying areas where the OTrace protocol can bridge gaps. Following comprehensive software and product development methodologies, the thesis details the technical design and development of a web service that implements the OTrace protocol, including requirements analysis, use case definition, system architecture, and Application Programming Interface (API) specification. The research addresses the technical, regulatory compliance, and user acceptance challenges encountered, offering insights into potential solutions and strategies. The thesis concludes by examining the outcomes of the enhancements made to user privacy and trust perceptions, addressing the study’s limitations and potential areas for future research, and highlighting the protocol’s promise for policymakers, developers, and businesses in establishing a more secure and transparent digital landscape, ultimately strengthening user data privacy and fostering greater trust.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptual Design of a Nuclear Microreactor Transportation Cask</title>
<link href="https://hdl.handle.net/1721.1/155620" rel="alternate"/>
<author>
<name>Crawford, Carmen Sleight</name>
</author>
<id>https://hdl.handle.net/1721.1/155620</id>
<updated>2024-07-11T03:30:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Conceptual Design of a Nuclear Microreactor Transportation Cask
Crawford, Carmen Sleight
Nuclear microreactors are a rising technology with the potential to be fueled at a central facility and transported to the operation site, which would mark the first attempt to transport a fueled commercial reactor in the US. A standard Type B cask may be adapted to transport a fueled microreactor core, passing the normal condition tests and hypothetical accident condition tests, as demonstrated using a sample microreactor core design with heat pipes. An adequate shutdown reactivity margin (k_{eff} &lt; 0.95) can be maintained using the control drums and shutdown rods except if the heat pipes are broken open and flooded with water. Shielding the gamma radiation so that the dose rate at a distance of 2 m from the outer surface of the cask remains below 0.1 mSv/h requires 57 tons of lead, including 14 cm of radial shielding, 10 cm of solid axial shielding, and 14 cm of axial shielding through which the heat pipes pass. Decay heat can be effectively removed using thermal fins on the outer surface of the cask to maintain a surface temperature below 85 °C. Lead shielding melts during hypothetical accident thermal tests, which suggests that the lead must be properly protected from puncture or, better, replaced by depleted uranium (which has a higher density and melting point) in further work. Redwood impact limiters and stainless steel 316 shims are sufficient to keep the vessel and heat pipes intact, provided that the clearance holes in the lead shield through which the heat pipes pass have at least 0.21 mm of clearance. The normal and hypothetical accident condition thermal tests and the hypothetical accident condition free drop and radiation tests were feasible with this standard Type B cask design.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study on Leveraging Generative Artificial Intelligence and Text Clustering to Support Vendors</title>
<link href="https://hdl.handle.net/1721.1/155619" rel="alternate"/>
<author>
<name>Hubbard, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/155619</id>
<updated>2024-07-11T03:11:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Study on Leveraging Generative Artificial Intelligence and Text Clustering to Support Vendors
Hubbard, Steven
This research is an initiative to discover how generative artificial intelligence (AI) tools can improve Amazon Last Mile’s feedback systems to enhance the Delivery Service Partner experience. Our specific focus is on the effectiveness of clustering algorithms like DBSCAN and K-Means for grouping text feedback based on semantic similarity and on the employment of retrieval augmented generation (RAG) for extracting actionable insights. Our findings indicate a relative effectiveness of K-Means over DBSCAN in clustering feedback, but the overall effectiveness is moderate, which necessitates human verification to counter potential model hallucinations. Additionally, the use of RAG with Claude 2.1 demonstrated promise in answering domain-specific questions in spite of limitations related to text-only input.&#13;
&#13;
We propose future emphasis on the integration of AI in current listening mechanisms to offer concise, actionable recommendations for program leaders. This research also recommends continued exploration in embedding models and RAG framework to enhance feedback quality and information retrieval. The potential to integrate generative AI tools within Amazon Last Mile represents an underexplored opportunity for significant enhancements in efficiency, accuracy, and overall partnership satisfaction.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective Assignment of Construction Managers to Construction Sites</title>
<link href="https://hdl.handle.net/1721.1/155618" rel="alternate"/>
<author>
<name>Suzuki, Kensuke</name>
</author>
<id>https://hdl.handle.net/1721.1/155618</id>
<updated>2024-07-11T03:02:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effective Assignment of Construction Managers to Construction Sites
Suzuki, Kensuke
In the construction industry, construction management is a very important job, and many people work as construction managers at construction sites every day. Construction management consists of various kinds of work. However, this work has not been clarified in a well-organized way, and construction managers have difficulty in teaching and learning the related skills effectively. There are several reasons for this. One is a characteristic of construction work itself: the construction of a building is a one-time experience that cannot be repeated on another building, unlike other products. Another reason is the assignment of construction managers: when a project ends, its members are assigned to other projects. This is a matter of timing, and the assignment is not always the best. A further reason is that there is a lot of tacit knowledge related to construction, which is difficult to learn through lecture-style education. In addition, it takes a long time for a young construction manager to become a skillful one. In these ways, construction management is hard to deal with in an organized manner. Therefore, we tried making a model of construction management that helps to understand its structure. In this research, we built the model in two steps. First, we made an initial model of construction management. At this stage, the model is like a list of factors of construction management and does not yet clarify the structure of construction management. Then, we conducted a survey consisting of several questions related to construction management and its training. The purpose of this survey is to grasp the characteristics and essence of construction management and its training and to organize the first version of the model of construction management.
Based on the survey, we improved the model of construction management. The second version demonstrates what is essential in construction management and what can be trained, and it is helpful for addressing both the training of construction managers and their assignment to projects. We elicited several insights from this model, which was created based on the results of the survey.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming Challenges in Cellular Therapies: A Systems Engineering Approach for Equitable Access</title>
<link href="https://hdl.handle.net/1721.1/155617" rel="alternate"/>
<author>
<name>Latouche, Eduardo Luis</name>
</author>
<id>https://hdl.handle.net/1721.1/155617</id>
<updated>2024-07-11T03:12:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Overcoming Challenges in Cellular Therapies: A Systems Engineering Approach for Equitable Access
Latouche, Eduardo Luis
Cellular and gene therapies have ushered in a new era of medical treatment, promising cures previously thought unattainable. Technologies like CRISPR/Cas9 enable precise genome manipulation, yet challenges persist in therapy delivery, prompting the rise of ex vivo approaches. Despite the promise of adoptive cell therapies, high development costs, manufacturing complexities, and regulatory hurdles hinder widespread adoption. The lack of agreement in the field with respect to centralized versus decentralized manufacturing models and the choice between autologous and allogeneic cell sources pose additional challenges. Equally critical for global access to these therapies, personnel shortages and specialized expertise requirements must be addressed. A systems engineering approach offers a framework for overcoming these barriers, facilitating comprehensive bioprocess design analysis. Ultimately, developing a descriptive model for analyzing therapeutic delivery is crucial for ensuring equitable access to these transformative therapies worldwide.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Thermostat Automation and Retail Rate Designs on Cooling and Heating Flexibility: Balancing Consumer Preferences and an Efficient Grid</title>
<link href="https://hdl.handle.net/1721.1/155612" rel="alternate"/>
<author>
<name>Schmitz, Zack</name>
</author>
<id>https://hdl.handle.net/1721.1/155612</id>
<updated>2024-07-11T04:02:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Impact of Thermostat Automation and Retail Rate Designs on Cooling and Heating Flexibility: Balancing Consumer Preferences and an Efficient Grid
Schmitz, Zack
Flexibility in household energy consumption is crucial for improving grid efficiency and reducing peak electricity demand. The ongoing impact of climate change and the move towards electrification worsen these challenges, emphasizing the need for effective peak demand reduction strategies. Current approaches often involve peak pricing retail tariffs, behavioral responses to grid operator notifications, or expensive technologies such as demand-side batteries. However, these methodologies rely on unpredictable consumer participation or substantial capital investments. On the other hand, the growing use of smart thermostats presents an opportunity for passive, efficient control of household energy consumption. Combining smart thermostats with appropriate price signals creates an opportunity to optimize the balance between energy cost and thermal comfort. This work examines the role of smart thermostat automation and dynamic retail rate designs in maximizing heating and cooling flexibility while ensuring consumer comfort. The research introduces a new approach to demand-side management by using reinforcement learning (RL) to optimize thermostat settings based on individual thermal preferences and price signals. A comprehensive testbed simulation framework was developed to analyze these effects, incorporating bottom-up energy modeling, individualized thermal comfort profiles using smart thermostat data, and advanced thermostat controls to investigate the impacts of various rate designs on residential energy demand. The study evaluates these impacts at a population level, considering the effects on over 80 household archetypes across a localized region. Key findings show that partitioned time-of-use rates with moderate pricing shifts effectively reduce energy usage without creating new peaks, unlike more aggressive pricing strategies that can lead to pre-cooling-induced new peaks. 
These insights offer valuable guidance for policymakers and utility operators in designing rate frameworks that decrease overall electricity consumption and peak demand without compromising personal comfort.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of Corporate Entrepreneurship in Technology Companies: A Study of Strategic Practices and Governing Frameworks Shaping Entrepreneurial Ecosystems</title>
<link href="https://hdl.handle.net/1721.1/155611" rel="alternate"/>
<author>
<name>Addanki, Sowmya</name>
</author>
<id>https://hdl.handle.net/1721.1/155611</id>
<updated>2024-07-11T03:32:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dynamics of Corporate Entrepreneurship in Technology Companies: A Study of Strategic Practices and Governing Frameworks Shaping Entrepreneurial Ecosystems
Addanki, Sowmya
Corporate entrepreneurship is a strategic imperative for technology enterprises in a competitive landscape where evolution manifests on an exponential scale. This study examines how leading tech firms like Amazon, Google, and Microsoft foster innovation through diverse entrepreneurial initiatives, balancing autonomy with strategic alignment. The research employs a qualitative approach, using expert interviews, case studies, and literature analysis to explore internal and external innovation strategies. Findings highlight the importance of aligning entrepreneurial endeavors with long-term goals and fostering a culture that encourages risk-taking and adaptability. The Strategic Entrepreneurship Framework (SEF) is proposed to analyze the diverse approaches to innovation, revealing distinct strategies in acquisitions, venture capital investments, and internal incubators adopted by these established tech firms. Amazon emphasizes employee empowerment and strategic acquisitions, Google focuses on "moonshot" projects and external partnerships, while Microsoft prioritizes internal hackathons and cultural transformation. This study provides a comprehensive understanding of corporate entrepreneurship in the tech sector, and serves as a valuable resource for understanding how leading tech companies drive innovation, with interesting implications for future research. Further investigation could explore the impact of emerging technologies on these strategies, their scalability, and long-term sustainability amidst global shifts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamics modelling of Organizational Culture Transformation: &#13;
A study of the organizational and technical factors that affect the implementation of Toyota production system in organizations</title>
<link href="https://hdl.handle.net/1721.1/155610" rel="alternate"/>
<author>
<name>Sreekumar, Anup</name>
</author>
<id>https://hdl.handle.net/1721.1/155610</id>
<updated>2024-07-11T03:26:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System Dynamics modelling of Organizational Culture Transformation: &#13;
A study of the organizational and technical factors that affect the implementation of Toyota production system in organizations
Sreekumar, Anup
Organizational culture is a vital source of competitive advantage. Nevertheless, it is often overlooked or given a lower priority due to its complex nature and the effort required to drive change. Existing change methodologies offer frameworks for implementing and sustaining organizational change, but success rates are low, with only one in three endeavours yielding favourable results. This research adopts a systems thinking approach to culture change, utilizing system dynamics modelling to unravel the dynamics of change.&#13;
Initially, a hybrid change methodology is developed, incorporating the best aspects of models from the literature and insights gained from organizational experiences in change efforts, including standard failure modes. This hybrid method serves as the foundation for building the qualitative system dynamics model. The developed model represents a shift from conventional linear methods to a circular, system-based approach to change efforts. The qualitative (causal loop diagram) system dynamics model facilitates a transformative understanding of interconnectedness and temporal dynamics, giving leaders insight into the complexity of these interconnected relationships and time dynamics.&#13;
Further research involves validating and updating the model through experimentation within a company, where key variables in the model can be measured easily. These measurements and the qualitative model together can be leveraged to produce an action plan to improve the variable. As we follow our action plan and track these variables repeatedly, the change implementation can be sustained.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technological Development Trajectories of the Component Technologies in Battery Electric Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155609" rel="alternate"/>
<author>
<name>Iijima, Rei</name>
</author>
<id>https://hdl.handle.net/1721.1/155609</id>
<updated>2024-07-11T03:47:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Technological Development Trajectories of the Component Technologies in Battery Electric Vehicles
Iijima, Rei
As concern about climate change grows, interest in battery electric vehicles (BEVs) is rising. BEVs are forecasted to constitute about 40% of passenger vehicle sales in 2035. While BEVs produce no tailpipe emissions, they face challenges, such as driving range and refueling time, that require technological advancements to improve performance and social acceptance. Since the evolution and replacement of component technologies have propelled BEV progress, mapping their development trajectories may yield insights into future evolutions.&#13;
&#13;
This thesis explores the technological development trajectories of batteries, ultracapacitors, battery management systems, electric motors, power electronics, and heat pumps, using main path analysis with U.S. patents published up to 2023. This method detects technological development trajectories and the key patents within them by identifying frequently cited patents, leveraging the enormous volume of patent data.&#13;
&#13;
The results reveal that critical innovations do not necessarily arise during periods when many innovations occur. In some technological categories, such as battery circuit arrangements in power electronics, important innovations have been made steadily, and these trends appear likely to continue. In other categories, such as the magnetic circuits of electric motors, critical innovations have emerged recently and intensively as attention has grown, given their high potential to improve performance. In addition, obtaining U.S. patents for core technologies, including batteries, battery management systems, electric motors, power electronics, and heat pumps, is crucial to gaining U.S. BEV market share, though it is not sufficient for success in the global market. Furthermore, these patents are not necessarily critical innovations in the technological development of the field. Current trends illustrate that significant BEV innovations are distributed across various entities. This suggests that although patents in the automotive industry have typically been held vertically, diverse supply chain strategies, including acquiring innovative startups or entering horizontal partnerships with companies that hold emerging technologies, are gaining importance for staying competitive in a market where leadership in each technology can swiftly change.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study and analysis of the evolution of knee arthroplasty&#13;
surgery through its technological innovation</title>
<link href="https://hdl.handle.net/1721.1/155608" rel="alternate"/>
<author>
<name>Momenzadeh, Mariam</name>
</author>
<id>https://hdl.handle.net/1721.1/155608</id>
<updated>2024-07-11T03:13:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Study and analysis of the evolution of knee arthroplasty&#13;
surgery through its technological innovation
Momenzadeh, Mariam
Total Knee Arthroplasty (TKA) offers life-changing improvements for many patients; however, a considerable portion, 10-15%, continue to experience dissatisfaction after the surgery. Given the rise in the aging population, increased insurance eligibility for TKA in patients with milder symptoms, and growing interest in robotic surgery, it is important to identify technology gaps whose closure can improve overall patient outcomes. This analysis aims to map the network of processes and stakeholders involved in the TKA journey, from pre-operative planning to post-operative rehabilitation. It examines existing technologies employed across the stages of TKA, characterizing their functionalities, evaluating their limitations, and assessing their impact on patient outcomes while identifying areas where investment in technology and innovation is most critical.&#13;
Through this investigation, the thesis seeks to shed light on the complexities of the TKA ecosystem, pinpointing some of its limitations and opportunities for technological advancement. This work serves as a decision-making guide, potentially empowering innovators to channel their resources toward impactful solutions that elevate both short and long-term patient outcomes following TKA surgery.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Approach to Understanding the Attrition of Women in Software Engineering</title>
<link href="https://hdl.handle.net/1721.1/155607" rel="alternate"/>
<author>
<name>Golison, Madeleine A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155607</id>
<updated>2024-07-11T03:51:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Approach to Understanding the Attrition of Women in Software Engineering
Golison, Madeleine A.
Data from large tech companies shows that 15% or fewer of software engineers are women. While Tech companies blame the university pipeline, studies from McKinsey and Accenture found that Tech company “bro culture” was pushing women out of Tech. However, in the MIT Women in Software Engineering survey of 183 respondents, most women who left SWE roles reported planning to stay in Tech. This motivated the hypothesis that female software engineers were leaving SWE roles for reasons other than “bro culture.” Understanding and reducing the attrition of women in the software engineering career path is important because the representation of women in the field is already so small that any attrition is consequential. Overall, many factors were found to influence the retention of women in software engineering roles. Notably, culture was not the single most important reason women left the software engineering career path. The primary reason stated directly in the open-ended survey responses was “burnout,” closely followed by finding other opportunities outside of Tech, a desire for better work-life balance, and the lack of diversity. While these explicitly stated reasons were easily noted, predictive models (using logistic regression and tree-based methods) were needed to illuminate factors that respondents did not explicitly identify. The predictive models identified the primary reasons women leave SWE roles by comparing women who planned to remain in the SWE career path with those who did not. The top reasons identified were not enjoying programming, believing that better opportunities existed outside of software engineering, and being co-located with their team. The last reason, team co-location, was related to various other environmental factors tied to imposter syndrome and was likely a proxy for those factors. 
Women in the 25-44 age range seemed particularly at risk of leaving the career path, and the general population and the specific 25-34 and 35-44 age groups each had different most-important factors. Given these results, several recommendations exist for reducing attrition among women in the software engineering career path. The key recommendations include improving manager feedback processes, diversity, work-life balance, and opportunities to work on high-visibility initiatives.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Project Management for Research and Development</title>
<link href="https://hdl.handle.net/1721.1/155605" rel="alternate"/>
<author>
<name>Hanenkratt, Aaron C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155605</id>
<updated>2024-07-11T03:46:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Project Management for Research and Development
Hanenkratt, Aaron C.
Pharmaceutical industry research and development efforts are highly uncertain, expensive, and lengthy. Drug development projects require great care from their inception to ensure that unsafe or ineffective projects are canceled as soon as possible, because projects canceled late in the drug development process incur much greater costs, potentially draining resources from other projects at medium-sized companies or causing small companies to shutter completely. Proper project management guides the execution of a project and establishes a communication format for decision-makers in upper management. This work is not a definitive decision-making framework for project progression; rather, it crafts a project management system around existing small molecule drug development efforts to aid the decision-making process. A literature review conducted for this work yielded no definitive project management methodology for drug development; instead, it revealed a project management maturity gap in the pharmaceutical industry compared to other industries. Project managers must be adaptive, even in projects with little deviation from expected progression, as such deviations can severely impact the overall project if not handled properly. A project management system that can evolve with an R&amp;D project's progression can provide structure to a very uncertain effort. To identify a project management system for small molecule drug development, the main activities and processes are determined and examined to understand the dynamics of a drug development effort. These general activities and processes are verified with industry personnel, as each company and project may differ. Existing project management systems are also discussed to determine whether any suitable methodologies exist. If a general drug development process is established and a project management methodology is developed to align with that process, the project manager can develop the requisite knowledge and skills to lead a complex pharmaceutical R&amp;D project. The model can then be applied to other drug development efforts to ensure proper management and alignment of project efforts with the unique needs of each project or company.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Influences on Startup Decision Making: Applying Systems Thinking and Lenses to Investigate the Perspectives of Startup Leaders</title>
<link href="https://hdl.handle.net/1721.1/155604" rel="alternate"/>
<author>
<name>Durrenberger, Marcelle</name>
</author>
<id>https://hdl.handle.net/1721.1/155604</id>
<updated>2024-07-11T03:31:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding Influences on Startup Decision Making: Applying Systems Thinking and Lenses to Investigate the Perspectives of Startup Leaders
Durrenberger, Marcelle
A startup is an organization that operates in extreme conditions of risk and uncertainty but can achieve disruptive and radical innovation by solving problems with novel technologies and solutions. While in these conditions, startup leaders continue to inform themselves and make decisions, relying on their perspectives, knowledge, and the team and tools they are surrounded by. However, with a 50% failure rate within five years (U.S. BLS 2016), something is missing. Rather than try to identify failure modes, this research explores the perspectives of startup leaders, who are the people who make startup decisions that could lead to success or failure.&#13;
This research explores the application of systems thinking to the perspectives of startup leaders within the context of early and growth-stage hardware technology startups across industries in North America. The objectives consist of a literature review on startups, using a qualitative approach to gain insight into how startup leaders perceive their startups, and developing and applying a set of system lenses to assess startups holistically. Data is collected through interviews and processed using deductive and latent thematic approaches, affinity mapping, DSMs, and applying a set of startup system lenses. &#13;
An extensive literature review explores the literature surrounding the startup ecosystem and is leveraged to derive ten startup system lenses to assess startups holistically. These lenses encompass company and ecosystem elements, including finances, market climate, and business development. By conducting and analyzing the interviews with startup leaders, this research discovers their priorities, execution focus areas, reflections, learnings, and perspectives on the proposed lenses. The analysis captures priorities like speed to market and flexibility, as well as execution focus areas of having a high-performance team and focusing on fundraising and cash flow. This research also elaborates on shared experiences and challenges these leaders experienced, including achieving product market fit and strategically picking customers. The final analysis applies the system lenses using affinity mapping and a Design Structure Matrix (DSM) to holistically view how startup leaders view and discuss the system lenses and the interfaces. This holistic view exposes both which lenses and interfaces the startup leaders are discussing and which they are not discussing.&#13;
This research lays a foundation for applying systems thinking principles and systems lenses to improve the navigation of unknowns and uncertainties during decision making. There is the potential to assess the quality and comprehensiveness of the information used and the thinking used for decision making within startups. The future work of this research includes understanding the specific connections at the interfaces of the lenses, evaluating the impact of the type and quality of information, and understanding drivers on decision making.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delivery Estimate Accuracy: Understanding and Reducing Virtual-Physical Mismatches and Missorts in Fulfillment Centers</title>
<link href="https://hdl.handle.net/1721.1/155603" rel="alternate"/>
<author>
<name>Yao, Rong (Jenny)</name>
</author>
<id>https://hdl.handle.net/1721.1/155603</id>
<updated>2024-07-11T03:01:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Delivery Estimate Accuracy: Understanding and Reducing Virtual-Physical Mismatches and Missorts in Fulfillment Centers
Yao, Rong (Jenny)
Delivery Estimate Accuracy (DEA) is the Amazon Operations metric that measures the percentage of items for which delivery was attempted on or before the Promised Delivery Date (PDD). There are significant costs and customer experience impacts when packages are not delivered on time, resulting in a DEA miss. Two types of DEA misses are less well understood than others and make up a large proportion of the overall misses: Virtual-Physical Mismatch (VPM) and Missort. This project focuses on understanding and reducing the number of VPM and Missort misses in Fulfillment Centers, with the scope being Amazon’s Traditional Non-Sort Fulfillment Centers in the US.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Business and Technical System Modeling of Rail Projects with Uncertainty Analysis</title>
<link href="https://hdl.handle.net/1721.1/155602" rel="alternate"/>
<author>
<name>Fujii, Yosuke</name>
</author>
<id>https://hdl.handle.net/1721.1/155602</id>
<updated>2024-07-11T03:27:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Integrated Business and Technical System Modeling of Rail Projects with Uncertainty Analysis
Fujii, Yosuke
Overseas technical cooperation and technology transfer projects face many hurdles. An example is the overseas expansion and operation of high-speed railways, which are highly integrated systems. This research uses the Northeast Corridor SCMAGLEV project, a planned overseas cooperation project to bring a Japanese high-speed railway system to the United States, as a model to consider what types of hurdles exist and the options and decision-making involved in dealing with them. We proceeded to build a model with the aim of proposing useful measures for dealing with complex projects. &#13;
To serve as a useful management method and decision-making aid for projects with such complex characteristics, we built a prototype model that integrates business and technical systems and enables uncertainty analysis.&#13;
An advantage of this model is that it allows us to consider combinations of multiple system decisions and multiple business decisions. For example, the research led to the following analyses: by examining the distribution of uncertainty, it became possible to visualize how risk is shared under different schemes (e.g., PPP and non-PPP); by focusing on items where the expected NPV changes significantly depending on the business decision, it became possible to identify in advance contract forms for which it is difficult to set figures; and we were able to show that the impact of long-term borrowing and interest cannot be ignored under some business schemes. We found that the prototype model is useful for pursuing overall optimization while considering complex combinations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Neutron Transmutation of Spent Fuel</title>
<link href="https://hdl.handle.net/1721.1/155596" rel="alternate"/>
<author>
<name>Wickert, Charlotte I.</name>
</author>
<id>https://hdl.handle.net/1721.1/155596</id>
<updated>2024-07-11T03:45:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Non-Neutron Transmutation of Spent Fuel
Wickert, Charlotte I.
This thesis is a scoping study to assess the feasibility of utilizing non-neutron transmutation to target Long-Lived Fission Products (LLFPs), which account for 99% of the long-term radiotoxicity of spent nuclear fuel. With half-lives ranging from 100,000 to 10,000,000 years, LLFPs pose a significant obstacle to long-term high-level waste storage. Geologic repositories for nuclear waste must be functional for millions of years. This significant timescale contributes to the many technical and political challenges preventing the U.S. from closing the back end of the nuclear fuel cycle for High-Level Waste (HLW). The need for a geologic-time-scale repository could be reduced if the most active isotopes present in HLW could be identified and transmuted. While disposal would still be necessary, a smaller time scale could resolve some of the most significant concerns associated with the current million-year time scale. Several computational methods, TALYS, TASMAN, PHITS, and FISPACT, are utilized to model the complete transport and transmutation process for proton irradiation to explore the potential of converting LLFP isotopes into stable or shorter-lived forms. TALYS is used to generate proton cross-sections for key LLFPs, as there are no differential cross section measurements in the energy range of interest (18-70 MeV). The uncertainty in the transmutation rate is calculated from the perturbed cross sections generated by TASMAN and TALYS in work supporting this thesis. The physics of the proton beam is modeled in supporting work using PHITS to provide a flux-energy spectrum and estimate the number of irradiated particles. Finally, FISPACT calculates the amount of depletion for each LLFP. A comparison of alpha and deuteron irradiation is performed using cross sections from the TENDL2021 library and SRIM to determine the penetration depth for each incident particle. 
Preliminary findings indicate that longer irradiation times and higher beam energies enhance transmutation, resulting in a decreased long-term abundance of LLFPs compared to natural decay conditions. For commercial proton accelerators with a 10 mA current operating continuously, the transmutation rates for LLFPs range from 0.59 ± 0.12 g/year to 7.51 ± 1.19 g/year. Most LLFPs are produced in a 1 GW (thermal) reactor at rates on the order of 1 kg/year. Therefore, the transmutation rates achievable with commercial accelerators are too low to make a significant impact. However, increasing the proton beam energy to take advantage of proton spallation reactions may be successful, especially in the case of Selenium-79. 660 g/year of Selenium-79 are produced in a 1 GW (thermal) reactor. Initial spallation estimates show that for Selenium-79, approximately 24 g/year could be transmuted with a single accelerator. Future work will focus on improving the spallation irradiation scheme and target design. This work was supported by the DOE ARPA-E Project.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Introducing AI as a Team Member During the Fuzzy Front End of New Product Introduction Projects in the Medical Device Industry: An Experimental Design</title>
<link href="https://hdl.handle.net/1721.1/155595" rel="alternate"/>
<author>
<name>Asher, Roy</name>
</author>
<id>https://hdl.handle.net/1721.1/155595</id>
<updated>2024-07-11T03:13:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Introducing AI as a Team Member During the Fuzzy Front End of New Product Introduction Projects in the Medical Device Industry: An Experimental Design
Asher, Roy
Artificial Intelligence has been around since the 1950s. More recently, with the introduction of advanced machine learning methods, the market has seen many complex AI solutions that can interact with humans in a seemingly natural way. Furthermore, the proliferation of these technologies is accelerating, and new promises are marketed daily in every media outlet. In light of this rapid technological expansion, additional research is needed to evaluate these types of solutions.&#13;
This study focuses on the medical device industry. Highly regulated industries like the healthcare technology space are particularly interesting because they must comply with strict requirements imposed by the industry's risky nature. The study examines how introducing AI as an expert research and development team member in the early phase of a new product introduction medical device project affects a medical device R&amp;D team’s capability to implement scope effectively and efficiently. &#13;
The experimental design starts with identifying barriers to team performance from literature and through interviews with seasoned industry leaders in medical devices. A battery of experiments is designed to provide a more complete assessment of the effects of AI as a team member on an R&amp;D team in the medical device industry, as AI expertise in one area can be more impactful than in another. This also ensures an understanding of how our stakeholders' various areas of interest will be affected so that AI can drive value as a team member. The study evaluates and anchors many architectural aspects of experimental design. Clear hypotheses are provided in a format that promotes insightful statistical analysis. A medical device challenge is presented as a game for teams to participate in. Protocols detail preparing for and executing the experiments as well as the post-analysis. &#13;
Through this systematic design, the work identifies and explores the complexity of executing the experiments. Implementing the guidance toward study execution and executing these experiments is a natural continuation of this work. Furthermore, the study design lays a foundation for further research in the sociotechnical integration between AI and humans.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Low-Cost, Modular Autonomous Surface Vehicle and Autonomous Underwater Vehicle Integration</title>
<link href="https://hdl.handle.net/1721.1/155594" rel="alternate"/>
<author>
<name>Hamel, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155594</id>
<updated>2024-07-11T03:02:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Low-Cost, Modular Autonomous Surface Vehicle and Autonomous Underwater Vehicle Integration
Hamel, John M.
This thesis investigates the utility of docking low-cost Autonomous Underwater Vehicles (AUVs) with low-cost Autonomous Surface Vehicles (ASVs) through the application of the systems process. With the decreasing cost and increasing functionality of consumer electronics, systems integrating commercial-off-the-shelf (COTS) components can produce higher value at economical prices. The question is how impactful this trend is for surface and underwater systems. Specifically, this thesis addresses the interface between ASVs and AUVs and how low-cost versions can complement each other to provide previously unrealized value. This thesis reviews the marine autonomy field, defines a concept of operation, and analyzes the design tradespace based on multi-attribute utility and complexity. Through the process of analyzing the architectural and engineering tradespaces, over 32,000 possible combinations were reduced by 99.9% to identify 30 leading design combinations. The theoretical analysis informed fleet modeling and field testing of a leading design with Massachusetts Institute of Technology (MIT) Engineering Systems Laboratory’s ASV Platform for Expanding AUV exploRation to Longer ranges (PEARL) on the Charles River in 2023. The fleet modeling identified the non-linear relationship between AUV operational efficiency and percent utilization of the AUVs when serviced by one ASV. The on-water system test was a product of model-based conceptual analysis, autonomy behavior code development, and rapid prototyping, which yielded a successful autonomous dock between PEARL and a dummy AUV. The autonomous docking succeeded on the third attempt, resulting in a 33% success rate. Ultimately, the thesis attempts to show that a low-cost framework allows for non-traditional architectures which can produce value through autonomous ASV and AUV docking.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Study of Using Nuclear Batteries in Decentralized Hydrogen Production</title>
<link href="https://hdl.handle.net/1721.1/155592" rel="alternate"/>
<author>
<name>Germonpré, Emile</name>
</author>
<id>https://hdl.handle.net/1721.1/155592</id>
<updated>2024-07-11T03:53:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Feasibility Study of Using Nuclear Batteries in Decentralized Hydrogen Production
Germonpré, Emile
Nuclear batteries (NBs) are a class of factory-fabricated, autonomously operated microreactors that have the potential to form an extremely versatile clean energy platform. However, they have a high levelized cost of electricity (LCOE), so more insights are needed into how to leverage their unique features to make attractive projects. To that goal, this work investigates using NBs in decentralized hydrogen production to better understand their true value proposition and applicability. The work is part of a larger project in which using NBs for offshore power generation is also investigated. Both the hydrogen production and offshore power generation reports are available as CANES publications [1], [2]. &#13;
&#13;
The focus is exclusively on economics, as I do not foresee any technical challenges to this application. By evaluating nearly 100 different projects, I highlight five factors needed for competitiveness, four of which directly impact the cost of hydrogen production, as shown in Figure 1:&#13;
&#13;
1. The facility size to dilute the cost of providing site security &#13;
2. The capital cost decrease over time due to the economies of multiples &#13;
3. Policy and regulation through clean energy subsidies and the requirement of on-site guards &#13;
4. The efficient leveraging of NBs’ high-temperature heat delivery &#13;
&#13;
The fifth factor relates to the benefit of colocating production and demand, as it can save on the large hydrogen delivery costs. The delivery cost savings can make the best-performing semi-centralized NB projects competitive with centralized production in contexts where transmission from the centralized plants is not cheap. On the other hand, the distribution cost savings of on-site production are not decisive according to my calculations. However, hydrogen delivery costs are highly context-dependent, so further work is needed to address other delivery contexts, e.g., rural communities, and to better understand under which circumstances NBs can provide significant delivery cost savings.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling UO₂ and UN Fuel Fission Gas Release Instances in BISON for Microreactor Applications</title>
<link href="https://hdl.handle.net/1721.1/155591" rel="alternate"/>
<author>
<name>Cunningham, Kaylee</name>
</author>
<id>https://hdl.handle.net/1721.1/155591</id>
<updated>2024-07-11T03:27:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling UO₂ and UN Fuel Fission Gas Release Instances in BISON for Microreactor Applications
Cunningham, Kaylee
Pelletized UO₂ and UN (mononitride) fuel concepts are currently under consideration for microreactor technology. To implement these fuel concepts, the performance of UO₂ and UN under microreactor irradiation conditions must be well understood. One key fuel performance phenomenon is fission gas release, where gaseous fission products are expelled from the fuel pellet to the plenum in the cladding. The fission gas release threshold plot of burnup vs. temperature at 1% release, first introduced by Vitanza and commonly coined the “Vitanza curve,” is of particular interest because it describes when fission gas release begins [1].&#13;
Thus, accurately modeling fission gas release is an active area of research. Though empirical models of UO₂ and UN fission gas release thresholds exist, like Vitanza and Wallenius et al., they fail to account for the low linear powers found in microreactor concepts [1, 2]. As a result, BISON, a MOOSE-based (Multiphysics Object Oriented Simulation Environment) fuel performance code, was used to evaluate fission gas release in UO₂ and UN fuels at power levels representative of microreactors. The fission gas release threshold curve was constructed from BISON results for both fuel types, validated against Vitanza and Wallenius et al., and then extended to a third dimension to incorporate power dependency and create 3D surface threshold plots [1]. At both light water reactor and microreactor power levels for UO₂ and UN, BISON calculated the threshold curve as the expected exponential decay, within 100 K of the Vitanza and Wallenius curves, respectively. When fuel surface temperature was gradually increased at a constant low power level, the threshold curve decreased. This was as expected, since higher temperatures drive faster gas atom diffusion. Faster gas atom diffusion causes fission bubbles to form, interconnect into “tunnels,” and dispel fission gases to the plenum more rapidly [3]. Ultimately, this study demonstrates that the fission gas release threshold is influenced not only by temperature but also by power level. Low power levels associated with microreactor technology ultimately delay the onset of fission gas release. When combined with low-temperature operation, UN fuel may produce very minimal, if any, fission gas release. This may lead to enhanced reactor safety and potentially design and construction cost reductions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of the future enterprise system architecture of the Japanese-origin high-speed railway in Texas.</title>
<link href="https://hdl.handle.net/1721.1/155589" rel="alternate"/>
<author>
<name>Aoshima, Naofumi</name>
</author>
<id>https://hdl.handle.net/1721.1/155589</id>
<updated>2024-07-11T03:50:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploration of the future enterprise system architecture of the Japanese-origin high-speed railway in Texas.
Aoshima, Naofumi
High-speed rail (HSR) is renowned for its efficiency and environmental advantages, reducing fuel consumption, generating employment, boosting tourism, and mitigating congestion. The Texas HSR project aims to connect Dallas and Houston using Japanese-origin HSR technology. Despite securing regulatory approvals, it faces significant challenges. This thesis examines the project's characteristics, identifies its challenges, and prioritizes future considerations.&#13;
&#13;
The research begins with an overview of the project, exploring common reasons for the failure of large-scale projects, with a focus on demand estimation and organizational design. It then analyzes future demand for Texas HSR using two data sources: the Longitudinal Employer-Household Dynamics program’s Origin-Destination Employment Statistics (LODES) and the Next Generation National Household Travel Survey (NextGen NHTS). LODES data offers insights into workers’ transportation patterns and potential ridership, supporting current estimates but indicating areas for refinement. NextGen NHTS data aids in more precise travel demand modeling. The thesis recommends integrating multiple updated data sources for robust forecasting.&#13;
&#13;
Applying the ARIES framework, the thesis examines the Texas HSR project's enterprise architecture through landscape mapping, stakeholder analysis, and SWOT analysis. Findings suggest that a collaborative Japanese-U.S. system, sharing critical information and expertise, can leverage strengths and opportunities. However, this requires significant effort and coordination due to limited experience and multiple entities, with workforce uncertainty as a risk. Effective collaboration and talent retention are crucial. To address these issues, a survey is conducted, followed by capturing the envisioned future state. The thesis then proposes three alternative architectures. Alternative 3, which consolidates key entities for better resource management, is preferred among them. It also explores extreme scenarios and recommends a phased implementation plan to ensure smooth transitions and mitigate resistance to change. The thesis concludes with a summary of findings and a discussion of limitations and future work.&#13;
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Innovative Engineering Organizations within Large Technology Enterprises Using a Systems Thinking Approach</title>
<link href="https://hdl.handle.net/1721.1/155588" rel="alternate"/>
<author>
<name>Zhou, Bingnan</name>
</author>
<id>https://hdl.handle.net/1721.1/155588</id>
<updated>2024-07-11T03:55:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Architecting Innovative Engineering Organizations within Large Technology Enterprises Using a Systems Thinking Approach
Zhou, Bingnan
The technology industry has experienced significant transformations driven by rapid technological developments, changing market demands, and evolving business models. These changes have led to the creation of new products and services across various segments within the technology industry. With their substantial market value, large enterprises are crucial for driving innovation and boosting the economy. However, they face fierce competition from both established and emerging players, compounded by challenges such as economic uncertainty. Overcoming barriers to innovation is essential. Engineering organizations are the backbone of technology companies, making it vital for large enterprises to design innovative engineering organizations to remain competitive and create real value in the industry.&#13;
The primary objective of this thesis is to investigate key factors, strategies, and approaches that foster an innovative environment to drive organizational innovation. Additionally, it demonstrates how a systems thinking approach can holistically analyze an enterprise and generate crucial considerations for designing future organizational architecture. To achieve this goal, the study begins with a literature review on innovation barriers and generic strategies that might help cultivate an innovative environment. A discussion of approaches drawn from case studies to improve innovative environments is also presented. Based on these strategies and approaches, the study suggests several desired attributes to consider in transforming the organizational architecture for innovation. The study then employs an enterprise architecting framework to holistically analyze an engineering organization within a large technology enterprise. This analysis identifies the emerging stakeholder values the organization may embrace to remain competitive.&#13;
Building on this foundational analysis, the thesis proposes multiple alternative architectures. These architectures are then evaluated to determine their effectiveness, with detailed discussions on important considerations for various potential future scenarios. Finally, the thesis suggests an actionable plan for implementing the new architecture, aiming to create an innovative engineering organization and enhance the enterprise's competitive advantages in the technology industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Absorptive Capacity: Systems Framework for Open Innovation in Japanese Enterprises</title>
<link href="https://hdl.handle.net/1721.1/155587" rel="alternate"/>
<author>
<name>Yukawa, Ayako</name>
</author>
<id>https://hdl.handle.net/1721.1/155587</id>
<updated>2024-07-11T03:47:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Architecting Absorptive Capacity: Systems Framework for Open Innovation in Japanese Enterprises
Yukawa, Ayako
As Japan faces challenges in maintaining its global innovation leadership, this thesis explores the potential of collaborative R&amp;D between large Japanese firms and external actors to drive innovation through open innovation practices. The research focuses on absorptive capacity - defined as an organization's ability to recognize, assimilate, and utilize external knowledge - as a critical factor in successfully implementing outside-in open innovation. To address the gaps between academic research and real-world implementation of open innovation, the thesis develops a systems framework for understanding and designing absorptive capacity in the context of large Japanese firms. Using a systems architecture approach and conducting case studies of five Japanese companies recognized as high-performing innovators, the research identifies four main capabilities constituting absorptive capacity: management, recognition, assimilation, and exploitation. The framework maps these capabilities to specific architectural decisions and options, linking the theoretical understanding of absorptive capacity as a system to practical choices in designing a firm's absorptive capability. The significant influence of management capability on recognition and assimilation capabilities, as well as organizational structure and needs assessment in driving absorptive capacity as architectural decisions, are also revealed. This thesis is expected to contribute to both academic discourse and practical implementation, extending previous perspectives on absorptive capacity and providing actionable guidance for designing and managing open innovation initiatives for large Japanese firms and policymakers. While limitations of this research include the potential lack of comprehensiveness in architectural decisions and the subjectivity in case study selection, this thesis will serve as a foundation for future studies on establishing Japan's competitive innovation ecosystem on a global scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of osmotic shock on the release of acid phosphatase activity from Streptococcus mutans</title>
<link href="https://hdl.handle.net/1721.1/155574" rel="alternate"/>
<author>
<name>Fleisher, Michael Howard.</name>
</author>
<id>https://hdl.handle.net/1721.1/155574</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The effect of osmotic shock on the release of acid phosphatase activity from Streptococcus mutans
Fleisher, Michael Howard.
The "osmotic shock" treatment of bacterial cells has proven to be an effective procedure for releasing hydrolytic enzymes located in the periplasmic compartment of the cell. The object of the present study was to examine the mechanism of the "osmotic shock" procedure on the Streptococcus mutans strain PR-89 and to measure the acid phosphatase activity of the shocked cells. This enzyme could be a primary etiological agent in dental caries formation, and a simple method of releasing the enzyme would greatly facilitate the characterization of its properties. During the course of the study several important characteristics of the enzyme were observed. First, the enzyme activity increases linearly with the growth of the bacteria. Second, in the presence of inorganic phosphate the enzyme is observed to be repressed. Finally, during the period of bacterial growth in minimal media supplemented with various concentrations of phosphate, the nature of the enzyme is constitutive. The "osmotic shock" procedure allowed a limited examination of the properties of the acid phosphatase enzyme produced by the Streptococcus mutans. However, the enzyme activity was not successfully separated from the bacterial cell to prove that it had been released.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1973; Cataloged from pdf of print version of thesis. "February, 1974, i.e. Sept. 1973." Vita.; Includes bibliographical references (pages 46-49).
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process and product models for collaborative design</title>
<link href="https://hdl.handle.net/1721.1/155571" rel="alternate"/>
<author>
<name>Gross, Miriam Eva.</name>
</author>
<id>https://hdl.handle.net/1721.1/155571</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Process and product models for collaborative design
Gross, Miriam Eva.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1992; Includes bibliographical references (leaves 158-162).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective prototyping during product development</title>
<link href="https://hdl.handle.net/1721.1/155570" rel="alternate"/>
<author>
<name>Griesser, Hans Patrick.</name>
</author>
<id>https://hdl.handle.net/1721.1/155570</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Effective prototyping during product development
Griesser, Hans Patrick.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 93-97).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Color image enhancement</title>
<link href="https://hdl.handle.net/1721.1/155568" rel="alternate"/>
<author>
<name>Marshall, Shelley E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155568</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Color image enhancement
Marshall, Shelley E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Includes bibliographical references.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the capital structure of major competitors in the telecommunications industry</title>
<link href="https://hdl.handle.net/1721.1/155567" rel="alternate"/>
<author>
<name>Marshall, Nelson W.</name>
</author>
<id>https://hdl.handle.net/1721.1/155567</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">A study of the capital structure of major competitors in the telecommunications industry
Marshall, Nelson W.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1984; Bibliography: leaves 128-129.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An air-preheating system for blast furnaces</title>
<link href="https://hdl.handle.net/1721.1/155564" rel="alternate"/>
<author>
<name>McPeak, Mark Allan.</name>
</author>
<id>https://hdl.handle.net/1721.1/155564</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">An air-preheating system for blast furnaces
McPeak, Mark Allan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaves 177-180.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of vitamin A on plasma glycoproteins</title>
<link href="https://hdl.handle.net/1721.1/155562" rel="alternate"/>
<author>
<name>Kiorpes, Timothy Charles.</name>
</author>
<id>https://hdl.handle.net/1721.1/155562</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The effect of vitamin A on plasma glycoproteins
Kiorpes, Timothy Charles.
The rate of uptake of label from radioactive D-glucosamine and D-mannose into the plasma glycoproteins was studied in vitamin A-deficient rats by comparison with the plasma of normal pair-fed controls. Preliminary studies indicated that peak incorporation (specific activity) was reached three hours after intraperitoneal injection with labelled sugar in both vitamin A-deficient and normal rat plasma. Normal-deficient pairs were injected with the same sugar, labelled with a different isotope, and their plasma was mixed based on equal amounts of protein and fractionated on DEAE-Sephadex A-50. There was a consistent decrease in radioactivity observed in what appeared to be the alpha₁ peak in vitamin A-deficiency. This depression was on the order of 30% when normal and deficient peak totals were compared. This effect appeared with mannose and glucosamine and was of equal magnitude for both sugars. Fractionation of this peak by gel filtration showed that most of the radioactivity was associated with one glycoprotein, which was homogeneous in 5% polyacrylamide gel electrophoresis; the molecular weight of this glycoprotein was estimated to be on the order of 1 x 10⁶ from its behavior on Sepharose 6B. The decrease in the incorporation of label into this peak was interpreted as representing a decreased synthesis rate in vitamin A-deficiency. A shift in the position of the peaks occurred on DEAE-Sephadex in two fractionations of glucosamine-labelled plasma. The vitamin A-deficient plasma glycoproteins were eluted slightly later than those from normal plasma, indicating either a higher negativity in deficiency or a lower molecular weight. This effect was not investigated. However, its failure to be expressed during gel filtration and its reappearance in electrophoresis suggested that charge differences were responsible for this shift.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1973; Cataloged from pdf of print version of thesis.; Includes bibliographical references (pages 73-76).
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of several distance measures for segmentation and isolated word recognition</title>
<link href="https://hdl.handle.net/1721.1/155536" rel="alternate"/>
<author>
<name>Brown, Ralph W.</name>
</author>
<id>https://hdl.handle.net/1721.1/155536</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Comparison of several distance measures for segmentation and isolated word recognition
Brown, Ralph W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1982; Includes bibliographical references (leaves 101-103).
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of lumber producing machinery</title>
<link href="https://hdl.handle.net/1721.1/155534" rel="alternate"/>
<author>
<name>Keller, Robert E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155534</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">The design of lumber producing machinery
Keller, Robert E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design and development of a rosette extensometer of small gage length</title>
<link href="https://hdl.handle.net/1721.1/155533" rel="alternate"/>
<author>
<name>Bulkeley, Peter Zane.</name>
</author>
<id>https://hdl.handle.net/1721.1/155533</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">The design and development of a rosette extensometer of small gage length
Bulkeley, Peter Zane.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Includes bibliographies.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forces in single grit grinding</title>
<link href="https://hdl.handle.net/1721.1/155532" rel="alternate"/>
<author>
<name>Brown, Robert Hallowes.</name>
</author>
<id>https://hdl.handle.net/1721.1/155532</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">Forces in single grit grinding
Brown, Robert Hallowes.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Bibliography: leaves 63-65.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometrical analysis of grinding</title>
<link href="https://hdl.handle.net/1721.1/155530" rel="alternate"/>
<author>
<name>Kalpakcioglu, Serope,
            1928-</name>
</author>
<id>https://hdl.handle.net/1721.1/155530</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1953-01-01T00:00:00Z</published>
<summary type="text">Geometrical analysis of grinding
Kalpakcioglu, Serope,
            1928-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1953; Bibliography: leaf 55.
</summary>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a new impact tester for Gillette safety razor blades</title>
<link href="https://hdl.handle.net/1721.1/155529" rel="alternate"/>
<author>
<name>Kubick, Harry.</name>
</author>
<id>https://hdl.handle.net/1721.1/155529</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">Design of a new impact tester for Gillette safety razor blades
Kubick, Harry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1951; Bibliography: leaf 41.
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A technique for scheduling patients to specialist consultations in a group practice.</title>
<link href="https://hdl.handle.net/1721.1/155526" rel="alternate"/>
<author>
<name>Lynn, Jeffrey Mark.</name>
</author>
<id>https://hdl.handle.net/1721.1/155526</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A technique for scheduling patients to specialist consultations in a group practice.
Lynn, Jeffrey Mark.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alternative techniques for modeling travel distance.</title>
<link href="https://hdl.handle.net/1721.1/155524" rel="alternate"/>
<author>
<name>Vaccaro, Henry Sebastian.</name>
</author>
<id>https://hdl.handle.net/1721.1/155524</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Alternative techniques for modeling travel distance.
Vaccaro, Henry Sebastian.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1974; Bibliography: leaves 246-249.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sticks : a new approach to LSI design.</title>
<link href="https://hdl.handle.net/1721.1/155523" rel="alternate"/>
<author>
<name>Williams, John Douglas,
            1944-</name>
</author>
<id>https://hdl.handle.net/1721.1/155523</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Sticks : a new approach to LSI design.
Williams, John Douglas,
            1944-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977; Bibliography: leaves 143-144.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classical solutions in bag theory.</title>
<link href="https://hdl.handle.net/1721.1/155522" rel="alternate"/>
<author>
<name>Lee, Sylvester.</name>
</author>
<id>https://hdl.handle.net/1721.1/155522</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Classical solutions in bag theory.
Lee, Sylvester.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1975; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Change and Aging: analyzing the disproportionate health and socioeconomic vulnerabilities of older adults in relation to the climate crisis in the U.S.</title>
<link href="https://hdl.handle.net/1721.1/155513" rel="alternate"/>
<author>
<name>McVay, Katelyn R.</name>
</author>
<id>https://hdl.handle.net/1721.1/155513</id>
<updated>2024-07-09T03:50:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Climate Change and Aging: analyzing the disproportionate health and socioeconomic vulnerabilities of older adults in relation to the climate crisis in the U.S.
McVay, Katelyn R.
Climate change has exacerbated the extreme highs and lows of temperature throughout the United States. While climate change-related temperature changes have impacted the entire population, certain demographic groups bear more of the burden than others. In particular, older adults (those aged 65+) may be especially at risk due to their overall increased morbidity and mortality rates. Older adults can escape the outdoor temperatures at home through home energy use. However, older adults living at or below the poverty level may not be able to manage the associated costs of home energy usage. This research builds upon previous work on climate justice by assessing the additive components of poverty, home-living status, and energy costs on the resilience of older adults who reside in their own homes at the national level. This paper aims to identify significant locations in the United States where older adults may be most impacted by temperature extremes and which older populations experience the greatest energy cost burdens. Through the development of an energy cost and climate risk index, this research seeks to identify which places in the U.S. may pose the greatest risks to older Americans’ health and financial stability. Significant findings for both cold waves and heat waves include strong positive relationships between overall extreme temperature risk and annual energy cost burdens, which signify a need to subsidize and assist with energy expenses in particularly vulnerable locations. This research contributes a more precise evaluation of the issue and emphasizes the need to localize and focus on specific populations and their unique risk factors, since prior spatial research covers a broad range of populations and vulnerabilities, making data interpretation less specific.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IoT at Amgen - Evaluating and Piloting Industry 4.0 Technology in Biomanufacturing</title>
<link href="https://hdl.handle.net/1721.1/155512" rel="alternate"/>
<author>
<name>Hosinski, Grant</name>
</author>
<id>https://hdl.handle.net/1721.1/155512</id>
<updated>2024-07-09T03:08:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">IoT at Amgen - Evaluating and Piloting Industry 4.0 Technology in Biomanufacturing
Hosinski, Grant
The advent of the oft-cited Fourth Industrial Revolution, or Industry 4.0, has enabled the widespread use of Internet of Things (IoT) technologies, namely networks of wireless sensors and actuators, in industrial manufacturing processes. While Industry 4.0 purports to usher in the next generation of smart factories, traditional manufacturing facilities may also stand to benefit by selectively adopting IoT technology to augment mature manufacturing processes. Amgen, a global leader in the production of life-saving biopharmaceuticals, has previously supported IoT-based solutions to provide new capabilities within existing biomanufacturing practices. However, selecting and prioritizing potential IoT investments, especially given the mature wired instrumentation infrastructure of Amgen’s manufacturing facilities, remains a challenge. This thesis examines the adoption of IoT technology at Amgen through two distinct lenses. First, an evaluative framework to aid Amgen’s decision-making process surrounding IoT investments is presented. Next, a small-scale IoT device is designed and implemented. The device hosts an artificial intelligence model which, in real time, detects and alerts personnel to glass-break events during Amgen biomanufacturing processes. Both initiatives shed light on Amgen’s technical capacity for integrating IoT technology and its willingness to adopt IoT technology, in addition to creating value within Amgen’s biomanufacturing operations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System Dynamics Approach to Analyzing U.S. Army Officer Talent Retention</title>
<link href="https://hdl.handle.net/1721.1/155511" rel="alternate"/>
<author>
<name>Dulce II, Richie</name>
</author>
<id>https://hdl.handle.net/1721.1/155511</id>
<updated>2024-07-09T03:44:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A System Dynamics Approach to Analyzing U.S. Army Officer Talent Retention
Dulce II, Richie
This thesis investigates the retention of U.S. Army officers through a system dynamics approach, aiming to address the complex factors influencing officers' decisions to remain in or leave the service. By conducting a comprehensive literature review and synthesizing data from various secondary source surveys, key variables impacting retention were identified. These variables were integrated into a qualitative system dynamics model to reveal the intricate feedback loops and interdependencies affecting retention. The qualitative model serves as a foundation for proposing policy recommendations designed to improve officer retention rates by addressing systemic issues and enhancing overall career satisfaction. The insights gained from this research highlight the importance of a holistic and interconnected approach to policy development, emphasizing the need for sustained efforts to stabilize and improve the retention system in the Army.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge</title>
<link href="https://hdl.handle.net/1721.1/155510" rel="alternate"/>
<author>
<name>Alharbi, Meshal</name>
</author>
<id>https://hdl.handle.net/1721.1/155510</id>
<updated>2024-07-09T03:22:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge
Alharbi, Meshal
The problem of sample complexity of online reinforcement learning is often studied in the literature without taking into account any partial knowledge about the system dynamics that could potentially accelerate the learning process. In this thesis, we study the sample complexity of online Q-learning methods when some prior knowledge about the dynamics is available or can be learned efficiently. We focus on systems that evolve according to an additive disturbance model where the underlying dynamics are described by a deterministic function of states and actions, along with an unknown additive disturbance that is independent of states and actions. In the setting of finite Markov decision processes, we present an optimistic Q-learning algorithm that achieves Õ(√T) regret without polynomial dependency on the number of states and actions under perfect knowledge of the dynamics function. This is in contrast to the typical Õ(√SAT) regret for existing Q-learning methods. Further, if only a noisy estimate of the dynamics function is available, our method can learn an approximately optimal policy in a number of samples that is independent of the cardinalities of state and action spaces. The sub-optimality gap depends on the approximation error of the noisy estimate, as well as the Lipschitz constant of the corresponding optimal value function. Our approach does not require modeling of the transition probabilities and enjoys the same memory complexity as model-free methods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refugee housing in the United States: improving the Refugee-Welcoming Rental Market</title>
<link href="https://hdl.handle.net/1721.1/155504" rel="alternate"/>
<author>
<name>Landis, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/155504</id>
<updated>2024-07-09T03:35:19Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Refugee housing in the United States: improving the Refugee-Welcoming Rental Market
Landis, Joseph
In this thesis, I explore the vital connection between the US Reception &amp; Placement (USRAP) refugee resettlement program and the health of the US’s refugee-welcoming rental market (RWRM). I focus on USRAP because it plays a unique and vital role in the international protection system for refugees and because the Biden Administration pledged in 2021 to scale the system’s capacity to resettle refugees back up to pre-2017 levels. The housing system within USRAP is understudied compared to those of much-smaller refugee resettlement programs in Europe, Australia and Canada. Recently, however, housing access has imposed an unignorable operational parameter on USRAP as evidenced by resettlement agency (RA) implementation offices falling more than 50% short of housing placement targets in FY 2023. In this thesis, I engage with the issue of USRAP-based refugee housing from a constructivist perspective, identifying and analyzing the systems and processes that shape a prospective RWRM match to search for practical changes that may lead to improvement. Informed by both a desk review and field research with RWRM stakeholders, I present a fictional narrative case study with two scenarios illustrating two frequent stories of RWRM matching in the housing journey of USRAP clients. The case shows that a wide variety of factors can drastically impair the rent capacity and, by extension, the open-market matching prospects for RWRM households. I then explore other key issues experienced on either side of the RWRM when matching and identify three major challenges: guaranteeing unit-tenant fit, managing risk perception amongst landlords, and streamlining the RWRM tenant placement process. To improve efficiency in tenant placement, resettlement agencies can harness better information systems and adopt clearer processes when liaising with landlords. 
Addressing the gaps in finances or knowledge that impede unit-tenant fit and landlords’ perceptions of renting to refugees must involve fostering partnerships with third party service providers. I identify six opportunities for stakeholders and partners to fortify the RWRM, and I consider the role of a social impact start-up called ReHome that I founded in 2023 to serve as a marketplace platform bringing new RWRM partnerships together. Finally, I consider what additional possibilities might open up within local rental markets if the government were to orient USRAP rental housing access toward non-market housing providers. USRAP has a unique ability to optimize the initial access step of the housing journey of the individuals it resettles because it is the global resettlement program that is most involved in the practicalities of rental market matches and because it is insulated geographically from refugee-producing countries.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PFAS and the Future of Rural Land</title>
<link href="https://hdl.handle.net/1721.1/155503" rel="alternate"/>
<author>
<name>Simon, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/155503</id>
<updated>2024-07-09T03:14:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">PFAS and the Future of Rural Land
Simon, Sarah
In this thesis, I will explain how the problem of PFAS chemical contamination on farmland in the United States emerges from and connects with other environmental, agricultural and economic challenges in rural areas. Using Maine as a case study, I will evaluate the policy response that took place in 2021-23, and consider what lessons can be learned from Maine’s approach that might apply at a national scale. The thesis concludes with a description of possible futures for contaminated land as a way of exploring the implications of different approaches to dealing with the contamination.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantification of Elastic Incompatibilities at Triple Junctions via Physics-Based Surrogate Models</title>
<link href="https://hdl.handle.net/1721.1/155502" rel="alternate"/>
<author>
<name>Rau, Aaditya</name>
</author>
<id>https://hdl.handle.net/1721.1/155502</id>
<updated>2024-07-09T03:54:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantification of Elastic Incompatibilities at Triple Junctions via Physics-Based Surrogate Models
Rau, Aaditya
Stresses at grain boundaries resulting from elastic incompatibilities have long been known to drive the premature failure and loss of desirable macroscopic properties in polycrystalline materials. As a result, there have been significant efforts in the field of grain boundary engineering to understand the sources of grain boundary incompatibilities in polycrystals and potential mitigation strategies through microstructure manipulation. Thus, understanding the relationship between grain incompatibility and failure is important for the practical use of polycrystalline materials. Surrogate models based on machine learning methods have gained broad popularity due to their ability to furnish a functional, albeit approximate, description of complex phenomena. The goal of this thesis is to predict quantitative metrics of incompatibility from various triple junction configurations using a surrogate model. High-fidelity finite element simulations of a cubic-crystal triple junction under hydrostatic extension were used to generate a synthetic dataset for training the surrogate model. A set of &#119869; integrals computed around microcracks placed along the triple junction boundaries were used to quantify the elastic incompatibilities between the grains. A multi-layer perceptron network was trained using the grain rotation angles and &#119869; integrals as the feature and label data, respectively. We demonstrate that the trained network establishes an accurate functional dependence between the triple junction angles and the &#119869; integrals. We use the surrogate model to efficiently sweep the configuration space and create contour maps of the largest stress intensification at the triple junction as a function of the grain rotation angles. Furthermore, we show that the surrogate model can be utilized to identify the most and least compatible triple junction configurations via optimization. 
These configurations are then compared to those identified as favorable through the theory of coincident site lattices.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Dark Patterns in UI/UX Elements of Digital Platforms</title>
<link href="https://hdl.handle.net/1721.1/155501" rel="alternate"/>
<author>
<name>Jain, Anukriti</name>
</author>
<id>https://hdl.handle.net/1721.1/155501</id>
<updated>2024-07-09T03:03:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of Dark Patterns in UI/UX Elements of Digital Platforms
Jain, Anukriti
5.35 billion people (the equivalent of 66.2 percent of the world’s population) are using the internet as of January 2024. As the number of users on the internet is growing and the attention span of an internet user has reduced to 8 seconds, one of the challenges that digital businesses face is acquiring, engaging, and retaining users for their products/services. Some companies employ user interface elements on their websites or apps to trick users into signing up or buying a product. Dark patterns are the tricks used by apps and websites that push users into doing things they didn’t intend to, like signing up for a service or making a purchase.&#13;
&#13;
This thesis covers different types of dark patterns, including roach motel, malicious nudging, urgency/scarcity, bait and switch, and confirm-shaming. Dark patterns are also organized into “pressure” and “trickery” categories. Companies leverage dark patterns to meet their business goals, but it is critical to understand the long-term impact of using dark patterns. This thesis explores the possibility of helping users find these patterns and making them vigilant about these dark patterns. These deceptive patterns are common in web flows and are not easily detectable for many people visiting websites. There is a need to build an intervention to create consciousness about dark patterns. This thesis aims to make users aware of dark patterns by building a Chrome extension that will focus users' attention on the information provided and make them aware of dark patterns. First, we will focus on developing a Chrome extension for detecting scarcity/urgency dark patterns.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Climate Change through a Community Definition of Resilience: Qualitative Analysis of Interviews and Implications for Practice</title>
<link href="https://hdl.handle.net/1721.1/155493" rel="alternate"/>
<author>
<name>Nakagawa, Anisha Patil</name>
</author>
<id>https://hdl.handle.net/1721.1/155493</id>
<updated>2024-07-09T03:45:29Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Understanding Climate Change through a Community Definition of Resilience: Qualitative Analysis of Interviews and Implications for Practice
Nakagawa, Anisha Patil
This thesis explores how residents in low-income, rapidly gentrifying neighborhoods conceptualize resilience to climate change and what responses are desired. As part of a Participatory Action Research study in Eastern Massachusetts, I analyzed de-identified interviews with residents and engaged in collaborative data analysis sessions with Resident Researchers. Residents in these communities experience climate change through chronic stressors, mainly through heat, high utility bills, and flooding. They connect climate resilience to other stressors in their lives like displacement, structural racism, and trauma, and they see strong community ties as a key piece of resilience. Based on this research, responses to climate change need to consider the root causes of unjust systems, respond to the co-stressors in people’s lives, and have community ownership and control in order to be most effective.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ingestible Electronics for High Quality Gastric Neural Recordings</title>
<link href="https://hdl.handle.net/1721.1/155492" rel="alternate"/>
<author>
<name>Gierlach, Adam Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/155492</id>
<updated>2024-07-09T03:06:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Ingestible Electronics for High Quality Gastric Neural Recordings
Gierlach, Adam Matthew
Recent advances in understanding the gut-brain axis, functional gastrointestinal disorders, and gastric stimulation therapies have highlighted the importance of the electrical signals that regulate the gastrointestinal (GI) tract.  Current systems for measuring neural signals from the GI tract involve acute, invasive procedures that change the underlying electrical behaviors or cutaneous recordings which measure highly attenuated signals.  This thesis describes the development of a non-invasive device for long term gastric recordings in freely moving patients known as Multimodal Electrophysiology via Ingestible Gastric Untethered Tracking (MiGUT).  The custom device and electrodes are designed to conform to the stomach wall, wirelessly transmit high quality signals, all while fitting in an ingestible form capable of being easily delivered into the GI tract.  MiGUT is shown to record the gastric slow wave in-vivo in pigs, along with signals that align with the heart and respiratory rate, and measure the expected response to prokinetic therapeutics.  Multi-day measurements were obtained using MiGUT in a freely moving pig, recording changes in the slow wave during different behaviors with no artifacts observed during ingestion or movement.  This type of data could enable a new level of understanding of one’s GI tract, for health tracking and personalized diagnostics.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Affect of “Aha!” Moments to Detect the Moment of Learning</title>
<link href="https://hdl.handle.net/1721.1/155491" rel="alternate"/>
<author>
<name>Adler, Eden</name>
</author>
<id>https://hdl.handle.net/1721.1/155491</id>
<updated>2024-07-09T03:30:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling the Affect of “Aha!” Moments to Detect the Moment of Learning
Adler, Eden
What if a model could pinpoint the exact moment of learning? Currently, the only way we can understand when someone has learned is by testing them afterwards, which has its limitations. In attempts to detect the moment of learning, researchers from various fields have leveraged data from methods such as Knowledge Tracing (KT) and Electroencephalograms (EEGs) to predict students’ knowledge acquisition. These methods have contributed to improving our understanding of knowledge, but not only do they fall short of detecting the exact moment of learning, they also interfere with natural learning interactions by requiring students to wear sensors or type as they learn. Often, modeling learning does not include affect and emotion data, which are key influencers of learning outcomes. One affective expression that is often observed by educators, and has evaded quantification attempts by researchers, is the moment everything suddenly clicks for the student: the “Aha!” moment. Using classroom video data of students experiencing “Aha!” moments, we created dynamic, functional handcrafted features representing the face and body position and used them to model students’ facial expressions. We then leveraged feature selection methods and statistical analysis to ultimately contribute a novel, explainable definition of the observable, affective markers of “Aha!” moments, unlocking the opportunity to use the “Aha!” moment as a signal for detecting the moment of learning. These results invite future interdisciplinary research efforts as well as applications in fields such as artificial intelligence, human-robot interaction, education, psychology, cognitive sciences, and more.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytics for Healthcare Operations: Machine Learning&#13;
to Improve Emergency Department Patient Flow</title>
<link href="https://hdl.handle.net/1721.1/155489" rel="alternate"/>
<author>
<name>Kyle, Thomas D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155489</id>
<updated>2024-07-09T03:48:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analytics for Healthcare Operations: Machine Learning&#13;
to Improve Emergency Department Patient Flow
Kyle, Thomas D.
Over the last several years, the Emergency Department (ED) at Massachusetts General Hospital (MGH) has been experiencing a significant increase in demand for hospital services. Overcrowding in the ED and high utilization of inpatient floors are symptoms of this increase in demand. Chapter 2 shows that data available at the time of an inpatient bed request can be used to prospectively identify ED patients who are sufficiently sick to require hospitalization, but are likely to be discharged within 2 nights of the admission decision. The resulting XGBoost classification model is being implemented as a decision support tool for clinicians who would be deciding whether to send this cohort of short-stay (SS) patients to a short-stay unit (SSU). The SSU would allow for more effective and timely care of this class of patients, thus helping to alleviate both ED overcrowding and inpatient floor utilization. The model exhibits an out-of-sample AUC of 0.81 and its scores are inversely correlated with the observed length of stay (LOS), as desired. Then, Chapter 3 investigates a generic service system that captures typical healthcare settings, in which a hospital has to manage bed assignment in the face of bed requests from patients with different characteristics. The service system (e.g., a hospital) must decide whether to accept or reject service requests instantaneously. The work describes an approximate dynamic programming approach to solve for admission control policies that consider LOS forecasts in admission decisions. The resulting LOS-considerate policy with perfect LOS forecasts allows the generic hospital to increase its daily revenue (or other value-based metric) by 5.5% compared to a policy that does not consider LOS forecasts. This value added increases as the LOS forecasts become more accurate. This illustrates the benefits of using LOS forecasts in hospital resource allocation decisions and investment in accurate LOS forecasting.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System-Theoretic Process Analysis of a Novel Airborne Laser Communication System</title>
<link href="https://hdl.handle.net/1721.1/155488" rel="alternate"/>
<author>
<name>Bishop, Brittany E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155488</id>
<updated>2024-07-09T03:12:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System-Theoretic Process Analysis of a Novel Airborne Laser Communication System
Bishop, Brittany E.
As the military strives to create a more robust battle network, laser communication offers many advantages such as supporting more secure and efficient data sharing. For this reason, interest has grown in recent years in implementing lasercom as a means for intra-aircraft communication. However, many challenges unique and inherent to lasercom such as stringent line-of-sight and pointing requirements and susceptibility to atmospheric degradation lead to difficulties in implementation. Consequently, establishing and maintaining lasercom links in the dynamic environment of flight will require seamless coordination between aircraft. The complexity and novelty of such a system warrant a hazard analysis technique that can fully address the associated challenges of collaboration while the system is in an early concept phase of design. System-Theoretic Process Analysis (STPA) is a proactive hazard analysis technique rooted in Systems Theory. While more traditional hazard analysis methods evaluate the safety of system components individually, STPA provides guidance to analyze systems holistically, thus supporting the identification of emergent behaviors that arise due to component interactions.  Recently, STPA has been extended to address hazards specifically associated with collaboration of multiple controllers providing shared control over a physical process. This extension known as STPA-Teaming provides a methodology to analyze unsafe combinations of control actions that may lead to system losses. The method allows for the systematic identification of causal factors related to coordination that are likely to be missed by more traditional hazard analysis techniques. Because this approach relies on abstraction and includes human operators along with software and hardware components, it is well-suited for novel, complex systems.  
This thesis applies STPA and its extension, STPA-Teaming, to an early concept airborne lasercom system to identify scenarios in which loss of communication may occur. As a result, it identifies scenarios related not only to individual component failures and unsafe internal control, but also related to flaws in coordination of multiple controllers. The output of the analysis is system recommendations that can support the remainder of the systems engineering process including generation of system requirements, definition of system concept of operations (ConOps) and system architecture, and system validation and verification (V&amp;V). In this way, the results of the analysis provide a baseline level of traceability for future design decisions to manage the emergent behavior of the system and ultimately prevent mission losses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of Contrail Models</title>
<link href="https://hdl.handle.net/1721.1/155484" rel="alternate"/>
<author>
<name>Xu, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/155484</id>
<updated>2024-07-09T03:28:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of Contrail Models
Xu, Michael
Condensation trails (contrails) are aircraft-induced ice clouds that are estimated to account for up to 50% of aviation’s climate impacts. Uncertainties in the impact of individual contrails have motivated the development of contrail models, such as CoCiP, a 0-D rapid assessment model, and APCEMM, a 2-D model with detailed ice microphysics. However, there are gaps within the current contrail modeling literature. There is no model both sufficiently fast for rapid assessment of contrail impacts and detailed in its ice microphysics modeling. There are few studies calibrating and validating the performance of contrail models on individual flights. The absolute and relative magnitudes of errors due to weather data uncertainty and errors due to modeling assumptions have not been extensively studied, despite many studies relying on the CoCiP model and the ERA5 weather data for their analyses. This thesis addresses these gaps. The APCEMM model is optimized to achieve a decrease in runtime by 95% and is improved with depth estimation, vertical advection, and atmospheric turbulence modules. A set of 152 flight-attributed LIDAR cross sections is assembled to compare APCEMM and CoCiP results against individual contrail observations on metrics such as contrail width, depth, and optical depth. A method dubbed “ambient parameter inference”, where contrail models infer the meteorological conditions necessary to reproduce a contrail observation, is developed to produce estimated distributions of ambient parameters. These distributions are used to analyze model sensitivities, biases in the weather data, and errors due to weather data uncertainty and modeling assumptions. I find that the distributions of the wind shear and vertical humidity profile as inferred by APCEMM have means and medians within the range of radiosonde measurements of these quantities, suggesting that the model adequately accounts for the sensitivities of contrail properties to these parameters. 
Compared to the APCEMM-inferred parameters, the ERA5 weather data predicts a 3.8 times higher average supersaturated layer depth and a 56% lower wind shear, suggesting systematic biases. CoCiP infers on average a 39% lower supersaturated layer depth and a 3.0 times higher ice supersaturation level compared to APCEMM. Due to the APCEMM-inferred parameters’ closer agreement with radiosonde measurements, this suggests that there may be modeling errors due to CoCiP’s inability to resolve the contrail’s vertical profile and its lower sensitivity to relative humidity. Errors in the ambient humidity data are found to possibly account for an over 100% average absolute error in optical depth when using APCEMM, greater than the 72.5% attributable to CoCiP modeling limitations. APCEMM is found to predict contrails with a 29.3% longer average lifetime and a 4.34-5.92 times average higher energy forcing compared to CoCiP when using the ERA5 weather data. This suggests that inter-model disagreement is on the same order of magnitude as the already known errors resulting from meteorological data gaps.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harnessing Intelligent Audio-Gesture Interfaces For Wearables As A Sleep Aid</title>
<link href="https://hdl.handle.net/1721.1/155483" rel="alternate"/>
<author>
<name>Jacobs Luengo, Daniel Alberto</name>
</author>
<id>https://hdl.handle.net/1721.1/155483</id>
<updated>2024-07-09T04:02:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Harnessing Intelligent Audio-Gesture Interfaces For Wearables As A Sleep Aid
Jacobs Luengo, Daniel Alberto
Insomnia (difficulty in initiating and maintaining sleep) affects a significant portion of the global population. The mainstream adoption of wearable computing presents a unique opportunity to study and aid sleep at an individual level. Here we introduce Zzzonic, a smart sleep-aid application designed for smartwatches that leverages cognitive psychology and human-computer interaction (HCI) to facilitate sleep onset by engaging users in audio tasks as a form of intrusive thought control. A significant aspect of Zzzonic's functionality is its adaptive control system, which estimates sleep onset latency in real time by monitoring indicators such as motion and user response. The system then progressively modifies the characteristics of the audio tasks to minimize sleep onset latency. This thesis evaluates Zzzonic through a series of user trials conducted throughout the development of the app, assessing its capacity to predict and control sleep onset. The results indicate that accurately predicting sleep onset latency in real time as a control signal is possible, but there was no evidence that the system could minimize sleep onset latency. The inclusion of more indicator signals and machine learning techniques is likely to significantly improve real-time sleep onset latency prediction. Future work on computer-modulated intrusive thought control would benefit from the evaluation of task design and intrusive thought indicators, and from identifying an adequate control framework.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Community-Based Approach for Hub Placements</title>
<link href="https://hdl.handle.net/1721.1/155480" rel="alternate"/>
<author>
<name>Chavalithumrong, Alissa</name>
</author>
<id>https://hdl.handle.net/1721.1/155480</id>
<updated>2024-07-09T03:38:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Community-Based Approach for Hub Placements
Chavalithumrong, Alissa
Advanced Air Mobility (AAM) is a rapidly emerging sector in the aerospace industry that seeks to revolutionize transportation by integrating highly automated aircraft into the airspace. As AAM technology matures, establishing a network framework and strategic hub locations becomes crucial for transitioning from theoretical models to practical applications in transportation systems. This thesis investigates community-based strategies for hub placement within the AAM infrastructure. More specifically, it utilizes network segmentation to decompose a network into communities to simplify the hub selection process into more manageable sub-problems. Our first contribution is the development of a specialized community detection methodology called Directed Flow Communities (DFC), which is designed to accommodate the attributes of transportation networks. Next, we conduct a case study using the Freight Analysis Framework (FAF) dataset as a proxy for AAM demand. The empirical investigation focuses on three key sectors: pharmaceuticals, electronics, and comprehensive freight flows, each presenting distinct challenges and insights into the network’s structure. The findings show the effectiveness of the community detection-based methods in unveiling cost-efficient hub locations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reinforcement Learning for Cybersecurity Risk Assessment of Advanced Air Mobility Systems</title>
<link href="https://hdl.handle.net/1721.1/155472" rel="alternate"/>
<author>
<name>Pieper, Brenton A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155472</id>
<updated>2024-07-09T03:34:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reinforcement Learning for Cybersecurity Risk Assessment of Advanced Air Mobility Systems
Pieper, Brenton A.
Modern AI/ML tools have significant potential to accelerate the development of Advanced Air Mobility (AAM) systems that use unmanned aerial systems for providing mobility services. The efficacy of these systems relies on highly granular, reliable, and trustworthy sensor data. This thesis is motivated by the need to assess safety risks due to cyber vulnerabilities in the surveillance components of AAM systems such as Automatic Dependent Surveillance-Broadcast (ADS-B) and the Airborne Collision Avoidance System (ACAS). We focus on spoofing attacks targeted at specific AAM agents and develop a computational approach to evaluate the impact of such attacks on the performance of cooperative agents modeled in a Multi-Agent Reinforcement Learning (MARL) framework. Our threat model is particularly suited for quantifying the safety risks of nominally trained MARL algorithms under attacks by an adversary capable of compromising observational data of a single target agent. In contrast to prior work in Adversarial RL, our approach to creating adversarial perturbations does not require access to learning and control mechanisms internal to the compromised agent. We show how realistic spoofing attacks can be successfully constructed using a simulated MARL-based AAM system, called AAM-Gym. We then conduct a safety risk analysis of such attacks using commonly accepted aviation safety metrics. Specifically, we find that safety compliance decreases across multiple aircraft densities under a spoofing attack to a single agent, owing to higher risk of Near Mid-Air Collision (NMAC). Finally, to understand possible algorithmic defenses, we take inspiration from Safe RL and show how AAM agents can be made more robust, and hence more safety compliant, to observational spoofing by using a minimax training criterion. Our work highlights the need to rigorously study the safety risks of AAM systems under realistic cyber threat models. 
Our findings can benefit efforts to develop practical defense techniques, such as signal validation and filtering, to detect the presence of adversarial perturbations, and control algorithms to adapt and respond to safety compromises in a timely manner.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Heterogeneous Parallelism in Numerical Differential Equations</title>
<link href="https://hdl.handle.net/1721.1/155471" rel="alternate"/>
<author>
<name>Utkarsh</name>
</author>
<id>https://hdl.handle.net/1721.1/155471</id>
<updated>2024-07-09T03:37:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automating Heterogeneous Parallelism in Numerical Differential Equations
Utkarsh
Scientific computing is an amalgamation of numerical methods and computer science. Developments in numerical analysis have yielded stable and accurate numerical schemes, whereas computer algorithms have been successfully adapted to the standard multicore systems of today, enabling parallelism. Combining efficient numerical algorithms with efficient parallelism presents a challenge, mainly because these fields developed independently, and is therefore typically solved on a domain-specific basis by domain experts. The development of general-purpose tools that integrate parallelism into algorithms, accessible through high-level languages, signifies the future direction for addressing computational demands across various domains. This thesis work represents a culmination of efforts in general-purpose parallel numerical algorithms for solving differential equations. We make them accessible by choosing the Julia programming language to implement the high-level framework. Solving differential equations appears to be an intrinsically serial process, as progressive time-stepping proves challenging to parallelize. Most approaches fall into two broad categories: the first is parallelism within the solver operations, making each solve faster, and the second is parallelism between solves, i.e., solving multiple batches at a time. We automate the parallelization process in both of these domains while keeping the algorithms general-purpose. Parallelization with different hardware accelerators, such as CPUs and GPUs, is also investigated. Parallelism for sufficiently large stiff ODEs is traditionally linked to the parallelization of the matrix factorization stage. However, these methods still need to overcome the threading overhead for ODEs having fewer than approximately 200 states.
We propose implementing adaptive-order, adaptive time-stepping stiff ODE solvers such as extrapolation methods, which can parallelize a single instance of an ODE solve even for small ODEs. The other need for parallelization of ODE solvers arises from solving ODEs over batches of data, a typical workflow in inverse problems, global sensitivity analysis, and uncertainty quantification. Traditionally, GPU-accelerated ODE solvers were specially developed for high-dimensional PDE systems and can be easily adapted into batched ODE solvers. The approach to parallelization is to convert an array-based ODE solver to work with GPU-based arrays. These approaches have shortcomings, such as implicit synchronization of time steps across all the ODEs and GPU overheads. We propose that these approaches can be improved significantly, making GPU acceleration for ODE solvers device-agnostic, general-purpose, and accessible from a high-level language.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Human-Computer Interaction-Driven Inquiry on the Extent to Which Web-Based Trust Signals Adequately Represent the Risk of Interactions on the Web</title>
<link href="https://hdl.handle.net/1721.1/155470" rel="alternate"/>
<author>
<name>Ocampo, Javier Adrian L.</name>
</author>
<id>https://hdl.handle.net/1721.1/155470</id>
<updated>2024-07-09T03:07:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Human-Computer Interaction-Driven Inquiry on the Extent to Which Web-Based Trust Signals Adequately Represent the Risk of Interactions on the Web
Ocampo, Javier Adrian L.
As the global population increasingly relies on internet-based products, services, and platforms, users are becoming more vulnerable to unintended consequences. One such consequence is the increased susceptibility to malicious actors and misinformation online. This vulnerability escalates as online interactions become more sophisticated, with users increasingly depending on the internet for complex needs like social activity, banking, and education. These interactions often involve exchanges of personal data, information, and monetary assets, which have become targets for malicious actors. This thesis examines a key point of vulnerability: the user interfaces and interaction components, referred to as "trust signals," that are used to assess the trustworthiness of other users and information on these platforms. The research seeks to highlight the importance of trust signals in creating secure and reliable online environments, as well as explore how poorly designed trust signals can undermine trust and contribute to instability. To uncover latent needs and insights regarding trust signals, a human-centered design process was employed as the methodology. This approach facilitated understanding of user behaviors and preferences through iterative user research and design exploration. The thesis reveals two key findings. First, the human-centered design process showed that users rely on social proofs within trust signals, often basing their trust on their understanding of the recommender's perspective. Second, users are susceptible to relying on inadequate social proof proxies, such as like counts, follower counts, or Discord server member counts, to evaluate trustworthiness in contexts for which these signals were not intended.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems</title>
<link href="https://hdl.handle.net/1721.1/155468" rel="alternate"/>
<author>
<name>Han, Jessy Xinyi</name>
</author>
<id>https://hdl.handle.net/1721.1/155468</id>
<updated>2024-07-09T03:42:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems
Han, Jessy Xinyi
We are interested in developing a data-driven method to evaluate race-induced biases in law enforcement systems. While recent works have addressed this question in the context of police-civilian interactions using police stop data, they have two key limitations. First, bias can only be properly quantified if true criminality is accounted for in addition to race, but it is absent in prior works. Second, law enforcement systems are multi-stage, and hence it is important to isolate the true source of bias within the “causal chain of interactions” rather than simply focusing on the end outcome; this can help guide reforms. In this work, we address these challenges by presenting a multi-stage causal framework incorporating criminality. We provide a theoretical characterization and an associated data-driven method to evaluate (a) the presence of any form of racial bias, and (b) if so, the primary source of such a bias in terms of race and criminality. Our framework identifies three canonical scenarios with distinct characteristics: in settings like (1) airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race; (2) AI-empowered policing, the primary source of observed bias against a race is likely to be bias in law enforcement against criminals of that race; and (3) police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting (e.g. via 911 calls) against the other race. Through an extensive empirical study using police-civilian interaction (stop) data and 911 call data, we find an instance of such a counter-intuitive phenomenon: in New Orleans, the observed bias is against the majority race, and the likely reason for it is the over-reporting (via 911 calls) of incidents involving the minority race by the general public.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-fidelity Modeling and Reinforcement Learning for Energy Optimal Planning</title>
<link href="https://hdl.handle.net/1721.1/155466" rel="alternate"/>
<author>
<name>de Castro, Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/155466</id>
<updated>2024-07-09T04:00:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-fidelity Modeling and Reinforcement Learning for Energy Optimal Planning
de Castro, Luke
Modeling the energy consumption of a quadrotor involves complex electrical and physical dynamics, making it difficult to optimize over. We present a sequence-to-sequence multi-fidelity Gaussian process (MFGP) to learn a data-driven model that predicts the energy required to fly a given vehicle trajectory. The goal is to create an accurate energy prediction that minimizes the number of expensive high-fidelity simulations required for training. The MFGP algorithm can combine many low-accuracy samples from a simple motor model with a few computationally demanding battery simulations to create a single accurate energy prediction. We perform sample-efficiency experiments, finding that a single-fidelity model often needs 10 times more high-fidelity data to match the accuracy achieved by the MFGP. The energy prediction model is then applied to a reinforcement learning (RL) agent, providing a reward signal to a minimum-energy trajectory planner. The RL policy generates more energy-efficient trajectories than those found by a nonlinear optimization baseline method, and we compare it to a minimum-time RL model to show that the energy-efficient policy is non-trivial.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Analysis of Fusion Energy in the European Electricity Market</title>
<link href="https://hdl.handle.net/1721.1/155465" rel="alternate"/>
<author>
<name>Duitemeijer, Mart</name>
</author>
<id>https://hdl.handle.net/1721.1/155465</id>
<updated>2024-07-09T03:59:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Techno-Economic Analysis of Fusion Energy in the European Electricity Market
Duitemeijer, Mart
This study explores the potential of fusion energy in the decarbonization of the European Union’s electricity market. The study simulates various scenarios by employing the GenX least-cost optimization model, factoring in different investment costs and emission caps. The research addresses how fusion energy could influence total system costs, electricity prices, and competitiveness against other technologies. Results indicate that if introduced at low investment costs, fusion can transform the electricity system, acting as a baseload power source and altering investment dynamics. Conversely, in scenarios where fusion investment costs are higher, the results predict a diversified electricity mix dominated by renewables like wind and solar, complemented by gas with Carbon Capture and Storage (CCS) and battery storage to manage intermittency and maintain grid stability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Approach to Fault Management Design for the Proposed Mars Sample Return EDL and Ascent Phase Architectures</title>
<link href="https://hdl.handle.net/1721.1/155424" rel="alternate"/>
<author>
<name>Mao, Cici</name>
</author>
<id>https://hdl.handle.net/1721.1/155424</id>
<updated>2024-06-28T03:50:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Approach to Fault Management Design for the Proposed Mars Sample Return EDL and Ascent Phase Architectures
Mao, Cici
The Mars Sample Return (MSR) campaign aims to bring Martian regolith samples back to Earth. JPL is currently developing the Sample Retrieval Lander (SRL) to receive the samples collected by the Perseverance rover and launch them into Mars orbit using a Mars Ascent Vehicle (MAV) for future Earth return. The telecommunications delay from Earth to Mars requires autonomy on board the spacecraft for phases of the mission like Entry, Descent &amp; Landing (EDL) and MAV Launch, given limited possible operator intervention. Fault protection (FP) encapsulates these autonomous system behaviors, which aim to protect the spacecraft by limiting, or detecting and responding to, anomalies. In order to provide sufficient coverage of the possible faults a system may encounter, multiple FP analyses are needed to identify and analyze the fault set of a system to guide future design iterations. This thesis focuses on three tools: Fault Containment Region (FCR), Failure Mode Effects &amp; Criticality Assessment (FMECA), and Fault Tree Analysis (FTA). FCRs are used to identify the boundaries at which faults can occur and propagate in a system, making them useful tools for defining functional boundaries in a system and identifying areas that are single-string, or have no redundancy. FMECAs and FTAs use a bottom-up and top-down approach, respectively, to identify possible faults and the associated consequences and impacts of each anomaly; together, these tools provide a comprehensive fault set to be used in FP architecture design. Using these tools demonstrates how FP design factors into engineering trades – monitoring or additional redundancy adds cost and complexity – and thus the results of these analyses need to be used iteratively with the system design to determine the best approach. 
As such, it is shown that a majority of EDL and MAV Launch elements are single-string, and while there are opportunities for adding redundancy in EDL sensors, there are few options for MAV Launch given its engineering constraints. While both phases have little redundancy, the option space for EDL is better known given JPL’s multiple successful past landings. Future work should conceptualize possible areas of added redundancy for the MAV to lower overall mission risk.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Silicon Photomultipliers as Free Space Optical Communication Sensors</title>
<link href="https://hdl.handle.net/1721.1/155423" rel="alternate"/>
<author>
<name>Gallo, Leonardo de la</name>
</author>
<id>https://hdl.handle.net/1721.1/155423</id>
<updated>2024-06-28T03:05:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Silicon Photomultipliers as Free Space Optical Communication Sensors
Gallo, Leonardo de la
Free-space optical communications (FSOC) is a growing field that presents an attractive alternative to the current technology standard of radio frequency (RF) communications. Typical optical carriers have smaller SWaP in comparison to RF systems due to the difference in required aperture sizes. The narrower beam divergence of optical wavelengths also results in a higher power efficiency for long-range communication links. This improvement in performance can be leveraged by platforms constrained by size, such as satellites. For the past ten years, the number of satellites launched has increased by an order of magnitude, with smallsats currently making up 96% of the launched vehicles. The use of FSOC terminals for smallsats enables higher data rates but requires precise pointing. An example nanosatellite FSOC mission, NASA’s CubeSat Laser Infrared CrosslinK (CLICK) B/C, addresses this by using a beacon-based pointing, acquisition, and tracking (PAT) system to correct for angular misalignment while an avalanche photodiode (APD) receiver detects the communication signal. The high gain of the APD allows the communication signal to be detected at link distances ranging from 25 km to 580 km for CLICK-B/C. In this work, we consider whether higher sensitivities may be achieved by using a silicon photomultiplier (SiPM) as the receive optical sensor. SiPMs are arrays of APDs operated in Geiger mode, characterized by nanosecond output pulses and gains on the order of 10⁶ electrons per photon. This thesis proposes using a SiPM in a 2×2 pixel configuration as a dual pointing and communication sensor for FSOC terminals in LEO. In this configuration the misalignment of the optical signal between the transmit and receive terminals can be directly measured by the SiPM, eliminating the need for the dedicated beacon laser and quadcell detector used for PAT. This reduces the overall SWaP of the communication terminal by a factor of 2. 
The pointing performance of the proposed SiPM configuration is characterized by calculating the noise equivalent angle (NEA) of the detector through simulation and experiment, and the communication performance is evaluated by testing the maximum detectable pulsing frequency of a laser. The simulation results support an NEA of 1 µrad and a maximum detectable pulsing rate of 2 GHz.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of Cavity Geometry to Improve Optical Quality of Windows in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/155422" rel="alternate"/>
<author>
<name>Schofield, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/155422</id>
<updated>2024-06-28T03:53:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Study of Cavity Geometry to Improve Optical Quality of Windows in Hypersonic Flow
Schofield, Matthew
The optical quality of the window-air system of a flight vehicle in hypersonic flow is simulated. The optical distortion of the window-air system is the metric of merit. Within the Earth’s atmosphere, vehicles at hypersonic speeds may generate viscous and high-temperature thermal boundary layers. These boundary layers induce a nonuniform distribution of temperature, density, and fluid velocity over the window-sensor system, leading to a degradation of the optical quality of the system. The heat flux into the system is simulated for various geometries (length-to-depth ratios). Computer-simulated flow fields and the time-development of different measures of optical quality are produced using US3D. Conjugate heat transfer is used to simulate solid temperature development, with Aluminum-6061 as the material for the vehicle solid (frame) and sapphire (Al₂O₃) for the window. Optimal window-air system configurations are discussed for a Mach 7 vehicle at 20 km.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Traveling Salesman Problem in Multi-Agent Systems with Practical Constraints</title>
<link href="https://hdl.handle.net/1721.1/155420" rel="alternate"/>
<author>
<name>Yang, Ruixiao</name>
</author>
<id>https://hdl.handle.net/1721.1/155420</id>
<updated>2024-06-28T03:03:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimizing Traveling Salesman Problem in Multi-Agent Systems with Practical Constraints
Yang, Ruixiao
The Traveling Salesman Problem (TSP) is a fundamental challenge in multi-agent systems, particularly in task allocation scenarios. Traditional models considering the unconstrained multi-agent TSP, which requires multiple salesmen to visit all customers collectively, often fail to produce feasible solutions for real-world applications due to practical constraints. To address this gap, we explore two prevalent constraints: energy limitations and aerial robot collaboration. We introduce two novel formulations: the Multi-Agent Energy-Constrained TSP (MA-ECTSP) and the Multi-Agent Flying Sidekick TSP (MA-FSTSP). The MA-ECTSP considers constraints such as limited battery levels and inter-agent conflicts at replenishment sites, while the MA-FSTSP models scenarios where multiple trucks, each equipped with several drones, collaborate to visit all customers, with trucks restricted to roads and drones having greater freedom in their flight paths. We propose a three-phase framework that first deconstructs these complex problems into more manageable single-agent versions, then optimizes them separately without constraints as heuristics, and finally integrates the heuristics and optimizes under the practical constraints. For the MA-ECTSP, we decompose the instance into smaller sub-problems by splitting the minimum spanning tree (MST), solve each using a combination of TSP solvers and heuristic searches, and then aggregate the tours into a feasible solution using a Mixed-Integer Linear Program (MILP) with significantly fewer variables and constraints. For the MA-FSTSP, we initially decompose the problem into subproblems of one truck with multiple drones, compute routes for trucks without drones, and use these in the final phase as heuristics to optimize both drone and truck routes concurrently. Our approach demonstrates significant effectiveness and scalability compared to existing baselines, as validated on real-world road networks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Impact Analysis of Direct Air Capture Deployment</title>
<link href="https://hdl.handle.net/1721.1/155419" rel="alternate"/>
<author>
<name>Housen, Tara</name>
</author>
<id>https://hdl.handle.net/1721.1/155419</id>
<updated>2024-06-28T03:05:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Climate Impact Analysis of Direct Air Capture Deployment
Housen, Tara
Direct air capture (DAC) is a negative emissions technology (NET) that can contribute to mitigating climate change impacts by extracting CO₂ from the atmosphere. Given its low technical maturity, uncertainties persist regarding DAC's cost, scalability, and life-cycle emissions. In this analysis I assess liquid and solid DAC technologies powered by various energy sources to provide insights into its current and future economic viability and its net GHG emissions impact. My findings show electric DAC configurations relying on renewable electricity have high carbon removal efficiency and relatively low cost. Additionally, I quantify the climate impact of large-scale global DAC deployment with investments of 0.5%, 1%, 1.5%, and 2% of the global GDP. I integrate electric DAC plants powered by renewable energy with co-located CO₂ storage sites. This analysis reveals that for scenarios with high anthropogenic emissions, DAC investments of up to 2% of global GDP cannot stabilize CO₂ concentrations. The results indicate that the 1.5°C goal can be achieved with an investment of 1.5% of the global GDP if cumulative emissions remain within 1178 GtCO₂; or with an investment of 0.5% of the global GDP if cumulative emissions remain within 981 GtCO₂. Alternatively, the 2°C goal can be achieved with an investment of 1.5% of the global GDP if cumulative emissions remain within 2750 GtCO₂; or with an investment of 0.5% of the global GDP if cumulative emissions remain within 1178 GtCO₂.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Experiments for Contrail Avoidance</title>
<link href="https://hdl.handle.net/1721.1/155413" rel="alternate"/>
<author>
<name>Kigotho, Olivier Ng'weno</name>
</author>
<id>https://hdl.handle.net/1721.1/155413</id>
<updated>2024-06-28T03:38:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design of Experiments for Contrail Avoidance
Kigotho, Olivier Ng'weno
Condensation trails (contrails) are line-shaped clouds that form behind aircraft and contribute more to climate change each year than any other form of aircraft emissions. While most contrails have little effect on the climate because they dissipate quickly, contrails persist when they form in parts of the atmosphere that are ice-supersaturated (ISS). These ISS regions are often shallow and can be avoided by small deviations in altitude. However, it is expensive to test whether these deviations are effective, as conducting an experiment requires deviating commercially scheduled flights from their typical cruise altitude. Meanwhile, metrics that can compare the costs and benefits of performing contrail avoidance deviations have not been developed. This thesis shows that measuring the total contrail length avoided relative to the total length of deviations is a way to compare the costs and benefits of contrail avoidance. The results of a Monte Carlo simulation show that a paired difference test will likely reduce the number of samples necessary for statistical significance relative to a randomized control trial. On the other hand, a randomized complete block design with blocking for engine efficiency will not significantly affect the statistical power of the experiment. However, the instrument used to measure contrails will have the greatest effect on the number of samples needed, because the number of samples necessary for statistical significance scales inversely with the probability that an instrument will observe a contrail. Finally, these simulations suggest that the benefit of contrail avoidance is sensitive to costs of performing deviations besides fuel burn. Therefore, a contrail avoidance policy should prioritize avoiding longer contrails over shorter ones to reduce the number of deviations necessary for a given benefit. 
It is expected that contrail avoidance experiments will be necessary at multiple stages of scaling up the contrail avoidance system. As a result, these experiment designs will be useful for comparing different strategies of contrail avoidance and different prediction systems. Knowing how to measure the effect of contrail avoidance takes us one step closer to mitigating the climate impacts of the aviation industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On stress, strength, and failure in asteroids during planetary entry</title>
<link href="https://hdl.handle.net/1721.1/155412" rel="alternate"/>
<author>
<name>Rulko, Theo Artur</name>
</author>
<id>https://hdl.handle.net/1721.1/155412</id>
<updated>2024-06-28T03:28:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On stress, strength, and failure in asteroids during planetary entry
Rulko, Theo Artur
Efforts to characterize the danger posed by asteroids have motivated the modeling of their entry and breakup in Earth’s atmosphere. These models, crucial to planetary defense efforts, necessitate an understanding of the physics underlying fragmentation — including knowledge of key governing physical properties such as strength. Recovered meteorites provide some of the best evidence for these properties. However, their measured strengths are often orders of magnitude higher than those inferred from meteor observations. In this thesis, we seek to provide a full-field description of the stresses that develop in monolithic meteors as they enter the atmosphere and deform, to shed light on the fragmentation process. To quantify those stresses, we develop a simple model of meteor entry that treats the bolide as a deformable body subject to suitable aerodynamic, inertial, and centrifugal loads. We apply these external loads via the Meteor Equations in conjunction with modified Newtonian aerodynamic theory at high Mach numbers. First, we compute an analytical series solution to the stress field in an idealized case and show that, unlike what is classically assumed, the tensile stresses in asteroids may be as much as 20 times lower than the ram pressure. Then, we conduct finite-element simulations of meteor falls for non-ideal asteroids, and show that our conclusions hold for all but the most irregularly shaped bodies, where geometric stress concentrations may cause early fragmentation. Finally, we simulate the breakup process in select cases by recourse to the discontinuous Galerkin / cohesive zone method, confirming that cracks nucleate in accordance with our analytical predictions. We conclude that this reduction factor is an important parameter in the modeling of asteroid entry and fragmentation and that, in combination with Weibull-type size-strength scaling laws, it may help shed light on the observed discrepancy between meteor and meteorite strengths.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Processes for the Fabrication of SU-8 Structures and Sputtered Materials on Porous Glass for Electrospray Thruster Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155410" rel="alternate"/>
<author>
<name>Nachtigal, Catherine J.</name>
</author>
<id>https://hdl.handle.net/1721.1/155410</id>
<updated>2024-06-28T03:35:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Processes for the Fabrication of SU-8 Structures and Sputtered Materials on Porous Glass for Electrospray Thruster Manufacturing
Nachtigal, Catherine J.
Electrospray thrusters are electric propulsion devices that generate thrust by applying an electric potential between an emitter, a concentrated point to which propellant is fed, and downstream extractor electrodes; the resulting high electric field at the emitter accelerates the propellant. Current electrospray thruster designs use sharp micron-scale cone-shaped emitters made from porous materials to generate ion emission through passive propellant feeding, but this design has flaws that affect its lifetime, reliability, and performance. High specific impulse thruster firing occurs when operating in the purely ionic regime (PIR), in which an ionic liquid propellant (a room-temperature molten salt or liquid metal) emits individual ions rather than larger droplets. These emitters must be built on the micron scale to achieve PIR emission, so they operate as large monolithic arrays with a single extractor to produce a usable amount of thrust, such that the failure of one emitter out of thousands could lead to full extractor and device failure. Further, the broad parameter space (geometry, flow path, insulation, etc.) is currently not selected according to the optimal requirements for operation in the PIR. Recent simulations show that PIR emission can be achieved in a relatively narrow domain that depends on the applied electric field, meniscus size, and hydraulic impedance for flat panel capillary emitters. These capillary emitters can be designed with individualized extractors connected through a series of fuses, isolating any electrical short to a single emitter. Photolithography is a useful micromanufacturing tool that has not yet been utilized to build solid structures on top of porous structures, because a porous substrate would take up any liquid photoresist applied during fabrication, causing the substrate to lose its porosity. 
To prevent this, and to allow solid structures to be formed on top of a porous substrate for electrospray thruster applications, this thesis develops a manufacturing plan in which the pores within the substrate are loaded with a volatile organic compound (VOC), allowing a structure to be fabricated on the substrate surface via photolithography without the material entering the substrate’s pores. To restore the substrate’s porous structure, the VOC is removed post-manufacturing via sublimation and an acetone wash. Using the manufacturing techniques described in this thesis, a novel electrospray thruster design consisting of capillaries and fuses, intended to optimize PIR performance and prevent the propagation of electrical shorts, is proposed to greatly increase the performance and reliability of electrospray thruster devices.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Sustainable Aviation Fuel Production Potential Using Crop Allocation Optimization</title>
<link href="https://hdl.handle.net/1721.1/155409" rel="alternate"/>
<author>
<name>Shu, Yuxin</name>
</author>
<id>https://hdl.handle.net/1721.1/155409</id>
<updated>2024-06-28T03:01:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessment of Sustainable Aviation Fuel Production Potential Using Crop Allocation Optimization
Shu, Yuxin
Sustainable Aviation Fuel (SAF) has been recognized as a viable near- to medium-term solution for decreasing carbon emissions in the aviation industry. Global SAF production, however, is limited and falls well short of the International Air Transport Association’s (IATA) goal of achieving net-zero carbon emissions by 2050. This thesis quantifies the global SAF production potential under different crop allocation strategies. Biomass potential is quantified by land suitability and agricultural availability. An optimization model is developed using binary integer linear programming with three crop allocation strategies for 2050 and 2100: fuel maximization, emissions minimization, and land use minimization. The results are shown through six case studies: the United Kingdom, Japan, Australia, Kenya, Brazil, and the United States. Under the Intergovernmental Panel on Climate Change (IPCC) climate scenarios, the globally suitable land can meet and exceed the International Energy Agency (IEA) requirement for biomass cultivation for the aviation sector. The demand for jet fuel in the U.S. can be fulfilled with 100% SAF, resulting in 21.3% emissions savings if optimized for minimum emissions and assuming the use of energy crops. Incorporating lignocellulosic biomass could yield an additional 63.8% reduction in emissions. The study also shows that Japan and the United Kingdom have insufficient agricultural potential to meet their respective domestic SAF demands. In contrast, Australia, Kenya, Brazil, and the United States have agricultural potential that meets or exceeds their respective SAF needs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Technical and Policy Needs Analysis for Space Traffic Management of Low Lunar Orbit</title>
<link href="https://hdl.handle.net/1721.1/155401" rel="alternate"/>
<author>
<name>Kirkpatrick, Courtney R.</name>
</author>
<id>https://hdl.handle.net/1721.1/155401</id>
<updated>2024-06-28T03:21:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Technical and Policy Needs Analysis for Space Traffic Management of Low Lunar Orbit
Kirkpatrick, Courtney R.
The number of artificial objects in space has grown exponentially in the last decade, encouraging a greater focus on space safety and sustainability. Much of this focus is on the detection, tracking, cataloguing, and coordination of objects in space, also known as Space Traffic Management, which serves to prevent collisions in orbit. The cost of a collision in space is often very high: loss of mission, loss of societal support, or even loss of life. Beyond geosynchronous orbit, the Artemis program brings renewed excitement for lunar operations, and many countries plan to send missions to the Moon in the coming decades. Because this topic is quite future-looking, there are many gaps in research related to lunar Space Traffic Management. This thesis begins filling these gaps by answering whether Space Traffic Management will be necessary for low-altitude selenocentric orbits. It analyzes the likelihood of collisions in Low Lunar Orbit using NASA's General Mission Analysis Tool and a GRAIL-based gravity model of degree and order 70 x 70 to propagate selenocentric orbits. These propagations are run using high-performance computing through the MIT SuperCloud. Methods of preventing collisions are discussed alongside the propagation analysis, and recommendations are provided on which satellite should maneuver when both have the capability. The analysis found that impulsive burns are viable solutions for avoiding collisions. This thesis also promotes proactive development of a Space Traffic Management system for Low Lunar Orbit by discussing five main policy questions focused on the sustainability of Low Lunar Orbit. For each question, the current solution used around Earth is given, followed by a discussion of the possible solutions that could be implemented in Low Lunar Orbit.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Iron Production by Molten Sulfide Electrolysis</title>
<link href="https://hdl.handle.net/1721.1/155400" rel="alternate"/>
<author>
<name>Suryarao, Kimaya P.</name>
</author>
<id>https://hdl.handle.net/1721.1/155400</id>
<updated>2024-06-28T03:21:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Iron Production by Molten Sulfide Electrolysis
Suryarao, Kimaya P.
With greater urgency to combat the detrimental effects of global warming, industries globally, the iron and steel industry included, have pledged to reach net-zero carbon emissions or become carbon neutral by 2050. With the exponential increase in the production of and demand for steel, the carbon footprint of the industry has also been rising at a high rate, accounting for roughly 10-11% of global carbon emissions. Present state-of-the-art steel production technologies have not been environmentally benign due to their inextricable dependence on carbon, making complete elimination of GHG emissions challenging. As renewable energy becomes a reality for industrial usage, efforts to decarbonize steel manufacturing motivate a key need for technologies that use only electricity for iron ore reduction. Herein, the electrolytic production of molten iron via a novel sulfide route, molten sulfide electrolysis (MSE), is investigated. Experimental evidence for electrolysis and the key attributes and underlying thermodynamics of MSE for iron production are investigated and discussed, along with sulfidation, the feedstock preparation step for the MSE process.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limitations of Commercial Aviation Safety Assessment Standards Uncovered in the Wake of the Boeing 737 MAX Accidents</title>
<link href="https://hdl.handle.net/1721.1/155396" rel="alternate"/>
<author>
<name>Lopes Rose, Rodrigo</name>
</author>
<id>https://hdl.handle.net/1721.1/155396</id>
<updated>2024-06-28T03:31:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Limitations of Commercial Aviation Safety Assessment Standards Uncovered in the Wake of the Boeing 737 MAX Accidents
Lopes Rose, Rodrigo
Commercial aviation accidents, though exceedingly rare, come at a large human, economic, and social cost. Therefore, different stakeholders in industry and government have collaborated to develop standard processes for developing aircraft and assessing their safety, the most widely used being the Society of Automotive Engineers’ (SAE) Aerospace Recommended Practices (ARPs) 4754 and 4761. However, most of the engineering techniques used for aircraft development and safety assessment were developed in the mid-20th century and formalized into these standards in the 1990s. Modern aircraft often involve complex interactions between hardware, software, and humans, and the engineering techniques used to analyze these systems have not kept pace with technological development. This thesis studies two recent accidents involving the Boeing 737 MAX (Lion Air flight JT610 and Ethiopian Airlines flight ET302) to identify the limitations in aviation safety assessment guidance that contributed to these accidents. A new accident analysis methodology called Causal Analysis based on Systems Theory (CAST) was applied to the 737 MAX accidents to understand why the complex interactions leading to the accidents were not identified during the safety assessment process. The analysis uncovered four main limitations in safety assessment guidance that contributed to the accidents: (a) limited integration of human factors and safety, (b) limited guidance for identifying assumptions, (c) limited ability to capture non-failure-based causal scenarios, and (d) limited ability to understand complex nonlinear causal relationships. A new hazard analysis tool called System-Theoretic Process Analysis (STPA) was then applied to the same systems involved in the 737 MAX accidents to evaluate whether STPA can address the identified limitations. 
STPA’s scenario-based framework, which incorporates humans and software into the hazard analysis, was found to support validation of human response assumptions, identification of new assumptions, assessment of the safety of intended behavior, and understanding of circular causality and other non-linear causal factors.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Assessment of Efficiency in Arbitrary Air-Breathing Power Systems</title>
<link href="https://hdl.handle.net/1721.1/155392" rel="alternate"/>
<author>
<name>Giroux, Wyatt</name>
</author>
<id>https://hdl.handle.net/1721.1/155392</id>
<updated>2024-06-28T03:05:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling and Assessment of Efficiency in Arbitrary Air-Breathing Power Systems
Giroux, Wyatt
The push for net-zero carbon emissions in the aviation sector by 2050 has resulted in an increasing amount of work being done to analyze the benefits of emergent technologies. Aircraft propulsion systems are a common subject of such research, and studies of some proposed architectures, such as hybrid-electric powertrains, have suggested potential fuel-burn and nitrogen oxide emissions reductions of up to 10% and 4.9%, respectively. When attempting to refine and compare these systems, efficiency is a commonly used metric. Efficiency models provide an understanding of where and how energy is being dissipated in a given system, making them invaluable design and evaluation tools. Until recently, the traditional thermal/propulsive efficiency breakdown has been used to model gas-turbine engines. However, this model has two major deficiencies. First, the lack of a per-component efficiency model restricts understanding of system energy dissipation to either thermodynamic or propulsive losses. Second, the traditional model is unable to capture systems utilizing additional energy sources (batteries, fuel cells, etc.) and their respective conversion pathways. While individual studies have created efficiency models for unconventional systems, these models are either specific to a given architecture or are only applicable to a specific class of engines. This makes comparison between specific terms in existing efficiency models impossible.&#13;
&#13;
This thesis presents the Modular Efficiency Model (MEM), which constructs low-level efficiency models that accurately represent energy flow pathways and are algebraically consistent across arbitrary collections of propulsion system components. This is done by tracking the kinetic energy flow available for propulsion (expanded flow power) across each component in a system. MEM provides a more detailed breakdown of useful energy dissipation, the relative influence of streams and components, and individual powertrain efficiencies that can be meaningfully compared to other systems. MEM is demonstrated in this work by comparing the performance of unmixed, mixed-flow, and hybrid-electric engine architectures. We identify high fan pressure ratio systems with low fan diameter as candidates for effective mixer use. For hybrid-electric systems, we find that a 3.2% reduction in whole-mission fuel burn is possible at the cost of carrying only 50% of the original aircraft payload. Numerous detailed future studies utilizing MEM are recommended, using this thesis as a baseline example of the use of MEM in analyzing and comparing novel architectures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of a new metric system to regulate NOₓ aircraft emissions at cruise</title>
<link href="https://hdl.handle.net/1721.1/155378" rel="alternate"/>
<author>
<name>Guenard, Adrien</name>
</author>
<id>https://hdl.handle.net/1721.1/155378</id>
<updated>2024-06-28T03:37:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessment of a new metric system to regulate NOₓ aircraft emissions at cruise
Guenard, Adrien
NOₓ emissions represent the largest source of air quality impacts attributed to aircraft. The largest part of these impacts is due to emissions during cruise, where 90% of fuel burn occurs. NOₓ emissions increase surface PM2.5 and O₃ concentrations, which adversely affect human health. This public health consideration has motivated the International Civil Aviation Organization (ICAO) Committee on Aviation Environmental Protection (CAEP) to set standards aimed at constraining NOₓ emissions. While the current Landing and Takeoff (LTO) regulation is designed to control emissions levels in the vicinity of the airport, emissions above 3,000 ft are not yet regulated, and the LTO regulation is limited in its ability to constrain cruise NOₓ emissions. This observation motivated the investigation of new NOₓ metric systems focusing on cruise emissions. In this thesis, several candidate NOₓ metrics were defined. The ability of each metric to represent cruise emissions was assessed quantitatively by computing the Pearson correlation coefficient between the candidate metrics and estimates of emissions from fleets of aircraft flying real-world routes, computed using an aircraft emission inventory code. Based on correlation criteria, this thesis demonstrates that a new NOₓ regulation defined as a weighted sum of emissions indices at several intermediate static thrust points can better constrain cruise emissions than the current LTO regulation. Additionally, this new regulation would not necessitate a significant change in the emissions certification process. The focus of this thesis was to establish the metric value (the quantity to be measured) within the regulation. The limit levels to be set on the metric value remain to be determined in order to comprehensively define the cruise NOₓ regulation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plasma-based CO₂ Conversion for Mars ISRU</title>
<link href="https://hdl.handle.net/1721.1/155377" rel="alternate"/>
<author>
<name>McKinney, Lanie G.</name>
</author>
<id>https://hdl.handle.net/1721.1/155377</id>
<updated>2024-06-28T03:10:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Plasma-based CO₂ Conversion for Mars ISRU
McKinney, Lanie G.
Plasma-based CO₂ conversion is a promising power-to-gas chemical synthesis process for Mars In-Situ Resource Utilization (ISRU). The abundant CO₂ in the Martian atmosphere can be converted into breathable oxygen and fuel for astronauts, enabling safer and more independent Mars missions while reducing launch costs. Nonthermal plasma technologies leverage electron excitation chemistry to achieve kinetic activation and split the stable bonds of CO₂ at modest temperatures and pressures compared to typical thermal conversion processes. Other benefits of plasma-based conversion technologies include compatibility with many feedstock gases, opening up possibilities for synthesizing other important chemicals in situ. Many plasma sources have been explored for CO₂ conversion, and an understanding of the fundamental atomic processes in CO₂ plasmas has led to validated chemical kinetic mechanisms. However, there have been limited parametric studies that directly compare the chemical performance of reactors under varied operating conditions. Understanding the coupled pressure, temperature, and reduced electric-field dependence of the relevant chemical processes will inform system-level reactor design, including the pumps, heaters, and electronics required. This thesis describes a parametric exploration of a nanosecond repetitively pulsed plasma reactor under different operating conditions to compare reactor performance and elucidate the important kinetic effects. A 0-D chemical kinetic model is developed and described in detail, building upon previous work to ensure the mechanism is appropriate for the defined conditions. A tradespace is constructed in terms of important performance metrics such as conversion, efficiency, and specific energy input. To understand the primary kinetic pathways, a first-order sensitivity analysis is conducted on selected conditions. 
This work contributes a robust analysis of nanosecond repetitively pulsed discharge (NRPD) reactor performance, extending fundamental plasma studies toward the engineering of a competitive technological candidate for Martian ISRU.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spacecraft Orbiting and Uncertainty - Planning Surveillance</title>
<link href="https://hdl.handle.net/1721.1/155368" rel="alternate"/>
<author>
<name>Nikolova, Joana N.</name>
</author>
<id>https://hdl.handle.net/1721.1/155368</id>
<updated>2024-06-28T03:22:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Spacecraft Orbiting and Uncertainty - Planning Surveillance
Nikolova, Joana N.
Scheduling of the Space Surveillance Network (SSN) is a crucial operation for maintaining safety and operations in Earth’s orbit. However, the capabilities of the SSN are limited, and the number of objects being tracked increases every year. This work proposes harnessing imitation learning (IL) to develop explainable schedules by learning from approved schedules rather than by hand-crafting objective functions. To that end, a graph structuring of the problem is proposed that allows learning from expert solutions. Importantly, this framework also removes the fragmentation and discretisation requirements in the time and space domains that are present in other solutions and lower the asymptotic efficiency that can be achieved. However, the models trained in this work did not achieve these goals and showed a very strong competition between choosing the correct pass in which to observe an object and choosing the correct time within that pass. The trained models did, however, largely maintain performance on data inputs outside of the training distribution. Overall, this thesis provides the necessary background to understand the principles of decision making for developing an SSN schedule, shows the setup of a graph structure as the basis of an IL algorithm for scheduling, and presents the results obtained to this point.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Planning and Deployment of Aerial Assets</title>
<link href="https://hdl.handle.net/1721.1/155367" rel="alternate"/>
<author>
<name>Saravanan, Akila</name>
</author>
<id>https://hdl.handle.net/1721.1/155367</id>
<updated>2024-06-28T03:14:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Strategic Planning and Deployment of Aerial Assets
Saravanan, Akila
The rapid deployment of fleets of small, uncrewed aircraft (drones) for tasks like package delivery or search-and-rescue in the immediate aftermath of a natural disaster is among the most vital and common applications of advanced air mobility. Recognizing that successful drone missions depend on pre-established, well-positioned bases and efficient task allocation, this work presents a generalizable model for base positioning and routing in diverse applications. The proposed model prioritizes bases that both maximize operational coverage and enable rapid responses to high-demand areas. Additionally, the framework integrates a vehicle routing component to optimize drone flight paths for efficient task completion in the tactical portion of drone-based operations; this component is the primary focus of this work. Beyond the theoretical formulation, the models are validated through case studies examining post-flooding search-and-rescue in the Iwate prefecture of Japan and package deliveries in the Austin, TX metropolitan area.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe and Efficient Motion Planning in Robotic Manipulation through Formal Methods</title>
<link href="https://hdl.handle.net/1721.1/155366" rel="alternate"/>
<author>
<name>Yu, Mingxin</name>
</author>
<id>https://hdl.handle.net/1721.1/155366</id>
<updated>2024-06-28T03:40:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Safe and Efficient Motion Planning in Robotic Manipulation through Formal Methods
Yu, Mingxin
Manipulating rigid body objects in crowded environments poses significant challenges due to the need for rapid, real-time planning and the assurance of safe operational paths.&#13;
These challenges stem from the varying shapes of the manipulated objects and the high-dimensional nature of manipulators. &#13;
&#13;
This thesis addresses these issues by developing (1) a mixed-integer linear programming (MILP)-based approach to plan safe paths for rigid-body objects; and (2) a learned control barrier function (CBF) tailored for manipulators with multiple degrees of freedom (DoF) and an associated framework CBF-RRT to enable efficient planning for robotic manipulators. Comprehensive experimental results have shown that the proposed methods outperform baseline methods, providing tools for improving the safety and efficiency of robotic manipulators in complex environments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low Earth Orbit Stability Analysis Using Monte-Carlo Techniques</title>
<link href="https://hdl.handle.net/1721.1/155365" rel="alternate"/>
<author>
<name>Appel, Grant F.</name>
</author>
<id>https://hdl.handle.net/1721.1/155365</id>
<updated>2024-06-28T04:04:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Low Earth Orbit Stability Analysis Using Monte-Carlo Techniques
Appel, Grant F.
Space domain awareness and the issue of space congestion have become critical topics, particularly with the proliferation of private companies launching large LEO constellations (LLCs), or mega-constellations, including SpaceX’s Starlink and Amazon’s Kuiper. This rapid expansion has led to increased concerns about space debris, potential collisions, and the possibility of reaching a critical threshold known as Kessler syndrome. To study and address these challenges, advanced modeling and simulation techniques are essential. There are largely two methods for modeling and simulation: particle-in-box (PIB) methods and Monte Carlo techniques. Historically, the simulation tools developed have been closed source because governments and companies want to maintain information security or ensure a profit. Recently, however, MIT’s ARCLab introduced MOCAT and MOCAT-MC, open-source toolboxes designed to propagate and model the LEO RSO population. This thesis focuses on MOCAT-MC, MIT’s Orbital Capacity Analysis Toolbox Monte Carlo. MOCAT-MC propagates individual space objects while accounting for various probabilistic factors common to LEO RSOs, including mission failure, collisions, and space weather, while remaining open to new capabilities. Utilizing MOCAT-MC, this thesis presents population and density analyses that reveal exponential growth in object populations, particularly at the higher altitudes of the LEO regime, where Kessler’s critical density is projected to be exceeded. Collision analyses are also performed, highlighting an alarming increase in potential collisions; the results show that even active satellites capable of conducting collision avoidance maneuvers (CAMs) are affected. Additionally, a brief study on Anti-Satellite (ASAT) test implications reveals that a single ASAT explosion contributes only marginally to debris counts given the collisions already occurring from other sources. 
This thesis outlines a comprehensive approach to utilizing the MOCAT-MC toolbox and its data outputs in order to reveal some of its many capabilities in studying the LEO orbital population and stability. Overall, this research underscores the urgency of space domain awareness and sustainability. By leveraging MOCAT-MC, the paper provides quantitative insights into LEO object density trends, collision probabilities, and ASAT implications. The findings highlight the escalating risks in space operations and emphasize the need for proactive measures to mitigate space congestion and ensure long-term space sustainability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidating Dual Methylcellulose-and-Oil-Nanoemulsion Thermoresponsive Gelation</title>
<link href="https://hdl.handle.net/1721.1/155361" rel="alternate"/>
<author>
<name>Wojtaszek, Mateusz M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155361</id>
<updated>2024-06-28T03:49:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Elucidating Dual Methylcellulose-and-Oil-Nanoemulsion Thermoresponsive Gelation
Wojtaszek, Mateusz M.
The rheological properties of a colloidal gel depend upon the microstructure of the gel and the identity of the load-bearing elements. Here we demonstrate a hybrid hydrogel-colloidal gel system composed of a methylcellulose-stabilized oil nanoemulsion. This system has tunable rheology with two distinct dominant load-bearing components, and the oil volume fraction determines which component gives rise to elasticity in the gel network. At low oil volume fraction, methylcellulose forms a fibrillar gel upon an increase in temperature. As oil volume fraction increases, methylcellulose is sequestered onto the droplet surfaces, decreasing the concentration of methylcellulose available for polymer gel formation and weakening the gel structure. Upon further increase in oil volume fraction, we hypothesize that an oil droplet network becomes the primary load-bearing structure, resulting in marked differences in rheology. This represents a unique system in which two gelation regimes with distinct identity and behavior are tuned using only nanoemulsion volume fraction. This behavior is made possible because the component that stabilizes the nanoemulsion, methylcellulose, is also active in the gel itself. Due to the components used, this system has potential uses in applications such as pharmaceuticals and food products.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Charting A Course Through Uncharted Terrain: Seeking Opportunities in the U.S. Real Estate Private Debt During Challenging Times</title>
<link href="https://hdl.handle.net/1721.1/155360" rel="alternate"/>
<author>
<name>Wang, Shao Lan</name>
</author>
<id>https://hdl.handle.net/1721.1/155360</id>
<updated>2024-06-28T03:34:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Charting A Course Through Uncharted Terrain: Seeking Opportunities in the U.S. Real Estate Private Debt During Challenging Times
Wang, Shao Lan
The current macroeconomic context is distinguished by an upsurge in private credit lending activities. This thesis delves into the real estate debt market, primarily within the United States, aiming to identify the principal participants and ascertain how their investment strategies align. It also scrutinizes how investors are managing and exploiting opportunities during this phase of uncertainty and challenging market conditions. The ultimate goal is to gain insight into the state of the real estate market during this specific time period and to understand the thought processes of investors. The thesis offers investors a perspective on the fundamental reasoning behind real estate debt investment, elucidating why, at this juncture, it has become a focal point of discussion in the real estate sector and why debt has garnered such significant attention.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recycling of Rare Earth Magnets with Sulfur Based Chemistries and High Temperature Processing</title>
<link href="https://hdl.handle.net/1721.1/155352" rel="alternate"/>
<author>
<name>Adams, Zachary Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/155352</id>
<updated>2024-06-28T03:29:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Recycling of Rare Earth Magnets with Sulfur Based Chemistries and High Temperature Processing
Adams, Zachary Kenneth
Rare-earth (RE)-iron-boron permanent magnets are among the strongest permanent magnets available and power essential technologies, from wind turbines to hard disk drives. The production of the rare earth metal for these magnets currently involves significant greenhouse gas emissions and other environmental impacts. Additionally, the production of these metals is geographically concentrated, as over 95% of rare earth metals are produced in China, which leads to supply-chain concerns and price fluctuations. Recycling of the rare earth elements is imperative to decrease net emissions and for the sustainability of RE-based magnets, but current magnet recycling is limited. In this work, sulfidation is investigated in the context of RE separation and recovery from RE-based magnets. Evidence of rare-earth separation and selectivity is presented, with insights into the underlying sulfidation mechanism involved for actual magnet processing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Identification CFD-Based Reduced-Order Modeling for Hypersonic Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155348" rel="alternate"/>
<author>
<name>Middleton, Kendra Lynn</name>
</author>
<id>https://hdl.handle.net/1721.1/155348</id>
<updated>2024-06-28T03:28:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System Identification CFD-Based Reduced-Order Modeling for Hypersonic Vehicles
Middleton, Kendra Lynn
System identification (SID) techniques were utilized to assemble reduced-order models for estimating the aerodynamic coefficients of a hypersonic vehicle subjected to flight conditions of interest. The reduced-order models combined the accuracy of high-fidelity hybrid Reynolds Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) computational fluid dynamics (CFD) models with the computing speed of low-fidelity inviscid CFD models, efficiently capturing the effects of complex physics in a timely manner. The vehicle geometry utilized for this study was the High-Speed Army Reference Vehicle (HARV), which was simulated in training maneuvers solved by HPCMP CREATE™-AV Kestrel, the high-fidelity CFD software. The resulting data was assessed for the information it supplied to the SID techniques, which were also performed in Kestrel as a post-processing operation. Many SID models with varying structures were built with the training maneuver data. The models were validated using a variety of different dynamic maneuvers and static configurations in an effort to understand the limits and capabilities of hypersonic SID modeling. The results suggested that insufficient low-rate data in the training maneuver was the largest impediment to SID model prediction accuracy. A single trajectory analysis revealed that simulation results using SID model prediction aerodynamic databases and using low-fidelity CFD model prediction databases did not drastically differ. Once constructed, the SID model demonstrated the capacity to predict much more complex databases in significantly less time. This highlights substantial benefits of utilizing SID reduced-order models in the design phase of hypersonic vehicles.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe Nonlinear Control Under Control Constraints via Reachability, Optimal Control and Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/155344" rel="alternate"/>
<author>
<name>So, Oswin</name>
</author>
<id>https://hdl.handle.net/1721.1/155344</id>
<updated>2024-06-28T03:04:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Safe Nonlinear Control Under Control Constraints via Reachability, Optimal Control and Reinforcement Learning
So, Oswin
Autonomous robots in the real world have nonlinear dynamics with actuators that are subject to constraints. The combination of the two complicates the task of designing stabilizing controllers that can guarantee safety, which we denote as the stabilize-avoid problem. Existing control-based techniques can provide safety and stability guarantees, but only under the assumption of unbounded control inputs. On the other hand, learning-based techniques can handle control constraints but are often unable to correctly trade off between safety and stability.&#13;
&#13;
In this thesis, we take a step towards synthesizing controllers with improved safety and stability for high dimensional nonlinear systems with control constraints by combining techniques from reachability, optimal control, and reinforcement learning. We first propose a novel approach to solve constrained optimal control problems using deep reinforcement learning by using techniques from traditional constrained optimization, enabling the solution of stabilize-avoid problems for high-dimensional nonlinear systems with control constraints. Next, we present an alternate method of solving the stabilize-avoid problem using control barrier functions,&#13;
where we present an improved method for learning control barrier functions for nonlinear systems with control constraints by drawing on connections between reachability and deep reinforcement learning. &#13;
&#13;
We validate our proposed methods on a variety of benchmark tasks. Our experiments demonstrate the advantage of our methods over existing techniques in terms of improved safety rates and larger regions of attraction, especially in the case of high-dimensional systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Theoretic Process Analysis as a Practical Tool for Comprehensive Flight Test Hazard Identification</title>
<link href="https://hdl.handle.net/1721.1/155341" rel="alternate"/>
<author>
<name>Eisen, Noam D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155341</id>
<updated>2024-06-28T03:04:15Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Systems Theoretic Process Analysis as a Practical Tool for Comprehensive Flight Test Hazard Identification
Eisen, Noam D.
Flight test is an endeavor inherently imbued with risk. In order to conduct flight testing safely, hazards of consequence must be identified and mitigated in advance of testing. While adequate practices are widely in place for the mitigation of hazards that have been identified, the practices generally used to reveal and identify hazards in the first place rely on brainstorming and other fragmentary methods that can leave critical gaps in safety preparedness. Mainstream flight test risk management techniques such as Test Hazard Analysis (THA) rely on expert brainstorming for the identification of hazards, and lean heavily on experience and lessons learned from subjectively ‘similar’ past test programs. Frequently for a given new program, the THA report from a past program is simply duplicated in full, with edits then made to accommodate perceived differences. Such processes have left critical gaps in hazard identification coverage even where ‘similar’ technologies and test methods are concerned; moreover, as airborne technologies evolve, with increasingly complex systems interactions, software, and human/machine interplays, the gaps in hazard coverage are becoming ever more pronounced, leaving the legacy risk management techniques unable to support a level of safety that meets industry needs. With each hazard in a THA documented separately, and mitigations addressed individually to each hazard, no underlying framework is available to unify hazard identification or analysis across functionalities or disciplines. Safety reviews and preflight briefings based on THA become lengthy and disjointed, as well as potentially incomplete. Systems Theoretic Process Analysis (STPA) is a forward-looking safety analysis methodology grounded in systems theory. 
Based on the System-Theoretic Accident Model and Processes (STAMP) model, STPA is able to produce meaningful results even where other methodologies struggle, such as in systems involving software, human interactions, or other forms of complexity such as exist in aviation and flight test. This thesis proposes to apply STPA to the problem of hazard identification and management in flight test, specifically focusing on piloted (‘manned’) aircraft. The state of the art in THA is examined, and STPA and THA are compared in frameworks, constructs, and work products in the context of flight test. STPA is applied to an example flight test campaign to illustrate its use in test hazard identification. A final section describes more broadly how STPA could be incorporated into flight test organizations now, and in a future where STPA is more widely used by design and engineering departments as well.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Production of Bio-based Lactone Monomers for Intrinsically Recyclable Plastics</title>
<link href="https://hdl.handle.net/1721.1/155329" rel="alternate"/>
<author>
<name>Baston, Lucas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155329</id>
<updated>2024-06-28T03:50:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Production of Bio-based Lactone Monomers for Intrinsically Recyclable Plastics
Baston, Lucas A.
The development of intrinsically recyclable plastics is crucial to halt the accumulation of waste plastics in the environment. While great strides have been made in the design of novel polymers that exhibit desirable qualities and degrade back to their respective monomer under mild conditions, the development of scalable synthesis of the monomers for these plastics lags behind. This work aims to develop methods for the synthesis of monomers using heterogeneous catalysts to allow for scale-up. First, we used a high-throughput computational method to screen binding energies of key reaction species across more than 200 zeolite frameworks to identify potential catalysts that would selectively catalyze our probe reaction of methyl lactate lactonization to lactide. From these computations, we identified a titanium-containing zeolite with the MEL topology as a promising catalyst for this reaction in the gas phase. Continuous-flow kinetic studies revealed that Ti-MEL showed 40% increased selectivity to the lactide product at over twice the conversion of Ti-BEA and Ti-MFI. Second, we show a potential pathway for the production of α-cyclohexyl-δ-valerolactone (CVL) starting from formaldehyde and δ-valerolactone (DVL). We developed a continuous gas-phase reactor using alkaline earth oxides supported on silica as catalysts for an aldol condensation reaction. CaO and BaO showed 90% and 83% selectivity, respectively, to α-methylene-δ-valerolactone (MVL) at 60% DVL conversion. Following this, MVL was functionalized with 1,3-butadiene in a Diels-Alder addition to form the unsaturated form of our desired CVL monomer (CeVL). This reaction was catalyzed over Lewis acidic Sn-BEA catalysts with selectivities reaching 90% at mild temperatures of 55 °C. Finally, CeVL was hydrogenated to CVL over commercially available palladium on carbon catalysts with flowing hydrogen.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surface Curvature and Roughness Effects on Görtler Vortex Development in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/155316" rel="alternate"/>
<author>
<name>Smith, Shannon C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155316</id>
<updated>2024-06-28T03:52:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Surface Curvature and Roughness Effects on Görtler Vortex Development in Hypersonic Flow
Smith, Shannon C.
This work presents a computational and experimental investigation of surface roughness and concave curvature as control parameters on the development of Görtler vortices in hypersonic flow. Three-dimensional large eddy simulation (LES) was performed for two curvature cases using US3D, an unstructured-grid finite volume computational fluid dynamics (CFD) solver. Experiments were performed on two curvature cases and three roughness element shapes at the University of Texas at San Antonio (UTSA) Mach 7 Ludwieg tube wind tunnel facility. The goal of these studies was to examine how variations in surface roughness and curvature affect the downstream development and transition characteristics of the hypersonic boundary layer formed over concave models. It also serves to extend previous work on the effect of shaped surface roughness on the Görtler instability to the hypersonic regime. The included results demonstrate key features of the relationship between roughness effects, vortex development, and boundary layer transition in hypersonic flows dominated by the Görtler instability that can inform both engineering design and future research.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of factors affecting the cooling load for air conditioning</title>
<link href="https://hdl.handle.net/1721.1/155259" rel="alternate"/>
<author>
<name>Bates, Maurice Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/155259</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">A study of factors affecting the cooling load for air conditioning
Bates, Maurice Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1935; Includes bibliographical references (leaves xi-xiii).
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High pressure rectification</title>
<link href="https://hdl.handle.net/1721.1/155258" rel="alternate"/>
<author>
<name>Sundstrom, Warren E. (Warren Eric)</name>
</author>
<id>https://hdl.handle.net/1721.1/155258</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">High pressure rectification
Sundstrom, Warren E. (Warren Eric)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 32).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A portable analog-to-digital converter for the recording of sound surveys</title>
<link href="https://hdl.handle.net/1721.1/155249" rel="alternate"/>
<author>
<name>Bell, Chester Gordon.</name>
</author>
<id>https://hdl.handle.net/1721.1/155249</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">A portable analog-to-digital converter for the recording of sound surveys
Bell, Chester Gordon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1957; Bibliography: leaves 59-60.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A design for steel mitering lock gate</title>
<link href="https://hdl.handle.net/1721.1/155247" rel="alternate"/>
<author>
<name>Van, Yung Tsun.</name>
</author>
<id>https://hdl.handle.net/1721.1/155247</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1914-01-01T00:00:00Z</published>
<summary type="text">A design for steel mitering lock gate
Van, Yung Tsun.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1914; Includes bibliographical references.
</summary>
<dc:date>1914-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat transfer coefficients in a falling film condenser</title>
<link href="https://hdl.handle.net/1721.1/155242" rel="alternate"/>
<author>
<name>Bays, George Samuel.</name>
</author>
<author>
<name>Blenderman, Louis Morrall.</name>
</author>
<id>https://hdl.handle.net/1721.1/155242</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">Heat transfer coefficients in a falling film condenser
Bays, George Samuel.; Blenderman, Louis Morrall.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1935; Appendix contains numerous pamphlets.; Includes bibliographical references (leaf 97).
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A criminal courts, prison and hospital building</title>
<link href="https://hdl.handle.net/1721.1/155240" rel="alternate"/>
<author>
<name>Bartos, Armand Philip.</name>
</author>
<id>https://hdl.handle.net/1721.1/155240</id>
<updated>2024-06-12T06:07:43Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">A criminal courts, prison and hospital building
Bartos, Armand Philip.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1935
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wave length measurements in the spectrum of the neodymium arc</title>
<link href="https://hdl.handle.net/1721.1/155239" rel="alternate"/>
<author>
<name>Bartlett, William Walker, 1912-</name>
</author>
<id>https://hdl.handle.net/1721.1/155239</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">Wave length measurements in the spectrum of the neodymium arc
Bartlett, William Walker, 1912-
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1935
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spectrographic analysis of grain boundary segregates in cast monel metal</title>
<link href="https://hdl.handle.net/1721.1/155238" rel="alternate"/>
<author>
<name>Barclay, John A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155238</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Spectrographic analysis of grain boundary segregates in cast monel metal
Barclay, John A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perturbation theory in quantum mechanics</title>
<link href="https://hdl.handle.net/1721.1/155231" rel="alternate"/>
<author>
<name>Haas, Violet B., 1921-</name>
</author>
<id>https://hdl.handle.net/1721.1/155231</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">Perturbation theory in quantum mechanics
Haas, Violet B., 1921-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1949; Bibliography: leaf [35].
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distribution and behavior of trace metals in the subterranean estuary of an Arctic coastal lagoon</title>
<link href="https://hdl.handle.net/1721.1/155067" rel="alternate"/>
<author>
<name>Schaal, Isabel Vicenta</name>
</author>
<id>https://hdl.handle.net/1721.1/155067</id>
<updated>2024-05-25T03:36:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Distribution and behavior of trace metals in the subterranean estuary of an Arctic coastal lagoon
Schaal, Isabel Vicenta
Subterranean estuaries (STEs) can be an important location for biogeochemical reactions that may alter concentrations of chemical constituents of groundwater. With warming in the Arctic and the subsequent permafrost thaw, the relative importance of submarine groundwater discharge (SGD) to ocean chemical budgets will grow. In this study, we examined the distribution of select trace metals (Fe, Mn, V, U, Mo and Ba) in the STE, lagoon surface waters, and coastal sediments of Simpson Lagoon along the Beaufort Shelf of Alaska. This location is unique among studies as the STE consists of organic-rich sediments. Samples were collected over two years and throughout seasonal water conditions, including the melting, open-water, and freeze-up periods. Fe, Mn, V, and Ba mainly exhibited non-conservative additions within the estuary, with Fe concentrations being some of the highest among groundwater studies. U exhibited both non-conservative removal and addition in the estuary, and Mo exhibited mainly removal. In the lagoon, non-conservative addition of U allowed for the calculation of an SGD flux. This flux, along with a Ra-derived flux, was used to estimate metal fluxes into the lagoon. Fluxes for all metals were similar to or greater than river flux estimates in all months except for June, when SGD was likely nonexistent. These fluxes can be used to assess SGD impact on the coastal Arctic; however, for reactive metals, processes in the lagoon may continue to alter metal concentrations before mixing with the greater Arctic Ocean. This study provides some of the first estimates of trace metal concentrations and fluxes within Arctic subterranean estuaries and exhibits the importance of considering SGD when assessing metal input to the coastal Arctic.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bleeding Details</title>
<link href="https://hdl.handle.net/1721.1/155066" rel="alternate"/>
<author>
<name>Mohan, Sahil</name>
</author>
<id>https://hdl.handle.net/1721.1/155066</id>
<updated>2024-05-25T03:28:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bleeding Details
Mohan, Sahil
This thesis begins at my Nani Ji’s house. Movies depicting Hindu mythology played in the background of our family gatherings: movies where Hanuman would grow to the size of a mountain or Shiva would morph between genders. These shifts between scale, gender, and material affirmed the queerness I had yet to find words for. They taught me that boundaries expand and contract. Everything was interconnected. I could be a mountain.  This preoccupation with Hindu Gods led me to their home: Mount Meru. Hindu, Jain, and Buddhist cosmologies consider this sacred five-peaked mountain to be the center of all physical, metaphysical, and spiritual universes among other centers. Religious anecdotes imply that the Hindu-Kush Himalayan Ice Sheet is this focal point. And the Ice Sheet sustains a complex history: a history of water in its many forms, a history of religious diversity and spiritual importance, a history of war and boundaries.  Boundaries drawn like a line.  And so too, architecture continues to occupy itself with the line. It considers and abstracts an ideal future by drawing precise lines that separate buildings from environment. This abstraction may be necessary for the field, but it leads to a fixation on strategies centered on segregation, precision, and predictability. Drawings have become a passive instrument of information. They imply an impossible neutrality which produces objects that endure, rather than bodies that engage their contexts.  But a world assembled by determined moments and perfectly fixed parts could harbor no life. Nothing could move or become. What if the methods of architecture reflected the flow of water or the fluidity of human embodiment? This thesis is as much a question as it is an answer. Can architecture cross and blur boundaries and binaries: queer and heteronormative, land and water, human and nature? When and how would it all dissolve? What happens to architecture when the details bleed?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Stakeholder-informed Evaluation of Global Climate Temperature Response Functions</title>
<link href="https://hdl.handle.net/1721.1/155065" rel="alternate"/>
<author>
<name>Womack, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/155065</id>
<updated>2024-05-25T03:13:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Development and Stakeholder-informed Evaluation of Global Climate Temperature Response Functions
Womack, Christopher
Modern climate models allow for accurate simulation over a range of future climate scenarios. However, there exists a significant gap in terms of speed, accuracy, and overall intuitiveness between the state-of-the-art and more generally accessible tools, especially in tools used for climate education. Climate emulators provide a potential closure for this gap, and a significant body of work has shown their efficacy in providing a relatively lightweight method to reproduce the results of full-scale Earth System Models. In this thesis, I demonstrate a novel methodology for climate emulation based on the response of the climate system to effective radiative forcing (ERF). While previous work has demonstrated the efficacy of impulse response functions as a tool for climate emulation, critically, these methods are largely non-generalizable to new scenarios and are inaccessible to more general audiences. To remedy this, we present a general framework for integrating stakeholder analysis into the model development process to ensure all key stakeholder needs are identified and met at each step of development. This framework is then applied in the context of climate emulator development, showcasing how this integrated stakeholder analysis is able to increase emulator salience, credibility, and legitimacy for our target audience. We present results from an application to near-surface air temperature based on ERF and temperature data taken from experiments in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). We evaluate the emulator using additional experiments taken from the CMIP6 archive, including the Shared Socioeconomic Pathways (SSPs), demonstrating accurate emulation of global mean and spatially resolved temperature change with respect to the outputs of the CMIP6 ensemble. Global absolute error in predicted temperature averages 0.25 °C, with a bias ranging from −0.14 to −0.04 °C. 
In addition, the comprehensive stakeholder analysis performed as a part of the development process affords the emulator ease of use and interpretability in its outputs while meeting all key stakeholder requirements. While it is unable to capture state-dependent climate feedbacks, such as the non-linear effects of Arctic sea ice melt in high-warming scenarios, our results show that the emulator is generalizable to any scenario independent of the specific forcings present.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Reinforcement Learning Algorithms for Nuclear Power Plant Fuel Optimization</title>
<link href="https://hdl.handle.net/1721.1/155061" rel="alternate"/>
<author>
<name>Seurin, Paul R.M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155061</id>
<updated>2024-05-25T03:01:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessment of Reinforcement Learning Algorithms for Nuclear Power Plant Fuel Optimization
Seurin, Paul R.M.
The nuclear fuel loading pattern optimization problem belongs to the class of large-scale combinatorial optimization and has been studied since the dawn of the commercial nuclear energy industry. It is also characterized by multiple objectives and constraints, which makes it impossible to solve explicitly. Stochastic optimization methodologies including Genetic Algorithms and Simulated Annealing are used by different nuclear utilities and vendors to perform fuel cycle reload design. Nevertheless, hand-designed solutions continue to be the prevalent method in the industry. To improve on state-of-the-art core reload patterns, we aim to create a method as scalable as possible that agrees with the designer’s goals of performance and safety. To help in this task, Deep Reinforcement Learning (RL), in particular Proximal Policy Optimization, is leveraged. RL has recently experienced a strong impetus from its successes applied to games, sometimes even reaching “super-human” performance. This thesis presents a first-of-a-kind approach to utilize deep RL to solve the loading pattern problem, and it could be leveraged for any engineering design optimization with an integer or combinatorial input structure. This work is also, to our knowledge, the first to propose a study of the behavior of several hyper-parameters that influence the RL algorithm via a multi-measure approach supported by statistical tests. To demonstrate its superiority against industry-preferred computational methods, we compared its performance against the most adopted legacy Stochastic Optimization (SO)-based approaches in the literature and the industry, namely Parallel Simulated Annealing with Mixing of States (PSA), Genetic Algorithm (GA), and a novel first-of-a-kind parallel Tabu Search (TS) we developed to this end. For this purpose, full software development was done from scratch to enable the application of RL and SO-based algorithm optimization with SIMULATE3 and visualization of the results. 
The algorithm is highly dependent on multiple factors, such as the shape of the objective function derived for the core design, which behaves as a fudge factor that affects the stability of the learning, but also on an exploration/exploitation trade-off that manifests through different parameters such as the number of loading patterns seen by the agents per episode, the number of samples collected before a policy update nsteps, and an entropy factor ent_coef that increases the randomness of the policy during training. We found that RL must be applied similarly to a Gaussian Process in which the acquisition function is replaced by a parametrized policy: in essence, a policy generates solutions, while a critic learns and evaluates the quality of these solutions. Then, once an initial set of hyper-parameters is found, reducing nsteps and ent_coef until no more learning is observed or instabilities occur will result in the highest sample efficiency, robustly and stably. Applying this approach resulted in an economic benefit of, on average, 540,000 and 650,000 $/year/plant for a 1000 MWe and 1200 MWe Nuclear Power Plant, respectively. Extending this approach to eleven classical benchmarks, we demonstrated that the methodology developed in this work is problem agnostic and can be seamlessly leveraged to use RL as an optimization tool elsewhere for problems with an integer or combinatorial input space. Although we have not demonstrated it on the nuclear power plant fuel optimization problem, the initialization of the state at the beginning of an episode was also investigated with the benchmarks. We established that initializing the episode with the state of the best solution ever found might be more suitable for problems with complicated reward functions, which is the case for our problem and aligns with the way core designers operate by iterating on the best solution found. 
We suggest, however, comparing with initialization from random state instances on a case-by-case basis, hence we have not included this observation as an essential element of the approach. We also showed that, by learning which solution to generate next intrinsically while marching down the objective space (in contrast to SO-based approaches, which do so randomly), the use of RL resulted in an algorithm that systematically found solutions of greater quality, and found them faster, than legacy approaches. This opens the curtains to a new optimization paradigm that could result in significant contributions in engineering fields beyond loading pattern optimization, especially when an expensive physics solver is required. Additional key observations include: (1) The RL algorithms cannot be applied without physics-based intuition provided during the search. This intuition can be built up in the construction of the action space (e.g., through pre-defined templates) and the reward signal. (2) Defining the frame of your optimization (e.g., here the necessity to obtain results within a day), the shape of the reward (e.g., magnitude and curvature), and understanding the degree of exploration/exploitation needed in your problem influences the value of the hyper-parameters chosen. (3) RL algorithms are highly sensitive to these hyper-parameters, but there is an approach (presented here) for gaining sample efficiency by playing with the exploration/exploitation trade-offs. (4) Because we are ultimately aiming at improving the economics of Nuclear Power Plants, utilizing the Levelized Cost of Electricity (LCOE) to rigorously assess the true economic performance of the different algorithm configurations was pivotal to measure the true importance of hyper-parameter tuning and the superiority of RL over legacy approaches. 
Overall, the methodology developed in this research supports four important new capabilities for core designers: (1) accelerating the design of new reactors by proposing efficient solutions within a reasonable amount of time; (2) ensuring the feasibility and quality of the resulting design, limiting the overhead time allocated to re-design; (3) providing a new set of computational methodologies that are more robust and stable than classical SO-based ones, resulting in higher economic gains for the existing fleet of operating reactors; and (4) offering a tool that could be leveraged in the future to gain managerial insights into strategies for loading pattern optimization problems beyond expert know-how. Keywords— Fuel loading pattern, Optimization, Reinforcement Learning, Proximal Policy Optimization
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Cognitive Reflection from Digital Fingerprints</title>
<link href="https://hdl.handle.net/1721.1/155059" rel="alternate"/>
<author>
<name>Jimenez, An</name>
</author>
<id>https://hdl.handle.net/1721.1/155059</id>
<updated>2024-05-25T03:35:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Predicting Cognitive Reflection from Digital Fingerprints
Jimenez, An
While social media is beneficial in facilitating social connections and spreading knowledge on a large scale, its negative impacts — the propagation of misinformation through networks and the emergence of echo chambers in particular — are consequential and dangerous, inducing a more divergent rather than cohesive society. What cognitive mechanisms are at play when users decide what to share and whom to follow on social media? A recent study provides evidence that users with higher Cognitive Reflection Test (CRT) scores — a popular measure of reflective thinking — are more discerning in their Twitter behavior (Mosleh et al., 2021). While previous research sheds light on this relationship between cognitive reflection and Twitter behavior, there is an opportunity to generalize these correlations to larger populations and across different social media platforms by building a computational model to predict cognitive reflection from social media activity, which is the focus of my project. Applying machine learning techniques to the dataset used in Mosleh's study, I created a model that predicts CRT scores from Twitter features such as Tweet content and accounts followed (followees), and also determined which features and combinations of features are most predictive of cognitive reflection. Correlations between predicted and actual CRT scores are strongest when predicting with information related to followees (&#119903; = 0.25) and followee bios (&#119903; = 0.24). Combining followee features and applying different regression models improves prediction accuracy (&#119903; = 0.29). These conclusions help form a more complete picture of how cognitive reflection relates to social media activity, which has important implications for how we can encourage more intentional social media use and, ultimately, reconnect divisive populations online.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Optimization and Cost Analysis of Electrochemical&#13;
Micromachining for Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155058" rel="alternate"/>
<author>
<name>Li, Mingyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/155058</id>
<updated>2024-05-25T03:52:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Process Optimization and Cost Analysis of Electrochemical&#13;
Micromachining for Volume Manufacturing
Li, Mingyuan
Microfluidic devices have found numerous applications in the medical device, pharmaceutical, and healthcare industries, resulting in an increasing demand for different types of microfluidic devices in various designs and materials, including metals such as stainless steel and titanium. Electrochemical micromachining (ECMM) is a powerful method for manufacturing small channels (~0.1 mm) on metal substrates, which can later be made into metal microfluidic devices, but so far it has only been studied in benchtop experiments. Scaling up this process to volume manufacturing is relatively unexplored. In this study, we examine the financial and performance benefits of ECMM in an industrial setting for manufacturing microfluidic devices. We conduct cost analysis and performance comparisons of ECMM and micro-milling, an alternative technology for making microchannels. Our findings demonstrate that channels manufactured using ECMM have less variation in the total volume of material removed when compared to micro-milling. However, the cost of ECMM is currently around 50% higher than micro-milling for the fluidic device analyzed here. By making a few simple design changes and optimizing the ECMM process, we will be able to achieve a &gt;20% cost saving compared to micro-milling. The second part of our study focuses on optimizing the ECMM process in terms of cycle time. The bottleneck for the entire process is the time for photoresist removal. By changing the solvent, agitation method, and hard baking time, we reduce the stripping time from hours or even days to just ~60 minutes, with a standard deviation of ~2.7 minutes, drastically reducing both the mean and the variation. Furthermore, our investigation finds a correlation between surface roughness and stripping time, which should be further controlled in the manufacturing process in the future.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A More Holistic Analysis of Privacy Risks in Transcriptomic Datasets</title>
<link href="https://hdl.handle.net/1721.1/155055" rel="alternate"/>
<author>
<name>Sadhuka, Shuvom</name>
</author>
<id>https://hdl.handle.net/1721.1/155055</id>
<updated>2024-05-25T03:28:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A More Holistic Analysis of Privacy Risks in Transcriptomic Datasets
Sadhuka, Shuvom
Gene expression data provides molecular insights into the functional impact of genetic variation, for example through expression quantitative trait loci (eQTL). With an improving understanding of the association between genotypes and gene expression comes a greater concern that gene expression profiles could be matched to genotype profiles of the same individuals in another dataset, known as a linking attack. Prior work demonstrating such a risk could analyze only the fraction of eQTLs that are independent of each other, due to restrictive model assumptions, leaving the full extent of this risk incompletely understood. To address this challenge, we introduce the discriminative sequence model (DSM), a novel probabilistic framework for predicting a sequence of genotypes based on gene expression data. By modeling the joint distribution over all variants in a genomic region, DSM enables an accurate assessment of the power of linking attacks that leverage all known eQTLs, with necessary calibration for linkage disequilibrium and redundant predictive signals. We demonstrate improved linking accuracy of DSM compared to two existing approaches on a range of real datasets including up to 22K individuals, suggesting that DSM helps uncover a substantial additional risk overlooked by previous studies. Our work provides a unified framework for assessing the privacy risks of sharing diverse omics datasets beyond transcriptomics.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The electrical strength of insulators in high vacua</title>
<link href="https://hdl.handle.net/1721.1/155035" rel="alternate"/>
<author>
<name>Backenstoss, Henry B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155035</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">The electrical strength of insulators in high vacua
Backenstoss, Henry B.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1935; Includes bibliographical references (leaf 46).
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differentiation between sulfur and phosphorous in Baumann printing</title>
<link href="https://hdl.handle.net/1721.1/155033" rel="alternate"/>
<author>
<name>Skidmore, Wilbur M.
            (Wilbur Manly)</name>
</author>
<id>https://hdl.handle.net/1721.1/155033</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Differentiation between sulfur and phosphorous in Baumann printing
Skidmore, Wilbur M.
            (Wilbur Manly)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some of the technical and economic problems of central station energy storage</title>
<link href="https://hdl.handle.net/1721.1/155032" rel="alternate"/>
<author>
<name>Sloan, Royal D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155032</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Some of the technical and economic problems of central station energy storage
Sloan, Royal D.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1936; Includes bibliographical references (leaves 70-73).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Removal of arsenic from hot sulfur dioxide gas</title>
<link href="https://hdl.handle.net/1721.1/155031" rel="alternate"/>
<author>
<name>Smith, Charles W.
            (Charles William)</name>
</author>
<id>https://hdl.handle.net/1721.1/155031</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Removal of arsenic from hot sulfur dioxide gas
Smith, Charles W.
            (Charles William)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 63).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A method of direct manufacture of hydrochloric acid solution</title>
<link href="https://hdl.handle.net/1721.1/155029" rel="alternate"/>
<author>
<name>Smith, Laxton M.
            (Laxton Montgomery)</name>
</author>
<id>https://hdl.handle.net/1721.1/155029</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">A method of direct manufacture of hydrochloric acid solution
Smith, Laxton M.
            (Laxton Montgomery)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 50).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity effect in spot welding of stainless steel</title>
<link href="https://hdl.handle.net/1721.1/155028" rel="alternate"/>
<author>
<name>Sweeney, James Augustus.</name>
</author>
<id>https://hdl.handle.net/1721.1/155028</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Proximity effect in spot welding of stainless steel
Sweeney, James Augustus.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1936; Includes bibliographical references (leaves 59-60).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factor influencing the translucency of porcelain</title>
<link href="https://hdl.handle.net/1721.1/155025" rel="alternate"/>
<author>
<name>Tarnopol, Milton Sidney.</name>
</author>
<id>https://hdl.handle.net/1721.1/155025</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Factor influencing the translucency of porcelain
Tarnopol, Milton Sidney.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mining and Metallurgy, 1936; Includes bibliographical references (leaves 87-89).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an automatic sideslip control for aircraft,</title>
<link href="https://hdl.handle.net/1721.1/155022" rel="alternate"/>
<author>
<name>Kendall, Delvin E.</name>
</author>
<author>
<name>Whitcomb, David W.</name>
</author>
<id>https://hdl.handle.net/1721.1/155022</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Development of an automatic sideslip control for aircraft,
Kendall, Delvin E.; Whitcomb, David W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1945; Bibliography: leaves 115-117.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kirchhoff approximation for rough surface scattering</title>
<link href="https://hdl.handle.net/1721.1/155021" rel="alternate"/>
<author>
<name>Mou, Alex.</name>
</author>
<id>https://hdl.handle.net/1721.1/155021</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Kirchhoff approximation for rough surface scattering
Mou, Alex.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1993; Includes bibliographical references (85-88).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>International comparative analysis of training requirements for technical professionals : a case study of the nuclear power industry</title>
<link href="https://hdl.handle.net/1721.1/155018" rel="alternate"/>
<author>
<name>Mason, John Herbert.</name>
</author>
<id>https://hdl.handle.net/1721.1/155018</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">International comparative analysis of training requirements for technical professionals : a case study of the nuclear power industry
Mason, John Herbert.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1990; Includes bibliographical references (leaves 126-131).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A method to evaluate the performance of a chemically enhanced paper laminate</title>
<link href="https://hdl.handle.net/1721.1/155015" rel="alternate"/>
<author>
<name>Eglowstein, Sheila Ruth.</name>
</author>
<id>https://hdl.handle.net/1721.1/155015</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">A method to evaluate the performance of a chemically enhanced paper laminate
Eglowstein, Sheila Ruth.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1989; Includes bibliographical references (leaves 66-68).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The pulse amplifier in theory and experiment</title>
<link href="https://hdl.handle.net/1721.1/154980" rel="alternate"/>
<author>
<name>Tatel, Howard.</name>
</author>
<id>https://hdl.handle.net/1721.1/154980</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">The pulse amplifier in theory and experiment
Tatel, Howard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1936; Includes bibliographical references (leaf 27).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical studies of the nature of metallic surfaces</title>
<link href="https://hdl.handle.net/1721.1/154979" rel="alternate"/>
<author>
<name>Thorpe, John.</name>
</author>
<id>https://hdl.handle.net/1721.1/154979</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Optical studies of the nature of metallic surfaces
Thorpe, John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1936; Includes bibliographical references (leaf 23).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Business research as a tool for railway management</title>
<link href="https://hdl.handle.net/1721.1/154976" rel="alternate"/>
<author>
<name>Rugge, George.</name>
</author>
<id>https://hdl.handle.net/1721.1/154976</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1942-01-01T00:00:00Z</published>
<summary type="text">Business research as a tool for railway management
Rugge, George.
Thesis: M.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1942; Includes bibliographical references (leaves [82]-[85]).
</summary>
<dc:date>1942-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The scattering of charged particles by non-adiabatic magnetic fields</title>
<link href="https://hdl.handle.net/1721.1/154975" rel="alternate"/>
<author>
<name>Clarke, John F.,
            1939-</name>
</author>
<id>https://hdl.handle.net/1721.1/154975</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The scattering of charged particles by non-adiabatic magnetic fields
Clarke, John F.,
            1939-
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1964; Includes bibliographical references (leaf 48).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fluorescent gaseous tracers for three dimensional flow visualization.</title>
<link href="https://hdl.handle.net/1721.1/154973" rel="alternate"/>
<author>
<name>Epstein, Alan Harry.</name>
</author>
<id>https://hdl.handle.net/1721.1/154973</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Fluorescent gaseous tracers for three dimensional flow visualization.
Epstein, Alan Harry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic temperature regulation in portable life support systems.</title>
<link href="https://hdl.handle.net/1721.1/154972" rel="alternate"/>
<author>
<name>Ephrath, Arye Ravoz.</name>
</author>
<id>https://hdl.handle.net/1721.1/154972</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Automatic temperature regulation in portable life support systems.
Ephrath, Arye Ravoz.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Bibliography: leaves 69-72.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aerodynamics of wind turbine with tower disturbances</title>
<link href="https://hdl.handle.net/1721.1/154964" rel="alternate"/>
<author>
<name>Chung, Song Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/154964</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Aerodynamics of wind turbine with tower disturbances
Chung, Song Y.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cell free synthesis of ferritin using the Modified Reticulocyte Lysate System.</title>
<link href="https://hdl.handle.net/1721.1/154963" rel="alternate"/>
<author>
<name>Clark, Nathaniel Goodwin.</name>
</author>
<id>https://hdl.handle.net/1721.1/154963</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Cell free synthesis of ferritin using the Modified Reticulocyte Lysate System.
Clark, Nathaniel Goodwin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lift and drag performance of a systematic series of yacht hull models</title>
<link href="https://hdl.handle.net/1721.1/154962" rel="alternate"/>
<author>
<name>Clemmer, George L.
            (George Lewis)</name>
</author>
<id>https://hdl.handle.net/1721.1/154962</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Lift and drag performance of a systematic series of yacht hull models
Clemmer, George L.
            (George Lewis)
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1978; Bibliography: leaf 103.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heave, sway and roll of ship-like cylinders in waters of finite depth.</title>
<link href="https://hdl.handle.net/1721.1/154961" rel="alternate"/>
<author>
<name>Chung, Hin Chew.</name>
</author>
<id>https://hdl.handle.net/1721.1/154961</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Heave, sway and roll of ship-like cylinders in waters of finite depth.
Chung, Hin Chew.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The friction effect in the flaw distribution determination by the hardness indentation test.</title>
<link href="https://hdl.handle.net/1721.1/154960" rel="alternate"/>
<author>
<name>Chiu, Paul Tsan-Tin.</name>
</author>
<id>https://hdl.handle.net/1721.1/154960</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The friction effect in the flaw distribution determination by the hardness indentation test.
Chiu, Paul Tsan-Tin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Bibliography: leaves 25-27.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The processing and properties of chitosan membranes.</title>
<link href="https://hdl.handle.net/1721.1/154959" rel="alternate"/>
<author>
<name>Clark, Randall Bradley.</name>
</author>
<id>https://hdl.handle.net/1721.1/154959</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The processing and properties of chitosan membranes.
Clark, Randall Bradley.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quality differences in male and female vocoded speech.</title>
<link href="https://hdl.handle.net/1721.1/154958" rel="alternate"/>
<author>
<name>Christopher, Deborah Kaye.</name>
</author>
<id>https://hdl.handle.net/1721.1/154958</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Quality differences in male and female vocoded speech.
Christopher, Deborah Kaye.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer optimization of dry and wet/dry cooling tower systems for large fossil and nuclear power plants.</title>
<link href="https://hdl.handle.net/1721.1/154957" rel="alternate"/>
<author>
<name>Choi, Michael Kam-wah.</name>
</author>
<id>https://hdl.handle.net/1721.1/154957</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Computer optimization of dry and wet/dry cooling tower systems for large fossil and nuclear power plants.
Choi, Michael Kam-wah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The growth of a small firm, its implications for management style, and the influence on corporate character by the senior executive</title>
<link href="https://hdl.handle.net/1721.1/154956" rel="alternate"/>
<author>
<name>Clope, Sara Jane.</name>
</author>
<author>
<name>Osborn, Edward Kingsbury.</name>
</author>
<author>
<name>Pototsky, John Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/154956</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The growth of a small firm, its implications for management style, and the influence on corporate character by the senior executive
Clope, Sara Jane.; Osborn, Edward Kingsbury.; Pototsky, John Edward.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Bibliography: leaves 167-170.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Performance High-Power Inductor Design for High-Frequency Applications</title>
<link href="https://hdl.handle.net/1721.1/154375" rel="alternate"/>
<author>
<name>Joisher, Mansi Vipul</name>
</author>
<id>https://hdl.handle.net/1721.1/154375</id>
<updated>2024-05-02T03:45:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High-Performance High-Power Inductor Design for High-Frequency Applications
Joisher, Mansi Vipul
The performance and size of power electronic circuits are greatly impacted by magnetic components. This is especially true at Radio Frequencies (RF) of many MHz and above. In the High Frequency (HF, 3-30 MHz) range, coreless (or "air-core") inductors with a typical quality factor (Q) of 200-300 are conventionally used and are often the major contributor to the overall system’s loss and size. Even when they can achieve high Q, air-core inductors can induce electromagnetic interference (EMI) and eddy current loss in surrounding components, thus limiting system miniaturization. With the recent advancements in high-performance, high-frequency magnetic materials, there is interest in leveraging these magnetic materials at RF and replacing lossy air-core inductors with cored inductors to achieve an improved combination of size and loss. This thesis investigates high-power, high-frequency, high-Q cored inductors. This approach leverages high-frequency high-performance magnetic materials, core geometry, and quasi-distributed gaps to achieve a self-shielded inductor that emits less flux outside its physical volume and can be placed close to other circuit components without inducing EMI or eddy current loss. The performance and self-shielding characteristics of the proposed design procedure are experimentally verified for a 500 nH inductor (Q = 1150) designed to operate at 13.56 MHz with a peak ac current of up to 80 A.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Not Allowed: Practicing Process</title>
<link href="https://hdl.handle.net/1721.1/154365" rel="alternate"/>
<author>
<name>Ugorji, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/154365</id>
<updated>2024-05-02T03:27:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Not Allowed: Practicing Process
Ugorji, Amanda
Not Allowed: Practicing Process is a response to my dissatisfaction with the status quo of architectural pedagogy as I have experienced it. By shifting attention away from the architectural product and onto the process, I redefine the thesis project's success through encounters of learning, struggle, and uncomfortable ambiguity.&#13;
&#13;
The project explores ideas of co-authorship, building practice, and embedding meaning in architectural pedagogy and work. It has challenged concepts such as the urgency of production, the erasure of identity in pedagogy and practice, and the systemic harm architecture perpetuates on both the personal and on the global scale. To carry out the thesis's goals, I armed myself with tools like self-reflection, expectation of change, intentional conversation, and curiosity. The work allowed for topic change, dramatic restructuring, and lapses in rigor. It found value in opening multiple paths and diverging from linearity, although it accepts that the effort expended has been cumulative.&#13;
&#13;
Instead of a thesis review, the project culminated in a thesis reflection where I asked attendees to partake in a small group discussion and share their thoughts on provided prompts. The results of the process look like an intentionally organized collection of thoughts and conducted discussions that raise more questions than they answer.&#13;
&#13;
I have identified guiding questions on this thesis journey, such as: What ways of thinking are privileged in architecture? What modes of production are validated? What do I limit myself to when I am bound by architecture's definition of rigor? How much energy should I spend gaining validation? What are the criteria for failure? What if the ways I derive value in my work devalue my project in the normative discipline? Does that matter? If we make better work when we are full and present, what do we need to be full and present? If the social contracts we hold outside of architecture education spaces are constantly violated, what new social contracts must we build? How can we preserve them? If the pedagogy has not been serving me as I need it to, how have I been working to develop infrastructure for myself? How can I continue to do so moving forward?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reticle Stage Actuation Concepts for High Acceleration&#13;
Trajectories in Next-generation Photolithography Tools</title>
<link href="https://hdl.handle.net/1721.1/154363" rel="alternate"/>
<author>
<name>Seaberg, Charles Byron</name>
</author>
<id>https://hdl.handle.net/1721.1/154363</id>
<updated>2024-05-02T04:01:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Reticle Stage Actuation Concepts for High Acceleration&#13;
Trajectories in Next-generation Photolithography Tools
Seaberg, Charles Byron
In photolithography scanning tools, the functional patterns of integrated circuit layers are defined with critical dependence on the actuation of reticle and wafer stages along precisely synchronized trajectories. Patterning throughput of such tools is limited by the velocity and acceleration at which the stages are actuated. Modern tools require sub-nanometer accuracy of stages along these trajectories during constant-velocity scan exposure to create feature sizes on the order of nanometers. At the ends of the constant-velocity scans, high-acceleration trajectories are used to reverse the scan velocity in minimal time. The next generation of photolithography tools will require more aggressive trajectories, along with the development of energy-efficient actuation solutions with higher force and precision capabilities to implement these demanding motion profiles.&#13;
&#13;
In this thesis, we propose actuation concepts which may enable 100g reticle stage turnaround accelerations, and explore two such concepts in depth. The first concept is an array of piezoelectric stack actuators attached to the long-stroke stage which mechanically contact the short-stroke stage only during turnaround. In this context, we perform a scaled two degree-of-freedom experiment in which we attempt to control the contact of an 840 g payload moving at a velocity of 80 mm/s using a 50 &#120583;m stroke piezo stack actuator which is driven open-loop. We are able to use the piezo current signal to detect mechanical contact with an estimated delay of 6-16 &#120583;s. We are unable to control the dynamics of the contact, during which the measured peak contact force of 150 N exceeds the planned amount by 80% and results in the payload bouncing off the actuator.&#13;
&#13;
The second actuation concept we consider in theory is the use of dual-chamber pneumatic springs as energy storage devices to create turnaround forces for the long-stroke stage acceleration. We examine the use of such pneumatic springs in parallel with a conventional long-stroke linear motor to create a stage topology in which reactive power is stored and returned into kinetic energy. We study thermal aspects of the spring behavior first under an adiabatic assumption and then using a one-dimensional thermal model for heat flow through the piston chamber walls. The proposed design shows promise to reduce the motor power dissipation by 90% and the motor amplifier electrical power by 70%, showing promise for further study. Such energy savings can contribute to significant reduction in the energy consumption of lithography tools.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Destroy Your School: Building with Kids to Reimagine Learning</title>
<link href="https://hdl.handle.net/1721.1/154359" rel="alternate"/>
<author>
<name>Rotman, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/154359</id>
<updated>2024-05-02T03:40:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Destroy Your School: Building with Kids to Reimagine Learning
Rotman, Katherine
Too often, our education is disconnected from the physical space in which we learn. Lesson plans and curricula disregard the spatial and physical settings that define the educational experience. The disciplinary gap between architectural and educational discourse is in need of attention, and bridging this gap is at the heart of my thesis. I seek to discern methods to better equip our youth for the future. Questions of how and where we learn and share knowledge are crucial to the formation of values in the next generation. Our current moment necessitates extensive collective change and a thorough reconsideration of the values embedded in our systems of education. How does our built environment inform our learning experience? How does pedagogy shape our world, and how in turn is our world shaped by pedagogy? How can notions of care and stewardship be generated by pedagogy? How can a shift in pedagogy shape classrooms, schools, and neighborhoods? This thesis approaches these questions through the under-considered and often-forgotten problem of middle-school education. The project examines and puts forward a new pedagogy that aims to instill architectural values of collaboration, community, mentorship, interdisciplinarity, improvisation, and material opportunism through education in order to shape the fabric of our society. The three years of middle school play an enormous role in shaping the next generation. At this critical point, students transition out of learning through play, inquiry, and experimentation to learning as adults in a results-based, structured, and standardized fashion. Introducing a design-build pedagogy into the middle school curriculum becomes not only an opportunity to build a greater sense of autonomy for young learners by elevating students’ existing skills embedded in play and experimentation, but a chance to disrupt the general assumptions we grow up with about our built environment.
The design pedagogy I propose gives young adolescents a new set of tools to participate and take action in shaping their education, classroom, and community. At its core, this project aims to enable young learners to find agency and empowerment through their built environment. With the reimagined classroom as site, this thesis advocates for a porous community-wide system of learning and engagement.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determination of volatile nitrosamines in foods and other environmental samples</title>
<link href="https://hdl.handle.net/1721.1/154356" rel="alternate"/>
<author>
<name>Essigmann, John.</name>
</author>
<id>https://hdl.handle.net/1721.1/154356</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Determination of volatile nitrosamines in foods and other environmental samples
Essigmann, John.
Methods were investigated for determination of volatile nitrosamines which have been reported to occur in foods and other environmental samples. Nitrosamines were removed from foods using a Likens-Nickerson extractor; 81% of added dimethylnitrosamine and 110% of added diethylnitrosamine were recovered from 10 ng/g spiked aqueous solutions. The factors affecting recovery of these nitrosamines were investigated. The usefulness of Freon-11 as an extracting solvent for nitrosamines was investigated using both batch serial and continuous liquid-liquid extraction. Potential gas chromatographic (GC) interferences were removed from food extracts by an acid extraction step and, when needed, by liquid column chromatography on alumina and silica gel. Dilute solutions containing nitrosamines were analyzed directly using a GC solvent stripping technique. Nitrosamines were detected with the Coulson electrolytic conductivity detector operated in the pyrolytic mode and with a flame ionization detector. The sensitivities of these detectors were compared for selected alkyl and heterocyclic nitrosamines. The specificity of the Coulson detector was demonstrated for analysis of extracts of meat and fish samples. Additional clean-up of food extracts is required to ensure identification of nitrosamines by combined GC-mass spectrometry. A method employing chromatographic equilibration (frontal analysis) was investigated for determination of dimethylnitrosamine in air.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 171-180).
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The efficiency of the vertical tail for different wing-fuselage combinations, particularly at high angles of attack</title>
<link href="https://hdl.handle.net/1721.1/154353" rel="alternate"/>
<author>
<name>Shumowsky, Stanislaw A. (Stanislaw Anton)</name>
</author>
<id>https://hdl.handle.net/1721.1/154353</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">The efficiency of the vertical tail for different wing-fuselage combinations, particularly at high angles of attack
Shumowsky, Stanislaw A. (Stanislaw Anton)
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1936; Includes bibliographical references (leaves 68-70).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lean manufacturing--from automotive to aerospace</title>
<link href="https://hdl.handle.net/1721.1/154350" rel="alternate"/>
<author>
<name>Darris, Frederick E. (Frederick Eugene)</name>
</author>
<id>https://hdl.handle.net/1721.1/154350</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Lean manufacturing--from automotive to aerospace
Darris, Frederick E. (Frederick Eugene)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1997; Includes bibliographical references (leaf 75).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimation of the statical stability curve of a ship from hull coefficients,</title>
<link href="https://hdl.handle.net/1721.1/154349" rel="alternate"/>
<author>
<name>Ramsey, Lyle B.</name>
</author>
<author>
<name>Latimer, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/154349</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Estimation of the statical stability curve of a ship from hull coefficients,
Ramsey, Lyle B.; Latimer, John P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1945; Bibliography: leaf 83.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laminar boundary layer in a partially ionized diatomic gas behind a moving shock</title>
<link href="https://hdl.handle.net/1721.1/154348" rel="alternate"/>
<author>
<name>Moh, Tzu-Chung.</name>
</author>
<id>https://hdl.handle.net/1721.1/154348</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Laminar boundary layer in a partially ionized diatomic gas behind a moving shock
Moh, Tzu-Chung.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1964; Includes bibliographical references (leaf [32]).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A direct measurement of intraocular stray light.</title>
<link href="https://hdl.handle.net/1721.1/154343" rel="alternate"/>
<author>
<name>Larson, Ernest Theodore.</name>
</author>
<id>https://hdl.handle.net/1721.1/154343</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">A direct measurement of intraocular stray light.
Larson, Ernest Theodore.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1945; Bibliography: leaf 28.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motional transients in power selsyns.</title>
<link href="https://hdl.handle.net/1721.1/154342" rel="alternate"/>
<author>
<name>Kaci, M. M. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/154342</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Motional transients in power selsyns.
Kaci, M. M. E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1945; Bibliography: leaf 76.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Purification of formylglycinamide ribotide amidotransferase from chicken liver.</title>
<link href="https://hdl.handle.net/1721.1/154340" rel="alternate"/>
<author>
<name>Mizobuchi, Kiyoshi.</name>
</author>
<id>https://hdl.handle.net/1721.1/154340</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Purification of formylglycinamide ribotide amidotransferase from chicken liver.
Mizobuchi, Kiyoshi.
Thesis: M.S., Massachusetts Institute of Technology, Department of Biology, 1964
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A picture is worth a thousand words: the myth behind the international art mania.</title>
<link href="https://hdl.handle.net/1721.1/154336" rel="alternate"/>
<author>
<name>Barrett, Maudann Borthwick.</name>
</author>
<id>https://hdl.handle.net/1721.1/154336</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">A picture is worth a thousand words: the myth behind the international art mania.
Barrett, Maudann Borthwick.
Thesis: M.S., Massachusetts Institute of Technology, Department of Economics, 1974; Includes 12 unnumbered leaves.; Bibliography: leaves 115-118.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermoelectric power of indium antimonide.</title>
<link href="https://hdl.handle.net/1721.1/154335" rel="alternate"/>
<author>
<name>Eser, Erten Sadullah.</name>
</author>
<id>https://hdl.handle.net/1721.1/154335</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Thermoelectric power of indium antimonide.
Eser, Erten Sadullah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An industrialized housing system in wood.</title>
<link href="https://hdl.handle.net/1721.1/154334" rel="alternate"/>
<author>
<name>Fan, Samuel Sze Leung.</name>
</author>
<id>https://hdl.handle.net/1721.1/154334</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">An industrialized housing system in wood.
Fan, Samuel Sze Leung.
Thesis: M. Arch. A.S., Massachusetts Institute of Technology, Department of Architecture, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telephonic transmission of artificial pacemaker parameters.</title>
<link href="https://hdl.handle.net/1721.1/154331" rel="alternate"/>
<author>
<name>Ferla Delor, Guillermo Sergio.</name>
</author>
<id>https://hdl.handle.net/1721.1/154331</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Telephonic transmission of artificial pacemaker parameters.
Ferla Delor, Guillermo Sergio.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>R &amp; D task accomplishment at a U.S. Army Material Command Corporate Laboratory.</title>
<link href="https://hdl.handle.net/1721.1/154328" rel="alternate"/>
<author>
<name>Falabella, Gaetano.</name>
</author>
<id>https://hdl.handle.net/1721.1/154328</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">R &amp; D task accomplishment at a U.S. Army Material Command Corporate Laboratory.
Falabella, Gaetano.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consumer payment practices and preferences in the Boston metropolitan area.</title>
<link href="https://hdl.handle.net/1721.1/154327" rel="alternate"/>
<author>
<name>Fazio, Vincent John.</name>
</author>
<id>https://hdl.handle.net/1721.1/154327</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Consumer payment practices and preferences in the Boston metropolitan area.
Fazio, Vincent John.
Thesis: M.S., Massachusetts Institute of Technology, Alfred P. Sloan School of Management., 1972; Bibliography: leaves 125-126.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reentry of the expatriate into the multinational firm</title>
<link href="https://hdl.handle.net/1721.1/154326" rel="alternate"/>
<author>
<name>Sharp, Robert C.</name>
</author>
<id>https://hdl.handle.net/1721.1/154326</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Reentry of the expatriate into the multinational firm
Sharp, Robert C.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1981; Bibliography: leaves 160-168.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metal complexes as models for vitamin B₆ catalysis</title>
<link href="https://hdl.handle.net/1721.1/154250" rel="alternate"/>
<author>
<name>Weinstein, Georgia Nan.</name>
</author>
<id>https://hdl.handle.net/1721.1/154250</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Metal complexes as models for vitamin B₆ catalysis
Weinstein, Georgia Nan.
Chapter I. Historical introduction to Vitamin B₆ complexes. Chapter II. The aldimine complexes N-(salicylidene)glycinato- and -valinatozinc(II), N-(pyridoxylidene)valinatocopper(II) monohydrate, and N-(3-hydroxypyridyl-2-methylene)valinatocopper(II) hemihydrate have been prepared from L̳-valine. Synthetic methods and characterization data are given. Also prepared were the bis-chelate amino acid ester complexes Bis[N-(2-ethoxycarbonyl-1-propyl)salicylaldiminato]copper(II) and Bis[N-(3-ethoxycarbonyl-2-propyl)salicylaldiminato]copper(II). The inertness of these two complexes to H-D exchange contrasts with the ready exchange, in the absence of base, of the complexes derived from α-amino acids. This result shows that the facile exchange and racemization properties of Bis[N-(alkoxycarbonylalkyl)salicylaldiminato]metal(II) complexes derive principally from the direct attachment of the electron-withdrawing HC=NM and COOC₂H₅ groups to the asymmetric center. The base-catalyzed racemization rates of four copper(II)-aldimine complexes in 95% ethanol at 50° were found to increase in the order N-salicylidene-L̳-valinatocopper(II), Cu(sal-L̳-val) &lt;&lt; N-pyridoxylidene-L̳-valinatocopper(II) &lt; N-(3-hydroxypyridyl-2-methylene)-L̳-valinatocopper(II) &lt; N-(4-NO₂-salicylidene)-L̳-valinatocopper(II). This order is essentially the same as that of the qualitative catalytic effectiveness of the constituent o̲-hydroxyarylcarbonyl compounds in nonenzymatic transamination and reinforces, in semiquantitative fashion, the prevailing model of the ligand electronic features requisite to catalytic activity of these compounds.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from the official PDF version of thesis. Vita.; Includes bibliographical references (pages 58-62).
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Longitudinal dispersion in non-uniform porous media</title>
<link href="https://hdl.handle.net/1721.1/154247" rel="alternate"/>
<author>
<name>Mohtadullah, Khalid.</name>
</author>
<id>https://hdl.handle.net/1721.1/154247</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Longitudinal dispersion in non-uniform porous media
Mohtadullah, Khalid.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1964; Includes bibliographical references (leaves 78-81).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of fish type motion and propulsion systems</title>
<link href="https://hdl.handle.net/1721.1/154246" rel="alternate"/>
<author>
<name>Mindell, Arnold Perry.</name>
</author>
<id>https://hdl.handle.net/1721.1/154246</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">A study of fish type motion and propulsion systems
Mindell, Arnold Perry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1964; Includes bibliographical references (leaves 44-45).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reactions of phenyl(trihalomethyl)mercurials with olefins</title>
<link href="https://hdl.handle.net/1721.1/154245" rel="alternate"/>
<author>
<name>Minasz, Richard Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/154245</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Reactions of phenyl(trihalomethyl)mercurials with olefins
Minasz, Richard Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1964; Vita.; Includes bibliographical references (leaves 42-43).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction of intersymbol interference in data transmission by automatic time-domain equalization</title>
<link href="https://hdl.handle.net/1721.1/154244" rel="alternate"/>
<author>
<name>Mohn, William S.</name>
</author>
<id>https://hdl.handle.net/1721.1/154244</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Reduction of intersymbol interference in data transmission by automatic time-domain equalization
Mohn, William S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Includes bibliographical references (leaves 40-41).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sniffer : a system that understands bugs</title>
<link href="https://hdl.handle.net/1721.1/154232" rel="alternate"/>
<author>
<name>Shapiro, Daniel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/154232</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Sniffer : a system that understands bugs
Shapiro, Daniel G.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1981; Bibliography: leaves 59-60.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a car utilization audit.</title>
<link href="https://hdl.handle.net/1721.1/154229" rel="alternate"/>
<author>
<name>Nowicki, Victor.</name>
</author>
<id>https://hdl.handle.net/1721.1/154229</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Design of a car utilization audit.
Nowicki, Victor.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of planning effectiveness : a case study.</title>
<link href="https://hdl.handle.net/1721.1/154228" rel="alternate"/>
<author>
<name>Siever, Ellen Carol.</name>
</author>
<id>https://hdl.handle.net/1721.1/154228</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">An analysis of planning effectiveness : a case study.
Siever, Ellen Carol.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1977; Bibliography: leaves 54-55.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Framework for Multi-Messenger Astronomy</title>
<link href="https://hdl.handle.net/1721.1/154207" rel="alternate"/>
<author>
<name>Koenig, Alexander P.</name>
</author>
<id>https://hdl.handle.net/1721.1/154207</id>
<updated>2024-04-18T03:06:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Systems Framework for Multi-Messenger Astronomy
Koenig, Alexander P.
Multi-messenger (and more broadly, panchromatic) astronomy regards the use of multimodal information — incident photons, gravitational waves, neutrinos, and cosmic rays — to form astrophysical inferences. Since each messenger interacts uniquely with the dynamics of the phenomena in question, drawing information from multiple messengers offers a more complete probe of the universe. However, the exact inference method is scenario-specific, and we lack a general means to design multi-messenger instrument networks to best formulate scientific knowledge. To this end, this thesis presents a framework using probabilistic graph models to simulate the performance of heterogeneous instrument networks, with applications to two case studies.&#13;
&#13;
The first case study regards the measurement of the Hubble parameter, i.e. the rate of expansion of the universe, with joint gravitational-wave and electromagnetic detection of neutron star mergers — cosmological standard sirens. This case study predicts [formula] joint detections by the end of the 2020s, likely sufficient to measure the Hubble parameter with 4% uncertainty. Furthermore, &#119978;(10⁵) instrument networks are simulated. The most promising configurations rely on a highly-sensitive set of ground-based interferometers with wide geographic distribution along with a set of narrow-field, large-aperture ground- or space-based telescopes.&#13;
&#13;
The second case study regards using star tracker imagery from LEO satellite constellations to improve our knowledge of resident space objects: active satellites and debris. Traditionally, orbit determination relies on bespoke ground-based radar systems which are increasingly insufficient to meet the needs of LEO satellite operators. For two simulated objects, this case study shows star trackers could supplement but not replace radars to improve knowledge: including imagery from 10³ satellites could reduce positional uncertainty by a factor of ∼3 compared to a radar-only network.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Deepfakes with Human Help to Help Humans Detect Deepfakes</title>
<link href="https://hdl.handle.net/1721.1/154206" rel="alternate"/>
<author>
<name>Fosco, Camilo L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154206</id>
<updated>2024-04-18T03:39:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Detecting Deepfakes with Human Help to Help Humans Detect Deepfakes
Fosco, Camilo L.
Fake or manipulated video media (“deepfakes”) pose a clear threat to the integrity of online spaces that rely on video, from social media, to news media, to video conferencing platforms. To the human eye, these computer-generated fake videos are increasingly indistinguishable from genuine videos [45, 20]. Computer vision models, however, can achieve impressive success at deepfake detection. Thus, the future of deepfake detection for humans may become a problem of AI-assisted decision-making, where humans must incorporate the output of a machine learning model into their judgment process. Previous work on AI-assisted decision making indicates that the design and format of a decision aid strongly determines whether it will impact human behavior [66, 60, 14, 26, 4]. In the domain of deepfake signaling, traditional methods of flagging manipulated video have relied on text-based prompts. However, recent studies indicate relatively low rates of compliance when the model’s prediction is conveyed using text: in one study, participants shown model predictions via text updated their response only 24% of the time, and switched their response (from “real” to “fake”, or vice versa) only 12% of the time [20]. More innovative approaches have been proposed, such as showing users a heatmap of regions predicted to be manipulated [8], but this did not increase acceptance rates relative to text-based indicators. Overall, to make an impact, the development of deepfake detection models must proceed alongside the exploration of innovative and effective ways to alert human users to a video’s authenticity.&#13;
&#13;
In this thesis, we present an analysis of current solutions to this issue, and examine methodologies both for improving automated deepfake detection and for generating better indicators of doctored media to help humans spot deepfakes. To work towards this goal, we first collect human annotations that highlight parts of videos that humans find unnatural or indicative of doctoring. We use this data as additional supervision to train an artifact attention module that generates “heat volumes” highlighting areas of a deepfake video that evidence its fake nature. This module is in turn leveraged both to improve classifier performance and to generate our novel visual indicators (described below). This construction is integral to our exploration of how human annotations can augment attention-based deepfake detection techniques, and we research for the first time the feasibility of exacerbating artifacts in deepfake videos to facilitate early detection from a human perspective.&#13;
&#13;
As the quality of doctored videos becomes more impressive, many generated fakes are indistinguishable from genuine video to the human eye. We believe that it is crucial for humans to be able to detect, at first glance, if a video is doctored or not. This limits the spread of misinformation by stopping it at the source. We achieve this by proposing a new visual indicator of doctoring that we call deepfake caricatures: a targeted distortion that reveals the fake nature of deepfakes, while rendering real videos virtually untouched (see Figure 1-1). This targeted distortion takes the form of an amplification of unnatural areas in a fake video, dubbed artifacts in this manuscript.&#13;
&#13;
This thesis introduces a novel framework that provides strong classical deepfake detection, but crucially also creates this compelling visual indicator for fake videos by amplifying artifacts, making them more detectable to human observers. Because humans tend to be highly sensitive to distortions in faces, we hypothesize that focusing our visual indicator on amplifying artifacts is likely to yield a highly detectable and compelling visual indicator. We introduce a new model, “CariNet”, that identifies key artifacts in deepfakes using our novel Artifact Attention Module. This module leverages both human supervision and machine supervision to learn what distortions are most relevant to humans. CariNet then generates deepfake caricatures using a Caricature Generation Module that magnifies unnatural areas in fake videos, making them more visible to human users. We make three primary contributions: &#13;
• We develop two annotation tools to (A) filter deepfakes according to their ease of detection, and (B) collect human annotations of fake and unnatural areas (artifacts) in doctored videos. This process yields a dataset of over 11K annotations across 1000 videos. &#13;
• We develop a framework for identifying video artifacts that are relevant to humans. Allowing our deepfake detector to leverage this information boosts its accuracy by more than 5%, showing that human supervision can improve deepfake detection models.&#13;
• We generate deepfake caricatures, and show in a user study that they increase human deepfake detection accuracy by up to 40% compared to non-signalled deepfakes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modelling of Graphite Elements and Low Enriched Fuel Assemblies for a High Temperature Gas-Cooled Reactor</title>
<link href="https://hdl.handle.net/1721.1/154205" rel="alternate"/>
<author>
<name>Cohen, Lorne</name>
</author>
<id>https://hdl.handle.net/1721.1/154205</id>
<updated>2024-04-18T03:18:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modelling of Graphite Elements and Low Enriched Fuel Assemblies for a High Temperature Gas-Cooled Reactor
Cohen, Lorne
As the consequences of climate change continue to have worldwide impacts, innovations in nuclear energy are a necessity for decarbonizing electricity and process heat generation. To address the large capital costs and risk of large Pressurized Water Reactors (PWRs), Stewart et al. designed a Horizontal Compact High Temperature Gas-cooled Reactor (HC-HTGR), which Boston Atomics is seeking to commercialize. The HC-HTGR leverages the safety advantages of gas-cooled, TRISO fuelled reactors, and the economic advantages of a horizontal compact layout. Graphite assembly blocks create the channels that guide the helium coolant flow and contain the fuel compacts in the HC-HTGR. Given the low tensile strength of graphite, finite element analysis (FEA) is required for predicting stresses within these components. The stresses are evaluated using the ASME code for graphite components in nuclear reactors. A 2D generalized plane strain model is used to predict the ASME equivalent stresses throughout assemblies at the inlet, midplane, and outlet over 15 years of steady state operation. The effects of creep, swelling, and thermal expansion are incorporated into the model. The results predict the maximum equivalent stress will not exceed the limit of 12 MPa from the ASME code. Large thermal stresses are induced due to the high midplane and outlet temperatures but are quickly reduced by irradiation effects. As expected, creep plays a significant role in reducing the stresses that are driven by irradiation shrinkage of the graphite block.&#13;
&#13;
The use of TRISO fuel in an HC-HTGR provides safety benefits but adds significant fuel costs due to the manufacturing and fuel enrichment price. To improve the economics of the reactor, multiple designs for low-enriched fuel assemblies are evaluated on a thermal-hydraulic, neutronic, and economic basis. The designs use a combination of UC and UO₂ fuel, with SiC composite and stainless-steel cladding. While each design meets the target reactivity, enrichment, and temperature limits, the most viable design for near-term deployment uses UO₂ fuel with 0.5 mm of stainless-steel cladding. This design has an enrichment of 4.249% and maximum fuel temperature of 1414°C, under the assumed conservative steady-state conditions. A preliminary analysis indicates a 38-60% reduction in fuel costs compared to the TRISO fuelled assembly for the same energy output. The wide use of UO₂ and stainless steel in the nuclear industry supports the near-term deployment of this assembly design, as both materials are licensed for use in nuclear reactors, unlike SiC composite cladding and UC fuel. This precedent also reduces uncertainties on the fuel cost since there are well established supply chains for both UO₂ and nuclear grade stainless steel.&#13;
&#13;
Additionally, in order to improve the performance of stainless-steel cladding, oxide dispersion strengthened (ODS) steel cladding samples fabricated with high velocity oxy-fuel deposition were investigated. The XRD and XRF analyses led to the conclusion that rapid cooling after deposition results in an amorphous microstructure with a crystalline chromium phase. The bulk material is brittle, as confirmed by ring compression tests, motivating improvement in the fabrication process by the manufacturer.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impact of Communication Delay on Mission Control as an Effective Team Member with the Crew</title>
<link href="https://hdl.handle.net/1721.1/154200" rel="alternate"/>
<author>
<name>Grace, Sideena</name>
</author>
<id>https://hdl.handle.net/1721.1/154200</id>
<updated>2024-04-18T03:20:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Impact of Communication Delay on Mission Control as an Effective Team Member with the Crew
Grace, Sideena
The National Aeronautics and Space Administration (NASA) aims to send humans to Mars in the coming decade. However, the significant communication delay of up to 22 minutes one way poses challenges for Mission Control (MC) in fulfilling its role as an effective team member with the crew, potentially jeopardizing mission safety and success. Existing research on communication delay has primarily focused on the crew, neglecting the impact on MC. This study addresses this gap by investigating the impact of communication delay on MC’s role as a team member and proposes a protocol to improve communication between MC and the crew. To analyze the impact of communication delay, data from high-fidelity analog studies and the International Space Station (ISS) were examined. These studies covered scenarios with delays ranging from seconds to 20 minutes, communication blackouts, and mission durations up to 520 days. Tasks of varying complexity were evaluated to assess MC’s ability to support the crew. Additionally, existing protocols were evaluated using subjective ratings and compliance analysis. The analysis indicated that communication delay significantly impairs MC’s effectiveness as a team member, evidenced by common challenges identified in the studies. These challenges include difficulty for MC in understanding the crew’s needs and maintaining situational awareness due to communication breakdowns. As a result, MC faced challenges in providing consistent and accurate support to the crew. The delayed recovery from these challenges led to reduced reliance on MC by the crew, as their role was not always seen as the most efficient option for seeking support. In response, a new protocol focusing on tone was developed to establish effective and respectful communication between MC and the crew, to mitigate the effects of these identified challenges. 
Furthermore, two key recommendations emerge from the analysis: ensuring time delay consistency and standardizing communication delay implementation. These recommendations aim to optimize the effectiveness of protocols and provide a better understanding of their impact in addressing communication delay. Understanding the impact of communication delay on both MC and the crew is vital for developing protocols that enhance effective communication and teamwork during the mission. These findings contribute to optimizing protocols for future studies and preparing for the Mars mission.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Student Operated Production Facility using Discrete Event Simulation and Continuous Improvement</title>
<link href="https://hdl.handle.net/1721.1/154199" rel="alternate"/>
<author>
<name>Greene, Ethan Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/154199</id>
<updated>2024-04-18T04:03:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development of a Student Operated Production Facility using Discrete Event Simulation and Continuous Improvement
Greene, Ethan Logan
The Device Realization Laboratory at MIT is committed to developing accessible, affordable devices for hands-on learning experiences in smart manufacturing. The laboratory’s flagship device is the desktop fiber extrusion device, FrED, produced in the student-operated and -managed FrED Factory. This paper chronicles an intensive eight-month project directed toward enhancing the efficiency and throughput of the FrED Factory.&#13;
&#13;
The project began with a systematic analysis of the intricacies of the fiber extrusion device, the factory, and the associated manufacturing processes. A key component of the project was the development of a digital twin, leveraging discrete event simulation to amplify the modeling and analytic capabilities of the student operators.&#13;
&#13;
A comprehensive characterization of the initial state of operations was conducted, revealing the existence of hidden factories and various types of waste. Strategic, iterative solutions were then formulated and implemented, driving significant improvements over time. The project incorporated 5S methodologies, laying the groundwork for a continuous improvement program, and executed a Kaizen event focusing on the underutilized 3D printing farm that was plagued with printing failures.&#13;
&#13;
Key results from the Kaizen event included reducing print cycle times, improving printer utilization, reducing print failure rates, and boosting the 3D printer farm throughput. The project achieved a substantial reduction in calibration frequency and part defects through a dual approach: minimizing vibration and storage rack swaying issues, and decreasing bed-leveling variation with the print beds, thereby further enhancing utilization. However, the most significant outcome was realized through the alleviation of manufacturing constraints on printer configurations, which led to a 4.2x improvement in the theoretical throughput of the 3D printers.&#13;
&#13;
The project’s journey and results offer invaluable insights and a replicable model for future implementation of student-run production facilities in other university laboratories, highlighting the importance of continuous improvement and the power of advanced technology in accelerating development and operational efficiency.
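The discrete event simulation approach behind the digital twin can be illustrated in miniature. The sketch below is not the FrED Factory model: it is a toy event-queue simulation of a 3D printer farm with hypothetical cycle times and failure rates, showing how reducing print failures shortens the makespan for a fixed batch of jobs.

```python
import heapq
import random

def simulate_farm(n_printers, n_jobs, cycle_time, failure_rate, seed=0):
    """Toy discrete event simulation of a 3D printer farm.

    Each job occupies a printer for `cycle_time` hours; a print fails
    with probability `failure_rate`, in which case the job is re-queued.
    Returns (makespan in hours, number of failed attempts).
    """
    rng = random.Random(seed)
    # Event queue of (time this printer becomes free, printer id).
    free_at = [(0.0, p) for p in range(n_printers)]
    heapq.heapify(free_at)
    pending, failures, makespan = n_jobs, 0, 0.0
    while pending > 0:
        t, p = heapq.heappop(free_at)
        finish = t + cycle_time
        if rng.random() < failure_rate:
            failures += 1        # failed print: job stays pending
        else:
            pending -= 1         # successful print
        makespan = max(makespan, finish)
        heapq.heappush(free_at, (finish, p))
    return makespan, failures
```

With zero failures, four printers finish eight two-hour jobs in two rounds (4 h); raising the failure rate stretches the makespan, which is the kind of sensitivity a digital twin makes cheap to explore.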
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>L'nuisimk (Speaking Mi'kmaq)</title>
<link href="https://hdl.handle.net/1721.1/154197" rel="alternate"/>
<author>
<name>Dennis, John J.</name>
</author>
<id>https://hdl.handle.net/1721.1/154197</id>
<updated>2024-04-18T03:19:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">L'nuisimk (Speaking Mi'kmaq)
Dennis, John J.
The Mi’kmaq have long been people that were hunter/gatherers, craft workers and artisans before our time. The beauty of Mi’kmaq language is its pure form of fluidity and its pure connection with the culture that has returned into the hands of its true owners, the Mi’kmaq. To return the language to the people is to undo all the harm inflicted by the Government that planned to annihilate a civilization or culture of people that were considered “savages” by taking away their mother tongue or the people’s language taught to them by their parents, grandparents, family, and elders within the community. The hardships that lay ahead of the Mi’kmaq who speak English is one that is embarrassing to some, an honor to others and a burden to many. There are many reasons as to why the Mi’kmaq speakers speak their mother tongue (teaching at schools, at homes and within the community), but for those that speak English, it is an utmost shame that it was not of their own doing. We will look at how to teach the next generation through baby talk, then transition to speaking at home with both parents and children. The next transition after will be moving to speaking with other community members within the area with basic conversational phrases. The true answer to solve this problem revolves around the fellow speakers, linguists and teachers that care about preserving this respectable language. The Mi’kmaq language must be placed back where it once belonged, back into the mouths of the Mi’kmaq.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gradient-Based Optimization of ReaxFF Parameters&#13;
Using Pytorch for the Study of Silica Precipitation</title>
<link href="https://hdl.handle.net/1721.1/154194" rel="alternate"/>
<author>
<name>Orlova, Yuliia</name>
</author>
<id>https://hdl.handle.net/1721.1/154194</id>
<updated>2024-04-18T03:55:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Gradient-Based Optimization of ReaxFF Parameters&#13;
Using Pytorch for the Study of Silica Precipitation
Orlova, Yuliia
Silica precipitation is a subject of great interest since it occurs in a wide variety of environmental and industrial processes. Despite many advances in atomistic simulation research on different forms of silica, the mechanism of silica precipitation has not been fully understood. We propose to study this process using the reactive force-field (ReaxFF) method. Despite being a classical force field, ReaxFF can achieve quantum chemical accuracy once the optimal potential coefficients are found. However, the fitting of ReaxFF parameters is a challenge due to the complex functional form of the potential. Several techniques have been proposed to solve this problem, such as evolutionary algorithms, Monte Carlo methods, and simulated annealing. The stochastic nature of these methods requires millions of error evaluations to fit the parameters, which results in excessive optimization times. Recent advances in machine learning have made it possible to drastically speed up the process by utilizing the gradient of the potential. In this work, gradient-based optimization of reactive force-field parameters using PyTorch was performed. We have implemented the ReaxFF potential as a PyTorch model. The model’s performance was validated against existing ReaxFF implementations. ReaxFF parameters were fitted to a dataset comprising 15,345 geometries calculated using the long-range corrected hybrid functional &#120596;B97XD3.
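The fitting idea can be sketched in miniature. The real ReaxFF functional form is far more complex and is differentiated automatically by PyTorch in this work; the toy example below instead fits a simple Morse pair potential to reference energies by gradient descent, with the gradients written out by hand so the sketch is self-contained.

```python
import math

def morse(r, D, a, r0):
    """Morse pair potential E(r) = D * (1 - exp(-a*(r - r0)))**2."""
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

def fit_morse(data, lr=0.05, steps=2000):
    """Fit (D, a, r0) to (r, energy) pairs by gradient descent on the
    mean squared error. The gradients are hand-derived here; an autograd
    framework such as PyTorch computes them automatically for arbitrary
    functional forms, which is what makes the approach scale to ReaxFF."""
    D, a, r0 = 1.0, 1.0, 1.0          # initial guess
    n = len(data)
    for _ in range(steps):
        gD = ga = gr0 = 0.0
        for r, e_ref in data:
            ex = math.exp(-a * (r - r0))
            resid = D * (1.0 - ex) ** 2 - e_ref
            gD  += 2.0 * resid * (1.0 - ex) ** 2 / n
            ga  += 2.0 * resid * 2.0 * D * (1.0 - ex) * ex * (r - r0) / n
            gr0 += 2.0 * resid * 2.0 * D * (1.0 - ex) * (-a * ex) / n
        D, a, r0 = D - lr * gD, a - lr * ga, r0 - lr * gr0
    return D, a, r0
```

Replacing stochastic search (millions of error evaluations) with a few thousand gradient steps is exactly the speed-up the abstract describes.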
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Place-Based Transit Service Equity in Chicago</title>
<link href="https://hdl.handle.net/1721.1/154192" rel="alternate"/>
<author>
<name>Swarney, Emma Pauline</name>
</author>
<id>https://hdl.handle.net/1721.1/154192</id>
<updated>2024-04-18T03:00:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Measuring Place-Based Transit Service Equity in Chicago
Swarney, Emma Pauline
How to equitably distribute public transit service is a highly topical subject facing transit agencies operating in North America. Recent social movements have reignited the debate around Civil Rights on public transit and resulted in increased scrutiny of transit planning practices. While many agencies are striving to incorporate more progressive equity analyses, these equity assessment methods have several shortcomings. For example, they have not addressed important questions such as how service levels can be meaningfully compared between city areas differing in geospatial characteristics (e.g. residential neighborhoods versus Central Business Districts), and what a sufficient level of transit service should be for an area to be considered equitably served.&#13;
&#13;
The goal of this thesis is to develop a new method for assessing place-based equity on a city-wide level, using Chicago and its transit system, the Chicago Transit Authority, as a case study. This method addresses several gaps in the literature and in practice, using historical passenger trips that closely reflect true system conditions to measure the state of transit service. This thesis develops a method for determining what an equitable level of transit service should be while accounting for where an area is situated within the greater city geography.&#13;
&#13;
This method is applied to two datasets from different time periods, September 2019 and October 2022. The two time periods are compared to understand if and how service quality has changed. Two types of analyses are performed on the data, one illustrating the service quality of all trips originating in an area, and the other to specific destinations, highlighting the strengths and weaknesses of the transit system. A quantitative equity score for each area in Chicago is presented, demonstrating a full execution of the method. The method is also applied to a project under proposal, the Red Line Extension, quantifying the projected equity benefits, and demonstrating how the method can be applied in different contexts.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stock-constrained optimization of partially disassembled trusses</title>
<link href="https://hdl.handle.net/1721.1/154188" rel="alternate"/>
<author>
<name>Van Marcke, Albertine</name>
</author>
<id>https://hdl.handle.net/1721.1/154188</id>
<updated>2024-04-18T03:43:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Stock-constrained optimization of partially disassembled trusses
Van Marcke, Albertine
Reuse of structural components is a relatively unexplored area of research with substantial potential for environmental impact. Structural reuse significantly reduces new material use, carbon emissions, and construction waste. This thesis shows a novel way of reusing structural components through partial disassembly of trusses into triangular components. A computational approach for quickly designing trusses with both new and recycled components is presented. The algorithm aggregates components row by row to fill a target area defined by the user, cutting the reused components where necessary and adding new material members and triangles where appropriate to prevent voids. When the reusable inventory has a variety of component sizes, multiple designs can be generated. The workflow uses a genetic algorithm to explore and optimize these different designs, taking into account the user’s stock input and target dimensions. Three case studies, reusing realistic trusses, illustrate the algorithm’s applicability to existing truss inventories.
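The genetic-algorithm search can be illustrated with a toy one-dimensional analogue (not the thesis implementation): binary genomes select which reusable stock lengths to combine toward a target length, and the fitness penalizes both new material (shortfall) and offcut waste (excess), mirroring the stock-constrained trade-off for trusses.

```python
import random

def evolve(stock, target, pop_size=40, gens=60, seed=0):
    """Toy GA for stock-constrained design: choose a subset of reusable
    stock lengths to cover `target`, minimizing new material (shortfall)
    plus offcut waste. A 1D stand-in for the truss case, where genomes
    would instead encode component placements and cuts."""
    rng = random.Random(seed)
    n = len(stock)

    def cost(genome):
        used = sum(l for l, g in zip(stock, genome) if g)
        shortfall = max(0.0, target - used)   # filled with new material
        waste = max(0.0, used - target)       # cut off and discarded
        return shortfall + 0.5 * waste        # waste penalized less

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=cost)   # tournament selection
            p2 = min(rng.sample(pop, 2), key=cost)
            cut = rng.randrange(1, n)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                   # point mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        pop = nxt
        best = min([best] + pop, key=cost)           # keep best-ever design
    return best, cost(best)
```

The same selection/crossover/mutation loop carries over when the genome encodes truss components; only the cost function changes.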
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aerodynamic and Thermal Considerations for an Antarctic Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/154187" rel="alternate"/>
<author>
<name>Makikalli, Aaron R.</name>
</author>
<id>https://hdl.handle.net/1721.1/154187</id>
<updated>2024-04-18T03:12:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Aerodynamic and Thermal Considerations for an Antarctic Ice Penetrator
Makikalli, Aaron R.
The Seismo-Geodetic Ice Penetrator (SGIP) is a helicopter-deployed kinetic penetrator designed to deliver a Global Navigation Satellite System (GNSS) and geodesy-grade seismometer to the Ross Ice Shelf (RIS) in Antarctica such that the seismometer becomes buried 2 m deep in the ice, ensuring coupling with the ice shelf. This vehicle provides a means to obtain data informative of ocean-atmosphere-ice dynamics that has historically been challenging to gather due to the remoteness and extreme environment of the RIS. &#13;
&#13;
In order to ensure an appropriate impact velocity and angle, SGIP’s aft-body must be sized to produce a drag force that results in a target terminal velocity of 42 m/s while remaining aerodynamically stable. A finite element flow simulation in SolidWorks and analytical stability calculations are applied to ensure that these requirements are met. Analytical predictions are compared with experimental data from wind tunnel testing and two full-scale drop tests in Alaska. The penetrator must be thermally insulated so that internal electronics are kept within their operating temperature range without melting the surrounding ice. A COMSOL finite element heat transfer model is used to inform the design of thermal insulation for the system to meet these requirements.
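The aft-body sizing follows from the terminal-velocity force balance, where drag equals weight: m g = ½ ρ v² C_d A. A minimal sketch, using a hypothetical penetrator mass and an assumed cold-air density (neither taken from the thesis):

```python
def drag_area(mass_kg, v_terminal, rho=1.34, g=9.81):
    """Cd*A product required so drag balances weight at terminal velocity:
    m*g = 0.5 * rho * v**2 * Cd * A  =>  Cd*A = 2*m*g / (rho * v**2)."""
    return 2.0 * mass_kg * g / (rho * v_terminal ** 2)

# Hypothetical numbers (not from the thesis): a 50 kg penetrator at the
# 42 m/s target terminal velocity in cold Antarctic air (rho ~ 1.34 kg/m^3).
cd_times_area = drag_area(50.0, 42.0)   # roughly 0.4 m^2 of Cd*A
```

With a drag coefficient assumed around 1.3 for a finned body, this would correspond to a reference area of roughly 0.3 m², the kind of figure the flow simulation and drop tests then refine.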
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Regional Rail: Strategies for Service Transformation on the Worcester/Framingham Line</title>
<link href="https://hdl.handle.net/1721.1/154182" rel="alternate"/>
<author>
<name>Wilkins, Devin Camille</name>
</author>
<id>https://hdl.handle.net/1721.1/154182</id>
<updated>2024-04-18T03:48:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Towards Regional Rail: Strategies for Service Transformation on the Worcester/Framingham Line
Wilkins, Devin Camille
Increasingly, the urgent threat of climate change has brought renewed focus to the efficiency of cities’ transportation networks and the benefits of mode shift away from the private automobile and towards transit. Over the last three years, changes in journey patterns resulting from the social impacts of the COVID-19 pandemic have triggered questions about the future of public transit systems, and whether changes to established service delivery strategies and fare products are needed. In many ways, commuter rail as a service delivery strategy feels like a relic of a past time, as miles of track and a fleet of train sets sit virtually idle for most of the day until peak weekday commute hours.&#13;
&#13;
The goal of this research is to explore the potential transformation of the Massachusetts Bay Transportation Authority (MBTA) Commuter Rail system into a so-called "regional rail" system. That is, a vast network of heavy rail that leverages its abundant track infrastructure to run high-frequency bi-directional service all day between major population centers in the region. The aim of regional rail service is to serve all members of society equally, not just white-collar commuters.&#13;
&#13;
The Worcester/Framingham line, which runs from Boston’s South Station through 44 miles of the MetroWest corridor, is used as a case study. Three post-pandemic demand scenarios are proposed, and service simulation and schedule optimization tools are developed to generate three service plans and policy recommendations that promote increased passenger demand in the near future. This analysis culminates in the proposal of a four-part investment plan for infrastructure and service on the line, resulting in a high-frequency service that serves riders only within the urban core, acting as a "second subway" to augment Boston’s urban rail network.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Apparel Pack Sizes Across Retailer’s North America Network</title>
<link href="https://hdl.handle.net/1721.1/154180" rel="alternate"/>
<author>
<name>Teno, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/154180</id>
<updated>2024-04-18T03:21:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing Apparel Pack Sizes Across Retailer’s North America Network
Teno, Jason
Athletic apparel companies outsource apparel manufacturing to many factories that pack in varied sizes and quantities. Packaging is a critical, early step in retailers’ supply chains. Pack quantities impact downstream supply chain costs. Optimizing the relationship between pack quantities and downstream costs allows retailers to reduce unnecessary repackaging within their local distribution centers. This research created a discrete optimization model aimed at minimizing distribution center costs as a function of pack sizes. As sales orders trend lower due to an increase in e-commerce sales, the optimization model suggested decreasing pack sizes to accommodate these trends and decrease the variation in pack sizes across product classifications. Immediate implementation would result in a 13.2% reduction in repackaging costs. After implementation, communication with customers to match sales orders to pack sizes would result in a 39.2% reduction in repackaging costs.
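The pack-size trade-off can be sketched with a toy cost model (not the thesis's model): smaller packs mean more packs to handle per order, while pack sizes that do not divide typical order quantities force a pack to be broken open and repacked.

```python
import math

def best_pack_size(orders, candidates, handle_cost=0.2, repack_cost=3.0):
    """Brute-force the pack size minimizing distribution-center cost.
    Each order of q units ships ceil(q/s) packs (a handling cost each);
    if q is not a multiple of s, one pack must be broken open and
    repacked (a repack cost). Cost parameters are illustrative."""
    def cost(s):
        total = 0.0
        for q in orders:
            total += handle_cost * math.ceil(q / s)
            if q % s:
                total += repack_cost
        return total
    best = min(candidates, key=cost)
    return best, cost(best)
```

For orders that cluster around multiples of a common quantity, the search lands on that quantity: it avoids all repacking while keeping the pack count low, which is the intuition behind aligning pack sizes with shrinking e-commerce order profiles.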
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ignition by Nanosecond Repetitively Pulsed Discharges</title>
<link href="https://hdl.handle.net/1721.1/154175" rel="alternate"/>
<author>
<name>Dijoud, Raphael J.</name>
</author>
<id>https://hdl.handle.net/1721.1/154175</id>
<updated>2024-04-18T03:42:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Ignition by Nanosecond Repetitively Pulsed Discharges
Dijoud, Raphael J.
Previous works have shown that nanosecond pulsed plasmas can have strong benefits for ignition, including a reduction of ignition delay times, a decrease of minimum ignition energies, and an extension of lean ignition limits. These effects are highly dependent on experimental conditions such as temperature, mixture, pulse repetition frequency, pulse energy, and discharge size. Therefore, a model allowing for parametric explorations is needed to separate the influence of each variable on plasma-assisted ignition. This work presents the development of both (i) a zero-dimensional (0D) chemical model for plasma-assisted combustion relevant for aircraft engine applications, and (ii) a one-dimensional (1D) radial fluid model of reacting flows describing radial ignition triggered by Nanosecond Repetitively Pulsed (NRP) discharges.&#13;
&#13;
The models developed are used to explore the influence of various parameters in an optimization effort. Using the 0D model, the influence of initial gas temperature and energy deposited per pulse on the reduction of ignition delay time is analyzed. Various mixtures of fuel/oxygen/nitrogen are also explored, changing the equivalence ratio and dilution factor, and compared with an instantaneous pure thermal input from the discharge to quantify the chemical effect of the discharge. The 1D model is initially demonstrated in a scenario where no plasma is present, focusing on the ignition of a methane/air mixture by a high-temperature kernel. Additionally, a test case is presented, comparing different NRP ignition strategies. In this case, the total power budget of the discharge is maintained within a narrow range by adjusting the pulse repetition frequency inversely proportional to the square of the plasma region size. Different plasma kernel sizes and pulse repetition frequencies are explored, and their effect on ignition and flame propagation enhancement is discussed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Thermal Stability of Nanocrystalline Ag-Cu Alloys</title>
<link href="https://hdl.handle.net/1721.1/154174" rel="alternate"/>
<author>
<name>Sulzman, Serita L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154174</id>
<updated>2024-04-18T03:07:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluating the Thermal Stability of Nanocrystalline Ag-Cu Alloys
Sulzman, Serita L.
Nanocrystalline alloys offer multitudinous advantages over their larger-grained counterparts, including increased strength, hardness, resistance to fatigue, and more. However, a significant barrier to their implementation is their low thermal stability: they are prone to coarsening at very low homologous temperatures. Luckily, a thermodynamic approach to stabilizing the microstructures of nanocrystalline metals by adding an alloying element shows great promise. Recent improvements in computational models have facilitated identification of alloy systems in which solute segregation to the grain boundaries is energetically favorable. However, more experimental validation is needed to verify whether their predictions can translate to enhanced thermal stability of alloys in practice. In this work, computational calculations of segregation energies and various processing considerations provided guidance for the selection of the silver-copper system for further study. Procedures were developed to synthesize chemically homogeneous nanocrystalline Ag-Cu alloys, and heat treatments with in-situ X-ray diffraction were designed to evaluate their resistance to grain growth at increasing temperatures. Examination of the microstructures of the heat-treated samples with focused ion beam and scanning electron microscopy corroborated Scherrer grain size calculations, which showed that the alloys with 5 at.% and 25 at.% copper maintained much smaller equilibrium grain sizes at all temperatures in the scope of study compared to pure silver. As computationally predicted, these data show that the addition of copper can improve the thermal stability of nanocrystalline silver. The experimental validation of these thermodynamic and other system selection criteria provides a framework for the development of novel thermally stable nanocrystalline alloys for countless engineering applications.
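The Scherrer grain size calculation mentioned above has a simple closed form, D = Kλ / (β cos θ), relating crystallite size to X-ray peak broadening. A minimal sketch with illustrative numbers (the reflection angle and peak width below are assumptions, not data from the thesis):

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size (nm) from XRD peak broadening via the Scherrer
    equation D = K * lambda / (beta * cos(theta)), where beta is the
    peak FWHM in radians and theta is the Bragg angle (half of 2-theta).
    Default wavelength is Cu K-alpha; K ~ 0.9 is a common shape factor."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative input (not from the thesis): a reflection near
# 2-theta = 38.1 deg with a 0.5 deg FWHM gives a size of 16 to 17 nm.
size_nm = scherrer_size(38.1, 0.5)
```

Broader peaks at higher temperatures thus translate directly into larger computed grain sizes, which is how the in-situ diffraction runs track coarsening.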
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role Of Repurposing Coal Plants to Thermal Energy Storage in the Context of India</title>
<link href="https://hdl.handle.net/1721.1/154168" rel="alternate"/>
<author>
<name>Patel, Serena Naresh</name>
</author>
<id>https://hdl.handle.net/1721.1/154168</id>
<updated>2024-04-17T03:27:42Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">The Role Of Repurposing Coal Plants to Thermal Energy Storage in the Context of India
Patel, Serena Naresh
Substantial coal phase-out initiatives have been growing as the world mobilizes to meet the Paris climate goals. However, the stranded asset risk associated with this critical transition could fall disproportionately on Asian economies with younger coal fleets, like India. Here, we use a bottom-up and top-down techno-economic modeling approach to explore the value of installing commercially available, molten-salt thermal energy storage (TES) systems for repurposing existing coal power plants in the Indian context. We combine thermodynamic simulation and an economic optimization model to evaluate design and operations of TES systems for a variety of technology assumptions, coal plant archetypes, and electricity price scenarios. Key drivers of economic viability identified include longer remaining plant lifetime, increasing peak TES temperature, lower TES energy capacity cost, co-production of waste heat for end-uses, and increasing temporal variability of electricity prices. The plant-level analysis was then extended to screen for the potential for TES retrofits for the coal power fleet in Uttar Pradesh, the most populous Indian state and among those with the largest coal capacity. Analysis for a single electricity price scenario indicates that over 89% of the coal capacity in the state can be retrofitted and recover the costs of TES retrofits. Under the top-down, capacity expansion modeling approach, we find TES retrofits can save 3-6% in system costs in zero emission scenarios and operate as long-duration energy storage, complementing shorter-duration Li-ion based energy storage. Our results justify further investigation into articulating the value of repurposing coal plants from the interests and positions of different just energy transition stakeholders.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Doing the Dirty Work: Employment vulnerability to the energy transition and its implications for climate policy and politics</title>
<link href="https://hdl.handle.net/1721.1/154167" rel="alternate"/>
<author>
<name>Graham, Kailin</name>
</author>
<id>https://hdl.handle.net/1721.1/154167</id>
<updated>2024-04-17T03:42:54Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Doing the Dirty Work: Employment vulnerability to the energy transition and its implications for climate policy and politics
Graham, Kailin
As the world moves away from fossil fuels, there is growing recognition of the need for policy to support a just transition of those working in carbon-intensive industries. However, little work has thoroughly investigated which communities are most vulnerable to economic disruption in the energy transition and therefore require policy support. This thesis analyzes the distribution of employment vulnerability in the United States by calculating the average "employment carbon footprint" of close to every job in the U.S. economy at high geographic and sectoral granularity. I find that existing efforts to identify at-risk communities, both in the literature and in the Inflation Reduction Act, exclude regions of high employment vulnerability, and thereby risk leaving these communities behind in the energy transition. I also identify significant within-sector heterogeneity in employment carbon footprints that is unexplained by fuel mix or power grid carbon intensity, and find that carbon-intensive regions tend to be more rural, less racially and ethnically diverse, less educated, and more likely to vote Republican, and that these regions often lack institutional capacity to retrain laid-off workers. This thesis also uses these new data to empirically test the salience of employment impacts for political representatives. I find that legislators from districts with carbon-intensive employment are less likely to vote in favor of climate policy, while household carbon footprints have no effect despite being correlated with public opinion on climate action; I also note the significance of the partisan divide on climate voting. 
Altogether, this thesis argues that just transition policy is crucial to progress action on climate change by addressing politically salient employment impact concerns; underscores the importance of proactive and continuous measures of employment vulnerability in targeting such policy; provides policymakers with the much-needed data to do so; and makes the case that such policies should be place-based and tailored to the communities they strive to serve.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BReach-LP: a Framework for Backward Reachability Analysis of Neural Feedback Loops</title>
<link href="https://hdl.handle.net/1721.1/154166" rel="alternate"/>
<author>
<name>Rober, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/154166</id>
<updated>2024-04-17T03:37:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">BReach-LP: a Framework for Backward Reachability Analysis of Neural Feedback Loops
Rober, Nicholas
Neural networks (NNs) can be used to solve a wide variety of robotics problems ranging from computer vision to control. However, while NNs often work well in nominal scenarios, their performance can decrease significantly in scenarios that they were not trained for. Thus, as we move toward real-world deployment of neural feedback loops (NFLs), i.e., closed-loop systems containing NNs, it is critical that we develop methods to verify that these systems are safe. Previous works have developed forward reachability techniques to verify safety for NFLs, but these techniques can be prohibitively conservative in non-convex settings such as obstacle avoidance. To enable safety verification in non-convex settings, this thesis proposes BReach-LP: a set of techniques to conduct backward reachability analysis for NFLs. While backward reachability analysis has been studied for systems not containing NNs, the general noninvertibility of NNs makes backward reachability analysis for NFLs a challenging problem. Thus, our approach leverages existing forward NN analysis tools to find affine bounds on the control inputs and solve a series of linear programs to efficiently find an approximation of the backprojection sets, i.e., the set of states for which an NN control policy will drive the system to a given target set. This thesis outlines four variations of BReach-LP, including proofs of their soundness and numerical results demonstrating their application.
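The backprojection computation can be illustrated in one dimension, where the linear programs collapse to linear inequalities. The sketch below is a toy analogue, not the BReach-LP implementation: it assumes a forward NN analysis has already produced affine bounds on the control, and over-approximates the set of states from which the single-step dynamics x' = x + u can reach a target interval.

```python
def _halfspace(a, b, lo, hi):
    """Intersect the interval [lo, hi] with {x : a*x <= b}."""
    if a > 0:
        hi = min(hi, b / a)
    elif a < 0:
        lo = max(lo, b / a)
    elif b < 0:          # a == 0: constraint reads 0 <= b, infeasible
        return None
    return (lo, hi) if lo <= hi else None

def backprojection_1d(c_lo, d_lo, c_hi, d_hi, t_lo, t_hi,
                      x_lo=-10.0, x_hi=10.0):
    """Over-approximate the backprojection set for x' = x + u, where a
    forward analysis has bounded the NN controller affinely:
        c_lo*x + d_lo <= u(x) <= c_hi*x + d_hi.
    A state x can reach the target [t_lo, t_hi] only if the interval of
    reachable next states intersects it, giving two linear inequalities:
        (1 + c_lo)*x <= t_hi - d_lo   and   (1 + c_hi)*x >= t_lo - d_hi.
    Returns the feasible interval within [x_lo, x_hi], or None."""
    box = (x_lo, x_hi)
    box = box and _halfspace(1.0 + c_lo, t_hi - d_lo, *box)
    box = box and _halfspace(-(1.0 + c_hi), d_hi - t_lo, *box)
    return box
```

In higher dimensions each inequality becomes a linear program over the affine control bounds, which is the role of the LP solver in BReach-LP.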
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Who, When, How (Not) to Imitate? The Role of Imitation in Collective Intelligence, and Its Implications on the Design of Socio-Technical Systems</title>
<link href="https://hdl.handle.net/1721.1/154165" rel="alternate"/>
<author>
<name>Choi, Eunseo Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/154165</id>
<updated>2024-04-17T03:50:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Who, When, How (Not) to Imitate? The Role of Imitation in Collective Intelligence, and Its Implications on the Design of Socio-Technical Systems
Choi, Eunseo Dana
Humans collectively demonstrate coordination and progress on a massive scale, building, adapting, and thriving under the rules of different institutions. Researchers posit social learning as a mechanism for overcoming individual limitations, quickly adapting to environments, passing knowledge across generations, and enabling rapid cumulative cultural evolution. This thesis demonstrates how multi-agent learning (MAL) can facilitate counterfactual experiments that shed light on the performance of different social learning strategies. Simulations show that the details of who, when, and how to imitate affect group fitness in distinct ways based on the size and homogeneity of the group: 1. unbiased imitation works well in homogeneous groups as long as there is a minimum age for agents to be imitated; 2. imitation strategies based on models’ complete action history instead of their recent actions, although similar, can attain very different levels of group fitness; 3. very high levels of imitation probability (up to 98% in some cases) may be efficient for group learning. Results from this thesis both complement and contradict accepted results from the literature. By explicitly comparing the mechanisms that govern the success or failure of group learning, findings from multi-agent learning can provide essential guidance for the design of socio-technical systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping the Electrodialysis Architecture Design Space by Determining Optimal System Configurations for Different Production Outputs</title>
<link href="https://hdl.handle.net/1721.1/154160" rel="alternate"/>
<author>
<name>Tran, Jimmy</name>
</author>
<id>https://hdl.handle.net/1721.1/154160</id>
<updated>2024-04-17T03:34:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mapping the Electrodialysis Architecture Design Space by Determining Optimal System Configurations for Different Production Outputs
Tran, Jimmy
Water scarcity is increasing around the world, and it especially affects remote, resource-constrained communities. Many communities with the highest water stress also live in close proximity to slightly saline water sources, while having abundant solar irradiance. Photovoltaic-powered electrodialysis reversal (PV-EDR) systems have been shown to produce water more cost-effectively and energy-efficiently than other desalination technologies. The goal of this work is to establish a framework for designing and optimizing PV-EDR systems that enables designers to develop low-cost systems to desalinate brackish water in remote, resource-constrained communities of various sizes around the world. Using this framework, the architecture that produces water at the lowest cost across a large range of production volumes can be identified. To potentially produce water more effectively at larger production volumes using variable power, a new architecture called hybrid operation was proposed and explored that combines the benefits of both continuous and batch operation. Additionally, this framework can be used to identify the most cost-effective strategy for employing batteries and managing the energy stored versus used for desalination. Optimizing EDR systems to minimize capital cost while maximizing production volume across the design space, including different architectures (batch, continuous, hybrid), energy management strategies (predictive, non-predictive, no batteries), feed salinities (1000-4000 mg/L), target salinities (100-500 mg/L), and recovery ratios (50%-90%), allows us to identify the most cost-effective EDR system designs across a range of production volumes. By comparing the EDR system designs across the design space, we can identify when each architecture and energy management strategy should be employed. Below 15 m^3 of water production per day, batch systems should be employed over hybrid systems. 
If users are not sensitive to salinity changes throughout the day, continuous systems should be used when producing more than 65 m^3 of water per day. Conversely, if users are sensitive to salinity changes, or a large buffer volume like a reservoir or pond is not available, hybrid systems should be used when producing more than 80 m^3 of water per day. Between these production volume thresholds, the specific target salinity, feed salinity, and recovery ratio can be used to inform which architecture to use. Incorporating a battery into a PV-EDR system can lower the capital cost of the system by approximately 12.3% for systems that produce between 10 and 100 m^3 of water per day, while producing the same amount of water as a similar EDR system without a battery.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Portable Device to Detect Per- and Polyfluoroalkyl Substances (PFAS) in Water</title>
<link href="https://hdl.handle.net/1721.1/154159" rel="alternate"/>
<author>
<name>Benner, Tioga</name>
</author>
<id>https://hdl.handle.net/1721.1/154159</id>
<updated>2024-04-17T03:01:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design of a Portable Device to Detect Per- and Polyfluoroalkyl Substances (PFAS) in Water
Benner, Tioga
PFAS, or per- and polyfluoroalkyl substances, are a group of man-made chemicals used since the 1940s. The chemicals are highly stable and build up in the environment and in organic systems. At high concentrations they can cause several health problems in humans, making them chemicals of significant concern given their ubiquity. This article details the engineering research effort to create an initial design and prototype for a portable PFAS testing device to be used in the field for long-term PFAS measurement of both drinking water and groundwater. This research also provides an initial validation and derisking of this PFAS measurement system and is intended as a starting point for an eventual effort to create a marketable PFAS testing device by the project sponsor, Xylem Corporation. The measurement of PFAS is made possible by a polymer developed by members of Tim Swager’s lab at MIT that has a fluorescence quenching response in the presence of PFAS.&#13;
&#13;
Multiple initial concepts were created and rated on a variety of factors to find the fluidic system design that would be most effective for this use case. The final design uses a needle-based system that inserts microliter-scale sample fluids into a cartridge of many single-use microwells. The microwells are multilayered devices designed not to interact with PFAS and can be easily integrated into a cartridge, with more than a hundred individual microwells fitting in a single 25 × 25 mm sheet, allowing many tests to be done before manual replacement of the cartridge is required. The cartridges can be easily removed and replaced for ease of use in the field. This design simplifies production of the device, can be easily automated, and can fit within a conventional backpack for easy transport, fulfilling the goals set out by Xylem for the final device.&#13;
&#13;
This article also discusses initial experiments in polymer validation, showing potential methods of differentiating types of PFAS, along with tests of polymer sensitivity using fluorescence images. These experiments found limits of detection close to 0.1 ppb, and there are multiple promising ideas to improve that sensitivity, which are being pursued through experimentation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Distributions: Invariance Principles &amp; Mismatched Guesswork</title>
<link href="https://hdl.handle.net/1721.1/154158" rel="alternate"/>
<author>
<name>Mariona, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/154158</id>
<updated>2024-04-17T03:46:50Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Comparing Distributions: Invariance Principles &amp; Mismatched Guesswork
Mariona, Alexander
We study two different ways of measuring the similarity between distributions over a finite alphabet. The first is an invariance principle which gives a quantitative bound on the expected difference between general functions of two finite sequences of random variables. This result is one way to generalize the foundational basic invariance principle to a particular multivariate setting. The second framework is based on guesswork, which is one way to measure the randomness of a distribution, similar to but notably distinct from the Shannon entropy. Given a bound on the total variation distance between two finite distributions, we give a bound on the difference in guesswork between those distributions and study the geometrical properties of the problem in the non-asymptotic setting.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bi-Level Belief Space Search for Assembly Tasks</title>
<link href="https://hdl.handle.net/1721.1/154156" rel="alternate"/>
<author>
<name>Chintalapudi, Sahit</name>
</author>
<id>https://hdl.handle.net/1721.1/154156</id>
<updated>2024-04-17T03:01:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bi-Level Belief Space Search for Assembly Tasks
Chintalapudi, Sahit
Contact-rich manipulation tasks, such as assembly, require a robot to reason about both the geometric relationship between parts as well as the dynamical relationship between the forces the robot exerts and the motion of the parts. The application of forces enables the robot to reduce its uncertainty by purposefully contacting the environment, a crucial skill in real-world domains where state is not fully observed. In this thesis, a planner is introduced that reasons over both gripper poses and joint stiffnesses, trading off motion generation to reach an objective and force production to manage uncertainty. Our planner performs a greedy optimization over stiffness and learns a model of the relationship between control output and goal achievement to bias the pose search. This planner is validated on a peg-in-hole insertion task in simulation and the real world and a puzzle assembly task in simulation. We measure the effects of solving for stiffnesses and generating robust gripper poses in terms of the uncertainty our planner can address.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Novel Device for the Treatment of Obstructive Sleep Apnea</title>
<link href="https://hdl.handle.net/1721.1/154155" rel="alternate"/>
<author>
<name>Gao, Qiyun</name>
</author>
<id>https://hdl.handle.net/1721.1/154155</id>
<updated>2024-04-17T04:03:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Novel Device for the Treatment of Obstructive Sleep Apnea
Gao, Qiyun
In this thesis, a novel device for the treatment of Obstructive Sleep Apnea (OSA) is developed and tested. This device uses intra-oral suction to stabilize the tongue and/or soft palate in a position that does not obstruct the airway, thus reducing apnea episodes. The treatment device consists of a patient-specific oral device and a non-patient-specific pump unit. Patients wear the oral device on the upper palate, where it directs suction towards the tongue and/or soft palate. A length of tubing connects the oral device to the pump unit, which is placed bedside and is envisioned to become a wearable device in further iterations.&#13;
&#13;
Experimental results from a small-scale clinical trial verified that the device performs its intended function of stabilizing the tongue and does not cause an increase in the Apnea Hypopnea Index (AHI) in healthy volunteers. MRI imaging of volunteers wearing the device showed that the device enlarges the airway by 60%-80%. A finite element model of the tongue, soft palate, and airway, with muscle fiber directions derived from diffusion tensor MRI, is implemented as a proof of concept that the device can treat OSA. The level of vacuum required to stabilize the tongue estimated by the finite element (FE) model is consistent with experimental results.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-vector Energy Systems Analysis for Heavy-duty Transportation Deep Decarbonization Using H₂ and Synthetic Fuels</title>
<link href="https://hdl.handle.net/1721.1/154152" rel="alternate"/>
<author>
<name>Shaker, Youssef H.</name>
</author>
<id>https://hdl.handle.net/1721.1/154152</id>
<updated>2024-04-17T03:58:25Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Multi-vector Energy Systems Analysis for Heavy-duty Transportation Deep Decarbonization Using H₂ and Synthetic Fuels
Shaker, Youssef H.
Policies focused on deep decarbonization of regional economies tend to emphasize electricity sector decarbonization in conjunction with electrification of end-uses and, increasingly, the use of hydrogen (H₂) produced via electricity for displacing fossil fuels in difficult-to-electrify sectors. One such use case is heavy-duty transport, which represents a substantial and growing share of global transport sector emissions given the increasing electrification of the light-duty vehicle fleet. Here, we assess the bulk energy system impact of decarbonizing the heavy-duty vehicle (HDV) segment via use of either H₂ or drop-in synthetic liquid fuels produced from H₂ along with CO₂. Our analysis relies on soft-linking two modeling approaches: a) a bottom-up model of transportation energy demand that produces a variety of final energy demand scenarios for the same service demand and b) a multi-sectoral capacity expansion model, DOLPHYN, that co-optimizes power, H₂, and CO₂ supply chains subject to a variety of technological and policy constraints to meet the exogenous final energy demand slate. Through a case study of Western European countries under deep decarbonization constraints for the year 2040, we quantify the energy system implications of varying levels of H₂ and synthetic fuels adoption in HDVs, under scenarios with and without CO₂ sequestration capacity availability. We find that substitution of liquid fossil fuels in the HDV segment is essential to meet the imposed deep decarbonization constraint across the modeled power, H₂, and transport sectors, particularly in the absence of CO₂ storage. Additionally, we find that utilizing H₂ HDVs reduces bulk system costs of deep decarbonization, while reducing fossil liquids demand, but could increase natural gas consumption in some cases. 
While H₂ HDV adoption reduces the need for direct air capture (DAC), synthetic fuel adoption results in a greater need for DAC and also leads to system cost increases compared to scenarios without their adoption. The study highlights the trade-offs associated with different transportation decarbonization pathways, and underlines the importance of multi-sectoral consideration in decarbonization studies.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intracellular sensor spatial multiplexing via RNA scaffolds</title>
<link href="https://hdl.handle.net/1721.1/154122" rel="alternate"/>
<author>
<name>Johnson, Shannon L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154122</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Intracellular sensor spatial multiplexing via RNA scaffolds
Johnson, Shannon L.
To circumvent the limitations of spectrally multiplexing sensors, fluorescent sensors are clustered by type and spatially separated in the cytoplasm to avoid cross-talk. Each sensor is fused to an orthogonal viral capsid protein that binds to a long, repetitive strand of its corresponding RNA sequence. All sensors fluoresce green and are indistinguishable during recording but are identified with post-hoc antibody or FISH staining for each sensor-specific puncta. This spatial multiplexing strategy will allow for easier scaling of the number of fluorescent reporters of physiological activity.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 38-40).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>[mu]Jawstures : jaw-teeth microgestures for discreet hands-and-eyes-free mobile device interaction</title>
<link href="https://hdl.handle.net/1721.1/154119" rel="alternate"/>
<author>
<name>Vega Gálvez, Tomás Alfonso.</name>
</author>
<id>https://hdl.handle.net/1721.1/154119</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">[mu]Jawstures : jaw-teeth microgestures for discreet hands-and-eyes-free mobile device interaction
Vega Gálvez, Tomás Alfonso.
We often perform activities that situationally impair us, decreasing our ability to interact with mobile devices when needed. These impairments manifest physically, by preventing us from using our hands and eyes when they are already devoted to other ongoing processes (e.g., biking or driving), and socially, by making certain interaction modalities inappropriate given social norms, etiquette, and rules of engagement. Researchers have investigated using jaw and teeth microgestures as a discreet hands-and-eyes-free solution for mobile device interaction while situationally impaired. However, an opportunity remains to investigate ways to wirelessly and unobtrusively sense these gestures, and to further explore and evaluate the design space for jaw and teeth microgestures in the context of general-purpose human-computer interaction. This thesis makes four major contributions to the exploration of jaw and teeth microgestures. Through an iterative prototyping process, the work contributes attachable, miniaturized, wireless sensor nodes that are placed bilaterally behind the ears to unobtrusively sense jaw-teeth microgestures with 88% accuracy in a stationary context. The thesis also presents a hyper-personalized mobile application that permits training jaw-teeth gestures and mapping them to mobile device commands. The work further contributes a universal teeth contact and jaw-teeth gesture taxonomy, which is evaluated for its comfort and usability. Finally, it contributes an exploration of the potential use cases of jaw-teeth-gesture-based mobile device interaction.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; [mu] appeared in title on title page appears as lower case Greek letter. Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 157-166).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High throughput single molecule in situ-verified nucleic acid synthesis</title>
<link href="https://hdl.handle.net/1721.1/154118" rel="alternate"/>
<author>
<name>Griswold, Kettner J. F., Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/154118</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">High throughput single molecule in situ-verified nucleic acid synthesis
Griswold, Kettner J. F., Jr.
Synthetic biology is a burgeoning field with applications in medicine, agriculture, chemistry, and other areas. Synthetic biology aims to rationally engineer novel functionality into organisms, from the molecular level to the whole-genome scale. As an engineering discipline, synthetic biology development follows a canonical design-build-test cycle. In a typical workflow, designs are generated in computer programs and specified at the DNA level. Subsequently, DNA encoding the design must be built to specification and tested for desired functionality in vivo or in vitro. In current practice, building DNA by de novo DNA synthesis and related methods is a rate-limiting and costly bottleneck for researchers. State-of-the-art de novo DNA synthesis technologies are trial-and-error, nondeterministic processes in which turnaround times for specified DNA are on the order of weeks and costs reach several thousand dollars per gene or multigene order. Among the many challenges inherent to building novel DNA sequences are truncation errors (failure to extend) and damaging side reactions during synthesis of the short DNA oligonucleotide (100 bp) precursors used in DNA assembly. There are also challenges in assembling oligonucleotides due to the tendency of DNA to form secondary structures and undesired annealing products during assembly reactions. Consequently, DNA synthesis companies spend upwards of 80 percent of manufacturing time sequencing thousands of DNA assemblies until a correct assembly is found. This thesis describes a method for rapid, scalable de novo DNA synthesis, embodied as highly parallelized single-molecule enzymatic synthesis of 10 kb sequences with real-time in situ sequence verification.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 42-43).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A phase-sensitive system for measuring acoustic pressure in an impedance tube</title>
<link href="https://hdl.handle.net/1721.1/154112" rel="alternate"/>
<author>
<name>Cavalieri, Albert L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154112</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">A phase-sensitive system for measuring acoustic pressure in an impedance tube
Cavalieri, Albert L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1949; Bibliography: leaf [19].
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Headquarters, Army of the Philippines at Camp Murphy, Manila</title>
<link href="https://hdl.handle.net/1721.1/154110" rel="alternate"/>
<author>
<name>Arguelles y Corcuera, Carlos Domingo.</name>
</author>
<id>https://hdl.handle.net/1721.1/154110</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Headquarters, Army of the Philippines at Camp Murphy, Manila
Arguelles y Corcuera, Carlos Domingo.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1946; Bibliography: leaves 107-108.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Yard unreliability in rail freight movement.</title>
<link href="https://hdl.handle.net/1721.1/154109" rel="alternate"/>
<author>
<name>Reid, Robert Malcolm.</name>
</author>
<id>https://hdl.handle.net/1721.1/154109</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Yard unreliability in rail freight movement.
Reid, Robert Malcolm.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1971
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deciphering Hydrological Responses: Elastic and Poroelastic Behavior Through GPS Temporal Analysis</title>
<link href="https://hdl.handle.net/1721.1/154036" rel="alternate"/>
<author>
<name>Sandoe, Lucy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/154036</id>
<updated>2024-04-03T03:22:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Deciphering Hydrological Responses: Elastic and Poroelastic Behavior Through GPS Temporal Analysis
Sandoe, Lucy A.
Hydrologic features, such as lakes and reservoirs, load the surface of the earth, which causes measurable deformation. As the surface is loaded, there is a vertical and horizontal deflection, with reference points moving downwards and towards a load as the surface is depressed. The horizontal and vertical deformation from reservoir loading can be seen in Global Positioning System (GPS) data. In the first chapter of this thesis, we use 18 years of data from GPS sites across Northern California, and we invert for the loads associated with different hydrologic regions on a finer scale than previous studies. We take a novel approach to regularization: the inversion is performed using the vertical components of deformation, but we regularize using the misfits of the horizontal components of deformation, which are semi-independent of the signal used in the inversion, thus avoiding overfitting noisy signals or oversmoothing sharp features. We validate the inversion on Lake Shasta, a large, confined reservoir with known capacities, before performing a preliminary study of the Northern Sierras, Klamath Mountains, and Black Rock Desert. By robustly inverting remote sensing data for hydrologic mass, we provide insights into water storage budgets at the reservoir scale across a critical and drought-prone region.&#13;
&#13;
However, there are some regions which can exhibit a porous or poroelastic response to surface water loading. In these areas, the subsurface can expand with the introduction of water; either by the filling of pore spaces or inflation of subsurface reservoirs. This process has the opposite sign of elastic loading, can have temporal delays, and is often nonlinearly recoverable. In Chapter 2, in order to accurately understand and quantify the effects of water loading, we use the degree of correlation between the modeled hydrology at each site and the actual GPS station timeseries, then extrapolate spatially, finding areas of higher and lower correlation with elastic deformation across the Western United States. We also study the dates of peak seasonal amplitude throughout the region. These factors will determine which regions can be modeled with elastic loading, which need a more complex poroelastic model, and which may have some hydrologic delay. The classification will also inform the relative drought resiliency of different regions by highlighting areas where an influx of water may have a delayed impact on reservoir recovery.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Model of a Floating Nuclear System for Hydrogen and Ammonia Production</title>
<link href="https://hdl.handle.net/1721.1/154032" rel="alternate"/>
<author>
<name>Won, Hanna</name>
</author>
<id>https://hdl.handle.net/1721.1/154032</id>
<updated>2024-04-03T03:40:25Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Integrated Model of a Floating Nuclear System for Hydrogen and Ammonia Production
Won, Hanna
Hydrogen’s role as a potential substitute for fossil fuels is expanding, offering significant potential to lower carbon emissions in critical sectors. Similarly, ammonia is stepping into the spotlight both as a substitute for conventional fuels in heavy industries and as an efficient hydrogen carrier. Nuclear Power Plants (NPPs) play a critical role in this scenario, offering a steady supply of low-carbon energy. This thesis explores the economic and environmental viability of an innovative marine-based facility for generating green hydrogen and ammonia using nuclear reactors. It analyzes various designs of this integrated floating platform, assessing their economic and environmental benefits, particularly focusing on enhancing operational flexibility and increasing the platform’s value. This includes selling electricity to the grid at times of peak electricity prices. The system optimizes operations by storing excess hydrogen during normal operation, ensuring continuous ammonia production during peak electricity hours. The research investigates diverse NPP and electrolysis configurations and assesses their collective efficiency in hydrogen and ammonia production. The study identifies the most effective NPP-electrolysis combination and shows how integrating ammonia synthesis can enhance the overall hydrogen production process from NPPs. Ammonia production generates excess heat that can be used to reduce external energy inputs into hydrogen production. Therefore, a holistic approach to the system, including the reactor, hydrogen, and ammonia production, must be considered to minimize costs.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Approach to Investigate Environmental Footprint and Cost Tradeoffs in Additive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/154031" rel="alternate"/>
<author>
<name>Midrez, Noemie</name>
</author>
<id>https://hdl.handle.net/1721.1/154031</id>
<updated>2024-04-03T03:58:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">System Approach to Investigate Environmental Footprint and Cost Tradeoffs in Additive Manufacturing
Midrez, Noemie
As additive manufacturing (AM) continues to grow and show potential for efficient resource utilization and product lifecycle, it represents a promising technology for the green industrial transformation needed to achieve Net Zero Emissions by 2050. However, the environmental impact of AM remains unclear, given its diverse applications and the historical emphasis on cost and quality as primary adoption drivers. Pressured by climate change, AM manufacturers lack quantitative tools to balance the technology’s complexity, environmental impact, and economic value. &#13;
&#13;
This thesis demonstrates the use of system modeling methodologies to help AM manufacturers navigate these tradeoffs and make data-driven decisions to scale their service. After exploring the policy landscape impacting manufacturing and reviewing the latest developments in AM cost modeling and environmental impact assessment, a case study on an AM service unit in the sporting goods industry is used to illustrate the methodologies. A tradespace analysis compares the value of HP’s MultiJet Fusion technology to injection molding (IM) across various product characteristics and lifecycle decisions, and a flexible design analysis evaluates various investment decisions, considering uncertainties from the market and technology. &#13;
&#13;
For the case studied (and assumptions used), the tradespace analysis reveals a 75% lower environmental footprint (EF) per part using AM compared to IM, while IM yields a 97% unit cost saving. Maximizing build capacity with small, uniform parts in locations with low-footprint energy increases AM’s economic and environmental value, suggesting that the opposite product attributes and lifecycle decisions constitute development areas. The flexible design analysis, conducted for the specific AM service unit, shows that transitioning with added capacity to a larger rental facility with solar panels yields a 37% lower EF than maintaining current operations, and that waiting to move to the larger facility until demand aligns with the added capacity generates a 96-137% increase in NPV. These trends lead to the recommendation to transition the existing capacity to a larger rental facility with solar panels and to wait for increased demand before investing in additional capacity.&#13;
&#13;
These insights affirm the effectiveness of system modeling methodologies in guiding AM service providers by balancing financial and environmental factors. By introducing the application of these techniques in the AM context, this study establishes a baseline and identifies gaps to bridge for improved model accuracy. The approach developed in this work can be applied to different cases to quantitatively explore strategic options for technology investment and scaling to meet financial and environmental sustainability goals.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cloud-Native Applications and Their Role in Supporting Agile Hardware Development</title>
<link href="https://hdl.handle.net/1721.1/154030" rel="alternate"/>
<author>
<name>Herrera, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/154030</id>
<updated>2024-04-03T03:01:35Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Cloud-Native Applications and Their Role in Supporting Agile Hardware Development
Herrera, Brian
Agile product development focuses on collaboration, iterative development, and responsiveness to change as a mindset and methodology for project teams. Agile has been instrumental in software development and in improving overall project outcomes for software teams. Given these benefits, Agile has recently been introduced to hardware teams as well. While Agile for hardware is still in its infancy, many aspects of cloud-based applications (e.g., Jira, Microsoft 365, Zoom, Miro, Google Docs) are enabling the use of Agile in hardware development. In this research, we explore how cloud-based applications support Agile development for hardware teams. We reviewed existing frameworks and interviewed nine individuals from eight different organizations. We learned that hardware teams are complex and require a high level of coordination among their members. Cloud-based applications support Agile project teams through collaboration, speed of iteration, flexibility, and alignment. When utilizing these applications, experienced practitioners consider their organizational structure, the team's physical location, and interdependencies with other groups. While cloud-based applications provide several benefits to project teams, we suggest teams adapt these tools to fit their specific needs. Future development and integration of these tools may help reduce the total number of applications used, streamlining the coordination process and reducing tool overhead.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lean Technology Roadmapping: Assessing the Value Path of Existing Approaches and Exploring Process Improvements</title>
<link href="https://hdl.handle.net/1721.1/154028" rel="alternate"/>
<author>
<name>Villegas, David</name>
</author>
<id>https://hdl.handle.net/1721.1/154028</id>
<updated>2024-04-03T03:45:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Lean Technology Roadmapping: Assessing the Value Path of Existing Approaches and Exploring Process Improvements
Villegas, David
For the past half-century, the practice of Technology Roadmapping (TRM) has been invaluable in helping companies align technology initiatives with business strategies. However, its successful implementation often requires significant investment, a challenge for companies with limited resources, especially start-ups. This study aims to understand how various roadmapping methods differ regarding value delivery and explores ways to optimize initial investments in TRM to maximize their value. To achieve this objective, this thesis integrates theoretical insights from analyzing established methods with practical perspectives from a case study. The analysis portion of the research models roadmapping as a system and dissects the value delivery mechanism of two different TRM methods. The case study examines the experimental roadmapping process implemented at a technology-intensive energy start-up in a real-world setting. The analysis component of the study concluded that, while both methods aim to align strategic priorities with technology initiatives, they differ in their approach: one relies on verbal communication and facilitation, while the other employs equations and models to rationalize R&amp;D project priorities quantitatively. An estimated investment of approximately 200 hours is considered sufficient to derive initial value from either method. Results from the case study showed that it is feasible to produce an initial roadmap within a start-up environment with an investment of approximately 100 person-hours, depending on the scope and complexity of the roadmap. This streamlined approach primarily enhances cross-functional communication and produces a simple visual roadmap using existing company documentation.
The findings from this research can assist companies in aligning their investments more effectively with their roadmapping needs and in setting realistic expectations about the resource investments required to achieve certain minimum benefits from TRM. The case study provides insights into the application of technology roadmapping within a start-up, highlighting practical challenges, areas for improvement, and potential for generalization.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GALiCA: A Gestural Approach to Live Coding Algorithms</title>
<link href="https://hdl.handle.net/1721.1/154027" rel="alternate"/>
<author>
<name>Savoldy, Lark</name>
</author>
<id>https://hdl.handle.net/1721.1/154027</id>
<updated>2024-04-03T03:03:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">GALiCA: A Gestural Approach to Live Coding Algorithms
Savoldy, Lark
Live coding is an electronic music performance practice in which performers generate music and visuals in real time by writing code. The cognitive approach to live coding differs greatly from that of gestural music, in which performers leverage extensive embodied knowledge of their instrument. These two domains, which each provide unique tools for musical creativity and expressivity, are often performed separately.&#13;
&#13;
This thesis considers the space between these two performance styles. The primary goal is to suggest the potential of a combined modality by considering techniques for gestural control over live code. A combination of live coding and gestural performance may allow for a new cognitive approach and entirely new ways to live code.&#13;
&#13;
To explore this idea, this thesis introduces GALiCA, a live coding system that implements four techniques for manipulating code through gestural interaction with a MIDI controller. These techniques are facilitated by a flexible sequencer conceptualization that allows for easy modification. Additionally, to guide the analyses, this thesis synthesizes existing conceptual perspectives on the cognition involved in gestural performance and live coding. The promising results and analyses of these techniques may encourage further exploration into this new field and prompt new cognitive approaches to electronic music performance.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Implementing Modular Nuclear Reactor Systems for Developing Countries : A framework for capturing the value potential of modular nuclear reactor systems and their deployment in developing countries</title>
<link href="https://hdl.handle.net/1721.1/154023" rel="alternate"/>
<author>
<name>Sibanda, Leroy Kudakwashe</name>
</author>
<id>https://hdl.handle.net/1721.1/154023</id>
<updated>2024-04-03T03:01:48Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Towards Implementing Modular Nuclear Reactor Systems for Developing Countries : A framework for capturing the value potential of modular nuclear reactor systems and their deployment in developing countries
Sibanda, Leroy Kudakwashe
Carbon-conscious energy production is an increasingly global concern, especially as countries reckon with the effects of climate change and their respective contributions to the problem. While developing countries contribute significantly lower amounts to global carbon emissions when compared to developed countries (sometimes orders of magnitude less per capita), there is growing consensus amongst energy leaders in these countries that they need not replicate the damaging levels of carbon emissions to fuel the energy needs required for economic growth. Many developing countries have already established significant renewable energy programs, but there is a need to supplement these intermittent energy sources with one that is more stable. Nuclear energy is widely accepted as a carbon-conscious energy source and has helped many developed countries make the switch to clean energy. It presents the opportunity for developing countries to start off with carbon-conscious energy production, but the prohibitive upfront cost of nuclear power plants, among other challenges, means adoption remains slow and often faces significant opposition.&#13;
&#13;
This study explores modular nuclear reactor systems as a solution to the challenges of building and financing nuclear power plants in Africa, which serves as a proxy for developing countries. The result is a framework for the implementation of modular nuclear reactor systems, with considerations for cost, safety, technology, and electric grid development, among other factors, all from the perspective of developing countries in Africa. Special consideration is given to communicating the value of this framework based on the interests of developing countries.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Hybrid CFD Turbulence Model, STRUCT-epsilon, for Thermal Striping Behavior</title>
<link href="https://hdl.handle.net/1721.1/154022" rel="alternate"/>
<author>
<name>Vaughan, Brendan Conor</name>
</author>
<id>https://hdl.handle.net/1721.1/154022</id>
<updated>2024-04-03T03:30:28Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Assessment of Hybrid CFD Turbulence Model, STRUCT-epsilon, for Thermal Striping Behavior
Vaughan, Brendan Conor
Many advanced nuclear reactor designs are susceptible to thermal fatigue damage caused by thermal striping, which presently accepted modeling and design tools cannot accurately or reliably predict. Advanced reactors are vital to achieving net-zero carbon electricity production, and thus developing design tools that can predict thermal striping is essential. Any new design tool used in the nuclear industry must be validated against experimental data sets to ensure that the results it predicts are sufficiently accurate. The STRUCT-epsilon Computational Fluid Dynamics model was used to aid the development of a dedicated thermal striping experiment that will later be used to help validate the STRUCT-epsilon model's capabilities.&#13;
&#13;
The STRUCT-epsilon model provided the ability to conduct turbulence-resolving simulations at a speed conducive to rapid iteration on the design of the DESTROJER test facility. To further increase confidence in the STRUCT-epsilon model's applicability to the test cases, two LES runs were completed, demonstrating STRUCT-epsilon's ability to capture flow unsteadiness. However, in both test cases the STRUCT-epsilon model exaggerates the behavior seen in the LES runs, overpredicting temperature oscillations in one case and flow asymmetry in the other. The STRUCT-epsilon model's potential to predict asymmetric configurations suggests promising further applications of the model. Future studies of STRUCT-epsilon should seek to better understand the model's performance in asymmetric flow cases to further support experimental design and the assessment of complex operating configurations.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Logs to Causal Analysis: A Guided User Interface for Causal Graph Discovery</title>
<link href="https://hdl.handle.net/1721.1/154021" rel="alternate"/>
<author>
<name>Gao, Trinity</name>
</author>
<id>https://hdl.handle.net/1721.1/154021</id>
<updated>2024-04-03T03:18:06Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">From Logs to Causal Analysis: A Guided User Interface for Causal Graph Discovery
Gao, Trinity
In a world full of digital systems, logs are found everywhere. From distributed systems logging network events to stock exchanges logging transactions, preserving information in logs is a widely used practice. Our group’s hope is that logs can preserve events and system states at various points in time, which can later be leveraged to answer causal questions about the system. However, analyzing logs is currently far from a smooth experience. Some system dynamics might only be partially captured by log variables, while others are drowned out by the sheer volume of uninteresting, "common-case" log lines. It is not always possible to require the logging format to match our analysis, since most systems rely on infrastructure code and libraries that cannot be altered directly; we would also be throwing away a considerable amount of existing logs. An existing system, Sawmill, is able to parse and process log data in order to answer causal questions. Sawmill presents the user with candidate answers to causal questions and relies on user input to accept or reject them. Doing this iteratively allows a user to build up a causal graph for a system’s logs. However, the user currently has no way to verify Sawmill’s answers. If a user incorrectly accepts or rejects an edge representing a causal relationship based on Sawmill’s estimate of the average treatment effect (ATE), the error will be integrated into the user’s causal graph and can cause further errors down the line. In this master’s thesis, we extend Sawmill’s capability by identifying and presenting key assumptions that greatly impact Sawmill’s answer to a causal question. The existence or non-existence of these assumptions informs the user about possible different states of the causal graph, providing more context about the log and ultimately allowing the user to be more confident in drawing causal conclusions.
This also mitigates the cascading effect of a single error in the construction of a causal graph. Importantly, we continue to leverage the user’s knowledge about the log, relying on their ability to accept and reject assumptions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Naval Ship Systems Power and Energy Metrics through Modeling and Analysis</title>
<link href="https://hdl.handle.net/1721.1/154019" rel="alternate"/>
<author>
<name>Platenberg, Drake</name>
</author>
<id>https://hdl.handle.net/1721.1/154019</id>
<updated>2024-04-03T03:16:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Characterizing Naval Ship Systems Power and Energy Metrics through Modeling and Analysis
Platenberg, Drake
This research introduces a framework for analyzing shipboard power and energy systems as a repeatable process to differentiate between preferred solutions within a design tradespace. The Naval design community needs a consistent method for evaluating non-functional requirements, called “ilities,” in the early design stages when informed decision-making provides the greatest opportunity to positively influence the system’s performance and lifecycle cost. Ilities are defined as emergent properties that impact a system’s ability to maintain value over time. The pace of technology maturation and the uncertainty in magnitude and characteristics of future load types drive the need for robust power and energy system architectures that can adapt to future perturbations in requirements. This research proposes a framework for developing metrics that can be used to identify preferred options within the design space. The framework considers the physical, logical, and operational aspects of the architecture to generate a set of perturbations that are likely to impact the system’s ability to maintain value over its lifecycle. The proposed process is exercised to develop quantitative, measurable metrics for Naval power and energy system flexibility: the capability of the system to accommodate change in response to perturbations in requirements. Four case studies are presented, developing metrics for Flexible Power Capacity, Debitable Power Flexibility, Distributable Power Flexibility, and Energy Storage Flexibility. A fifth case presents the application of Real Options Analysis for balancing system performance and cost to “right size” the P&amp;E system at initial delivery with preparations in the design to react to future uncertainty.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Impact of AI Value Alignment in Collaborative Ideation: Effects on Perception, Ownership, and Output</title>
<link href="https://hdl.handle.net/1721.1/154018" rel="alternate"/>
<author>
<name>Guo, Alicia</name>
</author>
<id>https://hdl.handle.net/1721.1/154018</id>
<updated>2024-04-03T03:57:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Exploring the Impact of AI Value Alignment in Collaborative Ideation: Effects on Perception, Ownership, and Output
Guo, Alicia
AI-based virtual assistants are increasingly used to support daily ideation tasks. The values or biases present in these agents can influence their output in hidden ways. They may also affect how people perceive the ideas produced with AI agents of different value alignments, with implications for the design of AI-based tools. We explored the effects of AI agents with different values on the ideation process and on user perception of idea quality, ownership, agent competence, and values present in the output. Our study tasked 180 participants with brainstorming practical solutions to a set of problems with AI agents of different values. Results show no significant difference in self-evaluation based on value alignment; however, the ideas generated in the brainstorming process reflected the AI’s values. This thesis highlights an intricate interplay between AI values and human ideation, suggesting careful design considerations for future AI-supported brainstorming tools.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Filling the Gaps – Exploring the Scope of Arts-Based Education in Jodhpur</title>
<link href="https://hdl.handle.net/1721.1/154017" rel="alternate"/>
<author>
<name>Mridul, Ashmi</name>
</author>
<id>https://hdl.handle.net/1721.1/154017</id>
<updated>2024-04-03T03:52:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Filling the Gaps – Exploring the Scope of Arts-Based Education in Jodhpur
Mridul, Ashmi
This thesis reports the findings of an action-research project exploring the scope of arts-based education to fill the gap of local knowledge in schools of Jodhpur, India. The research focuses on two pilot projects executed in the old city with the traditional performing arts of Kathputli and Kaavad. It is inspired by the collaborative, dynamic, sensory, and affective nature of traditional art practices. The pilot projects also investigate the creation, performance, and circulation of the two traditional arts, creating interventions to move past conventions. As a result, the project offers new opportunities and platforms of performance to families of traditional artists, with the aim of creating a new audience base among students.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Mars 2020 Mission Efficiency via SAPP Operations Automation and AEGIS V&amp;V</title>
<link href="https://hdl.handle.net/1721.1/154015" rel="alternate"/>
<author>
<name>Trautman, Leilani</name>
</author>
<id>https://hdl.handle.net/1721.1/154015</id>
<updated>2024-04-03T03:35:26Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Increasing Mars 2020 Mission Efficiency via SAPP Operations Automation and AEGIS V&amp;V
Trautman, Leilani
Operating a rover on another planet is a difficult task. While rovers are becoming increasingly autonomous, human input is still required and valuable in the space operations process. However, human time and rover time are precious, and efforts must be made to make missions as efficient as possible. This thesis addresses the need for mission efficiency by implementing operational improvements to the Mars 2020 Perseverance rover’s Surface Attitude Positioning and Pointing (SAPP) subsystem and by supporting the verification and validation (V&amp;V) of the Automated Exploration for Gathering Increased Science (AEGIS) software system for autonomous science gathering. These two projects help human operators assess the rover’s health and status more effectively and reduce the human-in-the-loop time required for science operations, respectively.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferences on the Influences of Age &amp; Porosity on Oxidative Weathering of Massive Sulfides at the Endeavour Segment of Juan de Fuca Ridge</title>
<link href="https://hdl.handle.net/1721.1/154014" rel="alternate"/>
<author>
<name>Herrera, Erica Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/154014</id>
<updated>2024-04-03T03:01:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Inferences on the Influences of Age &amp; Porosity on Oxidative Weathering of Massive Sulfides at the Endeavour Segment of Juan de Fuca Ridge
Herrera, Erica Lauren
Hydrothermal activity at mid-ocean ridge spreading centers occurs during the formation of new oceanic crust and is responsible for the accumulation of mineral deposits composed mainly of inorganic metal sulfides that precipitate from mixtures of seawater and high-temperature, sulfide-rich, oxygen-poor vent fluid. These mineral aggregates are known as seafloor massive sulfide deposits and occupy unique biogeochemical niches that remain largely unexplored. Upon the cessation of hydrothermal activity, massive sulfide deposits undergo alteration via both biotically- and abiotically-mediated geochemical reactions. These processes are collectively described as oxidative weathering. While the observed textures of these deposits suggest significant variation in weathering rates, neither the causes of this variation nor the drivers that govern biogeochemical oxidation of massive sulfides are well-characterized. To begin to describe the mechanisms that dictate these processes, massive sulfide samples were collected from deposits along the Endeavour Segment of the Juan de Fuca Ridge. Coupled synchrotron-based X-ray Absorption Near Edge Spectroscopy (XANES) and X-Ray Fluorescence (XRF) microscopy were utilized to create comprehensive redox maps that allow for characterization of the localized redox environment and identification of weathering products. These techniques are a powerful and so far underutilized tool with which to examine the geochemical landscapes of seafloor massive sulfide deposits. Mineral identifications and spatial distributions were corroborated with optical microscopy and X-Ray Diffraction (XRD). The Juan de Fuca Ridge massive sulfide samples are composed of iron-sulfide phases, primarily pyrite (FeS₂), with minor amounts of other metal-bearing sulfides, such as sphalerite ((Zn,Fe)S), wurtzite ((Zn,Fe)S), and cubanite (CuFe₂S₃).
The samples contain rinds composed of oxides and (primarily iron-bearing) clays that occur along massive sulfide exteriors and within pore channels. Greater amounts of secondary oxides and clays are observed with increased porosity and internal pore distribution and are inferred to be products of weathering. This study contributes to the current understanding of the mineralogy and composition of seafloor massive sulfide deposits and provides new insight into the relationships between age, porosity, and oxidative weathering.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Relevance, Efficiency and Efficacy of Timed Coding Assessments in the Software Engineering Industry</title>
<link href="https://hdl.handle.net/1721.1/154013" rel="alternate"/>
<author>
<name>Leon Alarcon, Paola A.</name>
</author>
<id>https://hdl.handle.net/1721.1/154013</id>
<updated>2024-04-03T03:31:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Understanding the Relevance, Efficiency and Efficacy of Timed Coding Assessments in the Software Engineering Industry
Leon Alarcon, Paola A.
Software engineering has grown as a market in the past decades to become one of the most profitable and widespread globally. Given the demand for software products, software engineers and their skills have become highly sought after. Even though there is great demand for software engineers, companies look to hire the best possible talent, forcing them to implement assessment mechanisms to evaluate candidates and their technical proficiency. Consequently, interviewing processes for software positions have become highly competitive and rigorous for prospective candidates. The volume of applicants has also forced companies to implement mechanisms that allow candidates to be screened cost-efficiently. As a consequence, timed coding assessments and other technical interviewing methods have emerged as a means of screening candidates.&#13;
&#13;
A survey was disseminated in which participants were asked about their experience with timed coding assessments. Twelve volunteers willing to participate in a semi-structured interview were recruited from those surveyed, with the goal of understanding their experiences in more depth. It was found that timed coding assessments can be an effective filtering tool to narrow the pool of candidates but did not show consistent relevance to the job duties and responsibilities software engineers might need to carry out if offered a position. Furthermore, preparation for these types of examinations was found to be fundamental to passing them and advancing to later interviewing stages, showing that qualification came secondary to preparation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing business school engagement through a gamified experience in non-pedagogical contexts - a human-centered design approach</title>
<link href="https://hdl.handle.net/1721.1/154012" rel="alternate"/>
<author>
<name>Huang, Chen</name>
</author>
<id>https://hdl.handle.net/1721.1/154012</id>
<updated>2024-04-03T03:52:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Enhancing business school engagement through a gamified experience in non-pedagogical contexts - a human-centered design approach
Huang, Chen
Gamification has been used as an innovative approach in education and learning in recent years. While previous research has established the effectiveness of gamification for learning in management and business school settings, there is limited research on its effectiveness outside of pedagogical contexts. Furthermore, past approaches indicate a lack of focus on human-centered studies of gamification. Drawing on insights gained from a case study involving MBA students and staff at the MIT Sloan School of Management, this paper proposes that gamification, known for its efficacy in pedagogical settings, can also improve engagement and productivity in non-educational learning environments. This potential can be realized by clearly defining the scope of the gamified system and content, delivering well-tailored content to the audience, and considering the accessibility and diversity of the participants.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the use of ChatGPT and the Prompting Framework as a Self-learning Aid for Arduino Coding &amp; Circuit Building for Artists and Designers</title>
<link href="https://hdl.handle.net/1721.1/154011" rel="alternate"/>
<author>
<name>Sagar, Prem</name>
</author>
<id>https://hdl.handle.net/1721.1/154011</id>
<updated>2024-04-03T03:11:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Exploring the use of ChatGPT and the Prompting Framework as a Self-learning Aid for Arduino Coding &amp; Circuit Building for Artists and Designers
Sagar, Prem
The intersection of art, design, and technology is a thriving place for groundbreaking ideas and cross-disciplinary innovation. However, it requires a specific long-term focus on integrating STEM subjects with the formal design education system. With the advent of AI educational tools, particularly ChatGPT, it is now possible to receive personalized learning in STEM subjects. Hence, in an effort to enhance self-learning practices of STEM topics among artists and designers, this research delves into the efficacy of integrating ChatGPT and a structured prompting framework for teaching Arduino coding and circuitry. The study is driven by three pivotal questions: the appropriateness of recommending ChatGPT in academic settings given its potential inaccuracies; the effect of a systematic prompting approach on mastering Arduino skills in a self-taught environment; and the validation of this methodology through comparative baseline and endline assessments. Adopting a mixed-methods research design, the study involved conducting a Randomized Control Trial (RCT) and gathered both qualitative and quantitative data in two phases: an initial baseline to gauge pre-existing knowledge, followed by an endline measurement to evaluate progress. Results reveal that while participants showed overall improvement in technical knowledge, those without the structured prompting framework (control group) surprisingly outperformed their counterparts. This was evident in the higher median scores achieved by the control group in endline assessments. Conclusively, while ChatGPT shows potential as an educational tool for self-learning Arduino coding and circuit building using TinkerCAD, the structured prompting framework's effectiveness remains questionable.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design considerations for an AI-prompted “Future-Self” video journaling tool to enhance self-efficacy</title>
<link href="https://hdl.handle.net/1721.1/154010" rel="alternate"/>
<author>
<name>Torres, Gabriela A.</name>
</author>
<id>https://hdl.handle.net/1721.1/154010</id>
<updated>2024-04-03T03:44:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Design considerations for an AI-prompted “Future-Self” video journaling tool to enhance self-efficacy
Torres, Gabriela A.
This study explores a self-management digital solution designed to empower individuals struggling with emotional self-regulation. With a focus on increasing self-efficacy in specific areas or goals, the study proposes an 'AI-prompted future-self video journaling tool' to guide users through the process of recording video selfies with future-self narratives. The study aims to gain insights into how a Large Language Model (LLM) should be fine-tuned based on unique experiences, compare different styles of guided approaches, test metrics for self-efficacy and future self-continuity feedback, and identify pain points for an efficient design. In a 5-day experiment with participants aged 24-77 from the USA and Peru, insights were gained by the researcher playing the role of a simulated WhatsApp AI-assistant chatbot. Participants were guided to set concrete goals and empowering emotions, then recorded videos at night and replayed them upon waking the next day, utilizing the 15-minute window of theta brain waves. Those who completed the task reported gains in self-reflection on emotions, leading to more positive thoughts about daily activities. However, the study identified a key challenge: the necessity for personalized adaptation to ensure the LLM's understanding of both general patterns and the intricacies of individual mental health preferences for effective user engagement and education.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Role of Generative AI tools (GAITs) in Software Development Life Cycle (SDLC)- Waterfall Model</title>
<link href="https://hdl.handle.net/1721.1/154009" rel="alternate"/>
<author>
<name>Prakash, Mridula</name>
</author>
<id>https://hdl.handle.net/1721.1/154009</id>
<updated>2024-04-03T03:39:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Role of Generative AI tools (GAITs) in Software Development Life Cycle (SDLC)- Waterfall Model
Prakash, Mridula
The emergence of generative artificial intelligence tools (GAITs) has garnered considerable attention in recent years. These tools, powered by advanced machine learning algorithms, have the ability to generate new and innovative solutions to complex problems. As a result, organizations across various domains are increasingly seeking to reduce human involvement and rely extensively on AI tools to enhance productivity and effectiveness. The continuous advancement of AI technology has paved the way for its integration into software development, bringing forth an era of unparalleled innovation and efficiency. The amalgamation of AI and software development goes beyond mere task automation; it empowers developers and engineers to reimagine the entire process of conceptualizing, designing, and maintaining software. &#13;
&#13;
As the roles of teams evolve, the integration of AI tools into the Software Development Life Cycle (SDLC) needs to tap into the positive benefits of AI. This thesis is motivated by the widespread availability of AI tools, whose adoption and consequent benefits are still not well understood. This thesis examines the evolution of GAITs in crafting each phase of the SDLC, detailing the merits, accuracy, and utility requirements for engineers. The research questions delineated anchor the investigation into the targeted areas of the SDLC under the waterfall model only. The examination in this thesis is centered on assessing GAITs' efficiency in formulating meaningful results in each phase of the SDLC, scrutinizing those results, and probing the impact of generation on software quality and dependability. The research demonstrates the functionality of GAITs by analyzing their impact in each phase of the SDLC, iterating over systems of various complexities. Further, it illustrates how to understand a GAIT, draw insights from it, and use it beyond mere automation in the software development life cycle.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Can Impact Investors Enable Systems Change? Exploring the Theory and Practice of an Emerging Field</title>
<link href="https://hdl.handle.net/1721.1/154008" rel="alternate"/>
<author>
<name>Yau, Alban (Ray-Pern)</name>
</author>
<id>https://hdl.handle.net/1721.1/154008</id>
<updated>2024-04-03T03:33:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">How Can Impact Investors Enable Systems Change? Exploring the Theory and Practice of an Emerging Field
Yau, Alban (Ray-Pern)
Contemporary challenges, such as climate change and inequality, are complex and systemic. There has been an increasing awareness of “systems change” in the impact investing community, recognizing the limitation of the traditional approach (investing in a single company or technology) to create meaningful impact in entrenched socio-technical systems. However, a big gap between awareness and action still exists, as the concept of “systems change” or “systems thinking” remains too abstract for most impact investors to adopt in their day-to-day operations. The objective of this study is to address this gap by investigating pioneering case studies in an emerging field of investing with explicit consideration of systems change. Through comparing multiple cases, developing an in-depth empirical study, and building a simulation model, this thesis sheds light on the theory and practice of this emerging field. The results highlight how impact investors have great potential to help enable systems change by operationalizing systems theories, building collectives with stakeholders, and developing a strategic portfolio to influence the system dynamics instead of an isolated innovation or intervention.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bringing Computational Modeling into the Classroom with Custom Block-Based Programming Languages in StarLogo Nova</title>
<link href="https://hdl.handle.net/1721.1/154007" rel="alternate"/>
<author>
<name>Greybosh, Colin</name>
</author>
<id>https://hdl.handle.net/1721.1/154007</id>
<updated>2024-04-03T03:30:07Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bringing Computational Modeling into the Classroom with Custom Block-Based Programming Languages in StarLogo Nova
Greybosh, Colin
It is possible to improve equity and accessibility in computer science education by incorporating computational thinking into science classrooms through agent-based computational modeling activities that use custom, task-specific programming languages in StarLogo Nova. However, StarLogo Nova’s block-based programming environment does not support extending the language with new task-specific blocks. This thesis resolves this issue by enabling programmers to add new custom blocks to a StarLogo Nova project. The technical contributions of this work resulted in a custom block system that allows StarLogo Nova programmers to create, edit, and view custom blocks, organize them into customizable drawers, and use them to build their models. It is now possible to create task-specific programming languages within StarLogo Nova for the purpose of making computational thinking concepts, such as abstraction, more approachable to learners with minimal programming experience. The conceptual contributions of this work resulted in a new design for a custom drawer interface and a system for sharing task-specific languages across StarLogo Nova projects. The original goal of the thesis was achieved, as custom blocks can now be used within StarLogo Nova as a new mode of abstraction within the language. Furthermore, directions for future work to increase the utility of custom blocks as a learning tool were identified and considered. As a result, custom blocks will enable curriculum designers working in the DC-Models project to create customized modeling projects for high school science learners.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies of electron heat conduction in magnetized, shock-driven implosions at OMEGA</title>
<link href="https://hdl.handle.net/1721.1/154001" rel="alternate"/>
<author>
<name>Chang, Cody</name>
</author>
<id>https://hdl.handle.net/1721.1/154001</id>
<updated>2024-04-03T03:47:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Studies of electron heat conduction in magnetized, shock-driven implosions at OMEGA
Chang, Cody
Imposing an external magnetic field strong enough to magnetize the plasma is currently being researched as an advanced approach to inertial confinement fusion (ICF). The electron heat conduction, one of the primary hot spot energy losses, is suppressed perpendicular to the external magnetic field, which is expected to increase yields and temperatures in collisional plasmas. However, since the particle and energy transport are restricted in one out of three dimensions in the plasma, it will naturally tend to develop a mode-2 asymmetry. This has been observed in experiments and in simulations, which display an increase in hot-spot volume that decreases its pressure and, thus, the yield, one of the most important parameters in an ICF experiment.&#13;
&#13;
This thesis discusses an experiment performed at the OMEGA laser facility to further explore the impact of a 25 and 50 T seed magnetic field on the performance of a shock-driven, direct-drive ICF implosion with an asymmetric drive. In these experiments, measurements of the hot spot asymmetry at bang time indicate a 4.75x P2/P0 enhancement when magnetized with a 50 T initial field. Time-resolved measurements of the shell trajectory, however, indicated no asymmetry enhancement, indicating that the shell plasma is not sufficiently magnetized for the magnetic field to have an effect. Gorgon simulations, however, predicted an enhanced asymmetry due to magnetization of the shell and, thus, suppressed electron thermal conductivity. Additionally, the observed hot spot electron temperature was enhanced by 1.6x for both 25 and 50 T magnetic field strengths relative to the unmagnetized temperature. Simulations also predicted an enhanced electron temperature, but expected the 50 T case to be higher than the 25 T case, with a 57% electron temperature enhancement at 50 T and 38% at 25 T. The factor by which the experimental magnetized electron temperatures increased at 25 T and 50 T agreed more closely with the simulated 50 T case. This could indicate that the simulations underpredict how magnetized the 25 T hot spot becomes, which is difficult to assess since magnetization cannot be directly measured experimentally. Additionally, a lower DD ion temperature was observed with a magnetic field, with a decrease of 0.8 keV on average at 50 T and 0.9 keV at 25 T compared to the 0 T shots.&#13;
&#13;
Finally, the loss of nuclear yield is discussed. A 28% increase in volume of the magnetized hot spot was observed. This implies a loss of density, an important quantity in determining a plasma’s yield. No change in the inferred pressure was measured, however.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two Studies of Constraints in High Dimensions: Entropy Inequalities and the Randomized Symmetric Binary Perceptron</title>
<link href="https://hdl.handle.net/1721.1/153999" rel="alternate"/>
<author>
<name>Wakhare, Tanay</name>
</author>
<id>https://hdl.handle.net/1721.1/153999</id>
<updated>2024-04-03T03:35:20Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Two Studies of Constraints in High Dimensions: Entropy Inequalities and the Randomized Symmetric Binary Perceptron
Wakhare, Tanay
We study two constrained problems in high dimensions. The first is a high-dimensional inequality for the binary entropy. The second is a randomized variant of the symmetric binary perceptron; the perceptron is a natural model in high-dimensional probability and a toy shallow neural network which stores random patterns. &#13;
&#13;
We first consider the (k + 1)-th derivative of xᵏ⁻ʳH(xʳ), where H(x) := −x log x − (1 − x) log (1 − x), 0 ≤ x ≤ 1 is the binary entropy and k ≥ r ≥ 1 are integers. Our motivation is the conjectural entropy inequality αₖH(xᵏ) ≥ xᵏ⁻¹H(x), where 0 &lt; αₖ &lt; 1 is given by a functional equation. The k = 2 case was the key technical tool driving recent breakthroughs on the union-closed sets conjecture, and the k → ∞ case can be considered the "high dimensional limit". We express (dᵏ⁺¹/dxᵏ⁺¹) xᵏ⁻ʳH(xʳ) as a rational function, an infinite series, and a sum over generalized Stirling numbers. This allows us to reduce the proof of the entropy inequality for real k to showing that an associated polynomial has only two real roots in the interval (0, 1). This reduction allows us to easily verify the inequality for fixed k such as k = 2, 3, 4 with a finite calculation, and also allows us to prove the inequality for any fixed fractional exponent such as k = 3/2 via a finite calculation. The proof suggests a new framework for proving tight inequalities for the sum of polynomials times the logarithms of polynomials, which converts the inequality into a statement about the real roots of a simpler associated polynomial.&#13;
&#13;
The symmetric binary perceptron (SBP) is a random constraint satisfaction problem (CSP) and a single-layer neural network; it exhibits intriguing features, most notably a sharp phase transition regarding the existence of its satisfying solutions. Secondly, we propose two novel generalizations of the SBP by incorporating random labels. Our proposals admit a natural machine learning interpretation: any satisfying solution to the random CSP is a minimizer of a certain empirical risk. We establish that the expected number of solutions for both models undergoes a sharp phase transition and calculate the location of this transition, which corresponds to the annealed capacity in statistical physics. We then establish, through the Berry-Esseen theorem, a universality result: the location of this transition does not depend on the underlying distribution. We conjecture that both models in fact exhibit an even stronger phase transition akin to the SBP and give rigorous evidence towards this conjecture through the second moment method. Our final focus is on the algorithmic problem of efficiently finding a satisfying solution to our models. We show that both models exhibit the multi Overlap Gap Property (m-OGP), an intricate geometrical property of the solution space which is known to be a rigorous barrier against large classes of algorithms. This gives rigorous evidence of a statistical-to-computational gap for both models. We also show that the m-OGP satisfies a similar universality property.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine-Learning based Ship Traffic Prediction in the Suez Canal</title>
<link href="https://hdl.handle.net/1721.1/153995" rel="alternate"/>
<author>
<name>Budiman, Jeremiah</name>
</author>
<id>https://hdl.handle.net/1721.1/153995</id>
<updated>2024-04-03T04:06:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Machine-Learning based Ship Traffic Prediction in the Suez Canal
Budiman, Jeremiah
This study implements and evaluates two approaches for predicting the average annual daily ship traffic (AADT) within the Suez Canal, with a focus on evaluating how deep-learning techniques can be leveraged for both approaches. The first approach is a novel method that utilizes both satellite imagery and AIS technology to predict the AADT. In order to do so, a 2-stage model is implemented that combines an image detection model followed by a correction factor model. The image detection model employs Mask R-CNN, a deep-learning neural network, and the correction factor model utilizes Long Short-Term Memory (LSTM), a recurrent neural network, to train on historical AIS data. Results of the 2-stage model using LSTM indicate that the approach is technically feasible, as ground-truth AADT values fall within the interquartile range of predictions for all validation sets. Furthermore, although the interquartile ranges have considerable variation, the 2-stage model with LSTM had a mean absolute percentage error (MAPE) of 13.2% based on its median AADT predictions; this is a successful outcome, especially considering the high variance of vessel traffic and the noisiness introduced by satellite imagery's low sampling rate, which captures only snapshot moments in time. In addition to the 2-stage model, this study also implements a second approach involving a discrete-event simulation (DES) to estimate AADT, and we evaluate how the DES can benefit from using deep-learning techniques like LSTM. Results from the DES model with LSTM indicate a 90.8% reduction in the interquartile range of AADT predictions in comparison to that of the 2-stage model. Additionally, the DES model with LSTM had an MAPE of 3.8% for its median AADT predictions, demonstrating strong predictive accuracy.
Overall, patterns within the AIS data indicate that despite the effects of Covid-19 in 2020, there is an increase in traffic in subsequent years, especially in 2022, due to a rebound effect.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven Space Economy Investment Strategy Through an Updated Commercial Space Technology Roadmap (CSTR)</title>
<link href="https://hdl.handle.net/1721.1/153992" rel="alternate"/>
<author>
<name>Miller, Duncan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153992</id>
<updated>2024-04-03T03:19:15Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Data-driven Space Economy Investment Strategy Through an Updated Commercial Space Technology Roadmap (CSTR)
Miller, Duncan M.
The innovation and growth of the space industry over the past two decades has led experts to refer to this period as the "Second Space Race"[1]. The space market is projected to reach $1 trillion in 2030, up from just $280 billion in 2010[2]. During this growth, the landscape of the space sector has shifted from a market dominated by state-run agencies to a booming commercial enterprise that offers seemingly endless possibilities and applications. In response, private investors have flooded the industry with capital. Private funding increased from $1 billion in 2010 to over $12 billion in 2022[2]. The development of new forms of contracting and public-private partnerships spurred commercial investments and opened the door to companies other than the "traditional primes." This foundational change in the space business has forced the US government and commercial enterprises to reevaluate strategies for profitability and continued economic growth in the space domain. &#13;
&#13;
This paper holistically characterizes and evaluates the space industry through a two-pronged approach. First, the Commercial Space Technology Roadmap [3], developed by Prof. de Weck in 2018, is updated to reflect the technological advancements in the increasingly fast-paced industry. Second, additional research was conducted to identify and evaluate financial investment from the government and commercial players. This work hopes to inform strategies and prioritization methods that will maximize not only the success of technological investments, but also the return on financial investment throughout the space industry.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Production of Affordable Desktop Fiber Extrusion Devices (FrED) for Educational Purposes</title>
<link href="https://hdl.handle.net/1721.1/153991" rel="alternate"/>
<author>
<name>Xu, Wenhao</name>
</author>
<id>https://hdl.handle.net/1721.1/153991</id>
<updated>2024-04-03T03:30:54Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Development and Production of Affordable Desktop Fiber Extrusion Devices (FrED) for Educational Purposes
Xu, Wenhao
The Fiber Extrusion Device (FrED) is an affordable desktop engineering education tool to aid teaching by creating a laboratory experience. It simulates the continuous fiber draw process, which can provide insights into data acquisition, control systems, smart manufacturing, computer vision, data processing, product design, etc. Designed to be highly modular, the FrED device provides users with a wide range of tunable/adjustable parameters and expansion capacity to enable users to explore beyond the scope of the guided experiment. While successful classroom activities have been conducted using FrED, the 2022 FrED device is still too expensive and heavy to ship to learners worldwide.&#13;
&#13;
This thesis focuses on the product design and development of FrED, making enhancements that reduce cost and mass while increasing capability. Specifically, the diameter measurement system’s performance was drastically improved by increasing cooling, enhancing fiber stability, introducing a modular pulley system and adjustable tension system, etc. The processor has also been changed from a Teensy 4.1 to a Raspberry Pi Model 4B to increase the capability to process images during fiber production and to allow users to code in Python rather than C++. To accommodate these changes, the other subsystems are also adjusted for better integration, such as a redesigned PCB. &#13;
&#13;
There are also efforts to improve user safety by following the hierarchy of control to implement visual warnings, physical barriers from hot surfaces, and a thermal switch as an engineering control to prevent thermal runaway of the heater. User experience is enhanced with the virtual monitor connectivity and reduced noise.&#13;
&#13;
Parts are also redesigned to be more compact and use less material, reducing cost and mass. The final FrED design achieved a 44.6% reduction in cost over the previous generation of FrED, at $149.68, and a 19.7% weight reduction (1.81 kg). An initial prototype of the packaging weighs 2.85 kg, including the FrED device. A draft assembly manual has been completed to pave the way for production ramp-up.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for Semantic Textual Similarity Integration with Requirements and System Models</title>
<link href="https://hdl.handle.net/1721.1/153990" rel="alternate"/>
<author>
<name>Beilstein, John R.</name>
</author>
<id>https://hdl.handle.net/1721.1/153990</id>
<updated>2024-04-03T04:02:51Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Framework for Semantic Textual Similarity Integration with Requirements and System Models
Beilstein, John R.
Modern engineering projects can involve highly complicated systems with hundreds or even thousands of requirements. Organizing and managing these requirements is a task that falls on Systems Engineers (SEs) and Requirements Engineers (REs). This thesis seeks to better understand how Natural Language Processing can assist SEs and REs by identifying relationships and interactions between requirements. This thesis presents an algorithm that analyzes a requirements dataset and assigns requirements to various components defined in a system model. This system model represents an early concept design and consists of high-level components and the connections or relationships between these components. Components are defined with attributes such as names, descriptions, and synonyms. The algorithm uses semantic textual similarity (STS) to identify similarities between requirements and these component attributes to estimate which components of a system are affected by which requirement(s). The algorithm attempts to identify direct relationships between individual requirement statements using STS. Additionally, the algorithm attempts to identify indirect relationships between requirements by identifying requirements with overlapping influences on system model components. &#13;
&#13;
The initial results are promising, with the algorithm able to identify requirement-to-requirement pairings with high semantic textual similarity scores and to identify multiple requirement statements that have high semantic textual similarity scores with overlapping parts of the system model. This information could be used to allow REs and SEs to better understand how different requirement statements directly or indirectly relate to and influence one another. This framework acts as an early proof of concept, and more research is needed to understand its scalability. While not optimized, the proposed algorithm is able to reach F1 scores of 0.59 for matching requirements to individual components of the system model. While these F1 scores are not ideal, they imply this technique could be further refined to yield better results. It is also worth noting that some of the matches between requirements and the system model would likely not be possible to categorize without a human’s intuition and engineering judgment, thus providing very challenging classifications for the algorithm. The algorithm achieves an overall precision between 0.94 and 1.00 for matching requirements to individual components of the system model at semantic textual similarity thresholds at or above 0.40.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Organizational Network Analysis of the Sprawling U.S. Department of Defense Innovation Ecosystem</title>
<link href="https://hdl.handle.net/1721.1/153989" rel="alternate"/>
<author>
<name>Case, Michael C.</name>
</author>
<id>https://hdl.handle.net/1721.1/153989</id>
<updated>2024-04-03T03:23:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">An Organizational Network Analysis of the Sprawling U.S. Department of Defense Innovation Ecosystem
Case, Michael C.
The 2022 United States National Defense Strategy (NDS) highlights that the greatest strategic challenges for today’s security environment are linked to rapidly changing military capabilities and emerging technologies. It is through innovation that the military’s technological edge is maintained. Defense innovation refers to the broad set of experimental activities aimed at developing and implementing transformational technologies, strategies, and organizational practices to provide enhanced capabilities for the military or to reduce the cost of military operations.&#13;
&#13;
The Department of Defense (DoD) relies on a massive connected network of government agencies, private industry, academia, and research institutions to accomplish these activities. This Defense Innovation Ecosystem grew rapidly over the last decade, but many organizations that comprise the ecosystem today were established independently of one another to address specific needs. This growth led to a massive ecosystem that is not optimally organized to support innovation at the speed required to maintain the military’s technological advantage, especially in light of the rapid commercialization of new technology.&#13;
&#13;
This research develops an organizational network model of the Defense Innovation Ecosystem through a comprehensive review of publicly available data sources. Then, using this model, it conducts an organizational network analysis based on five centrality measures, including degree, weighted degree, eigenvector, betweenness, and closeness. This analysis is then used to update the model visualization. Lastly, a modularity assessment of the network model examines a potential hierarchical realignment that cuts across existing organizational boundaries.&#13;
&#13;
This research aims to better understand the Defense Innovation Ecosystem as it currently exists and then provide one viewpoint on how the DoD might evolve the ecosystem to meet future demands.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The University of the Philippines at Manila</title>
<link href="https://hdl.handle.net/1721.1/153972" rel="alternate"/>
<author>
<name>Concio, César Homero.</name>
</author>
<id>https://hdl.handle.net/1721.1/153972</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1940-01-01T00:00:00Z</published>
<summary type="text">The University of the Philippines at Manila
Concio, César Homero.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1940; Includes bibliographical references.
</summary>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PCM television bandwidth reduction using pseudo-random noise</title>
<link href="https://hdl.handle.net/1721.1/153970" rel="alternate"/>
<author>
<name>Roberts, L. G.
            (Lawrence G.)</name>
</author>
<id>https://hdl.handle.net/1721.1/153970</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">PCM television bandwidth reduction using pseudo-random noise
Roberts, L. G.
            (Lawrence G.)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1961; Includes bibliographical references (leaf [40]).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sugar cane juice deionization</title>
<link href="https://hdl.handle.net/1721.1/153968" rel="alternate"/>
<author>
<name>Javellana, Angel L.
            (Angel Lacson)</name>
</author>
<id>https://hdl.handle.net/1721.1/153968</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1954-01-01T00:00:00Z</published>
<summary type="text">Sugar cane juice deionization
Javellana, Angel L.
            (Angel Lacson)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1954; Includes bibliographical references (leaf A10).
</summary>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A matrix-free linear programming duality theory</title>
<link href="https://hdl.handle.net/1721.1/153964" rel="alternate"/>
<author>
<name>Villela, Paulo Arruda.</name>
</author>
<id>https://hdl.handle.net/1721.1/153964</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">A matrix-free linear programming duality theory
Villela, Paulo Arruda.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1979; Bibliography: leaf 61.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Utilization of L( - )-glucose by naturally occurring microorganisms</title>
<link href="https://hdl.handle.net/1721.1/153963" rel="alternate"/>
<author>
<name>Fewkes, Robert Charles Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/153963</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Utilization of L( - )-glucose by naturally occurring microorganisms
Fewkes, Robert Charles Joseph.
Carbon recycle by means of physicochemically synthesized carbohydrates has been proposed. These artificial sugars can be used to generate single cell protein. However, it is not known what effects the unnatural components will have on the yield, productivity, and metabolic regulation of the organisms used. We have obtained from natural populations a number of organisms which utilize L-glucose as sole carbon source. Of the twelve organisms isolated, five are gram-negative aerobic rods, one is a gram-positive coccus, two are thermophilic bacilli, three are yeasts, and one is a mycelial form. Preliminary taxonomy was done on these organisms. When fully adapted to growth on L-glucose, one pseudomonad grows exponentially with a doubling time of 14 to 16 hours with 5 g/L L-glucose in the medium. Cell yields are about 0.46 g dry cells/g L-glucose, and cell densities as high as 2.8 g/L have been achieved in shake flasks. The apparent maximum growth rate is 0.0506 hr⁻¹ and the apparent overall Kₘ for growth is 0.14 g/L L-glucose. However, substrate inhibition sets in at about 4.5 g/L L-glucose. L-glucose transport takes place by facilitated diffusion at Vₘₐₓ = 2.63 × 10⁻³ mg L-glucose/(mg cells·min) and Kₘ = 0.65 g/L L-glucose. The organism probably utilizes the entire L-glucose molecule. There is evidence that carbon 1 is eliminated as CO₂ and subsequently reassimilated from the medium. One or more growth factors appear to be necessary for L-glucose utilization. They are made by the organism under good growth conditions, and one appears to be excreted into the medium. A hypothetical mechanism of L-glucose utilization consistent with the growth kinetics is proposed. This mechanism involves a catabolic sequence with at least two limiting reactions. The first is incipient transport limitation and the second is inhibition by an intracellular metabolite derived from L-glucose.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 143-155).
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Flow-Based Sampling for Large-&#119873; Gauge Theories</title>
<link href="https://hdl.handle.net/1721.1/153909" rel="alternate"/>
<author>
<name>Zhang, Michael S.</name>
</author>
<id>https://hdl.handle.net/1721.1/153909</id>
<updated>2024-03-22T04:14:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Accelerating Flow-Based Sampling for Large-&#119873; Gauge Theories
Zhang, Michael S.
Due to its consistency with numerous experimental observations, the Standard Model of particle physics is widely accepted as the best known formulation of elementary particles and their interactions. However, making experimental predictions using the Standard Model involves mathematical and computational challenges due to its complexity. Quantum chromodynamics (QCD), which can be described as an SU(3) gauge theory due to the 3 quark colors and 8 gluon types, is one sector of the Standard Model for which computing solutions is especially challenging. A natural theoretical generalization of QCD is the class of all SU(&#119873;) gauge theories; these theories also provide a method for some QCD computations in the &#119873; → ∞ limit. To study these theories numerically, approximations are calculated from configuration samples due to the mathematical complexity and lack of analytical solutions.&#13;
&#13;
In this thesis, we explore asymptotically efficient flow-based sampling algorithms for the twisted Eguchi-Kawai (TEK) model, a method for analyzing large-&#119873; QCD numerically. We introduce an original architecture based on SU(2) matrix multiplication that allows for efficient Jacobian computation. In addition, we explore the possibility of transfer learning with respect to the number of colors &#119873; and demonstrate that a model trained quickly on the SU(&#119873;) setting also provides useful information in SU(&#119873;′), &#119873;′ &gt; &#119873; cases.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Hyperbolic Graph Convolutional Neural Networks for Age Prediction with Multi-Modal Brain Data</title>
<link href="https://hdl.handle.net/1721.1/153908" rel="alternate"/>
<author>
<name>Ramirez, Hugo</name>
</author>
<id>https://hdl.handle.net/1721.1/153908</id>
<updated>2024-03-22T03:15:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fully Hyperbolic Graph Convolutional Neural Networks for Age Prediction with Multi-Modal Brain Data
Ramirez, Hugo
Characterizing age-related alterations in MEG brain networks holds great promise in understanding aging trajectories and revealing aberrant patterns of neurodegenerative disorders, such as Alzheimer’s disease. In this study, we utilize a Fully Hyperbolic Neural Network (FHNN) to embed functional brain connectivity graphs, derived from magnetoencephalography (MEG) data, into low-dimensional representations on the Lorentz hyperboloid model of hyperbolic space. Using these embeddings, we aim to detect changes in the intrinsic hierarchy of functional subnetworks across time as well as predict age for patients across multiple decades. We use the hyperbolic embedding pipeline in tandem with multimodal MEG and MRI data to create embeddings from the Cam-CAN (Cambridge Centre for Ageing and Neuroscience) dataset for the downstream task of brain age prediction in healthy patients, in order to better understand how brain connectivity structure impacts brain aging trends. Our hyperbolic MEG brain network embedding framework effectively transforms high-dimensional, complex MEG brain networks into lower-dimensional hyperbolic representations, facilitating analysis of structural brain hierarchy across age as well as age prediction. Our versatile embedding pipeline allows for the ready implementation of other downstream tasks like clustering and classification. This constitutes a novel way of studying connectivity alterations in brain networks.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DalSegno: User-centric preference elicitation strategies for mitigating cold start in music recommender systems</title>
<link href="https://hdl.handle.net/1721.1/153907" rel="alternate"/>
<author>
<name>Lin, Cynthia</name>
</author>
<id>https://hdl.handle.net/1721.1/153907</id>
<updated>2024-03-22T03:56:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">DalSegno: User-centric preference elicitation strategies for mitigating cold start in music recommender systems
Lin, Cynthia
Avid music enthusiasts often rely on music recommender systems to sift through expansive music catalogs and find new songs fitting their interests. However, such systems struggle to personalize suggestions for new users as they heavily rely on extensive listening histories to make accurate suggestions — an issue known as the new user cold start problem. This problem is exacerbated by the fact that most commercial recommender systems lack transparency and avenues for users to influence their recommendations.&#13;
&#13;
We thus propose DalSegno, a music recommender system with an interactive web-based user interface. The platform is designed to overcome the new user cold start problem by iteratively presenting users with recommendations and incorporating elicited feedback. Additionally, DalSegno enables users to learn about and fine-tune their inferred music preferences through interactive visualizations of song characteristics.&#13;
&#13;
Throughout three rounds of user testing, DalSegno demonstrated promising results. Participants appreciated the system's ability to incorporate user feedback to provide more relevant recommendations and considered it more intuitive to use than commercial recommendation systems. Additionally, users felt that the interactive visualizations of musical qualities helped them learn more about their personal music tastes, which encouraged them to further utilize the interface. Overall, positive evaluations of DalSegno demonstrate that incorporating user input and fostering explainability is vital to creating a more user-focused and effective music discovery experience.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private Random Variate Sampling for Secure and Federated Polygenic Risk Scores</title>
<link href="https://hdl.handle.net/1721.1/153906" rel="alternate"/>
<author>
<name>Yen, Derek Jia-Wen</name>
</author>
<id>https://hdl.handle.net/1721.1/153906</id>
<updated>2024-03-22T03:33:31Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Private Random Variate Sampling for Secure and Federated Polygenic Risk Scores
Yen, Derek Jia-Wen
Polygenic risk scores (PRS) are used to quantify the additive effect of single nucleotide polymorphisms (SNPs) on an individual’s genetic risk for developing a particular trait or condition. Collaborations between data centers are important for improving the statistical power and validity of PRS through larger, more genetically diverse datasets. However, owing to the privacy concerns inherent in genomic data, regulations restrict institutions’ capacity to share data. Using cryptography, we present a secure and federated implementation of a Monte Carlo algorithm for PRS, enabling collaborations that respect data regulations. To implement a Monte Carlo algorithm in a privacy-preserving context, our work exhibits techniques for sampling random variates with cryptographically private parameters, which may be of independent interest.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-Based Compact Model Development of Field Emitter Arrays</title>
<link href="https://hdl.handle.net/1721.1/153903" rel="alternate"/>
<author>
<name>Shin, Youngjin</name>
</author>
<id>https://hdl.handle.net/1721.1/153903</id>
<updated>2024-03-22T03:04:19Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Physics-Based Compact Model Development of Field Emitter Arrays
Shin, Youngjin
Field Emitter Array (FEA)-based cold cathodes have shown promise as electron sources in devices capable of high-power and high-frequency operation for a variety of applications such as microwave power amplifiers, pressure sensors, X-ray sources, and high-power excimer lasers. Limited work has been done exploring the device characteristics using a well-defined cathode-to-anode separation. Consequently, FEAs lack a physics-based compact model. In this work, a flat stand-off anode was placed on the FEAs, which guarantees the anode-to-emitter distance and the parallel condition. The I-V characteristics in the space charge limit show an unexplained, yet reproducible, Negative Differential Resistance (NDR) region resulting in a double saturation behavior. Upon further analysis of the electrostatics, the parallel-plate configuration was found to introduce a 2-dimensional acceleration channel affecting electron transport in the space between the anode and the gate electrode. Using simulation results of the electric field and potential distributions, the collection velocities of the emitted electrons were calculated, revealing that the current collection at the anode is electrostatically limited due to the deceleration of the electrons in the vacuum channel when the anode bias is below the gate bias. Although the physical mechanisms behind the NDR region are not fully understood, a qualitative conjecture in relation to the electrostatics is provided. The modeling approach approximates the current density distribution as a Gaussian distribution. The error function is used to calculate the integral of the Gaussian distribution, representing the normalized current density of the output characteristics. The error function is parametrized to predict the experimental I-V behavior in the separate operating regimes of the device.
The resulting FEA model contains 22 fitting parameters, and the model function is only dependent on 4 physical parameters consisting of the anode and gate biases, anode separation distance, and the anode material work function. The resulting model shows more than reasonable accuracy within typical operation ranges of FEAs and captures the trends as observed in the experimental data. The compact model also includes the behavior of the NDR regime, opening new avenues of applications for FEAs including oscillators and frequency dividers.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managing Financial Risks for Wind Power Producers in Wholesale Electricity Markets</title>
<link href="https://hdl.handle.net/1721.1/153902" rel="alternate"/>
<author>
<name>Shen, Daniel Weihang</name>
</author>
<id>https://hdl.handle.net/1721.1/153902</id>
<updated>2024-03-22T03:03:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Managing Financial Risks for Wind Power Producers in Wholesale Electricity Markets
Shen, Daniel Weihang
Wind power plant operators are exposed to financial risk in wholesale electricity markets due to the uncertain nature of wind forecasts, day-ahead electricity prices, and real-time electricity prices. In the event of a shortfall compared to the production forecast, the wind generator may have to repurchase power at a higher price in the real-time market. Based on this consideration, this thesis formulates a mixed-integer quadratic program which uses conditional value at risk to create a hedged “risk-aware” offer curve for the wind generator to submit into the day-ahead electricity market. The formulated program additionally considers specific concerns around the offer optimization process being negatively interpreted as using physical withholding to increase profits. We also exploit the structure of the problem to introduce additional constraints to improve computation time and demonstrate that despite the complexity of mixed-integer variables it can be solved within an acceptable operating timeframe under realistic conditions. We simulate the impacts on financial returns for the generator of applying such an approach for a wind farm in the New York City region; the program can be successfully tuned to adjust the variability in returns based on the agent’s preferences, but does not outperform a more naive strategy of simply cutting off the quantity based on a percentile of the forecast distribution. Finally, we provide some discussion on how the act of “active” price creation through these risk-aware offer curves could come into conflict with the current regulatory environment, especially around the concept of exercise of market power, which has long relied on tying fair prices to ones that represent marginal fuel costs for generators.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equivariant symmetry breaking sets</title>
<link href="https://hdl.handle.net/1721.1/153901" rel="alternate"/>
<author>
<name>Xie, YuQing</name>
</author>
<id>https://hdl.handle.net/1721.1/153901</id>
<updated>2024-03-22T03:13:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Equivariant symmetry breaking sets
Xie, YuQing
Equivariant neural networks (ENNs) have been shown to be extremely useful in many applications involving some underlying symmetries. However, equivariant networks are unable to produce lower symmetry outputs given a high symmetry input. Spontaneous symmetry breaking occurs in many physical systems where we have a less symmetric stable state from an initial highly symmetric one. Hence, it is imperative that we understand how to systematically break symmetry for equivariant neural networks. In this work, we propose the first symmetry breaking framework that is fully equivariant. Our approach is general and applicable to equivariance under any group. To achieve this, we introduce the idea of symmetry breaking sets (SBS). Rather than redesign existing networks to output symmetrically degenerate sets, we design sets of symmetry breaking objects which we feed into our network based on the symmetry of our input. We show there is a natural way to define equivariance on these sets which gives an additional constraint. Minimizing the size of these sets equates to data efficiency. We show that bounding the size of these sets translates to the well studied group theory problem of finding complements of normal subgroups. We tabulate solutions to this problem for the point groups. Finally, we provide some examples of symmetry breaking to demonstrate how our approach works in practice.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Sharing and Traceability: Improving User Trust in Data Management within Open Banking and Beyond</title>
<link href="https://hdl.handle.net/1721.1/153900" rel="alternate"/>
<author>
<name>Magendanz, Quinn</name>
</author>
<id>https://hdl.handle.net/1721.1/153900</id>
<updated>2024-03-22T03:13:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Data Sharing and Traceability: Improving User Trust in Data Management within Open Banking and Beyond
Magendanz, Quinn
This paper identifies the declining trust in proper data handling throughout the past decades, reviews studies into User Trust, and explores existing frameworks that have been developed to secure, streamline, and make accessible the processes of receiving authenticated User consent, sharing User data, and expressing data usage and collection preferences. Together, these realizations illustrate the customer need, market understanding, and optimum mode of integration which will demand and enable the development of the OTrace Traceability Protocol. This protocol allows a User to track the sharing and usage of their personal data after it has been provided to, or collected by, an initial Data Provider that has explicitly received User consent. For the purpose of monitoring and auditing, the Data Provider and Data Recipient submit records to a Traceability Server to record initial User consent for data sharing as well as ensuing sharing and usage of the User's data. This specification introduces new standards for recording data sharing and usage as Traceability Records into a consent framework which builds off elements of the OAuth 2.0, PAR, PKCE, JWT, JWS, and TB protocols as well as the FAPI and FDX standards for financial data sharing.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MLVR: Regular Expression-Based Specification for Verified Model Checking of Hardware</title>
<link href="https://hdl.handle.net/1721.1/153899" rel="alternate"/>
<author>
<name>Kammer, Gabriel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153899</id>
<updated>2024-03-22T04:08:06Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">MLVR: Regular Expression-Based Specification for Verified Model Checking of Hardware
Kammer, Gabriel A.
Model checking is an approach to verification of finite-state systems which relies on iterating through all possible states and checking whether some condition holds at each state. One challenge with this approach is that in the majority of real-world systems, the number of states to traverse is too large to feasibly fully explore. In this thesis, we present MLVR (Multi-Layer Variable Regexp), a specification language designed for model checking against hardware system implementations. The syntax of MLVR is based on regular expressions, where we specify what traces of inputs and outputs from the system are acceptable. We offer support for variables to be remembered and later recalled, and we allow for treating the values of variables symbolically during model checking. This allows the state space of systems primarily dealing with variable input/output (for example, hardware buses) to be reduced enough that model checking is feasible for formal verification of the system. We provide a simplified language, SLVR (Single-Layer Variable Regexp), with some of the core features of MLVR and formal proofs of correctness for model checking with SLVR, implemented in the Coq proof assistant. The style and structure of the proofs about SLVR provide insight into how proofs of correctness of MLVR might be written, and they demonstrate solutions to some of the technical challenges raised in proving correctness of MLVR.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Multimodal Behaviors for Neurodegenerative Disease</title>
<link href="https://hdl.handle.net/1721.1/153898" rel="alternate"/>
<author>
<name>Berrones, Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/153898</id>
<updated>2024-03-22T03:36:18Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Detecting Multimodal Behaviors for Neurodegenerative Disease
Berrones, Antonio
Neurodegenerative diseases such as Parkinson’s and Alzheimer’s are incurable and affect millions of people worldwide. Early diagnosis is critical for improving quality of life for patients. Current methods rely on the use of tests administered and evaluated by clinicians. The digital Symbol Digit Test (dSDT) is a novel cognitive test that aims to distinguish between individuals with normal and impaired cognitive abilities. This thesis will develop a framework for processing collected participant eye-tracking and handwriting data and show its use in detecting specific multimodal learning behaviors. Furthermore, this thesis will explore recommendations for working with eye-tracking systems and outline future steps towards developing a multimodal classification model to automate early diagnosis of neurodegenerative disease.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Load balancing and memory optimizations for expert parallel training of large language models</title>
<link href="https://hdl.handle.net/1721.1/153897" rel="alternate"/>
<author>
<name>Wisdom, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/153897</id>
<updated>2024-03-22T03:32:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Load balancing and memory optimizations for expert parallel training of large language models
Wisdom, Daniel
Large language models (LLMs) are an effective way to solve many text-based machine learning tasks, but require huge amounts of computation to train and evaluate. Mixture of experts models have emerged as a way to reduce the amount of computation required for LLMs without compromising accuracy. It is necessary to distribute these large models across several devices, but this requires substantial communication between devices throughout training. Expert parallel is a promising approach to distributing the model across devices and communicating necessary information during training, especially for small batch sizes or models with large embedding sizes. Unfortunately, expert parallel creates an imbalanced workload across devices, causes errors with existing memory conservation strategies, and has poor overlapping of communication and computation. Some existing works solve the imbalanced workload by dropping excess tokens sent to experts above a capacity, but that may reduce accuracy.&#13;
&#13;
In my thesis I introduce ModuleFormer-PRM, an expert parallel training system that addresses these issues without dropping tokens. I explain a subtle error that occurs when trying to save memory and a strategy to prevent it. I analyze the distribution of workload among experts and show two approaches to better balance the workload across devices, leading to more stable memory use and faster runtime. I evaluate ModuleFormer-PRM using pretrained MoE models and show that my optimizations improved expert parallel’s throughput by 2.1×.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ExoSpotter: Few Shot Relevance Feedback For Learning High Recall Exoplanet Search</title>
<link href="https://hdl.handle.net/1721.1/153896" rel="alternate"/>
<author>
<name>Živanović, Goran</name>
</author>
<id>https://hdl.handle.net/1721.1/153896</id>
<updated>2024-03-22T03:49:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">ExoSpotter: Few Shot Relevance Feedback For Learning High Recall Exoplanet Search
Živanović, Goran
Transit photometry is a widely used method for searching for exoplanets. For example, NASA’s Transiting Exoplanet Survey Satellite (TESS) utilizes this technique. However, identifying exoplanet candidates requires significant human effort to process light curves; a workflow with minimal human input is desirable. Unfortunately, very few labeled training data are available (i.e., light curves labeled as planet candidates), which makes automatic classification difficult. Here, we propose a new approach to identifying planet candidates using relevance-feedback-accelerated few-shot learning. We generate many labeled synthetic light curves with and without transits by combining a simple physics-based transit injection model with a statistics-based generative model seeded with abundant non-transiting (“noise”) light curve data. After comparing multiple methods, we selected and trained a generic XGBoost classifier offline on only unfolded and diffused synthetic light curves. We adapted it online by feeding back a few observed and misclassified light curves. The result is an exoplanet classifier with the currently best-known recall and precision.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for LLM-based Lifelong Learning in Robot Manipulation</title>
<link href="https://hdl.handle.net/1721.1/153895" rel="alternate"/>
<author>
<name>Mao, Jerry W.</name>
</author>
<id>https://hdl.handle.net/1721.1/153895</id>
<updated>2024-03-22T03:51:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Framework for LLM-based Lifelong Learning in Robot Manipulation
Mao, Jerry W.
While robotic agents have become increasingly adept at low-level manipulation skills, they are increasingly being guided by large language model planners that decompose complex tasks into subgoals. Recent works indicate that these language models may also be effective skill learners. We develop HaLP 2.0, a modular and extensible framework for lifelong learning in human-assisted language planning, using GPT-4 to propose a curriculum of skills that is learned, used, and intelligently reused. Our system is designed for large-scale experiments, is equipped with a user-friendly interface, and is extensible to new skill learning frameworks. We demonstrate extensibility by comparing alternative implementations of our abstractions and improving overall performance by incorporating novel frameworks. Moreover, we conduct a focused study of GPT-4, using crowd-sourced scene and task datasets, finding that language models are capable agents of skill reuse and adaptation. We observe that while performance depends on language context, supplying optimized prompts can yield exceptional skill reuse behaviors. We envision that as manipulation primitives and large language models become more powerful, our system will be ready to synthesize their capabilities into an autonomous system for lifelong learning that can one day be deployed in the real world.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a neuro-symbolic approach to moral judgment</title>
<link href="https://hdl.handle.net/1721.1/153894" rel="alternate"/>
<author>
<name>Wing, Shannon P.</name>
</author>
<id>https://hdl.handle.net/1721.1/153894</id>
<updated>2024-03-22T03:53:51Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Towards a neuro-symbolic approach to moral judgment
Wing, Shannon P.
The goal to build a safe Artificial General Intelligence requires an advancement beyond any single human being’s moral capacity. For the same reason why we desire democracy, a moral AGI will need to be able to represent a wide array of perspectives accurately.&#13;
&#13;
While there has been a lot of work to push AI towards correctly answering unanimously agreed-upon moral questions, we will take a different approach and ask: What do we do for the space where there is no single correct answer, but perhaps multiple? Where there are better and worse arguments? We will investigate one complex moral question, where the empirical human data strays from unanimous agreement, evaluate ChatGPT’s success, and build towards a neuro-symbolic framework to improve upon this baseline. By investigating one problem in depth, we hope to uncover nuances, intricacies, and details that might be overlooked in a broader exploration. Our insights intend to spark curiosity, rather than provide answers.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Partitioning for Distributed Graph Learning using Multi-level Label Propagation</title>
<link href="https://hdl.handle.net/1721.1/153893" rel="alternate"/>
<author>
<name>Alkhafaji, Yaseen</name>
</author>
<id>https://hdl.handle.net/1721.1/153893</id>
<updated>2024-03-22T03:02:44Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fast Partitioning for Distributed Graph Learning using Multi-level Label Propagation
Alkhafaji, Yaseen
Graph Neural Networks (GNNs) are a popular class of machine learning models that allow scientists to leverage machine learning techniques to perform inference on unstructured data. However, when graphs become too large, partitioning becomes necessary to allow for distributed computation. Standard graph partitioning methods for GNNs include random partitioning and the state-of-the-art METIS. Whereas METIS produces high-quality partitions, its preprocessing overheads make it impractical for extremely large graphs. Conversely, random partitioning is cheap to compute, but results in poor partition quality that causes GNN training to be bottlenecked by communication. In my thesis, I seek to show that it is possible to reduce the data preprocessing overhead on small machines for large graph datasets used in ML while maintaining partition quality. In support of this goal, I design and implement a hierarchical label-propagation-based graph partitioning system known as PLaTE (Propagating Labels to Train Efficiently), partially based on the paper “How to Partition a Billion Node Graph” [18]. PLaTE runs 5.6x faster than METIS on the Open Graph Benchmark’s papers100M dataset, while consuming 4.9x less memory. PLaTE produces partitions that are as well balanced as those of METIS, with comparable communication volumes under certain conditions. In real GNN training experiments, PLaTE has average epoch times comparable to METIS.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>URDF Studio: Tools for the visualization and verification of Universal Robot Description Format</title>
<link href="https://hdl.handle.net/1721.1/153891" rel="alternate"/>
<author>
<name>Nocito, Marco</name>
</author>
<id>https://hdl.handle.net/1721.1/153891</id>
<updated>2024-03-22T03:09:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">URDF Studio: Tools for the visualization and verification of Universal Robot Description Format
Nocito, Marco
A Unified Robot Description Format (URDF) file is an XML file specification used to model robotic systems. URDF files are difficult to modify and verify due to the complexity of the systems they model. We build a set of tools to aid in the modification and verification of these URDF files. This includes a web-based URDF visualizer as well as a Python URDF linter to check if a given URDF file follows formatting and content requirements. We also collect a dataset of representative URDFs.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controllable Transformation Matching Networks for Efficient RF Impedance Matching</title>
<link href="https://hdl.handle.net/1721.1/153890" rel="alternate"/>
<author>
<name>Rafa Islam, Khandoker N</name>
</author>
<id>https://hdl.handle.net/1721.1/153890</id>
<updated>2024-03-22T03:47:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Controllable Transformation Matching Networks for Efficient RF Impedance Matching
Rafa Islam, Khandoker N
Efficient and controlled delivery of radio-frequency (rf) power for semiconductor plasma processing typically relies upon tunable matching networks to transform the variable plasma load impedance to a fixed impedance suitable for most rf power amplifiers. Plasma applications require fast tuning speed with precise control from the matching networks while operating at a high frequency range. However, it is difficult to meet the requirements for many semiconductor plasma applications with conventional impedance matching solutions due to their limited response speeds. This slow speed comes from the presence of mechanical components in the matching network, since they can be tuned only mechanically. This work introduces a novel controllable transformation matching network (CTMN) intended to address the need for high-speed, tunable impedance matching.&#13;
&#13;
The design of the CTMN employs a two-port controllable switching network coupled with a high-Q passive network, enabling rapid voltage modulation and dynamic reactance tuning (dynamic frequency tuning) to swiftly accommodate both resistive and reactive load variations. Control strategies are introduced to maintain zero-voltage switching as needed to minimize switching losses. This approach is substantiated through simulations, which indicate the CTMN’s capability to achieve precise impedance matching with the potential for substantially faster response times (in the &#120583;s range) than traditional systems. It is anticipated that the proposed approach will enable ultra-fast, high-efficiency tunable impedance matching to address the needs of modern plasma systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparisons in End-to-End Pipeline Designs for Customized Document Information Extraction</title>
<link href="https://hdl.handle.net/1721.1/153889" rel="alternate"/>
<author>
<name>Kim, Seok Hyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/153889</id>
<updated>2024-03-22T03:01:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Comparisons in End-to-End Pipeline Designs for Customized Document Information Extraction
Kim, Seok Hyeon
As businesses continue to adapt to the shift toward the digitalization of corporate tasks, one particular remaining financial and temporal bottleneck is the need for manual labor in interpreting digital documents and recording relevant information. Much work and research has been done, utilizing both machine learning techniques and traditional algorithmic approaches, to alleviate the resources required for this task by developing automated solutions for extracting information from such documents. However, current commercially available solutions typically struggle either with generalization to unique document structures or with handling the range of potential details present within a document type. This thesis introduces and compares two distinct end-to-end pipeline architectures combining neural networks with algorithmic techniques to effectively extract custom key-value information, one focusing on commercial invoices with consistent keys and the other on technical specification sheets with variable keys. With accuracy, generalizability, and modularity as priorities, their use cases, benefits, and limitations are explored alongside comparisons with existing commercial solutions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winning at Pokémon Random Battles Using Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/153888" rel="alternate"/>
<author>
<name>Wang, Jett</name>
</author>
<id>https://hdl.handle.net/1721.1/153888</id>
<updated>2024-03-22T03:37:06Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Winning at Pokémon Random Battles Using Reinforcement Learning
Wang, Jett
Pokémon battling is a challenging domain for reinforcement learning techniques, due to the massive state space, stochasticity, and partial observability. We demonstrate an agent which employs a Monte Carlo Tree Search informed by an actor-critic network trained using Proximal Policy Optimization with experience collected through self-play. The agent peaked at rank 8 (1693 Elo) on the official Pokémon Showdown gen4randombattles ladder, which is the best known performance by any non-human agent for this format. This strong showing lays the foundation for superhuman performance in Pokémon and other complex turn-based games of imperfect information, expanding the viability of methods which have historically been used in perfect-information games.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Player Zero-Sum Markov Games with Networked Separable Interactions</title>
<link href="https://hdl.handle.net/1721.1/153885" rel="alternate"/>
<author>
<name>Park, Chanwoo</name>
</author>
<id>https://hdl.handle.net/1721.1/153885</id>
<updated>2024-03-22T03:07:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Multi-Player Zero-Sum Markov Games with Networked Separable Interactions
Park, Chanwoo
We study a new class of Markov games, (multi-player) zero-sum Markov Games with Networked separable interactions (zero-sum NMGs), to model the local interaction structure in non-cooperative multi-agent sequential decision-making. We define a zero-sum NMG as a model where the payoffs of the auxiliary games associated with each state are zero-sum and have some separable (i.e., polymatrix) structure across the neighbors over some interaction network. We first identify the necessary and sufficient conditions under which an MG can be presented as a zero-sum NMG, and show that the set of Markov coarse correlated equilibrium (CCE) collapses to the set of Markov Nash equilibrium (NE) in these games, in that the product of per-state marginalization of the former for all players yields the latter. Furthermore, we show that finding approximate Markov stationary CCE in infinite-horizon discounted zero-sum NMGs is PPAD-hard, unless the underlying network has a “star topology”. Then, we propose fictitious-play-type dynamics, the classical learning dynamics in normal-form games, for zero-sum NMGs, and establish convergence guarantees to Markov stationary NE under a star-shaped network structure. Finally, in light of the hardness result, we focus on computing a Markov non-stationary NE and provide finite-iteration guarantees for a series of value-iteration-based algorithms. We also provide numerical experiments to corroborate our theoretical results.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why did the prediction change? Explaining changes in predictions as time progresses</title>
<link href="https://hdl.handle.net/1721.1/153884" rel="alternate"/>
<author>
<name>Wang, Wei-En Warren</name>
</author>
<id>https://hdl.handle.net/1721.1/153884</id>
<updated>2024-03-22T03:03:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Why did the prediction change? Explaining changes in predictions as time progresses
Wang, Wei-En Warren
Few works on machine learning (ML) explanations design explanations from the perspective of model deployment in the real world. This work addresses the challenges of understanding ML models applied to event-based time-series data, concretizes two explanation scenarios, and proposes explanations based on changes in feature values, model predictions, and feature contributions for each deployment scenario. We study the prediction problem of turbine brake pad failures, where predictive time-series ML models were deployed in production. Our solution to help decision makers understand how the predictions are made includes the development of a usable ML interface and explanations that are aware of the scenarios and contexts where the models are being used. We discuss the usage of ML explanations and the importance of the context under which the model is deployed. We show our usable ML interface and the explanations with their corresponding scenarios built on top of the usable ML system, which consists of Pyreal, Sibyl-API, and Sibylapp.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Careful Design: Using multi-modal data and virtual reality to bridge the subjectivity gap in architectural space-making.</title>
<link href="https://hdl.handle.net/1721.1/153883" rel="alternate"/>
<author>
<name>Dojnow, Aleksy</name>
</author>
<id>https://hdl.handle.net/1721.1/153883</id>
<updated>2024-03-22T04:05:56Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Careful Design: Using multi-modal data and virtual reality to bridge the subjectivity gap in architectural space-making.
Dojnow, Aleksy
Architecture is a field that deals with the synthesis of many others. It is not just design and construction, but philosophy, art, technology, culture, user experience and all the intangible aspects of the human psyche. As such, architects, throughout their training and professional life, aim to build an intuitive sense of what makes any given space perform the way it is supposed to when experienced by the beholder. They support their decision-making with heuristics and rules of thumb that have been percolating since the beginning of human construction. This is usually a realm dictated by subjective experience and is, therefore, intrinsically imperfect in the way it reflects the architect’s desire and the user’s experience of the architecture. But does it have to be? &#13;
&#13;
Virtual Reality provides the unique affordance of rapidly testing and adapting virtual environments to the real-time biofeedback, eye-tracking and self-reports of the beholder, something that brick-and-mortar architecture is unable to achieve at sufficient pace and scale. As a result, VR has the chance of lifting, if even ever so slightly, the veil that separates objective reality from subjective experience. I want my thesis to attempt just that. I recognize that I may fail to do so entirely. Perhaps the gap between these two worlds is not meant to be bridged. But that shouldn’t be the reason why I shouldn’t try, as I believe that the path I take may yield important and unexpected discoveries that, at the very least, may show where not to look and perhaps point in the direction we should try to go next.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NLP City</title>
<link href="https://hdl.handle.net/1721.1/153882" rel="alternate"/>
<author>
<name>Nguyen, Thanh P. Q.</name>
</author>
<id>https://hdl.handle.net/1721.1/153882</id>
<updated>2024-03-22T03:10:03Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">NLP City
Nguyen, Thanh P. Q.
The rapid advancements in Artificial Intelligence (AI) have led to the development of complex and powerful models, resulting in opaque “black boxes” that hinder human understanding of their decision-making processes. This is especially true in the field of Natural Language Processing as large language models have become widely used and popularized in the form of chatbots and AI assistants. While there have been many attempts at explaining these models and concepts, most of them are directed at an audience already familiar with machine learning concepts. In this paper, I propose an approach to understanding existing concepts and models in NLP by simplifying them into intuitive narratives of towns and cities. By leveraging this more familiar context, the hope is to provide more engagement and information retention to non-technical audience members. The complete narrative can be found at nlp-city.vercel.app.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Robust and Efficient Pseudo-transient Methods for Solving Neural Complementarity Problems in Julia</title>
<link href="https://hdl.handle.net/1721.1/153881" rel="alternate"/>
<author>
<name>Delelegn, Yonatan</name>
</author>
<id>https://hdl.handle.net/1721.1/153881</id>
<updated>2024-03-22T04:00:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Implementing Robust and Efficient Pseudo-transient Methods for Solving Neural Complementarity Problems in Julia
Delelegn, Yonatan
Traditional deep learning models typically consist of explicitly defined layers, such as fully connected and self-attention layers found in Transformers, which have been pivotal in recent advancements in computer vision and large language models. Selecting an appropriate architecture is critical for these models. However, even with optimal architecture, these models may fail to capture intricate relationships and dependencies within hidden states due to the inherent limitations of the chosen layers. Furthermore, in several scientific applications, particularly those simulating physical systems, there is a pressing need to integrate domain-specific knowledge into the modeling process, a task for which explicit neural networks may not be ideally suited.&#13;
&#13;
Recent studies, such as [2] and [4], have highlighted the potential of implicit layers in capturing more complex relationships and learning more stringent constraints than traditional neural networks. Beyond capturing intricate relationships, implicit layers offer the advantage of decoupling the solution process from the layer definition, thus facilitating faster training and the seamless integration of domain-specific knowledge. To enable implicit models to rival state-of-the-art performance, robust and efficient solvers are required for the forward pass. In this project, we focus on exploring stable and efficient solvers, specifically pseudo-transient methods, for resolving neural complementarity problems. We aim to derive the sensitivity analysis of these problems, implement it in Julia, and delve into the applications of differentiable complementarity problems in fields such as economics, game theory, and optimization.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Carrier Choice Models for Load Pricing in Digital Freight Platforms</title>
<link href="https://hdl.handle.net/1721.1/153880" rel="alternate"/>
<author>
<name>Li, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/153880</id>
<updated>2024-03-22T03:01:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Carrier Choice Models for Load Pricing in Digital Freight Platforms
Li, Alexandra
With the expansion of digital commerce and growth of the economy, the freight transportation scene has adapted to reflect such changes. Digital freight platforms, acting as an intermediary between shippers and carriers, have gained traction to modernize the process and leverage technology to improve efficiency and increase the ease-of-use for all parties involved. Through their role in setting prices and presenting loads, these platforms can reduce the negative environmental impact of freight while simultaneously increasing the efficiency of carriers and satisfying the needs of shippers. The key challenge that these digital freight platforms face is understanding how carriers strategically select an action on the platform, which is difficult to capture despite having large amounts of data because naive estimation methods on historical data produce unrealistic results for different pricing methods.&#13;
&#13;
This thesis addresses this challenge by developing a simulation to evaluate the practicality of these estimates and iteratively revise the parameters based on constraints until they produce desirable results. In our research, we model the behavior through which carriers select a load to accept or reject with a 2-way latent class multinomial logit model. We tune the parameters of this model through a feedback loop where we perform a maximum likelihood estimate on the data to obtain model parameters, evaluate these parameters in the simulation, and use the results to perform a re-estimation to eventually obtain parameters that are both representative of the data and produce the expected results.&#13;
&#13;
We use this system to evaluate optimized pricing and load presentation methods. We experiment with bundling, or grouping a sequence of loads together, to reduce the overhead time carriers spend finding suitable loads and to produce routes with lower CO2 emissions. We solve a mixed-integer linear program that maximizes the total utility of bundles proposed by the platform to generate few and non-overlapping bundles. We develop a dynamic-programming-based pricing method to generate carrier- and time-specific prices for bundles. We evaluate these methods in our model and analyze the effects of such methods on carrier interactions and behavior. Although these methods do not yet show a substantial decrease in freight carbon emissions, we have laid the groundwork for modeling this complex system and hope that future work can be done to reduce the negative environmental impact that the freight transportation sector leaves on this planet.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Event-Driven Distributed Task Orchestration System with Applications to Automated PCB Design</title>
<link href="https://hdl.handle.net/1721.1/153879" rel="alternate"/>
<author>
<name>Perez, Sergio A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153879</id>
<updated>2024-03-22T03:14:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Event-Driven Distributed Task Orchestration System with Applications to Automated PCB Design
Perez, Sergio A.
Printed circuit board (PCB) design is the process of taking a board schematic and design constraints and realizing a manufacturable design. Electronic Design Automation (EDA) software allows humans to manually design PCBs by placing components and routing the electrical connections required. Allegro X AI by Cadence is a cloud-based tool that utilizes machine learning and optimization to automatically generate PCB designs.&#13;
&#13;
Microservice-based architectures have proven to be popular due to their flexibility and scalability. X AI’s current process for generating a printed circuit board design is monolithic with logically separate stages, making it difficult to support flexible configuration of the ordering of downstream stages or branching off the current design and attempting different versions of a stage by varying input parameters and constraints.&#13;
&#13;
In this thesis, we design a microservice-based architecture and orchestration system for automated PCB design. Our design structures the application as a directed acyclic graph (DAG) of microservices and achieves the following goals: decouples the stages of the design generation flow, supports flexible configuration and ordering of downstream stages, and brings the power of elastic compute from the cloud to the PCB design generation process.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ZeroWD: Supporting Zero-Waste Garment Design with Linked Edits</title>
<link href="https://hdl.handle.net/1721.1/153878" rel="alternate"/>
<author>
<name>Zhang, Ruowang</name>
</author>
<id>https://hdl.handle.net/1721.1/153878</id>
<updated>2024-03-22T04:12:57Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">ZeroWD: Supporting Zero-Waste Garment Design with Linked Edits
Zhang, Ruowang
In traditional garment manufacturing, the way fabrics are cut produces significant waste due to inefficiencies in the design and layout of the garment panels on fabric. Recently, fashion designers have begun to explore different ways to design and lay out garment panels in more efficient ways. An extreme example of this efficient fashion design process is zero-waste fashion design, which aims to use all available fabric in the resulting garment. Currently, many zero-waste fashion designers manually cut out the 2D patterns and experiment with their 3D shape. With zero-waste design being inherently strictly constrained by the dimensions of the fabric, designers need to perform meticulous calculations for tasks such as resizing and restyling. In our work, we propose ZeroWD, a novel interactive design tool that assists zero-waste fashion design by bringing pattern layout and cutting earlier in the design process. With our tool, designers can design zero-waste garment panels and simulate the garment’s 3D shape with real-time feedback. By embedding zero-waste design constraints into the system, we enable designers to focus on creative design rather than tedious constraint solving. Our user study demonstrates that ZeroWD can help fashion designers create garments with minimal waste.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing Steady-state and Post-transplant Blood System Dynamics with Computational Analysis and Lineage-tracing</title>
<link href="https://hdl.handle.net/1721.1/153874" rel="alternate"/>
<author>
<name>Kuoch, Michael K.</name>
</author>
<id>https://hdl.handle.net/1721.1/153874</id>
<updated>2024-03-22T03:32:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Probing Steady-state and Post-transplant Blood System Dynamics with Computational Analysis and Lineage-tracing
Kuoch, Michael K.
Bone marrow transplants are an important tool in modern medicine due to their ability to treat a wide range of diseases, spanning both cancerous and non-cancerous conditions. We aim to study the blood system dynamics using sequencing data from paired pre-transplant and post-transplant samples and look for potential expression profiles that may be biased toward successful bone marrow engraftment. We find that some genes have increased expression in post-transplant samples compared to pre-transplant samples. We also discuss using clonal lineage tracing to track cell clones throughout the transplant process and present some preliminary analyses.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, Manufacturing, and Experimental Validation of an Electric Machine for Aircraft Propulsion</title>
<link href="https://hdl.handle.net/1721.1/153873" rel="alternate"/>
<author>
<name>Andersen, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/153873</id>
<updated>2024-03-22T04:02:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Modeling, Manufacturing, and Experimental Validation of an Electric Machine for Aircraft Propulsion
Andersen, Henry
The work presented in this thesis is part of an effort at MIT to develop a 1-MW electric machine which achieves the specific power necessary for hybrid-electric aviation: 13 kW/kg [1]. The models for torque and core loss used in the design of the 1-MW machine are revised and expanded based on experimental results obtained from a partially-manufactured prototype to guide the design of future high specific-power electric machinery.&#13;
&#13;
To calculate the torque produced by the machine, the air-gap field created by a segmented Halbach array rotor is derived from Maxwell’s Equations. The closed-form solution for the air-gap field matches Finite Element Analysis (FEA) to within 1% and experimental data from the manufactured prototype to within the tolerance of the experiment. A method for modeling a slotted stator as a smooth cylinder with a surface current is applied to the stator of the 1-MW machine, and the average torque and torque ripple are calculated using the Lorentz-Kelvin force density. The analytical torque calculation computes 100,000 times faster than 2D FEA (0.56 ms vs. 44 s), and matches FEA to within 1.2%, making it ideal for initial machine design.&#13;
&#13;
An experimental procedure is developed to measure the core loss and B-H curve of an iron lamination stack. This procedure is applied to various toroid samples and a stack of slotted stator laminations. A conventional lamination bonding process is found to raise core loss by 20% for 0.1-mm iron-cobalt laminations. An alternative stator-core manufacturing process, which results in no impact on core loss, is identified and experimentally verified. Based on the measured core loss of a stack of stator laminations, the 1-MW prototype is expected to remain within the thermal limits imposed by the winding insulation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinetic Inductance Characterization of Thin 2H-NbSe₂ Superconductor Using Circuit Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/153871" rel="alternate"/>
<author>
<name>Zaman, Sameia</name>
</author>
<id>https://hdl.handle.net/1721.1/153871</id>
<updated>2024-03-22T03:46:30Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Kinetic Inductance Characterization of Thin 2H-NbSe₂ Superconductor Using Circuit Quantum Electrodynamics
Zaman, Sameia
We developed hybrid superconducting microwave resonators incorporating van der Waals (vdW) superconductors to explore the microwave response of superconducting 2D materials in the GHz regime. We first established a reliable technique to contact thin NbSe₂, entirely encapsulated with hexagonal Boron Nitride (hBN), with a coplanar Al resonator. Then we fabricated hybrid Al-NbSe₂ resonators and measured the kinetic inductance of thin NbSe₂ at low-temperature and low-photon-number limits. In this thesis, we discuss the observed relation between the kinetic inductance and the thickness of the thin NbSe₂. Furthermore, we characterize the DC bias current and microwave power dependence of the kinetic inductance in the hybrid Al-NbSe₂ resonators. Our approach contributes to understanding both the DC and AC properties of superconducting 2D materials, with potential implications for their utilization in emerging technologies.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Land Material Geometry: Spline Construction with Invasive Species in a time of Water Crisis in the Colorado River Basin</title>
<link href="https://hdl.handle.net/1721.1/153870" rel="alternate"/>
<author>
<name>Marshall Jr, William D.</name>
</author>
<id>https://hdl.handle.net/1721.1/153870</id>
<updated>2024-03-22T03:43:25Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Land Material Geometry: Spline Construction with Invasive Species in a time of Water Crisis in the Colorado River Basin
Marshall Jr, William D.
This thesis speculates on architectural systems that act in a reciprocal and reparative relationship with the local environment and ecology rather than through extractive means. It suggests sourcing material from tamarisk, an invasive species in southwestern desert river systems that exacerbates strains on water availability, thus removing the plant yet maintaining its sequestered carbon as construction material. Active bending of this raw natural timber allows for a low-tech means to approximate structural geometry for adobe construction.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Compositional Abstract Models Incrementally&#13;
for Efficient Bilevel Task and Motion Planning</title>
<link href="https://hdl.handle.net/1721.1/153869" rel="alternate"/>
<author>
<name>McClinton III, Willie B.</name>
</author>
<id>https://hdl.handle.net/1721.1/153869</id>
<updated>2024-03-22T03:58:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Compositional Abstract Models Incrementally&#13;
for Efficient Bilevel Task and Motion Planning
McClinton III, Willie B.
In robotic domains featuring continuous state and action spaces, planning for long-horizon tasks is fundamentally hard, even when the transition model is deterministic and known. One way to alleviate this challenge is to perform bilevel planning with abstractions, where a high-level search for abstract plans is used to guide planning in the original transition space. In this thesis, we propose an algorithm for learning predicates from demonstrations, eliminating the need for manually specified state abstractions. Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective. We use this surrogate objective in a hill-climbing search over predicate sets drawn from a grammar, which we call predicate invention. However, our research highlights another limitation in current symbolic operator learning techniques: they often fall short in robotics scenarios where the robot’s actions result in numerous inconsequential alterations to the abstract state. This limitation arises mainly because these techniques aim to precisely predict every observed change in that state, and as the execution horizon grows longer, so does the built-up complexity of the predictions. In this thesis, we study this separately and introduce an innovative method where the operators are induced to predict selectively, focusing solely on changes crucial for abstract planning to meet specific subgoals, which we call our operator learning procedure. Our contributions include: a predicate invention procedure based on a hill-climbing search over predicate sets, and a planning-driven operator learning objective based on a hill-climbing search algorithm that models only the changes necessary for abstract planning and preserves the compositionality of operators. We evaluate learning predicates and operators across a few toy environments and dozens of tasks from the demanding BEHAVIOR-100 benchmark.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanofabrication of flexible thin-film bioelectronics for long-term stable neural signal recording</title>
<link href="https://hdl.handle.net/1721.1/153868" rel="alternate"/>
<author>
<name>Lee, Ariel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/153868</id>
<updated>2024-03-22T03:09:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Nanofabrication of flexible thin-film bioelectronics for long-term stable neural signal recording
Lee, Ariel J.
Establishing a long-term stable and effective interface with the brain is a significant milestone for all neural implants. Recent studies have demonstrated that tissue-level soft and flexible materials and devices can provide such stability for neural implants. Therefore, engineering suitable materials and developing fabrication methods for soft and flexible thin-film electric probes to further exploit their potential are essential to advancing the field. This thesis demonstrates the comprehensive methods required for developing electrical recording devices and for analyzing the acquired neural data. It presents the design, fabrication, and in vivo implantation of flexible thin-film electronic devices. The materials and fabrication processes are engineered to create structures that can more closely mimic the mechanical properties of brain tissue, in contrast to traditional stiff neural probes. The device designs in this work feature serpentine-shaped ribbons for stretchability and a tetrode-like electrode configuration to enable the measurement of single-unit neural activities.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SigPro: Enabling Subject Matter Expert Guidance in Feature Engineering</title>
<link href="https://hdl.handle.net/1721.1/153867" rel="alternate"/>
<author>
<name>Xu, Guanpeng Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/153867</id>
<updated>2024-03-22T03:08:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">SigPro: Enabling Subject Matter Expert Guidance in Feature Engineering
Xu, Guanpeng Andy
In this thesis, we detail developments to SigPro, a feature engineering library in Python guided by Subject Matter Experts (SMEs). SigPro includes a suite of data processing building blocks, or primitives, as well as an algorithm to combine primitives to form feature engineering pipelines. These pipelines are in turn used to construct features for machine learning.&#13;
&#13;
SMEs, through a low-code interface, have several ways to dictate the feature engineering process. First, subject matter experts can construct a feature engineering pipeline for signal data simply by specifying a sequence of data transformations and aggregations (building blocks); SigPro then automatically composes a primitive graph and thus a feature engineering pipeline. Second, subject matter experts can also specify parameters and hyperparameters for each building block through SigPro’s user-friendly API. These methods encourage SMEs to incorporate their domain knowledge through informative feature transformations and carefully chosen parameter values.&#13;
&#13;
When existing building blocks fall short, SigPro facilitates efficient development of new primitives. To this end, we streamline the process for the contribution of new primitives while ensuring their seamless integration into existing pipelines. These improvements ensure that SigPro provides an intuitive yet effective solution where subject matter experts can leverage their domain knowledge to generate relevant, explanatory features that can greatly improve the performance of downstream predictive modeling.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bottom-Up Standardization For Data Preparation</title>
<link href="https://hdl.handle.net/1721.1/153866" rel="alternate"/>
<author>
<name>Lai, Eugenie Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/153866</id>
<updated>2024-03-22T03:59:57Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bottom-Up Standardization For Data Preparation
Lai, Eugenie Y.
Data preparation is an essential step in every data-related effort, from scientific projects in academia to data-driven decision-making in industry. Typically, data preparation is not the novel or interesting piece of a project — it transforms raw data into a format that enables further innovative work. Because data preparation scripts are never intended to be interesting, are project-specific, and are written in general-purpose languages, they can be tedious to understand and check. As a result, data preparation scripts can easily become a breeding ground for poor engineering and statistical practices.&#13;
&#13;
Ideally, data preparation scripts are “admirably boring” — they should serve the project, but otherwise be as simple and as standard as possible. We propose a bottom-up script standardization framework that takes a user’s data preparation script and transforms it into a simpler, more standardized, more boring version of itself. Our framework takes the user’s input script not as an unchangeable definition of correctness, but as a semantic sketch of the user’s overall intent. We present an algorithmic framework and implement a prototype system. We evaluate our approach against state-of-the-art methods, including GPT-4, on six real-world datasets. Our approach improves script standardization by 39.5% while not meaningfully changing the user’s intent, while GPT-4 achieves 2.9%.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A New Framework for Refraction-Based Image Verification</title>
<link href="https://hdl.handle.net/1721.1/153865" rel="alternate"/>
<author>
<name>Simhon, Sage</name>
</author>
<id>https://hdl.handle.net/1721.1/153865</id>
<updated>2024-03-22T03:07:03Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A New Framework for Refraction-Based Image Verification
Simhon, Sage
We propose a novel approach to image verification that aims to unify optics, computer vision, computer graphics, and deep learning for active image protection. Our approach builds upon previous work involving placing spherical refractive objects in the scene that collectively act as a signature for authenticity; however, we hypothesize that we can learn refraction models independent of the scene and for arbitrary refractive objects. We develop a framework for learning such refraction models, where each model can be considered a key to authenticate an image or video. In this way, complex refraction models inherent to the physics of arbitrarily shaped objects can be used to increase security without requiring a closed-form solution for their optical behavior. The approach involves scanning a laser over the scene and learning an image of its warping transformation by the refractive object. With a learned model, detecting and localizing manipulations in an image is accomplished by validating consistency between the primary, unverified image and a reconstruction based on the warped image in the object. This is demonstrated in simulation, using a photorealistic rendering engine to collect synthetic training data that captures real-world behavior. We present both qualitative and quantitative results demonstrating the capabilities of our system, including computational speedups and practical improvements compared to prior work, as well as an analysis across different resolutions, model settings, and limiting factors. We demonstrate that with a sufficient sampling resolution, we can detect and localize content additions, content removals, and texture changes. Our key contribution is a novel integration of physical laws with deep learning in the context of image forensics. Further, the generalization introduced by our deep learning approach allows us to enhance image verification security.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Love in the Fast Lane: Not-so-new Models for American Stewardship and Preservation</title>
<link href="https://hdl.handle.net/1721.1/153864" rel="alternate"/>
<author>
<name>Gideonse, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/153864</id>
<updated>2024-03-22T03:38:47Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Love in the Fast Lane: Not-so-new Models for American Stewardship and Preservation
Gideonse, Lauren
From 1914 to 1919 American Portland Cement funded the construction of eight mile-long stretches of paved road across the Midwest, known as seedling miles, as part of the campaign to garner support for the Lincoln Highway project. In internal memoranda, the company called these seedlings “object lessons.” The seedlings, by creating a physical encounter of the space between the status quo and what could be, manufactured a desire in drivers. The incredibly successful campaign shifted responsibility for the road system to the public domain and cemented the road as a site of civic investment. It also invented a mode of experience that facilitates noticing but is not in response to failure or crisis.&#13;
&#13;
This thesis begins with one hundred and fifty sites across the United States – domestic buildings that are particularly old for their context – documented through two road trips. The road-trip as collection mechanism sets the terms: the road and the house are considered together. They inform and contextualize each other. The road is both a critical contemporary network of resources and people, and the historical agent of rationalizing, mobilizing, and capitalizing on the American landscape. The historic home is not considered in a vacuum but always in time, in relationship to the landscape and through its frontage. By looking carefully at these sites through tailored analytical tools, this thesis identifies tendencies, both at the time of construction and in the behavior of the buildings since, that reflect an alternate set of values from those that shape building and preservation practices today.&#13;
&#13;
From these sites the thesis composes, and in the process re-evaluates, the history of a house and the road. The objects of this research form ulterior narratives – derivative and projective – that cast an ill-fated romance between forms of stewardship and systems of capital. The results, a collection of slow media, construct and reconstruct encounters with an altered landscape. Like the seedling miles from which the contemporary American highway system grew, this thesis utilizes the “object lesson” as a mechanism to prompt reconsideration. The thesis puts forward a new stretch of seedling road to manufacture a desire for not-so-new forms of stewardship and preservation that are both born-of and particular-to the American context.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High A-scan rate optical coherence tomography angiography for blood flow speed quantification in the retina</title>
<link href="https://hdl.handle.net/1721.1/153860" rel="alternate"/>
<author>
<name>Hwang, Yunchan</name>
</author>
<id>https://hdl.handle.net/1721.1/153860</id>
<updated>2024-03-22T03:01:02Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">High A-scan rate optical coherence tomography angiography for blood flow speed quantification in the retina
Hwang, Yunchan
Optical coherence tomography angiography (OCTA) offers non-invasive and depth-resolved imaging of the retinal vasculature. While OCTA is widely used to study retinal disease, it traditionally provides limited information about blood flow speed. This thesis introduces a second-generation variable interscan time analysis (VISTA) OCTA, designed to evaluate a quantitative surrogate marker for blood flow speed in the vasculature. At the capillary level, spatially compiled OCTA and a simple temporal autocorrelation model, ρ(τ) = exp(-ατ), are used to evaluate a temporal autocorrelation decay constant, α, as a marker for blood flow speed. A 600 kHz A-scan rate swept-source OCT prototype instrument provides short interscan time OCTA and fine A-scan spacing acquisition, while maintaining multi-mm² fields of view for human retinal imaging. The cardiac pulsatility in α is demonstrated and its synchronization across retinal capillaries is quantified. The repeatability of α measurements is evaluated at multiple spatial levels. This new approach reveals varying α values across different retinal capillary plexuses in healthy eyes, and demonstrates spatial correspondence between high blood flow speeds and the centers of choriocapillaris lobules. VISTA OCTA images of eyes with diabetic retinopathy and age-related macular degeneration are also presented. By providing blood flow speed information, the second-generation VISTA aims to enhance and complement traditional structural vasculature imaging offered by OCTA. These advancements promise to enable clinical studies of blood flow speed alterations in retinal diseases, offering earlier markers for disease detection, progression, and response to treatment.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microarchitecture Categorization and Pre-RTL Analytical Modeling for Sparse Tensor Accelerators</title>
<link href="https://hdl.handle.net/1721.1/153859" rel="alternate"/>
<author>
<name>Feldman, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/153859</id>
<updated>2024-03-22T03:17:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Microarchitecture Categorization and Pre-RTL Analytical Modeling for Sparse Tensor Accelerators
Feldman, Andrew
Specialized microarchitectures for exploiting sparsity have been critical to the design of sparse tensor accelerators. Sparseloop introduced the Sparse Acceleration Feature (SAF) abstraction, which unifies prior work on sparse tensor accelerators into a taxonomy of sparsity optimizations.&#13;
&#13;
Sparseloop succeeds at analytical pre-RTL modeling of architecture-level metrics for sparse tensor accelerators, accurately capturing the beneficial impact of SAFs on overall design cost. However, Sparseloop lacks cost models for the microarchitectural primitives and design topologies required for implementing SAFs (referred to in this work as "SAF microarchitectures").&#13;
&#13;
Analysis of prior works shows that SAF microarchitectures may or may not constitute a significant overhead, depending on the particular design; thus it is desirable to have pre-RTL models which help anticipate SAF microarchitecture overheads.&#13;
&#13;
Building on the Sparseloop SAF abstraction, this work attempts to synthesize a number of prior works into a concise, unified, and effective framework for doing research on SAF microarchitectures. This overall framework comprises (1) a conceptual framework which facilitates concise description and design-space exploration for SAF microarchitectures, (2) a software framework for compiling Sparseloop-style SAF descriptions into microarchitecture designs and analytical models, and (3) a component library including specific SAF microarchitecture subcomponent designs as well as RTL to support implementation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preventing CSV Injection Attacks With A Browser Extension</title>
<link href="https://hdl.handle.net/1721.1/153858" rel="alternate"/>
<author>
<name>Dedhia, Ray</name>
</author>
<id>https://hdl.handle.net/1721.1/153858</id>
<updated>2024-03-22T03:13:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Preventing CSV Injection Attacks With A Browser Extension
Dedhia, Ray
CSV injection occurs when an attacker injects malicious code into a CSV file, and this code is executed when the file is opened in a spreadsheet program. This type of attack is possible because most spreadsheet programs have a set of built-in functions that run automatically when a CSV file is opened with the spreadsheet program. Given the widespread usage of CSV files and programs that interpret those CSV files, the risk posed by such CSV injection attacks is great.&#13;
&#13;
In this study, I present a browser extension designed to sanitize all downloaded CSV files by eliminating any harmful code while preserving the integrity of benign code. The extension does this by first finding all formulas within a CSV file and determining whether each one has the potential to contain malicious code. If the extension determines that a formula may be malicious, it edits the cell containing that formula so that spreadsheet programs will interpret the cell as text and will not execute it.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Memory in Reinforcement-Learned Agents for Smarter Lateral Movement</title>
<link href="https://hdl.handle.net/1721.1/153857" rel="alternate"/>
<author>
<name>Johnson Schofield, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/153857</id>
<updated>2024-03-22T03:12:11Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Exploring Memory in Reinforcement-Learned Agents for Smarter Lateral Movement
Johnson Schofield, Catherine
Computer networks are the backbone of most organizations’ technology infrastructure. Yet, they remain susceptible to many hidden vulnerabilities. One proactive approach to uncovering and mitigating threats is red teaming. Red teams imitate attackers to find and exploit vulnerabilities in a network. This practice removes uncertainty about which parts of a network attackers could compromise. A central component of red teaming is lateral movement, in which a red team operator moves through a network by traversing workspaces on that network. Each step in the lateral movement process requires careful decision-making given the information gleaned so far, the consequences of past actions, and knowledge about workspaces on the network. The process is complex and typically requires years of experience for a red team operator to master.&#13;
&#13;
Automating red teaming with machine learning, and specifically reinforcement learning (RL), could help secure a domain more efficiently and allow operators to focus on higher-level decisions. However, unlike humans, traditional RL agents forget details from past experiences. This is a problem because remaining stealthy requires remembering consequences of past actions. By adding a memory architecture to the agent, the agent can remember these consequences and make better action choices in the lateral movement environment.&#13;
&#13;
I propose several variations of Long Short-Term Memory (LSTM), transformers, and Hierarchical Chunk Attention Memory (HCAM), which help the agents to better remember past events inside a memory-enhanced lateral movement simulation. I compare the performance of a control agent, an RL agent with a linear neural network, to the performance of memory agents, RL agents with architectures capable of determining dependencies on past events. I test the agents on a control environment that does not include a memory task, and a memory environment that does.&#13;
&#13;
Agents with the memory architectures perform better than the control agent on the memory environment, at varying levels. I show that agents with an LSTM outperform the control agent on the memory environment by about 25%, matching the performance of the control agent on the control environment. While the HCAM and transformer agents do not perform as well as the LSTM agents, they still show the ability to slightly outperform the control agents on the memory environment. They also show potential for performing well in more generic memory tasks.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Cloud Database Performance: General-Purpose Compression and Workload-Driven Layout</title>
<link href="https://hdl.handle.net/1721.1/153856" rel="alternate"/>
<author>
<name>Piszczek, Miloslawa</name>
</author>
<id>https://hdl.handle.net/1721.1/153856</id>
<updated>2024-03-22T03:16:46Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Enhancing Cloud Database Performance: General-Purpose Compression and Workload-Driven Layout
Piszczek, Miloslawa
Cloud-based disaggregated database systems that divide data across a data layer and a storage layer connected by network calls are popular for analytical query loads. This thesis explores two topics critical to building performant systems of this type: space optimization and latency minimization.&#13;
&#13;
First, I propose ColumnConstruct, a general-purpose machine learning compression method that uses a novel information-maximizing method for building input features. ColumnConstruct is competitive with existing ML compression methods for categorical data, but is not able to perform lossless compression on arbitrary tabular data. This limitation, as well as the additional compression and decompression latency, makes it insufficient to improve query latency within a database management system. Next, I investigate whether workload-aware data layout combined with caching can improve query times without the need for ML-based compression or storage-layer computation pushdown. I show that for small cache sizes and homogeneous query sets, a workload-aware layout combined with existing compression methods can be more effective than computation pushdown, without reliance on particular features in the data storage layer.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fragments of Home: Domestic Businesswomen and Collective Motherhood</title>
<link href="https://hdl.handle.net/1721.1/153855" rel="alternate"/>
<author>
<name>Carriker, Bella Carmelita</name>
</author>
<id>https://hdl.handle.net/1721.1/153855</id>
<updated>2024-03-22T03:37:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fragments of Home: Domestic Businesswomen and Collective Motherhood
Carriker, Bella Carmelita
One in three children in the United States lives in a single-parent household; yet, the most likely demographic to experience eviction in the U.S. is low-income single mothers. This thesis proposes a framework for thinking about communal family structures, housing security, and intimate domestic space, through the lens of designing for single-mother households in New York City. The housing crisis in cities across the country specifically affects single mothers and children, yet these identities are rarely explicitly designed for, whether economically, systemically, or architecturally.&#13;
&#13;
Collections of oral histories—from single mothers in my life who have experienced housing insecurity—illustrate the fragments which make up the feeling of home, the ways that architectural detail can reflect motherhood, and the need to examine both domesticity and labor. These spatial fragments, in conjunction with research on existing zoning, planning, development, and affordable housing pathways, inform architectural possibilities for communal housing across three neighborhoods in New York City.&#13;
&#13;
In order to advocate for these kinds of architectural opportunities to exist, and for planning initiatives to be community-specific and family-specific, we have to be able to imagine what these collective structures look like and how architecture can facilitate a stable relationship between working and living for single-mother households.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Update: Using Reinforcement Learning to Discover Policies for List Update</title>
<link href="https://hdl.handle.net/1721.1/153854" rel="alternate"/>
<author>
<name>Quaye, Isabelle A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153854</id>
<updated>2024-03-22T03:38:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning to Update: Using Reinforcement Learning to Discover Policies for List Update
Quaye, Isabelle A.
The use of machine learning models in algorithm design is a rapidly growing field, often termed learning-augmented algorithms. A notable advancement in this field is the use of reinforcement learning for algorithm discovery. Developing algorithms in this manner offers certain advantages, novelty and adaptability being chief among them. In this thesis, we put reinforcement learning to the task of discovering an algorithm for the list update problem. The list update problem is a classic problem with applications in caching and databases. In the process of uncovering a new list update algorithm, we also prove a competitive ratio for the transposition heuristic, a well-known algorithm for the list update problem. Finally, we discuss key ideas and insights from the reinforcement learning agent that hint toward optimal behavior for the list update problem.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pensieve: Microarchitectural Modeling for Security Evaluation</title>
<link href="https://hdl.handle.net/1721.1/153853" rel="alternate"/>
<author>
<name>Yang, Yuheng</name>
</author>
<id>https://hdl.handle.net/1721.1/153853</id>
<updated>2024-03-22T03:22:01Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Pensieve: Microarchitectural Modeling for Security Evaluation
Yang, Yuheng
Traditional modeling approaches in computer architecture aim to obtain an accurate estimation of performance, area, and energy of a processor design. With the advent of speculative execution attacks and their security concerns, these traditional modeling techniques fall short when used for security evaluation of defenses against these attacks.&#13;
&#13;
This thesis presents Pensieve, a security evaluation framework targeting early-stage microarchitectural defenses against speculative execution attacks. At the core, it introduces a modeling discipline for systematically studying early-stage defenses. This discipline allows us to cover a space of designs that are functionally equivalent while precisely capturing timing variations due to resource contention and microarchitectural optimizations. We implement a model checking framework to automatically find vulnerabilities in designs. We use Pensieve to evaluate a series of state-of-the-art invisible speculation defense schemes, including Delay-on-Miss, InvisiSpec, and GhostMinion, against a formally defined security property, speculative non-interference. Pensieve finds Spectre-like attacks in all those defenses, including a new speculative interference attack variant that breaks GhostMinion, one of the latest defenses.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Progress in Parallel Algorithms</title>
<link href="https://hdl.handle.net/1721.1/153852" rel="alternate"/>
<author>
<name>Tontici, Damian</name>
</author>
<id>https://hdl.handle.net/1721.1/153852</id>
<updated>2024-03-22T03:44:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Progress in Parallel Algorithms
Tontici, Damian
Parallel computing offers the promise of increased performance over sequential computing, and parallel algorithms are one of its key components. There has been no aggregated or generalized comparative analysis of parallel algorithms. In this thesis, we investigate this field as a whole. We aim to understand the trends in algorithmic progress, improvement patterns, and the importance and interactions of various commonly used metrics. We collect and analyze parallel algorithms solving the problems in our set. We look at four major themes: how parallel algorithms have progressed, including in relation to sequential algorithms and parallel hardware; how the work and span of algorithms influence performance; how problem size and available parallelism affect performance; and what researchers’ observable priorities look like. We find that more problems have had parallel improvements than sequential ones since the ’80s, that most parallel algorithms don’t improve algorithmic complexities, and much more. This research is important for understanding how the field of parallel algorithms has changed over time, and what it looks like now.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Souvenir for the Land of Pagodas</title>
<link href="https://hdl.handle.net/1721.1/153851" rel="alternate"/>
<author>
<name>Allen, Christopher H.</name>
</author>
<id>https://hdl.handle.net/1721.1/153851</id>
<updated>2024-03-22T03:43:57Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Souvenir for the Land of Pagodas
Allen, Christopher H.
In the capital cities of Myanmar (Burma)—present and previous—stand five government-sponsored pagodas, five gold-plated stars linking two architectural constellations. The first of these constellations is composed of the thousands of religious structures that punctuate the landscape of Myanmar, often called the “Land of Pagodas.” The second is composed of the monuments erected by the various regimes that have administered the country’s government since independence in 1948, each of which embodies its own formulation of national identity and history. Occupying this covalent position, these five pagodas are physical manifestations of an ongoing nationalist project of ethnic and religious homogenization that legitimizes itself through historicist narratives, militaristic violence, and the co-opting of religion in service of political power. They are artifacts of propaganda—tools for “propagating the faith”¹ of ethno-nationalism. &#13;
&#13;
As such, these buildings also embody many of the social and political forces that pressured the maternal side of my family to emigrate from the country in the early 1980’s, in order to avoid prejudice and persecution as members of marginalized ethnic and religious groups. This thesis therefore operates from a diasporic distance, and is informed by a perspective which lacks the privilege of nostalgia.&#13;
&#13;
Taking the five government-sponsored pagodas as its site of departure, this thesis approaches them as narrative media, and comprises a series of investigations into challenging monumental architecture and repurposing its narrative capacities. If these architectural forms function as narrative tools of the state, how can they be claimed in order to tell alternate stories?&#13;
&#13;
This thesis approaches memory(s) as an inheritance, augmenting personal and ancestral narratives that have been excised from a history whose authority is predicated on their exclusion. It considers historiography as a process of multiplicity—even dissensus—and proposes the diasporic souvenir as a mechanism for disrupting narrative regimes of power. &#13;
&#13;
¹ “Propaganda” (n.), from New Latin prōpāganda, short for Congregātiō dē Prōpāgandā Fidē, “Congregation for Propagating the Faith.” Oxford Advanced Learner’s Dictionary, 10th ed. (Oxford: Oxford University Press, 2020), s.v. “Propaganda.”
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative and Discriminative Models in Phase Transition Prediction</title>
<link href="https://hdl.handle.net/1721.1/153849" rel="alternate"/>
<author>
<name>Zhang, Difei</name>
</author>
<id>https://hdl.handle.net/1721.1/153849</id>
<updated>2024-03-22T03:51:20Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Generative and Discriminative Models in Phase Transition Prediction
Zhang, Difei
Accurate prediction of critical temperatures in phase transitions is crucial for understanding physical systems. Generative and discriminative models offer promising yet distinct approaches. Depending on the available knowledge of the system, the amount of accessible data, and the computational resources of the experiments, these methods exhibit different accuracy and efficiency. This study aims to comprehensively compare six methods for predicting critical temperatures in the Ising lattice. We leverage Julia’s capabilities for efficient parallel computation and its robust scientific machine learning ecosystem. The evaluation will focus on their performance concerning error rates, computation time, and required data. The goal is to guide researchers in selecting the optimal method within data and computational constraints for precise critical temperature estimation in complex physical systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frictitious Matters</title>
<link href="https://hdl.handle.net/1721.1/153848" rel="alternate"/>
<author>
<name>Amstutz, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/153848</id>
<updated>2024-03-22T04:05:26Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Frictitious Matters
Amstutz, Caroline
Wood arrives on site abstracted into rectangular studs; steel beams, once a mineral soup, are extrusions with patented silhouettes; and stone is severed from time, processed into thin shiny slabs. We’ve manipulated our terrestrial matter to conform to smooth expectations: building materials are homogeneous, standard, orthogonal, drawable, and specifiable. We live in the modern fantasy of “frictionlessness,” where material becomes product and smoothness lubricates the flow of capital. Today architects don’t craft; rather, we specify.&#13;
&#13;
Granite, unlike processed ‘plastic’ materials, resists the abstraction of typical architectural production. It is too hard, too heavy, and too heterogeneous for specification. I argue that granite’s high-friction properties – if carefully understood and deliberately worked with – pose new design potentials. Granite’s microstructure causes it to cleave, or split, almost orthogonally. Its surface of crystals self-interlocks, allowing for jamming. And its high mass and friction cause it to pile with a 45-degree angle of repose.&#13;
&#13;
Yet, we would sooner expend immense energy to downgrade granite from a 230-newton piece of stone to a 40-newton piece of concrete than embrace the design potentials of aplasticity.⁰ Abandoned for its “nuisance” properties, granite has been relegated to the realm of finish.&#13;
&#13;
Friction-intolerant and smoothness-obsessed, we are estranged from our materials. This thesis presents a methodology to reconsider architectural material culture through the embrace of aplastic material. Material properties are not incidental or inconvenient, but rather invitations for co-authorship. Working directly with Barre Gray™ granite through mock-ups, miniatures, and models, I offer a craft-optimized slowness, implanting the architect in streams of “waste,” rather than extraction, to co-design with a “difficult” material.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Operator Models as Applied to Fluid Flow Systems and Real Ocean Dynamics</title>
<link href="https://hdl.handle.net/1721.1/153847" rel="alternate"/>
<author>
<name>Rajagopal, Ellery M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153847</id>
<updated>2024-03-22T03:49:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Neural Operator Models as Applied to Fluid Flow Systems and Real Ocean Dynamics
Rajagopal, Ellery M.
Data-driven, deep-learning modeling frameworks have been recently developed for forecasting time series data. Such machine learning models may be useful in multiple domains including the atmospheric and oceanic ones, and in general, the larger fluids community. The present work investigates the possible effectiveness of such deep neural operator models for reproducing and predicting classic fluid flows and simulations of realistic ocean dynamics. We first briefly evaluate the capabilities of such deep neural operator models when trained on a simulated two-dimensional fluid flow past a cylinder. We then investigate their application to forecasting ocean surface circulation in the Middle Atlantic Bight and Massachusetts Bay, learning from high-resolution data-assimilative simulations employed for real sea experiments. We confirm that trained deep neural operator models are capable of predicting idealized periodic eddy shedding. For realistic ocean surface flows and our preliminary study, they can predict several of the features and show some skill, providing potential for future research and applications.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Model Routing with Benchmark Datasets</title>
<link href="https://hdl.handle.net/1721.1/153846" rel="alternate"/>
<author>
<name>Ou, Anthony C.</name>
</author>
<id>https://hdl.handle.net/1721.1/153846</id>
<updated>2024-03-22T04:06:30Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Large Language Model Routing with Benchmark Datasets
Ou, Anthony C.
There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets to compare them. While some models dominate these benchmarks, no single model typically achieves the best accuracy in all tasks and use cases. With a new dataset, it can be difficult to determine which LLM is best suited to the task. In this work we address the challenges associated with selecting the best LLM from a collection for a new task. To do so, benchmark datasets are repurposed to learn a “router” model for LLM selection, such that the “router” model solves a collection of binary classification tasks. This work demonstrates the utility and limitations of learning model routers from various benchmark datasets, where performance is improved over using any single model for all tasks.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalized Treatment Response Prediction under Dynamic and Time-Varying Treatment Strategies for Sepsis Patients</title>
<link href="https://hdl.handle.net/1721.1/153844" rel="alternate"/>
<author>
<name>Su, Megan</name>
</author>
<id>https://hdl.handle.net/1721.1/153844</id>
<updated>2024-03-22T03:15:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Personalized Treatment Response Prediction under Dynamic and Time-Varying Treatment Strategies for Sepsis Patients
Su, Megan
Sepsis is a life-threatening medical emergency in which the body responds improperly to an infection, and it is typically treated with intravenous fluids and vasopressors. However, administering the right balance is often difficult because adverse outcomes can be caused by both excessive and insufficient treatment. Many clinical trials have investigated the optimal regime for treating sepsis, but these studies have produced inconclusive results and often take a long time to conduct. Thus, personalized treatment response prediction under dynamic, time-varying treatment strategies can be a very useful tool for clinicians when deciding what treatment strategy to administer to a patient. &#13;
&#13;
This thesis builds on G-Net, a deep sequential modeling framework for g-computation that has been evaluated on response prediction under dynamic and time-varying strategies at the population level. Utilizing real-world data collected from the intensive care unit (ICU), we evaluate the performance of various deep learning implementations of G-Net on individual-level response prediction and compare their performance on prediction under the observational treatment regime. We then apply G-Net to counterfactual prediction under alternative regimes of interest and show that G-Net is able to predict patient covariates and outcomes that are physiologically plausible and match clinical intuition. Our work showcases the potential of G-Net as a tool for personalized treatment response prediction to aid clinicians in determining optimal therapy for sepsis patients in the ICU.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Archi-Culture or Agri-Tecture: The Garden in The Machine</title>
<link href="https://hdl.handle.net/1721.1/153843" rel="alternate"/>
<author>
<name>Brazier, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/153843</id>
<updated>2024-03-22T03:17:48Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Archi-Culture or Agri-Tecture: The Garden in The Machine
Brazier, Justin
Since the late 19th century, urban agriculture has served its respective context in more ways than just food production. The urban farm became the center of community, an essential democratic space where neighbors from all walks of life could share stories, recipes, farming practices, and resources with one another. Tackling a main aspect of a people’s basic needs, urban farms and the sharing of agriculture are an essential act of self-reliance, self-preservation, and resistance. Planting their identities into the earth, people have made this urban landscape a reflection of culture, a passing of tradition, and a connection to a homeland some immigrants could not get back to; through native culinary practices and the ability to grow foods of familiarity, people have been able to carry their history with them. In this sense, on a small scale, the community garden has become a central node of urban civic exchange. On the precipice of exponential growth, the necessity for urban grow space has never been more pressing. Moving beyond our typical urban agriculture typologies, the implementation of year-round growth can and has proven to expand the output of existing urban land already dedicated to closing the gap between where we grow our food and where we need it most. Interior urban growth spaces have recently been on the rise but have typically been implemented under the currently limited typology of the standard ready-made greenhouse or the hyper-productive food lab. Both miss the essence of what made urban agriculture different from rural agriculture: the people. This essential ethos that has been baked into the urban farm has not yet translated into the urban greenhouse, a largely generic structure that has provided farmers and communities the opportunity to increase crop yield and expand operations beyond their normal seasons.
This is a key innovation as food security within urban contexts becomes a more prevalent issue, but the expansion of production has come at the expense of the atmosphere that made the urban farm what it is: the legibility of authorship, of collaboration, of identity. The greenhouse kit is a generic solution, cheap, easy to construct, and pre-engineered, making it the obvious choice for the grassroots efforts that urban agricultural endeavors tend to be. But can we take a greenhouse kit that exists everywhere and develop a reconstruction so that it reacts to the constraints of its location? What can it hold to take on the dual identity of the garden? If we are going to move the greenhouse into the city, do we have to ask it to do more than just produce food? With access to infrastructure, can we push the community farm to its full potential, a community center?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Old Is Now?</title>
<link href="https://hdl.handle.net/1721.1/153842" rel="alternate"/>
<author>
<name>Giorgis, Adriana</name>
</author>
<id>https://hdl.handle.net/1721.1/153842</id>
<updated>2024-03-22T03:04:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">How Old Is Now?
Giorgis, Adriana
L’Aquila is a city without new buildings. Founded in the early 13th century on a fault line, the city has been destroyed by earthquakes every three hundred years. Its buildings are repaired on the same cycle. In L’Aquila, acts of construction and maintenance are one and the same. Through the centuries, buildings in L’Aquila have been reinforced with punctual, visible acts of support. Tension ties, corner stones, and thickened walls are the language of architecture, producing both aesthetic and spatial implications. In this city, to maintain is to remake, to build is to preserve, to care is to create. When the life and life-expectancy of structures are effectively infinite, there can be no differentiation between repair and construction.&#13;
&#13;
This project dwells on L’Aquila’s architectural value systems. The absence of ‘new’ buildings in the city is made possible by a culture of collective acts of repair. In the long-now, kindnesses reinforce, prop up, and adjust materials that have borne witness to historical events and familial genealogies. What might it mean for the discipline to center maintenance the way it has been centered in L’Aquila? What are the ways that the architect-maintainer conceives of originality? Of design? How, too, might they care for the ongoing present and future of L’Aquila?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D Self-Localization of Drones using a Single Millimeter-Wave Anchor</title>
<link href="https://hdl.handle.net/1721.1/153836" rel="alternate"/>
<author>
<name>Lam, Maisy Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/153836</id>
<updated>2024-03-22T03:33:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">3D Self-Localization of Drones using a Single Millimeter-Wave Anchor
Lam, Maisy Lilian
We present the design, implementation, and evaluation of MiFly, a self-localization system for autonomous drones that works across indoor and outdoor environments, including low-visibility, dark, and GPS-denied settings.&#13;
&#13;
MiFly performs 6DoF self-localization by leveraging a single millimeter-wave (mmWave) anchor in its vicinity, even if that anchor is visually occluded. MmWave signals are used in radar and 5G systems and can operate in the dark and through occlusions. MiFly introduces a new mmWave anchor design and mounts lightweight, high-resolution mmWave radars on a drone. By jointly designing the localization algorithms and the novel low-power mmWave anchor hardware (including its polarization and modulation), the drone is capable of high-speed 3D localization. Furthermore, by intelligently fusing the location estimates from its mmWave radars and its IMUs, it can accurately and robustly track its 6DoF trajectory.&#13;
&#13;
We implemented and evaluated MiFly on a DJI drone. We demonstrate a median localization error of 7 cm and a 90th-percentile error of less than 15 cm, even when the anchor is fully (visually) occluded from the drone.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Earth Mission Control: A Virtual Reality Platform for Bridging the Climate Science Communication Gap</title>
<link href="https://hdl.handle.net/1721.1/153834" rel="alternate"/>
<author>
<name>Cherner, Phillip</name>
</author>
<id>https://hdl.handle.net/1721.1/153834</id>
<updated>2024-03-22T04:01:34Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Earth Mission Control: A Virtual Reality Platform for Bridging the Climate Science Communication Gap
Cherner, Phillip
Data visualizations are incredibly powerful tools for engaging users with increasingly complex and unfamiliar information about the Earth’s changing climate, yet scientists often use only one tool or modality, such as two-dimensional figures and graphs, to communicate their ideas about climate data. With the rise of commercially available virtual reality (VR), we can leverage the affordances of this immersive technology to integrate multiple modalities into a cohesive experience. In this thesis, I present the design and implementation of Earth Mission Control (EMC), an immersive multi-user VR data visualization platform designed to enable scientists and educators to more effectively communicate their data-driven stories of climate impacts to policymakers and community members, helping them deepen their understanding of their community and the climate impacts it faces. EMC combines existing visualization modalities, such as NASA’s Hyperwalls, spherical projections (e.g., NOAA’s Science on a Sphere), map tables, virtual environments, 360 video, and human-scale immersive experiences, into an engaging and highly interactive VR environment that leverages each modality’s unique strengths. The design and creation of an AI-powered virtual assistant is also described as a way to add immersion, more natural interactions, and increased presence. Initial testing of the platform’s potential effectiveness in providing a deeper understanding of localized climate issues, available adaptation strategies, and personal actions is also discussed.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Changing the Course: Reimagining Switzerland’s Aging Nuclear Infrastructure</title>
<link href="https://hdl.handle.net/1721.1/153833" rel="alternate"/>
<author>
<name>Reinhard, Ellen Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/153833</id>
<updated>2024-03-22T03:35:32Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Changing the Course: Reimagining Switzerland’s Aging Nuclear Infrastructure
Reinhard, Ellen Marie
Countries worldwide have been experiencing a rise in the number of decommissioned nuclear power plants due to the infrastructure’s finite lifespan, which ranges from 20 to 60 years. Consequently, nearly all of the 410 nuclear power plants operating globally today will soon reach the end of their operating lives, and an additional 263 have already ceased operations. Of those, only a few have been repurposed with programs aimed at reintegrating the isolated site into its existing context. This thesis proposes to change that course by reimagining alternative ways of adaptively reusing the remaining infrastructural buildings to facilitate the process of reconnection.&#13;
&#13;
The thesis centers on Switzerland, home to some of the world’s oldest nuclear power plants. One of them, in Mühleberg, is the only decommissioned nuclear power plant in Switzerland to date and is therefore a pioneer of this process. The lengthy and costly process dedicated to safe nuclear fuel removal and building demolition, spanning 15 years and $3.2Bn USD, lasts until 2034. Following that, the remaining greenfield, currently surrounded by agricultural land, would be available for new purposes.&#13;
&#13;
The proposal imagines transforming the nuclear power plant in Mühleberg into an accessible pumped hydro storage system for energy storage. In addition, indoor hydroponics and outdoor agricultural land serve as extensions for the longstanding agricultural community. Beyond economic uses, recreational spaces are dispersed throughout the site for larger community engagement and participation.&#13;
&#13;
Zooming back out to the larger picture of aging nuclear energy infrastructure, this thesis applies the Mühleberg narrative to other affected sites globally. It also reflects on potential opportunities that arise when considering scalability.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Auctions with Multiple Items</title>
<link href="https://hdl.handle.net/1721.1/153829" rel="alternate"/>
<author>
<name>Zhang, Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/153829</id>
<updated>2024-03-22T03:20:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Online Auctions with Multiple Items
Zhang, Wei
Motivated by a recent switch of online ad exchanges from second-price auctions to first-price auctions, this thesis studies computational problems related to how an advertiser can select bids to maximize her cumulative reward when participating in a sequence of single-item first-price auctions, or a sequence of several first-price auctions that take place in parallel. In particular, we study the problem of regret minimization in this setting, extending prior work for second-price auctions. We show that sub-linear regret cannot be achieved when the values are continuous and there are two or more single-item auctions per round. On the other hand, we show that if the values are discretized, the regret can be made to grow sub-linearly, and this can be attained computationally efficiently using a best-response oracle. Finally, when there is a single first-price auction per round, we can attain tight regret bounds in two settings where additional information about the opponent bids is available in the form of hints.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Emergent Gaits with Decentralized Phase Oscillators:&#13;
on the role of Observations, Rewards, and Feedback</title>
<link href="https://hdl.handle.net/1721.1/153828" rel="alternate"/>
<author>
<name>Zhang, Jenny L.</name>
</author>
<id>https://hdl.handle.net/1721.1/153828</id>
<updated>2024-03-22T03:03:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Emergent Gaits with Decentralized Phase Oscillators:&#13;
on the role of Observations, Rewards, and Feedback
Zhang, Jenny L.
We present a minimal phase oscillator model for learning quadrupedal locomotion. Each of the four oscillators is coupled only to itself and its corresponding leg through local feedback of the ground reaction force, which we interpret as an observer feedback gain. The oscillator itself is interpreted as a latent contact state-estimator. Through a systematic ablation study, we show that the combination of phase observations, simple phase-based rewards, and the local feedback dynamics induces policies that exhibit emergent gait preferences, while using a reduced set of simple rewards, and without prescribing a specific gait.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open-Set Object Based Data Association</title>
<link href="https://hdl.handle.net/1721.1/153827" rel="alternate"/>
<author>
<name>Magoun, Tim Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/153827</id>
<updated>2024-03-22T04:05:46Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Open-Set Object Based Data Association
Magoun, Tim Y.
Representing the world using sparse objects allows for compact and semantically meaningful maps in simultaneous localization and mapping (SLAM). Traditionally, object detectors trained on a specific set of objects, such as the YCB objects, are used to provide input to the data association problem, which limits the scope of the system to environments that it has been trained on. With advancements in foundational models, we can extend this representation for objects that are not known a priori and do not have a labeled category during training. This thesis explores a system that creates data associations between open-set objects using an RGB-D camera and how it is used in a sparse object SLAM system. We show comparable trajectory performance to traditional SLAM systems while being more adaptable to out-of-distribution objects.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Railroad line capacity, scheduling, and dispatching models : state-of-the-art and possible extensions</title>
<link href="https://hdl.handle.net/1721.1/153807" rel="alternate"/>
<author>
<name>Little, Patrick.</name>
</author>
<id>https://hdl.handle.net/1721.1/153807</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Railroad line capacity, scheduling, and dispatching models : state-of-the-art and possible extensions
Little, Patrick.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1982; Bibliography: leaves 104-105.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laminar head-on flame quenching in a spherical combustion bomb</title>
<link href="https://hdl.handle.net/1721.1/153806" rel="alternate"/>
<author>
<name>Sellnau, Mark Charles.</name>
</author>
<id>https://hdl.handle.net/1721.1/153806</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Laminar head-on flame quenching in a spherical combustion bomb
Sellnau, Mark Charles.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1981; Includes bibliographical references.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and evaluation of an electrocutaneous dynamic phantom sensation</title>
<link href="https://hdl.handle.net/1721.1/153805" rel="alternate"/>
<author>
<name>Serocki, John Harvey.</name>
</author>
<id>https://hdl.handle.net/1721.1/153805</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Development and evaluation of an electrocutaneous dynamic phantom sensation
Serocki, John Harvey.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1981; Includes bibliographical references.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear waste reprocessing and disposal for Iran : an assessment.</title>
<link href="https://hdl.handle.net/1721.1/153801" rel="alternate"/>
<author>
<name>Sinaki, Ali Mohammad.</name>
</author>
<id>https://hdl.handle.net/1721.1/153801</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Nuclear waste reprocessing and disposal for Iran : an assessment.
Sinaki, Ali Mohammad.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weight and cost impact of large stand off distances on ships.</title>
<link href="https://hdl.handle.net/1721.1/153799" rel="alternate"/>
<author>
<name>Sims, Philip Johns.</name>
</author>
<id>https://hdl.handle.net/1721.1/153799</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Weight and cost impact of large stand off distances on ships.
Sims, Philip Johns.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1977; Bibliography: leaves 166-167.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Baleen Whale Detection and 2D Localization Using a Network of Unsynchronized Passive Acoustic Sensors</title>
<link href="https://hdl.handle.net/1721.1/153797" rel="alternate"/>
<author>
<name>Goldwater, Mark Harry</name>
</author>
<id>https://hdl.handle.net/1721.1/153797</id>
<updated>2024-03-16T03:25:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Automatic Baleen Whale Detection and 2D Localization Using a Network of Unsynchronized Passive Acoustic Sensors
Goldwater, Mark Harry
Underwater acoustics is a powerful tool for learning about the ocean's soniferous marine life. However, most modern acoustic sensing systems consist of expensive arrays of time-synchronized recorders that require a crewed research vessel and significant expertise to deploy, operate, and recover. Recently, there has been a growing corpus of research on algorithms for low-cost and accessible acoustic hardware. Deep learning methods have shown great promise when applied to inverse problems in underwater acoustics. While many signal processing or physics-based algorithms exhibit long run times and require manual labor to extract signals of interest, tune parameters, and visually verify results, an appropriately trained neural network can quickly process data with no human supervision. Both low-cost passive acoustic monitoring (PAM) sensing platforms and algorithms that can analyze massive amounts of raw data are critical to accessible and scalable approaches to ocean acoustic monitoring.&#13;
&#13;
This thesis presents a method for detection and 2D (latitude-longitude) localization of underwater acoustic sources without requiring synchronized sensors. The signals of interest here are the dispersive low-frequency impulsive gunshot vocalizations of North Pacific and North Atlantic right whales (NPRWs, NARWs). In shallow-water channels, the time-frequency representation of the received signal is strongly dependent on source-receiver range, making these impulses ideal candidates for range-based localization. The first step in the localization pipeline uses a temporal convolutional network (TCN) to simultaneously detect gunshot vocalizations and predict their ranges. Trained on spectrograms of synthetic data simulated in a variety of environments, the TCN is applied to PAM data from moorings in the Bering Sea. Gunshots are detected with high precision, and the range estimates are comparable to those estimated using traditional physics-based processing. Both methods use a minimal set of a priori environmental information including water column depth, sound speed, and density.&#13;
&#13;
Depending on the sensor layout, the TCN may need to scan large windows of data, so the number of unique acoustic sources is unknown. To automatically associate and localize range measurements, the proposed method seeks subsets of measurements across unique sensors which are internally consistent. For every considered measurement subset, locations are estimated with single constituent measurements left out and checked to be sufficiently close to the excluded measurement's set of potential locations. If a measurement subset is entirely consistent in this manner, the measurements are added as neighboring nodes in a graph-based representation, and strongly connected components are used to determine data associations and calculate the final source location estimates. Informed by the methods developed in this thesis, an array of low-cost TOSSIT moorings was deployed in Cape Cod Bay and used to collect experimental PAM data. The localization results are comparable to another similar physics-based inversion approach. Overall, this thesis aims to fill a gap in acoustic data processing methods where data from a low-cost network of unsynchronized acoustic sensors are fused to localize acoustic sources. The presented methods and data processing pipeline demonstrate the great potential of low-cost acoustic sensing systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Social Dilemmas in Multi-Agent Reinforcement Learning with Formal Contracting</title>
<link href="https://hdl.handle.net/1721.1/153795" rel="alternate"/>
<author>
<name>Christoffersen, Phillip Johannes Kerr</name>
</author>
<id>https://hdl.handle.net/1721.1/153795</id>
<updated>2024-03-16T03:12:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Mitigating Social Dilemmas in Multi-Agent Reinforcement Learning with Formal Contracting
Christoffersen, Phillip Johannes Kerr
As society deploys more and more sophisticated artificial intelligence (AI) agents, it will be increasingly necessary for such agents, while pursuing their own objectives, to coexist in common environments in the physical or digital worlds. This may pose a challenge if the agents’ objectives conflict with each other; in the worst case, this can prevent any given agent from fulfilling its own objectives (e.g., self-driving cars in a traffic jam). Situations such as these are termed social dilemmas. &#13;
&#13;
In this thesis, it is demonstrated that providing RL agents with the software infrastructure to precommit to zero-sum incentive modifications:&#13;
&#13;
1. Induces maximal social welfare in theory; and &#13;
2. When implemented with deep multi-agent reinforcement learning (MARL), also avoids social dilemmas in practice.&#13;
&#13;
Specifically, a novel algorithmic framework termed formal contracting is proposed, formalized, studied game-theoretically, and investigated empirically. In formal contracting, before engaging in a given shared environment, agents are given the opportunity to negotiate a binding modification to all agents’ objective functions, in order to provide incentives for the optimal use of shared resources. Within this framework, at all subgame-perfect equilibria (SPE), agents will in fact maximize social welfare, that is, the sum of all agent objectives in the original environment. Moreover, studies in simple domains, such as the classic prisoner’s dilemma, and in more complex ones, such as dynamic simulations of pollution management, show that this algorithmic framework can be implemented in MARL and does indeed lead to outcomes with superior welfare in social dilemmas. This thesis concludes with discussions of related work, limitations of the approach, and future work, particularly involving scaling this methodology to larger problem instances containing more agents than studied here.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo Methods for Motion Planning and Goal Inference</title>
<link href="https://hdl.handle.net/1721.1/153789" rel="alternate"/>
<author>
<name>Kondic, Jovana</name>
</author>
<id>https://hdl.handle.net/1721.1/153789</id>
<updated>2024-03-16T03:14:29Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Monte Carlo Methods for Motion Planning and Goal Inference
Kondic, Jovana
Human cognition exhibits remarkable abilities in reasoning about the plans of others. Even infants can swiftly generate effective predictions from minimal observations. This capability largely stems from our ability to employ specific assumptions about others’ decision-making, while considering potential alternative interpretations that align with reality. Such versatility is particularly crucial in navigation tasks, where multiple strategies exist for avoiding obstacles and reaching a target location. A sophisticated autonomous system should, therefore, be capable of: (1) acknowledging the inherent uncertainty in various obstacle avoidance strategies; and (2) predicting motion plans in a way that recognizes the different possibilities in a given goal-driven navigation scenario. To address these needs, we introduce a framework that captures the stochastic nature of motion planning and prediction through Monte Carlo sampling techniques. We ensure (1) by shifting the focus from pure trajectory optimization to generating a variety of near-optimal paths, and achieve (2) by developing a prediction method capable of capturing the inherent multimodality in the distribution over goal-driven trajectories. For the former, we utilize Markov Chain Monte Carlo (MCMC) methods to obtain trajectory samples that approximate the Boltzmann distribution, a common model for approximate rationality, which incorporates a cost function derived from trajectory optimization literature. For the latter, we develop a Bayesian model of the observed agent, and utilize Bayesian inference to reason about the underlying end goals of their movement. We propose a sequential Monte Carlo method that adapts the MCMC trajectory sampling to construct plausible hypotheses about the agent’s motion plan and then updates these hypotheses in real-time with new observations. 
In experiments conducted within continuous, obstacle-laden environments, we demonstrate our framework’s effectiveness for both diversity-aware motion planning and robust inference of latent goals from partial, noisy observations.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Phase Retrieval: A Robust and Efficient Multidimensional Phase Retrieval Algorithm</title>
<link href="https://hdl.handle.net/1721.1/153788" rel="alternate"/>
<author>
<name>Brabec, Cole</name>
</author>
<id>https://hdl.handle.net/1721.1/153788</id>
<updated>2024-03-16T03:30:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fast Phase Retrieval: A Robust and Efficient Multidimensional Phase Retrieval Algorithm
Brabec, Cole
We present the first phase retrieval algorithm with a set of deterministic recovery guarantees. We show that for a class of objects known as "Schwarz Objects", the algorithm is guaranteed to reconstruct the object given only the magnitudes of its discrete Fourier transform. We present numerical evidence that the algorithm additionally succeeds quite often for non-Schwarz objects. We also present a set of measurement matrices for which the algorithm is guaranteed to recover any object. We derive the algorithm by converting instances of the phase-retrieval problem to the Schwarz problem and refine the solution with local optimization. The result is an algorithm that is fast, universal and robust against noise.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perturbation-invariant Speech Representation Learning by Online Clustering</title>
<link href="https://hdl.handle.net/1721.1/153784" rel="alternate"/>
<author>
<name>Chang, Heng-Jui</name>
</author>
<id>https://hdl.handle.net/1721.1/153784</id>
<updated>2024-03-16T04:08:33Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Perturbation-invariant Speech Representation Learning by Online Clustering
Chang, Heng-Jui
Despite success across various tasks, self-supervised speech models face significant challenges in enhancing content-related performance with unlabeled data while requiring substantial computational resources. Meanwhile, learning from clustered discrete units has been shown to facilitate accurate phonetic representations. Thus, this thesis investigates speaker- and noise-invariant speech representations. First, Speaker-invariant Clustering (Spin) is proposed to extract content representations through online clustering and speaker-invariant cross-view prediction. Second, Robust Spin (R-Spin) is devised to extend Spin to handle more distorted speech signals by leveraging acoustic pieces. Furthermore, this thesis includes a diverse set of evaluation and visualization techniques to quantitatively and qualitatively analyze the perturbation invariance of the proposed methods. This thesis offers approaches to producing perturbation-invariant speech representations and deeply investigates the characteristics of the learned representations, providing insights into these models and cultivating future extension possibilities.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Stabilizing Controllers for High-dimensional Unknown Systems and Networked Dynamical Systems</title>
<link href="https://hdl.handle.net/1721.1/153783" rel="alternate"/>
<author>
<name>Zhang, Songyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/153783</id>
<updated>2024-03-16T04:05:50Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Stabilizing Controllers for High-dimensional Unknown Systems and Networked Dynamical Systems
Zhang, Songyuan
Designing stabilizing controllers is a fundamental challenge in autonomous systems, particularly for high-dimensional, nonlinear systems that cannot be accurately modeled with differential equations without sacrificing scalability and model transparency, and for large-scale networked dynamical systems where scalability and generalizability are the main obstacles. To address this challenge, we develop (1) a Lyapunov-based guided exploration framework to learn stabilizing controllers for high-dimensional unknown systems; and (2) a compositional neural certificate based on ISS (Input-to-State Stability) Lyapunov functions for finding decentralized stabilizing controllers in large-scale networked dynamical systems. Comprehensive experiments show that the proposed methods outperform prior work in terms of stability, especially in high-dimensional unknown systems and large-scale networked systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Potential Impact of Curved Meshing for Higher-order Adaptive Mesh Simulations</title>
<link href="https://hdl.handle.net/1721.1/153782" rel="alternate"/>
<author>
<name>Womack, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/153782</id>
<updated>2024-03-16T03:32:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">On the Potential Impact of Curved Meshing for Higher-order Adaptive Mesh Simulations
Womack, Christopher
Higher-order, adaptive finite element methods have demonstrated the ability to significantly reduce the human and computational cost of accurately approximating the solution to partial differential equations (PDEs). In this thesis, we consider the potential advantages of incorporating higher-order element shapes, i.e., curved meshes, into an adaptive process through the use of a mesh-based, geometric mapping. While previous work has considered the generation of curved meshes to account for geometry curvature, less research has attempted to curve meshes to control error in an adaptive process. This work considers adaptive finite element methods for the advection-diffusion PDE in both Cartesian and polar coordinate systems, with the polar coordinate transformation serving to demonstrate the potential benefits of incorporating curvature into an adaptive meshing process. Results are presented for both uniform and adaptive refinement, considering first a volume output problem, followed by a boundary output problem; analytic solutions to these canonical problems are derived and presented as well. The results of this investigation demonstrate that, for each polynomial order, discretization, and output functional tested, solving the advection-diffusion equation in a polar coordinate system achieves significantly higher levels of accuracy in computing output quantities of interest. These results also showcase the potential improvements which are possible with the use of an adaptive process that incorporates element curving to control error. Additionally, adjoint analysis performed in this work shows how the form of the primal output functional affects the adjoint PDE and boundary conditions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Relative Pose Estimation with Ultra-Wideband Ranging</title>
<link href="https://hdl.handle.net/1721.1/153779" rel="alternate"/>
<author>
<name>Fishberg, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/153779</id>
<updated>2024-03-16T03:01:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Multi-Agent Relative Pose Estimation with Ultra-Wideband Ranging
Fishberg, Andrew
Inter-agent relative localization is critical for any multi-robot system operating in the absence of external positioning infrastructure or prior environmental knowledge. Motivated by the applications of nuclear non-proliferation, radiological search, and radiological mapping, this thesis explores leveraging multiple ultra-wideband (UWB) ranging sensors to produce frequent inter-agent pose estimates with minimal communication overhead. This work is intended as a component of a larger multi-agent simultaneous localization and mapping (SLAM) system (also known as collaborative SLAM or CSLAM), where persistent UWB-based inter-agent pose estimates provide a valuable alternative source of inter-agent loop closures. By collecting and analyzing real data, we develop improved sensor models, which in turn inform our algorithm design process; thus, this work produces results that are competitive with or better than state-of-the-art approaches while requiring significantly less overall communication. By comparison, prior work typically supplements noisy UWB range measurements with additional continuously transmitted data, such as odometry, leading to potential scaling issues with increased team size and/or decreased communication network capability.&#13;
&#13;
This thesis’s main technical contributions are as follows: (1) Exploration of current commercially available off-the-shelf (COTS) UWB devices for use in mobile robotics. By analyzing real data, commonly overlooked sensor quirks are identified and addressed through our improved sensor models. (2) Development and testing of a novel 2D relative pose estimation system based on trilateration, leveraging multiple UWB ranging sensors per agent. (3) Extension of said system to 3D environments. (4) A list of recommendations and continuations for future work.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using source code to solve control problems</title>
<link href="https://hdl.handle.net/1721.1/153777" rel="alternate"/>
<author>
<name>Hernandez Cano, Leonardo</name>
</author>
<id>https://hdl.handle.net/1721.1/153777</id>
<updated>2024-03-16T04:01:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Using source code to solve control problems
Hernandez Cano, Leonardo
Planning for long-horizon tasks in environments with non-discrete state spaces and dynamics with discontinuities remains a core challenge in robotics. In this setting, fully automatic search methods do not yet scale to many real-world problems of interest, and because of this, specialized planning algorithms (e.g., hierarchical planners) have been developed that leverage domain knowledge to organize the search for a successful plan. However, these specialized algorithms rely on representations tailored to specific problems and domains, which imposes an additional implementation burden. Recent work has studied scalable techniques for finding concrete control inputs using only a given control specification in the form of a logical formula, which reduces the burden on the user.&#13;
&#13;
This thesis studies the application of program analysis techniques to the aforementioned planning problem, in conjunction with local formulae and hybrid search spaces in the style of hierarchical planners. Our observation is that the high-level structure of problem domains can often be coded into domain-specific simulators that model the high-level dynamics of the domain. This presents an opportunity to reuse that structure when describing the planning domain. We argue that this decreases the effort required to implement a planning system, since a domain expert can relate domain knowledge to simulator source code. Thus, we design a planning system that can leverage simulator source code when describing a planning domain.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information Retrieval with Dense and Sparse Representations</title>
<link href="https://hdl.handle.net/1721.1/153774" rel="alternate"/>
<author>
<name>Chuang, Yung-Sung</name>
</author>
<id>https://hdl.handle.net/1721.1/153774</id>
<updated>2024-03-16T03:33:33Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Information Retrieval with Dense and Sparse Representations
Chuang, Yung-Sung
Information retrieval, at the core of numerous applications such as search engines and open-domain question-answering systems, relies on effective textual representation and semantic matching. However, current approaches can lose nuanced lexical detail due to an information bottleneck in dense retrieval, or rely on exact lexical matching and thus overlook broader contextual relevance when using sparse retrieval. This thesis delves into improving both dense and sparse retrieval systems with advanced language models and training strategies. We first introduce DiffCSE, a difference-based contrastive learning framework for unsupervised sentence embedding and dense retrieval that can effectively capture minor differences in sentences, showcasing improved performance in semantic tasks and retrieval for open-domain question answering. We then address sparse retrieval's limitations by developing a query expansion and reranking procedure. Using pre-trained language models, we propose an expansion and reranking pipeline for better query expansion, achieving superior retrieval results both in-domain and out-of-domain, yet retaining sparse retrieval's computational efficiency. In summary, this thesis provides a comprehensive exploration of advancing information retrieval in the era of large language models.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Unified Framework for Characterization of Mode and Spike Routes to Rotating Stall</title>
<link href="https://hdl.handle.net/1721.1/153771" rel="alternate"/>
<author>
<name>Logrono, Marcos A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153771</id>
<updated>2024-03-16T03:17:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Unified Framework for Characterization of Mode and Spike Routes to Rotating Stall
Logrono, Marcos A.
In this thesis, we characterize modal and spike-type rotating stall inception for an isolated rotor using a low order, non-linear actuator disk model. The actuator disk representation is capable of capturing stall inception behavior given an axisymmetric total-to-static pressure rise characteristic. A parametric study of the effect of the derivative of the total-to-static pressure rise with respect to flow coefficient has been carried out to (i) define the links between the computed behavior of circumferentially propagating flow disturbances and those of established linearized analyses and (ii) describe both modes and spikes as different regimes of the same dynamical framework.&#13;
&#13;
The results of the parametric study show three distinct regimes for the non-dimensional compressor characteristics examined. For total-to-static pressure rise characteristic slopes below 0.2, exponentially growing sinusoidal disturbances lead to the onset of rotating stall with growth time scales on the order of ten rotor revolutions. This behavior is characteristic of what is known as modal inception, or modes. For pressure rise slopes above 0.4, disturbances with no sinusoidal structure and with magnitudes on the order of the mean axial flow were observed before the onset of rotating stall. The growth time scales of these disturbances were on the order of a rotor revolution. This behavior is characteristic of spikes. For pressure rise slopes between 0.2 and 0.4, both behaviors were observed. These results suggest a continuous transition between modal and spike inception, contrary to their conventional description as two distinct phenomena.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practical Diagnostic Tools for Deep Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/153769" rel="alternate"/>
<author>
<name>Casper, Stephen</name>
</author>
<id>https://hdl.handle.net/1721.1/153769</id>
<updated>2024-03-16T03:21:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Practical Diagnostic Tools for Deep Neural Networks
Casper, Stephen
The most common way to evaluate AI systems is by analyzing their performance on a test set. However, test sets can fail to identify some problems (such as out-of-distribution failures) and can actively reinforce others (such as dataset biases). Identifying problems like these requires techniques that are not simply based on passing a dataset through a black-box model. In practice, this challenge lies at the confluence of two fields: interpreting and attacking deep neural networks. Both of these goals help to improve oversight of AI. However, existing techniques are often not competitive for practical debugging in real-world applications. This thesis is dedicated to identifying and addressing gaps between research and practice.&#13;
&#13;
I focus on evaluating diagnostic tools based on how useful they are for identifying problems with networks under realistic assumptions. Specifically, this thesis introduces a benchmark for these tools based on their usefulness for identifying trojans: specific bugs that are deliberately implanted into networks. I present the following thesis: &#13;
&#13;
1. Trojan discovery is a practical benchmarking task for diagnostic tools that can be applied to both dataset-based and dataset-free techniques. &#13;
2. State-of-the-art feature attribution methods often perform poorly relative to an edge detector at discovering trojans even under permissive conditions with access to data containing trojan triggers. &#13;
3. Feature synthesis methods, particularly ones that leverage the latent representations of models, can be used more effectively for diagnostics in dataset-free contexts.&#13;
&#13;
Chapter 1 adopts an engineer’s perspective on techniques for studying AI systems. It overviews motivations for building a versatile toolbox of model-diagnostic tools. These hinge on their unique ability to help humans understand models without being limited to some readily accessible dataset.&#13;
&#13;
Chapter 2 overviews literature on interpretable AI, adversarial attacks, feature attribution, feature synthesis methods, and evaluation methods for these tools. It also reviews connections between research on interpretability tools, adversarial examples, continual learning, modularity, network compression, and biological brains.&#13;
&#13;
Chapter 3 presents a benchmark for diagnostic tools that is based on helping humans discover trojans. This can be done either (a) under permissive assumptions by allowing access to data that include the trojan triggers or (b) under stringent assumptions where no such access is available.&#13;
&#13;
Chapter 4 demonstrates the difficulty of this benchmark with a preliminary evaluation of 16 state-of-the-art feature attribution tools. This reveals two of their shortcomings. First, because they can only explain model decisions on specific examples, these tools are not equipped to help diagnose bugs without data that trigger them. Second, even under idealized conditions where examples containing a trojan trigger are available, most feature attribution methods consistently fail to identify them better than an edge detector.&#13;
&#13;
Chapter 5 focuses on dataset-free feature synthesis methods. It introduces two novel techniques for studying networks with feature-level adversarial attacks. Both use model latents to produce interpretable adversarial attacks. Compared to other state-of-the-art feature-synthesis tools, these techniques are the most useful for trojan discovery. However, there remains room for improvement on this benchmark. No techniques help humans identify trojans in more than 50% of 8-option multiple choice questions.&#13;
&#13;
Finally, Chapter 6 analyzes gaps between research and practical applications. It argues that a lack of clear and consistent criteria for assessing the real-world competitiveness of techniques has hampered progress. I conclude by discussing directions for future work emphasizing benchmarking, interdisciplinarity, and building a dynamic AI interpretability toolbox.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven Analysis of Clinical Trials</title>
<link href="https://hdl.handle.net/1721.1/153768" rel="alternate"/>
<author>
<name>Cho, Joonhyuk</name>
</author>
<id>https://hdl.handle.net/1721.1/153768</id>
<updated>2024-03-16T03:06:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Data-driven Analysis of Clinical Trials
Cho, Joonhyuk
The research combines two studies in the field of clinical trials. The first evaluates the amyotrophic lateral sclerosis (ALS) drug AMX0035 using Bayesian decision analysis (BDA), balancing FDA safety standards with patient needs. This method provides a quantitative way to consider both the patient’s perspective and the disease’s impact. The second study uses machine learning models to predict how long clinical trials will take. By analyzing a large dataset, it identifies factors that affect trial duration, helping to streamline the trial process and potentially reduce costs. Together, these studies offer new ways to evaluate and manage clinical trials, combining patient-focused evaluation with efficient trial design.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wireless Scheduling for Monitoring Remote Correlated Sources</title>
<link href="https://hdl.handle.net/1721.1/153767" rel="alternate"/>
<author>
<name>Ramakanth, Rudrapatna Vallabh</name>
</author>
<id>https://hdl.handle.net/1721.1/153767</id>
<updated>2024-03-16T03:16:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Wireless Scheduling for Monitoring Remote Correlated Sources
Ramakanth, Rudrapatna Vallabh
We study the design of scheduling policies to minimize monitoring error for a collection of correlated sources, where only one source can be observed at any given time. We model correlated sources as a discrete-time Wiener process, and later as a Linear Time-Invariant process, where the increments are multivariate normal random variables, with a general covariance matrix that captures the correlation structure between the sources. Under a Kalman filter-based optimal estimation framework, we show that the performance of all scheduling policies oblivious to instantaneous error can be lower and upper bounded by the weighted sum of Age of Information (AoI) across the sources for appropriately chosen weights. We use this insight to design scheduling policies that are only a constant factor away from optimality and make the rather surprising observation that AoI-based scheduling that ignores correlation is sufficient to obtain performance guarantees. We also derive scaling results that show that the optimal error scales roughly as the square of the dimensionality of the system, even in the presence of correlation. We extend these findings to processes with looser constraints. Finally, we provide simulation results to verify our claims.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-assisted reaction impurity prediction and inverse structure elucidation</title>
<link href="https://hdl.handle.net/1721.1/153765" rel="alternate"/>
<author>
<name>Mohapatra, Somesh</name>
</author>
<id>https://hdl.handle.net/1721.1/153765</id>
<updated>2024-03-16T03:48:21Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">AI-assisted reaction impurity prediction and inverse structure elucidation
Mohapatra, Somesh
Identification and control of impurities play a critical role in chemical process development for drug substance synthesis. Most chemical reactions result in a number of by-products and side-products, along with the intended major product. While chemists can predict many of the main process impurities, it remains a challenge to enumerate the possible minor impurities and even more of a challenge to track and propagate impurities derived from raw materials or from step to step. Further, in the absence of a systematic means for listing out possible low-level impurities and performing impurity propagation, inverse structure elucidation, that is, identifying unknown impurities post hoc from analytical data such as mass spectrometry data, presents a significant challenge.&#13;
&#13;
In this work, impurity prediction was established by developing an AI-based reaction predictor that takes as input the main reactants along with the reagents, solvents, and impurities in these materials. Further, the predictor was run iteratively to track impurity propagation in multi-step reactions. For inverse structure elucidation, a chemistry-informed language model was developed to translate mass spectrometry data to potential molecular structures, which can then be checked for matches against the predicted chemical reaction products. The impurity prediction tool was applied to the synthesis of common small-molecule drugs (paracetamol and ibuprofen), and the inverse structure elucidation tool was used for the identification of chemical structures from publicly available electrospray ionization mass spectrometry data. The models were applied to proprietary Amgen programs, both small molecule drugs and biologics, with significant results noted in both projects.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Determinants of Voluntary Carbon Emissions Targets</title>
<link href="https://hdl.handle.net/1721.1/153742" rel="alternate"/>
<author>
<name>Downing, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/153742</id>
<updated>2024-03-14T03:00:47Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">The Determinants of Voluntary Carbon Emissions Targets
Downing, Charles
This study seeks to examine whether and how firms near important emissions thresholds change their behavior to meet these targets. Emissions targets are commonly measured in two ways: absolute emissions levels and emissions intensity (absolute levels normalized by sales). To meet absolute benchmarks, firms can only reduce their actual emissions. However, to meet intensity-based benchmarks, firms can either lower their emissions or raise revenue to meet their goal. This study will characterize the differences between firms that choose these two measurements, and investigate whether and when firms shift their emissions or reporting behavior to meet their emissions targets. Furthermore, this study will characterize the capital market consequences of meeting or missing emissions targets, consider potential market-based benchmarks in addition to targets set by the firms, and test cross-sectionally when firms have stronger incentives or ability to react to these targets.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elsewhere in New York City: Seeking Opportunities for Office Conversion</title>
<link href="https://hdl.handle.net/1721.1/153741" rel="alternate"/>
<author>
<name>Hong, Nayeon</name>
</author>
<id>https://hdl.handle.net/1721.1/153741</id>
<updated>2024-03-14T03:06:07Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Elsewhere in New York City: Seeking Opportunities for Office Conversion
Hong, Nayeon
Office-to-residential conversions have emerged as the most promising solution to New York City’s housing crisis. Previously unprecedented in the country’s largest office market, this trend evolved in response to the impact of the pandemic on offices, further reinforced by the surplus of underutilized office spaces in major districts like Manhattan. However, this phenomenon is not exclusive to Manhattan; it extends across the entire city. Boroughs outside Manhattan, such as Brooklyn, may offer untapped potential for such conversions, benefiting from more favorable conditions like lower property costs, varied zoning regulations, and diverse community needs. Broadening the scope of these conversion projects to include other boroughs could lead to a more equitable distribution of housing resources, address the city-wide housing shortage more effectively, and stimulate balanced economic growth and community development across New York City’s diverse landscape. This thesis delves into the opportunities and challenges of office-to-residential conversions and conducts a comparative case study of two properties, comparable in physical condition, one in Manhattan and the other in Brooklyn. This study aims to explore how geographic differences within New York City might impact the feasibility of the conversion.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solar Roof Monetization in US Industrial Real Estate</title>
<link href="https://hdl.handle.net/1721.1/153740" rel="alternate"/>
<author>
<name>Xu, Ben</name>
</author>
<id>https://hdl.handle.net/1721.1/153740</id>
<updated>2024-03-14T03:18:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Solar Roof Monetization in US Industrial Real Estate
Xu, Ben
The transition towards clean energy in the US has placed the industrial real estate sector at the forefront of solar energy adoption, leveraging extensive, underused roof space for solar power generation. This thesis scrutinizes the process of solar roof monetization, assessing the interplay between market dynamics, policy frameworks, and the financial implications of various solar roof business models within the industrial real estate sector.&#13;
&#13;
Through a mixed-methods approach, including structured interviews with industry stakeholders and an extensive review of public databases and industry research reports, the research delineates the nuanced dynamics of the industrial solar market, marked by state-dependent variability and diverse regulatory environments, as well as the business models used for deployment. The study critically assesses the two predominant business models, self-ownership and roof leasing, exploring their operating structures and their implications for real estate owners.&#13;
&#13;
Utilizing a model grounded in real-world industrial underwriting, the thesis extends to a detailed financial analysis of the two solar roof business models, incorporating federal- and state-level policy incentives, notably tax credits, accelerated depreciation, and renewable energy certificates. A critical examination of operating metrics, including production efficiency, capital expenditures, financing costs, and revenue projections, also reveals their pivotal impact on investment returns.&#13;
&#13;
The thesis concludes with practical implications for industry stakeholders, providing a comprehensive guide to executing solar roof projects that not only align with corporate sustainability targets but also enhance financial and property values. This paper serves as a roadmap for industrial real estate owners seeking to capitalize on the transition to a cleaner energy grid while reinforcing their market position in an evolving landscape shaped by environmental imperatives and economic opportunities.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Feasibility Study of a Tension-Leg Platform for Hydro-Powered Turbines System and Metocean Data Analysis for Floating Wind Turbine Design</title>
<link href="https://hdl.handle.net/1721.1/153735" rel="alternate"/>
<author>
<name>Alus, Avri</name>
</author>
<id>https://hdl.handle.net/1721.1/153735</id>
<updated>2024-03-14T04:04:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Feasibility Study of a Tension-Leg Platform for Hydro-Powered Turbines System and Metocean Data Analysis for Floating Wind Turbine Design
Alus, Avri
Marine and wind energy stand as promising frontiers for clean and sustainable power generation. The first chapter of this study explores the feasibility of implementing a Tension Leg Platform (TLP) for a hydropower turbine system with an overall rated power of 1500 kW.  &#13;
&#13;
The TLP semi-submersible concept for harnessing ocean energy is an innovative approach, which allows the employment of turbines in deep waters near the water surface. The TLP's structural and tendon parameters are examined through simplified static and dynamic analyses, ensuring its stability under extreme conditions. Furthermore, a power yield analysis is demonstrated, utilizing hindcast datasets of the Gulf Stream, to meticulously pinpoint the most suitable site. This selection process takes into careful consideration factors such as current velocities, water depth, and proximity to the shoreline.&#13;
&#13;
In the second chapter, we embark on a thorough preliminary analysis of metocean data, focusing on a potential site for wind turbine deployment. This analysis relies heavily on statistical examination, employing historical buoy data as well as high-resolution hindcasts for rigorous data validation. The findings illuminate the frequent occurrence of adverse weather conditions, marked by the prevalence of high and severe conditions, intermittently punctuated by storms.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Estimation of Stochastic Parameters: A GLS Approach</title>
<link href="https://hdl.handle.net/1721.1/153734" rel="alternate"/>
<author>
<name>Huo, Da</name>
</author>
<id>https://hdl.handle.net/1721.1/153734</id>
<updated>2024-03-14T03:06:42Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Efficient Estimation of Stochastic Parameters: A GLS Approach
Huo, Da
This thesis presents a novel rolling GLS-based model to improve the precision of time-varying parameter estimates in dynamic linear models. Through rigorous simulations, the rolling GLS model exhibits enhanced accuracy in scenarios with smaller sample sizes and maintains its efficacy when the normality assumption is relaxed, distinguishing it from traditional models like Kalman Filters. Furthermore, the thesis expands on the model to tackle more complex stochastic structures and validates its effectiveness through practical applications to real-world financial data, like inflation risk premium estimations. The research culminates in offering a robust tool for financial econometrics, enhancing the reliability of financial analyses and predictions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Development of a Mobile Motion Capture&#13;
Suite for Advancing Technology Adoption</title>
<link href="https://hdl.handle.net/1721.1/153732" rel="alternate"/>
<author>
<name>Abdo, Hadeel</name>
</author>
<id>https://hdl.handle.net/1721.1/153732</id>
<updated>2024-03-14T03:06:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Design and Development of a Mobile Motion Capture&#13;
Suite for Advancing Technology Adoption
Abdo, Hadeel
Motion Capture (MoCap) technology has revolutionized several industries, including filmmaking, manufacturing, sports, and healthcare. Yet, the high cost and complexity of existing precise MoCap systems can make them inaccessible to many people. In addressing this accessibility problem, the Lab-in-a-Box (LabX) project was initiated within MIT’s Center for Clinical and Translational Research (CCTR) to develop a portable, accurate, user-friendly, and inclusive MoCap system to be used in healthcare applications and beyond.&#13;
&#13;
This thesis explores the initial stages of developing the LabX system, including extensive market research and user interviews, user-centric hardware design, software development, and camera integration and sensor fusion. Design decisions, such as the selection of Raspberry Pi cameras and the use of ROS2 for system integration, are made to ensure optimal performance. Structural tests are conducted to ensure durability and adaptability to diverse environmental conditions and natural vibrations. This stage of the LabX project lays the foundation for creating accessible markerless tracking and less-invasive radar motion capture systems in the future. The current design of LabX enables quick customization, creating a robust foundation for broader applications in physical therapy education, in-home remote sensing, and other use cases.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Lab to Life: Bridging Gaps in Motion Capture to&#13;
Increase Public Usability through Integrated Hardware&#13;
and Software Solutions</title>
<link href="https://hdl.handle.net/1721.1/153731" rel="alternate"/>
<author>
<name>Lonni, Pierre</name>
</author>
<id>https://hdl.handle.net/1721.1/153731</id>
<updated>2024-03-14T03:49:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">From Lab to Life: Bridging Gaps in Motion Capture to&#13;
Increase Public Usability through Integrated Hardware&#13;
and Software Solutions
Lonni, Pierre
This Master’s thesis delves into the initial stages of the Lab-In-A-Box (LabX) project, an initiative within MIT’s Center for Clinical and Translational Research (CCTR). LabX is dedicated to simplifying the incorporation of Motion Capture (MoCap) technology into home environments. The project’s primary aim is to create portable and accurate MoCap systems, utilizing less intrusive technology (such as RADAR signals instead of traditional IR or visible light) for capturing motion of individuals in their everyday lives. This approach seeks to revolutionize MoCap’s applicability, making it more accessible and user-friendly for public use.&#13;
&#13;
The central focus of this research is the development of a portable and stable sensor rig, which is crucial to LabX’s mission. Designed for precise data capture, the rig emphasizes ease of deployment and versatility, ensuring that it can be effectively used in various settings outside of specialized laboratories.&#13;
&#13;
In addressing the challenges presented by traditional MoCap systems, the thesis details the hardware development process, focusing on the creation of the project’s sensor rig and incorporating sensor fusion technology. This enhancement allows simultaneous data capture at different locations, emphasizing stability and portability for versatile application in various public settings.&#13;
&#13;
The thesis extends its focus to LabX’s overarching goal of enhancing MoCap’s public accessibility through integrated hardware and software solutions. A holistic approach is emphasized, encompassing sensor fusion and machine learning components. This integration aims to bridge gaps in traditional setups and render MoCap technology more inclusive and widely applicable.&#13;
&#13;
This research significantly contributes to advancing user-friendly MoCap technology, signifying a transition from controlled laboratory environments to real-world applications. The incorporation of hardware, sensor fusion, and machine learning solutions in LabX establishes a foundation for future advancements, ultimately enriching public interaction with motion capture and seamlessly integrating it into everyday life.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Process Control Framework&#13;
Incorporating Deep Reinforcement Learning for Desktop&#13;
Fiber Extrusion Device via PLC Implementation</title>
<link href="https://hdl.handle.net/1721.1/153730" rel="alternate"/>
<author>
<name>Zhang, Yutong</name>
</author>
<id>https://hdl.handle.net/1721.1/153730</id>
<updated>2024-03-14T03:02:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Development of Process Control Framework&#13;
Incorporating Deep Reinforcement Learning for Desktop&#13;
Fiber Extrusion Device via PLC Implementation
Zhang, Yutong
Optical fiber has revolutionized communication, and the market has experienced rapid growth in the last ten years. It can transmit information at high speeds with minimal loss over long distances due to its structure. Fiber extrusion, a common manufacturing method in the industry, involves controlling the fiber diameter during its formation. In this thesis, a control framework for a desktop fiber extrusion device is developed, incorporating Deep Reinforcement Learning. By improving the mechanical design of the desktop fiber extrusion device and implementing PID controllers for the system on the Allen-Bradley PLC, the coefficient of variation in the fiber extrusion process is reduced to 0.1. A communication path is established based on Open Platform Communications Unified Architecture (OPC UA), enabling external devices to access the data in the PLC. Using a Deep Reinforcement Learning model on a separate PC, the process is controlled to have a coefficient of variation of 0.13, with the potential to reduce the response time.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Microscale Analysis of Millimeter-wave Induced Vitrified Basalt for Use in Enhanced Geothermal Energy Systems</title>
<link href="https://hdl.handle.net/1721.1/153727" rel="alternate"/>
<author>
<name>Meltzer, Eve</name>
</author>
<id>https://hdl.handle.net/1721.1/153727</id>
<updated>2024-03-14T03:53:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Microscale Analysis of Millimeter-wave Induced Vitrified Basalt for Use in Enhanced Geothermal Energy Systems
Meltzer, Eve
Extraction of the energy available from geothermal heating in the Earth could provide substantial contributions to energy needs long-term. However, there are major technical limitations with the current geothermal drilling process. A new technology in the field of EGS that uses a millimeter (MM) wave gyrotron, which allows for quicker, more efficient drilling could be a potential solution to these limitations. The MM-wave drilling process, a technique developed by Dr. Paul Woskov of the MIT Plasma Lab, has two significant advantages as compared to traditional drilling: 1. The well hole advance is through melting of the rock, which is faster than mechanical drilling. 2. The molten rock then solidifies, creating a vitrified wall support without the need for extra casings. This drilling and casing process can potentially save money, time, and material. The study presented in this thesis is aimed at understanding the strength and microscale mechanical and chemical properties of the vitrified material to see what is happening to the rock, specifically basalt, pre- and post-melting by using a series of experimental and analytical tools. These include: Scanning Electron Microscopy (SEM), Energy Dispersive Spectroscopy (EDS), Nano-Indentation, Raman Spectroscopy, and optical imagery.&#13;
&#13;
The results presented in this thesis show the creation of a non-crystalline amorphous solid that has relatively high strength values with slight evidence of micro-cracking. There are significant elemental differences between the basalt matrix, transition zone matrix, and solidified melt in addition to changes in the molecular phases. The partial melting of basalt minerals throughout the transition zone was also recorded. Ultimately, due to micro-cracking and the variability in the transition zone's chemical make-up, there may be significant risks to using this material as a well-bore casing as it is now. However, these results open up the possibility of future research in the field of environmental sustainability for alternative uses of this new vitrified material.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying System Dynamics to Simulate and Forecast Rental Real Estate Market</title>
<link href="https://hdl.handle.net/1721.1/153726" rel="alternate"/>
<author>
<name>Chauhan, Rohit Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/153726</id>
<updated>2024-03-14T03:02:04Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Applying System Dynamics to Simulate and Forecast Rental Real Estate Market
Chauhan, Rohit Singh
This research explores the utilization of system dynamics modeling methodology to simulate and forecast a sub-market within the real estate industry. By doing so, this research examines the feasibility and potential of a system dynamics-based tool that could reliably forecast future trends and inform decision-making for businesses in a sub-market. It is based on the original system dynamics model for real estate markets as developed by John Sterman (Sterman, Case Study: Boom and Bust in Real Estate Markets 2000), and other subsequent examples of this methodology’s application in a real estate context since. It expands on this existing literature by recognizing and incorporating concepts central to the real estate industry, such as rental rates, affordability, absorption, inflation, cap rates, and rental prices, as key for predicting market movements.&#13;
&#13;
As a test bed, the multifamily rental housing in the South Boston region is identified for application. The study thus predicts short-term movement for the multifamily assets in this sub-market in comparison to forecasts from other major sources. It also highlights the limitations of this approach, such as the smoothing effect of generated data and its limitations in capturing seasonality in the market. The study further explores potential avenues for enhancing the functionality and accuracy of forecasts by endogenizing additional factors, thus establishing a foundation for subsequent research.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating the Storms of Distressed Ventures: South Korean Investments in US Office Real Estate</title>
<link href="https://hdl.handle.net/1721.1/153724" rel="alternate"/>
<author>
<name>Lee, David Sang Hyup</name>
</author>
<id>https://hdl.handle.net/1721.1/153724</id>
<updated>2024-03-14T03:00:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Navigating the Storms of Distressed Ventures: South Korean Investments in US Office Real Estate
Lee, David Sang Hyup
During the early to mid-2010s, Korean investors flooded into the US office real estate market, enticed by the promise of higher returns in an era of low interest rates. At this time, the Korean base interest rate exceeded the Fed funds rate, minimizing losses from currency hedging. The allure of investment was further magnified by the "herding effect" – a phenomenon driven by headlines of Korean institutions achieving success in the US office market. Fear of missing out (FOMO) and pressure from executives propelled a wave of Korean investments into the same sector. Today, Korean investors face distress in this market. The aftermath of COVID-19 has led to a significant decline in demand for office space, with employees reluctant to return to physical offices. Furthermore, the distress extends beyond demand dynamics; it encompasses financial turmoil caused by the Federal Reserve's rapid interest rate hikes. These hikes have created a double-edged sword, adversely impacting both equity investors struggling to meet loan obligations and lenders unable to recoup their loans.  This thesis explores potential solutions through real-life case studies, drawing from the author's experience working at a number of real estate private equity firms. The path to resolution, though, is fraught with challenges, including but not limited to: information asymmetry, moral hazards, a lack of experience in US office market distress, complex investment committee approval procedures, and the entanglement of numerous investors in single deals. This thesis sheds light on these complexities while offering insights into navigating the distressed landscape of US office real estate investments for Korean investors.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling the Dynamics of Inflation in Housing Rent</title>
<link href="https://hdl.handle.net/1721.1/153723" rel="alternate"/>
<author>
<name>Flores Jimenez, Julio E.</name>
</author>
<id>https://hdl.handle.net/1721.1/153723</id>
<updated>2024-03-14T03:04:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Unveiling the Dynamics of Inflation in Housing Rent
Flores Jimenez, Julio E.
Inflation is one of today’s biggest short-term global economic challenges, and housing costs, a persistent and irreplaceable component of inflation whose price increases have strongly eroded the purchasing power of American households, have more than doubled in the past 20 years. Housing cost rises have outpaced inflation for the rest of the products typically consumed by individuals, and low-income earners have been highly burdened by the situation. However, this has not always been the tendency, and this paper will explain how the recent rise in rents can be mainly attributed to a higher demand for housing, as opposed to higher construction and operating costs due to inflation spillovers into real estate related products. This will be demonstrated through both qualitative and quantitative analyses of the housing market and its price dynamics in the United States.  The first section of this document — The Upheaval of Housing Costs — will explain how rising house prices have spilled over into rising residential rents, and how this has been highly influenced by long periods of expansionary monetary policy and the implementation of Quantitative Easing, along with rising income inequality and the failure of the market to swiftly adapt its residential products to the changing dynamics in demand. This chapter offers a well-rounded explanation of the demand determinants of housing, as well as historical context to better understand why rents have outpaced inflation for other products since the 1980s.  The second section — Rents, House Prices, and Inflation — exhibits a quantitative analysis of how house prices and inflation for non-rent products impact residential rents. This analysis was carried out with an Error Correction Model to capture both the short-term and long-term dynamics of these variables, given that changes in house prices and inflation do not fully impact rents immediately. 
This model was run for the United States and replicated for Boston, Chicago, Dallas, Detroit, Houston, Los Angeles, Miami, New York, Philadelphia, and San Francisco. Results for this analysis show that since 1978, demand-pull inflation has dominated rent growth in the United States and in most of the studied cities. This analysis is followed by an Appendix showcasing the detailed outputs for every model, as well as graphs to visually support our quantitative analysis and provide comprehensive evidence of the dynamics of these variables in those cities.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Performance of Real Estate Investment Strategy Across Multiple Cycles: A Comparison of Core and Non-Core Strategies Based on A New Dataset and Industry Interviews</title>
<link href="https://hdl.handle.net/1721.1/153722" rel="alternate"/>
<author>
<name>Ding, Yizhuo (Wilson)</name>
</author>
<id>https://hdl.handle.net/1721.1/153722</id>
<updated>2024-03-14T03:38:20Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">The Performance of Real Estate Investment Strategy Across Multiple Cycles: A Comparison of Core and Non-Core Strategies Based on A New Dataset and Industry Interviews
Ding, Yizhuo (Wilson)
In the wake of the COVID-19 pandemic, the real estate capital markets have been thrust into a realm of heightened uncertainty, primarily due to fluctuating federal funds rates and rapidly changing economic conditions. This thesis delves into the intricate dynamics of Core and Non-Core private equity real estate strategies in response to these turbulent times. The research aims to dissect and understand the performance and strategic adjustments in real estate investment amidst changing capital market cycles, particularly in the post-pandemic landscape. Using a new source of data from the MSCI Property Index and NCREIF Research database, the study analyzes historic performance trends across strategies since 2000, identifying a strong correlation between market fundamentals and private real estate returns. The analysis highlights the superior performance of Development strategies in the Sunbelt and Southwest regions, contrasted with the decline of Rehabilitation/Repositioning strategies in West Coast markets, and it reflects a shift in office sector demand. The thesis also explores market expectations and strategic responses during the high-interest rate environment and secular market changes of the fourth quarter of 2023. Qualitative insights from 21 industry professionals point to a transition from falling values in 2023 to value recovery in 2024. The interviews also signal short-term opportunities for Core/Core-plus strategies in the forthcoming lower-rate environment, as inflation eases. The thesis also underscores the importance of aligning investment strategies with thematic investment trends, as evidenced by the success of development strategies in certain regions. The thesis posits that while investment style affects return and volatility, the overarching drivers of long-term returns across strategies are thematic trends and the broader market environment, including access to capital, leverage opportunities, and secular shifts. 
The study advocates for a holistic approach to investment decisions, considering thematic trends and market dynamics beyond just the immediate return and volatility differences.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/153717" rel="alternate"/>
<author>
<name>Nader, Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/153717</id>
<updated>2024-03-14T03:39:44Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Housing in Massachusetts
Nader, Andy
Massachusetts is experiencing a housing crisis with the cost of housing increasing more rapidly than in any other comparable coastal state over the past 40 years. This growth in the cost of housing has far outpaced the growth in household income. This thesis explores state economics, the housing market in Massachusetts, and one piece of recent legislation, the MBTA Communities Act, designed to directly address the housing crisis. Over these past forty years, cities and towns in Massachusetts have developed zoning codes that restrict the ability to add new housing to the existing stock. With such strong local control over land use, I argue that intervention is needed from the state to provide zoning relief and institute as-of-right high-density zoning. I will use the town of Milton as a case study to illustrate the adoption of the new legislation and theorize on the impact of unlocking new housing.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real Estate Redevelopment Framework: Quantitative Analysis of Adaptive Reuse Strategies</title>
<link href="https://hdl.handle.net/1721.1/153716" rel="alternate"/>
<author>
<name>Kittisorayut, Khanachai (Earn)</name>
</author>
<id>https://hdl.handle.net/1721.1/153716</id>
<updated>2024-03-14T03:28:40Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Real Estate Redevelopment Framework: Quantitative Analysis of Adaptive Reuse Strategies
Kittisorayut, Khanachai (Earn)
As urban landscapes continue to evolve, real estate developers face opportunities and challenges in redeveloping underutilized properties while maximizing their return on investment. This thesis explores the concept of adaptive reuse as a socially, environmentally, and economically viable strategy for real estate redevelopment. It provides a systematic and quantitative approach to identifying potential buildings, prioritizing areas for improvement, and assessing the financial feasibility of adaptive reuse projects.&#13;
&#13;
The study begins by exploring the fundamental concepts of adaptive reuse, encompassing cultural, urban, and environmental benefits that mutually contribute to economic value creation. A series of quantitative analyses then dissects the value drivers of adaptive reuse strategies. These analyses form a strategic toolkit, categorizing various strategies by investment phases from acquisition to disposition.&#13;
&#13;
Using Center Plaza in downtown Boston as a real-world case study, the thesis employs the Discounted Cash Flow (DCF) method to determine key financial metrics such as Net Present Value (NPV), Internal Rate of Return (IRR), Return on Cost (ROC), and Multiple on Invested Capital (MOIC). These metrics compare financial returns across different redevelopment scenarios—no improvement, adaptive reuse, and new construction. Further, the study employs volatility and cost-benefit analyses to gauge the impact on NPV and identify conditions under which redevelopment is viable. The comprehensive findings suggest that adaptive reuse can outperform complete redevelopment when conditions are favorable, requiring a minimum yield-on-cost for improvement averaging around 6.8%.&#13;
&#13;
In conclusion, the thesis provides a comprehensive framework for enhancing value and evaluating potential buildings for real estate redevelopment. It serves as a resource for real estate professionals, property owners, policymakers, and preservationists, advocating for the conservation and revitalization of our dynamic urban landscapes.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examination of Airbnb Demand and Supply in India</title>
<link href="https://hdl.handle.net/1721.1/153715" rel="alternate"/>
<author>
<name>Chotangada, Gautham Somana</name>
</author>
<id>https://hdl.handle.net/1721.1/153715</id>
<updated>2024-03-14T04:09:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Examination of Airbnb Demand and Supply in India
Chotangada, Gautham Somana
This thesis finds that Airbnb occupancy in India is low and examines the reasons for it. In particular, it focuses on the curious case of year-round high occupancies of 85%+ at the properties of RAHO, a hospitality company with accommodations listed on Airbnb in Coorg, South India. When comparing RAHO accommodation occupancies with the average Indian Airbnb occupancy of 36% and the average branded hotel chain occupancy of 66%, some questions become apparent. Is RAHO’s high occupancy systemic or idiosyncratic? What could be the reason underpinning the occupancy rate differences between Airbnb and branded hotel chains? This is a particularly relevant topic given the changes in the Indian economy. India is a rapidly developing country with an average year-on-year real GDP growth of 5.75% from 2013 to 2023. The GDP per capita has grown by 57% during the same period.  This economic development and increased disposable income have resulted in a larger, more powerful middle-income group that travels more often. As a result, the number of domestic traveler visits has doubled from 2013 to 2019. This increasing demand can be more easily met if accommodation supply comes from individual homeowners through online travel agencies (OTAs). The findings aim to inform strategies for improving the supply of suitable accommodations for this target group, particularly non-urban vacation destinations in India. This thesis hopes to provide a valuable resource for entrepreneurs in the space to build sustainable businesses by highlighting the primary reasons for higher occupancies and suggesting approaches for achieving them.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Choice Modeling and Assortment Optimization on the Transformer Model</title>
<link href="https://hdl.handle.net/1721.1/153714" rel="alternate"/>
<author>
<name>Jiang, Qingxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/153714</id>
<updated>2024-03-14T03:05:01Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Choice Modeling and Assortment Optimization on the Transformer Model
Jiang, Qingxuan
The problem of modeling customer choices and finding assortments with maximal revenue has been widely studied in revenue management. Random utility models (RUMs) are typically used to model choice. These models implicitly enforce a rational decision making process whereby a customer is endowed with utilities for each product in the assortment and picks the product that maximizes her utility. This work seeks to explore a general class of choice models where the customer’s decision making process is not constrained in this fashion.&#13;
&#13;
To allow for departures from rational choice (and RUMs), we posit that the customer indirect utility associated with a product is a function of the assortment offered to her. Motivated by the success of transformer models in deep learning, we investigate the case where this utility function is defined through a trained transformer network. This leads to a new class of neural network-based discrete choice models, which we call transformer choice models. The universal approximation property of the transformer network ensures that our model can approximate any discrete choice model, and thus it can capture irrationalities in choice behavior. &#13;
&#13;
We perform computational experiments with real data to verify the generalization performance of our transformer choice model to unseen assortments. To ensure that our model does not overfit on the training data, we use dropout as the regularization method during training. We compare our model to both traditional choice models (the multinomial logit model and its synergistic variant that considers cross-product interaction) and machine learning-based choice models (decision forest choice model and feedforward neural network choice model) on two datasets: a large grocery panel dataset and an online hotel search dataset. We show that on both datasets, the transformer choice model has generalized well to unseen assortments with proper regularization. Moreover, on the more complex dataset of online hotel search, the transformer choice model has outperformed all other models in terms of out-of-sample error.&#13;
&#13;
We finally consider the assortment optimization problem on transformer choice models. While the general assortment optimization problem is complex and intractable, we empirically evaluate and compare several heuristic algorithms, including random search, quadratic approximation, and local search. Our experiments on transformer choice models with real prices show that a simple local search heuristic finds the global optimum for the assortment optimization problem in three-fourths of the data categories, while achieving a good approximation on the rest of the categories. This shows that in practice, local search can be a reasonable heuristic for assortment optimization on transformer choice models.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivity of Precipitation to Land-use Changes in a Regional Climate Model of West Africa</title>
<link href="https://hdl.handle.net/1721.1/153713" rel="alternate"/>
<author>
<name>Ryser, Patric</name>
</author>
<id>https://hdl.handle.net/1721.1/153713</id>
<updated>2024-03-14T03:14:15Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Sensitivity of Precipitation to Land-use Changes in a Regional Climate Model of West Africa
Ryser, Patric
Limited water resources, climate change and food security needs in West Africa present a special set of challenges in the years to come as the population grows. An optimized irrigation scheme for agriculture can change regional climate by increasing rainfall in specific areas, possibly increasing the water availability for agricultural activities by causing changes in the background large-scale climate circulations which could lead to more precipitation overall in areas with water scarcity.  Both observational and model studies have looked at irrigation impacts around the world, including West Africa. However, the intermediate mechanisms, such as specific roles of the atmospheric structures of the Planetary Boundary Layer (PBL) and Lifting Condensation Level (LCL), or how background wind patterns are affected under certain land-use changes have not been thoroughly explored.  This thesis analyzes the atmospheric changes due to land-use and land-cover changes (LULCC) by analyzing the PBL, the LCL, surface wind, surface pressure and other atmospheric variables to quantify the underlying physical mechanisms which shape rainfall. We analyze this by using the MIT Regional Climate Model (MRCM) to test different LULCC scenarios. For the irrigation experiment, the LCL is more sensitive and drops more than does the PBL, especially in the north, yet rainfall only increases south of the irrigation area. There also exists a transitional zone, north of which there is less rainfall. Desertification increases both the PBL and LCL heights, but the increase in LCL is greater. This pushes the cloud base higher than the PBL, preventing cloud formation and rainfall. However, the simulated rainfall changes do not mirror this development. At a certain latitude, there is again a transitional zone, north of which the rainfall decreases and south of which the rainfall increases intermittently. 
Given the patterns of the precipitation changes, we believe that different mechanisms are at work for both the desertification and irrigation experiments. This study hypothesizes a blocking mechanism that prevents the monsoon from travelling northward, due to a high surface pressure anomaly observed to the north of the irrigated zone under the irrigation scenario.&#13;
&#13;
The changes of the atmospheric structure, specifically the PBL and LCL, surface pressure, and wind patterns, as analyzed in this thesis, provide us with another dimension to understand the effects of irrigation and desertification on rainfall, enabling more optimal irrigation strategies. It also provides insights on the locations where natural vegetation or croplands may benefit from the additional rainfall, which could facilitate soil carbon sequestration, a nature-based solution for combatting climate change.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Top Retailers in the United States:  A Changing Landscape of Space Demand in a Post-COVID Era</title>
<link href="https://hdl.handle.net/1721.1/153711" rel="alternate"/>
<author>
<name>Sun, Yueqi</name>
</author>
<id>https://hdl.handle.net/1721.1/153711</id>
<updated>2024-03-14T04:02:15Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Top Retailers in the United States:  A Changing Landscape of Space Demand in a Post-COVID Era
Sun, Yueqi
The retail sector in the United States has been undergoing a paradigm shift, predominantly driven by digitalization and further accelerated by the disruptive forces of the pandemic. This study examines the dynamic space needs of top retailers in the U.S. within the context of the post-COVID era. The study employs both qualitative and quantitative analyses, including macro research and pairwise comparisons of key financials and physical space metrics of 83 listed U.S. retail companies from 2017 to 2022. The research reveals a significant shifting of revenues towards e-commerce, and a substantial correlation between revenues in different channels and the space needs of physical stores and distribution facilities. By delving into the data and models, the thesis provides potential applications and insights into how the stakeholders within the industry could leverage the findings to plan and adapt in an evolving retail landscape.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Negotiating ROI (Return on Investment) for ROI (Return on Impact)  A Pre-Feasibility Study of Socio-Eco Resort Development in Eastern Indonesia</title>
<link href="https://hdl.handle.net/1721.1/153710" rel="alternate"/>
<author>
<name>Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/153710</id>
<updated>2024-03-14T03:23:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Negotiating ROI (Return on Investment) for ROI (Return on Impact): A Pre-Feasibility Study of Socio-Eco Resort Development in Eastern Indonesia
Christopher
The thesis delves into the intriguing possibility of developing a nature-based resort in the pristine Raja Ampat region of Eastern Indonesia that simultaneously maximizes Return on Investment (ROI) and Return on Impact (ROI*). With the tourism industry in Raja Ampat growing at an impressive rate of 310% in just five years before the pandemic hit, the potential for a successful socio-eco resort is undeniable. However, the study recognizes the need to consider the growing demand for environmentally sustainable travel and the desire of travelers to positively impact the local economy. The research aims to determine the best partnership structure and agreement for the general partner (GP), limited partner (LP), and hotel management to achieve the desired alignment between ROI and ROI*. This requires analyzing the level of sacrifice necessary for impact and how to measure the impact on various stakeholders, including investors, community leaders, local communities, hotel management firms, and potential customers. Additionally, the thesis explores the metrics to use when measuring impact for each stakeholder and ultimately aims to align the interests of all parties involved. The study recognizes the critical need to create a socio-eco nature-based resort that not only delivers financial returns but also generates social and environmental benefits. The research provides a unique perspective on the importance of local economic growth in a less developed area in Indonesia. Ultimately, the thesis aims to identify a partnership structure that ensures the success of the proposed resort while creating a positive impact on the local economy and environment.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>To attract or to oscillate: Validating dynamics with behavior</title>
<link href="https://hdl.handle.net/1721.1/153709" rel="alternate"/>
<author>
<name>Murray, Keith T.</name>
</author>
<id>https://hdl.handle.net/1721.1/153709</id>
<updated>2024-03-14T03:35:11Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">To attract or to oscillate: Validating dynamics with behavior
Murray, Keith T.
In recent years, the `computation-through-dynamics' framework has gained traction within the neuroscience community as a means of describing how neurological processes implement behavioral computations. The framework argues that computations in neural systems are best explained through dynamical systems in which behaviorally-relevant variables are represented and manipulated via dynamical phenomena. While a variety of previous works have demonstrated the framework's productivity, there are a number of challenges surrounding its efficacy. In this thesis, we identify and address two challenges concerning the existence of multiple dynamical systems which perform the same computation.&#13;
&#13;
We show that a continuous-time recurrent neural network (CT-RNN) can implement two distinct dynamical systems, termed the ``attractive mechanism'' and the ``oscillatory mechanism'', to compute a novel modular arithmetic task inspired by the card game SET. The attractive mechanism computes modular arithmetic through traversing a lattice of fixed-point attractors. The oscillatory mechanism computes modular arithmetic through phase-shifts on a limit cycle. The existence of these two dynamical mechanisms raises two challenges for the `computation-through-dynamics' framework:&#13;
1. How can computationally similar, yet dynamically distinct systems be experimentally identified?&#13;
2. What criteria determine the implementation of one dynamical system versus another?&#13;
&#13;
We address these questions by advocating for the use of behavioral phenomena. Through two experiments, we show how our dynamical mechanisms produce distinct psychometric curves when classifying ambiguous stimuli and generalize to unseen stimuli at different rates when trained on partial datasets. We further argue how these behavioral phenomena can serve as ecological criteria in determining the implementation of a mechanism. These results underscore the utility of behavior in the `computation-through-dynamics' framework.&#13;
&#13;
We conclude this thesis by formulating levels of abstraction for the `computation-through-dynamics' framework, termed `levels of neural computation'. Levels of abstraction were critically important in establishing the efficacy of digital computation; therefore, we speculate that the `levels of neural computation' will further advance the efficacy of the framework. These levels argue for interpreting dynamical systems as implementations for more abstract `geometric representations and manipulations' that effectively serve as neural algorithms.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural implicit representations for engineering design</title>
<link href="https://hdl.handle.net/1721.1/153704" rel="alternate"/>
<author>
<name>Rebbagondla, Jaya Manideep</name>
</author>
<id>https://hdl.handle.net/1721.1/153704</id>
<updated>2024-03-14T03:02:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Neural implicit representations for engineering design
Rebbagondla, Jaya Manideep
A good design geometry parameterization is essential for mechanical design engineers to quickly modify design features without remodeling everything from scratch. With the advent of better manufacturing methods, however, design geometries are becoming more and more complicated. Design parameterization is even more important in such cases, as remodeling such complex designs consumes significant time. Furthermore, such a parameterization can also aid the creative ideation of design engineers and decision processes at the management level. &#13;
&#13;
However, traditional design representation methods (B-rep, meshes, etc.) have difficulty representing designs with diverse topologies using the same limited number of parameters. Implicit neural representations are gaining popularity for 3D geometry representation because of their ability to represent a diverse set of designs in a fixed-length latent vector space. The goal of this thesis is therefore to identify the best implicit neural architecture for building a latent space of design geometries with diverse topologies, and to demonstrate the methods by which the learned latent space can then be explored. &#13;
&#13;
The effectiveness of this parameterization method is demonstrated by analyzing the reconstruction quality of the learned designs and the regularization quality of the latent space for an eight-design dataset. The superiority of these results is demonstrated both qualitatively and quantitatively. Several latent space exploration tools are then proposed to analyze the resultant latent space. Unique design geometry results are demonstrated for methods such as latent space interpolation, principal component analysis, and latent vector scaling. While random sampling of the latent space is shown to yield low-quality results because of the sparsity of the latent space, random sampling of the principal components of the latent space is shown to yield meaningful design geometries. Furthermore, a user interface for design space exploration is proposed in which the user can explore the parameter space by simply tuning the proportions of each of the dataset geometries. The possibility of training a surrogate model to map the latent space to metrics such as maximum von Mises stress is also analyzed using a dataset of 25 designs. Finally, the required characteristics of design parameterization are revisited to demonstrate that the proposed method satisfies the ideal characteristics of a design parameterization.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Does Investors’ Belief on Other Investors’ Information Acquisition Affect Trading and Price?</title>
<link href="https://hdl.handle.net/1721.1/153702" rel="alternate"/>
<author>
<name>Wang, Yuting(Economist)</name>
</author>
<id>https://hdl.handle.net/1721.1/153702</id>
<updated>2026-02-03T16:43:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Does Investors’ Belief on Other Investors’ Information Acquisition Affect Trading and Price?
Wang, Yuting(Economist)
I study how investors’ belief on other investors’ information acquisition about an asset affects trading and price, holding constant investors’ actual information acquisition. I hypothesize that the predictions depend on the trading strategy investors adopt, which is essentially determined by the nature of the asset and the level of investor sophistication. In a world where investors are able to form high-quality independent estimates of the fundamental asset value, they extract other investors’ signals from the price change and end up trading more aggressively on their private signals when they believe there have been more information acquirers. In contrast, in a world where investors cannot form high-quality independent estimates of the asset value, they tend to adopt a heuristic strategy and trade less aggressively on their private signals when they believe there have been more information acquirers. Using comprehensive private meetings data in China from 2007 to 2017 and a mandate by the Shenzhen Stock Exchange in 2012 that requires firms to disclose the dates and participants of private meetings within two trading days, I find that investors on average trade less aggressively when they believe there have been more information acquirers, consistent with the heuristic world. The results are concentrated in firms with high information uncertainty, e.g., firms with high market-to-book and volatility, which approximate a world where investors are less likely to have a high-quality fundamental anchor, supporting my theoretical mechanisms.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidation of Battery Electrolyte Coordination Sphere Thermodynamics via Calorimetric and Potentiometric Titrations</title>
<link href="https://hdl.handle.net/1721.1/153698" rel="alternate"/>
<author>
<name>Skiba, Dhyllan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153698</id>
<updated>2024-03-14T04:06:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Elucidation of Battery Electrolyte Coordination Sphere Thermodynamics via Calorimetric and Potentiometric Titrations
Skiba, Dhyllan A.
Rechargeable metal-anode batteries are a promising post-Li-ion battery development. However, the high reactivity of metallic anodes with the electrolyte results in the formation of a solid-electrolyte interphase (SEI). Electrolyte design is a key handle for controlling the SEI composition in metal-anode batteries, but our understanding of the electrolyte—specifically the cation’s first coordination sphere—is limited. In this thesis, techniques for the study of ion solvation and complexation are brought into the context of battery electrolytes. Relevant data from the literature are summarized and supplemented with enthalpy of solution (ΔsolH) and enthalpy of transfer (ΔtrH) measurements for the Li-battery-relevant salts LiPF6 and LiTFSI in a set of polar aprotic solvents. The trends observed are rationalized by consideration of solvent and anion properties, particularly solvent donicity and anion size. To achieve a finer picture of the Li+ coordination sphere, isothermal titration calorimetry (ITC) and potentiometric titrations (PT) were employed with a set of exemplar electrolytes to probe the thermodynamic evolution of the Li+ coordination complex as a weak solvent is displaced by a stronger solvent in the first coordination sphere. Raman spectroscopy is used to confirm that solvent displacement occurs as expected, and the effect of the anion on ITC measurements is investigated. A statistical binding model is developed and fit to the experimental titration data to extract an average change in Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) of solvent displacement. Preferential solvation tendencies are quantified for EC:DMC and EC:PC electrolytes using this methodology and compared with preferences observed by other workers. This thesis provides a framework for future studies on the thermodynamics of more complex battery electrolyte coordination environments and their connection with the SEI composition.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Equity Volatility Dynamics with Markov-Switching EGARCH Models</title>
<link href="https://hdl.handle.net/1721.1/153697" rel="alternate"/>
<author>
<name>Dennis-Sharma, Tyson</name>
</author>
<id>https://hdl.handle.net/1721.1/153697</id>
<updated>2024-03-14T03:35:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Forecasting Equity Volatility Dynamics with Markov-Switching EGARCH Models
Dennis-Sharma, Tyson
Understanding and anticipating stock market volatility enables better portfolio management. We forecast US equity volatility with a Markov-Switching EGARCH model with one high and one low volatility regime. We show that this model contains similar information about future volatility as the VIX Index. It also outperforms single-regime GARCH and EGARCH models. Moreover, the model’s 1-day ahead regime predictions are economically significant: market volatility and kurtosis, equity risk premia, and stock-bond relations shift when the model forecasts a regime change.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Data-Driven Analysis of Thermal Runaway Characteristics in Lithium-Ion Batteries</title>
<link href="https://hdl.handle.net/1721.1/153695" rel="alternate"/>
<author>
<name>Petersen, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/153695</id>
<updated>2024-03-14T03:04:44Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Machine Learning and Data-Driven Analysis of Thermal Runaway Characteristics in Lithium-Ion Batteries
Petersen, Julia
This study explores thermal runaway in lithium-ion batteries, particularly examining NCM (Nickel Cobalt Manganese) and NCA (Nickel Cobalt Aluminum) chemistries. Utilizing data analysis and machine learning on approximately 400 data points, it gives insights into thermal runaway dynamics, focusing on characteristic parameters such as the onset temperature of self-heating (T1), the onset temperature of thermal runaway (T2), the maximum temperature during thermal runaway (T3), and mass loss. The investigation revealed that NCA cells are more prone to thermal runaway, exhibiting lower initial self-heating temperatures compared to NCM cells. A notable preliminary finding is the potential link between nickel content in battery chemistries and thermal runaway initiation temperatures. Higher-nickel compositions, such as NCM811 and various NCA cells, tend to display lower initial self-heating temperatures, possibly indicating faster progression toward thermal runaway. The limited research on how nickel content specifically influences the onset of self-heating during thermal runaway in battery cells underscores the need for new investigations into the cathode’s role and the factors beyond SEI layer decomposition. Addressing this gap, particularly the impact of nickel content on the critical onset temperature of exothermic heating that initiates thermal runaway, is essential to deepen our understanding of thermal dynamics and improve battery safety and stability.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Compact Non-Volatile Photonic Switching Based on Optical Phase Change Material and Graphene Heater</title>
<link href="https://hdl.handle.net/1721.1/153693" rel="alternate"/>
<author>
<name>Dao, Khoi Phuong</name>
</author>
<id>https://hdl.handle.net/1721.1/153693</id>
<updated>2024-03-14T03:10:16Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Modeling Compact Non-Volatile Photonic Switching Based on Optical Phase Change Material and Graphene Heater
Dao, Khoi Phuong
On-chip photonic switches are the building blocks for programmable integrated circuits (PICs), and the integration of phase change materials (PCMs) enables promising designs that are compact, non-volatile, and efficient. However, conventional PCMs such as Ge₂Sb₂Te₅ (GST) introduce significant optical absorption loss, leading to elevated insertion losses in devices. Current approaches, which compensate for this loss through weak evanescent light-PCM interactions, result in larger-footprint devices. A compact non-volatile 2 × 2 switch design is introduced, leveraging optical concentration in slot waveguide modes to significantly enhance interactions of light with the PCM, thereby realizing a compact, efficient photonic switch. The crystalline-amorphous phase transitions are driven by an integrated single-layer graphene heater, providing high electro-thermal efficiency, low absorption loss, and rapid switching speed. Computational simulations demonstrate reversible phase transitions of Sb₂Se₃ facilitating two working states with crosstalk (CT) down to -24 dB at 1550 nm wavelength and more than 55 nm of 0.3 dB insertion loss (IL) bandwidth. The proposed photonic switch architecture can constitute the cornerstone for next-generation high-performance reconfigurable photonic circuits.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soft at the Joints</title>
<link href="https://hdl.handle.net/1721.1/153692" rel="alternate"/>
<author>
<name>Williams, Susan</name>
</author>
<id>https://hdl.handle.net/1721.1/153692</id>
<updated>2024-03-14T04:06:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Soft at the Joints
Williams, Susan
A building can be understood entirely through its joints. It can explain gravitational forces, interlacing moments of material application, and environmental conditions. Yet, this portion of the design is often relegated to the end of the design process, as a finishing touch. The walls, floors, and roof are meticulously considered, while the spaces between are left blank in order to accommodate the imperfections and unsolved complexities that occur when the idealism of design meets the reality of assembly.&#13;
&#13;
In 1851, Gottfried Semper proclaimed, “The beginning of building coincides with the beginning of textiles.” Over the past hundred and fifty years this statement has moved in and out of relevance as manufacturing, digital tools, design trends, and the roles of designer and builder have changed. Today, architecture’s relationship with textiles is somewhat estranged. Like the joint, textiles appear at the completion of a project’s development, confined to fulfilling an aesthetic role. Yet textiles are materials with unique properties that allow for both high strength and flexibility at the same time. Unlike in architecture, in textiles the interlacing of fabric is the starting point of both design and construction.&#13;
&#13;
This thesis re-envisions new methods of architectural design through the logics of textiles: by applying principles of aggregation, establishing a dependent relationship between material and structure, and designing through making at a one-to-one scale. As a result, this project acts as a catalyst for playful tectonic systems, eliminating the boundary between where the joint begins and where it ends.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speeding up Housing Supply in Hong Kong through Land Readjustment</title>
<link href="https://hdl.handle.net/1721.1/153690" rel="alternate"/>
<author>
<name>Li, Mingyao</name>
</author>
<id>https://hdl.handle.net/1721.1/153690</id>
<updated>2024-03-14T03:43:26Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Speeding up Housing Supply in Hong Kong through Land Readjustment
Li, Mingyao
For over a decade, Hong Kong's housing has been ranked the least affordable globally. “Pricy and cramped” living conditions have increasingly become a pressing social issue concerning the public at large. Explanations of this housing issue are multi-faceted, among which the most fundamental cause is the insufficient supply of developable land. In response to this shortage, the Hong Kong government has passed a controversial bill to develop a large-scale reclamation project costing more than US$50 billion to build. Nevertheless, a massive amount of land in the rural New Territories remains idle or underutilized due to its convoluted history and ownership. The housing crisis may be eased more effectively if solutions can be formulated to make these lands developable.&#13;
&#13;
This thesis focuses on understanding the context, characteristics, and limiting factors affecting the development potential of these rural lands. Correspondingly, a land management mechanism – Land Readjustment – will be introduced as a feasible tool to overcome major obstacles.&#13;
&#13;
Chapter I – Hong Kong: Calling for a Solution to the Land Supply Problem introduces current land and housing supply issues and elaborates on how different land supply mechanisms have failed to create sufficient land for housing development. Then, the root cause on a theoretical level is explained – bilateral monopoly and constituency effect are the main predicaments paralyzing the Hong Kong land supply system. A practical solution will require breaking the gridlock inherent in current power dynamics.&#13;
&#13;
Chapter II – Land Readjustment: A Possible Solution brings forth Land Readjustment as a potential tool to address the land supply problem. As Land Readjustment is a relatively unfamiliar concept in the U.S., a brief introduction explaining the rationale is presented. Embedded in its characteristics are the benefits it can realize and objectives it can achieve, which are regarded as valuable, as they are aligned with major obstacles the government faces in developing rural land in Hong Kong. As Land Readjustment does not directly lead to housing affordability, a separate discussion is dedicated to different ways to create affordable housing within the framework of Land Readjustment.&#13;
&#13;
Chapter III – Applying Land Readjustment in Hong Kong focuses on drawing a tighter connection between the problem and the solution. The first evaluation is whether Hong Kong can meet all the pre-conditions to qualify for implementation of Land Readjustment. Second, ex-post performance evaluation frameworks are adapted to an ex-ante assessment of whether a satisfactory outcome could be achieved through Land Readjustment. Third, through international case studies, more practical mechanisms are incorporated to generate a bespoke proposal to address the unique conditions in Hong Kong.&#13;
&#13;
To summarize, applying Land Readjustment to speed up the housing supply in Hong Kong is a feasible proposal. It can not only promote private participation to expedite land development with equitable sharing of costs and benefits but also contribute to untangling the long-lasting impasse among the Rural Committee, private developers, and the government against the backdrop of criticisms of real estate hegemony. Most importantly, the development potential of the rural New Territories can be unleashed. Hong Kong's youth may see a glimmer of hope of owning their first home sooner, and with better quality.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Profit Real Estate: Financial Strategies for Mission and Impact</title>
<link href="https://hdl.handle.net/1721.1/153687" rel="alternate"/>
<author>
<name>Cha, Yoon</name>
</author>
<id>https://hdl.handle.net/1721.1/153687</id>
<updated>2024-03-14T03:31:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Non-Profit Real Estate: Financial Strategies for Mission and Impact
Cha, Yoon
This thesis examines how land-endowed nonprofits can optimize their assets to better serve their mission and unlock value for the communities they serve. After exploring various real estate strategies and partnership structures among nonprofit, for-profit, private, and public entities related to nonprofit land use, the thesis will apply its lessons to a detailed case study of the Cambridge Young Women’s Christian Association to inform short- and long-term real estate policies that complement and maximize the organization’s mission impact.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing the Role of Amines in Aqueous Electrochemical Reduction of Captured-state CO₂</title>
<link href="https://hdl.handle.net/1721.1/153684" rel="alternate"/>
<author>
<name>Bernhardt, Elizabeth M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153684</id>
<updated>2024-03-14T03:40:07Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Probing the Role of Amines in Aqueous Electrochemical Reduction of Captured-state CO₂
Bernhardt, Elizabeth M.
Integrating CO₂ capture and CO₂ conversion into a single reactor presents an opportunity to power the combined process with renewable electricity. The integration of these two historically separate technologies establishes a large system parameter space, which introduces many handles for system optimization, but also presents many challenges. As the capture medium takes on the dual role of sorbent and electrolyte, a complex landscape of potential reaction pathways emerges. Before these integrated systems can be engineered to perform at industrial scales, we must better understand the speciation and characteristics of the capture medium, as well as its impact on transport and interfacial properties. The integration of CO₂ capture and conversion processes has primarily been investigated in aqueous, amine-based solutions to draw on the maturity of amine chemistry in CO₂ capture. However, when subjected to reducing currents, the aqueous solvent provides a pathway for parasitic hydrogen evolution. Additionally, amines become ion pairs upon uptake of CO₂, giving them the opportunity to act as both reactant and supporting electrolyte. We approach the complexity of these systems by investigating the influence of amine choice on electrochemical performance. Primarily, we explore how amine physicochemical properties, namely steric hindrance and pKₐ, impact speciation, product selectivity, and cell performance. We chose a subset of primary alkylamines with varied steric hindrance and pKₐ and evaluated each on the basis of Faradaic efficiency, partial current density to reduced products, and the dynamics of product formation on Ag-based electrocatalysts. Through these measurements, we elucidate trends in the competition of hydrogen evolution and carbon monoxide formation as a function of amine pKₐ and steric hindrance in order to inform the choice of sorbent-electrolyte for industrially integrated amine-based CO₂ capture and conversion.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Inference and Experimental Design of Combustion Kinetic Models</title>
<link href="https://hdl.handle.net/1721.1/153682" rel="alternate"/>
<author>
<name>Chen, Huaibo</name>
</author>
<id>https://hdl.handle.net/1721.1/153682</id>
<updated>2024-03-14T03:36:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bayesian Inference and Experimental Design of Combustion Kinetic Models
Chen, Huaibo
In combustion kinetic model calibration, researchers usually use experimental data to reduce the uncertainty of kinetic parameters, and Bayesian inference is the most common approach to do inverse calibration. This thesis explores two interconnected aspects of Bayesian approaches in the context of combustion kinetics models: how to utilize high-resolution species profiles in Bayesian inference and how to identify the most informative experimental conditions to collect data. In the first part, we investigated the impact of the effective independent-data number and target selection on Bayesian inference of kinetic parameters using species time-histories obtained from shock tube experiments. Neural networks serve as response surfaces. Maximum a posteriori estimation and Markov chain Monte Carlo sampling are employed to determine optimal parameters as well as their uncertainty. Three optimization strategies are employed: utilizing the entire species time-history curve with effective independent-data numbers of 1 (C-1) and 160 (C-160), and using only the last point of each curve (LastP). All three improved models fit experimental data better. Comparing C-1 with C-160 reveals that increasing the number of targets improves prediction accuracy but may lead to overtuning. Comparing C-1 with LastP, LastP exhibits comparable or slightly better agreement with measurements, suggesting that focusing on critical points is effective for point estimation. However, C-1 shows different posterior uncertainty from LastP in both parameters and predictions, despite their similarity in the point estimation.&#13;
&#13;
Experimental data obtained at different experimental conditions (e.g., pressure, temperature, equivalence ratio, etc.) is not equally informative when it is used to calibrate kinetic parameters. Thus, experimental design becomes an important topic in combustion kinetics, where the most informative condition can be identified by algorithms. In the second part, we propose an efficient Bayesian experimental design algorithm that integrates Laplacian approximation-based experimental design with gradient-based design optimization, employing sophisticated neural network response surfaces for mapping kinetic parameters to target prediction at a wide range of thermodynamic conditions. The algorithm demonstrates efficiency and robustness against local maxima. Additionally, to meet various needs in kinetic experiments, we develop various experimental design targets based on the posterior covariance matrix, including model-oriented, parameter-oriented, target-oriented, and parallel experimental design. The proposed method, utilizing a full posterior covariance matrix without fixing any parameter of insensitive reactions, achieves significant acceleration compared to previous methods, demonstrating effectiveness in reducing parameter and target uncertainty as well as designing multiple experiments simultaneously.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Machine Learning Approach to Improve Diameter Control in Desktop Fiber Extrusion Processes</title>
<link href="https://hdl.handle.net/1721.1/153677" rel="alternate"/>
<author>
<name>Patrick, Keeghan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/153677</id>
<updated>2024-03-14T03:01:01Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Machine Learning Approach to Improve Diameter Control in Desktop Fiber Extrusion Processes
Patrick, Keeghan J.
A machine learning approach to controlling the diameter of a desktop fiber extrusion process with a PLC is developed and evaluated against the performance of PID control. The deep reinforcement learning model learns to control the output diameter of the process toward a given target without any knowledge of the system dynamics, after being trained on hours of data recorded from an open-loop control process. After training, the model can receive sensory information from a PLC, calculate an action based on the desired target, and send the action to the PLC to execute.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How actors and groups in the family business system  influence innovation in the family business:  an analytical framework</title>
<link href="https://hdl.handle.net/1721.1/153672" rel="alternate"/>
<author>
<name>Vanparys, Thierry F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153672</id>
<updated>2024-03-14T04:02:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">How actors and groups in the family business system  influence innovation in the family business:  an analytical framework
Vanparys, Thierry F.
Innovation is understood to be vital to the prosperity and survival of family businesses, and there is great value for practitioners, advisors, researchers, and academics in understanding how innovation occurs in family businesses—in a clear and practical way. I provide a framework that aids in shedding light on how and by whom innovation may be enacted, promoted, and supported in the family business system.&#13;
&#13;
The family business literature offers clear and practical models explaining that the family business must be understood in the context of the family business system, which includes the business organization, the owners of the business, and the family that has ownership control of the business.  Frameworks also explain how this system may be affected by how a family in business changes over time. These are demonstrated by the “Three-Circle Model” and the “Three-Dimension Developmental Model” of the family business, respectively. The literature on innovation is extensive, although, as a body, much of it is confusing and impractical for consistent application across the family business system. Recognising this, I focus the discussion on two taxonomies drawn from the literature, chosen for their accessibility, their applicability, and the crispness with which they allow us to talk about innovation. I then focus on one taxonomy and connect it back to the actors and groups in the family business system to establish the analytical framework. I believe the latter, with its practical, actionable orientation, to be a valuable addition to the literature.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Inventory Induction under Demand Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/153669" rel="alternate"/>
<author>
<name>Robin, Arnaud</name>
</author>
<id>https://hdl.handle.net/1721.1/153669</id>
<updated>2024-03-14T03:10:33Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Robust Inventory Induction under Demand Uncertainty
Robin, Arnaud
E-commerce retailers need to meet growing demand and rising customer expectations while efficiently managing operating costs across global supply chains. This thesis addresses the tactical problem of inventory induction under demand uncertainty, which involves determining where to position incoming inventory to serve future customer demand. We formulate the problem via two-stage adaptive robust optimization with right-hand side uncertainty. First-stage variables characterize initial induction and positioning, and second-stage variables capture subsequent rebalancing and order fulfillment. Demand is modeled via an uncertainty set based on an aggregate forecast---at the nation-wide and monthly level---to protect against spatiotemporal deviations---at the local and daily level. We develop a Benders decomposition algorithm, iterating between a lower-bounding master problem and an upper-bounding subproblem. We accelerate the Quadratically Constrained Quadratic Program (QCQP) subproblem with primal heuristics and dual-bounding strategies---including a novel simplicial relaxation. We also propose a cut-learning strategy from offline instances to warm-start the Benders decomposition scheme. We conduct extensive computational experiments, leveraging an experimental setup built on real-world data and developed in collaboration with a major e-commerce provider. From a computational standpoint, results show the benefits of the acceleration strategies for the subproblem and the master problem which, together, outperform state-of-the-art benchmarks in terms of optimality gaps, solution quality, and computational times. From a practical standpoint, results suggest that the adaptive robust solution can provide significant benefits on average against the deterministic benchmark, reducing operating costs by up to 5-10% and improving delivery speeds by up to 1%.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>T2 Characterization of Oil-In-Water Emulsions for NMR Sensor Applications</title>
<link href="https://hdl.handle.net/1721.1/153667" rel="alternate"/>
<author>
<name>Zammit, Alexa S.</name>
</author>
<id>https://hdl.handle.net/1721.1/153667</id>
<updated>2024-03-14T03:32:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">T2 Characterization of Oil-In-Water Emulsions for NMR Sensor Applications
Zammit, Alexa S.
Fluid status assessment is an essential aspect of healthcare with implications in chronic conditions such as renal disease and congestive heart failure. Current fluid status determination techniques lack quantitative methods and standards. Our research explores a point-of-care approach through a portable single-sided magnetic resonance (MR) sensor. We are developing a more accurate and clinically relevant hydration metric through measuring localized skeletal muscle. Phantoms are used as stand-ins for a human subject to calibrate and ensure system functionality. The microstructure of an emulsion also mimics the multiple compartments of tissue such as the intra and extracellular volumes of muscle and adipose tissue. We aim to use oil-in-water emulsions as phantoms to ensure device reproducibility and determine how much the scale of the microstructure affects relaxation behavior. A quantitative understanding of the length scales appropriate for muscle and adipose tissue will help determine the reliability of our hydration measurement.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commercial production of carbon free chromium or ferrochrome by leaching from the ore and electrolysis</title>
<link href="https://hdl.handle.net/1721.1/153595" rel="alternate"/>
<author>
<name>Crafts, Walter.</name>
</author>
<id>https://hdl.handle.net/1721.1/153595</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1926-01-01T00:00:00Z</published>
<summary type="text">Commercial production of carbon free chromium or ferrochrome by leaching from the ore and electrolysis
Crafts, Walter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1926; Includes bibliographical references (leaf 30).
</summary>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Air conditioning of railway passenger cars</title>
<link href="https://hdl.handle.net/1721.1/153593" rel="alternate"/>
<author>
<name>Steenkamp, W. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/153593</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">Air conditioning of railway passenger cars
Steenkamp, W. L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1939; Includes bibliographical references (leaves 184-188).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trapping and discharge of megavolt electrons in solid dielectrics</title>
<link href="https://hdl.handle.net/1721.1/153592" rel="alternate"/>
<author>
<name>Chang, William Wai.</name>
</author>
<id>https://hdl.handle.net/1721.1/153592</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Trapping and discharge of megavolt electrons in solid dielectrics
Chang, William Wai.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1963; Includes bibliographical references (leaf 58).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A strategic analysis for a Chilean steel castings firm</title>
<link href="https://hdl.handle.net/1721.1/153590" rel="alternate"/>
<author>
<name>Armas, Juan Pablo.</name>
</author>
<id>https://hdl.handle.net/1721.1/153590</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">A strategic analysis for a Chilean steel castings firm
Armas, Juan Pablo.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaf 98).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The transfer of managerial skills through United States enterprises in the developing nations.</title>
<link href="https://hdl.handle.net/1721.1/153587" rel="alternate"/>
<author>
<name>Khalifa, Ahmes Mohamed Said.</name>
</author>
<id>https://hdl.handle.net/1721.1/153587</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">The transfer of managerial skills through United States enterprises in the developing nations.
Khalifa, Ahmes Mohamed Said.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1968; Bibliography: leaves 97-98.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Outsourcing railway engineering</title>
<link href="https://hdl.handle.net/1721.1/153585" rel="alternate"/>
<author>
<name>Heavin, Jerry W. (Jerry Wayne)</name>
</author>
<id>https://hdl.handle.net/1721.1/153585</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">Outsourcing railway engineering
Heavin, Jerry W. (Jerry Wayne)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1988; Bibliography: leaves 162-163.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PRMT5 Inhibitors in Merkel Cell Carcinoma</title>
<link href="https://hdl.handle.net/1721.1/153471" rel="alternate"/>
<author>
<name>Higgins, Kathleen Whitmore</name>
</author>
<id>https://hdl.handle.net/1721.1/153471</id>
<updated>2024-02-09T03:08:01Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">PRMT5 Inhibitors in Merkel Cell Carcinoma
Higgins, Kathleen Whitmore
Merkel Cell Carcinoma (MCC) is a rare neuroendocrine skin cancer.  Treatment options are limited, and they are largely based on MCC’s similarity to other cancers, rather than original research. Many of these treatments have low efficacy and significant side effects, and the overall prognosis remains bleak.  In this thesis, I will propose a new therapeutic strategy for MCC based on chemical inhibition of protein arginine methyltransferase 5 (PRMT5).  PRMT5 inhibitors are already being tested in a variety of other solid and liquid tumors to good effect. Our data suggest that PRMT5 inhibitors may be effective in treating a specific subtype of MCC defined by a viral driver and wildtype p53.  Treatment inhibits growth in vitro and results in large changes in alternative splicing and more subtle changes in oxidative metabolism. Furthermore, we observe differential alternative splicing of the p53-regulator MDM4, suggesting a possible mechanism for the drug’s greater efficacy in p53-wildtype cell lines.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating a New Malaria Vaccine Design that uses a Blood Stage P. falciparum Chassis for Non-Blood Stage Antigen Presentation</title>
<link href="https://hdl.handle.net/1721.1/153463" rel="alternate"/>
<author>
<name>Parker, Shelbi Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/153463</id>
<updated>2024-02-09T03:47:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Creating a New Malaria Vaccine Design that uses a Blood Stage P. falciparum Chassis for Non-Blood Stage Antigen Presentation
Parker, Shelbi Nicole
Malaria is a global disease that affects millions annually and the complex life cycle of the Plasmodium species that cause malaria results in increasing drug resistance and poor vaccine efficacy. Current vaccine designs focus on a single stage in the parasite life cycle and antibody responses are inefficient in offering protection, leading to “malaria rebound” as a lack of immune response to multiple stages of the life cycle result in case numbers returning to their levels before intervention. In this work, we utilize a blood stage parasite to present infection and transmission stage antigens. Plasmids using the conditional translation repressor system TetR-DOZI were created, and transgenic parasites that express the scaffold protein eTRAMP4 fused to either CSP or P25 were generated. We assessed the transgenic parasites for growth defects, proper fusion length, and localization to the parasitophorous vacuolar membrane. We also removed parasites from host red blood cells and examined two purification methods in the pipeline of developing a pure, intact culture of transgenic parasites. The methods and results of this work set the stage for a new malaria vaccine design that has the potential to fill the gap of current vaccine technologies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trailer-on-flat-car service in the perspective of competition for freight traffic</title>
<link href="https://hdl.handle.net/1721.1/153441" rel="alternate"/>
<author>
<name>Davis, John Christy.</name>
</author>
<id>https://hdl.handle.net/1721.1/153441</id>
<updated>2026-02-06T05:16:59Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Trailer-on-flat-car service in the perspective of competition for freight traffic
Davis, John Christy.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1956; Bibliography: leaves 111-116.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving the Future of Long-Haul Trucking: Realizing the Potential of Battery Electric Vehicles through an Analysis of Financial and Environmental Impacts</title>
<link href="https://hdl.handle.net/1721.1/153407" rel="alternate"/>
<author>
<name>Chehrazi, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/153407</id>
<updated>2024-01-25T03:39:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Driving the Future of Long-Haul Trucking: Realizing the Potential of Battery Electric Vehicles through an Analysis of Financial and Environmental Impacts
Chehrazi, Natalie
This thesis examines the transition to battery electric vehicles (BEVs) for long-haul trucking, using system dynamics modeling, financial impact modeling, and environmental impact modeling, and looks across a broad range of possible future scenarios that could impact the viability of BEV use in long-haul trucks. System dynamics modeling, with causal loops, is used to identify key factors influencing adoption rates. Results show that battery capabilities, the total cost of ownership, and feedback loops are critical considerations in increasing BEV adoption. Environmental impact analysis demonstrates that transitioning to BEVs can lead to significant and immediate reductions in emissions. If the transition occurs now, with current development, there would be an immediate 37% reduction in GHG emissions and an 85% reduction in all direct emissions from air pollutants, not including SO2 emissions. If the medium or aggressive development scenarios outlined in this paper occur, there would be a 60% reduction in GHG emissions and a 90% reduction in all direct emissions from air pollutants, not including SO2 emissions.&#13;
&#13;
These reductions could be vital in addressing emissions in this sector and helping curb climate change. Payload impact analysis demonstrates that the additional battery weight in a BEV long-haul truck would not be an issue for 93% of long-haul trucks. Financial impact analysis indicates that if charging capabilities increase to 500kW or above, BEVs are a better investment across all economic scenarios over the years of ownership, driven by lower operating costs. If no further development in charging capability occurs, the economic benefits of transitioning are subject to market conditions. Regardless of charging station capability development, if the price of diesel fuel remains above US$3.65 per gallon, BEVs are the preferred investment. Additionally, comprehensive net present value (NPV) analysis is used to demonstrate whether BEV long-haul trucks are a good investment for both the trucking industry and partner companies depending on various economic and development speed scenarios.&#13;
&#13;
In current economic scenarios with no further development, BEV long-haul trucks are a good investment for both the trucking industry and partner companies, with net financial gains of $59K with a payback period of 5 years or $77K with a payback period of 4 years respectively. It is also significant to note that these calculations use transportation end consumer electricity prices and do not include subsidies or incentives. By sourcing energy differently and utilizing renewable energy sources, companies can substantially decrease operating costs, making the transition to BEVs even more financially viable than presented. With subsidies and incentives in place, the case for BEV long-haul trucks is further strengthened. The thesis also includes a specific analysis of the Tesla semi-truck with a fuel economy of 19.8MPGe. This Tesla semi-truck analysis revealed that regardless of charger development, the Tesla semi-truck would be a better investment than an ICE long-haul truck for both the trucking industry and partner companies.&#13;
&#13;
Additionally, the analysis in this thesis suggests that there are significant benefits to increasing charging capabilities to 500kW, which would reduce charging downtime from 4 hours to approximately 2 to 2.5 hours per full charge. Even with the significant downtime, such an increase in charging capabilities would make the BEV long-haul truck the better investment in all feasible projected economic scenarios. The thesis concludes that the case for BEV long-haul trucks is clear, and there is significant potential to accelerate and capitalize on the transition to BEVs in the long-haul trucking industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The development of an automatic curve-following mechanism</title>
<link href="https://hdl.handle.net/1721.1/153376" rel="alternate"/>
<author>
<name>Traver, Harold A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153376</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1933-01-01T00:00:00Z</published>
<summary type="text">The development of an automatic curve-following mechanism
Traver, Harold A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1933
</summary>
<dc:date>1933-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calibration of standard stars for planetary reflectivity studies.</title>
<link href="https://hdl.handle.net/1721.1/153373" rel="alternate"/>
<author>
<name>Elias, Jonathan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/153373</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Calibration of standard stars for planetary reflectivity studies.
Elias, Jonathan H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Earth and Planetary Science, 1972; Bibliography: leaves 89-93.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vibrational characteristics of building frames</title>
<link href="https://hdl.handle.net/1721.1/153372" rel="alternate"/>
<author>
<name>Haba, Mohamed.</name>
</author>
<author>
<name>Dloomy, Naim Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/153372</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Vibrational characteristics of building frames
Haba, Mohamed.; Dloomy, Naim Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1946; Bibliography: leaf 74.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of the scattered-wave cluster method to zinc sulfide.</title>
<link href="https://hdl.handle.net/1721.1/153371" rel="alternate"/>
<author>
<name>Kim, Hwasoo Park.</name>
</author>
<id>https://hdl.handle.net/1721.1/153371</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Application of the scattered-wave cluster method to zinc sulfide.
Kim, Hwasoo Park.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies on the biosynthesis and structure of the acidic brain protein.</title>
<link href="https://hdl.handle.net/1721.1/153369" rel="alternate"/>
<author>
<name>King, William Francis.</name>
</author>
<id>https://hdl.handle.net/1721.1/153369</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Studies on the biosynthesis and structure of the acidic brain protein.
King, William Francis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Biology, 1972; Bibliography: leaves 32-33.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods of quantitative analysis for clinical phonoangiography.</title>
<link href="https://hdl.handle.net/1721.1/153367" rel="alternate"/>
<author>
<name>Klitzner, Thomas Samuel.</name>
</author>
<id>https://hdl.handle.net/1721.1/153367</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Methods of quantitative analysis for clinical phonoangiography.
Klitzner, Thomas Samuel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1972; Bibliography: leaves 66-67.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Actor machine architecture.</title>
<link href="https://hdl.handle.net/1721.1/153364" rel="alternate"/>
<author>
<name>Steiger, Richard John.</name>
</author>
<id>https://hdl.handle.net/1721.1/153364</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Actor machine architecture.
Steiger, Richard John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1974; Bibliography: leaves 183-184.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A control system for isolated muscle experiments.</title>
<link href="https://hdl.handle.net/1721.1/153363" rel="alternate"/>
<author>
<name>Kleinbaum, Jerry Israel.</name>
</author>
<id>https://hdl.handle.net/1721.1/153363</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A control system for isolated muscle experiments.
Kleinbaum, Jerry Israel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Bibliography: leaf 93.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-noise broadband I. F. amplifiers for radiometric receivers.</title>
<link href="https://hdl.handle.net/1721.1/153362" rel="alternate"/>
<author>
<name>Kjartansson, Vilhjalmur Thor.</name>
</author>
<id>https://hdl.handle.net/1721.1/153362</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Low-noise broadband I. F. amplifiers for radiometric receivers.
Kjartansson, Vilhjalmur Thor.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A simulated forecast of the joint frequency distribution of any four-magazine campaign using readership survey data.</title>
<link href="https://hdl.handle.net/1721.1/153361" rel="alternate"/>
<author>
<name>Klapfish, Maurice S.</name>
</author>
<id>https://hdl.handle.net/1721.1/153361</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A simulated forecast of the joint frequency distribution of any four-magazine campaign using readership survey data.
Klapfish, Maurice S.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maximizing communications in an R and D laboratory through computerized relative allocation of facilities.</title>
<link href="https://hdl.handle.net/1721.1/153360" rel="alternate"/>
<author>
<name>Klurfeld, Laurence Franklin.</name>
</author>
<id>https://hdl.handle.net/1721.1/153360</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Maximizing communications in an R and D laboratory through computerized relative allocation of facilities.
Klurfeld, Laurence Franklin.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Bibliography: leaf 44.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The lease - purchase phenomenon in the capital goods market.</title>
<link href="https://hdl.handle.net/1721.1/153359" rel="alternate"/>
<author>
<name>Kirby, Marvin Goodloe.</name>
</author>
<id>https://hdl.handle.net/1721.1/153359</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The lease - purchase phenomenon in the capital goods market.
Kirby, Marvin Goodloe.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Bibliography: leaves 110-111.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mathematical model for screening storm water control alternatives.</title>
<link href="https://hdl.handle.net/1721.1/153358" rel="alternate"/>
<author>
<name>Kirshen, Paul H.</name>
</author>
<id>https://hdl.handle.net/1721.1/153358</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Mathematical model for screening storm water control alternatives.
Kirshen, Paul H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1972; Bibliography: leaves 120-123.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating queries concurrently in a shared database system</title>
<link href="https://hdl.handle.net/1721.1/153354" rel="alternate"/>
<author>
<name>Danberg, Seymour A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153354</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Evaluating queries concurrently in a shared database system
Danberg, Seymour A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 87-89.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of technology trajectories for industrial applications of the indirect dimensional acquisition industry</title>
<link href="https://hdl.handle.net/1721.1/153353" rel="alternate"/>
<author>
<name>Indest, William L. (William Logan), 1963-</name>
</author>
<id>https://hdl.handle.net/1721.1/153353</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1999-01-01T00:00:00Z</published>
<summary type="text">An analysis of technology trajectories for industrial applications of the indirect dimensional acquisition industry
Indest, William L. (William Logan), 1963-
Thesis: S.M.M.O.T., Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 1999; Vita.; Includes bibliographical references.
</summary>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Subglacial Hydrology in the Himalayas</title>
<link href="https://hdl.handle.net/1721.1/153347" rel="alternate"/>
<author>
<name>Narayanan, Neosha Gupta</name>
</author>
<id>https://hdl.handle.net/1721.1/153347</id>
<updated>2024-01-17T03:31:37Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling Subglacial Hydrology in the Himalayas
Narayanan, Neosha Gupta
The snowpack and glaciers of the Himalaya-Karakoram range feed several major river systems in Asia which provide water to over a billion people. Glacial retreat, glacial lake outburst flooding (GLOFs), surge behavior, and glacial ice mass balance are all likely strongly affected by subglacial hydrology. Unfortunately, little is known about Himalayan glaciers due to their remoteness and the danger of doing field work there. Recent advances in subglacial hydrological modeling may allow us to shed more light on subglacial processes that lead to changes in ice mass balance and glacial lake flooding. In this master's thesis, we present the first application of the SHAKTI subglacial hydrology model to a Himalayan glacier. We model the subglacial drainage network of Shishper Glacier, located in Gilgit-Baltistan, Pakistan, to understand its seasonal evolution and history of surges and GLOFs. Our results show that Shishper's subglacial system follows a similar seasonal pattern to past observed and modeled subglacial systems. We find that a central channel persists through the winter and serves as the basis for the subglacial drainage system throughout the melt season. We also investigate the 2017-2019 surge of Shishper Glacier and find that subglacial hydrology, while likely an important component of surging, cannot provide a standalone explanation for surges. This work serves as a nucleus for future subglacial hydrology modeling work in the Himalayas and provides a new framework for studying the effects of climate change on glacier dynamics, water availability, and glacier-related hazards in the Himalaya-Karakoram (H-K) region.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-throughput Photodegradation of Plastics</title>
<link href="https://hdl.handle.net/1721.1/153345" rel="alternate"/>
<author>
<name>Frankson, Alexis</name>
</author>
<id>https://hdl.handle.net/1721.1/153345</id>
<updated>2024-01-17T03:38:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High-throughput Photodegradation of Plastics
Frankson, Alexis
Plastic is a critical resource in the modern world, but an emphasis on durability in design coupled with the widespread use of plastic products has led to significant accumulation of plastic waste in the environment. It is imperative that new chemistries are discovered to produce polymers with the correct properties to meet consumer demands, but with a finite and well understood lifetime in the environment. This thesis aims to evaluate the rate of abiotic degradation of different plastics using a high-throughput photo-reactor, to better understand the rate at which plastic will degrade due to ultraviolet light exposure based on polymer type and properties. The research findings suggest that the photo-degradability of polymers is impacted by the presence of chromophores and the presence of impurities from manufacturing. The experiments were performed on a small range of the most common consumer plastics, but the methodology developed can be used to design more efficient degradation experiments. Continued research into the factors impacting degradation in laboratory settings and in the natural environment is needed to promote the development of more environmentally sustainable polymers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Looking for Pirdoudan: The Past, Present, and Future of Mining in Armenia</title>
<link href="https://hdl.handle.net/1721.1/153344" rel="alternate"/>
<author>
<name>Vosgueritchian, Sarine Gacia</name>
</author>
<id>https://hdl.handle.net/1721.1/153344</id>
<updated>2024-01-17T03:43:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Looking for Pirdoudan: The Past, Present, and Future of Mining in Armenia
Vosgueritchian, Sarine Gacia
In our anthropogenic age, data and memory accumulate and decay faster than we can recall. Depiction of history is usually political and hierarchical, emphasizing chosen moments to build narratives, but time has shown us how that can lead to inaccurate accounts of the past. Historians and researchers constantly undo these narratives by consulting different forms of memory from collective to individual, using physical and virtual artifacts. With the accelerating global climate crisis, it is imperative to project further into the future while remaining deeply rooted in the histories and futures of the past. To do so, we need to understand the processes of change that have led to the construction of our current reality. But what happens if the archive is constantly deteriorating?&#13;
&#13;
Set in what is known today as the mining town of Kajaran, Looking for Pirdoudan uses the medium of film and textual essay to piece together and reinterpret the processes of change which have led to the disappearance of Mount Pirdoudan after large deposits of copper and molybdenum were discovered in the 19th century. The extraction of the geological layers of Pirdoudan has effectively erased millennia of memory retained by the earth. While geological studies have allowed us to date these layers and put meaning to the accumulations, scattered archival records and media are today’s most readily available material that allow us to piece together the narratives of our past and present moment. That said, archives and data don’t tell a story on their own. A seeker from 2086 takes on the task of weaving an alternative history of Pirdoudan. Critical fabulation is employed, not only to visualize the gaps in our knowledge, but also to project a post-mine future of Kajaran based on a deep understanding and interpretation of the past. Kajaran is rebranded as an ideal ecological city attempting to repair its extractive legacy, but even with the best intentions, driven by technological advancements which are meant to reverse the anthropogenic footprint on the land, a new cycle of destruction begins.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating and Optimizing Throughput in an Aluminum Rolling Mill Using Capacity Modeling and Optimization Techniques</title>
<link href="https://hdl.handle.net/1721.1/153342" rel="alternate"/>
<author>
<name>Hungerford, Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/153342</id>
<updated>2024-01-17T03:34:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Estimating and Optimizing Throughput in an Aluminum Rolling Mill Using Capacity Modeling and Optimization Techniques
Hungerford, Scott
The aluminum industry has sustained continuous growth since 1975 and expects to continue this trend with the increased popularity of electric vehicles. With these forecasts in place and the current market conditions, Commonwealth Rolled Products (CRP) is in a unique position to meet the increased market demand and supply auto and industrial product manufacturers with aluminum rolled products. To meet this increased demand, CRP must first understand the full complexity of its operations and confidently estimate the future volumetric capacity it is able to sell.&#13;
&#13;
The objective of the internship program with CRP is to provide a quantitative analysis on the current state and future state throughput of the complex continuous line (CCL). The analysis includes a heuristic model to determine the throughput and identify the key performance indicators (KPIs) with the greatest impact on throughput improvement. This model will recommend a roadmap to achieve a sustainable operations plan and sales forecast that will enable increased manufacturing capabilities.&#13;
&#13;
In addition to the heuristic model, a mixed integer program (MIP) will be developed to optimally schedule the product mix to reduce production hours lost to product changeover time. The scheduling of a CCL is considered a single machine scheduling problem (SMSP), and the introduction of transition coils is considered a sequence-dependent setup times (SDSTs) problem. This last portion of the paper will focus on the MIP application to optimally schedule the CCL to reduce transition coils.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An implantable piezoelectric ultrasound stimulator (ImPULS) for selective deep brain activation</title>
<link href="https://hdl.handle.net/1721.1/153341" rel="alternate"/>
<author>
<name>Hou, Jason F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153341</id>
<updated>2024-01-17T03:18:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An implantable piezoelectric ultrasound stimulator (ImPULS) for selective deep brain activation
Hou, Jason F.
Precise neurostimulation has potential to revolutionize therapies for neurological disorders. However, current neural interfaces targeting the deep brain face significant limitations in spatial resolution and potency due to tissue attenuation. We developed an implantable piezoelectric ultrasound stimulator (ImPULS) that generates an ultrasonic focal point pressure of 100 kPa and can non-genetically modulate the activity of neurons. We demonstrated that ImPULS can i) excite neurons in a mouse hippocampal slice ex vivo, ii) activate cells in the hippocampus of an anesthetized mouse to induce expression of activity-dependent gene c-Fos, and iii) stimulate dopaminergic neurons in the substantia nigra pars compacta (SNc) to elicit time-locked modulation of nigrostriatal dopamine release. This work introduces a novel, non-genetic ultrasound platform for spatially localized neural stimulation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effects of Pre-Training and Fine-Tuning CLIP with Domain-Specific Data</title>
<link href="https://hdl.handle.net/1721.1/153338" rel="alternate"/>
<author>
<name>Wang, Jialan</name>
</author>
<id>https://hdl.handle.net/1721.1/153338</id>
<updated>2024-01-17T03:51:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Effects of Pre-Training and Fine-Tuning CLIP with Domain-Specific Data
Wang, Jialan
Mercari is an online two-sided marketplace that allows users to both sell and purchase items. To create the most efficient item listing process for the sellers and bring the most relevant items to the buyers, Mercari utilizes a pre-trained model called Contrastive Language-Image Pre-training (CLIP), famed for its exceptional zero-shot performance, to support the auto-filling feature for item listing and similar items recommendation. As this model is pre-trained on a general dataset gathered from the Internet, which likely does not have the same data distribution as Mercari’s data and results in non-optimal performance, we would like to explore the possibility of pre-training or fine-tuning CLIP with Mercari’s data to improve its performance within Mercari’s data domain. We explore various training strategies to understand the effects of each and determine the most effective strategy. Our best-performing and most space-efficient model achieves a brand prediction top-1 accuracy of 89.34% with 49.89% coverage and a category prediction accuracy of 78.02% with 69.62% coverage, significantly outperforming the current zero-shot CLIP in brand prediction and marginally in category prediction. Moreover, it achieves this with an embedding size that is half of that of the original CLIP.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emergence: Speculative Ecologies  &amp; Evolution in Art</title>
<link href="https://hdl.handle.net/1721.1/153336" rel="alternate"/>
<author>
<name>Medina, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/153336</id>
<updated>2024-01-17T03:41:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Emergence: Speculative Ecologies  &amp; Evolution in Art
Medina, Alejandro
This thesis explores emergence as a focal point within my art practice. Emergence is the phenomenon through which complex systems exhibit properties and behaviors that are not directly attributable to any of the individual components within a system. Instead, these properties emerge through the (often entangled) relationships and interactions between individual, and often heterogeneous, components of a system. By orienting my work towards emergence, I propose a necessary shift towards an ecological and systems-based understanding of the world, one in which artworks can begin to be imagined in networks of relations and interdependence, doing so as a means of probing new ways of Being in an increasingly complex and entangled world. The thesis presents two frameworks for further exploring emergence, including an understanding of the exhibition as a “speculative ecology” and the different roles that instructions, rule-based systems and contracts could take on in staging evolutionary processes. The ecological framing of the exhibition emphasizes a renegotiation of agency amongst the exhibition’s components, open-over-closed systems and a focus on the integration of life cycles into the work; the use of instructions, rule-based systems and contracts enables the translation and embedding of evolutionary processes as part of the work's conceptualization and execution, aiming to inscribe change and instability as a core element in the work. The thesis draws on references from the fields of art and computation to expand upon historical lineages of thinking, in relation to several works that I have developed during my time at MIT’s program in Art, Culture and Technology (ACT).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Manufacturing Performance to Plan with Predictive Analytics</title>
<link href="https://hdl.handle.net/1721.1/153332" rel="alternate"/>
<author>
<name>Weisberg, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/153332</id>
<updated>2024-01-17T03:11:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing Manufacturing Performance to Plan with Predictive Analytics
Weisberg, Joshua
Modern manufacturing requires meticulous planning to coordinate tightly wound supply chain activities in the face of disruption. This is especially true for automotive companies, which produce complex products at high rates. Their production planning process involves estimating the demand for various vehicles, determining the most profitable mix of products to meet that demand, and then selecting the production parameters which provide maximum efficiency. All this is done while balancing the short-term demands of a volatile market with the long-term implications of capital equipment purchases, staffing changes, and supplier management. From the time of each decision to the day of production, demand may change, supply may be disrupted, and manufacturing performance may fall short of expectations. These uncertainties lead to high error in production plans, which propagates to suppliers, other areas of the business, and future periods. Changes harm stability, efficiency, and thus profitability for all stakeholders. This study shows how predictions of performance can be used to revise a plan, using predictive analytics models trained on the characteristics of the plan. To this end, 480+ features are developed to describe plan characteristics and recent manufacturing performance. Several algorithms are utilized to evaluate the relationship between these features and manufacturing performance to plan, measured by ratios of actual production rate to that planned, and hours actually worked to planned. Results of the best performing features, algorithms, and modeling architectures on out-of-sample manufacturing days in the Post-Covid Era showed Median Absolute Error improvements of 40%-60% over a 3-month lead time and 10%-40% over a 1-month lead time across several production lines. These reductions in error can improve stability such that better decisions can be made.
Interpretation of the predictive models can lead to improvements in the factory’s ability to meet demand. Beneficiaries include customers looking to purchase and receive their desired products, employees needing more day-to-day consistency, and suppliers aiming to maintain a healthy business. The only certainty in operations is uncertainty, making it critical for operations companies to improve their understanding and estimation of their performance to plan.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CO₂ and public health impacts of US residential heating electrification</title>
<link href="https://hdl.handle.net/1721.1/153330" rel="alternate"/>
<author>
<name>Grobler, Carla</name>
</author>
<id>https://hdl.handle.net/1721.1/153330</id>
<updated>2024-01-17T03:20:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">CO₂ and public health impacts of US residential heating electrification
Grobler, Carla
US residential combustion heating is currently estimated to lead to ~10,000 premature mortalities annually due to degraded air quality. Replacement of this combustion heating with electric heating is expected to reduce these impacts by shifting emissions away from population centers to electric generators. However, these benefits have not been assessed. This thesis quantifies the health impacts of replacing residential combustion heating with electric heating in the US due to changes in air quality. In addition, we calculate how such a change would affect fossil CO₂ emissions. We find that 99% of the premature mortalities currently attributable to US residential fuel combustion can be prevented through the replacement of combustion with electric air-source heat pumps, with net benefits in every US county. Wood-burning systems alone account for 84% of this benefit, particularly in densely populated areas. However, the reduction in air pollution does not necessarily translate into CO₂ reductions, as the study highlights variations in emissions based on location and electricity grid carbon intensity. Future research will explore different assumptions regarding CO₂ emissions. The thesis concludes that electrification of residential heating offers substantial air quality benefits and potential CO₂ reductions in warmer coastal regions and areas with low grid carbon intensity. However, investment in high-efficiency solutions and further grid decarbonization may be necessary for climate benefits nationwide.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Needles in a Haystack: Perceptions of Deservingness on the Implementation of Harm Reduction Programs in the American Midwest</title>
<link href="https://hdl.handle.net/1721.1/153329" rel="alternate"/>
<author>
<name>David, Lauren A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153329</id>
<updated>2024-01-17T03:41:51Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Needles in a Haystack: Perceptions of Deservingness on the Implementation of Harm Reduction Programs in the American Midwest
David, Lauren A.
The association of opioid abuse with rural, white, working-class individuals ultimately generated sympathy, rather than hatred, in the general political zeitgeist. However, some cases deviated from this pattern by adhering to the common cycle and villainizing individuals with substance use disorder involving opioids, and in no state was this more prevalent than in Indiana and the circumstances of the 2014 Scott County HIV Crisis. First-person interviews and comparative analysis between Ohio, Kentucky, and Indiana revealed that negative moral evaluations of individual behavior contribute to a reticence to implement harm reduction programs, often due to the influence of in-group isolation and the social phenomenon known as “not-in-my-backyard.” Indiana is found to be an outlier even among the Midwestern states in its negative response to opioid epidemic victims due to the continued legacy of three Indiana-specific historical events and phenomena: the rise and legacy of the Temperance Movement; the development of the Indiana Klan – a subset of the KKK; and the lasting influence of moral evangelism, manifesting in the careers of politicians like Mike Pence. This thesis demonstrates that while Americans, in general, viewed victims of the opioid epidemic as more sympathetic than victims of previous substance use epidemics, in part due to the blame placed by pharmaceutical and medical sectors, citizens of Indiana displayed less sympathy, which helps to explain the slow and minimal response to the Scott County HIV Crisis.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planogram Optimization in Support of Small Format Retail Inventory Management</title>
<link href="https://hdl.handle.net/1721.1/153328" rel="alternate"/>
<author>
<name>Kurtz, Miles</name>
</author>
<id>https://hdl.handle.net/1721.1/153328</id>
<updated>2024-01-17T03:26:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Planogram Optimization in Support of Small Format Retail Inventory Management
Kurtz, Miles
Target is in the midst of building its "stores-as-hubs" capabilities, relying on stores to support in-store shopping and serve as ecommerce fulfillment hubs. To execute this strategy, Target has further expanded its footprint into urban and dense suburban geographies. The stores in these areas, referred to as Small Format stores, have less than half the square footage of a traditional Target location and carry an order of magnitude fewer SKUs. The dynamics of Target's urban retailing, which are characterized for the first time in this study, require specific inventory strategies to maintain service levels with a smaller product assortment and fewer customer choices. &#13;
&#13;
One metric to measure inventory management is "Fit", which considers an item's risk of generating backroom inventory in stores and the days of expected demand covered. Excess inventory decreases worker productivity, while insufficient inventory is associated with stockouts and lost sales. A mixed-integer linear program is developed to suggest the optimal shelf capacity for each product to maximize Fit. The decision model suggests sacrificing space allocated to high cube items to display more units of smaller items, and provides strong evidence for localizing Small Format assortments. A pilot of 10 test display units (planograms) was set and the effects measured via Synthetic Control Design (SCD). This research is part of a multi-year partnership between Target and MIT and is the first implementation of an in-store intervention.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Zero Defect Manufacturing in Multi-Stage Production Systems</title>
<link href="https://hdl.handle.net/1721.1/153327" rel="alternate"/>
<author>
<name>Lyberger, Taylor</name>
</author>
<id>https://hdl.handle.net/1721.1/153327</id>
<updated>2024-01-17T03:11:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Zero Defect Manufacturing in Multi-Stage Production Systems
Lyberger, Taylor
Implementation of quality improvement methods in multi-stage production systems is essential to manage and quickly eliminate manufacturing quality defects. Many companies tend to prioritize production speed rather than overall throughput, and are hypothesized to be below the optimal level of investment in quality systems when taking into account the full cost of bad quality. While traditional quality management techniques such as six sigma and process control are still valuable and worthwhile tools, recent advancements in technology offer manufacturers the opportunity to augment this tool set with the use of IoT, big data, and advanced analytics. &#13;
&#13;
This thesis addresses the problem of how to build a modern quality manufacturing system that continuously reduces scrap and defect rates in the production process. The study adapts a zero defect manufacturing framework and applies it to the automotive manufacturing industry. Five key activities, including data collection, data integration, data analytics, process control, and defect mitigation are all found to be essential components in the development of a robust quality improvement infrastructure. The process of applying these framework components in the context of an automotive manufacturer’s production lines sheds light on both technical and operational challenges and benefits of the quality system enhancement process. Other manufacturers may find this analysis to be a relevant use case and template when constructing or making improvements to their own quality management architecture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytical Graphical Approach for Predicting Ground Conditions in TBM-based Tunneling Construction</title>
<link href="https://hdl.handle.net/1721.1/153322" rel="alternate"/>
<author>
<name>Goncalves Klink, Beatriz</name>
</author>
<id>https://hdl.handle.net/1721.1/153322</id>
<updated>2024-01-17T03:37:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analytical Graphical Approach for Predicting Ground Conditions in TBM-based Tunneling Construction
Goncalves Klink, Beatriz
The present master's thesis addresses the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms to predict geology based on Tunnel Boring Machine (TBM) data. The use of mechanized tunneling has become frequent over the last decade, and TBM performance is critical for project management and safety. Numerical simulation methods have become prevalent in predicting TBM performance metrics, and the use of AI/ML techniques for predictive applications using TBM-generated data has become ubiquitous. The current research aims to propose an exploratory look into the correlation between specific TBM parameters and ground conditions. The methodology seeks to classify rings based on three main ground classes: rock, soil, and mixed, through the observation of clear patterns, found to be representative of these ground classes, which are demonstrated. A techno-economic assessment of the current use of AI/ML tools for geology prediction in TBM-based tunneling construction is also presented, analyzing both the potential and shortcomings of the technology. For the purpose of the study, the Porto Metro project (Portugal) is introduced and used as a case study for the proposed methodology. As the mining and drilling market is projected to almost double from 2020-2030, and with the increasing use of TBMs, improving ground condition prediction is paramount to the advancement of tunneling automation efforts. The present thesis aims to further develop the field and open dialogue on the use and effectiveness of using purely AI/ML modelling methods for this application.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Objective forecasting in East Anglia by use of weather types</title>
<link href="https://hdl.handle.net/1721.1/153192" rel="alternate"/>
<author>
<name>Hunsaker, Leon M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153192</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1953-01-01T00:00:00Z</published>
<summary type="text">Objective forecasting in East Anglia by use of weather types
Hunsaker, Leon M.
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1953; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A ground water problem in the North Shore area, Nova Scotia</title>
<link href="https://hdl.handle.net/1721.1/153190" rel="alternate"/>
<author>
<name>Young, Edward J.
            (Edward Joseph),
            1923-</name>
</author>
<id>https://hdl.handle.net/1721.1/153190</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1950-01-01T00:00:00Z</published>
<summary type="text">A ground water problem in the North Shore area, Nova Scotia
Young, Edward J.
            (Edward Joseph),
            1923-
Thesis: M.S., Massachusetts Institute of Technology, Department of Geology, 1950; Bibliography: leaves 54-55.
</summary>
<dc:date>1950-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of the epoch state filter.</title>
<link href="https://hdl.handle.net/1721.1/153188" rel="alternate"/>
<author>
<name>Edwards, Joan Annette.</name>
</author>
<id>https://hdl.handle.net/1721.1/153188</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Investigation of the epoch state filter.
Edwards, Joan Annette.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A least squares convergence criterion for nonequilibrium boundary layer solutions.</title>
<link href="https://hdl.handle.net/1721.1/153187" rel="alternate"/>
<author>
<name>Elgin, James Brinson.</name>
</author>
<id>https://hdl.handle.net/1721.1/153187</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A least squares convergence criterion for nonequilibrium boundary layer solutions.
Elgin, James Brinson.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>K-shell ionization in collisions of heavy atoms.</title>
<link href="https://hdl.handle.net/1721.1/153183" rel="alternate"/>
<author>
<name>Eichler, David Steven.</name>
</author>
<id>https://hdl.handle.net/1721.1/153183</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">K-shell ionization in collisions of heavy atoms.
Eichler, David Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The influence of inclusion content on fatigue crack propagation in aluminum alloys.</title>
<link href="https://hdl.handle.net/1721.1/153182" rel="alternate"/>
<author>
<name>El-Soudani, Sami Mahmoud.</name>
</author>
<id>https://hdl.handle.net/1721.1/153182</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The influence of inclusion content on fatigue crack propagation in aluminum alloys.
El-Soudani, Sami Mahmoud.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Vita.; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for electronically processing neurological data.</title>
<link href="https://hdl.handle.net/1721.1/153177" rel="alternate"/>
<author>
<name>Eckerle, Joseph Stephen.</name>
</author>
<id>https://hdl.handle.net/1721.1/153177</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Methods for electronically processing neurological data.
Eckerle, Joseph Stephen.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1972; Bibliography: leaf 45.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of debt indexation on the value of the firm</title>
<link href="https://hdl.handle.net/1721.1/153176" rel="alternate"/>
<author>
<name>Hollings, Peter F.</name>
</author>
<author>
<name>Raff, George Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/153176</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">The effects of debt indexation on the value of the firm
Hollings, Peter F.; Raff, George Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaves 86-87.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the design, operation and economics of freight containers</title>
<link href="https://hdl.handle.net/1721.1/153122" rel="alternate"/>
<author>
<name>Lappin, Walter William.</name>
</author>
<author>
<name>Westerfeld, Stuart Clarence.</name>
</author>
<id>https://hdl.handle.net/1721.1/153122</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A study of the design, operation and economics of freight containers
Lappin, Walter William.; Westerfeld, Stuart Clarence.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1932; Includes bibliographical references (leaves 209-218).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A beach resort: club and apartments</title>
<link href="https://hdl.handle.net/1721.1/153117" rel="alternate"/>
<author>
<name>Marshall, Thomas F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153117</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">A beach resort: club and apartments
Marshall, Thomas F.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1952; Bibliography: leaves 47-48.
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Life insurance home office employees and their unionization</title>
<link href="https://hdl.handle.net/1721.1/153116" rel="alternate"/>
<author>
<name>Cogswell, Dean Edmund.</name>
</author>
<id>https://hdl.handle.net/1721.1/153116</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">Life insurance home office employees and their unionization
Cogswell, Dean Edmund.
Thesis: M.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1951; Bibliography: leaves 103-105.
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new urban center in Cambridge</title>
<link href="https://hdl.handle.net/1721.1/153113" rel="alternate"/>
<author>
<name>Chalmers, Richard K.</name>
</author>
<author>
<name>Hopper, Thomas P.</name>
</author>
<author>
<name>Kozima, Masashi.</name>
</author>
<author>
<name>Rousos, William B.</name>
</author>
<author>
<name>Vahrenkamp, Donald F.</name>
</author>
<author>
<name>Wulff, Bernard J.</name>
</author>
<id>https://hdl.handle.net/1721.1/153113</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">A new urban center in Cambridge
Chalmers, Richard K.; Hopper, Thomas P.; Kozima, Masashi.; Rousos, William B.; Vahrenkamp, Donald F.; Wulff, Bernard J.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1964; Includes bibliographies.
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The United States foreign service, a personnel model.</title>
<link href="https://hdl.handle.net/1721.1/153110" rel="alternate"/>
<author>
<name>Emmons, Charles Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/153110</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The United States foreign service, a personnel model.
Emmons, Charles Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1972; Bibliography: leaves 113-114.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inter-union work assignment disputes under the railway labor act.</title>
<link href="https://hdl.handle.net/1721.1/153108" rel="alternate"/>
<author>
<name>Swartz, William John.</name>
</author>
<id>https://hdl.handle.net/1721.1/153108</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1967-01-01T00:00:00Z</published>
<summary type="text">Inter-union work assignment disputes under the railway labor act.
Swartz, William John.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1967; Bibliography: leaf 80.
</summary>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolution of a manufacturing strategy : Apple Computer's Fremont factory</title>
<link href="https://hdl.handle.net/1721.1/153107" rel="alternate"/>
<author>
<name>Gee, Bruce R.</name>
</author>
<id>https://hdl.handle.net/1721.1/153107</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Evolution of a manufacturing strategy : Apple Computer's Fremont factory
Gee, Bruce R.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 63-67.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of ship speeds along prescribed course under uncertainty</title>
<link href="https://hdl.handle.net/1721.1/153106" rel="alternate"/>
<author>
<name>Foo, Cedric Chee-Keng.</name>
</author>
<id>https://hdl.handle.net/1721.1/153106</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1985-01-01T00:00:00Z</published>
<summary type="text">Optimization of ship speeds along prescribed course under uncertainty
Foo, Cedric Chee-Keng.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1985; Includes bibliographical references.
</summary>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat of formation of some ferro-calcic singulo-silicates</title>
<link href="https://hdl.handle.net/1721.1/153105" rel="alternate"/>
<author>
<name>Wen, Ching Yu, 1881-</name>
</author>
<id>https://hdl.handle.net/1721.1/153105</id>
<updated>2025-01-18T02:15:48Z</updated>
<published>1908-01-01T00:00:00Z</published>
<summary type="text">Heat of formation of some ferro-calcic singulo-silicates
Wen, Ching Yu, 1881-
Thesis: M.S., Massachusetts Institute of Technology, Dept. of Mining Engineering and Metallurgy, 1908; MIT Institute Archives copy has the following paper bound with thesis: Design of plant for smelting and converting a sulphide copper ore, by C.Y. Wen. 1909. (29 leaves, [1] leaf of plates : ill.; 27 cm.).; Includes bibliographical references.
</summary>
<dc:date>1908-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observations of the Upper Ocean from Autonomous Platforms during the Passage of Extratropical Cyclone Epsilon (2020)</title>
<link href="https://hdl.handle.net/1721.1/153102" rel="alternate"/>
<author>
<name>Zimmerman, Michael T.</name>
</author>
<id>https://hdl.handle.net/1721.1/153102</id>
<updated>2023-12-01T03:23:43Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Observations of the Upper Ocean from Autonomous Platforms during the Passage of Extratropical Cyclone Epsilon (2020)
Zimmerman, Michael T.
Hurricane Epsilon (2020) was a late-season, category-3 tropical cyclone that underwent extratropical transition and became Extratropical Cyclone Epsilon on 26 October. The upper ocean response to the passage of the storm was observed by three types of autonomous platforms: the eXpendable Spar buoy, the Air-Launched Autonomous Micro Observer profiling float, and two Seagliders. Taken together, this array enabled the rare collection of contemporaneous observations of the upper ocean, air-sea interface, and atmospheric boundary layer before, during, and after the passage of the storm. The evidence presented highlights how Extratropical Cyclone Epsilon broke down the residual North Atlantic summer stratification regime and accelerated the shift to the period of prolonged ocean cooling associated with winter. The significance of the synergistic capabilities of the array is two-fold: 1) comparing observations of the same parameters, taken from different platforms, enables a comprehensive approach to better understanding how storm-induced momentum, sensible heat, and moisture fluxes input kinetic and near-inertial energy into the ocean and thereby alter upper ocean structure; and 2) future, targeted deployments of similarly capable observational arrays will reduce the uncertainty of tropical and extratropical cyclone intensity forecasts by facilitating the assimilation of real-time subsurface ocean data into coupled numerical prediction models.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to be Satisfied with Less-Than-Perfect Finish</title>
<link href="https://hdl.handle.net/1721.1/153101" rel="alternate"/>
<author>
<name>Park, Hyun Woo</name>
</author>
<id>https://hdl.handle.net/1721.1/153101</id>
<updated>2023-12-01T03:07:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How to be Satisfied with Less-Than-Perfect Finish
Park, Hyun Woo
Contemporary (Western) society runs on an ideology of projected continuous growth. Capital feeds that growth by placing virtue on the constant, focused effort of production, followed by consumption. Creative industries such as art and design are no exception. But in recent years it has become increasingly clear that continued economic stability will not be possible, at least under the same mode of operation in making things. It is more relevant than ever to look for new approaches to the practice of creative making. How can one engage in creative making in the precarious world of the current era?&#13;
&#13;
In this paper, I navigate through various activities that I performed while at MIT and weave through them in a methodology of embracing contingency, precarity, and friction to demonstrate a novel way of creative making practice with materiality and material agency becoming an active player within the practice.&#13;
&#13;
I reevaluate preconceived notions of material-based art and design, as well as the common practice of relentlessly producing novel objects while neglecting their object or material agency. The methodology of reappropriating materials and their fabrication unfolds in an exploratory manner, often through precarious, ad-hoc, and even seemingly haphazard approaches.&#13;
&#13;
Additionally, this paper takes the form of a logbook and should serve as a reference for my future self and for art and design practitioners alike. I propose that the questions and trials it presents may help anyone hoping to escape the immobilization that comes from feeling that their own practice, the very practice they have carried on, is going nowhere.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Tools and Design: Improving Participation in Policymaking</title>
<link href="https://hdl.handle.net/1721.1/153100" rel="alternate"/>
<author>
<name>Jeong, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/153100</id>
<updated>2023-12-01T03:27:00Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Digital Tools and Design: Improving Participation in Policymaking
Jeong, Sarah
This thesis examines how digital tools and design principles can be used to improve public participation in policymaking. I begin by identifying the problem that government consultations often fail to engage the public in policymaking because of their inaccessibility. I then explore ways to make government consultations more accessible and engaging, taking findings from: a literature review; interviews with policy practitioners; and case studies of real-world consultations that were effective in engaging the public. I apply these learnings to design and conduct an online survey as an alternative to the typical form of government consultation, using a recent New Zealand consultation on recycling as my comparator. The thesis evaluates the results of my survey and concludes with implications for incorporating digital tools and design principles into the consultation process.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hedging a Falling Knife: Investing Through the Post Covid-19 Dallas-Fort Worth Housing Correction Utilizing Real Options Strategies</title>
<link href="https://hdl.handle.net/1721.1/153099" rel="alternate"/>
<author>
<name>Gietema III, William Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/153099</id>
<updated>2023-12-01T03:35:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hedging a Falling Knife: Investing Through the Post Covid-19 Dallas-Fort Worth Housing Correction Utilizing Real Options Strategies
Gietema III, William Alexander
Historically and in past housing cycles, the Dallas-Fort Worth (DFW) housing market has maintained remarkable price stability relative to the broader U.S. housing market. Consistent population and employment growth, combined with abundant developable land, have created the ideal environment for developers and homebuilders to achieve the stable production of new single-family housing. In the face of rapid population growth, the high elasticity of housing supply in DFW has enabled the region to maintain housing affordability relative to other major U.S. housing markets.&#13;
&#13;
The Covid-19 pandemic was the second “once in a century” event to occur in the 21st century, the other being the 2008 Global Financial Crisis. Lockdowns, work from home, low interest rates, and inflation characterized the supply and demand shocks that followed Covid-19 and produced a rapid escalation of home prices previously uncharacteristic of DFW fundamentals. &#13;
&#13;
This thesis analyzes the impact and sustainability of Covid-19 supply and demand shocks on the DFW housing market and its participants. Focus is placed on the relationship between homebuilders, developers, and lenders in the event of housing correction due to rising interest rates and oversupply. Through the analysis of market fundamentals and structure and in the event of broader market decline, this thesis proposes an investment strategy based on the acquisition of distressed single-family lot developments. The investment strategy leverages real options theories of project delay and product switching to mitigate the risk of catching a falling knife.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Optical Imaging and Image Processing to Verify a Layer in a Laser Powder Bed Fusion Process</title>
<link href="https://hdl.handle.net/1721.1/153098" rel="alternate"/>
<author>
<name>Kota, Maya Padmini</name>
</author>
<id>https://hdl.handle.net/1721.1/153098</id>
<updated>2023-12-01T03:36:55Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Using Optical Imaging and Image Processing to Verify a Layer in a Laser Powder Bed Fusion Process
Kota, Maya Padmini
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be created with traditional manufacturing methods. AM is widely used in regulated industries such as medical and aerospace which require objective evidence of good manufacturing processes (GMP) for auditing purposes. Within AM, important powder layer characteristics must be met to ensure final part quality. Currently, no machine can provide objective evidence of a proper characterization of crucial powder layer properties with in-process monitoring equipment. Such properties are currently verified by unquantifiable means and can be classified within two categories of failure. This project investigates and analyzes possible sensor technologies that can provide in-process data to objectively quantify the characterization condition. Implementing in-process monitoring technologies will provide objective, quantitative evidence, prevent failed builds due to improper powder layer setups, and reduce the time it takes to set up an AM machine for a build. While the final solution for this project incorporates the use of both a 2D laser line sensor and an AM in-machine camera, this thesis will specifically focus on the in-machine camera. More specifically, this thesis will discuss camera repeatability tests that were conducted, the images taken during these tests, and the resulting pixel intensity values from these images. Analysis of the intensity values demonstrated that the in-machine camera could distinguish between different powder layer thickness values and that intensity values could be used as a quantitative metric to indicate whether certain powder layer characteristics are within specification.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Production Network Capacity Modeling for Strategic Network Planning</title>
<link href="https://hdl.handle.net/1721.1/153097" rel="alternate"/>
<author>
<name>Simons, Philipp</name>
</author>
<id>https://hdl.handle.net/1721.1/153097</id>
<updated>2023-12-01T03:31:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Production Network Capacity Modeling for Strategic Network Planning
Simons, Philipp
Strategic planning of manufacturing capacity requires data-based approaches to determine current and future constraints in a manufacturing network. While the desire to improve decision-making in strategic planning is often strong among decision makers, and data on capacity generally exists in some form, centrally coordinated efforts to harvest that data are frequently lacking and the data are highly inconsistent. In addition, modeling manufacturing capacity is an inherently complex problem due to varying modes of production, unclear units of measure, and complex global manufacturing networks. &#13;
&#13;
In this thesis, a capacity model design is proposed for a global medical device manufacturer, and key aspects of the model functionality are demonstrated in a case study. At the core of the capacity model is a database structure using standardized data fields for capacity and demand data, including cycle times, shift structure, and space. The logic of the capacity model is developed with the goal of capturing supply chain complexities such as mixed model lines or varying degrees of automation. In short, the logic determines the required production time for the product portfolio under consideration, and assesses the available capacity by comparing this required production time with the total available time. &#13;
&#13;
The logic is tested on a prototype product with a focus on mixed model lines. It is found that naming and product grouping inconsistencies require significant manual data manipulation, which – in combination with a lack of standardized, centrally available data – will form the biggest bottleneck in the implementation of the capacity model. Finally, an implementation roadmap is presented to offer guidance on converting the logic presented here into a functional model for decision makers in a supply chain strategy organization.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Playful Occupations: Mobile Creative Coding for Critical Consciousness</title>
<link href="https://hdl.handle.net/1721.1/153095" rel="alternate"/>
<author>
<name>Xisto, Thaís</name>
</author>
<id>https://hdl.handle.net/1721.1/153095</id>
<updated>2023-12-01T03:00:56Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Playful Occupations: Mobile Creative Coding for Critical Consciousness
Xisto, Thaís
The transformative potential of technology is often championed as a catalyst for societal progress, offering pathways to address challenges and create more inclusive futures. Despite this optimistic perspective of technology as a force for positive change, it often falls short of expectations. As the 21st century unfolds, there is growing interest and investment in equipping individuals with computational skills so that they can navigate and further shape our increasingly digitally-mediated world. How can we design computational learning environments so that they not only empower individuals with technical proficiency but also foster the critical thinking, agency, and socio-cultural awareness necessary to fully realize the revolutionary potential of technology? This thesis looks to the Brazilian educator and philosopher Paulo Freire’s concept of conscientização (critical consciousness) as a lens through which we can explore this question.&#13;
&#13;
Throughout this research project, I collaborated with the Homeless Workers’ Movement in Brazil (MTST for short). Freire’s concept of critical consciousness as the ability to intervene in one’s reality in order to change it is central to the movement’s grassroots mobilizations and political education. Using a combination of Participatory Action Research and Social Design Experimentation approaches, we co-designed and implemented a series of creative coding workshops and a projects guide tailored to MTST’s community. These computational learning experiences centered on OctoStudio, a mobile programming app being developed by the Lifelong Kindergarten Group.&#13;
&#13;
What insights about computational literacy might we reach by incorporating critical consciousness into computing education? How can we cultivate critical consciousness through creative coding learning experiences? This thesis investigates these questions while also describing how researchers and communities can collaborate more equitably to create meaningful change in the educational circumstances of marginalized groups. Otherwise, technology might not serve as a tool for empowerment and societal progress but as another mechanism to preserve existing systems of marginalization.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Electrochemical Approaches to Deep-Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/153090" rel="alternate"/>
<author>
<name>Badel, Andres F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153090</id>
<updated>2023-12-01T03:11:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Low-Cost Electrochemical Approaches to Deep-Decarbonization
Badel, Andres F.
Though numerous efforts have been made to mitigate the impact of global warming, deep decarbonization of the world's largest sources of CO₂ emissions is proving increasingly necessary. Progress in many sectors is proceeding quickly, but we have so far failed to address all sources of industrial emissions. Aviation, long-distance shipping, load-following electricity, and steel, iron, and cement production together account for 27% of emissions and are considered hard-to-decarbonize sectors. We demonstrate and analyze several approaches using electrochemistry in an attempt to address two of these hard-to-decarbonize sectors: cement production and load-following electricity.&#13;
&#13;
For cement production, a novel approach to drive the decarbonation of calcium carbonate using neutral water electrolysis is proposed. This approach also generates concentrated gas streams of H₂ and O₂/CO₂. The fine powder Ca(OH)₂ that is generated in the reactor is then used to synthesize the majority cementitious phase in cement. Approaches to use the concentrated gas streams from this process may be used synergistically with other processes under development for a decarbonized energy economy, suggesting a pathway to cost-competitive emissionless cement manufacturing wherein all energy is supplied by renewable electricity.&#13;
&#13;
For load-following electricity, an evaluation of metal-air batteries is first performed that provides a roadmap of the scale of cost reductions that might be accessible by 2050. We find that because metal-air batteries for grid energy storage are based on low-cost materials, system-level energy costs are low. However, we also find metal-air batteries currently suffer from performance and cost characteristics that prevent wide-scale deployment. Should these be addressed, we find that the cost of ownership for long-duration metal-air batteries is projected to become lower than $100/kWh.&#13;
&#13;
Drawing on the need for low-cost energy storage, a novel battery architecture that uses abundant chemicals separated by two immiscible phases is demonstrated. This self-assembling Zn-Cl₂ battery takes advantage of the immiscible nature of aqueous solutions and non-polar solvents. This system shows an inverse relationship between temperature and energy density that allows for low chemical cost while simultaneously exhibiting high energy density, reaching roughly $2/kWh and 700 Wh/L.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced Order Modeling of a Rocket Engine Turbopump Inducer for Assessment of Pogo Instability</title>
<link href="https://hdl.handle.net/1721.1/153089" rel="alternate"/>
<author>
<name>Hussein, Mennatallah</name>
</author>
<id>https://hdl.handle.net/1721.1/153089</id>
<updated>2023-12-01T03:01:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Reduced Order Modeling of a Rocket Engine Turbopump Inducer for Assessment of Pogo Instability
Hussein, Mennatallah
Cavitation in liquid rocket engine turbopump inducers can lead to system instabilities. One form of these flow oscillations is so-called "pogo instability", in which dynamic instability arises from the interaction of vehicle structural dynamics with thrust oscillations of a liquid-fueled propulsion system. These thrust oscillations are a result of the pressure fluctuations that originate in the piping system, the injector, the combustion chamber, and/or the turbopump inducers when cavitating. The main challenges associated with pogo analyses are the complexity of fluid-structure coupling, a disparity in findings about mechanisms of cavitation onset, and a sparse amount of data on inducer dynamic behavior. This research presents a modular approach to dynamical system modeling that captures structural dynamics due to viscous shear in order to study their effect on cavitation dynamics and overall pogo instability. An existing cavitating inducer model is extended to include the effect of viscous shear on the piping structure, and is then integrated in a simple rocket engine feedline model to characterize pogo instability with a direct link to changes in operating conditions and design choices. The open loop system analysis captures the effect of viscous shear on cavitation surge: this dissipating mechanism adds damping and stabilizes the system. The closed loop system analysis demonstrates that the reduced order model is capable of assessing the effect of viscous-shear-induced structural vibrations on overall system stability. Based on these ideas, this work sets the stage for pogo analysis of more complicated configurations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Multi-Agent Decision Making Under Uncertain Communication</title>
<link href="https://hdl.handle.net/1721.1/153081" rel="alternate"/>
<author>
<name>Pittman, Cameron W.</name>
</author>
<id>https://hdl.handle.net/1721.1/153081</id>
<updated>2023-12-01T03:48:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Distributed Multi-Agent Decision Making Under Uncertain Communication
Pittman, Cameron W.
As space exploration accelerates and the number of robots and humans working in extreme environments grows with it, we must enact autonomous multi-agent coordination in order to safely operate in environments that are inherently hostile to communication. To the best of our knowledge, there are no multi-agent scheduling algorithms capable of independently reasoning over communication delay. A key gap that must be addressed is a single-agent scheduler capable of deciding when to act given uncertain observation, which can form the basis for distributed multi-agent scheduling. Existing research has provided insights into temporal reasoning, namely modeling observation uncertainty and scheduling events with temporal constraints. There is both a need for deciding when to schedule events under uncertain observation delay, and a need to robustly coordinate between agents. Scheduling events in the face of uncertainty is a challenge due to the compounding uncertainties of uncontrollable exogenous events, unknown observation delay, and uncertain communication between agents. This thesis puts forth a series of contributions that culminates in the demonstration of a robust single-agent task executive that uses our scheduler to coordinate in a multi-agent context despite observation delay. Doing so required insights in checking controllability of temporal constraints with uncertain delay, defining a scheduler that is robust to uncertain observation delay, integrating the scheduler in an existing high-level task executive, and a coordination strategy for multiple agents. We show that the scheduler exhibits the expected performance characteristics, and perform laboratory demonstrations of multi-agent execution with uncertain communication using a scenario inspired by human spaceflight.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Dislocation Behavior in High Entropy Alloys Using Atomistic Simulations</title>
<link href="https://hdl.handle.net/1721.1/153079" rel="alternate"/>
<author>
<name>Oh, Changhwan</name>
</author>
<id>https://hdl.handle.net/1721.1/153079</id>
<updated>2023-12-01T03:52:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigating Dislocation Behavior in High Entropy Alloys Using Atomistic Simulations
Oh, Changhwan
High-entropy alloys (HEAs) are a new alloying strategy involving multiple principal elements in near-equiatomic proportions [39, 11, 37, 19, 41, 13]. To fully understand and tune the mechanical properties and crystal plasticity of these alloys, it is necessary to investigate their dislocation behavior [15]. The NiCoCr system is reported to have a single-phase face-centered cubic (FCC) crystal structure with enhanced mechanical properties compared to conventional alloys. Its negative stacking fault energy and high yield strength allow unique dislocation behavior. Also, the annealing temperature of the NiCoCr system leads to a wide range of short-range orders, which directly affect the energy barrier for dislocation movement [22]. This work investigates the flow stresses in various systems under constant strain rate and the relationship between partial dislocation behavior and the stacking fault energy of the NiCoCr system.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Photonic Spectroscopy: Applying the Digital Fourier-Transform Spectrometer</title>
<link href="https://hdl.handle.net/1721.1/153077" rel="alternate"/>
<author>
<name>Micale, Gillian K.</name>
</author>
<id>https://hdl.handle.net/1721.1/153077</id>
<updated>2023-12-01T03:34:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Integrated Photonic Spectroscopy: Applying the Digital Fourier-Transform Spectrometer
Micale, Gillian K.
The digital Fourier-Transform (dFT) spectrometer is a promising on-chip spectrometer architecture that offers exponential scaling of resolution with a compact device footprint. A package of scripted modules employs object-oriented programming to automate creation of the mask layout and streamline the dFT design process. Moving towards longer infrared wavelengths with broadband devices expands sensing capabilities by accessing the stronger chemical absorption signatures associated with the fingerprint regime. The second generation of dFT devices realizes two high-resolution, 1024-channel spectrometers. The first device operates around 1550 nm and fully utilizes foundry-standard components and processes. The second device achieves half-octave operation between 1620 and 1750 nm with the use of custom broadband adiabatic couplers. The next set of designs pushes beyond the telecom range, combining two dFT devices on a single chip for 1.2 - 2.4 µm operation. Ultrabroadband single-mode waveguides and custom adiabatic couplers were designed for each device on this chip. All four of the discussed designs use the SOI material platform and are compatible with standard foundry processes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sodium-Ion Battery Cathode Active Material Cost Drivers and Manufacturing Scale-up Barriers</title>
<link href="https://hdl.handle.net/1721.1/153072" rel="alternate"/>
<author>
<name>Clingman, Brooks T.</name>
</author>
<id>https://hdl.handle.net/1721.1/153072</id>
<updated>2023-12-01T03:46:40Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Sodium-Ion Battery Cathode Active Material Cost Drivers and Manufacturing Scale-up Barriers
Clingman, Brooks T.
Energy storage can mitigate challenges posed by intermittent renewable generation. Non-hydro energy storage is currently dominated by lithium-ion batteries, but cost and materials supply are concerns. Sodium is more abundant and cheaper to mine and refine than lithium, positioning sodium-ion batteries as a potential grid storage solution. However, researchers working at the lab scale have yet to build consensus around the best sodium-ion battery candidates for commercialization. Cathode active materials (CAMs) are of particular interest because of the pivotal role they play in battery performance and cost. Because of the material class’s simple structure, straightforward synthesis, and potential scalability, layered metal oxides (LMOs) are a particularly promising CAM under study. This thesis investigates the cost drivers and scale-up barriers of LMOs. Process and equipment considerations influencing scale-up are probed through interviews with experts in industry and academia, and the materials and process properties driving the design of critical equipment are identified. A process-based cost model is utilized to investigate the impact of synthesis route on CAM costs at scale, and the fraction of total cost attributable to materials for LMOs is found to be significantly lower than that of lithium-ion batteries.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flooded with Possibilities: Analyzing Flood Insurance as a Catalyst for Development in Southeast Florida</title>
<link href="https://hdl.handle.net/1721.1/153049" rel="alternate"/>
<author>
<name>Mejia Martinez, Carlos Augusto</name>
</author>
<id>https://hdl.handle.net/1721.1/153049</id>
<updated>2023-11-28T03:43:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Flooded with Possibilities: Analyzing Flood Insurance as a Catalyst for Development in Southeast Florida
Mejia Martinez, Carlos Augusto
Florida stands as one of the most critical residential markets in the United States, with residential sales reaching an impressive $468.5 billion and real estate residential investment amounting to $6.8 billion in 2022. However, the question arises whether this seemingly perpetual growth can withstand tightening flood policies. Is the residential market immune to the decisions made by insurance companies and the National Flood Insurance Program (NFIP)? These uncertainties form the basis of this thesis, which delves into the factors influencing insurance premium rates in Miami-Dade and Broward Counties, with a specific focus on geographic factors and independent variables.&#13;
&#13;
Through the utilization of regression models, incorporating data from First Street Foundation and the US Census Bureau, the study analyzes the intricate relationship between these variables and premium rates. A key finding is the pivotal role played by geographic factors, particularly census tracts, in accurately predicting and comprehending premium rates. The inclusion of census tract data enhances accuracy and data normalization. &#13;
&#13;
Moreover, several independent variables, such as flood risk, property values, mortgages, and rental affordability, emerge as significant influencers of premium rates. Time series data analysis reveals a steady upward trajectory in premium rates over time, accentuating the urgency for proactive measures in addressing the surge in insurance costs.&#13;
&#13;
The research further identifies residential arbitrage opportunities, whereby developers can strategically acquire land in areas disproportionately affected by high premium rates. Approximately 15% of single-family homes within the census tracts of Broward and Miami-Dade Counties pay twice the insurance premium of their peers in areas with similar characteristics, as delineated by FEMA. By considering demographic characteristics and purchasing power parity, developers can navigate the evolving real estate market and contribute to sustainable urban development.&#13;
&#13;
These valuable insights into the factors influencing insurance premium rates open avenues for future research. Expanding the analysis to other geographic areas, incorporating additional variables, assessing the impact of climate change, and analyzing the effectiveness of mitigation measures are all potential directions for further exploration. Ultimately, this research sheds light on the intricate dynamics of insurance premium rates and paves the way for more informed decisions in the realms of residential real estate and urban development.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Abortion Beyond the Binary: Transgender people have historically been left out of abortion and reproductive health research. Now, two researchers are bringing their experiences to light.</title>
<link href="https://hdl.handle.net/1721.1/153042" rel="alternate"/>
<author>
<name>Jacobs, Phie</name>
</author>
<id>https://hdl.handle.net/1721.1/153042</id>
<updated>2023-11-28T03:53:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Abortion Beyond the Binary: Transgender people have historically been left out of abortion and reproductive health research. Now, two researchers are bringing their experiences to light.
Jacobs, Phie
When it comes to accessing abortions and other reproductive healthcare, transgender people throughout the United States face a minefield of issues—from getting insurance coverage to dealing with medical providers who don’t know how to treat them, to weathering discrimination—but very little research exists on how bad these problems are, the impacts they have, or potential solutions. Currently, only a few national-level studies have investigated how trans people experience the US healthcare system, and no major studies measure the number of trans people who undergo abortions, the type of abortions they receive, or the challenges they face when accessing these services. The few studies that do exist suggest that, due to myriad legal, financial, and social barriers, trans people often struggle to obtain the healthcare services they need.&#13;
&#13;
In 2017, this knowledge gap spurred Heidi Moseson and Sachiko Ragosta, two public health researchers at Ibis Reproductive Health in Oakland, California, to begin developing the first national-level survey into the reproductive healthcare experiences of trans Americans. The survey, which ended data collection in 2019 and is still in the analysis phase, included input from more than 3,000 transgender and nonbinary respondents. The project is unprecedented in terms of size, scope, and specificity, and is currently the only major study in this field that was designed with consultation from those within the trans community and is led by a scientist who is gender diverse themself—Ragosta identifies as nonbinary and uses they/them pronouns.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Under Their Own Laws: How the Kitasoo/Xai’xais First Nation created a new marine protected area – without the federal government’s approval</title>
<link href="https://hdl.handle.net/1721.1/153039" rel="alternate"/>
<author>
<name>von Herff, William</name>
</author>
<id>https://hdl.handle.net/1721.1/153039</id>
<updated>2023-11-28T03:52:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Under Their Own Laws: How the Kitasoo/Xai’xais First Nation created a new marine protected area – without the federal government’s approval
von Herff, William
On June 21, 2022, the Kitasoo/Xai’xais, a First Nation on the Pacific coast of Canada, unilaterally declared the Gitdisdzu Lugyeks marine protected area (MPA) in their territorial waters of Kitasu Bay. Whether they have the legal authority to create that protected area, however, is a difficult question to answer. The Constitution Act of Canada protects Indigenous people’s fundamental rights to fishing, logging, and land, but technically they remain subjects of the Canadian government. For the Kitasoo/Xai’xais this system is especially frustrating, since like many other Pacific coast nations, they have never signed a treaty with the Canadian government. &#13;
&#13;
The eventual goal, then, for the new MPA is to reach a co-management agreement, where the Kitasoo/Xai’xais and Canadian government establish overlapping MPAs in Kitasu Bay and share authority over the bay’s resources. The Kitasoo/Xai’xais have their traditional knowledge and holistic understanding of their territory that is needed to protect and manage Kitasu Bay. Meanwhile, the Canadian government has far-reaching political power and a national perspective that the Kitasoo/Xai’xais lack. Combining these assets could do great things for the bay. By declaring their MPA, the Kitasoo/Xai’xais are, in a sense, just getting a head-start on this process. They still want the federal government involved, after all. They just felt that they couldn’t wait any longer. &#13;
&#13;
The Kitasoo/Xai’xais have been fighting for decades to keep their environment intact. They have had to use every tool at their disposal – protests, lawsuits, and industry alliances – to maintain their way of living. Now, the Gitdisdzu Lugyeks MPA represents a new opportunity: if the Canadian government comes to the table, the Kitasoo/Xai’xais will have a renewed chance to safeguard their resources under their own laws and practices, just as they did before European colonization. They are using a vast wealth of traditional knowledge, bolstered by decades of their own scientific research, to guide their management practices and ensure their waters and resources will still be there for generations to come. The Kitasoo/Xai’xais, however, are striving for something bigger than themselves: they believe this MPA can demonstrate the power of Indigenous-led conservation both in Canada and around the world.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Sleepless Forest Observers: Ecologists are using remote observation to advance their understanding of environments. Are they losing something in the process?</title>
<link href="https://hdl.handle.net/1721.1/153032" rel="alternate"/>
<author>
<name>Nalamalapu, Vishva</name>
</author>
<id>https://hdl.handle.net/1721.1/153032</id>
<updated>2023-11-28T03:58:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Sleepless Forest Observers: Ecologists are using remote observation to advance their understanding of environments. Are they losing something in the process?
Nalamalapu, Vishva
In recent years, camera traps, acoustic recorders, genetic methods to identify organisms using DNA they shed into their environments (eDNA), tags on animals to log their behaviors, and aircraft or satellite remote sensing to identify environments and species have all become less expensive, and the quality of sensors and the methods to analyze their data have improved. As a result, ecologists are using remote observation more and more in their research. “The explosion is happening now,” says Taal Levi, an ecologist at OSU who studies quantitative wildlife ecology, conservation, and environmental genetics at the Andrews. A review article published in Frontiers in Ecology and Evolution found that the number of scientific publications with the keyword “eDNA” tripled from 2015 to 2018, the number with the keyword “camera traps” doubled, and the number with the keyword “bioacoustics” increased by 50%.&#13;
&#13;
There are good reasons for this shift. Remote sensing can help researchers learn about ecosystems. Because sensors don’t always need someone physically present, researchers can use them to collect data at larger and finer scales and in places that are difficult to observe directly. Sensors can also detect a wider range of organisms than traditional methods. Levi says these technologies are like direct observation “but instead of just you, you’ve got 5,000 versions of you that can stay awake all night long.”&#13;
&#13;
Simultaneously, researchers spend less time in the field when they use remote observation. And it is in the field where they often come up with research ideas and develop a deeper intuition for an ecosystem. Remote observation can also encourage the trend of finding patterns (that an animal lives in environments with specific characteristics, for example) without learning what causes those patterns (which of those characteristics are important to the animal and why).&#13;
&#13;
The Andrews is one place of many where the explosion of remote observation is happening. It was established as a site for long-term science and management studies by the Forest Service in 1948 and designated one of the first of 28 National Science Foundation funded Long-Term Ecological Research (LTER) Network sites in 1980. LTER Network sites focus on long-term and large-scale ecological processes. As a result, the Andrews has a long history of research on forests, streams, and watersheds, which makes it an especially good place to assess the transition from traditional methods to remote observation. At the Andrews, researchers are trying to get the benefits of remote observation while avoiding the risks and to find a balance between remote observation and traditional methods. That requires being intentional within the fast-paced broader culture of scientific research. Their success determines the novelty, completeness, and accuracy of their research, which in turn influences how society understands and manages its environments.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Softwar: A Novel Theory on Power Projection and the National Strategic Significance of Bitcoin</title>
<link href="https://hdl.handle.net/1721.1/153030" rel="alternate"/>
<author>
<name>Lowery, Jason P.</name>
</author>
<id>https://hdl.handle.net/1721.1/153030</id>
<updated>2023-11-28T03:27:36Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Softwar: A Novel Theory on Power Projection and the National Strategic Significance of Bitcoin
Lowery, Jason P.
Current analysis of Bitcoin’s underlying proof-of-work technology is almost exclusively based on financial, monetary, or economic theory. Recycling the same theoretical frameworks when performing hypothesis-deductive analysis of Bitcoin has the potential to create systemic-level analytical bias which could negatively impact public policy making efforts and could even pose a threat to US national security.&#13;
&#13;
This thesis introduces a novel theoretical framework for analyzing the potential national strategic impact of Bitcoin as an electro-cyber security technology rather than a peer-to-peer cash system. The goal of this thesis is to give the research community a different frame of reference they can utilize to generate hypotheses and deductively analyze the potential risks and rewards of proof-of-work technologies as something other than strictly monetary technology. The author asserts it would be beneficial for researchers to explore alternative functionality of proof-of-work technologies to eliminate potential blind spots, provide a more well-rounded understanding of the risks and rewards of proof-of-work protocols like Bitcoin, and positively contribute to the development of more informed public policy in support of the March 2022 US Presidential Executive Order on Ensuring the Responsible Development of Digital Assets and the May 2022 US Presidential Executive Order on Improving the Nation’s Cybersecurity.&#13;
&#13;
Utilizing a grounded theory methodology, the author combines different concepts from diverse fields of knowledge (e.g. biology, psychology, anthropology, political science, computer science, systems security, and modern military strategic theory) to formulate a novel framework called “Power Projection Theory.” Based on the core concepts of Power Projection Theory, the author inductively reasons that proof-of-work technologies like Bitcoin could not only function as monetary technology, but could also (and perhaps more importantly) function as a new form of electro-cyber power projection technology which could empower nations to secure their most precious bits of information (including but not limited to financial bits of information) against belligerent actors by giving them the ability to impose severe physical costs on other nations in, from, and through cyberspace. The author calls this novel power projection tactic “softwar” and explores its potential impact on national strategic security in the 21st century. Like most grounded theory research efforts, the primary deliverable of this thesis is a novel theory rather than deductive analysis of a hypothesis derived from existing theory.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms Underlying Learning Mediated Plasticity in the Adult Mammalian Olfactory Bulb</title>
<link href="https://hdl.handle.net/1721.1/153029" rel="alternate"/>
<author>
<name>McCue, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/153029</id>
<updated>2023-11-28T03:20:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mechanisms Underlying Learning Mediated Plasticity in the Adult Mammalian Olfactory Bulb
McCue, Margaret
The olfactory system is rapidly modified by learning, starting at the first informational relay station. The olfactory bulb retains high levels of plasticity throughout adulthood and undergoes lasting structural changes following classic learning paradigms. Recent studies are starting to elucidate the mechanisms that underlie these high levels of rapid, flexible, and persistent change. This review will first discuss the anatomy and basic coding of the olfactory bulb to provide a basis for understanding the fundamental processes of the system. It will then discuss recent breakthroughs in understanding the mechanisms of learning mediated plasticity in the olfactory bulb.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Treating Brackish Groundwater for Irrigation with Selective Electrodialysis &amp; Nanofiltration</title>
<link href="https://hdl.handle.net/1721.1/152963" rel="alternate"/>
<author>
<name>Heath, Samuel M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152963</id>
<updated>2023-11-14T03:08:57Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Treating Brackish Groundwater for Irrigation with Selective Electrodialysis &amp; Nanofiltration
Heath, Samuel M.
The Campo de Cartagena aquifer in southeastern Spain contains brackish groundwater that is considered low quality since it requires treatment prior to economic use. For example, the groundwater contains high concentrations of monovalent ions (Na⁺ and Cl⁻), which are detrimental to crop growth and must be removed before irrigation. Currently, the most widely used technology to remove the monovalent ions from water is reverse osmosis (RO) desalination, but this technology also removes divalent ions that are beneficial to crop growth (Mg²⁺, Ca²⁺, and SO₄²⁻). In this study, two technologies, selective electrodialysis (SED) and nanofiltration (NF), are evaluated to treat the brackish groundwater as an alternative to RO. Unlike RO, both SED and NF can remove monovalent ions from the brackish groundwater feed stream while retaining the divalent ions, producing an irrigation product rich in nutrients.&#13;
&#13;
Using a bench-scale experimental setup for each water treatment method, the monovalent-divalent selective performance of commercial SED and NF membranes are quantified and compared, using a feed stream representative of the water present in the Campo de Cartagena aquifer. Specifically, the pH and total dissolved solids (TDS) of the feed stream are varied to experimentally optimize the monovalent-divalent selectivity for each process. In addition to comparing the membrane performance, this thesis also evaluates practical considerations in the implementation of both technologies. The primary results of this work show that both SED and NF have potential as technically feasible alternatives to RO, but further analysis is needed to determine the economic feasibility of these two processes for this application. NF and SED have the potential to produce a nutrient rich irrigation product, ultimately creating less waste and saving farmers money on fertilizer.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Error in a Model&#13;
Predictive Irrigation Controller</title>
<link href="https://hdl.handle.net/1721.1/152952" rel="alternate"/>
<author>
<name>Ingersoll, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/152952</id>
<updated>2023-11-14T03:00:52Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis of Error in a Model&#13;
Predictive Irrigation Controller
Ingersoll, Samuel
Significant portions of the world’s agricultural land are vulnerable to desertification, leading to water shortages and changing climate conditions. Smart irrigation controllers could be part of the solution by helping farmers save water and adapt to changing climate without sacrificing yield. This thesis presents an analysis of sensitivity to crop model parameters in the MIT GEAR Lab’s new POWEIr irrigation controller with the goal of making it cheaper and easier to deploy and therefore more accessible. The analysis shows that, of the four crop parameters, the controller is most sensitive to the crop coefficient (K_c), moderately sensitive to the maximum rooting depth (Z_r), less sensitive to the depletion fraction (f_d), and almost completely independent of the yield response factor (K_y). This result is potentially useful for designing calibration procedures for the deployment of the POWEIr Controller, especially where there may be limited ability to calibrate the controller.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Galvanic Displacement Across Single-Layer Graphene</title>
<link href="https://hdl.handle.net/1721.1/152951" rel="alternate"/>
<author>
<name>Cunitz, Isabelle</name>
</author>
<id>https://hdl.handle.net/1721.1/152951</id>
<updated>2023-11-14T03:40:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Galvanic Displacement Across Single-Layer Graphene
Cunitz, Isabelle
This work aims to advance the scientific and engineering understanding of galvanic displacement reactions as buffered by a monolayer of graphene, specifically by investigating palladium deposition on graphene on a copper foil substrate via galvanic displacement between the copper and palladium (II) ions in solution. To understand palladium nanoparticle deposition and determine how this process can be controlled, electrochemical thermodynamics and classical nucleation theory are first synthesized into a thermodynamic model of the system. Next, scanning electron microscopy is used to characterize palladium deposition on the graphene/copper surface after galvanic displacement. Copper etch pits are observed to form during the reaction, maintaining contact between the deposition solution and the copper and thereby ensuring that the reaction is not self-limiting under the conditions studied. Palladium is observed to preferentially deposit along atomic steps in the copper foil, at graphene defects where the copper is exposed to the deposition solution, and at etch pits. The effects of varying palladium concentration and graphene/copper surface treatments are characterized, and these results are synthesized to propose a mechanism of palladium deposition via galvanic displacement through graphene. Finally, galvanic displacement is investigated in a novel engineering application, as a method of sealing graphene defects for the synthesis of centimeter-scale nanoporous atomically thin membranes. Palladium nanoparticles deposited on the graphene surface are observed to largely survive graphene transfer to a support membrane substrate, as well as mounting and use in aqueous diffusion cell experiments. 
However, diffusion experiments show that graphene treated via galvanic displacement has higher leakage than untreated graphene, indicating that under the reaction conditions studied here, galvanic displacement has a net effect of graphene defect enhancement rather than defect sealing. This work contributes new insights regarding galvanic displacement as a method of modifying monolayer graphene, as well as exploring this method in the novel application of membrane separations. With further development, this simple, quick, and inexpensive technique for the fabrication of 2D material/nanoparticle composites may have a myriad of possible applications relevant to medicine, sustainability, and beyond.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis and Electronic Transport of Natural Superlattice Compounds</title>
<link href="https://hdl.handle.net/1721.1/152950" rel="alternate"/>
<author>
<name>Chen, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/152950</id>
<updated>2023-11-14T03:16:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Synthesis and Electronic Transport of Natural Superlattice Compounds
Chen, Alan
The study of periodic structures and their impact on states of matter is essential in condensed matter physics. The analysis of this periodicity led to the modern understanding of electronic properties through the band structure. Advances in materials synthesis and discovery have led to precise control over electronic properties via control of the atomic structure. One family of materials in which this has been explored is van der Waals (vdW) materials. In addition to their study as bulk crystalline specimens, the two-dimensional nature of these materials enables the development of artificial heterostructures with a diverse range of electronic states of matter. The ability to in turn design bulk crystals containing such heterostructures would enable access to a broader range of experimental techniques and potential new electronic states. In this thesis, we present a synthesis study of natural superlattices composed of transition metal dichalcogenide (TMD) monolayers alternating with spacer layers. These superlattices belong to the TMD family with chemical formula MS₂, M = (V, Nb, Mo, W). We study one such compound, Sr-VS₂, through electronic transport measurements including evidence for an insulating state therein. We further discuss syntheses of Group-VI TMD superlattices and the potential physics such systems may support.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery-Free Wireless Imaging of Underwater Environments</title>
<link href="https://hdl.handle.net/1721.1/152949" rel="alternate"/>
<author>
<name>Akbar, Waleed</name>
</author>
<id>https://hdl.handle.net/1721.1/152949</id>
<updated>2023-11-14T03:19:05Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Battery-Free Wireless Imaging of Underwater Environments
Akbar, Waleed
Imaging underwater environments is of great importance to marine sciences, ocean sustainability, climatology, defense, marine robotics, geology, space exploration, and global food security. Despite advances in underwater imaging, most of the ocean and marine organisms remain unobserved and undiscovered. Existing methods for underwater imaging are unsuitable for scalable, long-term, in situ observations because they require tethering for power and communication. Here we describe underwater backscatter imaging, a method for scalable, real-time wireless imaging of underwater environments using fully-submerged battery-free cameras. The cameras power up from harvested acoustic energy, capture color images using ultra-low-power active illumination and a monochrome image sensor, and communicate wirelessly at net-zero-power via acoustic backscatter. We demonstrate the potential of this method in wireless battery-free imaging of animals, plants, pollutants, and localization tags in enclosed and open-water environments. The method’s self-sustaining nature makes it desirable for massive, continuous, and long-term ocean deployments with many applications including marine life discovery, submarine surveillance, and underwater climate change monitoring.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Infrastructure Requirements of a Low-Carbon Hydrogen Supply Chain in Germany and the Gulf Coast</title>
<link href="https://hdl.handle.net/1721.1/152948" rel="alternate"/>
<author>
<name>Sizaire, Paul</name>
</author>
<id>https://hdl.handle.net/1721.1/152948</id>
<updated>2023-11-14T03:26:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Evaluating the Infrastructure Requirements of a Low-Carbon Hydrogen Supply Chain in Germany and the Gulf Coast
Sizaire, Paul
The increasing political momentum advocating for decarbonization efforts, in Europe and elsewhere, has led many governments to unveil national hydrogen strategies. Hydrogen is viewed as a potential enabler of deep decarbonization, notably in hard-to-abate sectors such as industry. A novel optimal low-carbon hydrogen network algorithm was developed to assess the supply chain requirements of systems with increasing electrolytic hydrogen production levels. This model was used to investigate the low-carbon hydrogen procurement strategies of Germany and the Gulf Coast, with a focus on industrial demand.&#13;
&#13;
An initial case explored a self-sufficiency scenario in which the studied region would domestically procure hydrogen with electrolytic production. Results show significant synergies between electrolytic production powered by a mix of renewables, large-scale hydrogen storage in the form of salt caverns, and hydrogen pipelines. The optimal power mix in the Gulf Coast consists of a majority of wind turbines, while Germany deploys a larger share of solar panels. The levelized cost of hydrogen, which includes storage and transport, totals ~$5.5-6.2/kgH₂ in the Gulf Coast (2025), and 4.9-6.1 €/kgH₂ in Germany (2025). Replacing salt caverns with compressed and liquid tank storage drastically changes the system, which deploys more renewable capacity to avoid storage needs but ultimately increases curtailment, driving costs up by ~$1/kgH₂ in the Gulf Coast and 1.0-2.2 €/kgH₂ in Germany. This calls for a centralized approach to building out the supply chain, requiring extensive stakeholder collaboration. Furthermore, optimal electrolytic production requires low capacity factors (40-70%) to truly achieve low-carbon status with renewable electricity at all times, which impacts the levelized cost of hydrogen and keeps it high (&gt;$4 (and €)/kgH₂) even in 2050. It was found that electricity storage is not economical to increase electrolytic capacity factors at times of low renewable production. &#13;
&#13;
Natural gas-derived production was found to be significantly impacted by upstream supply chain emissions of electricity and natural gas. Maintaining such production will require substantial reductions in the methane leakage rate and electricity carbon footprint, alongside a high carbon capture rate at the process level. Finally, in the case of Germany, pipeline imports from neighboring countries were found to have significant systemic benefits and provide a viable pathway to decarbonization, but the local large-scale storage of these potentially variable imports should not be overlooked.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis Driven Shape Design of Parametric Geometry Using B-Splines and Free-Form Deformation</title>
<link href="https://hdl.handle.net/1721.1/152947" rel="alternate"/>
<author>
<name>Gomez, Marlena</name>
</author>
<id>https://hdl.handle.net/1721.1/152947</id>
<updated>2023-11-14T03:27:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis Driven Shape Design of Parametric Geometry Using B-Splines and Free-Form Deformation
Gomez, Marlena
This thesis presents a method for local aerodynamic shape optimization to morph parametric geometry. The goal is a general framework for analysis driven shape design that combines CAD-like parametric solid model geometry construction with free-form-like local deformation. One method explored uses the control point locations of certain B-splines defining the geometry as design parameters during optimization. A second approach builds on this method, using free-form deformation (FFD) to morph geometry within an FFD box. Analytic geometry inside the FFD box which is not by default defined by B-splines is converted to a B-spline representation, and free-form deformation is then used to move the B-spline control point net. In this method, the control point locations of the FFD box serve as design parameters, which allows for the generation of smooth geometry while keeping the number of degrees of freedom manageable for the optimizer. The first technique is applied to an optimization case where the L²-norm difference between an airfoil shape and a target shape is minimized. Then, the technique is demonstrated on an optimization driven by computational fluid dynamics (CFD) analysis where the drag of an airfoil geometry is minimized. Lastly, the B-spline method is applied to an optimization of a wingtip surface, where the objective function minimizes drag while maintaining the initial lift of the shape. The FFD technique is similarly applied to an airfoil L²-norm difference minimization and a wingtip L²-norm difference minimization. Finally, the FFD technique is demonstrated on design cases driven by CFD analysis for an airfoil.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diel vertical migration and frontal variability of&#13;
acoustic backscatter in the Balearic Sea</title>
<link href="https://hdl.handle.net/1721.1/152945" rel="alternate"/>
<author>
<name>Cheslack, Helena R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152945</id>
<updated>2023-11-14T03:33:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Diel vertical migration and frontal variability of&#13;
acoustic backscatter in the Balearic Sea
Cheslack, Helena R.
Acoustic Doppler current profilers (ADCPs) use active sonar to measure current velocities by measuring the sound returned by scatterers (most often zooplankton) in the water column. The volume of scatterers, or echo intensity, has been used to measure the abundance of zooplankton and characterize diel vertical migration (DVM). DVM is the mass vertical movement of zooplankton and fish between the surface waters where they feed at night, and the mesopelagic zone where they avoid predators during the day; it is considered the largest migration of biomass on Earth, happens in every ocean, and is important to the global carbon cycle.&#13;
&#13;
This thesis uses a combination of data that I helped acquire during the Office of Naval Research-funded CALYPSO 2022 field campaign in the Balearic Sea. Acoustic backscatter from a 38 kHz ADCP and a 150 kHz ADCP is translated into mean volume backscattering strength (MVBS) to characterize the sound scattering layers (SSLs) in the Balearic Sea. WireWalker data is used to model subsurface light. The MVBS is compared to measurements of temperature, salinity, chlorophyll concentration, and dissolved oxygen (DO) from the EcoCTD, a towed instrument that simultaneously measures hydrographic and biological parameters. The analysis reveals one permanent scattering layer at 300–600 m and two migrating scattering layers in the top 50 m and between 100 and 300 m. The layers are likely made up of zooplankton, such as krill and pteropods, and pelagic fish. The speed of vertical migration ranges from 1 to 11 cm s⁻¹, and migrators follow isolumes during migration times. DVM has the strongest effect on backscatter anomalies, but during daytime and nighttime, DO is most correlated with the backscatter anomaly.&#13;
&#13;
We demonstrate that ADCPs can be used to characterize SSLs and DVM. The uniquely co-located EcoCTD data from CALYPSO enables us to compare the frontal variability in scatterers to variability in biological and physical parameters. Characterizing the SSLs, DVM, and frontal variability of acoustic backscatter furthers understanding of the global carbon cycle.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Qubit Quest Decoded: A Mixed-Methods Analysis of Innovation Policies and Ecosystem Mapping in the Race for Quantum 2.0 Technologies</title>
<link href="https://hdl.handle.net/1721.1/152892" rel="alternate"/>
<author>
<name>Sandoval Sandoval, Jorge I.</name>
</author>
<id>https://hdl.handle.net/1721.1/152892</id>
<updated>2023-11-03T03:31:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Qubit Quest Decoded: A Mixed-Methods Analysis of Innovation Policies and Ecosystem Mapping in the Race for Quantum 2.0 Technologies
Sandoval Sandoval, Jorge I.
This thesis presents a comprehensive examination of the evolution and dynamics of emerging Quantum 2.0 technologies through the lens of innovation ecosystems. Utilizing a mixed-methods approach that incorporates both qualitative and quantitative data, the study offers a cross-country perspective segmented by various technological, social, and policy factors. The manuscript begins with an in-depth review of the literature, capturing the current state, challenges, and scientific discourse surrounding Quantum 2.0 technologies. It then introduces an "innovation ecosystems" framework to contextualize the complex interplay of policies, strategies, and stakeholder dynamics. The concept of an "innovation pipeline" is further developed, informed by a variety of sources to draft a timeline that traces the emergence and diversification of Quantum 2.0 technologies, primarily within the U.S. context. &#13;
A scientometric analysis of global quantum-related publications, U.S. patents, and worldwide venture capital investments provides a broad view of the landscape from 2010 to 2022. This data-driven approach uncovers patterns of collaboration and topic divergence, and assesses the variation in the sequential production of knowledge artifacts. The study highlights the top ten global players in the field and leverages a keyword co-occurrence analysis to further elaborate on the trends and ideas influencing Quantum Information Science (QIS).&#13;
Overall, the dissertation provides valuable insights into the current state of strategic policy approaches on the nascent ecosystems of Quantum 2.0 technologies. The developed analytical frameworks serve as a reference for understanding coherence in policy actions and funding allocations, offering guidelines for future strategic innovation in both public and private sectors engaged in large-scale technological projects.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regulatory benchmarking by machine learning: The case of climate resilience in electric utilities</title>
<link href="https://hdl.handle.net/1721.1/152891" rel="alternate"/>
<author>
<name>Lyu, Beichen</name>
</author>
<id>https://hdl.handle.net/1721.1/152891</id>
<updated>2023-11-03T03:37:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Regulatory benchmarking by machine learning: The case of climate resilience in electric utilities
Lyu, Beichen
Regulatory targets are becoming increasingly complex to benchmark, including electric utilities’ climate resilience (“utility resilience”), which is non-linear and high-dimensional. Meanwhile, machine learning (ML) models have been developed, and continue to be developed, with desirable properties such as local optimality and data compression. To explore the synergy between ML and benchmarking, we review and discuss the literature from both sides in the context of government regulation.&#13;
&#13;
Then we dive into a case study of utility resilience, where the dual complexities of the climate and power systems converge, with climate impacts that are likely to harm resilience and increase risks. However, these complicated and changing climate impacts are overlooked in the current regulations of utility resilience [30]. We examine how benchmarking could be applied to fill this regulatory gap through performance incentive mechanisms and elaborate on the political-economic implications, both advantages and potential pitfalls, of its application.&#13;
&#13;
With these theoretical understandings, we experiment with benchmarking weather-related power outages in New England, US between 2010 and 2021. We propose a data regime that combines station-level weather data with district-level outage data, as well as a baseline model using ridge regression. We also deploy our model through an online portal and discuss its limitations on long-tail distributed outage and weather data. Our studies could inform future ML-based benchmarking for regulatory uses, particularly for utility resilience, that balances accuracy, accessibility, and applicability.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracking Dust Plumes and Identifying Source Areas Using Spatiotemporal Clustering of Remote Sensing Data</title>
<link href="https://hdl.handle.net/1721.1/152889" rel="alternate"/>
<author>
<name>Alnasser, Faisal</name>
</author>
<id>https://hdl.handle.net/1721.1/152889</id>
<updated>2023-11-03T03:30:53Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tracking Dust Plumes and Identifying Source Areas Using Spatiotemporal Clustering of Remote Sensing Data
Alnasser, Faisal
Traditionally, studies on dust relied on polar-orbiting satellites whose limited temporal coverage does not offer a detailed picture of how dust plumes evolve and change over time. To address this, we develop a method to identify and track individual dust plumes via hourly images from the Meteosat Second Generation Spinning Enhanced Visible and Infrared Imager (SEVIRI) instrument on the Eumetsat geostationary orbit satellites. Our framework uses the SEVIRI Dust RGB false color composite to highlight airborne dust in images. We then use the DBSCAN machine learning algorithm to cluster pixels into plumes based on their spatial and temporal connectivity. Through careful analysis and processing, we are able to analyze properties such as the storm’s source area, distance traveled, and affected areas. Through our framework, we gain insights into dust storm sources, emission factors such as soil moisture, wind speed, and vegetation, and their seasonal effects, which are key for understanding dust impacts on air quality, health, and the environment. To illustrate the effectiveness of our methodology, we conduct comprehensive case studies on several prominent dust-emitting regions: the Bodélé Depression, Southern Iraq, the Syrian Desert, and the Sistan basin. These case studies shed light on the complex effects of drought and the interplay between soil moisture and vegetation, as well as their effects on plume properties, providing an understanding of the different variables contributing to dust storm dynamics.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subsurface Digital Twin and Emergence</title>
<link href="https://hdl.handle.net/1721.1/152888" rel="alternate"/>
<author>
<name>Zhao, Yushi</name>
</author>
<id>https://hdl.handle.net/1721.1/152888</id>
<updated>2023-11-03T03:35:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Subsurface Digital Twin and Emergence
Zhao, Yushi
Subsurface characterization stands at the nexus of humanity's growing demands for materials, energy, and safety amid the burgeoning population and rising living standards. However, challenges in subsurface characterization, rooted in conventional practices, functional silos, limited data density, and technological constraints, impede business efficacy and sustainable development. As societies' expectations shift and industries evolve, a paradigm shift is required in the human-machine relationship and the way we organize work. To meet these challenges and ensure responsible human progress, a systematic solution is needed.&#13;
&#13;
This thesis investigates the concept of a subsurface digital twin as a boundary object that bridges disciplines, scales, and uncertainties, fostering collaboration and real-time informed decision-making. It explores the evolution of subsurface characterization from data-sparse and theory-dependent practices to a holistic digital twin framework. The thesis identifies critical technical and sociotechnical challenges, including data scarcity, overreliance on empirical relationships, functional silos, and trust. The thesis demonstrates how a subsurface digital twin can enhance cross-functional collaboration and address critical challenges through real-world examples. It highlights the use of geoanalytics and machine learning to predict total organic carbon content and formation brittleness, showcasing the digital twin's power in multidisciplinary workflows. Furthermore, it proposes a solution for uncertainty reduction through integration and lays out future steps for the development of the subsurface digital model, construction of pseudo/surrogate models for probabilistic simulation of complex and time-consuming numerical simulations, and use of the digital twin to bridge workflows between data-rich and data-scarce regions across scales.&#13;
&#13;
The thesis outlines the design and value-creating functions of the subsurface digital twin system, facilitating adaptive resolution and agile implementation. It envisions a future where such digital twins revolutionize decision-making, from individual project optimization to enterprise-wide insights. The thesis underscores the importance of strategic investment in digital twins for long-term returns and as a cornerstone of the evolving human-machine relationship and advances the concept of a subsurface digital twin as a transformative approach to subsurface characterization, fostering collaboration, tackling challenges, and paving the way for sustainable progress in a rapidly changing world.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Multi-Sensor Fusion for 3D Perception</title>
<link href="https://hdl.handle.net/1721.1/152887" rel="alternate"/>
<author>
<name>Shao, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/152887</id>
<updated>2023-11-03T03:58:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Efficient Multi-Sensor Fusion for 3D Perception
Shao, Kevin
As a critical component of realizing widespread autonomous driving, 3D perception systems have come to be heavily studied in the community. However, many solutions focus solely on achieving the highest accuracy, overlooking other practical considerations such as speed and cost. In this thesis, I develop two multi-sensor fusion models for 3D perception: BEVFusion, a camera-LiDAR fusion model, and BEVFusion-R, a camera-radar fusion model. BEVFusion seeks to balance accuracy and speed. By fusing features from each input modality in the shared bird’s eye view space, it captures both semantic and geometric information from each input. Its simple design allows it to achieve both state-of-the-art accuracy and a 24% speedup over competing works. BEVFusion-R further incorporates cost and hardware deployment into the design consideration. By carefully designing the entire model with both performance and acceleration in mind, BEVFusion-R achieves a 2.1% NDS improvement on nuScenes over the previous state-of-the-art with a 4.5× measured speedup. Additionally, it is capable of real-time latency on edge GPUs. The code will be publicly released at https://github.com/mit-han-lab/bevfusion
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Stable Reinforcement Learning in&#13;
Non-Episodic Tasks</title>
<link href="https://hdl.handle.net/1721.1/152886" rel="alternate"/>
<author>
<name>Karnik, Sathwik</name>
</author>
<id>https://hdl.handle.net/1721.1/152886</id>
<updated>2023-11-03T03:52:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Towards Stable Reinforcement Learning in&#13;
Non-Episodic Tasks
Karnik, Sathwik
Despite recent advances in deep reinforcement learning (RL), deploying RL policies in robotics often leads to various challenges. The typical training paradigm in RL involves the rollouts of policies executed in a finite horizon or episodes. However, such policies may struggle to generalize well in various non-episodic tasks, including both object manipulation and locomotion. In this thesis, we study the challenges that arise from non-episodic tasks in two settings: (1) object manipulation in the Habitat Home Assistant Benchmark (HAB) [18] and (2) locomotion in the MuJoCo suite [20]. &#13;
&#13;
In the first of these two settings, we study the failure modes of the baseline methods and attribute many of the failures in part to instabilities in object placement and the lack of error recovery in the setting of open-loop task planning. We consider a possible approach to address this issue by modifying the steady-state termination condition in the RL objective to place the object at the goal position for a longer horizon. We next consider an error-corrective policy using inverse kinematics (IK) following the execution of the RL policy. The integration of an IK policy leads to a significant improvement in the final task success rate, from 41.8% to 65.3%, in SetTable, one of the three tasks in the HAB.&#13;
&#13;
In the second setting, we consider extrapolation in the non-episodic task of locomotion in the MuJoCo suite. Typical RL policies are trained for a finite horizon, but may need to be executed for a longer horizon during deployment in locomotion tasks. However, current RL approaches may fail to generalize beyond the training horizon. To address this issue, we consider the use of time-to-go embeddings as part of the observations. Specifically, we introduce the use of a constant time-to-go embedding in the setting where the horizon is much longer during evaluation or deployment. We find limited evidence of improvements in the average episode returns during evaluation in 6 tasks in the MuJoCo suite.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Transit Oriented Development using Satellite&#13;
Imagery: Riyadh vs. Phoenix</title>
<link href="https://hdl.handle.net/1721.1/152882" rel="alternate"/>
<author>
<name>Almazroa, Noor</name>
</author>
<id>https://hdl.handle.net/1721.1/152882</id>
<updated>2023-11-03T03:01:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessing Transit Oriented Development using Satellite&#13;
Imagery: Riyadh vs. Phoenix
Almazroa, Noor
As urbanization becomes the way of the future, the demands on cities are becoming more urgent, with an increased awareness of the need for sustainability and resilience, making the utilization of today’s technology and data critical in decision-making and planning. In the first part of this thesis, I combine a few of these techniques and datasets to explore their ability to provide a helpful assessment of Transit-Oriented Development (TOD). This research assesses the transit-oriented characteristics of two cities, Riyadh, Saudi Arabia, and Phoenix, Arizona, US, which share many similarities in urban design and climate. I use high-resolution satellite imagery with computer vision methods to detect the built area around public transit stations to measure the building density and, combined with land use data, the residential and nonresidential density. Both of these measurements are important indicators of the success of a public transportation system. I found that of the two building detection methods, the one based on deep learning techniques was more precise, with better generalization abilities, while the method based on classical image processing techniques was more sensitive to threshold choices, with considerable variability when tested on different years. Both methods, however, were able to give a useful prediction of buildings. From their results, I found that Phoenix has a building density of less than 50%, even around the busiest downtown stations. Riyadh, on the other hand, is more compact, with more than 50% of the land developed. In the second part, I formulate a System Dynamics model that is validated against Phoenix’s actual ridership for the 2010-2020 period and predicts transit ridership in Riyadh. The model closely approximated Phoenix’s ridership up until 2016.
The Riyadh model estimated that ridership would start at six million riders, surpassing the Royal Commission for Riyadh City (RCRC) prediction of 1.6 million initially. The results of both parts indicate that, given that Riyadh is more densely built with a smaller area and has a more extensive transportation system and bigger population, this should serve as an incentive to promote a more transit-oriented built environment by increasing walkability and dense mixed-use developments throughout the city.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Transfer Learning for Macroscale Defect Detection in Semiconductor Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/152881" rel="alternate"/>
<author>
<name>Waterworth, John Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/152881</id>
<updated>2023-11-03T03:30:31Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Deep Transfer Learning for Macroscale Defect Detection in Semiconductor Manufacturing
Waterworth, John Timothy
This thesis proposes improvements to wafer macro inspection processes and tools on four axes at Texas Instruments. The major axis of improvement involves real-time machine learning recommendations regarding the presence of macroscale defects. In this work, a model for detecting central defects is described in detail, and a novel approach to overcoming data scarcity through the creation of synthetic data is deployed. The binary classifier model achieves an out-of-distribution area under curve (AUC) of 0.909 for detecting hotspot defects. Detection for other classes of central defects is also explored but limited by even greater data sparsity. Models for catching spin-on-glass defects and edge defects are also trained, with out-of-distribution AUCs reaching 0.927 and 0.906 respectively. Other axes of improvement covered in this thesis involve gauge repeatability and reliability analysis of macro inspection tools, the creation of a new user interface called OwlView, and the trial of a new macro inspection system used in-line on photolithography tools for greater efficiency. Gauge repeatability and reliability analysis gives insight into tool function and assists the team and technicians in root cause analysis. Several hardware failures of current toolsets are identified and addressed. Maintenance procedures are also updated to keep tools operating within specifications. The OwlView interface is developed with features to increase user efficiency. Additionally, the interface helps create an infrastructure for tagging more data, which will be fed back into the models to address data scarcity. Lastly, an in-line inspection trial shows achievable high-quality wafer images compatible with the machine learning and inspection infrastructures developed in this work.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the accessibility and usability of motion capture&#13;
technology: design and development of indoor MoCap&#13;
hardware system</title>
<link href="https://hdl.handle.net/1721.1/152880" rel="alternate"/>
<author>
<name>Chang, Cheng</name>
</author>
<id>https://hdl.handle.net/1721.1/152880</id>
<updated>2023-11-03T03:30:37Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing the accessibility and usability of motion capture&#13;
technology: design and development of indoor MoCap&#13;
hardware system
Chang, Cheng
Motion capture technology (MoCap) is a revolutionary method for translating real-world subjects’ movements into digital content across various industries, including robotics, medical devices, gaming, and biomechanics. This paper investigates how to make MoCap more accessible and usable to a broader and more diverse audience. Endorsing a user-centric design and development approach, the researchers defined the problem statement as wider acceptance and adoption of MoCap technology. Subsequently, comprehensive market research and real-world MoCap use cases guided how the researchers brainstormed solutions. After carefully considering factors such as camera angles, pole styles, height, and light conditions, the researchers also incorporated various related sensors, such as vibration meters and distance sensors, to generate functional prototypes and test their ideas. Compared with traditional motion capture devices, the resulting MoCap system demonstrates an easier way to deploy MoCap and a steadier system under consistent vibrations. This improved accessibility and stability allows not only scientists and researchers but also sports coaches, doctors, and students to use MoCap effectively. In conclusion, this research contributes to bringing MoCap technology wider adoption and more practical applications. Meanwhile, the system’s structural stability, manufacturing method, integration with other sensors, and reliance on Sony RX0 cameras with resolution and frame-rate limitations can be optimized in the future to meet even broader user needs.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating SigmaOS with Kubernetes for Orchestrating Microservice and Serverless Applications</title>
<link href="https://hdl.handle.net/1721.1/152879" rel="alternate"/>
<author>
<name>He, Yizheng</name>
</author>
<id>https://hdl.handle.net/1721.1/152879</id>
<updated>2023-11-03T03:52:27Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Evaluating SigmaOS with Kubernetes for Orchestrating Microservice and Serverless Applications
He, Yizheng
SigmaOS is a new multi-tenant cloud operating system that simplifies distributed application development. Its design centers around the novel concepts of realms and procs. A realm presents a tenant with a shared global namespace that hides the machine boundaries. Tenants structure their applications as process-like procs interacting through the realm’s namespace. Procs are lightweight, stateful, and can communicate. SigmaOS manages the scheduling and execution of procs to achieve high resource utilization and performance isolation.&#13;
&#13;
This thesis compares SigmaOS with Kubernetes, a mainstream cloud operating system, using a microservice-style social network website and a serverless image resizing program. It measures their performance on a small-scale cluster in CloudLab. The SigmaOS version of the social network is easier to build (30% fewer lines), and its image resizing starts faster (25%–89%). SigmaOS performs comparably to Kubernetes regarding latency and resource consumption when running a single application but provides better performance isolation when running multiple applications in separate realms: latency increases by 4–11% with concurrent applications in SigmaOS versus over 150% in Kubernetes.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aggressive Aerial Grasping using a Soft Drone with Onboard Perception</title>
<link href="https://hdl.handle.net/1721.1/152878" rel="alternate"/>
<author>
<name>Ubellacker, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/152878</id>
<updated>2023-11-03T03:48:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Aggressive Aerial Grasping using a Soft Drone with Onboard Perception
Ubellacker, Samuel
Contrary to the stunning feats observed in birds of prey, aerial manipulation and grasping with flying robots still lack versatility and agility. Conventional approaches using rigid manipulators require precise positioning and are subject to large reaction forces at grasp, which limit performance at high speeds. The few reported examples of aggressive aerial grasping rely on motion capture systems, or fail to generalize across environments and grasp targets. We describe the first example of a soft aerial manipulator equipped with a fully onboard perception pipeline, capable of robustly localizing and grasping visually and morphologically varied objects. The proposed system features a novel passively closing tendon-actuated soft gripper that enables fast closure at grasp, while compensating for position errors, complying to the target-object morphology, and dampening reaction forces. The system includes an onboard perception pipeline that combines a neural-network-based semantic keypoint detector with a state-of-the-art robust 3D object pose estimator, whose estimate is further refined using a fixed-lag smoother. The resulting pose estimate is passed to a minimum-snap trajectory planner, tracked by an adaptive controller that fully compensates for the added mass of the grasped object. Finally, a finite-element-based controller determines optimal gripper configurations for grasping. Rigorous experiments confirm that our approach enables dynamic, aggressive, and versatile grasping. We demonstrate fully onboard vision-based grasps of a variety of objects, in both indoor and outdoor environments, and up to speeds of 2.0 m/s, the fastest vision-based grasp reported in the literature. Finally, we take a major step in expanding the utility of our platform beyond stationary targets, by demonstrating motion-capture-based grasps of targets moving up to 0.3 m/s, with relative speeds up to 1.5 m/s.&#13;
&#13;
Video Attachment: https://www.youtube.com/watch?v=HF4M7TooqfE
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Bovine Methane Emissions: Respiratory Simulation and Optical Gas Imaging Methods</title>
<link href="https://hdl.handle.net/1721.1/152877" rel="alternate"/>
<author>
<name>Huang, Zhong Qian</name>
</author>
<id>https://hdl.handle.net/1721.1/152877</id>
<updated>2023-11-03T03:51:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessing Bovine Methane Emissions: Respiratory Simulation and Optical Gas Imaging Methods
Huang, Zhong Qian
Bovine methane emissions contribute significantly to global greenhouse gas levels. Quantifying these emissions is of key importance in mitigating their production. This work investigates the physical simulation of cattle breaths and the assessment of their methane content through the use of optical gas imaging (OGI). A physical respiratory simulator was designed and built to replicate cow exhalations in controlled laboratory settings, successfully emulating breath flow, tidal volume, respiration rate, temperature, and methane concentration. The simulator was used in infrared imaging experiments that demonstrated the feasibility of using OGI as a technique for measuring breath methane content. To visualize breath gas plumes, image processing methods were developed, encompassing background subtraction, frame differencing, and optical flow. These methods enabled the characterization of plume intensity and movement dynamics under varying concentrations and temperatures. Quantification techniques were developed to compute a measure of breath methane content from thermal video footage. Detected methane intensity exhibited a positive linear correlation with breath methane concentration within the range of 1000–4000 ppm. The influence of breath exit temperature on detected methane intensity was found to be minimal, with intensity primarily scaling with the difference between ambient air temperature and background temperature. These observed trends were found to be in alignment with those predicted by theoretical models.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Semi-analytical Model for Nonlinear Elliptical Inclusions with Spherical Eigenstrains</title>
<link href="https://hdl.handle.net/1721.1/152876" rel="alternate"/>
<author>
<name>Bonavia, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/152876</id>
<updated>2023-11-03T03:16:26Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Semi-analytical Model for Nonlinear Elliptical Inclusions with Spherical Eigenstrains
Bonavia, Joseph
Motivated to understand the stresses induced by the formation of precipitates in metals, in 1957 John D. Eshelby provided a fully analytical solution for the stress and deformation fields induced by an incompatible ellipsoidal inclusion embedded within an infinite matrix. Over the past six decades, his theory, which considers linearly elastic materials, has been essential in developing homogenized micromechanical models for metals and composites. However, as solid mechanics research increasingly focuses on soft materials such as biological tissues, a linear theory is no longer sufficient. Despite numerous potential applications ranging from medical diagnosis to industrial manufacturing processes, an accurate analytical or semi-analytical nonlinear extension of Eshelby’s theory of the elliptical inclusion problem has yet to be developed. This work presents a novel approach to solving the 2D elliptical inclusion problem that satisfies incompressibility. It is shown to converge to the Eshelby solution in the linear limit for the case of isotropically growing inclusions. Moreover, this model matches almost identically to 2D finite element simulations for large incompatibilities, far beyond the linear range, while providing a complete description of the field through a single function. Finally, it is suggested that the simplified solution can enable the use of homogenization methods for future nonlinear micromechanical models and can help to elucidate various growth phenomena observed in nature.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Recurrent Metastatic Events</title>
<link href="https://hdl.handle.net/1721.1/152875" rel="alternate"/>
<author>
<name>Singh, Harveer</name>
</author>
<id>https://hdl.handle.net/1721.1/152875</id>
<updated>2023-11-03T03:03:34Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling Recurrent Metastatic Events
Singh, Harveer
Progression of cancer is marked by metastatic spread, with certain tumors preferentially spreading to specific organ sites – known as organotropism. The site of metastatic spread can significantly impact the mortality of cancer patients, for example with metastases to the brain being highly lethal, but the underlying mechanisms are poorly understood. Here, we aim to characterize the genetic landscape of metastatic drivers to specific organ sites using large-scale tumor sequencing and medical record data. We propose and evaluate a recurrent event survival model that draws additional statistical power from patients with multiple metastases, while modeling loss to follow-up and mortality. We analyze tumor sequencing data from over 15,000 unique patients across 8 primary cancers and 7 target organ sites to identify genetic drivers of organotropism among a panel of 547 genes. We identify 1,130 somatic alterations significantly associated with organotropism, including 171 associations with brain metastases. We train a penalized predictive model that can accurately identify individuals at high risk for metastases to specific organ sites in held-out samples. For example, the predicted top 10% of non-small cell lung cancer patients exhibit a hazard ratio of 1.96 for brain metastases relative to the bottom 10%. Our results demonstrate the power of recurrent event modeling in a real-world clinical cohort to characterize the genetic landscape of organotropic events.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Pipeline for Synthesizing Action-conditioned Human Motion from Raw Motion Capture Data</title>
<link href="https://hdl.handle.net/1721.1/152874" rel="alternate"/>
<author>
<name>Tiwari, Ritaank</name>
</author>
<id>https://hdl.handle.net/1721.1/152874</id>
<updated>2023-11-03T04:01:02Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Pipeline for Synthesizing Action-conditioned Human Motion from Raw Motion Capture Data
Tiwari, Ritaank
In many sports, less-experienced trainees will often draw inspiration from videos of experts. While this can be an effective tool for improvement, it does not let the trainee focus specifically on improving their skills in light of the limitations of their current abilities, body type, and weaknesses.&#13;
&#13;
Since sports are very competitive, there exists a need to convert expert movements to a series of standardizable forms and movements that can then be pedagogically applied to the differing needs of various trainees: specifically, their different abilities, body types, and weaknesses.&#13;
&#13;
Effectively, this conversion requires a pipeline that can take an input of motion capture data, automatically label the markers used, create a skeletal representation, and then train a machine learning model to accurately synthesize human motion, conditioned on the action type.&#13;
&#13;
The outputted motions can be rendered for any body type and could be customized to the trainee. The designed pipeline is not fencing-specific – it is highly adaptable to the nature of the data or sport, robust to errors and noise, and tightly integrated into an easy-to-use library.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Project Funding: A Framework for Rapid Evaluation of Innovation Projects for Implementation Using a System Approach</title>
<link href="https://hdl.handle.net/1721.1/152873" rel="alternate"/>
<author>
<name>Gonzalez, Nicholas Ciro</name>
</author>
<id>https://hdl.handle.net/1721.1/152873</id>
<updated>2023-11-03T03:57:40Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Innovation Project Funding: A Framework for Rapid Evaluation of Innovation Projects for Implementation Using a System Approach
Gonzalez, Nicholas Ciro
Success rates of corporate innovation are notoriously low. Improving corporate innovation success rates increases investment efficiency and enables progress toward an improved future. A literature review was completed to develop an understanding of innovation strengths and weaknesses often present in corporations. System engineering and quantitative analysis tools were explored to address the common weaknesses present in corporate innovation investment. The investment step was targeted as a critical decision point for progressing proposals forward for further implementation. The framework mitigates common pitfalls of corporate innovation while enabling the corporation to architect the innovation process to fit its needs. The framework is a five-step process: risk rank to define the predictors of innovation project success, establish a success function to calculate innovation success likelihood, solicit project proposals from the entire employee base, plot a tradespace to visualize the tradeoffs between all possible innovation projects, and finally select the portfolio of projects for investment.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A LIGO Double Pendulum Suspension Prototype for Reducing Unwanted Cross-Couplings</title>
<link href="https://hdl.handle.net/1721.1/152872" rel="alternate"/>
<author>
<name>Lee, Regina E.</name>
</author>
<id>https://hdl.handle.net/1721.1/152872</id>
<updated>2023-11-03T03:01:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A LIGO Double Pendulum Suspension Prototype for Reducing Unwanted Cross-Couplings
Lee, Regina E.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) is a Michelson interferometer with 4 km long arms used to detect gravitational waves passing through Earth. LIGO uses extremely isolated optics in the form of suspensions to measure slight changes in laser path length down the two interferometer arms. Unwanted cross-couplings between degrees of freedom in LIGO suspensions pose a large problem when trying to isolate the optics in the interferometer.&#13;
&#13;
This thesis provides an analysis of the effects of changing the wire geometry of a double pendulum as a case study for LIGO pendulum designs. By changing the wire attachment point to be closer to the center of mass, we are able to see a decrease in longitudinal-to-pitch coupling by about a factor of 3. We observe that the pitch-to-pitch coupling decreases by a factor of approximately 1.5 at DC when comparing the new four-wire configuration to the original two-wire configuration. However, the first pitch resonance increases slightly. This resonance is most influenced by a combination of the wire attachment point and spring stiffness. The resonance can be moved by changing these factors.&#13;
&#13;
This project has two main components. The first is a state space model that describes the equations of motion for the double pendulum and is used to predict dynamic responses. The second is the construction of a physical double pendulum prototype, which is used to verify results from the model. The experimental results show differences in dynamics compared to the state space model due to off-center forcing, and the model was updated to include these dynamics. The physical pendulum is set up outside of vacuum and is not manufactured to the tight tolerances of real LIGO suspensions. Therefore, we do not have the precision necessary to experimentally attach the wires directly at the center of mass and did not measure these transfer functions. In conclusion, our observations lead us to believe that suspending the top mass with four wires is beneficial for reducing the longitudinal-to-pitch coupling. However, it is necessary to align the pivot point of the wires to the actuation point in order to demonstrate this. Future research can be done on placing the wire pivot point directly at the actuation point.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a market-ready tractor for small farms in low- and middle-income countries</title>
<link href="https://hdl.handle.net/1721.1/152871" rel="alternate"/>
<author>
<name>Goldbach, Collin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/152871</id>
<updated>2023-11-03T03:01:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design of a market-ready tractor for small farms in low- and middle-income countries
Goldbach, Collin J.
This paper presents the design, testing, and user feedback of a new prototype of a tractor platform intended for use on small, resource-constrained farms. This development builds on past work by implementing several upgrades which promote market competitiveness and maximize functionality, ergonomics, and aesthetics. Stakeholder discussions, review of prior art, and recommendations from past authors were used to draft new functional requirements for a better vehicle. Hydraulic power systems were implemented that significantly improve user comfort by automating repetitive or unwieldy tasks. Newly designed crop-spraying solutions based on feedback from farmers allowed the tractor to perform crop maintenance functions that larger vehicles cannot while also reducing worker exposure to harmful chemicals. A rear-oriented PTO was installed to allow the vehicle to power external implements. A redesign of a stabilizing solution increased the vehicle’s versatility in managing various crops and transit between properties. The upgraded vehicle was tested in Massachusetts and validated by stakeholder surveys in India. Farmers from Massachusetts and from the Philippines who tested the vehicle responded positively. They indicated the tractor would be a valuable addition to their small farms and would substantially reduce drudgery. Testers believed the format of the vehicle was familiar, easy to learn, and comfortable to ride. This paper demonstrates that two-wheeled tractors are not only viable, but can produce the same utility as conventional tractor layouts at a significantly lower cost.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of a Handle Robot for Providing Bodily Support to Elderly Persons</title>
<link href="https://hdl.handle.net/1721.1/152870" rel="alternate"/>
<author>
<name>Bolli Jr., Roberto A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152870</id>
<updated>2023-11-03T03:49:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design and Optimization of a Handle Robot for Providing Bodily Support to Elderly Persons
Bolli Jr., Roberto A.
Age-related loss of mobility and an increased risk of falling remain major obstacles for older adults to live independently. Many elderly people lack the coordination and strength necessary to perform activities of daily living, such as getting out of bed or stepping into a bathtub. A traditional solution is to install grab bars around the home. For assisting in bathtub transitions, grab bars are fixed to a bathroom wall. However, they are often too far to reach and stably support the user; the installation locations of grab bars are constrained by the room layout and are often suboptimal. In this thesis, we present a mobile robot that provides an older adult with a handlebar located anywhere in space – “Handle Anywhere”. The robot consists of an omnidirectional mobile base attached to a repositionable handlebar. We further develop a methodology to optimally place the handle to provide the maximum support for the elderly user while performing common postural changes. A cost function with a trade-off between mechanical advantage and manipulability of the user’s arm was optimized in terms of the location of the handlebar relative to the user. The methodology requires only a sagittal plane video of the elderly user performing the postural change, and thus is rapid, scalable, and uniquely customizable to each user. A proof-of-concept prototype was built, and the optimization algorithm for handle location was validated experimentally.&#13;
&#13;
Additionally, we present the results of a study to discover any correlations between an elderly person’s preferred handlebar pose and various demographic indicators, self-rated mobility for tasks requiring postural change, and biomechanical markers. For simplicity, we considered only the case where the handlebar was positioned directly in front of the user, as this confined the relevant body kinematics to a 2D sagittal plane. This data-driven approach complements the cost function described earlier by assessing how a handlebar should be positioned based on data from actual elderly people.&#13;
&#13;
Lastly, we introduce a novel design for a wheel capable of changing configuration based on the surface underneath it, such that there will always be a high coefficient of friction between the wheel and the ground. The wheel design was refined through experimental tests on various floor surfaces commonly found in the homes of elderly people.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Pattern and Anomaly Detection Methods in Influence Campaigns</title>
<link href="https://hdl.handle.net/1721.1/152868" rel="alternate"/>
<author>
<name>Mitchell, William B.</name>
</author>
<id>https://hdl.handle.net/1721.1/152868</id>
<updated>2023-11-03T03:16:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Developing Pattern and Anomaly Detection Methods in Influence Campaigns
Mitchell, William B.
Influence operations are a prominent psychological component of modern warfare. Recent historic events including the 2016 US election, the 2021 Myanmar Coup, and the Russian invasion of Ukraine in early 2022 have catapulted Department of Defense (DoD) interest in modeling and predicting outcomes of military and political events, particularly in regions of strategic interest to the US. The MIT Lincoln Laboratory, Group 52, under contract with USTRANSCOM, has developed the Global Influence Model (GIM) to evaluate the information landscape at scale. This project seeks to expand on the previous work of Group 52 on GIM, incorporating pattern and anomaly detection methods. Several statistical and machine learning methods were applied to a data set of approximately 30,000 news articles from a 2-year period between August 2019 and August 2021. Statistical methods included moving average models and Singular Spectrum Analysis (SSA). Machine learning techniques included the use of an autoencoder and an LSTM neural network. These methods provide different ways to visualize and characterize the data. Together, the approaches offer a holistic picture of events in specific countries over a time period of interest. The figures generated by these techniques may be a useful tool for a military intelligence analyst. These products allow for the rapid visualization of large news article data sets that can help model influence campaigns as they unfold.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering Novel Microarchitectural Security Vulnerabilities in Modern Processors</title>
<link href="https://hdl.handle.net/1721.1/152860" rel="alternate"/>
<author>
<name>Ravichandran, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/152860</id>
<updated>2023-11-03T03:42:45Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Discovering Novel Microarchitectural Security Vulnerabilities in Modern Processors
Ravichandran, Joseph
For decades, computer security issues such as viruses, worms, and Trojans have caused significant damage to computer systems across the world. Many of these security issues are caused by vulnerabilities in software allowing for memory corruption, a kind of attack in which the contents of a computer’s memory are corrupted by an attacker to change a program’s behavior. While much research has been done on how to improve software security, vendors are increasingly turning to hardware defenses to compensate for software vulnerabilities. One such example is ARM Pointer Authentication, a security feature that enforces pointer integrity through the use of cryptographic hashes.&#13;
&#13;
I will introduce the PACMAN attack, a novel attack methodology that defeats Pointer Authentication by leveraging the behavior of the CPU’s microarchitecture. I will present multiple proof-of-concept attacks showing PACMAN defeating Pointer Authentication on the Apple M1 SoC, the world’s first desktop processor that supports Pointer Authentication. I will also document the tools I have created to perform detailed reverse engineering of the microarchitecture on Apple Silicon platforms, enabling both this work and future research.&#13;
&#13;
I will also present two memory corruption vulnerabilities I have discovered and reported in modern operating systems as case studies of the kind of software vulnerability Pointer Authentication tries to mitigate. The first is an uninitialized memory issue in Linux, and the second is a race condition leading to a type confusion in XNU. Finally, I will present a series of classroom exercises I have created to teach students about CPU vulnerabilities like PACMAN.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Growth for Whom? Sacrifice of Chicago’s Chinatown Then and Now</title>
<link href="https://hdl.handle.net/1721.1/152857" rel="alternate"/>
<author>
<name>Chen, Yu Jing</name>
</author>
<id>https://hdl.handle.net/1721.1/152857</id>
<updated>2023-11-03T04:00:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Growth for Whom? Sacrifice of Chicago’s Chinatown Then and Now
Chen, Yu Jing
The utilitarian planning practices that marked the era of Urban Renewal and the beautification efforts of the City Beautiful Movement led to widespread displacement and destruction of many Chinatowns across America, including Chicago’s. The history of Chicago’s Chinatown tells a story of a community that has always been an afterthought in city planning and priority, continually sidelined for the purposes of “broader” city goals. Recent decades, however, have brought about a shift in planning paradigms, as values of equity and justice have become increasing priorities. This shift comes at a time when Chinatowns across the nation are experiencing change of their own as they face pressures of displacement largely due to downtown expansion. Chicago’s Chinatown, however, is an exception, largely regarded as America’s last growing Chinatown.&#13;
&#13;
Amidst these changing contexts, this thesis strives to understand how Chicago today has evolved in how it values and centers Chinatown in its planning processes, particularly as the largest private development in Chicago history, The 78, is slated to become Chinatown’s neighbor. By examining the planning process for The 78, this thesis seeks to illuminate whether and how Chicago city planning has moved beyond the sacrificial way in which it historically treated Chinatown during the periods of City Beautiful and Urban Renewal.&#13;
&#13;
This research relies on historical analysis of documents, maps, photographs, and more to understand the relationship between planning and Chinatown during the eras of City Beautiful and Urban Renewal. I then examined The 78 development process beyond what was publicly reported by conducting a number of semi-structured interviews. Ultimately, I found that in many ways, the sacrificial nature of planning has not changed, although the way this sacrifice takes form is different. While economic interests for large-scale planning projects stay the same, social interests have evolved due to changing societal values. Today, the notion of diversity has come to be viewed as an amenity or asset, and as such, Chinatown’s function as a cultural center is capitalized upon despite ultimately still being subjected to sacrifice for the city’s economic advancement.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VocalCords: Exploring Tactile Interaction and Performance with the Singing Voice</title>
<link href="https://hdl.handle.net/1721.1/152855" rel="alternate"/>
<author>
<name>Addae, Maxwell K.</name>
</author>
<id>https://hdl.handle.net/1721.1/152855</id>
<updated>2023-11-03T03:37:34Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">VocalCords: Exploring Tactile Interaction and Performance with the Singing Voice
Addae, Maxwell K.
The close relationship between touch, gesture, and sound plays a critical role in expressive musical performance. Many acoustic instruments, ranging from strings to brass to percussion, involve some coupling of the “feel” of the instrument in the hands and the corresponding sound produced. The singing voice, however, is one of the few musical instruments that typically does not involve touch-mediated interaction. Despite several neurological, psychological, and social connections demonstrated between the hands and voice, the coupling of touch and voice is surprisingly absent from traditional vocal performance technologies. This provides the motivation for VocalCords, which explores the design of a new digital music interface inviting tactile interaction and performance with the singing voice. The interface makes use of physical rubber cords, acting as stretch sensors, which are pulled and manipulated by the hands of the singer as they vocalize to augment and modify their voice in real-time – as if they were able to physically “touch” their own vocal cords. This approach allows for expressive, tactile control over the singing voice, which suggests a striking relationship between physical and musical tension. Through a series of prototyping iterations and a public performance with the interface, I explore the potential of touch-mediated vocal performance, as well as how this added tactile interaction may alter our experience with, and perception of, our singing voices.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Permutation-based Significance Tests for Multi-modal Hierarchical Dirichlet Processes with Application to Audio-visual Data</title>
<link href="https://hdl.handle.net/1721.1/152853" rel="alternate"/>
<author>
<name>Anderson, Madeline Loui</name>
</author>
<id>https://hdl.handle.net/1721.1/152853</id>
<updated>2023-11-03T04:06:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Permutation-based Significance Tests for Multi-modal Hierarchical Dirichlet Processes with Application to Audio-visual Data
Anderson, Madeline Loui
Complex underlying distributions in multi-modal data motivate the need for data fusion methods that integrate observations of different modalities in a meaningful way. We explore the multi-modal hierarchical Dirichlet process (mmHDP) mixture model as a Bayesian non-parametric approach to data fusion. In particular, we elaborate on its censored-data perspective, which aligns observations at a group level to accommodate missing data in any modality. To explore the model behavior, we develop a processing pipeline that applies the mmHDP to audio-visual data, a common and practical multi-modal system. We apply this pipeline to musical data with known audio-visual relationships and provide in-depth qualitative analyses of the learned model parameters. Because of the model’s non-parametric and unsupervised clustering nature, it can be difficult to quantify the significance of the learned mmHDP structure. We propose a novel permutation testing framework that empirically measures the significance of the mmHDP structure and demonstrate its viability using both synthetic and real audio-visual data. The results convey that the mmHDP model captures meaningful structure in the audio-visual data and that the permutation testing framework is a viable method for quantifying model significance.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tame Long-Horizon Model-Based Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/152852" rel="alternate"/>
<author>
<name>Chen, Boyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/152852</id>
<updated>2023-11-03T03:39:23Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tame Long-Horizon Model-Based Reinforcement Learning
Chen, Boyuan
Model-free reinforcement learning algorithms have exhibited great potential in solving single-task sequential decision-making problems with high-dimensional observations and long horizons, but are known to be hard to generalize across tasks. Model-based RL, on the other hand, learns task-agnostic models of the world that naturally enable transfer across different reward functions, but struggles to scale to complex environments due to the compounding error of applying a learned dynamics model iteratively. To get the best of both worlds, we propose a self-supervised reinforcement learning method that enables the transfer of behaviors across tasks with different rewards, while circumventing the challenges of model-based RL. In particular, we show that self-supervised pre-training of model-free reinforcement learning with a number of neural-network random features as rewards allows implicit modeling of long-horizon environment dynamics. Planning techniques like model-predictive control using these implicit models then enable fast adaptation to problems with new reward functions. Our method is self-supervised in that it can be trained on offline datasets without reward labels, but can then be quickly deployed on new tasks. We validate that our proposed method enables transfer across tasks on a variety of manipulation and locomotion domains in simulation, opening the door to generalist decision-making agents.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Design and Analysis for a DeNOₓ Catalyst in Aviation</title>
<link href="https://hdl.handle.net/1721.1/152851" rel="alternate"/>
<author>
<name>Strauch, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152851</id>
<updated>2023-11-03T03:57:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Structural Design and Analysis for a DeNOₓ Catalyst in Aviation
Strauch, Michael
Nitrogen oxides, collectively known as NOₓ, are compounds that can cause health damage to humans, lead to smog production and acid rain, and cause the formation of ground-level ozone that is detrimental to life. NOₓ formation in aircraft engines occurs because of high-temperature reactions between the nitrogen and oxygen naturally present in the air and is a spontaneous and unintended pollutant. One technology that has proven effective in controlling NOₓ emissions in other industries is selective catalytic reduction (SCR). As a post-combustion emissions control (PCEC) exhaust treatment, the technology works by introducing a nitrogen-rich reductant into the exhaust stream, then passing the flow through a catalyst. This device facilitates reactions between the dosed exhaust flow and catalyst wall to create harmless N₂ and H₂O at the cost of lost engine efficiency due to adding back pressure to the engine system. In prior work, a “pleated filter” design of an SCR catalyst was proposed as a potential solution for reducing NOₓ in aviation. The work covered in this thesis describes the design and analysis approach for such a device to meet the dynamic loads encountered during flight. A multi-level structural finite element analysis (FEA) of both the honeycomb plates and of the frame components that support these plates is needed to enable this technology. Using a stiffness matrix approach, the honeycomb catalyst was simplified into equivalent panels that were used to analyze the catalyst’s overall structure. The overall additional weight from the structure necessary to support this novel catalyst is estimated to be between 80 and 90 kg, which is within the additional mass allowance estimated in the original work. This implies that this design is structurally feasible.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fueling an Energy Transition: Designing an Optimal Portfolio of Competing Fuels Under Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/152850" rel="alternate"/>
<author>
<name>Abel, Samuel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152850</id>
<updated>2023-11-03T03:39:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Fueling an Energy Transition: Designing an Optimal Portfolio of Competing Fuels Under Uncertainty
Abel, Samuel A.
To facilitate the energy transition, firms must allocate their investment between incumbent and emerging fuel capacity. Understanding how to pace investment between competing energy options during this transition is crucial for energy companies and policymakers. Allocating investments among competing fuel technologies is complex due to uncertainty, improving costs of emerging fuels, market competition, and the delay between capacity investment and production.&#13;
&#13;
To address this complexity, we develop a stochastic dynamic optimization model incorporating dynamic decision-making, Nash-Cournot equilibrium between competing firms, and uncertainty of competing fuel parameters, such as hydrogen demand and technology improvements. The model is also the first, to our knowledge, to include technology learning rates in dynamic optimization models for energy markets with firms in Cournot competition. Learning rates are a critical factor in assessing cost improvements and competitiveness of emerging fuels.&#13;
&#13;
The model provides valuable insights for profit-driven firms and policymakers:&#13;
(1) Firms need to account for market structure and learning rates to optimize capital allocation between fuels, as neglecting these factors can lead to sub-optimal immediate capacity investment decisions, and regret, measured as sub-optimal private gain.&#13;
(2) Incorporating stochastic modeling is also required for firms. We show that deterministic models lead to sub-optimal capacity investment decisions and increasing profit regret as the uncertainty range increases.&#13;
We observe that learning rates can be complementary with carbon taxes and competition, which has implications for policymakers:&#13;
(3) Encouraging market participation reduces fuel costs through learning. This increases investment in the emerging fuel by a greater amount than improved competition alone.&#13;
(4) Early implementation of carbon taxes can encourage capacity investment and production. We observe that under certain circumstances, with a sufficient learning rate, this early implementation can reduce the need for stricter future taxes.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Idea of Heritage in Nineteenth-Century Iran: Nādir Mīrzā’s Account on Tabriz</title>
<link href="https://hdl.handle.net/1721.1/152848" rel="alternate"/>
<author>
<name>Moossavi, Boshra</name>
</author>
<id>https://hdl.handle.net/1721.1/152848</id>
<updated>2023-11-03T03:01:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Idea of Heritage in Nineteenth-Century Iran: Nādir Mīrzā’s Account on Tabriz
Moossavi, Boshra
This thesis focuses on the concept of heritage and its preservation in nineteenth-century Iran through the perspective of Prince Nādir Mīrzā Qajar (1827-1888/9). While heritage and preservation have been extensively studied in the Euro-American context, little attention has been given to their meanings in non-Western discourses, particularly in Iran. The few studies on Iran focus on the institutionalization of heritage and the Western influences on it from Europeans and Iranian reformists. This research seeks to provide a fresh perspective on the idea of heritage for non-reformist groups of people who were rooted in the religious and cultural traditions of Iran. To this end, Nādir Mīrzā, as a Qajar prince whose writing reflects traditional Iranian patterns of thought, was selected for this study. Through an in-depth investigation of Nādir Mīrzā’s The History and Geography of Tabriz, I shed light on the difference between Nādir Mīrzā’s understanding of architecture and what was later promoted as heritage by the Society for National Heritage in the twentieth century. The manuscript belongs to the antiquarian category of texts that focus on history and geography in tribute to rulers and princes. However, unlike other works of this genre that mainly consist of chronicles and descriptions, I would contend that The History and Geography of Tabriz offers insight into a broader era of decay of the traditional built environment in Qajar Iran. Moreover, the city of Tabriz, situated near the Ottoman Empire and inhabited predominantly by Azari speakers, is significant from a strategic and ethnic point of view.&#13;
&#13;
To this end, I examine Nādir Mīrzā’s background, including his family lineage, education, and writing style, to understand how his understanding of heritage was shaped. Then I investigate how Nādir Mīrzā’s writing is itself a form of heritage, as it attempts to preserve certain aspects of history, and how religion, class, linguistic, and ethnic identities influenced his choices in historicizing the past. Then, following a brief discussion of the reformists’ values on heritage, I uncover Nādir Mīrzā’s specific values by analyzing his accounts of buildings. Finally, I investigate the role of those values in the preservation and maintenance of buildings by extracting the reasons for construction, repair, and destruction from Nādir Mīrzā’s accounts. The conclusion proposes further investigation into other sources to complete the narrative of non-European understandings of heritage in nineteenth-century Iran.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The power-gas demand impacts and regulatory implications for the future of gas systems under the electrification of space heating in cold climates</title>
<link href="https://hdl.handle.net/1721.1/152843" rel="alternate"/>
<author>
<name>Santoni-Colvin, Morgan</name>
</author>
<id>https://hdl.handle.net/1721.1/152843</id>
<updated>2023-11-03T03:33:57Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The power-gas demand impacts and regulatory implications for the future of gas systems under the electrification of space heating in cold climates
Santoni-Colvin, Morgan
The call for action to mitigate GHG emissions necessitates the decarbonization of the building sector. The electrification of heating, especially via efficient air-source heat pumps coupled with a low-carbon electricity grid, is considered an attractive option for displacing emissions from fossil-fueled heating systems. While the opportunity for decarbonization is high in emission-intensive housing stocks such as that of the U.S. New England region, the high demand for heating in cold climates elicits concerns about energy demand impacts. Furthermore, there is concern about what electrification and the broader call for decarbonization might imply for gas distribution systems, which will face declining usage and most likely infrastructural retirement.&#13;
&#13;
First, this thesis develops a bottom-up building energy modeling framework to quantify the hourly power and gas demand impacts of the electrification of residential heating in New England under a range of electrification and weather scenarios for 2050. We find that deep electrification greatly diminishes gas demand and increases electricity demand, with a potentially drastic increase in peak electricity demand given current technologies. Furthermore, the weather-induced variation in peak demand becomes more pronounced. These adverse demand impacts can be mitigated by envelope improvements and motivate the implementation of demand-side flexibility, but the effectiveness of these measures may be limited by long peak demand durations. However, the adverse demand impacts of deep electrification must be weighed against the downsides of less-aggressive electrification, which might actually result in worse demand impacts in the long term. Second, we compare the current frameworks of Massachusetts regulators for future gas system planning against those of other states, finding that policymakers in Massachusetts must address several issues in order to prepare for the transformative effect that electrification will have on gas distribution systems. The resulting recommendations highlight the need for continuous long-term gas planning procedures, legal reform of the consumer right to gas service, a cautious approach toward considering alternative fuels as a mechanism for gas system decarbonization, and prioritization of equity in the allocation of the costs of gas system retirement.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery-Electric-Bus Transit System Design</title>
<link href="https://hdl.handle.net/1721.1/152840" rel="alternate"/>
<author>
<name>Besa Lehmann, Jorge Andrés</name>
</author>
<id>https://hdl.handle.net/1721.1/152840</id>
<updated>2023-11-03T03:20:19Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Battery-Electric-Bus Transit System Design
Besa Lehmann, Jorge Andrés
The increasing availability of battery-electric buses (e-buses) as a sustainable alternative for public transportation has sparked considerable interest in recent years. With a notable decrease in lithium-ion battery prices, e-buses have become a competitive option in terms of total cost of ownership when compared to diesel buses. This trend is driven by a growing awareness of the environmental impact of the transportation sector, which accounts for a significant portion of global CO2 emissions. Transit authorities at the forefront of this transition face important challenges in scaling up their operations, starting with the selection of their charging infrastructure and battery-electric-bus equipment. The problem is generally approached as a cost optimization problem that fails to represent system uncertainties comprehensively, undermining the capacity of solutions to guarantee high service levels to the public. This research contributes to this area by analyzing and comparing eight infrastructure and equipment scenarios from cost and service level perspectives, using the city of Chicago as a case study. First, an electric vehicle scheduling problem (e-VSP) is solved for each charging configuration to find efficient, robust schedules that can withstand the uncertainties of travel time and energy demand. Then, each scenario undergoes a single-charger failure simulation to assess the operational impact of energy supply disruptions. The simulation quantifies the daily number of buses at risk of breakdown (i.e., depleted battery) as a proxy for service level degradation. Finally, the life-cycle costs of each scenario are calculated according to their infrastructure and scheduled operation and compared alongside the reported bus breakdowns under failure. 
The study finds that charging configurations favoring the concentration of power capacity (i.e., chargers at the depot only) can better withstand operational uncertainties when compared to decentralized charging configurations that favor network coverage (i.e., on-route charging). The failure assessment corroborates this finding by reporting a critical degradation of service levels (i.e., multiple trip cancellations) on charging networks that include single-charger charging stops. Ultimately, this research concludes that the selection of the charging configuration will depend on the transit agency’s budget and risk profile, since the higher reliability provided by the centralization of power capacity comes at a higher life-cycle cost, even when accounting for the effects of innovation in battery technology.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long Sequence Transformer Variants on Varying Context Length</title>
<link href="https://hdl.handle.net/1721.1/152839" rel="alternate"/>
<author>
<name>Sun, Melinda</name>
</author>
<id>https://hdl.handle.net/1721.1/152839</id>
<updated>2023-11-03T03:45:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Long Sequence Transformer Variants on Varying Context Length
Sun, Melinda
Transformers are powerful and effective tools in natural language processing, but their scalability is limited by the quadratic complexity of attention. Several transformer variants that address this problem have recently been proposed, including Moving Average Equipped Gated Attention (Mega). In this thesis, we evaluate how effectively Mega uses past context, by comparing the perplexity trend as context length varies with the perplexity trend of a standard transformer. We find that Mega does not show greater benefit from longer context in a Wikipedia or book setting, though it does have a much better ability to extrapolate beyond training context lengths.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Language Conditioned System for 6-DoF Tabletop Manipulation</title>
<link href="https://hdl.handle.net/1721.1/152838" rel="alternate"/>
<author>
<name>Parakh, Meenal</name>
</author>
<id>https://hdl.handle.net/1721.1/152838</id>
<updated>2023-11-03T03:49:02Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Building a Language Conditioned System for 6-DoF Tabletop Manipulation
Parakh, Meenal
We present a full-stack modular system for solving tabletop manipulation tasks from natural language task descriptions. The tasks the system can perform include everyday pick-place tasks, such as sorting or rearrangement, and it has the ability to learn new skills. The system primarily consists of three components: perception, planning, and execution, each of which exploits recent advancements in large machine-learning models developed for particular tasks. The three components interact with each other through carefully designed interfaces, which are also crucial contributions of this work. We further evaluate different parts of the system, belonging to perception and execution, and showcase performance on example tasks, both in the real world and in simulation. The main advantage of a modular system is that no training data is required, either for training an end-to-end model or for fine-tuning. Further, recent advancements in large models such as Segment Anything and GPT-4 have made it possible to construct a modular system that incorporates vast common-sense knowledge, in contrast to traditional approaches. These large models have been trained on billions of data points of internet-scale data, allowing for zero-shot application in our system with no need for large-scale data collection. Building such modular systems has the potential to minimize the labor and time spent on the data collection step in robotics.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scene Perception for Simulated Intuitive Physics via Bayesian Inverse Graphics</title>
<link href="https://hdl.handle.net/1721.1/152837" rel="alternate"/>
<author>
<name>Shehada, Khaled K.</name>
</author>
<id>https://hdl.handle.net/1721.1/152837</id>
<updated>2023-11-03T03:02:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Scene Perception for Simulated Intuitive Physics via Bayesian Inverse Graphics
Shehada, Khaled K.
Humans have a wide range of cognitive capacities that make us adept at interpreting our physical world. Every day, we encounter new environments, yet we can parse those environments with limited visual exposure and make fairly accurate inferences about unfamiliar objects. Emulating scene understanding capacities in computational models has numerous applications ranging from autonomous driving to virtual reality. Despite the proficiency demonstrated by deep neural networks in pattern recognition, recent works have uncovered challenges in their abilities to encode prior physical knowledge, form visual concepts, and perform compositional reasoning, such as inferring inter-object relations like containment. To this end, the thesis introduces the Simulated COgnitive Tasks (SCOT) benchmark, a large-scale synthetic dataset and data creation codebase allowing for the procedural generation of videos of simulated cognitive tasks targeting intuitive physics understanding. Those cognitive tasks are adapted from tests in the literature used to comparatively assess the cognitive capacities of non-human primates. Additionally, the thesis presents an analysis of several deep learning models on the benchmark, underlining their limitations in tasks involving object permanence comprehension, quantities, and compositionality and their inability to generalize learned knowledge to complex dynamic scenes. In response to these limitations, we propose a probabilistic generative approach that leverages Bayesian inverse graphics to learn structured scene representations that facilitate learning new objects and tracking objects in dynamic scenes. Our evaluation of this model on SCOT revealed near-perfect performance on most tasks with significant data efficiency, suggesting that structured representations and symbolic inference can cooperate with deep learning methods to interpret complex 3D scenes accurately. 
Overall, this thesis contributes to the field of artificial intelligence (AI) by presenting a new method for improving scene understanding in AI models and providing a benchmark for assessing the visual cognitive capacities of computational models.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visualization and Behavioral Testing Of Common&#13;
Sense Generative Programs</title>
<link href="https://hdl.handle.net/1721.1/152834" rel="alternate"/>
<author>
<name>Chuang, Keenly Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/152834</id>
<updated>2023-11-03T04:07:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Visualization and Behavioral Testing Of Common&#13;
Sense Generative Programs
Chuang, Keenly Simon
Probabilistic generative programs are powerful tools that allow for modeling complex 3D worlds containing objects and agents. Recent advances in these programs have resulted in the creation of rich models whose traces represent 3D scenes, but challenges remain in using visualizations and simulation tools in practical implementations. In this thesis, I describe the development of infrastructure to accelerate research in this area. Specifically, I present a pipeline for synthetic data generation with physics simulation capabilities and a suite of rendering options. By leveraging existing scene graph generators and multiple visualization engines, photorealistic datasets can be produced to evaluate probabilistic generative programs and to create stimuli for gathering information on human behavior. This framework allows fine-grained temporal tracking of object poses and velocities, both with and without occlusion, facilitating the collection of rich human behavioral data on dynamic object tracking. More broadly, the tools developed here provide visualization, debugging capabilities, and configurable synthetic datasets to benchmark future progress in 3D scene understanding. Development of this infrastructure is an investment in improved synthetic data generation and analysis frameworks, and an important step toward robust probabilistic generative programs for 3D world modeling.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Illuminate</title>
<link href="https://hdl.handle.net/1721.1/152832" rel="alternate"/>
<author>
<name>Cocking, Chelsi Alise</name>
</author>
<id>https://hdl.handle.net/1721.1/152832</id>
<updated>2023-11-03T04:07:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Illuminate
Cocking, Chelsi Alise
What would it be like if we could see our movement?&#13;
&#13;
This thesis presents Illuminate, an interactive art installation in which the movements of a person through open space are visually augmented and brought to life in front of them in real time through custom interactive visualization software. Seamlessly merging physical and digital space, Illuminate submerges a participant into an artificial reality in which their usually unseen paths of movement become visible. The installation aims to give the participant a visceral yet magical moment in which they can see, interact with, and play with their once invisible wakes of motion, pushing the boundaries of our senses and making the invisible visible. The project also explores the themes of spatial computing, bodily expression, abstraction, and choreographic interfaces. &#13;
&#13;
Illuminate provides a deeper understanding of bodily motion to a general audience through a playful interactive performance space made for human creativity, expression, and public play, investigating the poetic implications of making the invisible trails of our human movement visible. It explores the relationship between our bodies’ movement, time, space, and the digital world, provoking questions about the possible implications of a world in which we can more casually and effortlessly control and interact with digital elements spatially through the free, unrestricted movement of our bodies. 
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wall-modeled Large-eddy Simulation Based on Building-block Flows</title>
<link href="https://hdl.handle.net/1721.1/152829" rel="alternate"/>
<author>
<name>Ling, Yuenong</name>
</author>
<id>https://hdl.handle.net/1721.1/152829</id>
<updated>2023-11-03T04:00:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Wall-modeled Large-eddy Simulation Based on Building-block Flows
Ling, Yuenong
A unified subgrid-scale (SGS) and wall model, the building-block flow model (BFM), for wall-modeled large-eddy simulation (WMLES) is proposed by conceiving of the flow as a collection of building blocks that enables the prediction of the eddy viscosity. The core assumption of the model is that simple canonical flows contain the essential physics to provide accurate predictions of the SGS tensor in more complex flows. The model is constructed to predict zero-pressure-gradient wall-bounded turbulence, adverse/favorable pressure gradient effects, separation, and laminar flow. The approach is implemented using a Bayesian classifier, which identifies the contribution of each building block in the flow, and a neural-network-based predictor, which estimates the eddy viscosity based on the building-block units. The training data are obtained directly from wall-modeled LES with an exact SGS/wall model for the mean quantities to guarantee consistency with the numerical discretization. The model is validated in canonical flows, the NASA High-Lift Common Research Model, and a Gaussian bump, and is shown to improve predictions with respect to current modeling approaches. The modular extensibility of the BFM paradigm will allow for future improvements by incorporating additional physics.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weakly Supervised Representation Learning for Trauma&#13;
Injury Pattern Discovery</title>
<link href="https://hdl.handle.net/1721.1/152826" rel="alternate"/>
<author>
<name>Jin, Qixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/152826</id>
<updated>2023-11-03T03:18:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Weakly Supervised Representation Learning for Trauma&#13;
Injury Pattern Discovery
Jin, Qixuan
Given the complexity of trauma presentations, particularly in those involving multiple areas of the body, overlooked injuries are common during the initial assessment by a clinician. We are motivated to develop an automated trauma pattern discovery framework for comprehensive identification of injury patterns which may eventually support diagnostic decision-making. We analyze 1,162,399 patients from the Trauma Quality Improvement Program with a disentangled variational autoencoder, weakly supervised by a latent-space classifier of auxiliary features. We also develop a novel scoring metric that serves as a proxy for clinical intuition in extracting clusters with clinically meaningful injury patterns. We validate the extracted clusters with clinical experts, and explore the patient characteristics of selected groupings. Our metric is able to perform model selection and effectively filter clusters for clinically-validated relevance.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Lagrangian, Discontinuous-Galerkin Material Response Solver for the Analysis of Ablative Thermal Protection Systems</title>
<link href="https://hdl.handle.net/1721.1/152825" rel="alternate"/>
<author>
<name>Quinn, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/152825</id>
<updated>2023-11-03T03:15:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Lagrangian, Discontinuous-Galerkin Material Response Solver for the Analysis of Ablative Thermal Protection Systems
Quinn, Christopher
Thermal protection systems (TPS) play a vital role in safeguarding aerospace vehicles from the intense aerodynamic heating encountered during hypersonic flight. One category of TPS materials manages extreme heat through pyrolysis, a process in which the elevated temperatures trigger an endothermic reaction that decomposes the material into char and gases, and through thermochemical ablation, in which char and pyrolysis gases blow away from the surface. Their use is common in high-velocity hypersonic missions through dense atmospheres, in contrast to other materials such as reusable TPS, which are often used in lower heat flux scenarios. Analysis of ablative TPS materials is challenging due to their complex material response which involves a combination of thermal, chemical, and mechanical phenomena.&#13;
&#13;
A major concern in hypersonic vehicle design is the catastrophic failure of the TPS. It is necessary to anticipate scenarios in which excessive ablation, inelastic deformation, or fracturing of the TPS occurs. A successful TPS design should account for these failure modes while balancing concerns about cost, weight, and vehicle performance.&#13;
&#13;
Computational modeling has emerged as an important tool in TPS design, and in predicting the behavior of TPS materials including failure. Existing codes are capable of modeling the thermo-chemical response of ablative TPS and predicting some modes of failure, but they are often limited in their ability to model mechanical deformation and damage.&#13;
&#13;
This thesis proposes a new computational framework for modeling the thermo-chemomechanical behavior of ablative TPS materials to address this gap. The modeling approach is based on a Lagrangian, Discontinuous-Galerkin finite element formulation of the coupled multiphysics problem, which includes models of finite elastic and inelastic deformation as well as damage, pyrolysis reactions, heat, and mass transfer. The numerical solution employs a semi-implicit time integration scheme for the nonlinear heat and mass transfer problems, while the solid mechanics is addressed using dynamic relaxation. Importantly, a mesh recession algorithm is implemented to explicitly account for changes in geometry due to material ablation. A staggered iteration scheme is used to couple the multiphysics problem.&#13;
&#13;
Several numerical examples demonstrating the correctness and versatility of the proposed method are presented. These include verification against several analytical solutions to the heat equation and benchmark problems utilized in the ablation modeling community. The mesh recession algorithm is also verified through a series of numerical tests known as patch tests. Finally, a demonstration of an arc-jet experiment of phenolic-impregnated carbon ablator (PICA) is presented to illustrate the computational framework’s ability to model thermo-chemically induced deformation, stresses, and surface recession in pyrolyzing TPS materials.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Understanding Organizational Structure and Employee Development in Tech Sector</title>
<link href="https://hdl.handle.net/1721.1/152821" rel="alternate"/>
<author>
<name>Yang, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/152821</id>
<updated>2023-11-03T03:56:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Understanding Organizational Structure and Employee Development in Tech Sector
Yang, Bryan
The technology industry holds a distinctive position due to its relentless pursuit of rapid innovation, necessitating substantial investments in research and development. As organizations seek to thrive in this constantly evolving and highly competitive environment, the modern business landscape presents formidable challenges, demanding that companies remain agile and excel in their respective industries. In response to these challenges, organizations create structures to drive efficiency and scalability, serving as a solid foundation to function smoothly, adapt to changing circumstances, and achieve their missions and visions.&#13;
&#13;
Organizational structures play a fundamental role in the success and growth of companies, providing the necessary framework to define roles and responsibilities, allocate resources, harness the collective efforts of the workforce, and drive toward sustainable growth. As such, the organizational structure directly impacts the nature of work that individuals are involved in and the array of opportunities that align with their career aspirations, which can impede or accelerate their growth potential.&#13;
&#13;
This thesis explores the intricacies of organizational structure within the technology sector through a literature review and a series of semi-structured interviews. By examining the specific needs and challenges faced in structuring organizations, this thesis analyzes the essential elements that contribute to employee development. Drawing on the critical enterprise elements of the ARIES framework, it takes a systems approach to enrich the understanding of how different organizational structures foster employee development and growth.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Investment Risks in Nature-Based Solutions: A Strategic Approach Towards Sustainable Project Implementation</title>
<link href="https://hdl.handle.net/1721.1/152820" rel="alternate"/>
<author>
<name>Zhang, Zhao</name>
</author>
<id>https://hdl.handle.net/1721.1/152820</id>
<updated>2023-11-03T03:43:42Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Mitigating Investment Risks in Nature-Based Solutions: A Strategic Approach Towards Sustainable Project Implementation
Zhang, Zhao
Reforestation plays a crucial role in combating global warming while promoting biodiversity conservation and restoring ecosystems. However, poorly planned reforestation efforts can lead to increased emissions and long-term damage to landscapes, biodiversity, and livelihoods. In addition, low carbon prices in the voluntary market further hinder reforestation project viability. Therefore, a comprehensive understanding of the scientific and economic aspects of reforestation is essential to ensure effective and sustainable implementation, especially considering the increasing number of institutions and companies that rely on reforestation to achieve ambitious environmental goals. &#13;
&#13;
This thesis focuses on the strategic challenge of developing large-scale investments in reforestation, considering both scientific and economic perspectives. It first applies analytical workflows to explore the relationship between changes in soil carbon and above-ground biomass following planting, by utilizing a diverse set of measurements across multiple interrelated sites. The resulting estimates provide insights and decision-making tools to guide investment choices concerning reforestation location, species selection, and project types from the scientific perspective. The second strategy showcases the application of engineering design flexibility through a case study of an existing reforestation project, demonstrating the benefits of adopting a progressive investment approach with scale optionality. This approach proves advantageous, particularly when dealing with policy and commercial uncertainties that influence reforestation development. Monte Carlo simulation and multi-dimensional project evaluations were implemented to investigate a range of potential scenarios and assess their implications. By integrating these scientific findings, this research contributes to an enhanced understanding of optimizing reforestation investments in a manner that aligns with scientific principles and economic considerations. This holistic approach, incorporating engineering design flexibility and robust evaluations of project dynamics, offers insights and practical guidance for stakeholders to make informed decisions and achieve optimal project outcomes.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Models for Domain-Specific Summarization</title>
<link href="https://hdl.handle.net/1721.1/152819" rel="alternate"/>
<author>
<name>Queipo, Laura</name>
</author>
<id>https://hdl.handle.net/1721.1/152819</id>
<updated>2023-11-03T03:32:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Generative Models for Domain-Specific Summarization
Queipo, Laura
This project evaluates the performance of generative summarization models in the aviation safety domain. Models such as DaVinci, Text-DaVinci-003, and GPT-3.5-Turbo were analyzed in both their zero-shot and fine-tuned performance against state-of-the-art models. In zero-shot learning, the generative models were superior in most cases to the state-of-the-art models, whereas the fine-tuned models could learn with less information about the dataset. These results point to promising advances in the summarization space that address current limitations in the field.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficiently Learning Robust, Adaptive Controllers from Robust Tube MPC</title>
<link href="https://hdl.handle.net/1721.1/152818" rel="alternate"/>
<author>
<name>Zhao, Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/152818</id>
<updated>2023-11-03T03:37:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Efficiently Learning Robust, Adaptive Controllers from Robust Tube MPC
Zhao, Tong
The deployment of agile autonomous systems in challenging, unstructured environments requires adaptation capabilities and robustness to uncertainties. Existing robust and adaptive controllers, such as those based on model predictive control (MPC), can achieve impressive performance at the cost of heavy online onboard computations. Strategies that efficiently learn robust and onboard-deployable policies from MPC have emerged, but they still lack fundamental adaptation capabilities. In this work, we extend an existing efficient Imitation Learning (IL) algorithm for robust policy learning from MPC with the ability to learn policies that adapt to challenging model/environment uncertainties. The key idea of our approach consists of modifying the IL procedure by conditioning the policy on a learned lower-dimensional model/environment representation that can be efficiently estimated online. We tailor our approach to learning an adaptive position and attitude control policy to track trajectories under challenging disturbances on a multirotor. Our evaluation shows that a high-quality adaptive policy can be obtained in about 1.3 hours of combined demonstration and training time. We empirically demonstrate rapid adaptation to in- and out-of-training-distribution uncertainties, achieving a 6.1 cm average position error under wind disturbances that correspond to 50% of the weight of the robot, and that are 36% larger than the maximum wind seen during training. Additionally, we verify the performance of our controller during real-world deployment in multiple trajectories, demonstrating adaptation to turbulent winds of up to 5.2 m/s and slung loads of up to 40% of the robot’s mass, and reducing the average position error on each trajectory to under 15 cm, a 70% improvement compared to a non-adaptive baseline.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of a Distributed Executive</title>
<link href="https://hdl.handle.net/1721.1/152817" rel="alternate"/>
<author>
<name>Romero, Sabrina</name>
</author>
<id>https://hdl.handle.net/1721.1/152817</id>
<updated>2023-11-03T03:32:45Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design and Implementation of a Distributed Executive
Romero, Sabrina
The deployment of autonomous robots has the potential to revolutionize high-risk missions, from rescue operations and outer-space exploration to the maintenance of underwater infrastructure. In many of these scenarios, such as the routine maintenance of a distant space station, collaboration between multiple robots becomes essential. However, the vastness of space or the depths of the oceans often impose communication constraints, making real-time coordination challenging. While centralized control of these missions is traditional and straightforward to implement, it often becomes impractical in these contexts because of communication delays and uncertainties. Given these challenges, a distributed approach is not just preferred but necessary, ensuring robots can operate independently under limited communication conditions. Traditional strategies for coordinating multiple robots’ schedules when communication is unreliable have been conservative, as they tend to favor fixing the times of actions in advance, creating rigid and non-robust schedules. Such schedules can allocate excessive time to tasks as a safety measure, leaving potential resources underutilized. This over-caution not only results in inefficient execution but can also prevent the executive from identifying viable schedules for missions, even when they exist under a more flexible approach. The lack of adaptability, especially in the face of unexpected challenges, undermines the executive’s robustness. To address these shortcomings, our aim is to craft a flexible and robust distributed executive adept at planning, scheduling, and executing multi-agent scenarios. We build upon the Kirk executive, a creation of the MERS group at CSAIL, enabling it to proficiently manage multi-agent scenarios without a guarantee of perfect communication during execution. 
Central to our methodology is the principle of temporal decoupling, which allows agents to decouple any inter-dependencies in their schedules and operate independently. We integrate the state-of-the-art temporal decoupling algorithm, which decouples only as necessary, leaving room for communication when it is available. This integration not only enhances the autonomy of the agents but also ensures they can leverage the benefits of communication, striking a balance between independence and collaborative efficiency. Building on this foundation, our work offers a practical perspective on autonomous robot coordination. By enhancing the Kirk executive with a temporal decoupling algorithm, expanding the Reactive Model-based Programming Language (RMPL) for multi-agent scenario representation, and showcasing Kirk’s improved capability in multi-agent scenarios with communication constraints, we bridge the gap between theoretical foundations and practical applications.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Trust: Building Secure and High-Performance Confidential VMs</title>
<link href="https://hdl.handle.net/1721.1/152816" rel="alternate"/>
<author>
<name>Srivastava, Shashvat</name>
</author>
<id>https://hdl.handle.net/1721.1/152816</id>
<updated>2023-11-03T03:58:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Architecting Trust: Building Secure and High-Performance Confidential VMs
Srivastava, Shashvat
Recent research in TEE (Trusted Execution Environment) design has focused on the development of confidential VMs — virtual machines completely protected by secure hardware. All major CPU vendors have rolled out support for VM-based TEEs — AMD created SEV (2017), Intel created TDX (2020), and ARM launched CCA (2021). Confidential VMs are a promising new technology as they are significantly more user-friendly, allow existing applications to run without modification, and have better performance compared to process-based TEEs. However, confidential VMs still face two large design challenges: security and performance. In the first part of this thesis, we propose a secure confidential VM design on the RISC-V platform, which currently has no official confidential VM support. We specifically focus on the task of secure CPU virtualization and build a security monitor that hides the virtual CPU register state from the hypervisor during context switches. To allow the hypervisor to properly handle interrupts and emulate instructions, we summarize a specification listing which registers need to be exposed in specific scenarios. In the second part of this thesis, we aim to improve the network I/O performance of existing confidential VMs. The hardware protections of TEEs create additional I/O overhead in confidential VMs, and Trusted I/O (TIO) is a promising solution to reduce this overhead. However, TIO has several drawbacks — it relies on hardware support from the I/O device and expands the Trusted Computing Base (TCB) to include these TIO devices. Furthermore, TIO devices will not be commercially available for several years. We aim to create an I/O solution that can reach the performance of TIO without relying on TIO devices. In particular, we present Folio, a system for high-performance network I/O compatible with AMD SEV-SNP. Compared to network I/O in a non-TEE VM, Folio performs only a single extra memory-copy of packet data. 
Our extensive evaluation shows that Folio performs only 6% worse than the ideal TIO solution.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Serialization and Applications for the Gen Probabilistic Programming Language</title>
<link href="https://hdl.handle.net/1721.1/152815" rel="alternate"/>
<author>
<name>Limarta, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/152815</id>
<updated>2023-11-03T03:47:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Serialization and Applications for the Gen Probabilistic Programming Language
Limarta, Ian
Probabilistic programming has emerged as a powerful framework for building expressive models that can handle uncertainty in a wide range of applications. Serialization, the process of converting data structures or objects into a format suitable for storage or transmission, plays a crucial role in the development and execution of probabilistic programs. Efficient serialization techniques are essential for tasks such as data persistence, distributed computation, and data exchange between different programs or machines. We delve into the challenges and considerations unique to serialization in probabilistic programming. Probabilistic models often involve complex structures, including nested random variables, hierarchical dependencies, and potentially infinite or unbounded dimensions. Serializing samples from such models requires careful handling of these complexities, including strategies for preserving model fidelity, dealing with modeling dependencies, and specializing for disk representations. In this thesis, we pursue two objectives for the Gen probabilistic programming language. The first establishes a formalism for serializing (and deserializing) traces as an interface that respects the existing Gen interfaces and faithfully reconstructs data objects from disk. We highlight challenges for efficient serialization in Gen’s DSLs. The second objective is to show how serialization routines common in other areas of computing transfer well to Gen. We show how serialization provides easier means for visualization, remote computing, and training inference approximators.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Bi-Directional Converter Module for&#13;
Battery Cell Voltage Charge Cycling</title>
<link href="https://hdl.handle.net/1721.1/152814" rel="alternate"/>
<author>
<name>Gonzalez, Rolando A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152814</id>
<updated>2023-11-03T03:42:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design of a Bi-Directional Converter Module for&#13;
Battery Cell Voltage Charge Cycling
Gonzalez, Rolando A.
A framework for testing and controlling a bidirectional DC-to-DC converter is proposed that can be used for battery cell cycle testing. The circuit allows energy to be shuttled in both directions for a battery cell under test, enabling functions such as monitoring the deterioration of a battery cell’s capacity across discharge/charge cycles. This thesis includes the design, fabrication, and testing of this circuit to validate and characterize its utility. Additional code was written to quickly provide feedback on the circuit’s performance and control the circuit’s operating point. This thesis builds on previous work on an inductive cell-balancer circuit topology [3], while tweaking the topology and adding features that lend themselves to improved utility in settings where battery monitoring and characterization are important, such as a laboratory.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Integration of an Underactuated Robotic&#13;
Finger with Vision-based Tactile Sensing</title>
<link href="https://hdl.handle.net/1721.1/152811" rel="alternate"/>
<author>
<name>Ma, Yuxiang</name>
</author>
<id>https://hdl.handle.net/1721.1/152811</id>
<updated>2023-11-03T03:49:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design and Integration of an Underactuated Robotic&#13;
Finger with Vision-based Tactile Sensing
Ma, Yuxiang
Underactuated fingers are adaptable to different shapes, robust, and cost-effective for executing sturdy and versatile grasps. However, they generally have limited control or require complex planning when performing tasks that require high precision or delicate handling. Vision-based tactile sensors, like GelSight, can mitigate these control issues by adding real-time proprioception and also provide useful high-resolution tactile information, which can enhance underactuated fingers with shape and texture perception. As such, this work presents the development of a compact, underactuated linkage finger and its integration with a low-cost, simple vision-based tactile sensor, i.e., GelSight.&#13;
&#13;
Through the process of developing the tactile, linkage fingers, we established a planar linkage mechanism simulator and a simple 2D ray-tracing optical simulator to help optimize the linkage transmission and improve tactile sensing performance. In total, the finger went through three major designs. In the initial iteration, we designed sliding joints, which were replaced in the second iteration by linkage mechanisms to make the design more compact and robust. A planar linkage simulator was used to optimize the trajectory to avoid collision and increase range of motion. In the current iteration, the finger has evolved from having two segments to having three segments, with underactuation incorporated to further reduce the number of motors. Each finger segment houses a silicone gel pad, whose tactile imprints are captured by mirrors, which are then observed by a single camera placed at the second finger segment. The camera and mirrors are positioned based on the results of a simple ray-tracing simulator, which ensured that each finger segment remains visible in all finger configurations.&#13;
&#13;
The use of mirrors, linkage transmission and underactuation makes the mechanism compact, efficient, and less complex by reducing the number of cameras and motors needed. Moreover, the integration of vision-based sensors allows these underactuated fingers to perceive contact information and finger configuration. In conclusion, this work encapsulates the innovative design and integration of an underactuated linkage finger with vision-based tactile sensing, offering compactness, adaptability, and robustness in grasping tasks. Additionally, the integration of vision-based tactile sensors can significantly enhance the capabilities of underactuated fingers by providing them with high resolution images and proprioception information, and potentially broaden the future usage of underactuated fingers.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogel Design Optimization for Measuring&#13;
Ultrasound Using Laser Doppler Vibrometry</title>
<link href="https://hdl.handle.net/1721.1/152810" rel="alternate"/>
<author>
<name>Caraballo-Justiniano, Eugenio</name>
</author>
<id>https://hdl.handle.net/1721.1/152810</id>
<updated>2023-11-03T03:56:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Hydrogel Design Optimization for Measuring&#13;
Ultrasound Using Laser Doppler Vibrometry
Caraballo-Justiniano, Eugenio
Over the past decade, work in the medical field has been geared towards the development of ultrasonic systems for medical diagnostic imaging applications. Compared to other imaging modalities, patient contact is a significant source of variability unique to ultrasound. Contact-sensitive applications such as remote patient/neonatal monitoring, tracking wound healing, and imaging of sensitive skin areas can significantly benefit from a non-contact ultrasound system. Laser ultrasound (LUS) imaging offers potential advancements over conventional ultrasound, especially in achieving high-resolution imaging of tissue structures and the elimination of liquid coupling mediums and probe-to-body contact. The thesis presents an innovative approach to enhance the performance of LUS signals in human tissue by utilizing hydrogels, hydrophilic polymeric materials known for high water content and biocompatibility, as a surface treatment layer for ultrasound detection and generation. The system integrates and synchronizes linear stage automation, transducer acoustic wave generation, laser Doppler vibrometry (LDV), and LabVIEW integration. High-speed data acquisition (DAQ) through a dedicated Pico Technology setup streams digitized data directly to the host PC. LDV measurements highlighted the crucial role of bead concentration within hydrogels. Velocity amplitude measurements reflected an inverse relationship with increasing bead concentrations, peaking at approximately 700 mm/s. However, higher bead concentrations yielded better data accuracy and reduced noise, suggesting an optimal range for bead concentration. A comparison of noise ranges across different hydrogel bead concentrations highlighted improved data quality and precision for concentrations exceeding 0.015 g/mL. Furthermore, laser-based measurements indicated that hydrogel with a bead concentration of 0.02 g/mL provided consistent and enhanced signal amplitude. The findings present a pivotal step towards optimizing LUS for clinical applications, opening new doors in medical imaging and diagnostics.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic feasibility of decentralized&#13;
desalination in the Navajo Nation</title>
<link href="https://hdl.handle.net/1721.1/152809" rel="alternate"/>
<author>
<name>Brei, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/152809</id>
<updated>2023-11-03T03:29:21Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Technoeconomic feasibility of decentralized&#13;
desalination in the Navajo Nation
Brei, Melissa
The Navajo Nation, located in the southwest United States, faces a significant water stress issue, with approximately 30% of households lacking access to piped water. For many, connection to a piped network is infeasible, and decentralized solutions, like desalination, have encountered barriers to adoption. This study evaluates the Navajo Nation’s geography, environment, and infrastructure to justify decentralized desalination. A diverse group of stakeholders was interviewed to gain comprehensive insights into the underlying challenges and possible value-added solutions. Analyzing these interviews revealed a cultural aversion to wastewater, a strong sensitivity to operating costs, and two potential system sizes: home and community. With financial sustainability being an important requirement for several stakeholders, a first-order economic analysis of both system sizes was conducted. Home systems present strong potential for economic viability, but community systems struggle to compete in this region due to low population density. Using the elucidated design requirements for home systems, electrodialysis (ED) and reverse osmosis (RO) were evaluated for technical feasibility. While RO systems, unlike ED, are commercially available at this scale, RO wastes 50-80% of the feedwater while ED wastes &lt; 30%. Both technologies have strong technical feasibility for this region and both will be field tested to understand long-term maintenance requirements and user perception of wastewater.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling up a quantum register of dark electronic spins in diamond</title>
<link href="https://hdl.handle.net/1721.1/152806" rel="alternate"/>
<author>
<name>Ungar, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152806</id>
<updated>2023-11-20T15:09:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Scaling up a quantum register of dark electronic spins in diamond
Ungar, Alexander
Electronic spin defects in the environment of an optically-active spin can be used to increase the size and hence the performance of solid-state quantum registers, especially for applications in quantum metrology and quantum communication. Previous works on multi-qubit electronic-spin registers in the environment of a Nitrogen-Vacancy (NV) center in diamond have only included spins directly coupled to the NV. As this direct coupling is limited by the spin coherence time, it significantly restricts the register's maximum attainable size. To address this problem, this thesis presents a scalable approach to map out and control a network of interacting environmental spins. We use this approach to characterize a spin network beyond the direct-coupling limit and exploit a weakly-coupled probe spin to mediate the transfer of spin polarization between the central NV and an environmental spin that is not directly coupled to it. We then demonstrate both detection and coherent control of this electronic spin outside the coherence limit of the central NV. Our work paves the way for engineering larger quantum spin registers with the potential to advance nanoscale sensing, enable correlated noise spectroscopy for error correction, and facilitate the realization of spin-chain quantum wires for quantum communication.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approaching Novel Perovskites Photovoltaic Devices through Machine Learning and Interfacial Engineering</title>
<link href="https://hdl.handle.net/1721.1/152805" rel="alternate"/>
<author>
<name>Zhang, Ruiqi</name>
</author>
<id>https://hdl.handle.net/1721.1/152805</id>
<updated>2023-11-03T03:52:43Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Approaching Novel Perovskites Photovoltaic Devices through Machine Learning and Interfacial Engineering
Zhang, Ruiqi
Organic metal halide perovskites exhibit extraordinary optoelectronic properties that make them good candidates for various photovoltaic applications [1-5]. These fascinating properties are largely attributable to their low exciton binding energy, strong light absorption coefficient, relatively long carrier diffusion length, and long carrier recombination lifetime [6-9]. However, even with an increasing number of studies carried out, perovskite solar cells still face many challenges on the path to commercialization. Two main challenges for large-area commercialization are, first, the harsh fabrication environment and the cost of large-area coating, and second, the redundant, labor-intensive fabrication process. In this thesis study, an intermediate thin-film layer of tris(4-carbazoyl-9ylphenyl)amine (TcTa) with a thickness of 3 nm is introduced in a large-area-compatible perovskite solar cell structure ITO/SnO2/(MAFACs)1Pb(IBrCl)3/PV2000/TcTa/Au that reaches a power conversion efficiency above 14%. The TcTa intermediate film is compatible with substituting the gold top electrode (e.g., with sputtered Ni) and prevents sputter damage while maintaining similar solar cell performance. In addition, a machine learning algorithm is developed to predict the solar cell current-voltage properties based only on the film stack optical properties, before the solar cell is fabricated. The algorithm is developed and tested on the 3D/2D perovskite solar cell structure [10], resulting in an average prediction regression loss below 5% and a best prediction accuracy above 99%. Multiple machine learning algorithms are also evaluated to analyze the prediction results and the learned weights of the model.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated-Photonics Devices and Architectures for Advanced Cooling of Trapped Ions</title>
<link href="https://hdl.handle.net/1721.1/152804" rel="alternate"/>
<author>
<name>Hattori, Ashton</name>
</author>
<id>https://hdl.handle.net/1721.1/152804</id>
<updated>2023-11-03T03:03:54Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Integrated-Photonics Devices and Architectures for Advanced Cooling of Trapped Ions
Hattori, Ashton
Integrated-photonics-based architectures for trapped-ion systems offer a potential avenue for improved fidelity and addressability of ion arrays. Motional state cooling, a key optical function in trapped-ion systems, however, has been limited to Doppler and resolved-sideband cooling in integrated-photonics-based implementations. In contrast, polarization-gradient and electromagnetically-induced-transparency cooling can offer better cooling performance in multi-ion systems, but have not been demonstrated on an integrated-photonics platform. This thesis demonstrates key integrated-photonics devices and architectures to enable enhanced laser cooling of integrated trapped-ion systems.&#13;
&#13;
First, we develop the framework for two advanced trapped-ion cooling schemes, polarization-gradient and electromagnetically-induced-transparency cooling. Then, we present the design of key integrated devices enabling the proposed system architectures. We show the design and experimental demonstration of the first integrated polarization splitters and rotators at blue wavelengths, developing compact and efficient designs for both a polarization splitter and a rotator at a 422-nm wavelength, an important transition for 88Sr+ ions. These devices are fabricated in a 200-mm wafer-scale process and experimentally characterized. Next, we present the design and experimental demonstration of the first pair of integrated TE- and TM-emitting gratings at a wavelength of 422 nm to enable polarization-diverse operations for integrated-photonics-based trapped-ion systems. The development of both the devices and architectures for advanced cooling schemes presented in this thesis paves the way for sophisticated integrated control of trapped-ion and neutral-atom systems.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanomaterial-Enabled Out-of-Autoclave and Out-of-Oven Manufacturing of Fiber Reinforced Polymer Composites</title>
<link href="https://hdl.handle.net/1721.1/152800" rel="alternate"/>
<author>
<name>Serrano, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/152800</id>
<updated>2023-11-03T03:04:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Nanomaterial-Enabled Out-of-Autoclave and Out-of-Oven Manufacturing of Fiber Reinforced Polymer Composites
Serrano, Steven
Fiber reinforced polymer composite materials have been a staple of the aerospace industry, integral to creating lightweight flight vehicles due to their high specific material properties. These materials often come in a prepreg form, where microfibers are pre-impregnated with a polymer matrix to form lamina that are stacked to form a composite laminate. These aerospace-grade composite structures generally require an autoclave cure, which uses both temperature and pressure to cure the thermoset polymer (or consolidate the thermoplastic polymer) in the prepreg and remove voids throughout the laminate. In this thesis, curing of autoclave-grade thermosetting prepregs using vacuum-bag only (VBO) processes is investigated and further developed through the employment of nanomaterials, both within the laminate itself and externally as a conductive heating mechanism. A preliminary void reduction study was conducted on the effects of placing different nanoporous networks (NPNs) in the interlaminar regions of a VBO manufactured quasi-isotropic laminate using autoclave-grade glass fiber reinforced polymer (GFRP) unidirectional prepreg. It was shown that vertically aligned carbon nanotubes (VA-CNTs), electrospun polymer nanofiber (EPN) veils, and polyimide (PI) aerogel thin films may each successfully evacuate voids via capillary-pressure enhanced polymer flow, as the resulting laminate was void-free. A subsequent study placing PI aerogel NPN in each interlaminar region was shown to successfully create a void-free GFRP laminate on a hot plate using VBO manufacturing. Autoclave-grade woven CFRP prepreg laminates were also manufactured using the same VBO with NPN technique, with PI aerogel in each interlaminar region. Laminates were shown to have minimal void content (&lt; 0.03 vol%) using an advantageously thinner aerogel film than previous work. 
A previously studied out-of-oven (OoO) curing process using a carbon nanotube (CNT) thin film heating element was modeled using ANSYS Composite Cure Simulation (ACCS) to predict the temperature and degree of cure (DoC) of CFRP laminates using cure kinetics equations and the finite element method. The Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Bound constraints (L-BFGS-B) algorithm was implemented to optimize the cure cycle with respect to time and DoC constraints. Two optimized cure cycles were revealed via the optimization scheme, showing significant (60% to 65%) reductions in manufacturing time. A third accelerated-cure cycle did not use the optimization scheme, but rather utilized an empirical estimation of resin rheology, time history of temperature, and DoC to obtain a cure cycle that had comparable resin flow to that of the manufacturer recommended cure cycle (MRCC) per a defined flow metric. Laminates utilizing the three accelerated cures, the MRCC cure, and a cure with an extended first hold were all modeled in ANSYS and manufactured with a CNT heater OoO set-up and an EPN NPN. The model was found to overestimate the DoC of the manufactured 152 mm x 152 mm x 2 mm (16 ply) laminates by ∼5% on average. The accelerated-cure laminates were shown to have a relatively high void content, indicating that additional considerations are necessary to successfully accelerate the VBO CFRP cure cycle. However, the laminate cured with an extended first hold, as well as the MRCC laminate, were found to have minimal void content (0.02 vol% and 0.08 vol%, respectively). Furthermore, the accelerated-cure laminate with a second hold of 200°C for 36.5 minutes was found to yield a nominal DoC (90.5%) and a comparable glass transition temperature (&#119879;&#119892;) to that of the MRCC cured laminate. Together, the results found in this work show that nanomaterials (i.e., 
NPNs and CNT heating elements) enable the VBO manufacturing of several types of autoclave prepregs and improve manufacturing throughput via cure cycle modifications that can allow significant acceleration of the overall cure cycle.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Virtual Sheriff Sales: Contested Narratives on Tax Sales in Philadelphia, PA</title>
<link href="https://hdl.handle.net/1721.1/152798" rel="alternate"/>
<author>
<name>Mana, Soad</name>
</author>
<id>https://hdl.handle.net/1721.1/152798</id>
<updated>2023-11-03T03:44:27Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Virtual Sheriff Sales: Contested Narratives on Tax Sales in Philadelphia, PA
Mana, Soad
This thesis presents a qualitative overview of tax foreclosure auctions in Philadelphia, PA, colloquially known as sheriff sales. As the rate of displacement of long-term residents has increased in the past few years, greater attention has been directed toward official city processes of land acquisition and disposition. By analyzing city council meeting transcripts, reports, news articles, and interviews with key stakeholders in the city, I use the emerging debate over sheriff sales’ permanent shift to virtual in 2021 as a lens to interrogate how various stakeholders view tax foreclosure sales overall. Through this qualitative analysis, I identify five main factors that outline the impact of the increasing privatization of a city-sanctioned tax enforcement and collection tool: reduced accountability, transparency, and accessibility, a disproportionate social impact on marginalized residents, and the discounting of vacant land. Exchanges about tax sales have been grounded in a much larger conversation in the city about neighborhood change and displacement. As homes, community gardens, and gathering spaces have been sold in sheriff sales, many community members have questioned their impacts on their neighborhoods and challenged the city’s conceptualization of tax delinquent land. Official categorizations of land as abandoned by the City contrast with how residents have materially cared for the land and staked claims to it. Recognizing land beyond property involves understanding land as a site for people's experiences, aspirations, memories, and visions for different futures. Understanding the land as such calls for a reexamination of sheriff sales as a dominant tool used by the City to collect delinquent taxes and activate land. As displacement in Philadelphia intensifies, the land question is once again gaining urgency.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IlluSonnet: Using Generative AI to Create Illustrations for Sonnets</title>
<link href="https://hdl.handle.net/1721.1/152797" rel="alternate"/>
<author>
<name>Chen, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/152797</id>
<updated>2023-11-03T03:01:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">IlluSonnet: Using Generative AI to Create Illustrations for Sonnets
Chen, Tiffany
Poetry evokes imagery, and writers and readers alike desire to translate the artful wordplay into a beautiful image. To facilitate this process, we built IlluSonnet, a system that creates illustrations for poetry using text-to-image generative AI models. IlluSonnet works by labelling keywords, emotional qualities, and the most relevant artistic style for a given sonnet before prompting DALL-E for an image. To evaluate IlluSonnet, we ran a user study assessing both the quality of the output images and the overall interface. Our study indicates that IlluSonnet helped users generate images that illustrated the sonnets well and that the process of creating and seeing imagery alongside the poem helped users understand the sonnets in a new light. We conclude by discussing how IlluSonnet can be used to further facilitate a deeper connection between art and poetry.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamic Analysis to Evaluate the Socio-Economic Impact of the Energy Transition of Singapore to Achieve Net Zero Emissions by 2050</title>
<link href="https://hdl.handle.net/1721.1/152796" rel="alternate"/>
<author>
<name>Lum, Mun Kit Kenny</name>
</author>
<id>https://hdl.handle.net/1721.1/152796</id>
<updated>2023-11-03T03:02:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">System Dynamic Analysis to Evaluate the Socio-Economic Impact of the Energy Transition of Singapore to Achieve Net Zero Emissions by 2050
Lum, Mun Kit Kenny
Singapore has pledged to achieve net zero carbon emissions by 2050. However, due to the country's limited land and lack of natural resources, the transition to net zero emissions is a challenging journey. Singapore has charted an energy transition plan to help the country navigate into a lower-carbon future, but the plan focuses mainly on the technological challenges that the country needs to overcome to achieve the goal. A System Dynamics model that captures the economy, energy &amp; GHG emissions, and labor market of Singapore was developed to help understand the potential socio-economic challenges that could arise from the energy transition. The results from the model suggest that the energy transition needs to be managed through a multi-pronged approach of not just technological change but also efficiency improvements in labor and energy use. One key issue relates to managing the availability of skilled labor provided by the local workforce versus increasing the foreign worker ratio in the workforce, especially when new technologies are employed for the energy transition. To address this challenge, Singapore can consider adopting policies implemented by other countries to improve energy efficiency and labor productivity.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Are Fact-checks Effective Even for Those Who Distrust Fact-checkers?</title>
<link href="https://hdl.handle.net/1721.1/152791" rel="alternate"/>
<author>
<name>Martel, Cameron</name>
</author>
<id>https://hdl.handle.net/1721.1/152791</id>
<updated>2023-11-03T03:07:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Are Fact-checks Effective Even for Those Who Distrust Fact-checkers?
Martel, Cameron
There is growing concern over the spread of misinformation. One of the most widely adopted interventions by online platforms for addressing false stories is applying fact-checker informed ‘warning labels’ over misleading posts. Despite a rich literature on corrections and approaches for debunking misinformation, there is comparatively less work examining evidence on the effectiveness of warning labels. Do warning labels effectively reduce belief and spread of misinformation? Chapter I reviews research aimed at answering this important question, and further investigates factors contributing to warning label efficacy: features of the labels themselves, features of the underlying labeled content, and features of individuals viewing labeled content. Overall, existing research suggests that warning labels typically produce consistent, beneficial effects – though the size of these effects is moderated by a multitude of relevant factors. We highlight features that best contribute to warning label efficacy and discuss potential limitations and implications of labelling policies for addressing online misinformation.&#13;
&#13;
As reviewed in Chapter I, prior work suggests that warning labels are effective at reducing the belief and spread of false content on average. However, there is concern about growing distrust of fact-checkers, particularly among those on the political right. In Chapter II we investigate whether trust in fact-checkers moderates the efficacy of warning labels. Are warning labels from fact-checkers effective even for those who say they distrust fact-checkers? In a correlational study (N=1,000), we first establish and validate an adapted trust in fact-checkers measure. We also explore the relationship between trust in fact-checkers and partisanship and replicate prior findings of more Republican-favoring participants reporting less trust in fact-checkers. We also extend upon such work by providing evidence that skill-based traits like procedural news knowledge and analytic thinking exacerbate this partisan asymmetry. Next, we conduct meta-analyses across 21 experiments (N=15,983) in which participants evaluated either their perceived accuracy or sharing intentions of news articles. Participants either received no warning labels, or warning labels on a high proportion of false news articles encountered. We find that warning labels were on average effective at reducing belief and sharing of false headlines. Next, we find that trust in fact-checkers moderates warning efficacy on accuracy, but do not find evidence of moderation on sharing intentions. Importantly, despite this moderation, our results suggest that warning labels significantly reduce belief and sharing of false headlines even for those most distrusting of fact-checkers.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Keeping New Orleans Afloat: What can be done to ensure another hurricane the size of Katrina will not destroy the entire city?</title>
<link href="https://hdl.handle.net/1721.1/152790" rel="alternate"/>
<author>
<name>Brown, Daelin</name>
</author>
<id>https://hdl.handle.net/1721.1/152790</id>
<updated>2023-11-03T03:24:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Keeping New Orleans Afloat: What can be done to ensure another hurricane the size of Katrina will not destroy the entire city?
Brown, Daelin
On August 29, 2005, Hurricane Katrina, a Category 3 storm, struck New Orleans. The location of New Orleans makes the city extremely vulnerable to massive storm surges during hurricane season, and the entire city was relying on flood management for its safety. It had a Hurricane and Storm Damage Risk Reduction System (HSDRRS) in place, but the system was no match for Katrina’s 28-foot storm surge and 55-foot waves. After 50 major levee breaches, New Orleans looked like residents had built a beach in their backyards, with several feet of water breaking right through the levees. The Gulf Coast resembled the largest wave pool in the world, with the 55-foot waves damaging 34 pumping stations and 169 miles of protective structures in the regional HSDRRS. All of these failures caused 80 percent of New Orleans, along with several surrounding neighborhoods, to be underwater for weeks.&#13;
&#13;
Not only were there 1,392 estimated fatalities, but 800,000 housing units were also destroyed or damaged by Katrina, leaving at least 800,000 people homeless. The total damage of Katrina amounted to over $160 billion, making it one of the largest natural disasters in the history of the U.S., and the third deadliest storm in U.S. history. The catastrophe posed two questions: what had gone so wrong for this American city to be destroyed and what needed to be done to make sure that this amount of devastation would not happen the next time a storm hit New Orleans?
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performing Distance Queries on Social Networks in Sublinear Time</title>
<link href="https://hdl.handle.net/1721.1/152786" rel="alternate"/>
<author>
<name>Kōshima, Nadia</name>
</author>
<id>https://hdl.handle.net/1721.1/152786</id>
<updated>2023-11-30T12:25:40Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Performing Distance Queries on Social Networks in Sublinear Time
Kōshima, Nadia
Shortest path computation is a fundamental task in many applications. While shortest path algorithms have improved, all require preprocessing the entire graph, creating inefficiencies, especially when applied to large social networks. Since social networks often exhibit power-law degree distributions, we ask whether this structure can be exploited to achieve sublinearity. We thus propose Wormhole, an algorithm that performs reasonably accurate shortest-distance estimations in sublinear runtime. On large graphs, scaling up to billions of edges, Wormhole empirically provides reasonable accuracy over 10,000 distance queries while seeing only &#119874;(√&#119899;) vertices. This improves over the baseline method of bidirectional BFS, which achieves similar accuracy while seeing on the order of &#119874;(&#119899;) vertices.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hyperfine Interaction of the Group IV Color Centers</title>
<link href="https://hdl.handle.net/1721.1/152785" rel="alternate"/>
<author>
<name>Harris, Isaac B. W.</name>
</author>
<id>https://hdl.handle.net/1721.1/152785</id>
<updated>2023-11-03T04:04:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Hyperfine Interaction of the Group IV Color Centers
Harris, Isaac B. W.
The group IV-negative color centers (SiV⁻, GeV⁻, SnV⁻) are among the leading candidates for spin-photon interfaces for use in quantum information technologies. They feature highly coherent optical transitions, as well as native electron and nuclear spins that can be used as quantum memories. While the optical and electronic properties of these defects have been studied extensively in previous works, a detailed theory of the hyperfine coupling to the nuclear spin is lacking. This work presents a complete theoretical model of the hyperfine coupling to the intrinsic dopant nucleus in the group IV-negative color centers, complete with ab initio theoretical predictions of the hyperfine coupling strength, supported by experimental observation in an isotopically engineered sample. The theoretical model explains the observed hyperfine features well, providing a foundation for future work to use the intrinsic nuclear spin in quantum protocols.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for Modeling and Control of a Packaging&#13;
Manufacturing Process</title>
<link href="https://hdl.handle.net/1721.1/152782" rel="alternate"/>
<author>
<name>Deshpande, Aniruddha</name>
</author>
<id>https://hdl.handle.net/1721.1/152782</id>
<updated>2023-11-03T03:52:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Machine Learning for Modeling and Control of a Packaging&#13;
Manufacturing Process
Deshpande, Aniruddha
Process control is a key component of industrial automation. Irrespective of the specific product being manufactured, there is always a need for a controller that decides on specific inputs to the system such that the process gives the desired output, which may include product specifications or quality requirements. Many modern industrial processes still use classical PID control, which is quite effective and easy to implement on Programmable Logic Controllers (PLCs). However, this control strategy does not account for the dynamics of the process when developing a control policy, fundamentally limiting its performance. As a result, operator intervention is often required to ensure smooth functioning of the process, and even then long downtimes that waste material and money are all too common.&#13;
&#13;
With the advent of Industry 4.0, however, more and more manufacturing processes are being fitted with large numbers of sensors and cameras, which allow us to collect process data at a scale that was not possible before. Modern machine learning methods have become extremely capable of transforming big data into accurate models. This opens up the opportunity to develop sophisticated models, or Digital Twins, of manufacturing processes, which can then be used to develop more advanced control strategies that improve on the status quo of heuristically tuned PID control. Such models can be used to explicitly derive control strategies or even be used in simulation to learn improved control.&#13;
&#13;
In this thesis we tackle this modeling and control problem for a packaging manufacturing process. We develop a model of the process based on a combination of physics-based roll-to-roll models fine-tuned with process data and neural-network-based NARX models, and validate this combined plant model. We then use this model to test various control strategies in simulation, ranging from classical PID to optimal linear control, and use these models to further fine-tune the controllers for better performance.&#13;
&#13;
While such improved data-driven controller development strategies exist, adoption is still limited. In the final section of this thesis we explore how this digital transformation is taking place in the wider manufacturing ecosystem. We review key literature, industry surveys, and policy documents, and synthesize a view on the current state of AI adoption in manufacturing, its potential impacts, and the major hurdles to adoption. We also examine the policies in place in the United States to address these.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Small modular reactor technology for industrial heat and power: selection techniques and implementation strategies for real-world use cases using systems-based approaches.</title>
<link href="https://hdl.handle.net/1721.1/152779" rel="alternate"/>
<author>
<name>Coffey, Clay Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/152779</id>
<updated>2023-11-03T03:02:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Small modular reactor technology for industrial heat and power: selection techniques and implementation strategies for real-world use cases using systems-based approaches.
Coffey, Clay Allen
Small Modular Reactors (SMRs) have been recognized as an emerging technology that could play a key role in climate change mitigation and in achieving net zero 2050 climate goals. The technology behind SMRs is known and, in many cases, proven, but has yet to be deployed commercially.&#13;
&#13;
SMR technological innovation is advancing in several countries around the world, from Gen-III+ light water SMRs to Gen-IV SMRs and micro-reactors, all at various stages of development, from early conceptual phases to operational deployment and commercialization. These SMRs are also being developed in multiple configurations: land- or marine-based, and in single- or multi-module (and scalable) configurations with a wide range of heat and power generation capabilities.&#13;
&#13;
In the U.S., several of these technologies are supported by recent funding from legislation advancing a variety of energy policies to support decarbonization goals in electricity and in hard-to-abate industries where the potential for renewables is limited. SMRs have attributes related to safety, flexibility, footprint, and waste management that give them opportunities not available to traditional nuclear plants.&#13;
&#13;
In North America and Europe there are over 15 SMR designers working on designs ranging from 4 MWe to nearly 350 MWe (500 MWe with thermal storage) and generating heat in a range of 300-750 °C. These SMRs seek to fill a variety of industrial use cases ranging from district heating to fossil fuel replacement for on-grid power, to replacement of fossil fuel cogeneration with high heat requirements.&#13;
&#13;
This thesis addresses the overarching question of how to select which SMR designer and technology is most likely to be successful for various industrial use cases by answering the following sub-questions:&#13;
&#13;
1. What First-of-a-Kind (FOAK) SMR designs currently in development are most likely to be deployed and commercialized in the United States over the next decade?&#13;
&#13;
2. Of the many hard-to-abate sectors, what are the potential industrial use cases for SMRs, and what is the cost and competitiveness of SMRs in these areas relative to existing energy systems?&#13;
&#13;
3. Based on the findings of 1. and 2. above, which SMR designs are best suited for the identified industrial use cases in the United States?&#13;
&#13;
4. Can these best-suited SMR designs be competitive with existing technologies (footprint, siting, capital cost, levelized cost of electricity (LCOE))?
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Macroscale Defect Detection in Semiconductor Manufacturing using Automated Inspection with Convolutional Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/152778" rel="alternate"/>
<author>
<name>Sampson, Jonathan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152778</id>
<updated>2023-11-03T03:31:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Improving Macroscale Defect Detection in Semiconductor Manufacturing using Automated Inspection with Convolutional Neural Networks
Sampson, Jonathan A.
The work detailed in this thesis explores four distinct pathways for improving wafer macroscale defect detection from both tool-centric and operator-centric perspectives. The primary tool-centric improvement detailed in this work is the implementation of machine learning-enhanced defect detection models that provide recommendations of defective wafers to review operators. This work features the theory, data acquisition and processing, and training steps for three models designed to catch three different defect types. Models are trained on spin-on-glass (SOG) defects, defects around the perimeter of a wafer, and various other defects occurring in the central area of a wafer. SOG defects, the primary focus of this work, also occur in the central area of a wafer, though they are much smaller than the defects addressed by the central defect detection model. After training, the SOG defect detection model achieved an area under the curve (AUC) of 0.927 on testing data outside its training data set distribution. The edge model and general central model achieved AUC values of 0.906 and 0.909, respectively, also on out-of-distribution testing data. These models, and the tools developed for data labeling, can be adopted for automated defect detection and efficient data tagging for machine learning applications.&#13;
&#13;
The other improvement pathways featured in this work involve additional tool-centric improvements of examining and performing corrective action on current wafer inspection tools, and evaluating the potential for in-line wafer inspection during processing. An operator-centric improvement is also detailed, describing the feature, operational, and productivity enhancements associated with the development of a new software interface for wafer image review.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Sensing of Ice Dynamics in the Beaufort Sea</title>
<link href="https://hdl.handle.net/1721.1/152777" rel="alternate"/>
<author>
<name>Flores, Matthew A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152777</id>
<updated>2023-11-03T03:25:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Remote Sensing of Ice Dynamics in the Beaufort Sea
Flores, Matthew A.
Arctic summer sea ice extent has undergone dramatic declines over the past several decades, particularly in the Beaufort Sea.  Understanding this decline requires an understanding of the annual sea ice retreat during the summer melt season.  While there are observations of the seasonal sea ice retreat, there are no accurate data on the evolution of sea ice thickness during the melt season.  This thesis presents an analysis of sea ice in the Beaufort Sea using available sea ice freeboard data from NASA’s Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission.  By tracking bi-weekly changes in freeboard for Lagrangian-tracked parcels of sea ice, the patterns of sea ice retreat are examined from 01 June – 30 September for 2020-2022.  This method reveals realistic patterns of sea ice thinning through mid-summer, with the most pronounced thinning occurring in the eastern Beaufort Sea. By September, freeboard changes are difficult to detect, with some subregions showing an increase in freeboard (thickening).  The increase in freeboard likely reflects uncertainty due to changes in the distribution of ice types, particularly the preferential disappearance of thinner ice, but also a reduced rate of thinning.  Although these results are preliminary, they suggest that ICESat-2 can be used to track seasonal changes during the melt season to help identify trends and drivers of sea ice retreat. Further work is necessary to improve these results, especially in understanding how different ice types evolve.  Other remote sensing data or in-situ observations are needed to reduce the uncertainty in the subregional estimates of ice melt.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Gaussian Noise in Superconducting Circuits</title>
<link href="https://hdl.handle.net/1721.1/152776" rel="alternate"/>
<author>
<name>McCourt, Trevor Johnathan</name>
</author>
<id>https://hdl.handle.net/1721.1/152776</id>
<updated>2023-11-03T03:40:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Non-Gaussian Noise in Superconducting Circuits
McCourt, Trevor Johnathan
In stark contrast to man-made systems, living things embrace noise and use it to further their functionality. It is therefore not surprising that some lifeforms couple strongly to environmental fluctuations, and can leverage non-Gaussian noise to gain a competitive edge over their peers. In this thesis, I study non-Gaussian fluctuations using a system of Transmon qubits as ultra-sensitive quantum sensors and make the first clear experimental observation of non-Gaussian noise in a qubit system. I achieve this using multi-qubit dynamical decoupling sequences that characterize noise during two-qubit gates when the system is coupled strongly to flux fluctuations. This noise is qualitatively different from the well-studied noise that leads to single qubit dephasing; it simultaneously affects the two qubits, inducing fluctuations in their entangling parameter. In our superconducting system, the experimentally observed noise is consistent with random telegraph noise and leads to the stepwise decay of signals. With this clear characterization of non-Gaussian noise in hand, we have paved the way for a new class of lifelike engineered systems that harness noise to their benefit.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Matching of Users and Creators on Social Media Platforms</title>
<link href="https://hdl.handle.net/1721.1/152774" rel="alternate"/>
<author>
<name>Lyu, Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/152774</id>
<updated>2023-11-03T03:41:45Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Dynamic Matching of Users and Creators on Social Media Platforms
Lyu, Liang
Social media platforms are two-sided markets bridging content creators and users. Existing literature on content recommendation algorithms used by platforms often focuses on user preferences and decisions, and does not jointly address creator incentives. We propose a model of content recommendation that explicitly focuses on dynamic user-content matching, with the novel contribution that both users and creators may leave the platform if they feel dissatisfied. In our model, each player decides to stay or leave at each time step based on utilities derived from the current match: users based on their similarities with the recommended content, and creators based on their audience size. We show that a user-centric greedy algorithm that only maximizes immediate engagement can result in poor total engagement in the long run, even if users and creators are randomly generated from prior distributions, but explicitly maximizing long-term engagement is NP-hard. Finally, we present new practical algorithms with provable guarantees and good empirical performance.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-Thinking Urban Retail: The Design and Planning of “Dark Stores” and Public Spaces Case Study: Manhattan, New York</title>
<link href="https://hdl.handle.net/1721.1/152773" rel="alternate"/>
<author>
<name>Halim, Juanita</name>
</author>
<id>https://hdl.handle.net/1721.1/152773</id>
<updated>2023-11-03T03:21:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Re-Thinking Urban Retail: The Design and Planning of “Dark Stores” and Public Spaces Case Study: Manhattan, New York
Halim, Juanita
The retail industry has transformed into various formats due to the fast-paced social and sharing economy changes driven by technological advancements. The recent concept of grocery “dark stores” (retail facilities designed for online order fulfillment, mostly located in urban areas) is expected to stay, as e-commerce and omni-channel operators view them as a cost-effective means of delivering quick service to customers. City officials are currently debating the potential advantages and drawbacks of “dark stores,” which could affect street livability in the absence of retail storefronts. Should cities ban “dark stores” that compete with traditional brick-and-mortar retailers?&#13;
&#13;
This thesis analyzes the proliferation of online grocery shopping and how “platform urbanism” (Sadowski, 2020), a novel set of digitally-enabled socio-technological assemblages rooted in the urban, affects the spatial distribution of grocery “dark store” activities by examining their locations and target customers. Using spatial analysis and interviews, this thesis tries to answer three questions: what is the role of grocery “dark stores” in cities?; where are they located?; and what are their impacts on the urban fabric? It uses NYC (Manhattan) 2021 decennial census and retail food stores data collected in 2022 and 2023 to provide insights into these questions. The results show that 1) grocery “dark stores” are mostly located in neighborhood areas with high concentrations of retail food stores and facilities; 2) grocery “dark stores” in Manhattan are located mostly in the Commercial and Manufacturing districts; 3) despite the rise of grocery “dark stores,” high funding from venture capitalists, and their promise of convenience to customers, by mid-2022 grocery “dark stores” in Manhattan faced exits due to dwindling investor funding, a competitive market landscape, and a political environment driven by Russia-backed venture capitalists.&#13;
&#13;
In the digital era, strategies to digitally transform the city need to consider the implications of different retail formats and the stakeholders involved. Urban policy and regulation need to address how new retail platforms can reshape the nexus between business locations, their design and function, and the public. As this thesis shows, this is increasingly urgent as new forms of retail and business emerge from the tech-enabled digital economy and new urban infrastructure.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Ecosystems in Geographically-Remote and Resource-Limited Regions with Indigenous Populations and considering Ancestral Science, Knowledge, and Practices: Intentional Development in the Pacific Islands of Hawaiʻi, Fiji, and New Zealand</title>
<link href="https://hdl.handle.net/1721.1/152772" rel="alternate"/>
<author>
<name>Nihipali, Holly Christine Greenberg</name>
</author>
<id>https://hdl.handle.net/1721.1/152772</id>
<updated>2023-11-03T03:53:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Innovation Ecosystems in Geographically-Remote and Resource-Limited Regions with Indigenous Populations and considering Ancestral Science, Knowledge, and Practices: Intentional Development in the Pacific Islands of Hawaiʻi, Fiji, and New Zealand
Nihipali, Holly Christine Greenberg
Innovation ecosystems provide a way to transform and diversify a regional economy. Much of the existing research focuses on mature economies in regions with strong foundational institutions and natural resources. The research herein uses the MIT Three-S (system, stakeholder, strategy) Framework to characterize regional ecosystems that are geographically-remote and resource-limited, specifically the Hawaiian Islands, Fiji, and New Zealand. Using measurements of entrepreneurial and innovation capacities and, where possible, interviews of local stakeholders, opportunities and challenges for these regional innovation ecosystems are identified. Attention is given to the counterpoint Indigenous peoples bring to a regional innovation ecosystem. Strategies are suggested for leveraging comparative advantages. Further research and testing is recommended to trial the effectiveness of innovation and entrepreneurship to drive the transformation of tourist economies towards diversification and becoming knowledge and digital economies.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Code Exchange (HCX) : A Community-Value-Driven Framework for Data Governance in Humanitarian Crises</title>
<link href="https://hdl.handle.net/1721.1/152771" rel="alternate"/>
<author>
<name>Vibbi, Leonard Francis</name>
</author>
<id>https://hdl.handle.net/1721.1/152771</id>
<updated>2023-11-03T03:42:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Human Code Exchange (HCX) : A Community-Value-Driven Framework for Data Governance in Humanitarian Crises
Vibbi, Leonard Francis
In this study, we examine data collection methods utilized in local communities during humanitarian crises, with a focus on the Sierra Leone COVID-19 scenario. We assess how widely-used data ethics principles in humanitarian data initiatives align with community values. We define these values as encompassing shared principles, virtues, and a collective understanding of what holds significance and meaning to affected communities [1].&#13;
&#13;
Interviews conducted in Freetown communities allowed us to identify common themes across community principles and norms [values] toward data collection activities. Identified principles held by communities were subsequently contrasted with how data collection activities guided by established data ethics guidelines in humanitarian settings were carried out in target communities.&#13;
&#13;
Our findings commend the general adherence to ethical benchmarks, yet spotlight notable gaps that call for strategies more attuned to community shared principles and understanding. To address this, we present the "Human Code Exchange" (HCX) ethical data governance framework. HCX promotes participatory data collection, weaving in community values and experiences, thereby ensuring a balanced exchange between data collection activities and the community, and reducing practices that are not in tune with community values. With its core focus on the community, HCX aligns humanitarian data initiatives with the intrinsic values of communities, particularly in the regions of the global south. Our work lays the foundation for a refined data governance framework that places emphasis on ethical data collection in vulnerable communities.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Feature Fields for Language-Guided Robot Manipulation</title>
<link href="https://hdl.handle.net/1721.1/152770" rel="alternate"/>
<author>
<name>Shen, William</name>
</author>
<id>https://hdl.handle.net/1721.1/152770</id>
<updated>2023-11-03T03:56:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Neural Feature Fields for Language-Guided Robot Manipulation
Shen, William
Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ecosystem Reboot : How scientists are building an inside-out Noah’s Ark for Florida’s vanished coral reefs</title>
<link href="https://hdl.handle.net/1721.1/152769" rel="alternate"/>
<author>
<name>Guy, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/152769</id>
<updated>2023-11-03T03:47:54Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Ecosystem Reboot : How scientists are building an inside-out Noah’s Ark for Florida’s vanished coral reefs
Guy, Allison
In Florida, a deadly marine plague called stony coral tissue loss disease has inspired an unprecedented conservation plan: to rescue affected corals from the wild, and keep them alive in captivity, indefinitely. The idea was to make a Noah’s Ark turned inside out, evacuating corals from an inhospitable ocean, and raising, breeding and propagating them on land, with the quixotic hope that the reef can one day be rebooted from its backup copy. To do so, Florida’s coral community would need to collect thousands of corals, find places to warehouse their charges, and figure out ways to grow big, genetically diverse captive populations. And with stony coral tissue loss spreading swiftly up and down the state’s coast, they needed to act fast.&#13;
&#13;
This may be the most audacious conservation plan ever attempted — not just to save a species here and there, but to rescue the basis of an entire ecosystem, and to keep it alive through everything the future has in store. And where Florida’s beleaguered reefs go, the rest of the world will follow. Sooner or later, but most likely sooner, corals everywhere will be in need of their own inside-out arks, ferrying them towards some hoped-for future. Improbable as it seems, it just might work.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shared Equity Homeownership in Korea: Analysis of the First Public Programs</title>
<link href="https://hdl.handle.net/1721.1/152768" rel="alternate"/>
<author>
<name>Park, Joon Tae</name>
</author>
<id>https://hdl.handle.net/1721.1/152768</id>
<updated>2023-11-03T03:15:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Shared Equity Homeownership in Korea: Analysis of the First Public Programs
Park, Joon Tae
Korea's public housing policy has reached a turning point, with an emphasis on alternative housing tenure types. Based on the notion of intermediate housing, three homeownership programs—land-lease housing for sale, profit-sharing housing for sale, and accumulated equity housing for sale—have been introduced. The high competition rates shown in these recent projects have proven the demand for these new intermediate or transitional homeownership programs. However, to avoid further trial and error, there is a need for rich discussions on what should be the founding principles and methods of implementation of these new homeownership programs.&#13;
&#13;
This study analyzes Korea's new homeownership programs against the shared equity homeownership (SEH) models. To ground the evaluation, relevant literature and statistical data were examined. From these, the principles and methodologies of the SEH models were derived, and the three homeownership programs were described, including their history and individual projects. As a result of the analysis, it was difficult to conclude that the three homeownership programs have adopted the principles and methodologies of the SEH models. To sustain the supply of affordable housing and to improve the lives of the homeowners who live within them, lessons from the SEH models should be taken into account.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Circadian and Multi-day Rhythms in Generalized Tonic-Clonic Seizure: A Probabilistic Approach</title>
<link href="https://hdl.handle.net/1721.1/152767" rel="alternate"/>
<author>
<name>Zhang, Boyu</name>
</author>
<id>https://hdl.handle.net/1721.1/152767</id>
<updated>2023-11-03T03:29:57Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Circadian and Multi-day Rhythms in Generalized Tonic-Clonic Seizure: A Probabilistic Approach
Zhang, Boyu
Epilepsy is a chronic neurological disorder, characterized by recurrent seizures, that affects more than 50 million people worldwide, representing approximately 0.6% of the global population. This condition poses significant public health challenges, with a heightened risk of premature mortality. Underdiagnosis and undertreatment remain pervasive, particularly in low- and middle-income countries.&#13;
&#13;
Studies have discovered that seizure occurrences are phase-locked to subject-specific circadian and multi-day rhythms in human physiological signals. Moreover, various types of epilepsy have distinctive timing patterns with respect to sleep-wake cycles. However, it remains inconclusive how sleep parameters, non-invasive ambulatory physiological signals, and seizure occurrences are quantitatively related.&#13;
&#13;
We first conduct an observational study on the association between sleep parameters, including duration, efficiency, fragmentation, and regularity, and generalized tonic-clonic seizure (GTCS) occurrences on the next day. We then conduct retrospective analyses of GTCS events phase-locked to rhythms in wrist electrodermal activity (EDA), validating previous claims. Ambulatory sleep-wake cycles and EDA recorded by smart wristbands from more than 1,000 patients diagnosed with GTCS are analyzed. GTCS events are detected by an FDA-cleared algorithm on the wristband.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Inefficiencies and Reflecting the Desires of Low-Income Housing Stakeholders: Recommendations to the Department of Housing and Urban Development to Deploy a Simplified, Developer-Driven Affirmative Fair Housing Marketing Plan Filing Process, as well as Review Proposals for Adaptive Policy Mechanisms</title>
<link href="https://hdl.handle.net/1721.1/152762" rel="alternate"/>
<author>
<name>Ananthabhotla, Bhavani</name>
</author>
<id>https://hdl.handle.net/1721.1/152762</id>
<updated>2023-11-03T03:51:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Addressing Inefficiencies and Reflecting the Desires of Low-Income Housing Stakeholders: Recommendations to the Department of Housing and Urban Development to Deploy a Simplified, Developer-Driven Affirmative Fair Housing Marketing Plan Filing Process, as well as Review Proposals for Adaptive Policy Mechanisms
Ananthabhotla, Bhavani
The Affirmative Fair Housing Marketing Plan (AFHMP) is a set of regulations passed by the U.S. Department of Housing and Urban Development (HUD) to govern the sharing of information about applications for low-income rental housing in accordance with the Fair Housing Act. Collaborators of this work at Camfield Estates, a low-income housing development in Boston, MA, communicated concerns over the regulations’ efficacy as well as desires for increased autonomy in tenant selection and application marketing. The purpose of the research was to describe, using social science and statistical methods, the limitations of the AFHMP regulations that are pertinent to low-income developments, to amplify voiced concerns of Camfield Estates that may also help other low-income developments, and to offer suggestions for improving the AFHMP regulations to align with their original goal.&#13;
&#13;
Qualitative interviews of low-income housing developers, development residents and staff, and HUD New England Compliance staff were conducted to identify the following limitations of the AFHMP which prevent effective enforcement of fair housing goals: (1) that there is significant administrative burden, for both filers and HUD staff, in maintaining and checking for policy compliance, (2) that guidelines for when to file updates were underdefined, (3) that guidelines for how to conduct the analysis to determine groups least likely to apply to a property were underdefined, and (4) that both stakeholders at low-income developments and HUD New England Compliance demonstrated interest in extending affirmative marketing to improve outcomes for those with intersectional identities, such as to address the difficulties of accessing housing while being single, male, and Black. A quantitative analysis of AFHMP, resident, and census data for Camfield Estates was conducted to study the first, second, and third concerns in context.&#13;
&#13;
Recommendations for immediate changes that would respect Camfield Estates’ concerns of autonomy and would not significantly increase the administrative burden for HUD are made, including: (1) that the AFHMP form be simplified to reduce administrative burden, to reduce room for error in the analysis of groups least likely to apply to the development, and to reduce barriers to updating marketing strategy more frequently if needed, (2) that greater flexibility should be allowed in determining affirmative marketing strategy, perhaps by allowing qualitative, free-form responses, and (3) that developers should themselves determine the groups least likely to apply to the development, and HUD should send out a memo banning other agents, such as housing authorities, from limiting developers with pre-completed, read-only analysis on forms. A recommendation is also made for space to be allowed on the newest AFHMP forms for a link to a survey, so that further work can be conducted by approved researchers. To support a long-term feedback mechanism for policy relevance, an exploration of adaptive regulations to govern fair marketing is presented.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cybersecurity Risk Assessment Matrix (CRAM): A System-Theoretic Approach to Balancing Operational and Cybersecurity Risk in the Management of Transient Cyber Assets (TCA) in the Maintenance of Operational Technology (OT)</title>
<link href="https://hdl.handle.net/1721.1/152760" rel="alternate"/>
<author>
<name>Nurthen II, John Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152760</id>
<updated>2023-11-03T04:02:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Cybersecurity Risk Assessment Matrix (CRAM): A System-Theoretic Approach to Balancing Operational and Cybersecurity Risk in the Management of Transient Cyber Assets (TCA) in the Maintenance of Operational Technology (OT)
Nurthen II, John Michael
Less than 10 years ago, cybersecurity of critical infrastructure was a topic of interest in various circles of focused technical subject matter expertise. Today, it has become a mainstream topic of discussion, all too often highlighted by large-scale incidents with global visibility and impact such as Stuxnet, Triton, the Colonial Pipeline attack, and the multiple Russian cyber-attacks on Ukraine. Highlighted by President Joe Biden’s Executive Order on Improving the Nation’s Cybersecurity issued 12 May 2021, and solidified in the March 3, 2023 release of the new National Cybersecurity Strategy, deliberate action and improvement have been demanded at the highest levels of the Federal Government.&#13;
&#13;
Although the digital revolution has established its presence in the automation, oversight, and management of critical facilities and utility systems, the knowledge gap between the management of the mechanical and digital platforms remains significant.  This exposes a critical vulnerability in the oversight of electromechanical processes such as those used to control utility systems, machinery, and industrial processing; often referred to as Operational Technology (OT). OT, by way of delivering its fundamental value amongst the systems and environments in which it operates, demands both routine and non-routine maintenance and repair.  Increasingly often, the required maintenance/repair cannot proceed without the introduction and use of an electronic device (e.g. to run diagnostics, troubleshoot error codes, update OT firmware/software, test and balance, etc).  While not a ubiquitous term amongst all infrastructure industries, the North American Electric Reliability Corporation (NERC) defines the electronic device in this scenario as a Transient Cyber Asset (TCA).  &#13;
&#13;
The introduction of a TCA to the FRCS/OT ecosystem is a well-known and significant threat vector.  In this scenario, there are multiple actions that can be taken to mitigate the cybersecurity risk introduced by the TCA, but the solution is entirely dependent on the time, resources, and capabilities available in that specific location.  Increasingly often, the electronic device required for the maintenance/repair is untrusted and operated by a technician focused on the operational need of the maintenance/repair.  Notably, this scenario requires a field level decision to be made by a non-IT professional (e.g. a Facility Manager) that must consider the tradeoff between the operational need of the maintenance/repair and the cybersecurity risk associated with the use of the untrusted device.&#13;
&#13;
Through literature review and subject matter expert interviews in conjunction with the Department of Defense, MIT Lincoln Laboratory, Cybersecurity at MIT Sloan (CAMS), and private industry, this thesis provides a repeatable, tailorable, risk-based decision framework, referred to as CRAM (Cybersecurity Risk Assessment Matrix), that incorporates both the cybersecurity risk and the operational risk associated with a given maintenance/repair scenario. CRAM aims to give facility managers in the field a reliable tool to assist in the timely assessment and risk mitigation of day-to-day operations and maintenance conducted by outside contractors with untrusted electronics.&#13;
 &#13;
This thesis aims to provide a rudimentary framework to aid in determining how much risk is acceptable in order to maintain operations, and how decision makers in this space can make sensible, informed, Cybersafe decisions on a routine basis.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fostering Well-Being: Designing Technology to Improve the Psychological Well-being of Foster-Involved Youth</title>
<link href="https://hdl.handle.net/1721.1/152759" rel="alternate"/>
<author>
<name>Kumar, Ila Krishna</name>
</author>
<id>https://hdl.handle.net/1721.1/152759</id>
<updated>2023-11-03T03:52:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Fostering Well-Being: Designing Technology to Improve the Psychological Well-being of Foster-Involved Youth
Kumar, Ila Krishna
Over 600,000 youth in the United States experience abuse or neglect each year. Youth who are deemed to be at risk of significant harm in their homes are often removed and placed in a temporary housing situation known as foster care. Despite this system’s goal of supporting youth, research suggests that foster care can negatively impact youths’ ability to heal and develop the skills they need to reach their goals and avoid future traumatic situations. Given that very little has been done to explore how technology might be able to help youth heal and learn coping skills, this project aimed to explore if and how internet-connected technologies (such as smartphones and computers) might be able to support the psychological well-being of youth in and transitioning out of the foster care system. We approached these questions in three phases. In Phase 1, we conducted broad, semi-structured interviews with 16 current and former foster-involved youth to understand their experience and explore if and how technology could promote psychological well-being for foster-involved youth. Through this phase, we learned that young people are especially concerned about the lack of social support youth have in foster care and see opportunities for peer-to-peer technology to fill this need. In Phase 2, we built off these findings by prototyping and testing multiple peer-to-peer support app designs with 24 current and former foster-involved youth. Through this iterative process, we identified that a community-based, reflective check-in system might allow youth to give and receive most types of social support in a safe and comfortable environment. Finally, in Phase 3, we tested this system through a two-week mixed-methods pilot study with 15 current and former foster-involved youth, collecting data to suggest that this type of interface can provide youth with multiple types of social support and thereby improve their psychological well-being.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>WACO: Learning workload-aware co-optimization of&#13;
the format and schedule of a sparse tensor program</title>
<link href="https://hdl.handle.net/1721.1/152757" rel="alternate"/>
<author>
<name>Won, Jaeyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/152757</id>
<updated>2023-11-03T03:42:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">WACO: Learning workload-aware co-optimization of&#13;
the format and schedule of a sparse tensor program
Won, Jaeyeon
Leveraging the large number of zeros in sparse tensors offers a powerful way to solve complex problems efficiently in many applications. However, optimizing the performance of those applications poses a challenge. Sparse tensor programs must find the ideal balance between data format and implementation strategy to achieve optimal performance.&#13;
&#13;
This thesis presents WACO, a novel method of co-optimizing the format and schedule of a given sparsity pattern in a sparse tensor program. A core challenge in this thesis is the design of a lightweight cost model that accurately predicts the runtime of a sparse tensor program by considering the sparsity pattern, the format, and the schedule. The key idea in addressing this is exploiting a sparse convolutional network to learn meaningful features of the sparsity pattern and embedding a coupled behavior between the format and the schedule using a specially designed schedule template. In addition, within the enormous search space of co-optimization, our novel search strategy, an approximate nearest neighbor search, efficiently and accurately retrieves the best format and schedule for a given sparsity pattern.&#13;
&#13;
We evaluate WACO for four different algorithms (SpMV, SpMM, SDDMM, and MTTKRP) on a CPU using 726 different sparsity patterns. Our experimental results show that WACO outperformed four state-of-the-art baselines: Intel MKL, a format-only auto-tuner, TACO with a default schedule, and ASpT. Compared to the best of the four baselines, WACO achieved 1.43×, 1.18×, 1.14×, and 1.27× average speedups on SpMV, SpMM, SDDMM, and MTTKRP, respectively.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Urgency of Presence: Designing Healing Community Spaces After Displacement</title>
<link href="https://hdl.handle.net/1721.1/152756" rel="alternate"/>
<author>
<name>Teng, Melissa Q.</name>
</author>
<id>https://hdl.handle.net/1721.1/152756</id>
<updated>2023-11-03T04:06:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Urgency of Presence: Designing Healing Community Spaces After Displacement
Teng, Melissa Q.
Named for its proximity to the intersection of Massachusetts Avenue and Melnea Cass Boulevard, “Mass. and Cass” is an informal neighborhood in Boston that is often described in the news with disaster-tinged language like “epicenter” and “tent city”. After this neighborhood was declared a “public health crisis”, the City of Boston made major investments into constructing and bolstering permanent supportive housing and other much-needed services. But when we sat with its unhoused, drug-using, and outreach communities on the ground, they described parallel investments in militarized public spaces, an exclusionary neighborhood planning process, and stigmatizing media stories that overemphasize the neighborhood’s crime and violence. Most narratives about “Mass. and Cass” ignore these structural oppressions, exemplifying how current “solutions” to homelessness are less concerned with the well-being of unhoused people and more with their disappearance from public space. In response, our art collective See You In The Future has been working with community members of “Mass. and Cass” and poor people’s movements to research how histories of crisis and displacement connect with current anti-homeless policies, and to collectively imagine what healing community spaces might feel like. Centering the wisdom and lived experiences of residents and staff—and informed by liberatory and loving philosophies like harm reduction, disability justice, and abolition—we offer four spatial design values: belonging, care, hope, and growth. As our project is ongoing, this document shares our work thus far: our methods rooted in seeing and solidarity; research on the creative labor of maintaining community spaces despite policy interventions; practical notes on designing workshops and a mural; and finally reflections on presence and solidarity as outside artists and designers. 
Because we are focusing on community stories, which are in some sense infinite, I present our work as a series of essays to emphasize the indeterminate, character-led, and emotional nature of our methods and findings. My hope is this reads like a walk, where our feet stay planted on the ground and the humanity of community members never leaves our sight.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Age-Inclusive Design Framework for  On-Demand, Shared Autonomous Vehicles</title>
<link href="https://hdl.handle.net/1721.1/152755" rel="alternate"/>
<author>
<name>Hong, David</name>
</author>
<id>https://hdl.handle.net/1721.1/152755</id>
<updated>2023-11-03T04:10:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Age-Inclusive Design Framework for  On-Demand, Shared Autonomous Vehicles
Hong, David
The oft-repeated promise of autonomous vehicles is to make transportation safer, cleaner, more accessible, and more convenient, in particular for vulnerable and underserved groups such as older adults and people using mobility devices. This future, however, is far from guaranteed; rather, it must be paved by a number of stakeholders, including, at minimum, those who have been traditionally underserved, as well as designers, AV makers and operators, policy makers, and regulatory authorities. If we do not carefully study the mobility needs of users, young and old, and design to meet them, we stand to repeat the same fate of inaccessibility in new mobility as in the case of transportation network companies (TNCs). The time to think about age-inclusive design for AVs is now, and I make a case for this here. This thesis explores the following questions: ‘How can we imagine a fully autonomous future if we do not have a viable transportation pathway for younger children and older adults?’ ‘What challenges might users of mobility devices (e.g., rollators, baby strollers) face in using driverless vehicles with hitherto unseen form factors?’ ‘What spatial allowances and features should vehicle designers consider when re-imagining the interior space of autonomous vehicles?’ The study analyses user needs, questions, and suggestions across ten (10) vehicle touchpoints, and presents a series of recommendations aimed at design, operation, policy, regulation, and institutional reform.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Municipal Bonds for Financing India's Urban Infrastructure: The Case of Indore</title>
<link href="https://hdl.handle.net/1721.1/152754" rel="alternate"/>
<author>
<name>Gangamreddypalli, Lakshmi</name>
</author>
<id>https://hdl.handle.net/1721.1/152754</id>
<updated>2023-11-03T03:42:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Municipal Bonds for Financing India's Urban Infrastructure: The Case of Indore
Gangamreddypalli, Lakshmi
To address the challenges arising from growing urbanization, local governments in India need to allocate significant funds to facilitate the development of urban infrastructure in the coming decades. The financial constraints experienced by governments at various levels, especially at the local level, underscore the need for alternative financing methods to bridge the substantial investment gap. Municipal bonds present a viable option for accessing the capital market for long-term debt to finance urban infrastructure.&#13;
&#13;
India’s history with municipal bonds dates back to the mid-1990s, yet its municipal bond market is shallow and urban local bodies remain highly underleveraged. Recent initiatives aimed at developing the municipal bond market have led to an increase in bond issues since 2017. However, this activity is very limited and few municipalities have been successful in issuing bonds. In this context, Indore’s relatively active participation in India’s municipal bond market, despite facing similar challenges as other municipalities, offers an interesting case study. This thesis analyzes Indore Municipal Corporation’s latest green bond issuance and situates it within the trajectory of municipal bond financing in India in order to understand the factors contributing to the city’s performance, and to reflect on the replicability and scalability of these factors to proximate contexts.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Facilitating Adoption of Continuous Manufacturing Platforms in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/152753" rel="alternate"/>
<author>
<name>Klukovich, Hope</name>
</author>
<id>https://hdl.handle.net/1721.1/152753</id>
<updated>2023-11-03T03:40:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Facilitating Adoption of Continuous Manufacturing Platforms in the Pharmaceutical Industry
Klukovich, Hope
Continuous manufacturing (CM) of pharmaceutical products has gained a great deal of interest over the past decade. CM promises multiple benefits to all pharmaceutical industry stakeholders; however, pharmaceutical manufacturers generally have been slow to invest in the technology and even slower to transition their manufacturing operations from batch, even when a CM process would make the most sense. This thesis intends to drive the implementation of CM to augment batch manufacturing, allowing for a wider array of manufacturing tools in the pharmaceutical manufacturing enterprise. It takes a system-focused approach to developing a change system for small molecule pharmaceutical manufacturing, with the emergent property of an actionable framework, based on systems architectural design, that manufacturers can use. This research employs the Architecting Innovative Enterprise Strategy (ARIES) Framework to illustrate the current and future landscapes of the pharmaceutical manufacturing enterprise. First, the problem space, specifically the environment that impacts the pharmaceutical manufacturing enterprise, including stakeholders and governing agencies, is described. Second, the envisioned future for the drug manufacturing enterprise, in which the enterprise adopts CM as a dominant manufacturing process rather than relying solely on batch manufacturing, is examined. Finally, a framework for the transition to CM in the pharmaceutical manufacturing enterprise is synthesized from the ARIES elements (strategy, process, organization, knowledge, products, services, information, infrastructure) nested in the previously described ecosystem and stakeholders. This framework is not prescriptive; rather, it is intended to be adapted to each company’s unique business model and operational circumstances.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility of a Human Capital Digital Twin</title>
<link href="https://hdl.handle.net/1721.1/152752" rel="alternate"/>
<author>
<name>Lindstrom, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/152752</id>
<updated>2023-11-03T04:04:40Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Feasibility of a Human Capital Digital Twin
Lindstrom, Ethan
CEOs often state that people are their most important asset, thus expressing that human capital is vital to achieving businesses’ strategic outcomes. Yet, very few would be likely to list their HR information system as a critical enabler of their business. Meanwhile, engineering disciplines have begun combining technological advances to create digital twins of important company assets, which provide unprecedented visibility into the state of these assets and can significantly improve the ability to manage them. This thesis asks if it is possible to apply those cutting-edge engineering tools (digital twins) to enhance the visibility and management of human capital. And if so, what might such a system look like?&#13;
&#13;
To get there, the thesis defines the scope and objectives of a potential human capital digital twin, analyzes existing HR systems to see if they are already digital twins, and then proposes a conceptual architecture for a skills-focused human capital digital twin. The risks (both technical and sociotechnical) of implementing such a system are then evaluated and discussed.&#13;
&#13;
The conclusion is that while creating a human capital digital twin appears to be possible, it is less clear whether it is advisable. ‘Technifying’ HR comes with significant risks that may outweigh the benefits. However, elements of the proposed system may still be worth adopting (or adapting), as a skills-based approach to managing human capital can bring substantial benefits for employees and the business.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance characterization of functional fiber detectors: scintillating fibers with embedded photodiodes</title>
<link href="https://hdl.handle.net/1721.1/152751" rel="alternate"/>
<author>
<name>Ohstrom, E. V.</name>
</author>
<id>https://hdl.handle.net/1721.1/152751</id>
<updated>2023-11-03T03:24:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Performance characterization of functional fiber detectors: scintillating fibers with embedded photodiodes
Ohstrom, E. V.
In various operational scenarios, the effectiveness of existing radiation detection technologies is frequently hindered by limitations in terms of portability, adaptability, and cost-effectiveness. Bridging this critical gap necessitates innovative approaches, and thus, this thesis proposes a solution in the form of radiation-sensitive functional fabrics. This innovative concept involves the integration of avalanche photodiodes within scintillating fibers, thereby engendering a detector that is not only lightweight and flexible, but also remarkably affordable.&#13;
&#13;
The foundation of this methodology is a convergent thermal draw process, an intricate technique that yields millimeter-thick fibers encompassing all essential detector components. Through a series of iterative experiments, a limited number of fibers embedded with silicon photomultipliers (SiPMs) have been produced for study. The light attenuation length of each prototype of the functional fiber detectors is measured. In addition, the SiPMs used in this work have been carefully calibrated to establish a precise correspondence between the measured energy and the number of photoelectrons detected by the SiPM. This calibration allows for determination of the detection threshold of the functional fiber detectors, underpinning their effectiveness in radiation detection.&#13;
&#13;
After obtaining a clear understanding of the performance, future plans include more complicated multi-fiber arrays and fabrics. The potential applications of functional fiber detectors include identification of unknown radioactive sources, wearable detectors for warfighters and first responders, and flexible detector arrays for arms control applications.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting an Upstream Oil and Gas Enterprise for Innovation</title>
<link href="https://hdl.handle.net/1721.1/152750" rel="alternate"/>
<author>
<name>Dargis, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/152750</id>
<updated>2023-11-03T03:44:09Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Architecting an Upstream Oil and Gas Enterprise for Innovation
Dargis, Justin
The oil and gas industry’s rapidly evolving and dynamic nature has historically led to significant volatility within the energy sector. Upstream oil and gas enterprises, in particular, have lagged in adapting and adjusting their corporate strategies to embrace cleaner and more sustainable practices in oil and gas production. This highlights the critical need for enterprise transformation that fosters innovation and positions companies as leaders in the industry.&#13;
&#13;
Creating an innovative upstream oil and gas enterprise requires a fundamental shift in the fabric that has traditionally defined success in the industry. Unfortunately, the conventional work processes and procedures have proven to be ill-suited for adapting to the changing times and evolving societal pressures. The industry’s heavy reliance on external parties and their prescribed processes further contributes to the rigidity and impedes a focus on innovation.&#13;
&#13;
To address these challenges, the ARIES methodology includes insights gathered from interviews to lay the groundwork for designing a flexible enterprise promoting collaboration and innovation. Various evaluative strategies assessed different enterprise concepts and attributes, including applying Multi-Attribute Utility, Tradespace analysis, the Pugh Matrix, and Weighted SWOT analysis.&#13;
&#13;
By embracing a more flexible and innovative approach, upstream oil and gas enterprises can break free from the constraints of traditional practices and position themselves at the forefront of industry transformation. This shift will enable them to navigate the ever-changing landscape more effectively and contribute to sustainable and responsible oil and gas production in alignment with societal and environmental expectations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coast Guard Aviation &amp; the Assignment Problem: &#13;
An Auction Model to Allocate the Future 'All-Jayhawk' Fleet</title>
<link href="https://hdl.handle.net/1721.1/152748" rel="alternate"/>
<author>
<name>Ensley, Kyle L.</name>
</author>
<id>https://hdl.handle.net/1721.1/152748</id>
<updated>2023-11-03T03:47:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Coast Guard Aviation &amp; the Assignment Problem: &#13;
An Auction Model to Allocate the Future 'All-Jayhawk' Fleet
Ensley, Kyle L.
As the US Coast Guard (CG) prepares to transition from a mixed rotary wing fleet of MH-65 Dolphins and MH-60 Jayhawks to an ‘All-Jayhawk’ fleet, an opportunity is presented to seek an optimized set of aircraft assignments prior to making capital facilities investments.  Through more optimized assignments, the CG can achieve better mission value at cost.  The objective of this thesis is to build a model that aids the CG in making rotary wing aircraft basing and satellite unit organizational decisions as it transitions to an ‘All-Jayhawk’ fleet of 127 aircraft, one that can trade off geographic coverage against cost.  The decision to assign Jayhawks to different aviation locations will be assessed under the auspices of the ‘Assignment Problem,’ the combinatorial optimization problem of assigning the elements of two sets to each other while optimizing an overall metric.  Optimization will be sought with an auction technique, one solution to the Assignment Problem.&#13;
&#13;
This thesis will begin with a historical review of the CG’s rotary wing fleet and aviation facilities since the CG first created an aviation program in 1916.  This review will showcase trends and possible correlation between increasing rotary wing aircraft ranges, reductions in full-service Air Stations, and growth in satellite aviation facilities used to forward deploy aircraft.  This thesis will then break down these different Aviation Support Constructs by Architectural Decisions and model them with Design Structure Matrices to better understand differences and cost drivers.  The Architectural Decisions will be used to build a model that estimates the total cost of the Jayhawk fleet’s global assignment to any mix of 39 locations under four Support Constructs.  Ten years of CG mission data and aircraft capability range rings will be overlaid in GIS software, to visualize and quantify where CG missions are required, and which air stations are most valuable.  Six Assignment Problem Auctions will then be conducted with differing objective criteria to seek a best identifiable set of global assignments for the Jayhawk fleet, with metrics including mission coverage percent and the Net Present Value cost of the assignment set over the fleet’s lifespan.  &#13;
&#13;
This analysis and the six auctions will show the relationship between geographic mission coverage and costs and will suggest a Pareto front to showcase a short list of sets of global Jayhawk assignments for consideration by the CG.  Auction B will be performed with the objective criteria to seek the lowest cost set of fleet assignments while still achieving the threshold mission coverage rate.  Auction B’s result will be proposed as the best-identifiable result, achieving the baseline mission coverage percent with only 14 aviation locations, 25 fewer than the status quo, and 36% less expensive than the CG’s notional plan.  Following demonstration of this technique, it will be proposed for use by the CG, to be adapted with refined objective criteria, to seek an optimal set of global assignments for the future All-Jayhawk fleet.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Innovation in Technical Teams: A Study of Design Thinking and Systems Architecture Integration</title>
<link href="https://hdl.handle.net/1721.1/152747" rel="alternate"/>
<author>
<name>Anderson, Warren V.</name>
</author>
<id>https://hdl.handle.net/1721.1/152747</id>
<updated>2023-11-03T03:19:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing Innovation in Technical Teams: A Study of Design Thinking and Systems Architecture Integration
Anderson, Warren V.
With increasing discoveries in technology and new emerging markets, large enterprises and agencies see a rising demand for innovation. As a result, the roles and responsibilities of technical experts, including engineers and scientists, in these organizations are growing. Technical experts are being pushed to expand their capabilities beyond solution evaluation and into the divergent concept exploration space. Additional tools and skills are needed to support these technical experts in this new approach.&#13;
&#13;
NASA's Aeronautics Research Mission Directorate (ARMD) is dedicated to transforming aviation to meet the nation's and the world's future needs. The Convergent Aeronautics Solutions (CAS) project was developed to accelerate ARMD's innovation capabilities. The CAS project is designing a bespoke innovation framework that fits its culture and mission through human-centered design, leveraging tools from systems architecture to address complex societal problems through aviation. This thesis investigates how to support the technical experts, using the CAS project as a case study together with interviews conducted with team members. &#13;
&#13;
This real-world case study provided a unique opportunity to observe a large agency. This thesis discusses three insights that emerged from this research into how to support new technical teams during ideation. First, embrace the natural tendency of technical experts to generate concepts. While systems architecture and human-centered design prescribe exploring the problem before developing concepts, it is better to make some space for the technical experts to propose ideas. Second, concept generation and the ideation process can benefit from an experienced facilitator(s) to help keep the team in a generative mindset. Teams new to the ideation process need assistance while they gain experiential learning of this new approach. Finally, this early lifecycle exploration of the problem and the stakeholder's needs can be ambiguous and challenging. The tools and methods of human-centered design and systems architecture can help structure the approach for problem formulation, interpreting the stakeholder's needs, and generating transformative solutions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Industry Platforms: Case Studies to Measure Platform Capabilities for US Unicorns</title>
<link href="https://hdl.handle.net/1721.1/152746" rel="alternate"/>
<author>
<name>AlSadah, Yousif Fayez</name>
</author>
<id>https://hdl.handle.net/1721.1/152746</id>
<updated>2023-11-03T03:32:35Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Industry Platforms: Case Studies to Measure Platform Capabilities for US Unicorns
AlSadah, Yousif Fayez
Large-sample empirical research by Cusumano et al. found that US privately-held unicorns with platform capabilities command, on average, a 123% premium over non-platforms. However, measuring the extent to which a company is platform-based is a difficult problem, given the complexities of business organizations and how their activities interact with each other in non-linear ways.  &#13;
&#13;
This thesis attempts to address this by proposing a systems thinking, case-based approach to evaluate the key business activities of a firm with potential platform capabilities, applying the author’s proposed Platform Classification Matrix to five of the largest US privately-held firms: Epic Games, Databricks, Plaid Technologies, Stripe, and Instacart. Each business activity of a firm is classified as platform or non-platform; if it is a platform, it is assessed based on its revenue contribution to the firm and three strength metrics: network effects, strength against multihoming, and new-entrant deterrence. This matrix generates a ‘platform strength’ metric and allows identification of the platform activity with the most potential toward a winner-take-all-or-most outcome.&#13;
&#13;
The author further proposes combining this matrix with a system dynamics approach to identify how differing business activities can boost or hinder the leading platform service, allowing decision makers to assess whether retaining or subsidizing seemingly low-performing business lines is strategic for their leading platform.&#13;
&#13;
The thesis concludes by advocating for using both methods as well as the generated metrics to perform a holistic analysis when evaluating firms with platform capabilities potential.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Achieving Robustness and Generalization in MARL&#13;
for Sequential Social Dilemmas through Bilinear&#13;
Value Networks</title>
<link href="https://hdl.handle.net/1721.1/152745" rel="alternate"/>
<author>
<name>Ma, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/152745</id>
<updated>2023-11-03T03:54:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Achieving Robustness and Generalization in MARL&#13;
for Sequential Social Dilemmas through Bilinear&#13;
Value Networks
Ma, Jeremy
This thesis presents a novel approach for training multi-agent reinforcement learning (MARL) agents that are robust to different unforeseen gameplay strategies in sequential social dilemma (SSD) games. Recent literature has demonstrated that reward shaping can not only be used to enable MARL agents to discover diverse, human-interpretable strategies with emergent qualities, but also help alleviate the issue in conventional actor-critic methods that tend to converge to suboptimal Nash equilibria in SSD games. However, agents trained through self-play typically converge and overfit to a singular Nash equilibrium. Consequently, these agents are limited to executing the specific strategy they have converged to during training, which renders them ineffective when faced with opponents employing commonly-used strategies such as tit-for-tat. This thesis proposes a method that employs a bilinear value critic that can learn an adaptive and robust strategy in SSD games through self-play with randomized reward sharing. We evaluate the efficacy of this approach on “prisoner’s buddy,” an iterated three-player variant of the prisoner’s dilemma game. Our results show that the bilinear value structure helps the critic generalize over the reward sharing manifold and leads to an adaptive agent with emergent qualities such as reputation. The results of this research highlight the ability of MARL agents to learn a general high-level policy that can effectively socialize with agents with different strategies in SSD games, despite being trained through self-play. The proposed method is scalable and has the potential to be applied to a wide range of multi-agent competitive-cooperative environments, providing insights into the design of MARL algorithms for solving social dilemmas.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Learned Soups: neural network averaging for joint clean and robust performance</title>
<link href="https://hdl.handle.net/1721.1/152744" rel="alternate"/>
<author>
<name>Huang, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/152744</id>
<updated>2023-11-03T03:31:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Adversarial Learned Soups: neural network averaging for joint clean and robust performance
Huang, Brian
To make computer vision models more adversarially robust, recent literature has made various additions to the adversarial training process, from alternative adversarial losses to data augmentations to the usage of large numbers of diffusion-generated synthetic samples. However, models trained for adversarial robustness often face an inherent tradeoff between performance on clean images and performance against adversarial attacks. Methods that primarily seek to boost adversarial robustness may not optimize for the best combined performance along the clean-vs.-adversarial tradeoff. We devise a method to finetune adversarially trained models for combined clean and robust performance, borrowing from the method of "model soups," where parameters within an ensemble of finetuned checkpoints are averaged to form new model weights. Such model soups have been shown to improve performance in transfer learning settings while maintaining or improving the original task performance; extending from this observation, we find that linear interpolation of adversarially robust ensemble parameters reaps similar benefits in the tradeoff between robustness and clean accuracy. Furthermore, we construct a wrapper architecture, or "learned soup," to adversarially train our interpolation coefficients for model soups, and find that, in some cases, directly training the souping coefficients leads to a more robust model than grid-searching for the coefficients. This method of adversarial learned soups can be applied in conjunction with existing methods for adversarial training, further bolstering the current arsenal of defenses against adversarial attacks.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LoopTree: Enabling Systematic and Flexible&#13;
Exploration of Fused-layer Dataflow Accelerators</title>
<link href="https://hdl.handle.net/1721.1/152743" rel="alternate"/>
<author>
<name>Gilbert, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152743</id>
<updated>2023-11-03T04:05:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">LoopTree: Enabling Systematic and Flexible&#13;
Exploration of Fused-layer Dataflow Accelerators
Gilbert, Michael
Deep neural network (DNN) accelerators exploit data reuse to reduce memory traffic. Typically, DNN accelerators exploit data reuse within layers. However, there is also reuse between layers. To exploit this inter-layer reuse opportunity, fused-layer dataflow accelerators tile and buffer intermediate data between layers on-chip to benefit from inter-layer reuse while minimizing buffer size. To further minimize buffer space requirement, some fused-layer dataflows also propose not buffering part of the tile at the cost of recomputation of the unbuffered data.&#13;
&#13;
The design space of fused-layer dataflow accelerators is large, but prior work only considers a subset of the design space. Prior works are limited in a number of ways: (1) tiling only in certain dimensions, leaving some designs unexplored; (2) limited choices of reuse/recompute which are applied uniformly to all layers, leading to increased recomputation; (3) not exploring the interaction of tiling and reuse/recompute choices; and (4) applying the same design choices for all layers in the DNN despite diverse layer shapes, which call for different choices.&#13;
&#13;
To address these limitations, we propose (1) a more extensive design space, (2) a taxonomy that introduces structure into the design space, and (3) a fast, flexible, analytical model, called LoopTree, to evaluate the latency, energy consumption, buffer space requirements, and bandwidth requirements of designs in this design space. Finally, we present case studies enabled by LoopTree that show how exploring this larger space results in designs that require less buffer space (e.g., up to 7.6× buffer space reduction for the same off-chip transfers).
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Application of Graph of Convex Sets Trajectory Optimization to the Marine Robotics Domain</title>
<link href="https://hdl.handle.net/1721.1/152742" rel="alternate"/>
<author>
<name>Largaespada, Raul Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152742</id>
<updated>2023-11-03T03:25:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">An Application of Graph of Convex Sets Trajectory Optimization to the Marine Robotics Domain
Largaespada, Raul Alexander
Autonomous unmanned surface vehicles (USVs) and unmanned underwater vehicles (UUVs) are becoming ubiquitous in applications exploring marine environments, and the design of path planning algorithms for these vehicles remains an open area of research. In marine environments, to save energy, a path between two points should be optimized to minimize distance traveled while remaining smooth, to reduce changes in speed and account for the dynamic limits of the vehicle.&#13;
&#13;
The Graphs of Convex Sets (GCS) trajectory optimization motion planner from the MIT Robot Locomotion Group is a recently developed planner which has been demonstrated to return smooth and optimal paths navigating around complex environments filled with obstacles, but this planner has not been applied to marine environments. The early successes of the GCS planner and the smoothness of the trajectories returned suggest that GCS could be effectively applied to USV and UUV path planning.&#13;
&#13;
This project implemented the GCS planner as part of the MOOS-IvP software suite for autonomous marine robotics. The robustness of the trajectories returned by GCS was evaluated via Monte Carlo trials on a simulated USV traversing a field of randomized known and unknown obstacles. The performance of GCS was compared against alternate planners implementing the D* Lite algorithm or relying only on existing MOOS-IvP obstacle avoidance capabilities, running in the same simulation environment.&#13;
&#13;
In testing, the GCS planner was not as successful as the D* Lite planner in navigating dense obstacle fields, but returned smoother and shorter paths than D* Lite which were easier for the vehicle to follow. Testing also suggested future modifications to the GCS planner which could be added to further increase its robustness when applied to USVs operating in dense obstacle fields.&#13;
&#13;
All code developed for this project may be found at: https://github.com/rlargaespada/moos-ivp-monte-carlo.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sediment Erosion and Deposition Within Mangrove Forests</title>
<link href="https://hdl.handle.net/1721.1/152741" rel="alternate"/>
<author>
<name>Deitrick, Autumn Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/152741</id>
<updated>2023-11-03T03:29:37Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Sediment Erosion and Deposition Within Mangrove Forests
Deitrick, Autumn Rose
Mangroves are highly productive ecosystems that sequester carbon in their own biomass and by trapping carbon-rich sediment imported from outside the forest and deposited in the forest. Aboveground biomass, like mangrove pneumatophores (i.e., aerial roots), creates conditions that facilitate sediment deposition by enhancing drag and slowing currents near the bed. However, pneumatophores also generate turbulence that enhances turbulent kinetic energy (TKE), which can promote sediment resuspension. Two studies were conducted to better understand the impacts of pneumatophore-generated turbulence on sediment transport. The first study investigated whether pneumatophore-generated turbulence impacted the erosion threshold and rate of natural cohesive sediment collected from a black mangrove habitat. Sediment cores with intact belowground and aboveground biomass were placed in a recirculating channel. Pneumatophores were removed from one side of each core. Each side of the core, with and without pneumatophores, was separately exposed to the same sequence of channel velocities. Although the presence of pneumatophores significantly enhanced the turbulence in the channel, the bed stress, threshold for sediment resuspension, and rate of sediment erosion were similar for the bare and vegetated sides of each core. This result differs from non-cohesive sediments, for which pneumatophore-generated turbulence has been found to increase erosion rates. The second study considered deposition. Laboratory experiments measured TKE and net deposition of non-cohesive sediment in bare and vegetated channels. For the same velocity, as pneumatophore density increased, TKE increased and net deposition decreased. The impact of TKE on deposition was described in terms of a deposition probability model. 
This model was used to predict deposition over a range of typical mangrove field conditions, which indicated that pneumatophore-generated turbulence can facilitate the delivery of sediment farther into the mangrove forest. Understanding how pneumatophores impact the balance of the competing processes of deposition and erosion is critical for improving the assessment and modelling of sediment retention and carbon storage in mangrove forests.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing the technical feasibility of converting U.S. salt&#13;
caverns used for natural gas storage into hydrogen&#13;
storage facilities</title>
<link href="https://hdl.handle.net/1721.1/152740" rel="alternate"/>
<author>
<name>Paca, Edgar</name>
</author>
<id>https://hdl.handle.net/1721.1/152740</id>
<updated>2023-11-03T03:27:30Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessing the technical feasibility of converting U.S. salt&#13;
caverns used for natural gas storage into hydrogen&#13;
storage facilities
Paca, Edgar
The 2015 Paris Agreement laid the foundation for the current momentum in renewable energy, leading to a significant increase in availability. According to the International Energy Agency (IEA), by 2022, the world was on track to add nearly 2,400 GW of renewable energy in the next five years, equivalent to what was achieved in the past two decades. This investment in renewables aims to reduce global emissions and limit temperature rise to below 1.5 degrees Celsius by 2050.&#13;
&#13;
Hydrogen is emerging as a crucial energy carrier, particularly for hard-to-decarbonize sectors such as heavy-duty transportation, cement production, iron and steel manufacturing, chemicals, and building materials. While progress in wind and solar energy has been groundbreaking, the issue of large-scale energy storage remains persistent. The intermittent nature of wind and solar power requires a storage medium capable of handling seasonal variations, similar to the underground salt caverns used as natural gas reservoirs since 1961.&#13;
&#13;
In light of these challenges, this thesis examines the possibility of repurposing existing U.S. natural gas storage salt caverns into hydrogen storage facilities. By exploring this approach, we can utilize the established infrastructure and leverage the extensive knowledge gained from decades of natural gas storage. This can potentially accelerate the adoption of hydrogen as a clean and sustainable energy alternative.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Industrial Decarbonization: Evaluating the State of Organic Membrane Technology for Class-Based Hydrocarbon Separation</title>
<link href="https://hdl.handle.net/1721.1/152739" rel="alternate"/>
<author>
<name>Cochran, Corinne S.</name>
</author>
<id>https://hdl.handle.net/1721.1/152739</id>
<updated>2023-11-03T03:55:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Advancing Industrial Decarbonization: Evaluating the State of Organic Membrane Technology for Class-Based Hydrocarbon Separation
Cochran, Corinne S.
Industrial organic chemical separations are major contributors to carbon dioxide (CO₂) emissions in the energy industry, contributing to global temperature rise. As membrane separations have successfully reduced energy requirements in water purification and desalination, their application in organic separations, a challenging area to decarbonize, is gaining attention. With gas-phase organic membrane separations being installed at the industrial scale, liquid separations remain the next frontier for industrial decarbonization.&#13;
&#13;
This thesis begins with an exploration of the history and current developments in membrane technology, focusing on enhancing membrane applications for liquid organic hydrocarbon separations. The objective is to showcase technological advancements that address the existing limitations of semi-permeable membrane systems in organic liquid hydrocarbon separation processes.&#13;
&#13;
Then this work presents a first-order thermodynamic screening method to determine the suitability of membrane separations for different liquid separation processes in a refinery. The method is specifically applied to a data set of gasoline and lighter hydrocarbon separations executed following a fluid catalytic cracking (FCC) operating unit.&#13;
&#13;
First, the findings highlight that non-polymer membrane materials offer improved durability and performance. Second, preferential separations for liquid organic membrane applications involve feed compositions with a higher percentage of material above the intended molecular weight cut-off (MWCO) for separation. Third, combining membrane and traditional distillation separation methods in brownfield constructions is observed to be effective in mitigating distillation overhead limitations.&#13;
&#13;
Lastly, this work identifies areas for improvement and recommends technological advancements to further the industrial adoption of semi-permeable membrane installations, enhancing the potential for widespread implementation and significant environmental impact.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Play taxonomies: A toy index for product design</title>
<link href="https://hdl.handle.net/1721.1/152737" rel="alternate"/>
<author>
<name>Rossikopoulou Pappa, Styliani</name>
</author>
<id>https://hdl.handle.net/1721.1/152737</id>
<updated>2023-11-03T03:02:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Play taxonomies: A toy index for product design
Rossikopoulou Pappa, Styliani
This research delves into the diverse landscape of play categorizations, spanning historical foundations to contemporary perspectives, with a focus on their significance in toy design. Drawing insights from prominent scholars and classification frameworks, this study introduces an approach to be used during the toy design process and to nurture constructive design critique. &#13;
&#13;
Beginning with Johan Huizinga's foundational dual classification of play, the groundwork is laid for comprehending the contest and representation forms of play. Jean Piaget's developmental viewpoint is explored next, underscoring the progressive nature of play categories and their pivotal role in children's cognitive and social development. Roger Caillois's taxonomy illuminates the spectrum of play types, uncovering the intricate interplay between human behavior and culture. Sara Smilansky's observations in child development further shed light on how play influences cognitive, social, and emotional growth.&#13;
&#13;
A comprehensive toy product index emerges as a central outcome, offering a structured framework for evaluating and categorizing toy products that transcends traditional play value assessments. By encompassing attributes such as affect, miniaturization, assembly, simulation, craft, education, event-oriented toys, and collectibles, the index equips toy designers, educators, and users to explore, compare, and critique products. The study details a methodical approach to data collection, categorization, database construction, and validation, while acknowledging inherent limitations and envisioning future refinements. &#13;
&#13;
Ultimately, this study aims to bridge the gap between theoretical play classifications and their practical implications in design, to enhance the toy design process and foster a culture of informed design critique. By intertwining play categorizations with innovative design methodologies, this research aims to provide a deeper understanding of toys’ significance in material culture. The toy product index emerges as a useful tool, promoting informed exploration, collaborative ideation, and innovative thinking within the realm of toy design.&#13;
&#13;
Keywords: play categorizations, toy taxonomy, play attributes, affect, miniaturization, assembly, simulation, craft, education, event-oriented toys, collectibles, toy product index.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic Minimization of Ocean Twilight Zone Vehicle, Mesobot</title>
<link href="https://hdl.handle.net/1721.1/152736" rel="alternate"/>
<author>
<name>Davis, Cameron J.</name>
</author>
<id>https://hdl.handle.net/1721.1/152736</id>
<updated>2023-11-03T03:24:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Acoustic Minimization of Ocean Twilight Zone Vehicle, Mesobot
Davis, Cameron J.
The ocean’s twilight zone (OTZ) is one of the most unexplored regions of the Earth’s oceans.&#13;
The OTZ is defined as the region of the water column between 200 and 1,000 meters in&#13;
depth. It plays a vital role in the global carbon cycle, pushing carbon from the surface layer&#13;
into the deep ocean. It has a very diverse population of fauna, known and unknown, that&#13;
migrate up and down the water column to feed and reproduce. The migration pattern occurs&#13;
based on the amount of radiated sunlight into the water column. The mid-water column&#13;
vehicle, Mesobot, was designed to mimic the migration patterns of mesopelagic organisms.&#13;
Unmanned Underwater Vehicles (UUVs) have become a staple of ocean exploration for years,&#13;
going where man is not able to. Although much quieter than noise from shipping traffic,&#13;
the noise radiated from Mesobot could introduce error into observation, tracking,&#13;
and sampling. In this thesis, I have analyzed the effect of commutation methods and&#13;
propeller design on the acoustic noise radiated from a single BlueRobotics T200 thruster.&#13;
The propeller design choices are a standard three-blade propeller and a three-blade toroidal&#13;
propeller. The commutation methods analyzed are trapezoidal control and field-oriented&#13;
control. After analyzing four different alternatives, quantitative evidence was found to&#13;
recommend using field-oriented control as the commutation scheme to minimize the radiated&#13;
noise from the thrusters on Mesobot. The radiated noise from the thruster was dominated by&#13;
motor noise, and no conclusive evidence was found to recommend the three-blade propeller&#13;
over the toroidal propeller.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Queueing System Analysis in Oil and Gas Abandonment Operations</title>
<link href="https://hdl.handle.net/1721.1/152731" rel="alternate"/>
<author>
<name>Monnig, Jonathan R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152731</id>
<updated>2023-11-03T03:24:08Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Queueing System Analysis in Oil and Gas Abandonment Operations
Monnig, Jonathan R.
Oil and Gas (O&amp;G) well abandonments are crucial in ensuring environmental and regulatory compliance in the industry. This thesis characterizes an O&amp;G well abandonment system and process using well-plugging regulations from six major hydrocarbon-producing states in the United States. A comprehensive process flow diagram depicting the O&amp;G well abandonment process is presented. The well abandonment process is further characterized as a queueing system and a queueing network model is developed. &#13;
&#13;
The study introduces four job prioritization classes to explore the impact of prioritization and priority queues on the defined system performance metrics. Due to the complexity of the system, and instances when the server capacity required exceeds the capacity available, traditional queueing equations are inadequate, necessitating the use of simulation. The simulation, implemented using Python and the SimPy library, assesses the system's behavior and efficiency. The functionality of the simulation is demonstrated through five insights that explore varying architectures of the developed queueing system, encompassing server prioritization schemes, service channel configurations, job priority compositions, review periods, and dynamic server counts.&#13;
&#13;
This analysis employs queueing theory to model the stochastic behavior of the O&amp;G well abandonment process, emphasizing the need for simulation. The resultant model identifies dominant queueing system architectures, including combined service channel configurations, priority queues, and review periods that reduce average throughput times and variability of prioritized jobs in the well abandonment process.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct Air Capture as a Carbon Removal Solution: Analyzing Scale-Up, Cost Reduction, and Pathways for Acceleration</title>
<link href="https://hdl.handle.net/1721.1/152729" rel="alternate"/>
<author>
<name>DiMartino, Brooke B.</name>
</author>
<id>https://hdl.handle.net/1721.1/152729</id>
<updated>2023-11-03T03:37:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Direct Air Capture as a Carbon Removal Solution: Analyzing Scale-Up, Cost Reduction, and Pathways for Acceleration
DiMartino, Brooke B.
In addition to drastic reductions in global carbon dioxide emissions, the Intergovernmental Panel on Climate Change has stated with high confidence that carbon dioxide removal will be needed to meet the Paris Agreement temperature goals. Direct air capture is a novel carbon removal technique that is gaining attention for its potential contribution to the portfolio of carbon removal solutions. As its primary barrier to deployment is its high cost, there is a focus on understanding how this technology could reach lower costs by mid-century.&#13;
&#13;
This thesis uses technological change theory to investigate potential scale-up and cost reduction forecasts for existing direct air capture methods. The literature review provides context for carbon dioxide removal, direct air capture, and technological change theory. Analogous technologies are reviewed for cost-reduction drivers and compared to the common direct air capture methods. This comparison is used for learning and improvement rate analysis to estimate cost reduction forecasts for mature direct air capture methods, then used to identify levers that direct air capture stakeholders can deploy to accelerate scale-up and cost reductions.&#13;
&#13;
The results suggest solid sorbent direct air capture (S-DAC) could achieve costs of $100-$400/tonCO2 by 2050, while liquid solvent direct air capture (L-DAC) may reach $100-$220/tonCO2 in the same period. For the base assumptions investigated, S-DAC reaches the 45Q U.S. tax credit threshold in 2041 using a single-factor improvement rate analysis and in 2040 using component-based. L-DAC reaches the threshold in 2034 for single-factor and in 2037 for component-based improvement rates. Neither method reaches the threshold using a single-factor or component-based learning rate analysis under base assumptions.&#13;
&#13;
The analog analysis emphasizes the importance of a variety of direct air capture stakeholders in accelerating the technology’s scale-up and cost reductions. Policymakers can develop standards for measurement, reporting, and verification of carbon dioxide removal. The private sector can set clear requirements for carbon removal purchases focusing on proven, durable, measurable methods with clear paths for cost reductions. Direct air capture providers can focus on early design choices that enable cost reductions and work to build economies of scale in manufacturing. The findings indicate that the technology may reach cost-competitive thresholds by mid-century and that stakeholders across the direct air capture ecosystem have opportunities to accelerate this transition.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Self-Supervised Learning through Transformations in Higher Activation Space</title>
<link href="https://hdl.handle.net/1721.1/152728" rel="alternate"/>
<author>
<name>Gabrielsson, Rickard Brüel</name>
</author>
<id>https://hdl.handle.net/1721.1/152728</id>
<updated>2023-11-03T03:50:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing Self-Supervised Learning through Transformations in Higher Activation Space
Gabrielsson, Rickard Brüel
We introduce Deep Augmentation, an approach to data augmentation using dropout to&#13;
dynamically transform a targeted layer within a neural network, with the option to use&#13;
the stop-gradient operation, offering significant improvements in model performance and&#13;
generalization. We demonstrate the efficacy of Deep Augmentation through extensive&#13;
experiments on contrastive learning tasks in computer vision and NLP domains, where we&#13;
observe substantial performance gains with ResNets and Transformers as the underlying&#13;
models. Our experimentation reveals that targeting deeper layers with Deep Augmentation&#13;
outperforms augmenting the input data, and the simple network- and data-agnostic nature of&#13;
this approach enables its seamless integration into computer vision and NLP pipelines.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anomaly Detection in Collider Physics via Factorized Observables</title>
<link href="https://hdl.handle.net/1721.1/152725" rel="alternate"/>
<author>
<name>Wynne, Raymond</name>
</author>
<id>https://hdl.handle.net/1721.1/152725</id>
<updated>2023-11-03T03:16:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Anomaly Detection in Collider Physics via Factorized Observables
Wynne, Raymond
To maximize the discovery potential of high-energy colliders, experimental searches should be sensitive to unforeseen new physics scenarios. This goal has motivated the use of machine learning for unsupervised anomaly detection. In this paper, we introduce a new anomaly detection strategy called FORCE: factorized observables for regressing conditional expectations. Our approach is based on the inductive bias of factorization, which is the idea that the physics governing different energy scales can be treated as approximately independent. Assuming factorization holds separately for signal and background processes, the appearance of non-trivial correlations between low- and high-energy observables is a robust indicator of new physics. Under the most restrictive form of factorization, a machine-learned model trained to identify such correlations will in fact converge to the optimal new physics classifier. We test FORCE on a benchmark anomaly detection task for the Large Hadron Collider involving collimated sprays of particles called jets. By teasing out correlations between the kinematics and substructure of jets, FORCE can reliably extract sub-percent signal fractions. This strategy for uncovering new physics adds to the growing toolbox of anomaly detection methods for collider physics with a complementary set of assumptions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for Cost Methodology Applied to High Temperature Gas-Cooled Reactors</title>
<link href="https://hdl.handle.net/1721.1/152723" rel="alternate"/>
<author>
<name>Venneri, Lorenzo</name>
</author>
<id>https://hdl.handle.net/1721.1/152723</id>
<updated>2023-11-03T03:46:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design for Cost Methodology Applied to High Temperature Gas-Cooled Reactors
Venneri, Lorenzo
SpaceX’s Falcon 9 is like a Toyota Corolla – an order of magnitude cheaper than competitors’ “high performance” rocket systems, the Ferraris, but achieving the same basic transport requirements with greater reliability and safety. Before Falcon, space launch was a Ferrari-like industry, with handmade, highly specialized, extremely expensive vehicles targeting government customers and fully complicit in the inefficiencies of government contracting. Similarly, the nuclear industry produces and still designs Ferrari-like fission reactors, with high performance metrics in terms of power density and unit power, at a megaproject scale, but with high system and operational complexity, extreme development cost, numerous part counts, and very low production and deployment rates that still require human-machine interfaces to meet societal safety objectives. The demand for nuclear Ferraris in the U.S., particularly within non-traditional energy utilities, is very low, as few competent utilities want unique reactors with such high capital costs, running at such high power that low-probability accidents can have offsite consequences. Where is the nuclear Corolla?&#13;
&#13;
In the pursuit of energy systems that are cost effective and widely deployable, this thesis specifies a nuclear reactor architecture called the Class A HTGR (CA-HTGR), where Class A refers to the passive safety class during decay heat cooling. The architecture is used in a coupled design and cost approach to search for Levelized Cost of Electricity (LCOE) minimizing designs. The feasibility and utility of this design for cost (DFC) methodology, also termed economics by design, is shown through assessment of advanced manufacturing opportunities and LCOE minimizing designs.&#13;
&#13;
Section 1 introduces the history and status quo of fission energy, providing a perspective on the stalled industry and possible paths forward, motivated by the rapid expansion and success of the space launch industry. A comparison is made between nuclear and natural gas, suggesting possible cost reductions in a rebooted nuclear industry. Because of the unusual, black swan risk associated with nuclear, the starting point for massive cost reductions and widescale deployment is a new safety paradigm that reduces risk through consequence reduction. Through a technology description and down selection in Section 2, the CA-HTGR is shown to be the most effective architecture available for reducing consequence and mitigating hazards pertinent to nuclear fission reactors.&#13;
&#13;
Nuclear reactor design has historically been a painful, one-off process with limited opportunity for optimization, iteration, and design exploration. Connections between design parameters and value functions like LCOE are often unclear or missing altogether. The wide-ranging disciplines, the timelines and development costs involved, and the barriers to change combine to form a complex design process that often leads to siloed subsystem teams, leaves little room for optimization, iteration, or integrated design, and favors design by regulation, tradition, and sunk cost.&#13;
&#13;
As an alternative to the traditional nuclear design approach, Section 2 introduces DFC methods made up of design, cost, and search codes. Instead of one-off labor-intensive estimates, DFC aims to automate estimates over a wide range of the design space with the end goal of LCOE minimization. Section 3 presents the design code and describes the models and assumptions used to specify an HTGR concept design, including models for core energy content, power rating, reactor vessel geometry, and balance of plant. Section 4 presents the cost code which includes estimates for CAPEX, OPEX, and project LCOE. Section 5 describes model uncertainty and design rankings, discussing the utility of each and possible methods for their estimation.&#13;
&#13;
Advanced manufacturing (AM) and its potential use cases for nuclear fission are introduced in Section 6. DFC methods are used to evaluate the cost effects on an HTGR baseline. Rather than attempt detailed and high uncertainty cost estimation of advanced manufacturing methods, ranges of costs and performance factors were reported together with dependent LCOE changes. The results suggest various opportunities for AM and the utility of coupled design and cost estimation for evaluating the potential impacts of AM opportunities.&#13;
&#13;
Finally, Section 7 presents the use of DFC methods to examine the design and cost space. A wide range of cost outcomes were found through random sampling of the design space. Genetic algorithms were used to search the design space for LCOE minimizing designs, establishing the feasibility of DFC methods for HTGRs.&#13;
&#13;
The DFC methods developed and utilized for this thesis can be used to improve the delivery of cost competitive nuclear fission reactors for planet-wide deployment. DFC methods provide a system-focused approach that considers design interdependencies, allowing for optimization. The main shortcomings of the reported DFC methods include low fidelity design and cost approximations that may not match the reality of an HTGR. Optimizing on a simplified model can be useful because financial commitments are often made using similar or even simpler models. DFC methods could be used to quickly produce cost minimizing designs for a given population of end users and projects. In the future, nuclear projects can be accelerated by using DFC methods in conjunction with nuclear analysis codes, templating codes, and language models to automatically produce design and licensing documentation.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for the Margins: Mapping Conceptual Implications of Profound Intellectual Disability and Informality through Slovo Park, Johannesburg</title>
<link href="https://hdl.handle.net/1721.1/152721" rel="alternate"/>
<author>
<name>Ansari, Natasha</name>
</author>
<id>https://hdl.handle.net/1721.1/152721</id>
<updated>2023-11-03T03:37:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Planning for the Margins: Mapping Conceptual Implications of Profound Intellectual Disability and Informality through Slovo Park, Johannesburg
Ansari, Natasha
Disability remains one of the most marginalized considerations within urban planning and social justice research and practice. Disability affords planning the critical conceptual lens of interdependence, moving beyond ideas of individualized independence. Interdependence is an especially salient provocation for how we live in today’s world, shaped by COVID-19 and global crises brought on by climate change, meaningful work and livable wages, generative AI, and future pandemics. This thesis focuses on the challenges of urban planning for and with people with profound intellectual disabilities in informal and impoverished Global South contexts as an acute, but nonetheless pervasive, example of the need for, and precarity of, interdependence. Drawing primarily from fieldwork in the informal settlement of Slovo Park, Johannesburg, this thesis aims to calibrate what it means to “plan for the margins” in situations of compounded vulnerability and resource scarcity. In doing so, it documents vitally important kin and care networks existentially challenged by neoliberal market forces. It argues that profound disability ought to be a central planning concern, informing how we transform social relations and build infrastructures of care that center deep vulnerability.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Collective Speculators’ Playbook: Subversive Market Forms for Common Wealth</title>
<link href="https://hdl.handle.net/1721.1/152720" rel="alternate"/>
<author>
<name>Ofer, Tamar M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152720</id>
<updated>2023-11-03T03:44:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Collective Speculators’ Playbook: Subversive Market Forms for Common Wealth
Ofer, Tamar M.
In 2016, a remarkable collaboration between the State of New York and the city’s largest landlord redirected a staggering $1.6 billion from neighborhoods hosting the highest unemployment rates along the Harlem River to Midtown's Hudson Yards, marking the costliest privatized development in American history. Funded through a dizzying array of financial tactics, regulatory loopholes, and spatial manipulations, the axis stretching between Hudson Yards and Harlem River Yards exposes a dominant state-sponsored speculative urban development model in New York - one often operating above the designer’s head.&#13;
&#13;
This thesis draws the practices, policies, and protocols of the speculative urban development model across two urban projects. It identifies two key actors: the developer, who speculates on financial returns, and the designer, who speculates on spatial forms. The gap between them can then be termed the ‘speculative spectrum’. In it, the developer, adept at capturing value but often lacking the socio-spatial literacy to create it in situ, contrasts with the designer, proficient in creating value but lacking the mechanisms to capture it. This dichotomy draws the required leverage points to intervene in an existing system design that underpins prevalent forms of urban speculation, paving the way for a local cross-sector partnership in the Mott Haven-Port Morris neighborhood of Harlem River Yards.&#13;
&#13;
In collaboration with a civic coalition, uncovered and invented speculations are mobilized and reconfigured to assemble a collective ‘playbook’ for co-disciplinary forms of urban speculation. It compiles a repository of ‘plays’ into a design portfolio of spatial, economic, and political plays that re-imagine Harlem River Yards as a collective waterfront rather than an industrial wasteland. In doing so, it posits that design must not only actively reengage with current forms of real estate speculation but must strategically reposition its practices as a design project in and of itself.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drawing as Programming Language</title>
<link href="https://hdl.handle.net/1721.1/152719" rel="alternate"/>
<author>
<name>Huang, Lingdong</name>
</author>
<id>https://hdl.handle.net/1721.1/152719</id>
<updated>2023-11-03T03:59:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Drawing as Programming Language
Huang, Lingdong
Drawing has always been a powerful tool for humans to communicate information and express themselves. While numerous programming languages have previously been designed to break from the traditional text-based linear approach, the idea of using drawings as a means of computation still presents many exciting and novel opportunities. This thesis explores some of these possibilities by presenting a series of novel programming languages: λ-2D, a two-dimensional grid-based lambda calculus derivative that fuses diagrams with free-hand drawings; Nor-wires, a minimalistic, symbol-less language based on NOR gates, where semantics are inferred from the topology of the drawn lines alone; The Languages of Primitives, where the spatial relationships and inherent properties of fundamental shapes come into play to build programming constructs; as well as other experiments on form and animation that relate drawings to computation. The goal of these experiments is to create unusual, playful, and interesting interactions, to blur the boundaries between art and code, to open up possibilities and to inspire future programming language design, as well as to make computation more accessible. Could it be possible that writing a computer program would be as simple as making a drawing?
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microfabrication and characterization of a new box-shaped high frequency (7.5 MHz) low aperture 2D phased ultrasound transducer array</title>
<link href="https://hdl.handle.net/1721.1/152718" rel="alternate"/>
<author>
<name>Shuvo, Ikra Iftekhar</name>
</author>
<id>https://hdl.handle.net/1721.1/152718</id>
<updated>2023-11-03T03:19:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Microfabrication and characterization of a new box-shaped high frequency (7.5 MHz) low aperture 2D phased ultrasound transducer array
Shuvo, Ikra Iftekhar
Mini electronics in wearables inspired this study. This work thus presents a new box-shaped high frequency (7.5 MHz) low aperture 2D phased sparse array ultrasonic transducer that was developed, built, and characterized. The capacity of matrix or 2D phased arrays to generate ultrasound beams without requiring any form of motion or mechanical steering holds potential value in the biomedical sonographic domain. However, these systems need a large number of piezoelectric elements to sample the active aperture with an inter-element spacing smaller than λ/2, necessitating a sizable transducer. To the best of our knowledge, this is the first endeavor to design and microfabricate a 7.5 MHz transducer array, based on commercial PZT-5H polycrystalline materials, as tiny as 70x70 µm per transducer with a pitch of 102 µm to maintain an inter-element separation below 50% of the wavelength. The study employs a square box-shaped structure that houses the transmitters and receivers perpendicular to each other, resulting in a reduced aperture and a more compact design than comparable commercial designs. This transducer not only provides a satisfactory longitudinal k33 coefficient (0.45-0.5), acoustic pressure (2.1 kPa), sound pressure level (180 dB), low Q-factor (1.19), thermal stability, and high bandwidth (5.6 MHz, 73.41%), while minimizing cross-talk (&lt;-50 dB), but also reduces the overall transducer area due to its unique sparse array configuration, resulting in a diminutive size (3.3 mm x 3.3 mm).
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Current State of the Commercial Real Estate Office Sector</title>
<link href="https://hdl.handle.net/1721.1/152716" rel="alternate"/>
<author>
<name>Dessalines, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/152716</id>
<updated>2023-11-03T03:05:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Current State of the Commercial Real Estate Office Sector
Dessalines, Nick
In January 2023, approximately 50% of Manhattan office workers were in the office on an average weekday; roughly 10% of the local workforce was fully remote; and only 9% of employees were in the office five days a week. These city-level trends are also reflected at the submarket, market, and national levels. As of 2023, 13% of full-time U.S. employees work entirely from home, while 28% work a hybrid model. The Covid-19 pandemic’s impact on commercial real estate, particularly the office sector, is still being felt three years later. Due to the 2020 outbreak of the coronavirus, regulators worldwide implemented lockdowns, forcing employees to work remotely indefinitely. And to the surprise of many, this trend has continued unabated.&#13;
&#13;
The adoption of the work-from-home model by a myriad of office tenants caused a significant decline in office space demand. According to the commercial real estate services firm CBRE, the U.S. national office market reported 16.5 million sq. ft. of negative net absorption in Q1 2023 (the weakest quarter for office demand in two years), bringing overall vacancy up to 17.8%. The concept of remote working has long been criticized and rejected. The prevailing belief was that employees are simply not as motivated nor productive working from home as opposed to the office. Additionally, critics further argue that it is impossible to build and maintain a company office culture if your employees are not physically present in the office. Simply put: the remote work model was widely regarded and portrayed as a productivity and culture “killer”. The temporary lockdowns in 2020, however, presented a unique (and forced) opportunity for those theories to be tested. Three years later, it’s safe to say that the paradigm of traditional workspaces has undergone a seismic shift thanks to the Covid-19 pandemic. The remote-work model's benefits and limitations have largely come to light, prompting employers and employees to respond accordingly.&#13;
&#13;
With an increasing number of companies cutting down their real estate footprints, rising vacancy rates, and plummeting valuations, what exactly does the future hold for the office sector? How are investors, landlords, and tenants affected? These are some of the questions that I look to address throughout this paper, for which I’ve interviewed three highly regarded and respected industry experts.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Workforce Practices &amp; Organizational Performance in Nursing Homes: Implications for Resident Health and COVID-19 Containment</title>
<link href="https://hdl.handle.net/1721.1/152715" rel="alternate"/>
<author>
<name>Scott, K. MacKenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/152715</id>
<updated>2023-11-03T03:05:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Workforce Practices &amp; Organizational Performance in Nursing Homes: Implications for Resident Health and COVID-19 Containment
Scott, K. MacKenzie
One in three COVID-19 deaths in the United States occurred in a nursing home, raising questions about how nursing home facilities might improve organizational performance on resident health outcomes. Though researchers have linked workforce practices to organizational performance on patient health, it is less clear whether the predictors of organizational performance look different for pandemic infection, relative to other health conditions. To address this gap, this paper links workforce practices with both pre-pandemic resident health conditions and with COVID-19 outcomes. The analysis relies on multivariate and logistic regressions using two novel datasets that link multiple administrative sources before and during the pandemic. It evaluates how workforce practices such as pay, staff hours per resident, outsourcing, and overtime relate to resident health in both contexts. Whereas estimates show that workforce practices for Registered Nurses are the primary driver of resident health before the pandemic, outsourcing is more important to predicting COVID-19 infections and mortality. Specifically, outsourcing care work before the pandemic is associated with a one percentage point decrease in COVID-19 mortality during the crisis, conditional on at least one positive case in the facility. The findings call into question widely made extrapolations from pre-pandemic research on how workforce practices may help predict pandemic spread. By evaluating multiple workforce practices in one model, the findings inform nursing home management decisions in the interest of resident health.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tuning Into The Planet: Scientists are collecting and archiving soundscapes before they disappear</title>
<link href="https://hdl.handle.net/1721.1/152713" rel="alternate"/>
<author>
<name>Gamillo, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/152713</id>
<updated>2023-11-03T03:36:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tuning Into The Planet: Scientists are collecting and archiving soundscapes before they disappear
Gamillo, Elizabeth
Soundscapes of the world can reveal the status of an ecosystem to an ecologist, much like how a cardiologist can distinguish abnormal heart murmurs with an electrocardiogram. The effort to use the aural landscape to assess the recovery of fragile ecosystems and directly assist in those recoveries is becoming a movement within ecology. The audio recorder has become an increasingly powerful tool at a time when catastrophic heatwaves, wildfires, floods, and extreme weather increase in severity and occurrence. The soul-stirring calls of animals and the mechanical hum or the roar of cars and planes, all engulfed by the swift and rhythmic sounds of wind and water flow, generate a unique score that researchers collect to note the distinctive rhythms and patterns in the cacophony of these landscapes. &#13;
 &#13;
Altered soundscapes are often the first detectable changes in an ecosystem facing threats. By strapping recorders onto trees or tripods, scientists can also track how ecosystems change in response to human disruptions, like air traffic and logging, or track biodiversity and shifts brought on by climate change. Collecting and archiving the baseline data of sounds that can be visited and studied, much like preserved specimens in a natural history museum, is crucial before they disappear or change forever. Together, these scientists are creating a record for the future. It's the sound of this moment, frozen forever as audio files. And they hope that someone can use it to travel back to the world as it was in this instant—and help preserve the ecosystems they love.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Consumer of Humans</title>
<link href="https://hdl.handle.net/1721.1/152712" rel="alternate"/>
<author>
<name>Tsann, Abdullahi</name>
</author>
<id>https://hdl.handle.net/1721.1/152712</id>
<updated>2023-11-03T03:04:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Consumer of Humans
Tsann, Abdullahi
For decades, a dearth of scientific research and inadequate treatment and diagnostic tools have slowed progress in the fight to control tuberculosis globally. Scientists have developed important drugs, such as isoniazid, rifampin, and pyrazinamide, against the disease. But these drugs must be taken for several months, are sometimes ineffective, and can cause debilitating side effects. What’s more, if people don’t finish their treatments, it can lead to multidrug-resistant tuberculosis (MDR-TB), a form of the disease that is resistant to two of the four common drugs against TB, or, even more worryingly, extensively drug-resistant tuberculosis (XDR-TB), a form of the disease against which broader anti-TB drugs are powerless. Now, advances in immunology, chemistry, and biomolecular engineering are helping scientists to gain better insight into the complex cellular processes of Mycobacterium tuberculosis and the disease it causes. This could pave the way for the development of innovative diagnostics, vaccines, and new treatments for these tuberculosis superbugs. This thesis examines why tuberculosis still kills millions of people to this day and why scientists’ best efforts alone can’t win the war.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Middle Power Military Intervention: Australia’s Use of Force in Maritime Southeast Asia and the South Pacific</title>
<link href="https://hdl.handle.net/1721.1/152711" rel="alternate"/>
<author>
<name>Ackert, Nicholas Wolf</name>
</author>
<id>https://hdl.handle.net/1721.1/152711</id>
<updated>2023-11-03T04:08:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Explaining Middle Power Military Intervention: Australia’s Use of Force in Maritime Southeast Asia and the South Pacific
Ackert, Nicholas Wolf
Why do middle powers use military force to intervene in external conflicts despite the costs and risks? While there is no shortage of literature on the causes of military intervention, most theories were derived from – and seek to explain – great power behavior. Given differences between great power and middle power material capabilities and interests, there are good reasons to anticipate that the logics of great power and middle power military intervention may not be the same. &#13;
&#13;
Understanding why middle powers use force to intervene in some external conflicts but not others is important. While the majority of military interventions are led by great powers, middle powers like Australia and South Africa have led a number of large and costly operations. Moreover, throughout the past decade, middle powers have exercised increasingly assertive and consequential efforts to either challenge or strengthen the normative, economic, and alliance-related elements of the current international order. Therefore, it behooves academics and policymakers to evaluate how middle powers frame their interests and to better understand the conditions under which they implement costly and risky policies – including the use of force – to pursue them. &#13;
&#13;
This thesis tests four competing theories of military intervention across a complete universe of post-Cold War cases associated with Australia, a state that most closely resembles the middle power ideal. Those theories are: (1) Military Intervention as Threat Response, (2) Military Intervention as a Socialized Behavior, (3) Military Intervention as Greed, and (4) Military Intervention as the Foreign Imposition of Domestic Institutions. I conclude that the theory of Military Intervention as a Socialized Behavior – which emphasizes the role of ideational incentives and defensive intentions – explains the greatest amount of variation in Australia’s behavior.&#13;
&#13;
I find that Australia intervened primarily to protect its self-image and status as a guarantor of regional security, which it had been socialized into adopting through over sixty years of security cooperation with the United States and its Pacific Island neighbors. Notably, Canberra intervened in East Timor, Papua New Guinea, and the Solomon Islands, where the outbreaks of violence were perceived as a direct consequence of Australia’s colonial and neocolonial behaviors. However, it did not intervene in Fiji, where there was no expectation that Canberra would act because the country was not labeled as a failed state and Australia had not been blamed for the unrest there. Other goals, such as deterring foreign interference and preventing the externalities of adjacent state collapse, had less influence than presumed. &#13;
&#13;
In the bigger picture, this thesis offers several contributions to the empirical and theoretical literature on military intervention and foreign policy. First, I develop an original framework for categorizing existing explanations about military intervention which facilitates easier – and replicable – comparison and testing of extant theories. Second, I demonstrate that, based on Australia's experiences, middle powers use force for reasons that differ from their great power counterparts. Thus, this project is a rejoinder to those who claim that middle powers are not differentiable from other non-great power states. Finally, I illustrate the inherent fragility of the middle power identity and reveal how easily it can be threatened by external shocks. &#13;
&#13;
Three implications, which are based largely on the Australian experience, follow. First, we should question the mainstream argument that military intervention cannot be explained by ideas and images. As states weigh the costs and benefits of intervening, potential gains and losses can refer to intersubjectively understood social facts – such as self-image, status, and credibility – as much as wealth and physical safety. Second, middle powers may be more likely to take costlier and riskier actions when their self-image and status are at stake. Finally, middle powers may find themselves caught in self-defeating cycles of intervention. The more a middle power intervenes to protect its self-image and status as a purveyor of regional security, the more that identity will solidify in its own mind and in the minds of other states, encouraging future interventions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Natural Language Processing Models for Depression Detection in Chatbot Dialogues</title>
<link href="https://hdl.handle.net/1721.1/152710" rel="alternate"/>
<author>
<name>Belser, Christian Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152710</id>
<updated>2023-11-03T03:31:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Comparison of Natural Language Processing Models for Depression Detection in Chatbot Dialogues
Belser, Christian Alexander
Depression is an important challenge in the world today and a large source of disability. In the US, a recent study showed that approximately 36 million adults had at least one major depressive episode, including some with severe impairment [1]. However, approximately two-thirds of all depression cases are never diagnosed [2], largely due to a shortage of trained mental health professionals as well as a lingering cultural stigma that often prevents afflicted people from seeking professional care. In order to address this need, there is an emerging interest in using computer algorithms to automatically screen for depression, which offers the potential to be widely deployed to the public via clinical websites and mobile apps. Within this field, Dr. Fletcher’s group at MIT develops mobile platforms that are used to support mental health wellness and psychotherapy, including tools to screen for mental health disorders and refer people to treatment. As part of this work, this thesis compares three distinct Natural Language Processing (NLP) models used to screen for depression. I have revised and updated three state-of-the-art models: (1) Bi-directional gated recurrent unit (BGRU) models, (2) Hierarchical attention networks (HAN), and (3) Long-sequence Transformer models to accurately screen for depression in individuals. The models were all trained and tested on a common standard clinical dataset (DAIC-WOZ) that is derived from clinical patient interviews. After optimization, and exploring several variants of each type of model, the following results were found: BGRU (accuracy=0.71, precision=0.65, recall=0.63, F1-score=0.64, MCC=0.20); HAN (accuracy=0.77, precision=0.76, recall=0.77, F1-score=0.76, MCC=0.46); Transformer (accuracy=0.77, precision=0.76, recall=0.77, F1-score=0.76, MCC=0.43). In addition to model performance, I also compare the different categories of models based on computational resources and input token size.
I also discuss the future evolution of these models and provide recommendations for specific use cases.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the quality and breadth of carbon reduction project ideation and roadmapping for corporations</title>
<link href="https://hdl.handle.net/1721.1/152709" rel="alternate"/>
<author>
<name>Tainter, Stephen M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152709</id>
<updated>2023-11-03T03:53:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing the quality and breadth of carbon reduction project ideation and roadmapping for corporations
Tainter, Stephen M.
This thesis seeks to demonstrate the benefits of a systems approach to the ideation and complexity analysis of carbon reduction projects (CRP) for heavy-industry corporations (HICs), which is critical for HICs to adapt to a lower-carbon world. The thesis uses a systems approach to improve the breadth of ideation and robustness of complexity analysis such that sustainability teams (STs) and technical framing teams (TFTs) have a more robust queue of CRPs and options for decarbonization pathways. A steamflood operation is used as an example to demonstrate the application of system architecting tools and is decomposed into its formal and functional components, which are then recombined to generate an operand-process diagram (OPD). System boundaries are drawn within and around the OPD to ideate unique CRPs to reduce the emissions intensity for a steamflood operation, ranging from tactical solutions to alternative recovery mechanisms. The range of solution-neutral concepts (SNCs) improves an HIC's ability to brainstorm more "disruptive" architectures of their operations that will reduce the emissions intensity of their operations. The CRPs are then translated into a design structure matrix (DSM) and inputted into a change-propagation model to forecast how complexity differences enhance or hinder a system’s ability to adapt to future technological changes. The tools demonstrated in this thesis equip STs and TFTs with insights and comparative analysis to develop near-term solutions and redesign operations for the future. Overall, this research contributes to enhancing CRP ideation and operability of complex industrial systems in the future by applying systems architecting tools and principles.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Usability Study of Nomon: A Flexible Interface for Single-Switch Users</title>
<link href="https://hdl.handle.net/1721.1/152707" rel="alternate"/>
<author>
<name>Bonaker, Nicholas Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/152707</id>
<updated>2023-11-03T03:37:36Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Usability Study of Nomon: A Flexible Interface for Single-Switch Users
Bonaker, Nicholas Ryan
Many individuals with severe motor impairments communicate via a single switch—which might be activated by a blink, facial movement, or puff of air. These switches are commonly used as input to scanning systems that allow selection from a 2D grid of options. Nomon is an alternative interface that provides a more flexible layout, not confined to a grid. Previous work suggests that, even when options appear in a grid, Nomon may be faster and easier to use than scanning systems. However, previous work primarily tested Nomon with non–motor-impaired individuals, and evaluation with potential end-users was limited to a single motor-impaired participant. We present a usability study with seven participants with motor impairments, comparing their performance with Nomon against a row-column scanning system. Most participants were faster with Nomon in a picture selection task, while entry rates varied more in a text-entry task. However, we found participants had to click more times per selection using Nomon, motivating future research into mitigating this increased click load. All but one participant preferred using Nomon; most reported it felt faster and had better predictive text.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identification of Atomic Propositions in English Instructions for Flexible Translation to Robot Planning Representations</title>
<link href="https://hdl.handle.net/1721.1/152706" rel="alternate"/>
<author>
<name>Gandhi, Rujul</name>
</author>
<id>https://hdl.handle.net/1721.1/152706</id>
<updated>2023-11-03T03:00:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Identification of Atomic Propositions in English Instructions for Flexible Translation to Robot Planning Representations
Gandhi, Rujul
Creating human-interactive problem-solving robots involves translating natural-language instructions into formal representations. This formal representation should contain all the verifiable constituent units (ideally atomic propositions) which are present in the natural language instruction. However, the format and vocabulary of atomic propositions may vary substantially across formal representations and their application domains. Hence, extracting the correct atomic propositions from natural language has been a bottleneck in converting language to formal representations. In this thesis, we propose and implement a two-step method for identifying atomic propositions in a representation-agnostic way. Given an instruction in natural English, we first identify the spans of that instruction that may potentially be atomic propositions, and then carry out a finer-grained translation into the chosen formalization language. In evaluating this approach, we demonstrate the ability of the span identification method to generalize to two common domains of robot planning tasks, navigation and manipulation, as well as three additional domains of household robot tasks. Finally, we discuss, implement, and evaluate methods to incorporate span identification into the process of parsing English into three formal representations: Temporal Logic, PDDL, and a custom style of atomic propositions. Using pretrained language models and naturalistic parallel data, we build a system that enables flexible formalization of natural language across chosen intermediate representations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis, Design, and Evaluation of Hierarchical Switched-Capacitor Cell Voltage Balancers</title>
<link href="https://hdl.handle.net/1721.1/152705" rel="alternate"/>
<author>
<name>Negm, Ahmad H.</name>
</author>
<id>https://hdl.handle.net/1721.1/152705</id>
<updated>2023-11-03T03:57:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis, Design, and Evaluation of Hierarchical Switched-Capacitor Cell Voltage Balancers
Negm, Ahmad H.
With the increased utilization of and reliance on battery-powered and -storage systems, battery cell voltage balancers are crucial in extracting additional capacity and lifespan from battery packs. This thesis explores the analysis, design, and evaluation of a hierarchical switched-capacitor cell voltage balancer topology that employs canonical charge pump inverters as fundamental building blocks. The charge pump inverter implementation was first designed using a fully N-channel switch configuration and a mixed N- and P-channel switch configuration. This was tested at the 2S, 4S, 8S, and 32S battery configuration voltage levels for combined cell stack voltages up to 100V. Subsequently, a complete 4S multi-level implementation of the hierarchical topology was designed around distinct N- and P-channel switch configurations at the 2S and 4S levels. The control circuitry ran off a single external dual-supply by implementing discrete charge pump circuits as floating supplies for the gate drivers. Testing on emulated cells with 2.5Ah capacity and 0mΩ internal resistance, starting from a 0.4V imbalance, yielded a typical balance time under 20 min. Although the topology scales moderately poorly with respect to component count and stress, it excelled at edge-to-edge cell balancing. Overall, the work in this thesis demonstrates the proposed hierarchical balancer topology at cell balance currents from tens of amps up to a 33.56A peak, output power from tens of watts up to 1.67kW, and typically &gt;90% efficiency.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Throughput in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs</title>
<link href="https://hdl.handle.net/1721.1/152702" rel="alternate"/>
<author>
<name>Gowra, Vineeth</name>
</author>
<id>https://hdl.handle.net/1721.1/152702</id>
<updated>2023-11-03T04:07:02Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Optimization of Throughput in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs
Gowra, Vineeth
Sheet metal fabrication has become a fundamental process in modern engineering due to its versatility and is used across a wide range of industries. Nesting a given set of sheet metal blanks onto raw material sheets is a major cost driver, as it determines how much of the metal is usable; the rest of the sheet is discarded as scrap. Nesting algorithms are very effective at identifying the most efficient layout of a given set of parts to maximize sheet utilization. Hence, material utilization of the sheet is mainly defined by the number of parts being nested and their geometries. On one hand, nesting algorithms would prefer having a large number of grouped parts, which allows them to make more efficient sheet metal nests due to more possible combinations of parts on a given sheet. On the other hand, the downstream sorting process, which sends the parts to their respective further processing stations, would prefer having fewer grouped parts, as the parts get nested randomly, which increases the time spent on this non-value-add activity. Therefore, an effective nesting strategy between the two extremes is necessary to balance sheet utilization against the intensive sorting requirements, making the process cost effective while meeting the required throughput. In this thesis, a sheet metal nesting strategy is identified for a manufacturing operation with a wide variety of products and plant locations across the globe. Cost and throughput models are produced which inform the selection of a globally optimized nesting strategy. Regional differences in cost drivers such as varying labor rates and raw material costs are considered, and an optimized nesting strategy is validated for deployment across global plant locations. This work provides a detailed approach to optimizing sheet utilization in sheet metal manufacturing through selection of an optimized nesting strategy.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organizational Data Journey</title>
<link href="https://hdl.handle.net/1721.1/152701" rel="alternate"/>
<author>
<name>Papenfuss, Tanner</name>
</author>
<id>https://hdl.handle.net/1721.1/152701</id>
<updated>2023-11-03T04:03:21Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Organizational Data Journey
Papenfuss, Tanner
This thesis presents a comprehensive analysis of an organization's data journey, exploring its various stages and the critical aspects of each. It serves as a tool for organizations embarking on their own data journey, helping them understand where they are and how to transition to the next stage. It covers tools such as Excel and the transition to cloud-based solutions, and walks through the idea of data-centricity and how organizations can evolve into data-driven enterprises. Each section identifies key actions and capabilities at each stage, guiding organizations in preparing themselves before transitioning to the next phase.&#13;
&#13;
The thesis culminates in a data journey workshop to jump-start organizations' transformation. A detailed plan for conducting the workshop is presented, including securing leadership commitment and outlining the workshop agenda. Additionally, a tactical 30-60-90 day plan is proposed, providing participants and leadership with actionable steps to drive data initiatives effectively. This plan acts as a compass, guiding organizations toward their data-driven objectives while fostering a culture of continuous improvement and innovation.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Automated Macro-Inspection and&#13;
Improved Defect Identification in Semiconductor Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/152700" rel="alternate"/>
<author>
<name>Cheung, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/152700</id>
<updated>2023-11-03T03:33:34Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Automated Macro-Inspection and&#13;
Improved Defect Identification in Semiconductor Manufacturing
Cheung, Sophia
This thesis proposes four methods to improve macro-inspection capability of defects on wafers at a semiconductor wafer fab. First, an investigation into the performance of current inspection tools is done, revealing results that are neither reliable nor reproducible. Tool maintenance procedures and specification adjustments are recommended. Second, a software upgrade to the current inspection software is developed, including enhanced features that address pain points of reviewing wafer images. The image processing and loading time is reduced by over 50%. Third, three binary classification machine learning models are trained to isolate spin-on-glass defects, edge type defects, and center defects. Each of the models exhibits an area under curve (AUC) of over 0.90 on out-of-distribution test sets. Finally, a proof-of-concept for an in-line inspection system is designed and tested on the fab floor. New images from this system appear to be of sufficient quality for inspection. The results of each part of this study can be used to inform investment decisions required to move towards a more automated process.&#13;
&#13;
Relevant to the machine learning community are the methods developed to address class imbalance in neural network training. Methods for preparing data to be trained in a meaningful way, such as splitting, transforming, and creating synthetic data, are proposed. The effect of generating data in such a fashion is shown to be positive, increasing the AUC of the specified model by up to 65%.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Sea Ice Modeling with the Dynamically Orthogonal Equations</title>
<link href="https://hdl.handle.net/1721.1/152699" rel="alternate"/>
<author>
<name>Suresh Babu, Anantha Narayanan</name>
</author>
<id>https://hdl.handle.net/1721.1/152699</id>
<updated>2023-11-03T03:56:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Stochastic Sea Ice Modeling with the Dynamically Orthogonal Equations
Suresh Babu, Anantha Narayanan
Accurate numerical models are essential to predict the complex evolution of rapidly changing sea ice conditions and study impacts on climate and navigation. However, sea ice models contain uncertainties associated with initial conditions and forcing (wind, ocean), as well as with parameter values, functional forms of the constitutive relations, and state variables themselves, all of which limit predictive capabilities. Due to the multiple types and scales of sea ice and the complex nonlinear mechanics and high dimensionality of differential equations, efficient ocean and sea ice probabilistic modeling, Bayesian inversion, and machine learning are challenging. In this work, we implement a deterministic 2D viscoplastic sea ice solver and derive and implement new sea ice probabilistic models based on the dynamically orthogonal (DO) equations.&#13;
&#13;
We focus on the stochastic two-dimensional sea ice momentum equations with nonlinear viscoplastic constitutive law. We first implement and verify a deterministic 2D viscoplastic sea ice solver. Next, we derive the new stochastic Sea Ice Dynamically Orthogonal equations and develop numerical schemes for their solution. These equations and schemes preserve nonlinearities in the underlying spatiotemporal dynamics and evolve the non-Gaussianity of the statistics. We evaluate and illustrate the new stochastic sea ice modeling and schemes using idealized stochastic test cases. We employ two stochastic test cases with different types of sea ice: ice sheets and frozen ice cover with uncertain initial velocities. We showcase the ability to evolve non-Gaussian statistics and capture complex nonlinear dynamics efficiently. We study the convergence to the physical discretization, and stochastic convergence to the stochastic subspace size and coefficient samples. Finally, we assess and show significant computational and memory efficiency compared to the direct Monte Carlo method.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methodology for Pyrolysis-induced Thermal Runaway&#13;
Analysis in Li-ion Batteries</title>
<link href="https://hdl.handle.net/1721.1/152698" rel="alternate"/>
<author>
<name>Ramadan, Mahmoud M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152698</id>
<updated>2023-11-03T03:51:23Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Methodology for Pyrolysis-induced Thermal Runaway&#13;
Analysis in Li-ion Batteries
Ramadan, Mahmoud M.
As the adoption of lithium-ion batteries (LIBs) grows due to the demand for high-energy-density storage solutions, ensuring their safety becomes paramount. Thermogravimetric Analysis (TGA) and Differential Scanning Calorimetry (DSC), which have traditionally been tools in polymer thermal analysis since the 1950s, have seen increasing use in LIB thermal research in recent decades. However, applying these techniques to LIBs poses challenges due to the multifaceted composition of LIBs and their sensitivity to environmental conditions. This research aims to overcome the inherent limitations of TGA and DSC when applied to LIBs by introducing a robust, standardized experimental protocol to ensure accuracy and consistency. Employing TGA and DSC concurrently and using sealed crucibles with pinholes, we present a comprehensive thermal profile of next-generation LiFSI-based electrolytes, revealing behaviors that differ based on solvent choice. Our analysis discerned distinct thermal properties between LiFSI-carbonate and LiFSI-ether electrolytes. Specifically, carbonate-based electrolytes displayed a pronounced exothermic peak at 350°C, indicative of significant decomposition reactions. In contrast, the LiFSI-ether electrolyte exhibited an exothermic reaction at 210°C, followed by an endothermic event near 300°C. Such variances in thermal behavior emphasize the profound influence of solvent selection on the thermal profiles of electrolyte solutions. A techno-economic assessment of sodium-ion batteries is also presented.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Triad Interactions among Surface Waves Propagating through an Ice Sheet</title>
<link href="https://hdl.handle.net/1721.1/152697" rel="alternate"/>
<author>
<name>Pierce, Max W.</name>
</author>
<id>https://hdl.handle.net/1721.1/152697</id>
<updated>2023-11-03T03:40:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Triad Interactions among Surface Waves Propagating through an Ice Sheet
Pierce, Max W.
We study nonlinear resonant wave-wave interactions which occur when ocean waves propagate into a thin floating ice sheet. Using multiple-scale perturbation analysis verified against regular perturbation for short distances past the ice edge, we obtain theoretical predictions of the wave amplitude evolution as a function of distance travelled past the ice edge for a semi-infinite ice sheet. We relate the amplitude evolution to ice bending strain, which is associated with ice breakup. We show that, due to sum-frequency interactions, the maximum strain in the ice sheet can be more than twice that predicted by linearized theory. We further demonstrate that difference-frequency interactions also can result in a moderate strain increase compared to the linear result despite transferring energy to longer wave components. This work has implications for understanding the occurrence of ice breakup and the resulting ice floe size distribution.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fugitive Spaces For Cultivating Creativity: A Framework For Value-Centered Learning Environments</title>
<link href="https://hdl.handle.net/1721.1/152690" rel="alternate"/>
<author>
<name>Sadler, Cecilé</name>
</author>
<id>https://hdl.handle.net/1721.1/152690</id>
<updated>2023-11-03T03:42:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Fugitive Spaces For Cultivating Creativity: A Framework For Value-Centered Learning Environments
Sadler, Cecilé
In today's world, there is a growing recognition of the importance of creativity and curiosity as essential 21st-century skills. Efforts to expand access to technology-mediated creative learning experiences have become widespread. However, oppressive structures in education systems hinder and stifle the participation of all young people in these opportunities for self-expression and agency, particularly Black youth who are negatively impacted by the pervasiveness of anti-Blackness.&#13;
&#13;
This thesis proposes a framework for value-centered learning environments. Through the application of BlackCrit’s theory of anti-Blackness as a lens for analyzing experiences, the holistic design of these learning environments aims to cultivate a sense of fugitive space –purposefully constructed out-of-school spaces for practicing radical imagination. These spaces offer transformative creative learning experiences with computing, challenging anti-Blackness in education and embracing liberatory fantasy. The learning environment encompasses the tools and materials, physical space, pedagogy, and community culture. The core values – accountability, authenticity, awareness, and adaptability – are put into practice by: doing the work, showing up, checking in, and embodying change.&#13;
&#13;
The exploration is conducted in the context of a local community-based grassroots organization, blackyard, that centers Black youth and their families by offering after-school programming. Through design-based approaches and critical inquiry, creative learning workshops, interviews, and immersion are employed to explore the values that underpin ways of being, knowing, and acting, fostering creative and playful learning experiences for Black youth. By centering Black youth and their communities, this thesis opens a dialogue, explores tensions, and encourages persistent dreaming about opportunities that center the humanity and dignity of the learner, celebrate curiosity and imagination, and challenge oppressive narratives in education about who is worthy and capable of the rights to creativity and play.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Peripheral: Re-Thinking the Organized Industrial Zone</title>
<link href="https://hdl.handle.net/1721.1/152688" rel="alternate"/>
<author>
<name>Sahin, Selin</name>
</author>
<id>https://hdl.handle.net/1721.1/152688</id>
<updated>2023-11-03T03:08:21Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Non-Peripheral: Re-Thinking the Organized Industrial Zone
Sahin, Selin
The Organized Industrial Zone (OIZ) in Turkey emerged in the 1960s as a primary model for industrial development. Combining base infrastructure, specific land use rules, financial incentives&#13;
such as tax exemptions, and partial self-governance, the OIZ is essentially a cluster of light to medium industries. It was meant to facilitate capital flows and later became seen as a tool to&#13;
drive urbanization. This thesis examines the development, consequences, and future prospects of OIZs. It traces their origins from a Checchi and Company report on industrial estates, commissioned through an agreement between the governments of Turkey and the United States, to their rapid proliferation in the 21st century.&#13;
&#13;
Today, there are over three hundred individual sites scattered across the country. Varying in size and complexity, these zones are born from policies and regulations that have barely changed&#13;
since the late 1960s. As the OIZ model stands today, both at the policy level and in practice, there is a multitude of issues related to their internal organization, urban planning, impacts on the environment, regional disparities, and social equity. In exploring the evolving relationships between OIZs and the urban texture, sped up by expanding boundaries and changing paradigms in the industry, I submit that the design of OIZs should not be peripheral in our thinking.&#13;
&#13;
Selecting a particular site that is exemplary of the spatial conditions of many OIZs, I propose design interventions to address current problems and speculate on the future of these zones. The components of the proposal factor in the city within an awareness of material ecologies. Through these proposals, this thesis aims to spatialize some of the hopes and narratives about these zones in the political consciousness and offer new urban visions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Developer and Lending Risk Associated with Offsite Construction</title>
<link href="https://hdl.handle.net/1721.1/152687" rel="alternate"/>
<author>
<name>Coen Jr., William</name>
</author>
<id>https://hdl.handle.net/1721.1/152687</id>
<updated>2023-11-03T03:43:05Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Understanding Developer and Lending Risk Associated with Offsite Construction
Coen Jr., William
This thesis investigates the potential of offsite construction as an effective alternative to traditional onsite methods in the construction industry. Targeting real estate professionals, financiers, developers, construction contractors, and architects, the research aims to foster confidence and awareness in offsite techniques, specifically among project lenders. Through a combination of a literature review, interviews, workshop attendance, and site visits, the study addresses three critical research questions. First, it quantifies the project finance risk profile of offsite construction compared to traditional methods. Second, it identifies the qualitative determinants that influence lending decisions for offsite projects. Finally, it explores the data and education required for the finance industry to gain confidence in offsite construction's risk profile. The findings highlight the importance of incorporating modular offsite methods into educational curricula to create a cultural shift among industry professionals. This cultural shift can dispel misconceptions about offsite construction's quality, durability, and visual appearance, ultimately encouraging wider adoption. Moreover, lenders must conduct thorough personal due diligence when financing offsite projects, as manufacturing requires significant capital early in the timeline. Understanding the financial wherewithal of offsite manufacturers and assessing their experience in completing similar projects is crucial for mitigating risks. To facilitate offsite construction financing, industry leaders should explore innovative contractual, legal, and financial instruments. Implementing recourse provisions and enabling working capital financing for offsite manufacturers can alleviate the financial burden on developers. The Uniform Commercial Code approach could also make offsite projects more appealing to traditional lenders, enhancing their security interests during fabrication. 
Integrating these solutions can support and facilitate financing for offsite projects, driving increased efficiency, sustainability, and effectiveness in building practices. Overall, this thesis provides valuable insights into offsite construction, offering a comprehensive understanding of its benefits and challenges. By disseminating these findings to the target audience, the research aims to promote the widespread adoption of offsite construction and pave the way for a more innovative and sustainable future in the construction industry.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Acquirer Abnormal Returns in Listed European Real Estate M&amp;A Transactions</title>
<link href="https://hdl.handle.net/1721.1/152685" rel="alternate"/>
<author>
<name>Reimer, Clemens</name>
</author>
<id>https://hdl.handle.net/1721.1/152685</id>
<updated>2023-11-03T03:45:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis of Acquirer Abnormal Returns in Listed European Real Estate M&amp;A Transactions
Reimer, Clemens
This thesis analyzes a sample of 70 listed European real estate M&amp;A transactions between June 2013 and June 2023. The analysis is based on three filters: target country, real estate subsegment, and payment structure. The findings reveal significant discrepancies in bid premiums compared to NAV across subsegments, with industrial segment transactions exhibiting a significant average premium of 46% and retail segment transactions occurring at an average discount of 13% to NAV. Additionally, the study finds that cash offers in the sample have higher bid premiums on average than share offers, albeit lower than the premiums in mixed payment offers. By using event study methodology, a sub-sample of 27 transactions is examined to analyze acquirer abnormal returns across multiple event windows. Consistent with prior research, the study demonstrates minor and statistically insignificant impacts on bidders’ shareholder returns. Notably, an intriguing pattern emerged when grouping the sub-sample by payment method. For the [-5/+5] and [-10/+10] event windows, transactions financed with all-cash exhibited higher cumulative average abnormal returns (CAARs) compared to all-share transactions. However, for the [-1/+1] event window, the difference between all-share and all-cash offers was relatively narrow, with slightly higher returns observed for share offers. An additional finding was that for the [-10/+10] event window, combination offers, involving both cash and shares, experienced significantly greater abnormal returns than other offer types.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visible-Light Integrated Photonics for 3D-Printing and Trapped-Ion Systems</title>
<link href="https://hdl.handle.net/1721.1/152677" rel="alternate"/>
<author>
<name>Corsetti, Sabrina M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152677</id>
<updated>2023-11-03T04:09:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Visible-Light Integrated Photonics for 3D-Printing and Trapped-Ion Systems
Corsetti, Sabrina M.
Silicon photonics has enabled next-generation optical technologies that have facilitated revolutionary advances for numerous fields spanning science and engineering, including computing, communications, sensing, and quantum engineering. In recent years, the advent of visible-light integrated photonics platforms has opened up the potential for further diverse applications. This thesis builds upon these recent technologies to demonstrate novel applications of visible-light integrated photonics.&#13;
&#13;
First, we combine the fields of silicon photonics and photochemistry to propose the first chip-based 3D printer, consisting of only a single millimeter-scale photonic chip without any moving parts that emits reconfigurable visible-light holograms up into a simple stationary resin well to enable non-mechanical volumetric 3D printing. This work presents a highly-compact, portable, and low-cost solution for the next generation of 3D printers.&#13;
&#13;
Next, we propose integrated-photonics-based system architectures and the design of key integrated-photonics components for both polarization-gradient and electromagnetically-induced-transparency cooling of trapped ions. Further, we experimentally demonstrate a pair of polarization-diverse gratings and design the first integrated polarization rotators and splitters at blue wavelengths, representing a fundamental stepping stone on the path to advanced operations for integrated-photonics-based trapped-ion quantum systems involving multiple polarizations.&#13;
&#13;
Finally, we demonstrate optical trapping and tweezing of microspheres and cancer cells using an integrated optical phased array for the first time, representing a two-orders-of-magnitude increase in the standoff distance of integrated optical tweezers and the first cell experiments using single-beam integrated optical tweezers.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Attribution: From Classifiers to Generative Models</title>
<link href="https://hdl.handle.net/1721.1/152676" rel="alternate"/>
<author>
<name>Georgiev, Kristian</name>
</author>
<id>https://hdl.handle.net/1721.1/152676</id>
<updated>2023-11-03T04:08:26Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Data Attribution: From Classifiers to Generative Models
Georgiev, Kristian
The goal of data attribution is to trace model predictions back to training data. Despite a long line of work towards this goal, existing approaches to data attribution tend to force users to choose between computational tractability and efficacy. That is, computationally tractable methods can struggle with accurately attributing model predictions in non-convex settings (e.g., in the context of deep neural networks), while methods that are effective in such regimes require training thousands of models, which makes them impractical for large models or datasets. Moreover, existing methods are often tailored to the supervised learning setting, and are not well-defined for generative models.&#13;
&#13;
In this thesis, we introduce TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models. In particular, by leveraging only a handful of trained models, TRAK can match the performance of attribution methods that require training thousands of models. We first demonstrate the utility of TRAK across various modalities and scales in the supervised setting: image classifiers trained on ImageNet, vision-language models (CLIP), and language models (BERT and mT5). Then, we extend TRAK to the generative setting, and show that it can be used to attribute different classes of diffusion models (DDPMs and LDMs).
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Hidden Network: addressing digital equity through meaningful connectivity in urban India</title>
<link href="https://hdl.handle.net/1721.1/152674" rel="alternate"/>
<author>
<name>Agrawal, Surbhi</name>
</author>
<id>https://hdl.handle.net/1721.1/152674</id>
<updated>2023-11-03T03:30:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Hidden Network: addressing digital equity through meaningful connectivity in urban India
Agrawal, Surbhi
This research explores the transformative impact of digital technologies, particularly mobile internet access, on digital equity in informal settlements in urban India. It investigates the nationwide expansion of 4G LTE infrastructure, driven by Jio's cost-effective high-speed data telecom revolution, which led to a shift towards smartphone-first internet access across diverse socio-economic classes. A market analysis demonstrates the rise of digital applications, enabling financial transactions, e-commerce, and service deliveries. Additionally, the study investigates internet activity patterns in New Delhi, revealing that infrastructure and connectivity are more significant predictors of digital equity than literacy rates. Notably, the research highlights the pivotal role played by Civil Society Organizations (CSOs) in promoting digital equity through initiatives in these urban informal settlements, emphasizing the significance of community engagement and technology-awareness efforts. It centers on the human aspect of technology, utilizing a smartphone-friendly website medium to communicate research findings in an accessible format. The research seeks to empower residents, enhance digital inclusion, and bridge the digital divide through community-centric interventions. The central research question guiding this work is to identify key determinants and barriers in achieving digital equity for marginalized communities in urban informal settlements and explore effective strategies to bridge the digital divide for their empowerment and socioeconomic upliftment.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power Failure Cascade Prediction using Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/152673" rel="alternate"/>
<author>
<name>Chadaga, Sathwik P.</name>
</author>
<id>https://hdl.handle.net/1721.1/152673</id>
<updated>2023-11-03T03:01:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Power Failure Cascade Prediction using Machine Learning
Chadaga, Sathwik P.
We consider the problem of predicting power failure cascades due to branch failures. We propose several flow-free models using machine learning techniques like support vector machines, naive Bayes classifiers, and logistic regression. These models predict the grid states at every generation of a cascade process given the initial contingency. Further, we also propose a model based on graph neural networks (GNNs) that predicts cascades from the initial contingency and power injection values. We train the proposed models using a cascade sequence data pool generated from simulations. We then evaluate our models at various levels of granularity. We present several error metrics that gauge the models’ ability to predict the failure size, the final grid state, and the failure time steps of each branch within the cascade. We benchmark the proposed models against the influence model proposed in the literature. We show that the proposed machine learning models outperform the influence models under every metric. We also show that the graph neural network model, in addition to being generic over randomly scaled power injection values, outperforms multiple influence models that are built specifically for their corresponding loading profiles. Finally, we show that the proposed models reduce the computational time by almost two orders of magnitude.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracking Sargassum in the Caribbean: The Design, Deployment, and Validation of a Low-Cost Surface Drifter</title>
<link href="https://hdl.handle.net/1721.1/152672" rel="alternate"/>
<author>
<name>Pixa, Chase R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152672</id>
<updated>2023-11-03T03:31:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tracking Sargassum in the Caribbean: The Design, Deployment, and Validation of a Low-Cost Surface Drifter
Pixa, Chase R.
This thesis presents the development of a low-cost surface drifter designed to track and monitor the abundant Sargassum seaweed in the Caribbean. The phenomenon of the Great Atlantic Sargassum Belt (GASB), inundating coastlines in the northern equatorial Atlantic and Gulf of Mexico, has raised concerns due to its negative impacts on marine ecosystems, coastal communities, and tourism. The introduction section provides background information on the arrival of Sargassum in the Caribbean and its ecological significance.&#13;
&#13;
One of the key motivations behind the drifter's development is the potential use of Sargassum as a feedstock for biofuel production. A comprehensive literature review assesses the feasibility of utilizing Sargassum for biofuels, taking into account infrastructure, economics, and scientific challenges. Although Sargassum holds promise as a renewable biomass source, several hurdles must be addressed, including consistent biomass production, processing techniques, and lack of industrial-scale biofuel plants using macroalgae.&#13;
&#13;
The core of the thesis is dedicated to the surface drifter development and field trials. Iterative trials are conducted to design a drifter that entangles with Sargassum, providing in situ movement data to complement remote sensing and modeling efforts. The drifter's design is optimized to mimic Sargassum rafts, and successful deployments off the coast of Puerto Rico demonstrate the potential for effective tracking. The drifter's association with Sargassum rafts is validated through satellite imagery and wind and current data.&#13;
&#13;
In parallel, a low-cost chemical sensing drifter is introduced in the thesis. This advanced drifter iteration incorporates self-validation mechanisms for Sargassum entanglement and enables the measurement of dissolved gases. The chemical sensing capabilities enhance the understanding of Sargassum rafts' dynamics and their environmental impact.&#13;
&#13;
The thesis concludes by summarizing the key findings and implications of the research. The low-cost surface drifters have shown promising potential for tracking Sargassum and studying its movement patterns within the GASB. The drifter's effectiveness in entangling with Sargassum provides valuable insights into the seaweed's behavior and could help improve existing remote sensing and modeling techniques.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Optimization of Tunable Insulated Hybrid Cooling to Extend Food Shelf-life Using Scalable and Affordable Materials</title>
<link href="https://hdl.handle.net/1721.1/152669" rel="alternate"/>
<author>
<name>Ko, Young</name>
</author>
<id>https://hdl.handle.net/1721.1/152669</id>
<updated>2023-11-03T03:45:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling and Optimization of Tunable Insulated Hybrid Cooling to Extend Food Shelf-life Using Scalable and Affordable Materials
Ko, Young
Cooling is a pivotal technology for tackling the global food crisis by extending food shelf-life. More than 15% of food loss is due to improper food storage temperatures in developing countries. Every 10℃ temperature drop can increase food shelf-life by 2-3 times. However, conventional electrically powered cooling technology, such as vapor-compression refrigerators, is not readily available in developing countries. To this end, passive cooling, including radiative and evaporative cooling, is a promising solution that enables daytime sub-ambient temperature without any power requirement. However, radiative and evaporative cooling are constrained by low energy density, climate conditions, and significant water consumption. &#13;
&#13;
In this work, we combine evaporative and radiative cooling into tunable insulated hybrid cooling (TIHC) that can deliver low-cost, high-performance passive cooling for post-harvest foods. TIHC comprises three functional layers: hydrophobic porous membrane for radiative cooling, polyacrylamide hydrogel for evaporative cooling, and aluminum sheet as a substrate. Instead of the state-of-art radiative cooling material that requires complex and expensive fabrication, TIHC leverages commercially mass-produced hydrophobic porous membranes to achieve optical selectivity. In addition, an air gap between the membrane and the hydrogel insulates the cooling structure to reduce environmental heat gain. Concurrently, the air gap provides tunability to optimize the cooling performance by modifying the insulation thickness. &#13;
&#13;
Based on the TIHC concept, we implement a one-dimensional heat and mass transfer model to predict the cooling performance. We accelerate the computation time by a physics-based approximation of radiative heat transfer, which decouples it from conductive heat transfer. As a result, the model can simulate cooling performance for diverse design parameters, including optical properties of the hydrophobic porous membrane, air gap, polyacrylamide hydrogel, and storage free-space thickness. Next, we fabricate a surface-level TIHC cooler and characterize its cooling performance through outdoor cooling experiments. Cooling power and stagnation temperature difference for three different hydrophobic porous membranes (polyethylene film, polypropylene film, and polytetrafluoroethylene film) are measured and compared to the model predictions. We obtain a good agreement between the experimental data and the model predictions. However, the cooling performance of all tested hydrophobic porous membranes is inferior to pure evaporative cooling due to insufficient solar reflectance. Nevertheless, the TIHC cooler achieved 81.6% of the temperature drop of the pure evaporative cooler while consuming 48.8% less water throughout the day. &#13;
&#13;
Finally, we propose several optimization strategies to improve the TIHC cooling performance. We identify desired optical properties of the hydrophobic porous membrane to outperform pure evaporative cooling. We also discover a shift in how cooling power trends with air gap thickness depending on the food temperature, from which optimal air gap thicknesses to maximize cooling power are determined. Eventually, we simulate the transient food temperature profile and quantitatively predict the food shelf-life under real-time weather conditions. The simple design of TIHC food storage is expected to improve the shelf-life of red tomatoes by up to 231.7% in Nairobi, Kenya. Given its compact structure, low-cost scalable materials, and tunable cooling performance, we expect the successful deployment of TIHC food storage can bring promising benefits to farmers, businesses, and households in developing countries.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Methods for Biomedical Imaging</title>
<link href="https://hdl.handle.net/1721.1/152668" rel="alternate"/>
<author>
<name>Gerlach, Connor Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152668</id>
<updated>2023-11-03T03:08:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Computational Methods for Biomedical Imaging
Gerlach, Connor Michael
This work aims to survey and advance the state of the art in methods for biomedical imaging and disease diagnosis. We demonstrate the generation of non-diffracting beams using Lee holography, and argue that the rich depth information made possible by these beams is well-suited for machine learning applications, where 2D images can contain 3D contextual information without the added computational overhead of performing 3D convolutions. We begin with a review of important non-diffracting beams in the existing literature, and proceed to discuss the necessary experimental design for their generation. We then demonstrate the experimental generation of these beams, including the novel generation of a rotating beam and needle beam via Lee holography. This is followed by the presentation and analysis of a particular semi-supervised machine learning method, contrastive learning, and a novel demonstration of how transfer learning can further improve the representations made by contrastive learning.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RL-SAR: A Robotic System for Fine-Grained RFID&#13;
Localization using RL-based Synthetic Aperture&#13;
Radar</title>
<link href="https://hdl.handle.net/1721.1/152667" rel="alternate"/>
<author>
<name>Chen, Weitung</name>
</author>
<id>https://hdl.handle.net/1721.1/152667</id>
<updated>2023-11-03T03:18:29Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">RL-SAR: A Robotic System for Fine-Grained RFID&#13;
Localization using RL-based Synthetic Aperture&#13;
Radar
Chen, Weitung
Efficient localization of RFID-tagged items is crucial in scenarios that require tracking and managing a large inventory. Current systems for fine-grained RFID localization have shown limitations since they only collect measurements on a pre-defined trajectory or optimize measurement locations for a single tag. Thus, there is a need for an RFID localization system that can autonomously optimize for multiple tags and adaptively relocalize tags with lower confidence to achieve a more precise and efficient localization.&#13;
&#13;
We introduce RL-SAR, an end-to-end autonomous Synthetic Aperture Radar (SAR) based RFID localization system, utilizing a Reinforcement Learning (RL) algorithm to determine an optimal trajectory for localizing multiple tags. We implemented this system with an antenna moving on a ceiling-mounted 2D track. The core of the system is an RL-based trajectory optimization algorithm for collecting RF measurements. Based on these RF measurements, we developed a data processing pipeline to compute the estimated tag locations along with their confidence metrics, derived from the RF SAR hologram. The RL algorithm leverages confidence metrics associated with the tags and is capable of learning a strategy that minimizes the antenna’s traveled distance while enhancing the localization accuracy.&#13;
&#13;
We built and evaluated a proof-of-concept prototype of RL-SAR. Experimental evaluation demonstrates a mean 3D localization accuracy of 0.244 m and the capability to locate 15 tags within an average scanning distance of 19.14 m. We compared our algorithm to naive baselines and show that the baselines require an 86% longer trajectory than RL-SAR. Our results show the potential for achieving robust and efficient localization to enhance the current inventory processes across the manufacturing, retail, and logistics sectors.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chemistry in a new key: Surplus, soy, and the history of sustainable enterprise in the United States, 1934-1950</title>
<link href="https://hdl.handle.net/1721.1/152666" rel="alternate"/>
<author>
<name>La Rock, Zachary</name>
</author>
<id>https://hdl.handle.net/1721.1/152666</id>
<updated>2023-11-03T03:22:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Chemistry in a new key: Surplus, soy, and the history of sustainable enterprise in the United States, 1934-1950
La Rock, Zachary
In the mid-1930s, a prominent group of industrialists, politicians, and farmers in the United States rallied around chemurgy, an emergent field of applied chemistry that sought to transform post-World War I agricultural surplus into industrial commodities. Ephemeral but wide-ranging in its scope, the “chemical revolution” that chemurgy’s proponents envisioned was a promise that the ends of agriculture and those of industry might be hybridized if the former dedicated itself to the cultivation of plant-based chemical compounds for the latter’s manipulation. In so doing, chemurgy became, in the eyes of its advocates, something of a panacea: for raw material scarcity, for Dust Bowl land degradation, and for underemployment caused by the Great Depression and racial segregation after Civil War Reconstruction. Under the banner of this hard-to-pronounce neologism, automaker Henry Ford and soil scientist George Washington Carver united in unlikely friendship and a quest to find new industrial applications for already existing plants, especially the soybean.&#13;
&#13;
Historicizing the futures that chemurgy’s allies, especially Ford and Carver, advocated, two distinct versions of the field emerge. Ford’s chemurgy entailed autarchic, unregulated mass production of single crops that linked farms, factories, and a white American workforce ever more closely as they worked to harvest profits for captains of industry. That of Carver, meanwhile, privileged the diversification of arable land and self-maintenance of a black base of growers in a context marked by land dispossession and accumulation under racial capitalism. Almost a century since chemurgy was coined, it is worth revisiting this long-forgotten movement as a progenitor of contemporary calls that processes of industrial production be low-waste, renewable, even “green.” The tensions internal to this modernist doctrine of scientific praxis, which anchored innovation firmly in the soil, situate a North American genealogy of the logics by which today’s industries of sustainable enterprise replicate ecological and economic inequities of the past.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, Analysis, and Design of&#13;
Switched-Capacitor Battery Cell Balancers</title>
<link href="https://hdl.handle.net/1721.1/152665" rel="alternate"/>
<author>
<name>Lopez, Mario A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152665</id>
<updated>2023-11-03T03:02:56Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling, Analysis, and Design of&#13;
Switched-Capacitor Battery Cell Balancers
Lopez, Mario A.
Battery systems have become crucial components in many modern technological solutions. Battery balancers are among the most important parts of these systems because they play a significant role in the battery’s lifespan and performance. A novel capacitive-based balancer was designed and tested for two-cell and four-cell batteries. The key parameters that were optimized are efficiency, balancing time, volume, and cost. A theoretical model of the circuit was derived to guide design optimization. Additionally, simulations were created to predict performance. Custom printed circuit boards were developed and tested.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patches As Agents</title>
<link href="https://hdl.handle.net/1721.1/152663" rel="alternate"/>
<author>
<name>Zheng, Winnie</name>
</author>
<id>https://hdl.handle.net/1721.1/152663</id>
<updated>2023-11-03T03:03:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Patches As Agents
Zheng, Winnie
StarLogo Nova is a powerful agent-based game and simulation programming environment designed for classroom learning. This modeling tool allows students to apply their creativity and replicate real-world phenomena through game-like coding and scientific knowledge. However, the current agent-to-environment interaction is limited, with the environment (terrain) possibly affected by the agent’s action, but unable to initiate actions autonomously. This thesis proposes an innovative approach to StarLogo Nova by transforming the static environment into a dynamic structure made of thousands of patch agents. In this new framework, the traditional static terrain is now composed of a collection of independent patch agents, each with its own behaviors and traits. This transformation enables the patch agents to actively interact with each other and the surrounding agents, expanding the potential for a more complex and responsive simulation.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Reinforcement-Learning-based Robot Navigation with 3D Scene Graphs</title>
<link href="https://hdl.handle.net/1721.1/152662" rel="alternate"/>
<author>
<name>Muriga, Veronica</name>
</author>
<id>https://hdl.handle.net/1721.1/152662</id>
<updated>2023-11-03T03:21:27Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Towards Reinforcement-Learning-based Robot Navigation with 3D Scene Graphs
Muriga, Veronica
Applying Reinforcement Learning (RL) for autonomous navigation has enormous potential in several robotics applications, including search and rescue operations. RL circumvents the need to manually specify a control policy for navigation and allows capturing aspects that are difficult to describe without relying on learning, e.g., that survivors or objects of interest are more likely to be found in specific regions of the environment. This is relevant for navigation policies guiding autonomous exploration and object search. To improve the performance of RL models guiding autonomous agents, we use 3D Scene Graphs (3DSGs) as a map representation. Previous work has shown that RL policies based on offline 3DSGs produce promising results in simulation, and this work takes initial steps towards extending these findings to 3DSGs produced online by Hydra, a new spatial perception system that builds 3DSGs in real-time. The work also provides an initial integration of the RL policies previously trained and evaluated in simulation [1] on a Unitree A1 quadruped robot. While the results are too preliminary to be conclusive, the thesis takes several integration steps towards deploying scene-graph-based RL policies for navigation on real robots.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Storytelling Entrepreneur Has No Clothes: Risks and&#13;
Rewards of Narrative Pitching</title>
<link href="https://hdl.handle.net/1721.1/152659" rel="alternate"/>
<author>
<name>Turner, Bradley</name>
</author>
<id>https://hdl.handle.net/1721.1/152659</id>
<updated>2023-11-03T03:03:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Storytelling Entrepreneur Has No Clothes: Risks and&#13;
Rewards of Narrative Pitching
Turner, Bradley
Scholars have shown that individuals and organizations can tell resonant stories, or narratives, to persuade audiences to evaluate them more highly. Yet other research has cast doubt on the persuasive power of storytelling, particularly for professional audiences like investors. To explore the conditions of narrative persuasion, I qualitatively study the practice of storytelling by entrepreneurs and the responses of angel investors. By coding a representative sample of 330 pitches on the reality show Shark Tank, I produce the first catalog of startup stories in pitches, focusing on character, tropes, temporalities, and shapes. This catalog illustrates isomorphism in pitching and in the targets of narrative claims (e.g., the entrepreneur more than the market opportunity). While all Shark Tank entrepreneurs narrate answers to investors’ questions, only a third tell a story in the elevator pitch. Even when investors acknowledge such stories as high-quality, coherent, and resonant, they may still discount, dismiss, or counter them, instead demanding data, fact, and “substance.” I hypothesize that these limits to narrative persuasion derive not only from a competing institutional logic, but also from narrative’s malleability and conventional usage. I contribute the first catalog of startup stories and a novel theory of narrative backlash to entrepreneurship research, strategic communication, and institutional theory.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Bayes via ERM and Rademacher complexities: the Poisson model</title>
<link href="https://hdl.handle.net/1721.1/152656" rel="alternate"/>
<author>
<name>Teh, Anzo Zhao Yang</name>
</author>
<id>https://hdl.handle.net/1721.1/152656</id>
<updated>2023-11-03T03:04:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Empirical Bayes via ERM and Rademacher complexities: the Poisson model
Teh, Anzo Zhao Yang
We consider the problem of empirical Bayes estimation for (multivariate) Poisson means. Existing solutions that have been shown theoretically optimal for minimizing the regret (excess risk over the Bayesian oracle that knows the prior) have several shortcomings. For example, the classical Robbins estimator does not retain the monotonicity property of the Bayes estimator and performs poorly at moderate sample sizes. Estimators based on the minimum-distance and non-parametric maximum likelihood (NPMLE) methods correct these issues, but are computationally expensive, with complexity growing exponentially in the dimension. Extending the approach of Barbehenn and Zhao (2022), in this work we construct monotone estimators based on empirical risk minimization (ERM) that retain similar theoretical guarantees and can be computed much more efficiently. Adapting the idea of offset Rademacher complexity of Liang et al. (2015) to the non-standard loss and function class in empirical Bayes, we show that the shape-constrained ERM estimator attains the minimax regret within constant factors in one dimension and within logarithmic factors in multiple dimensions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Accelerator Generation and Optimization for Tensor Applications</title>
<link href="https://hdl.handle.net/1721.1/152655" rel="alternate"/>
<author>
<name>Zhang, Zhekai</name>
</author>
<id>https://hdl.handle.net/1721.1/152655</id>
<updated>2023-11-03T03:03:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Spatial Accelerator Generation and Optimization for Tensor Applications
Zhang, Zhekai
Modern foundation models and generative AI applications require multiple input modalities (both vision and language), which increases the demand for flexible accelerator architectures.&#13;
&#13;
Existing frameworks suffer from a trade-off between design flexibility and the productivity of RTL generation: they are either limited to a few hand-written templates or cannot automatically generate RTL.&#13;
&#13;
To address this challenge, we propose the LEGO framework, which automatically generates and optimizes spatial architecture designs in the front end and outputs synthesizable RTL code in the back end without RTL templates. The LEGO front end finds all possible interconnections between function units, determines the memory system shape by solving integer linear equations, and establishes the connections using a minimum-spanning-tree-based algorithm and a breadth-first-search-based heuristic for merging different spatial dataflow designs. The LEGO back end then translates the hardware into a primitive-level graph to perform lower-level optimizations, applying a set of linear-programming algorithms to optimally insert pipeline registers and reduce the overhead of unused logic when switching spatial dataflows.&#13;
&#13;
Our evaluation demonstrates that LEGO achieves a 3.2× speedup and 2.4× higher energy efficiency compared with the prior work Gemmini, and can generate a single architecture for diverse modern foundation models in generative AI applications.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Patient Access and Comprehension of Clinical&#13;
Notes: Leveraging Large Language Models to Enhance&#13;
Readability and Understanding</title>
<link href="https://hdl.handle.net/1721.1/152654" rel="alternate"/>
<author>
<name>Mannhardt, Niklas</name>
</author>
<id>https://hdl.handle.net/1721.1/152654</id>
<updated>2023-11-03T03:57:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Improving Patient Access and Comprehension of Clinical&#13;
Notes: Leveraging Large Language Models to Enhance&#13;
Readability and Understanding
Mannhardt, Niklas
Patient access to clinical notes has demonstrated numerous benefits, including an increased sense of control over their condition, enhanced engagement, improved medication adherence, and greater clinician accountability. However, the presence of medical jargon, abbreviations, and complex medical concepts within clinical notes hinders patient comprehension, thus diminishing the positive effects of note accessibility. These notes, primarily intended for clinicians, often appear disorganized and contain an abundance of technical terms. Breast cancer patients, in particular, face information overload and experience taxing symptoms related to their treatment, exacerbating this issue. Although some clinicians are adapting their writing style to meet patients’ needs, time constraints limit the feasibility of comprehensive note-taking. We propose the development of a patient-facing tool, in the form of a web application, to make information contained in clinical notes more accessible by leveraging machine learning models to simplify, summarize, extract information from, and add context to clinical notes. Through a series of user studies, we demonstrate that our proposed augmentations to clinical notes significantly improve comprehension and enhance patients’ reading experience.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polyethylene-Based Multifunctional Composite Material for Radiation Shielding, Passive Thermoregulation, and In-Situ Fabrication for Space Exploration</title>
<link href="https://hdl.handle.net/1721.1/152651" rel="alternate"/>
<author>
<name>Xu, Duo</name>
</author>
<id>https://hdl.handle.net/1721.1/152651</id>
<updated>2023-11-03T03:02:09Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Polyethylene-Based Multifunctional Composite Material for Radiation Shielding, Passive Thermoregulation, and In-Situ Fabrication for Space Exploration
Xu, Duo
Ionizing radiation sources like energetic electrons, protons, gamma rays, and secondary particles, such as thermal neutrons, are abundant in low Earth orbit (LEO) and deep space environments like lunar and Martian surfaces, where space missions are often conducted. Such sources of space radiation can induce severe damage to human tissue and critical electronic components, thereby posing significant risks to space exploration. Moreover, the absence of convection in space confines heat removal exclusively to thermal radiation, which negatively impacts the performance of electronic components, unless they are designed to function at elevated temperatures. Due to the considerably high cost of transporting materials and equipment to space, a cost-effective solution for mitigating ionizing radiation and overheating risks is crucial, especially for extended, deep-space missions. This thesis presents a polyethylene-based multifunctional composite material aiming to achieve simultaneous space radiation attenuation, passive thermoregulation, and in-situ fabrication.&#13;
&#13;
The first component of the thesis explores radiation shielding. While polyethylene is recognized as one of the best candidates for primary shielding against Galactic Cosmic Rays (GCRs) and Solar Particle Events (SPEs), it does not adequately mitigate secondary particles. The proposed composite material, comprising polyethylene and boron-rich fillers, aims to match polyethylene's GCR and SPE performance while enhancing attenuation of thermal neutrons, the predominant secondary particle. The shielding performance against GCRs and SPEs on the Martian surface is illustrated with the deterministic radiation transport tool OLTARIS, and the thermal neutron attenuation performance is demonstrated with the Monte Carlo particle transport tool PHITS and confirmed by EQ-SANS measurements.&#13;
&#13;
The second component of the thesis addresses the feasibility of additive in-situ fabrication via fused deposition modeling (FDM). Because FDM printing of polyethylene is difficult, developing a reliable printing approach has proven challenging. This thesis reports a reliable printing process for an optimized blend of different polyethylene resins, with or without various nanoparticles as dopants.&#13;
&#13;
The third component focuses on passive thermoregulation performance. The optimized FDM printing process enables the fabrication of polyethylene with various fillers, hence providing design flexibility in filler selection, including compounds with boron and even other materials. With this flexibility, a coupled optics-heat transfer model has been developed to select materials providing both shielding and passive thermal regulation properties. This model accounts for the penetration of solar irradiation, power generation from inside, and temperature gradient across the layer, establishing the relationship between the optical properties of each material component and the temperature of the inner layer of the multifunctional material.&#13;
&#13;
Although this thesis primarily targets extraterrestrial applications, the techniques developed in each component have broader applicability. For instance, the radiation transport simulation could be employed in other radiation environments such as nuclear reactors; the optimized FDM printing process allows for the additive manufacture of polyethylene, a versatile and affordable thermoplastic material not previously reliably fabricated; and the coupled optics-heat transfer model has broader applications such as thermal transport in multilayer systems characterized by significant temperature gradients and solar irradiation penetration.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information-theoretic Algorithms for Model-free&#13;
Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/152649" rel="alternate"/>
<author>
<name>Wu, Farrell Eldrian S.</name>
</author>
<id>https://hdl.handle.net/1721.1/152649</id>
<updated>2023-11-03T03:36:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Information-theoretic Algorithms for Model-free&#13;
Reinforcement Learning
Wu, Farrell Eldrian S.
In this work, we propose a model-free reinforcement learning algorithm for infinite-horizon, average-reward decision processes where the transition function has a finite yet unknown dependence on history, and where the induced Markov Decision Process is assumed to be weakly communicating. This algorithm combines the Lempel-Ziv (LZ) parsing tree structure for states introduced in [4] with the optimistic Q-learning approach in [9]. We mathematically analyze the algorithm towards showing sublinear regret, providing major steps towards such a proof. In doing so, we reduce the proof to showing sub-linearity of a key quantity related to the sum of an uncertainty metric at each step. Simulations of the algorithm are left to later work.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Memory Efficiency and Accuracy in&#13;
Spectral-Based Graph Transformers</title>
<link href="https://hdl.handle.net/1721.1/152648" rel="alternate"/>
<author>
<name>Ho, Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/152648</id>
<updated>2023-11-03T03:32:42Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Balancing Memory Efficiency and Accuracy in&#13;
Spectral-Based Graph Transformers
Ho, Kelly
The transformer architecture has been a significant driving force behind advancements in deep learning, yet transformer-based models for graph representation learning have not caught up to mainstream Graph Neural Network (GNN) variants. A major limitation is the large O(n²) memory consumption of graph transformers, where n is the number of nodes. Therefore, we develop a memory-efficient graph transformer for node classification, capable of handling graphs with thousands of nodes while maintaining accuracy. Specifically, we reduce the memory use in the attention mechanism and add a random-walk positional encoding to improve upon the SAN graph transformer architecture. We evaluate our model on standard node classification benchmarks: Cora, Citeseer, and Chameleon. Unlike SAN, which runs out of memory, our memory-efficient graph transformer can be run on these benchmarks. Compared with landmark GNN models GCN and GAT, our graph transformer requires 27.92% less memory while being competitive in accuracy.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Fiber Extrusion Device for Educational Purposes: Redesign, Manufacture, and Computer Vision Integration</title>
<link href="https://hdl.handle.net/1721.1/152647" rel="alternate"/>
<author>
<name>Sefah, Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/152647</id>
<updated>2023-11-03T03:41:19Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Low-Cost Fiber Extrusion Device for Educational Purposes: Redesign, Manufacture, and Computer Vision Integration
Sefah, Gary
The Fiber Extrusion Device (FrED) serves as a hands-on learning tool and laboratory experience, simulating the continuous fiber draw process to provide insights into data acquisition, control systems, and smart manufacturing. This system enables learners to conduct experiments, manipulate manufacturing parameters and control systems, gather data, and conduct analyses. While successful classroom activities have been conducted using FrED, the preceding model's cost precludes widespread distribution for remote learning, a growing trend in education.&#13;
&#13;
This thesis encompasses a series of enhancements to FrED aimed at refining its stability, cooling mechanisms, modularity, noise reduction, size, and overall functionality. Pulley variations were introduced to enhance fiber stability. Cooling strategies and the pulley system's flexibility were optimized for fiber stability, and noise reduction measures focused on the gear system. The camera system was significantly redesigned, enabling more precise fiber diameter measurement. In addition, a shift from a Teensy to a Raspberry Pi improved system integration. Code for the extrusion and gear motors, heater, and thermistor was rewritten, alongside redesigns of the extrusion system, PCB, and camera module.&#13;
&#13;
The final FrED design achieved a 42% cost reduction ($159) and a 25% weight reduction (1.7 kg) while maintaining optimal fiber cooling and stability. Seamless integration of computer vision for diameter measurement and data collection was also achieved, enabling the device's application in PID control and enhancing the teaching of machine learning principles.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost Optimization in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs</title>
<link href="https://hdl.handle.net/1721.1/152646" rel="alternate"/>
<author>
<name>Liggett, J. Chandler</name>
</author>
<id>https://hdl.handle.net/1721.1/152646</id>
<updated>2023-11-03T03:34:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Cost Optimization in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs
Liggett, J. Chandler
The cutting of sheet metal blanks from raw sheet stock is a crucial process in the sheet metal fabrication industry. One of the primary cost drivers for this process is sheet utilization, which is the amount of raw material processed into a usable blank compared to the total raw material processed. Nesting is a method that efficiently packs blanks onto raw sheets with the aim of reducing scrap generation by improving material utilization. Modern nesting algorithms are quite successful at maximizing sheet utilization given an explicit set of available raw sheets and a set of blanks defined as candidates for nesting. Because of this, nesting efficiency and thus sheet utilization are primarily determined by the characteristics of the candidate blanks and the number of candidate blanks that can be nested together. Nesting strategies may be chosen to include the maximum number of possible candidate blanks for maximized efficiency. On the other hand, nesting strategies may instead restrict the available part candidates for the purpose of reducing sorting and handling complexities downstream of the cutting operation. In between these two extremes, it is hypothesized that there exists an optimum nesting strategy that balances improved sheet utilization with the negative cost effect of more intensive handling requirements. In this work, the effect of varying nesting strategies on sheet utilization is studied in the context of a sheet metal manufacturing operation with plant locations across the globe. Cost models are produced that inform the selection of a globally optimized nesting strategy, and throughput models are considered which inform the validity of cost-optimized strategies. Additionally, regional differences in cost drivers are studied, and an optimized nesting strategy is validated for deployment across global plant locations. 
This work provides a detailed approach to optimizing sheet utilization in sheet metal manufacturing through selection of an optimized nesting strategy.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigations into Ultra-Low-Power Underwater Imaging</title>
<link href="https://hdl.handle.net/1721.1/152645" rel="alternate"/>
<author>
<name>Naeem, Nazish</name>
</author>
<id>https://hdl.handle.net/1721.1/152645</id>
<updated>2023-11-03T03:21:43Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Investigations into Ultra-Low-Power Underwater Imaging
Naeem, Nazish
Imaging underwater environments is crucial to advancing our understanding of marine organisms, climate change, marine geology, aquaculture farming, and underwater archaeology. Despite significant advances in underwater imaging, scalable and long-term imaging of underwater environments is still an open problem. One of the main challenges in scalably imaging the ocean is that existing underwater cameras are too power-hungry for long-term observations. Recent work on ultra-low-power underwater imaging has shown that in-situ wireless underwater imaging is possible using fully submerged battery-free cameras and acoustic backscatter. Even though this is a promising advance, enabling truly useful ultra-low-power underwater imaging remains difficult due to many challenges and constraints, including poor image quality (due to marine snow, hazing, and lighting conditions), limited energy, limited memory and computational power, and the low bandwidth of the acoustic channel.&#13;
&#13;
This thesis investigates the various challenges that efficient and ultra-low-power underwater imaging faces and offers directions for solving them. In particular, we first survey the various challenges of ultra-low-power underwater imaging. Subsequently, we offer three solutions for addressing these challenges. First, we propose a simple denoising/desnowing method for ultra-low-power underwater imaging that shows a ∼2 dB improvement in the quality of the images while reducing memory consumption by ∼17× compared to state-of-the-art systems. Second, we perform ultra-low-power underwater edge inference that is ∼19× more memory efficient than the baseline model with comparable accuracy. Then, we propose a solution for enabling ultra-low-power color imaging that is ∼10× less power-hungry than the state-of-the-art battery-free underwater imaging system. We conclude by offering a path to integrating these solutions into future end-to-end ultra-low-power underwater imaging systems.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optically Controlled Vertical GaN finFET for Power Applications</title>
<link href="https://hdl.handle.net/1721.1/152641" rel="alternate"/>
<author>
<name>Hsia, Jung-Han</name>
</author>
<id>https://hdl.handle.net/1721.1/152641</id>
<updated>2023-11-03T03:40:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Optically Controlled Vertical GaN finFET for Power Applications
Hsia, Jung-Han
With the increasing demand for electricity, efficient power electronics with high voltage and current capabilities become crucial in many applications. However, current power devices are mostly electrically triggered. Multilevel converters made of such devices often require complicated gate-driving circuits and are susceptible to electromagnetic interference (EMI). Optically triggered power devices can significantly reduce circuit complexity and EMI susceptibility while improving system reliability.&#13;
&#13;
This thesis presents the first demonstration of an optically controlled vertical GaN finFET. The first part of the thesis describes the physics and design of the device, assisted by simulation, followed by fabrication using a Design-Technology Co-Optimization (DTCO) approach. Finally, device measurements are presented. Our devices have shown a maximum current density of J_DS &gt; 90 A/cm² at V_DS = 3 V, triggered by a low-power 365 nm LED, which translates into an optical responsivity greater than 10⁵ A/W. These preliminary results show promising potential for our devices to enable future high-voltage, high-current power systems.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Mass-Manufacturable Globally Distributable Passive Prosthetic Foot</title>
<link href="https://hdl.handle.net/1721.1/152640" rel="alternate"/>
<author>
<name>Irani, Urvaksh</name>
</author>
<id>https://hdl.handle.net/1721.1/152640</id>
<updated>2023-11-03T03:18:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design of a Mass-Manufacturable Globally Distributable Passive Prosthetic Foot
Irani, Urvaksh
A lack of affordable energy storage and return (ESR) prosthetic feet compels amputees in low and middle income countries (LMIC) to adopt feet that do not meet the performance of ESR feet distributed in high-income countries. The GEAR Lab at MIT developed the LLTE design framework that systematically alters the geometry and stiffness of a foot to design ESR feet using a low cost material (Nylon 6/6) that enables close replication of able-bodied gait. LLTE-optimized foot prototypes have been tested in a long-term field trial in India and in gait labs in the United States. The feet demonstrated robustness to use for activities of daily living in India, and as good or better biomechanical performance and user satisfaction than commercial carbon fiber feet sold in the United States. However, these prototypes were not designed to be commercial products, but rather to demonstrate the viability of the LLTE design framework. The prototypes were CNC machined, resulting in a cost of &gt;$200 per foot (a significant expense for many individuals in LMIC) and were only compatible with a single attachment system, thus limiting the potential for adoption by LMIC distributors, each with their own unique attachment system. This thesis aims to translate these proof-of-concept prototypes to commercial products by making the foot mass-manufacturable, easily adoptable by major distribution networks, and incorporating a few upgrades: improved aesthetics, coronal compliance, and a sandal toe.&#13;
&#13;
The upgraded foot described in this thesis is composed of a mass-manufacturable keel encased in a polyurethane foam overmold resembling a biological foot, with a ruggedized sole and two swappable attachment modules. The swappable attachment modules can be easily fastened to the foot to facilitate dissemination through the major distribution networks in LMIC. The first module ensures compatibility with the Bhagwan Mahavir Viklang Sahayta Samiti (BMVSS) attachment system, while the second module makes the foot compatible with both the ICRC attachment system and a pyramid adaptor. An upgraded architecture with a c-channel cross-section (to facilitate injection molding) was incorporated into the LLTE design framework, and an optimization for a 60 kg person with a size 7 foot was run. The resulting optimized design has an LLTE value of ~0.1 and is thus expected to retain the high performance of previously tested LLTE prototypes.&#13;
&#13;
The mass-manufacturable keel was mechanically tested to validate that it behaved as predicted, and over-molded by Vibram to produce a final prototype. The prototypes will be ISO tested and then used in a field trial to compare their performance to existing LMIC feet. Following the field trial, a sizing system for a product line (with a finite number of foot sizes) will be developed such that a large percentage of the population can be prescribed a foot that is either optimal or close to optimal for them. Commercialization of this upgraded foot would offer amputees an affordable ESR option that can readily be adopted by major distribution networks in LMIC.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Motion Prediction for Efficient Human-Robot&#13;
Collaboration</title>
<link href="https://hdl.handle.net/1721.1/152639" rel="alternate"/>
<author>
<name>Kothari, Aadi</name>
</author>
<id>https://hdl.handle.net/1721.1/152639</id>
<updated>2023-11-03T03:56:19Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Real-Time Motion Prediction for Efficient Human-Robot&#13;
Collaboration
Kothari, Aadi
Human motion prediction is an essential step for efficient and safe human-robot collaboration. Current methods either rely purely on representing the human joints in some form of neural-network-based architecture or use offline regression models to fit hyper-parameters in the hope of capturing a model encompassing human motion. While these methods provide good initial results, they do not leverage well-studied human body kinematic models or body and scene constraints, which can boost the efficacy of prediction frameworks while explicitly avoiding implausible human joint configurations. We propose a novel human motion prediction framework that incorporates human joint constraints and scene constraints in a Gaussian Process Regression (GPR) model to predict human motion over a set time horizon. This formulation is combined with an online context-aware constraints model to leverage task-dependent motions. It is tested on a human arm kinematic model and implemented on a human-robot collaborative setup with a UR5 robot arm to demonstrate the real-time capability of our approach. Simulations were also performed on datasets such as HA4M and ANDY. The simulation and experimental results demonstrate considerable improvements in a Gaussian Process framework when these constraints are explicitly considered.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Modular Visual Data Manipulation Framework for Data Exploration in the Consumer Packaged Goods Industry</title>
<link href="https://hdl.handle.net/1721.1/152637" rel="alternate"/>
<author>
<name>Huang, Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/152637</id>
<updated>2023-11-03T03:01:56Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Developing a Modular Visual Data Manipulation Framework for Data Exploration in the Consumer Packaged Goods Industry
Huang, Allen
The rapidly increasing reliance on data analytics to drive strategic decision-making in today’s digital economy means that efficient and user-friendly data analysis tools are becoming increasingly important. Even as understanding and manipulating data becomes more critical, the technical complexity of traditional query languages like SQL often poses a substantial barrier to non-technical users.&#13;
&#13;
In this thesis, we present a fully visual analytics framework that can be integrated with arbitrary relational data stored in an analytics platform. We describe the design and implementation of a frontend client with which nontechnical users can construct rich queries involving relational operations such as aggregations and filters on promotional data and view their outputs in tabular or graphical form. We also describe a protocol for uniquely and unambiguously describing these queries, as well as the design and implementation of an engine by which these queries are efficiently executed.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Neuro-Symbolic Skills for Bilevel Planning</title>
<link href="https://hdl.handle.net/1721.1/152636" rel="alternate"/>
<author>
<name>Athalye, Ashay</name>
</author>
<id>https://hdl.handle.net/1721.1/152636</id>
<updated>2023-11-03T04:06:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Learning Neuro-Symbolic Skills for Bilevel Planning
Athalye, Ashay
It is challenging for robots to solve tasks in environments with continuous state and action spaces, long horizons, and sparse feedback. Hierarchical approaches such as task and motion planning (TAMP) address this challenge, enabling efficient problem solving by decomposing decision-making into two or more levels of abstraction. In a setting where expert demonstrations, symbolic predicates for state abstraction, and manually designed parameterized policies are given, prior work has shown how to learn symbolic operators and neural samplers for TAMP. However, manually designing parameterized policies can be difficult and impractical, so we would instead like our agent to learn them. In this work, we develop a method for learning parameterized policies in combination with operators and samplers from demonstrations. These components are packaged into modular neuro-symbolic skills and sequenced together with search-then-sample TAMP to solve new tasks. In experiments in four robotics domains, we show that our approach – bilevel planning with neuro-symbolic skills – can solve a wide range of tasks with varying initial states, objects, and goals, outperforming six baselines and ablations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CS2 Student Programming Performance Prediction and Intervention</title>
<link href="https://hdl.handle.net/1721.1/152634" rel="alternate"/>
<author>
<name>Dargan, Hope</name>
</author>
<id>https://hdl.handle.net/1721.1/152634</id>
<updated>2023-11-03T03:14:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">CS2 Student Programming Performance Prediction and Intervention
Dargan, Hope
As the number of students in a course grows, it becomes increasingly difficult for instructors to identify and help students who are struggling to develop a good understanding of the material. This study investigates scalable prediction and intervention methods in the context of 6.101, an intermediate programming course at MIT. First, a broad investigation was conducted into early predictive factors of students who earn a C, D, F, or Withdraw (CDFW) from 6.101. Results suggested that limited prior programming experience was associated with higher CDFW rates, as were other factors such as high amounts of early office hour usage and lower grades in certain prerequisites. Prediction efforts focused on students of interest (SOI) who initially committed to the course but were likely to earn a C, D, F, or Later Withdraw (CDFLW) from 6.101. A hand-tuned model that combined various predictive factors identified SOI with 75 percent accuracy (13 percent sensitivity, 90 percent specificity) three weeks into the semester. In order to help SOI develop their programming skills, encourage independent problem solving, and increase feelings of belonging and community within the CS department, a series of optional weekly programming practice sessions was developed and implemented. While the results of the intervention are inconclusive due to the small number of students who attended sessions and responded to post-semester surveys, the available data from two semesters suggest that the intervention had limited impact in all three design areas. Overall, SOI had lower exam scores, received more help with assignments, and reported lower ratings of belonging and community at the end of the semester compared to non-SOI. These findings have potential broader implications for how “at-risk” students are defined, how predictive models are created and used, and how interventions are designed.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A mechanically-derived contact model for adhesive&#13;
elastic-perfectly plastic particles</title>
<link href="https://hdl.handle.net/1721.1/152633" rel="alternate"/>
<author>
<name>Zunker, William R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152633</id>
<updated>2023-11-03T03:03:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A mechanically-derived contact model for adhesive&#13;
elastic-perfectly plastic particles
Zunker, William R.
A contact model able to capture the response of interacting adhesive elastic-perfectly plastic particles under a variety of loadings is presented. The contact model is valid through each of the three major contact regimes: elastic, fully-plastic, and bulk elastic—all with and without adhesion. &#13;
&#13;
In the elastic through fully-plastic contact regimes the model is built upon the Method of Dimensionality Reduction, which allows the problem of a 3D axisymmetric contact to be mapped to a semi-equivalent 1D problem of a rigid indenter penetrating a bed of independent Hookean springs. Plasticity is accounted for by continuously varying the 1D indenter profile subject to a constraint on the contact pressure. Unloading falls out naturally, and simply requires lifting the 1D indenter out of the springs and tracking the force. Notably, by accounting for the incompressible nature of this plastic deformation, the contact model is able to detect and evolve secondary contacts caused by outward displacement of the free surface with good precision. JKR type adhesion is recovered seamlessly by simply allowing the springs to ‘stick’ to the 1D indenter’s surface. &#13;
&#13;
To complete the contact model an additional treatment for the bulk elastic contact regime, characterized by a rapid stiffening in the force-displacement curve, is proposed. A simple formulation is presented for an additional bulk elastic force related to the particle's mean surface displacement, contact areas, particle volume, and bulk modulus. A novel criterion for triggering this force (i.e. detecting the bulk elastic regime) related to the remaining free surface area of the particle is also given. This bulk elastic force is then superimposed with the force response given by the Method of Dimensionality Reduction to achieve a contact model capable of capturing a variety of complex loadings. In this way, the methodology for treating the bulk elastic regime presented here stands independent and could be appended to any contact model. &#13;
&#13;
Direct comparison of all elements of the contact model are made to finite element simulations revealing the accurate predictive capabilities of the contact model.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunities for System Dynamics Research in Operations Management for Public Policy</title>
<link href="https://hdl.handle.net/1721.1/152631" rel="alternate"/>
<author>
<name>Lopez, Jose</name>
</author>
<id>https://hdl.handle.net/1721.1/152631</id>
<updated>2023-11-03T03:28:08Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Opportunities for System Dynamics Research in Operations Management for Public Policy
Lopez, Jose
Operations management in the public policy context is extremely complex, with many mutually interacting factors characterized by feedback loops, delays, and nonlinearities, as well as multiple stakeholders pursuing divergent objectives. Prior researchers have called for a systems approach in these contexts, arguing that standard OM methodologies such as mathematical programming and queuing theory often cannot fully address these problems. In this work, we create a roadmap for researchers—both those who are familiar with system dynamics and those who are not—for the expanded use of system dynamics in studying public policy-related OM problems. We review and organize relevant system dynamics literature in both traditional operations management venues as well as public policy venues unfamiliar to OM audiences. We then identify, by topic, a set of interesting open questions and potential system dynamics building blocks for answering them. Leveraging this review, we describe under what conditions system dynamics is most appropriate. We then identify several overarching methodological and domain gaps for future research. Finally, we propose a process for using system dynamics with traditional operations management methodologies.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Ion Transfer Capillary Geometry on Sensitivity of a Desorption Electrospray Ionization and Mass Spectrometry System</title>
<link href="https://hdl.handle.net/1721.1/152612" rel="alternate"/>
<author>
<name>Vinakollu, Nagashumrith Venkata</name>
</author>
<id>https://hdl.handle.net/1721.1/152612</id>
<updated>2023-11-01T04:08:19Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">Evaluation of Ion Transfer Capillary Geometry on Sensitivity of a Desorption Electrospray Ionization and Mass Spectrometry System
Vinakollu, Nagashumrith Venkata
This research examines the effect of ion transfer capillary geometry on the sensitivity of the Desorption Electrospray Ionization – Mass Spectrometry (DESI-MS) process. Previous work has shown that heating the ion transfer capillary to 450 °C will improve the resolution of images taken by the DESI-MS owing to increased ion desolvation. This thesis studies how changing the inner diameter and cross-sectional profile of the capillary can improve ion desolvation by increasing heat transfer to the center of the flow. Increasing heat transfer efficiency can obviate such high temperatures and will increase flexibility in the design of the capillary heater.&#13;
&#13;
The setup of this experiment involved making modifications to existing components to allow for rapid testing of many ion transfer capillary geometries. Mass spectrum data for sample sections of pig liver were collected, as these biological samples are acceptably homogeneous with a known mass-to-charge (m/z) ratio of 885. Signal intensity within the 880–890 m/z range is analyzed to reveal the impact of capillary geometry. Sources of variation, such as within-sample variation and sample-to-sample variation, are characterized to reveal the true impact of the variables.&#13;
&#13;
The results show that decreasing the maximum particle distance from a wall can increase the sensitivity of the ion flow to heating. The best capillary cross-section provides nearly a 4x increase in sensitivity when compared to a circular capillary with a similar flow area. Pursuing these capillary designs will improve not only sample resolution and imaging time but also client satisfaction with the Waters DESI-MS system.
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An empirical test of the Modigliani-Miller model of market valuation of growth firms</title>
<link href="https://hdl.handle.net/1721.1/152607" rel="alternate"/>
<author>
<name>Lewis, William Stewart.</name>
</author>
<id>https://hdl.handle.net/1721.1/152607</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">An empirical test of the Modigliani-Miller model of market valuation of growth firms
Lewis, William Stewart.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1964; Appendix contains numerous pamphlets.; Includes bibliographical references (leaf 76).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation and comparison of a permanent magnet DC brushless motor, induction motor, and variable reluctance motor</title>
<link href="https://hdl.handle.net/1721.1/152604" rel="alternate"/>
<author>
<name>Grunden, Joanne B.
            (Joanne Barbara)</name>
</author>
<id>https://hdl.handle.net/1721.1/152604</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Simulation and comparison of a permanent magnet DC brushless motor, induction motor, and variable reluctance motor
Grunden, Joanne B.
            (Joanne Barbara)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Title as it appears in the M.I.T. Graduate List, June 1992: Modeling, simulation, and comparison of a permanent magnet DC brushless motor, induction motor, and variable reluctance motor.; Includes bibliographical references (leaf 188).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation on elastic anisotropy on x-ray stress measurement.</title>
<link href="https://hdl.handle.net/1721.1/152600" rel="alternate"/>
<author>
<name>Li, Fook-Kow.</name>
</author>
<id>https://hdl.handle.net/1721.1/152600</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Investigation on elastic anisotropy on x-ray stress measurement.
Li, Fook-Kow.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1945; Bibliography: leaves 78-80.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation on constant-pressure combustion turbine cycles with water injection.</title>
<link href="https://hdl.handle.net/1721.1/152599" rel="alternate"/>
<author>
<name>Hu, Hesheng,
            1928-</name>
</author>
<id>https://hdl.handle.net/1721.1/152599</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">An investigation on constant-pressure combustion turbine cycles with water injection.
Hu, Hesheng,
            1928-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1945; Bibliography: leaf 13.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Copolymerization of styrene and ethyl maleate.</title>
<link href="https://hdl.handle.net/1721.1/152598" rel="alternate"/>
<author>
<name>Leff, Miriam W.</name>
</author>
<id>https://hdl.handle.net/1721.1/152598</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Copolymerization of styrene and ethyl maleate.
Leff, Miriam W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1945; Bibliography: leaf 27.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Action of inorganic dehydrating agents on ethyl lactate.</title>
<link href="https://hdl.handle.net/1721.1/152597" rel="alternate"/>
<author>
<name>Hidalgo, Fausto Gaston.</name>
</author>
<id>https://hdl.handle.net/1721.1/152597</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Action of inorganic dehydrating agents on ethyl lactate.
Hidalgo, Fausto Gaston.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1945; Bibliography: leaves 32-33.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A theoretical investigation of the excitation of interstellar formaldehyde.</title>
<link href="https://hdl.handle.net/1721.1/152593" rel="alternate"/>
<author>
<name>Halket, Thomas Daniel.</name>
</author>
<id>https://hdl.handle.net/1721.1/152593</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">A theoretical investigation of the excitation of interstellar formaldehyde.
Halket, Thomas Daniel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: leaves 62-63.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of motivational patterns of managers and technical professionals in a manufacturing and development installation</title>
<link href="https://hdl.handle.net/1721.1/152588" rel="alternate"/>
<author>
<name>Rogers, James L.</name>
</author>
<id>https://hdl.handle.net/1721.1/152588</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">An analysis of motivational patterns of managers and technical professionals in a manufacturing and development installation
Rogers, James L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1986; Bibliography: leaves 117-119.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for More Equitable Neighborhood Adaptation: Climate Resiliency and Public Space Planning in U.S. Border Colonias</title>
<link href="https://hdl.handle.net/1721.1/152510" rel="alternate"/>
<author>
<name>Strech, Mikaela</name>
</author>
<id>https://hdl.handle.net/1721.1/152510</id>
<updated>2023-10-19T03:40:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design for More Equitable Neighborhood Adaptation: Climate Resiliency and Public Space Planning in U.S. Border Colonias
Strech, Mikaela
The relationship between environmental harms and the political and economic marginalization of communities cannot be easily disentangled in today’s world. Consequently, this thesis reexamines the relationships between planners, designers, and communities in response to the environmental challenges that marginalized communities face. I advocate beginning with incremental design advancements in adaptation, using community organization and a site-and-services approach as a way of contending with resource constraints and urgent issues. Acknowledging that this design work simultaneously enhances social resiliency, I argue that the timeliness of this approach promotes resilience.&#13;
&#13;
The research analyzes design and planning strategies for neighborhood scale environmental design, drawing from case studies in Puerto Rico, Detroit, Nairobi, and Texas. These insights inform conceptual framework plans in three neighborhoods to test what an incremental, nature-based approach to environmental hazards might accomplish, and how. This thesis has a specific focus on US border colonias in Texas, where flooding and disparities in adaptation and recovery resources are especially relevant. Considering the projected growth of fringe neighborhoods across the United States, this study contributes to the dialogue on equitable resilience.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Awarding Equitably: a process design framework for city grantmakers</title>
<link href="https://hdl.handle.net/1721.1/152509" rel="alternate"/>
<author>
<name>Kalish, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/152509</id>
<updated>2023-10-19T03:05:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Awarding Equitably: a process design framework for city grantmakers
Kalish, Sarah
Internal processes such as hiring, procurement, and grantmaking are the hidden engine that powers the delivery of local government services. My research begins with a case study on designing the City of Boston’s organization-wide grantmaking process to standardize procedures. This effort became a priority due to the influx of ARPA funding, among other drivers related to digital transformation and a new mayoral administration. Through interviews with grants program managers, I documented the steps in the grants process and codified shared best practices in a grants process user guide. This initial exercise was a mechanical one, which was limited, as other considerations and values, namely equity, were integral to the work but only implicitly embedded in grantmaking.&#13;
&#13;
In my research to develop a more holistic process design framework, I discovered a gap in the literature on internally focused process design in public sector organizations. The process improvement discipline comes closest, but still lacks a systematic discussion of factors that influence process, including values, structures, norms, practices, and politics. In identifying these influences, I construct a framework that serves as an actionable toolkit for practitioners across government settings. I define five influences: philosophical values, organizational structures, cultural norms, operational practices, and political forces. For each, I outline definitions, principles, guiding questions, and complementary exercises. Then I apply the framework to analyze the Community Preservation Act (CPA), a Massachusetts-wide municipal grant program.&#13;
&#13;
There are further opportunities to apply the “five influences” framework to other internal processes across organizational contexts in public, private, and nonprofit sectors. Most importantly, the framework application must be user-friendly and actionable, and thoughtfully integrated into internal operations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power and Control in Disinvested Affordable Housing: San Francisco’s Limited Equity Housing Co-operatives</title>
<link href="https://hdl.handle.net/1721.1/152508" rel="alternate"/>
<author>
<name>Cohen, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/152508</id>
<updated>2023-10-19T03:57:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Power and Control in Disinvested Affordable Housing: San Francisco’s Limited Equity Housing Co-operatives
Cohen, Dylan
The promise of the co-operative housing typology extends beyond providing stable, affordable housing. Co-operatives strive to offer a resident-centered site of democratic participation, where ownership and limited equity combine to provide both collective and shareholder ownership of a valuable community asset. Contentiously, local governments and civic institutions seek certainty and control in housing, prioritizing technical expertise and institutional relationships over deeper investment in resident-owner capacity. Affordable housing practitioners face complex and politicized projects, where co-op health is often threatened by mistrust, institutional failures, and funding scarcity. &#13;
&#13;
In San Francisco, more than 2,000 limited equity housing co-operative units constitute a significant portion of the city’s legacy 1960s and 70s federally-funded housing stock. Co-ops routinely fall into crisis, where residents rely on dysfunctional boards, ill-suited housing management companies, and insufficient government support for their survival. Numerous co-ops face critical survival questions, including deferred maintenance and disrepair, potential redevelopment, political instability, and waning institutional support.&#13;
&#13;
This client-linked thesis delves into the landscape of one local government's relationship with its co-operative housing ecosystem. Through dozens of interviews, a literature review, policy analysis, and several case studies of existing co-ops, this thesis elucidates present-day challenges and findings, and by discussing peer-city case studies of Vancouver, Canada, and Washington, D.C., proposes viable solutions charting a path forward.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proposal for New Commuter Rail Service and TOD Master Plan Along Guangzhou-Shenzhen Railway</title>
<link href="https://hdl.handle.net/1721.1/152507" rel="alternate"/>
<author>
<name>Pan, Yingu</name>
</author>
<id>https://hdl.handle.net/1721.1/152507</id>
<updated>2023-10-19T03:32:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Proposal for New Commuter Rail Service and TOD Master Plan Along Guangzhou-Shenzhen Railway
Pan, Yingu
The Guangzhou-Shenzhen Suburban Railroad Project seeks to transform regional transportation and urban development within the Greater Bay Area by introducing suburban rail services on the Guangshen Railway (GSR). This proposal outlines a framework centered on investment sectors, innovative methods, public-private collaborations, and public welfare initiatives, all bolstered by the creation of a Joint Development Company. The report highlights the importance of a people-first approach, station-city integration, and transit-oriented development to deliver a sustainable rail service that positively impacts local communities, businesses, and the environment. Through an analysis of current infrastructure, regional connectivity, and accessibility gaps, the proposal suggests strategies for rejuvenating the GSR and promoting economic integration. Featuring three case studies that demonstrate the current application of city-station integration and transit-oriented design principles in the region, the paper also provides three station redevelopment proposals complete with comprehensive master plans and urban design schemes that aim to offer insight and an evaluation framework for future research. This thesis contributes to future research and policymaking by establishing a robust foundation for the sustainable development and integration of cities with the suburban rail network in the Greater Bay Area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Memorable, Legible, and Accessible Cities: Co-Stewarding Historic Preservation and Public Transportation Agendas in Boston and Hong Kong</title>
<link href="https://hdl.handle.net/1721.1/152506" rel="alternate"/>
<author>
<name>Hasenfratz 柳相宜, Shannon L. X.</name>
</author>
<id>https://hdl.handle.net/1721.1/152506</id>
<updated>2023-10-19T03:56:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Memorable, Legible, and Accessible Cities: Co-Stewarding Historic Preservation and Public Transportation Agendas in Boston and Hong Kong
Hasenfratz 柳相宜, Shannon L. X.
This thesis seeks to understand how planners, designers, and policymakers can identify and leverage shared goals between historic preservation and public transit planning to support a memorable, legible, and accessible public realm. Preservation and transportation agendas are often described as inherently opposed to one another, and are generally administered through separate bureaucracies. Rather than being in opposition, I argue that the goals of preservation and transit accessibility are well aligned through a shared commitment to serving the public interest and fostering sustainable development. I explore this alignment by analyzing how two coastal cities, Boston and Hong Kong, have accommodated transit needs alongside the cultural legacy of their built environments—resulting in positive and negative impacts on achieving sustainable development goals. Insights from Hong Kong and Boston neighborhoods, gleaned through interviews, on-site observations, and mapping exercises, inform a set of opportunities for better fostering the synergies between historic preservation and transit planning. These recommendations, organized around opportunities for collaborative governance structures and processes, seek to improve the usability and enjoyment of public transit systems and historic sites to create memorable, legible, and accessible cities for the long term.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Story of Rubina: Lessons on Self-governance in Peruvian informal settlements and Considerations for Community Land Trusts</title>
<link href="https://hdl.handle.net/1721.1/152505" rel="alternate"/>
<author>
<name>Vila Skrzypek, Flavio</name>
</author>
<id>https://hdl.handle.net/1721.1/152505</id>
<updated>2023-10-19T03:52:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Story of Rubina: Lessons on Self-governance in Peruvian informal settlements and Considerations for Community Land Trusts
Vila Skrzypek, Flavio
Since the 1990s, the Peruvian government has introduced two policies to address informal settlements' property and housing challenges: the formalization titling policy and the certificate of possession policy. Both have caused adverse side effects: land speculation and land trafficking, respectively. This thesis studies the failure of these past policies and proposes that a new property regime - Community Land Trusts (CLTs) - might be the optimal way to address these property and housing challenges. First, I study why previous property policies failed to intervene in urban informality. Second, I conduct interviews to gather evidence on the self-governance of an informal settlement in Lima and compare it with the core components of different global CLT theories and models. Finally, I intersect both sections to learn about the potential and challenges of establishing a CLT in such an informal settlement. The implications of this thesis are a set of recommendations and areas of additional research that the Peruvian government should consider when regulating CLTs in Peru.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic Metamaterials at the Microscale</title>
<link href="https://hdl.handle.net/1721.1/152502" rel="alternate"/>
<author>
<name>Sun, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/152502</id>
<updated>2023-10-19T03:04:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Acoustic Metamaterials at the Microscale
Sun, Rachel
Micro-architected materials allow for tunability of extreme static mechanical properties such as stiffness, Poisson’s ratio, or strength. However, dynamic and acoustic properties of micro-architected materials remain largely unexplored, partially because it is challenging to measure their response at these scales. Dispersion resulting from Bragg scattering occurs at wavelengths which are dictated by the characteristic dimensions of the metamaterials, while local resonance remains wavelength-independent. Therefore, micro-architected materials have the potential to allow control of mechanical waves both at high (MHz) and medium-range (kHz) frequencies. &#13;
&#13;
Here, we design, fabricate, and characterize micro-architected materials with tunable mechanical and acoustic properties in the megahertz regime. Using a two-photon lithography prototyping method, we explore the response of a class of architected material morphologies with varied mass distribution, features down to ~1.5 µm, and unit cell sizes of 15 µm. We demonstrate that decoupling mass and stiffness by strategically placing micro-inertia affects the effective stiffness scaling of this class of acoustic metamaterials at the microscale. We present novel measurement techniques for wave velocity of three-dimensional architected materials that employ laser-ultrasonic principles, demonstrating a tunable range of wave velocities around 1000 m/s for different designs in a wide range of relative densities. We then validate their acoustic response numerically with Bloch wave analysis to determine their dispersion relation and rod-wave velocities. Our results provide a baseline to map the tunable acoustic metamaterial design space at the microscale and megahertz regime. These materials could have important implications in acoustic devices in microelectromechanical systems, biomedical imaging, and microscale waveguides.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tabi. Tabbi. Tabique. Tabby.</title>
<link href="https://hdl.handle.net/1721.1/152501" rel="alternate"/>
<author>
<name>Idowu, Jola</name>
</author>
<id>https://hdl.handle.net/1721.1/152501</id>
<updated>2023-10-19T03:11:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Tabi. Tabbi. Tabique. Tabby.
Idowu, Jola
The uniqueness of tabby is based in its process of collecting local and accessible materials to produce concrete through a non-measured/estimated process. The origin of tabby as either the North African tabbi or the Spanish tapia has long been debated, and no conclusive evidence exists that points toward either location. This clouded absence of origin displays the character of tabbi’s history as both rooted and rootless; fluid and based in an intercultural exchange that removes tabby from the confinements of borders and a linear timeline of a beginning, middle, or end. In tabby’s move to the Western Hemisphere, its existence is blurred across socio-cultural divides as a symbol of militaristic power, the plantation economy, and the homes of the slaves who built both. In the United States, tabby was composed of oyster shells sourced from Native American middens: the remnants and discarded materials collected by Native Americans years prior, holding a record of indigenous practices and colonial erasure. The introduction of Portland cement and the end of slavery completely changed the prevalence of tabby, which had relied on free labor for the time-intensive process of burning and collecting oyster shells.&#13;
&#13;
However, despite its importance in American building culture, tabby is a material that has faded historically and materially. If one were to happen across a tabby structure today, its former marble-like finish would most likely suffer from deterioration due to weather damage and neglect, and the broken walls and floors would reveal the oyster shells beneath. In response, tabby structures across the country are undergoing many different types of preservationist practices, whether archaeological digs and recordkeeping, the physical preservation of tabby structures, or the continued use of oysters as a construction material in the American South. &#13;
&#13;
This project proposes a new approach to tabby preservation based on its connection to reuse and its subversion of cycles of capital by the enslaved and indigenous peoples associated with its labor. By archiving everyday practices involving oysters and tabby, I hope to rethink how we orient larger tactics of environmental and material resilience towards the stories and labor of marginalized peoples. In this context, material preservation becomes both a social and physical endeavor through the context of the American South and the shore becomes a place where processes of land, water, and people meet.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case Study: LIHTC-to-Condo Conversion</title>
<link href="https://hdl.handle.net/1721.1/152499" rel="alternate"/>
<author>
<name>Glasgow, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/152499</id>
<updated>2023-10-19T03:01:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Case Study: LIHTC-to-Condo Conversion
Glasgow, Rebecca
By the end of the decade, approximately half of Low-Income Housing Tax Credit (LIHTC)-funded housing units are anticipated to reach the end of their affordability restrictions. This thesis examines the potential benefits and challenges associated with the transformation of LIHTC rental units into homeownership condominium units through an in-depth case study of Quality Hill Phase IIB, a LIHTC Rental-to-Affordable Condominium project based in Kansas City, Missouri. The case study identifies key regulatory and financial factors that contributed to the model’s initial success. Most significant was the legal theory that the Internal Revenue Service (IRS) has no jurisdiction after the 15-year compliance period and that sole jurisdiction lies with the State Housing Finance Agency (SHFA). This predicate was the basis for a private letter ruling granted by the IRS, with participation from the SHFA, that gave a LIHTC tenant the right of first refusal to buy his or her unit as part of the condominium homeownership plan after year 15 of the compliance period. Despite the model’s initial success, the project grappled with substantial obstacles related to the 2008-2012 financial crisis, the recapitalization of the capital partner, a lack of end loan financing, and tenant eligibility issues that led to its eventual downfall. Even so, LIHTC-to-condominium conversions hold potential as a strategy for creating affordable homeownership options. The case study provides lessons learned and tools to be applied in a future condominium attempt, including the use of tax code sections 108 and 183 and IRS Revenue Procedure 2014-12 to address the feasibility of the model, as well as securing mortgage financing from alternative lending institutions that can better accommodate low-income tenants. In conclusion, this research broadens the academic dialogue on rent-to-own models. 
By highlighting the primary challenges associated with this approach and offering practical insights, this thesis hopes to provide a valuable resource for stakeholders considering LIHTC for affordable homeownership solutions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Kids Table: A Report Conceptualizing Youth Empowerment and Food Planning Methods Through the Case Study of the Mattapan Food and Fitness Coalition</title>
<link href="https://hdl.handle.net/1721.1/152498" rel="alternate"/>
<author>
<name>Fall, Moctar N.</name>
</author>
<id>https://hdl.handle.net/1721.1/152498</id>
<updated>2023-10-19T03:45:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Kids Table: A Report Conceptualizing Youth Empowerment and Food Planning Methods Through the Case Study of the Mattapan Food and Fitness Coalition
Fall, Moctar N.
There is an ageless saying directed towards youth (young adults aged 14-19) that continues to define and dictate their lives: youth are our future. Yet many governmental and planning institutions overlook the prospect of integrating the voices of youth, particularly youth of color, within decision-making processes that directly affect them and their communities. Youth should have the power to make key decisions around food security in their lived environments. In this thesis, I reveal the potential impacts youth can have when given adequate support and resources at the planning level, through food systems and food planning.&#13;
&#13;
Building on my former thesis, existing research, case studies, historical analyses, and data from my client partner, the Mattapan Food and Fitness Coalition (MFFC), this thesis: 1. Delves into the history of youth rights and engagement in the United States; 2. Brings to the forefront the tools of food through the analysis of food planning and its empowering attributes in the community; 3. Shows the impact youth have had on their respective community foodscapes with a primary focus on Mattapan and the MFFC; 4. Builds a framework on the crossroads of food planning, youth empowerment and community decision making; and 5. Calls to action institutions of governance and higher education to not only involve youth within urban food system decision making models and designs, but to also support youth and food organizations aimed at improving the landscape and lived environments of their communities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disaster Diplomacy: The spatial impact of international reconstruction aid in the aftermath of the 2015 Gorkha earthquake in Nepal</title>
<link href="https://hdl.handle.net/1721.1/152497" rel="alternate"/>
<author>
<name>Karmakar, Ipshita</name>
</author>
<id>https://hdl.handle.net/1721.1/152497</id>
<updated>2023-10-19T03:58:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Disaster Diplomacy: The spatial impact of international reconstruction aid in the aftermath of the 2015 Gorkha earthquake in Nepal
Karmakar, Ipshita
This thesis aims to investigate the spatial implications of international reconstruction aid in the aftermath of the 2015 earthquake of Nepal, particularly in the urban municipality of Lalitpur.&#13;
&#13;
I explore how emergency reconstruction aid, operationalized as support from international NGOs, bilateral agencies and multilateral organizations, has a spatial impact and imprint on cities. Particularly, I examine the impact of the aid community on the rent, land values, and infrastructural/amenity distribution within the wards of their operation. Second, I examine the impact of post-earthquake reconstruction projects leveraging international funding on urbanization patterns in the wards in which they are situated. To understand counterfactual trends, I examine the overall patterns of neighborhood externalities in earthquake affected wards of Lalitpur where no international aid funded projects or aid personnel are located.&#13;
&#13;
The argument advanced includes two suppositions that decipher the spatial implications of aid organizations’ operational presence and project presence: 1) The increasing spatial clustering of the physical outposts of international aid organizations’ headquarters, i.e. what I call here their operational presence, creates negative neighborhood externalities and change that privileges the rentier class rather than distributing housing, amenities, and infrastructure equitably across the city; 2) The presence of internationally aid-funded reconstruction projects, i.e. their project presence, changes both the amenities and the small-business distribution within the wards in which they are situated, creating neighborhood change that accelerates inequity, but in ways unlike those of operational presence. I find that two wards within Lalitpur, Ward no. 2 and Ward no. 16, show significant negative neighborhood externalities and change due to the presence of international reconstruction aid, as opposed to the rest of the municipality. &#13;
&#13;
Particularly, these wards saw an exponential increase in rent and housing values (in the case of Ward no.2), a change in the nature and function of locally owned small businesses, and a tendency to cater to a rentier class that comprises international aid workers and tourists, as opposed to the rest of the municipality (both Ward no.2 and Ward no.16).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multifamily Affordable Housing Energy Retrofit Strategy for Richmond, CA</title>
<link href="https://hdl.handle.net/1721.1/152496" rel="alternate"/>
<author>
<name>Gowda, Shivali P.</name>
</author>
<id>https://hdl.handle.net/1721.1/152496</id>
<updated>2023-10-19T03:07:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Multifamily Affordable Housing Energy Retrofit Strategy for Richmond, CA
Gowda, Shivali P.
Weatherization, energy efficiency, and electrification upgrades, which combined can be called energy retrofits, can reduce energy burden, provide health improvements through improved indoor air quality and increased comfort in the home, and reduce greenhouse gas emissions. This study explores how the City of Richmond, CA can incentivize weatherization, energy efficiency, and electrification upgrades as well as solar installation in multifamily affordable housing developments to provide these benefits to low-income residents in the City. Through interviews with energy program administrators, affordable housing providers, community-based organizations, and government agencies, this study identifies the key motivations, opportunities, and challenges of completing multifamily affordable housing energy retrofits in Richmond, CA. In addition, a comprehensive review of existing and upcoming federal, state, and local energy retrofit funding and resources was completed. Based on building permit data and building-level survey data on utility payment structures and appliance fuel sources, existing affordable housing developments in Richmond that are good candidates for electrification and solar installation were identified. Utilizing the interview findings, literature review, funding information, and building stock analysis, recommendations were created for the City of Richmond on short-, medium-, and long-term programs that could be implemented to increase multifamily affordable housing energy retrofits, with staff capacity, funding requirements, and implementation timeline information included.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Olympic Challenge: Designing Equity into Mega-Events</title>
<link href="https://hdl.handle.net/1721.1/152495" rel="alternate"/>
<author>
<name>Velasquez-Soto, Sharon Jacqueline</name>
</author>
<id>https://hdl.handle.net/1721.1/152495</id>
<updated>2023-10-19T03:03:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Olympic Challenge: Designing Equity into Mega-Events
Velasquez-Soto, Sharon Jacqueline
In 2028, the City of Los Angeles will host the Olympic Games for the third time since the start of the last century; the first and second times being 1932 and 1984. While hosting the Olympics is regarded as a high honor with the potential to bring about significant and lasting benefits, it also presents challenges to the host municipality. Studies of mega-events like the Olympic Games cite place-based challenges such as displacement, gentrification, environmental damage, and lost opportunities to advance equitable development that outlasts the Olympics’ duration. One driver of these place-based challenges - and a manifestation of how communities of color have been left behind during mega-event planning - is the inequitable allocation of opportunities to build wealth, such as through diverse contracting. As such, more explicitly just contracting processes have been identified as one of many avenues that can help address the entrenched racial wealth gap in the United States and better forward equitable economic development through mega-event-induced business.  &#13;
&#13;
This thesis investigates the potential processes and operations entailed in operationalizing equity through diverse procurement as Los Angeles prepares for the 2028 Olympics. Interviews with leaders of the small business community in Los Angeles, a former leader of Exposition Park (one site of the 2028 Olympics), the 2028 Los Angeles Olympic and Paralympic Organizing Committee, and City of Los Angeles employees confirm that procurement is a major opportunity to forward equity during the 2028 Games. In part, this is because the 2028 Games are billed as a “no-build Olympics,” meaning that new development construction will largely not be needed because Los Angeles already has a wealth of infrastructure. &#13;
&#13;
Borrowing the language of hazard mitigation from environmental planning and a framework for operationalizing equity in planning, this thesis evaluates the potential of diverse procurement and contracting in mega-events as a tool to minimize the known vulnerabilities of hosting mega-events like the Olympic Games, particularly for traditionally marginalized communities. The thesis leverages a prime, though imperfect, example of a more inclusive procurement program from the 1996 Olympics in Atlanta to explore lessons learned about diverse procurement and contracting in that city. It concludes with an analysis of what a transfer of Atlanta’s best practices could look like in Los Angeles in pursuit of more equitable economic development and what is here termed “economic hazard mitigation” in the planning of mega-events in cities with histories of inequitable urban development.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing (Up)Zoning for Affordability: A Seattle Case Study</title>
<link href="https://hdl.handle.net/1721.1/152494" rel="alternate"/>
<author>
<name>Cameron, Nicholette Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/152494</id>
<updated>2023-10-19T03:43:18Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Implementing (Up)Zoning for Affordability: A Seattle Case Study
Cameron, Nicholette Paige
Today, U.S. cities are continuing to grapple with housing shortages and the affordability crisis. In the United States, as of 2020, 30% of households were cost-burdened and 14% were severely cost-burdened, paying more than 30% and 50% of their incomes on housing, respectively. One way in which cities are attempting to manage growth and affordability is with zoning changes. Cities can encourage new development and increase affordable housing options by loosening restrictions that allow for more density and tying affordability requirements to that new development capacity. This is also known as inclusionary upzoning.&#13;
&#13;
This thesis documents the case study of Seattle’s inclusionary upzoning policy, providing just one example of how cities are using zoning reform as a tool to address the affordability crisis. The case is presented as two components: Policy and Practice. The Policy section provides an overview of the policy from ideation to implementation. First, it describes the steps taken to implement both the upzone and the Mandatory Housing Affordability policy, including what buy-in was needed from various stakeholders. Second, it outlines how Seattle’s upzone and Mandatory Housing Affordability changed existing policy and whether those changes impacted all neighborhoods equally. Lastly, it summarizes what the policy has accomplished so far.&#13;
&#13;
The Practice section provides one example of how a developer has responded to the upzone. I chose this developer because they are utilizing a unique, community-based model instead of the traditional purchase-to-redevelop business model, which allowed me to explore how the developer is supporting current residents and the community, and what challenges the developer and the community are experiencing as they navigate the upzone and MHA policy.&#13;
&#13;
The thesis concludes with a set of recommendations that Seattle and other municipalities should consider when implementing [up]zoning reform for affordability, including implementing upzones citywide and changing the perspective of the role of communities in the development process.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Vibration Suppression for Wafer Transfer Systems in Semiconductor Fabrication Plants</title>
<link href="https://hdl.handle.net/1721.1/152492" rel="alternate"/>
<author>
<name>Qiu, Jiajie</name>
</author>
<id>https://hdl.handle.net/1721.1/152492</id>
<updated>2023-10-19T03:28:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Active Vibration Suppression for Wafer Transfer Systems in Semiconductor Fabrication Plants
Qiu, Jiajie
Vibration suppression is critical in precision mechatronic systems for nanofabrication. In semiconductor plants, automated wafer handling is performed by Overhead Hoist Transport (OHT) vehicles that transport wafers in front opening unified pods (FOUPs). When wafers are transported in FOUPs, semiconductor chips are at risk of damage from small particles excited by mechanical vibration, especially if such particles land on the critical area of the wafers. To minimize the vibration excitation force transferred to the FOUP, this thesis focuses on active suppression of FOUP vibrations to improve production yield. Two primary challenges make this problem difficult. First, the OHT vehicle and the FOUP keep traveling, so the target system is floating, with no external anchoring point to serve as a momentum source for control efforts. Second, no sensor attachment is permitted on mass-production FOUPs, which makes feedback control more challenging in the absence of direct measurement. To address these challenges and reduce FOUP acceleration peaks, an inertia-based counterbalancing system is developed. To validate this system, a customized testbed is built to replicate the acceleration profile of the OHT vehicle in both the travel and lateral axes. Additionally, an active vibration suppression system is designed to generate a controllable force on the hand unit. System modeling and identification are conducted in simulation and experiment to identify the system dynamics. Finally, a Disturbance Observer-Based Controller (DOBC) is developed and implemented on the hardware. The experimental results show that the DOBC achieves a 38 percent reduction of OHT hand unit vibration and a 42 percent reduction of FOUP vibration in the OHT travel direction. Furthermore, the proposed method successfully reduces the multi-axis FOUP-level acceleration peaks, further confirming its effectiveness.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collaboration in Unlikely Spaces: The Characteristics and Promise of Successful Collaboration Among Affordable Housing and Environmental Conservation Proponents</title>
<link href="https://hdl.handle.net/1721.1/152491" rel="alternate"/>
<author>
<name>Fullem, Abby K.</name>
</author>
<id>https://hdl.handle.net/1721.1/152491</id>
<updated>2023-10-19T03:34:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Collaboration in Unlikely Spaces: The Characteristics and Promise of Successful Collaboration Among Affordable Housing and Environmental Conservation Proponents
Fullem, Abby K.
Available land is decreasing while priorities competing for its use multiply. Land value appreciation and the effects of climate change reduce the amount of viable land at affordable prices. Sectors and stakeholders with contending interests in land parcels have a choice: they can contest the other, ignore the other and try to maximize their own interests, or collaborate to maximize both of their interests on that land.&#13;
&#13;
Two sectors that face this choice are affordable housing developer non-profits and conservation land trust non-profits. Both are land-based, in need of inexpensive land, and struggling to achieve their missions alone. Collaboration, I suggest, is the preferred route for these sectors to take in the face of increasing competition, as it allows each sector to simultaneously advance their own interests by leveraging the other sector’s strategies and tools, and form a more powerful political coalition to further their shared interests.&#13;
&#13;
I describe and analyze an action research case study I conducted on a cross-sectoral collaboration in the Hudson Valley of New York State. The Hudson Valley Affordable Housing and Conservation Strategy (HVAHCS) comprises ten affordable housing and conservation land trust non-profits that are choosing to collaborate in the face of increasing competition. Through a review of consensus building, network building, and collective impact theories, as well as interviews and my experience as a member of the HVAHCS facilitation team, I look at what enables their cross-sectoral collaboration and how they approach obstacles to it. I conclude with recommendations for other groups considering collaboration as a means to advance their individual and shared interests in the same physical space.&#13;
&#13;
Learnings from this action research case study point to the importance of employing an interests-based approach, allowing ideas and priorities to emerge from the network of organizations, balancing capacity and diffused leadership within the collaborative, using a third-party facilitator, prioritizing relationship-building, building a shared understanding, and supporting the organizations within the collaborative.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Repetitive Flooding in Riverine Towns: Understanding Responses, Barriers, and Challenges for the Future</title>
<link href="https://hdl.handle.net/1721.1/152490" rel="alternate"/>
<author>
<name>Campbell, Shaler Rodney</name>
</author>
<id>https://hdl.handle.net/1721.1/152490</id>
<updated>2023-10-19T03:01:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Repetitive Flooding in Riverine Towns: Understanding Responses, Barriers, and Challenges for the Future
Campbell, Shaler Rodney
Climate change is predicted to increase the intensity of precipitation events and increase inland flooding in the United States in the coming decades (Allan et al., 2020; Easterling et al., 2017; Kerlin, 2019; Mallakpour &amp; Villarini, 2015). Unlike coastal communities, which have seen increased attention in the face of climate change, riverine communities have received far less attention (Jongman et al., 2012). This is despite a long history of repetitive riverine flooding and associated responses and barriers to flood mitigation. Important insights can be drawn from towns that have endured repetitive flooding and how they have responded. This thesis explores riverine towns with repetitive flooding, the similarities and differences in their flood responses and barriers to mitigation, similarities that can be deduced for other riverine towns, and how policies may be improved to better support them. To answer these questions, results were compared from semi-structured interviews and historical research from four case study towns in the United States: Harrisburg, Pennsylvania; Freeport, Illinois; Ellicott City, Maryland; and Athens Borough, Pennsylvania. Firstly, results showed several barriers to flood mitigation, including a lack of institutional capacity, challenges with regionalism, and insufficient federal flood mitigation assistance. Secondly, results showed that mitigating flood risk from multiple flood profiles, managed retreat, and structural flood mitigation solutions are proving successful for some riverine towns as flooding events increase in severity. Lastly, results showed that current federal programs do not fully support smaller riverine towns needing funding for flood mitigation, and that modifications to existing programs, as well as new programs, are necessary to address their unique circumstances. 
From a resource allocation perspective, this thesis highlights the need to devote more resources to riverine towns with repetitive flooding to help them mitigate the worst effects of flooding in the face of increasingly worse storm events due to climate change.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parameterizing transport maps for ensemble data assimilation</title>
<link href="https://hdl.handle.net/1721.1/152488" rel="alternate"/>
<author>
<name>Sharp, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/152488</id>
<updated>2023-10-19T03:43:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Parameterizing transport maps for ensemble data assimilation
Sharp, Daniel
This thesis discusses methods for Bayesian parameter estimation, particularly in the case of state space models (SSMs). We begin by reviewing established methods for filtering in SSMs, and by examining the graphical model structure of a parameterized SSM. Then we discuss established methods for estimating the parameters of such an SSM, making use of its graphical structure. Next we employ monotone triangular transport maps as a method of estimating conditional probability densities and performing conditional sampling, and relate these tasks to the original filtering problem. We provide some practical results and experiments for employing these maps for inference, particularly examining the map parameterization for this function approximation problem. Using these ingredients, we introduce and discuss an algorithm that uses transport to perform online inference of the static parameters of an SSM, and relate this algorithm to prior methods. Finally, we tie the problems of function approximation and static parameter inference together with numerical examples of transport for sequential inference. &#13;
&#13;
Most of the results in this thesis are powered by two software packages that were developed at length over the course of the thesis work: EnsembleFiltering.jl, written in Julia for performing automatically-differentiable ensemble-based filtering on the CPU and GPU; and MParT, written in C++ for evaluating and training monotone triangular transport maps.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Sort Marker Digitization in Sort Center Operations</title>
<link href="https://hdl.handle.net/1721.1/152487" rel="alternate"/>
<author>
<name>Arellano Martinez, Nayeli</name>
</author>
<id>https://hdl.handle.net/1721.1/152487</id>
<updated>2023-10-19T03:37:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Visual Sort Marker Digitization in Sort Center Operations
Arellano Martinez, Nayeli
Companies worldwide recognize the importance of sustainability and are looking for ways to incorporate sustainable practices into their operations. One route is reducing their carbon footprint, which can be done through various means, such as investing in renewable energy sources, implementing energy efficiency measures, and reducing the use of non-renewable resources. Another is being more responsible in the supply chain, for example by ensuring that materials and products are ethically and sustainably sourced.&#13;
&#13;
With the growing awareness of the environmental impact of businesses, consumers are increasingly looking for companies that prioritize sustainability. Amazon, the world's largest online retailer, has announced its commitment to becoming more sustainable. This decision addresses the pressing issue of climate change and reduces the company's environmental footprint. By committing to sustainable practices, Amazon can attract and retain customers who value environmentally friendly products and services. In addition, Amazon's investment in sustainable practices can lead to cost savings in the long run.&#13;
&#13;
One of the many sustainable strategies Amazon is working on is the Small Shipping Label (SSL). This initiative aims to reduce the shipping label size and has a potential entitlement of ~$1 billion per year. Smaller labels facilitate the use of smaller shipping boxes, which ultimately reduces the overall amount of packaging materials required. This reduction in packaging materials, in turn, contributes to a decrease in the carbon footprint associated with transportation. Smaller boxes translate into optimized truck space, since more packages can be shipped in a single trip. As a result, the number of trucks or planes required for delivery is reduced, cutting the associated fuel consumption and emissions. SSL implementation requires the removal of the physical Visual Sort Marker (VSM) from the package label. One of the critical manual processes in Middle-Mile operations (Sort Slide) currently relies on physical VSMs to inform sortation decision-making at the package level. Amazon is working towards removing physical VSMs while mitigating any risks to Throughput Per Hour (TPH) and Delivery Estimate Accuracy (DEA). Manual dependencies limit in-flight shipment replanning to handle events such as missorts, unpredictable weather conditions, truck breakdowns, etc. Eliminating the reliance on physical VSMs will make it possible to decrease packaging waste by allowing items to ship in packages smaller than the current 4x6 shipping label, bringing savings on packaging and transportation costs and aligning with The Climate Pledge.&#13;
&#13;
This thesis examines the operational challenges of implementing sustainable practices by assessing the trade-offs between sustainability and productivity. Its objective is to determine the effect of the proposed short-term solution for VSM removal on the Sort Center network, specifically on Sort Slide process capacity and utilization. The present analysis suggests that accepting a modestly degraded process rate may be a viable trade-off if it helps an organization achieve its sustainability goals and ensure the long-term viability of its financial growth.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impacts of Automated Buses on Travel Mode Preference for Different Income Groups and Density Areas</title>
<link href="https://hdl.handle.net/1721.1/152486" rel="alternate"/>
<author>
<name>Tang, Ziyi</name>
</author>
<id>https://hdl.handle.net/1721.1/152486</id>
<updated>2023-10-19T03:14:50Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Impacts of Automated Buses on Travel Mode Preference for Different Income Groups and Density Areas
Tang, Ziyi
Interest in promoting sustainable transportation continues to rise, and as a result, adopting more equitable, environmentally sustainable mobility is necessary. In the next decades, automated vehicles (AVs) could transform the transport system. Depending on how AVs are adopted, the impacts may differ. The existing literature has demonstrated that using automated buses (ABs) as part of public transit systems shows greater potential for mobility equity and sustainability than single-occupancy AVs. Despite the growing use of automation in the public transit industry, less interest has been given to research on and development of fixed-route ABs than to on-demand AVs.&#13;
&#13;
To fill this gap, this research focuses on fixed-route ABs and evaluates their impacts on transportation equity and sustainability for different income groups and density areas. The study analyzes travel surveys and then simulates the impact of ABs on travel-to-work mode choices. In particular, the research explores the mode preferences of residents in the Metro Boston Area based on the Massachusetts Travel Survey of 2011 and builds mode choice models that incorporate income strata differences. This study introduces ABs as a new mode with lower travel time (via higher frequency services and denser bus networks). The models simulate changes in mode choices in scenarios that provide different AB services to different income and density groups. Finally, this research evaluates scenarios according to four qualities: effectiveness, equity, sustainability, and health.&#13;
&#13;
The results show that the impacts of AB services vary across income and density groups. Providing AB services to low- and middle-income groups living in high- and middle-density areas, and on-demand small automated shuttles in high-density areas, might be the most balanced solution in terms of the four qualities. Overall, this research will support planning and policy decision-making to ensure that emerging AV technology leads to the most equitable and sustainable outcomes.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the Shared Mobility Market: Dissolving Market Segmentation and Understanding Market Friction</title>
<link href="https://hdl.handle.net/1721.1/152484" rel="alternate"/>
<author>
<name>Guo, Xiaotong</name>
</author>
<id>https://hdl.handle.net/1721.1/152484</id>
<updated>2023-10-19T03:17:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing the Shared Mobility Market: Dissolving Market Segmentation and Understanding Market Friction
Guo, Xiaotong
Over the past decade, ride-sharing companies, also known as Transportation Network Companies (TNCs), which provide on-demand transportation services for passengers, have been among the fastest-growing businesses worldwide. However, in the governance of the shared mobility market of a city or metropolitan area, two conflicting principles emerge: healthy competition between multiple platforms, such as Uber and Lyft in the United States, and economies of network scale, which lead to higher chances for trips to be matched and thus higher operational efficiency, but which also imply a monopoly. The current shared mobility markets, as observed in cities around the world, are either monopolistic or largely segmented by multiple platforms, the latter with significant efficiency loss. &#13;
&#13;
This thesis addresses the efficiency losses due to segmentation by proposing new market designs while preserving competition between platforms. We first propose a theoretical framework for describing shared mobility markets and then propose four market structure designs thereupon. The framework and four designs are first discussed as an abstract model, without loss of generality, and thus are not constrained to any specific city. High-level perspectives and detailed mechanisms for each proposed market structure are both examined. Then, to assess the real-world performance of these market structure designs, we use a ride-sharing simulator with real-world ride-hailing trip data from New York City. The proposed market designs can reduce total vehicle-miles traveled (VMT) by 6% while serving more customers with 8.4% fewer trips. Meanwhile, customers receive better service, with waiting times that are on average 5.4% shorter. &#13;
&#13;
On the other hand, platform drivers in the shared mobility market frequently switch between or work for multiple platforms, providing a natural way of dissolving the market segmentation. However, a recent survey distributed in Jakarta, Indonesia, found significant market frictions preventing platform drivers from multi-homing. In this thesis, we taxonomize and estimate perceived switching and multi-homing frictions on mobility platforms. Based on a structural model of driver labor supply, we estimate switching and multi-homing costs in a shared mobility market with a transportation network company duopoly, using public and limited high-level survey data. Estimated costs are sizeable, and reductions in multi-homing and switching costs significantly affect platform market shares and driver welfare. Driver labor supply elasticity with respect to platform wage is also discussed, considering both multi-homing and switching frictions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hong Kong Time: Rethinking sustainable mobility and the 15-minute city in the context of equity</title>
<link href="https://hdl.handle.net/1721.1/152483" rel="alternate"/>
<author>
<name>Wang, Elaine</name>
</author>
<id>https://hdl.handle.net/1721.1/152483</id>
<updated>2023-10-19T03:21:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hong Kong Time: Rethinking sustainable mobility and the 15-minute city in the context of equity
Wang, Elaine
Cities around the world are going car-free. With concepts like the 15-minute city, planners and policymakers are investing in more sustainable transit modes: walking, biking, and public transit. Though this shift is critical to reducing emissions, it raises important equity issues that need to be explored. How does the move toward sustainable mobility impact equity? How might it address existing inequality or create new sources of inequality? And how can we ensure an equitable shift to sustainable mobility? This thesis explores these questions, using Hong Kong as a case study. Using spatial analysis, it introduces a Sustainable Mobility Score that quantifies access to urban amenities via sustainable transport modes, like walking and public transit. It then analyzes the relationship between this scoring system and neighborhood income levels. The results show that walkability is linked to spatial segregation, but public transit serves as an equalizer across different neighborhoods. Finally, this thesis discusses the implications of these findings to inform an equitable shift to sustainable mobility.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Housing Supply under Stringent Energy-efficiency Regulations</title>
<link href="https://hdl.handle.net/1721.1/152482" rel="alternate"/>
<author>
<name>Muzio, Maria Jimena</name>
</author>
<id>https://hdl.handle.net/1721.1/152482</id>
<updated>2023-10-19T03:27:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Housing Supply under Stringent Energy-efficiency Regulations
Muzio, Maria Jimena
Massachusetts's commitment to a 50% emissions reduction by 2030 and net-zero emissions by 2050 is reflected in the Green Communities Act of 2008, which requires the adoption of the Stretch Energy Code for every municipality that is designated as a Green Community. This appendix to the base building code adds more stringent energy-efficiency requirements, such as including the HERS Index rating system in every new residential construction. Despite their obvious environmental benefits, more stringent energy-efficiency building regulations can also lead to increased construction costs and negatively impact housing production and affordability. In this study, I investigate the tension in the housing supply resulting from the adoption of the Stretch Energy Code by analyzing municipalities' staggered designation as Green Communities to identify the causal mechanisms behind quantity and price effects in the residential real estate market. The results indicate that more energy-efficient properties command a positive sales price premium and that the Stretch Code adoption is associated with a decrease in the housing quantity and an increase in the average housing prices.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Black Art Planning: Exhibition Manifesto</title>
<link href="https://hdl.handle.net/1721.1/152481" rel="alternate"/>
<author>
<name>Saint Hilaire, Romy</name>
</author>
<id>https://hdl.handle.net/1721.1/152481</id>
<updated>2023-10-19T03:33:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Black Art Planning: Exhibition Manifesto
Saint Hilaire, Romy
Black Art Planning: Exhibition Manifesto honors the many modes and forms of knowledge that inform Black artists acting as informal planners, designers, and urbanists working to harmonize spatial urban realities for marginalized communities. This is a focused introspection of Black liminal realities and how art is used as a tool to challenge, redress, and inform the healing of vulnerable communities in the United States. This thesis takes the form of an exhibit showcasing a series of manifesto posters highlighting the key elements of a Black Art Planning framework, accompanied by a short film capturing the essence of what has informed this thinking through travel and research in Saint Martin and South Africa. This thesis intends to combine an academic and practice-informed approach to synthesize the phenomena of Black artists and creative collectives cultivating planning solutions through an arts practice in cities across the US and abroad. In highlighting an approach that is intersectional in both the planning field and the art sector, Black Art Planning is positioned in conjunction with curatorial critique, Black critical thought, and city planning pedagogies that inform possibilities for thriving communities through the arts. Essentially, it explores the question: who has the right to art in the city?
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power Play : An Historiographic about Women and Urban Renewal</title>
<link href="https://hdl.handle.net/1721.1/152480" rel="alternate"/>
<author>
<name>Berendschot, Octavie Eleonor</name>
</author>
<id>https://hdl.handle.net/1721.1/152480</id>
<updated>2023-10-19T03:32:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Power Play : An Historiographic about Women and Urban Renewal
Berendschot, Octavie Eleonor
This research takes the form of a nonfiction graphic novel that resurfaces the lived experiences of womxn¹ in New York City during mid-twentieth-century urban renewal projects. Pathologizing immigrant and nonwhite communities, city officials approved the wholesale demolition of the vibrant neighborhood of downtown Brooklyn by issuing reports and approving masterplans for public housing. This group of exclusively white men intentionally made these documents opaque as a way to suppress protests and push their political agenda forward. The ongoing preservation of these records as part of the city’s archives ensures that the production of history about urban renewal is constrained by governmental archival practices, which bias histories toward formal participants in exclusionary processes. In contrast, this project seeks to amplify the voices of womxn who lived, worked, and passed through these neighborhoods by both leveraging and questioning these archival sources as fragmented evidence of urban histories. This graphic novel explores techniques of representing authorial positionality, especially as it relates to the production of history. To fill the narrative gaps, the creative nonfiction story attempts to humanize neighborhood destruction; it also calls attention to the continuation of oppression and how these histories manifest in the present.&#13;
&#13;
¹Womxn is an intersectional term used to signal the inclusion of those who have traditionally been excluded from white feminist discourse.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crossroads: Exploring how micro organizations that leverage design shape urbanism practice</title>
<link href="https://hdl.handle.net/1721.1/152479" rel="alternate"/>
<author>
<name>Isidor, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/152479</id>
<updated>2023-10-19T04:01:12Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Crossroads: Exploring how micro organizations that leverage design shape urbanism practice
Isidor, Melissa
Crossroads is an exploration of the role that micro organizations (1-10 people) that leverage design play within the greater urbanism field. At large, this research serves to build synergies between creative practitioners within or adjacent to the urbanism field, while providing insights and resources from both a philosophical and an operational perspective. The research aims to think expansively about what design means, mainly conceptualizing design as a way of thinking and a process. Using a case study approach, my investigation brings together the voices of six micro organizations based in the United States: BlackSpace Urbanist Collective, JIMA Studio, Broad Community Connections, Design Studio for Social Intervention, Civic Studio, and Hector Design. Each conversation dives into the nuances of each organization’s foundations, process, and vision for the future. In understanding each group’s internal organizational practices, we begin to uncover the possibilities and challenges of practicing at this scale. At large, the findings lead me to believe that such organizations serve as the instigators and experimenters within the greater urbanism ecosystem.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Participatory Zoning: Collectivity, contradictions, and the politics of inclusion in neighborhood planning</title>
<link href="https://hdl.handle.net/1721.1/152478" rel="alternate"/>
<author>
<name>Lee, Gina Hanhee</name>
</author>
<id>https://hdl.handle.net/1721.1/152478</id>
<updated>2023-10-19T03:07:24Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Participatory Zoning: Collectivity, contradictions, and the politics of inclusion in neighborhood planning
Lee, Gina Hanhee
The strength of individual property rights and the sovereignty those rights secure mean that even a collective of neighborhood residents successfully engaging with public institutions is highly unlikely to achieve outcomes that undermine private interests in property and profit. Yet revised attempts at participation continue to be made, and the participatory planning paradigm continues to be entrenched. This thesis aims to show that, despite the limited, maybe even predetermined, outcomes of resident participation in land use decision-making, such engagement generates alternative ways of using land use regulation as a tool for spatializing collective survival and even sovereignty. It offers critiques of current participatory planning processes but also reveals incentives for continued resident participation in municipal neighborhood planning decisions.&#13;
&#13;
Through a case study of participatory land use and zoning decisions over five decades in Two Bridges, this thesis analyzes the formations through which residents engage in public processes and assesses how property relations and transformations in broader planning contexts structure those engagements. It traces a genealogy of processes and outcomes on one particular site to evaluate the emergence and institutionalization of participatory formations and the adaptations in representation and modes of participation by shifting local collectivities.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of deep learning to land cover classification: practical issues and strategies</title>
<link href="https://hdl.handle.net/1721.1/152476" rel="alternate"/>
<author>
<name>Fang, Ruoming</name>
</author>
<id>https://hdl.handle.net/1721.1/152476</id>
<updated>2023-10-19T03:48:34Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Application of deep learning to land cover classification: practical issues and strategies
Fang, Ruoming
Land Use and Land Cover (LULC) change is a process of essential importance to urban studies and planning. Large-scale databases provide comprehensive records, but in many circumstances, they need to be supplemented or substituted by alternative data sources. The advent of deep learning provides an efficient, low-cost data generation method in which a trained deep neural network (DNN) segments satellite images to classify land cover; in recent years, multiple models have been proposed and tested on satellite imagery. This study takes a practically oriented approach, in which we train a classic convolutional neural network (CNN) model on a novel labeled image dataset, then use the model to segment Sentinel-2 satellite images and classify the land cover of Massachusetts in 2019. While the model performs very well in classifying land covers at a broad level, the discrepancies between model predictions and reference data increase in distinguishing more nuanced land features due to many localized factors. In addition, model training and classification are highly sensitive to several issues specific to remote sensing data, such as defects in images and distribution shifts. We devise multiple empirical strategies to address these issues, including a progressive technique to select high-quality data samples from the imperfect dataset and the selection of normalization parameters to reduce the impact of covariate shifts. We contend that good models alone are insufficient to drive successful LULC mapping on remote sensing imagery; sound data engineering also plays a crucial role. Lastly, we explore potential improvements in the field that can benefit future applications.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hacer la vida en Ciudad Verde: Bringing Participatory Action Research to Colombia’s Affordable Housing macro-projects</title>
<link href="https://hdl.handle.net/1721.1/152475" rel="alternate"/>
<author>
<name>Pérez Carrillo, Ana María</name>
</author>
<id>https://hdl.handle.net/1721.1/152475</id>
<updated>2023-10-19T03:45:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hacer la vida en Ciudad Verde: Bringing Participatory Action Research to Colombia’s Affordable Housing macro-projects
Pérez Carrillo, Ana María
Housing macro-projects have been central to Colombia’s urbanization tradition over the past century. A new set of laws put forth in the 2010s and a particular economic chapter in the country’s history have brought a new wave of affordable housing macro-projects over the past decade, developed through public and private cooperation and characterized by their immensity. Vastness and complexity are fertile ground for an anonymity that can feed loneliness and disconnection. In this urban-suburban immensity, how do the stories, experiences, and voices of residents get heard as they try to contribute to a heated national debate about the future of the housing and urbanization policies that made Ciudad Verde possible? Academic research and urban planning can bring these voices, containing the joys, pains, hopes, and fears of residents, to the center of the national conversation. Bottom-up, participatory, and action-focused research processes, as attentive to time as they are to space, can help us understand this multiplying new urban form, so big in scale that it threatens to overwhelm.&#13;
&#13;
Located on the outskirts of Bogotá, Ciudad Verde, an affordable housing complex that houses over 51,000 households, exemplifies the complexity of these macro-projects in terms of the possibilities they bring to new residents and the challenges that come with this large-scale, fast-paced urbanization. Through developing a Participatory Action Research structure and framework, the Resident Researcher Group of Ciudad Verde collected qualitative data on the experiences of habitation, coexistence, community, belonging, and governance among residents of Ciudad Verde.&#13;
&#13;
Through the implementation of a photo-voice process and a civic conversation design process led by 10 resident researchers of Ciudad Verde, our outputs include audiovisual elements that further capture and elevate residents' voices and perspectives. Our hope is that these stories and testimonies will inform decision making for the future of Ciudad Verde, future affordable housing macro-projects in Colombia and the overall Housing Policy scheme that made these projects possible in the first place.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imagining and building more equitable and democratic systems: lessons from Bay Area organizations</title>
<link href="https://hdl.handle.net/1721.1/152474" rel="alternate"/>
<author>
<name>Mohtadi, Tara</name>
</author>
<id>https://hdl.handle.net/1721.1/152474</id>
<updated>2023-10-19T03:05:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Imagining and building more equitable and democratic systems: lessons from Bay Area organizations
Mohtadi, Tara
America’s democratic system has been built atop politics of exclusion and oppression. While strides have been made in enfranchisement and inclusion, communities continue to be systematically marginalized, dispossessed and disempowered. Processes illuminate the often invisible purpose and values that underlie systems, but as this research discusses, an overemphasis on process as the problem and solution has limited the potential to create substantive change.&#13;
&#13;
To build a true democracy requires both imagining and building alternative political and economic systems that rest on the premise of equity and collective power. Social movements are at the forefront of transforming oppressive systems, and marginalized communities in particular are often on the frontlines of the struggle for justice. Collective and cooperative organizations have emerged within and alongside movements as explicit infrastructures that both embody and support social change. They form to respond to unjust material conditions in their communities related to land, labor, wealth and housing, while simultaneously being embedded in sustained movements, coalition building and policy advocacy efforts to address the root cause of these injustices.&#13;
&#13;
Through numerous conversations with organizations located in the San Francisco Bay Area, this research highlights how systems that foster shared power are not only imaginable, but are being built. In sharing learnings from these organizations, this research tells the story of their challenges and visions, their various approaches to enacting change, and how they are linked to broader networks of mobilization. As microcosms of a truer democracy, collectives and cooperatives have implications for reshaping the relationship between people and power, at the individual, organizational, and societal level. Ultimately, this thesis presents these models as a pathway for transitioning from an extractive to a regenerative economy, and from concentrated to collective power.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Participatory Photo-Mapping (PPM) framework to observe and reflect on the transformation of public space: the case of the Paseo España Environmental Corridor in Bucaramanga, Colombia</title>
<link href="https://hdl.handle.net/1721.1/152473" rel="alternate"/>
<author>
<name>Castillo Castillo, Maria Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/152473</id>
<updated>2023-10-19T03:41:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Participatory Photo-Mapping (PPM) framework to observe and reflect on the transformation of public space: the case of the Paseo España Environmental Corridor in Bucaramanga, Colombia
Castillo Castillo, Maria Daniela
Public participation in urban planning processes is key to ensuring that projects are successful, address community needs, and remain sustainable into the future. Current sociodemographic and political circumstances in Colombia, namely the opportunities presented by the peace agreement between the government and guerrilla groups and the advance of regulations that help ensure public participation in political processes across the country, have contributed to an increasing need to create community engagement processes, specifically in urban centers, that support urban planning decision-making while supporting community development and relationship building. In 2021, Bucaramanga, a capital city of 500,000 inhabitants, developed a Walkable City Plan and an accompanying Revitalization of Public Spaces Plan. These aim to advance a vision of lively public spaces that enable connectivity and sustainable mobility and ultimately improve citizens’ quality of life. As part of these plans, Bucaramanga aimed to complete 400 projects by December 2022 for the city’s 400th anniversary. The projects were chosen by the architecture firm TABUU, with its technical and social teams working together to prioritize the most impactful possible interventions. While these interventions must comply with certain requirements for social engagement, there is room to strengthen these strategies by creating more timely, open, and transparent processes, by ensuring project assessment and oversight during and after the infrastructural intervention, and by leveraging existing digital tools to democratize information and data. 
Thus, this thesis reviews the academic literature, official documents, and relevant precedents that can help guide better practices in community engagement processes in Bucaramanga, and explores the opportunity of utilizing a participatory photo-mapping framework to enable spaces to collaborate, exchange knowledge, and develop relevant skills in community planning to continue increasing participation moving forward.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Welcome to Cambodia Town</title>
<link href="https://hdl.handle.net/1721.1/152472" rel="alternate"/>
<author>
<name>Goh, Jonathan Pei-Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/152472</id>
<updated>2023-10-19T03:47:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Welcome to Cambodia Town
Goh, Jonathan Pei-Ying
Cambodian American communities are at an inflection point as the generation that arrived in the U.S. as refugees starts to retire, and the younger generation often has other aspirations than to carry on their parents’ businesses. Cambodia Town, the largest conglomeration of Cambodians in the U.S., embodies these changes in the form of population decline and small businesses closing down. However, a new wave of Cambodian American digital creators who seek to use storytelling and design to represent and shape Khmer culture has also emerged out of this transition.&#13;
&#13;
I undertake a product design and development process that uncovers the digital-engagement needs of Cambodian American small businesses and digital creators, and I develop a prototype of a mobile application to support them. I conduct exploratory data analysis of small businesses in Cambodia Town and in-depth interviews with target users of the mobile app, which I translate into the prototype design.&#13;
&#13;
At its heart, this work asks how we might imagine a platform that threads together digital and physical worlds for a geographically fragmented group of people, and what the implications of such an endeavor are for placemaking.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Cars Took Over America</title>
<link href="https://hdl.handle.net/1721.1/152471" rel="alternate"/>
<author>
<name>Strauss, Ilana</name>
</author>
<id>https://hdl.handle.net/1721.1/152471</id>
<updated>2023-10-19T03:53:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How Cars Took Over America
Strauss, Ilana
America has a love affair with the automobile, as the saying goes. The average American household has 1.88 cars. Songs like “Life is a Highway,” “On the Road Again,” and countless others celebrate the car. Americans have accepted endless sprawl, hours stuck in traffic, car crashes, and lung disease because they loved cars from the beginning. Or did they? I will dig into the history of how car-centrism took over the country to explore an alternative theory: what if Americans didn’t choose cars out of love, or even at all? What if a car-centric country was largely forced on Americans, and a narrative of “love” spun after the fact?
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Accessibility &amp; Affordability in Indonesian Transit-Oriented Development Projects, Case Study: TOD Tanah Abang, Indonesia</title>
<link href="https://hdl.handle.net/1721.1/152470" rel="alternate"/>
<author>
<name>Pratama, Daniel Caesar</name>
</author>
<id>https://hdl.handle.net/1721.1/152470</id>
<updated>2023-10-19T03:44:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Balancing Accessibility &amp; Affordability in Indonesian Transit-Oriented Development Projects, Case Study: TOD Tanah Abang, Indonesia
Pratama, Daniel Caesar
The Transit-Oriented Development (TOD) concept has been hailed for successfully increasing public transit ridership and improving residents’ accessibility. Its approach involves capturing the increase in property values by redeveloping areas surrounding transit stations to fund public transit investment. However, when proposed TOD neighborhoods are already densely populated and home to low-income residents, development-based value capture mechanisms can worsen the housing affordability crisis and increase the risk of gentrification and displacement for existing residents.&#13;
&#13;
This thesis examines the 'Tanah Abang TOD Urban Design Guideline (UDGL)' for a newly proposed TOD area in Jakarta put forward by PT MITJ, a joint venture of Jakarta’s Commuter Line and Jakarta’s Mass Rapid Transit companies. PT MITJ is appointed as the TOD operator responsible for regulating land-use changes and leading the development process. The Tanah Abang TOD UDGL thus presents an example of how an urban design proposal is used as a mechanism of urban regeneration. By evaluating the proposal's impact on accessibility and affordability compared to the existing state, this thesis aims to provide a framework for anticipatory planning measures that balance potential gains and losses for communities in Indonesian TOD projects.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determinants and Interventions for Physical Activity Adherence During COVID-19: A Global Study Using Machine Learning Approach</title>
<link href="https://hdl.handle.net/1721.1/152469" rel="alternate"/>
<author>
<name>Chai, Yuchen</name>
</author>
<id>https://hdl.handle.net/1721.1/152469</id>
<updated>2023-10-19T03:43:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Determinants and Interventions for Physical Activity Adherence During COVID-19: A Global Study Using Machine Learning Approach
Chai, Yuchen
Physical activity (PA) is crucial for maintaining both physical and mental health in urban and regional settings. However, public health hazards, such as pandemics, extreme temperatures, and air pollution, pose challenges for PA adherence due to voluntary or mandatory self-protection measures and the closure of exercise facilities in cities. Existing research on urban health resilience during crises primarily depends on small-scale exercise surveys and fails to consider the multifaceted determinants of exercise, including personal habits, social networks, and local policy or built environments. In this project, I use COVID-19 as a case study to systematically investigate the drivers of unequal PA adherence and identify opportunities for timely personalized interventions. First, I collect the universe of exercise records for 30 million individuals across more than 200 countries from Strava. Then, I develop advanced neural network methods to automate the identification of PA adherence prior to and during the pandemic based on personal exercise habits and social network interactions, achieving accuracy rates of 89.9% and 82.1%, respectively. Lastly, I integrate an explainable neural network approach with econometric analysis to reveal the impact of city-level policies, socio-demographics, and built environment factors on PA inequality. My findings suggest that regions worldwide experienced significant PA shocks at the onset of the pandemic, particularly during lockdown periods, followed by a positive rebound in the long term. Males and urbanites in less developed regions tended to experience more negative PA shocks during the pandemic, likely moderated by exercise preferences and the availability of outdoor sports amenities. Social connectivity also plays a vital role in promoting PA adherence during crises. 
This study advances the field by combining large-scale digital data with machine learning to provide timely prediction of PA adherence and map its complex determinants. My thesis thus provides direct evidence-based support for multi-layered PA interventions from personal nudges, social networks, and city planning perspectives during public health crises.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-Stitching the Fabric: Urban Highway Removal as an Opportunity for Equitable, Sustainable Transformation</title>
<link href="https://hdl.handle.net/1721.1/152468" rel="alternate"/>
<author>
<name>Boccon-Gibod, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152468</id>
<updated>2023-10-19T03:59:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Re-Stitching the Fabric: Urban Highway Removal as an Opportunity for Equitable, Sustainable Transformation
Boccon-Gibod, Alexander
The detrimental effects of a century of highway construction and use in U.S. cities are clear. From polluting the air and contributing to climate change to encouraging urban sprawl and entrenching racial and economic injustice in the built environment, urban highways urgently need reimagining as we aim to build a more just and sustainable society. As a result, cities across the country have slowly begun to remove their highways and undo past harms by reclaiming public space, promoting sustainable modes of transportation, and redeveloping newly available land. While past removal projects have undoubtedly improved their urban public realms, they have often missed opportunities to encourage sustainable mode shift and resist community displacement. Given recent calls for highway removal by communities, local leaders, and the federal government, now is the time to ensure the benefits of these projects are shared by all.&#13;
&#13;
This thesis aims to outline a justice-oriented framework which can encourage more holistic highway removal processes. It first uses a case study approach to evaluate past projects through the lenses of sustainable mobility, public realm, and anti-displacement. Through analyses of the removal of part of the Central Freeway in San Francisco, CA and the Cypress Freeway in Oakland, CA, it identifies best practices to adopt and failures to avoid. It then specifies a set of analytical and procedural dimensions necessary for ensuring more equitable and sustainable outcomes. Finally, this framework is illustrated and tested using a proposed highway removal project: the rest of San Francisco’s Central Freeway.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strengthening Consumer and Retailer Responsibility for Textile Reuse and Donation in Cambridge and Boston</title>
<link href="https://hdl.handle.net/1721.1/152467" rel="alternate"/>
<author>
<name>Lohmar, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/152467</id>
<updated>2023-10-19T03:05:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Strengthening Consumer and Retailer Responsibility for Textile Reuse and Donation in Cambridge and Boston
Lohmar, Sarah
According to the Massachusetts Department of Environmental Protection (Mass DEP), Massachusetts residents dispose of approximately 230,000 tons of textiles each year. In Boston and Cambridge, almost all household trash is incinerated due to at-capacity landfills. This presents a critical need to divert textile waste toward secondary uses, avoiding the release of greenhouse gases and toxins from the incinerated clothes. Following the 2022 Mass DEP ban on disposing of mattresses and textiles in municipal trash, there has been an increased emphasis on textile recycling in the two cities. However, existing strategies for textile reuse focus on the actions of individuals and municipalities, which is greatly at odds with the global scale of textile waste generation.&#13;
&#13;
Through data collection, stakeholder interviews, and policy analysis, this work examines the relations and roles of the donation, collection, and resale actors in the existing textile landscape, spanning both public and private sectors. Drawing from this investigation, I propose a bundle of recommendations to improve the textile recovery space in three key categories: responsibility and stewardship, educational messaging and outreach, and potential policy actions. To effectively address and reduce the issue of textile waste, this work concludes that clothing manufacturers and retailers must take greater responsibility for end-of-life disposal of textiles. At the same time, individual consumers, residents, and cities must be mindful of consumption, continue to participate in existing textile recovery programming, and advocate for longer-term change in material waste culture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Bus Operations Using High-Resolution Vehicle Location Data</title>
<link href="https://hdl.handle.net/1721.1/152466" rel="alternate"/>
<author>
<name>Huang, Yuzhu</name>
</author>
<id>https://hdl.handle.net/1721.1/152466</id>
<updated>2023-10-19T03:50:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Bus Operations Using High-Resolution Vehicle Location Data
Huang, Yuzhu
High-resolution location data (heartbeat data) of transit fleet vehicles is a newfound data source for many transit agencies. On its surface, the heartbeat data can provide a wealth of information about all operational details of a recorded transit vehicle trip, from its location trajectory to its speed and acceleration profiles. In reality, the heartbeat data is often noisy and recorded at inconsistent frequencies, making it a challenging task for analysts to interpret the data as is. This thesis delves into the task of extracting useful operational information about bus vehicles from heartbeat data. In particular, the thesis focuses on three aspects of how heartbeat data can be used to enable operational analysis of transit routes. &#13;
&#13;
First, a methodology is proposed to convert the raw, timestamped coordinate data into a continuous and smooth vehicle trajectory function of each bus trip. A case study using historical heartbeat data collected from a real-world bus trip is presented to showcase how a complete trajectory combined with the vehicle speed profile could allow for qualitative assessment of bus operations. Then, details are provided on how one can analyze the trajectories of multiple bus trips in aggregate to quantify the different types of delay encountered by bus vehicles, including stop dwell time, signal delay, crossing delay, and congestion delay. Case studies are presented to demonstrate how one can quantify each type of delay for a specific bus route or corridor served by multiple routes. Lastly, a thorough discussion is carried out about how one can conduct observational before-after studies using heartbeat data to draw conclusions about the effectiveness of transit improvement projects. A case study is provided to illustrate how one can evaluate the effectiveness of a stretch of bus-only lane by calculating the travel time savings due to the project. &#13;
&#13;
The technical discussions presented in this thesis provide a solid foundation for conducting in-depth analysis of bus operations using heartbeat data. The methodologies will allow transit analysts to gain better insight into the performance of transit routes and corridors, thus allowing transit agencies to develop more targeted strategies for continuously improving transit services.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Splitting rides in transit deserts: Ride-splitting dynamics in Chicago before, during and after the pandemic</title>
<link href="https://hdl.handle.net/1721.1/152465" rel="alternate"/>
<author>
<name>Charitatos, Paris</name>
</author>
<id>https://hdl.handle.net/1721.1/152465</id>
<updated>2023-10-19T03:01:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Splitting rides in transit deserts: Ride-splitting dynamics in Chicago before, during and after the pandemic
Charitatos, Paris
Transportation Network Companies (TNCs) might constitute a solution for transit-dependent populations who live in areas with limited or even non-existent public transit service, also known as “transit deserts”. Ride-splitting was introduced by TNCs as an affordable on-demand mobility option that offers door-to-door service while sharing a trip with another passenger. Due to its affordability, ride-splitting can further increase accessibility for low-income and disadvantaged populations. Few studies have focused explicitly on the role of ride-splitting in underserved communities. We studied whether ride-splitting services compensate for the lack of transit in transit deserts. We leveraged the suspension of ride-splitting services during the COVID-19 pandemic to examine how ride-splitting user behavior changed across three time periods: (1) pre-pandemic, (2) during the pandemic, and (3) post-pandemic. By doing so, we study whether ride-splitting users switched to single-rider trips during COVID-19 and whether ride-splitting levels have recovered in the post-pandemic era. For our analysis we used TNC trip records provided by the City of Chicago, transit data from four different transit authorities, and demographic and job density data. We identified transit deserts by calculating a transit supply score for every census tract during five time periods: (1) weekday daytime hours; (2) weekday overnight hours; (3) weekday peak hours; (4) weekend daytime hours; and (5) weekend overnight hours. We developed cluster and bivariate maps along with spatial regression models to determine the correlation between ride-splitting pickups/drop-offs, transit supply, and neighborhood characteristics across these five temporal periods. Results revealed that in Chicago low transit supply is not significantly correlated with disadvantaged communities, suggesting that transit deserts can occur regardless of the racial and income composition, and spatial sorting, of an area. 
Pooled pickups/drop-offs were negatively correlated with transit route density, transit stop density, and proximity to rail stations, indicating that ride-splitting supplements the role of transit in transit deserts. We found that communities of color and transit-dependent populations had a moderate positive influence on ride-splitting. There is little evidence that ride-splitting users switched to single-rider trips during COVID-19, but overall single trips were relatively higher compared to pre-pandemic levels.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Civic Atlas: Open Government, Civic Tech, and Making Zoning Case Data More Accessible</title>
<link href="https://hdl.handle.net/1721.1/152464" rel="alternate"/>
<author>
<name>Devine, John</name>
</author>
<id>https://hdl.handle.net/1721.1/152464</id>
<updated>2023-10-19T03:25:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Civic Atlas: Open Government, Civic Tech, and Making Zoning Case Data More Accessible
Devine, John
America’s founders believed that access to government records was essential for democracy. This belief was shared by the Obama Administration, which issued orders across the government to make documents more transparent. Even though these efforts focused on digitization, many documents are not easy to search or obtain. For example, zoning board meeting agendas typically only exist as PDF documents, making them hard to search by locations and topics of interest. This thesis seeks to understand the accessibility of zoning documents and how feasible it would be to develop an application called “Civic Atlas” that uses web scraping to reformat zoning board meeting agendas into interactive maps and visualizations. To identify the need for this application, the thesis uses an analysis of zoning cases in sixty cities across the United States to determine whether current practices meet the goals of Open Government initiatives. It then evaluates how feasible it is to use automation to extract data from these documents. This analysis revealed three typologies of zoning documents that we use to describe zoning record systems and assess what specific features make them more or less accessible to the general public. The results show that in most American cities, zoning documents are hard to access digitally and that government officials would like products to make them more accessible. However, while improved accessibility is an interest of government officials, they see many barriers to achieving that goal, including significant limitations in staff time and resources.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning by Doing: Transitioning Healthcare Technology Innovations from MIT Labs to Resource-Scarce Communities</title>
<link href="https://hdl.handle.net/1721.1/152463" rel="alternate"/>
<author>
<name>Seabold, Amelia Claire Elston</name>
</author>
<id>https://hdl.handle.net/1721.1/152463</id>
<updated>2023-10-19T03:17:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning by Doing: Transitioning Healthcare Technology Innovations from MIT Labs to Resource-Scarce Communities
Seabold, Amelia Claire Elston
The affordability and accessibility of healthcare innovations is critical for the well-being of resource-scarce communities around the world. Yet little research centers on precisely how and when financial, material, and logistical resource constraints enter the design cycle producing such innovations. MIT labs across engineering and science departments, where novel research on healthcare technologies is strong, offer an ideal environment from which to explore how technological innovations from an academic lab translate into the real world and whether the resource constraints of low-income communities are used as a design input. This study is especially pertinent to my own work in healthcare technology innovation: I am designing and building a low-cost sickle cell disease diagnostic to be used in sub-Saharan Africa, where sickle cell disease prevalence is high but there is a lack of diagnoses due in part to the cost of testing. As a student currently designing a product for explicit use in resource-scarce areas, I aimed to learn how MIT faculty, research scientists, and students have designed and implemented their products to be valuable to communities in need. My diagnostic project thus acts as the client project for this thesis. By interviewing women across Africa and Asia about women’s and children’s health in slums, settings of deep and growing income and resource scarcity and inequality, I gained an understanding of the need for accessible and affordable healthcare in areas where my diagnostic would be implemented. Through qualitative interviews with MIT scholars, the thesis explores how and when scarcity on the ground influences work, but also highlights the importance of incorporating the ability to manufacture and distribute new technologies, to consider systemic constraints, and to understand the needs of potential partners and stakeholders in the design of an innovation. 
Informed by participatory principles and a prioritization of situated knowledge in urban planning, this thesis shows how research and practice can be combined reflexively in the fields of global health and engineering to create a practical and implementable product in an academic lab with impact for some of the most marginalized communities in need of healthcare improvements.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-COVID Transit Fares for Riders and Recovery</title>
<link href="https://hdl.handle.net/1721.1/152462" rel="alternate"/>
<author>
<name>O'Neil Jr., Daniel M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152462</id>
<updated>2023-10-19T03:24:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Post-COVID Transit Fares for Riders and Recovery
O'Neil Jr., Daniel M.
In the face of persistent large-scale changes in travel behavior spurred by the COVID-19 pandemic, mass transit agencies face a landscape full of new challenges. Transit ridership, often used as a primary measure of agency success, remains diminished. Nevertheless, the purpose of and benefits provided by a well-designed and well-operated transit network remain unchanged. This thesis investigates one powerful tool at the disposal of transit providers: fare policy. Fare policy can be used to spur transit usage, to fund agency operations, and to respond to societal goals. Rider-centric fare policies can be identified that increase transit travel volumes while having only small negative impacts on fare revenue. Implementing such policies is key to maintaining public investment and individual engagement moving forward.&#13;
&#13;
This thesis presents four case studies that analyze fare equity, new fare products, and multi-agency regional fare integration. First, fare equity is considered through a case study of Washington, DC’s Metrorail transit fare structure, residential and employment geography, and user demographics. The results highlight policy elements that consistently improve fare equity regardless of structure type, including peak pricing differentiation and removal of penalties for circuitous travel. The second case study designs and evaluates novel fare products using post-pandemic travel patterns on the CTA. The hypothetical products considered differ from traditional offerings by changing the usage restrictions and the validity periods. A flexible pass that confers a set number of CTA journeys at a discounted per-trip price is found to be the most promising, as it would provide the most utility to riders for whom pay-per-use travel is currently the most economical choice. The third case study considers single-day fare capping as an alternative to traditional 1-day passes for transit users in Chicago, identifying benefits to reduced-fare and bus-only riders while providing opportunities to boost agency ridership. Finally, the results of a recently introduced fully-integrated, multi-agency transit pass in the Chicago region are analyzed. Fare structure changes are used to estimate post-COVID commuter rail fare elasticity, and the elasticity for integrated passes. Additional findings include large increases in cross-agency travel, new customers accessing secondary transit agencies, and continued opportunities to integrate.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nature-Based Coastal Adaptation: A Comparative Assessment to Inform Effective Implementation</title>
<link href="https://hdl.handle.net/1721.1/152461" rel="alternate"/>
<author>
<name>Winer-Chan, Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/152461</id>
<updated>2023-10-19T03:06:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Nature-Based Coastal Adaptation: A Comparative Assessment to Inform Effective Implementation
Winer-Chan, Rose
As coastal adaptation planning becomes the new normal, governments have increasingly shifted a significant portion of new infrastructure from hardened “gray” structures toward natural and “nature-based” solutions (NbS): restored or constructed ecosystems that, by enhancing or mimicking natural processes, mitigate coastal hazards while offering socioeconomic, environmental, and public health benefits. However, the use of NbS remains limited due to uncertainty over cost and performance, a fragmented regulatory landscape, inconsistent planning tools, and the context dependence of NbS design. This thesis aims to explore these diverse uncertainties in detail by shedding light on the key factors and processes that may pose critical barriers or drive success during the implementation of nature-based coastal adaptation (NBCA) projects. This study employs stakeholder interviews to explore and compare four NBCA case studies from design through implementation: Hunter’s Point South Park and West Pond in Queens, New York; Rose Larisa Park in East Providence, Rhode Island; and the Sand Motor in South Holland, the Netherlands. By identifying the common challenges, success drivers, and success metrics shared across these projects, this thesis hopes to provide useful early insights that help NBCA decision-makers thoughtfully define and measure success, anticipate key challenges, and take steps to overcome those challenges and achieve more successful implementation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing the Next Generation of Autonomous Underwater Gliders</title>
<link href="https://hdl.handle.net/1721.1/152460" rel="alternate"/>
<author>
<name>Ventola, Peter T.</name>
</author>
<id>https://hdl.handle.net/1721.1/152460</id>
<updated>2023-10-19T03:26:08Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Developing the Next Generation of Autonomous Underwater Gliders
Ventola, Peter T.
This thesis presents a novel, hybrid Autonomous Underwater Glider (AUG) architecture developed for improved performance in shallow, high-current environments while maintaining all capabilities inherent to a deep, 1000m-rated AUG. Numerous regions of scientific interest, such as the marginal ice zone (MIZ) and continental shelf breaks present significant challenges to conventional AUG operations due to a combination of changing ocean currents and depths.&#13;
&#13;
AUGs are traditionally optimized for performance in shallow (less than 200m) or deep water (200m to 1000m) environments. The design of a buoyancy drive on a deep-rated AUG does not support the pump rate required for fast inflections in narrow depth bands.&#13;
&#13;
Contained within this thesis is the framework to expand the operational envelope of a Teledyne Webb Research (TWR) G3 Slocum glider through substantial modification of the glider's hardware components backed by rigorous hydrodynamic analysis and computational fluid dynamics (CFD) modelling. Since AUGs are limited in both speed and maneuverability, the goal of this thesis is to improve and modify the glider's flight characteristics, specifically the glider's speed through water, its inflection rate, and its efficiency. These performance improvements are accomplished through the introduction of a high-power thruster, modified wings, and aft fin surfaces. The modified glider's efficacy is evaluated through various laboratory experiments and field data obtained in Buzzards Bay and the Caribbean Sea. Design concepts for a future, more advanced glider are also discussed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerated algorithms for constrained optimization and control</title>
<link href="https://hdl.handle.net/1721.1/152459" rel="alternate"/>
<author>
<name>Parashar, Anjali</name>
</author>
<id>https://hdl.handle.net/1721.1/152459</id>
<updated>2023-10-19T04:01:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerated algorithms for constrained optimization and control
Parashar, Anjali
Nonlinear optimization with equality and inequality constraints is a ubiquitous problem in the optimization and control of large-scale systems. Ensuring feasibility along with reasonably fast convergence to the optimal solution remains an open and pressing problem in this area. &#13;
&#13;
A class of high-order tuners was recently proposed in the adaptive control literature in an effort to achieve accelerated convergence in the unconstrained case. In this thesis, we propose a new high-order-tuner-based algorithm that can accommodate the presence of equality and inequality constraints. We leverage the linear dependence in solution space to guarantee that equality constraints are always satisfied. We further ensure feasibility with respect to inequality constraints for the specific case of box constraints by introducing time-varying gains in the high-order tuner while retaining its attractive accelerated convergence properties. Theoretical guarantees pertaining to stability are also provided for time-varying regressors. These theoretical propositions are validated on several categories of optimization problems, including academic examples, power flow optimization, and neural network optimization.&#13;
&#13;
We devote special attention to a special case of neural network optimization, namely the linear neural network (LNN) training problem, to understand the dynamics of nonconvex optimization governed by gradient flow and provide Lyapunov stability guarantees for LNNs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nature based solutions for coastal defense: Wave attenuation and economic analysis of marsh-fronted seawalls</title>
<link href="https://hdl.handle.net/1721.1/152458" rel="alternate"/>
<author>
<name>Lee, In Him</name>
</author>
<id>https://hdl.handle.net/1721.1/152458</id>
<updated>2023-10-19T03:47:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Nature based solutions for coastal defense: Wave attenuation and economic analysis of marsh-fronted seawalls
Lee, In Him
A seawall fronted by salt marshes is a hybrid, nature-based solution for rural and urban coastal protection. A one-dimensional wave attenuation model was developed from first principles to capture four mechanisms that impact wave evolution: wave breaking, vegetation drag, shoaling, and bed friction. In particular, the vegetation drag was modeled using the stem and leaf morphology and material properties of specific marsh species. The model was validated with field wave height data. A benefit-cost analysis framework was used to present an economic argument for hybrid infrastructures. The additional wave attenuation from vegetation drag results in cost savings from the lower seawall height required for the same protection level, in addition to reduced scour erosion and additional ecosystem services such as habitat, water quality improvement, and carbon sequestration. Both the one-dimensional wave model and the benefit-cost analysis framework were applied to an urban marsh-fronted seawall case study at Juniper Cove, Salem, Massachusetts. The presence of vegetation was found to significantly reduce the occurrence of wave breaking, which would be beneficial for sediment accretion to maintain a healthy marsh habitat and a less turbulent aquatic habitat. From the case study, narrow vegetation widths of 20 to 40 m can provide essential wave attenuation that would justify marsh restoration in front of the existing seawall.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Art, Repair, and Spatial Justice in Boston's Chinatown and Seattle's International District</title>
<link href="https://hdl.handle.net/1721.1/152456" rel="alternate"/>
<author>
<name>Xie, Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/152456</id>
<updated>2023-10-19T03:15:23Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Art, Repair, and Spatial Justice in Boston's Chinatown and Seattle's International District
Xie, Lilian
There is a growing overlap between the fields of urban planning, art, and social justice. Projects within the realms of urban planning and socially engaged art seek to bring about changes that redistribute socially valued resources and opportunities, especially along racial and spatial lines. This thesis analyzes how socially engaged public art accomplishes these goals of spatial justice in Boston’s and Seattle’s historic Chinatowns. Building on planning scholars Rashad Akeem William and Leonie Sandercock’s work framing the role of affect and emotions in healing planning conflicts, I analyze how these projects support their community’s efforts to repair past spatial harms, and what distinguishes their function from other forms of political and social activism. Using a case study approach, I present a series of research findings from interviews with individuals who facilitated, created, and/or participated in public art projects in Seattle’s International District and Boston’s Chinatown.&#13;
&#13;
Through my research, I illustrate the unique capacity of public art to influence the important emotional and relational aspects of transformation, and the opportunity that public art presents for residents to directly shape the built environment. Public art, as a uniquely place-specific art form, offers an opportunity for communities pursuing spatial justice to shift the affective aspects of transformation and engage in the radical reimagination of how power is distributed in space. Art is an important and often underutilized strategy in the spatial justice toolkit, and this thesis presents opportunities for artists, community organizers, and planners to think creatively about how art can support their efforts to disrupt racial planning, dismantle White supremacy, and support the continued flourishing of urban communities.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affordable Housing Provision for Workers Constructing Nusantara, the New Capital City of Indonesia</title>
<link href="https://hdl.handle.net/1721.1/152455" rel="alternate"/>
<author>
<name>Prameswari, Pratiwi</name>
</author>
<id>https://hdl.handle.net/1721.1/152455</id>
<updated>2023-10-19T04:00:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Affordable Housing Provision for Workers Constructing Nusantara, the New Capital City of Indonesia
Prameswari, Pratiwi
Indonesia is an important ongoing example of a country relocating its capital city for economic and environmental reasons amid numerous challenges. The new capital city site is located far from existing cities, with limited infrastructure and only a small population. One major challenge entails how and where best to house the large population of construction workers coming to build the city. Global experience shows that some new capital cities planned affordable housing for residents but failed to recognize the importance of housing for the construction workers who built the city. As a result, informal settlements have proliferated inside and around those cities, posing long-lasting challenges. This thesis explores the efforts to provide affordable housing for construction workers in Nusantara and the challenges of ensuring equal access to housing for all, particularly around (1) the adequacy of housing for construction workers; (2) the stakeholders involved in the provision; and (3) the procedures of the housing provision. To address the issue of accommodating construction workers in Nusantara, the government of Indonesia has built housing for construction workers called Hunian Pekerja Konstruksi (HPK). However, this housing may prove quantitatively inadequate in both the short and long run. The housing is the responsibility of the Nusantara Capital City Authority and Badan Usaha Milik Otorita (BUMO), with the Ministry of Public Works and Housing assisting them in constructing it. Developing housing for construction workers is a commendable step by the Indonesian government that can lower the likelihood of informal settlement. Nevertheless, it is also important to acknowledge the challenges that remain to be addressed. Keywords: Indonesia, New Capital City, Nusantara, Housing for Construction Workers
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Any Port in a Storm: UK Freeports as a Typology of Governance</title>
<link href="https://hdl.handle.net/1721.1/152454" rel="alternate"/>
<author>
<name>Maddox, Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/152454</id>
<updated>2023-10-19T03:28:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Any Port in a Storm: UK Freeports as a Typology of Governance
Maddox, Jay
In 2019, the United Kingdom announced a new initiative to set up several free trade zones across its four nations. While many analysts have viewed freeports through the lens of the UK's messy divorce from the European Union, it is crucial to understand the policy in the context of the country's regional planning and local development agenda. I argue that the freeport program represents an experimental typology of local governance developed by central authorities. By extending a novel combination of benefits and powers to local governmental bodies, this typology seeks to enable self-propelled “growth” and revitalization that is not dependent on financial transfers from the central government. In the highly centralized context of England, the freeport governance typology does this by transforming local governmental bodies into empowered economic actors. Far from circumventing central government control, freeports are a centrally guided attempt to create a new form of governance that redefines the role of local and regional authorities. Lastly, I argue that this typology must first be understood as an amalgamation of several regulatory and fiscal features that have been developed over several decades, beginning with the election of Margaret Thatcher.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Universities, Communities, and Service-Learning for Urban Development: Rethinking the Work of Kaya Clínica in Maputo, Mozambique</title>
<link href="https://hdl.handle.net/1721.1/152449" rel="alternate"/>
<author>
<name>Mapure, Idélcia Rebeca Domingos</name>
</author>
<id>https://hdl.handle.net/1721.1/152449</id>
<updated>2023-10-19T03:25:00Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Universities, Communities, and Service-Learning for Urban Development: Rethinking the Work of Kaya Clínica in Maputo, Mozambique
Mapure, Idélcia Rebeca Domingos
Urban areas in low-income countries are confronted with major challenges, including poverty, urban deterioration, unemployment, and informality. With the low capacity of local governments to respond to the increasing demands of a growing urban population, anchor institutions are called upon to leverage their permanent strategic positions to contribute to social and economic development in their areas of influence. Universities are distinctive anchor institutions with a strategic position to use their expertise and resources to drive change in the communities in which they operate, mainly for the underserved. However, academic-local community relationships are historically rooted in extractive practices, with little or no contribution to improving local people’s lives. This thesis explores alternatives for building strong and mutually beneficial collaborations between universities and their surrounding neighbors that can effectively create long-lasting community welfare through service-learning. Through service-learning, university students gain valuable experience for their careers, faculty learn to improve their curriculum to match emerging needs and advance their scholarship, and local communities get the support they need to address an issue they lack the expertise or resources to act on independently. &#13;
&#13;
In this thesis, I specifically examine the work of Universidade Eduardo Mondlane (UEM) in the Mozambican capital of Maputo and its relationship with the informal communities of the George Dimitrov neighborhood through a service-learning organization called Kaya Clínica. Kaya Clínica aims to address housing and urbanization challenges in underserved communities, and I use its work to identify strategies the university can implement to improve its contribution to generating long-lasting welfare for the communities it works with. Through semi-structured interviews and focus group discussions with different people in the neighborhood, university students, and professors, I find that an effective academic-community partnership in this context requires a new paradigm of trust and respect between the university and the communities being studied: one that promotes fairness and equality in deliberation, mutual support featuring co-production and dissemination, and the use of knowledge to address real-life needs. More time and dedicated effort are needed to build strong, lasting connections and collaborations between UEM and local communities. This involves active listening, demands effective participation, entails continuing negotiation, and calls for solid win-win strategies to be defined and co-designed from the start.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Coordination Imperative: A Comprehensive Approach to Align Customer Demand and Inventory Management for Superior Customer Experience in Retail</title>
<link href="https://hdl.handle.net/1721.1/152447" rel="alternate"/>
<author>
<name>Kondo, Koichiro</name>
</author>
<author>
<name>Vicente, Ângelo José Bergamaschi</name>
</author>
<id>https://hdl.handle.net/1721.1/152447</id>
<updated>2023-10-19T04:00:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Coordination Imperative: A Comprehensive Approach to Align Customer Demand and Inventory Management for Superior Customer Experience in Retail
Kondo, Koichiro; Vicente, Ângelo José Bergamaschi
The rapid growth of customers traversing different channels during their buying journey presents both opportunities and challenges for organizations. Fragmented decision-making and siloed communication between marketing and supply chain teams can lead to inefficiencies and negatively impact customer experience. This thesis proposes a conceptual framework to align customer demand and inventory management. The framework is examined in the empirical context of the fashion industry, focusing on the US market with insights from Brazil and Japan. By introducing a PDCA (Plan-Do-Check-Act) process and cross-functional metrics, such as NPS (Net Promoter Score) and OTIF (On Time In Full), this study seeks to encourage cooperation between departments and coalesce decision-making around enhancing customer experience. The research explores the quantitative and qualitative aspects of the retail industry, focusing on fashion, and identifies opportunities to leverage technology, marketing, and supply chain management for improved performance. Our study validated the existence of siloed operations and the drawbacks silos cause in today’s business. Through 16 expert interviews, we identify three key factors that contribute to silos between marketing and supply chain: technology fragmentation, lack of integrated KPIs, and the complexity of multiple channels. Further, the interviews helped uncover how the experts tackled these challenges in daily operations.&#13;
&#13;
The expected deliverable is a framework that combines analyzed customer journeys with cross-functional metrics to support decision-makers in day-to-day operations. The goal is to deliver a world-class customer experience by aligning decisions to coordinate actions. There is potential to incorporate machine learning to suggest experiments and further optimize the value retailers deliver to customers through multiple channels. Our conceptual framework applies to various businesses struggling with coordination between demand generation and fulfillment.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simplification of radicals with applications to solving polynomial equations.</title>
<link href="https://hdl.handle.net/1721.1/152394" rel="alternate"/>
<author>
<name>Zippel, R. E. (Richard E.), 1952-</name>
</author>
<id>https://hdl.handle.net/1721.1/152394</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Simplification of radicals with applications to solving polynomial equations.
Zippel, R. E. (Richard E.), 1952-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977; Bibliography : leaves 29-30.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Thesis - ReStacks</title>
<link href="https://hdl.handle.net/1721.1/152127" rel="alternate"/>
<author>
<name>Perryman, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/152127</id>
<updated>2023-09-14T03:12:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design Thesis - ReStacks
Perryman, Benjamin
ReStacks is a full-service real estate development company that specializes in the construction and management of modular garages and accessory dwelling units. This design thesis explores how garages and accessory dwelling units can become sustainable infrastructure for housing, transportation, and the sharing economy in the car-dependent urban geography of Columbus, OH.&#13;
&#13;
Growing concerns about climate change and household economics sometimes pull in competing directions. While affordable housing and homeownership have become increasingly inaccessible, construction and the built environment represent over 40% of global fossil fuel emissions. Meanwhile, many existing homeowners cannot afford to upgrade their homes and transportation to reduce their contribution to those emissions.&#13;
&#13;
In many Columbus neighborhoods, there are opportunities to address these challenges. Behind single-family homes, there are long-reaching alley systems which are lined with vacant lots and defunct infrastructure. Initially used for the storage of livestock and carriages, many of these alleys fell into disrepair after the introduction of gas-powered cars. Through partnership with homeowners, ReStacks has begun building modular garages and accessory dwelling units to introduce cost-effective and ecologically-sensitive infrastructure for affordable housing, electric vehicles, and the sharing economy; high utility in a small footprint.&#13;
&#13;
ReStacks’ design philosophy was derived from the concept of three-pronged sustainability, and its approach involves dissecting the social, economic, and ecological dimensions of intervention to provide more diverse value for consumers, local economies, and the environment. The core principle of its business strategy is to drive market value through overlapping, sustainable benefit. This document will highlight the design progression of ReStacks’ modular garages and accessory dwelling units as well as explore their potential profitability and impact.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical In-Process Monitoring Tools for Laser Powder Bed Fusion: Verifying Powder Area Coverage of a Layer Setup</title>
<link href="https://hdl.handle.net/1721.1/152122" rel="alternate"/>
<author>
<name>Modes, Jane Ellen</name>
</author>
<id>https://hdl.handle.net/1721.1/152122</id>
<updated>2023-09-14T03:26:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Optical In-Process Monitoring Tools for Laser Powder Bed Fusion: Verifying Powder Area Coverage of a Layer Setup
Modes, Jane Ellen
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be created with traditional manufacturing methods and is widely used in many industries. With any additive manufacturing process, achieving a successful layer setup is critical to the quality of the final part. Currently, no machine can provide objective evidence of a proper layer setup with in-process monitoring equipment. The strategy of this project was to utilize various sensors in tandem with the camera available within the machine to distinguish between passing and failing layers in a quantifiable manner. This thesis aimed to test the 3D printer’s on-machine camera and several off-the-shelf cameras (Spectral Instruments RVT100, GoPro HERO7 Black, STPCTOU Wireless Digital Microscope, and iPhone 12 camera) to determine which, if any, were suitable for quantifying a layer setup through powder area coverage. Several tests were performed to assess camera repeatability across one or several locations by analyzing image intensity values in ImageJ. Another test was performed to determine whether there was a linear correlation between layer thickness and image intensity. The cumulative results from all tests indicate that the on-machine camera is the best option of all cameras tested for this application.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wireless Sub-Cellular Sized Stimulators for Minimally Invasive Deep Brain Stimulation with High Spatiotemporal Resolution</title>
<link href="https://hdl.handle.net/1721.1/152114" rel="alternate"/>
<author>
<name>Cai, Yubin</name>
</author>
<id>https://hdl.handle.net/1721.1/152114</id>
<updated>2023-09-14T03:23:40Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Wireless Sub-Cellular Sized Stimulators for Minimally Invasive Deep Brain Stimulation with High Spatiotemporal Resolution
Cai, Yubin
Deep brain stimulation (DBS) has become a mainstream treatment for motor disorders associated with neurodegenerative conditions such as Parkinson’s disease (PD). The DBS device, often called the “pacemaker for the brain”, utilizes surgically implanted leads with 4-8 contact points in the target area. The implanted electrodes are then used to deliver high-frequency (&gt;85 Hz) electrical stimulation via a pulse generator. In properly selected patients, DBS is proven to be remarkably effective, alleviating motor symptoms that either do not fully respond to medication (such as tremor) or are caused by it (levodopa-induced dyskinesia). However, current DBS technology comes with inherent limitations and problems, including: 1) the need for a large invasive foreign body (the electrode), which can cause lead infections; 2) low coverage of the entire movement-related territory in the target nucleus; and 3) adverse side effects, such as muscle twitches and sensory complaints, caused by current diffusing into surrounding tissue.&#13;
&#13;
In this work, we propose to develop a new paradigm of electrical neuromodulation, based on injectable micron-sized stimulator devices, which, once deployed, will allow tunable stimulation of the injected territory. The individual stimulators will produce highly localized stimulation effects, minimizing current spread to neighboring structures. Since the stimulator devices will be activated by a super-low-frequency (SLF) external magnetic field source, the procedure would not require placement of permanent wired leads in the brain. Additionally, given that a lightweight, low-power wearable coil array will power the stimulator devices, continuous portable DBS treatment of Parkinson's disease will become possible for the first time.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geomorphic Concrete : Material and fabrication strategies for heterogeneous concrete morphology</title>
<link href="https://hdl.handle.net/1721.1/152111" rel="alternate"/>
<author>
<name>Kim, Il Hwan</name>
</author>
<id>https://hdl.handle.net/1721.1/152111</id>
<updated>2023-09-14T03:21:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Geomorphic Concrete : Material and fabrication strategies for heterogeneous concrete morphology
Kim, Il Hwan
Given evidence of climate change and the global supply chain crisis, it is no longer viable to continuously exploit nature and expect the global industrial system to remain perpetually dependable. We have to prepare for a world that is not entirely controllable or measurable, which is an inevitable architectural condition of the future. This thesis introduces geomorphic concrete, an alternative design approach and construction methodology closely aligned with geological formation processes, incorporating natural forces as collaborators in concrete fabrication.&#13;
&#13;
Geomorphic concrete is an alternate paradigm of material-based design and construction methodology achieved by exploiting how variations in material properties respond to elemental forces. Nature shapes geological formations through a diverse array of materials and natural forces. For example, sedimentary rock’s stratified planes have varied grain, strength, and other characteristics, resulting in unique shapes and patterns through natural processes such as weathering, erosion, and sedimentation. A series of experiments in this thesis demonstrates how to design and construct concrete structures by mimicking the natural geological formation process, instead of relying solely on modernistic geometry-driven design.&#13;
&#13;
This methodology utilizes an injection-printing fabrication technique, inserting reinforcement and suspension materials in liquid concrete to produce cast objects with varying material properties that erode, break, reconfigure, and recover through engagement with natural agents. The thesis showcases three designs that exemplify geomorphic concrete: a material-based structure design by fabricating heterogeneous concrete; a concrete structure printed into granular formwork that erodes due to gravity; and a concrete object that evolves over time by dissolving the injected suspension material. &#13;
&#13;
This thesis contributes to acknowledging geological formation as an ecological process and to developing an architectural fabrication concept that embraces elemental forces and material changes as agents in the building process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Displacement Sensors to Characterize Critical Powder Layers in Laser Powder Bed Fusion</title>
<link href="https://hdl.handle.net/1721.1/152108" rel="alternate"/>
<author>
<name>Wittenbrink, Jayna</name>
</author>
<id>https://hdl.handle.net/1721.1/152108</id>
<updated>2023-09-14T03:34:28Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Using Displacement Sensors to Characterize Critical Powder Layers in Laser Powder Bed Fusion
Wittenbrink, Jayna
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be created with traditional manufacturing methods, though process quality tools are still undeveloped. Uniform powder layers are critical to the final part quality for such processes, and currently, no machine can provide objective evidence of a proper powder layer with in-process monitoring equipment; instead, proper powder layers are verified by unquantifiable means. The strategy of this project was to use various sensors in tandem with the camera available within the machine to objectively distinguish between passing and failing powder layers. The specific goal of this thesis was to characterize the actual powder thickness and percent coverage with laser displacement sensors by subtracting before and after powder-deposition scans of the build plate. The laser line scanner showed promising results, but the variation within the process and the data alignment strategies used were not sufficient to provide a concrete correlation for powder layer characterization. This project nonetheless sets the groundwork for further work to characterize powder layers more objectively. The rest of the project relied on the same unquantifiable means currently used to verify powder layers: intensity values from the onboard camera’s images successfully distinguished between powder layers and can serve as a powder layer verification tool.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Construction of a fast-scanning far-infrared Fabry-Perot interferometer.</title>
<link href="https://hdl.handle.net/1721.1/152093" rel="alternate"/>
<author>
<name>Komm, David Serkes.</name>
</author>
<id>https://hdl.handle.net/1721.1/152093</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Construction of a fast-scanning far-infrared Fabry-Perot interferometer.
Komm, David Serkes.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long-span bridge live loads.</title>
<link href="https://hdl.handle.net/1721.1/152089" rel="alternate"/>
<author>
<name>Kram, Norman Simon.</name>
</author>
<id>https://hdl.handle.net/1721.1/152089</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Long-span bridge live loads.
Kram, Norman Simon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Bibliography: leaves 106-107.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A systematic analysis of PRT, express bus, and rail transit systems.</title>
<link href="https://hdl.handle.net/1721.1/152088" rel="alternate"/>
<author>
<name>Kocur, George.</name>
</author>
<id>https://hdl.handle.net/1721.1/152088</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">A systematic analysis of PRT, express bus, and rail transit systems.
Kocur, George.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Number 141 used twice in paging.; Bibliography: leaves 404-407.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design manual for tall stacks.</title>
<link href="https://hdl.handle.net/1721.1/152086" rel="alternate"/>
<author>
<name>Kranz, William Thomson.</name>
</author>
<id>https://hdl.handle.net/1721.1/152086</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Design manual for tall stacks.
Kranz, William Thomson.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of turbulent flame propagation</title>
<link href="https://hdl.handle.net/1721.1/152082" rel="alternate"/>
<author>
<name>McNutt, Dinah Georgianna.</name>
</author>
<id>https://hdl.handle.net/1721.1/152082</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">A study of turbulent flame propagation
McNutt, Dinah Georgianna.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A sense of touch for a mechanical hand</title>
<link href="https://hdl.handle.net/1721.1/152081" rel="alternate"/>
<author>
<name>Kappl, Joseph J.</name>
</author>
<id>https://hdl.handle.net/1721.1/152081</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">A sense of touch for a mechanical hand
Kappl, Joseph J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1963; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Velocity modulation of electromagnetic waves</title>
<link href="https://hdl.handle.net/1721.1/152080" rel="alternate"/>
<author>
<name>Morgenthaler, Frederic R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152080</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Velocity modulation of electromagnetic waves
Morgenthaler, Frederic R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1956; Bibliography: leaves 85-86.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Cartographies of Transnational Infrastructure-led Urbanization</title>
<link href="https://hdl.handle.net/1721.1/152027" rel="alternate"/>
<author>
<name>Shoaib, Jehanzeb</name>
</author>
<id>https://hdl.handle.net/1721.1/152027</id>
<updated>2023-09-01T03:05:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Critical Cartographies of Transnational Infrastructure-led Urbanization
Shoaib, Jehanzeb
This thesis is a manifesto that traverses the binaries of land and sea to mediate between the preconceived notions of boundary and territoriality. The contextual landscape of this mediation is the littoral territory of Gwadar, in the southern coastal region of Baluchistan, in Pakistan. This port city acts as a gateway to the China-Pakistan Economic Corridor and, because of its deep-sea edge, has been subjected to China’s infrastructure-led urbanization. As a result, the local fishing community (numbering close to 36,000) and its ecosystem have been impacted and displaced, triggering large-scale protests that have been censored by the state-run media. This thesis is thus a manifesto that gives voice to the littoral landscape and the indigenous community, inviting participatory forms of dialogue on the role of design and its agency. At issue here is the conception of Gwadar as an edge on which a highway has been built, restricting the fishing community’s access to the sea. For this community, known as nomads of the sea, Gwadar is not an edge but a gateway to the sea, just as its name implies: an amalgamation of two Balochi words, guad meaning wind and dar meaning gateway, together the gateway of winds. By providing evidence of their territorial claims through critical cartographic methods of ethnography, photography, and mapping, this thesis frames the spatial-temporal thresholds of the littoral, which, like the winds, morph with time. The manifesto argues for viewing coastal landscapes as thresholds rather than mere coastlines. Moreover, it proposes re-learning from the indigenous collectives of rural commons toward creating a subsistent coastal community by circulating a zine pamphlet that legitimizes the claims of the indigenous inhabitants of the littoral landscapes, both human and non-human.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gaming Like a State: Historical Strategy Game Victoria and "Keyboard Politics" in China</title>
<link href="https://hdl.handle.net/1721.1/152026" rel="alternate"/>
<author>
<name>Wang, Jiaqi</name>
</author>
<id>https://hdl.handle.net/1721.1/152026</id>
<updated>2023-09-01T03:26:06Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Gaming Like a State: Historical Strategy Game Victoria and "Keyboard Politics" in China
Wang, Jiaqi
This thesis explores the role of historical strategy games as a platform for "keyboard politics" in contemporary China, where traditional channels for political expression are tightly controlled. Specifically, the study focuses on Victoria, a video game that allows one to "game like a state" in its simulation of global history across the long nineteenth century. By examining the game design and analyzing paratexts from the Chinese player community—including forum discussions, game reviews, video recordings, and user-created mods—this research investigates the interaction between the game's technological affordances and the players, who bring their experiences, memories, and cultural milieu into the game. The thesis further examines how gameplay in the virtual world reflects and shapes the political tendencies of young Chinese players: grassroots leftism, nationalism, and the cynicism behind the "lying-flat" culture. Finally, from this local encounter between a Swedish video game and Chinese players, the thesis aims to shed light on the global circuit of techno-cultural artifacts beyond a Eurocentric perspective.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Optimization of a D2C Supply Chain Subject to Changing Cost Conditions and Consumer Preferences</title>
<link href="https://hdl.handle.net/1721.1/152025" rel="alternate"/>
<author>
<name>Sarasua, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/152025</id>
<updated>2023-09-01T03:33:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Network Optimization of a D2C Supply Chain Subject to Changing Cost Conditions and Consumer Preferences
Sarasua, Julie
This thesis examines and models the fulfillment operations of a medium-sized US direct-to-consumer (D2C) healthcare distributor that competes with lower-priced alternatives (e.g., Amazon) through high service levels and deep customer relationships. Recent inflationary trends have pushed the company to seek new ways to reduce cost. Therefore, this thesis focuses on methods to lower operational cost via network optimization.&#13;
&#13;
This work attempts to solve the network layout problem through two primary approaches: (1) integer programming using the Gurobi optimization Python package and (2) scenario analysis modeling the cost of feasible configurations of the uncapacitated facility layout problem under the company’s existing order allocation logic. Both approaches yield similar solutions, with the second deemed more interpretable by leadership and more aligned with existing IT logic in terms of order-facility allocation. Both models show a decrease in the total landed cost of fulfillment relative to the base case. Qualitative considerations are discussed, as well as model sensitivity to changing environmental inputs (e.g., population shifts and changes in cost).&#13;
&#13;
A concurrent project examines reducing shipping expense by incentivizing subscription-based customers to order less frequently (e.g., consolidating two orders into a single shipment), thereby maintaining revenue while lowering shipping costs. Two proposed solutions are examined: (1) existing incentives and (2) new incentives. The first was tested and showed a preliminary positive impact on cost.&#13;
&#13;
The company referenced in this work has been renamed as “DistroCo” for privacy. Sensitive figures, data, and information may be redacted or masked.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Gait Muscle-Reflexes Through Hindlimb Characterization in Rodents</title>
<link href="https://hdl.handle.net/1721.1/152024" rel="alternate"/>
<author>
<name>Guvenilir, Ayse Angela M</name>
</author>
<id>https://hdl.handle.net/1721.1/152024</id>
<updated>2023-09-01T03:01:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Gait Muscle-Reflexes Through Hindlimb Characterization in Rodents
Guvenilir, Ayse Angela M
The complexity of gait modeling ranges from simple models that represent the leg as two linear springs to more complex ones that comprise muscle-tendons and spinal muscle-reflexes. However, the more complex models lack a strong empirical basis for muscle-reflex definition and function. To garner stronger evidence for the reflex component of muscle function in gait, we modeled gait muscle-reflexes through experimental limb characterization in rodents. We designed and implemented an animal skin port with multiple electrodes to measure electromyography, muscle fascicle length, and muscle force. We conducted rat surgeries to implant this skin port and used the device to collect in vivo data across various terrains during walking trials. From collaborators’ in vivo data (n = 4 rodents), we implemented multiple linear reflex models with an r² ranging from 0.75 to 0.87 between measured and predicted muscle activations, consistent with predictions from muscle models found in the literature. We also found that the dominant contributor to the reflex in the medial gastrocnemius muscle is positive force feedback. Future work could explore a similar paradigm in the tibialis anterior muscle, an antagonist to the medial gastrocnemius, and explore higher-order nonlinear muscle-reflex models.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pharmamusicology: Exploring the Impact of Music on the Physiology and Psychology of Anxiety Disorders and Well-Being</title>
<link href="https://hdl.handle.net/1721.1/152023" rel="alternate"/>
<author>
<name>Lecamwasam, Kimaya H.M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152023</id>
<updated>2023-09-01T03:23:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Pharmamusicology: Exploring the Impact of Music on the Physiology and Psychology of Anxiety Disorders and Well-Being
Lecamwasam, Kimaya H.M.
This thesis investigates and assesses the impact of personalized approaches to music-based mental health and well-being support systems grounded in physiology and/or psychology, through analysis of biometric and self-report data. This work is divided into two streams, with four projects classified into the category of “Music as Expression" and one as "Music as Intervention." The first project explores the impact of music composition and performance on self-reported well-being via a "well-being workshop" in which participants reported that the music-based activity was engaging and beneficial. The following three projects explored the relationship between live music performance and well-being through data collection during the world premieres of The Distance Between Us, Breathing Together, and the pilot of the Wellbeing Concerts at Carnegie Hall. The Wellbeing Concerts at Carnegie Hall and The Distance Between Us projects yielded novel audience-survey methods, such as the "In-Concert Well-Being and Affect Survey (ICWAS)," that were informed by the exploratory findings from the performance of Breathing Together. The pilot data, while limited, demonstrate the promise of these approaches and call for further study. While composing The Distance Between Us, I also created and used a method of health-informed notation that is included in this thesis, alongside an archival recording of the piece. Finally, the fifth project, titled "Investigating the Physiological and Psychological Effect of an Interactive Musical Interface for Stress and Anxiety Reduction," assesses the utility of music in reducing the physiological and psychological symptoms of anxiety. Pilot results show a significant reduction in self-reported stress, while the self-reported anxiety and biometric results highlight improvements to pursue in future protocols.
Together, these five projects serve as first steps towards a nuanced understanding of personalized applications of music-based strategies for mental health and well-being promotion and assessment, highlighting important findings and implications for future research and practice.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Study and Early-Stage Structural Design for Tall Timber Buildings</title>
<link href="https://hdl.handle.net/1721.1/152021" rel="alternate"/>
<author>
<name>Stark, John A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152021</id>
<updated>2023-09-01T03:15:49Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Parametric Study and Early-Stage Structural Design for Tall Timber Buildings
Stark, John A.
Buildings today account for a substantial portion of global greenhouse gas emissions due to the use of carbon-intensive materials such as steel and concrete. Timber, a low-embodied-carbon alternative to these materials, has been gaining traction in the tall building industry. However, timber structures have so far been limited in height by code restrictions, a lack of research and design guidance, cost, and fireproofing requirements. Research has shown that tall timber buildings are feasible but are highly susceptible to large overturning moments and drift due to timber’s light weight and lower stiffness. One solution, and a growing trend, for creating better-performing tall timber buildings has been to design a hybrid structure using a mixture of timber, reinforced concrete, and/or steel in the structural system. Nonetheless, there is a lack of research and guidance taking a holistic, comparative view of the efficiencies of these different timber structural systems at taller heights. With material quantities determining the economics and efficiency of a tall building, and embodied carbon determining its carbon footprint, this thesis conducts a parametric study to evaluate the efficiencies of multiple tall timber structural systems ranging from 10-50 stories and ultimately creates the first timber premium-for-height graph. Results show that core and winged-wall systems, as well as braced systems, are consistently efficient up to 50 stories in both material quantities and embodied carbon. The timber premium-for-height curve shows that the material quantity of timber required for a safe building design increases linearly when designing for gravity loads, increases linearly when designing for lateral strength, and increases exponentially when designing for lateral serviceability. For gravity loads, the quantity of timber needed is 0.65 cu.ft/sf for 10 stories, increasing linearly to 0.80 cu.ft/sf for 50 stories.
For lateral loads, the quantity of timber needed is 0.68 cu.ft/sf for 10 stories, increasing exponentially to 1.15 cu.ft/sf for 50 stories. The premium-for-height curve also shows that all-timber building designs are controlled by lateral strength up to 20 stories, whereas from 20-50 stories designs are controlled by lateral drift, meaning a stiffness-controlled design. Timber-hybrid systems can be used for additional stiffness but result in a 3-12% increase in embodied carbon compared to all-timber options, excluding the carbon sequestered in the timber. Ultimately, these results can inform early-stage structural design of tall timber buildings and help promote a sustainable future.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of vdW Magnetic Materials for Spintronic Applications</title>
<link href="https://hdl.handle.net/1721.1/152020" rel="alternate"/>
<author>
<name>Kajale, Shivam Nitin</name>
</author>
<id>https://hdl.handle.net/1721.1/152020</id>
<updated>2023-09-01T03:01:08Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Study of vdW Magnetic Materials for Spintronic Applications
Kajale, Shivam Nitin
Energy consumption of artificial intelligence (AI) systems is projected to grow at an alarming rate over the next two decades and stands to stress the global energy sector. A way forward is to replace traditional von Neumann computing hardware with technologies like neuromorphic and stochastic computing, which are better suited for AI applications. Here, I study van der Waals (vdW) magnetic materials for their application in developing spintronic devices to form the building blocks of neuromorphic and stochastic computing architectures. The use of correlated systems like ferromagnets provides a path toward low-energy device switching, while the 2D nature of the materials offers an avenue for building spintronic devices with maximum dimensional scalability and strong prospects for highly energy-efficient mechanisms of switching magnetism. A reliable protocol for fabricating and characterising devices with air-sensitive vdW magnetic materials has been developed, including the electrochemical exfoliation of bulk vdW crystals, the design and construction of a 2D material transfer setup, nanofabrication of devices using lithography, and magneto-transport measurements. This work will serve as a strong foundation for future work on developing spin-valve devices with vdW materials and exploring energy-efficient modes of switching magnetism in them.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>EMG methods for prosthesis ankle-subtalar free-space control</title>
<link href="https://hdl.handle.net/1721.1/152019" rel="alternate"/>
<author>
<name>Qiao, Junqing</name>
</author>
<id>https://hdl.handle.net/1721.1/152019</id>
<updated>2023-09-01T03:40:50Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">EMG methods for prosthesis ankle-subtalar free-space control
Qiao, Junqing
EMG-based prosthetic joint controllers have been an active research field for more than fifty years. However, several challenges remain to be addressed[9]. Electrode positioning, controller calibration, and controllers’ linear-approximation error are among the most challenging problems.

This thesis introduces three methods to address these problems: (1) a non-negative blind source separation algorithm named non-negative orthogonal decomposition (NOD), which aims to replace non-negative matrix factorization (NMF) for muscle motion base extraction. NOD recovers the source signals by finding the borders of the input signal and translating those borders onto the coordinate axes; the translated signals are the recovered signals. (2) An unsupervised algorithm for generating joint trajectories from EMG signals in reciprocating movements; the EMG signal and trajectory can then be used to calibrate an EMG-based prosthesis joint controller. (3) An innovative EMG-to-joint-position controller that uses neural networks to compensate for the nonlinearity of the well-known bilinear model[2].

The NOD algorithm successfully extracted motion bases from the EMG signals; compared with NMF, the motion bases are more independent and stable. The minimum-jerk-based trajectory generator produced smooth, biomimetic trajectories on intact subjects that are close to the ground truth collected from a goniometer. The third model also shows considerable improvement in joint-angle accuracy over the linear muscle model.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Affective Responses to Virtual Spaces Using Physiological Sensors and Verbal Descriptions</title>
<link href="https://hdl.handle.net/1721.1/152018" rel="alternate"/>
<author>
<name>Tu, Han</name>
</author>
<id>https://hdl.handle.net/1721.1/152018</id>
<updated>2023-09-01T03:01:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analyzing Affective Responses to Virtual Spaces Using Physiological Sensors and Verbal Descriptions
Tu, Han
Architects design spaces with assumptions about how their designs will affect users emotionally. These assumptions primarily rely on professional intuition and subjective experience. This thesis utilizes wearable sensors to collect and analyze human responses, especially emotions, when experiencing virtual spaces. It tests whether the collected data can be used to predict users’ emotional responses to a spatial design in VR and to help architects design in a more informed and data-driven way.

To achieve this, data from 86 individuals across four experiments, each placed in a simulated environment built from either a synthetic or a scanned model, were analyzed. Collected data include verbal descriptions, recorded visual targets, electroencephalography (EEG), electrodermal activity (EDA), and heart rate. The study consists of three parts: (1) design and build a VR environment with wearable sensors to collect data; (2) conduct experiments to collect participants’ physiological responses, verbal descriptions, and visual target data in the VR spaces; and (3) analyze the collected data to confirm that they relate to spatial design.

The experiments demonstrated relationships between physiological data and spatial parameters, such as between EEG calm state and spatial height, and higher vigilance when wandering from a relatively tall space into a relatively short one. In addition, through verbal description analysis, we found an association between the physiological data and the spatial sequences and sounds. This evidence of correlations between physiological data and spatial or verbal descriptions is a small step toward a toolkit to assist designers in measuring user experience in VR environments.

This methodology offers a useful way to measure emotion in virtual architectural designs using multiple physiological sensors and verbal descriptions. It points to a potential future application that combines physiological metrics with AI methods to inform designers of users’ emotional experiences before a design is finished.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limits of Expression: On Touch, Emotion, and Communication</title>
<link href="https://hdl.handle.net/1721.1/152017" rel="alternate"/>
<author>
<name>Tsogbe, Deborah</name>
</author>
<id>https://hdl.handle.net/1721.1/152017</id>
<updated>2023-09-01T03:25:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Limits of Expression: On Touch, Emotion, and Communication
Tsogbe, Deborah
Touch, the first sense to develop in the womb, is fundamental to human experience. The tactile sense allows us to investigate the world by providing a framework for understanding it through its relationship to our body. Tactile methods are capable of expressing concepts beyond language. The most effective and meaningful of these expressions are often emotionally charged: they concern the unspeakable sentiment behind many of our social interactions, the interpretation of which lends a certain depth to our relationships. Beyond this, we often employ self-touch gestures, consciously or unconsciously. Through these gestures we communicate with ourselves: to self-soothe, as a nervous habit, or as a mindless fidget. Touch expressions can be deployed in countless ways, and we have only begun to understand them. In parallel, we have developed countless methods of expressing ourselves through digital means that subtract some sensory experience from communication. Perhaps the perpetual digital togetherness afforded by the networks we live in has dulled our sensitivities to the physical realm of human experience and all that it embodies. As we continue to move further from physical togetherness, we may lose an understanding of this emotional depth, or lose touch with ourselves. The intention of this research is to marry physical and digital means of communication in order to understand the unspoken ways in which we are attuned to our inner emotional states and the physical behaviors we use to express and regulate those states. In this research, I craft a garment embedded with computational means, so that we might develop a methodology for observing how the body understands and expresses itself through touch, and in turn how it communicates with other bodies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recursive Robotic Assemblers</title>
<link href="https://hdl.handle.net/1721.1/152015" rel="alternate"/>
<author>
<name>Smith, Miana M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152015</id>
<updated>2023-09-01T03:04:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Recursive Robotic Assemblers
Smith, Miana M.
Biology efficiently builds across size scales: at the scale of tens of nanometers, ribosomes assemble more ribosomes, enabling the highly parallelized production of proteins that make up living systems ranging from prokaryotes at the scale of microns to blue whales at the scale of tens of meters. At a level above ribosomes, we might consider cell division as another type of assembly process: as the size scale of the assembled parts grows, the assemblers also grow. This represents a recursive and hierarchical assembly process. In contrast, current robotic and CNC construction processes, though often parallelized, are constrained to pre-set, limited assembly rates and sizes. Inspired by biology, this thesis considers how we might develop recursive and hierarchical robotic assembly systems. That is, similar to a biological assembly system, can we develop a robotic assembly system that is able to build robots, structures, and robots integrated in structures?

To this end, we decompose both the robot and the structures into a set of compatible building blocks, or voxels, that can assemble and reassemble into more complex structures. The decomposition of the robot is based on a “functional voxel” that routes electrical signals and power in addition to mechanical forces. Robotic modules are made by incorporating actuation; these then assemble into reconfigurable robots using a reversible solder joint. An additional set of construction voxels, which do not contain electrical features, enables the robot to assemble higher-performance structures. This work exists at the intersection of modular robotics and collective robotic construction, prioritizing scalability: our ability to produce many robots that then build useful structures.

A set of functional voxels, robot modules, and construction voxels has been developed and characterized. The robotic system is characterized by its function: the robot is able to assemble another robot, and it is able to assemble construction voxels into small structures. The construction voxel system is characterized using mechanical testing, which verifies that the material system is performant. Together, this demonstrates all the elements required for recursive robotic assembly, in which a robot is able to assemble both more robots and larger structures.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering, Learning, and Exploiting Visual Cues</title>
<link href="https://hdl.handle.net/1721.1/152014" rel="alternate"/>
<author>
<name>Tiwary, Kushagra</name>
</author>
<id>https://hdl.handle.net/1721.1/152014</id>
<updated>2023-09-01T03:28:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Discovering, Learning, and Exploiting Visual Cues
Tiwary, Kushagra
Animals have evolved over millions of years to exploit the faintest visual cues for perception, navigation, and survival. Complex and intricate vision systems found in animals, such as bee eyes, exploit cues like the polarization of light relative to the Sun’s position to navigate and process motion at one three-hundredth of a second. In humans, the evolution of the eyes and the processing of visual cues are also tightly intertwined. Babies develop depth perception at around six months, are often scared of their own shadows, and confuse their reflections with the real world. As the infant matures into an adult, they intuitively learn from experience how these cues provide valuable hidden information about their environment and can be exploited for tasks like depth perception and driving.

Inspired by our use of visual cues, this thesis explores visual cues in the modern context of data-driven imaging techniques. We first explore how visual cues can be learned and exploited by combining physics-based forward models with data-driven AI systems: we map the space of physics-based and data-driven systems and argue that the future of vision lies at the intersection of the two regimes. Next, we show how shadows can be exploited to image and 3D-reconstruct the hidden parts of a scene. We then exploit multi-view reflections to convert household objects into radiance-field cameras that can image the world from the object's perspective in 5D. This enables applications in occlusion imaging, beyond-field-of-view novel-view synthesis, and depth estimation from objects to their environments.

Finally, we discuss how current approaches rely on humans to design imaging systems that learn and exploit visual cues. As sensing across space, time, and modalities becomes ubiquitous, relying on human-designed systems is not sufficient to build complex vision systems. We therefore propose a technique that combines reinforcement learning with computer vision to automatically learn which cues to exploit to accomplish a task without human intervention. In one such scenario, we show that agents can automatically learn to use multiple cameras and the triangulation cue to estimate the depth of an unknown object in the scene without prior information about the camera, the algorithm, or the object.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resting State Neurophysiology of Agonist-Antagonist Myoneural Interface in Persons with Transtibial Amputation</title>
<link href="https://hdl.handle.net/1721.1/152009" rel="alternate"/>
<author>
<name>Chicos, Laura A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152009</id>
<updated>2023-09-01T03:17:14Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Resting State Neurophysiology of Agonist-Antagonist Myoneural Interface in Persons with Transtibial Amputation
Chicos, Laura A.
The agonist-antagonist myoneural interface (AMI) is a novel amputation surgery that preserves sensorimotor signaling mechanisms between the central and peripheral nervous systems. Our first neuroimaging study of AMI subjects (Srinivasan et al., Sci. Transl. Med. 2020) focused on task-based neural signatures and showed evidence of proprioceptive feedback to the central nervous system. The study of resting-state neural activity helps non-invasively characterize the neural patterns that prime task response. In this first study of resting-state fMRI in AMI subjects, we compared resting-state functional connectivity in patients with transtibial AMI (n=12) and traditional (n=7) amputations, as well as biologically intact control subjects (n=10). We hypothesized that the AMI surgery would induce functional network reorganization that significantly differs from the traditional amputation surgery and more closely resembles the neural configuration of controls. We found AMI subjects to have lower connectivity with salience and motor seed regions compared to traditional amputees. Additionally, for connections affected in traditional amputees, AMI subjects exhibited a connectivity pattern more closely resembling that of controls. Lastly, sensorimotor connectivity in the amputee cohorts was significantly associated with phantom sensation (R²=0.7, p=0.0008). These findings provide researchers and clinicians with a critical mechanistic understanding of the effects of the AMI surgery on the brain at rest, spearheading future research towards improved prosthetic control and embodiment.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>996, moyu, and involution: tech work in the age of platform monopoly</title>
<link href="https://hdl.handle.net/1721.1/152005" rel="alternate"/>
<author>
<name>Tan, Jian Shen (JS)</name>
</author>
<id>https://hdl.handle.net/1721.1/152005</id>
<updated>2023-09-01T03:49:44Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">996, moyu, and involution: tech work in the age of platform monopoly
Tan, Jian Shen (JS)
Over the past decade, consumer internet companies such as Google, Facebook, Tencent, and Alibaba have come to symbolize a new era marked by dynamism, entrepreneurialism, and innovation. This has led us to believe that these internet companies play an outsized role in the creation of economic value today, which, given their profits, may come as no surprise. However, in this present iteration of capitalist production, value is seldom thought about in terms of labor. How can we incorporate workers—and the labor they perform—into our economic analysis of consumer internet platforms? And what can a labor theory of value reveal about how value is created among these platforms? My research looks at the labor process of China’s increasingly disgruntled tech workers between 2019 and 2022, the years of China’s so-called “internet winter.” Against popular conceptions of China’s elite tech workers—smart, hardworking, and entrepreneurial—my research shows that the labor process of tech work in China during these years is rife with contradiction. Workers stay long hours at the office when there’s no work to do, spend hours writing reports for managers who never read them, and compete ruthlessly against each other when there’s nothing to gain. In other words, they seem to be doing what the late David Graeber famously called a bullshit job. By looking at its labor process, my research tells a different story of the consumer internet. Rather than one of dynamism, entrepreneurialism, and innovation, my thesis examines the bullshitization of tech work and explores why the consumer internet has become bloated with nonsense activity.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Novel DNA-Binding Proteins with Generative Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/152003" rel="alternate"/>
<author>
<name>Calman, Ido</name>
</author>
<id>https://hdl.handle.net/1721.1/152003</id>
<updated>2023-09-01T03:12:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Designing Novel DNA-Binding Proteins with Generative Deep Learning
Calman, Ido
Protein-DNA interactions play a critical role in various biological processes, such as gene regulation and genome maintenance. Designing protein backbones specifically tailored for DNA binding remains a challenging task, requiring the exploration of novel computational approaches. This thesis presents a novel framework for generating protein backbones that exhibit affinity for DNA molecules. The proposed methodology leverages Graph Neural Networks (GNNs) for encoding protein structures and diffusion models for conditional sampling. The GNNs capture the intricate relationships between amino acids in the protein backbone, allowing for the effective encoding of structural information relevant to DNA binding. The diffusion models enable the conditional generation of protein backbones, given specific DNA sequences as input. The thesis proposes a Transformer architecture and provides a practical way to diffuse from its protein encoding. The findings from this research have significant implications for the design and engineering of DNA binding proteins, facilitating advancements in fields such as synthetic biology, gene therapy, and drug development.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Empathic Similarity in Personal Narratives</title>
<link href="https://hdl.handle.net/1721.1/151998" rel="alternate"/>
<author>
<name>Shen, Jocelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/151998</id>
<updated>2023-09-01T03:41:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Empathic Similarity in Personal Narratives
Shen, Jocelyn
The most meaningful connections between people are often formed through the expression of shared vulnerability and emotional experiences. Despite the number of ways in which we can connect through technology-mediated platforms today, loneliness, apathy, and mental distress remain pervasive around the world. In this thesis, we aim to use NLP systems to humanize personal experiences by identifying similarity in personal narratives based on empathic resonance rather than raw semantic or lexical similarity.

We present a novel task for the retrieval of empathically similar stories, as well as the first evaluation benchmark for this task. We operationalize empathic similarity in personal stories using insights from social psychology and narratology, and introduce EmpathicStories, a crowdsourced dataset of emotional personal experiences annotated with features based on our framework and with empathic similarity scores between pairs of stories. From our dataset, we provide insights into which features contribute to emotionally resonant stories.

We then compare prompting and fine-tuning large language models (LLMs) for empathic similarity understanding and empathy-reasoning summarization. Our experiments show that our model fine-tuned on EmpathicStories achieves performance boosts on both similarity and retrieval metrics compared to state-of-the-art baselines. We additionally conduct a human evaluation to assess the effect our model has on retrieving stories that users empathize with, comparing its performance against naive semantic-similarity-based retrieval and ChatGPT-generated stories. We find that participants empathized significantly more with stories retrieved by our model than with those from standard, off-the-shelf sentence-transformer retrieval. In addition, our user studies show that participants expressed they would empathize much less with AI-written stories than with human-written stories. Our work sheds light on how LLMs can be used to reason about the interplay of emotions between narrators, with strong implications for a wide range of other recommendation, generation, and dialogue tasks. In doing so, we demonstrate the potential of social-emotional reasoning in NLP systems to foster prosociality, human connection, and empathy between people.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Private Equity Investments for Industrial Carbon Emission Reduction</title>
<link href="https://hdl.handle.net/1721.1/151996" rel="alternate"/>
<author>
<name>Jacobson, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/151996</id>
<updated>2023-09-01T03:41:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimization of Private Equity Investments for Industrial Carbon Emission Reduction
Jacobson, Peter
Industrial businesses are responsible for a significant portion of global greenhouse gas emissions. They must reduce these emissions due to financial, regulatory, and customer pressures, but the pathways to net-zero emissions are complex and costly. For companies held by private equity, this inaction is exacerbated by a lack of clarity on how emissions reduction initiatives influence investment returns. This research presents a carbon footprint calculator and an optimization model to analyze emissions reduction projects. We show that using these two tools as part of a strategic framework can help manufacturing companies create actionable and profitable emission reduction strategies. The carbon footprint calculator identifies a company’s carbon emission sources and measures its carbon footprint, while the optimization model determines the most profitable investments for meeting emissions goals. For the optimization, we use an integer linear programming model that schedules which furnace upgrades to implement each year, with the objective of minimizing total cost to the business. Our results highlight that ancillary process-line benefits from furnace upgrades can significantly increase the profitability of emissions reduction projects. We also show that, in the absence of new technology, a combination of cleaner electricity and carbon pricing will be needed for manufacturing companies to profitably meet science-based emissions reduction goals.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Liberatory Computing Framework: Empowering High School Students to Mitigate Systemic Oppression through Data Activism</title>
<link href="https://hdl.handle.net/1721.1/151995" rel="alternate"/>
<author>
<name>Walker, Raechel</name>
</author>
<id>https://hdl.handle.net/1721.1/151995</id>
<updated>2023-09-01T03:01:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Liberatory Computing Framework: Empowering High School Students to Mitigate Systemic Oppression through Data Activism
Walker, Raechel
One reason for the underrepresentation of African Americans in the field of computing is the lack of opportunities to engage with data science, particularly in ways that empower their communities. Current computing curricula do not teach students how to leverage technical skills in service of projects that are authentic and relevant to the African American communities they claim to assist. While computing has the potential to change the world and has become increasingly integrated into our daily lives, the longstanding reality remains that minoritized groups, including African Americans, are underrepresented in computing fields. Moreover, computing classes often present computing as abstract, neutral, and utopian, disregarding its potential for causing harm.

While it is important for everyone to participate in the process of dismantling a complex system of barriers, I focus specifically on why this goal is of particular relevance to African American students. I highlight Dr. El-Amin’s “liberation tools,” which hold that a sound racial identity, critical consciousness, a liberation-centered achievement identity, collective obligation, and activism skills are essential to preparing African Americans to “fight for” racial liberation. Given that computing classes teach students critical thinking skills to solve complex problems, I argue that computing is well-positioned to incorporate these liberation tools, which teach students how to think in terms of systems, a capacity essential for racial liberation. By expanding the liberation tools, I coin the term “liberatory computing” to reveal how computing curricula can motivate African American students and provide them with practical skills to address the racism embedded in society.

I propose two innovative high school curricula that focus on data activism, integrating lessons on racism with the practical application of robust data science skills to support community organizers in their efforts. In the first data activism program, students utilize their data science and social justice skills to address systemic racism through an independent capstone project. They actively engage in conducting background research on specific instances of systemic racism, identifying relevant data sets, and implementing intersectional data analysis techniques. In the second data activism program, students collaborate with community partners on a data activism project aimed at supporting minoritized groups in the Greater Boston area. This comprehensive research project encompasses various essential components, such as analyzing student projects, conducting surveys and interviews, and seeking insights from community organizers.

Notably, all community organizers expressed their intention to use the students’ data activism projects as a valuable resource to enhance their advocacy efforts. For example, one community organization plans to leverage students’ intersectional data visualizations to advocate for policies and laws that address inland flooding in predominantly African American and low-income communities in Boston. In the second program, surveys indicated a significant increase in the number of students who acknowledge the impact of data science in combating racism, along with an increased ability to employ their academic achievements to mitigate racial injustices. Furthermore, interviews with students who participated in the second program revealed a unanimous desire to incorporate data activism into their future endeavors. Impressively, twelve of seventeen students discussed specific ideas for how they plan to use data science and social justice principles in their forthcoming pursuits.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Building a Pedagogical Agent that Supports Children’s Exploration and Home Literacy Education</title>
<link href="https://hdl.handle.net/1721.1/151994" rel="alternate"/>
<author>
<name>Zhang, Xiajie</name>
</author>
<id>https://hdl.handle.net/1721.1/151994</id>
<updated>2023-09-01T03:31:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Towards Building a Pedagogical Agent that Supports Children’s Exploration and Home Literacy Education
Zhang, Xiajie
Early childhood education remains a critical topic and challenge in the past decades due to its impact on children’s future. Since the late 90s, early childhood and developmental psychology researchers have started promoting a new child-centered, exploration-and-play-focused early childhood education method. Despite the research findings, most countries still employ early childhood curricula focusing on school readiness and competency in kindergarten. Although, there is a new tendency toward using a holistic approach to early childhood education in institutions to promote child-focused learning and exploration. To support children’s exploration and self-promoted learning outside school, pedagogical agents are an under-studied platform. This thesis investigates the possibility of using a pedagogical AI agent to help children’s exploration at home.&#13;
&#13;
I first describe the design and development of an interactive storybook platform with explorable literacy features, through which children are endowed with the resources to learn by themselves. I then describe the robot’s behavior design for the exploration demonstration with a social robot platform. Later, I discuss two different robot interaction paradigms for the delivery of the demonstration behavior.&#13;
&#13;
I evaluate the system with 35 children and a between-group ABA study design. Participants interacted with the robot for 2 to 4 weeks and completed 8 sessions in total. In one study group, children’s exploration was self-guided, with the children having the agency to decide when to interact with the robot peer; the robot reactively delivered demonstration behaviors in response to the child’s initiation. In the other study group, the robot’s behaviors were driven by its personalization algorithm; thus, it autonomously delivered interactions without the child’s initiation.&#13;
&#13;
The data analyses were conducted on two scales: children’s self-explorative behaviors and vocabulary learning. The results show that with a proactive robot peer giving exploration demonstrations, children adapted to be more explorative than children who interacted with the reactive robot peer, even though the robot demonstrated exploration in both conditions. Moreover, we find that children’s exploration is associated with their learning in the robot-guided exploration condition, suggesting that children’s self-explorative behavior in the robot-guided group is learning-oriented and related to their learning growth. When comparing children’s adaptation of exploration, we find a ceiling effect common in child-robot interaction: when children exhibited high exploration in the early phases of the intervention, their exploration growth in succeeding intervention sessions was smaller than that of less explorative children. Finally, we find an association between children’s exploration and the storybook genre, possibly due to their familiarity with the storybook genre and engagement.&#13;
&#13;
In addition, this thesis attempts to understand the educational needs of home literacy programs from parents’ perspectives. After the families had lived with the pedagogical social robot for several weeks, the experimenters conducted semi-structured interviews with the parents. The qualitative analysis and coding of the parents’ interview transcripts suggest common themes in robot design and educational functions that parents want in a long-term pedagogical agent for their home.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PLACEIFY: A data-driven framework for evaluation-by-analogy in early-stage urban analysis and design</title>
<link href="https://hdl.handle.net/1721.1/151993" rel="alternate"/>
<author>
<name>Sanatani, Rohit Priyadarshi</name>
</author>
<id>https://hdl.handle.net/1721.1/151993</id>
<updated>2023-09-01T03:30:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">PLACEIFY: A data-driven framework for evaluation-by-analogy in early-stage urban analysis and design
Sanatani, Rohit Priyadarshi
Within the field of urban design and planning, the explicit parameterization of many complex aspects of urban environments is a challenge. Specifically, the representation of ‘intangible’ experiential and affective qualities is often difficult, which makes the quantitative evaluation of such qualities problematic. Owing to these challenges, representation and evaluation through reference has long remained an essential component of design processes. Through case studies of existing environments, designers often attempt to represent, convey, and evaluate complex qualities of their own designed outcomes. However, there exist very few frameworks for systematic and data-driven referencing in contemporary urban design workflows.&#13;
&#13;
Building on advances in urban information systems, big data analytics, and computer vision, this research demonstrates a data-driven framework for evaluation-by-analogy that allows urban designers, planners, and analysts to systematically reference real urban environments based on designed or envisioned urban qualities. The system generates a database of diverse urban locations across different geographical and cultural contexts around the world. A data collection pipeline is created for the extraction of selected visual, morphological, land-use, and demographic features for each sample from geolocated street-view imagery, Geographic Information System (GIS) data, land-use records, and census data. For design exploration and evaluation, the system offers novel interfaces and representation structures that allow users to explore ‘similar’ samples based on envisioned urban qualities. It also allows for reference-based scenario building through explicit modification of urban parameters, as well as through other forms of reference exploration. To demonstrate the framework, a prototype web application titled ‘PLACEIFY’ is developed. Usability tests involving urban designers and planners indicate that such a system has strong potential to serve as a valuable decision support tool by providing relevant data at each iteration of an imagination-modification-evaluation cycle in design.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Process at Omnichannel DCs Undergoing Shifts in Channel Mix</title>
<link href="https://hdl.handle.net/1721.1/151992" rel="alternate"/>
<author>
<name>Gouthro, Fiona</name>
</author>
<id>https://hdl.handle.net/1721.1/151992</id>
<updated>2023-09-01T03:41:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Innovation Process at Omnichannel DCs Undergoing Shifts in Channel Mix
Gouthro, Fiona
An increase in digital demand has forced a distribution center (DC) that previously fulfilled mostly wholesale orders to adjust its operational strategy and pursue innovation to address the shift in channel mix. This thesis aims to develop and propose an innovation framework that can be used to identify which areas of the DC are constrained and which technologies can be used to tackle these constraints. The proposed process is based on external innovation methods identified through research and internal innovation methods present in the company today. Once developed, the proposed innovation process is evaluated through application to a previous DC innovation to ensure viability. The proposed process is then applied to the DC as it operates today to recommend areas and technologies for innovation investment. It is concluded that the proposed process performs well when applied to a previous innovation and is therefore deemed viable. When applied to the DC today, the process recommends an investment in the DC’s footwear selection area using process and staffing optimization.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Between the Lines: Encoding Relations Through Body, Tool, and Algorithm</title>
<link href="https://hdl.handle.net/1721.1/151991" rel="alternate"/>
<author>
<name>Schumacher, Zachary Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/151991</id>
<updated>2023-09-01T03:58:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Between the Lines: Encoding Relations Through Body, Tool, and Algorithm
Schumacher, Zachary Steven
The tools architects use orchestrate the discipline in seen and unseen ways. In recent decades, we have swapped early forms of mechanical drawing instruments for digital tools with unimaginable computing power. While this increased level of computational literacy allows us to script and code architectural forms more efficiently, it has also created incongruities between the computationally described object and material constructions. At times, the digital tools we depend on today go as far as defining the aesthetic of our buildings. To complicate this further, the digital tools most often solicited by architectural practice are non-native imports adapted for their visual potential and practical uses. This means that embedded within the programming of the tools that shape our buildings are residual values of other disciplines. For example, we can trace the origins of CAD software back to engineers and mathematicians at Boeing and here at MIT, who sought to mechanize the construction of splines and irregular curved surfaces for the production of slipstream automobiles, toothbrushes, and even letterforms. And much like the hidden algorithms in the background of our digital tools, there is an apparatus of choreography surrounding our physical tools that encodes instructions on how the body engages with the object. In other words, the machines we use produce not only drawings but gestures as well, keying us into the always-present yet rarely discussed embodied dimensions of tools. &#13;
&#13;
To expand upon the embodied dimensions of our tools today, we need to reconsider the machine as the site of intervention. Motion data and performance envelopes surrounding our tools extend beyond the projective reenactment of the machine and offer us a means to measure the derivative of what it takes to produce a drawing, a surface, or a construction. This thesis dislocates the spline from its formal geometry associated with slipstream construction and recasts it as a way to record the tumble-type inscriptions surrounding an object’s performance — a tactic to mutually mark and negotiate the activity between humans and machines.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ColloGraphy: Designing Augmented Visual-Haptic Feedback Systems to Support Fine Motor Skill Learning</title>
<link href="https://hdl.handle.net/1721.1/151989" rel="alternate"/>
<author>
<name>Fang, Mengying</name>
</author>
<id>https://hdl.handle.net/1721.1/151989</id>
<updated>2023-09-01T03:19:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">ColloGraphy: Designing Augmented Visual-Haptic Feedback Systems to Support Fine Motor Skill Learning
Fang, Mengying
Learning motor skills is an essential part of our daily lives. However, certain activities can be challenging to comprehend through observation alone, especially for beginners. Specifically, this thesis focuses on Chinese calligraphy writing as an example of a complex fine motor task whose learning can be augmented by additional feedback. The traditional method of learning calligraphy involves learners comparing the visual differences between their writing and static expert manuscripts. In this process, novice learners have no opportunity to internalize the bodily sensations required for mastery. To address this issue, this thesis presents the design of a series of prototypes that capture, recreate, and re-enact bodily movement during calligraphy writing. These prototypes support novice learners with augmented visual and haptic feedback. The thesis presents a comprehensive comparison of the three different approaches explored in the prototypes, highlighting the importance and challenge of finding the right amount of system intervention for effective motor skill learning; in short, inconsistent or excessive intervention may lead to confusion or over-reliance, while insufficient intervention may fail to assist learning. Accordingly, design recommendations for successful multimodal feedback systems supporting motor skill learning are provided.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A conformable ultrasound patch for cavitation enhanced transdermal cosmeceutical delivery</title>
<link href="https://hdl.handle.net/1721.1/151988" rel="alternate"/>
<author>
<name>Shah, Aastha</name>
</author>
<id>https://hdl.handle.net/1721.1/151988</id>
<updated>2023-09-01T03:10:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A conformable ultrasound patch for cavitation enhanced transdermal cosmeceutical delivery
Shah, Aastha
Increased consumer interest in healthy-looking skin demands a safe and effective method to increase transdermal absorption of innovative therapeutic cosmeceuticals. However, permeation of small-molecule drugs is limited by the innate barrier function of the stratum corneum. Here, we report a conformable ultrasound patch (cUSP) that enhances transdermal transport of niacinamide by inducing intermediate-frequency sonophoresis in the fluid coupling medium between the patch and the skin. The cUSP consists of piezoelectric transducers embedded in a soft elastomer to create localized cavitation pockets (0.8 cm², 1 mm deep) over larger areas of conformal contact (20 cm²). Multiphysics simulation models, acoustic spectrum analysis and high-speed videography are used to characterize transducer deflection, acoustic pressure fields and resulting cavitation bubble dynamics in the coupling medium. The final system demonstrates a 26.2-fold enhancement in niacinamide transport in a porcine model in vitro with a 10-minute ultrasound application, demonstrating suitability of the device for short-exposure, large-area application of sonophoresis for patients and consumers suffering from skin conditions and premature skin aging.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Materials Characterization and Spectroscopy for a Methane Abatement Catalyst</title>
<link href="https://hdl.handle.net/1721.1/151987" rel="alternate"/>
<author>
<name>Wilkinson, Mollie</name>
</author>
<id>https://hdl.handle.net/1721.1/151987</id>
<updated>2023-09-01T03:29:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Materials Characterization and Spectroscopy for a Methane Abatement Catalyst
Wilkinson, Mollie
Methane is the second-most emitted greenhouse gas after carbon dioxide, and it is significantly more powerful as a short-term warmer, making it a valuable target for climate change mitigation efforts. Zeolites are earth-abundant minerals common in catalysis for their low price combined with high conversion and throughput potential. This study evaluates a specific copper-zeolite (mordenite) methane oxidation catalyst for long-term durability and potential performance at 400 °C and 950 °C. Using materials characterization and spectroscopy techniques including scanning-electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), Brunauer-Emmett-Teller analysis (BET), differential scanning calorimetry (DSC), and X-ray diffraction (XRD), chemical and structural changes are tracked, identified, and assessed over the course of three months. Samples treated at 400 °C show no major structural or chemical changes in the catalyst, while samples treated at 950 °C show gradual transformation into a nonporous quartz-mullite-cristobalite mixture. This suggests indefinite catalyst stability at the former temperature and progressive catalyst degradation at the latter temperature, providing plausible long-term operation conditions and peak temporary conditions for this method of methane abatement.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Latent Lab: Exploration Beyond Search and Synthesis</title>
<link href="https://hdl.handle.net/1721.1/151986" rel="alternate"/>
<author>
<name>Dunnell, Kevin F.</name>
</author>
<id>https://hdl.handle.net/1721.1/151986</id>
<updated>2023-09-01T03:23:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Latent Lab: Exploration Beyond Search and Synthesis
Dunnell, Kevin F.
This Master’s thesis investigates the potential of artificial intelligence (AI) models, particularly machine learning and natural language processing techniques, to facilitate brainstorming and ideation in the invention process. The thesis centers around the iterative development of “Latent Lab,” an interactive tool for exploring relationships among MIT Media Lab research projects. The work offers insights into AI systems as co-inventors by addressing the challenges of organizing, searching, and synthesizing content. Our method for interacting with the material is based on “exploration” rather than search. The primary objective was to create a human-AI co-invention system and evaluate its performance on the novelty of co-created ideas. However, the research underscored the importance of accurate data organization for meaningful data generation. Consequently, later versions of Latent Lab focused primarily on improving data organization and interactive exploration. The tool’s success was measured by its effectiveness in familiarizing users with research projects at the Media Lab, ultimately laying the foundation for the future development of human-AI co-invention systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When War Becomes Peace: Ruination and Transvaluation in the Hiroshima and Nagasaki Peace Memorial Parks</title>
<link href="https://hdl.handle.net/1721.1/151985" rel="alternate"/>
<author>
<name>Shirokawa, Nanase</name>
</author>
<id>https://hdl.handle.net/1721.1/151985</id>
<updated>2023-09-01T03:32:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">When War Becomes Peace: Ruination and Transvaluation in the Hiroshima and Nagasaki Peace Memorial Parks
Shirokawa, Nanase
In postwar Japan, “peace” has become the memorial scaffolding that structures the collective national orientation towards the legacy of the Asia-Pacific War, in large part owing to the devastating bombings of Hiroshima and Nagasaki. Yet the atomic catastrophes endured by the two cities have become subsumed into what Anne McClintock terms the “administration of forgetting.” The traumas associated with the bombs have been construed in Japan as an experience of national victimhood and a moral lesson for humanity, in the process obfuscating histories of imperial terror that I argue are carried forward in significant formal continuities, transvalued in a discourse of peace. Peace, in this regard, becomes a mode for asserting a clean rupture and justifying political amnesia.&#13;
&#13;
Peace is the directive of the memorial landscapes of Hiroshima and Nagasaki, and peacemaking was the process by which ruination became the pretext for social, political, and urban reinvention. The Hiroshima and Nagasaki Peace Memorial Parks, both unveiled in 1955, manifest the ways in which dominant public discourses of peace-making and nuclear remembrance were actualized through the reconstruction of the post-atomic cities.&#13;
&#13;
The processes behind the making of the two parks and their approaches to remembering atomic violence trouble the perception that the memorials are shaped solely by the circumstances of the bomb and the postwar milieu of liberal democracy. These sites, I argue, are intimately informed by a constellation of transwar aspirations: wartime representational practices, bureaucratic tensions, as well as urban and regional histories that span beyond the moment of 1945.&#13;
&#13;
In its dual focus on the spatial narratives of Tange Kenzō’s plan for Hiroshima and the material and bodily politics of Kitamura Seibō’s Peace Statue in Nagasaki, this study also addresses the persistent marginalization of Nagasaki in the discourse of nuclear disaster. A close study of these two sites makes evident the need to take seriously the transmutation and transvaluation of representational modes across shifting regimes. The threat of historical forgetting emerges not only in the absences and forced silences, but also in the adoption of a passive gaze towards our extant memorial infrastructure.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Platforms for Biological Control</title>
<link href="https://hdl.handle.net/1721.1/151984" rel="alternate"/>
<author>
<name>Gretton, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/151984</id>
<updated>2023-09-01T03:40:26Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Platforms for Biological Control
Gretton, Dana
Innovation at the level of platforms provides the greatest leverage for shaping science and technology. Here, two key weaknesses in platforms for biology are addressed: the lack of accessible, extensible open-source software for the exploding field of robotic bioautomation, and the lack of a standardized, consistent screening platform for the manufacture of dangerous DNA. Pyhamilton and SecureDNA are introduced. Pyhamilton is the first open-source Python package for controlling Hamilton liquid-handling robots for biology. SecureDNA is the only DNA screening concept to prioritize anti-proliferation of pathogen genomes, and the first to employ modern cryptography to secure a global screening system that can keep up with anticipated exponential growth of the DNA synthesis market. For each platform developed, a software implementation is provided and exercised in a range of applications, and hardware demonstrators have been produced.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Automated Design of Machine Perception Systems</title>
<link href="https://hdl.handle.net/1721.1/151981" rel="alternate"/>
<author>
<name>Klinghoffer, Tzofi</name>
</author>
<id>https://hdl.handle.net/1721.1/151981</id>
<updated>2023-09-01T03:25:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Automated Design of Machine Perception Systems
Klinghoffer, Tzofi
Animal's visual perception systems have evolved to their environment over billions of years, enabling them to navigate, avoid predators, and hunt prey. In contrast, machine perception systems designed by humans require significant engineering and often use standard cameras that may not be well suited to their task or environment. Consider building a robot to pick up trash. The choice of robot sensors impacts which type of trash it can detect, e.g. perhaps an infrared sensor is needed to detect plastic bottles. In addition, animals are able to understand their environment from different viewpoints and under variable lighting, while machine perception systems often fail to generalize beyond the distribution of training data. Inspired by the evolution of animal's visual perception systems, this thesis explores two distinct but related problems: (1) automated design of machine perception systems, and (2) robustness of machine perception systems to physical phenomena, such as lighting and camera viewpoint. Machine perception systems -- also referred to as imaging systems in this thesis -- consist of cameras and perception models. Cameras are used to sense the environment and capture observations, while perception models are used to analyze captured observations. Cameras contain (1) illumination sources, (2) optical elements, and (3) sensors, while perception models use (4) algorithms. Directly searching over all combinations of these four building blocks to design a machine perception system is challenging due to the size of the search space. In Part I of this thesis, we introduce DISeR: Designing Imaging Systems with Reinforcement Learning, a method that allows task-specific imaging systems to be created and optimized in simulation. In Part II of this thesis, we study the robustness of machine perception systems to physical phenomena. 
We introduce two methods to mitigate the susceptibility of deep learning models to failure when exposed to out of distribution lighting and camera viewpoints. The first method uses disentanglement of features to improve robustness, while the second method modifies pixels to improve robustness. We evaluate our work using standard benchmarks and peer-reviewed publication.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SenseMate: An AI-Based Platform to Support Qualitative Coding</title>
<link href="https://hdl.handle.net/1721.1/151980" rel="alternate"/>
<author>
<name>Overney, Cassandra</name>
</author>
<id>https://hdl.handle.net/1721.1/151980</id>
<updated>2023-09-01T04:00:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">SenseMate: An AI-Based Platform to Support Qualitative Coding
Overney, Cassandra
Unstructured data can be analyzed numerically or qualitatively through methods like sensemaking. One of the key stages of sensemaking is qualitative coding, where the data is divided into units, and each unit is assigned a category or code. Unfortunately, coding is tedious and time-consuming when carried out manually. Finding a balance between manual and fully automated coding can help increase efficiency while allowing human judgment and preventing systematic machine errors. In this thesis, I propose an accessible semi-automated approach to qualitative coding. First, I apply a novel machine learning method, rationale extraction models, to qualitative coding. These models recommend themes for each unit of analysis in qualitative data and tend to perform better with less ambiguous themes. Through an online experiment, I find that assistance from rationale extraction models increases coding performance and reliability. Next, I execute an iterative, human-centered design process to create SenseMate, an AI-based platform for qualitative coding. After 13 user testing sessions and 3 design iterations, I observe that model overreliance can be minimized through cognitive forcing functions and easy-to-understand model explanations. I also design several ways for users to efficiently provide feedback on machine-generated rationales. To connect my model and design evaluations, I implement a prototype of SenseMate and conduct a summative user evaluation through an online experiment. The evaluation reveals that participants with access to AI assistance achieve higher coding performance but spend more time on the platform. The effectiveness of various design decisions within SenseMate is also explored. Finally, I discuss a myriad of future work possibilities. Overall, this thesis offers a practical and accessible solution to analyzing unstructured data, which has broad applications for researchers and organizations across various fields.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generation of focal Depdc5 knockout mouse model and implications for focal epilepsy</title>
<link href="https://hdl.handle.net/1721.1/151978" rel="alternate"/>
<author>
<name>Groff, Karenna J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151978</id>
<updated>2023-09-01T03:55:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Generation of focal Depdc5 knockout mouse model and implications for focal epilepsy
Groff, Karenna J.
Epilepsy is a neurological disorder that impacts more than 65 million individuals globally, and one in every 200 children. DEPDC5 is the most commonly identified gene associated with familial focal epilepsy and malformations of cortical development. It is also associated with an increased risk of Sudden-Unexplained Death in Epilepsy (SUDEP). It remains unknown whether seizures due to DEPDC5 loss are a result of in utero cortical developmental defects or later neuronal dysfunction of mTORC1 signaling. To test this, we developed a postnatal adeno-associated virus (AAV) mediated focal cortical Depdc5 knockout mouse model. Viral vectors containing either 2/8AAV-Cre or control 2/8AAV-GFP were injected into the unilateral motor cortex of postnatal day zero or day one Depdc5 floxed (Depdc5c/c or Depdc5c/-) mouse pups. We confirmed a significant reduction in DEPDC5 levels and increased mTOR activity in the AAV-Cre injected hemisphere compared to the contralateral hemisphere or control AAV-GFP injected mice. Cortical lamination was not disrupted by AAV-Cre or AAV-GFP injection. Focal Depdc5 knockout mice have lowered seizure thresholds and increased mortality from seizures. Acute fasting is protective against seizures in a DEPDC5-dependent manner, which is facilitated by the control hemisphere of focal Depdc5 knockout mice. Focal Depdc5 knockout mice have increased cortical thickness, increased cortical neuron size and dysplastic neurons throughout the cortex, similar to the abnormal neurons seen in human focal cortical dysplasia specimens. Glial abnormalities in the Depdc5 knockout region are identified, such as hypomyelination, reactive astrogliosis, and microglial activation. Our focal Depdc5 knockout mouse model recapitulates clinical, pathological, and biochemical features of human DEPDC5-related epilepsy and brain malformations. Our study reveals that postnatal DEPDC5 loss without disruption of cortical migration is sufficient to cause epilepsy and SUDEP. 
Restoration of DEPDC5 function via gene therapy represents a viable treatment approach.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Walk Deserts</title>
<link href="https://hdl.handle.net/1721.1/151974" rel="alternate"/>
<author>
<name>Blinder, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/151974</id>
<updated>2023-09-01T03:35:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Walk Deserts
Blinder, Justin
This thesis describes a new methodology to identify, measure, and understand “Walk Deserts.” This methodology comprises a system for identifying, mapping, and visualizing areas that are ostensibly highly walkable places according to traditional criteria and indicators, but that are also plagued with invisible environmental factors that impede walkability and threaten public health. This research has two principal aims: (1) to better understand and begin to address blind spots in walkability indicators as well as perception-based measurements (that are difficult to quantify and subject to bias), and (2) to present a greater range of environmental data associated with walkability and negative health outcomes in publicly accessible ways in order to facilitate community engagement. Two key contributions emerged from this research: (1) a theoretical re-definition of the concept of “walk deserts” to highlight typically overlooked aspects of walkability, and (2) a creative and technical contribution that focuses on finding “walkable deserts in the City” and visualizing these deserts in immersive ways. Boston’s Chinatown district serves as a case study site, a “walk desert” hidden in plain sight. With the presence of greenways surrounded by highways, it appears to be a seemingly walkable and even heavily touristed neighborhood with dramatically poor health outcomes. Digital photogrammetry is used to explore how merging photorealistic, three-dimensional spatial models with environmental data can produce immersive and interactive data visualizations, including a web application, an augmented reality interface, and an interactive installation. These interfaces expose the “walk desert” hidden in Chinatown, and provide a mechanism to engage members of the community, as well as researchers and policy-makers, in the process of transforming degraded urban spaces into healthier and more vibrant ones.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motor Design and Control for Scalable Distributed Actuation</title>
<link href="https://hdl.handle.net/1721.1/151973" rel="alternate"/>
<author>
<name>Preiss, David</name>
</author>
<id>https://hdl.handle.net/1721.1/151973</id>
<updated>2023-09-01T03:26:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Motor Design and Control for Scalable Distributed Actuation
Preiss, David
Machine design is often constrained to a limited number of controllable degrees of freedom due to the cost and complexity associated with integrating large numbers of actuators. This thesis explores the hardware and control development for a motor architecture designed for distributed actuation, where many controllable degrees of freedom are required across macro-scale structures. Availability of a low-cost and easily integrated actuator at these scales would open new regimes for fields such as robotics, manufacturing, human-computer interaction, and wireless communication.&#13;
&#13;
A survey of prior distributed actuation research is conducted, including shape memory alloy, piezoelectric, hydraulic, and electric motor topologies. A new approach using a multiplexed two-phase axial flux PCB motor is designed and iterated on through empirical testing and simulation. These motors are integrated into a modular 64-actuator array, and a proof of concept is built capable of distributed linear motion for interpolation of a surface or as independent degrees of freedom. The prototype achieves 21 μm linear resolution over 45 mm of stroke, with a 1.9 N stall force, and a density of 104 actuators per square foot. Motor commutation is achieved through multiplexing of individual motor windings, allowing for sub-linear cost and component count scaling. Actuator performance is addressed over a number of parameters, including output torque, speed, mass, resolution, and range of motion, as well as parameters critical to scalability, including motor footprint, cost, and power consumption. Finally, two applications of serially distributed actuation are discussed, including the design of modular continuum robots from a discrete toolkit of structural elements, as well as a serpentine actuator with many controlled degrees of freedom.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanical Shock Analysis and Testing of an Air-Dropped Antarctic Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/151972" rel="alternate"/>
<author>
<name>Brown Jr., Michael James</name>
</author>
<id>https://hdl.handle.net/1721.1/151972</id>
<updated>2023-09-01T03:02:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mechanical Shock Analysis and Testing of an Air-Dropped Antarctic Ice Penetrator
Brown Jr., Michael James
The Seismo-Geodetic Ice Penetrator (SGIP) is an air-dropped kinetic penetrator that will deposit a geophysics-grade seismometer and GPS receiver into Antarctic ice shelves. These instruments will measure vibrations in the infragravity (&lt; 0.1 Hz) range to improve understanding of forces that cause ice shelf calving. The penetrator will impact at its terminal velocity (roughly 40 m/s) to deploy the seismometer at least 2 meters below the ice shelf surface. SGIP will separate into two components, the body and flare, on impact to embed the seismometer into the ice shelf and transmit data from the ice shelf surface, respectively. However, the penetrator's impact can accelerate the primary payload up to 129 g along its central axis, which can damage the delicate seismometer. SGIP uses shock isolation to reduce the seismometer's peak acceleration. A structural response model is used to predict the seismometer's dynamic response to ice shelf impacts with hundreds of potential shock isolation designs. This structural response model is used to select a candidate shock isolation design based on the SGIP prototype's volume constraints. A 22 cm long, 4.8 cm diameter cylinder made of IMPAXX 300 foam can be used to limit the seismometer's peak axial accelerations to 55 g, which represents a 57% reduction from the maximum expected peak axial acceleration. A shear pin assembly is designed to rigidly connect the body and flare during descent yet deliberately shear on impact to separate the body and flare. Risk reduction tests are conducted to lower the probabilities of the shear pins' four primary failure modes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Enhanced Reasoning: Augmenting Human Critical Thinking with AI Systems</title>
<link href="https://hdl.handle.net/1721.1/151971" rel="alternate"/>
<author>
<name>Danry, Valdemar M.</name>
</author>
<id>https://hdl.handle.net/1721.1/151971</id>
<updated>2023-09-01T03:21:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">AI Enhanced Reasoning: Augmenting Human Critical Thinking with AI Systems
Danry, Valdemar M.
The pursuit of knowledge and understanding has been a driving force for humanity since the beginning of time. This relentless quest for reason has shaped the world as we know it, enabling us to unlock secrets of the cosmos, develop innovative technologies, and address complex global challenges. However, despite our cognitive leaps, we still grapple with the limitations of our rationality, biases, and emotions, especially in today's increasingly complex and information-saturated world. As AI systems become more entwined with our daily lives and institutions, there is a growing need to design and deploy AI systems that augment human reasoning, foster critical thinking, and promote well-informed decision-making.&#13;
&#13;
This thesis investigates the potential for AI-enhanced reasoning systems and their impact on human decision-making. Specifically, it explores three distinct aspects of critical thinking with AI systems: (1) the development of AI logic-checking systems designed to help identify reasoning flaws, (2) examining the susceptibility of individuals to deceptive AI-generated explanations, and (3) assessing the potential of a novel AI-framed questioning interaction method to provoke critical thinking through a series of human subjects experiments.&#13;
&#13;
These investigations aim to shed light on the implications of AI systems for human reasoning and provide insights into designing AI interventions that meaningfully enhance our cognitive abilities. The findings demonstrate the potential for intelligently designed AI systems to support human reasoning, while also highlighting the potential risks associated with overreliance on these tools. By addressing these challenges, this thesis contributes to the ongoing conversation around the development of AI systems that advance our reasoning, and takes steps towards cultivating a discerning and rational citizenry capable of navigating the complexities of the modern world.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Optimization-based Approach for Identification of Illegal Trade in the Global Timber Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/151970" rel="alternate"/>
<author>
<name>Hallermeyer, Cyrian H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151970</id>
<updated>2023-09-01T03:22:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Network Optimization-based Approach for Identification of Illegal Trade in the Global Timber Supply Chain
Hallermeyer, Cyrian H.
Forest ecosystems play a crucial role in the global carbon cycle and provide numerous regional environmental and economic services. Thus, it is essential to limit their degradation due to human exploitation and the risk of climate change. To effectively regulate the production and trade of timber and to ensure that ambitious sustainability and legality targets are met, enforcement agencies need to reliably monitor the flows of timber products circulating in the global supply chain. However, available reported data on the trade of timber-based products at the global level is typically subject to a range of irregularities, including misreported and inconsistent data. Current estimates of these irregularities (particularly illegal trade) analyze reported flows at the level of individual trade partners and do not account for the structure of the global trade network. In this thesis, we attempt to address these limitations by imposing nodal volume balance across the entire network and developing a general framework to identify multiple link-level irregularities in trade data.&#13;
&#13;
Specifically, we present an optimization-based approach to model and identify data irregularities in the global timber supply chain. We evaluate the ability of this approach to recover flows of timber product volumes from perturbations to reported data on multiple links (the reconstruction problem) and to identify flows that are most likely to involve irregularities (the identification problem). These reconstruction and identification tasks essentially rely on the use of network optimization techniques. In this context, we explore both classic optimization formulations and matrix scaling-based algorithms. We extend the well-known formulation of matrix scaling algorithms to include prior knowledge of the reliability of the data. We propose a link-specific weighted iterative scaling algorithm (WIS) and a node-specific weighted iterative scaling algorithm (NSWIS). In doing so, we extend the current literature on matrix scaling algorithms by expanding the scope of their practical application to supply chain data correction problems.&#13;
&#13;
For the type of perturbations studied in the evaluation procedure, the WIS algorithm shows a strong ability to correctly reconstruct data, even under limited prior information on data reliability. Moreover, combinations of the WIS with threshold-based identification models obtain satisfactory results in the identification phase (a True Positive Rate of more than 75% for a False Positive Rate of less than 30%), even under limited prior information on data reliability. We evaluate the reconstruction and identification performance of the RIMs both on synthetic and real data. Our results support the relevance of a principled approach to network flow modeling and optimization for correcting and identifying irregularities in timber trade and timber production data.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supernumerary Robotic Limbs for Next Generation Space Suit Technology</title>
<link href="https://hdl.handle.net/1721.1/151948" rel="alternate"/>
<author>
<name>Ballesteros, Erik</name>
</author>
<id>https://hdl.handle.net/1721.1/151948</id>
<updated>2023-08-24T03:47:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Supernumerary Robotic Limbs for Next Generation Space Suit Technology
Ballesteros, Erik
Extra-Vehicular Activities (EVAs) are considered one of the most complex operations an astronaut can perform during a spaceflight mission. Coordinating and executing EVAs are complex and costly affairs that are a necessity for any space vehicle; this is especially true for expanding the longevity of a spacecraft, like that of the International Space Station (ISS). A key challenge in planning EVAs is the amount of time an astronaut has to complete a series of tasks, which is inversely related to their metabolic load. Prior studies have determined that the biomechanics of a space-suit-wearing astronaut play a significant role in their metabolic load. In addition to this concern, another key challenge for astronauts conducting EVAs is having access to a rigid tether that frees both of their arms when conducting a specific task. We propose the incorporation of a pair of wearable robots, called Supernumerary Robotic Limbs (SuperLimbs), which would be mounted on the xEMU’s Square Boss Interface (SBI), positioned such that each SuperLimb is on either side of the astronaut’s center of mass. The use of SuperLimbs during an EVA allows the astronaut to safely and efficiently move across a spacecraft. The SuperLimbs grab EVA handrails for securing the astronaut’s body, and guide the astronaut from one work location to another (thus reducing their overall work load). The incorporation of SuperLimbs onto the xEMU spacesuit forms a cooperative human-robotic system that can be modeled as a quadruped with two human arms and two SuperLimb grippers. Trajectory planning and control algorithms are developed as a quadrupedal locomotion problem, where the SuperLimbs act as followers while the astronaut operator is the leader. Furthermore, the quadruped human-robot system enables multiple points of contact at any point in the EVA, creating a secure bracing condition for the astronaut user that enhances both stability and controllability.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Barriers to the use of computational tools for embodied carbon reduction in structural engineering practice</title>
<link href="https://hdl.handle.net/1721.1/151946" rel="alternate"/>
<author>
<name>Smith, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/151946</id>
<updated>2023-08-24T03:03:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Barriers to the use of computational tools for embodied carbon reduction in structural engineering practice
Smith, Margaret
There is an immediate need to decrease carbon emissions to minimize the impacts of climate change, and building materials, which in 2019 accounted for 10% of global carbon emissions, have an important role to play in this reduction. As key stakeholders in the building design process, structural engineers must implement strategies to reduce embodied carbon. One strategy is using less material, and in academia, many methods and tools have been proposed to reduce embodied carbon through material efficiency. This includes parametric models that demonstrate how structural parameters impact embodied carbon, and shape- or topology-optimized components that save considerable amounts of material compared to conventional alternatives. However, these tools are not often used in industry. To better understand why, a survey was distributed to practicing structural engineers in the northeast US which probed their views on embodied carbon and computational tools to reduce it. Case studies on parametric design, shape optimization, and topology optimization were presented, and participants were asked if they would use each tool and why or why not.&#13;
&#13;
A total of 38 structural engineers, representing 26 different employers, responded to the survey. Most respondents could name a strategy to reduce embodied carbon; however, low-carbon materials were mentioned far more than using less material, indicating that there is a need for increased education on the power of material efficiency to impact embodied carbon. As expected, respondents were most willing to use parametric design, followed by shape optimization, then topology optimization. For all case studies, increases in time and/or cost were identified as the strongest barrier to their use. For parametric design, lack of power during the design process was also a strong barrier, as structural engineers often do not have complete control over all structural parameters. For shape and topology optimization, constructability and the robustness of optimized designs were key concerns. By formalizing the barriers to their use, this work enables researchers to create computational tools that are more likely to be adopted in industry. These tools have great potential to decrease embodied carbon emissions, and for this to be realized, they must be put into practice.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Study of a Linear Generator Wave Energy&#13;
Converter With Adaptive Bistable Control</title>
<link href="https://hdl.handle.net/1721.1/151945" rel="alternate"/>
<author>
<name>Wunderlich, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/151945</id>
<updated>2023-08-24T03:12:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Feasibility Study of a Linear Generator Wave Energy&#13;
Converter With Adaptive Bistable Control
Wunderlich, Alexander
Oceanic wave energy harvesting is a promising source of renewable energy that involves the conversion of oscillatory motion into electrical energy. However, the majority of traditional wave energy converters rely on intermediary mechanisms that increase complexity and can incur energy losses. Linear generators have emerged as a promising wave energy technology that bypass the limitations of intermediaries through direct mechanical to electrical energy conversion. The convention is to configure the generator so that incident waves excite the resonant frequency of the device. The irregular and broadband nature of ocean waves poses a challenge to this technique, as the device must be configured to respond to a wide range of incident frequencies. This thesis proposes a novel design for a linear permanent magnet generator that considers a tension leg platform oscillating in oceanic surge motion as its basis. The performance of the proposed device is analyzed using numerical simulations, and the potential for optimization techniques and the implementation of adaptive bistable control logic to improve broadband energy harvesting is investigated. The results demonstrate that these proposed alterations can increase the harvesting potential and efficiency of a wave energy converter, with the potential to contribute to the growing demand for renewable energy sources.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Adaptive Laws for Adaptive Control Under Stochastic Disturbances</title>
<link href="https://hdl.handle.net/1721.1/151944" rel="alternate"/>
<author>
<name>Fisher, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/151944</id>
<updated>2023-08-24T03:05:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fast Adaptive Laws for Adaptive Control Under Stochastic Disturbances
Fisher, Peter
In this work, we consider two classes of adaptive laws for the adaptive control of a class of discrete-time nonlinear systems with all states accessible, perturbed by a stochastic disturbance. First, we consider high-order tuner algorithms based on accelerated gradient methods for the optimization of convex loss functions, and derive a new adaptive law designed for stable adaptive control. Second, we review the state of the literature on recursive least-squares adaptive laws, especially those with a variable-direction forgetting factor, and we derive an alternative to a recent method proposed in the literature.&#13;
&#13;
Recently, a high-order tuner algorithm was developed for the minimization of convex loss functions with time-varying regressors in the context of an identification problem. Based on Nesterov's algorithm, the high-order tuner was shown to guarantee bounded parameter estimation when regressors vary with time, and to lead to accelerated convergence of the tracking error when regressors are constant. In this work, we derive a new high-order tuner algorithm that preserves the accelerated convergence of the original under constant regressors, but that is also provably stable with the addition of projection to a compact set. This latter property allows us to apply the new high-order tuner to the adaptive control of a particular class of discrete-time nonlinear dynamical systems under stochastic disturbances.&#13;
&#13;
There has been a substantial body of literature on variable-direction forgetting methods for recursive least-squares-type adaptive laws. Recently, a new method has been developed that uses the SVD of the covariance matrix to apply directional forgetting. In this work, we place this method in the context of the broader RLS literature as well as other literature on variable-direction forgetting. We then use this context to argue that if the computation power is available for an SVD at every time step, it is better to simply use it to directly invert the covariance matrix at each time step rather than implementing variable-direction forgetting. We call this new adaptive law "Explicit Least-Squares" and show that ELS leads to provably stable adaptive control.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Gas Absorption with Nanoengineered Surfaces for Bubble Manipulation</title>
<link href="https://hdl.handle.net/1721.1/151942" rel="alternate"/>
<author>
<name>Joseph, Tal</name>
</author>
<id>https://hdl.handle.net/1721.1/151942</id>
<updated>2023-08-24T03:14:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing Gas Absorption with Nanoengineered Surfaces for Bubble Manipulation
Joseph, Tal
Efficiently reacting gases with liquid absorbents is a crucial aspect of numerous industrial processes on a large scale. When the gas phase is in the form of discrete bubbles within an absorber unit, such as in bubble column absorbers or gas sparging systems, the effectiveness of these bubbles' reaction depends on carefully controlling their properties and flow. This study demonstrates the efficacy of a novel method for gas absorption into a liquid absorbent, which involves using nanoengineered surfaces to spread bubbles into their texture and enhance mass transport between the gas and liquid phases. This surface-enhanced direct injection approach for gas absorption yields more than a two-order-of-magnitude improvement in reaction rate compared to captive bubbles when using a moderately alkaline potassium hydroxide as an absorbent solution for carbon dioxide gas. While the average reaction rates of non-spreading bubbles typically decrease with bubble size, the surface-enhanced absorption of spreading bubbles reverses this trend, enabling the most rapid absorption for the smallest bubbles. Moreover, non-spreading carbon dioxide bubbles cannot be fully absorbed due to product aggregation at their interface, whereas spreading bubbles can avoid this regime by reacting more quickly than the aggregation process on rapid timescales. Finally, we propose this surface-enhanced direct injection method as an absorption technique that scales advantageously for small-scale or distributed modular absorber designs compared to the traditional large-scale absorber units currently used in industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Path Planning for Trajectory Guided Freehand Ultrasound Scan</title>
<link href="https://hdl.handle.net/1721.1/151941" rel="alternate"/>
<author>
<name>Lin, Qian</name>
</author>
<id>https://hdl.handle.net/1721.1/151941</id>
<updated>2023-08-24T03:33:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Path Planning for Trajectory Guided Freehand Ultrasound Scan
Lin, Qian
Medical imaging plays a crucial role in medical diagnosis and analysis. 3D medical imaging provides more comprehensive and greater anatomical detail of internal body structures when compared to traditional 2D images, allowing for more accurate measurement of organ and tumor volume, and prediction and monitoring of some disease progression. 3D medical images can be obtained through various imaging modalities, including magnetic resonance (MR), computed tomography (CT), and ultrasound (US). Among those modalities, freehand ultrasound is preferred for its cost-effectiveness, non-invasiveness, portability, safety, versatility, and real-time information.&#13;
&#13;
However, the lack of information on the position and orientation of the ultrasound probe makes it challenging to obtain 3D images from 2D ultrasound slices. Without expert knowledge, the user may not acquire precise images of the region of interest (RoI). To address this issue, we propose a novel path planning framework that provides real-time guidance for freehand ultrasound and reconstructs 3D images in real time. A low-cost RGB-D camera with an IMU module is mounted on a regular ultrasound probe to estimate the spatial placement of the probe with respect to the RoI, and the acquired ultrasound images are analyzed and registered into a 3D voxel grid. After the user performs an initial scan, the system guides the user to find missing areas shaded by obstacles such as bones, resulting in more accurate, detailed, and efficient 3D ultrasound imaging. We validated our system on an ultrasound phantom and demonstrated its ability to investigate the area beneath the obstacle. Additionally, we developed a visualization system for real-time probe movement guidance and image display.&#13;
&#13;
This study demonstrates the feasibility of implementing an online path planning approach with real-time guidance and high-attenuation area avoidance for freehand ultrasound scanning, even in scenarios where prior knowledge of the scanning area is not available. The proposed path planning system not only enhances the efficiency and precision of ultrasound imaging in clinical settings, but also facilitates the acquisition of high-quality 3D ultrasound images by non-expert users in a more convenient manner, potentially allowing for long-term health monitoring.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Soft Robotic System for Mechanical Assistance to the Diaphragm</title>
<link href="https://hdl.handle.net/1721.1/151940" rel="alternate"/>
<author>
<name>Quevedo-Moreno, Diego A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151940</id>
<updated>2023-08-24T03:47:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Soft Robotic System for Mechanical Assistance to the Diaphragm
Quevedo-Moreno, Diego A.
The diaphragm is a critical muscle for the respiratory system, responsible for up to&#13;
70% of the inspiration effort. Phrenic nerve trauma or neuromuscular disease can&#13;
generate severe diaphragm dysfunction that ultimately leads to respiratory failure.&#13;
The current treatment for patients with severe diaphragm dysfunction is permanent&#13;
airway tethering to mechanical ventilation, which greatly impacts the patient’s quality&#13;
of life and autonomy by hindering activities like speech, swallowing, and mobility.&#13;
Soft robots are ideal to assist in complex biological functions like the contraction of&#13;
the diaphragm. Diaphragmatic mechanical assistance using implantable soft robots&#13;
has shown promising results in restoring respiratory function. However, the soft&#13;
robotic system can be further optimized to effectively assist the diaphragm. In this work, the&#13;
design and control of a fabric-shelled soft robotic pneumatic actuator are developed&#13;
to efficiently assist the diaphragm motion and the inspiratory effort. The soft&#13;
robotic system developed in this work is capable of significantly restoring physiological&#13;
thoracic and abdominal pressurization levels in a respiratory simulator and demonstrates its potential as an alternative treatment for patients with severe diaphragm&#13;
dysfunction.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Degree of Freedom Solid Rotor Velocity Control Induction Drive</title>
<link href="https://hdl.handle.net/1721.1/151939" rel="alternate"/>
<author>
<name>Roman, Jean C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151939</id>
<updated>2023-08-24T03:07:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Single Degree of Freedom Solid Rotor Velocity Control Induction Drive
Roman, Jean C.
This thesis studies a single degree of freedom (DOF), two-pole, three-phase, solid rotor,&#13;
induction motor operating in closed loop angular velocity control via a proportional-integral&#13;
(PI) controller applying a constant amplitude, variable frequency drive.&#13;
&#13;
The stator consists of six iron teeth, evenly spaced and pointing radially inward, wound&#13;
with 160 turns of copper wire each in a three-phase, two-pole configuration. A steel enclosure&#13;
houses the stator and is supported by a 3D-printed polylactic acid (PLA) enclosure. The&#13;
wiring is initially connected in wye configuration without a neutral wire but later converted&#13;
to three independent phases, each with its own input and output wire. The teeth have a&#13;
nominal air gap of 0.5mm with the rotor.&#13;
&#13;
The rotor consists of a solid iron cylindrical core with a 1mm aluminum sleeve press&#13;
fitted on the outside. Two mechanical bearings center the rotor inside the stator. A single-&#13;
input single-output (SISO) PI controller commands three 750 mA amplitude currents with&#13;
variable frequency, and offset by 120 degrees to provide a 3-phase drive resulting in a rotating&#13;
magnetic field. Each coil is powered by a custom linear transconductance amplifier with 5&#13;
kHz bandwidth and 0.3 A/V DC gain.&#13;
&#13;
The controller receives feedback through a contact-less magnetic encoder providing a&#13;
linear voltage measurement of the rotor’s angle. We differentiate the position measurement&#13;
to estimate the angular velocity of the shaft. A small diametrically magnetized cylindrical&#13;
permanent magnet (PM) is attached to the end of the shaft and constrained by a 3-D printed&#13;
PLA fixture. During operation, we produced up to 1.6 mNm of torque and velocities of up&#13;
to 8,000 RPM.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Testing of a Respiratory Simulator for the Optimization of Soft Robotic Assistive Breathing Devices</title>
<link href="https://hdl.handle.net/1721.1/151938" rel="alternate"/>
<author>
<name>Tagoe, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/151938</id>
<updated>2023-08-24T03:03:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Testing of a Respiratory Simulator for the Optimization of Soft Robotic Assistive Breathing Devices
Tagoe, Jonathan
Diaphragm dysfunction can lead to respiratory difficulties and failure, requiring interventions like positive pressure ventilation that forces air into the lungs. Such interventions can interfere with or hinder a patient’s quality of life, making activities like speech and swallowing extremely difficult. Surgically implanted soft robotic actuators have been explored to mechanically support diaphragmatic motion in a patient that has lost function of such an important muscle. Optimizing these actuators before surgery is paramount and requires “in situ” testing that may take months in between porcine terminal studies, let alone human testing. This thesis works towards developing a benchtop model that can recreate the physiological biomechanics of the respiratory system to effectively test and optimize the design of diaphragmatic assist devices before implantation in a specimen. &#13;
&#13;
Through the product development cycle undertaken in this thesis, a respiratory simulator was fabricated, assembled, and tested in order to facilitate the optimization of soft robotic pneumatic actuators. We find that the simulator is capable of recreating and maintaining physiological pressures in the major cavities of the body, with active diaphragmatic motion. We demonstrate the effectiveness of the modular design, allowing for rapid testing of different types of diaphragmatic assist actuators, patient conditions and breathing patterns. Through testing of the assist devices, we demonstrate their ability to recreate physiologically relevant pressure drops. &#13;
&#13;
This respiratory simulator lays the groundwork for the rapid development of implantable assistive breathing devices that serve as a new ventilation option that will liberate the airways and not sacrifice quality of life.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local Shape Estimation Using Mechanochromic Structurally-Colored Tactile Sensors</title>
<link href="https://hdl.handle.net/1721.1/151937" rel="alternate"/>
<author>
<name>Thomsen, Max T.</name>
</author>
<id>https://hdl.handle.net/1721.1/151937</id>
<updated>2023-08-24T03:27:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Local Shape Estimation Using Mechanochromic Structurally-Colored Tactile Sensors
Thomsen, Max T.
Tactile perception is increasingly used in robotics to augment the robot’s sense of its environment and the objects that it manipulates, particularly in cases where visual systems alone prove inadequate. Existing tactile sensors feel their surroundings by sensing many different types of tactile signals such as pressure, contact position, or contact shape. This thesis introduces a novel method for measuring and reconstructing the shape of an object based on a tactile imprint, and describes the framework for this method, the fabrication of the necessary materials, and the subsequent testing and validation of the process. The procedure outlined in this work involves the use of a custom tiled mechanochromic structurally-colored film in conjunction with a digital camera, and calculates shape and strain information of the film based on how the observed colors shift when undergoing deformation. When combined with a transparent elastomeric pad, this arrangement can be used to deduce information about objects that are pressed into the pad by observing the deformation in the surface. This ability to measure the shape and strain state of a surface by leveraging the high resolution of modern image sensors together with color-dynamic films tiled in a checkered pattern may allow for more effective tactile sensors, and more broadly can provide a useful tool for research and industrial applications. While this work focuses specifically on tactile shape reconstruction, the methodology presented can similarly be applied to more general cases where shape or strain information of a surface is desired.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Influence of Interannual Precipitation Variability on Terrestrial Ecosystem Productivity</title>
<link href="https://hdl.handle.net/1721.1/151932" rel="alternate"/>
<author>
<name>Chen, Minghao</name>
</author>
<id>https://hdl.handle.net/1721.1/151932</id>
<updated>2023-08-24T03:45:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Influence of Interannual Precipitation Variability on Terrestrial Ecosystem Productivity
Chen, Minghao
This study investigated the impact of interannual precipitation variability on above-ground terrestrial ecosystem productivity in the Hulunbuir ecosystem, using time series analysis, regression analysis, and machine learning models. The study's primary goal was to enhance our understanding of the effects of precipitation variability on ecosystems and develop practical solutions for promoting ecosystem sustainability and adaptability under changing climate conditions. The study analyzed trends and patterns of interannual precipitation variability within the study area, investigated the historic relationship between precipitation and ecosystem productivity using regression analysis, developed and compared machine learning models to predict the impact of interannual precipitation variability on ecosystem productivity, evaluated model performance, and provided insights into the mechanisms underlying the impacts of interannual precipitation variability on ecosystem productivity. The findings of this study suggested that precipitation is an important driver of vegetation productivity in the Hulunbuir ecosystem, and the machine learning models, particularly LSTM and CNN models, were found to be effective in predicting NPP in different ecosystems. The study's findings can inform ecosystem-specific management strategies to optimize productivity and resilience to environmental change, as well as policy decisions regarding the sustainable use of natural resources and the mitigation of climate change impacts.&#13;
&#13;
Keywords: interannual precipitation variability, terrestrial ecosystem productivity, time series&#13;
analysis, machine learning models, climate change impacts.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Comparative Analysis of Domestic Municipal Data Governance Systems</title>
<link href="https://hdl.handle.net/1721.1/151927" rel="alternate"/>
<author>
<name>Jiminez Jamarillo, Aleja</name>
</author>
<id>https://hdl.handle.net/1721.1/151927</id>
<updated>2024-03-22T17:53:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Comparative Analysis of Domestic Municipal Data Governance Systems
Jiminez Jamarillo, Aleja
Prompted in part by the COVID-19 pandemic, cities in the United States are becoming increasingly aware of the need to improve how they govern their data. Data governance is generally understood to involve practices pertaining to the production, storage, analysis, and sharing of data either within or across organizations (Abraham et al. 2019, Beaulieu and Leonelli 2021). This thesis sought to investigate how cities in the United States are using policy tools to institutionalize data governance practices and what these policies reveal about how city governments are conceptualizing the nature of municipal data. Using a case study approach, an extensive literature review paired with staff interviews was conducted for four cities: Baltimore, Maryland; Denver, Colorado; Portland, Oregon; and San Francisco, California. A comparative analysis of these cities’ data governance systems reveals four primary findings. First, municipal governments are seeking to balance the established use of data for public transparency with stronger practices to protect privacy. Second, data governance can be deployed for various political purposes by city governments, and that deployment can frame how its purpose is defined and pursued. Third, municipal data governance systems implicitly extend beyond governing data to managing the technologies and employees who generate and handle data. Finally, the staffing plan for data governance shapes the expertise brought to bear on normative questions surrounding data generation and management as well as the capacity for data governance teams to establish legitimacy within city government. These findings point towards four recommendations for municipal policy makers: embed data governance leaders in departments whose skillsets and approaches are aligned with the intended outcomes of data governance; integrate data governance efforts with technology acquisition practices; establish and resource department-level data fiduciaries; and explicitly treat all city employees as data workers to foster a comprehensive and sustainable data governance system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role for Electricity Transmission in Net-Zero Energy Systems: A Spatially Resolved Analysis of the Continental US</title>
<link href="https://hdl.handle.net/1721.1/151926" rel="alternate"/>
<author>
<name>Shi, Nicole Xiaoyang</name>
</author>
<id>https://hdl.handle.net/1721.1/151926</id>
<updated>2023-08-24T03:01:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Role for Electricity Transmission in Net-Zero Energy Systems: A Spatially Resolved Analysis of the Continental US
Shi, Nicole Xiaoyang
Due to climate change and the need for rapid emission reduction, new technologies, including hydrogen and negative emission technologies (NETs) such as direct air capture and bioenergy with carbon capture and sequestration (BECCS), are being developed for integration into energy systems. Additionally, variable renewable energy (VRE) resources, which are expected to play a major role in decarbonization pathways, exhibit significant spatial variability and reliance on transmission infrastructure compared to existing fossil-fuel-based energy systems, placing greater emphasis on the transport and storage of material and energy. This case study evaluates pathways to a net-zero energy system in the continental US. To inform spatial infrastructure outcomes, we use an open-source energy system model that explores decarbonization pathways for the broader energy system under various technology availability and transmission network expansion assumptions. To attain a deeper understanding of technology interactions in a net-zero energy system, we use the Modeling to Generate Alternatives formulation, which generates near-optimal solutions within a pre-defined threshold of the cost-optimal solution. We find that transmission network expansion enables the increased usage of high-quality wind resources. When the power sector is coupled with the hydrogen supply chain, the use of electrolyzers further increases demand for electricity from VRE resources. NETs, specifically BECCS, allow for the inclusion of natural gas in the generation mix while adhering to net-zero emissions targets. This approach helps mitigate the need for extensive transmission network expansion and VRE resources. We identify several transmission paths that policymakers should prioritize for expansion. Our analysis of near cost-optimal solutions provides confidence in the cost-optimal technology dependencies we identified.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Static Stability and Seismic Safety of Brunelleschi’s Dome of Santa Maria del Fiore</title>
<link href="https://hdl.handle.net/1721.1/151923" rel="alternate"/>
<author>
<name>Patel, Shailey</name>
</author>
<id>https://hdl.handle.net/1721.1/151923</id>
<updated>2023-08-24T03:09:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Static Stability and Seismic Safety of Brunelleschi’s Dome of Santa Maria del Fiore
Patel, Shailey
The dome of Santa Maria del Fiore is a long-standing pinnacle of engineering design creativity and fifteenth-century architectural grandeur. Its construction process and stability have been surveyed and researched for centuries. This thesis studies the dome of Santa Maria del Fiore in Florence, Italy in a two-fold exploration of its stability limits due to self-weight and horizontal ground acceleration. A parametric model using 2D equilibrium analysis is generated to quantify the minimum horizontal thrusts of the dome in the major and minor directions. The obtained minimum horizontal thrust values from the model in the major axis (5000 kN) and in the minor axis (3900 kN) are compared to existing values in the literature. A simplified 2D analytical model predicts the collapse mechanism due to ground acceleration (0.15g) by adjusting the equilibrium analysis used to find the thrust values. This value is compared with experimental values obtained from a static tilt test, where the 3D-printed geometry of the dome and drum is slowly tilted until the point of collapse. The collapse mechanism forms at an angle of tilt of 17.6˚ in the weak direction (0.32g). A 3D analytical prediction is made by analyzing the observed experimental failure plane, which yields a collapse angle of 19.5˚ (0.35g), validating the experimental results. The difference between the 2D and 3D critical values can be attributed to the various assumptions made in the conservative analytical model, including the neglect of hoop forces and friction. The analysis within this thesis demonstrates the safety of the dome of Santa Maria del Fiore under expected seismic activity.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Continuous Improvement Framework for a Multi-model Production Line</title>
<link href="https://hdl.handle.net/1721.1/151922" rel="alternate"/>
<author>
<name>Sandifer, Darron</name>
</author>
<id>https://hdl.handle.net/1721.1/151922</id>
<updated>2023-08-24T03:16:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Continuous Improvement Framework for a Multi-model Production Line
Sandifer, Darron
In manufacturing, it is vital to identify current or potential bottlenecks and create plans to either eliminate, mitigate, or prevent them. There are many ways to assess a given system and identify the bottleneck: visual inspection, production data, anecdotal evidence, and experience, to name a few. Most strategies are a blend of methods, with experience and anecdotal evidence comprising the majority of the approach, which leads to large discrepancies between assessors.&#13;
&#13;
This thesis details a method to standardize the assessment of underperforming portions of a manufacturing line while still giving the assessor the ability to leverage their experience, expertise, and creativity to solve the problem. This framework is applied in a case study conducted at Nissan North America’s Canton, Mississippi Assembly Facility, resulting in the reclamation of approximately 50 minutes of production time and eliminating the overtime requirement for a pair of manufacturing cells.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Modeling of Shipwide Navy Integrated Power and Energy Corridor Cooling System</title>
<link href="https://hdl.handle.net/1721.1/151921" rel="alternate"/>
<author>
<name>Chatterjee, Avi</name>
</author>
<id>https://hdl.handle.net/1721.1/151921</id>
<updated>2023-08-24T03:02:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Modeling of Shipwide Navy Integrated Power and Energy Corridor Cooling System
Chatterjee, Avi
Naval ship systems require increasingly more electricity to power myriad advanced offensive and defensive electrically-powered systems. The Zumwalt class destroyer was the Navy’s first fully electric ship. The next generation destroyer, DDG(X), is also planned to be an electric ship. The ships of the future can thus be anticipated to employ 100 megawatts or more of electric power. This rise in electrical demand drives the need to transfer that power more efficiently through compact and robust power distribution systems. &#13;
&#13;
As part of an ongoing U.S. Navy research consortium of next-generation all-electric warships, the Design Laboratory of the Massachusetts Institute of Technology (MIT) Sea Grant Program is developing the Navy integrated Power and Energy Corridor (NiPEC) to serve as the vessel’s power distribution system. The corridor comprises several modular compartments capable of operating independently or as part of a network to execute energy storage, conversion, protection, control, isolation, and transfer functions [18]. The power conversion process is carried out by the corridor’s integrated Power Electronics Building Block (iPEBB). The iPEBB is a comprehensive and self-contained converter configured to provide power-dense solutions to the ship’s stochastic and dynamic loads [45]. The thermal management of the iPEBB is a central challenge in being able to fully realize its advanced semiconductor technology, constrained by the provision of indirect liquid cooling methods and sailor-friendly accommodations vis-à-vis handling, user interface, and operation.&#13;
&#13;
Padilla et al. [36] conducted a preliminary analysis of Power Electronics Building Block (PEBB) heat dissipation strategies utilizing liquid-cooled cold plates across the dry interface of the PEBB’s external surface. Reyes [39] extended this analysis in proposing a first-pass design of a NiPEC liquid cooling system capable of servicing a single nominal compartment within the larger corridor architecture. However, this most recent design presents infeasible operational and maintenance aspects given the number of cooling components required to adequately cool all envisioned NiPEC corridors, compartments, and PEBB stacks.&#13;
&#13;
This thesis used a combination of first-principles thermodynamic analysis and multi-physics-based modeling to design a NiPEC liquid cooling system and architecture suitable for shipwide deployment. Using Reyes’ first-pass cooling system design as a starting point, additional design iterations of the computer-modeled system were conducted and analyzed for thermal management robustness, success against key performance benchmarks, and adherence to relevant military standards. Additional modeling and analysis were conducted to determine how the cooling system could be scaled to accommodate an entire future all-electric Navy destroyer warship. This analysis examined key architectural system design considerations such as the level of component redundancy, utilization of different loop and zonal cooling schemes, and system survivability and control.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunity for Long Duration Storage Technologies: Thermal and Compressed Air Energy Storage</title>
<link href="https://hdl.handle.net/1721.1/151920" rel="alternate"/>
<author>
<name>Engelkemier, Seiji H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151920</id>
<updated>2023-08-24T03:07:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Opportunity for Long Duration Storage Technologies: Thermal and Compressed Air Energy Storage
Engelkemier, Seiji H.
To mitigate the more severe consequences of climate change, rapid decarbonization is necessary. The electric power sector contributes about 25% of US and global emissions, and its decarbonization is critical as other sectors become increasingly electrified. Intermittent renewable energy sources, namely solar photovoltaics and wind turbines, have reduced emissions in the power sector. A key part of achieving higher rates of renewables adoption is energy storage. In particular, long duration energy storage (LDES) is needed, for which the key variables are capital cost of energy capacity and discharge efficiency. There are few economical options available today for LDES aside from pumped hydropower storage, which is limited by geography. Fortunately, new technologies are under development. &#13;
&#13;
Thermal energy storage (TES) is a promising class of technologies because energy can be stored cheaply as heat. A TES system converts electricity to heat and converts it back to electricity when needed. TES systems can utilize cheap storage material, but they must address the challenges of low discharge efficiency and, to a lesser extent, high capital cost of discharge power capacity. Existing studies have mostly focused on a specific subsystem, such as the power block or storage material, or a single TES system. Few studies have reported on how the needs of future power systems and TES technology options guide the design choices for a TES system. This thesis addresses this topic and presents the opportunity space for TES systems. Three common strategies for system design are identified that balance the coupled tradeoffs of cost, performance, and technical risk. The first strategy is retrofitting thermal power plants with TES to replace combustion processes and operate the plants as storage assets. The second is the development of higher efficiency power cycles, primarily closed Brayton cycles, for new storage plants operating with maximum temperatures generally under 1000°C. The third strategy utilizes storage materials and power cycles at temperatures significantly above 1000°C, which requires considerable research and development prior to commercialization efforts.&#13;
&#13;
Compressed air energy storage (CAES) is another type of storage technology that is cited as a candidate for LDES. Geologic and economic considerations are found to be limiting factors in large scale deployment of CAES systems rather than technology development. However, in certain situations, CAES may be a valuable storage option. Therefore, compared to the optimism found in literature, a more pragmatic outlook on CAES is recommended to focus efforts on critical questions and avoid wasted resources.&#13;
&#13;
The levelized cost of storage (LCOS) is used to assess future, representative TES and CAES systems in LDES applications. A sensitivity analysis is performed on LCOS parameters to show the effect of design choices on system cost. From the technology and cost assessments, recommendations are made to guide TES and CAES development as options for LDES.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Maneuvering Strategies for Heterogeneous Cooperative Navigation in Underwater Environments</title>
<link href="https://hdl.handle.net/1721.1/151918" rel="alternate"/>
<author>
<name>Flynn, Megan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151918</id>
<updated>2023-08-24T03:38:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Exploring Maneuvering Strategies for Heterogeneous Cooperative Navigation in Underwater Environments
Flynn, Megan C.
Due to the challenges of the underwater environment and limited communication methods, undersea navigation is difficult. Autonomous underwater vehicles (AUVs) experience unbounded localization errors when operating below the surface. Range measurements between vehicles can be utilized to improve localization estimates. We define a two-agent team composed of a leader and a follower, in which the former has better navigational capabilities than the latter. The follower attempts to navigate to a destination while the leader aids in the follower’s localization by providing range measurements from varied locations. Planning the relative motion between agents is vital to ensuring that meaningful range measurements are provided to support an effective estimation of the follower’s pose.&#13;
&#13;
This work explores five different maneuvering strategies based on geometric and observability principles. After designing the strategies, we tested their impact on the localization quality of the team through extensive simulations. To investigate the resilience of the strategies to environmental conditions, we altered the simulated ocean currents. For additional study, we allowed the leader to operate at a higher speed to explore the relationship between energy use and estimation performance.&#13;
&#13;
Ultimately, the best maneuvering strategy was found to be the circling strategy due to its superior performance; however, the circling strategy used the most energy, especially with larger radii. Mission priorities may affect the selection of a maneuvering strategy; the zigzag and covariance squish strategies are still viable options as they do not suffer great performance loss when compared to the circling strategy.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement and Analysis of lubricant oil consumption in a single cylinder hydrogen IC engine</title>
<link href="https://hdl.handle.net/1721.1/151917" rel="alternate"/>
<author>
<name>Zakka, Ahmad</name>
</author>
<id>https://hdl.handle.net/1721.1/151917</id>
<updated>2023-08-24T03:42:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Measurement and Analysis of lubricant oil consumption in a single cylinder hydrogen IC engine
Zakka, Ahmad
Understanding, predicting, and reducing lubricant oil consumption (LOC) in IC engines has been the focus of this lab for decades. Lubricant oil consumed in internal combustion engines is a significant contributor to harmful gas and particulate emissions, directly threatening the environment and human health. This work focuses on the development and analysis of a direct LOC measurement method for a hydrogen combustion single-cylinder test engine. This method utilizes an FTIR device to measure carbon dioxide in the exhaust gas. Since hydrogen is not a carbon-based fuel, its combustion reaction does not yield carbon dioxide; the only source of carbon in the system is the lubricating oil. Using this understanding, the carbon dioxide concentration in the exhaust is converted to oil consumption.&#13;
&#13;
This measurement method was used to study the effect of liner surface roughness, oil control ring design, and piston clearance on oil consumption. The liner finish was found to have a large impact on LOC, particularly for the ring pack with a Three-Piece Oil Control Ring (TPOCR). A very rough liner drastically increases LOC with a TPOCR. One implication is that the liner finish may need to be changed when adapting HD diesel engines with a TPOCR to natural gas or hydrogen.&#13;
&#13;
For a Twin-Land Oil Control Ring-based ring pack, slots/holes on the vertical wall of the ring were found to be effective in controlling LOC when the liner roughness is high.&#13;
&#13;
The main contribution of this work is developing a reliable and accurate method for measuring LOC in a hydrogen combustion engine. The data collected from this system will contribute to the development of a digital twin model with the capability of predicting LOC in any engine.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Authentic Learning with Portfolios: A Combination that K-12 Education Needs</title>
<link href="https://hdl.handle.net/1721.1/151915" rel="alternate"/>
<author>
<name>Vozza, Angelo</name>
</author>
<id>https://hdl.handle.net/1721.1/151915</id>
<updated>2023-08-24T03:57:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Authentic Learning with Portfolios: A Combination that K-12 Education Needs
Vozza, Angelo
Education systems play a critical role in sustaining a society by equipping citizens with the mindsets and skills necessary for professional and personal success. The American K-12 education system has unfortunately not kept pace with the demands of the 21st century. Students need systemic changes that make learning more meaningful and that better engage their existing skills and interests. Authentic learning practices, like Project-Based, Community-Based, and Work-Based Learning, make such changes by orienting instruction around topics relevant to students' experiences and allowing students to practice their knowledge in real-world settings. Schools can encourage the adoption of authentic learning by implementing a complementary practice like portfolios. Local successes in schools using authentic learning and portfolios separately demonstrate their joint viability, but a system that combines the practices and can scale nationally has yet to be developed.&#13;
 &#13;
Using the local "existence proofs" as starting points, I developed a system architecture that addresses many known barriers to adoption, including the time and resource constraints of schools, colleges, and employers, and the inequitable access some students have to engaging learning experiences. This initial proposal did not, however, address constraints imposed by schools' accountability obligations or stakeholders' uncertainty over their peers' readiness to adopt the system. By investigating how federal and state policies have enacted similar transformations, I determined that authentic learning portfolios will likely require government mandates. These mandates could face pushback, however, from families concerned that the proposal would hurt their students' college options. I also interviewed colleges to establish what changes to the proposal were needed to ensure their support and thus satisfy parents' concerns. My findings helped refine the proposed system architecture as well as outline the next steps needed to successfully implement the proposal.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flexure-Based Device Enables Precise Quantitative Monitoring of Muscle Performance</title>
<link href="https://hdl.handle.net/1721.1/151913" rel="alternate"/>
<author>
<name>Lynch, Naomi L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151913</id>
<updated>2023-08-24T03:43:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Flexure-Based Device Enables Precise Quantitative Monitoring of Muscle Performance
Lynch, Naomi L.
Tissue engineering provides an avenue for improving our understanding of the contractile mechanisms of muscle. 3D engineered muscle models have been developed that mimic the structure and functionality of native muscle. These models have the potential to be used in a wide variety of clinical applications such as neuromuscular disease modeling and drug therapy testing. The contractile mechanisms of engineered muscle are often quantified by constraining the muscle on an elastomeric scaffold and measuring the scaffold’s deformation; however, structural imperfections in the scaffold can negatively impact the accuracy of the recorded contractile data. This paper proposes using a flexure-based device that enables decoding of muscle physiological signals – such as contraction force, contraction time, and relaxation time – in a more precise, reproducible, and automated manner.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The influence of current and ripple development on seagrass transplant survival</title>
<link href="https://hdl.handle.net/1721.1/151912" rel="alternate"/>
<author>
<name>Ishii, Jade</name>
</author>
<id>https://hdl.handle.net/1721.1/151912</id>
<updated>2023-08-24T03:43:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The influence of current and ripple development on seagrass transplant survival
Ishii, Jade
Seagrass restorations have been conducted globally but with low overall survival rates. To investigate the role of hydrodynamic energy from tidal currents in the survival of newly transplanted seagrass, Zostera marina rhizome fragments with living shoots were transplanted into a sediment bed and exposed to unidirectional flow in a flume. In accordance with planting techniques reported to improve restoration performance, garden staples were utilized to anchor transplants to the bed. Three flow conditions of increasing velocity were applied for a duration of six hours each, and current ripples developed and persisted in all cases. The ripples were characterized and related to the dislodgement of transplants from the sediment. The use of staples decreased the number of transplants that were dislodged. At lower velocities, transplant survival was further improved when the anchoring staple was oriented parallel to the direction of flow. Most of the transplants that were secured with a staple survived all velocity cases, even with average ripple amplitudes reaching the range of depth at which the roots and rhizomes were planted. These results can inform effective site selection and transplanting techniques for more successful seagrass restorations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Manufacturing of Educational Fiber Extrusion Device and Smart Factory</title>
<link href="https://hdl.handle.net/1721.1/151910" rel="alternate"/>
<author>
<name>Bradley, Russel</name>
</author>
<id>https://hdl.handle.net/1721.1/151910</id>
<updated>2023-08-24T03:07:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Manufacturing of Educational Fiber Extrusion Device and Smart Factory
Bradley, Russel
The Fiber Extrusion Device (FrED) is a desktop fiber extrusion system that mimics the continuous fiber draw process to provide hands-on learning and laboratory experience in data acquisition, control systems, and smart manufacturing. It allows learners to perform experiments, vary manufacturing parameters and control systems, collect data, and perform analysis. Successful classroom activities have been conducted with FrED; however, the prior model is too costly to distribute to individual learners, given the rise of distance learning and MOOCs. A partnership with a university in Mexico, Tec de Monterrey, was formed to develop a low-cost FrED. This thesis covers the design, development, and production of the low-cost variant in detail, discussing in depth the electronics system of FrED and the design for manufacturing and assembly (DfMA) process. An on-campus production and assembly facility, the FrED Factory, was built to mass produce FrEDs. The facility doubles as a space for MIT students to learn about design and manufacturing. The FrED Factory is undergoing a digital transformation aimed at streamlining operations and teaching Industry 4.0 concepts. Three use cases are being developed: Machine Monitoring &amp; Analytics, Smart Assembly Station, and Digital Inventory Management. This thesis also covers the educational initiatives that have formed around the FrED ecosystem during the past academic year, both on campus and with our partner university, Tecnologico de Monterrey.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Sensing, Inference, and Intelligence in the Information Environment</title>
<link href="https://hdl.handle.net/1721.1/151905" rel="alternate"/>
<author>
<name>Galligani, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151905</id>
<updated>2023-08-24T03:03:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Remote Sensing, Inference, and Intelligence in the Information Environment
Galligani, Thomas
This thesis considers the ways researchers and decision-makers deal with malicious actions in the information environment (IE). We are motivated by the profusion of research since 2016 aiming to understand, predict, and respond to phenomena like mis- and disinformation within online social interactions. We begin by outlining three layers of complexity within this IE that make it exceedingly difficult to understand (strategic interaction, technological mediation, and cognitive obfuscation) and describe the framework of logical inference (induction, deduction, and abduction) that we use to assess research methodologies. We find that post-2016 literature focused on malicious actions in the IE has underappreciated insights from post-World War II propaganda analysis literature. We argue that researchers must separate modes of inference in their research, distinguishing between inductively testing a tool and abductively analyzing particular environmental conditions in order to provide results which are reusable and valuable to a decision-maker. This motivates our proposed methodological framework. Intelligence, Surveillance, and Reconnaissance (ISR) -- a systematic way that the US military leverages research in remote sensing to understand complex physical environments -- provides a logical framework to ground this inferential distinction in research in the IE. Finally, we apply this methodology, developing a sensor which captures the influence operation tactic of reputation laundering, testing the sensor on a novel dataset of assassination-related Tweets, finding significant evidence (p&lt;0.0001) that our sensor's observations capture this reputation laundering, and integrating those observations into an analyst's workflow.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Role of Race and Place in Residential Solar Photovoltaic (PV) Adoption</title>
<link href="https://hdl.handle.net/1721.1/151904" rel="alternate"/>
<author>
<name>Jackson, Joy Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/151904</id>
<updated>2023-08-24T03:57:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Exploring the Role of Race and Place in Residential Solar Photovoltaic (PV) Adoption
Jackson, Joy Kelly
The urgency of addressing climate change and grid decarbonization in the United States necessitates the rapid deployment of clean energy technologies at scale. Residential solar photovoltaic (PV) technologies have emerged in the past decade as one such technology as a result of substantial cost declines, though market penetration remains low. New government initiatives and policy incentives have been enacted to encourage the uptake of these technologies; however, recent research has documented distributional challenges related to their deployment. Building on emerging studies focused on the racial equity implications of residential solar PV deployment, this research implements a series of regression models on two national solar installation datasets, controlling for market, policy, and demographic variables. The primary goal of this work is to systematically evaluate the effect of race and ethnicity on 1) the probability of a community having at least one solar installation and 2) the diffusion of solar PV technologies, defined as the total number of installations in a community. Results indicate strong evidence that communities classified as majority-Black are associated with a decreased likelihood of having any solar at all, and fewer installations overall, in most of the specified models. The results vary for majority-Hispanic communities, with observed disparities present in some of the models. Controlling for certain demographic variables has differentiated effects for different racial and ethnic majority classifications, due to the cumulative impacts of socioeconomic disadvantage for those groups. The study concludes with a discussion of policy implications, methodological limitations, and avenues for future policy research to support an equitable clean energy transition.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Moving towards a more sustainable model of energy production &amp; consumption: a case for Indonesia</title>
<link href="https://hdl.handle.net/1721.1/151903" rel="alternate"/>
<author>
<name>Watel-Dehaynin, Tristan</name>
</author>
<id>https://hdl.handle.net/1721.1/151903</id>
<updated>2023-08-24T03:59:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Moving towards a more sustainable model of energy production &amp; consumption: a case for Indonesia
Watel-Dehaynin, Tristan
This thesis provides an assessment of Indonesia’s energy infrastructure following the decarbonization objectives set forth by the government at the G20 conference in Bali in 2022. Its goal is to compare the current models of production to local development objectives, and assess the state of key renewable energy sectors through the lenses of policy, technology, economic development, social stability and environmental conservation. It starts by providing historical context regarding the development of Indonesia as a country, looking at the influence of different civilizations over the land that is known today as Ibu Pertiwi. This assessment finds that the political and cultural spectrum of the country is highly diversified, and that democracy is still in the process of being fully established. The second part assesses the current policy environment and offers various tools to complement it. It finds that existing policy does not currently support the growing renewable energy industry. Solutions proposed include financial support for the national energy utility, an increase in the existing carbon tax, a phase out of fossil fuel subsidies, enhanced development of the private energy sector, and the application of energy standards. The third and final part reviews the growth of three key renewable energy markets: geothermal, solar and wind energy. It finds that, while resources are abundant, none of these markets have yet reached the pace of development expected by the government, mostly due to a lack of encompassing regulation, existing infrastructure and funding.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scenario Planning Framework &amp; Sensitivity Analysis for New Orthopedic Sets in the Spine Platform</title>
<link href="https://hdl.handle.net/1721.1/151902" rel="alternate"/>
<author>
<name>Vincent, Alura</name>
</author>
<id>https://hdl.handle.net/1721.1/151902</id>
<updated>2023-08-24T03:03:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Scenario Planning Framework &amp; Sensitivity Analysis for New Orthopedic Sets in the Spine Platform
Vincent, Alura
Spinal surgeries are a critical part of providing patients relief from spinal degenerative diseases or deformities. Johnson &amp; Johnson is developing a new spinal product in the Thoracolumbar space that builds on features of their legacy products in order to provide patients with high-quality pain relief.&#13;
&#13;
The goals of this project are twofold within the Thoracolumbar spine family of products. The first goal is to understand how inventory can be modeled over a long time horizon for a new product launch. Forecasting over a long time horizon is difficult due to uncertainty, which is exacerbated for new products by a lack of historical data. The second goal is to understand the implications of various product launch scenarios on the broader Spine product family.&#13;
&#13;
To accomplish the first goal, a baseline model was created and a sensitivity analysis was conducted to analyze the impacts of changing prices and cost of goods sold on the profitability of the product family. The second goal was approached by developing a scenario model framework for product launches within the Spine business. The baseline model provided the team with an understanding of the most critical drivers of gross profitability for this product. The scenario framework provided a structured way for the team to identify and prioritize scenarios.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Farm-scale Water Management in Adaptation to Climate Change in Morocco</title>
<link href="https://hdl.handle.net/1721.1/151901" rel="alternate"/>
<author>
<name>Vasseur Bendel, Aurélien</name>
</author>
<id>https://hdl.handle.net/1721.1/151901</id>
<updated>2023-08-24T03:18:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Farm-scale Water Management in Adaptation to Climate Change in Morocco
Vasseur Bendel, Aurélien
Morocco is already experiencing high levels of water scarcity, and rainfall is predicted to decrease by 20% to 50% under different climate change scenarios. As Morocco currently relies on large reservoirs built to achieve basin-scale water resources management, small-scale reservoirs are investigated here as a possible way to adapt to these drier conditions and to collect overland flow for irrigation purposes before its evaporation or infiltration into the ground. We investigate a potential shift from basin-scale to farm-scale water resource management. A prototype of such small reservoirs has been built at the experimental farm of Benguerir, and this thesis studies its catchment as well as the extent to which this technology could scale up in other regions of Morocco. Runoff production in the form of overland flow is simulated according to the Green-Ampt model while considering the formation of a thin crust of clay typical of dry environments such as southern Morocco. Overland flow is used as input to different models of reservoir management in order to determine the optimal capacity of a potential reservoir in a particular location as a function of its catchment area, rainfall pattern, soil type, cost of construction, water price, and crop water requirements. Within reasonable assumptions, capacities close to that of the reservoir in Benguerir (4000 m³) are estimated. However, the results are sensitive to multiple partially unknown parameters such as soil heterogeneity, the intra-day distribution of rainfall, and the ratio between construction cost and water price.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microneedles for Drug Delivery in Aquaculture</title>
<link href="https://hdl.handle.net/1721.1/151900" rel="alternate"/>
<author>
<name>Wolfe, Colleen</name>
</author>
<id>https://hdl.handle.net/1721.1/151900</id>
<updated>2023-08-24T03:25:31Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Microneedles for Drug Delivery in Aquaculture
Wolfe, Colleen
Aquaculture is a rapidly growing industry that can address increased food demand from population growth as well as overfishing and other environmental concerns from traditional fishing methods. However, a major challenge in aquaculture is the spread of disease due to the close quarters of the fish. Current fish vaccination methods include oral, immersion, and injection, with injection being the most effective but also the most cumbersome to implement. This project proposes an alternative by using biocompatible microneedles that can be applied in situ and dissolve to release the drug. The focus of this project is the needle fabrication method and coating selection to provide the necessary mechanical strength to withstand aquatic environments. It was found that a 33.3% w/w shellac/ethanol coating, applied to hollow silk microneedles using a two-step method of full dip coating followed by tip-only dip coating, was able to fully coat the microneedles. Compression testing was done on individual needles in their dry state and after 30 minutes of soaking in deionized water and seawater. A constant increase in force from the onset of testing across all needles showed little difference between all samples observed, indicating that the needle should be able to puncture fish skin.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laser-Induced Particle Impact Testing in High-Pressure Oxygen Environments</title>
<link href="https://hdl.handle.net/1721.1/151898" rel="alternate"/>
<author>
<name>Alyassini, Samair</name>
</author>
<id>https://hdl.handle.net/1721.1/151898</id>
<updated>2023-08-24T03:59:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Laser-Induced Particle Impact Testing in High-Pressure Oxygen Environments
Alyassini, Samair
Particle impact ignition is an important source of metal fires in the high-pressure oxygen environments found in the turbines of oxygen-rich turbopumps. Understanding of particle impact ignition has been hindered by experimental challenges in reproducing this phenomenon under controlled laboratory conditions. This study addresses these challenges through the development of a specialized particle impact rig that integrates laser-induced particle impact testing (LIPIT) into an oxygen-compatible pressure vessel, thus enabling precise control over environmental conditions (target temperature, oxygen pressure) as well as impact variables (particle size/shape, impact velocity). This thesis describes the design of the oxygen-compatible pressure vessel, emphasizing considerations such as stress analysis, materials selection, oxygen-compatibility, and integration with the LIPIT system. The thesis concludes with pathfinding experiments successfully demonstrating particle ignition in a prototype rig, providing in situ images of single particle ignition events using application-relevant materials and particle sizes. Future work will use this rig to characterize the effects of operating conditions and material choices on susceptibility to particle impact ignition with a view toward developing more durable oxygen-compatible hardware for next-generation staged combustion rocket engines.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impact of Biochemical and Mechanical Stimuli on Motor Neuron Growth</title>
<link href="https://hdl.handle.net/1721.1/151897" rel="alternate"/>
<author>
<name>Bu, Angel</name>
</author>
<id>https://hdl.handle.net/1721.1/151897</id>
<updated>2023-08-24T03:46:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Impact of Biochemical and Mechanical Stimuli on Motor Neuron Growth
Bu, Angel
Peripheral nerve injuries are among the most prevalent trauma injuries, and the current gold standard for treatment is an autologous nerve graft. The use of tissue-engineered nerve grafts, in which a neural scaffold, cellular or acellular, is utilized to promote nerve repair, has increased in recent years. Prior research has shown that transcutaneous optogenetic stimulation of grafted engineered tissue can promote reinnervation and angiogenesis in a rat volumetric muscle loss model. Our study queried the individual effects of biochemical and mechanical stimuli on neuronal growth to push the field of neuromuscular systems forward. We utilized optogenetic stimulation to emulate the biochemical effects and a magnetic fibrin platform to isolate the mechanical effect. To develop our magnetically actuatable substrate, we optimized a fibrin hydrogel with a stiffness similar to that of skeletal muscle. Then, we added rectangular segments of 1:10 PDMS with 25% v/v 4-micron iron microparticles. These rectangular segments within the fibrin hydrogel were then cyclically actuated by a permanent neodymium magnet. Our results showed a substantial increase in neurite outgrowth in the experimental group, which was supplemented with optogenetically exercised media from a muscle monolayer. The isolated biochemical effect was a substantial increase in the rate of neurite growth between the groups. In our preliminary neuromuscular system, we saw a degree of co-localized alignment between the neurites and differentiated muscle. This neuromuscular protocol appears to have alignment similarities to physiological in vivo tissue. We quantified alignment through a Fast Fourier Transform of the image data from the separate imaging channels, RFP for muscle and GFP for motor neurons. Finally, our magnetic fibrin platform found no significant increase in myofiber length and width when mechanical stimulation was applied after myoblast differentiation.
In future research, we will explore biochemical and magnetic stimulation of our neuromuscular co-culture and actuate the myoblasts at an earlier cellular stage to impact alignment. In conclusion, our studies found that stimulated media aid in neurite outgrowth. We will also perform RNAseq on our systems to verify the specific biological pathways and upregulated growth factors.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature and Thermal Noise Suppression for Precision Mechanical Experiments</title>
<link href="https://hdl.handle.net/1721.1/151896" rel="alternate"/>
<author>
<name>Fife, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/151896</id>
<updated>2023-08-24T03:00:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Temperature and Thermal Noise Suppression for Precision Mechanical Experiments
Fife, Dylan
There is currently a lack of experiments that could prove whether gravity exists as a quantum field. One possible proof of the quantum nature of gravity would be to entangle massive quantum harmonic oscillators. The quantum harmonic oscillator acts as a resonant sensor for the gravitationally mediated entanglement. The quality factor of a resonant sensor must be sufficiently high that the sensor is not dominated by thermal noise and can be cooled to the ground state. This thesis develops scaling laws for the relationship between the size of the mass bonded to a membrane resonator and the resonator's quality factor. With such a resonator, the entanglement is anticipated to be weak, requiring extensive averaging to achieve statistically significant measurements; the creation of a long-term stable environment is therefore critical. Thus, the temperature of the lab where the experiment will be run was stabilized, reducing the integrated deviation from 1 K to 20 mK. This resulted in a reduction of laser position noise by a factor of 2.7.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Layer-by-Layer Single-crystal Two-dimensional Material Growth by Geometric Confinement</title>
<link href="https://hdl.handle.net/1721.1/151894" rel="alternate"/>
<author>
<name>Lee, Doyoon</name>
</author>
<id>https://hdl.handle.net/1721.1/151894</id>
<updated>2023-08-24T03:24:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Layer-by-Layer Single-crystal Two-dimensional Material Growth by Geometric Confinement
Lee, Doyoon
Two-dimensional (2D) transition metal dichalcogenides (TMDs) and their heterostructures have been widely studied for next-generation electronics. However, the following critical challenges have hindered their commercialization: 1) precise layer control during growth, 2) maintaining single crystallinity at the wafer scale, and 3) the transfer process required to fabricate heterostructures for various next-generation applications such as spintronics, valleytronics, and optoelectronics.&#13;
&#13;
This thesis introduces a confined-growth technique that can overcome the aforementioned hurdles simultaneously by introducing a geometric SiO₂ mask that has growth selectivity from the underlying substrate. As micrometer-scale SiO₂ trenches reduce the growth duration substantially, single-domain WSe₂ and MoS₂ arrays are obtained on an arbitrary substrate at wafer-scale by filling the trenches before the second layer of nuclei is introduced, thus enabling layer-by-layer growth without requiring epitaxial seeding.&#13;
&#13;
In addition, subsequent MoS₂ growth on the WSe₂ arrays yields MoS₂/WSe₂ heterostructures. We therefore demonstrate, for the first time, single-domain TMD arrays and their heterostructures at wafer scale with controllable thickness, whose performance is comparable to that of devices fabricated from TMD flakes. This confined-growth technique not only overcomes key obstacles for 2D materials, but also provides a platform with great potential for next-generation 2D-material-based applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redesigning diabetic foot risk assessment for amputation prevention in low-resource settings: Development of a purely mechanical plantar pressure evaluation device</title>
<link href="https://hdl.handle.net/1721.1/151893" rel="alternate"/>
<author>
<name>Reddie, Madison</name>
</author>
<id>https://hdl.handle.net/1721.1/151893</id>
<updated>2023-08-24T03:50:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Redesigning diabetic foot risk assessment for amputation prevention in low-resource settings: Development of a purely mechanical plantar pressure evaluation device
Reddie, Madison
As global diabetes rates skyrocket, diabetic foot complications constitute a massive and rapidly growing global health problem, causing one million lower-extremity amputations every year. These amputations are typically preceded by largely preventable diabetic foot ulcers (DFUs). However, 80% of the world’s more than half a billion diabetics now live in low- and middle-income countries, where many healthcare settings lack the resources to implement recommended diabetic foot risk assessment and risk-based DFU prevention practices. Thus, the objective of this thesis was to redesign diabetic foot risk assessment specifically for low-resource settings in order to enable more efficient resource allocation for amputation prevention.&#13;
&#13;
To this end, a novel, low-cost, purely mechanical plantar pressure evaluation device was designed. The device consists of a grid of plastic bistable compliant mechanisms whose geometries can be tuned to generate a desired pressure threshold at which one part moves to a second stable position. The grid therefore presents a visual series of binary outputs in response to applied pressure. By having diabetic patients step on the device, non-specialist healthcare providers can easily assess patients' plantar pressures, which are known to be predictive of future DFU. A prototype was used to solicit feedback from 20 healthcare providers in Kenya. A design iteration was conducted based on their feedback, and an updated prototype was fabricated. The ability of this prototype to detect high plantar pressures was tested in a study with 41 healthy subjects. The prototype demonstrated a specificity of 100% and a sensitivity of 25.6%, though sensitivity reached 60% for heavier subjects. Sensitivity could likely be significantly improved by lowering the device's profile and increasing the sensing area. Strained health systems may then be able to use this device to allocate scarce healthcare resources more efficiently to prevent costly DFUs and amputations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Hydrodynamic Interactions of Underwater Vehicles in Close Proximity Using an Identical Ellipse Pair</title>
<link href="https://hdl.handle.net/1721.1/151892" rel="alternate"/>
<author>
<name>Rhodes, Preston W.</name>
</author>
<id>https://hdl.handle.net/1721.1/151892</id>
<updated>2023-08-24T03:30:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Characterizing Hydrodynamic Interactions of Underwater Vehicles in Close Proximity Using an Identical Ellipse Pair
Rhodes, Preston W.
The hydrodynamic interactions between two identical 6:1 ellipses in close proximity were investigated using a 2D immersed interface method simulator in a viscous, rotational flow at Re=1500. Interactions in tandem, side-by-side, and staggered arrangements were characterized based on changes to the drag, lift, and yaw moment coefficients experienced by the ellipses. The drag and lift results agreed with existing studies of 2D cylinders performed in subcritical flow regimes. The drag interactions were divided into five regions based on changes to the individual ellipses and the overall system. The lift was repulsive and, for the closest parallel configurations, up to four times the value of drag. An overtaking maneuver was investigated by introducing a relative velocity between the ellipses. When both ellipses were moving, the lift was repulsive throughout the maneuver. The mean drag of the slower ellipse was mostly unaffected; although the largest instantaneous drag increase reached 2.5 times that of an isolated ellipse at the highest relative velocity, this was matched by a similar drag decrease in the second half of the maneuver. The drag of the faster ellipse was relatively unaffected by the overtaking maneuver. When one ellipse was stationary, the lift transitioned from repulsive to attractive as the moving ellipse passed the stationary ellipse. The stationary ellipse experienced a significant increase in mean drag at higher overtaking speeds, reaching more than half the value of an isolated ellipse moving at Re=1500. Its lift also changed significantly and was similar in magnitude to the drag. The overtaking ellipse experienced a three-to-four-fold increase in mean drag at all speeds, a thirty-fold increase in peak drag at the highest speed, and a mean lift similar in magnitude to the mean drag. 
The findings of this study can be used to inform fuel-efficient swimming configurations for underwater vehicles traveling in formation, as well as to increase safety when maneuvering in close proximity.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Energy Efficiency Analysis for Hydrogen and Jet Fuel in Next-Generation Long-Haul Aircraft</title>
<link href="https://hdl.handle.net/1721.1/151889" rel="alternate"/>
<author>
<name>Salgado Bobadilla, Diego Andre</name>
</author>
<id>https://hdl.handle.net/1721.1/151889</id>
<updated>2023-08-24T03:23:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Comparative Energy Efficiency Analysis for Hydrogen and Jet Fuel in Next-Generation Long-Haul Aircraft
Salgado Bobadilla, Diego Andre
The aviation sector aims to reach net-zero CO₂ emissions by 2050. Consequently, the industry must rely on a fuel with no life-cycle CO₂ emissions. Liquid hydrogen offers the potential to provide zero in-flight CO₂ emissions and low life-cycle CO₂ if produced from non-fossil electricity. While wide-body aircraft account for approximately 43% of in-flight CO₂ emissions, few studies have focused on hydrogen-powered aircraft of this size. Additionally, the performance of these aircraft in off-design missions is not typically discussed in the literature. A first-principles based approach was used to model long-haul hydrogen-powered aircraft and quantify fuel burn performance across a range of off-design missions. No engine thermodynamic improvements from using cryogenic fuel were assumed. Furthermore, sensitivity analyses were performed with respect to aircraft design range, material structural strength, and engine performance. This study shows that hydrogen-powered aircraft require roughly 2% less fuel energy at the design mission than conventional jet fuel aircraft. However, hydrogen-powered aircraft require approximately 10-30% more fuel energy for off-design missions between 1,000 and 4,000 nmi compared to jet fuel aircraft. While reducing the design range to cover 95% of all wide-body flights decreases this off-design fuel burn penalty, LH₂ aircraft still have a 5-25% increase in energy required to fly missions between 1,000 and 4,000 nmi relative to conventional aircraft. Additionally, the study indicates that improving material strength or engine performance only has a marginal effect on the relative fuel energy required between LH₂ and jet fuel aircraft.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Protecting Our Investment: Solving Fast Response Cutter Corrosion</title>
<link href="https://hdl.handle.net/1721.1/151888" rel="alternate"/>
<author>
<name>Patnode, Isabelle Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/151888</id>
<updated>2023-08-24T03:36:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Protecting Our Investment: Solving Fast Response Cutter Corrosion
Patnode, Isabelle Claire
The USCG Fast Response Cutter (FRC) fleet is experiencing corrosion at an alarming rate in the propulsion shaft tunnels. An investigation into this problem was conducted from the perspectives of “root cause” and “prevention.” Root causes for the corrosion stem from an interaction in a complex, two-stage galvanic protection system on board the ship that uses both passive zinc protection and impressed current cathodic protection (ICCP) from an active, feedback-controlled power supply. Custom measuring instruments were built and applied aboard an in-service FRC to better understand the complications with galvanic protection on the FRC, yielding crucial insights. The ICCP power supply unit is intended to prevent corrosion by actively injecting current through anodes in order to raise the magnitude of the voltage measured between the reference electrode and the hull. When designing the FRC, it was expected that a combination of ICCP and passive zincs would protect the hull steel in tandem; however, this has not been the case along the entirety of the ship. The ICCP system is unable to accurately determine the reference potential, a useful indicator of whether the hull steel is adequately protected from corrosion, in every area of the ship, allowing some areas to corrode at an accelerated rate. This report details a full summary of the analysis and results, along with a review of laboratory and field experiments with several FRCs in the USCG fleet, concluding with specific, actionable suggestions for mitigating corrosion in the FRC stern tube. Additionally, this report outlines how non-intrusive load monitoring, which has a proven track record for preemptively recognizing faults in shipboard equipment, was used to analyze the ICCP system, and how this relates to shipboard microgrids.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hydrogel Adhesive Marine Sensing System: Design,&#13;
Mechanism, and Applications</title>
<link href="https://hdl.handle.net/1721.1/151887" rel="alternate"/>
<author>
<name>Duque Londono, Camilo</name>
</author>
<id>https://hdl.handle.net/1721.1/151887</id>
<updated>2023-08-24T03:44:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Hydrogel Adhesive Marine Sensing System: Design,&#13;
Mechanism, and Applications
Duque Londono, Camilo
Marine animals offer a wealth of knowledge that goes beyond their role as a protein source for humans. Through careful observation, they offer valuable insights into the health of our oceans and provide inspiration for the design and control of unmanned underwater vehicles. Additionally, research into their migrational patterns and responses to external stimuli such as sonar, drilling, and offshore energy production is important for informing government agencies and engineers of the potential effects of such activities on local fauna.&#13;
&#13;
Traditionally, sensors used to gather data from marine animals have been invasive and cumbersome, involving subcutaneous anchors, bolts, or sutures. These methods limit studies to large, resilient animals such as dolphins and whales, while smaller, more fragile animals remain understudied. In this study, a hydrogel adhesive marine tagging system was developed that offers rapid (less than 20 seconds), robust (interfacial toughness &gt; 160 J m−2), conformable, and non-invasive sensor integration on a variety of marine tissues, particularly soft and flexible ones. This system was tested on live marine animals with varying surface features, from soft skins to hard shells, to evaluate its effectiveness against current methods. The system was then used to conduct a kinematic study of skate locomotion, using a sensor network deployed across a skate fin, to showcase how this tool could aid bio-inspired robotic studies. Further, hydrogel mechanics and design strategies are presented, providing a deeper understanding of the adhesive system and its mechanism. Results from the various experiments show that this system has the potential to revolutionize the field by providing a reliable, quick, and non-invasive method of sensor adhesion.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Aerosol Composition with Low Cost Optical Particle Counters</title>
<link href="https://hdl.handle.net/1721.1/151886" rel="alternate"/>
<author>
<name>Sharpe, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/151886</id>
<updated>2023-08-24T03:20:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating Aerosol Composition with Low Cost Optical Particle Counters
Sharpe, Will
Particulate matter (PM) is a serious threat to human health and contributes to millions of premature deaths a year globally. Access to source attribution and compositional data for PM can have many benefits, from easier regulation to enabling a better understanding of the negative health effects associated with PM. Acquiring compositional data for ambient PM generally has a high associated cost and is done using complex instrumentation, manual postprocessing, and labor-intensive lab work. These approaches produce very high quality data but have low spatiotemporal resolution and a high cost. This work explores a novel method to generate basic compositional data for ambient PM with low-cost, easily deployable apparatuses in concert with a simple fully connected neural net. Simulated effects of thermal denuders as well as dryers/humidifiers are used to perturb aerosols before they enter simulated low-cost optical particle counters (OPCs). This provides information on the volatility and hygroscopicity of the aerosols. The OPC outputs are processed programmatically and fed into a neural net to classify the category to which an incoming aerosol belongs. This method is run both for compound-derived categories which mimic real PM sources (sea salt, biomass burning, dust, and urban smog) and for property-derived aerosols which present more idealized conditions. The method achieves near-perfect classification for single-mode aerosol distributions and over 90% correct classification for two-mode aerosol distributions. The results on the property-derived aerosols show robustness to changing aerosol properties, as well as to changing apparatus and ambient conditions. This work provides proof of concept for future real-world experiments to verify this method and presents an experimental setup for this purpose. Having access to compositional data for ambient PM should allow PM sources to be identified at very high spatiotemporal resolution for a relatively low price.
This basic source attribution could provide the data needed for better-informed regulation as well as future scientific work.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Soil Carbon Signatures from Hyperspectral Reflectance Data using Spectral Unmixing</title>
<link href="https://hdl.handle.net/1721.1/151885" rel="alternate"/>
<author>
<name>Zeng, Xinyi</name>
</author>
<id>https://hdl.handle.net/1721.1/151885</id>
<updated>2023-08-24T03:01:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Soil Carbon Signatures from Hyperspectral Reflectance Data using Spectral Unmixing
Zeng, Xinyi
Soil carbon stocks have been depleted mainly due to human activities, and there is a potential for soil carbon sequestration through regenerative agricultural practices, forest restoration, and similar interventions. Traditional laboratory treatments of soil samples provide ground-truth soil carbon content, but they are usually costly, time-consuming, and provide only one-time measurements with limited spatial coverage and resolution. The accelerating development of soil spectroscopy offers an opportunity for cheaper, more immediate, and continuous measurements of soil carbon content. Moreover, the recent advancement of hyperspectral imagers has significantly increased spectral resolution, allowing more granular information to be captured. These devices offer a potentially more accurate methodology to quantify and monitor soil properties globally. Nevertheless, there is no consensus on the optimal practices for soil carbon content estimation using hyperspectral reflectance data. Therefore, this thesis tests whether it is feasible to leverage spectral linear mixing models to decompose soil hyperspectral reflectance data into interpretable soil component spectral signatures and abundances. The results demonstrate that the proposed spectral linear mixing model can predict the soil organic carbon (SOC) spectral signature and mass abundance with nearly zero average bias. However, biases can still be significant for certain spectra. To reduce these biases, it is essential to characterize the problem more effectively. Dedicated soil spectral data collection efforts designed explicitly for unmixing applications could enhance the quality of the results and contribute to a more comprehensive understanding of the SOC spectrum and abundance. These findings motivate further development and refinement of spectral mixing models, as well as research into the application of hyperspectral reflectance data to soil property analysis.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Performance Analysis of Frequency-Shift Keyed Transmitter using Rapidly Tunable Lasers</title>
<link href="https://hdl.handle.net/1721.1/151883" rel="alternate"/>
<author>
<name>Pan, Carol</name>
</author>
<id>https://hdl.handle.net/1721.1/151883</id>
<updated>2023-08-24T03:27:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Performance Analysis of Frequency-Shift Keyed Transmitter using Rapidly Tunable Lasers
Pan, Carol
Optical Frequency Shift Keying (FSK) is a modulation scheme that encodes data in the wavelength of a carrier signal. Due to the large amount of optical bandwidth available in the erbium, 1.55-&#120583;m telecom band, FSK can potentially utilize a wide spectrum to achieve multi-Gb/s channel bandwidths. For free-space laser communication (lasercom) applications, links are usually point-to-point, have narrow beamwidths, and do not need to share a transmitting medium with other signals. Therefore, many lasercom applications could exploit the benefits of FSK by trading spectral efficiency for power efficiency. This thesis investigates an FSK transmitter implementation utilizing a single, rapidly tunable laser, allowing scalability to high values of M-ary FSK, where M represents the number of wavelengths in the symbol constellation. This work proposes and implements a design for an FSK-modulated transmitter using a modulated-grating, Y-branch tunable laser and assesses its suitability for lasercom applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Design and Fabrication Pipeline for Integrating Rotary Encoders into 3D Printed Mechanisms</title>
<link href="https://hdl.handle.net/1721.1/151882" rel="alternate"/>
<author>
<name>AlAlawi, Marwa</name>
</author>
<id>https://hdl.handle.net/1721.1/151882</id>
<updated>2023-08-24T03:02:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Design and Fabrication Pipeline for Integrating Rotary Encoders into 3D Printed Mechanisms
AlAlawi, Marwa
In this thesis, we introduce MechSense: rotary encoders 3D-printed in one pass alongside rotational mechanisms. MechSense encoders report on their angular position, direction of rotation, and speed. MechSense encoders utilize capacitive sensing by integrating a floating capacitor into the rotating element and three capacitive sensor patches in the stationary part of the mechanism. Unlike existing rotary encoders, MechSense does not require manual assembly and can be effortlessly integrated during design and fabrication. MechSense is accompanied by an editor that allows users to integrate the encoder within a rotating mechanism.&#13;
&#13;
We contribute a sensor topology and a computational model that can compensate for print deviations. We also evaluate our sensing model for angular position detection (mean error: 1.4°) across multiple prints and rotations, different spacings between sensor patches, and different sensor sizes. Finally, we demonstrate MechSense through three application examples: 3D-printed tools, tangible UIs, and gearboxes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preservation and Deployment of Biofertilizers to Mitigate Soil Phosphorous Loss from Agricultural Systems</title>
<link href="https://hdl.handle.net/1721.1/151881" rel="alternate"/>
<author>
<name>Barghouti, Zeina</name>
</author>
<id>https://hdl.handle.net/1721.1/151881</id>
<updated>2023-08-24T03:04:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preservation and Deployment of Biofertilizers to Mitigate Soil Phosphorous Loss from Agricultural Systems
Barghouti, Zeina
The world is approaching peak phosphorus within the next 20 years, which poses a severe threat to the food security of a rapidly growing global population. Phosphorus is the second most essential macronutrient for plants after nitrogen, and its scarcity results in stunted growth, poor root development, and reduced agricultural crop yields. While phosphorus is naturally present in the soil, it is often not available in sufficient quantities or in a form that plants can assimilate. The use of phosphate-rich fertilizers has compensated for the low levels of phosphorus in agricultural systems; however, up to 95% of these fertilizers become "fixed" in the soil, causing environmental damage and reducing soil fertility. It is critical to find sustainable ways to manage our depleting phosphorus resources and minimize the environmental impact of phosphate fertilizers. To address these challenges, this research introduces a framework that leverages silk-based biopolymer encapsulation to preserve phosphate-solubilizing microorganisms and deliver them to the soil on naturally occurring phosphate rocks. By enabling the revival of phosphate-solubilizing bacteria and initiating the solubilization of adjacent phosphate rocks, this approach not only improves the accessibility of untapped phosphate resources for plant roots but also facilitates the continual solubilization of various forms of insoluble phosphate, including legacy phosphorus from past fertilizer applications. This research showed that phosphate-solubilizing bacteria encapsulated in the biopolymer-coated phosphate rock remained viable after 30 days of storage and demonstrated effective solubilization of the host phosphate rock in solution. The addition of the biopolymer-coated phosphate rocks to chickpea seedlings showed a significant increase in the phosphorus content of chickpea leaves compared to the addition of uncoated rocks.
Additional investigations can be undertaken to evaluate the potential of this framework as a controlled-release fertilizer by varying coating parameters such as material processing, biopolymer concentrations, and fertilizer amounts. The results of this thesis provide a foundation for further exploration and development of natural phosphate biofertilizers, bringing us one step closer to a more sustainable and resilient future in agriculture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Thermal Behavior of Pyrolytic Graphite Sheets (PGS) at Low Interface Pressures</title>
<link href="https://hdl.handle.net/1721.1/151879" rel="alternate"/>
<author>
<name>Padilla, Joushua G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151879</id>
<updated>2023-08-24T03:02:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Characterizing the Thermal Behavior of Pyrolytic Graphite Sheets (PGS) at Low Interface Pressures
Padilla, Joushua G.
As the United States Navy continues to pursue its goal of developing fully electric ships, the cooling of the critical electronic components on board must be solved. One of these critical components is the integrated Power Electronics Building Block (iPEBB), a universal converter that is programmed for its specific application when installed. The iPEBB is a modular unit that can be easily swapped by a single person. This unique modularity has led the Navy to pursue the design of a dry-interface liquid cooling system for the iPEBB: no liquid can cross the boundary of the iPEBB, so the cooling system must remain separate.&#13;
&#13;
In this thesis, an integral portion of the dry-interface cooling solution, the thermal interface material (TIM) between the cold plate and the iPEBB, was explored in a multitude of ways. First, commercially available TIMs were investigated for their thermal behavior at pressures below 10 psi as well as their structural qualities and usability metrics; pyrolytic graphite sheets (PGS) were chosen for further investigation. Second, a fourth-order thermal conductivity model for PGS as a function of interface pressure was derived in the 0–10 psi range. This model is important because it gives engineers conductivity inputs for PGS in thermal modeling of future iterations of the iPEBB or of other systems where PGS is used as a TIM. Third, the design and testing of an experimental rig (PPR) for testing thermal interface materials under various average pressures and pressure profiles was presented. An empirical model was developed that demonstrates the effect that the interface pressure profile has on component temperatures with PGS as the acting TIM between the cooling solution and the heated system. Finally, using the conductivity model, CFD simulations of the PPR experiments were run. These simulation results were then compared to the results of the PPR experiments, and it was found that using the conductivity model for PGS as an input to a CFD simulation is an effective way of modeling the contact resistance of PGS as a function of pressure. The conductivity-model CFD simulation setup has a mean error of 1.4°C ± 1.3°C between the simulation's average resistor temperature and the actual average temperatures measured.&#13;
&#13;
The experiments and simulations conducted in this thesis provide a blueprint for the necessary steps required to thermally model not only the iPEBB dry interface cooling system, but also other systems that might use PGS as a TIM, using CFD. The information in this thesis will also help researchers model the thermal behavior of the iPEBB cooling system once a clamping mechanism for the iPEBB structure is designed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the Prospects and Development of China's Online Healthcare Industry: Opportunities and Challenges</title>
<link href="https://hdl.handle.net/1721.1/151877" rel="alternate"/>
<author>
<name>Zhu, Xianmin</name>
</author>
<id>https://hdl.handle.net/1721.1/151877</id>
<updated>2023-08-24T03:53:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis of the Prospects and Development of China's Online Healthcare Industry: Opportunities and Challenges
Zhu, Xianmin
With the rapid development of the internet and technology, the healthcare industry has undergone a significant transformation in recent years. The online healthcare industry, in particular, has emerged as a promising sector in China, providing a convenient and accessible way for people to access medical services and information.&#13;
&#13;
To analyze the online healthcare industry in China, this thesis employs two widely used frameworks: PESTEL analysis and Porter's five forces analysis. Furthermore, the thesis selects three major players in China's online healthcare industry, namely AliHealth, Ping'an Healthcare, and JDHealth, for a detailed analysis of the industry's competitive landscape. The analysis covers various aspects such as business models, product and service offerings, numbers of active users, and financial status using multiple metrics. Finally, the thesis discusses the risks, challenges, and potential solutions in this industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Post Disaster Relief Structure</title>
<link href="https://hdl.handle.net/1721.1/151876" rel="alternate"/>
<author>
<name>Bharmal, Sabika</name>
</author>
<id>https://hdl.handle.net/1721.1/151876</id>
<updated>2023-08-24T03:33:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Post Disaster Relief Structure
Bharmal, Sabika
This thesis covers the design and optimization of a post-disaster relief shelter as well as a custom connection design. The goal of this work is to propose new solutions for temporary shelters and to streamline the design process. In particular, the structure is designed for flooding in Pakistan and uses steel hollow structural sections (HSS). The design works to minimize the number of unique parts, requires no power tools for assembly, utilizes all prefabricated elements, and meets the region's building codes for a typical residential home. Ultimately, the structure is a shelter that can be reused year to year by being assembled and disassembled as needed. This will help to reduce material waste and the overall effect on the environment. For the design of the structure, two different methods were employed, one focusing on parametric modeling and one focusing on repetitive elements. Designs from each method were optimized and then compared to determine the best solution. Once the top design was selected, the members in the design were grouped and then replaced based on the groups to reduce the number of unique elements. Finally, the last part of the thesis works on the design and prototyping of a custom steel node. The node is designed to connect eight HSS sections together with each element held using a single pin. Preliminary prototyping for the connection is also done using polymer and steel 3D printing methods. In conclusion, this thesis presents a workflow and design for a prefabricated shelter kit that can be assembled with no additional tools or materials while ensuring it resists all the appropriate loads for the area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Use of Inductive Transfer Learning and RNN to Quantify Extreme Event Statistics of Ship Motions</title>
<link href="https://hdl.handle.net/1721.1/151874" rel="alternate"/>
<author>
<name>Kramer, Jarod</name>
</author>
<id>https://hdl.handle.net/1721.1/151874</id>
<updated>2023-08-24T03:45:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Use of Inductive Transfer Learning and RNN to Quantify Extreme Event Statistics of Ship Motions
Kramer, Jarod
Ship motion software has been a critical tool for designers to study the extreme responses of ships in irregular waves. These studies and simulations often take thousands of hours to predict and analyze the ship’s motion. Simulation results are often imperative to ensure the development of accurate operational guidance, typically in the form of plots, advising the crew on safe course and speed combinations to avoid dangerous roll and pitch motions. Two programs in use by the Navy to fill this need are the fast, lower-fidelity SimpleCode program and the slower, higher-fidelity Large Amplitude Motion Program (LAMP). Previous efforts have developed a framework to leverage machine learning through a Long Short-Term Memory (LSTM) network architecture to augment the SimpleCode program by mapping its ship motion output to the more accurate LAMP output without adding significant computational overhead. This process of using an LSTM neural network to improve the SimpleCode output provides the opportunity to supply predictions and guidance to the crew in real-time. However, the limits of this mapping across various sea domains still need to be discovered. By investigating these limits, a more generalized LSTM can be realized through inductive transfer learning and a model agnostic meta-learning approach, one that leverages the training of previous networks to augment SimpleCode across a broader range of seas or produce more accurate results on a narrow set of sea conditions after very few training samples.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting the Interaction Between Energy Saving Devices on Surface Ships</title>
<link href="https://hdl.handle.net/1721.1/151873" rel="alternate"/>
<author>
<name>Uzoma, Jillian</name>
</author>
<id>https://hdl.handle.net/1721.1/151873</id>
<updated>2023-08-24T03:41:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Predicting the Interaction Between Energy Saving Devices on Surface Ships
Uzoma, Jillian
Greenhouse gas reduction technology is an important area of research spanning all industries. The looming carbon neutrality deadlines are drawing closer, and change must occur in order to reduce carbon emissions and meet these goals. One of the sectors facing these deadlines is the commercial shipping industry. This research was motivated by Oldendorff Shipping Company, which aims to find the best method to reduce carbon emissions on its bulk carriers. The research effort involved collaboration between various labs across MIT's campus, each investigating different methods of reducing carbon emissions. This thesis and my contribution to the project involved investigating carbon emission reduction through the addition of drag reduction devices to bulk carriers.&#13;
&#13;
A literature review of existing energy saving devices was completed in order to understand what devices are in use today, how they work, how prevalent they are, and what drag reduction or energy saving claims are made. The sources often contained conflicting information or unfounded claims of energy savings, so this literature review also involved comparing and analyzing sources.&#13;
&#13;
Two novel energy saving devices were explored: vortex generators and a morphing bow foil. A deep dive into how each of these devices works to reduce drag was completed, and experiments were carried out in the Parson's Laboratory Towing Tank. The vortex generator designs were iterated many times to optimize their shape, size, spacing, and location. The results generally show that flow reattachment occurs and that, once scaled up to full scale, energy savings do occur.&#13;
&#13;
Finally, this thesis explored the concept of combining multiple devices at the same time. Meaningful combinations are ones that involve differing methods of drag reduction, so that the presence of both devices leads to additive savings. Three combinations were explored in depth: microbubbles with vortex generators, Grothues Spoilers with Kappel Blades, and a Becker Mewis Duct with a rudder bulb.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study of the Effects of Piston Secondary Motion on Piston Ring Conformability and Coolant Cavitation in Heavy-Duty Engines</title>
<link href="https://hdl.handle.net/1721.1/151870" rel="alternate"/>
<author>
<name>Bradt, Casey S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151870</id>
<updated>2023-08-24T03:54:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Study of the Effects of Piston Secondary Motion on Piston Ring Conformability and Coolant Cavitation in Heavy-Duty Engines
Bradt, Casey S.
The transportation sector accounts for a significant share of global greenhouse gas emissions. A few particular modes of transportation, such as shipping and long-haul trucking, are dominated by heavy-duty diesel piston engines and are more difficult to electrify or decarbonize via other alternatives. Because of this, environmental efforts for those transportation modes focus on two goals: reducing emissions and improving fuel efficiency for existing internal combustion engine technologies. The work presented here is in two parts, each related to one of these goals.&#13;
&#13;
As future designs aspire to reduce emissions, lubricating oil consumption (LOC) is a primary concern because it is a main emissions source. Bore distortion has arguably the largest effect on LOC due to its influence on piston ring-bore conformability. Thermal distortion and out-of-roundness of the cylinder caused by head bolt stresses are routinely considered in conformability analyses for ring pack designs. However, the effect of piston impact on the bore distortion for ring-liner conformability analyses has not been addressed, even though the magnitude of the piston impact distortions can be as high as those from thermal distortions. This piston impact effect was added to existing bore distortion and ring-liner conformability analysis techniques in the work documented here. The simulation workflow incorporated piston secondary motion and oil transport, transient structural finite element analysis of the cylinder, and a curved beam ring-liner conformability model. Significantly higher ring-liner clearance and higher contact were observed. As a result, higher oil leakage, wear, and combustion gas blow-by may become substantial and design adjustments may be warranted. &#13;
&#13;
Separately, in an attempt to achieve improved fuel efficiency, design efforts are often opposed by obstacles such as durability issues. One such durability issue is cavitation erosion in wet-liner engines. Cavitation erosion can cause tremendous damage and is often driven by vibrations in the liner caused by piston slap and other piston secondary motions. Piston secondary motion intensifies with many design trends aimed at increasing engine efficiency, such as elevated combustion pressures and reduced structural weight. Thus, cavitation erosion acts as a barrier to higher-efficiency designs. To prevent cavitation erosion, designers generally must find solutions based on engineering intuition paired with experiment, often sacrificing frictional performance. The work documented here developed a physics-based modeling and simulation capability to predict cavitation erosion during the design process and thereby help overcome this barrier. A piston secondary motion software with consideration of oil transport was first used to calculate piston impact pressures. These impact pressures were then mapped to a coupled structural and fluid model of the liner and water jacket in Ansys. The developed Ansys model may employ either one-way or two-way structure-fluid coupling. A preliminary parametric study was also done to investigate the influence of various piston design parameters on cavitation behavior.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanics of Seashell Growth: Examining the relationship between incompatibility, shape, and internal stress</title>
<link href="https://hdl.handle.net/1721.1/151869" rel="alternate"/>
<author>
<name>Carberry, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/151869</id>
<updated>2023-08-24T03:37:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mechanics of Seashell Growth: Examining the relationship between incompatibility, shape, and internal stress
Carberry, Dylan
Seashells are a fascinating example of surface growth in nature. As they develop, both their macroscopic form and their internal microstructure evolve, with the latter transitioning between different layer sizes and orientations during the shell's growth process. Several studies have examined the morphogenesis of seashells, some considering the kinematics of growth that lead to different eventual shapes, and others investigating the biochemical pathways of these processes. However, the role of internal mechanical stresses that may develop due to incompatibility has yet to be investigated. In this thesis, we present a framework that models shell growth continuously, with the aim of investigating the role of internal stresses in the structural changes that have been reported to occur within seashells. Considering an axisymmetric growing body and accounting for surface growth as an arbitrary sequence of additions of incompatible circular rings on its outer perimeter, we study the shape and mechanical forces that can develop throughout the shell's growth. Our findings show that incompatibility has a large impact on the shape of a shell during surface growth, especially during early stages of development. This influence may be crucial in explaining the recorded crystallographic reorientation that is typical of various seashells.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Commissioning of a Hybrid Additive Manufacturing System Combining Inkjet Deposition and Laser Powder Bed Fusion</title>
<link href="https://hdl.handle.net/1721.1/151867" rel="alternate"/>
<author>
<name>Kutschke, Zach W.</name>
</author>
<id>https://hdl.handle.net/1721.1/151867</id>
<updated>2023-08-24T03:12:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Commissioning of a Hybrid Additive Manufacturing System Combining Inkjet Deposition and Laser Powder Bed Fusion
Kutschke, Zach W.
Capabilities to combine multiple metal and/or ceramic materials in single components, and/or to achieve desired gradients in composition, will advance the performance of future propulsion and energy conversion systems. Multi-material and gradient capabilities have been demonstrated for metals in both powder bed and directed deposition additive manufacturing (AM) techniques; however, the dimensional fidelity and spatial precision of composition control is limited for several reasons. Here, the design, fabrication, and preliminary validation of a new hybrid AM system combining inkjet printing with laser powder bed fusion (LPBF) for manufacturing compositionally graded components are presented. In the hybrid inkjet-LPBF process, ink is deposited in a two-dimensional pattern to dictate compositionally modified regions prior to, or following, the spreading of each powder layer. Solids (e.g., nanoparticles) in the ink combine with the base powder to achieve locally controlled in situ alloying within the AM process. Key design considerations for the system, including thermal isolation of the inkjet system, temperature control of the build volume (up to 500°C), and atmosphere control, are discussed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Radiation Detection within the Gastrointestinal Tract</title>
<link href="https://hdl.handle.net/1721.1/151866" rel="alternate"/>
<author>
<name>McLymore, Crystan</name>
</author>
<id>https://hdl.handle.net/1721.1/151866</id>
<updated>2023-08-24T03:01:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Real-Time Radiation Detection within the Gastrointestinal Tract
McLymore, Crystan
The risk of a radiation emergency is becoming more prevalent as the misuse of nuclear facilities or technologies by terrorists or rogue nation-states continues to increase. A radiation emergency could cause instantaneous and sustained large releases of penetrating radiation, which would cause exposed individuals to suffer from acute radiation syndrome (ARS). FDA-approved medical countermeasures to combat ARS are most effective when administered as soon as possible after exposure. Current methods to prevent morbidity and mortality require access to medical support and the proper use of radiation dosimetry. This work describes radiation monitoring internal to the gastrointestinal tract, which could provide a means of alerting the individual to their surroundings or triggering a drug delivery response. &#13;
&#13;
Internal radiation monitoring also has benefits in radiation therapy applications where injury to the gastrointestinal (GI) tract remains an unavoidable side effect due to its extension over a large surface area. Current in vivo dosimetry technology is only positioned in minimally invasive areas to monitor radiation, which increases the likelihood of delivered dose discrepancies in or near the treatment area. This work overcomes this limitation by demonstrating the use of PIN diode-based ingestible electronics to monitor radiation as required throughout the gastrointestinal tract.  &#13;
&#13;
The diode was first characterized in vitro for response to X-ray and gamma radiation in temperature environments of 20°C to 40°C. Various sources were employed for characterization, including a 2525 Ci cesium source, a 2100 Ci cobalt source, a 320 kV X-ray irradiator, a linear accelerator (LINAC) with 6, 10, and 18 MV beam qualities, and a neutron beam sourced by a 5.7 MW nuclear reactor. An in vivo study was then performed in which the encapsulated diode was placed in a swine’s stomach, and 110 kVp X-ray images were captured of the swine’s abdominal region.&#13;
&#13;
The diode displayed repeatability within 3% in its detection of the tested gamma and X-ray sources. The diode also proved to be energy independent for absorbed doses less than 3.5 Gy, evidenced by the LINAC characterization. Radiation absorption in body tissue had a dominating effect on the diode output signal, as shown by comparing the in vitro to in vivo results. &#13;
&#13;
This study demonstrates successful, first-time in situ radiation detection directly from core body areas in a non-invasive manner. Real-time feedback on the received radiation dose to the GI tract allows for active monitoring of GI doses.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additively Manufacturing High-Performance, Low-Cost Electrospray Ion Sources for Point-of-Care Mass Spectrometry</title>
<link href="https://hdl.handle.net/1721.1/151864" rel="alternate"/>
<author>
<name>Kachkine, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/151864</id>
<updated>2023-08-24T03:01:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Additively Manufacturing High-Performance, Low-Cost Electrospray Ion Sources for Point-of-Care Mass Spectrometry
Kachkine, Alex
Clinical mass spectrometry relies on ionization of liquid biological samples, often via electrospray. This work broadly leverages additive manufacturing for the development of electrospray emitters, doubling signal-to-clutter ratios relative to state-of-the-art. We demonstrate low-cost integration in clinically-relevant diagnostics protocols by designing emitters into surface mount devices, the first of their kind, that can be directly soldered to printed circuit boards with built-in digital microfluidics as part of automated device assembly. The benefits in terms of scalability of this solution are coupled with advantages gained from simultaneously tuning surface hydrophilicity, solvent evaporation, and geometry. Electrospray emitter efficiency is optimized, approaching the direct field ion evaporation limit. Several materials and additive manufacturing processes to make the electrospray emitters are evaluated; comparative testing is conducted with conventional paper spray and coated blade spray. Microstructure characterization with scanning electron microscopy shows reproducible microfabrication of bulk techniques and compatibility with additive manufacturing feedstock. Geometrically and electro-fluidically optimized electrospray emitters attain 130% higher steady-state currents than state-of-the-art emitters. The devices use novel extractor electrode designs, reducing corona discharge and air breakdown, enabling operation at ~24% larger bias voltages compared to conventional cylindrical inlets. MS data is presented for ZnONW-coated emitters, detecting therapeutically relevant targets at 1 µg/ml concentrations with a variety of solvents. In the case of Nicardipine, such emitters attain 99% higher signal-to-clutter ratios versus state-of-the-art, with far greater operative stability. 
This thesis bridges the gap between additive manufacturing and high-performance electrospray for mass spectrometry, unlocking industrial development of clinically relevant, next-generation point-of-care ion sources.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Efficient Uncertainty Quantification of Turbulent Combustion Simulations via Kinetic Dimension Reduction</title>
<link href="https://hdl.handle.net/1721.1/151863" rel="alternate"/>
<author>
<name>Koenig, Benjamin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151863</id>
<updated>2023-08-24T03:25:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Efficient Uncertainty Quantification of Turbulent Combustion Simulations via Kinetic Dimension Reduction
Koenig, Benjamin C.
Propagating uncertainties in kinetic models through combustion simulations can provide important metrics on the reliability and accuracy of a model, but remains a challenging and numerically expensive problem, especially for large kinetic mechanisms and expensive turbulent combustion simulations. Various surrogate model and dimension reduction techniques have previously been applied to reduce the cost of forward uncertainty propagation in combustion simulations, but these are often limited to low-dimensional, simple combustion cases with scalar solution targets. In the current work, a neural network-accelerated framework for identifying a low-dimensional active kinetic subspace was developed that applies to the entire temperature solution space of a flamelet table and can capture the mixture fraction and strain rate dependent effects of the kinetic uncertainty. The computational savings enabled by this novel framework were demonstrated through a proof-of-concept, flamelet-based application in a Reynolds-averaged Sandia Flame D simulation using a chemical mechanism for methane combustion with 217 reactions. By leveraging the large dimensional compression and low-cost scaling of the active subspace method, offloading the initial dimension reduction gradient sampling onto the laminar flamelet simulations, and accelerating the gradient sampling process with a specifically designed neural network, it was possible to estimate the temperature uncertainty profiles across the solution space of the turbulent flame with 70-85% accuracy using just seven perturbed solutions. Additionally, as it occurs entirely within the flamelet table, the cost of identifying the reduced subspace does not scale with the cost of the turbulent combustion model, which is a promising feature of this framework for future application to larger-scale and more complex turbulent combustion applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D-Printed, Internally Fed Electrospray Thruster</title>
<link href="https://hdl.handle.net/1721.1/151862" rel="alternate"/>
<author>
<name>Kim, Hyeonseok</name>
</author>
<id>https://hdl.handle.net/1721.1/151862</id>
<updated>2023-08-24T03:27:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">3D-Printed, Internally Fed Electrospray Thruster
Kim, Hyeonseok
An electrospray thruster offers several benefits as a propulsion system for small satellites, including a lower power requirement when miniaturized and a broad range of thrust and specific impulse. However, traditionally it has been manufactured through microfabrication in a cleanroom, which is both expensive and time-consuming, and is not compatible with in-space manufacturing. Advances in 3D printing technology make it possible to create microstructures at a much lower cost than microfabrication; however, internally fed electrospray thrusters have only been fabricated in a cleanroom so far, primarily due to their high hydraulic resistance requirement. In this study, this problem was approached in two ways to 3D print the internally fed electrospray thruster. The first approach was optimizing the channel design, considering 3D printing resolution and electrospray physics. The second approach was the modification of liquid resin for 3D printing to expand the lower limit on the internal channel size. The characterization of a single-emitter device showed stable emission for multiple flow rates, with current and flow rate following the well-known scaling law of electrospray in cone-jet mode. The thrust and specific impulse estimates showed that the device performance is comparable to state-of-the-art microfabricated internally fed electrospray thrusters.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards low-cost context awareness on smart shelving using passive UHF RFID infrastructure</title>
<link href="https://hdl.handle.net/1721.1/151861" rel="alternate"/>
<author>
<name>Li, Heyi</name>
</author>
<id>https://hdl.handle.net/1721.1/151861</id>
<updated>2023-08-24T03:09:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards low-cost context awareness on smart shelving using passive UHF RFID infrastructure
Li, Heyi
An omni-channel strategy is a method of selling and promoting products that offers customers a comprehensive and cohesive shopping experience. However, this strategy relies on store managers having an accurate, real-time understanding of product availability at all their distribution and retail facilities. Smart shelving is an important avenue for furthering the development of omni-channel retailing and meeting people’s needs. This thesis primarily focuses on the construction of a low-cost context awareness infrastructure for smart shelving using passive UHF RFID tags and radio tomographic imaging (RTI) algorithms.&#13;
&#13;
Firstly, location estimations without fingerprinting in one direction can reach an accuracy of 91.7% on four tested objects. Secondly, the number of stacked layers from 1-3 when placing items on the shelf can be estimated. It is shown that an increase in product volume on the shelf could be related to tag RSSI level changes for five different tested products.&#13;
&#13;
In addition, material classification could be achieved by tag RSSI attenuations. Tests are done between three classes (metal, glass, and plastic), with three objects in each class. In the three-location tests, it is possible to clearly differentiate between the three types of materials based on the variation in tag RSSI attenuations.&#13;
&#13;
Finally, the integration of battery-free environmental sensors is accomplished by incorporating an RFID tag equipped with resistance measurement capability and a photoresistor. By measuring the resistance of the photoresistor, the designed light sensor could provide additional information (besides the tag RSSI change) about the volume of material on a shelf. Moreover, this can be done using only a single UHF RFID Gen 2 protocol.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High energy density entrainment-based catalytic micro-combustor for portable devices in extreme environmental conditions</title>
<link href="https://hdl.handle.net/1721.1/151860" rel="alternate"/>
<author>
<name>Lin, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/151860</id>
<updated>2023-08-24T03:09:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High energy density entrainment-based catalytic micro-combustor for portable devices in extreme environmental conditions
Lin, Emily
The increasing demand for low-cost, high energy density heat sources has motivated the development of compact and lightweight combustion-based devices. In this work, we first optimized the catalytic bed segmentation scheme to enhance fuel management in a mesoscale parallel-plate combustor. After contextualizing the driving parameters for combustion efficiency, we developed an energy-dense (≈236 MW/m³) entrainment-based catalytic micro-combustor for heating portable systems. The multichannel micro-combustor (coated with Pt/Al₂O₃ catalyst) leverages a copper-nichrome wire to enable quick and localized ohmic preheating durations (2-3 mins). Furthermore, we demonstrated low ignition temperature (108-125°C), which facilitates low energy consumption (~1948 J). In addition, an optimal fuel flow rate (3.09×10⁻⁸ m³/s) was determined via FEM simulations and experiments to enable fuel savings (high fuel conversion) while achieving high heat fluxes by analyzing the reaction kinetics and species transport behavior in the microchannels. Additional FEM studies were performed to optimize the heat transfer between the high thermal mass and combustor at the insulating mica sheet stack interface. Afterwards, through independent testing, we established the micro-combustor’s ability to maintain long-term autothermal combustion at a high saturation wall temperature (585°C), which was attained at short timescales to enable fast heating/cooling cyclability. The successful cyclic heating demonstration of large thermal mass additions (at least 41 times the micro-combustor’s mass), coupled with the combustor’s high energy density, shows promise for device-level implementation for a range of commercial, defense, and energy conversion applications. Finally, a combustor array was assembled and tested in an atmospheric water extractor (AWE) device in harsh environmental conditions, at temperatures ranging from 1.7°C to 43.3°C.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Internal Combustion Engine Performance using Aluminum as Fuel</title>
<link href="https://hdl.handle.net/1721.1/151859" rel="alternate"/>
<author>
<name>Pratto, Linda</name>
</author>
<id>https://hdl.handle.net/1721.1/151859</id>
<updated>2023-08-24T03:26:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Internal Combustion Engine Performance using Aluminum as Fuel
Pratto, Linda
The aluminum-water reaction has been proven as a concept for a safe, economical, and energy-dense storage mechanism for hydrogen fuel. One of the challenges facing aluminum-fuel technology is the sensitivity of hydrogen fuel cells to temperature, humidity, vibrations, and particulate contamination. This paper explores internal combustion engines as an alternative energy conversion method to hydrogen fuel cells for aluminum-fuel applications. Specifically, this paper characterizes the impact of steam on engine performance. The aluminum-water reaction is highly exothermic, resulting in a high-temperature mixture of steam and hydrogen. In a fuel cell system, additional components are required to cool and dry the hydrogen, which adds cost, weight, and complexity. On the other hand, the higher temperature and steam content do not reduce the ability of the internal combustion engine to produce work up to molar water-fuel ratios of approximately 2.5. This work documents analytical predictions and experimental results to characterize the performance impact of steam on hydrogen internal combustion engines for use with aluminum fuel. For port-fuel injected engines, the presence of steam reduces engine efficiency by about 8%, but increases the overall system efficiency by about 9%. For direct-injection engines, the presence of steam increases engine efficiency by about 9% and increases overall system efficiency by 13%.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian optimization and Cartesian-grid simulations for artificial reef design</title>
<link href="https://hdl.handle.net/1721.1/151858" rel="alternate"/>
<author>
<name>Ronglan, Edvard</name>
</author>
<id>https://hdl.handle.net/1721.1/151858</id>
<updated>2023-08-24T03:40:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bayesian optimization and Cartesian-grid simulations for artificial reef design
Ronglan, Edvard
Coastal erosion threatens communities close to the shore worldwide, and it has become a significant concern in recent years due to increased sea levels and storm frequency driven by global warming. In the search for effective methods to prevent these effects, natural coral reefs have demonstrated comparable wave energy dissipation to artificial defenses while also providing a positive influence on the ocean ecosystem. Therefore, this thesis presents an artificial reef structure with a drag coefficient that is an order of magnitude higher than that of single structures, which positively impacts the ocean ecosystem by providing shelter for marine species. Energy dissipation was maximized using Bayesian optimization in combination with Cartesian-grid simulations and towing tank experiments. To ensure the structure’s strength, ease of implementation, and biocompatibility, the reef structures were designed to be porous. Finally, the complete artificial reef was constructed and tested in a towing tank with waves to assess its energy dissipation capabilities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Experimental Conditions on Fracture Research Using 3D Printed Materials</title>
<link href="https://hdl.handle.net/1721.1/151857" rel="alternate"/>
<author>
<name>Almubarak, Majed Abdulsattar</name>
</author>
<id>https://hdl.handle.net/1721.1/151857</id>
<updated>2023-08-24T03:07:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Effects of Experimental Conditions on Fracture Research Using 3D Printed Materials
Almubarak, Majed Abdulsattar
The fracturing behavior and mechanical characterization of rocks are important for many applications in the fields of civil, mining, geothermal, and petroleum engineering. Laboratory testing of rocks plays a major role in understanding the underlying processes that occur on the larger scale and for predicting rock behavior. Fracturing research requires well-defined and consistent boundary conditions. Consequently, the testing design and setup can greatly influence the results.&#13;
&#13;
In this study, a comprehensive experimental program using an artificial material was carried out to systematically evaluate the effects of different parameters in rock testing under uniaxial compression. The parameters include post-processing curing, printing orientation, compression platen type, specimen centering, loading control method and rate, specimen size, specimen cross-sectional geometry, boundary constraints, and flaw parameters.&#13;
&#13;
The specimens were prepared using a 3D stereolithography printer utilizing clear resin material. Identical pre-existing quasi-elliptical (ovaloid-shaped) flaws were placed in the center of each specimen. The specimens were subjected to unconfined compression using a Baldwin load frame. The testing setup included a high-speed camera and a high-resolution camera for visual analysis of the fracturing processes.&#13;
&#13;
The results show that these testing conditions have a significant effect on the mechanical behavior of rocks. Post-processing curing increases the strength of the material, with longer curing times resulting in higher material strength. Different printing orientations exhibit varying strengths. Using a fixed compression platen helped reduce bulging of the material. Centering of the specimen played a critical role to avoid buckling and unequal distribution of stress. Slower displacement rates can control the energy being released once failure occurs to prevent the specimen from exploding. Larger specimens generally fail at lower stresses compared to smaller specimens. Also, the frictional end effects were investigated by comparing lubricated and non-lubricated end conditions. Very importantly, the study also identified variations in crack initiation and propagation between specimens with internal flaws and specimens with throughgoing flaws. This investigation showed that tensile wing cracks appeared in specimens with throughgoing flaws, while wing cracks with petal cracks were associated with the internal flaws. It also showed that the mechanical properties are influenced by the inclination of the flaws and established that specimens with internal flaws generally exhibit higher material strength compared to specimens with throughgoing flaws.&#13;
&#13;
The systematic analysis presented in this work sheds light on important considerations that need to be taken into account when conducting fracture research and adds knowledge to the fundamental understanding of how fractures occur in nature.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oscillating Energy Harvester for UUV Applications</title>
<link href="https://hdl.handle.net/1721.1/151856" rel="alternate"/>
<author>
<name>Stone, Lucas Kistner</name>
</author>
<id>https://hdl.handle.net/1721.1/151856</id>
<updated>2023-08-24T03:56:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Oscillating Energy Harvester for UUV Applications
Stone, Lucas Kistner
This thesis presents the design, modeling, and optimization of a novel oscillating energy harvester for use in a Bluefin-21 UUV. Real-world vessel acceleration data was used to optimize the harvester for four different potential energy profile configurations: free-floating, linear monostable, nonlinear monostable, and bistable. Active control was desired, and two strategies were explored but deemed too costly to implement. The performance of each configuration was evaluated, and it was found that the linear monostable model performed the best, although, due to detuning concerns, the free-floating configuration is expected to outperform the linear model across a range of sea state spectra. While the calculated power collection rate was insufficient for supplementing or recharging the main batteries, the harvester was found to be a promising alternative power source for an emergency location beacon, enabling continuous transmission as long as the UUV remained adrift. The findings of this thesis demonstrate the potential of oscillating energy harvesters in UUV applications and suggest avenues for further research into control strategies and experimental validation. &#13;
&#13;
Keywords: oscillating energy harvester, UUV, floating, linear, nonlinear, monostable, bistable, control strategy
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Application of Elastohydrodynamic Lubrication Model for Piston Pin</title>
<link href="https://hdl.handle.net/1721.1/151855" rel="alternate"/>
<author>
<name>Shu, Zhiyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/151855</id>
<updated>2023-08-24T03:02:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development and Application of Elastohydrodynamic Lubrication Model for Piston Pin
Shu, Zhiyuan
The piston pin, as the connection between the piston and the connecting rod, is a crucial component in the internal combustion engine. It transfers the cylinder pressure of combustion to the crankshaft and is subjected to high stress and harsh lubrication conditions. Pin seizure is a severe problem in new engine development, and coatings could be a solution to this problem. However, by advancing the knowledge of the lubrication effect and the contact patterns on the pin’s surface, it is possible to find more cost-effective methods, such as modifying the profile or adding oil grooves.

A numerical model was developed in this study to investigate the lubrication and dynamics of the piston pin, taking into account the deformation of the structures and oil cavitation. The model employs multi-body dynamics and elasto-hydrodynamic lubrication. A routine for generating and processing compliance matrices was created and improved. Additionally, a simple built-in run-in model was utilized to modify the pin bore and small end’s profile based on asperity contact pressure. In order to adapt to various oil supply situations, a method for controlling the boundary oil flow on the piston pin’s surface was also implemented.

The model was then applied to a large-bore gas engine to simulate the piston pin’s rotation and frictional forces under different operating conditions. The simulation results indicate that hydrodynamic lubrication plays a dominant role in supporting the normal load after break-in, and that the direction and angular speed of the piston pin’s rotation are closely linked to the operating conditions. The experimental results were compared to the simulation, revealing the model’s reliability and accuracy.

The second part of the thesis examines the oil supply conditions at the boundaries of the lubrication areas. A computational fluid dynamics (CFD) model was established to analyze the flow of lubricating oil in the vicinity of the pin joints, which reveals that the amount of lubricating oil supplied from different locations can vary. It was found that during high-speed reciprocating motion, lubricating oil may not be able to remain on the piston pin’s surface long enough, particularly at the top and bottom. Lubricating oil flow, contact, and friction patterns with different oil supply conditions were analyzed and compared in a heavy-duty diesel engine model.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Change Propagation in Complex Systems: Industry Processes and Perceptions</title>
<link href="https://hdl.handle.net/1721.1/151854" rel="alternate"/>
<author>
<name>Willis, Robin</name>
</author>
<id>https://hdl.handle.net/1721.1/151854</id>
<updated>2023-08-24T03:33:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design Change Propagation in Complex Systems: Industry Processes and Perceptions
Willis, Robin
Unpredicted change propagation remains a large issue for industries dealing with complex system design, despite the numerous tools and processes developed by academics and professionals to help mitigate its effects. Not all changes and change propagations are undesirable, since change is instrumental to innovation, but left unchecked, unpredicted change propagations in particular can snowball, creating significant cost increases, time delays, and quality downgrades. This study presents findings from interviews with 32 experienced technical professionals, all of whom are accustomed to working on design in complex systems. No standardized formal approach to these problems had been adopted by a majority of the organizations in this study, but several themes were consistently present. Informal communication between people on teams was the backbone of collaboration on these types of projects, even when formal change management structures were put in place. Consequently, the state of the relationships between people working on the same project could have an impact on the project’s outcome. Furthermore, an organization’s culture surrounding change can also have an impact, as it shaped how design change management activities were seen and the effect they could have on people’s careers. Time constraints added pressure to every situation and could prevent “best practices” or new processes and tools from being enacted. This study introduces (1) several potential Change Propagation Risk Factors to help shed light on which types of project circumstances have the highest influence on how much of an issue unpredicted change propagation is for a given workplace, and (2) the Project Management Pyramid, a new visualization of ties between project resources based on the Project Management Triangle, but with the added dimension of professional relationships. 
This area of research would benefit from future work that gathers a higher number of data points than the interview format supports, applies objective forms of measurement to confirm the consequences and situations described by industry professionals, or develops a reliable metric for individual contributions to system-level successes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Packaging Design for Remote Clinical Trial Operations</title>
<link href="https://hdl.handle.net/1721.1/151853" rel="alternate"/>
<author>
<name>Noh, Joyce</name>
</author>
<id>https://hdl.handle.net/1721.1/151853</id>
<updated>2023-08-24T03:26:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Packaging Design for Remote Clinical Trial Operations
Noh, Joyce
As remote clinical trials continue to revolutionize the old-fashioned clinical trial model and increase patient enrollment, it has become apparent that there is no regulatory framework in place to standardize their utilization. In traditional clinical studies, there is very little independent user control, and therefore error, as a designated clinician performs all physiological measurements on each subject in person. However, in remote clinical settings, there is the added component of collecting each participant’s physiological data and other necessary information over distance. This introduces the need for a “trial in a box,” or remote clinical trial kit. Both the participants and the facilities in charge of the kits play a role in how these kits need to be designed, manufactured, and handled. However, there is a clear lack of standardization in kit design.&#13;
&#13;
This project provides a framework for human-subjects researchers to establish their own remote clinical trial operations. This thesis, specifically, focuses on designing the clinical trial kits that could be used for the trials mentioned in this case while also detailing design decisions in order to standardize kit design for other remote clinical trials.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inequities in Air Pollution Exposure in the U.S.: An Exploration of Disparity Metrics Across Geographic and Temporal Scales</title>
<link href="https://hdl.handle.net/1721.1/151850" rel="alternate"/>
<author>
<name>Chen, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/151850</id>
<updated>2023-08-24T03:32:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inequities in Air Pollution Exposure in the U.S.: An Exploration of Disparity Metrics Across Geographic and Temporal Scales
Chen, Christina
In the United States (U.S.), exposure to ambient &#119875;&#119872;₂.₅ – fine particulate matter smaller than 2.5 micrometers in diameter– is responsible for the largest share of premature deaths associated with air pollution. Despite declines in average annual concentrations, significant disparities in &#119875;&#119872;₂.₅ exposure between racial and ethnic groups continue to persist. Existing research characterize &#119875;&#119872;₂.₅ exposure disparities across a range of different indicators, but few studies compare these metrics against one another nor do these studies explore these metrics at different geographic scales and demographic shifts over time. As policy makers begin to prioritize environmental justice concerns through the identification of disproportionately impacted communities, careful selection of indicators and metrics will be vital for ensuring that inequities are properly captured in decision making processes.&#13;
&#13;
Using population demographics from the U.S. Census and land-use regression PM₂.₅ concentration estimates from the Center for Air, Climate, and Energy Solutions (CACES), we compare calculations of absolute and relative exposure disparities at different geographic scales and under changing demographics. Further, we discuss the policy implications of our findings and provide recommendations for both regulatory and community-centered measures to address existing racial/ethnic disparities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privacy Law in Practice: Exploring Challenges to Modern Privacy Compliance</title>
<link href="https://hdl.handle.net/1721.1/151849" rel="alternate"/>
<author>
<name>Gulati-Gilbert, Sukhi</name>
</author>
<id>https://hdl.handle.net/1721.1/151849</id>
<updated>2023-08-24T03:57:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Privacy Law in Practice: Exploring Challenges to Modern Privacy Compliance
Gulati-Gilbert, Sukhi
Modern privacy legislation covers a broad data scope and introduces technically challenging data management requirements. Computer science research has emerged to resolve technical challenges, but proposed system designs could benefit from deeper understandings of user workflows. Existing qualitative work to understand privacy compliance on the ground gives both reason for optimism and alarm. There is a growing community of knowledgeable privacy professionals, but their effectiveness is hindered by organizational dynamics. We conduct 10 semi-structured interviews of privacy experts to further understand challenges faced by privacy practitioners. We find key challenges arising primarily from misaligned organizational incentives and difficulty in policy interpretation. We urge organizations to invest in and empower privacy engineers, researchers to explore different design directions, and policymakers to enable greater user recourse against corporations. We hope our work can help enable privacy respecting institutions and systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>U.S. AI Policy - A Balancing Act</title>
<link href="https://hdl.handle.net/1721.1/151848" rel="alternate"/>
<author>
<name>Hetrick, Ryan T.</name>
</author>
<id>https://hdl.handle.net/1721.1/151848</id>
<updated>2023-08-24T03:07:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">U.S. AI Policy - A Balancing Act
Hetrick, Ryan T.
Artificial intelligence policy is emerging as a critical component of strategy for the United States and for countries around the world. What type of AI policy will allow the United States to continue to lead the world in AI innovation while doing so in an ethical and responsible manner? This work compares and contrasts how the governments of 13 different countries approach innovation, regulation, government funding, and the scope of law in the field of artificial intelligence. A significant portion of this analysis evaluates the tradeoffs that come with AI policies and their effects on society. Considering these tradeoffs, the U.S. needs to keep innovation in the field of artificial intelligence its top priority, while at the same time balancing the ethical deployment of AI to protect U.S. citizens. With China on the heels of the United States in artificial intelligence capabilities, the United States needs to innovate more in foundation models, generative AI, human-machine interaction, natural language processing (NLP), computer vision, and other emerging areas of artificial intelligence.&#13;
&#13;
This thesis presents an in-depth analysis of foundation models and generative artificial intelligence, highlighting their importance and demonstrating their potential future impact. It concludes with a bill proposed to U.S. lawmakers and Congress, titled “The Artificial Intelligence Startup, Innovation, Defense, Industry, and Academia Act (AI STIDIA Act),” which lays out a strategy for the United States to drive significant innovation in the field of artificial intelligence while deploying it in an ethical and responsible manner. The United States needs to prioritize ethical innovation in artificial intelligence and cannot afford to put in place ineffective regulatory frameworks that curtail innovation. There may come a time when the technology exists to regulate artificial intelligence extensively; as of this writing, it does not. As the United States aims to generate the most innovative AI systems and create a culture that encourages the ethical deployment of AI, we should learn from past successes and failures in innovating technology. The United States needs to focus on creating AI technologies that enhance the wellbeing of U.S. citizens and people around the world.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unlocking the Potential of Hydrogen in Intermittent Electricity Systems: A Global Assessment of Levelized Cost of Hydrogen and Low Carbon Industrial Hub Profitability</title>
<link href="https://hdl.handle.net/1721.1/151847" rel="alternate"/>
<author>
<name>Liu, Qingyang</name>
</author>
<id>https://hdl.handle.net/1721.1/151847</id>
<updated>2023-08-24T03:15:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unlocking the Potential of Hydrogen in Intermittent Electricity Systems: A Global Assessment of Levelized Cost of Hydrogen and Low Carbon Industrial Hub Profitability
Liu, Qingyang
Recently, numerous countries have announced ambitious hydrogen production targets among their clean energy transition objectives, recognizing the potential of hydrogen in decarbonization. However, significant uncertainty remains regarding cost predictions for hydrogen production and the economic viability of green hydrogen-enabled industrial hubs with higher levels of intermittent renewable energy penetration. This study assesses the levelized cost of hydrogen generated through polymer electrolyte membrane electrolysis, accounting for regional variations, technology learning, energy intermittency, and policy incentives such as those provided by the Inflation Reduction Act. We also evaluate the profitability and market viability of utilizing co-located hydrogen to decarbonize aluminum and steel production in renewable-powered industrial hubs across suitable regions worldwide. To accomplish this, we develop a generalizable cost model that identifies the optimal hydrogen production capacity factor and levelized cost of hydrogen under different levels of grid electricity volatility, and construct a regional hour-by-hour prioritized dispatch model to simulate a low-carbon industrial hub primarily powered by wind and solar, supported by storage and firming. The results demonstrate that, within the regions considered, the levelized cost of hydrogen remains consistently high through 2040, but can be reduced to meet the $2/kg production cost target in the coming years through operating capacity optimization and the implementation of policy incentives. In addition, the optimal capacity leading to the lowest levelized cost of hydrogen is negatively correlated with electricity price volatility, highlighting hydrogen’s potential as a cost-effective means of absorbing fluctuations in grid electricity prices. 
Moreover, our analysis reveals that for industrial hubs, hydrogen is most economically viable when integrated with an industry in which it serves both as a material input and as a storage mechanism, as exemplified by green steel manufacturing with the hydrogen-based direct reduced iron-electric arc furnace process. Finally, an analysis of past policies, geopolitical interests, and resource exploitation associated with hydrogen in developing countries highlights additional political and social considerations in hydrogen policymaking from a global development perspective.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems-Level Analysis of Algorithmic Regulation</title>
<link href="https://hdl.handle.net/1721.1/151846" rel="alternate"/>
<author>
<name>Yew, Rui-Jie</name>
</author>
<id>https://hdl.handle.net/1721.1/151846</id>
<updated>2023-08-24T03:16:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Systems-Level Analysis of Algorithmic Regulation
Yew, Rui-Jie
Algorithmic tools are being wielded in the name of regulatory values as regulatory tools are lagging in mitigating the impacts of algorithmic systems. In this thesis, I characterize and evaluate the systemic relationship between regulation and algorithmic technologies in two parts. In Part I, I uncover the current mismatched application of laws to algorithmic systems and propose resulting implications and mitigations. In Part II, I consider regulatory design for emerging technologies that incentivizes efforts toward increasing the foreseeability of harm. &#13;
&#13;
While each chapter centers the interplay between different regulations and algorithmic technologies, the problems that are uncovered and the solutions proposed generalize to reasoning about algorithmic regulation as a whole. This analysis highlights the unexpected ways that regulations can shape incentives for algorithmic development, as well as the unexpected ways that algorithmic innovation can spark regulatory innovation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Microvoid Localization with Explicit Finite Element Analysis</title>
<link href="https://hdl.handle.net/1721.1/151845" rel="alternate"/>
<author>
<name>Snow, Brandon D.</name>
</author>
<id>https://hdl.handle.net/1721.1/151845</id>
<updated>2023-08-24T03:24:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Microvoid Localization with Explicit Finite Element Analysis
Snow, Brandon D.
Ductile fracture is characterized by the micromechanisms of void nucleation, growth, and coalescence. Numerical modeling of these micromechanical phenomena has typically been performed with an implicit finite element analysis (FEA) solver, which accurately produces quasi-static results, but introduces limitations in terms of computational expense and the ability to model highly non-linear processes. This has resulted in the majority of studies being restricted to single void representative volume elements (RVEs) under applied stress-states with moderate to large triaxialities. In this thesis, those limitations are largely overcome by solving microvoid localization problems with the explicit FEA method. A general framework for applying periodic boundary conditions and controlling the RVE-average stress-state is introduced and applied to both implicit and explicit FEA. Then, using the framework, a comparison of implicit and explicit FEA simulations of microvoid localization demonstrates that quasi-static results can be produced using the explicit FEA method with a significant reduction in computational cost. General guidelines for performing quasi-static explicit FEA simulations of RVE behavior are established that should be generally applicable to a wide range of micromechanics problems. Lastly, the explicit FEA method is applied to more complex RVEs which contain internal elastic particles. The simulations demonstrate that the presence of internal particles can accelerate microvoid localization under low triaxiality deformation, especially for low strain hardening materials. The results of the particle-containing RVE simulations have implications for precipitation strengthened alloys as well as metal matrix composites that utilize hard particles/phases to increase strength. 
The simulation capabilities introduced in this work highlight new opportunities to improve our collective understanding of ductile failure which will hopefully lead to better predictive capabilities and new materials designed to resist fracture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of 3D Architecture on Energy Dissipation during High-speed Particle Impact</title>
<link href="https://hdl.handle.net/1721.1/151844" rel="alternate"/>
<author>
<name>Butruille, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151844</id>
<updated>2023-08-24T03:17:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Effect of 3D Architecture on Energy Dissipation during High-speed Particle Impact
Butruille, Thomas
Ultralight mechanical metamaterials enabled by additive manufacturing (AM) have previously achieved density-normalized strength and stiffness properties inaccessible to monolithic materials, but the majority of this work has focused on static loading, while the mechanical properties of these metamaterials under extreme dynamic loading conditions have remained largely unexplored. Here, using supersonic microparticle impact, the impact responses of different 3D-printed microscale architectures are compared to each other and to a non-architected, mass-equivalent sample to examine the effect of architecture on material impact response. This response is analyzed in a mass-normalized context and in a dimensionless context analogous to (spatially confined) planetary impact. Ultra-high-speed imaging and post-impact scanning electron microscopy reveal qualitative differences in the energy dissipation mechanisms at play during impacts on architected versus bulk materials. Additional uniaxial compression experiments on equivalent architected samples help separate the energy dissipation components during impact. This investigation could lead to improvements in the design of lightweight materials for energy-mitigation applications such as armor and protective coatings.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of A System Dynamic Model on U.S. Regional Real Estate Industry</title>
<link href="https://hdl.handle.net/1721.1/151843" rel="alternate"/>
<author>
<name>Zhang, Tianyi</name>
</author>
<id>https://hdl.handle.net/1721.1/151843</id>
<updated>2023-08-24T03:26:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Application of A System Dynamic Model on U.S. Regional Real Estate Industry
Zhang, Tianyi
This research explores and simulates the dynamics of regional real estate markets within the United States using the system dynamics methodology. Building upon the original model by John Sterman, the study expands it with new structures for construction-in-progress, unit prices, alternative funds, and sales. The model is calibrated using historical data from 1975 to 2021, with a focus on its capacity to simulate key parameters such as start rate, construction-in-progress rate, construction rate, and price. Although the calibrated model matches historical trends reliably, it is limited in representing short-period dynamics and seasonality. Using the calibrated model, the study generates forecasts for future real estate market trends under three scenarios: baseline, standard growth, and elevated interest rate. The forecast results emphasize the influential role of space demand, the effect of interest rates on prices, and the reinforcing feedback loop of future prices. The study highlights potential avenues for model enhancement and establishes a foundation for subsequent research.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Evaluation of Social Network Sensors on Twitter During the Russia-Ukraine Conflict</title>
<link href="https://hdl.handle.net/1721.1/151840" rel="alternate"/>
<author>
<name>Ahlers, Miranda Nicolle</name>
</author>
<id>https://hdl.handle.net/1721.1/151840</id>
<updated>2023-08-24T03:41:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Empirical Evaluation of Social Network Sensors on Twitter During the Russia-Ukraine Conflict
Ahlers, Miranda Nicolle
The immense magnitude of information sharing, paired with increased privacy considerations, has rendered global monitoring of social media platforms virtually infeasible. Heuristic algorithms grounded in the friendship paradox have provided simple, accessible methods for strategic sampling of information from platforms while only requiring knowledge of the local network structure. However, it still remains unclear how well such algorithms perform in contexts where the spread of information consists of exogenous and endogenous modes of propagation.&#13;
&#13;
Herein, I evaluate the ability of randomly selected friends of random users to provide early awareness of discussions related to the Russia-Ukraine conflict on Twitter. I find that while selected sensors are more centrally located within the Twitter network, they fail to reliably provide early awareness of conflict-related hashtags. Lack of performance is exacerbated when only early adopters from each group are included in evaluations. Additionally, I find that the difference in time of adoption between control and sensor groups provides limited information about how popular a hashtag will become. Further, I propose a framework for using early participation in conflict discourse to condition the selection of sensors for future war-related trends – exploring both friendship and prior retweet connections as potential sensors. I then outline two systematic approaches for objectively quantifying the value of information acquired from selected sensor groups – a count-based approach and a predictive modeling framework. Ultimately, I find that both local and retweet sensors significantly reduce the noise of information produced by a random control group while effectively capturing over 80% of hashtags that become widely shared.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bending the ICT curve: Evaluating options to achieve 2030 sector-wide climate goals &amp; projecting new technology impacts</title>
<link href="https://hdl.handle.net/1721.1/151839" rel="alternate"/>
<author>
<name>Bell, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/151839</id>
<updated>2023-08-24T03:34:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bending the ICT curve: Evaluating options to achieve 2030 sector-wide climate goals &amp; projecting new technology impacts
Bell, Allison
The global impact of the information and communications technology (ICT) sector is both growing and changing as computing technologies continue to develop and industry leaders make more efforts towards emissions reductions. Recent work highlights the increasing importance of manufacturing emissions with regard to the total impact of computing systems, but the tradeoff space, in which decisions made to reduce emissions or energy in one part of a device lifecycle might increase emissions or energy demand in another, remains largely unexplored. We evaluate several options for global impact reduction within the ICT sector, namely within data center (server) and smartphone footprints, focusing on both the maximal potential impact of each intervention and its associated tradeoffs and limitations. We find that the ICT sector’s 2030 target of a 45% emissions reduction from 2020 levels is potentially achievable through the mechanisms proposed, including renewable energy for operation, low-carbon electricity for manufacturing, extended device lifetimes, and the harnessing of energy efficiency improvements for impact reduction. In addition, we propose a method for evaluating the total carbon footprint benefits of a new computing technology through a detailed case study of a prototypical analog accelerator device. We provide an example of underspecified estimation of scaled device manufacturing impacts, obtained through a reorganization of existing process emissions data. We then demonstrate the use of that estimate to evaluate the benefits of adopting the new technology from the perspective of total footprint reduction under varying device usage conditions. 
Both our framework for estimating global ICT sector impact reduction strategies and our framework for assessing tradeoffs associated with new computing technology adoption are intended to serve as starting points for continued discussion and to align different, often siloed, stakeholders within the computing industry towards effectively “bending the curve” of ICT sector emissions growth.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Urban Air Mobility Operations in a Corridor Network</title>
<link href="https://hdl.handle.net/1721.1/151838" rel="alternate"/>
<author>
<name>McDonald, Spencer T.</name>
</author>
<id>https://hdl.handle.net/1721.1/151838</id>
<updated>2023-08-24T03:36:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing Urban Air Mobility Operations in a Corridor Network
McDonald, Spencer T.
Electric vertical-takeoff-and-landing vehicles give rise to Urban Air Mobility (UAM) concepts. Initial UAM systems are projected to operate high volumes of flights within a capacitated corridor network. This paper develops a tractable methodology to optimize vehicle dispatching and routing in UAM networks, along with flight trajectories between origin and destination, as well as flow directionality in corridors. We formulate an integer optimization model in a time-space network that exploits a subpath structure at the flight level. We propose a column generation algorithm that decomposes vehicle dispatching decisions into a master problem and flight trajectories into a pricing problem, using a tailored backward label-setting algorithm. This methodology scales to practical instances, with up to 50 vertiports, hundreds of corridor conjunctions, and 1,000 trip requests. We develop a data-driven experimental setup capturing real-world travel demand, air traffic infrastructure, and weather patterns. Results demonstrate the benefits of the comprehensive optimization approach developed in this paper, as compared to benchmarks that do not capture flow directionality or operating capacities. This methodology identifies the bottlenecks created by legacy corridors, horizontal separation requirements, and adverse weather to inform the design of emerging UAM systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preventing Stern Tube Corrosion through Shipboard Cathodic Protection</title>
<link href="https://hdl.handle.net/1721.1/151835" rel="alternate"/>
<author>
<name>Bishop, Michael James</name>
</author>
<id>https://hdl.handle.net/1721.1/151835</id>
<updated>2023-08-24T03:26:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preventing Stern Tube Corrosion through Shipboard Cathodic Protection
Bishop, Michael James
Cathodic protection extends the life of ships and decreases the cost of maintaining a ship fleet. Ships built with less noble metals, like steel, corrode at an alarming rate and require protection. This work presents an analysis of the cathodic protection of a stern tube, a complex area to protect, and examines whether impressed current cathodic protection can help a U.S. Coast Guard Fast Response Cutter with localized corrosion. Furthermore, this work presents multiple methods for studying the effectiveness of cathodic protection, including COMSOL Multiphysics simulation, polarization experiments, and sacrificial anode wastage estimation. Lastly, nonintrusive load monitoring, with its diagnostic capabilities, provides an opportunity to advance the complicated field of corrosion protection. A nonintrusive load monitor (NILM) samples the voltage and current at the utility point and then computes real and reactive power, harmonic content, and system operating frequency. This work expands upon previous successes with NILM, namely its ability to collect high-bandwidth data to generate an automatic log of shipboard load operation. The record of energy consumption provided by a NILM gives designers in the seagoing services a continuously evolving picture of present and future power requirements, can shift some or all responsibility away from watchstanders, and can provide data for corrosion research.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detrainment and Settling of Sediment in Turbidity Currents: A Study to Inform Deep Seabed Mining</title>
<link href="https://hdl.handle.net/1721.1/151833" rel="alternate"/>
<author>
<name>Cathcart, Kelsey O'Brien</name>
</author>
<id>https://hdl.handle.net/1721.1/151833</id>
<updated>2023-08-24T03:41:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Detrainment and Settling of Sediment in Turbidity Currents: A Study to Inform Deep Seabed Mining
Cathcart, Kelsey O'Brien
Deep-sea mining for high-demand minerals has recently become a topic of global conversation concerning its potential monetary value, geopolitical cooperation, and environmental impact. The governing body of the deep ocean, the International Seabed Authority (ISA), has for several years simultaneously sought to protect the deep sea from potentially harmful practices while also seeking to approve practices for mineral extraction. Recognizing that much remains to be learned, both about the deep ocean itself and about best practices for mining it, several deep-ocean research projects have been undertaken to inform researchers, and in turn the ISA, on how to mine in a manner that leaves the smallest human footprint on this vast ecosystem. Both exploratory deep-ocean and laboratory research have made apparent that the creation, and subsequent traveling, of turbidity currents across the seabed as a result of deep-sea mining will have impacts on a scale that is not yet entirely understood. Building on decades of studies of gravity currents (both related and unrelated to deep-sea mining), the experiments in this thesis focus not on the head of the gravity current but on its tail, observing the detrainment and settling that occur after the current has been created (or released). The study of how these particles settle will inform the deep-sea mining field and the ISA about the potential environmental impact of this new practice and how best to move forward with potential deep-seabed exploitation following science-informed practices and regulations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Wind Direction Across Very Short and Short Term Time Horizons for Wind Turbine Control</title>
<link href="https://hdl.handle.net/1721.1/151831" rel="alternate"/>
<author>
<name>Fiallo Van Eenenaam, Ana Cristina</name>
</author>
<id>https://hdl.handle.net/1721.1/151831</id>
<updated>2023-08-24T03:01:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Forecasting Wind Direction Across Very Short and Short Term Time Horizons for Wind Turbine Control
Fiallo Van Eenenaam, Ana Cristina
Wind energy systems will require improved efficiency to meet future electricity demands in alignment with net-zero goals. Wind turbine yaw control systems can provide incremental improvements in power efficiency across both individual turbines and the collective wind farm. Yaw control strategies commonly apply low-pass filters to the wind directions observed by the turbine in the most recent 10 minutes to determine turbine reactions to the incident wind, yet the inability to anticipate future changes in wind direction leads to suboptimal power production. While recent literature has explored wind speed and power forecasting, methods specific to forecasting wind direction are studied less frequently. This thesis utilizes high-resolution LiDAR data provided by the Woods Hole Oceanographic Institution Air-Sea Interaction Tower to assess the predictive performance of three methods for deterministic wind direction forecasting: persistence, autoregressive moving average (ARMA), and ridge regression. The models were tested on several time horizons (ΔT) relevant to turbine control and operation, ranging from 30 seconds to 2 hours. Persistence demonstrated the highest predictive accuracy among the three models across all evaluated timescales. Generally, for ΔT &lt; 5 minutes, ARMA was the next-best performer and outperformed ridge regression; for ΔT &gt; 5 minutes, ridge regression outperformed ARMA but was still worse than persistence. Lastly, a comparison of model forecasting performance across several elevations demonstrated inconsistent results across the testing frameworks employed in this thesis, suggesting that future work should continue evaluating models’ performance across heights. 
Future work should also further develop deterministic and stochastic, data-driven strategies for wind direction forecasting across short term time horizons, as well as assess their impact on individual turbine and farm power efficiency.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Structural Design: An Algorithmic Approach to Synthesizing and Optimizing Steel Lateral Systems</title>
<link href="https://hdl.handle.net/1721.1/151830" rel="alternate"/>
<author>
<name>Hirt, Natasha K.(Natasha Karolina)</name>
</author>
<id>https://hdl.handle.net/1721.1/151830</id>
<updated>2025-12-02T19:27:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Generative Structural Design: An Algorithmic Approach to Synthesizing and Optimizing Steel Lateral Systems
Hirt, Natasha K.(Natasha Karolina)
Mitigating the immense environmental impact of the built environment is an important objective for the architecture, engineering, and construction industries. As initial decisions around layout and configuration have significant effects on the structural efficiency of buildings and are difficult to revise later in the design process, it is essential to provide designers with accurate material quantity and embodied carbon estimates at early design stages. The diversity of architectural expression and complexity of structural calculation has made it challenging to develop a tool that is sufficiently accurate, adaptive, and automated to accomplish this goal.&#13;
&#13;
This thesis presents a methodological and an analytical contribution. A novel generative structural design method is proposed, taking low-fidelity inputs, such as those that might be considered during early-stage design, and outputting a high-fidelity structural model that can be analyzed and iterated. The algorithm is tested on 233 structures drawn from wild and synthetic datasets, and a comparative analysis is performed between five lateral system typologies. The findings correspond with the literature, verifying the premium for height proposed by Khan as well as Samyn’s slenderness premium.&#13;
&#13;
The analysis demonstrates the utility of synthetic structural system design for individual building analysis and generates new knowledge about the relative efficiencies of different lateral system typologies at a range of heights. The method evaluates how computational tools, such as design space visualization and topology optimization, may be realistically integrated into generative algorithms. Finally, the rich data produced with generative structural design reveals new ways to visualize, analyze, and understand the ways in which designers’ choices affect the ultimate efficiency and environmental impact of built structures.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Optimized Perimeter Steel Bracing of Tall Buildings under Different Seismic Regions</title>
<link href="https://hdl.handle.net/1721.1/151829" rel="alternate"/>
<author>
<name>Medina, Chelsea Karina</name>
</author>
<id>https://hdl.handle.net/1721.1/151829</id>
<updated>2023-08-24T03:24:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Comparing Optimized Perimeter Steel Bracing of Tall Buildings under Different Seismic Regions
Medina, Chelsea Karina
There are challenges associated with building high-rise structures sustainably and safely, especially in seismic regions, where these structures face extreme loading conditions. One promising solution to these challenges is topology optimization, which involves determining the optimal material distribution to achieve desired performance criteria under certain constraints. However, implementing topology optimization for real-life structures under seismic design codes is challenging due to multiple nonlinear constraints, discrete variables, and high computational cost. Recently, there have been several attempts to use topology optimization for seismic design, most notably the groundbreaking research of Amory Martin in 2020, which proposed a method called the sum of modal compliances to optimize a steel lateral frame system in tall buildings for seismic design. The focus of this work is to expand upon this method, generating lateral frame systems for tall buildings from response spectra in different seismic regions rather than from an idealized design spectrum. The structural performance of the various optimized framing layouts produced was further verified through a nonlinear analysis, which indicated that they had the potential to outperform traditional bracing systems under seismic excitation. This trend was observed in multiple seismic regions across North America. This research has important implications, as the use of topology optimization in designing lateral brace frames for tall buildings under seismic excitation could help develop safer and more sustainable structures, reducing embodied carbon while maximizing construction revenue.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Phylogenetic and Deep Learning Methods to Predict Seed Dispersal Mode</title>
<link href="https://hdl.handle.net/1721.1/151828" rel="alternate"/>
<author>
<name>Xu, Haodi</name>
</author>
<id>https://hdl.handle.net/1721.1/151828</id>
<updated>2023-08-24T03:45:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Comparing Phylogenetic and Deep Learning Methods to Predict Seed Dispersal Mode
Xu, Haodi
Increasing tree cover is a promising natural climate solution for reducing atmospheric carbon amid pressing global warming. Seed dispersal is a key process in natural forest regrowth, where seeds are moved away from parent plants to establish new growth. Dispersal modes include biotic and abiotic methods, and vary depending on traits such as seed shape, size, and color. However, globally, data on seed dispersal modes of plant species is limited, hindering our understanding of the importance of wild animals in increasing tree cover and their role in carbon sequestration. The research goal of this study is to find a method to predict unknown seed dispersal modes with high accuracy by comparing a novel deep learning method with a typical phylogenetic imputation method. Here we show that the phylogenetic imputation method performed better than deep learning methods in predicting biotic seed dispersal mode. However, we also found that the deep learning methods demonstrate great potential in learning from community science photographs, despite their underperformance in this study. Furthermore, the study shows that incorporating a feature-extraction model could improve the predictions of a single CNN model, highlighting the potential for future studies to include more models for better predictions of seed dispersal modes. We anticipate that the problems and potential improvements identified in this study relating to the deep learning method will serve as a starting point for further model development to predict the seed dispersal mode of unknown species with greater accuracy. This could involve applying multiple models, incorporating phylogenetic information with deep learning models, and including additional features. Accurately understanding how different plant species are dispersed can help scientists better predict future forest dynamics and carbon storage capacity, which is critical for studying future climate change and developing effective climate change mitigation strategies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consistent Estimators for Learning to Defer to an Expert</title>
<link href="https://hdl.handle.net/1721.1/151827" rel="alternate"/>
<author>
<name>Mozannar, Hussein</name>
</author>
<id>https://hdl.handle.net/1721.1/151827</id>
<updated>2023-08-24T03:49:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Consistent Estimators for Learning to Defer to an Expert
Mozannar, Hussein
Learning algorithms are often used in conjunction with expert decision makers in practical scenarios; however, this fact is largely ignored when designing these algorithms. In this thesis, we explore how to learn predictors that can either predict or choose to defer the decision to a downstream expert. Given only samples of the expert's decisions, we give a procedure based on learning a classifier and a rejector and analyze it theoretically. Our approach is based on a novel reduction to cost-sensitive learning, where we give a consistent surrogate loss for cost-sensitive learning that generalizes the cross-entropy loss. We show the effectiveness of our approach on a variety of experimental tasks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration and Implementation of Conceptual Design Tools for Naval Warships</title>
<link href="https://hdl.handle.net/1721.1/151826" rel="alternate"/>
<author>
<name>Cathcart IV, John Harris</name>
</author>
<id>https://hdl.handle.net/1721.1/151826</id>
<updated>2023-08-24T03:59:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Integration and Implementation of Conceptual Design Tools for Naval Warships
Cathcart IV, John Harris
The Naval Construction and Engineering Program (2N) has relied on the Advanced Ship and Submarine Evaluation Tool (ASSET) as the primary tool for completing concept design projects for naval warships. ASSET is no longer supported by the U.S. Navy, and the Naval Concepts Requirements and Exploration (C&amp;RE) tool has been identified as a feasible replacement. Incorporating the C&amp;RE tool into the 2N program is part of new collaborative naval architecture research between Virginia Tech and MIT that is further supported by Naval leadership at NAVSEA, Naval Surface Warfare Center Carderock, Naval Surface Warfare Center Dahlgren, and others. The C&amp;RE tool has been converted for further use in MIT’s 2N program and is now available to all students for future warship design projects. Furthermore, a novel design tool is introduced that is capable of assisting naval architects in accurately and efficiently completing the preliminary arrangements of vital engineering and combat system components. A case study for a new medium-sized surface combatant is conducted as a validation of both the C&amp;RE tool for 2N use and the application of the preliminary arrangements tool.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, construction and testing of a pumped molten chloride circulation loop</title>
<link href="https://hdl.handle.net/1721.1/151825" rel="alternate"/>
<author>
<name>Bichnevicius, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/151825</id>
<updated>2023-08-24T03:43:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design, construction and testing of a pumped molten chloride circulation loop
Bichnevicius, Michael
Next generation concentrating solar power (CSP) with thermal energy storage is envisioned to operate with a peak temperature of 700 °C or higher, but transitioning from a stainless steel to a nickel alloy infrastructure is prohibitively expensive. In light of this challenge, the present work investigates the use of refractory materials instead of nickel alloys. This thesis presents the design, construction and testing of a laboratory-scale pumped circulation loop made of refractory materials to circulate molten chloride salt above 700 °C. The components of the loop were initially constructed from conventional refractory materials such as graphite, carbon-carbon composite, molybdenum, and alumina, though the loop was subsequently adapted to test a laboratory-scale tank and pipe made of a novel calcium hexaluminate-based castable refractory designed to resist corrosion and penetration by molten chloride salt. In addition, this thesis describes the design and operation of a convection-enhanced rotating disk corrosion test apparatus to study the corrosion behavior of refractory materials in molten chloride salt under flowing conditions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preliminary Shipboard Layout of Navy Integrated Power and Energy Corridor (NiPEC)</title>
<link href="https://hdl.handle.net/1721.1/151824" rel="alternate"/>
<author>
<name>Kruse, Matthew Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151824</id>
<updated>2023-08-24T03:49:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preliminary Shipboard Layout of Navy Integrated Power and Energy Corridor (NiPEC)
Kruse, Matthew Thomas
Naval ship systems increasingly require more electricity. The Zumwalt class destroyer was the Navy’s first modern fully electric ship. Through its integrated power system, the prime movers provide electric power to meet propulsion, ship service, offensive, and defensive systems requirements. The next generation destroyer, DDG(X), is also planned to be an electric ship. The ships of the future can thus be anticipated to employ upwards of 100 megawatts (MW) of electric power. With such a rise in electrical power comes the requirement to move electricity efficiently over compact and reliable power distribution systems.&#13;
&#13;
To increase a ship’s electrical infrastructure density, MIT is developing a new electrical power distribution structure called the Navy Integrated Power and Energy Corridor (NiPEC). The distribution cables, load centers, power panels, and power conditioners are all co-located in the NiPEC [1]. This allows electrical energy to be efficiently routed through the ship and increases electrical redundancy. Individual NiPEC sections will fit into reserve-space ship locations and may use the new Navy Integrated Power Electronics Building Block (iPEBB) to control and condition power. The NiPEC will include space to accommodate future power requirements with little refit needed to the ship or the power corridor.&#13;
&#13;
This thesis used a notional ship developed by Electric Ship Research and Development Consortium (ESRDC), past research into NiPEC electrical components, open source military specifications, and open source literature to build a power corridor concept 3D model within a single ship compartment. As this is the first 3D model concept, all components were based on existing technology to establish a benchmark of size and power conversion density. Once a single power corridor compartment was modeled, the components were duplicated throughout the notional ship. The 3D concept includes major power corridor elements with attention given to ease of construction, maintenance, and repair.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Role of Hydrogen in Decarbonizing Heavy Industry</title>
<link href="https://hdl.handle.net/1721.1/151820" rel="alternate"/>
<author>
<name>Benavides, Kali</name>
</author>
<id>https://hdl.handle.net/1721.1/151820</id>
<updated>2023-08-24T03:39:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Exploring the Role of Hydrogen in Decarbonizing Heavy Industry
Benavides, Kali
Hydrogen is increasingly being seized upon as a widespread decarbonization solution. There are a number of potential applications for hydrogen, and investments are being funneled into demonstration projects. In this thesis work I explore the economic competitiveness of hydrogen in two heavy industry applications: steelmaking and high-temperature heating. These processes rely on fossil fuels for multiple attributes, and no other low-carbon alternative fuel has all of these characteristics. I find that in all regions, low-carbon hydrogen is currently more expensive to produce than fossil fuels. High-temperature heating with hydrogen increases the cost of clinker by 58-225%, and raw glass by 16-73%. Applications of hydrogen in steelmaking increase steel costs by 24-90%. Cost ranges represent the different costs when using Blue or Green H₂. As a competing low-carbon steel production pathway, I also assessed steelmaking with CCS, which increased steelmaking costs by approximately 14%. Using the MIT Economic Projection and Policy Analysis (EPPA) model, I examined the deployment of H₂-based steelmaking and steelmaking with CCS under a deep decarbonization policy scenario. Results show that at current costs deployment is limited prior to 2050. However, if costs are reduced then these technologies can deploy rapidly (achieving up to 100% of the share of global steel production by 2050). Adoption of decarbonization technologies is regionally specific, and there can be regional advantages to deploying certain production pathways.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to Go Greene: The Complex Dynamics of the Ongoing Transition in Southwestern Pennsylvania</title>
<link href="https://hdl.handle.net/1721.1/151819" rel="alternate"/>
<author>
<name>He, Yiran</name>
</author>
<id>https://hdl.handle.net/1721.1/151819</id>
<updated>2023-08-24T03:26:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How to Go Greene: The Complex Dynamics of the Ongoing Transition in Southwestern Pennsylvania
He, Yiran
Coal mining is a huge part of the culture and identity in Greene County. Unprompted, when asked about their personal histories and backgrounds, people will mention family and ancestors who were coal miners, as a way of demonstrating their deep ties to the region. Now, the tax structure is such that much of the public infrastructure is suffering from the loss in value of coal assets.&#13;
&#13;
Those former heights of prosperity are diminishing. The oil and gas industry has not stepped up to replace the lost tax revenue or the lost jobs, nor has it provided funds for remediation of the lands it has damaged.&#13;
&#13;
The residents of Greene County are trying to forge a way to bring the community to a place where everyone who considers it home can stay, and build their home for their kids. From economic diversification, to environmental effects of the fossil industry, to workforce issues, to tax base considerations, to long-term education and planning and housing, every issue is at its core about what it means for Greene County to feel like home.&#13;
&#13;
Not everyone agrees on how to move forward. Some believe in a model where a large investment sparks other service industries to move in and build out the economic base. Others believe in a more grassroots, endogenous model, where residents build their own way out. I hear tensions between those who believe in the benefits fracking has brought to some residents through royalties, and those concerned about lack of fresh food and clean water.&#13;
&#13;
Listening to and uplifting the stories of the community members is one way well-resourced institutions can begin to meaningfully engage with and contribute to local partners, through building trust and relationships. Looking to the future, I hope MIT and others can play supportive, collaborative roles, helping to build capacity and to empower communities to chart their own development.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equity and Affordability Impacts of Building Performance Standards: A Case Study of New York City’s Local Law 97</title>
<link href="https://hdl.handle.net/1721.1/151817" rel="alternate"/>
<author>
<name>Shepard, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/151817</id>
<updated>2023-08-24T03:47:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Equity and Affordability Impacts of Building Performance Standards: A Case Study of New York City’s Local Law 97
Shepard, Allison
Building operations account for more than two-thirds of emissions in most U.S. cities. To reduce emissions from the building sector, cities and states are adopting Building Performance Standards. These laws require large buildings to comply with increasingly stringent emissions or energy limits, or otherwise pay a fine. Building Performance Standards are powerful decarbonization tools, but they may inadvertently over-burden low-income households and increase their risk of displacement. The risk is particularly acute in unsubsidized affordable housing, also known as naturally occurring affordable housing, where building owners can pass the costs of compliance or non-compliance on to tenants. In this thesis, I quantitatively and qualitatively examine the impact of New York City’s Building Performance Standard, Local Law 97, on multifamily buildings. I find that affordable housing buildings are less energy efficient than market-rate buildings and that non-compliance penalties, retrofit costs, and increased energy costs from electrification may substantially increase housing costs for tenants who are already severely rent burdened and energy burdened. To prevent low-income households from shouldering these costs, and to ensure they benefit from the other results of building decarbonization, such as health improvement and job creation, cities and states should provide financial and technical assistance, protect tenants, and incorporate flexibility for affordable housing owners to comply with Building Performance Standards.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Economic Advantage of Computer Vision Over Human Labor, and Its Market Implications</title>
<link href="https://hdl.handle.net/1721.1/151816" rel="alternate"/>
<author>
<name>Svanberg, Maja S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151816</id>
<updated>2023-08-24T03:02:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Economic Advantage of Computer Vision Over Human Labor, and Its Market Implications
Svanberg, Maja S.
With the emergence of Artificial Intelligence (A.I.), our lives and economy are undergoing a profound transformation. While there are huge benefits to be realized by the technology, we must also prepare for shifting circumstances, including changes in market dynamics and the labor market. Thus, to inform policy, we need to understand and forecast the implementation of A.I.&#13;
&#13;
Previous forecasts of A.I. proliferation have focused on the technical feasibility of replacing human labor in existing tasks. However, since the decision to deploy a technology is ultimately an economic one, I develop a framework that compares the cost of A.I. to the cost of worker compensation. As such, this approach considers not only technical feasibility, but also the economic advantage of A.I. over human labor.&#13;
&#13;
Using the framework, I examine the case of Computer Vision in the U.S. non-farm economy, drawing on previous work on the cost of Computer Vision, as well as government data on wages, tasks, and the size of firms. The results suggest that while Computer Vision can replace human labor across sectors and industries, it will only have an economic advantage over human labor in the very largest enterprises. In smaller companies, the sum of task-specific employee compensation does not exceed system development costs. Data is identified as the main driver of total Computer Vision development costs, placing incumbent firms at an advantage in the race to realize the economies of scale that Computer Vision, and A.I. in general, enable.&#13;
&#13;
Based on my findings and related work on labor markets, I argue that automation is not the only way in which the introduction of A.I. could harm workers. Increased market concentration, stemming from access to data being restricted to firms with existing operations as well as enhanced production efficiency, might cause a systemic power shift from workers to firms. I point to the facilitation of industry data-sharing as a tool for policy-makers to mitigate these effects by lowering the barriers to entry into A.I.-centric markets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Precision Stress Measurement in Thin Films for X-Ray Mirrors</title>
<link href="https://hdl.handle.net/1721.1/151815" rel="alternate"/>
<author>
<name>Whalen, Mallory</name>
</author>
<id>https://hdl.handle.net/1721.1/151815</id>
<updated>2023-08-24T03:02:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High-Precision Stress Measurement in Thin Films for X-Ray Mirrors
Whalen, Mallory
Future X-ray observatories aim to achieve sub-arcsecond angular resolution with unprecedented sensitivity. Silicon meta-shell optics technology will enable the X-ray astronomy instrumentation community to create such an observatory. The light-weighted silicon mirrors used in meta-shell optics have a low stiffness, which makes them susceptible to deformations caused by stress in their reflective coatings. Much research has been dedicated to figuring coated mirrors that have been deformed by their coatings and to creating low-stress coatings. These coatings need to be stable for decades, over the length of the observatory's mission. However, the stress stability of candidate X-ray reflective coatings has not been measured or proven to be small enough not to re-deform the mirrors after they have been corrected.&#13;
&#13;
Membrane resonance techniques have been used to study thin-film stress evolution during deposition. This technique offers superior sensitivity compared to other techniques, such as substrate curvature methods. A novel device that uses the membrane resonance technique to repeatably measure stress in thin films is described. Sources of non-repeatability are discussed and repeatability studies are performed. The results presented in this thesis suggest that the membrane resonance technique is suitable for measuring X-ray reflective coating stress stability to the minute levels required for future X-ray observatories.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Koopman-Based Reduced-Order State Observer&#13;
for Visual Localization of Robots</title>
<link href="https://hdl.handle.net/1721.1/151814" rel="alternate"/>
<author>
<name>Williams, Jadal</name>
</author>
<id>https://hdl.handle.net/1721.1/151814</id>
<updated>2023-08-24T03:53:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Koopman-Based Reduced-Order State Observer&#13;
for Visual Localization of Robots
Williams, Jadal
A reduced-order observer using Koopman lifting linearization is developed for localization of a robot guided by a vision system. The Koopman operator is a powerful method for representing nonlinear robot dynamics as a linear model in a lifted space. The Koopman approach faces two main challenges in robot localization. One is that the lifted linear system is not observable in general; standard Kalman filters and state observers cannot be applied to such unobservable systems. The other is that a large number of observables is required for accurate linearization. Here, we present 1) a new reduced-order state observer for a Koopman lifted linear model that satisfies the observability conditions, and 2) measurement of the multitude of Koopman observables by extracting many features from a camera image. These image features, used as Koopman observables, are directly measured in real time and thereby make the observability matrix of the reduced-order state observer full rank. The method is developed for a robot crane system equipped with a vision system. We can estimate the endpoint of the robot using a reduced-order state observer of a lifted linear model where 20 observables are obtained from a visual image.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Data-Driven Strategy for In-Process Quality Assurance for Additive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/151811" rel="alternate"/>
<author>
<name>Ibrahim, Mariam Elisabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/151811</id>
<updated>2023-08-24T03:23:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Developing a Data-Driven Strategy for In-Process Quality Assurance for Additive Manufacturing
Ibrahim, Mariam Elisabeth
Additive manufacturing has transformed production by introducing a digital approach to manufacturing. Modern additive machinery includes sensors that provide real-time data on environmental conditions; as a result, significantly more quantitative information is available for a manufacturing process. The applications for sensor-based data are numerous, especially when considered in tandem with information from across the entire process flow. This thesis examines the use of three main types of data in the additive process (feedstock age, environmental conditions, and furnace dynamics) to predict three specific quality outcomes (chemistry, porosity, and solid density) in medical implants at Stryker. By way of a series of predictive models, two main results are achieved for each quality test. First, input variable importance is quantified, enabling a deeper understanding of the significance of each leveraged data set in predicting quality. Second, models are designed to enable a double-digit percent reduction in testing volumes, enabling cost savings and increases in operational efficiency. Quantifying variable significance enables future work to focus on improving predictions by investing in the quality of specific data sets. More broadly, the findings serve as a proof-of-concept for the impact of leveraging modern data science in additive manufacturing. While this work focuses on a single product line, the methodology can scale. In particular, gains may be far greater in industries that have higher failure-rate tolerances, as a result of fewer issues with class imbalances in modeling.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating topology optimization codes using mesh refinement continuation</title>
<link href="https://hdl.handle.net/1721.1/151810" rel="alternate"/>
<author>
<name>Chen, Austin</name>
</author>
<id>https://hdl.handle.net/1721.1/151810</id>
<updated>2023-08-24T03:37:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerating topology optimization codes using mesh refinement continuation
Chen, Austin
A new concept for an algorithm accelerating topology optimization programs is introduced and explained in detail, which involves a continuation of increasing mesh resolutions to achieve low-compliance solutions with greatly reduced computation times. Comparisons with examples from the relevant literature show speedups of up to approximately 60% on discretizations of up to the order of 10⁶ elements for common benchmark problems. Improvements in speed can be attributed to running code on coarse meshes as a fast way to generate smart initial guesses that are reused as inputs for subsequent runs on finer meshes. A MATLAB script for the new algorithm and associated modifications to existing topology optimization code is included.&#13;
&#13;
Keywords: Topology Optimization, Mesh Refinement, Computational Efficiency, MATLAB
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking Residential Electricity Consumption: A Utility’s Machine Learning Approach to Smart Metering Data&#13;
and the European Energy Crisis</title>
<link href="https://hdl.handle.net/1721.1/151807" rel="alternate"/>
<author>
<name>Canaan, Alexa Reese</name>
</author>
<id>https://hdl.handle.net/1721.1/151807</id>
<updated>2023-08-24T03:13:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Benchmarking Residential Electricity Consumption: A Utility’s Machine Learning Approach to Smart Metering Data&#13;
and the European Energy Crisis
Canaan, Alexa Reese
The European Energy Crisis is putting increasing pressure on the global energy supply, and the residential sector, with its variable consumption patterns, is a key contributor: buildings account for 40% of global energy consumption, with residential buildings accounting for 27% [32]. We use utility smart metering data at the hourly energy consumption level and daily peak consumption level from a subset of Iberdrola’s Spanish residential customers. Critically, we develop a model for utilities hoping to analyze smart metering data effectively. We test several different clustering methods and analyze energy consumption at different levels of granularity to identify the best benchmarking practices at all levels. We hypothesize that time, weather, and household characteristics are significant factors in identifying a household's energy consumption, and that outlier observations of energy consumption highlight opportunities to conserve more energy; this novel approach, critically, does not use any personally identifiable information. We also perform residual analysis to identify households that are most sensitive to changes in temperature. This creates a strong foundation for demand-response programs with customers. As Europe heads towards a long-term energy crisis, it is crucial that utilities have a framework to follow for their analysis before performing interventions with customers. Further potential uses for this methodology at the governmental, utility, and local/individual levels are also included at the end to motivate potential case studies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Financial Value of Building Decarbonization Technology: Case Studies on New Construction and Retrofitting in the Face of Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/151806" rel="alternate"/>
<author>
<name>Valdez Echeverria, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/151806</id>
<updated>2023-08-24T03:25:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Quantifying the Financial Value of Building Decarbonization Technology: Case Studies on New Construction and Retrofitting in the Face of Uncertainty
Valdez Echeverria, Alejandro
The built environment accounts for approximately 40% of global emissions. As a result, property owners face increasing pressure from regulators, investors, and tenants to reduce greenhouse gas emissions. However, building decarbonization requires costly investments that may or may not be recouped over a multi-decade horizon. The quantification of these financial returns is complicated by uncertainty in capital expenses, energy cost savings, emission regulations, and real estate market conditions. Against this background, building developers need a method to quantify the financial value of decarbonization under a variety of future uncertainties. This thesis develops an integrated framework that combines building energy modelling with real estate investment analysis to assess the energy-saving and financial impacts associated with the adoption of decarbonizing technologies. To incorporate future uncertainties, the framework employs Monte Carlo techniques to simulate 10,000 different future scenarios of energy prices, real estate market conditions, energy performance, regulatory environments, and grid decarbonization rates (which affect the emissions of a building). We apply this framework in two case studies: (1) the new construction of an office building in NYC, and (2) an energy retrofit of an existing multifamily building in New Jersey. In the first case study, our simulations indicate that, in approximately 76 percent of scenarios, the most profitable decision for the building owner is to adopt a natural gas-powered heating system. However, a design that gives the building the flexibility to fully electrify at a later date is more profitable than a natural gas-heated building in 99 percent of scenarios. In the second case study, we evaluate 64 retrofit packages and present a list of the top 30 retrofit solutions that maximize NPV, energy use reduction, and carbon emission reductions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some tools for event frequency decomposition and heterogeneous transfer line analysis</title>
<link href="https://hdl.handle.net/1721.1/151790" rel="alternate"/>
<author>
<name>Giancola, Augusto Rafael.</name>
</author>
<id>https://hdl.handle.net/1721.1/151790</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1995-01-01T00:00:00Z</published>
<summary type="text">Some tools for event frequency decomposition and heterogeneous transfer line analysis
Giancola, Augusto Rafael.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1995; Includes bibliographical references (leaf 137).
</summary>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Membrane materials for a nonthrombogenic blood oxygenator,</title>
<link href="https://hdl.handle.net/1721.1/151786" rel="alternate"/>
<author>
<name>Weathersby, Paul Kirby.</name>
</author>
<id>https://hdl.handle.net/1721.1/151786</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Membrane materials for a nonthrombogenic blood oxygenator,
Weathersby, Paul Kirby.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1972; Bibliography: leaves 66-72.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the generalized Boltzmann equation.</title>
<link href="https://hdl.handle.net/1721.1/151782" rel="alternate"/>
<author>
<name>Wei, Thomas Ying Chung.</name>
</author>
<id>https://hdl.handle.net/1721.1/151782</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Analysis of the generalized Boltzmann equation.
Wei, Thomas Ying Chung.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of exhaust gas recirculation on exhaust nitric oxide concentrations, cycle-to-cycle variations, and flame speed in an S.I. engine.</title>
<link href="https://hdl.handle.net/1721.1/151781" rel="alternate"/>
<author>
<name>Komiyama, Kunihiko.</name>
</author>
<id>https://hdl.handle.net/1721.1/151781</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Effects of exhaust gas recirculation on exhaust nitric oxide concentrations, cycle-to-cycle variations, and flame speed in an S.I. engine.
Komiyama, Kunihiko.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of the merged transistor logic gate.</title>
<link href="https://hdl.handle.net/1721.1/151780" rel="alternate"/>
<author>
<name>Kling, Gary William.</name>
</author>
<id>https://hdl.handle.net/1721.1/151780</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Optimization of the merged transistor logic gate.
Kling, Gary William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility study of a satellite-to-satellite data relay network.</title>
<link href="https://hdl.handle.net/1721.1/151779" rel="alternate"/>
<author>
<name>Eastwood, Lester Francis.</name>
</author>
<id>https://hdl.handle.net/1721.1/151779</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">Feasibility study of a satellite-to-satellite data relay network.
Eastwood, Lester Francis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1970; Bibliography: leaf 76.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance evaluation feedback in the United States Marine Corps, a critical analysis.</title>
<link href="https://hdl.handle.net/1721.1/151778" rel="alternate"/>
<author>
<name>Knowles, Robert Clement.</name>
</author>
<id>https://hdl.handle.net/1721.1/151778</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Performance evaluation feedback in the United States Marine Corps, a critical analysis.
Knowles, Robert Clement.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Correlated malfunctions in redundant systems.</title>
<link href="https://hdl.handle.net/1721.1/151742" rel="alternate"/>
<author>
<name>Weinstein, William Winiker.</name>
</author>
<id>https://hdl.handle.net/1721.1/151742</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Correlated malfunctions in redundant systems.
Weinstein, William Winiker.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Bibliography: leaves 88-89.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An objective measure for the loudness of impulsive noises.</title>
<link href="https://hdl.handle.net/1721.1/151737" rel="alternate"/>
<author>
<name>Weekly, Gordon David.</name>
</author>
<id>https://hdl.handle.net/1721.1/151737</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">An objective measure for the loudness of impulsive noises.
Weekly, Gordon David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Bibliography: leaves 96-97.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mixed international joint ventures in the exploration, development and production of petroleum.</title>
<link href="https://hdl.handle.net/1721.1/151736" rel="alternate"/>
<author>
<name>Warner, Eldon Irwin Gerard.</name>
</author>
<id>https://hdl.handle.net/1721.1/151736</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Mixed international joint ventures in the exploration, development and production of petroleum.
Warner, Eldon Irwin Gerard.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Lacking leaf 105. Leaf 7.1 inserted between Leaves 7 and 8.; Bibliography: leaves 79-82.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Altered retinal connections following partial tectum lesions in neonate hamsters.</title>
<link href="https://hdl.handle.net/1721.1/151735" rel="alternate"/>
<author>
<name>Jhaveri, Sonal Ramniklal.</name>
</author>
<id>https://hdl.handle.net/1721.1/151735</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Altered retinal connections following partial tectum lesions in neonate hamsters.
Jhaveri, Sonal Ramniklal.
Thesis: M.S., Massachusetts Institute of Technology, Department of Psychology, 1973; Vita.; Bibliography: leaves 67-73.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Respiratory Time Series Data for Breathing Discomfort Detection Prior to Sleep Onset During APAP Therapy</title>
<link href="https://hdl.handle.net/1721.1/151711" rel="alternate"/>
<author>
<name>Unger, Shelby</name>
</author>
<id>https://hdl.handle.net/1721.1/151711</id>
<updated>2023-08-01T03:39:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis of Respiratory Time Series Data for Breathing Discomfort Detection Prior to Sleep Onset During APAP Therapy
Unger, Shelby
Discomfort during treatment continues to be a major barrier to adherence to positive airway pressure (PAP) therapy. Thus, a key pillar of ResMed’s business strategy is to deliver intelligent tools that assist healthcare providers in identifying which patients may be struggling with therapy, and why, to enable more effective interventions and personalized patient education. One potential cause of discomfort is perceived stuffiness from pressure levels that are lower than some patients can tolerate. This thesis explores which patterns in the high-resolution breathing data from ResMed devices may be used to identify patients who are experiencing breathing discomfort at low pressures at the beginning of their therapy sessions. Specifically, time-series clustering is performed on sequential respiratory data to identify groups of patients with similar breathing patterns. The independence between clusters and variables pertaining to patients’ demographic characteristics, therapy settings, usage habits, respiratory characteristics, and self-reported comfort levels is evaluated via statistical testing. Based on the results, features in breathing data are identified that may be meaningful indicators of whether a patient is experiencing discomfort or breathlessness. Additionally, opportunities for additional data collection that would enable further analysis and more accurate modelling are discussed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Industry 4.0 in Biomanufacturing: Predictive Real-Time Models Using Process Analytical Technology</title>
<link href="https://hdl.handle.net/1721.1/151710" rel="alternate"/>
<author>
<name>Murr, Michaela</name>
</author>
<id>https://hdl.handle.net/1721.1/151710</id>
<updated>2023-08-01T04:13:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Industry 4.0 in Biomanufacturing: Predictive Real-Time Models Using Process Analytical Technology
Murr, Michaela
In biomanufacturing, process analytical technology (PAT) has become an essential tool for improving product quality, reducing costs, and increasing efficiency. In this thesis, we collect capacitance and optical density data from two in-line sensors in production bioreactors to compute real-time readings of viable cell density (VCD) and viability, two critical metrics that drive product quality and batch yield. Comparing predictions with manually collected samples, a Gaussian process regressor with a Matérn kernel (ν = 0.5) is found to be optimal, achieving a MAPE of 7.46%, well within the 10% error threshold defined by Amgen process development scientists. We then utilize this VCD model in conjunction with the optical density probe, which measures total cell density (TCD), in a novel way to obtain real-time measurements of viability within 5% of offline measurements conducted using a cell counter. Our results demonstrate the effectiveness of using real-time sensor data and ML models for monitoring critical quality attributes in biomanufacturing. This will enable an estimated $2M per year in savings from avoidable product losses in Amgen’s new manufacturing plant in North Carolina and an approximately 50% reduction in manual sampling efforts, and it offers further process improvement opportunities, particularly for advanced process control. This use case demonstrates the potential of PAT for improving biomanufacturing processes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-Based Technology Roadmapping of Sustainable Aviation Technologies</title>
<link href="https://hdl.handle.net/1721.1/151709" rel="alternate"/>
<author>
<name>Liu, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/151709</id>
<updated>2023-08-01T04:10:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Model-Based Technology Roadmapping of Sustainable Aviation Technologies
Liu, Lisa
The development of sustainable aviation products, and of any products with long-term development cycles, requires the projection of the future performance of nascent technologies. Technology roadmapping is a maturing field used by many firms to develop strategy and projects aimed at meeting certain market needs. However, tying together technology improvement rates of key figures of merit with the performance of a system that includes said technology is challenging. Data on technology improvement rates are time-consuming to collect and difficult to decompose into improvements of specific aspects of a technology. Here, using low-temperature polymer electrolyte membrane hydrogen fuel cells as a pilot technology, we demonstrate a method for evaluating technology improvement that utilizes improvement rates mined from patent data in a first-principles model. We demonstrate the ability to predict from this bottom-up approach a technology improvement rate similar to those yielded by other methods in the literature, and the ability to pinpoint the system parameters that will drive improvement. We discuss the drawbacks of using fuel cells in an aircraft and the organizational considerations required to adopt a broad shift in technology roadmapping approaches in a large firm. This approach reduces the time needed to gather technology improvement rate data from weeks to minutes and links the sensitivity of system performance to improvement rate. In combination with existing approaches that evaluate technology improvement from a top-down perspective, this method can provide insights into which parameters of a system have the most potential to improve and meaningfully impact performance. For sustainable aviation technologies, a top-down approach has yielded insight and a narrowing of potential pathways toward achieving net zero by 2050.
Adding this type of approach to analyses can provide insight for predicting whether technologies may actually achieve performance goals set by the industry and what investments may make the most impact in improving performance. As sustainable aviation gains momentum, quantitative tools such as this one will be needed to make strategic technology investment decisions that will change the next generation of aircraft.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Actionable Maintenance Analytics with Ontology-driven Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/151708" rel="alternate"/>
<author>
<name>Pascualy, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/151708</id>
<updated>2023-08-01T04:16:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Actionable Maintenance Analytics with Ontology-driven Natural Language Processing
Pascualy, Gabriel
Asset maintenance is a crucial aspect of biomanufacturing operations, with significant associated costs. To mitigate these expenses, companies are increasingly adopting predictive maintenance approaches that leverage data to identify potential equipment issues before they arise. Maintenance work orders (MWOs) are essential tools for planning, scheduling, and tracking maintenance tasks. Although MWOs contain explicit ontologies with structured fields for information like costs, identifiers, and dates, valuable maintenance details are often stored in unstructured text fields. These fields include work descriptions, troubleshooting information, and completed work summaries. While the unstructured text generally contains an implicit ontology, such as a shared vocabulary and references to externally documented system hierarchies, automatically extracting insights is currently infeasible, necessitating manual analysis by engineers—a process that is not cost-effective at scale.&#13;
&#13;
This project aims to develop an ontology that integrates the explicit and implicit maintenance ontologies found at Amgen, as well as a natural language processing (NLP) tool for reconstructing this unified ontology from MWOs' structured and unstructured fields. By utilizing this unified ontology and NLP tool, we seek to explore the limitations of a post-processing NLP solution and pinpoint future research areas with the potential to enhance downstream analytics.&#13;
&#13;
For each MWO, our tool identifies the maintained assets and their corresponding components, classifies the rationale behind the work order generation, and assigns the problem, cause, and remedy associated with each component.&#13;
&#13;
Tested on a sample of 50 manually labeled MWOs covering maintenance of 188 assets, our tool achieved the following F1 scores across each category: 0.83 for failed assets, 0.46 for rationale, 0.62 for failed components, 0.58 for problems, 0.76 for causes, and 0.62 for remedies. The low F1 scores in some categories can be attributed to the missing context that is normally inferred by a reliability engineer during manual analysis. Despite these limitations, we provided recommendations for extending the explicit ontology to enhance performance and identified maintenance documentation enhancements as a potential area of future research.&#13;
&#13;
Furthermore, we employed the tool to examine 1000 leak records, showcasing the significance of a unified ontology in generating accurate baselines. Overall, our findings suggest that the analytics generated by this tool could substantially reduce the time engineers spend analyzing work order data, offering unprecedented insights into plant maintenance operations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Outside Inside, Inside Around: Leveraging External Innovation Through Strategic Investment</title>
<link href="https://hdl.handle.net/1721.1/151707" rel="alternate"/>
<author>
<name>Kramer, Jomi S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151707</id>
<updated>2023-08-01T03:41:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Outside Inside, Inside Around: Leveraging External Innovation Through Strategic Investment
Kramer, Jomi S.
Because continuous improvement and company growth are fundamental to future business success, companies constantly look for ways to innovate. To succeed, a large company must both augment its core business and create industry innovation. However, decades of literature and company performance reviews have demonstrated that large companies often struggle to develop innovative products or to foresee disruptive technology and capture the market quickly and nimbly.&#13;
&#13;
This thesis examines how a large company can effectively leverage external innovation for internal success. In an effort to stimulate the ideation process and internalization of bold innovations, a company can implement a Corporate Venture Capital (CVC) Team to evaluate, transition, and develop external innovation for internal company growth. A successful CVC understands the needs of the parent company, captures external venture capital (VC) opportunities, and facilitates the transition of new technology to support core business growth and develop industry innovation. By investigating the pathways through which innovation ideas evolve from a concept to fully integrated products, it is apparent that each method has its own merits and challenges. However, with a sound operating strategy, a large company can leverage the strengths of strategic investments. A CVC with an established and scalable process can facilitate the exploration and implementation of external innovation. Furthermore, the revelation that a CVC is essentially an internal sales team geared towards internal stakeholders provides a new framework for CVC teams to effectively engage internal stakeholders and portfolio companies to capitalize on external innovation for mutually beneficial growth opportunities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Growth Through Sales and Operations Planning, Inventory Management, and Supply Chain Expansion</title>
<link href="https://hdl.handle.net/1721.1/151706" rel="alternate"/>
<author>
<name>Cass, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/151706</id>
<updated>2023-08-01T04:23:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Driving Growth Through Sales and Operations Planning, Inventory Management, and Supply Chain Expansion
Cass, Gregory
Exponential sales growth creates unique challenges for manufacturing companies. An increasing backlog directly increases lead times unless production capacity can be scaled at the same rate. Lead times greater than the industry average threaten future growth as customers begin choosing substitutes that can arrive sooner.&#13;
&#13;
This research explores the hypothesis that production throughput can be increased through improved business processes. The investigation uses ShopSabre CNC as a case study to explore techniques for developing new processes, such as sales and operations planning (S&amp;OP), inventory modeling, and supply chain capacity, flexibility, and resilience analyses. The research effectiveness is measured by the observed changes in throughput and financial metrics.&#13;
&#13;
The research illustrates that all the investigated techniques contributed materially to either increasing production throughput or improving financial outcomes. The primary source of the observed impacts was improved stakeholder alignment, where misalignment had previously been limiting growth. The recommendations focus on improving quality and integrating disciplined validation into an established culture that values urgent adaptation. Inventory model assumptions and backlog dynamics were identified as promising follow-on research opportunities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Her Playing Eye: Courtesans at Chess in the Book of Games (c. 1283/84)</title>
<link href="https://hdl.handle.net/1721.1/151705" rel="alternate"/>
<author>
<name>Nansi, Khushi</name>
</author>
<id>https://hdl.handle.net/1721.1/151705</id>
<updated>2023-08-01T04:10:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Her Playing Eye: Courtesans at Chess in the Book of Games (c. 1283/84)
Nansi, Khushi
In the medieval educational codex the Book of Games: Chess, Dice, and Tables (Libro de los Juegos), completed in Seville in 1283, there is more than meets the eye. The codex features some hundred chess problems and another hundred board and dice games, each accompanied by miniatures depicting games at play, in which players sit across the board from each other. Enclosed, frozen on the frame in the illustrations, men sit against women, kings and queens, women and women, courtesans and knights, monks and men, nuns and children, young and old—a range of players of different faiths and backgrounds.&#13;
&#13;
This thesis examines the covert complexity of women’s relationships in the thirteenth-century Castilian court of Alfonso X, el Sabio (1221-84), through their representations at games of chess. Chess in the medieval imaginary was a game not only strategic, but one also laden with sexual connotations. It mirrored the site of battle and the court—the composite of a series of moves—it replicated the advance of courtship and seduced the mind for the forfeit of a hand. Medieval epics and material culture visualize this phenomenon: when a man and a woman are represented at chess, it is read as a game between lovers. In the Book of Games, what is going on between women—for whom the archive is always limited and fragmentary—what have our eyes missed? To explore this question, this thesis represents a necessary exercise in speculation. It begins with a review of the state of the discussion on the manuscript in question, delving into the various threads of movement encapsulated within, to query the notion of autonomy in making. Through a close reading of key illustrations bearing a trace of personal reception, it probes the central methodological question of seeking to see, theorizing gaze and nazar in sites of potential encounter. Understanding the encounter, and alternate forms of intimacy made possible through play, I observe the women looking at each other over the chessboard in a moment of mutual regard. This thesis argues that the Book of Games possesses an already existing unseen complexity—perhaps queer or perhaps questioning—lying latent, that we must learn to seek to see, looking otherwise.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Min Bʻīd la-Bʻīd (from Far to Far): On Homemaking under Diasporic Conditions</title>
<link href="https://hdl.handle.net/1721.1/151704" rel="alternate"/>
<author>
<name>BuGhanem, Luna</name>
</author>
<id>https://hdl.handle.net/1721.1/151704</id>
<updated>2023-08-17T04:11:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Min Bʻīd la-Bʻīd (from Far to Far): On Homemaking under Diasporic Conditions
BuGhanem, Luna
In the villages of Mount-Lebanon, homes are realized through years of immigrants’ exchanged remittances, messages, objects, and visits. This thesis offers an expanded understanding of the homes and homemaking of diasporic families, through which they manage separation and fragmentation and adapt to personal and regional political change.&#13;
&#13;
To reveal how diasporic subjects build and make their homes while and from abroad, or back and forth between locales, I draw on my conversations with owners of remittance-funded houses in ’Aley and Shūf. In reconstructed videos of the in-progress homes, first-hand accounts, concurrent global events, and the material traces of migration are juxtaposed, making the relationships between distance, its mediations, and the built form apparent.&#13;
&#13;
As a result, several signature architectural concepts are re-imagined. In the first chapter, “site” is no longer understood to simply be location, where the building is bound by coordinates or where owners have to be physically present; site, as captured through WhatsApp images, is dispersed, becoming instead the recorded change that occurs throughout the conception and construction of their homes. Each following chapter similarly re-imagines and expands our use of architectural concepts such as “budget,” “program,” “phases,” “finishes,” “furniture, fixtures,” and “contracts” to suggest how we may appropriate this new understanding as a design tool.&#13;
&#13;
Ultimately, this thesis establishes the human experience of immigrant-builders as not ancillary but central to the discipline of architecture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-based Estimates of Structural Material Quantities for Urban-level Embodied Carbon Assessment in Buildings</title>
<link href="https://hdl.handle.net/1721.1/151703" rel="alternate"/>
<author>
<name>Sory, Leïlah Yadia Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/151703</id>
<updated>2023-08-01T03:02:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Physics-based Estimates of Structural Material Quantities for Urban-level Embodied Carbon Assessment in Buildings
Sory, Leïlah Yadia Kelly
Decarbonizing the built environment requires immediate action to meet global climate targets. World population growth and rapid urbanization add to the urgency of this challenge. In fact, buildings account for about 40% of all energy use and carbon emissions, from operations as well as from materials’ production and construction processes. More specifically, buildings’ structural systems are responsible for a significant share of the upfront embodied carbon emissions before construction. Most LCA tools focus on fully detailed material takeoffs from high-resolution Building Information Models (BIM) and are therefore incomplete during conceptual design. Moreover, urban building energy modeling (UBEM) is a proven technique that allows cities to evaluate technology pathways to achieve their net-zero emissions goals. It involves simplified building archetypes to estimate operational energy on a large scale with reasonable accuracy. However, little attention has been paid to urban-level embodied carbon assessment.&#13;
&#13;
Therefore, this thesis investigates the potential of implementing physics-based structural quantities estimation in early-stage design for embodied carbon quantification at the urban scale. This approach combines bottom-up engineering calculations with data-driven surrogate modeling to automatically predict embodied carbon from a high-fidelity model. Finally, structural parameters are defined into energy model archetypes to deploy this method into an existing urban scale modeling tool. The feasibility of the proposed methodology is assessed through case studies to estimate embodied carbon and energy use intensities at the individual-building and urban scales. Results show the benefits of spatially mapping the distribution of embodied and operational carbon in the building stock and obtaining more nuanced estimates of carbon emissions compared with existing benchmarking studies. The primary use case of this work is to better inform planning and policy decision-making for retrofitting strategies and future building design.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-Alimentaciones-Cruzadas: Procesos de Re-imaginación entre Epistemologías Acústicas/Cross-Feedback: Re-Imagining processes between Acoustemologies</title>
<link href="https://hdl.handle.net/1721.1/151702" rel="alternate"/>
<author>
<name>García Belmont, Cristóbal Herman</name>
</author>
<id>https://hdl.handle.net/1721.1/151702</id>
<updated>2023-08-01T03:20:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Re-Alimentaciones-Cruzadas: Procesos de Re-imaginación entre Epistemologías Acústicas/Cross-Feedback: Re-Imagining processes between Acoustemologies
García Belmont, Cristóbal Herman
This compilation of texts actively revisits and develops poetical mythologies through notions of reverberation and feedback to transcend juxtapositions. Re-imagining two mythological snakes, the Amphisbaena and the Ouroboros, the reader is invited to revalue their placement in intricate post-colonial realities by imagining metaphysical circuits connecting different spaces and times. The work presents a series of contrasting essays reflecting the mouths and body of the Amphisbaena, the two-headed snake. The first essay focuses on the littoral region of Perú during the transition from a viceroyalty into a republican state. Here we find a shapeshifting musical tradition engaging with percussive idiophonic—or self-resonating—instruments. Traditions get confined into the visual realm to capture a new multicultural identity. Lost in transduction, how, via sound, does one create circuits which transform bodies, space, and time through reverberation? The second essay narrates the history of acoustic feedback as the birth of an ouroboric, self-eating cycle. By deconstructing/reconstructing a series of artworks, the text becomes a tale of metamorphosis of the ouroboros, getting into notions of active practice, later into an amphisbaena regarding notions of material resonance, and finally into concrete poetry in the processes of analyzing the development of consciousness around the artwork. The work ties two sonic practices from different times, cultures, and locations by building sculptural self-resonators activated by auditory feedback. The repercussions of transforming spatial configurations via sound prove that sonic practices can alter how we approach our nomadic circulations. The sound created by contrasts blurs the borders of sense and perception, giving us a space for subjective interpretation and leading to new imaginaries.
This work reflects sonic feedback—dichotomous diaphragms, some idiophonic, some membranous, that have learned via the artwork to resonate together, accentuating relations, creating circuits, and shortening distances that once seemed far away.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving the temporal consistency of satellite-based contrail detections using ensemble Kalman filtering</title>
<link href="https://hdl.handle.net/1721.1/151697" rel="alternate"/>
<author>
<name>Robion, Louis A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151697</id>
<updated>2023-08-01T03:10:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Improving the temporal consistency of satellite-based contrail detections using ensemble Kalman filtering
Robion, Louis A.
Condensation trails or contrails are line-shaped ice clouds which can form behind aircraft, and current estimates indicate that they account for the majority of aviation’s climate impacts. While contrail models exist to estimate these effects, a lack of experimental or observational data makes them difficult to validate.&#13;
&#13;
This thesis develops a method for retrieving large scale temporally consistent observations of contrails using satellite imagery. Having a consistent history of detections of an individual contrail is necessary to accurately derive observational constraints on contrail properties such as lifetime. Inconsistencies not only reduce the quality of such a dataset, but risk introducing biases in the computed properties.&#13;
&#13;
We use an existing deep-learning-based contrail detector that currently exhibits temporal inconsistencies, making tracking challenging. We address this issue by post-processing the model’s outputs with an ensemble Kalman filter. We create a hand-labeled dataset of 73 contrails tracked over a 2-hour time series, which we use to quantify performance. We find that by adding temporal correlations, we are able to recover 53.25% of contrail pixels on an image, and that 53.25% of the pixels predicted as contrail by the detection framework are indeed contrail pixels. For individual contrail tracks, we find that after filtering, we increase the average duration of consecutive consistent contrail detections from 9.4 minutes to 25.7 minutes. On average, the duration of these consistent contrail detections after filtering represents 43.7% of a contrail’s total lifetime, compared to only 15.5% for the baseline. We also find that the high-frequency Fourier components of the signal, which are responsible for flickering and noise, are reduced by 50% in magnitude.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed estimation algorithms for autonomous&#13;
systems</title>
<link href="https://hdl.handle.net/1721.1/151695" rel="alternate"/>
<author>
<name>Oneci, Codrin P.</name>
</author>
<id>https://hdl.handle.net/1721.1/151695</id>
<updated>2023-08-01T03:56:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Distributed estimation algorithms for autonomous&#13;
systems
Oneci, Codrin P.
This thesis investigates the theory and applications of distributed estimation algorithms. It is shown that, for specific objective functions, general meshes of distributed agents can estimate a state while maintaining a consensus over its PDF and simultaneously satisfying communication/localization constraints. Hyperparameter tuning techniques for multiple algorithms are described that prevent RMSE drift and inefficient use of the communication network. An example application of a distributed algorithm on rovers demonstrates the power of such algorithms in robotics.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scenario Analysis of Profitability through Simulation of Different Business Contract Models</title>
<link href="https://hdl.handle.net/1721.1/151693" rel="alternate"/>
<author>
<name>Heintz, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/151693</id>
<updated>2023-08-01T04:03:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Scenario Analysis of Profitability through Simulation of Different Business Contract Models
Heintz, Lauren
Many American manufacturing companies have faced supply chain disruption and inflation on sourced goods, freight, and labor. Coupled with the growth of online retail and direct-to-consumer shipping trends, many businesses have had to rethink strategic partnerships and distribution models. These factors have incentivized the adult incontinence manufacturer "IncoMan" to seek out strategic partnerships with other businesses to reduce costs. The reimbursed healthcare market specifically has seen a decline in profitability. State-mandated reimbursement rates for products are inconsistent across the country, but have been consistently declining. Insurance agencies acting in the middle have further eroded margins. To continue to provide these necessary medical products, this incontinence manufacturer and distributor explores contract options with other business partners to leverage both companies’ strengths and maximize profitability in this market. This specific application of financial modeling and scenario analysis helps quantify the risk between two different possible contract models: a distributor model and a service model. Furthermore, it takes into account the uncertainty in demand parameters via a quasi-Monte Carlo simulator. The result is a set of visualizations that can be used to analyze both models under both deterministic and stochastic scenarios. The most influential factors in profitability stem from the state-mandated reimbursement price and the insurance agency contracts. Further, customer revenue-per-order and labor cost-to-serve each customer strongly impact profitability in both models. Of the two contract models simulated, the distributor model is riskier than the service model, but the service model lacks growth potential. The simulator can be reused and customized to different ranges of data and inputs, depending on the customer engagement.
Ultimately, the goal is to provide business leaders with a snapshot of the first-order factors in any new contract agreement.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Design for Emotional Intelligence: Leveraging Affective Computing in Medical Education for Improved Care for Substance Use Disorders</title>
<link href="https://hdl.handle.net/1721.1/151692" rel="alternate"/>
<author>
<name>Daulbayeva, Aidana</name>
</author>
<id>https://hdl.handle.net/1721.1/151692</id>
<updated>2023-08-01T03:08:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Behavioral Design for Emotional Intelligence: Leveraging Affective Computing in Medical Education for Improved Care for Substance Use Disorders
Daulbayeva, Aidana
The rise in opioid use has led to a significant increase in overdose deaths since 1999. Negative attitudes and stigma from doctors towards opioid patients can exacerbate the situation, resulting in undertreatment, poor communication, and labeling. Stigma can also be expressed through one's affective states, where facial expressions may unintentionally convey negative emotions or judgments.&#13;
&#13;
To address this issue, this thesis aims to introduce medical trainees to Medship, an affective computing tool that promotes self-reflection about one’s facial expression and raises awareness about stigma, while also filling the gap in medical training. The focus is on changing human behavior without triggering the backfire effect on busy physicians. This will be accomplished by combining theories of behavioral design and using affective computing as a backbone for creating the app.&#13;
&#13;
The Medship project is a joint effort between the Affective Computing group at the MIT Media Lab and Cornell Weill Medicine, with funding support from the Foundation for Opioid Response Efforts. The ultimate goal is to integrate this project into the medical student curriculum and eventually improve the quality of care for substance use disorder patients.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Bench to Bucks: An Approach and Case Study in Scaling Additive Research and Development Technologies within the Aerospace Industry</title>
<link href="https://hdl.handle.net/1721.1/151690" rel="alternate"/>
<author>
<name>Smedberg, Allison R.</name>
</author>
<id>https://hdl.handle.net/1721.1/151690</id>
<updated>2023-08-01T03:04:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">From Bench to Bucks: An Approach and Case Study in Scaling Additive Research and Development Technologies within the Aerospace Industry
Smedberg, Allison R.
Scaling technology-focused companies is a unique challenge, as taking cutting-edge technology from the lab to consumer markets often requires significant R&amp;D work in parallel with all the typical challenges that any start-up faces. This thesis presents a framework for technology-focused companies to approach this crucial scaling period. The approach is centered on a tool called the House of Quality (HOQ), which is designed to help prioritize design features of consumer products. By defining a Holistic House of Quality (HHOQ) that includes company-wide capabilities and auxiliary functions, and applying HHOQs to company growth, this thesis explores whether HHOQs can help guide scaling decisions for companies in areas like manufacturing operations, organization structure and hiring, and trade-offs between short-term and long-term needs.&#13;
&#13;
This thesis explores the HHOQ scaling framework through the lens of Wingate, a technology-centered company in additive manufacturing that focuses on the material development and printing of high-temperature metals. Wingate had notably strong customer relationships and a technically superior product to competitors, and was facing the challenge of rapidly scaling operations to meet customer demand. The HHOQ process and scaling efforts were implemented and observed from January to August of 2022.&#13;
&#13;
During the timeframe of the research, Wingate grew headcount from 4 to 10 employees, reduced overdue customer backlog by 46%, and increased on-time delivery by 15%. The HHOQ framework proved useful in providing a structured way to assess scaling efforts in relation to customer needs, and successfully painted a picture of what other auxiliary functions would be important besides the success of the technology itself. This thesis is anticipated to be a starting point for more widespread consideration of HHOQs as a tool in scaling decisions, including the effectiveness of the framework over longer time horizons and across various industries.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Analysis of 3D-Printed Ceramic Cores for Gas Turbine Investment Castings</title>
<link href="https://hdl.handle.net/1721.1/151689" rel="alternate"/>
<author>
<name>Maristany, Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/151689</id>
<updated>2023-08-01T03:10:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Economic Analysis of 3D-Printed Ceramic Cores for Gas Turbine Investment Castings
Maristany, Eduardo
When manufacturing blades and vanes for gas turbine engines, internal cooling channels are formed by investment casting with sacrificial ceramic cores. The hot injection and pressing techniques traditionally used to manufacture ceramic cores have long lead times and high up-front costs, motivating an interest in forming cores via additive manufacturing. While industrial additive manufacturing technologies enable a faster and more iterative ceramic core manufacturing process, this efficiency comes with the high per-unit manufacturing costs of additive methods. To determine the quantities for which additive manufacturing is more economical than traditional hot injection and pressing methods, an economic analysis of the aircraft engine and investment casting markets is conducted. Current technical capabilities of several additive methods for forming ceramic cores are compared, and Stereolithography and Digital Light Processing are found to be suitable for ceramic core manufacturing. The economic advantage of using viable additive manufacturing methods is assessed using publicly available financial, maintenance, and aircraft fleet size data in a manufacturing cost model. When considering experience curve effects, the model shows that for a single core design, additive manufacturing is economical at quantities below 1,900 cores, or about 16 high-pressure turbine stage sets. When considering the multiple core designs needed to satisfy demand across all commercial aircraft in use, the model shows that additive manufacturing is economical at quantities below 720,000 cores, or 14% of the total core market demand in 2019. This motivates the use of additive manufacturing for new core design development and testing, as well as the maintenance of older engine designs, while reaffirming the use of hot injection and pressing techniques for production-level manufacturing and maintenance.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>External Network Manufacturing Capacity Design and Procurement in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/151688" rel="alternate"/>
<author>
<name>Hoxha, Ori</name>
</author>
<id>https://hdl.handle.net/1721.1/151688</id>
<updated>2023-08-01T04:00:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">External Network Manufacturing Capacity Design and Procurement in the Pharmaceutical Industry
Hoxha, Ori
Pharmaceutical companies have typically manufactured therapeutic drugs internally; however, the increased rate of innovation in the last decade has significantly pushed the balance toward external manufacturing. Moreover, the consolidation of contract manufacturing organizations (CMOs) has generated a need for new operating models between the parties. In this thesis we study supply chain performance as a function of manufacturing asset cross-validations and the contract structure of the capacity reservation agreement. Our goal is to identify ways of working between pharmaceutical companies and CMOs that close the flexibility gap between a fully internal supply chain and one with external stages, while maintaining the cost advantages of the latter.&#13;
&#13;
We assess small-molecule manufacturing capacity constraints using a linearized, multi-objective optimization model. Stochastic simulations reveal that connecting all products and assets in the portfolio through cross-validations reduces the supply shortfall by 6%, while limiting the impact on a single product. However, in order not to be performance-constrained by the need to have net zero demand fluctuations across all products in the typical 24-month window between ordering and delivery, we propose an option contract-driven capacity reservation model. Such a procurement construct allows the pharmaceutical company to hedge against potential demand downside, while making potential upside accessible at constant cost of goods sold. Moreover, option contracts allow the CMO to increase its 10-year net present value per contract by 50%, at a reduced effective order lead time of 12 months.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Energy Modeling Tool for Electric and Gas Infrastructure Decision Support</title>
<link href="https://hdl.handle.net/1721.1/151687" rel="alternate"/>
<author>
<name>Galindez de Jesus, Francisco J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151687</id>
<updated>2023-08-01T03:45:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Integrated Energy Modeling Tool for Electric and Gas Infrastructure Decision Support
Galindez de Jesus, Francisco J.
This dissertation compares the total yearly cost to customers of a gas utility company fully electrifying heat, with a focus on infrastructure costs, versus utilizing hydrogen blending, taking into account the current cost of green hydrogen. Previous research has separately discussed the implications and costs of both hydrogen blending and electrification: the former leads to increased safety risks at high blend rates and minimal or low additional risk at low blend rates, while the latter shows strong decarbonization capabilities. However, the two options have not been compared directly in a case study format.&#13;
&#13;
The infrastructure costs associated with fully electrifying heat are substantial, including the installation of heat pumps and the associated electrical infrastructure. In contrast, the infrastructure costs associated with hydrogen blending are relatively low. We use 2022 company and customer data to model the cost to upgrade infrastructure to support the additional electric load imposed by the electrification of heating. This cost is aggregated into the energy cost for this new method of heating, taking into consideration energy transformation losses. While not a cost factor, the risks imposed by hydrogen blending are analyzed as a "go/no-go" criterion. The paper also looks at the thermodynamic compatibility of hydrogen blends with existing natural gas systems and piping.&#13;
&#13;
Our analysis suggests that hydrogen blending is likely to result in a lower cost to customers for utilities looking to decarbonize their heating systems. While the current cost of green hydrogen is high, it is expected to decrease with further adoption of hydrogen. Moreover, the gradual transition facilitated by hydrogen blending can minimize the overall cost impact on customers. We find that the risk imposed by hydrogen blending can be mitigated at the target blending rate of 20%, although margins to risks such as fires, explosions, and pipeline brittle fracture are reduced. In conclusion, the decision between fully electrifying heat and utilizing hydrogen blending as a means of decarbonizing heat requires careful consideration of the associated costs, risks, and how each helps to achieve company strategy. Our findings have important implications for company executives, who can use this information to determine how customers will be affected by major strategy decisions, just one aspect to be considered out of many before making the final decision for a given city or region.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference of the Novel Coronavirus 2019 in Patients fitted with Boston Scientific Medical Hardware</title>
<link href="https://hdl.handle.net/1721.1/151686" rel="alternate"/>
<author>
<name>Ayane, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151686</id>
<updated>2023-08-01T03:57:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inference of the Novel Coronavirus 2019 in Patients fitted with Boston Scientific Medical Hardware
Ayane, Daniel
As Boston Scientific’s Rhythm Management business is challenged by an increasingly commoditized market, it is important to find new opportunities to diversify the products and services that the company offers. For a medical device manufacturing company, this differentiation traditionally comes in the form of hardware features, but in the wake of a data revolution, the company seeks opportunities to diversify beyond hardware.&#13;
&#13;
By utilizing Boston Scientific’s physiological time series data from Heart Failure therapy devices such as pacemakers, we aim to determine if an algorithm can be built to anticipate worsening COVID-19 symptoms in real time in patients and therefore provide them with better healthcare solutions by intervening in a timely manner.&#13;
&#13;
Since the study includes a relatively small number of patients with clinically established COVID-19 labels, we leverage the power of semi-supervised learning to extract useful signals and characterize the profile of COVID-19 in Boston Scientific Heart Failure patients. Specifically, we utilize constrained K-means clustering to understand whether any cardiovascular signals are associated with COVID-19 in heart failure patients, and then create pseudo-labels that can be used to train an LSTM in a supervised fashion. We produce two models, with the best model achieving a median alert rate of 3.8 days with an unexpected alert rate of 3.8%, 93.3% specificity, and 99.7% sensitivity.&#13;
&#13;
This study is meant to be a proof of concept to help define a future product that can be rolled out across Boston Scientific’s LATITUDE product line.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Enhancements to Visual-Inertial SLAM&#13;
for Robots and Autonomous Vehicles</title>
<link href="https://hdl.handle.net/1721.1/151684" rel="alternate"/>
<author>
<name>Abate, Marcus</name>
</author>
<id>https://hdl.handle.net/1721.1/151684</id>
<updated>2023-08-01T03:23:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Performance Enhancements to Visual-Inertial SLAM&#13;
for Robots and Autonomous Vehicles
Abate, Marcus
Spatial perception is a key enabler for effective and safe operation of robots and autonomous vehicles in unstructured environments. Two key components of a complete spatial perception system are: identifying where the robot is in space, and constructing a representation of the world around the robot. In this thesis, we study Visual-Inertial Simultaneous Localization and Mapping (VI-SLAM) and present several findings on its application to a variety of robotic platforms to obtain globally consistent localization for a robot as well as a dense map of its surroundings. In particular, we extend Kimera, an open-source VI-SLAM pipeline, to be more effective in traditional use-cases (e.g., stereo-inertial VI-SLAM) as well as more broadly applicable to different platforms and sensor modalities.&#13;
&#13;
Our first contribution is to present a system built around Kimera for autonomous valet parking of self-driving cars, and test on real-world self-driving car datasets. This system uses a modified version of Kimera to support multi-camera VI-SLAM and perform dense free-space mapping using multiple cameras with non-overlapping field of view. Our second contribution is to describe recent updates to Kimera and showcase their beneficial effect on localization and mapping performance, while also comparing against the state of the art on extensive datasets collected on a variety of platforms. Finally, we present a novel method for detecting and tracking humans in the scene in order to build 3D Dynamic Scene Graphs for high-level perception tasks, and evaluate our method in a photorealistic simulation environment. We conclude by commenting on the advantages of Kimera and identifying areas for future work.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Coupled Nonhydrostatic-Hydrostatic Hybridizable Discontinuous Galerkin Method</title>
<link href="https://hdl.handle.net/1721.1/151681" rel="alternate"/>
<author>
<name>Saravanakumar, Aditya Karthik</name>
</author>
<id>https://hdl.handle.net/1721.1/151681</id>
<updated>2023-08-01T04:04:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Coupled Nonhydrostatic-Hydrostatic Hybridizable Discontinuous Galerkin Method
Saravanakumar, Aditya Karthik
Numerical modelling of ocean physics is essential for multiple applications such as scientific inquiry and climate change, but also renewable energy, transport, autonomy, fisheries, water harvesting, tourism, communication, conservation, planning, and security. However, the wide range of scales and interactions involved in ocean dynamics makes numerical modelling challenging and expensive. Many regional ocean models resort to a hydrostatic (HS) approximation that significantly reduces the computational burden. However, a challenge is to capture and study local ocean phenomena involving complex dynamics over a broader range of scales, from regional to small scales, and resolving nonlinear internal waves, subduction, and overturning. Such dynamics require multi-resolution non-hydrostatic (NHS) ocean models. It is known that the main computational cost for NHS models arises from solving a globally coupled elliptic PDE for the NHS pressure. Optimally reducing these costs such that the NHS dynamics are resolved where needed is the motivation for this work.&#13;
&#13;
We propose a new multi-dynamics model to decompose a domain into NHS and HS dynamic regions and solve the corresponding models in their subdomains, reducing the cost associated with the NHS pressure solution step. We extend a high-order NHS solver developed using the hybridizable discontinuous Galerkin (HDG) finite element methodology by taking advantage of the local and global HDG solvers for combining HS with NHS solvers. The multi-dynamics is derived, and the first version is implemented in the HDG framework to quantify computational costs and evaluate accuracy using several analyses. We first showcase results on Rayleigh-Taylor instability-driven striations to evaluate computational savings and accuracy compared to the standard NHS HDG and finite-volume solvers. We highlight and discuss sensitivities and performance. Finally, we explore parameters that can be used to identify domain regions exhibiting NHS behaviour, allowing the algorithm to dynamically evolve the NHS and HS subdomains.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Off-Lattice Kinetic Monte Carlo Framework&#13;
For Long-Time Atomistic Simulations</title>
<link href="https://hdl.handle.net/1721.1/151680" rel="alternate"/>
<author>
<name>Luzzatto, Julien L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151680</id>
<updated>2023-08-01T03:01:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Off-Lattice Kinetic Monte Carlo Framework&#13;
For Long-Time Atomistic Simulations
Luzzatto, Julien L.
The goal of this thesis is to develop an off-lattice Kinetic Monte Carlo (KMC) framework to simulate the atomistic dynamics of materials at extreme conditions over long time scales. Despite the dramatic increase in computational power over the last few decades, rigorous approaches such as classical Molecular Dynamics (MD) techniques cannot access engineering and experimental time scales due to the fundamental scaling limitation imposed by atomic vibrations. KMC approaches are powerful stochastic computational techniques that focus on the simulation of rare atomistic events in order to analyze the coarse-grained dynamics of condensed matter systems and replicate non-equilibrium phenomena in a statistical fashion. However, their application to problems at extreme conditions — such as those encountered in materials science under high pressure, temperature, and radiation — has been limited by the complexity of atomistic interactions, by the variability and instability of underlying structures, and by the computational cost of simulating large systems over sufficiently long time scales.&#13;
&#13;
To address such challenges, this thesis proposes an off-lattice, modular and scalable KMC framework that features adaptive inferred structures, efficient process sampling and dynamic rate constant calculations, together with the corresponding Julia implementation. The developed KMC framework is justified theoretically, described step-by-step methodologically, and then validated against MD results for early-time dynamics.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Testing: Combining Static Analysis and Directed&#13;
Fuzzing</title>
<link href="https://hdl.handle.net/1721.1/151679" rel="alternate"/>
<author>
<name>Shields, Peyton</name>
</author>
<id>https://hdl.handle.net/1721.1/151679</id>
<updated>2023-08-01T03:51:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hybrid Testing: Combining Static Analysis and Directed&#13;
Fuzzing
Shields, Peyton
New CVEs are discovered each year, and their underlying bugs leave applications vulnerable to exploitation. Software is still frequently written in bug-prone languages, e.g. C and C++, and a single missed check during manual testing can result in vulnerabilities. Existing automated testing tools such as fuzzing are limited in scope or, in the case of static analysis, have a high false positive rate. Without improved automated testing, it can be challenging for developers to debug large, complex codebases. In this paper, Hybrid Testing is presented as a solution. Hybrid Testing combines static and dynamic analyses, leveraging static analysis to perform complex reasoning about logic, memory management, and concurrency. It creates a novel orchestration system which allows us to automatically verify the output of static analysis tools using directed fuzzing. Hybrid Testing is the first vulnerability detection technique with full codebase coverage and no false positives. It can be seamlessly integrated into the development cycle and scales well to large codebases. This work details the design and implementation of Hybrid Testing and evaluates its performance across a corpus of open-source C and C++ applications in the Magma benchmark. Hybrid Testing aims to promote more secure software through rigorous testing, making it easier for developers to detect security issues. We demonstrate that Hybrid Testing can find vulnerabilities up to 25% faster, with 17% higher accuracy (when detecting additional bugs), than current automated testing strategies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing BREeze - a High-Performance Regular Expression Library Using Code Generation with BuildIt</title>
<link href="https://hdl.handle.net/1721.1/151678" rel="alternate"/>
<author>
<name>Mitrovska, Tamara</name>
</author>
<id>https://hdl.handle.net/1721.1/151678</id>
<updated>2023-08-01T03:20:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Implementing BREeze - a High-Performance Regular Expression Library Using Code Generation with BuildIt
Mitrovska, Tamara
Regular expression matching is a very common problem in software engineering, with applications in text processing, text searching, data scraping, syntax highlighting, deep packet inspection in networks, etc. Due to the varying complexity of regular expressions, having one general approach to match all types of expressions is usually not enough to get the needed performance for software applications. Many modern regular expression engines have tried to solve this problem by combining different algorithms and optimization techniques, which in most cases results in very complicated and large codebases. In response, we introduce BREeze, a fully functional regular expression library implemented in just around 1500 lines of code with performance comparable to modern regular expression engines. BREeze is implemented on top of BuildIt, a multi-stage code generation framework that makes it possible to generate high-performance, specialized code while keeping the implementation simple.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Configurable Online Multi-Tiered Storage in&#13;
a Database Management System</title>
<link href="https://hdl.handle.net/1721.1/151677" rel="alternate"/>
<author>
<name>DaCosta, Howard</name>
</author>
<id>https://hdl.handle.net/1721.1/151677</id>
<updated>2023-08-01T03:44:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Configurable Online Multi-Tiered Storage in&#13;
a Database Management System
DaCosta, Howard
Businesses today produce data items on the order of millions daily. This is especially true in cloud environments, where much of this data comes in the form of logs and metrics about the performance and status of components in their cloud configurations. Maintaining efficient data storage and retrieval alongside growing customer data capacity is very challenging. One reason for this is that newer data tends to be accessed more frequently, while older data needs to be archived for future analysis. Another reason is that maintaining large amounts of data on fast storage disks is very costly. One approach to this problem is a tiered storage system, where new data is allocated to faster storage tiers and older data is pushed to lower tiers with slower retrieval time. This thesis presents a fully online and configurable design and implementation of such tiered storage in a database management system (DBMS) [1, 2], which has been difficult in the past due to two key constraints: the immutability of its columns and its lack of atomicity for sub-partition level operations. Without atomicity, there are no mechanisms in place that guarantee that a tenant’s data within a partition is moved or deleted completely, which can cause undetermined states that are difficult to identify and resolve. With the immutability of columns, data must be copied and inserted into other tiers, which raises the problem of duplicate data across tiers when a tenant issues queries. While these constraints are the exact optimizations that make this particular DBMS so performant for large analytical uses, they are the key features that need to be redesigned in building this system. The proof of concept developed here satisfies all of these requirements with an ingestion rate of 1 TB per day, minimal overhead, and about 70% in projected savings per instance — which could amount to hundreds of thousands of dollars saved per month in large production installations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Grit in NFL Cornerbacks using Statistical Analysis</title>
<link href="https://hdl.handle.net/1721.1/151676" rel="alternate"/>
<author>
<name>Kingston, Cole</name>
</author>
<id>https://hdl.handle.net/1721.1/151676</id>
<updated>2023-08-01T03:01:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Measuring Grit in NFL Cornerbacks using Statistical Analysis
Kingston, Cole
Using the pass play tracking data from the 2018 National Football League (NFL) season, I compiled a Grit Score that measures a cornerback's response to an adverse result on a play. I calculated this Grit Score from whether a cornerback allowed their opposing receiver to catch the ball, using the difference in average distance between the cornerback and the opposing receiver to measure the change in performance and compile one score for each player in the NFL. I validated my calculations against Pro Football Focus Coverage Ratings and was able to classify players into six categories based on talent and Grit Score. Overall, I found that most NFL players have high grit, or play consistently through adversity, which helps explain why they have made it to the highest level of football. NFL coaches and general managers prefer players whose performance increases following a bad event, as those players tend to stay in the NFL longer than those whose performance decreases.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap Between Real-time Video and Backlogged Traffic Congestion Control</title>
<link href="https://hdl.handle.net/1721.1/151675" rel="alternate"/>
<author>
<name>Karimi, Pantea</name>
</author>
<id>https://hdl.handle.net/1721.1/151675</id>
<updated>2023-08-01T04:03:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bridging the Gap Between Real-time Video and Backlogged Traffic Congestion Control
Karimi, Pantea
Real-time video applications, such as video conferencing, have become essential to our daily lives, and ensuring reliable, high-quality video delivery in the face of network fluctuation and resource constraints is critical. However, video congestion control algorithms have been criticized for their sub-optimal performance in managing network congestion and maintaining satisfactory video quality and latency. At the same time, state-of-the-art congestion control algorithms have demonstrated remarkable performance improvements, effectively addressing network congestion challenges and enhancing the overall quality of data transmission. In this work, we first demonstrate why there is such a gap between the performance of congestion control schemes on backlogged flows and on real-time video streams. Second, we present Dumbo, a design for reshaping video traffic to look like backlogged traffic, thus enabling state-of-the-art delay-sensitive congestion control algorithms for real-time video. We implemented Dumbo atop WebRTC and evaluated it under emulated network conditions using real-world cellular network traces. Our results show that, compared with GCC, Dumbo achieves a 1.5 dB improvement in PSNR, a 1.6 dB improvement in SSIM, 100 ms lower frame latency, 35x faster convergence, a 16% increase in video bitrate, a 32% increase in network utilization, and a 4x reduction in network queueing delay.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Energy Requirement of&#13;
Computer Vision</title>
<link href="https://hdl.handle.net/1721.1/151673" rel="alternate"/>
<author>
<name>Edelman, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151673</id>
<updated>2023-08-01T03:09:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Characterizing the Energy Requirement of&#13;
Computer Vision
Edelman, Daniel
The energy requirements of neural network learning are growing at a rapid rate. Increased energy demands have created a global need to improve the energy efficiency of neural network learning. This thesis aims to establish a baseline for how adjusting basic parameters can affect energy consumption in neural network learning on computer vision tasks. I catalogued the effects of various adjustments, from simple batch-size changes to more complicated hardware configurations (such as power capping). Findings include that switching from a single-precision model to a mixed-precision model can reduce energy consumption by nearly 40%. Additionally, power capping the GPU can reduce energy cost by a further 10%.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Introductory Low-level Programming Course for&#13;
Students with a Python Background</title>
<link href="https://hdl.handle.net/1721.1/151672" rel="alternate"/>
<author>
<name>Quaratiello, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/151672</id>
<updated>2023-08-01T04:08:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Introductory Low-level Programming Course for&#13;
Students with a Python Background
Quaratiello, Grace
The study of C and assembly language can provide valuable insight about the innate nature of computing systems and higher level programming languages. However, before September 2022, the MIT Department of Electrical Engineering and Computer Science (MIT EECS) had not required students to take any class that covers this material and these relationships. The classes included in the introductory programming sequence taken by most MIT EECS students place a stronger emphasis on high-level languages such as Python, which abstract away the interactions that a program must have with memory. Previously, if C had been introduced in an introductory-level class, it was one of several simultaneous concepts being taught to the students and therefore was not explored in depth. In September 2022, MIT EECS revised the class requirements for two of its degrees, Electrical Engineering and Computer Science (Course 6-2) and Computer Science and Engineering (Course 6-3) [1] to require a six-unit introductory course that focuses on low-level programming using C and assembly language. This thesis focuses on the establishment of this introductory low-level programming class intended for students positioned early in the EECS curriculum. Students taking this class study C and assembly language so that they can enter later coursework with both the ability to use these programming languages and a basic understanding of computing systems and associated constraints.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Monte Carlo Tree Search With Applications To Chip Design</title>
<link href="https://hdl.handle.net/1721.1/151671" rel="alternate"/>
<author>
<name>Jones, Cooper</name>
</author>
<id>https://hdl.handle.net/1721.1/151671</id>
<updated>2023-08-01T03:13:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Distributed Monte Carlo Tree Search With Applications To Chip Design
Jones, Cooper
Monte Carlo Tree Search is a classic AI method that builds a search tree asymmetrically using random rollouts on a game tree. The work detailed in this thesis extends traditional implementations by fully distributing each node onto a different physical machine while keeping the nodes in constant communication. The ability to distribute work to other machines is highly desirable: it saves single-computer resources, enables an almost arbitrary level of scaling, and allows the processing of states that would previously have been too large to run realistically on a single computer. When applied to the problem of automating the design of Printed Circuit Boards (PCBs) from just a list of desired board specifications, this fully distributed search allows increased search breadth and depth. This expands the computational limits of each action applied to the state, increasing the probability of finding an improved final state compared to running the search on one physical machine. In this thesis, we discuss our motivating problem and the infrastructure changes necessary to enable this increased capability. We show results highlighting the potential improvements these changes will have on the process of generating a PCB design and identify significant areas for improvement.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fair, Robust, and Calibrated Deep Learning with Heavy-Tailed Subgroups</title>
<link href="https://hdl.handle.net/1721.1/151670" rel="alternate"/>
<author>
<name>Hampton, Lelia Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/151670</id>
<updated>2023-08-01T03:19:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fair, Robust, and Calibrated Deep Learning with Heavy-Tailed Subgroups
Hampton, Lelia Marie
To deploy safe machine learning systems in the real world, we must ensure they are fair, robust, and calibrated. However, heavy tails pose a challenge to this mandate, especially since real-world data is often imbalanced and marginalized subgroups tend to be underrepresented. To move toward safer systems, we present two studies, on fair pre-processing and on ensemble learning. We show that fair pre-processing comes with a fairness-robustness-calibration tradeoff, and we present a novel adaptive sampling algorithm to overcome this tradeoff. Furthermore, we demonstrate that ensemble learning on its own increases the fairness, robustness, and calibration of machine learning models. The adaptive sampling algorithm and ensemble learning present opportunities for practitioners to overcome this tradeoff in practice.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concentration Inequalities for Dependent Random&#13;
Variables on Bayesian Networks</title>
<link href="https://hdl.handle.net/1721.1/151669" rel="alternate"/>
<author>
<name>Yao, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/151669</id>
<updated>2023-08-01T03:10:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Concentration Inequalities for Dependent Random&#13;
Variables on Bayesian Networks
Yao, Rui
This thesis presents a theoretical study of concentration results for functions defined on the random variables of a Bayesian network. In this work, we provide several concentration inequality results under the assumption that the function is Lipschitz or has bounded differences. In addition, we illustrate the concentration of the maximum likelihood estimator for some learning models. We also show the optimality of certain results and compare them to results in other relevant literature.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Halide in Molecular Dynamics</title>
<link href="https://hdl.handle.net/1721.1/151668" rel="alternate"/>
<author>
<name>Gayle Jr., Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/151668</id>
<updated>2023-08-01T03:35:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Halide in Molecular Dynamics
Gayle Jr., Ricardo
In many fields, especially biology and chemistry, it is important to understand how a collection of particles will interact with each other over some period of time. For a system of only two particles, it is simple enough to calculate the final positions of the atoms given their properties and the outside forces placed upon them. However, the system is often several orders of magnitude larger; the task is therefore handed off to computers and simulators.&#13;
&#13;
Molecular dynamics, or MD, simulations tend to be extremely expensive, taking several weeks to compute less than a second’s worth of real time. Two significant reasons MD simulations are time intensive are due to the complex loop structures and math required to observe each time step. More tools and research are constantly being developed to increase performance of these simulations.&#13;
&#13;
In this thesis we introduce Halide, a tool from the image-processing domain, and argue that Halide is a qualified candidate for efficiently implementing MD simulations in the future. We rewrote a potential in Halide and achieved only a 20% slowdown serially, which we are confident can reach parity with minimal changes to the code, and over a 300% speedup when running in parallel. Despite the challenges of beginning to work with Halide and its limitations, we accomplished this performance and versatility while writing 47% less code. Halide also makes the transformation to parallel scheduling trivial, whereas this is not the case in the original implementation. Halide was not able to represent all of the loop structures we wanted; we therefore suggest several additions and changes to Halide to make it more suitable for MD.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Chemical Reactions at the Mechanistic Level through Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/151666" rel="alternate"/>
<author>
<name>Jin, Edward H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151666</id>
<updated>2023-08-01T03:52:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Predicting Chemical Reactions at the Mechanistic Level through Deep Reinforcement Learning
Jin, Edward H.
Reaction prediction is a fundamental problem in chemistry. Previous work has mostly targeted prediction of the chemical products alone, without elucidating the mechanisms or elementary steps by which the reaction proceeds. Here, we attempt to predict chemical mechanisms via deep reinforcement learning.&#13;
&#13;
We first define a new type of graph molecular representation that can better keep track of electron flow and can be generalized to non-traditional bonding, such as 3-center-4-electron bonds. We then define a molecular environment Markov Decision Process (MDP) that codifies the allowed mechanistic steps and evaluates them using a thermodynamic energy oracle as the reward function. To solve this environment, we build a graph neural network-based policy and value network, where the policy network is first pre-trained on an open database of elementary radical reactions (RMechDB). Then, we use proximal policy optimization (PPO) to fine-tune the model and predict reasonable reaction mechanisms for two case studies: a radical oxidation of the terpene limonene, and a radical cyclization cascade in the synthesis of hirsutene.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficacy of Antibody and T cell Therapies for Highly&#13;
Mutable Viruses like Human Immunodeficiency</title>
<link href="https://hdl.handle.net/1721.1/151665" rel="alternate"/>
<author>
<name>Murugan, Pranav M.</name>
</author>
<id>https://hdl.handle.net/1721.1/151665</id>
<updated>2023-08-01T03:02:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Efficacy of Antibody and T cell Therapies for Highly&#13;
Mutable Viruses like Human Immunodeficiency
Murugan, Pranav M.
The isolation of broadly neutralizing antibodies (bnAbs) that can neutralize diverse strains of highly mutable viruses like human immunodeficiency virus (HIV), as well as the identification of mutationally-constrained regions of the proteome that could be targeted by T cells, has led to interest in passive immunotherapies and therapeutic vaccines as promising methods for treating chronic infection. However, the feasibility of creating a sufficiently powerful therapy remains uncertain. In this work, we develop a stochastic computational model of viral dynamics to help characterize the regimes where viral control or cure may be possible. We study the efficacy of either bnAb therapy or therapeutic vaccination that elicits T cell responses targeting mutationally-constrained regions, as well as treatments that combine these two therapeutic modalities. Our results show that combination therapy has the best chance of maintaining viral control or achieving a cure. This is because administering combinations of bnAbs with broad coverage of viral strains for a sufficiently long time can potentially clear from the latent reservoir the rare strains that are likely to escape T cell responses and cause viral rebound. We also describe a strong relation between the outcome of treatment and the diversity of the reservoir of latently infected cells, which suggests that the best candidates for immunotherapy are those who started antiretroviral therapy shortly after infection. Importantly, we find that cure is likely to be a rare outcome, and that the average time to cure is long and independent of therapeutic modality, as it depends on the rate of activation of the latent reservoir. Our results will help guide the design of new therapeutics and provide a platform for future computational screening of the efficacy of new treatment regimens.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximal Gradient Algorithms for Gaussian Variational Inference: Optimization in the Bures–Wasserstein Space</title>
<link href="https://hdl.handle.net/1721.1/151664" rel="alternate"/>
<author>
<name>Diao, Michael Ziyang</name>
</author>
<id>https://hdl.handle.net/1721.1/151664</id>
<updated>2023-08-01T04:13:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Proximal Gradient Algorithms for Gaussian Variational Inference: Optimization in the Bures–Wasserstein Space
Diao, Michael Ziyang
Variational inference (VI) seeks to approximate a target distribution π by an element of a tractable family of distributions. Of key interest in statistics and machine learning is Gaussian VI, which approximates π by minimizing the Kullback–Leibler (KL) divergence to π over the space of Gaussians. In this work, we develop the (Stochastic) Forward-Backward Gaussian Variational Inference (FB–GVI) algorithm to solve Gaussian VI. Our approach exploits the composite structure of the KL divergence, which can be written as the sum of a smooth term (the potential) and a non-smooth term (the entropy) over the Bures–Wasserstein (BW) space of Gaussians endowed with the Wasserstein distance. For our proposed algorithm, we obtain state-of-the-art convergence guarantees when π is log-smooth and log-concave, as well as the first convergence guarantees to first-order stationary solutions when π is only log-smooth. Additionally, in the setting where the potential admits a representation as the average of many smooth component functionals, we develop and analyze a variance-reduced extension to (Stochastic) FB–GVI with improved complexity guarantees.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning MRI-based Model for Prediction of&#13;
Clinically Significant Prostate Cancer</title>
<link href="https://hdl.handle.net/1721.1/151663" rel="alternate"/>
<author>
<name>Yang, Janice</name>
</author>
<id>https://hdl.handle.net/1721.1/151663</id>
<updated>2023-08-01T03:59:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Deep Learning MRI-based Model for Prediction of&#13;
Clinically Significant Prostate Cancer
Yang, Janice
Prostate cancer is one of the leading causes of death for men globally, despite many men being diagnosed with indolent tumors that do not warrant treatment. Increasingly, magnetic resonance imaging (MRI) is being used as a risk assessment tool before more invasive prostate biopsies are performed for patients with suspected prostate cancer. We hypothesize that we can train a deep learning model that combines multi-parametric MRI images with clinical factors to accurately predict a patient's risk of developing clinically significant prostate cancer. We train an image-only model and a combined image-and-clinical-factors model on a set of 9391 MRIs from the Massachusetts General Brigham (MGB) hospital system; these models achieved areas under the receiver operating characteristic curve (AUROC) of 0.80 and 0.84, respectively, for 1-year prediction of clinically significant prostate cancer, surpassing current human baselines and existing risk models' performance.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-Optimal Re-planning of Quadrotor Trajectories</title>
<link href="https://hdl.handle.net/1721.1/151661" rel="alternate"/>
<author>
<name>Wang, Geoffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/151661</id>
<updated>2023-08-01T03:34:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Time-Optimal Re-planning of Quadrotor Trajectories
Wang, Geoffrey
With the rise of quadrotor drones in recent years, the research and development of time-optimal trajectory planners are pushing the boundaries. They now not only exploit the full dynamics of the drone to generate aggressive trajectories but also have runtimes that allow them to generate plans in near real-time.&#13;
&#13;
This work extends current state-of-the-art time-optimal quadrotor trajectory planners to allow for on-the-fly trajectory re-planning. Given new waypoints and a previous trajectory, the planner is able to generate an updated trajectory while maintaining time optimality. &#13;
&#13;
Because the planner leverages a learned sequence-to-sequence neural network model, it can generate trajectories orders of magnitude faster than optimization-based approaches. This work then goes one step further and optimizes the planner using a compiled real-time inference library (NVIDIA TensorRT). The optimized planner is demonstrated to provide a 14.84x increase in throughput and over a 95% reduction in latency. The increase in throughput translates to better efficiency, and the reduction in latency is critical for trajectory re-planning while the drone is flying an active trajectory. Both improvements push the planner one step closer to running onboard the drones themselves. &#13;
&#13;
Although most experiments were conducted on desktop class hardware, mobile chips like the NVIDIA Jetson AGX Orin were also tested to mimic the class of hardware that could be flown onboard drones.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How early can we average Neural Networks?</title>
<link href="https://hdl.handle.net/1721.1/151660" rel="alternate"/>
<author>
<name>Nasimov, Umarbek</name>
</author>
<id>https://hdl.handle.net/1721.1/151660</id>
<updated>2023-08-01T03:01:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How early can we average Neural Networks?
Nasimov, Umarbek
There is a recurring observation in deep learning that neural networks can be combined simply with arithmetic averages over their parameters. This observation has led to many new research directions in model ensembling, meta-learning, federated learning, and optimization. We investigate the evolution of this phenomenon during the training trajectory of neural network models initialized from a common set of parameters (parent). Surprisingly, the benefit of averaging the parameters persists over long child trajectories from parent parameters with minimal training. Furthermore, we find that the parent can be merged with a single child with significant improvement in both training and test loss. Through analysis of the loss landscape, we find that the loss becomes sufficiently convex early on in training, and, as a consequence, models obtained by averaging multiple children often outperform any individual child.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Phonetic Category Learning from&#13;
Audio and Visual Input</title>
<link href="https://hdl.handle.net/1721.1/151659" rel="alternate"/>
<author>
<name>Zhi, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/151659</id>
<updated>2023-08-01T03:10:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unsupervised Phonetic Category Learning from&#13;
Audio and Visual Input
Zhi, Sophia
Understanding how children learn the phonetic categories of their native language is an open area of research in cognitive science and child language development. However, despite experimental evidence that phonetic processing is very often a multimodal phenomenon (involving both auditory and visual cues), computational research has primarily modeled phonetic category learning as a function of only auditory input. In this thesis, I investigate whether multimodal information benefits phonetic category learning under a clustering model. Due to the lack of an appropriate dataset, I also introduce a method for creating a high-quality dataset of synthetic videos of speakers’ faces for an existing audio corpus. This model trained and tested on audiovisual data achieves up to a 9.1% improvement on a phoneme discrimination battery over the random baseline compared to a model trained and tested on only audio data. The audiovisual model also outperforms the audio model by up to 4.7% over the baseline when both are tested on audio-only data, suggesting that visual information guides the learner towards better clusters. Further analysis indicates that visual information benefits most, but not all, phonemic contrasts. In follow-up analyses, I investigate the learned audiovisual clusters and their relationship to auditory gestures and phones, finding that the clusters capture a unit of speech smaller than phonemes. This work demonstrates the benefit of visual information to a computational model of phonetic category learning, suggesting that children may benefit substantively by using visual cues while learning phonetic categories.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative assessment of the frictional ignition resistance of&#13;
metals in high-pressure oxygen</title>
<link href="https://hdl.handle.net/1721.1/151658" rel="alternate"/>
<author>
<name>Garcia Jimenez, Andres</name>
</author>
<id>https://hdl.handle.net/1721.1/151658</id>
<updated>2023-08-01T03:44:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Quantitative assessment of the frictional ignition resistance of&#13;
metals in high-pressure oxygen
Garcia Jimenez, Andres
In this work, we developed a material index for selecting alloys resistant to frictional ignition in high-pressure oxygen environments. A previous ignition-resistance metric proposed by the NASA White Sands Test Facility (WSTF) varies strongly and unpredictably with test conditions, limiting its usefulness. The material index developed here incorporates key material properties that influence ignition behavior, including friction coefficient, ignition temperature, and thermal effusivity. Finite element simulations were used to compute ignition temperatures for 15 alloys based on published frictional ignition data from NASA WSTF. These values were used with the material index to construct property diagrams for ranking intrinsic frictional ignition resistance. The results demonstrate that nickel-based superalloys with low iron content are less likely to ignite under frictional heating than ferrous alloys and nickel-based superalloys with high iron content. The material index is then used to predict material performance outside of the test conditions, highlighting the effect of ambient temperature on ignition resistance. We conclude by developing an empirical relation between ignition temperature and enthalpy of oxidation that can guide the design of new ignition-resistant alloys.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Basis Alignment to create a Generalized&#13;
Multi-Relational Graph Convolution Network in the&#13;
Federated Setting</title>
<link href="https://hdl.handle.net/1721.1/151657" rel="alternate"/>
<author>
<name>Ramirez, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/151657</id>
<updated>2023-08-01T04:01:06Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Leveraging Basis Alignment to create a Generalized&#13;
Multi-Relational Graph Convolution Network in the&#13;
Federated Setting
Ramirez, Nicholas
Knowledge graphs have seen a significant rise in popularity and usage in recent years, with many real-world applications taking advantage of their ability to model interlinked data easily. In general, many institutions maintain their own knowledge graphs; however, these graphs tend to suffer from incompleteness. This is due to two main reasons: knowledge is naturally distributed across institutions, and institutions are unable to share sensitive data. With this in mind, federated learning appears to be a promising solution to this problem, as it enables clients to develop a shared global model without sharing any data. This thesis aims to solve the knowledge graph completion problem by introducing a federated learning protocol for the state-of-the-art Knowledge Embedding Based Graph Convolutional Network (KE-GCN) [51]. KE-GCN was chosen for its unification of multiple graph convolutional networks and its ability to provide as much flexibility as possible for clients. As a result, my federated protocol, Fed-KE-GCN, is focused on data privacy and flexibility. In addition to Fed-KE-GCN, this thesis empirically shows that a common approach to differential privacy for deep learning, Differentially Private Stochastic Gradient Descent (DP-SGD) [2], is not viable in this domain due to the nature of graph data and the internal framework of Graph Convolutional Networks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing the Allocation of Capital&#13;
Among Offensive Positions in the NFL</title>
<link href="https://hdl.handle.net/1721.1/151656" rel="alternate"/>
<author>
<name>Calvetti Jr., Paul G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151656</id>
<updated>2023-08-01T03:03:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing the Allocation of Capital&#13;
Among Offensive Positions in the NFL
Calvetti Jr., Paul G.
Building a successful National Football League (NFL) team is a challenging task, requiring front offices to balance player selection and compensation while operating under a salary cap constraint. The salary cap represents the maximum amount a team can spend on player salaries in a given season. Effective team construction entails strategic allocation of resources across different positions to maximize performance within this budget. This paper focuses on the critical aspect of allocating salary cap resources among offensive positions to maximize team success. We introduce a novel model that considers the interplay between players at different offensive positions, as well as the variations in salaries and performance levels observed between players under rookie and veteran contracts. By framing the allocation challenge as a constrained optimization problem, we aim to help teams maximize their points per game while staying within the salary cap limit. Our model’s predictions enable us to identify the optimal distribution of resources across offensive positions, providing valuable insights for NFL front offices as they seek to allocate their salary cap to achieve maximum offensive performance and increase their chances of success on the field.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Language Pretrained Multiple Instance Zero-Shot&#13;
Transfer for Histopathology Images</title>
<link href="https://hdl.handle.net/1721.1/151651" rel="alternate"/>
<author>
<name>Lu, Ming Yang (Max)</name>
</author>
<id>https://hdl.handle.net/1721.1/151651</id>
<updated>2023-08-01T04:16:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Visual Language Pretrained Multiple Instance Zero-Shot&#13;
Transfer for Histopathology Images
Lu, Ming Yang (Max)
Contrastive visual language pretraining has emerged as a powerful method for either training new language-aware image encoders or augmenting existing pretrained models with zero-shot visual recognition capabilities. However, existing works typically train on large datasets of image-text pairs and have been designed to perform downstream tasks involving only small to medium-sized images, neither of which is applicable to the emerging field of computational pathology, where there are limited publicly available paired image-text datasets and each image can span up to 100,000 x 100,000 pixels. In this paper we present MI-Zero, a simple and intuitive framework for unleashing the zero-shot transfer capabilities of contrastively aligned image and text models on gigapixel histopathology whole slide images, enabling multiple downstream diagnostic tasks to be carried out by pretrained encoders without requiring any additional labels. MI-Zero reformulates zero-shot transfer under the framework of multiple instance learning to overcome the computational challenge of inference on extremely large images. We used over 550k pathology reports and other available in-domain text corpora to pretrain our text encoder. By effectively leveraging strong pretrained encoders, our best model pretrained on over 33k histopathology image-caption pairs achieves an average median zero-shot accuracy of 70.2% across three different real-world cancer subtyping tasks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Neural Network for Efficient Video Recognition</title>
<link href="https://hdl.handle.net/1721.1/151649" rel="alternate"/>
<author>
<name>Pan, Bowen</name>
</author>
<id>https://hdl.handle.net/1721.1/151649</id>
<updated>2023-08-01T03:50:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Dynamic Neural Network for Efficient Video Recognition
Pan, Bowen
Recognizing real-world videos is a challenging task that requires the use of deep learning models. These models, however, require extensive computational resources to achieve robust recognition. One of the main challenges when dealing with real-world videos is the high correlation of information across frames. This results in redundancy in either temporal or spatial feature maps of the models, or both. The amount of redundancy largely depends on the dynamics and events captured in the video. For example, static videos typically have more temporal redundancy, while videos focusing on objects tend to have more channel redundancy.&#13;
&#13;
To address this challenge, we propose a novel approach that reduces redundancy by using an input-dependent policy to determine the necessary features for both temporal and channel dimensions. By doing so, we can identify the most relevant information for each frame, thus reducing the overall computational load. After computing the necessary features, we reconstruct the remaining redundant features from those using cheap linear operations. This not only reduces the computational cost of the model but also keeps the capacity of the original model intact.&#13;
&#13;
Moreover, our proposed approach has the potential to improve the accuracy of real-world video recognition by reducing overfitting caused by the redundancy of information across frames. By focusing on the most relevant information, our model can better capture the unique characteristics of each video, resulting in more accurate predictions. Overall, our approach represents a significant step forward in the field of real-world video recognition and has the potential to enable the development of more efficient and accurate deep learning models for this task.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Annealing Cryogenically Irradiated High Temperature Superconductors with Current Pulses</title>
<link href="https://hdl.handle.net/1721.1/151648" rel="alternate"/>
<author>
<name>Fisher, Zoe Lilah</name>
</author>
<id>https://hdl.handle.net/1721.1/151648</id>
<updated>2023-08-01T03:43:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Annealing Cryogenically Irradiated High Temperature Superconductors with Current Pulses
Fisher, Zoe Lilah
Tokamak fusion power plants rely on electromagnets engineered from high temperature superconductors (HTS) made of Rare Earth Barium Copper Oxide (REBCO) to confine a thermonuclear grade plasma. The HTS performance must be predictable despite the radiation damage caused by fast neutrons from fusion reactions, which damage the REBCO microstructure and decrease the magnet’s critical current. This lowers the reactor’s achievable magnetic field, and therefore its performance. The damage, however, is not necessarily permanent. By applying a short current pulse above the critical current of the coated conductor, resistive heating briefly raises the REBCO’s temperature well above that of the surrounding cryogenic environment. This process,&#13;
called annealing, heals defects and recovers some of the performance losses. Magnets are the limiting factor for tokamak lifetimes; therefore, pulse annealing could dramatically increase the economic viability of fusion energy by reducing shutdown frequency and duration.&#13;
&#13;
This experiment focuses on sending 400A pulses through an irradiated HTS tape to identify the optimal duration for critical current recovery. Using a cryogenic proton irradiation facility capable of applying current pulses as high as 2000A and as short as 100 ns, we found that a 400A pulse can display up to 400% critical current recovery with respect to the post-irradiation critical current value. The optimal length for this current pulse is 5.5 ms, which results in a maximum calculated temperature of 630K in the REBCO microstructure. Future work will pursue measuring (rather than calculating) the temperature in the REBCO microstructure and parameterizing the maximum critical current recovery at different pulse amplitudes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Market Mechanisms for Service Provider Operations in Advanced Air Mobility</title>
<link href="https://hdl.handle.net/1721.1/151647" rel="alternate"/>
<author>
<name>Qin, Victor L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151647</id>
<updated>2023-08-01T04:14:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Market Mechanisms for Service Provider Operations in Advanced Air Mobility
Qin, Victor L.
The proliferation of advanced air mobility (AAM) flights in the form of vertical take-off and landing aircraft (VTOL) and uncrewed aircraft systems (UAS) in the near future will require a new air traffic management system adapted for on-demand flights flying at low altitudes and far from existing airports and aviation hubs. The FAA has proposed UAS traffic management (UTM) and urban air mobility (UAM) as two concepts of operations for AAM, where private service providers (SPs) will be responsible for managing these novel forms of air traffic alongside but independently from existing air traffic control services. The roles and characteristics of these new SPs are still not well defined today.&#13;
&#13;
In this work, we propose methods that can fulfill these concepts. First, we present cost-aware prioritization methods based on the second price auction for air traffic management protocols for use in an SP's internal operations. Next, we show a Shapley value profit-sharing mechanism to incentivize cooperation in efficiently routing flights between SPs. Finally, we extend the Shapley value framework to accommodate multiple SPs in the same region of airspace, and study how the combination of airspace structure, traffic demand, and sector allocation leads to differences in profit earned between SPs. We conclude with future directions for studying and building service providers in the AAM context.&#13;
&#13;
Keywords: advanced air mobility, Shapley value, service providers, market mechanisms
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact Analysis and Design Development for Air-Dropped Antarctic Seismo-Geodetic Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/151646" rel="alternate"/>
<author>
<name>Miller, Alex S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151646</id>
<updated>2023-08-01T03:03:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Impact Analysis and Design Development for Air-Dropped Antarctic Seismo-Geodetic Ice Penetrator
Miller, Alex S.
Existing measurement tools for ice shelves and other glaciated regions have limited capability to measure dynamic events in remote areas. The Seismo-Geodetic Ice Penetrator (SGIP) offers a method for rapid deployment of a broadband seismometer and Global Navigation Satellite System (GNSS) positioning system designed to sense ice shelf resonant forcings caused by ocean gravity waves and atmospheric waves. Additionally, SGIP will track seismic indications of calving and rifting, facilitating better estimates of sea level rise. During operation, SGIP is dropped from an aerial vehicle, reaching a terminal velocity of 42 m s⁻¹; during impact with the snowpack surface, SGIP experiences an average acceleration of approximately 500 m s⁻². Upon impact, a fore-body section separates from the upper aft-body "flare" section and continues several meters into the ice shelf, while the aft-body remains at the surface with a set of communications antennas. The SGIP platform is compared to previously envisioned and tested penetrator systems. Impact modeling of SGIP into glacial firn is detailed, with a focus on fast simulation run-times for design exploration. Designs of snow spikes and a rigid antenna mast are detailed, analyzed, and tested. Results from a full-scale prototype hardware test in Juneau, Alaska are discussed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sybil: Predicting Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography</title>
<link href="https://hdl.handle.net/1721.1/151645" rel="alternate"/>
<author>
<name>Mikhael, Peter G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151645</id>
<updated>2023-08-01T03:02:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Sybil: Predicting Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography
Mikhael, Peter G.
Low-dose computed tomography (LDCT) for lung cancer screening is effective, though most eligible people are not being screened. Tools that provide personalized future cancer risk assessment could focus approaches toward those most likely to benefit. We hypothesize that a deep learning model assessing the entire volumetric LDCT data could be built to predict individual risk without requiring additional demographic or clinical data. We develop a model called Sybil using LDCTs from the National Lung Screening Trial (NLST). Sybil requires only one LDCT and does not require clinical data or radiologist annotations; it can run in real-time in the background on a radiology reading station. Sybil is validated on three independent datasets: a held-out set of 6,282 LDCTs from NLST participants, 8,821 LDCTs from Massachusetts General Hospital (MGH) and 12,280 LDCTs from Chang Gung Memorial Hospital (CGMH, which included people with a range of smoking history including non-smokers). Sybil achieves areas under the receiver-operator curve for lung cancer prediction at 1-year of 0.92 (95% CI 0.88, 0.95) on NLST, 0.86 (95% CI 0.82, 0.90) on MGH and 0.94 (95% CI 0.91, 1.00) on CGMH external validation sets. Concordance indices over six years were 0.75 (95% CI 0.72, 0.78), 0.81 (95% CI 0.77, 0.85), and 0.80 (95% CI 0.75, 0.86) for NLST, MGH, and CGMH, respectively. The model is publicly available at https://github.com/reginabarzilaygroup/Sybil.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unleashing the Power of Generative AI: The Race for Advancement and the Global Ramifications</title>
<link href="https://hdl.handle.net/1721.1/151643" rel="alternate"/>
<author>
<name>Chiang, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/151643</id>
<updated>2023-08-01T03:29:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unleashing the Power of Generative AI: The Race for Advancement and the Global Ramifications
Chiang, Ian
Generative AI, including language models like ChatGPT, has had a significant impact on a wide range of industries and applications. It has created new opportunities in industries like content creation, marketing, and design thanks to its capacity to produce high-quality text, images, and other types of media. The increased use of generative AI has, however, also sparked a global arms race for supremacy in the area.&#13;
&#13;
Concerns have been raised about the potential misuse of generative AI technology, including the production of fake news, propaganda, and deepfakes, as countries and corporations compete for control over it. The creation of highly sophisticated generative AI systems has also sparked discussions about the moral and societal ramifications of making machines that can generate content on their own with little to no human input.&#13;
&#13;
Despite these worries, generative AI will probably continue to have a positive social impact in the years to come. As the technology becomes more widely available and sophisticated, industries may undergo a revolution and our interactions with media and information may change. As a result, it is critical that we keep a close eye on its development and application while also attempting to address any potential ethical and societal issues that may come up.&#13;
&#13;
Through this research, I will analyze a holistic view of generative AI and raise concerns about the effects of AI growth and the global repercussions of the tension in the race for superior generative AI. Additionally, I will draw parallels from past disruptive technologies to forecast the outcome of generative AI’s abrupt changes to society.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis and Comparison&#13;
of the Creation of University Spin-off Startups in Deep Tech&#13;
between the United States and Japan</title>
<link href="https://hdl.handle.net/1721.1/151642" rel="alternate"/>
<author>
<name>Ito, Masumi</name>
</author>
<id>https://hdl.handle.net/1721.1/151642</id>
<updated>2023-08-01T04:21:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis and Comparison&#13;
of the Creation of University Spin-off Startups in Deep Tech&#13;
between the United States and Japan
Ito, Masumi
Research-based universities have played a significant role in the economic growth of nations, particularly in the United States, where companies originating from these universities have generated substantial employment opportunities and revenue. &#13;
&#13;
There exists a substantial disparity in the number of spin-off companies created from these universities between the United States and Japan. Although Japan is not far behind the United States in terms of patent numbers, it significantly lags behind in successfully commercializing research outcomes through the establishment of startups.&#13;
&#13;
Therefore, this thesis focuses on the Massachusetts Institute of Technology (MIT), a leading institution in spin-off creation in the United States, and the University of Tokyo, the leading institution in Japan. The objective is to investigate how their university-based ecosystems, including university-supported venture capital initiatives and on-campus entrepreneurship programs, influence the establishment of university spin-offs. The analysis is conducted through interviews and a literature review to examine the impact of these ecosystems on the formation of university spin-off startups. &#13;
&#13;
Many of the spin-off startups emerging from research-based universities fall under the category of "deep tech" companies, which are based on long-term research outcomes and require substantial investments and development time. Consequently, a funding gap referred to as the "valley of death" arises, presenting a unique financial challenge for entrepreneurs between research invention and commercialization. It is essential for entrepreneurs to overcome this funding gap, and thus, we also investigate how university spin-offs in Japan and the United States make fundraising choices to bridge the capital gap.  &#13;
&#13;
By conducting these surveys, we aim to gain insights into the effectiveness of university-affiliated venture capital firms, university spin-off startups, and the overall university ecosystem.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Techno-Economic Analysis of Hydrogen, Electric, and Diesel Fuel in Medium- and Heavy-Duty Transportation Applications</title>
<link href="https://hdl.handle.net/1721.1/151641" rel="alternate"/>
<author>
<name>Kennington, Lindsey</name>
</author>
<id>https://hdl.handle.net/1721.1/151641</id>
<updated>2023-08-01T03:28:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Techno-Economic Analysis of Hydrogen, Electric, and Diesel Fuel in Medium- and Heavy-Duty Transportation Applications
Kennington, Lindsey
This paper presents a techno-economic analysis for three distinct vehicle drivetrains (hydrogen fuel-cell vehicles (FCVs), battery-electric vehicles (BEVs), and diesel vehicles (ICE-D)) across a variety of applications in the medium- and heavy-duty (MD/HD) transportation market. The primary basis for evaluating each drivetrain is vehicle total cost of ownership (TCO). This paper analyzes the primary cost categories that contribute to TCO, capital and operational costs, as well as incentives and subsidies. The study also addresses the external social costs of FCVs and BEVs and provides a risk analysis for each zero-emission vehicle (ZEV) drivetrain.&#13;
&#13;
TCO analyses are developed across a variety of medium- and heavy-duty fleet applications. These fleet applications include Long-Haul Trucking (Class 8), Short-Haul Trucking (Class 8), Parcel Delivery (Class 4), Tipper Dump Trucks (Class 6), Refuse (Garbage) Trucks (Class 6), Forklifts (Class 3), School Buses (Class 6), and Transit Buses (Class 7). Certain application segments are modeled under multiple scenarios to account for key operational differences, such as volume- vs. weight-limited fleet applications, or single- vs. multi-shift operational schedules. TCO financial modeling for each drivetrain-application-scenario pairing illuminates which ZEV is a more natural fit within the MD/HD transportation fleet market segment. The results of the study demonstrate that the TCO of FCVs and BEVs are heavily influenced by several factors such as the initial purchase price, the price of hydrogen fuel, the cost of vehicle operator downtime, the vehicle charging rate, and the vehicle rated payload.&#13;
&#13;
This study concludes that FCVs are a natural fit for long- and short-haul trucking applications that operate under weight-limited operations or follow a multi-shift schedule. However, there are current infrastructure limitations for this market. Most notably, hydrogen fuel station corridors do not currently exist in the United States outside of California. This infrastructure limitation illuminates the key challenge to the success of a future hydrogen economy: potential hydrogen economy end-users want the guarantee of significant hydrogen infrastructure developments before committing to hydrogen-powered equipment – but the funding required to support the hydrogen infrastructure upgrades will only be secured once future hydrogen end-users and customers are secured. Therefore, it is recommended that stakeholders and policymakers interested in developing the hydrogen economy and future hydrogen fuel cell markets push for infrastructure development by securing partnerships with fleet owners in the specific application segments where FCVs outperform BEVs from a TCO perspective.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The MIT-IBM CloudSec 16: A Cloud Cybersecurity Benchmarking Framework</title>
<link href="https://hdl.handle.net/1721.1/151640" rel="alternate"/>
<author>
<name>Lewke, Damien</name>
</author>
<id>https://hdl.handle.net/1721.1/151640</id>
<updated>2023-08-01T03:50:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The MIT-IBM CloudSec 16: A Cloud Cybersecurity Benchmarking Framework
Lewke, Damien
This paper proposes a novel cloud security benchmarking framework and scoring system to improve cyber risk management. Cyber risk management is challenging and has become even more difficult as organizations digitally transform their business and IT from on-premises environments to cloud infrastructure. Threats proliferate as organizations’ attack surfaces expand due to shadow IT, software supply chain security, outsourced networking, and virtualization. Existing cyber risk management frameworks and controls are too exhaustive or generic and provide no means for organizations to assess their cyber risk against their peers. The MIT-IBM CloudSec 16 developed in this paper is a new security benchmarking framework and scoring system built specifically for cloud deployments in the financial service sector. When paired with MIT’s SCRAM secure computation platform, the MIT-IBM CloudSec 16 can provide an overview of cloud security in the financial service sector and enable organizations to identify and remediate areas of relative weakness.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additively Manufactured, Multi-Langmuir Probe Plasma Sensing Device for Use on CubeSats</title>
<link href="https://hdl.handle.net/1721.1/151639" rel="alternate"/>
<author>
<name>Bigelow, Zoey</name>
</author>
<id>https://hdl.handle.net/1721.1/151639</id>
<updated>2023-08-01T03:49:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Additively Manufactured, Multi-Langmuir Probe Plasma Sensing Device for Use on CubeSats
Bigelow, Zoey
This thesis presents a Langmuir probe (LP) sensor array for CubeSat ionospheric plasma diagnostics. The ionosphere is a critical layer of Earth's atmosphere consisting entirely of plasma, and reliable in-situ measurements made on satellites orbiting within the ionosphere are necessary for better understanding its properties. LPs are an ideal choice for CubeSats due to their simplicity, versatility, and minimal maintenance requirements.&#13;
&#13;
This thesis focuses on the development and characterization of a novel LP sensor array that employs three types of LP arrangements (single, dual, and triple) to measure plasma properties. This includes the development of low-power electronics to run the multi-LP device and the design of 3D-printed housing to push the lower bounds of device size and electrode spacing. The designs were rigorously tested in a helicon plasma chamber.&#13;
&#13;
The resulting LP sensor array is the first of its kind, allowing for the development of better and cheaper CubeSat sensors. The multi-LP device is designed to draw low power and is intended to be cost-effectively manufactured via rapid prototyping techniques. This makes it an ideal solution for CubeSats that is compatible with in-space manufacturing. This device provides critical data to help us better understand the thermosphere's plasma and its impact on climate change.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Development of Stability and Control Systems for Small, Deployable Aircraft</title>
<link href="https://hdl.handle.net/1721.1/151638" rel="alternate"/>
<author>
<name>Gaubatz, Julia C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151638</id>
<updated>2023-08-01T04:20:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Development of Stability and Control Systems for Small, Deployable Aircraft
Gaubatz, Julia C.
This thesis presents the development of the tail system for Firefly, a kilogram-scale, transonic, rocket-powered, deployable UAV. The propulsion system, vehicle size, and stowability requirements present challenges in designing control surfaces with adequate stability and control performance. To satisfy the stowability requirements, the tail was designed with an oblique hinge in which the deployment axis doubles as the control-surface actuation axis. Actuation mechanics, deployment spring sizing, and other mechanical details are also presented. To model the stability and control effects of the large oblique motion of the tail's control surfaces, a custom pre-processor was developed to deflect them for vortex lattice computations. The accuracy of this method is compared against the conventional "control" vector method in subsequent testing. Wind tunnel testing was performed to evaluate longitudinal stability and controllability. Unpowered flight tests were conducted to collect flight data and test the mechanical functionality of multiple tail designs. The accuracy of the vortex lattice aero-control predictions is discussed and recommendations are made with regard to the applicability of the oblique surface deflection pre-processor.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two case studies on indoor air quality in New York&#13;
City decarbonized affordable housing</title>
<link href="https://hdl.handle.net/1721.1/151636" rel="alternate"/>
<author>
<name>Morales, Manuel</name>
</author>
<id>https://hdl.handle.net/1721.1/151636</id>
<updated>2023-08-01T03:57:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Two case studies on indoor air quality in New York&#13;
City decarbonized affordable housing
Morales, Manuel
To mitigate the effects of climate change, building decarbonization and energy efficiency measures have expanded in scope. At the same time, interest has grown in how these changes affect indoor air quality (IAQ) and thus personal health.&#13;
&#13;
This thesis analyzes the concentrations of gas pollutants and particulate matter (PM) within occupied apartments in two New York City affordable housing projects, which we will refer to as Bushwick and Woodlawn. At Bushwick, we explore how gas and PM concentrations are impacted by retrofits that decarbonize the building and increase its energy efficiency to meet passive house standards. At Woodlawn, we monitor PM in a new development built to passive house standards to observe how concentrations are impacted by occupancy and controlled changes to ventilation and filtration settings.&#13;
&#13;
Results at Bushwick were limited by the availability of data and confounding factors but indicated the potential for a retrofit to passive house standards to improve IAQ. PM and gas sensors were initially installed in four apartments, but only one apartment (Apt. D) maintained both of these sensors online throughout the study. In addition, one apartment (Apt. A) kept only the PM sensor online and another (Apt. B) kept only the gas sensor online. This ultimately allowed us to analyze changes in PM and gas concentrations in two apartments each. Of note, a few tenants in Apt. D who used to smoke in the unit moved out during the retrofit, so these changes confounded any effect of the retrofit on air pollution that we hoped to observe. We observed statistically significant decreases in most gas and PM pollutants across apartments following the retrofit. PM1 showed the steepest decreases among PM sizes, with mean concentrations dropping 55% in Apt. A and 44% in Apt. D after the retrofit. Amongst gases, mean CO2 concentrations decreased by 62% in Apt. B and 45% in Apt. D. This decrease in air pollution resulted in greater compliance with Health Canada IAQ guidelines after the retrofit.&#13;
&#13;
Results at Woodlawn were supported by strong data collection for a year in nine apartment units. By observing air pollution before and after tenants moved in, we determined that occupancy had a statistically significant effect in increasing PM concentrations in all observed apartments. We also observed that the combined effect of increasing ventilation rates by 25% and using in-unit HEPA filters resulted in statistically significant decreases in PM concentrations across most units. Across all interventions in occupancy, ventilation, and filtration, PM2.5 and PM10 concentrations in all units fully complied with WHO ambient air quality guidelines. Furthermore, air pollution indoors was consistently lower than that outdoors, evidence that passive house construction can keep indoor air quality high and protect residents from outdoor air pollution.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Powderworld: A Platform for Understanding Generalization via Rich Task Distributions</title>
<link href="https://hdl.handle.net/1721.1/151635" rel="alternate"/>
<author>
<name>Frans, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/151635</id>
<updated>2023-08-01T03:16:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Powderworld: A Platform for Understanding Generalization via Rich Task Distributions
Frans, Kevin
One of the grand challenges of reinforcement learning is the ability to generalize to new tasks. However, general agents require a set of rich, diverse tasks to train on. Designing a ‘foundation environment’ for such tasks is tricky – the ideal environment would support a range of emergent phenomena, an expressive task space, and fast runtime. To take a step towards addressing this research bottleneck, this work presents Powderworld, a lightweight yet expressive simulation environment running directly on the GPU. Within Powderworld, two motivating challenges are presented, one for world-modelling and one for reinforcement learning. Each contains hand-designed test tasks to examine generalization. Experiments indicate that increasing the environment’s complexity improves generalization for world models and certain reinforcement learning agents, yet may inhibit learning in high-variance environments. Powderworld aims to support the study of generalization by providing a source of diverse tasks arising from the same core rules.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpretable Modeling of Immunotherapy Response&#13;
Factors</title>
<link href="https://hdl.handle.net/1721.1/151634" rel="alternate"/>
<author>
<name>Ting, Britney A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151634</id>
<updated>2023-08-01T03:43:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Interpretable Modeling of Immunotherapy Response&#13;
Factors
Ting, Britney A.
Immunotherapy, which treats cancer by either stimulating or suppressing the immune system, has been extraordinarily effective for some cancers, such as breast cancer and B-cell lymphoma. Checkpoint inhibitors, a type of immunotherapy, work by blocking the ability of cancer cells to evade immune system detection. However, not all patients respond to checkpoint inhibitors, even those with the same tumor types, and the complexity of biological networks and diversity of patients make it difficult for clinicians to understand why a patient does not respond to treatment. This thesis integrates RNA and whole-exome sequencing (WES) data into an interpretable machine learning model and investigates genetic factors that may separate responders from nonresponders. We discovered that both data types contribute to response separation and that certain gene sets may be especially important factors for predicting response. Further analysis is needed to elucidate how much individual genes contribute to significant gene sets and to response.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Branch-and-Price for Prescriptive Contagion Analytics</title>
<link href="https://hdl.handle.net/1721.1/151633" rel="alternate"/>
<author>
<name>Ramé, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/151633</id>
<updated>2023-08-01T03:14:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Branch-and-Price for Prescriptive Contagion Analytics
Ramé, Martin
Contagion models are ubiquitous in epidemiology, social sciences, engineering, and management. This thesis formalizes prescriptive contagion analytics problems in which a centralized decision-maker allocates shared resources across multiple segments of a population, each governed by contagion dynamics. We define five real-world problems under this umbrella: distributing vaccines, deploying vaccination centers, mitigating urban congestion, promoting online content, and combating drug addiction. Prescriptive contagion problems involve mixed-integer non-convex optimization models with constraints governed by ordinary differential equations, thus combining the challenges of combinatorial optimization, non-linear optimization, and continuous-time system dynamics. This thesis develops a branch-and-price methodology for these problems based on: (i) a set partitioning reformulation; (ii) a column generation decomposition; (iii) a novel state clustering algorithm for discrete-decision continuous-state dynamic programming; and (iv) a novel tri-partite branching scheme to circumvent non-linearities. Extensive experiments show that the algorithm scales to large and otherwise-intractable instances, significantly outperforming state-of-the-art benchmarks. Our methodology provides a novel decision-making tool to support resource allocation in contagion systems. In particular, its application can increase the effectiveness of vaccination campaigns by an estimated 50-70%, resulting in 12,000 extra saved lives over 12 weeks in a situation mirroring the COVID-19 pandemic.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fault Tolerant Broadcast in Bandwidth-Constrained Networks</title>
<link href="https://hdl.handle.net/1721.1/151630" rel="alternate"/>
<author>
<name>Kaklamanis, Ioannis</name>
</author>
<id>https://hdl.handle.net/1721.1/151630</id>
<updated>2023-08-01T03:28:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fault Tolerant Broadcast in Bandwidth-Constrained Networks
Kaklamanis, Ioannis
This thesis addresses the problem of achieving scalable fault-tolerant broadcast in networks with limited bandwidth. We begin by examining the limitations of leader-based protocols, such as HotStuff, which suffer from a leader bottleneck and reduced system throughput as the number of servers increases. To mitigate this, we propose CodedBcaster and Coded HotStuff, Byzantine Fault Tolerant (BFT) broadcast schemes based on erasure coding, demonstrating a significant improvement in throughput. We further explore the problem of optimal rate allocation in heterogeneous node-constrained networks and provide concrete theoretical results for determining the optimal system throughput rate. Additionally, we propose the MaxMin Rate Controller (MaxMin-RC) protocol as a feedback-based solution to optimize broadcast throughput in non-BFT settings, achieving close alignment with the optimal throughput rate. Through extensive simulations and evaluations, we demonstrate the effectiveness of our proposed solutions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Secure Shared Memory for Side-Channel-Resistant Enclaves</title>
<link href="https://hdl.handle.net/1721.1/151629" rel="alternate"/>
<author>
<name>Gomez-Garcia, Miguel</name>
</author>
<id>https://hdl.handle.net/1721.1/151629</id>
<updated>2023-08-01T03:59:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Implementing Secure Shared Memory for Side-Channel-Resistant Enclaves
Gomez-Garcia, Miguel
With the rise in cloud computing, it has become more critical than ever for remote users to obtain strong security guarantees for the sensitive computation they run on untrusted machines. Enclaves or Trusted Execution Environments (TEEs) are a powerful trusted computing primitive that can address this problem; through carefully co-designed hardware and software mechanisms, enclaves enforce strong isolation and integrity properties. While many enclave implementations already exist, most do not consider the threat of microarchitectural side channels and transient execution attacks. Although one academic proposal – MI6 – has addressed this stronger threat model, its security guarantees come at the cost of more limited capability, as well as performance overheads. As a result, no industrial hardware vendor has announced plans to include these attacks in its threat model.&#13;
&#13;
This thesis presents research in improving the capabilities of side-channel-resistant enclaves through the addition of secure shared memory, providing a mechanism for enclave applications to communicate with outside processes while maintaining the same strong isolation security guarantees provided by MI6. This allows for the development of a wider range of enclave applications with a significant performance improvement compared to existing enclave communication mechanisms. We hope that this work will demonstrate that enclaves can maintain strong security properties while being able to run a wide range of expressive programs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performing Actionable Evaluations of Sustainability&#13;
Investments</title>
<link href="https://hdl.handle.net/1721.1/151625" rel="alternate"/>
<author>
<name>Hopkins, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/151625</id>
<updated>2023-08-01T03:19:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Performing Actionable Evaluations of Sustainability&#13;
Investments
Hopkins, Jacob
Businesses are rapidly pursuing investments in sustainable technologies to meet their climate goals, but performing actionable evaluations of technologies is difficult, especially in decentralized businesses. Actionable evaluations have high accuracy, have high precision, and address uncertainty. Sustainable technologies are not well characterized and their expected performance is uncertain. For numerous reasons, approaches used in industry do not currently address these concerns. This research investigated tools to improve accuracy and precision and proposes a methodology to address uncertainty. The methodology includes a Monte Carlo simulation tool and a method to assess data quality that address the concerns in traditional approaches. We believe this methodology can help decentralized businesses perform more actionable evaluations of sustainable technologies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Passenger Electric Vehicle Charging Demand with Machine Learning Using Telematics Data and Temperature</title>
<link href="https://hdl.handle.net/1721.1/151624" rel="alternate"/>
<author>
<name>Barber, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/151624</id>
<updated>2023-08-01T03:20:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Passenger Electric Vehicle Charging Demand with Machine Learning Using Telematics Data and Temperature
Barber, Adam
Electric vehicles (EVs), with their potential to drastically reduce greenhouse gas emissions, pose a problem for energy distribution infrastructure, which was not designed with hosting capacity capable of handling the additional demand generated by their mass adoption. Understanding when customers charge their EVs and how much energy they consume better enables electric utilities to provide more reliable and affordable energy to all customers while aiding the transition to clean transportation. The purpose of this research was to analyze passenger EV charging data from National Grid's Massachusetts EV Off-Peak Charging Program and determine whether generalizable and scalable machine learning models could be built to predict EV charging energy demand, and further to determine the lowest possible geographic granularity of such models. This research was novel in its charge rate estimation methodology, its normalization of charging energy on a per-vehicle basis, its accounting for charging energy demand flowing into and out of the studied system, and its addition of ambient air temperature as a feature variable. Modeling employed supervised machine learning methods, with random forests deemed optimal in terms of accuracy, complexity, and computational intensiveness. Ultimately, this research successfully created and operationalized an accurate service territory model and illuminated the challenges associated with utilizing telematics data for demand modeling.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Monkey Cheeks Toolkit: Design Strategies for Mitigating Flood Impacts in the Bangkok Metropolitan Area</title>
<link href="https://hdl.handle.net/1721.1/151623" rel="alternate"/>
<author>
<name>Rattanathumawat, Pimpakarn</name>
</author>
<id>https://hdl.handle.net/1721.1/151623</id>
<updated>2023-08-01T04:05:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Monkey Cheeks Toolkit: Design Strategies for Mitigating Flood Impacts in the Bangkok Metropolitan Area
Rattanathumawat, Pimpakarn
Bangkok, the capital of Thailand, has been facing frequent and destructive floods due to recent decades of urban expansion and inadequate public drainage infrastructure. Although the Bangkok Metropolitan Administration (BMA) has actively improved the flood drainage network as the city expanded, its developed capacity and configuration have not kept pace with the city's population growth and rapid urbanization. Additionally, due to the escalating impact of climate change, Bangkok is expected to face more severe flooding, as well as the potential for greater water supply challenges, over the course of this century.&#13;
&#13;
Rather than solely depending on flood protection via large-scale infrastructure, this thesis proposes a decentralized approach to stormwater management, in which rain is captured where it falls through a local flood control measure called “Monkey Cheeks.” Although this concept is commonly applied in large water retention areas, the thesis applies the retention system to an ultra-urban environment such as the Bangkok Metropolitan Area, where the availability of land is limited. The main objective is to embrace water as a valuable resource and seize the opportunity to incorporate it into the fabric of the city. The outcome of this research is presented in the form of a Design Toolkit, a set of strategies for implementing Monkey Cheeks across various scales of urban conditions, ranging from small individual properties to large-scale publicly owned spaces. The Toolkit concludes with case studies illustrating how these strategies can be applied to existing conditions of Bangkok’s urban fabric, and how they can be combined to alleviate flooding throughout the city at large. Together, a network of Monkey Cheeks within the city can play a critical role in mitigating flood risk by slowing down runoff that could otherwise overwhelm public sewage systems, storing rainwater to tackle water supply challenges, and restoring the hydrologic function of the urban landscape by releasing water back to the aquifer. As such, the research contributes to the advancement of sustainable urban water management practices and highlights the importance of integrating traditional knowledge with modern urban areas.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Hands: Neural Implicit Manifold Learning of Hand Gestures</title>
<link href="https://hdl.handle.net/1721.1/151622" rel="alternate"/>
<author>
<name>Chatzinikolis, Dimitrios</name>
</author>
<id>https://hdl.handle.net/1721.1/151622</id>
<updated>2023-08-01T03:37:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Making Hands: Neural Implicit Manifold Learning of Hand Gestures
Chatzinikolis, Dimitrios
The human hand is a complex and sophisticated biological machine. Hand gesturing is key to our understanding of and interacting with the world around us. Hand gesturing is also instrumental in design and making. I argue that understanding the geometry of the space of hand gestures leads to an intuitive human-computer interaction. I decompose the gesture into its constituent parts, i.e., the hand motion – global coordinate system – and the hand pose – local coordinate system. I propose modeling the configuration space of hands as a high-dimensional manifold via neural unsigned distance fields, and I define plausible hand poses as points on the manifold. Next, I apply a distance metric to their configuration space. A trajectory in that space is a finite or infinite sequence of hand poses. These trajectories represent the different ways that the hand gestures. To demonstrate my approach, I restrict my study to a dataset of hands grasping everyday objects, and I evaluate my model on unknown grasps. Extending the model, the learned manifold acts as a prior for hand pose denoising, hand pose interpolation, and hand pose synthesis. Constraining that space can be interpreted as excluding impossible hand poses while constraining the manifold can be interpreted as defining a set of desirable hand poses. The former emphasizes the importance of bridging deep learning with existing mathematical structures, while the latter underlines future directions for the fields of design and computational making.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Liquid Metal Printing</title>
<link href="https://hdl.handle.net/1721.1/151621" rel="alternate"/>
<author>
<name>Karsan, Zain</name>
</author>
<id>https://hdl.handle.net/1721.1/151621</id>
<updated>2023-08-01T03:24:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Liquid Metal Printing
Karsan, Zain
The pace of worldwide material production and its deleterious effect on the climate motivate the need for materially efficient and sustainable methods of manufacture. Additive manufacturing (AM), commonly referred to as 3D printing, presents one approach to sustainable manufacturing, affording complexity at high resolution with minimal scrap. For example, polymer, ceramic, and metal materials have been employed in AM to produce parts in industries as varied as aerospace and construction.&#13;
&#13;
Nevertheless, metal AM remains a high-cost process with slow process rates and build environments that are challenging to scale up, restricting the application of these manufacturing techniques to products for which the cost per volume is significant. Liquid Metal Printing (LMP) is a novel approach to AM that is fast, scalable, and low cost, invented by the Self-Assembly Lab at MIT in 2020. However, this technique is nascent, and has only been developed to print with low melting point alloys that are unsuitable for any realistic use. Notwithstanding, LMP offers a new way of thinking about additive manufacturing by printing large-scale, low-resolution parts extremely quickly.&#13;
&#13;
Therefore, this thesis explores the redesign of several of the LMP components to print aluminum, describes a set of design rules and toolpath strategies for printing 2.5D multi-layer structures, and proposes several theoretical models for characterizing the print output. Finally, through a selection of case studies, this thesis assesses the applicability of LMP as a rapid coarse resolution additive manufacturing process in mechanical and product design.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithm and Hardware Co-optimization for Image Segmentation in Wearable Ultrasound Devices: Continuous Bladder Monitoring</title>
<link href="https://hdl.handle.net/1721.1/151618" rel="alternate"/>
<author>
<name>Song, Zhiye</name>
</author>
<id>https://hdl.handle.net/1721.1/151618</id>
<updated>2023-08-01T03:57:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Algorithm and Hardware Co-optimization for Image Segmentation in Wearable Ultrasound Devices: Continuous Bladder Monitoring
Song, Zhiye
From monitoring muscle during exercise and training, to assessing cardiovascular diseases, to estimating bladder volume, continuous autonomous tissue monitoring is essential. Recent developments in wearable ultrasound patches provide the foundation for wearable ultrasound devices with on-device image processing. Collaborating with Massachusetts General Hospital, we established bladder volume monitoring as the example use case. Real-time bladder monitoring can facilitate the diagnosis of post-operative urinary retention, and reduce indwelling urinary catheter usage and the risk of catheter-associated urinary tract infection. Using machine learning and hardware co-design, this thesis developed and validated a low-compute, memory-efficient deep learning model and an energy-efficient all-parameters-on-chip application-specific integrated circuit (ASIC) for accurate bladder region segmentation and urine volume calculation.&#13;
&#13;
U-Net is the state-of-the-art neural network (NN) for biomedical image segmentation [1]. We trained two binarized models with 4-bit and 6-bit skip connections. They achieved accuracy within 3.8% and 2.6% of the floating-point U-Net without any floating-point operations, and reduced the memory requirement by 11.5× and 9.0×, respectively, to under 150 kB. This thesis also designed the first neural network accelerator targeting U-Net-like image segmentation. Using an interleaving feature map representation, skip connection compression, and extensive design space exploration, the accelerator requires neither external memory nor any co-processor, and consumes only 14.4 μJ per 128 × 128 image segmentation.&#13;
&#13;
The lightweight bladder volume estimation algorithm together with the energy-efficient image segmentation ASIC can be integrated with existing ultrasound probes to reduce the burdens of nurses in hospital settings and improve outpatient care. Moreover, the quantization and compression techniques and the image segmentation accelerator can be applied to other clinical applications, such as monitoring fetal heart rate and neural therapy. This technology, together with advances in compact ultrasound patches, will enable real-time tissue monitoring on the edge, thereby not only maintaining health data privacy, but also improving both point-of-care and inpatient healthcare.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing Language Models for Contextual Scale Understanding</title>
<link href="https://hdl.handle.net/1721.1/151617" rel="alternate"/>
<author>
<name>Vedantam, Saaketh</name>
</author>
<id>https://hdl.handle.net/1721.1/151617</id>
<updated>2023-08-01T04:01:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Probing Language Models for Contextual Scale Understanding
Vedantam, Saaketh
Pretrained language models (LMs) have demonstrated a remarkable ability to emit linguistic and factual knowledge in certain fields. Additionally, they seem to encode relational information about different concepts in a knowledge base. However, since they are trained solely on textual corpora, it is unclear whether these models implicitly understand anything grounded about the real world. This work investigates the extent to which LMs learn the structure of the physical world. By probing the contextualized embeddings of sentences, we examine how well LMs predict the sizes of real-world objects. We further explore the effect of adjectival modifiers on object embeddings. We show that while larger models more accurately convey scalar information through their embeddings, they perform on par with smaller models in the task of contextual prediction. Fortunately, the models are capable of identifying a difference in scale when an adjectival modifier is introduced, implying that the relevant context is successfully incorporated into the object’s embedding through the LM’s attention mechanism.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Invasive Vision-Based Measurement of Hand Kinematics and Interaction</title>
<link href="https://hdl.handle.net/1721.1/151615" rel="alternate"/>
<author>
<name>Wang, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/151615</id>
<updated>2023-08-01T03:46:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Non-Invasive Vision-Based Measurement of Hand Kinematics and Interaction
Wang, Margaret
The ability to manipulate and interact with the world is a key part of what distinguishes humanity from other animals, and the human hand is perhaps the most powerful tool we have to do so. As such, a great deal of research goes towards better understanding how the hand behaves during interaction tasks. Study of physical interaction requires measurement of hand kinematics and interaction forces. Unfortunately, current methods involve cumbersome sensors or external forces that inherently change the way that the subject behaves. In order to avoid these confounding factors, this thesis presents an approach to measuring hand kinematics, dynamics, and physical interaction using a non-encumbering vision-based tool.&#13;
&#13;
The proposed tool consists of (1) vision-based tracking of hand kinematics in joint space, (2) synergy extraction and synergy space projection, (3) visual soft-tissue-deformation-based contact detection, and (4) an exploration of force estimation at the fingertips.&#13;
&#13;
This pipeline is applied to a piano-based experiment for validation and comparison with existing tools. The results indicate that vision-based kinematics measurement is largely comparable to and at times shows more sensitivity to joint angle variation than traditionally instrumented approaches. However, force estimation is not yet a consistent alternative to physical sensor interfaces.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Pipelines for Information Extraction from Semi-Structured Documents in Structured Format</title>
<link href="https://hdl.handle.net/1721.1/151614" rel="alternate"/>
<author>
<name>Chu, Jung Soo</name>
</author>
<id>https://hdl.handle.net/1721.1/151614</id>
<updated>2023-08-01T03:32:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Automated Pipelines for Information Extraction from Semi-Structured Documents in Structured Format
Chu, Jung Soo
As documents are one of the main tools for storing and communicating information, there has been a large amount of effort towards developing methods to parse information from them automatically. While many parts of this industry are automated, there are still scenarios where certain types of documents cannot be read by machines with high accuracy and throughput. The task becomes especially difficult when the documents are semi-structured, or in other words have widely varying formats. With the significant leaps in optical character recognition, computer vision, and natural language processing, there has been great progress on this problem. In this paper, we propose two pipeline designs that utilize these newer techniques to extract information from semi-structured documents in a structured output format: a fully automated pipeline and a semi-automated pipeline. The fully automated pipeline has a region detection module that finds the locations of text blocks and table blocks regardless of the format of the document, and a region extraction module that extracts information from each of the text and table blocks. The semi-automated pipeline, on the other hand, has a classification module and an extraction module. The classification module determines the format class of the input document, while the extraction module has templates that can parse information from the documents in each format class. We evaluate the two pipelines on four key metrics: accuracy, coverage, time efficiency, and scalability. The fully automated pipeline shows strong results in coverage and scalability, while the semi-automated pipeline succeeds in accuracy and time efficiency.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Refinement Cost Estimators for Bilevel Planning</title>
<link href="https://hdl.handle.net/1721.1/151612" rel="alternate"/>
<author>
<name>Luong, Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/151612</id>
<updated>2023-08-01T04:01:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning Refinement Cost Estimators for Bilevel Planning
Luong, Lilian
Bilevel planning is an effective approach for solving complex task and motion planning (TAMP) problems with continuous state and action spaces: it involves first searching for a high-level abstract plan and then refining it into a sequence of low-level actions. Although the low-level refinement process is a significant contributor to the total time needed to solve a task, this cost is typically unaccounted for during high-level planning. This can result in undesirable behavior if abstract plans that are difficult or even impossible to refine are selected over alternatives that may be slightly longer but can also be refined significantly faster. This work develops a method for learning to estimate the cost of refining an abstract plan, and a framework for using the estimator to guide high-level search in a bilevel planner. We demonstrate in two environments that our proposed approach considerably improves on the combined planning and execution cost required for tasks compared to several baselines, including a standard benchmark bilevel planner and alternative estimator models.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Method for Mach-Zehnder Interferometer Phase Stabilization</title>
<link href="https://hdl.handle.net/1721.1/151611" rel="alternate"/>
<author>
<name>Hardy, Max</name>
</author>
<id>https://hdl.handle.net/1721.1/151611</id>
<updated>2023-08-01T03:27:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Novel Method for Mach-Zehnder Interferometer Phase Stabilization
Hardy, Max
The Mach-Zehnder Interferometer (MZI) is a device used across many areas of research, including quantum computing, optical communication, sensing, and imaging. The proper function of an MZI depends upon the stabilization of the relative phase between its arms, as thermal gradients and vibrations can cause this phase to drift. This thesis proposes a novel method for MZI phase stabilization. The stabilization method was modelled mathematically and simulated, and a simple, low-cost prototype control circuit was constructed that successfully proved the feasibility of this stabilization method. Finally, the initial mathematical model was refined according to experimental observations. This novel stabilization method could be impactful to any field or application that depends upon the use of MZIs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Measurement Tool for Videoconferencing User Experience</title>
<link href="https://hdl.handle.net/1721.1/151610" rel="alternate"/>
<author>
<name>Jin, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/151610</id>
<updated>2023-08-01T04:06:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Measurement Tool for Videoconferencing User Experience
Jin, Caroline
The COVID-19 pandemic forced people to work remotely and use videoconferencing software like Zoom in their daily lives. While people are returning to their pre-pandemic lifestyle, many still depend on videoconferencing software. As a result, application developers need to regularly monitor user experience in terms of video quality, stalls, and network conditions, and identify areas of potential improvement. Companies and academic researchers focus user experience analysis on dual-endpoint, controlled conditions that do not reflect everyday user calls. Gathering data on a large scale without knowing the network structure and obtaining permission for traffic analysis takes time and effort. Such large-scale experiments often require lengthy procedures to obtain the right permissions and deploy monitoring infrastructure in the middle of the campus network.&#13;
&#13;
In contrast to existing approaches, an ideal measurement application would merely run on users’ devices without cooperation from the other endpoint that they’re conversing with. Such an application enables researchers to collect network statistics across a wide range of Internet conditions at a fine-grained level without significant overheads. This thesis proposes the Single Endpoint Zoom Measurement Application (SEZMA) that computes and logs network and video metrics when a user is on a Zoom call and sends metric logs to a centralized server. In addition to providing insights for users and researchers, the application aims to be explanatory, usable, lightweight, and privacy-preserving.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>zk-Sigstore: System for Anonymous Certificate-Based Software Signing</title>
<link href="https://hdl.handle.net/1721.1/151609" rel="alternate"/>
<author>
<name>Merrill, Kelsey</name>
</author>
<id>https://hdl.handle.net/1721.1/151609</id>
<updated>2023-08-01T04:22:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">zk-Sigstore: System for Anonymous Certificate-Based Software Signing
Merrill, Kelsey
Most software developers get their software dependencies from online repositories, allowing for greater efficiency during the development process. However, downloading software from the internet comes with security concerns, and issues with open source software security have led to several high-profile attacks. In order to combat the problem, many repositories have implemented digital signatures for packages to verify the contributor’s identity, but with limited success due to well-documented usability issues surrounding key management. The digital signature primitive itself also does not provide an answer to which signers have the authority to sign which artifact. Proposals like Sigstore aimed at fixing the usability problems with digital signatures come with privacy concerns that have limited uptake, and though they provide some answers to the signing authority question, these come with scalability, verifiability, and privacy concerns.&#13;
&#13;
This thesis presents zk-Sigstore, a system for usable (certificate-based) and anonymous digital signatures for software. zk-Sigstore is a certificate-based signature system, but instead of publishing identities in the clear, identities are obfuscated with a cryptographic commitment. Techniques from key-transparency verifiable key directories inform a scalable, verifiable, and private authorization record for mapping digital artifacts to the maintainers with the authority to sign them.&#13;
&#13;
Using zk-Sigstore for software signing, signing and verifying times are on the order of hundreds of microseconds even for the largest of software repositories, and deployment of zk-Sigstore requires minimal changes to existing infrastructure, making it a practical solution to this real-world problem.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Commentator: Narrating Sports Games through Multimodal Perception and Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/151608" rel="alternate"/>
<author>
<name>Purohit, Sonia</name>
</author>
<id>https://hdl.handle.net/1721.1/151608</id>
<updated>2023-08-01T03:19:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">AI Commentator: Narrating Sports Games through&#13;
Multimodal Perception and Large Language Models
Purohit, Sonia
Automated visual understanding is an essential part of the sports industry, particularly in the context of major sports tournaments. The scale of generated video footage necessitates the use of automated systems to generate insights and enhance fan experiences. One area where this is particularly challenging is commentary, which requires detailed information about play-by-play action, a task that cannot be efficiently carried out by human commentators at scale.&#13;
&#13;
We tackle this problem for grand-slam tennis through an IBM partnership with the Championships, Wimbledon. This thesis introduces a novel system that utilizes computer vision to extract play-by-play metadata and convert it into fluent commentary using large language models. Our computer vision module utilizes a single camera feed to understand every detail of the game – court and net detection, player and ball tracking, player poses, and fine-grained shot classification, all in near-real-time. This metadata is then combined with additional information from other modalities, such as crowd audio and radar-measured ball speed, and fed into a "data2text" large language model to generate commentary in natural language.&#13;
&#13;
Our system not only supports the narration of match content at scale, but also powers the collection of additional metadata to facilitate additional match insights in the future.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Networks on Eigenvector Data</title>
<link href="https://hdl.handle.net/1721.1/151606" rel="alternate"/>
<author>
<name>Lim, Derek</name>
</author>
<id>https://hdl.handle.net/1721.1/151606</id>
<updated>2023-08-01T04:16:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Neural Networks on Eigenvector Data
Lim, Derek
The need to process eigenvectors derived from data arises across numerous domains in computing and the sciences. However, eigenvectors differ from other types of data, as they have particular symmetries; for any eigenvector of a matrix, the negation of that vector is also an eigenvector of the same eigenvalue, so there are sign symmetries. There are also more general continuous basis symmetries in higher dimensional eigenspaces. In this thesis, we present the first neural networks that process eigenvector input while respecting these symmetries. We build neural networks that are invariant to sign and basis symmetries as well as neural networks that are equivariant to sign symmetries. Under certain conditions, these networks are provably universal — they can approximate any continuous function with the desired invariances. When used with Laplacian eigenvectors, our invariant neural networks are provably powerful for graph representation learning, as they can approximate several classes of important functions on graphs. Our networks empirically improve machine learning models with eigenvectors, in tasks including molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inverse Inverse Graphics</title>
<link href="https://hdl.handle.net/1721.1/151605" rel="alternate"/>
<author>
<name>Chandra, Kartik</name>
</author>
<id>https://hdl.handle.net/1721.1/151605</id>
<updated>2023-08-01T04:07:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inverse Inverse Graphics
Chandra, Kartik
To be human is to express—not only facts about the world, but also experiences, emotions, and ideas. We paint, explain, compose, persuade, dance, teach, sculpt, and sing, not merely to communicate bits of information from person to person, but rather to fan the windmills of one another's minds.&#13;
&#13;
Well then, how do we do it? This thesis suggests that when engaging in artistic expression, we reflect on our shared perceptual and cognitive faculties—"common sense," so to speak—and then construct stimuli to evoke experiences in our audience's brains. Drawing on ideas from computer graphics, cognitive science, and literary theory, I offer ways to think about images (Chandra et al., 2022) and stories (Chandra et al., 2023a, 2023b) as objects to be designed with respect to a model of the audience. In particular, if we think of the audience's mind as solving inverse problems—perception as inverse rendering, action understanding as inverse planning—then we can think of expression as solving a kind of *inverse* inverse problem.&#13;
&#13;
I then show how to implement such "inverse inverse" methods computationally. Starting with classic Bayesian models of vision and social cognition, I present algorithms for optimizing over inference to create "adversarial examples" that evoke various desired inferences in those models. For example, we optimize images that are visual illusions (Chapter 1), and animations that tell surprising stories (Chapter 2). Because the Bayesian models capture human intuitions well (better than, say, typical neural network models), the optimized stimuli "transfer" to evoke similar experiences in humans. I demonstrate this with a variety of human subject studies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Koopman Operator Theory Applied to Lambert’s Problem with a Spectral Behavior Analysis</title>
<link href="https://hdl.handle.net/1721.1/151603" rel="alternate"/>
<author>
<name>Pasiecznik, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/151603</id>
<updated>2023-08-01T03:09:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Koopman Operator Theory Applied to Lambert’s Problem with a Spectral Behavior Analysis
Pasiecznik, Julia
Astrodynamics is abundant with nonlinear dynamical systems, such as satellites operating within Earth’s gravitational field. With the increase in the number of satellite constellations, making accurate predictions of the motion of satellites throughout space is becoming more relevant than ever. Influences from gravitational forces, atmospheric drag, and solar radiation pressure introduce highly nonlinear terms in the equations that model these dynamical systems. The predictions of these effects are essential for planning future space missions. Intrinsically tied to this is Lambert’s problem, which concerns finding an optimal transfer orbit that connects two position vectors within a specified time of flight. Furthermore, solving Lambert’s problem in the context of these nonlinear dynamical systems is crucial for identifying optimal orbit trajectories of spacecraft in Earth orbit and beyond. Traditional Lambert solvers often involve iterative methods that are computationally intensive, may not capture the nonlinearities of the dynamical systems accurately, and might have constraints in their applications. Using operator theory to simplify a system’s nonlinear dynamics presents a promising avenue for research.&#13;
&#13;
This thesis bridges that gap by implementing operator theory to solve Lambert’s problem effectively. The Koopman Operator is used to embed the nonlinear dynamics involved in Lambert’s problem into a global linear representation, enabling the study of the nonlinear dynamical systems from a global perspective for future state prediction away from fixed points. The Koopman Operator is applied to solve variants of Lambert’s problem, including the minimum energy and minimum Δv solutions, the single- and multi-revolution solutions, and the multi-impulse solution. Furthermore, the Koopman Operator enables the computation of these solutions with low computational complexity. A variety of initial conditions are considered, demonstrating the range of applicability of the Koopman Operator to Lambert’s problem. Comparisons made with numerical methods and another Lambert solver demonstrate the robustness and accuracy of the Koopman Operator solutions. Finally, an analysis of the spectral behaviors of the dynamics considered is provided, with insights into the stability of the dynamical systems and the accuracy of the solutions found.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Privacy Communications in Smart Home Technology for Older Adults: Evaluation, User Attitudes and Concerns, and Design Implications</title>
<link href="https://hdl.handle.net/1721.1/151602" rel="alternate"/>
<author>
<name>Vaidya, Manasi Atul</name>
</author>
<id>https://hdl.handle.net/1721.1/151602</id>
<updated>2023-08-01T03:42:49Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Data Privacy Communications in Smart Home Technology for Older Adults: Evaluation, User Attitudes and Concerns, and Design Implications
Vaidya, Manasi Atul
As the global population continues to age, there is an increasing need for smart home technology that supports older adults in living independently. There is evidence that technology today is capable of automating and carrying out various tasks in the home. However, the adoption of such technology by older adults has been limited, beyond usability and accessibility challenges, by data privacy and security concerns. Through an evaluation of the privacy policies and user agreements of smart home devices from the perspective of the aging population, and a collection of the beliefs and attitudes older adults share about data privacy and smart home technology adoption, this thesis provides a set of guidelines, evaluated through in-person interviews, that companies operating in the smart home technology space can use to improve the design of privacy communications. These guidelines can inform the way that companies convey content related to data privacy, and can also guide the development and design of devices customized to the requirements of older adults, facilitating wider adoption of such technology among this population. Ultimately, the hope is that informed adoption of technology will contribute to the overall well-being and quality of life of older adults by enabling them to age in place.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-fidelity Design with Optimization Guided Incremental Decisions</title>
<link href="https://hdl.handle.net/1721.1/151601" rel="alternate"/>
<author>
<name>Lee, Dongjoon</name>
</author>
<id>https://hdl.handle.net/1721.1/151601</id>
<updated>2023-08-01T03:20:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Multi-fidelity Design with Optimization Guided Incremental Decisions
Lee, Dongjoon
The aerospace industry is continually seeking novel, sustainable, and efficient aircraft designs to address global emissions reduction goals and growing air travel demands. In this context, innovative configurations are being explored to provide sustainable air transportation in diverse markets. However, traditional design processes based on empirical methods and expert knowledge may be inadequate for these novel configurations. This thesis introduces a design strategy that utilizes physics-based models and Multi-Disciplinary Analysis and Optimization (MDAO) as a framework to approach designs without precedents. The proposed strategy is demonstrated through a design example of an electric regional passenger aircraft, where both analysis fidelity and geometry representation are incrementally refined.  The design example culminates in an optimization formulation that determines the optimal wing, fuselage, and tail surface geometries, as well as the ideal flight conditions for takeoff, climb, cruise, and landing. The final formulation incorporates two high-fidelity models within the optimization loop: MSES for airfoil analysis at multiple spanwise locations, integrated over the wing using a lifting line method, and a finite element structural analysis for sizing optimal wing structural components.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Double Cropping to Expand Sustainable Aviation Fuel Production in the United States</title>
<link href="https://hdl.handle.net/1721.1/151600" rel="alternate"/>
<author>
<name>Demsky, Sarah Elaine</name>
</author>
<id>https://hdl.handle.net/1721.1/151600</id>
<updated>2023-08-01T04:20:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis of Double Cropping to Expand Sustainable Aviation Fuel Production in the United States
Demsky, Sarah Elaine
Sustainable aviation fuels (SAFs) have been identified as a short to mid-term solution for reducing aviation carbon emissions. However, the industry is limited by cropland availability. This thesis analyzes the suitability of double cropping in the United States as a method to expand biomass for SAF production. The suitability for double cropping is quantified for 2020, 2035, 2050, and 2100, using temperature and rainfall data and regional projections. Twelve double crop pair combinations were studied for seven crop feedstocks using the hydroprocessed esters and fatty acids (HEFA) and alcohol to jet (ATJ) via ethanol pathways for conversion. The maximum SAF potential was quantified with current land use considerations, and showed that today, double cropping can increase SAF production by 268% to 464% compared to single cropping alone, depending on the land turnover time between crops. When allocating SAF production for minimum land usage, jet demand was met mostly by ATJ feedstocks. When allocating jet demand for maximum emissions savings, SAF production was almost entirely HEFA feedstocks. Based on the current climate, employing double cropping to meet jet fuel demand with 100% SAFs can lower total U.S. carbon emissions by 3.48% if optimized for maximum emissions savings, or 2.87% if optimized for minimum land use (including co-product emissions savings), compared to using entirely Jet-A. Overall, this thesis shows that double cropping can significantly expand SAF yields and has the potential to lower the carbon emissions of the U.S. aviation industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Use of TROPICS Pathfinder&#13;
Observations for Lunar Calibration</title>
<link href="https://hdl.handle.net/1721.1/151599" rel="alternate"/>
<author>
<name>Chew, Juliana L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151599</id>
<updated>2023-08-01T03:56:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluating the Use of TROPICS Pathfinder&#13;
Observations for Lunar Calibration
Chew, Juliana L.
Traditional two-point in-situ calibration systems for microwave radiometers use large, on-board hot targets [12]. Small satellites such as CubeSats, however, are unable to house these hot targets due to SWaP constraints. Instead, CubeSat radiometers such as the TROPICS Pathfinder use noise diodes as smaller alternative hot calibration targets [18]. Noise diodes can experience calibration drifts that must be characterized and accounted for to maintain the reliability of radiance measurements. Given the stability of lunar radiative transfer models in microwave frequencies, lunar vicarious calibration may be a feasible method to detect calibration drifts. In this thesis, we evaluate the use of TROPICS Pathfinder observations for lunar calibration. We develop a lunar calibration approach that takes TROPICS observations as input, processes TROPICS data for lunar observations, estimates lunar intrusion temperature and scan geometry, and accounts for pointing error. We compare the lunar brightness temperature estimates and measured antenna temperatures to the lunar radiative transfer model developed by Yang and Burgdorf [21]. We test our lunar calibration model on TROPICS Pathfinder lunar observations from November and December 2021. Pathfinder’s antenna temperatures are within 1 K and 2 K of the simulated antenna temperatures for the W/F and G bands, respectively. We find that even though the simulated antenna temperatures generally agree, work remains to improve agreement between measured and modeled lunar brightness temperatures. The antenna temperature differences can fluctuate by ±2 K, ±4 K, and ±5 K for the W, F, and G bands, respectively, so the reliability of this method needs to be improved further before operational use for calibration. Possible ways to improve lunar calibration results include tuning Yang and Burgdorf’s lunar model, additional pointing error analyses, and lunar calibration model adjustments.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Shape of Music. Computational Specification of Hand Gestures in Piano Playing.</title>
<link href="https://hdl.handle.net/1721.1/151590" rel="alternate"/>
<author>
<name>Lamprou, Aikaterini</name>
</author>
<id>https://hdl.handle.net/1721.1/151590</id>
<updated>2023-08-01T03:24:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Shape of Music. Computational Specification of Hand Gestures in Piano Playing.
Lamprou, Aikaterini
Essential parts of human communication, expression, and productive action rely on gestures. Examining these gestures is central to design and to understanding creativity. However, techniques and skills manifested in human hand gestures are hard to capture in computational terms. In this research, I investigated how skilled hand gestures can be described computationally and apprehended visually as shapes. A case study was carried out on piano performance. I explored piano hand gestures in terms of technique and expressive intent, essential elements of the pianist’s playing style.&#13;
&#13;
I designed and ran a controlled study to capture gesture variation in performances of the same music, with six proficient piano players participating in the recordings. I developed technical workflows and processes to record multimodal performance data, including video, audio, MIDI and 3D motion capture, and prepared the data for visualization and analysis. I compared two performers’ piano techniques from their executions of technical exercises and detected elements of expression in a pianist’s performances of Prelude in C Minor, BWV.999 by J.S. Bach.&#13;
&#13;
Experts evaluated the performances from audio and video. From their comments, I extracted qualities of technique and expression essential in piano playing. Then, I determined features to measure these qualities in the motion and MIDI data. I explored the recorded data by calculating the features for different partitions of the music score. I developed feature visualizations displaying the pianists’ playing patterns. To exemplify how motion and music data could be decomposed into gestures, I presented an outline for a shape grammar for parsing musical performances.&#13;
&#13;
I concluded that expressive patterns detected in the played music and hand motion could be combined to identify and study gestures according to stylistic elements of piano performance. Overall, the study points towards a perceptual approach to determining and analyzing skilled hand gestures in piano playing and beyond.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The (a)rchitectural Lexicon of (B)lack Hair: A Production of Knowledge</title>
<link href="https://hdl.handle.net/1721.1/151588" rel="alternate"/>
<author>
<name>Johnson, Jensen</name>
</author>
<id>https://hdl.handle.net/1721.1/151588</id>
<updated>2023-08-01T03:17:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The (a)rchitectural Lexicon of (B)lack Hair: A Production of Knowledge
Johnson, Jensen
The author hereby grants to MIT a nonexclusive, worldwide, irrevocable, royalty-free license to exercise any and all rights under copyright, including to reproduce, preserve, distribute and publicly display copies of the thesis, or release the thesis under an open-access license.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Load Selection, Truck Dispatch, and Backhaul Activation in Outbound Logistics Operations</title>
<link href="https://hdl.handle.net/1721.1/151587" rel="alternate"/>
<author>
<name>Tanski, Max</name>
</author>
<id>https://hdl.handle.net/1721.1/151587</id>
<updated>2023-08-01T03:17:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Automating Load Selection, Truck Dispatch, and Backhaul Activation in Outbound Logistics Operations
Tanski, Max
Logistics operations is a field concerned with improving the efficiency and effectiveness of the movement of goods and resources. It involves the use of advanced technologies, data analysis, and process improvements to streamline logistics operations and reduce costs while enhancing customer satisfaction. One of the most significant advances in the field of logistics operations has been the integration of automation and artificial intelligence (AI) technologies. Automation has enabled logistics companies to optimize many aspects of their operations, from demand forecasting and route planning to warehouse management and quality control. By leveraging algorithms and machine learning techniques, logistics operations can make faster and more accurate decisions, leading to improved efficiency, cost savings, and customer satisfaction. However, enterprises’ ability to implement such technologies is hindered by a lack of awareness of their efficacy and by poor access to the appropriate data inputs that automation requires. Here we show that incorporating these technologies into business operations is not only feasible in sparse data environments but can also result in significant financial gains and enhanced employee decision-making and satisfaction. By applying algorithmic search methods, we automated essential tasks for employees and showed how automating various capacity management strategies and tasks could potentially save up to $873,396 and generate an additional revenue of $1,193,618 in a small manufacturing operation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ChatGPT and the Future of Management Consulting: Opportunities and Challenges Ahead</title>
<link href="https://hdl.handle.net/1721.1/151586" rel="alternate"/>
<author>
<name>Kamaruddin, Ryan Idris</name>
</author>
<id>https://hdl.handle.net/1721.1/151586</id>
<updated>2023-08-01T03:05:49Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">ChatGPT and the Future of Management Consulting: Opportunities and Challenges Ahead
Kamaruddin, Ryan Idris
This thesis explores the implications of ChatGPT, a cutting-edge artificial intelligence (AI) language model, on the management consulting industry, focusing on opportunities and challenges it presents. Through interviews with management consultants, the study aims to explore the potential impacts of ChatGPT on consulting processes, outcomes, and the competitive landscape. &#13;
&#13;
The findings indicate that the integration of ChatGPT in consulting services has the potential to streamline data analysis, enhance decision-making, and improve client relationships through personalized and prompt communication. Moreover, ChatGPT can increase efficiency in repetitive tasks, enabling consultants to focus on higher-value activities such as creative problem-solving and strategic planning. However, challenges such as data privacy concerns, ethical implications, and potential job displacement must be addressed. The adoption of AI-powered solutions must be balanced with measures to ensure the responsible and secure use of these technologies, along with adequate professional development opportunities for consultants in the evolving landscape.&#13;
&#13;
By examining current applications, future possibilities, and inherent risks, this thesis contributes to the dialogue on the future of work and ChatGPT's potential to transform industries, offering valuable insights for stakeholders in management consulting to better prepare for and navigate the changes brought by this transformative technology.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SME Lending - Asymmetries and Opportunities in the Brazilian Market</title>
<link href="https://hdl.handle.net/1721.1/151585" rel="alternate"/>
<author>
<name>Pereira, Anderson Da Silva</name>
</author>
<id>https://hdl.handle.net/1721.1/151585</id>
<updated>2023-08-01T04:00:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">SME Lending - Asymmetries and Opportunities in the Brazilian Market
Pereira, Anderson Da Silva
Small and medium-sized enterprises (SMEs) play a crucial role in global society. They account for a significant portion of employment and contribute to innovation and economic growth in both developed and emerging economies. Strong SMEs have a direct impact on job creation and stimulate economic activity in local communities. They can also help to promote entrepreneurship and encourage the development of new ideas and innovations. All of these factors contribute to the overall well-being of societies and communities worldwide.&#13;
&#13;
However, SMEs often face considerable challenges in obtaining financing, limiting their ability to thrive. SME finance is crucial for the economy because it helps these businesses access the capital they need to invest in their operations, hire employees, and pursue new opportunities. By providing access to financing, SMEs can grow and contribute to the health and vitality of the economy.&#13;
&#13;
In perfect markets, high margins attract new players, and the resulting competition produces innovations that decrease the price of accessing goods while extending access to consumers who previously lacked it. Nonetheless, this does not happen in the SME finance market, especially in emerging (and imperfect) economies. Financing can take the form of loans, lines of credit, or other financial instruments, and these carry enormous differences in price despite targeting similar customers.&#13;
&#13;
This study aims to understand the roots of the interest rate for SME lending in different aspects of emerging markets, particularly focusing on Brazil. This study also aims to collect data, conduct qualitative interviews, and understand the financial technologies applied to increase competition, reduce interest rates, and promote access to financial products.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Early Stage Embodied and Operational Analysis for Guiding Sustainable Architectural Design Decisions</title>
<link href="https://hdl.handle.net/1721.1/151584" rel="alternate"/>
<author>
<name>Lyu, Yiwei</name>
</author>
<id>https://hdl.handle.net/1721.1/151584</id>
<updated>2023-08-01T03:56:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Early Stage Embodied and Operational Analysis for Guiding Sustainable Architectural Design Decisions
Lyu, Yiwei
Buildings account for a significant portion of global energy consumption and greenhouse gas emissions. Simulating building performance in the early design stage allows architects and engineers to adjust design decisions to reduce embodied carbon and energy consumption. Life-cycle assessment (LCA) is one of the most comprehensive methodologies to evaluate the environmental impact of architectural production and operation. This thesis aims to address the challenges involved in applying LCA to architectural design in the early design stage. By conducting a literature review of the status quo of architectural LCA and identifying the gaps in existing research and tools, this paper continues the research of a novel workflow in Grasshopper that calculates greenhouse gas (GHG) emissions and costs from both embodied and operational phases. The workflow addresses the early-stage uncertainty through random inputs with a Monte Carlo approach and implements surrogate models to accelerate the process for each iteration. The author's contribution to the workflow includes improving its robustness and accuracy by redesigning the simulation model to generate more accurate training data and transitioning to a new machine-learning algorithm. The results of the study provide insights into design decisions that can reduce embodied and operational carbon. A parallel case study was conducted to assess the trade-offs between embodied and operational carbon with regard to construction material selection. In the end, the thesis also proposes possible future research directions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsettling Roads: Ethnographies of Dust across Rural Nepal</title>
<link href="https://hdl.handle.net/1721.1/151583" rel="alternate"/>
<author>
<name>Bhandari, Shubhekshya</name>
</author>
<id>https://hdl.handle.net/1721.1/151583</id>
<updated>2023-08-01T04:13:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unsettling Roads: Ethnographies of Dust across Rural Nepal
Bhandari, Shubhekshya
Although the three of us traveled on the same path, our intimacy with the land will always be different. This thesis is a conversation between my grandfather, my father, and myself. It is a conversation between the mountains, rivers, and footpaths in Hatiya, Nepal, that find themselves entangled with deployed methods of road building. Roads arrive with expectations of stability and acceleration. In Nepal, and specifically Hatiya, they unsettle physically, atmospherically, and metaphorically. The intrusion of the Mid-hill Highway along the valley has lured desires of connectivity. Feeder roads sit still, frozen and fragmented in a state of construction, exposed to the phase-shifts of climate.&#13;
&#13;
This thesis looks at the evolving physicality of transportation networks in rural Nepal and its implications for the state of the village, or ga’um. As the footpaths we walked are encroached upon by larger operations of excavation and erasure, the ga’um has responded accordingly. What was once a landscape abundant with cultivation and agrarian livelihood is slowly integrating into economic hubs catering to schedules of tourism and commerce, with the slow intrusion of national highways serving as the main artery facilitating these forces.&#13;
&#13;
Road building in Nepal is linked to decades of national promise of forthcoming economic prosperity. The reality is a contested network of material confluence. Over the last three generations, government involvement in rural road expansion has challenged existing notions of time, acceleration, and mobility, resulting in different waves of sociocultural shifts in rural regions of Nepal. How can roads be deconstructed, redesigned, diverted, and striated to invite new intimacies with the land? How can geological speeds and collisions of earth and life be visualized and their conflicts rendered? An intimacy and rendering that allows for the containment of some operations and the spilling of others. One that welcomes my grandfather's history, my father's translations, and my fragmented understanding and experiences of our hometown.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eating On And Beyond The Infinite Corridor</title>
<link href="https://hdl.handle.net/1721.1/151582" rel="alternate"/>
<author>
<name>Searight, Tristan</name>
</author>
<id>https://hdl.handle.net/1721.1/151582</id>
<updated>2023-08-01T04:14:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Eating On And Beyond The Infinite Corridor
Searight, Tristan
Infinite Stops are part of a design strategy for MIT’s campus that aims to make eating well effortless and enticing. Approaches to improving wellbeing and community, in addition to reducing carbon emissions and resource use at MIT, must account for the benefits of social, plant-based meals.&#13;
&#13;
Foodscape research uses the tools of architecture, GIS, behavioural economics, and participatory planning to explore how the relationship between daily life and the built environment shapes eating habits. Mapping parties invited members of MIT to describe their typical meals and the spaces that support their social ideals. Typically, people walk a maximum of 5 minutes from preceding and subsequent activities to obtain meals, which are eaten in 18 minutes or less. Work-related convenience, cost, and the opportunity to run into friends often dictate where, what, and how people eat. Social meals are valued, and people travel further to find spaces whose architecture, menu, music, and hospitality exhibit an attractive social atmosphere. &#13;
&#13;
In combination with MIT’s geographic isolation from food places, time constraints make the spatial and cultural setting of the Infinite Corridor a key ingredient in people’s eating habits and social opportunities. Infinite Stops are built structures that intervene in the corridor, punctuating its “corridic” setting with plant-based food linked with a variety of “staying” spaces. The Stops provide fast and slow meals which help connect and mediate the densely populated corridor space with underutilised outdoor spaces. Infinite Stops presents a vision for MIT to leverage design—graphic, architectural and urban—to achieve its health, community and sustainability goals. Though they butt up against systemic socio-economic challenges, the Stops hint at how, over the course of a university program or a teaching or staffing role, the occasional meal can create meaningful and positive behaviour change. The underlying approach and findings can empower planning departments to study their respective time-famished, work-driven foodscapes and find opportunities to support eating well across different mealtime needs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rearrangements – Four Urban Experiments Between Soil and Sky</title>
<link href="https://hdl.handle.net/1721.1/151581" rel="alternate"/>
<author>
<name>Senise, Luca Smith</name>
</author>
<id>https://hdl.handle.net/1721.1/151581</id>
<updated>2023-08-01T03:38:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Rearrangements – Four Urban Experiments Between Soil and Sky
Senise, Luca Smith
In this thesis, I situate my creative practice as an architect-artist in the fields of urban intervention, ontological design and experimental worldbuilding. My projects involve prototyping arrangements in urban space that contain ideas currently excluded from modern colonial urban design. These ideas, both specific and broad, include holding resources in common, the idea that public corridors could be places of life rather than only movement, the idea that individual citizens should be empowered to contest the conditions they live in, or the even bigger idea that the world is not an inanimate bank of resources (as Enlightenment thinking would have it) but in fact an interdependent, living collection of relationships.&#13;
&#13;
How might the act of intervening in urban space suggest new arrangements? I argue that urban intervention as an artistic method exists between craft and design, taking craft as a means of operating directly in the material world and design as an act of translation between an intention and a form. Examining a variety of precedent practices, particularly the work of Flavio de Carvalho (Brazilian polymath, b. 1899) and the Design Studio for Social Intervention (Boston-based design group, “ds4si”), I understand urban intervention as a method that has a double power in both a material dimension and an abstract dimension. I then use the analytical framework developed by ds4si, “Ideas-Arrangements-Effects,” to consider the relationship between physical prototypes and the social ideas present in my work.&#13;
&#13;
The thesis chronicles four projects (1. Rhythmwalk, 2. Citizen Chair, 3. Free Rain/Free Rein, 4. Boston Lead Gardens) undertaken as a student in the Art, Culture and Technology program at MIT, revealing an evolving set of methodological approaches to creating urban interventions: from independent to collective, with permission and without permission, between curiosity and intention. The thesis concludes with a meditation on what it means to belong to the ground that supports us and an evaluation of the arc the four projects have taken.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting and Preventing Unsafe Events at an Enterprise</title>
<link href="https://hdl.handle.net/1721.1/151580" rel="alternate"/>
<author>
<name>Ukaire, Onyinyechi</name>
</author>
<id>https://hdl.handle.net/1721.1/151580</id>
<updated>2023-08-01T03:03:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Predicting and Preventing Unsafe Events at an Enterprise
Ukaire, Onyinyechi
Amgen Incorporated, like many organizations with manufacturing operations, has undesirable incidents. Corporate workers trip and fall. Contractors get cuts that require first aid. Shop floor operators can be injured by automated equipment, and so on. Seeking to become a leader in environmental health and safety, more than simply avoiding the painful costs of work-related injuries, Amgen wants to curb and ultimately reduce its unwanted events to zero. To do so, a dominant approach is to build a model that predicts undesirable incidents: if an incident can be predicted, it can be prevented. Previous studies predicting safety relied on incidence rate per worker, medical costs, or days away from work — variables biased by staff reporting practices. The availability of standardized, routinely and automatically collected data on work equipment, work orders, and human resources allows for the hypothesis that operating parameters are indicative of unwanted events. We find that machine and work order variables are poor indicators of the frequency of unsafe incidents. A reason for this unexpected finding is that the company has reduced machine-generated errors to the extent that they are no longer the primary drivers of unsafe events in the system. In checking for alternative drivers of systemic safety, we find human factors promising. Even when a predictive model is in effect, we demonstrate the need to develop organizational capabilities to ensure that safety systems continuously improve. We argue for a new approach that puts standardization of incident documentation, data engineering, and performance metrics reporting at the core. Given ample opportunities for evaluating safety systems, we are confident that the variabilities that result in accidents can be subdued for Amgen and many industrial companies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a robust qualitative method to assess MIT REAP impact on formerly engaged innovation ecosystems.</title>
<link href="https://hdl.handle.net/1721.1/151579" rel="alternate"/>
<author>
<name>Morgensztern, Alice</name>
</author>
<id>https://hdl.handle.net/1721.1/151579</id>
<updated>2023-08-01T03:54:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design of a robust qualitative method to assess MIT REAP impact on formerly engaged innovation ecosystems.
Morgensztern, Alice
The MIT Regional Entrepreneurship Acceleration Program (REAP) seeks to create efficient regional innovation-driven entrepreneurial ecosystems worldwide. The program engages with various communities internationally and aims to help regional Teams gain a deeper understanding of the strengths and weaknesses of their innovation ecosystem. This is done by evaluating the engagement and level of collective impact of regional stakeholders.&#13;
&#13;
The REAP program is grounded in MIT's five-stakeholder model, a modernization of the triple helix model, which is one of the foundational academic theories on innovation ecosystems. The program runs for two years and welcomes approximately eight Teams in each cohort. The curriculum has been structured to foster collaboration among stakeholders and long-term engagement in building a sustainable and impactful strategy for the regional ecosystem. After the program ends, it encourages the creation of a Backbone Organization to consolidate a long-lasting organization capable of implementing a Must Win Battle. It is important to both initiate and sustain the acceleration deriving from the REAP program.&#13;
&#13;
We developed a methodology to assess the impact that REAP has had on the ten cohorts of regional ecosystems that participated in the program. Given the early stage of the reflection, the variety of regions involved, the need for detailed and contextualized data, and the number of factors of failure or success, a qualitative assessment method based on interviews is the most relevant. The research was conducted by designing a questionnaire and interviewing nine former champions of Teams from around the world that participated in REAP.&#13;
&#13;
The analysis helped to better understand the individual Team journeys, including the diversity of regions and their specific challenges, the immediate outcomes of REAP for each region, and the extent to which they were able to maintain momentum after REAP to continue having an impact on the regional ecosystem. Additionally, the key factors of success and failure for Teams going through REAP were identified, such as the five stakeholders’ engagement, the crucial role of the champion in building the right Team dynamics during and after the program, the necessity of nourishing positive relationships within the Team and among stakeholders to foster collaboration, and the need to select stakeholders’ representatives with strong local engagement.&#13;
&#13;
Finally, recommendations were made to manage those issues efficiently and further improve the REAP program to make the most of its ten years of implementation. The recommendations address building the right Team dynamics, designing the right cohorts and curriculum, the importance of alignment, and leveraging the international network. Overall, the recommendations can help REAP enhance its effectiveness and continue to have a positive impact around the world.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study of the Individual Pension Funds Allocation Strategy in China</title>
<link href="https://hdl.handle.net/1721.1/151578" rel="alternate"/>
<author>
<name>Shi, Xiaoyu</name>
</author>
<id>https://hdl.handle.net/1721.1/151578</id>
<updated>2023-08-01T04:05:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Study of the Individual Pension Funds Allocation Strategy in China
Shi, Xiaoyu
Building a multi-level pension system is urgently needed as China's population aging becomes more serious. Individual pension funds can considerably improve the quality of life for retirees by providing a stable source of income during their retirement years. By supporting the development and adoption of individual pension funds, China can not only improve the financial security of its citizens but also contribute to the long-term viability of its social security system as a whole.&#13;
&#13;
The study investigates individual pension funds in developed markets, with a particular emphasis on the development of target date funds and target risk funds. The study also employs Monte Carlo simulations to compare the market performance and utility performance of various strategies in order to determine which fund strategy is best suited for Chinese individual pension funds, thereby helping individuals make more effective pension investments and accumulate more wealth after retirement.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Supply Chain Resiliency through Aseptic Connector Alignment and Standardization</title>
<link href="https://hdl.handle.net/1721.1/151577" rel="alternate"/>
<author>
<name>Guiriba, Toni</name>
</author>
<id>https://hdl.handle.net/1721.1/151577</id>
<updated>2023-08-01T03:49:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Improving Supply Chain Resiliency through Aseptic Connector Alignment and Standardization
Guiriba, Toni
Single-use technologies (SUT) for biomanufacturing have gained wide adoption over the last ten years. Adoption accelerated further during the COVID-19 pandemic, when vaccine developers utilized the technology, having priority access to manufacturing capacity and material inventory through Operation Warp Speed. This was a testament to the manufacturing and development efficiencies enabled by SUT compared to traditional stainless-steel manufacturing, but also a bane to the rest of the pharmaceutical industry from a supply chain perspective.&#13;
&#13;
To persist in the short term, Amgen continued their operations through a dedicated task force that collaborated closely with internal plants and external suppliers to anticipate shortages and mitigate them. To build supply resiliency in their single-use assemblies for the long term, Amgen sought to standardize aseptic connectors, enabling greater collaboration and network transferability of parts among plants that are currently standardized to different connector preferences.&#13;
&#13;
Here we show a detailed assessment of the various aseptic connector options at Amgen, along with a cost-benefit-risk evaluation of standardization, and an implementation plan supported by an external benchmarking of a few of Amgen’s peer companies. Our analyses and recommendations were informed by internal stakeholder interviews, peer company and subject matter expert interviews, supplier outreach, internal data analysis, and a manufacturing associate survey. We evaluated the connectors based on technical design specifications, supply robustness, defect risk, and user experience.&#13;
&#13;
Due to the cost constraints of undertaking standardization comprehensively all at once, we recommend selecting a single candidate connector for standardization, with a phased approach to implementation upon new site builds and technology introductions. This allows Amgen to deploy standardization as part of other value-adding improvements to their operations, such as their new site build at Amgen North Carolina. With the introduction of a standard aseptic connector at this new site, over 60% of existing connectors from two other plants are covered by the revision, lowering the barrier for those plants to move to the standard in the future. This approach to evaluating the impact of process component standardization across a network of manufacturing sites is useful for other technology standardization efforts that companies are evaluating.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Drivers of ESG Index Outperformance : &#13;
A Transatlantic Analysis of US and European Markets</title>
<link href="https://hdl.handle.net/1721.1/151576" rel="alternate"/>
<author>
<name>Chen, Jinlan (Iris)</name>
</author>
<id>https://hdl.handle.net/1721.1/151576</id>
<updated>2023-08-01T04:13:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Drivers of ESG Index Outperformance : &#13;
A Transatlantic Analysis of US and European Markets
Chen, Jinlan (Iris)
The purpose of this study is to investigate the varying effects of diverse Environmental, Social, and Governance (ESG) integration approaches on the financial performance of securities within the European and US markets over the decade from 2013 to 2023. This research represents a valuable contribution to the existing literature, as it provides a more nuanced perspective on how ESG considerations should be woven into the fabric of investment decision-making processes, serving as an actionable playbook for investors with ESG-related goals. The study examines over 200 portfolio simulations, utilizing a comprehensive selection of 22 equity and bond indexes spanning both European and US markets. The findings reveal that a 'best-in-class', sector-relative selection approach based on ESG ratings typically outperforms in Europe. Conversely, an 'optimization-focused' approach that leans towards market-cap weighting based on ESG scores delivers superior performance in the US. A range of factors that potentially influence these differential outcomes are explored in depth. These include the unique regulatory environments across regions, the dynamic nature of markets, the varying preferences of investors, and the distinct sector compositions inherent to each region. Furthermore, the research acknowledges the pivotal role that emergent technologies, such as big data and artificial intelligence (AI), are playing in shifting the global investment landscape towards sustainable practices. To provide a future-oriented perspective, the study incorporates several practical applications of AI technology in the domain of ESG investing. These insights not only demonstrate the transformative potential of AI but also underscore the importance of technological adaptation in achieving sustainable investment outcomes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Steel Reuse as a Cost-Effective Carbon Mitigation Strategy</title>
<link href="https://hdl.handle.net/1721.1/151575" rel="alternate"/>
<author>
<name>Berglund-Brown, Juliana</name>
</author>
<id>https://hdl.handle.net/1721.1/151575</id>
<updated>2023-08-01T04:22:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Structural Steel Reuse as a Cost-Effective Carbon Mitigation Strategy
Berglund-Brown, Juliana
New structures can be designed from existing steel elements at lower cost and with dramatically lower carbon emissions when compared to conventional steel structures. Steel building structures typically have the highest embodied carbon impacts when compared to masonry, wood, concrete, and reinforced concrete projects (De Wolf et al. 2016). Designing with salvaged structural steel is a beneficial alternative for structural engineers to reduce embodied carbon in the built environment and implement life-cycle-oriented and cost-conscious design of steel structures. However, there are still many barriers to designing with reused gravity elements in buildings at scale, such as the uncertainty surrounding element availability and which factors contribute to the carbon emissions associated with reuse. &#13;
&#13;
This thesis establishes more certainty about the supply of steel elements, quantifies potential carbon and cost savings, and identifies the variables that most impact such savings to better enable designing steel frames. This work first provides the context and terminology to connect structural systems to the circular economy and reuse, and then outlines why reusing gravity beams and columns is particularly advantageous via a state-of-the-art overview of the steel value chain. Next, a high-level material flow analysis is conducted for the U.S. structural steel market, indicating that the quantity of existing heavy-section steel scrap covers 140% of the demand for steel imports. An LCA utilizing a comparative cut-off method is then performed and coupled with a cost estimation, which demonstrates a potential reduction of around 87% in carbon emissions from steel reuse instead of recycling. Based on the findings of the partial LCA, an exploratory data analysis is then performed with both stochastic sampling and nine real building projects to identify the variables most impacting the carbon and cost savings associated with reuse. Structural weight is found to have the greatest effect on reuse emissions, followed by the number of elements, and then transportation distance. &#13;
&#13;
Finally, this thesis explains the implications steel reuse has for stakeholders in the structural steel industry, including fabricators and engineering and design teams. In short, this thesis presents the case for steel reuse and the intrinsic carbon, cost, and structural value it could have.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recognizing Speech with Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/151573" rel="alternate"/>
<author>
<name>Zeitoun, Abbas</name>
</author>
<id>https://hdl.handle.net/1721.1/151573</id>
<updated>2023-08-01T03:04:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Recognizing Speech with Large Language Models
Zeitoun, Abbas
Recent work has shown that large language models can be made to parse the contents of non-text embeddings and use those contents to perform various tasks. However, work on audio inputs to large language models has thus far focused either on training a joint audio-text model from scratch on large amounts of data or on training the model to perform surface-level audio-text classification tasks. In this work, we show that a pretrained T5 encoder-decoder language model fine-tuned on as little as 10 hours of speech data can transcribe the contents of input audio embeddings, and it even outperforms a specialized baseline speech-to-text model at transcribing more difficult speech utterances. The resulting model serves as a first step towards language models that can manipulate audio inputs just as well as text inputs and can leverage the additional information in audio inputs to perform tasks that are not possible with text inputs alone.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Circumventing Memory Corruption Mitigations in the Spectre Era: Real-World Attacks and Systematic Analysis of Defenses</title>
<link href="https://hdl.handle.net/1721.1/151572" rel="alternate"/>
<author>
<name>Na, Weon Taek</name>
</author>
<id>https://hdl.handle.net/1721.1/151572</id>
<updated>2023-08-01T03:14:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Circumventing Memory Corruption Mitigations in the Spectre Era: Real-World Attacks and Systematic Analysis of Defenses
Na, Weon Taek
Modern systems are becoming increasingly complex, exposing a large attack surface with vulnerabilities in both software and hardware. In the software layer, memory corruption vulnerabilities can be exploited by attackers to alter the behavior of or take full control of a victim program. In the hardware layer, microarchitectural side channel vulnerabilities can be exploited to leak arbitrary data within the victim program’s address space. Today, it is common for security researchers to explore software and hardware vulnerabilities separately, considering the two classes of vulnerabilities under two disjoint threat models.&#13;
&#13;
This thesis studies the synergies that arise at the convergence of the two threat models. In particular, this thesis first presents PACMAN, a novel attack methodology that leverages speculative execution attacks to circumvent ARM Pointer Authentication, a critical memory safety feature in many state-of-the-art ARM processors. The key insight of the PACMAN attack is that PAC verification results can be leaked via microarchitectural side channels while suppressing crashes. The PACMAN attack removes the primary barrier to conducting control-flow hijacking attacks on a platform protected by ARM Pointer Authentication. Moreover, we show that the PACMAN attack works across privilege levels, meaning that we can attack the operating system kernel as an unprivileged user in userspace.&#13;
&#13;
The discovery of the PACMAN attack thus calls for a drastic re-evaluation of all memory corruption mitigations under a synergistic threat model: one that encompasses both the memory corruption threat model and the side channel threat model. Driven by this need, the thesis next presents Penetrating Shields, a systematic analysis of memory corruption mitigations from both academia and industry. We start by systematizing a taxonomy of state-of-the-art memory corruption mitigations, focusing on hardware-software co-design defenses. This taxonomy helps us identify 10 likely vulnerable defense schemes out of the 20 schemes that we analyze. Next, we develop a graph-based model to analyze the 10 likely vulnerable defenses and reason about possible countermeasures. Finally, we present three proof-of-concept attacks targeting an already-deployed mitigation mechanism and two state-of-the-art academic proposals.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Graph Summarization</title>
<link href="https://hdl.handle.net/1721.1/151569" rel="alternate"/>
<author>
<name>Zeng, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/151569</id>
<updated>2023-08-01T04:20:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Causal Graph Summarization
Zeng, Anna
Causal inference is critical for scientific progress, especially in social sciences like public health and education. However, analysts often have access only to partial data, which may lead to erroneous conclusions if critical confounding biases are not accounted for. To account for such biases, they rely critically on (often unavailable or incomplete) domain knowledge to identify attributes to include in causal analysis, which is often tediously and manually specified in the form of a causal DAG.&#13;
&#13;
Given state-of-the-art methods, analysts can automatically gather and causally organize a much more comprehensive set of attributes to include in their analysis. At best, however, such tools provide large, nearly complete causal graphs which are difficult to comprehend, let alone verify for use in causal analysis tasks. As these graphs get bigger and denser with the growth of automated causal discovery methods, domain experts will struggle to comprehend, interpret, and correct causal graphs for practical applications. Existing graph summarization methods developed in other domains, such as graphics, social networking, and mapping, are not guaranteed to provide a summarized graph eligible for use in causal analysis tasks; some even introduce spurious causal relationships that render erroneous conclusions if used in causal analysis.&#13;
&#13;
We hypothesize that causality-specific graph summarization algorithms could surmount these challenges. To demonstrate this, we introduce CAMBA, a prototype causal graph summarization algorithm that efficiently generates high-quality causal graph summaries that are interpretable and usable for causal inference. In this thesis, we formalize the Causal DAG Summarization problem, identify a causal information metric, extend causal inference foundations to summary graphs, identify graph summarization techniques which can preserve this causal information, propose a range of possible causality-specific graph summarization optimizations, and evaluate these methods on a range of causal analysis scenarios.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Terrain Relative Navigation:&#13;
Pose Estimation, Neural Fields, and Verification</title>
<link href="https://hdl.handle.net/1721.1/151568" rel="alternate"/>
<author>
<name>Maggio, Dominic</name>
</author>
<id>https://hdl.handle.net/1721.1/151568</id>
<updated>2023-08-01T04:20:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Visual Terrain Relative Navigation:&#13;
Pose Estimation, Neural Fields, and Verification
Maggio, Dominic
Visual Terrain Relative Navigation (TRN) is a method for GPS-denied absolute pose estimation using a prior terrain map and an onboard camera. TRN is commonly desired for applications such as planetary landings, unmanned aerial vehicles (UAVs), and airdrops, where GPS is either unavailable or cannot be relied upon due to the possibility of signal loss or outside signal jamming. This thesis presents a threefold contribution to visual TRN.&#13;
&#13;
Firstly, due to the high altitudes and high speeds of planetary TRN missions, acquiring non-simulation test data often proves difficult, and thus many datasets used to test TRN systems are from lower altitudes and speeds than those at which the system would actually be deployed. We present an experimental analysis of visual TRN on data collected from a World View Enterprises high-altitude balloon over an altitude range of 33 km to 4.5 km. We demonstrate less than 290 meters of average position error over a trajectory of more than 150 kilometers. Additionally, we evaluate performance on data we collected by mounting two cameras inside the capsule of Blue Origin’s New Shepard rocket on payload flight NS-23, traveling at speeds up to 880 km/h, and demonstrate less than 55 meters of average position error.&#13;
&#13;
Secondly, as accurate terrain map representation is at the core of TRN performance, we explore the question of whether newly emerging Neural Radiance Fields (NeRF) can be efficiently leveraged as a map for visual localization. We propose a NeRF-based localization pipeline coined Loc-NeRF which uses a particle filter backbone to perform monocular camera pose estimation utilizing NeRF.&#13;
&#13;
Thirdly, since TRN is often performed in high-risk missions, we explore the problem of monitoring the correctness of a monocular camera pose estimate at runtime. For this, we again leverage the ability of NeRF to render novel viewpoints and propose a technique coined VERF that incorporates NeRF into a geometrically constrained method to provide assurance on the correctness of a camera pose estimate.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for Solving Parabolic Partial Differential Equations on Discrete Domains</title>
<link href="https://hdl.handle.net/1721.1/151567" rel="alternate"/>
<author>
<name>Mattos Da Silva, Leticia</name>
</author>
<id>https://hdl.handle.net/1721.1/151567</id>
<updated>2023-08-01T03:21:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Framework for Solving Parabolic Partial Differential Equations on Discrete Domains
Mattos Da Silva, Leticia
We introduce a framework for solving a class of parabolic partial differential equations on triangle mesh surfaces, including the Hamilton-Jacobi equation and the Fokker-Planck equation. Certain PDEs in this class often have nonlinear or stiff terms that cannot be resolved with standard methods on triangle mesh surfaces. To address this challenge, we leverage a splitting integrator combined with a convex optimization step to solve these PDEs. Our machinery can be used to compute entropic approximations of optimal transport distances on geometric domains, overcoming the numerical limitations of the state-of-the-art method. In addition, we demonstrate the versatility of our method on a number of linear and nonlinear PDEs that appear in diffusion tasks in geometry processing.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gamifying Higher Education for Generation Alpha: Aligning Cognitive Behavioral Needs with Business Value through a Human-Centered Approach</title>
<link href="https://hdl.handle.net/1721.1/151565" rel="alternate"/>
<author>
<name>Kong, Yvette Man-yi</name>
</author>
<id>https://hdl.handle.net/1721.1/151565</id>
<updated>2023-08-01T03:46:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Gamifying Higher Education for Generation Alpha: Aligning Cognitive Behavioral Needs with Business Value through a Human-Centered Approach
Kong, Yvette Man-yi
This thesis proposes a human-centered framework for gamification in higher education that aligns cognitive behavioral needs with business value for Generation Alpha, the cohort born after 2010. By analyzing the gaps between current education practices and Generation Alpha's needs, the methodology aims to bridge the divide. The literature review underscores the importance of empathizing with Generation Alpha's cognitive behavioral needs, including socialization and communication skills, creativity and innovation, digital literacy and technology skills, emotional intelligence and resilience, and cultural competency and global awareness. &#13;
&#13;
Gamification is posited as a potential strategy for engaging and motivating Generation Alpha in higher education. The benefits of gamification encompass personalization and feedback, collaborative and social learning, real-world application and problem-solving, and experiential and immersive learning. The thesis concludes by emphasizing the importance of gamification in higher education for Generation Alpha and its implications for higher education providers and policymakers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamics Modeling and Analysis of Continuous Production Agility: Policies and Enablers for Resilient Satellite Constellations</title>
<link href="https://hdl.handle.net/1721.1/151564" rel="alternate"/>
<author>
<name>Liu, Peter Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/151564</id>
<updated>2023-08-01T03:26:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">System Dynamics Modeling and Analysis of Continuous Production Agility: Policies and Enablers for Resilient Satellite Constellations
Liu, Peter Y.
As space becomes increasingly populated with satellite constellations from commercial and government actors, the ability to deploy and sustain these critical assets on orbit grows in importance. The traditional risk-averse approach to space systems trended towards longer development cycles and highly optimized payload and bus designs, leading to high program costs. However, with the advent of small-satellite megaconstellations by commercial companies such as SpaceX’s Starlink and Amazon’s Project Kuiper, the paradigm has started to shift towards an agile, rapid approach to satellite production and engineering. In 2020, the Aerospace Corporation proposed an approach dubbed Continuous Production Agility (CPA), whose features include streamlined satellite manufacturing lines, a schedule-certain basis for launch, and standards and mindsets that contribute to industry learning curves. In this paper, a CPA system dynamics model was constructed utilizing industry feedback loops to examine the viability of a CPA approach in terms of on-orbit constellation resilience and total program costs. The results demonstrate how a CPA approach can lead to faster reconstitution of satellite constellations when subject to harsh space environments. Total program cost performance of the CPA approach varied relative to the traditional, launch-on-demand approach, depending on the raw number of satellites and launch vehicles required for this superior performance; however, per-unit costs and industry learning captured in the long term benefitted from the CPA approach. Risk-tolerant approaches also proved effective in driving down costs for mass-producing satellites. It is anticipated that space actors will work with companies around the globe to shift to these competitive CPA-oriented strategies as space strategies become more oriented towards agility and flexibility.
These approaches could benefit not only from technology enablers but from various domestic and international policy developments to foster proper regulation and innovation in this era of new, proliferated space.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Custom Electrical Impedance Tomography Forward Models for Muscle Rehabilitation and Radiation Monitoring</title>
<link href="https://hdl.handle.net/1721.1/151563" rel="alternate"/>
<author>
<name>Schein, Gila</name>
</author>
<id>https://hdl.handle.net/1721.1/151563</id>
<updated>2023-08-01T03:10:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Custom Electrical Impedance Tomography Forward Models for Muscle Rehabilitation and Radiation Monitoring
Schein, Gila
Electrical impedance tomography (EIT) has become increasingly prevalent in medical research. However, there are gaps where EIT could provide more context and introduce new insights into internal body composition. For example, physical rehabilitation is traditionally based on motion tracking, though muscle tracking is the ultimate goal. A wearable EIT sensing device can fill this gap. We use sets of electrodes to image and reconstruct muscles in the body for real-time visualization of muscle activity. The results indicate that monitoring and visualizing muscle engagement can improve therapeutic exercise accuracy for rehabilitation. In addition, radiation treatment can be monitored in a more accessible way than is currently available. A wearable EIT sensing device is cheaper and faster to use than current options and can provide information about internal organs in real-time. We explore changes in the electrical properties of materials due to radiation using EIT. Although the project is still in an early stage, the results indicate promise for applications of EIT real-time monitoring during and after radiotherapy treatments.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inflation, Taxation and Corporate Investment in the U.S. During the Great Inflation</title>
<link href="https://hdl.handle.net/1721.1/151562" rel="alternate"/>
<author>
<name>Usenko, Yevhenii</name>
</author>
<id>https://hdl.handle.net/1721.1/151562</id>
<updated>2023-08-01T04:13:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inflation, Taxation and Corporate Investment in the U.S. During the Great Inflation
Usenko, Yevhenii
U.S. corporate taxation is not neutral to inflation. Two of its features – historical cost depreciation and FIFO inventory accounting – are expected to lower real after-tax corporate cash flows and, thereby, make investment less attractive when expected inflation is elevated. Using Compustat data for 1965-1980 and a difference-in-differences research design, I do not find evidence in support of this hypothesis. I discuss possible explanations for this non-result. In addition, I find a robust effect of statutory tax changes on corporate investment during the Great Inflation. The effect is economically meaningful and consistent with the prior literature: a tax reform that increases a firm's cost of capital by 10% lowers the investment of affected firms by 2 percentage points of total assets relative to firms not affected by the reform.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivities of Atmospheric Composition to High-Altitude Vehicles Emissions</title>
<link href="https://hdl.handle.net/1721.1/151561" rel="alternate"/>
<author>
<name>Oh, Lucas Jeongsuk</name>
</author>
<id>https://hdl.handle.net/1721.1/151561</id>
<updated>2023-08-01T03:19:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Sensitivities of Atmospheric Composition to High-Altitude Vehicles Emissions
Oh, Lucas Jeongsuk
This thesis explores the environmental implications of high-altitude transportation, with a focus on civil supersonic transport (SST) emissions. The potential atmospheric impacts of these emissions, including changes in atmospheric composition and air quality, are examined in detail. Previous studies have evaluated global ozone changes and radiative forcing resulting from aviation emissions. However, they have sometimes neglected the variability due to altitude, latitude, and species, which is needed to understand the impacts of these emissions. This study addresses the research gap concerning surface air quality impacts, specifically surface ozone and particulate matter (PM₂.₅) concentrations.&#13;
&#13;
We used the GEOS-Chem chemistry-transport model to quantify the atmospheric sensitivities linked to civil supersonic transport, specifically evaluating the consequences of four distinct supersonic emission inventories on surface ozone, global column ozone, and PM₂.₅ concentrations via forward sensitivity analyses. Under scenario A, the established ceiling altitude of 21 km and a total fuel burn of 122 Tg corresponded with a decrease of 0.48 ppbv in population-weighted mean (PWM) surface ozone. This change is attributable to contributions from NOₓ (-0.22 ppbv) and SOₓ (-0.20 ppbv). Additionally, we observed a decline of 14.1 DU in global column ozone, with NOₓ and SOₓ contributing -11.2 DU and -2.8 DU, respectively. In contrast, PWM PM₂.₅ levels rose by 0.12 μg/m³, with the major contribution coming from NOₓ emissions, accounting for an increase of 0.092 μg/m³, while SOₓ added 0.004 μg/m³. Under scenario B1, where the ceiling altitude was 17 km and the total fuel burn was 43.1 Tg, we estimated a 0.058 ppbv increase in PWM surface ozone. This increase was primarily due to a 0.057 ppbv rise from NOₓ emissions, while black carbon and organic carbon caused a reduction of 0.0005 ppbv. Additionally, we noted an increase in global column ozone (0.14 DU), with NOₓ and water vapor contributing 0.20 DU and -0.026 DU, respectively. PWM PM₂.₅ increased by 0.006 μg/m³, where NOₓ was responsible for 0.0054 μg/m³ and black carbon and organic carbon offset it by 0.00013 μg/m³.&#13;
&#13;
The composition of PM₂.₅ was found to be influenced by the altitude of emissions. At higher altitudes, ranging from 20 to 22 km, sulfate composed 97% of PM₂.₅, and a 15% reduction in PM₂.₅ was linked to nitrate. However, the sulfate proportion decreased, while the nitrate proportion correspondingly increased, as altitude decreased. For example, in the altitude bracket between 10 and 12 km, nitrate became the dominant constituent, making up 92% of PM₂.₅, while a 15% reduction in PM₂.₅ was attributed to sulfate.&#13;
&#13;
The Linear Sensitivity Combination (LSC) method was developed and applied, achieving a strong correlation with the GEOS-Chem results (coefficients of determination above 0.96 and 0.99 for scenarios A and B1, respectively). The LSC method was employed to evaluate the implications of fuel composition, illustrating that the selection of fuel – such as a transition to hydrogen fuel, Ultra-Low Sulfur (ULS) fuel, or biofuel – impacts atmospheric composition and overall environmental effects. In scenario A, PWM surface ozone demonstrated shifts from -0.48 ppbv to -0.30 ppbv with hydrogen, -0.28 ppbv with ULS fuel, and -0.29 ppbv with biofuel. PWM PM₂.₅ concentration showed shifts from an initial 0.12 μg/m³ to 0.013 μg/m³ when hydrogen was used, 0.090 μg/m³ with ULS fuel, and 0.092 μg/m³ with biofuel. This comprehensive study offers insight into the environmental implications of supersonic transportation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emerging Markets Penetration Strategy in the &#13;
Deglobalization Era - A Case Study of the NEV Industry &#13;
in Southeast Asia</title>
<link href="https://hdl.handle.net/1721.1/151560" rel="alternate"/>
<author>
<name>Huang, Ningxin</name>
</author>
<id>https://hdl.handle.net/1721.1/151560</id>
<updated>2023-08-01T03:18:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Emerging Markets Penetration Strategy in the &#13;
Deglobalization Era - A Case Study of the NEV Industry &#13;
in Southeast Asia
Huang, Ningxin
In recent years, due to the slowdown in global economic development and shifts in the balance of national strength, a trend of deglobalization has arisen. This trend has introduced various risks to multinational enterprises, including political instability, trade protectionism, and supply chain disruptions. Despite these challenges, there is an enduring need for businesses to explore new markets, particularly during a period when emerging markets are experiencing rapid growth. Among these emerging markets, Southeast Asia represents a market of significant potential, especially within the burgeoning new energy vehicle (NEV) industry. This paper uses the NEV industry in Southeast Asia as a case study to explore the challenges that companies may face when penetrating emerging markets. These challenges include unstable political environments, export-oriented economies, underdeveloped infrastructure, a scarcity of skilled talent, and cultural and religious diversity. In addition, the paper discusses the opportunities that come with these challenges in the current era.&#13;
&#13;
This study draws on an analysis of market data, reviews of Southeast Asian policies, and interviews with industry practitioners. Utilizing real world examples, the paper aims to synthesize a strategic framework to guide companies in navigating emerging markets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling the ESG Landscape: Exploring Revealed Preferences&#13;
through Archetypal Analysis of Decision-Makers in&#13;
Environmental, Social, and Governance Causes</title>
<link href="https://hdl.handle.net/1721.1/151559" rel="alternate"/>
<author>
<name>Robinet, Mathilde</name>
</author>
<id>https://hdl.handle.net/1721.1/151559</id>
<updated>2023-08-01T04:22:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unveiling the ESG Landscape: Exploring Revealed Preferences&#13;
through Archetypal Analysis of Decision-Makers in&#13;
Environmental, Social, and Governance Causes
Robinet, Mathilde
This research investigates the motivations and decision-making patterns driving individuals' behavior when giving to Environmental, Social, and Governance (ESG) causes. Through the concept of revealed preferences, this study offers a comprehensive exploration of the intricate landscape of ESG resource allocation. The analysis is conducted through the innovative use of the ESG Machine game, offering unique insights into how individual preferences shape distinct allocation portfolios and the strategies underpinning these decisions.&#13;
&#13;
The study identifies four distinct archetypes of decision-making - payoff-maximizer based, equality-based, proportional-based, and value-based - and employs a suite of indicators to measure the degree of each archetype in individuals. A key finding is the dominance of value-based decision-making, although strategies like impact maximization and equal distribution of payoffs across all causes also emerged.&#13;
&#13;
These findings bear profound implications for ESG investing by offering invaluable insights that can shape the formulation of more potent investment strategies. By uncovering the complexities of individual decision-making and the differing strategies at play, this study paves the way for designing interventions that align personal values and financial choices effectively. This tailored approach has the potential to resonate deeply with individuals, engaging them based on their decision-making archetype, thereby enhancing the efficiency of ESG investments.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wearable Sensor System for Quantifying Proprioceptive Competence in Microgravity</title>
<link href="https://hdl.handle.net/1721.1/151558" rel="alternate"/>
<author>
<name>Lin, Shu-Yu (Michelle)</name>
</author>
<id>https://hdl.handle.net/1721.1/151558</id>
<updated>2023-08-01T03:48:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Wearable Sensor System for Quantifying Proprioceptive Competence in Microgravity
Lin, Shu-Yu (Michelle)
Microgravity poses a significant challenge for our neurovestibular and proprioceptive systems. Past spaceflight and parabolic research have shown degraded movement control upon microgravity exposure and adaptation of performance with time. However, most research does not address the functional, dynamic, whole-body movements we expect in spaceflight. In particular, as commercial microgravity experiences become ubiquitous, maladapted proprioceptive systems in novice flyers pose risks to themselves, other crew members, and expensive spacecraft equipment. We propose a framework to assess proprioceptive competence (introduced and defined in this thesis) through the metric of fluidity, a biomechanical property often used in medical rehabilitation and functional gait assessment. We designed, built, and pilot tested a wearable sensor system capable of inertial motion capture in the parabolic flight environment. Through comparing whole-body joint fluidity in translation movements done in 1-g and microgravity, we found evidence suggesting an increased fluidity upon entry into microgravity and increased fluidity throughout microgravity exposure.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using a System-Theoretic Approach for Cyber Mission Assurance of the Royal Canadian Air Force Over the Horizon Radar System</title>
<link href="https://hdl.handle.net/1721.1/151557" rel="alternate"/>
<author>
<name>Kim, James Jaehak</name>
</author>
<id>https://hdl.handle.net/1721.1/151557</id>
<updated>2023-08-01T03:17:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Using a System-Theoretic Approach for Cyber Mission Assurance of the Royal Canadian Air Force Over the Horizon Radar System
Kim, James Jaehak
Since 1958, the North American Aerospace Defence Command between Canada and the United States has remained the only bi-national military command in the world. Among its many responsibilities, the need for early detection of threats against North American aerospace demands improved visibility, in terms of both range and coverage, over the Northern Canadian Area of Responsibility. However, the existing fleet of radar systems is not only limited but also fast approaching technological obsolescence against modern adversarial weapon systems. As a solution, the Royal Canadian Air Force committed to delivering the Over the Horizon Radar systems that will significantly enhance the existing NORAD capabilities in detecting adversarial northern approaches.&#13;
&#13;
The Royal Canadian Air Force conducts Cyber Mission Assurance on its future weapon systems. Hence, understanding the cyber vulnerabilities permeating the Over the Horizon Radar systems is a mandatory exercise that must take place concurrently with the Project Management and acquisition efforts. Considering this, a novel methodology known as STPA-Sec is employed to conduct Cyber Mission Assurance of the Over the Horizon Radar systems. In contrast to traditional methods of managing cyber risks, STPA-Sec defines the scope of the system, illustrates the attack surface, and offers a set of operational constraints that, if complied with, minimize the risk of defined system failures. The application of STPA-Sec to the Over the Horizon Radar systems yields a concrete set of recommendations that, if followed, will minimize systemic and multi-faceted risks that are otherwise inconceivable using traditional methods.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Supply Chain Connectivity and Capacity Analysis for Strategic Production Planning in Biosurgery Oxidized Regenerated Cellulose</title>
<link href="https://hdl.handle.net/1721.1/151556" rel="alternate"/>
<author>
<name>Jiang, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/151556</id>
<updated>2023-08-01T03:39:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Digital Supply Chain Connectivity and Capacity Analysis for Strategic Production Planning in Biosurgery Oxidized Regenerated Cellulose
Jiang, Justin
Ethicon, a subsidiary of Johnson &amp; Johnson (J&amp;J), is experiencing increased demand within its Oxidized Regenerated Cellulose (ORC) product line. This increase in demand is causing challenges within the current ORC value stream, as production dynamics and manufacturing capacities are being stretched to meet growing consumer needs. In addition, the ORC digital data thread is disjointed across multiple enterprise resource planning (ERP) systems and various teams. Consequently, planning activities that support strategic investment decisions are increasingly burdensome and require a significant level of effort. This project analyzes and quantifies the ORC manufacturing process through a statistical lens and applies process flow modeling, descriptive statistics analysis, and optimization techniques to support strategic and tactical capacity planning. An optimization algorithm is presented that minimizes the revenue shortfall for Ethicon given current ORC manufacturing capacity and product demand profiles. This optimization algorithm is projected to reduce the revenue shortfall to 11.8% of maximum expected revenue. Additionally, this project strives to connect Ethicon’s various channels of information flow within ORC manufacturing and build the digital thread using Microsoft’s Power BI.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supplier Development Framework in Supply Chain Cybersecurity Evaluation of Small and Medium-sized Enterprises</title>
<link href="https://hdl.handle.net/1721.1/151555" rel="alternate"/>
<author>
<name>Chang, Erh Chieh</name>
</author>
<id>https://hdl.handle.net/1721.1/151555</id>
<updated>2023-08-01T03:01:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Supplier Development Framework in Supply Chain Cybersecurity Evaluation of Small and Medium-sized Enterprises
Chang, Erh Chieh
Modern organizations rely on suppliers to meet customer needs and improve operations. However, the interconnectedness between organizations and their suppliers, brought about by digital transformation, has led to an increase in significant cyber breaches. To mitigate these risks, organizations use various methods and tools to both assess and monitor potential threats. Despite this, a gap exists between assessment and monitoring/improvement. The objective of this study is to address the gap between cybersecurity assessment and monitoring/improvement by developing a supplier development process in the supply chain that enhances the cybersecurity capability of small and medium enterprise (SME) suppliers. The theoretical framework is built on a literature review, anecdotal evidence and best practices in supply chain management, and feedback from industry experts. The framework is a four-stage process that enhances the cybersecurity capability of SME suppliers by improving their security posture, providing training, and fostering collaboration between suppliers and clients. The study highlights the importance of collaborative capability building between client organizations and suppliers to improve cybersecurity. Future research can focus on developing this concept further and exploring its implementation in various industries.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Factors in REIT Pricing</title>
<link href="https://hdl.handle.net/1721.1/151553" rel="alternate"/>
<author>
<name>Burton, Daryl J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151553</id>
<updated>2023-08-01T03:18:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Novel Factors in REIT Pricing
Burton, Daryl J.
This paper investigates the cross-section of U.S. REIT returns from 1980 to 2022 by constructing various asset pricing factors, with a specific focus on Leverage and ICR factors. Despite observing positive long-run returns for most of the constructed long/short factors, the study finds weak evidence that these factors significantly impact the cross-section of returns over the examined period. Moreover, traditional asset pricing factors exhibit limited explanatory power for the cross-section of REIT returns, suggesting their highly idiosyncratic nature. Importantly, the paper identifies a potential link between the constructed factors and ESG scores in the post-2010 period, revealing statistically significant dispersion of ESG characteristics across REITs. This finding paves the way for future research, exploring the construction of ESG factors, their explanatory power in Fama and MacBeth (1973) regressions, and their relationship with the factors investigated in this study.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Entrepreneurship as a Catalyzer of Housing Quality Enhancement in Colombia: Tervi</title>
<link href="https://hdl.handle.net/1721.1/151552" rel="alternate"/>
<author>
<name>Cuéllar Cerón, Alberto</name>
</author>
<id>https://hdl.handle.net/1721.1/151552</id>
<updated>2023-08-01T03:47:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Entrepreneurship as a Catalyzer of Housing Quality Enhancement in Colombia: Tervi
Cuéllar Cerón, Alberto
The evolution of cities in Latin America has been shaped by a complex interplay of factors, including political history, informality, geography, and culture. With one of the highest rates of urbanization in the world, the region's urban centers have experienced a surge in makeshift settlements as governments struggle to meet demand and provide affordable housing. The result is a critical housing deficit, both in terms of quantity and quality, which requires innovative solutions from both the government and the private sector.&#13;
&#13;
The narrative in this thesis unfolds by exploring the housing deficit in the region, focusing specifically on Colombia's case and the implications of the existing social housing system on the market. By examining the actors involved, the policy framework, and the current status quo, I sought to reveal the potential for local governments, developers, entrepreneurship, and technology to play a more influential role in addressing the quality gap. In 2021, I co-founded Tervi, a platform designed to provide low- and mid-income homeowners in Colombia access to design, financing, and construction services to improve their substandard dwellings and dignify their living conditions. Drawing on my experiences in conceiving, developing, and engaging with families, communities, and stakeholders during the deployment of the minimum viable product and proof of concept, this thesis highlights the potential of tech-enabled solutions to have a direct impact on life quality through home improvements. Furthermore, the thesis explores potential alternatives to address housing quality deficiencies and challenges the notion of the qualitative deficit as a fixed threshold for classifying the complex concept of home. It argues that factors such as livability and well-being are equally important in the creation of just and comfortable living conditions, and that policies must take these factors into account to avoid perpetuating substandard housing.&#13;
&#13;
The outcome of the process outlined in this thesis is a digital platform that aims to bridge the gap between much of the homeowner population in Colombia and access to high-quality standard homes. In essence, it is a platform that provides home improvement as a service, supporting social housing homeowners in transforming their incomplete dwellings by using technology to optimize their resources and unlock the full potential of their equity. Ultimately, it argues that developing prop-tech platforms in the service of communities can augment their opportunities to progress and contribute to the creation of healthier, more comfortable, and just living conditions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Common Energy Saving Programs in Residential Buildings Operation: A Survey and Analysis of Existing Studies</title>
<link href="https://hdl.handle.net/1721.1/151551" rel="alternate"/>
<author>
<name>Hu, Zhiyuan (Shawn)</name>
</author>
<id>https://hdl.handle.net/1721.1/151551</id>
<updated>2023-08-01T03:14:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Common Energy Saving Programs in Residential Buildings Operation: A Survey and Analysis of Existing Studies
Hu, Zhiyuan (Shawn)
In 2020, the U.S. residential building sector alone generated 923.1 MMtCO2e of emissions (20% of national total emissions), making residential buildings the third-highest carbon emitter among all end-use sectors in the country. To reach the goal set by the Paris Agreement, decarbonizing the residential building sector is imperative. This thesis explores the main sources of carbon emissions from the residential sector, the comparative carbon profiles of different types of residential properties, and the common programs used to decarbonize the residential sector, including energy efficiency enhancement, fuel switching, energy supply decarbonization, and behavioral energy efficiency (BEE) programs. It elaborates on the empirically supported behavioral science principles that make the various types of BEE programs effective. Further, this thesis compares the implementation cost and carbon reduction effectiveness of conventional structural programs with those of BEE programs. The preliminary conclusion is that behavioral programs have a superior cost-benefit ratio to conventional structural programs, which require large upfront capital expenditure: the larger the proportion of BEE programs in a residential energy reduction portfolio, the more cost-efficient it is. However, because BEE programs have a lower ceiling on maximum effectiveness, an optimal mixture of the two, with priority given to BEE programs over conventional structural programs, is recommended to achieve the most cost-efficient carbon reduction for property owners or real estate developers subject to budget constraints. Lastly, this thesis identifies the underutilization and under-proliferation of behavior-based programs and proposes means to boost the adoption of behavioral interventions via policy recommendations, viewed through the lens of different stakeholders within the residential building lifecycle.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Effect on Teams, Team Processes, and&#13;
Performance</title>
<link href="https://hdl.handle.net/1721.1/151550" rel="alternate"/>
<author>
<name>Liu, Donald Dee</name>
</author>
<id>https://hdl.handle.net/1721.1/151550</id>
<updated>2023-08-01T04:15:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Network Effect on Teams, Team Processes, and&#13;
Performance
Liu, Donald Dee
Team communication provides a foundation for the emergence and development of important team processes. This research focuses on indicators of a team’s transactive memory processes. Through the use of text analysis and natural language processing (NLP) techniques, we illustrate how teams become more efficient in their work processes and develop a shared problem-solving framework, which in turn are beneficial for team performance. These computational tools allow us to measure and assess the influence of established team processes in an online and distributed work context.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modernized Power Converter Development Platform&#13;
for Educational Applications</title>
<link href="https://hdl.handle.net/1721.1/151549" rel="alternate"/>
<author>
<name>Nardomarino, Anthony</name>
</author>
<id>https://hdl.handle.net/1721.1/151549</id>
<updated>2023-08-01T04:09:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modernized Power Converter Development Platform&#13;
for Educational Applications
Nardomarino, Anthony
A modern education in electrical engineering requires the development of a modern infrastructure capable of supporting new technologies and increasingly power-efficient circuit designs. However, a versatile infrastructure for power converter development must also be accessible to a massive student base, easily extensible to future technologies, and able to generate a positive experience for students beginning to learn power electronics from a hands-on perspective. This thesis details the development, validation, and application of a robust power converter development platform, the Tritotem Development Board III (TDBIII), capable of supporting a modern power electronics curriculum, which quantifiably improves upon the power rating, bandwidth, and robustness to failure of its predecessor. Further, this thesis expands the curriculum to consider the benefits of Gallium Nitride Field Effect Transistor (GaN-FET) switching converters, detailing the development of a platform intended to clearly display their performance benefits over Silicon-based converters, as well as demonstrating the key design principles that become more significant when designing GaN-based converters.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application Considerations of Multiphase&#13;
Monolithic Buck Regulators with Coupled Inductors</title>
<link href="https://hdl.handle.net/1721.1/151548" rel="alternate"/>
<author>
<name>Nguyen, My Uyen Tran</name>
</author>
<id>https://hdl.handle.net/1721.1/151548</id>
<updated>2023-08-01T03:03:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Application Considerations of Multiphase&#13;
Monolithic Buck Regulators with Coupled Inductors
Nguyen, My Uyen Tran
Power delivery requirements for computing applications have continuously grown over the years to keep up with more complex and demanding processing capabilities. The electronics industry increasingly sees power rail requirements at and beyond 100A, with microsecond transient recovery, for applications in the wireless, healthcare, defense, industrial, and automotive industries. This thesis investigates multiphase design and application for monolithic buck switching regulators to multiply output current for low-voltage, power-intensive applications. In particular, the research incorporates coupled inductors to take advantage of their magnetic coupling between phases to achieve minimal output ripple and ultra-fast transient response. The results of this research emphasize performance differences in control loop and transient response, output ripple, efficiency, and thermal performance when using discrete versus coupled inductors at fixed output capacitance in a multiphase step-down application.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intersection Attacks on Discrete Epochs</title>
<link href="https://hdl.handle.net/1721.1/151547" rel="alternate"/>
<author>
<name>Lin, Andrea</name>
</author>
<id>https://hdl.handle.net/1721.1/151547</id>
<updated>2023-08-01T03:09:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Intersection Attacks on Discrete Epochs
Lin, Andrea
Anonymous messaging systems with churn in the set of online users are vulnerable to intersection attacks. Researchers have evaluated the success of the state-of-the-art intersection attack using a model of user messaging simulated from a generated social graph. This thesis compares the success of the state-of-the-art intersection attack using a model simulated from a generated social graph versus models simulated from real social graphs, such as those of Twitter and Google+. We find that users lose anonymity at a slower rate if the model uses a real social graph rather than a generated one.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>OnionChopper: A Modular Arithmetic Hardware Accelerator for Private Information Retrieval</title>
<link href="https://hdl.handle.net/1721.1/151546" rel="alternate"/>
<author>
<name>Shay, Georgia</name>
</author>
<id>https://hdl.handle.net/1721.1/151546</id>
<updated>2023-08-01T04:16:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">OnionChopper: A Modular Arithmetic Hardware Accelerator for Private Information Retrieval
Shay, Georgia
Private information retrieval (PIR) is a protocol which allows a user to retrieve data from a database on a server without the server being able to deduce which records were retrieved. Due to the homomorphic cryptography systems required to make these protocols work and the large amount of data processing required per user query, these algorithms tend to run much slower than needed for real-time applications such as streaming movies or voice calling. To improve these speeds to ones more tolerable for user applications, we designed OnionChopper: a small, fast, and energy-efficient hardware accelerator on which to offload the heaviest computational work. This hardware accelerator is optimized for a state-of-the-art PIR algorithm, OnionPIR, but is widely applicable due to similarities in the fundamental algorithms and cryptography used in private information retrieval. We identified the major bottleneck operation common to OnionPIR and other PIR schemes and designed computation units to aid with that operation. We designed a near-storage accelerator with on-chip parallel computation units, SRAMs and register files for exploiting data reuse, and a near-storage connection to the SSD to exploit its high internal bandwidth for accessing the database. We used a space exploration tool to identify the optimal architecture and the scheme of computation and data movement over that architecture. Our resulting design offers a nearly 300× speed improvement over running on a general-purpose processor for a 64GB database.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference of Cyber Threats, Vulnerabilities, and Mitigations to Enhance Cybersecurity Simulations</title>
<link href="https://hdl.handle.net/1721.1/151545" rel="alternate"/>
<author>
<name>Liu, Kyle</name>
</author>
<id>https://hdl.handle.net/1721.1/151545</id>
<updated>2023-08-01T03:51:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inference of Cyber Threats, Vulnerabilities, and Mitigations to Enhance Cybersecurity Simulations
Liu, Kyle
Machine learning techniques can provide insight in a variety of inference tasks involving not only text data but also source code. We apply these techniques to BRON, a graph database linking cybersecurity threats, vulnerability sources, and mitigation techniques, in order to extract a wider variety of relationships and analyze them more effectively. We find that prompt engineering in large language models improves performance in edge classification within BRON. We additionally explore these inferences in practice by modeling the interaction between cybersecurity attackers and defenders on a given network as a zero-sum game. We apply coevolution in a novel multi-step feedback framework to improve performance in modeling attacks, and find that allowing attackers to dynamically select their attack strategies improves their payoff.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refactoring Tutor: An IDE Integrated Tool for&#13;
Practicing Key Techniques to Refactor Code</title>
<link href="https://hdl.handle.net/1721.1/151544" rel="alternate"/>
<author>
<name>Leyva, Mario</name>
</author>
<id>https://hdl.handle.net/1721.1/151544</id>
<updated>2023-08-01T04:02:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Refactoring Tutor: An IDE Integrated Tool for&#13;
Practicing Key Techniques to Refactor Code
Leyva, Mario
Refactoring code is an important skill for becoming a competent software engineer, yet it is rarely explicitly taught in coding-intensive courses. Even though engineers in academia and industry agree refactoring is important, most novice programmers are unaware of the code smells they should avoid when writing code. This thesis discusses a novel tutoring system to assist novice programmers with refactoring. The tool provides refactoring exercises to students in an introductory programming class, exposing them to various types of code smells and having them deliberately practice how to refactor. The tutor infrastructure has proven robust across several refactoring exercises; based on a user study involving students and staff members from 6.1010: Fundamentals of Programming, it has also proven resilient to bugs and has been refined through staff feedback. The tutor shows promise, but further studies with more students are necessary to evaluate its effectiveness in teaching students refactoring.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Performance of Transformer&#13;
Inference</title>
<link href="https://hdl.handle.net/1721.1/151543" rel="alternate"/>
<author>
<name>Ouyang, Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/151543</id>
<updated>2023-08-01T04:09:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding the Performance of Transformer&#13;
Inference
Ouyang, Anne
The state of the art results in natural language processing tasks have been obtained by scaling up transformer-based machine learning models, which can have more than a hundred billion parameters. Training and deploying these models can be difficult and extremely expensive, and performance engineering efforts to improve the latency and throughput of these models are crucial in enabling widespread applications.&#13;
&#13;
We developed an analytical model for studying the performance of transformer inference and combined it with empirical studies using existing frameworks to gain insights into the performance characteristics of transformers and the efficiency of existing implementations. The findings revealed the contribution of the different operations to the total parameter count, floating-point operation count, and activation memory. A comparison between the prefilling and generation stages highlighted differences in performance characteristics, with generation being slower due to low arithmetic intensity operations. Empirical studies with existing implementations on single GPUs showed high roofline utilization but low FLOPs utilization during the generation stage, which indicates that the implementation is reasonably efficient, but that the low arithmetic intensity of autoregressive generation is an inherent limitation of transformer-based architectures.&#13;
&#13;
We also experimented with various parallelism strategies for different inference workloads and distilled our observations into recommendations for effectively using parallelism. We found that the best parallelism strategy depends on the specific workload (batch size and input and output sequence lengths). We also found that model parallelism can be useful for reasons beyond fitting the model in GPU memory: for example, even when a model fits in a single GPU, tensor parallelism can decrease generation-stage latency in small-batch settings.&#13;
&#13;
We hope that a comprehensive understanding of the performance characteristics and trade-offs can serve as a guide for researchers to optimize hardware resource utilization and enhance the efficiency of large language models.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grid Inference and Partial Scan Registration for&#13;
Intelligent Collaborative Robot Systems</title>
<link href="https://hdl.handle.net/1721.1/151542" rel="alternate"/>
<author>
<name>Chen, Valerie K.</name>
</author>
<id>https://hdl.handle.net/1721.1/151542</id>
<updated>2023-08-01T03:45:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Grid Inference and Partial Scan Registration for&#13;
Intelligent Collaborative Robot Systems
Chen, Valerie K.
This thesis proposes advancing the collaborative and intelligent abilities of Tutor Intelligence robot systems by leveraging the geometry of array structures to perform online inference of object locations and by registering partial in-hand scans to automatically orient objects. This research automates portions of the data annotation process required for the robots’ deep intelligence, enabling the collaborative robot systems to perform pick-and-place tasks more efficiently and effectively. Evaluation is conducted through an exploratory pilot study, and further design recommendations are given.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Landslide Susceptibility Prediction Adaptive to&#13;
Triggering Events</title>
<link href="https://hdl.handle.net/1721.1/151541" rel="alternate"/>
<author>
<name>Adebi, Ikechukwu Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151541</id>
<updated>2023-08-01T03:01:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Landslide Susceptibility Prediction Adaptive to&#13;
Triggering Events
Adebi, Ikechukwu Daniel
Landslide detection and susceptibility prediction are valuable tools for disaster prevention. Although many solutions exist to accomplish these tasks, they all generally depend on topographic features of the environment. However, few solutions can adapt to triggering events such as hurricanes, earthquakes, or volcanic eruptions. This lack of adaptability can greatly limit the performance of the algorithms designed to solve these problems, which, in turn, makes it difficult for emergency managers and responders in the area to prepare for these events appropriately. This work experiments with various kinds of machine learning models and analyzes the effects of incorporating dynamic features based on triggering events into the training process. Ultimately, the final versions of the best performing models produced in this thesis will be deployed as part of a landslide monitoring system to be used in Mocoa, Colombia. This system is being adapted and developed for the Drones/UAVs for Equitable Climate Change Adaptation (DECCA) project run by MIT’s Environmental Solutions Initiative.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Warehouse of the Future: The Impact of Automation on Industrial Assets</title>
<link href="https://hdl.handle.net/1721.1/151540" rel="alternate"/>
<author>
<name>Nuckel, Reilly J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151540</id>
<updated>2023-08-01T03:53:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Warehouse of the Future: The Impact of Automation on Industrial Assets
Nuckel, Reilly J.
The impact of warehouse automation on operational efficiencies is well documented; however, the current and potential future impacts on real estate from a design, spatial, and economic perspective remain to be explored. We undertake a qualitative approach, leveraging data from industrial developers, supply chain operators, and robotics firms to provide context for current trends and to uncover insights into how the technology will influence these assets in the future. Contrary to what has often been assumed, the rise of automation in warehouses presents both headwinds and tailwinds for the asset class as a whole. Automation in warehouses can increase the value of new warehouse development and offset land supply constraints by more fully utilizing the cube of a building to maximize throughput. Alternatively, solving for density and maximizing throughput could reduce operators’ need for space, countering increases in valuation at a macro level for the asset class.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Correlation-based Linking of Epigenomic Regions to Target Genes in Bulk and Single Cell Epigenomic Data</title>
<link href="https://hdl.handle.net/1721.1/151538" rel="alternate"/>
<author>
<name>James, Benjamin Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151538</id>
<updated>2023-08-01T03:32:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Correlation-based Linking of Epigenomic Regions to Target Genes in Bulk and Single Cell Epigenomic Data
James, Benjamin Thomas
Single-cell sequencing has catalyzed a significant shift in biological modeling and hypothesis generation. In this study, we present advanced algorithms that utilize multiple modalities for peak-gene linking via correlation-based mechanisms. This approach provides a robust framework for bias correction and noise reduction at single-cell resolution.&#13;
&#13;
Our research introduces a novel algorithm that employs peak-gene linking to integrate epigenomic data, thereby translating it into transcriptomic data. This integration facilitates comprehensive secondary analyses on meta-cells and sub-cell types. Crucially, our methodology enables swift computation of modules from any single-cell assay, promoting exploration of intricate biological and disease mechanisms that might remain unmodeled with a pseudo-bulk approach.&#13;
&#13;
By combining snRNA-seq and snATAC-seq data, our method substantially outperforms equivalent approaches to gene expression estimation, showing promising results even with unpaired real data. Furthermore, modeling genomic peak modules with our algorithms uncovers additional signal potentially overlooked when examining single peaks.&#13;
&#13;
We envision correlation-based linking as a key aspect of future single-cell multiomic technology, as it allows for correlation between assays at the single-UMI level. As such, improved modeling of data at the single-cell level will enhance our understanding of complex gene regulatory networks and disease mechanisms.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reducing Compilation Latency in the Julia&#13;
Programming Language</title>
<link href="https://hdl.handle.net/1721.1/151537" rel="alternate"/>
<author>
<name>Chintalapudi, Prem</name>
</author>
<id>https://hdl.handle.net/1721.1/151537</id>
<updated>2023-08-01T03:24:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Reducing Compilation Latency in the Julia&#13;
Programming Language
Chintalapudi, Prem
The Julia programming language is a high performance computing language that employs an LLVM-based just-in-time compiler and an LLVM-based ahead-of-time compiler to produce optimized machine code. When Julia uses its just-in-time compiler, compilation of methods must be done before methods can begin execution, which presents as a delay the first time a function is executed. When Julia compiles code ahead of time into code images, a large fraction of time is spent optimizing and emitting machine code for large numbers of functions. Here, we investigate ways of exposing opportunities for parallelism to both compilers, as well as investigate ways to perform less work during the compilation process using newer LLVM technologies. Our results show that we can achieve speedups of 8-16X in the ahead-of-time compiler when compiling on multiple threads, while the just-in-time compiler can achieve speedups between 1.5-3X under certain circumstances. Additionally, we find that LLVM’s new CompileOnDemandLayer for delaying compilation until code is executed can avoid 30-40% of compilation work in certain applications, while LLVM’s new pass manager framework can reduce optimization time by up to 17.5% compared to the legacy pass manager. Incorporation of these compiler improvements into the Julia language yields marked decreases in the initial delays that are observed by users today.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Representation Learning from&#13;
Intravascular Ultrasound Videos</title>
<link href="https://hdl.handle.net/1721.1/151536" rel="alternate"/>
<author>
<name>Jain, Lay</name>
</author>
<id>https://hdl.handle.net/1721.1/151536</id>
<updated>2023-08-01T03:04:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unsupervised Representation Learning from&#13;
Intravascular Ultrasound Videos
Jain, Lay
Vascular diseases such as atherosclerosis are a leading cause of mortality and morbidity worldwide. Intravascular Ultrasound (IVUS) is an imaging technology that has the distinctive ability to offer real-time endovascular information of the coronary vasculature. However, its low signal-to-noise ratio, low data availability, and numerous artifacts make it challenging to use both for humans and automated methods. This work explores the use of representation learning and de-noising techniques to address these challenges and aid in the diagnosis of vascular diseases. We test our methods on the task of stent malapposition detection, where naive approaches fail discouragingly. We improve the naive baseline accuracy by 16%. In addition, we develop a deep learning approach for real-time stabilization of the IVUS videos, which performs registration 20-fold faster than the classical ANTs approach. Our results demonstrate the importance of incorporating domain knowledge in performance improvement while still indicating the limitations of current systems for achieving clinically ready performance.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topics in Sparsity and Compression: From High dimensional&#13;
statistics to Overparametrized Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/151535" rel="alternate"/>
<author>
<name>Benbaki, Riade</name>
</author>
<id>https://hdl.handle.net/1721.1/151535</id>
<updated>2023-08-01T03:50:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Topics in Sparsity and Compression: From High dimensional&#13;
statistics to Overparametrized Neural Networks
Benbaki, Riade
This thesis presents applications of sparsity in three different areas: covariance estimation in time-series data, linear regression with categorical variables, and neural network compression.&#13;
&#13;
In the first chapter, motivated by problems in computational finance, we consider a framework for jointly learning time-varying covariance matrices under different structural assumptions (e.g., low-rank, sparsity or a combination of both). We propose novel algorithms for learning these covariance matrices simultaneously across all time blocks and show improved computational efficiency and performance across different tasks.&#13;
&#13;
In the second chapter, we study the problem of linear regression with categorical variables, where every categorical variable can have a large number of levels. We seek to reduce or cluster the number of levels for statistical and interpretability reasons. To this end, we propose a new estimator and study its computational and statistical properties.&#13;
&#13;
And in the third chapter, we explore the problem of pruning or sparsifying the weights of a neural network. Modern neural networks tend to have a large number of parameters, which makes their storage and deployment expensive, especially in resource-constrained environments. One solution to this is compressing the network by pruning or removing some parameters, while trying to maintain a similar level of performance compared to the dense network. To achieve this, we propose a new optimization-based pruning algorithm, and show how it leads to significantly better sparsity-accuracy trade-offs compared to existing pruning methods.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MakeMu: An Online, Cross-Platform, Collaborative Web Application for Music-Making</title>
<link href="https://hdl.handle.net/1721.1/151534" rel="alternate"/>
<author>
<name>Liu, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/151534</id>
<updated>2023-08-01T03:02:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">MakeMu: An Online, Cross-Platform, Collaborative Web Application for Music-Making
Liu, Richard
Existing technologies that support online music collaboration are either overly simple in their capabilities, or may require complex setup and installation. Truly real-time collaborative cross-platform “music systems” may only offer a few, quite simple features. Meanwhile, fully expressive collaborative music systems rarely offer cross-platform support, such as support for desktop and mobile collaboration. MakeMu, designed with musicians in mind, proposes an expressive and cross-platform alternative. The MakeMu web application is fully-featured, and offers a grid-based, real-time collaborative music experience. By providing an intuitive platform for music producers to sketch musical ideas on their phone, with support for collaboration, MakeMu enables a novel, satisfying music composition experience.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward improved non-interactive proof systems</title>
<link href="https://hdl.handle.net/1721.1/151533" rel="alternate"/>
<author>
<name>Kwon, Sophia Seoyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/151533</id>
<updated>2023-08-01T03:57:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Toward improved non-interactive proof systems
Kwon, Sophia Seoyoung
The study of non-interactive proof systems, such as Merlin-Arthur proof systems and probabilistically checkable proofs, can yield insights into problems for which we do not yet have efficient algorithms. This work compiles many recent results on improved non-interactive proof systems, primarily Merlin-Arthur protocols for problems studied in the field of fine-grained complexity, and proposes a construction that yields interesting Merlin-Arthur protocols for any problem with an efficiently constructible PCP.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neighborhood Transformation Marginalization for OOD Detection</title>
<link href="https://hdl.handle.net/1721.1/151532" rel="alternate"/>
<author>
<name>Hulkund, Neha</name>
</author>
<id>https://hdl.handle.net/1721.1/151532</id>
<updated>2023-08-01T04:24:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Neighborhood Transformation Marginalization for OOD Detection
Hulkund, Neha
Out-of-distribution (OOD) detection is an important part of enabling the real-world deployment of machine learning models. Many recent methods for OOD detection rely on calculating a score function on a given test point and then thresholding the value to classify the point as in-distribution (ID) or OOD. However, calculating a score function on a single example may give biased or inaccurate estimates, especially as examples are sampled further and further OOD. In this paper we propose TraM: Transformation Neighborhood Marginalization, a method to improve the estimation of score functions used for OOD detection by calculating their expectation over a transformation neighborhood. TraM demonstrates improvements on a subset of commonly used OOD score functions in the OpenOOD benchmark, improving a baseline ODIN score function by up to 6 AUROC points. However, it does not significantly improve other baseline metrics, indicating the need for further research on this topic.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Tensor Compiler for Simple and Efficient Fully Homomorphic Encryption</title>
<link href="https://hdl.handle.net/1721.1/151531" rel="alternate"/>
<author>
<name>Krastev, Aleksandar</name>
</author>
<id>https://hdl.handle.net/1721.1/151531</id>
<updated>2023-08-01T03:31:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Tensor Compiler for Simple and Efficient Fully Homomorphic Encryption
Krastev, Aleksandar
Fully Homomorphic Encryption (FHE) enables computing on encrypted data, letting clients securely offload computation to untrusted servers. Though FHE is slow on CPUs, hardware acceleration enables large FHE programs, like deep neural networks. Unfortunately, FHE is extremely hard to program: translating even modest applications into efficient FHE programs takes months of work by experts. This is because FHE requires packing encrypted data into large vectors (tens of thousands of elements long), FHE provides limited operations on these vectors, and these operations have unintuitive performance tradeoffs.&#13;
&#13;
We address FHE’s programmability challenges with the Fhelipe FHE compiler. Fhelipe exposes a simple, numpy-style programming interface for working on tensors. Tensors cover key domains that map well to FHE, like machine learning and linear algebra. By leveraging tensor semantics and through several novel techniques, Fhelipe is the first compiler to produce efficient FHE programs that use large vectors well. Fhelipe automates all aspects of FHE programming, including bootstrap placement.&#13;
&#13;
We evaluate Fhelipe on both a state-of-the-art FHE accelerator and a CPU. Fhelipe matches or exceeds the performance of large hand-optimized FHE applications, like deep neural networks, and outperforms state-of-the-art FHE compilers by gmean 24.5×. At the same time, Fhelipe dramatically simplifies programming, reducing code size by 10×–34×.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Selection for Intein Splicing in Phage Assisted Continuous Evolution</title>
<link href="https://hdl.handle.net/1721.1/151530" rel="alternate"/>
<author>
<name>Hennes, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/151530</id>
<updated>2023-08-01T04:06:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Selection for Intein Splicing in Phage Assisted Continuous Evolution
Hennes, Andrew
Developing tools for targeted manipulation of protein chemistry remains a longstanding goal in chemical biology. Trans-splicing inteins fill a crucial niche in this area by enabling post-translational splicing of separate polypeptides, introduction of post-translational modifications, and other chemistry on the amide backbone. This has motivated the engineering and evolution of bespoke intein properties, including extein constraints, split site, kinetics, and orthogonality. Rational approaches are highly biased, and preexisting directed evolution methods are laborious and often struggle to guarantee splicing dependence in a selection. To accelerate intein evolution, we introduce a phage-assisted continuous evolution (PACE) selection for intein properties. We show that this selection is strictly splicing-dependent, discriminates between intein splicing rates ranging from one minute to several hours, discriminates preferred from unpreferred extein contexts, supports propagation of phage in a multi-passage PANCE format, and circumvents recombination-driven phage cheating through a recombination-resistant helper strain. We anticipate this selection will enable facile, rapid evolution of inteins with bespoke properties.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decision Transformer-based Traveling Salesman&#13;
Tour Generation</title>
<link href="https://hdl.handle.net/1721.1/151529" rel="alternate"/>
<author>
<name>Liu, Daniel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151529</id>
<updated>2023-08-01T03:17:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Decision Transformer-based Traveling Salesman&#13;
Tour Generation
Liu, Daniel S.
With the surge of new machine learning methods, research on classic problems like the Traveling Salesman Problem (TSP) is enjoying a resurgence of popularity. One of the biggest goals of this renewed interest is to create a model that can not only outperform state-of-the-art heuristic solvers in speed at trivial sizes, but also generalize to larger TSP instances that are currently intractable. In this thesis we approach the TSP with the Decision Transformer, a transformer-based architecture that recasts reinforcement learning environments as transformer-compatible sequence-modeling problems. By modeling a TSP instance as a graph-based environment with states and actions, we can feed partial tours into the Decision Transformer to infer the next best action in an autoregressive fashion. With the power of the transformer, we take a first step toward addressing the generalization issue where past models have failed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Options to Address Submersion Criticality for Low-Enriched Uranium Nuclear Thermal Propulsion Rocket</title>
<link href="https://hdl.handle.net/1721.1/151528" rel="alternate"/>
<author>
<name>Moore, Michael Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/151528</id>
<updated>2023-08-01T03:15:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design Options to Address Submersion Criticality for Low-Enriched Uranium Nuclear Thermal Propulsion Rocket
Moore, Michael Kenneth
Missions to Mars with eventual establishment of a Mars base/colony will require a versatile and consistent transportation method between Earth and Mars. Nuclear thermal propulsion (NTP) is well suited to be the main propulsion mechanism for interplanetary travel due to its high specific impulse. In order to develop a robust NTP system, the reactor core must be able to prevent a supercritical state from occurring in the event that a launch failure or an atmospheric reentry results in the reactor entering a body of water. In this accident, the flooding of the hydrogen coolant channels causes a surge in reactivity, which can be harmful to the environment around the reactor. This work focuses on investigating the effectiveness of various design options in mitigating a submersion criticality accident, and their impacts on the fuel lifecycle, for a modified version of the Space Capable Cryogenic Thermal Engine (SCCTE) reactor core. Multiple design options were considered, such as enhanced accident-tolerant control drums, coolant channel radius adjustment, telescoping control rods, and the implementation of a spectral shift via enrichment zoning. Analysis was performed using the Monte Carlo code SERPENT 2.1.3.1, supported by 1D thermal hydraulics modeling when necessary. Both fuel lifecycle and peaking factors are included as metrics for comparing each method’s effectiveness. The analysis determined that many of the design options limited the core’s fuel lifecycle, and that only control drum enhancement and the employment of telescoping control rods were independently capable of keeping the reactor subcritical in the event of a water submersion. While some designs were feasible in their mitigation of the submersion worth, additional thermal analysis is required to verify their compatibility with the high temperatures present within the core.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Combinations of Play Styles in the NBA</title>
<link href="https://hdl.handle.net/1721.1/151527" rel="alternate"/>
<author>
<name>Pilsbury, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151527</id>
<updated>2023-08-01T04:10:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluating Combinations of Play Styles in the NBA
Pilsbury, Daniel
This thesis explores the influence of player play styles on offensive performance in the NBA. It offers valuable insight to coaches and managers who seek to understand the significance of play styles and identify the optimal combinations. Through classifying player play styles and analyzing their relationships with team performance, this research reveals that play styles have a tangible impact on performance, even when adjusting for individual skill levels. The findings highlight the importance of three-point shooting, the ability to create shot opportunities for teammates, and the benefit of court spacing. Offensive performance is not simply the sum of individual talents, but can be greater or less depending on how players’ styles complement each other. Coaches can use this information to make informed decisions about which players to acquire for optimal offensive performance.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complex System Simulation Framework for Shared Augmented Reality Applications</title>
<link href="https://hdl.handle.net/1721.1/151526" rel="alternate"/>
<author>
<name>Mekala, Praneet</name>
</author>
<id>https://hdl.handle.net/1721.1/151526</id>
<updated>2023-08-01T04:02:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Complex System Simulation Framework for Shared Augmented Reality Applications
Mekala, Praneet
Complex systems perspectives are a tool that gives an individual the ability to better understand the outcomes of actions on both the environment and other actors within a system. These perspectives are rarely given an opportunity to develop within educational environments, and many students therefore lack them. System simulations to aid in the development of complex systems perspectives have been created in the past. These past simulations, however, fail to provide either an immersive or an expansive experience for students. The We’re In This Together (WIT) team is addressing these issues by developing complex system simulations for mixed reality headsets to be used in a classroom setting. This project specifically focuses on the subtask of creating a programmatic interface for simulation data to be updated and shared uniformly across devices. This interface will be usable by developers to create their own customized simulations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ClickTrails: Enhancing Web Navigation with Usage-Based Stylization of Clicked Web Page Elements</title>
<link href="https://hdl.handle.net/1721.1/151525" rel="alternate"/>
<author>
<name>Jin, Kathryn J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151525</id>
<updated>2023-08-01T04:09:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">ClickTrails: Enhancing Web Navigation with Usage-Based Stylization of Clicked Web Page Elements
Jin, Kathryn J.
A modern web user might visit dozens of websites in a day and return to the same website countless times over a lifetime. However, unlike physical travels that leave behind tangible traces on the path, navigating through the web doesn’t usually leave visible traces on web pages. Drawing inspiration from concepts in information foraging theory, this thesis investigates methods to capture and visualize the typically unseen traversals made by users in the online domain as a method of improving the information scent of web pages. We introduce ClickTrails, a browser extension that enables users to leave a visible trail as they navigate the web. ClickTrails highlights the links and buttons that the user clicks on, and the highlights increase in intensity with more clicks. Users can also switch to an alternative interaction mode, where clicking on elements causes them to fade. Based on the mode they select, a user’s web page interactions can either become more or less conspicuous, allowing them to choose the mode that aligns with their specific goals for each website. Our user evaluation indicates that ClickTrails helps users by surfacing information about their web use that would otherwise not be visible; ClickTrails helps users retrace frequently-used paths on the web, avoid already-visited links, and gain awareness of their browsing habits. These results demonstrate the promising potential of ClickTrails’ designs to enhance web navigation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancements in Word Alignment: Introducing a Novel Count-Based Subword Model Alongside Neural and Ensemble Models</title>
<link href="https://hdl.handle.net/1721.1/151524" rel="alternate"/>
<author>
<name>Ghosh, Shinjini</name>
</author>
<id>https://hdl.handle.net/1721.1/151524</id>
<updated>2023-08-01T03:17:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Advancements in Word Alignment: Introducing a Novel Count-Based Subword Model Alongside Neural and Ensemble Models
Ghosh, Shinjini
The task of aligning words across source and target languages, known as word alignment, plays a crucial role in natural language processing and machine translation. This thesis addresses the word alignment problem by developing and comparing three models: a count-based subword model, a baseline encoder-decoder neural alignment model, and an ensemble model. The count-based subword model utilizes statistical measures and co-occurrence statistics for word alignment estimation. The neural alignment model employs an encoder-decoder architecture with attention mechanisms for end-to-end alignment learning. The ensemble model combines the strengths of both the count-based and neural models to improve alignment accuracy and robustness. Through extensive experimentation, we demonstrate the effectiveness of each model in capturing subword boundaries, identifying relationships, and aligning words across parallel sentences. The results highlight the superior performance of the count-based subword model and the ensemble model, showcasing the potential for more accurate and robust alignment techniques with applications in various natural language processing tasks. This research contributes to the advancement of word alignment techniques, providing valuable insights and methods for enhancing multilingual processing, machine translation, and other language-related applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric Approaches for 3-Dimensional Shape Approximation</title>
<link href="https://hdl.handle.net/1721.1/151522" rel="alternate"/>
<author>
<name>Sonecha, Ria</name>
</author>
<id>https://hdl.handle.net/1721.1/151522</id>
<updated>2023-08-01T03:47:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Geometric Approaches for 3-Dimensional Shape Approximation
Sonecha, Ria
Having 3D simulation models which represent the visual geometry and contact dynamics of arbitrary objects is important for achieving robust planning and control for robotic manipulation tasks and sim2real transfer. Currently, the most common solution for obtaining such models is generating them by hand. However, this process is not generalizable or scalable. Neural Radiance Fields (NeRFs) are able to generate photorealistic 3D renderings of arbitrary objects based only on a few RGB images. 3D meshes that are extracted from NeRFs are often complex and hard to use in simulation. In this thesis we propose geometric approaches based on convex optimization for simplifying such meshes into unions of primitive shapes so that they are faster and more accurate to simulate.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Human Perception Through Mooney Faces</title>
<link href="https://hdl.handle.net/1721.1/151521" rel="alternate"/>
<author>
<name>Arora, Riya</name>
</author>
<id>https://hdl.handle.net/1721.1/151521</id>
<updated>2023-08-01T03:05:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Human Perception Through Mooney Faces
Arora, Riya
Human vision is remarkably tolerant to image distortions: even when every pixel in an image has been destructively altered, as in classic Mooney displays, humans can still extract information about identity, pose, and more. Most current deep learning computer vision models perform well with standard face images, but they struggle with stimuli which differ from their training data, like Mooney faces. What makes human perception so comparatively robust? We consider a version of the analysis-by-synthesis proposal for perception, in which visual input is interpreted by inverting a model of image formation, as a potential model for human visual perception. Taking Mooney faces as a case study, we evaluate the model against human performance in a test domain, determining head pose, with the objective of replicating human perception. Previous human psychophysical studies have identified an illusion in which the perceived pose of a Mooney face differs from the pose recovered from an uncorrupted image. The analysis-by-synthesis model does not show a similar effect.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Locally computing edge orientations</title>
<link href="https://hdl.handle.net/1721.1/151520" rel="alternate"/>
<author>
<name>Singhal, Mihir</name>
</author>
<id>https://hdl.handle.net/1721.1/151520</id>
<updated>2023-08-01T04:00:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Locally computing edge orientations
Singhal, Mihir
We consider the question of orienting the edges in a graph &#119866; such that every vertex has bounded out-degree. For graphs of arboricity &#120572;, there is an orientation in which every vertex has out-degree at most &#120572;, and moreover, this is the best possible. We are thus interested in algorithms that can achieve a maximum out-degree of close to &#120572;. A widely studied approach for this problem in the distributed algorithms setting is a “peeling algorithm” that provides an orientation with maximum out-degree &#120572;(2 + &#120598;) in a logarithmic number of iterations.&#13;
&#13;
We consider this problem in the local computation algorithm (LCA) model, which quickly answers queries of the form “What is the orientation of edge (&#119906;, &#119907;)?” by probing the input graph. When the peeling algorithm is executed in the LCA setting by applying standard techniques, e.g., the Parnas-Ron paradigm, it requires Ω(&#119899;) probes per query on an &#119899;-vertex graph. In the case where &#119866; has unbounded degree, we show that any LCA which orients its edges to yield maximum out-degree &#119903; must use Ω(√(&#119899;/&#119903;)) probes to &#119866; per query in the worst case, even if &#119866; is known to be a forest (that is, &#120572; = 1). We also show several algorithms with sublinear probe complexity when &#119866; has unbounded degree. When the maximum degree Δ of &#119866; is bounded, we demonstrate an algorithm that uses [formulation] probes to &#119866; per query. To obtain this result, we develop an edge-coloring approach that ultimately yields a graph shattering-like result. We also use this shattering-like result to demonstrate an LCA which can 4-color any tree using sublinear probes per query.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Baba is AI: A Grounded Benchmark for Compositional Generalization in Dynamic Rule Systems</title>
<link href="https://hdl.handle.net/1721.1/151519" rel="alternate"/>
<author>
<name>Jens, Meagan</name>
</author>
<id>https://hdl.handle.net/1721.1/151519</id>
<updated>2023-08-01T04:11:31Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Baba is AI: A Grounded Benchmark for Compositional Generalization in Dynamic Rule Systems
Jens, Meagan
People leverage the compositional nature of their environment to generalize to new scenarios. For example, if you understand the meaning of the verb "to sing" and the adverb "loudly," then you can determine the meaning of the novel phrase "to sing loudly" from these known components. This process is known as generalization through systematic compositionality. Developing agents that can use systematic compositionality to generalize to new conditions has been a long-standing problem in AI. In response to this challenge, grounded benchmarks have been developed to evaluate an agent’s ability to generalize using this approach. However, there are key problems with the current grounded benchmarks. To start, these benchmarks are ad-hoc. They propose sets of tasks without any formalism, so it is challenging to determine whether or not these tasks exhaustively explore the set of possible generalizations. This lack of structure also makes it challenging to compare benchmarks concretely. Another key issue with these benchmarks is that their environments are defined by a fixed set of rules and a small set of objects whose states can be changed. By strictly delineating the rules of these environments, we have overlooked a critical rule-understanding and manipulation capability that agents will need in the real world. Our approach to addressing these issues is twofold. First, we define a formalism to investigate generalization mathematically as a function of the environment architecture. We then use this formalism to create a novel type of generalization benchmark for agents that must learn to change the rules of their environments. Lastly, we run both supervised learning and reinforcement learning models on a small subset of the benchmark tasks to validate our environment and pinpoint key conditions under which agents fail to generalize.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying Objects’ Inertial Parameters with Robotic Manipulation to Create Simulation-Ready Assets</title>
<link href="https://hdl.handle.net/1721.1/151518" rel="alternate"/>
<author>
<name>Lambert, Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/151518</id>
<updated>2023-08-01T03:16:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Identifying Objects’ Inertial Parameters with Robotic Manipulation to Create Simulation-Ready Assets
Lambert, Andy
Real2Sim is the problem of simulating objects and scenes via real-world data, allowing a robot to imagine future interactions with its environment. However, many existing approaches either do not consider the dynamics of objects being simulated or make assumptions about their mass distributions. In this work, we aim to make use of robotic arm payload identification techniques in order to enhance the dynamic accuracy of objects generated from a Real2Sim pipeline for manipulation tasks. While the payload identification literature is vast, applying these methods in practice has various challenges and limitations. In implementing these techniques, we gain an understanding of best practices in the engineering sense. We hope that these methods can be used to provide ground truth data for other robot learning tasks on the road towards generalized dynamic intuition.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deciphering and modelling the action of immune cells using highly multiplexed imaging and deep learning techniques</title>
<link href="https://hdl.handle.net/1721.1/151517" rel="alternate"/>
<author>
<name>Reid, Clinton</name>
</author>
<id>https://hdl.handle.net/1721.1/151517</id>
<updated>2023-08-01T04:13:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Deciphering and modelling the action of immune cells using highly multiplexed imaging and deep learning techniques
Reid, Clinton
Cells of the immune system are capable of responding to foreign antigen, promoting host defense while limiting damage to host tissues through a property known as self-tolerance. T cells, their activation, and their effector roles are of particular interest due to their prominent roles in antigen discrimination and subsequent cell-mediated immunity. However, there are diverse effector T cell types interacting to regulate the immune response. Understanding the mechanisms by which intercellular interactions exert precise control over the immune system is a crucial step in elucidating how the immune system behaves during infection, health, or chronic disease. Multiplexed imaging is a powerful tool for visualizing distinct cell types and functional states directly in tissues. This technology is particularly important for understanding how cells organize spatially to enforce the boundary between host-protective responses and autoimmunity. It is therefore valuable to image interacting cells in highly multiplexed images, and to that end it has become increasingly important to increase the number of biomarkers that can be recorded in a single tissue section at a time. Here, I summarize our efforts to employ imaging and deep learning tools to analyze the structure of the immune system, ending with a critical insight regarding our cell segmentation models alongside an experimental workflow and pipeline that will allow even more to be revealed about the mechanisms of control that exist within the immune system. Current methods for acquiring highly multiplexed images are time-consuming and labour-intensive, while computational methods for analyzing these images and identifying relevant spatial patterns are lacking. We seek to improve and simplify our current multiplexing capabilities by eventually coupling fluorescence lifetime with fluorescence intensity measurements, two distinct imaging modalities.
Moreover, we aim to develop new computational pipelines to aid in downstream image analysis and identify new spatial motifs that control immune response in tissues.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privilege-Separating Embedded Applications using Web Assembly in the Plat FIDO2 Security Key</title>
<link href="https://hdl.handle.net/1721.1/151516" rel="alternate"/>
<author>
<name>Kettle, Benjamin B.</name>
</author>
<id>https://hdl.handle.net/1721.1/151516</id>
<updated>2023-08-01T03:03:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Privilege-Separating Embedded Applications using Web Assembly in the Plat FIDO2 Security Key
Kettle, Benjamin B.
Plat is a FIDO2 security key that uses privilege separation to protect the application’s private keys even if bugs are present in bug-prone parts of its codebase. Plat’s design encapsulates drivers and parsers in sandboxes that are isolated from the secrets that are used to perform authentication.&#13;
&#13;
To achieve privilege separation in the embedded context, Plat uses a new WebAssembly-based toolchain for ARM microcontrollers to implement and enforce isolation between individual components of an existing system without rewriting drivers and application code. This toolchain includes special support for device drivers, safely enabling isolated modules to access peripheral memory-mapped IO.&#13;
&#13;
Plat’s privilege separation reduces the lines of code in the trusted code base by 60% from our 20,000-line reference implementation while adding only 319 new trusted lines. Plat’s isolation strategy has acceptable performance overhead that does not prevent interactive use, with the slowest step of an authentication jumping from 277ms natively to 600ms when sandboxed.&#13;
&#13;
Plat ensures the protection of its secret key, and thus the security of the accounts it authenticates, in the presence of several classes of bugs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Object-Centric Planning for Long-Horizon Robotic Manipulation and Navigation</title>
<link href="https://hdl.handle.net/1721.1/151515" rel="alternate"/>
<author>
<name>Curtis, Aidan</name>
</author>
<id>https://hdl.handle.net/1721.1/151515</id>
<updated>2023-08-01T04:03:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Object-Centric Planning for Long-Horizon Robotic Manipulation and Navigation
Curtis, Aidan
A primary objective within the robotics research community is the development of robotic agents capable of executing long-horizon tasks within complex and novel environments. The sparse and factored nature of object-centric planning makes it a good candidate for the reasoning engine inside such an agent. However, several challenges remain under an object-centric planning framework. Challenges arise in areas such as efficiently grounding states with novel objects in cluttered environments, maintaining efficiency under large object sets, and safe exploration and manipulation in partially observable and nondeterministic environments. This thesis examines these limitations and proposes several strategies for solving them while maintaining the generalizability and flexibility of object-centric planning in long-horizon tasks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of 3D Genome Organization in 4-Cell Mouse Embryos</title>
<link href="https://hdl.handle.net/1721.1/151514" rel="alternate"/>
<author>
<name>Davarmanesh, Parmida</name>
</author>
<id>https://hdl.handle.net/1721.1/151514</id>
<updated>2023-08-01T03:04:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis of 3D Genome Organization in 4-Cell Mouse Embryos
Davarmanesh, Parmida
Cell fate determination, which refers to how distinct types of signals interact with a cell to determine its fate and differentiation into a particular cell type, is a fundamental question lying at the heart of developmental biology. The type and state of a cell are determined by its gene expression program, and recent work has shown a tight coupling between the spatial organization of the genome inside the cell nucleus and a cell’s gene expression profile. To gain insight into cell fate determination, we therefore studied 3D genome organization; in particular, how chromosomes are arranged in the 3D space of the cell nucleus. More precisely, we studied 3D genome organization in early mouse embryos at the 4-cell stage based on in-situ sequencing data. Interestingly, while at this stage the cells have undergone only two divisions, researchers have found that the type of division (equatorial or meridional) matters for the embryo’s chance of survival at later stages. Using in-situ genome sequencing data, 1) we developed a constrained k-means algorithm that clusters the genomic loci into two clusters (corresponding to the paternal and maternal copies of each chromosome); 2) using parental origin information (available for a few genomic loci), we inferred the parental origin of the identified chromosomes; and 3) we built an alignment algorithm that assigns an inter-cell distance between any two cells given their dissimilarity in terms of the 3D organization of the chromosomes in the cell nucleus. Using these inter-cell distances, we mapped all the cells in the data set into an embedding space; we found that not only do sister cells cluster together, but the majority of cousin cells (cells belonging to the same 4-cell embryo) also cluster together.
Additionally, while the type of cell divisions (equatorial or meridional) in each embryo in our data set is unknown, we used the inferred 3D genome organization to make a hypothesis about the type of division and the resulting shape of the 4-cell embryo. Although additional experiments are needed to correlate embryo shape with 3D genome organization, our preliminary results may open novel avenues towards analyzing cell fate determination. All of the code for reproducing the analysis can be found here: https://github.com/pdavar/Analysis-of-3D-Mouse-Genome-Organization
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equitable Community Health Worker Deployment in sub-Saharan Africa: A Modeling Framework for Stochastic Health Progression</title>
<link href="https://hdl.handle.net/1721.1/151513" rel="alternate"/>
<author>
<name>Reubenstein, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/151513</id>
<updated>2023-08-01T03:55:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Equitable Community Health Worker Deployment in sub-Saharan Africa: A Modeling Framework for Stochastic Health Progression
Reubenstein, Rebecca
Community health workers (CHWs) are increasingly important to healthcare delivery in many African countries. As CHW programs are being scaled up and integrated into national healthcare delivery systems, an important policy question is how to deploy CHWs to various localities, given their differences in disease burden and population density as well as the limited national resources for the program. We develop a modeling framework which jointly describes the health impact of a CHW program as a function of an area’s disease characteristics, operational environment, and CHW deployment density. Specifically, we use a continuous logistical model to capture the travel time constraints of CHWs and a novel continuous time stochastic model to describe individual health progression. We then demonstrate how our model can be used to inform policy decisions by formulating and solving an optimization problem for CHW deployment. Our numerical results demonstrate that we can prioritize equity in our deployment approach without incurring too high a price in terms of total country-wide utility.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning New Dimensions of Human Visual Similarity using Synthetic Data</title>
<link href="https://hdl.handle.net/1721.1/151511" rel="alternate"/>
<author>
<name>Fu, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/151511</id>
<updated>2023-08-01T04:18:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning New Dimensions of Human Visual Similarity using Synthetic Data
Fu, Stephanie
Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object poses, and semantic content. In this thesis, we develop a perceptual metric that assesses images holistically. Our first step is to collect a new dataset of human similarity judgments over image pairs that are alike in diverse ways. Critical to this dataset is that judgments are nearly automatic and shared by all observers. To achieve this we use recent text-to-image models to create synthetic pairs that are perturbed along various dimensions. We observe that popular perceptual metrics fall short of explaining our new data and introduce a new metric, DreamSim, tuned to better align with human perception. We analyze how our metric is affected by different visual attributes, and find that it focuses heavily on foreground objects and semantic content while also being sensitive to color and layout. Notably, despite being trained on synthetic data, our metric generalizes to real images, giving strong results on retrieval and reconstruction tasks. Furthermore, our metric outperforms both prior learned metrics and recent large vision models on these tasks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Dataset and Developing a Video Event Classifier for Football</title>
<link href="https://hdl.handle.net/1721.1/151510" rel="alternate"/>
<author>
<name>Best Jr., Reginald</name>
</author>
<id>https://hdl.handle.net/1721.1/151510</id>
<updated>2023-08-01T03:51:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Building a Dataset and Developing a Video Event Classifier for Football
Best Jr., Reginald
The challenges and inaccuracies from manually collecting and processing event data for football have highlighted an increasing need to automate event detection. While leveraging tracking data makes it possible to begin extracting events automatically, it becomes difficult to differentiate between events which share a similar context, such as types of duels, saves, fouls, stoppages, and restarts. Video classification, a well-established computer vision tool for identifying events in video clips, can be used in applications where tracking data alone fails to retell the game in its entirety.&#13;
&#13;
In this paper, we develop an end-to-end video classification pipeline to identify player duels in football using data from the 2022 Qatar Men’s World Cup. The methodology includes syncing manually annotated events with game video, generating 3-second video clips for all duel-like events in the tournament, and fine-tuning pretrained 3D convolutional neural networks to produce event predictions. We conduct several experiments to compare various camera angles, video resolutions, and binary versus multi-class models. We find that binary models significantly outperform multi-class models. To further improve performance, future iterations can optimize the training parameters and increase the number of examples to narrow this gap.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Workflows in Biologics Drug Substance Process Development Through Automation</title>
<link href="https://hdl.handle.net/1721.1/151502" rel="alternate"/>
<author>
<name>Judge, Alexander LC</name>
</author>
<id>https://hdl.handle.net/1721.1/151502</id>
<updated>2023-08-01T04:25:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing Workflows in Biologics Drug Substance Process Development Through Automation
Judge, Alexander LC
In the face of increasing competition, increasing pipeline complexity, and increasing resource requirements for bringing new drugs to market, streamlining process development is an important means of controlling costs and achieving competitive advantage in the biopharmaceutical industry. One potential means of achieving such improvements in process development is the implementation of high throughput technologies: equipment, and the associated methods and software, used to generate and process large amounts of data in little time. It is important, however, that implementation of these solutions is optimized across the entire process development organization rather than applications being deployed piecemeal within specific functions.&#13;
&#13;
This thesis develops a framework for identifying promising opportunities for use of high throughput technologies and quantifying the value that can be derived from their implementation. Though the framework is more broadly applicable than just to research and development organizations, the thesis is focused on its application to biologics process development within Amgen. It is used to assess the value of implementing a specific high throughput platform, Sartorius ambr® 250 systems, in upstream biologics process development.&#13;
 &#13;
Through mapping and analyzing the workflows of Amgen’s Biologics Drug Substance Technologies (Biologics DST) group, the implementation of this system was identified as a promising opportunity for employing high throughput technologies. In particular, a net present value (NPV) analysis was performed to show that investment in ambr 250 systems is likely to yield a positive NPV. However, the expected NPV depends strongly on both the expected useful lifetime of the systems and their capacity utilization. In addition, high throughput technologies provide substantial upside potential not captured in the NPV. Specifically, for the ambr 250 this includes cutting 6.5 weeks off development time for projects where process development is on the critical path. Using ambr 250 for Process Characterization (PC) on such programs could add highly valuable weeks of sales.&#13;
&#13;
A framework was also developed for assessing how three models of staffing support for high throughput technologies affect the value that can be derived from their implementation. This framework was applied to the use of ambr 250 systems at Amgen to determine how to realize the maximum possible value from investment in this equipment. The assessment found that a dedicated team model is most likely to successfully facilitate the high capacity utilization and maximum potential usable life that are critical for achieving positive NPV. A formal subject matter expert (SME) model may also achieve these goals at lower cost, though at higher risk. The informal champion model, however, is advised against. &#13;
&#13;
The recommended path forward is to purchase one or two ambr systems to use in Commercial Process Development (CPD) and to establish whether they can be used for PC. Once it is established that the ambr 250 can be used for PC, it is recommended that the existing systems be used immediately thereafter on key projects for which increased development speed can increase speed to market, and that a third system be purchased to expand capacity.&#13;
&#13;
Though this work focuses specifically on process development at Amgen, the frameworks developed herein are broadly applicable to many types of organizations, from R&amp;D to manufacturing to the service sector. In any industry where high throughput technologies exist, these frameworks can be used to identify promising opportunities for their implementation, quantify the value they can provide to determine if investment is worthwhile, and decide how they should be supported to maximize the value realized by the organization.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Higher-Order Automatic Differentiation and Its Applications</title>
<link href="https://hdl.handle.net/1721.1/151501" rel="alternate"/>
<author>
<name>Tan, Songchen</name>
</author>
<id>https://hdl.handle.net/1721.1/151501</id>
<updated>2023-08-01T03:01:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Higher-Order Automatic Differentiation and Its Applications
Tan, Songchen
Differentiable programming is a new paradigm for modeling and optimization in many fields of science and engineering, and automatic differentiation (AD) algorithms are at the heart of differentiable programming. Existing methods to achieve higher-order AD often suffer from one or more of the following problems: (1) exponential scaling with respect to order due to nesting first-order AD; (2) ad-hoc handwritten higher-order rules which are hard to maintain and do not utilize existing first-order AD infrastructures; (3) inefficient data representation and manipulation that causes significant overhead at lower orders when compared to nesting highly-optimized first-order AD libraries. By combining advanced techniques in computational science, i.e., aggressive type specialization, metaprogramming, and symbolic computing, we introduce a new implementation of Taylor mode automatic differentiation in Julia that addresses these problems. The new implementation shows that it is possible to achieve higher-order AD with minimal overhead, without sacrificing the performance of lower-order AD, and it obtains significant speedups in real-world scenarios over the existing Julia AD library. In addition, this implementation automatically generates higher-order AD rules from first-order AD rules, which is a step towards a general framework for higher-order AD.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation of Major Component Disposal Costs for Advanced &#13;
Nuclear Reactors</title>
<link href="https://hdl.handle.net/1721.1/151498" rel="alternate"/>
<author>
<name>Mokoena, Chumani</name>
</author>
<id>https://hdl.handle.net/1721.1/151498</id>
<updated>2023-08-01T04:17:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Investigation of Major Component Disposal Costs for Advanced &#13;
Nuclear Reactors
Mokoena, Chumani
Waste disposal is an important aspect of decommissioning Nuclear Power Plants (NPPs) that requires an understanding of the characteristics of the waste forms. The costs of disposal depend on the activity, volume, dose rate and waste handling/packaging. The average age of U.S. nuclear reactors is 40 years, with many pursuing license extensions. Decommissioning costs are several hundred million dollars, with waste disposal alone predicted to cost over $100 million by the U.S. Nuclear Regulatory Commission (NRC). The current NPP fleet is exclusively made of light water reactors, and as such there are no detailed studies of decommissioning for the advanced reactors that are being developed for deployment. This work modelled activity and benchmarked the disposal costs for a Pressurized Water Reactor’s (PWR) components (core shroud, barrel and reactor pressure vessel) against previous work funded by the NRC. The same methodology was then applied towards characterizing disposal costs for a Molten Salt Reactor.&#13;
 &#13;
Both a crude analytical method that is accessible to the general community and a more detailed numerical method with MCNPX and CINDER90 were used to estimate the activity of the reactor vessel and its internals. Disposal costs are based on the Texas Low Level Waste (LLW) facility. Generally, the analytical method overestimated the flux and/or activity of components closer to the core, such as the core barrel (PWR) or MSR internals. However, the waste classification was consistent for both methods. The nuclides contributing to the long-term activity of the components throughout the study were Ni-59, Co-60 and Ni-63, while the nuclides with a half-life of less than five years dominated the initial total activity. The PWR core shroud, barrel and vessel were designated as greater than Class C, Class C and Class A, respectively. Based on the disposal costs of the PWR components analyzed, the levelized cost of disposal for a PWR was scaled to be $0.68-$0.90/MWh assuming a 40-year operating lifetime, below the $1/MWh that is typically budgeted. &#13;
&#13;
The MSR analysis focused on the activity and disposal costs of the graphite reflectors, core can/shroud, and reactor vessel. Metal components were modelled as either SS316 or Hastelloy N, with an operating period of 5 to 10 years. Graphite reflectors were Class C waste with a specific disposal cost of about $2,200/kg. The core can was greater than Class C waste for both Hastelloy N and SS316. The vessel was Class C for SS316 (5-10 years) and Class C for Hastelloy N (5-7 years) before becoming greater than Class C waste for a 10-year operating lifetime. MSR disposal costs were computed with and without PWR activation charge limits, assuming both immediate disposal (high cost) and disposal after a 20-year decay period following plant shutdown (low cost). Without activation cost limits, the total levelized cost of disposal ranges from $8.27/MWh to an enormous $779/MWh, but the range narrows to $7.25-$20.10/MWh if limits on activation charge are imposed. In all scenarios, the MSR disposal costs for the reactor vessel and its internals alone were larger than the $1/MWh commonly assumed for light water reactors. In addition, the noted cost does not include the increased scope for fueled-salt cleanup and decontamination of the considered components, as well as the primary piping and heat exchangers. Therefore, this work motivates advanced reactor developers, particularly the MSR community, to estimate the disposal cost of their technologies, as it may play an important role in their economic viability.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Enterprise Architecture for Railway Machinery Engineers</title>
<link href="https://hdl.handle.net/1721.1/151496" rel="alternate"/>
<author>
<name>Takahashi, Koji</name>
</author>
<id>https://hdl.handle.net/1721.1/151496</id>
<updated>2023-08-01T03:50:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Developing Enterprise Architecture for Railway Machinery Engineers
Takahashi, Koji
This thesis presents an analysis of the machinery department organization at the Central Japan Railway Company (CJR) with the aim of enhancing the company's resilience in response to ecosystem and stakeholder changes, as well as the impact of technological advancements on company operations. Based on a review of publications on cutting-edge maintenance-related technologies, theories, and contract types, this study uses the Architecting Innovative Enterprise Strategy (ARIES) framework to analyze the machinery department of CJR as an enterprise. ARIES models the enterprise from holistic views, including ecosystem landscape analysis and stakeholder analysis, and generates an architectural concept.&#13;
&#13;
The study suggests that a direct maintenance strategy in collaboration with academic research would be a more flexible and resilient enterprise architecture than the current architecture. This new enterprise architecture is expected to deliver better value to both internal and external stakeholders. Finally, the study validates this generated enterprise architecture using evaluation criteria such as the Pugh matrix and future-proofing scenario analysis.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ship-Pack Optimization to Minimize Fulfillment Costs from Manufacturing to Customer</title>
<link href="https://hdl.handle.net/1721.1/151494" rel="alternate"/>
<author>
<name>Fullerton, Avery</name>
</author>
<id>https://hdl.handle.net/1721.1/151494</id>
<updated>2023-08-01T03:00:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Ship-Pack Optimization to Minimize Fulfillment Costs from Manufacturing to Customer
Fullerton, Avery
Ship-pack optimization is a crucial tool for companies to reduce operating costs in their distribution systems. Cost elements including transportation costs between manufacturing and distribution, transportation costs between distribution and customer, distribution center labor costs, and distribution box costs are all influenced by the ship-pack size from manufacturing to distribution. Companies that control their distribution channels want to minimize the amount of repacking that occurs between manufacturing and distribution to the customers. “Each-orders” occur when a company must fulfill an order outside of the ship-pack quantity sent from manufacturing. These “each-orders” incur more distribution handling costs, customer complaints, and more costly freight terms, thereby carrying a high “cost-to-fulfill” number relative to the amount of product sold to customers. This thesis explores two ways to reduce operational costs through adjusting ship-pack delivery. The first is an optimization to change the ship-pack quantity, which results in a savings of around 5% of operational costs annually. The second is an optimization of customer ordering behavior, which results in a savings of around 9% of operational costs annually.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum-inspired and Quantum Optimization on a Superconducting Quantum Processor</title>
<link href="https://hdl.handle.net/1721.1/151493" rel="alternate"/>
<author>
<name>Banner, William P.</name>
</author>
<id>https://hdl.handle.net/1721.1/151493</id>
<updated>2023-08-01T04:17:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Quantum-inspired and Quantum Optimization on a Superconducting Quantum Processor
Banner, William P.
Quantum and Quantum-inspired optimization represent rapidly growing fields that combine classical optimization techniques with either quantum-inspired ideas or quantum hardware to address complex optimization problems. This thesis provides an overview of quantum-inspired optimization as well as quantum optimization, including the theoretical underpinnings of both processes on hardware and in software. In particular, this thesis considers a specific, practically relevant problem, a BMW production planning problem, and evaluates the performance of quantum-inspired optimizers. This evaluation is implemented by comparing the performance of a family of quantum-inspired optimizers with that of several common black-box combinatorial methods. We find that the use of important operations research techniques including the incorporation of domain-specific information as well as state-space pruning improves the performance of all solvers. In addition, we find that in a majority of tested cases, quantum-inspired methods tie or improve upon the results of their conventional counterparts, albeit by small margins, particularly in regimes of moderate state-space size. This thesis demonstrates that quantum-inspired optimization can outperform many conventional optimization methods in some cases, motivating future use and study of quantum-inspired protocols as well as implementation of fully-quantum optimization techniques.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhanced Digital Capability through the use of Simulation in Footwear Product Creation</title>
<link href="https://hdl.handle.net/1721.1/151491" rel="alternate"/>
<author>
<name>Hinton, Zoe</name>
</author>
<id>https://hdl.handle.net/1721.1/151491</id>
<updated>2023-08-01T03:33:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhanced Digital Capability through the use of Simulation in Footwear Product Creation
Hinton, Zoe
Digital simulations, such as those utilizing the Finite Element Analysis method, are a common engineering tool in product development to build confidence in product performance and robustness of design. Footwear product creation has historically been a blend of art and science, with design teams relying on 2D sketches sent to factory partners to create physical samples of shoes with the desired aesthetics. These samples were then tested mechanically and on athletes to verify the structural integrity of the design. This thesis explores how FEA simulation can be implemented into the footwear development process, especially in the environment and context of an onsite product creation center. &#13;
&#13;
First, a case study on plate bending analysis to measure the relative stiffness of cleated footwear design features is presented, exploring three approaches: mechanical testing, beam theory calculations using plate geometry, and FEA simulation. All three approaches provide similar results for design teams; however, a team should evaluate which approach is best depending on the project timeline, level of complexity, and resources available.&#13;
&#13;
Next, a framework for a qualitative cost-benefit analysis is developed to aid footwear business leaders in deciding whether investment in digital tools such as FEA is worth the expected benefits. The major costs of implementing FEA in a product creation center will be the start-up time and the financial cost of FEA engineers, software programs, and integration into the company's design process. However, the benefits could include increased product quality, higher customer satisfaction with performance, and positive mindset shifts within design teams, allowing cross-functional team members to be more collaborative and efficient in their communication.&#13;
&#13;
While the potential benefits could increase sales and brand loyalty in the long run, investment in a digital service like FEA may not be suited for a product creation center, whose main focus is on building low-volume physical footwear prototypes. Instead, FEA simulation and other digital capabilities should be integrated directly into the design teams responsible for footwear projects. The hope is that the case study and associated cost-benefit analysis developed in this thesis can be applied to the evaluation of other technologies and digital tools in the footwear industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Site Hydrogen Production via Distributed Methane Pyrolysis</title>
<link href="https://hdl.handle.net/1721.1/151490" rel="alternate"/>
<author>
<name>Myers, Madison</name>
</author>
<id>https://hdl.handle.net/1721.1/151490</id>
<updated>2023-08-01T03:28:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">On-Site Hydrogen Production via Distributed Methane Pyrolysis
Myers, Madison
Current clean hydrogen production technologies are most affordable as large-scale centralized facilities, but current transportation and storage options can make hydrogen cost-prohibitive for small-scale and intermittent consumers. For these customers, a distributed hydrogen production method may be more desirable. A novel method for producing low-emissions hydrogen from natural gas, known as methane pyrolysis, is unique in that it can be scaled down more economically for use in distributed hydrogen applications. This paper describes an analysis of the economic viability and technical feasibility of three companies developing this technology across several small-scale and intermittent consumer applications. Three markets were identified where distributed methane pyrolysis is the lowest cost solution – hydrogen refueling stations, small-scale and intermittent industrial hydrogen consumption, and power generation for critical infrastructure located in areas with highly variable electricity prices. Distributed methane pyrolysis has the potential to provide small-scale and intermittent consumers with low-emissions hydrogen for as little as $1.70/kg H₂. This is significantly lower than the estimated delivered clean hydrogen cost of $7.36/kg H₂ for green hydrogen produced at a centralized production facility. Costs could also be further reduced by any available low-carbon economic incentives at the time and place of production. Widespread deployment of this emerging technology can further decarbonize the global economy while leveraging already existing natural gas networks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory Characterization of Mars In-Situ Resource Utilization (ISRU) using the Mars Oxygen ISRU Experiment (MOXIE) FlatSat Testbed</title>
<link href="https://hdl.handle.net/1721.1/151488" rel="alternate"/>
<author>
<name>Hariharan, Shravan</name>
</author>
<id>https://hdl.handle.net/1721.1/151488</id>
<updated>2023-08-01T03:52:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Laboratory Characterization of Mars In-Situ Resource Utilization (ISRU) using the Mars Oxygen ISRU Experiment (MOXIE) FlatSat Testbed
Hariharan, Shravan
The Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) is a payload onboard NASA’s Perseverance Rover demonstrating the production of oxygen through solid oxide electrolysis of carbon dioxide in the Martian atmosphere. MOXIE has successfully generated oxygen on Mars 14 times since landing in February 2021 and will continue to demonstrate oxygen production during night and day throughout all Martian seasons.&#13;
&#13;
As opportunities to run MOXIE on Mars are limited due to mission constraints such as energy usage and a fixed instrument configuration, the MOXIE team at the Massachusetts Institute of Technology (MIT), MIT Haystack Observatory, and NASA Jet Propulsion Laboratory (JPL) developed the MOXIE FlatSat as a ground-based operational testbed to further characterize the MOXIE system, evaluate and validate planned MOXIE operations on Mars, and demonstrate potential operating modes and configurations for a next-generation Mars in-situ resource utilization (ISRU) system. &#13;
&#13;
The research presented in this thesis involves a series of experiments conducted on the FlatSat testbed to inform design and operation of a next-generation Martian ISRU system. Specifically, this thesis discusses the capabilities of the FlatSat system, and how experiments analyzing the FlatSat compressor and FlatSat operations at low pressures inform optimal operating conditions for a future full-scale Martian ISRU system that minimize energy usage and maximize oxygen production. In addition, qualitative and quantitative differences between the FlatSat and MOXIE Flight Model are discussed to examine the extensibility of FlatSat data to MOXIE's operations on Mars.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Prediction of Quantum Chemical Properties with Multitask Gaussian Process Regression</title>
<link href="https://hdl.handle.net/1721.1/151484" rel="alternate"/>
<author>
<name>Fisher, Katharine</name>
</author>
<id>https://hdl.handle.net/1721.1/151484</id>
<updated>2023-08-01T04:15:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Efficient Prediction of Quantum Chemical Properties with Multitask Gaussian Process Regression
Fisher, Katharine
Multitask inference offers an efficient approach to bringing together multiple sources of information to train a surrogate model to predict chemical properties. In this thesis, we explore the task of inferring probability distributions on quantities of interest when we have access to a limited amount of highly accurate CCSD(T) data as well as data obtained using a range of approximations to the exchange-correlation functional in density functional theory (DFT). A CCSD(T) calculation can incur 1000 to one million times the computational cost of a DFT calculation, so an inference model which leverages both types of predictions can benefit from the accuracy of CCSD(T) and the relative efficiency of DFT. We specifically focus on inference methods based on Gaussian process (GP) regression. One example of such an approach, the Delta method, uses GP regression to model the difference between two different observation data sets, in our case CCSD(T) and DFT. The multitask method, by contrast, models a regression problem for each observational data set and assumes some relationship between the problems so that all relevant data sets can support the primary regression task. &#13;
&#13;
We test the performance of the Delta and multitask methods in the tasks of predicting the ionization potential of small organic molecules and the interaction energies of water dimers. The Delta method outperforms the multitask approach for data sets where it can be applied, but this approach requires that the CCSD(T) and DFT data sets correspond to the same set of molecules, and DFT data must be available for the target molecules to make final predictions. The multitask method can use information from CCSD(T) and DFT data sets which correspond to different molecules and can be applied without any DFT insight into the target molecule. For a given training set generation cost, the multitask method produces more accurate predictions than a GP regression model trained only on CCSD(T). The true training set generation cost may be smaller than the listed cost since the flexibility of the multitask method allows it to make use of already existing data sets. Additionally, we find that we can increase accuracy at low computational cost by increasing the number of DFT observation data sets used to inform the model. &#13;
&#13;
Finally, we consider the accuracy of the variances of the distributions predicted by GP inference methods as uncertainty indicators for the models. Though these indicators can capture uncertainty due to limited data set size and extrapolation, they are not designed to capture the impact of the disparity between modeling assumptions and reality. Future work may seek to better understand and represent this reality.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Ink Feedback Control System for Vision&#13;
Controlled Jetting 3D Printer</title>
<link href="https://hdl.handle.net/1721.1/151483" rel="alternate"/>
<author>
<name>Tiankanon, Krittamate</name>
</author>
<id>https://hdl.handle.net/1721.1/151483</id>
<updated>2023-08-01T03:08:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Improving Ink Feedback Control System for Vision&#13;
Controlled Jetting 3D Printer
Tiankanon, Krittamate
Vision-Controlled Jetting (VCJ) is a 3D-printing technique that operates by repeatedly depositing photopolymer materials in 2 dimensions onto a printing plane, solidifying them using UV light, and scanning the printing plane’s topography to readjust the printing layer accordingly. It has a high printing resolution while also maintaining high printing speed, so it is suitable for prototyping or manufacturing parts that require high precision. However, to maintain its dimensional accuracy while printing in a noisy environment, it requires a noise-robust ink controller algorithm to generate an appropriate printing layer based on the scanning information. The current VCJ controller algorithm, unable to tolerate systematic hardware misalignment, sometimes generates printing artifacts which deform or destroy the printed part. Our work aims to answer the question of which feedback control systems of VCJ printers can adapt their printing behavior to avoid such artifacts better than the original algorithm. We propose a new algorithm, called the Image Fitting algorithm, that can detect and allow the controller to react to the hardware misalignment. We first created a VCJ printing simulator based on the real VCJ printing process. Then, we tested the controller’s ability to correct the printing artifacts against the printing simulator. The results show that the Image Fitting algorithm can react to the hardware misalignment both in the VCJ printing simulator and during the actual physical printing process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonizing Metropolises: Analyzing New York’s LL97 and Boston’s Berdo Net Zero Policies</title>
<link href="https://hdl.handle.net/1721.1/151482" rel="alternate"/>
<author>
<name>Pandey, Akrisht</name>
</author>
<id>https://hdl.handle.net/1721.1/151482</id>
<updated>2023-08-01T04:14:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Decarbonizing Metropolises: Analyzing New York’s LL97 and Boston’s Berdo Net Zero Policies
Pandey, Akrisht
Carbon neutrality and net zero have emerged as critical goals in global climate governance, seeking to address human activities’ environmental and social implications on Earth. This thesis explores the decarbonization of urban environments by critically analyzing New York's Local Law 97 (LL97) and Boston's Building Energy Reporting and Disclosure Ordinance (BERDO) through system thinking. The study evaluates the impact of these pioneering policies and renewable grid integration on office spaces.&#13;
&#13;
The narrative unfolds by analyzing the challenges faced by pre-1985 office buildings in Manhattan. It employs system thinking to decipher developers’ decision-making processes when choosing between renovation and demolition to pursue more sustainable buildings. The study further explores the potential of repurposing aging office spaces into residential units, considering the complex dynamics involved and utilizing Net Present Carbon to calculate the time value of carbon.&#13;
&#13;
Shifting focus to Boston's BERDO, the research investigates developers’ experiences using system thinking. The analysis illustrates BERDO's impact on older buildings at the neighborhood level, revealing the unintended consequences of a one-size-fits-all policy approach.&#13;
&#13;
Taking a broader view, the study examines federal and state-level policies across the United States, investigating their potential to bolster decarbonization efforts in New York and Boston. It unravels the economics of sustainable construction, contemplating ripple effects on housing prices and exploring the pioneering practices of developers embracing circular building materials.&#13;
&#13;
This thesis synthesizes the effectiveness of LL97 and BERDO policies in driving urban decarbonization while acknowledging their good intentions and the pressures they exert on big players. In doing so, it also highlights areas for refinement to address unintended consequences and better cater to diverse segments of the built environment. Through these means, the study contributes to understanding net zero policies as catalysts for a greener, more sustainable urban built environment.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous Drone Assisted Aircraft Inspections</title>
<link href="https://hdl.handle.net/1721.1/151481" rel="alternate"/>
<author>
<name>Mighty, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/151481</id>
<updated>2023-08-01T03:38:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Autonomous Drone Assisted Aircraft Inspections
Mighty, Andrew
The safety of the passengers, crew, and mechanics is of the utmost importance for any aircraft manufacturer or operator. Visual inspections of the exterior of aircraft are critical to their safe operation, as defects such as corrosion, dents, lightning strikes, or missing parts can compromise the structural integrity of the whole aircraft. Currently, aircraft visual inspections are conducted by human mechanics in a process that is not only time consuming, but also puts the mechanics and the aircraft at risk, as mechanics must use lifts and cranes to inspect top portions of the aircraft, while at times even walking along the wings and spine. Throughout this process, paper records are maintained to document inspection findings, often without standard processes and dedicated equipment for capturing the current state of aircraft damage through imagery.&#13;
&#13;
In an attempt to improve the safety, record management, and time required of this process, we developed an approach to the inspection process using autonomous small unmanned aerial systems (SUAS) to capture the required inspection imagery. This approach also implements the use of a computer vision model to process the inspection imagery, aiding the mechanic in the review of imagery and identification of inspection findings. During this process, we analyzed the effects of computer vision and machine bias on the human inspectors and inspection accuracy, recommending processes to mitigate these effects and maintain inspection accuracy equivalent to the current human-only process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reinventing the (Spinning) Wheel: Maps to Scale Bacterial-Grown Materials</title>
<link href="https://hdl.handle.net/1721.1/151480" rel="alternate"/>
<author>
<name>Keyser, Jocelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/151480</id>
<updated>2023-08-01T03:52:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Reinventing the (Spinning) Wheel: Maps to Scale Bacterial-Grown Materials
Keyser, Jocelyn
As we move to decarbonize and transition to a sustainable economy, we can tap into nature’s intelligence to find materials whose properties often surpass those manufactured by industry. However, accessible biomaterials that can be extracted from nature represent only a small fraction of biology’s repertoire. The emergence of synthetic biology presents the opportunity to essentially code matter - creating a vast array of bio-inspired materials with increased functionality and design. At the same time, it beckons a new era of manufacturing that could potentially reduce environmental impact in a world besieged by the damaging legacy left by the industrial revolution, with textiles being some of the most egregious offenders. Each stage of a textile’s cradle-to-grave journey is rife with environmental consequences, making textiles one of the most polluting industries on our planet. Solutions from synthetic biology and beyond offer reprieve but must overcome the challenges that come with scale, and they are not without their own faults and required caution.&#13;
&#13;
After nine months of immersion in this realm – conversations with researchers and founders spanning bacterial-grown silk, nanodots, and food proteins; hours in lab engineering E. coli to make products ranging from coral dyes and squid proteins to biocement; a wealth of knowledge from industry speakers like Ginkgo Bioworks and TWIST Bioscience; and thousands of pages of industry reports, articles, and literature – this report explores the root causes of the negative externalities attributed to the textile industry and paths to redesign with biosynthetic materials in order to collectively build a better system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Defio: Instance-Optimized Fusion of AWS Database Services</title>
<link href="https://hdl.handle.net/1721.1/151479" rel="alternate"/>
<author>
<name>Fanggohans, Dean</name>
</author>
<id>https://hdl.handle.net/1721.1/151479</id>
<updated>2023-08-01T04:22:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Defio: Instance-Optimized Fusion of AWS Database Services
Fanggohans, Dean
Building large-scale data infrastructures is hard: there is often more than a single type of workload and business requirement, and unfortunately, “one size does not fit all”. Modern database systems tend to specialize in a specific type of workload, so organizations are left to integrate multiple differently specialized database systems in order to achieve sufficient performance for all of their use cases and workloads.&#13;
&#13;
This kind of hybrid architecture—also known as a Data Mesh architecture—often leads to increasing complexity in maintaining and utilizing database services, both for data engineers and for end users. However, we believe that some of this complexity can be abstracted away from the end users, in particular with respect to query routing, i.e., determining where to execute each individual SQL query among the multiple database engines.&#13;
&#13;
To overcome this challenge, we propose Defio, a unified interface to multiple specialized database engines that can intelligently handle myriads of workloads without having the end users think about the underlying execution of each query. Specifically, this thesis focuses on the design and implementation of an instance-optimized query router, which ultimately enables Defio to take advantage of the performance benefits of each specialized database in a Data Mesh architecture—resulting in what we call a fusion of database services.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Yaku Cosmo-Infrastructures : Designing with Water Across the Andes</title>
<link href="https://hdl.handle.net/1721.1/151477" rel="alternate"/>
<author>
<name>Malca Vargas, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/151477</id>
<updated>2023-08-01T03:50:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Yaku Cosmo-Infrastructures : Designing with Water Across the Andes
Malca Vargas, Kevin
Water in the Andes is a dual entity: matter and energy, terrestrial and celestial, substance and Cosmos. This thesis is a provocation to reimagine water infrastructures across the Andes as a collaboration of ancestral and modern forms of relating with Water.&#13;
&#13;
As an Andean descendant, my mother taught me that Water is a living being. A series of walks during the summer of 2023, visiting ancestral places, learning from Water nurturers, and participating in the water festival in the South of Peru allowed me to reconnect with my family knowledge. Marcela Machaca, a “Water Nurturer”, taught me that Andean cosmology considers Yaku (Water in Quechua) to be a person. Yaku Mama (Mother Water) creates life in the Andes through a reciprocal nurturing relationship with the communities.&#13;
&#13;
In contrast, modern epistemologies frame Water as a resource managed through infrastructures that extract, store, and distribute it across places. This approach disregards the Andean communities’ ancestral practices, disrupting the local ecological cycles. In the Quispillacta community, the duality of Water is evident: Water is both a resource managed by a dam and a living entity nurtured through ancestral practices. The incoming infrastructure planned by the government in Quispillacta presents an opportunity to embrace this duality by asking: how can we address the need for water access while also embracing the ancestral practices of living with Water?&#13;
&#13;
A paradigmatic shift in water infrastructures is necessary. This thesis argues for an alternative way to represent, design, and live with Water in the Andes. It proposes Cosmo-infrastructures as a new architectural paradigm that embraces the collaboration of ancestral and modern ways of interacting with Water. By proposing the design of a seasonal learning path in Quispillacta, this thesis articulates stations that mediate, interchange, and regenerate Water in collaboration with the local ecology. This project invites us to think of Water as a pluriverse, co-creating with other modes of relating to the world that challenge the canonical binary divisions between Water and land, architecture and landscape, and, most importantly, humans and nature.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pulse Design for Two-Qubit Gates in Superconducting Circuits</title>
<link href="https://hdl.handle.net/1721.1/151476" rel="alternate"/>
<author>
<name>Ding, Qi</name>
</author>
<id>https://hdl.handle.net/1721.1/151476</id>
<updated>2023-08-01T03:04:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Pulse Design for Two-Qubit Gates in Superconducting Circuits
Ding, Qi
Despite tremendous progress towards achieving low error rates with superconducting qubits, error-prone two-qubit gates remain a bottleneck in realizing large-scale quantum computers. To boost the two-qubit gate fidelity to the highest attainable levels given limited coherence time, it is essential to develop a systematic framework to optimize protocols for implementing two-qubit gates. In this thesis, we formulate the design of the control trajectory for baseband controlled phase gates in superconducting circuits into a pulse design problem. Our research indicates that the Chebyshev trajectories – the trajectories based on the Chebyshev pulse and weighted Chebyshev approximation – have the potential to outperform the Slepian trajectories based on the Slepian pulse, which are currently widely used in quantum experiments.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Economic Experiments Using Large&#13;
Language Models: Design and Development of a&#13;
Computational Tool</title>
<link href="https://hdl.handle.net/1721.1/151473" rel="alternate"/>
<author>
<name>Kar, Sohini</name>
</author>
<id>https://hdl.handle.net/1721.1/151473</id>
<updated>2023-08-01T04:07:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Simulating Economic Experiments Using Large&#13;
Language Models: Design and Development of a&#13;
Computational Tool
Kar, Sohini
Large language models, known for capturing the syntax, semantics, and broader representation of human behavior, can be used to simulate humans in AI-based experimentation. Because these models provide responses broadly consistent with human behavior, they may be used to pilot studies or glean insights into social and economic scenarios. This research presents the development of homo silicus, a Python-based library for simulating economic experiments and social scenarios using AI subjects. Using the library, users can design, test, and analyze the results of experiments conducted in silico. We replicate several classic economic experiments to better understand how well AI subject responses align with observed human behavior, and we test the impact of parameters such as temperature and prompt engineering to determine their influence on results. The homo silicus library provides researchers with a cost-effective method to iterate on projects without extensive resources or specific participant recruitment. This research contributes to advancing the field of AI-powered social simulation models and their applications in economic experimentation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Data Fusion for Deep Learning Applications in Intracoronary Image Segmentation</title>
<link href="https://hdl.handle.net/1721.1/151472" rel="alternate"/>
<author>
<name>Ahn, So Hee</name>
</author>
<id>https://hdl.handle.net/1721.1/151472</id>
<updated>2023-08-01T03:05:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Multimodal Data Fusion for Deep Learning Applications in Intracoronary Image Segmentation
Ahn, So Hee
This thesis describes steps towards the construction of a multi-anatomical, multimodal segmentation and co-registration platform for intracoronary images. Manual annotation and co-registration of intracoronary images from different modalities remain today’s gold standard for intravascular image analysis and morphological component extraction in guiding clinical decision-making, but there is increasing interest in automated pipelines to optimize these processes. This thesis presents the construction of an optimized and robust multi-anatomical segmentation model, with detailed experimentation on different possible modes of pre-training. We also contribute to a flexible, reliable platform that can segment and co-register intracoronary images of different imaging modalities. In particular, we improve upon an in-house non-rigid registration procedure for co-registering coronary computed tomography angiography (CCTA) and optical coherence tomography (OCT) frames by initializing a new hybrid model that uses user-inputted fiducial bifurcations as landmarks to guide the non-rigid registration of intermediary frames in multimodal pullbacks. We hypothesize that this will enable the co-registration model to account for the global environment when aligning corresponding frames rather than relying solely on local optimization. Both the segmentation and co-registration processes are developed simultaneously towards the greater ambition of a platform that can segment and co-register images from multiple modalities, pre- and post-intervention.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SparkSim: A Counterfactual Approach for Spark Cluster Scheduling</title>
<link href="https://hdl.handle.net/1721.1/151471" rel="alternate"/>
<author>
<name>Rodríguez Garnica, Sol Estrella</name>
</author>
<id>https://hdl.handle.net/1721.1/151471</id>
<updated>2023-08-01T03:50:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">SparkSim: A Counterfactual Approach for Spark Cluster Scheduling
Rodríguez Garnica, Sol Estrella
Simulating and testing scheduling policies can be immensely time- and resource-intensive. In this work, we explore a novel approach, SparkSim, to scheduling policy training that is faster and more efficient than traditional scheduling policy testing. Our approach extends CausalSim’s existing trace-driven approach [3], which we apply to replace the current Spark cluster scheduling policy testing in simulation.&#13;
&#13;
To simulate the runtime under a new scheduling policy, our method consists of training a neural model to learn about unseen and unbiased computation elements of the cluster, extracting them, and using them as latents in predicting the duration of a workload from an existing trace. We implement this using a counterfactual approach, which takes a trace that was executed to predict a new one as if it had taken place under the same cluster conditions.&#13;
&#13;
My thesis focuses on evaluating and investigating the performance of SparkSim. We evaluate SparkSim against two baselines that do not require training. Our results show that SparkSim underperforms these baselines on easier prediction tasks (such as copying from source) but outperforms them as the prediction tasks get harder. Future work could greatly improve upon these results.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Composing Visual Relations with Composable Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/151470" rel="alternate"/>
<author>
<name>Wei, Megan</name>
</author>
<id>https://hdl.handle.net/1721.1/151470</id>
<updated>2023-08-01T04:02:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Composing Visual Relations with Composable Diffusion Models
Wei, Megan
Humans are able to build complex representations of our world – representing the world as compositional combinations of both objects and their interdependent relations. Recent work on text-guided diffusion models has produced impressive results in generating photorealistic images, but such models often fail to capture spatial relationships between objects and will often generate scenes where individual specified relations are incorrectly captured. An underlying cause is that such models are not explicitly compositional – when given a relational text description such as fork on plate or plate on fork, models will regress to generating previously seen images and will only generate images with a fork on a plate. We propose an approach to more accurately capture relations by decomposing the image probability density as a hierarchical product between a lifted density representing abstract relations between objects and individual densities representing each object. We illustrate how this approach is simple to implement in practice and enables us to scale to accurately capture relations between objects across simulated and realistic scenes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI in the Cath Lab: Implications of Clinical AI-Enabled Assistance for Intravascular Ultrasound Procedures</title>
<link href="https://hdl.handle.net/1721.1/151469" rel="alternate"/>
<author>
<name>Borris, Mercer</name>
</author>
<id>https://hdl.handle.net/1721.1/151469</id>
<updated>2023-08-01T03:21:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">AI in the Cath Lab: Implications of Clinical AI-Enabled Assistance for Intravascular Ultrasound Procedures
Borris, Mercer
Clinical decision support tools enabled by artificial intelligence (AI) are slowly entering the medical field, emerging into a space not yet fully regulated by the FDA and with unclear impacts on both medical professionals and patients. Early AI-based clinical decision support systems in healthcare often focus on assisting tasks related to various modalities of medical imaging. Given the emphasis on the supportive nature of these tools, physicians maintain responsibility for the ultimate decisions. This thesis investigates the impact of an AI-enabled tool on clinical decision-making and task completion time in intravascular ultrasound (IVUS) workflows for coronary procedures. It is based on an experiment that was conducted with the support of Boston Scientific, where engineers have recently developed an exploratory AI model for annotating IVUS images as part of a strategic effort to improve patient outcomes and increase in-house AI knowledge. The experiment results show that completed tasks incorporating the tool’s AI-based calculations are 31% more accurate than the current standard, though this difference is not statistically significant. The AI-enabled tool also significantly reduces the time spent on IVUS workflow decision-making by 18%. A software architecture is also proposed to gather insights on physician-AI interaction and enable continuous monitoring and improvement of productionized AI algorithms.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Systems-Theoretic Analysis to Work Movement in Production Systems</title>
<link href="https://hdl.handle.net/1721.1/151468" rel="alternate"/>
<author>
<name>Barstow, John</name>
</author>
<id>https://hdl.handle.net/1721.1/151468</id>
<updated>2023-08-01T03:43:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Application of Systems-Theoretic Analysis to Work Movement in Production Systems
Barstow, John
In the aerospace industry, long product lifecycles, life extension programs, and highly specialized manufacturing capabilities combine to produce a challenge for Original Equipment Manufacturers (OEMs) in providing aftermarket support for fielded products. These factors drive the need for a work movement capability, where production of a product is physically relocated from one facility to another. The dynamics of work movement efforts are especially challenging when external suppliers are involved.&#13;
&#13;
This thesis presents the results of an application of Causal Analysis based on Systems Theory (CAST) to a loss of producibility following the movement of a production process from a supplier to an aerospace OEM. In addition to the principal analysis, background information is presented on the relevant production systems and processes, company structure, and the theoretical basis of the CAST process in Systems-Theoretic Accident Model and Processes (STAMP) and systems theory.&#13;
&#13;
In this thesis, CAST is shown to be an effective tool for the analysis of production systems through its ability to explain the loss of producibility that occurred following the work movement in question. In addition, the thesis demonstrates the utility of the STAMP control structure model for organizational analysis, and recommendations are provided to improve the functioning of production management organizations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Zanzibar Pizza Hut: Stone Town’s  Duckorated Sheds</title>
<link href="https://hdl.handle.net/1721.1/151466" rel="alternate"/>
<author>
<name>DeGiulio, Zachariah</name>
</author>
<id>https://hdl.handle.net/1721.1/151466</id>
<updated>2023-08-01T03:46:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Zanzibar Pizza Hut: Stone Town’s  Duckorated Sheds
DeGiulio, Zachariah
Zanzibar Pizza Hut: Stone Town’s Duckorated Sheds examines three cultural artifacts produced in Zanzibar’s Stone Town— Christ Church at the Old Slave Market, the kanga cloth, and the Zanzibar Pizza. These artifacts, which emerged in 1897, the early twentieth century, and the late 1980s, respectively, demonstrate a similar set of contradictions between what these objects’ suggested meaning is and what the conventions of naming imply, contradictions that produce what I’m calling Duckorated Sheds. Ultimately, the symbolic forms of these architectures have meanings that are obfuscated by the descriptions around them.  &#13;
&#13;
The shared salience of these cultural artifacts lies in the way they exist in and amplify multiple temporalities—knotting together the supposedly rupturing moments of the end of slavery, the inauguration of colonial power, and the late-millennium embrace of corporate multinational capitalism. The logics of Duckorated Sheds suggest less a rupturing event than a continuation of existing modes of thinking, being, and non-being—a continuation of slavery and of colonization in all its metastasized recapitulations. These objects ultimately lubricate the semiotic friction that occurs when a restructuring event alters the modes by which meaning is rendered. Zanzibar Pizza Hut takes these specific Duckorated Sheds and applies their logics to the design of a pavilion in Stone Town’s Forodhani Gardens, a colonial vestige that sits underutilized during the day but serves as the site for a food market in the evening, mainly geared towards tourists. Zanzibar Pizza Hut attempts to design for a variety of actors, all the while maintaining the awareness of the underlying continuities produced by the logic of the Duckorated Shed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Unified Approach to Controlling Implicit Regularization Using Mirror Descent</title>
<link href="https://hdl.handle.net/1721.1/151464" rel="alternate"/>
<author>
<name>Sun, Haoyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/151464</id>
<updated>2023-08-01T03:02:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Unified Approach to Controlling Implicit Regularization Using Mirror Descent
Sun, Haoyuan
Inspired by the remarkable performance of deep neural networks, understanding the generalization performance of overparameterized models and the effect of optimization algorithms on it has become an increasingly popular question. In particular, there has been substantial effort to characterize the solutions preferred by optimization algorithms such as gradient descent (GD), a phenomenon referred to as implicit regularization. For instance, it has been argued that GD tends to induce an implicit $\ell_2$-norm regularization in regression and classification problems. Despite significant progress in this space, the implicit biases of various algorithms are either specific to a particular geometry or only exist for a particular class of learning problems, and there is a lack of a general approach for controlling implicit regularization. To this end, we present a unified approach via mirror descent (MD), an important generalization of GD, to control implicit regularization in both regression and classification settings. In particular, we show that MD with a general class of homogeneous potential functions converges in direction to a generalized maximum-margin solution for linear classification problems, thereby answering an open question in the classification setting. Additionally, we show that under suitable conditions, MD can be efficiently implemented with minimal overhead compared to GD and enjoys fast convergence to the maximum-margin solution induced by its implicit bias. Using comprehensive experiments with both linear and deep neural network models, we demonstrate that MD is a versatile method for producing learned models with different regularizers, which in turn lead to different generalization performances.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rank2Reward: Learning Robot Reward Functions&#13;
from Passive Video</title>
<link href="https://hdl.handle.net/1721.1/151463" rel="alternate"/>
<author>
<name>Yang, Daniel Xin</name>
</author>
<id>https://hdl.handle.net/1721.1/151463</id>
<updated>2023-08-01T03:20:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Rank2Reward: Learning Robot Reward Functions&#13;
from Passive Video
Yang, Daniel Xin
Teaching robots novel skills with demonstrations via human-in-the-loop data collection techniques like kinesthetic teaching or teleoperation is a promising approach, but it puts a heavy burden of data collection on human supervisors and requires instrumentation for inferring states and actions. In contrast to this paradigm, it is often significantly easier to obtain visual data of tasks being performed. Ideally, this data can serve to guide robot learning for new tasks in novel environments, informing both what to do and how to do it. A powerful way to encode both what to do and how to do it in the absence of low-level states and actions is to infer a well-shaped reward function for reinforcement learning. The challenge is determining how to ground visual demonstration inputs into a well-shaped and informative reward function for reinforcement learning. To this end, we propose a technique, Rank2Reward, for learning behaviors from videos of tasks being performed, without access to any low-level states and actions. We do so by leveraging the videos to learn a reward function that measures incremental “progress” through a task by learning how to rank the video frames of a demonstration in order. By inferring an appropriate ranking, the reward function can quickly indicate when task progress is being made, guiding reinforcement learning to quickly learn the task in new scenarios. We demonstrate the effectiveness of this simple technique at learning behaviors directly from raw video on a number of tasks in simulation as well as several tasks on a real-world robotic arm.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient, Accurate, and Flexible PIM Inference through Adaptable Low-Resolution Arithmetic</title>
<link href="https://hdl.handle.net/1721.1/151461" rel="alternate"/>
<author>
<name>Andrulis, Tanner</name>
</author>
<id>https://hdl.handle.net/1721.1/151461</id>
<updated>2023-08-01T03:08:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Efficient, Accurate, and Flexible PIM Inference through Adaptable Low-Resolution Arithmetic
Andrulis, Tanner
Processing-In-Memory (PIM) accelerators have the potential to efficiently run Deep Neural Network (DNN) inference by reducing costly data movement and by using resistive RAM (ReRAM) for efficient analog compute. Unfortunately, overall PIM accelerator efficiency and throughput are limited by area/energy-intensive analog-to-digital converters (ADCs). Furthermore, existing accelerators that reduce ADC area/energy do so by changing DNN weights or by using low-resolution ADCs that reduce output fidelity. These approaches harm DNN accuracy and/or require costly DNN retraining to compensate.&#13;
&#13;
To address these issues, this thesis explores tradeoffs around ADC area/energy and develops optimizations that can reduce ADC area/energy without retraining DNNs. We use these optimizations to develop a new PIM accelerator, RAELLA, which can adapt its architecture to each DNN. RAELLA lowers the resolution of computed analog values by encoding weights to produce near-zero analog values, adaptively slicing weights for each DNN layer, and dynamically slicing inputs through speculation and recovery. Low-resolution analog values allow RAELLA to use efficient low-resolution ADCs and maintain accuracy without retraining, all while performing fewer ADC conversions.&#13;
&#13;
Compared to other low-accuracy-loss PIM accelerators, RAELLA increases energy efficiency by up to 4.9x and throughput by up to 3.3x. Compared to PIM accelerators that cause accuracy loss and retrain DNNs to recover, RAELLA achieves similar efficiency and throughput without expensive DNN retraining.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Cermet Fuel Thermal Margin with Thoria for Nuclear Thermal Propulsion</title>
<link href="https://hdl.handle.net/1721.1/151460" rel="alternate"/>
<author>
<name>Park, Gyutae</name>
</author>
<id>https://hdl.handle.net/1721.1/151460</id>
<updated>2023-08-01T04:02:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Increasing Cermet Fuel Thermal Margin with Thoria for Nuclear Thermal Propulsion
Park, Gyutae
Nuclear thermal propulsion (NTP) technology was identified by NASA in its Design Reference Addendum 5.0 as an alternative to chemical combustion propulsion systems for faster space travel. Potential improvements to NTP performance were considered by improving the fuel margin to melting point. A thorium-dioxide-stabilized, high-assay low-enriched uranium (HALEU) tungsten-uranium dioxide (W-UO2) CERMET-fueled NTP concept was produced based on the Space Capable Cryogenic Thermal Engine (SCCTE) reactor.&#13;
&#13;
Axial fuel thoria fraction adjustments to improve the fuel thermal margin and the reactor’s specific impulse were studied using a one-dimensional axial thermohydraulic analysis of an equivalent annulus model of the average fuel coolant channel. Based on the one-dimensional analysis, the fuel composition was adjusted, leading to a fuel mass decrease of 5.45 kilograms, an excess-reactivity reduction of 962 pcm, and an increased fuel margin to melting point of 740 K for the average fuel.&#13;
&#13;
Finally, three-dimensional computational fluid dynamics (CFD) models of the hottest fuel pin in the base and adjusted designs, with neutronics-informed three-dimensional fuel heating rates, were compared. The CFD analysis predicted fuel melting in the hottest pin of both designs, identifying a potential need for design adjustments beyond fuel composition. The suggested changes reduced the total melting volume by 10 percent. Thus, temperature-informed adjustment of the fuel thoria fraction offered improvements in the fuel margin to melting point.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Synergistic Partnership: Decision-Making for Green Energy Adoption in China Data Centers for Sustainable Business Development</title>
<link href="https://hdl.handle.net/1721.1/151459" rel="alternate"/>
<author>
<name>You, Zehao</name>
</author>
<id>https://hdl.handle.net/1721.1/151459</id>
<updated>2023-08-01T04:06:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Synergistic Partnership: Decision-Making for Green Energy Adoption in China Data Centers for Sustainable Business Development
You, Zehao
This thesis presents a critical analysis of strategic decision-making for green energy adoption in China's rapidly growing Internet Data Center (IDC) industry. The industry is grappling with pressing challenges around energy consumption, carbon emissions, and stringent regulatory pressures. This research bridges the fields of economics, business strategy, and environmental studies, providing a comprehensive view of the IDC industry's green transition within China's distinctive energy landscape.&#13;
&#13;
The research offers a holistic examination of the economic, environmental, and regulatory drivers prompting Chinese IDCs to integrate green energy. It elucidates a range of green energy strategies, highlighting potential benefits, inherent challenges, and the complex decision-making processes IDCs face. The centerpiece of this thesis is a novel hierarchical decision-making framework comprising six stages: demand analysis, supply analysis, regulatory compliance, financial considerations, technical feasibility, and risk management. It offers a comprehensive approach to strategy formulation, integrating diverse factors that influence green energy decisions.&#13;
&#13;
The utility and versatility of the proposed decision-making framework are illustrated through case studies of two companies in China, underscoring the flexibility of the framework in accommodating unique circumstances and strategic priorities, thereby illuminating the nuances of green energy decision-making.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technical and Commercial Feasibility Assessment of&#13;
Nuclear Microreactors as a Clean Energy Source for&#13;
Data Centers and Mining Sites</title>
<link href="https://hdl.handle.net/1721.1/151458" rel="alternate"/>
<author>
<name>Andrade Aparicio, Santiago</name>
</author>
<id>https://hdl.handle.net/1721.1/151458</id>
<updated>2023-08-01T03:31:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Technical and Commercial Feasibility Assessment of&#13;
Nuclear Microreactors as a Clean Energy Source for&#13;
Data Centers and Mining Sites
Andrade Aparicio, Santiago
Nuclear Microreactors are 1-10 MWe stand-alone, plug-and-play energy platforms that can supply electricity and heat from a small footprint. Factory-assembled and factory-fueled, they are compact enough to fit within ISO standard shipping containers. Proposed as a co-located source of clean energy, their potential applications span different industries, including mining and data centers. This low-carbon energy source can support industrial players’ transition away from fossil fuels. However, technical and commercial assessments are needed to understand their feasibility. Under the explored assumptions, Nuclear Microreactors appear to be cost-competitive across different scenarios. Simulations show that nuclear-based systems outperform diesel-based ones in Net Present Cost and Levelized Cost of Energy across most of the sensitivity cases tested. The results suggest that, when available, Nuclear Microreactors will be competitive and well suited for integration into existing renewable systems. Finally, Nuclear Microreactors in mining and data center operations appear to have large carbon abatement potential.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from pre-pandemic data to forecast viral&#13;
antibody escape</title>
<link href="https://hdl.handle.net/1721.1/151457" rel="alternate"/>
<author>
<name>Gurev, Sarah (Sarah Faye)</name>
</author>
<id>https://hdl.handle.net/1721.1/151457</id>
<updated>2026-02-02T14:06:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning from pre-pandemic data to forecast viral&#13;
antibody escape
Gurev, Sarah (Sarah Faye)
Effective pandemic preparedness relies on anticipating viral mutations that are able to evade host immune responses in order to facilitate vaccine and therapeutic design. However, current strategies for viral evolution prediction are not available early in a pandemic – experimental approaches require host polyclonal antibodies to test against, and existing computational methods draw heavily from current strain prevalence to make reliable predictions of variants of concern. To address this, we developed EVEscape, a generalizable, modular framework that combines fitness predictions from a deep learning model of historical sequences with biophysical structural information. EVEscape quantifies the viral escape potential of mutations at scale and has the advantage of being applicable before surveillance sequencing, experimental scans, or 3D structures of antibody complexes are available. We demonstrate that EVEscape, trained on sequences available prior to 2020, is as accurate as high-throughput experimental scans at anticipating pandemic variation for SARS-CoV-2 and is generalizable to other viruses including influenza, HIV, and understudied viruses with pandemic potential such as Lassa and Nipah. We provide continually updated escape scores for all current strains of SARS-CoV-2 and predict likely additional mutations to forecast emerging strains as a tool for ongoing vaccine development (evescape.org).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Customization is Key”: Four Characteristics of Textual Affordances for Accessible Data Visualization</title>
<link href="https://hdl.handle.net/1721.1/151456" rel="alternate"/>
<author>
<name>Jones, Shuli</name>
</author>
<id>https://hdl.handle.net/1721.1/151456</id>
<updated>2023-08-01T04:09:06Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">“Customization is Key”: Four Characteristics of Textual Affordances for Accessible Data Visualization
Jones, Shuli
Current best practices recommend using textual descriptions to make data visualizations accessible to blind and low vision (BLV) screen reader users. While recent research has explored laying out such descriptions hierarchically to enable reading varying levels of detail, the textual descriptions remain fixed: their syntax and semantics are set by the visualization author or tool, and cannot be changed by a BLV user based on their preferences or task-specific needs. In this thesis, I explore four characteristics of customizations for hierarchical textual descriptions of visualizations: presence, or what content is present in the description; verbosity, or the length and conciseness of the content; ordering, or the sequencing of content; and duration, or how long a particular customization lasts. I instantiate these characteristics as extensions to Olli, an open source library that converts web-based visualizations into hierarchical textual structures, and evaluate my work through a mixed-methods study with 13 BLV participants. Users reported that customization is crucial to their agency and that being able to change the four characteristics helps them efficiently carry out their desired tasks on the data. However, differences in preferred defaults, prior experiences, and enthusiasm for customization indicate that there is no one-size-fits-all system even for customization itself: both accessible data visualizations and user interfaces for customizing them must be flexible enough to meet a variety of needs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bi-directional Flyback Converter Circuit Design for&#13;
Flapping-wing Microrobots</title>
<link href="https://hdl.handle.net/1721.1/151455" rel="alternate"/>
<author>
<name>Chen, Shiqi</name>
</author>
<id>https://hdl.handle.net/1721.1/151455</id>
<updated>2023-08-01T04:10:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bi-directional Flyback Converter Circuit Design for&#13;
Flapping-wing Microrobots
Chen, Shiqi
Scientists and engineers have shown great interest in developing biologically inspired flapping-wing micro aerial robots that mimic the behavior of aerial insects. One of the major design challenges for sub-gram aerial robots is achieving power autonomy. Over the past decade, numerous designs of sub-gram aerial robots have been proposed, most of them using piezoelectric bimorph actuators. Recently, Professor Yufeng Chen’s group has designed and used dielectric elastomer actuators (DEAs) in their aerial micro-robots, which have achieved remarkable collision resilience and acrobatic maneuvers. However, powering the DEAs requires a much higher voltage than piezoelectric bimorph actuators. This work addresses the issue of powering these DEAs with minimal weight carried by the powering circuit itself. The bidirectional flyback topology is chosen and explored, and a complete simulation and analysis are presented. It is shown that with careful component selection and pulse frequency modulation, the bidirectional flyback converter can achieve much lower power consumption than a tapped boost circuit while maintaining the same or better level of performance.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Case Studies in Differential Privacy for Computer&#13;
Networking Research</title>
<link href="https://hdl.handle.net/1721.1/151454" rel="alternate"/>
<author>
<name>Meles, Amelia</name>
</author>
<id>https://hdl.handle.net/1721.1/151454</id>
<updated>2023-08-01T03:09:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Case Studies in Differential Privacy for Computer&#13;
Networking Research
Meles, Amelia
We conduct two case studies on the use of differential privacy in computer networking research: private analysis of 1) Internet performance measurements from the Measuring Broadband America dataset and 2) flow-based network traces from the NF-UNSW-NB15 NetFlow dataset. We survey two open-source tools for this analysis, Ektelo and Tumult Analytics, and evaluate the experience for a data practitioner at each step of designing a differentially private statistical release with each of these tools. In Ektelo, we assess the privacy versus utility trade-off for five algorithms (Identity, H2, HB, GreedyH, and DAWA) and provide examples of context-specific utility functions and post-processing techniques for the Internet measurement data.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Morphological and Electrical Change of Driven Quantum Dot LEDs</title>
<link href="https://hdl.handle.net/1721.1/151453" rel="alternate"/>
<author>
<name>Geng, Jamie</name>
</author>
<id>https://hdl.handle.net/1721.1/151453</id>
<updated>2023-08-01T03:43:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Morphological and Electrical Change of Driven Quantum Dot LEDs
Geng, Jamie
Quantum dot LEDs (QD-LEDs) are an emerging technology that promises brighter, more color-pure, higher-color-gamut displays. In recent years, indium phosphide quantum dots have become one of the most promising device emitters due to their lower toxicity. However, InP QD-LED devices require further development, especially in the area of long-term stability, to be viable on the market.&#13;
&#13;
In this thesis, we present an exploration of degradation mechanisms via device cross-sectioning and TEM analysis. We show that quantum dots in QD-LEDs coarsen and oxidize upon driving by measuring space-resolved chemical composition and bandgap. We validate that this manifests electrically as increased resistance. Additionally, we studied hydrogen doping of the ZnMgO electron transport layer, which is known to improve QD-LEDs during operation. We performed in-situ TEM to simultaneously hydrogen dope and image drop-cast ZnMgO nanoparticles, and show that they coarsen through this process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Debug Tutor: Automated Deliberate Debugging&#13;
Practice for Undergraduate Programmers</title>
<link href="https://hdl.handle.net/1721.1/151452" rel="alternate"/>
<author>
<name>Ecanow, Gabrielle E.</name>
</author>
<id>https://hdl.handle.net/1721.1/151452</id>
<updated>2023-08-01T04:16:06Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Debug Tutor: Automated Deliberate Debugging&#13;
Practice for Undergraduate Programmers
Ecanow, Gabrielle E.
Novice programmers struggle with debugging. Despite a rich literature of research on the effectiveness of teaching debugging, debugging is often not taught systematically in computer science curricula. This thesis presents the Debug Tutor, an automated debugging tutor for explicit debugging practice at the college level. The Debug Tutor’s suite of exercises drills particular microskills essential for competent debugging, and it offers automated expert hints and feedback by observing students’ debugging actions in real time. The Debug Tutor was incorporated into MIT’s undergraduate Software Construction course (6.102, formerly 6.031) in the Spring 2023 term. The Debug Tutor’s effectiveness at teaching important low-level debugging skills was investigated by analyzing exercise completion statistics and subsequent debugging-related quiz scores of the over 500 MIT students enrolled in the undergraduate course. The analysis revealed that completing Debug Tutor exercises was positively correlated with performance on debugger-related exam questions, regardless of students’ prior comfort levels with using a debugger. Furthermore, the software design of the Debug Tutor as a tutoring architecture with event tracking support was shown to be robust in capturing specific student actions to compare against exercise event patterns, flexible enough to handle a wide range of unexpected action sequences and on-the-fly updates, and extensible to domains other than the use of the debugger.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Space Race: Progress in Algorithm Space Complexity</title>
<link href="https://hdl.handle.net/1721.1/151451" rel="alternate"/>
<author>
<name>Rome, Hayden</name>
</author>
<id>https://hdl.handle.net/1721.1/151451</id>
<updated>2023-08-01T03:18:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Space Race: Progress in Algorithm Space Complexity
Rome, Hayden
This paper presents the first broad survey of the space complexities of algorithms for important problems in computer science, analyzing more than 800 algorithms for different problem families, and comparing the different algorithms for each of these problem families. The survey reveals the increasing importance of space complexity in recent years and discusses its relationship with time complexity. Our findings reveal an increasing trend in the percentage of algorithm papers that include space complexity analysis. We identify an increasing trend in the percentage of problem families with asymptotic time-space tradeoffs. Additionally, we find that the few problem families that see improvements in space complexity have typically improved at rates faster than the improvement rates of DRAM access speed and DRAM capacity. Under the right conditions, these algorithmic improvements to space complexity can be much more important than hardware improvements when considering computational speedups related to data accesses. This study sheds light on the space complexity of algorithms and contributes to a better understanding of the relationship between time and space complexities. We have also uploaded the space complexity work for this paper to our website, The Algorithm Wiki, to serve as a useful resource for theorists and practitioners alike.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Large Language Models for Robot&#13;
Navigation and Scene Understanding</title>
<link href="https://hdl.handle.net/1721.1/151450" rel="alternate"/>
<author>
<name>Chen, William</name>
</author>
<id>https://hdl.handle.net/1721.1/151450</id>
<updated>2023-08-01T03:10:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Applications of Large Language Models for Robot&#13;
Navigation and Scene Understanding
Chen, William
Common-sense reasoning is a key challenge in robot navigation and 3D scene understanding. Humans tend to reason about their environments in abstract terms, with a wealth of common sense on object and spatial relations to back up such inferences. Thus, if robots are to see widespread deployment, they must also be able to reason with such knowledge to support tasks specified in those terms. As modern language models trained on large text corpora encode much worldly knowledge, we investigate methods for extracting common sense from such models for use in non-linguistic, semantically grounded robotics tasks. We start by examining how language models can be used for attaching abstract room classes to locations based on visual percepts and lower-level object classes, commonly generated by spatial perception systems. We detail three language-only approaches (zero-shot, embedding-based, and structured language) as well as two vision-and-language approaches (zero-shot and fine-tuned), finding that language-leveraging systems outperform both standard pure-vision and scene graph neural classifiers while yielding impressive generalization and transfer abilities. We then consider a simple robot semantic navigation task to see how an agent can act upon prior knowledge encoded within language models in order to find goal objects by reasoning about where such objects can be found. Our framework, Language Models as Probabilistic Priors (LaMPP), uses the language model to fill in parameters of standard probabilistic graphical models. We also touch upon use cases outside of robotics, namely semantic segmentation and video action segmentation. Lastly, we show how common-sense knowledge can be extracted from language models and encoded in abstract spatial ontology graphs. We measure how well language model scores align with human common-sense judgements regarding object and spatial relationships.
Ultimately, we hope this work paves the way for more advanced robot semantic scene understanding and navigation algorithms that leverage language models.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolutionary Debris Modeling of LEO and Cis-Lunar Space</title>
<link href="https://hdl.handle.net/1721.1/151449" rel="alternate"/>
<author>
<name>Pasiecznik, Celina</name>
</author>
<id>https://hdl.handle.net/1721.1/151449</id>
<updated>2023-08-01T03:06:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evolutionary Debris Modeling of LEO and Cis-Lunar Space
Pasiecznik, Celina
Space debris can be detrimental to missions in any orbital regime. With the advent of large satellite constellations in Low Earth Orbit (LEO) and planned return missions to the Moon, the risk created by fragmentation events in both LEO and cis-lunar space motivates an analysis of space debris evolution in these regions. Source-sink models allow for the study of debris evolution by considering various sources and sinks of debris, including atmospheric drag and fragmentation events. In this thesis, the evolution of the LEO environment is studied using a source-sink model with a variety of launch cases, including static and dynamic launch rates. A dynamical systems analysis is applied to the model to assess the stability of the LEO environment, finding stable equilibrium points for certain launch rates. Additionally, perturbations to the equilibrium state of the source-sink model are studied to determine the population of objects that trigger Kessler syndrome, and a new measure for orbital capacity is proposed. A calibrated explosion model is implemented in the source-sink model and an improved post-mission disposal model for satellites and rocket bodies is proposed. Possible improvements and current limitations of the source-sink model are explored, and the model’s predictions are validated against ESA’s DELTA model using 200-year-long simulations with a No-Further-Launch case and an extrapolated launch case. The fragmentation analysis of orbiting objects that was conducted for the LEO environment is extended to a case study in cis-lunar space. The explosion model is implemented for a spacecraft in a Near-Rectilinear Halo Orbit around the Moon. The evolution of debris is studied in the Circular-Restricted Three-Body Problem, providing insight into the danger space debris poses to future missions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Influence of Turbofan Engine Design Parameters on Aircraft Environmental Impact</title>
<link href="https://hdl.handle.net/1721.1/151448" rel="alternate"/>
<author>
<name>Lee, Kanghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/151448</id>
<updated>2023-08-01T03:52:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Influence of Turbofan Engine Design Parameters on Aircraft Environmental Impact
Lee, Kanghyun
Despite the emergence of new energy carriers and propulsion system architectures, turbofan engines power the majority of commercial aircraft. Therefore, aviation’s environmental impacts are significantly influenced by the design of these turbofan engines. Hence, the design of modern turbofan engines should be informed by each design parameter’s effect on environmental implications, namely climate and air quality impacts. To understand the connection between engine design parameters and an aircraft’s environmental impact, it is important to be able to quantify the environmental impact resulting from a combined “Aircraft-Engine-Operation” scenario. By modeling and connecting aircraft, engines, flight operations, emissions, and their resulting impacts on climate and air quality, we can link the end-to-end impact propagation chain and evaluate the outcomes of any engine design alteration. We investigate free design variables such as overall pressure ratio (OPR), fan pressure ratio (FPR), and turbine entry temperature (TET), as well as technology level indicators such as component efficiencies, cooling, and material temperature capability. Sensitivities are calculated for three different reference engines, and the differences in trends between the engines are analyzed. The influence of external-to-aviation uncertainties and valuation choices is also illustrated. Comparisons between Jet-A and different sustainable aviation fuels (SAFs) are conducted from an environmental and societal point of view. The study also explores how the derived influence coefficients or sensitivities can provide valuable guidance to stakeholders when making decisions regarding technological investments, design space changes, or regulatory assessments.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Matching Individual Environmental, Social and Governance Revealed Preferences with Investment Portfolios</title>
<link href="https://hdl.handle.net/1721.1/151447" rel="alternate"/>
<author>
<name>Berner Bensan, Rodrigo</name>
</author>
<id>https://hdl.handle.net/1721.1/151447</id>
<updated>2023-08-01T04:12:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Matching Individual Environmental, Social and Governance Revealed Preferences with Investment Portfolios
Berner Bensan, Rodrigo
Environmental, Social, and Governance (ESG) investing has become increasingly popular in the last decade. However, portfolio managers face challenges in creating customized portfolios that align with investors' ESG preferences due to varying interpretations of sustainable investing and the lack of accurate information. This research shows that different ESG preferences can be measured and represented in different investment portfolios. Using a web platform called ESG Machine, individual ESG preferences and rationality were measured through a gamified experiment based on the theory of Revealed Preferences. An adapted Constant Elasticity of Substitution utility function was then calculated to estimate individuals’ preferences and substitution parameters across seven different ESG categories. Finally, using Robeco's SDG Scores, individuals’ portfolios and a "social portfolio" were created in different scenarios to perform the analysis.&#13;
&#13;
The research found that individuals prioritize personal values and goals when making decisions, leading to significant variation in ESG preferences. Individuals tend to allocate resources in a way that maximizes their overall utility based on their preferences when stimuli such as matching donations are presented, rather than following an egalitarian distribution approach across all the ESG categories. Furthermore, individuals tend to choose portfolios that are either equal or very similar to a social portfolio when prioritizing a small number of companies, even though their perceived utility differs significantly. Nevertheless, as the number of ESG categories considered in the utility function or the portfolio size increases, different individual preferences lead to different investment portfolios. Therefore, a single social ESG portfolio that accommodates the social preferences of all individuals won’t necessarily represent a significant portion of them.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven clustering for new garment forecasting</title>
<link href="https://hdl.handle.net/1721.1/151446" rel="alternate"/>
<author>
<name>Luciano Rivera, Gianpaolo</name>
</author>
<id>https://hdl.handle.net/1721.1/151446</id>
<updated>2023-08-01T03:44:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Data-driven clustering for new garment forecasting
Luciano Rivera, Gianpaolo
The ability to detect patterns early in the design process is critical for fashion firms to make decisions, particularly given the speed at which new garments are introduced. Traditionally, most garment-defining features, such as shape, color, and fit, were only used by designers and buyers since the data was intractable for a computer. By using natural language processing (NLP) techniques that preserve semantics, in combination with traditional data mining, we unlock the potential to use these garment characteristics and embed them in a numerical space that is tractable. Using this novel approach to fashion data, this thesis develops two custom algorithms to forecast the size-curve distribution of a new garment. This task is achieved by automatically finding a set of comparables among previous garments and leveraging the known results to make predictions. We develop and implement two main algorithms, Cluster-While-Regress (CWR) and k-Nearest Neighbours (kNN), and show that with enough data the algorithms should achieve human-level accuracy and automate the comparables-finding process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bespoke Design Meets Systems at Scale: A Design Study with Judy Heumann</title>
<link href="https://hdl.handle.net/1721.1/151445" rel="alternate"/>
<author>
<name>Ahn, Grace S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151445</id>
<updated>2023-08-01T04:00:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bespoke Design Meets Systems at Scale: A Design Study with Judy Heumann
Ahn, Grace S.
This thesis documents a bespoke design process for disability; it also establishes its relevance to population-scale systems in caregiving and health technology spaces. As people with disabilities are the largest minority group and one of the most underrepresented in the world, it is crucial to recognize their unique challenges and address them through inclusive design practices. To do so, the study explores the sleep needs of Judy Heumann, a wheelchair user with evolving medical needs for assistive technology and a world-renowned civil rights activist on disability inclusion. Heumann’s daily sleep routine involved thirty minutes of building an intricate elevation device to support her lower body. If any part of the device was not properly tuned to her comfort levels, Heumann was unable to sleep. The study utilizes design thinking methodologies to deliver a working prototype that meets her functional needs and alleviates recurring pain points. The final thesis deliverable is a bespoke prototype for Heumann, integrating concepts from biomedical technologies and custom home adaptations. The prototype resembles an intuitive, origami-like setup including adjustable and collapsible features for comfort and travel. By using a design-for-one framework, the final prototype meets Heumann’s material sleep needs and simultaneously reveals common pain points in systems where caregiving and health technology meet.&#13;
&#13;
Concurrent with prototyping, the research expanded to other wheelchair users to investigate their overlapping and unique needs. Interviews revealed insightful latent needs and accompanying systems upstream and outside of the product context, including the economics and supply of human staffing and an evaluation of where smart home technology is heading. The aim of the thesis is to provide hybrid insights that blend technology and human services, where technology alleviates tedious burdens and humans can be empowered in areas of connection and agency.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Guided Vehicles for Material Flow in Fulfillment Centers</title>
<link href="https://hdl.handle.net/1721.1/151444" rel="alternate"/>
<author>
<name>Thomas Wilson, Kaya</name>
</author>
<id>https://hdl.handle.net/1721.1/151444</id>
<updated>2023-09-08T03:58:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Automated Guided Vehicles for Material Flow in Fulfillment Centers
Thomas Wilson, Kaya
The eCommerce industry utilizes fulfillment centers for product inventory, order packaging, and distribution. The fulfillment process at Amazon has been highly automated within their Amazon Robotics (AR) Sortable facilities, mainly within the inventory and order picking processes.&#13;
&#13;
Although there has been significant progress in introducing technology into the fulfillment processes, there are still several opportunities for further integration. This work proposes the integration of automated guided vehicles (AGVs) in Amazon's fulfillment centers (FCs) to improve process efficiency, labor utilization, and employee safety.&#13;
&#13;
Using the Six Sigma DMAIC method, the process path of the Transport Support associate was selected as a focus because of the manual labor involved in the role, which often requires moving empty material throughout the facility. The improved process path is proposed with the integration of AGVs and modeled using a process-based discrete-event simulation framework.&#13;
&#13;
The specific hardware and software requirements for an AGV to fit the proposed process path result in a recommendation for a small packet AGV that utilizes LiDAR scanners and vision-based navigation technology. The simulation results indicate that the integration of AGVs in the Inbound Stow process can increase individual throughput by 4-6% per shift per associate and reduce total idle time. The results demonstrate the potential for AGVs to improve the productivity of FCs while contributing to reducing potential work-related injuries. The work concludes that AGVs can improve FC operations in the short and long term, with the potential for significant labor cost savings.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Atmospheric Saliency of Space Debris Reentries: Estimating Distribution, Lifetime and Radiative Forcing of Reentry-Ablated Alumina</title>
<link href="https://hdl.handle.net/1721.1/151443" rel="alternate"/>
<author>
<name>Jain, Asha Kailin</name>
</author>
<id>https://hdl.handle.net/1721.1/151443</id>
<updated>2023-08-01T04:06:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">On the Atmospheric Saliency of Space Debris Reentries: Estimating Distribution, Lifetime and Radiative Forcing of Reentry-Ablated Alumina
Jain, Asha Kailin
As the space economy grows, numerous satellite operators are looking to build megaconstellations in Low Earth Orbit (LEO). These megaconstellations are expected to have hundreds to thousands of satellites operating at altitudes from 350 km to 650 km. Once built, many of these constellations will require satellite replenishment to replace spent satellites, leading to a continuous flow of new satellites and rocket bodies into LEO.&#13;
&#13;
To make room for future satellites, decommissioned space objects are often removed from LEO via atmospheric reentry. During reentry, unshielded satellites and rocket bodies experience extreme heating loads and material ablation, depositing small metallic particles in the atmosphere. These particles can remain suspended in the atmosphere and interact with important atmospheric processes. However, the salience of these particles and their atmospheric effects are unknown.&#13;
&#13;
To address this gap, this thesis estimates the atmospheric consequences of reentry-ablated alumina, characterizing its distribution, lifetime and radiative effect using a state-of-the-art atmospheric model. We consider a future scenario where all of the megaconstellations with public Federal Communications Commission filings are deployed and maintained, leading to a steady flux of 13,900 satellite reentries and 500 rocket body reentries per year by 2040. As a first-order, conservative approximation, this work finds that reentries in this scenario produce alumina particles that persist in the atmosphere for one to two years, leading to a modest radiative forcing of approximately \(-0.2\ \mathrm{mW/m^2}\). We present various metrics to normalize this radiative forcing and compare these metrics across other industries. Reentries produce a stronger radiative forcing per reentry event than aviation produces per flight. We conclude that future work is necessary to increase the fidelity of our results and better understand the full scope of atmospheric consequences of reentry-ablated alumina.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global Localization and Guided Relocalization in&#13;
Unstructured Environments using Semantic Objects</title>
<link href="https://hdl.handle.net/1721.1/151440" rel="alternate"/>
<author>
<name>Pedlow, Jacqueline</name>
</author>
<id>https://hdl.handle.net/1721.1/151440</id>
<updated>2023-08-01T04:17:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Global Localization and Guided Relocalization in&#13;
Unstructured Environments using Semantic Objects
Pedlow, Jacqueline
This thesis presents a novel framework for global localization and guided relocalization of a vehicle in an unstructured environment. Compared to existing methods, this pipeline does not rely on cues from urban fixtures (e.g., lane markings, buildings), nor does it assume that the vehicle is navigating on a road network. Instead, localization is achieved in both urban and non-urban environments by robustly associating and registering the vehicle’s local semantic object map with a compact semantic reference map, potentially built from other viewpoints, time periods, or modalities. Robustness to noise, outliers, and missing objects is achieved through a graph-based data association algorithm. Further, the guided relocalization capability of the pipeline mitigates the drift inherent in odometry-based localization after the initial global localization. The pipeline is evaluated on two publicly available, real-world datasets to demonstrate its effectiveness at global localization in both non-urban and urban environments. The Katwijk Beach Planetary Rover dataset [17] is used to exemplify the pipeline’s ability to perform accurate global localization in unstructured environments, with an accuracy as low as 0.58 m. Demonstrations on the KITTI dataset [15] achieve an average pose error of 3.8 m across all 35 localization events on Sequence 00 when localizing in a reference map created from aerial images. Compared to existing works, this pipeline is more generalizable because it can perform global localization in unstructured environments using maps built from different viewpoints and dates.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Defining Core Manufacturing Capabilities at Raytheon Missiles &amp; Defense</title>
<link href="https://hdl.handle.net/1721.1/151439" rel="alternate"/>
<author>
<name>Stuart, Thomas R.</name>
</author>
<id>https://hdl.handle.net/1721.1/151439</id>
<updated>2023-08-01T04:09:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Defining Core Manufacturing Capabilities at Raytheon Missiles &amp; Defense
Stuart, Thomas R.
Raytheon Technologies makes a diverse array of products ranging from microelectronic components to fully integrated exo-atmospheric missiles and jet engines. In order to better rationalize its operations strategy, the organization is implementing a methodology to identify its manufacturing technologies and products that are (i) most financially attractive and (ii) most technically complex. With respect to technical complexity, Raytheon Technologies would like to prioritize production of technically complex products, all else being equal. This aligns with the company’s competitive strategy and value proposition to its customers. This thesis examines two alternative methods by which Raytheon Missiles and Defense, a business unit of Raytheon Technologies, can measure the (i) financial attractiveness and (ii) technical complexity of its products or manufacturing technologies. Four products, supporting the same program (referred to as Program X), are then measured with both methods as part of a pilot study. The results of the pilot study are used to assess the reasonableness of each methodology and to compare the two methods.&#13;
&#13;
The first method, referred to as the RTX Framework, uses metrics developed by Raytheon Technologies. The technical metrics are (T1) Product Impact to End Item, (T2) Future Demand, (T3) Manufacturing Complexity, (T4) Sourcing Alternatives, and (T5) Intellectual Property. Each of these metrics is measured, scored based on a projection of each measure, and then combined via a weighted average into a single technical score. The financial metrics are (F1) Operations Labor Cost, (F2) Operations Labor Cost Certainty, (F3) Capital Invested, (F4) Manufacturing Utilization, and (F5) Scrap, Rework and Repair costs. Again, each of these metrics is measured, scored based on a projection of each measure, and then combined via a weighted average into a single financial score. The author develops the approach by which each of these metrics can be measured, measures the pilot study products with these metrics, and, most importantly, analyzes and critiques each metric. The author utilizes a Delphi survey methodology to conduct technical assessments of the Product Impact to End Item and Manufacturing Complexity metrics. Subject matter expert interviews are used to measure the Sourcing Alternatives and Intellectual Property metrics. Business system data are used to measure the Future Demand metric. Data from business systems are used to measure the five financial metrics. The analysis reveals that the financial metrics are either potentially misleading or suffer from logical fallacies such as giving weight to sunk costs. The analysis reveals stronger justification for the technical metrics.&#13;
&#13;
Based on these insights, the author develops a second method, referred to as the Alternative Framework. It is based, in part, on Baldwin and Clark’s work regarding the economic value of system modules. It functions as a three-level decision tree. At the first level, the product (or manufacturing technology) is measured with the Product Impact to End Item, Manufacturing Complexity, and Intellectual Property technical metrics. A sufficiently high score in any of these three metrics designates the product as “core” from a technical perspective. At the second level, the Sourcing Alternatives metric is measured only if the business wants to consider moving the product or manufacturing technology to another location within or outside of the company. This is a strategic decision that must consider factors beyond what can be accounted for in a framework. At the third level, if suitable production alternatives are found to exist, then the Alternative Framework compares the alternatives and status quo via a net present value comparison. Essentially, the Alternative Framework eliminates the financial metrics inherent in the RTX Framework and instead relies on a net present value comparison at the end of the process.&#13;
&#13;
A comparison of the pilot study results obtained by using the two methods indicates that the Alternative Framework yields reasonable results and avoids the pitfalls of the financial metrics used in the RTX Framework. By comparison, the RTX Framework results appear to be reasonable from a technical perspective, but are potentially problematic with respect to the financial metrics. The Alternative Framework is the primary output of this document and its approach should be extendable to organizations other than Raytheon that are interested in developing highly technical manufacturing capabilities as a source of competitive advantage.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An algorithm for characterizing&#13;
context-governed speech production patterns</title>
<link href="https://hdl.handle.net/1721.1/151438" rel="alternate"/>
<author>
<name>Torres, Deborah Cheron</name>
</author>
<id>https://hdl.handle.net/1721.1/151438</id>
<updated>2023-08-01T03:14:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An algorithm for characterizing&#13;
context-governed speech production patterns
Torres, Deborah Cheron
Speech recognition and analysis can be improved by methods that effectively characterize important speech patterns of a speaker without requiring hours of data. This thesis defines a method by which key contexts related to systematic speech modification can be used to create a profile of the speech produced by a speaker. Using acoustic and prosodic information, contexts that create the potential for speech modifications can be specified. Then, by filtering the speech a speaker produces in the targeted contexts, the patterns of speech production in these contexts can be characterized. The likely underlying contexts associated with these productions can then be used to enhance speech recognition when those contexts arise in new speech.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Testing for Subtle Cognitive Impairments in a Clinically Informed iPad Platform</title>
<link href="https://hdl.handle.net/1721.1/151429" rel="alternate"/>
<author>
<name>Ascanio Alino, Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/151429</id>
<updated>2023-08-01T03:08:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Testing for Subtle Cognitive Impairments in a Clinically Informed iPad Platform
Ascanio Alino, Maria
Early detection of cognitive impairment enables more effective treatments and better outcomes. Despite advances in cognitive assessments, traditional assessment methods are often time-consuming and require extensive medical expertise, making it challenging to reach many patients and limiting data availability. This thesis explores an innovative solution: an advanced, unified iPad platform designed to administer tests in a way that embodies some of the skills of a practiced clinician. The platform provides faster, self-administered medical assessments with granular testing information, allowing for early detection of cognitive impairment. This application streamlines the assessment process and increases patient access, providing valuable data for ongoing research and treatment development. In addition, the platform’s ease of use, interactivity, and accessibility make it a valuable tool for both medical professionals and patients, as it embodies some clinical expertise, enabling it to interact in ways similar to a human examiner. This thesis provides an in-depth examination of this cutting-edge platform, describing its benefits and its potential to improve the early detection of cognitive impairment.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Manufacturing Integration: Managing Throughput and Organizational Change</title>
<link href="https://hdl.handle.net/1721.1/151428" rel="alternate"/>
<author>
<name>Tomasovic, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/151428</id>
<updated>2023-08-01T03:08:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Manufacturing Integration: Managing Throughput and Organizational Change
Tomasovic, Jacob
Private equity-owned manufacturing companies frequently buy competitors to bolster revenue and capture market share within their disciplines. Decisions regarding facilities, employees, and manufacturing flows are made as demand rises, synergies are pursued, and costs are subjected to greater scrutiny. Companies in this situation begin to make changes to maintain growth and continue to excel in their specialties.&#13;
&#13;
This research focuses on implementing lean methodologies in a manufacturing space and recommends various improvements at a single manufacturing site. With demand increasing by nearly 100%, the researcher proposes a shift in manufacturing strategy that decreases cycle times and increases overall throughput through the facility. Capacity analyses and manufacturing layout improvements, in addition to the manufacturing strategy, are explored through various departments of the product build process. The product in this case is an outdoor shade system.&#13;
&#13;
This research is still ongoing. However, the proposals in this investigation serve as a guide for the integration of two manufacturing companies and three facilities into one company servicing customers out of a single manufacturing site. It demonstrates that future product forecasts can be achieved with improved manufacturability and flow.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning the Language of Antibody Hypervariability&#13;
Through Biological Property Prediction</title>
<link href="https://hdl.handle.net/1721.1/151427" rel="alternate"/>
<author>
<name>Im, Chiho</name>
</author>
<id>https://hdl.handle.net/1721.1/151427</id>
<updated>2023-08-01T03:24:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning the Language of Antibody Hypervariability&#13;
Through Biological Property Prediction
Im, Chiho
Machine learning-based protein language models (PLMs) have proven to be successful in a variety of structure and function-prediction contexts. However, foundational PLMs (those trained on the corpus of all proteins) rely on evolutionary co-conservation of protein sub-sequences, but this distributional hypothesis does not hold for antibody hypervariable regions. Consequently, methods like AlphaFold 2 have relatively weak performance on antibody sequences. In this work, we propose AbMAP (Antibody Mutagenesis-Augmented Processing), a new transfer learning framework that fine-tunes foundational models specifically for antibody-sequence inputs by supervising on examples of antibody structure and binding specificity. We demonstrate how our feature representations can be applied to the accurate prediction of an antibody’s local and global 3D structures, mutational effects on antigen binding specificity, as well as identification of its paratope. The scalability of AbMAP newly enables large-scale analysis of human antibody repertoires. We find that the AbMAP representations of individual repertoires have remarkable overlap, more so than can be discerned by sequence analysis. Our findings provide robust evidence in support of the hypothesis that antibody repertoires across individuals converge towards similar structural and functional coverage. We anticipate AbMAP will accelerate efficient and effective design and modeling of antibodies and expedite antibody-based therapeutics discovery.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Working with APIs in App Inventor</title>
<link href="https://hdl.handle.net/1721.1/151426" rel="alternate"/>
<author>
<name>Tabunshchyk, Viktoriya</name>
</author>
<id>https://hdl.handle.net/1721.1/151426</id>
<updated>2023-08-01T03:06:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Working with APIs in App Inventor
Tabunshchyk, Viktoriya
App Inventor is a widely used tool that lets inexperienced developers learn to program for the first time with block-based coding. Although it offers a wide range of capabilities for building apps, there has previously been no simple way for users to take advantage of the immense library of online tools available through APIs. This work implements a new framework in App Inventor that allows users to import and use any public API they find on the web, provided its specification follows a standardized format. After the implementation of this work, a small study was conducted with high school students, who built apps that used the weather and OpenAI APIs to create their own weatherman app with ChatGPT. The study showed that this new API framework both strengthens students’ interest in programming and in careers involving programming, and boosts their confidence in their ability to create their own innovative apps from the ground up.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Privacy-Preserving Transferable Video  Representations</title>
<link href="https://hdl.handle.net/1721.1/151425" rel="alternate"/>
<author>
<name>Zhong, Howard</name>
</author>
<id>https://hdl.handle.net/1721.1/151425</id>
<updated>2023-08-01T04:08:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning Privacy-Preserving Transferable Video  Representations
Zhong, Howard
Pretraining on massive video datasets has become essential to achieve high action recognition performance on smaller downstream datasets. However, most large-scale video datasets are accompanied by issues related to privacy, ethics, and data protection, often preventing them from being publicly shared with the community for reproducible research. Existing work has attempted to alleviate these problems by blurring faces, downsampling videos, or training on synthetic data. On the other hand, analysis of the transferability of privacy-preserving pretrained models to downstream tasks has been limited. In this work, we study this problem by first asking the question: can we pretrain models for human action recognition with data that does not include humans? To this end, we present, for the first time, a benchmark that leverages real-world videos with humans removed and synthetic data containing virtual humans to pretrain a model. We then evaluate the transferability of the representation learned on this data to a diverse set of downstream action recognition datasets. Furthermore, we propose a novel pretraining strategy, called Privacy-Preserving MAE-Align, to effectively combine synthetic data and human-removed real data. Compared to previous baselines, our approach reduces, by a large margin, the performance gap between human and no-human action recognition representations on downstream tasks. Our benchmark, code, and models will be made publicly available.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling of Nanocryotron Superconducting Logic</title>
<link href="https://hdl.handle.net/1721.1/151424" rel="alternate"/>
<author>
<name>Foster, Reed A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151424</id>
<updated>2023-08-01T03:49:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Scaling of Nanocryotron Superconducting Logic
Foster, Reed A.
This thesis presents the design and characterization of a superconducting shift register based on nanocryotrons. Such a shift register has applications in nanocryotron circuit testing as well as integrated readout and memory for high count rate imagers based on superconducting nanowire single photon detectors (SNSPDs). Characterization of the shift register shows that it can readily operate in large external magnetic fields that would present a challenge to Josephson-junction-based superconducting technologies. Furthermore, analysis of the input ranges that produce correct operation in a small experimental device suggests that such a circuit may be scalable to millions of nanocryotrons.&#13;
&#13;
A device with a million nanocryotrons would be several orders of magnitude larger than any existing digital circuit based on superconducting nanowires. Development of circuits with more than just a few nanocryotrons has been limited in part due to the difficulty in testing and characterizing these superconducting devices. The absence of standard, well-tested nanocryotron circuits puts the burden of testing on conventional room-temperature electronics such as oscilloscopes and arbitrary waveform generators. However, limited flexibility of on-board computation for preprocessing data handicaps the ability of such systems to characterize larger scale circuits. To address this challenge, this thesis presents a design of an analog frontend for interfacing superconducting circuits with a high speed field-programmable gate array (FPGA) that could automate these tests.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Concept-based Analysis of Dark Patterns in User&#13;
Interface Design</title>
<link href="https://hdl.handle.net/1721.1/151423" rel="alternate"/>
<author>
<name>Xiong, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/151423</id>
<updated>2023-08-01T04:04:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Concept-based Analysis of Dark Patterns in User&#13;
Interface Design
Xiong, Katherine
In this thesis, we present a new theory of dark patterns that aims to move the literature away from fragmented, top-down analysis of systems through subjective dark patterns taxonomies towards a more objective and generalizable framework. Focusing on user interface related dark patterns, we propose a single similarity that they all have: the user interface misrepresents the underlying conceptual functionality of the system in a way that benefits the system owner at the cost of the user. We then present a domain- and modality-independent framework that breaks systems down into isolated, reusable units of functionality called concepts and systematically codifies user expectations for each concept and how it should be mapped to the user interface. We illustrate our framework on three popular concepts in e-commerce applications that a significant number of dark patterns stem from: Catalog, ShoppingCart, and MailingList. We show that the framework design allows us to apply our dark pattern definition in an objective manner, while capturing dark patterns previously identified in the literature related to these concepts and being extensible to analyzing new designs in the future. We conclude with possible use cases of the framework for researchers, industry designers, and legislators.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Liquid News - A Semantic-Relational Model for&#13;
Enhanced Understanding</title>
<link href="https://hdl.handle.net/1721.1/151422" rel="alternate"/>
<author>
<name>Haile, Dagmawi Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/151422</id>
<updated>2023-08-01T03:23:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Liquid News - A Semantic-Relational Model for&#13;
Enhanced Understanding
Haile, Dagmawi Samuel
The landscape in which society interacts with news has evolved due to the advent of the internet and modern communication platforms. Although this evolution has led to greater diversity and accessibility of news media, it has also created challenges regarding selective news coverage, bias, and fake news. This work proposes a novel news platform called Liquid News that aims to enhance people’s understanding of news by leveraging machine-learning-based analysis and semantic navigational aids. Semantic segmentation and unsupervised clustering are the core machine-learning tasks underpinning Liquid News. Many state-of-the-art (SoTA) large language models already provide building blocks for both tasks; however, more research is needed on combining large language models and applying them to the analysis of video news. Liquid News addresses this domain gap by intersecting semantic segmentation, unsupervised clustering, and video processing in application to video news. Furthermore, Liquid News investigates solutions for overcoming the challenges of anisotropy in the semantic embedding and clustering of text.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expansion Microscopy of Cells in Suspension</title>
<link href="https://hdl.handle.net/1721.1/151421" rel="alternate"/>
<author>
<name>Han, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/151421</id>
<updated>2023-08-01T03:02:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Expansion Microscopy of Cells in Suspension
Han, Nathan
Expansion microscopy is a laboratory technique that enables nanoscale imaging of biological samples with conventional light microscopes. While expansion microscopy has traditionally been applied to specimens consisting of tissue and adherent cell culture, it has not been optimized for specimens consisting of cells in suspension. In this work, a straightforward expansion microscopy protocol was developed for suspension cells. This protocol was validated across multiple cell types including in vitro and in vivo disease models, and multiple expansion microscopy versions encompassing different methods of sample fixation, anchoring, and gelation. Suspension cells imaged after conducting the protocol exhibited increased resolution compared to images of the initial raw sample, as well as a high rate of sample retention at a variety of initial concentrations. These findings suggest the potential for the wide use of expansion microscopy to study suspension cells, which provide a versatile and scalable system for investigating cellular processes and developing therapeutic treatments. The protocol created in this work can be directly used in the future to interrogate suspension cells at nanoscale resolution to identify underlying molecular and morphological mechanisms.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Analysis Experiments on Log Extraction and&#13;
Processing for Causal Insights</title>
<link href="https://hdl.handle.net/1721.1/151420" rel="alternate"/>
<author>
<name>Khine, Min Thet</name>
</author>
<id>https://hdl.handle.net/1721.1/151420</id>
<updated>2023-08-01T03:29:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Causal Analysis Experiments on Log Extraction and&#13;
Processing for Causal Insights
Khine, Min Thet
Recent decades have seen tremendous advancements in the design and implementation of data processing systems for various applications and use cases. However, even systems that support the most complex queries are mostly used for business reporting, prediction, and classification tasks based on the data. These systems do not necessarily inform users of the causal relationships that are inherent in the data. To this end, we design a new log-based data processing system that provides answers to causal questions based on timestamped logs. This thesis work focuses on improving the current log extraction methods and performing causal analysis experiments on inferred causal models extracted from the logs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gamification in Marketing to Increase Customer Retention</title>
<link href="https://hdl.handle.net/1721.1/151418" rel="alternate"/>
<author>
<name>Chen, Yu Tai (Tony)</name>
</author>
<id>https://hdl.handle.net/1721.1/151418</id>
<updated>2023-08-01T03:25:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Gamification in Marketing to Increase Customer Retention
Chen, Yu Tai (Tony)
Gamification has emerged as a powerful marketing tool over the past decade, with its ability to boost user engagement, retention, and brand loyalty. This thesis examines the application of gamification in marketing through the lens of game development frameworks and the 4Ps of marketing. By analyzing case studies of Nike SNKRS and Duolingo, this thesis sheds light on best practices, such as the incorporation of gaming mechanics into non-gaming scenarios, adapting to gaming trends, and aligning gamification strategies with company goals. Although both cases demonstrate the potential benefits of gamification, they also reveal challenges, such as consumer desensitization, ethical concerns, and the risk of detracting from a brand's core message. The future of gamification in marketing is promising, with the integration of cutting-edge technologies such as AR, VR, and AI, alongside the increased adoption of gamified strategies on social media platforms and mobile devices. However, marketers must remain mindful of potential challenges and strive to balance innovative gamified experiences with responsible marketing practices, ensuring user privacy, ethical design, and preventing customer fatigue. By striking this balance, the future of gamification in marketing is set to revolutionize the way businesses engage with their audiences and build lasting relationships.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Generous Interface for the Discoverability of Text Collections</title>
<link href="https://hdl.handle.net/1721.1/151417" rel="alternate"/>
<author>
<name>Shen, Jeffrey J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151417</id>
<updated>2023-08-01T03:50:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Generous Interface for the Discoverability of Text Collections
Shen, Jeffrey J.
Existing search interfaces for digital collections are excellent for finding specific items, but are unfriendly to inexperienced users, fail to facilitate exploration, and do not highlight the internal relationships and structures within a collection. Generous interfaces have been theorized as an alternative to search-centric information retrieval, and have begun to be implemented in a limited number of digital cultural institutions. However, there are almost no examples of generous interfaces for text-based collections, a significant omission. In this thesis, I propose and implement an experimental interface to explore a collection of nearly 50,000 theses sourced from MIT’s DSpace collection: a “spatial search” where content is mapped onto a two-dimensional virtual space that enables users to search by “moving” rather than through queries. I then use an experiment to evaluate the interface against a traditional search interface, and find consistent evidence that the interface improves exploration, quality of items found, and user engagement when participants are asked to freely explore.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Segmentation and Registration of the&#13;
Placenta in BOLD MRI</title>
<link href="https://hdl.handle.net/1721.1/151416" rel="alternate"/>
<author>
<name>Das, Haimoshri</name>
</author>
<id>https://hdl.handle.net/1721.1/151416</id>
<updated>2023-08-01T03:51:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Improving Segmentation and Registration of the&#13;
Placenta in BOLD MRI
Das, Haimoshri
Blood Oxygen Level Dependent (BOLD) MRI images are used to study placental oxygen transport. To analyze the time series dataset of BOLD MRI images of the whole uterus for placental function, we need to segment the placenta in the images and register the images to a common template.&#13;
&#13;
In this thesis, we primarily aim to explore deep neural networks to improve segmentation and registration of placental MRI images. Much of the existing work in this area targets the brain, but the placenta, unlike the brain, lacks a definite structure. The placenta also undergoes greater deformation due to maternal and fetal motion and contractions. We aim to adapt, extend, and modify these neural networks for placenta-specific problems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Concepts through Labs that Present Real-World Scenarios in an Introductory Computer Science MOOC</title>
<link href="https://hdl.handle.net/1721.1/151415" rel="alternate"/>
<author>
<name>Yang, Yilinn</name>
</author>
<id>https://hdl.handle.net/1721.1/151415</id>
<updated>2023-08-01T04:18:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Explaining Concepts through Labs that Present Real-World Scenarios in an Introductory Computer Science MOOC
Yang, Yilinn
Massive Open Online Courses (MOOCs) such as those hosted on edX can reach thousands of students all over the world. Typically, they consist of a combination of lectures, graded assignments, and tests, and are generally less interactive than the typical in-person course. We believe that the interactive aspect of in-person courses adds a lot of value to a class and can, and therefore should, be added to MOOCs. Additional ungraded exercises that are fun, interactive, engaging, and present real-world scenarios can allow students to further explore concepts otherwise only taught through lectures. Additionally, by connecting these concepts to real-world scenarios they are already familiar with, students may be better able to think intuitively about the concepts and the related code. In this thesis we present a set of labs intended to provide these benefits in the edX course 6.00.1x (Introduction to Computer Science and Programming Using Python).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding impact of life experiences on performance and learning behavior in an Introductory Computer Science MOOC</title>
<link href="https://hdl.handle.net/1721.1/151414" rel="alternate"/>
<author>
<name>Eain, Yun Shwe</name>
</author>
<id>https://hdl.handle.net/1721.1/151414</id>
<updated>2023-08-01T03:43:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding impact of life experiences on performance and learning behavior in an Introductory Computer Science MOOC
Eain, Yun Shwe
In this thesis, we attempt to understand the impact of life experiences on performance and learning behavior in an introductory computer science MOOC (Massive Open Online Course). Through data analysis, this thesis identifies that some life experiences have an impact on a student’s performance and learning behaviors in the course. Among academic and career life experiences, exposure to the concept of induction in math positively impacts a student’s performance on the final exam, while experience in management negatively affects performance in all graded portions of the class except for problem sets. In terms of learning behavior, students without management experience tend to submit more solutions and watch a larger fraction of the course videos, while students without extensive experience in writing lengthy reports (20+ pages) show greater engagement in the course forum. Among non-academic life experiences, students with over 50 hours of experience in open-world strategy games tend to perform better in overall grades and problem sets. Lastly, in analyzing day-to-day behaviors, a positive correlation was observed between regular engagement in riddles, brainteasers, or sudoku and both overall grades and performance on the final exam, although no significant correlation was found between day-to-day behaviors and learning behaviors in the course.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analysis of Neural Rationale Models and Influence Functions for Interpretable Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/151413" rel="alternate"/>
<author>
<name>Zheng, Yiming</name>
</author>
<id>https://hdl.handle.net/1721.1/151413</id>
<updated>2023-08-01T03:38:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Analysis of Neural Rationale Models and Influence Functions for Interpretable Machine Learning
Zheng, Yiming
In recent years, increasingly powerful machine learning models have shown remarkable performance on a wide variety of tasks, and their use is becoming more and more prevalent, including deployment in high-stakes settings such as medical and legal applications. Because these models are complex, their decision process is hard to understand, suggesting a need for model interpretability. Interpretability can be deceptively challenging. First, explanations for a model’s decision on example inputs may appear understandable. However, if the underlying explanation method is not interpretable, more care must be taken before making a claim about the interpretability of the explanation method. Second, it can be difficult to use interpretability techniques efficiently on large models with many parameters.&#13;
&#13;
Through the lens of the first challenge, we examine neural rationale models, which are popular for interpretable predictions on natural language processing (NLP) tasks. In these models, a selector extracts segments of the input text, called rationales, and passes them to a classifier for prediction. Since the rationale is the only information accessible to the classifier, it can plausibly be taken as the explanation. However, through both philosophical perspectives and empirical studies, we argue that rationale models may be less interpretable than expected. We call for more rigorous evaluations of these models to ensure the desired properties of interpretability are indeed achieved. Through the lens of the second challenge, we study influence functions, which explain a model’s output by tracing the model's decision process back to the training data. Given a test point, influence functions compute an influence score for each training point representing how influential it is on the model’s decision with the test point as input. While influence functions are expensive to compute on large models with many parameters, we aim to gain intuition about them in low-dimensional settings and develop simple, cheap-to-compute heuristics that are competitive with influence functions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lab-bc: A Serverless Computing Platform for MIT Educators</title>
<link href="https://hdl.handle.net/1721.1/151412" rel="alternate"/>
<author>
<name>Lang, Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/151412</id>
<updated>2023-08-01T04:07:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Lab-bc: A Serverless Computing Platform for MIT Educators
Lang, Jay
In this thesis, we investigate the deployment of real-world Electronic Design Automation (EDA) tools in the digital design classroom. Hands-on experience with these tools is essential to prepare students for state-of-the-art research and industry settings. However, modern EDA tools such as Xilinx’s Vivado have limited to no compatibility with a broad subset of the architectures and operating systems used by students. Past digital design courses have sidestepped this problem by providing a lab space stocked with research-grade computers; however, this solution is incompatible with trends in remote and hybrid learning in the wake of the COVID-19 pandemic.&#13;
&#13;
Accordingly, we have designed and implemented lab-bc, a serverless computing platform which allows our students to invoke industry-standard EDA tooling remotely. Our system is significant in that it provides locality transparency: in contrast to existing remote interfaces, students may invoke tools like Vivado as if they are installed on their own devices. This interface grants students the generality, ease-of-use, and performance of a local installation regardless of the hardware they own. lab-bc is deployed onto a set of excess servers in our lab space, where despite frequent spinning disk failures and unreliable power delivery, the system provides high-performance, strongly-sandboxed Vivado instances to MIT students nationwide.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Real-World Human Activities with VirtualCity: A Large-Scale Embodied Environment for 2D, 3D, and Language-Driven Tasks</title>
<link href="https://hdl.handle.net/1721.1/151411" rel="alternate"/>
<author>
<name>Ren, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/151411</id>
<updated>2023-08-01T03:55:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Simulating Real-World Human Activities with VirtualCity: A Large-Scale Embodied Environment for 2D, 3D, and Language-Driven Tasks
Ren, Jordan
Embodied environments act as tools that enable various control tasks to be learned. Within these simulators, realistic rendering and physics ensure that the sim2real gap for tasks is not too large. Current embodied environments focus mainly on small-scale or low-level tasks, without the capability to learn large-scale diverse tasks, and often lack the realism needed for a small sim2real gap. To address the shortcomings of current simulators, we propose VirtualCity, a large-scale embodied environment that enables the learning of high-level planning tasks with photo-realistic rendering and realistic physics. To interact with VirtualCity, we provide a user-friendly Python API that allows the modification, control, and observation of the environment and the agents within it. Building this realistic environment brings us closer to adapting models trained in simulation to solve real-world tasks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proliferated Low Earth Orbit (pLEO) Satellite Constellation Handover Cost Analysis</title>
<link href="https://hdl.handle.net/1721.1/151410" rel="alternate"/>
<author>
<name>Grant, Veronica M.</name>
</author>
<id>https://hdl.handle.net/1721.1/151410</id>
<updated>2023-08-01T04:24:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Proliferated Low Earth Orbit (pLEO) Satellite Constellation Handover Cost Analysis
Grant, Veronica M.
In any mobile network, handovers between routing nodes generally cause a reduction in available resources for users. This is especially true of proliferated Low Earth orbit (pLEO) satellite constellation networks, in which the satellite and the user are mobile with respect to each other. As satellites travel in their orbits, they move into and out of ground users’ views every few minutes [4], and mobile users can move into and out of satellite spot beams frequently as well. When existing communication between a user and its serving satellites (uplink and downlink) terminates, user data must be relayed to the next serving satellite, possibly incurring additional data transmissions and overhead in the form of network management and control actions for acquisition in the network. This issue is becoming more relevant as commercial companies building their own satellite networks must devise an efficient handover strategy to reduce unnecessary data transmissions and handover overhead. In this thesis, I estimate the satellite handover cost by quantifying the number of transmission hops required to relay existing queued data to/from the next serving satellite. The handover cost of a satellite network depends on factors such as the network topology and the handover algorithm itself. I quantify the impact of these factors on the satellite network handover cost. A lower handover cost generally implies that the overall monetary cost (capital expenditure and operational expenditure) of a network to the provider (and also the user) is lower as well.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap: Designing Accessible Industrial Robotics UIs for Non-Technical Users with Concept Design</title>
<link href="https://hdl.handle.net/1721.1/151409" rel="alternate"/>
<author>
<name>Heng, Tommy Seng</name>
</author>
<id>https://hdl.handle.net/1721.1/151409</id>
<updated>2023-08-01T03:19:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bridging the Gap: Designing Accessible Industrial Robotics UIs for Non-Technical Users with Concept Design
Heng, Tommy Seng
A recent robotics AI startup hopes to provide industrial robotics-as-a-service (RaaS) to a non-technical demographic. Its affordable collaborative robotic arms are equipped with computer vision systems, granting the simplicity, flexibility, and affordability necessary to lower the barrier of entry into automation technologies for small and medium-sized manufacturing businesses. In this thesis, I outline my approach to redesigning the startup’s prototype UI, inspired by Concept Design, a methodology formalized by MIT professor Daniel Jackson. By clarifying the core concepts of the startup’s application, the presentation became clearer and the robotics company was able to deploy a more adaptable, robust, and intuitive user experience to its first customers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of AI on Resource-Constrained Hardware with a focus on Anomaly Detection</title>
<link href="https://hdl.handle.net/1721.1/151408" rel="alternate"/>
<author>
<name>Ziegler, Travis</name>
</author>
<id>https://hdl.handle.net/1721.1/151408</id>
<updated>2023-08-01T04:16:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Applications of AI on Resource-Constrained Hardware with a focus on Anomaly Detection
Ziegler, Travis
This thesis addresses the challenges of improving the performance of AI models on resource-constrained microcontrollers (MCUs). As the complexity of modern models continues to grow and the demand for smaller mobile devices increases, optimizing model latency, memory usage, and accuracy on tiny devices remains a persistent problem. This thesis makes contributions to the field by (1) benchmarking common AI inference engines to identify trade-offs between them, (2) developing a framework that can assist neural-architecture searches to discover more efficient models, (3) proposing model conversion techniques that enable online learning on MCUs, resulting in improved real-world accuracy, (4) creating a novel visual anomaly detector for MCUs, and (5) collecting a new dataset for anomaly detection benchmarks. The task of visual anomaly detection is to discern between known "Good" objects and objects that are slightly damaged or have imperfections. Being able to spot defective parts has significant applications in industrial and manufacturing settings. The proposed anomaly detector, MCU-PatchCore, is based on PatchCore, a state-of-the-art anomaly detector. MCU-PatchCore achieves a mean accuracy of 86% on the widely used MVTec AD dataset, which contains images of screws, cloth, glass bottles, etc., and their anomalous chipped, torn, cracked, etc., counterparts. While MCU-PatchCore’s accuracy is not as competitive as the GPU-based PatchCore detector, it requires only 200KB of RAM and less than 1MB of storage to run. Additionally, MCU-PatchCore outperforms several other anomaly detectors in the literature and shows promising potential for future improvement.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building and Evaluating Cancer Prescreening Models with Electronic Health Records</title>
<link href="https://hdl.handle.net/1721.1/151407" rel="alternate"/>
<author>
<name>Saowakon, Pasapol</name>
</author>
<id>https://hdl.handle.net/1721.1/151407</id>
<updated>2023-08-01T03:50:31Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Building and Evaluating Cancer Prescreening Models with Electronic Health Records
Saowakon, Pasapol
Cancer is a leading cause of death, killing over ten million people every year, and delayed treatment is often the culprit. Building on a recent framework, we used electronic health records from TriNetX to develop prescreening models for ten different cancer types: biliary tract, brain, breast (female), colon, esophageal, gastric, kidney, liver, lung, and ovarian. The models showed strong performance, with neural network models consistently but marginally outperforming their logistic regression counterparts. As expected, we found that models trained to detect specific cancer types performed noticeably better than ones trained more generally to detect any cancer. All models proved reasonably robust in geographical, racial, and temporal external validations, although a prospective study is still needed to verify the performance and potential impact of our models.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing a Persistent Offline Cache Improving Time to First Execution (TTFX) of GPU Code in Julia</title>
<link href="https://hdl.handle.net/1721.1/151406" rel="alternate"/>
<author>
<name>Warner, Collin</name>
</author>
<id>https://hdl.handle.net/1721.1/151406</id>
<updated>2023-08-01T03:42:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Implementing a Persistent Offline Cache Improving Time to First Execution (TTFX) of GPU Code in Julia
Warner, Collin
GPUs allow users to run code with high data parallelism efficiently on specialized hardware. GPUCompiler.jl provides a GPU compilation process for Julia, allowing users to write the highly efficient vector operations common in scientific computing. However, GPUCompiler.jl does not support the same level of persistent offline caching that is available in the core Julia compiler. This increases the time to first execution (TTFX), as programs need to recompile GPU code on every package reload regardless of whether any code has changed. In this thesis we implement a persistent offline cache capable of storing both type-inferred and native code, drastically reducing the TTFX of precompiled GPU code. We demonstrate that by caching native code, execution can be sped up 2-3x while reducing compilation storage costs by 3-40x compared to the current GPU compilation process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Software Library for Generative Model Applications</title>
<link href="https://hdl.handle.net/1721.1/151405" rel="alternate"/>
<author>
<name>Hernandez, Carlos</name>
</author>
<id>https://hdl.handle.net/1721.1/151405</id>
<updated>2023-08-01T03:51:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Software Library for Generative Model Applications
Hernandez, Carlos
The generation of data by machine learning models is a powerful concept that has impacted the field of Artificial Intelligence in the past few years. In this thesis, we focus on building a software library to facilitate the workflow, evaluation, and analysis of generative models. Our work is primarily aimed at helping a specialty chemicals company use a state-of-the-art molecule generation model for their specific applications. We refer to the body of work containing the model as DEG, short for Data-Efficient Graph Grammar Learning for Molecular Generation [16]. DEG is capable of creating synthesizable molecules from small amounts of data, making it quite attractive for companies looking for practical methods to explore new molecules. As an overarching goal, we design our library to incorporate other types of generative models and become a tool that the field can benefit from.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BURLAP: Bits of Useful Randomness enable Learning with Adjustable Privacy</title>
<link href="https://hdl.handle.net/1721.1/151404" rel="alternate"/>
<author>
<name>Reyes, Rene David Reyes</name>
</author>
<id>https://hdl.handle.net/1721.1/151404</id>
<updated>2023-08-01T03:26:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">BURLAP: Bits of Useful Randomness enable Learning with Adjustable Privacy
Reyes, Rene David Reyes
Training accurate models over sensitive data that is distributed among multiple users is an important problem in Machine Learning (ML). Good solutions would open the door to the use of these powerful algorithms in high-impact domains such as healthcare, finance and policy.&#13;
&#13;
While cryptography-based approaches such as Fully Homomorphic Encryption (FHE) can be used to provide privacy guarantees that have been rigorously characterized and proven, their adoption comes with two main practical hurdles. First, these tools often incur a significant computational overhead that does not scale to the size of state-of-the-art models. Second, they use advanced mathematical concepts that are unfamiliar to most ML practitioners, causing a very steep learning curve.&#13;
&#13;
The first challenge is a major research question that is being addressed by many cryptographers and engineers. On the theoretical side, there is a quest for better algorithms and constructions. On the practical side, significant effort is being put into performance engineering and hardware acceleration.&#13;
&#13;
In this work, we focus on the second hurdle and design BURLAP, a detailed protocol that combines cryptographic tools with various ML techniques to provide a secure training framework. We provide a proof-of-concept implementation to show that this system can be realized with existing tools, but find that it does not yet scale to the distributed ML setting we are interested in. Nonetheless, given other ongoing efforts to make FHE more practical, we believe BURLAP is a significant conceptual step towards bridging the gap between cryptography and ML.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Fiscal and Monetary Policy on the Cross-Sectional Value Factor</title>
<link href="https://hdl.handle.net/1721.1/151403" rel="alternate"/>
<author>
<name>Suvak, Colin</name>
</author>
<id>https://hdl.handle.net/1721.1/151403</id>
<updated>2023-08-01T03:43:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Impact of Fiscal and Monetary Policy on the Cross-Sectional Value Factor
Suvak, Colin
I find strong evidence that the cross-sectional value factor's returns are impacted by fiscal and monetary policy in the post-Bretton Woods era. Using a custom set of 768 value factors formed on the intersection of five portfolio construction design choices, which I take to represent the concept of the "value" premium in aggregate, I find that both structural and revaluation returns to the factor are lower than average during periods when fiscal and monetary policy are jointly loose. Conversely, when each policy is tight, total and decomposed returns to value are all higher than average. My findings provide an explanation for at least part of the time-varying nature of value's returns. Factor timing strategies that tactically utilize the information contained in fiscal and monetary policy weakly improve on strategic allocations to value over the long run.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing an Efficient Power/Control System for a Network of Piezoelectric Speakers</title>
<link href="https://hdl.handle.net/1721.1/151402" rel="alternate"/>
<author>
<name>Tang, Grace W.</name>
</author>
<id>https://hdl.handle.net/1721.1/151402</id>
<updated>2023-08-01T03:02:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Designing an Efficient Power/Control System for a Network of Piezoelectric Speakers
Tang, Grace W.
Extremely flat and flexible piezoelectric speakers have recently been invented, and one of their potential applications is in directional speakers. If a network of these speakers is hidden in the walls, each speaker’s signal can be phase-shifted individually so that the sound cancels in some parts of the room and is amplified in others. Driving 16-64 high-voltage speakers with individual signals requires a control system and amplifier that is as energy efficient as possible. To effectively manage so many analog channels, a multi-channel DAC was paired with a microcontroller and SD card to read out precalculated low-voltage signals for each speaker. An amplifier was then designed to boost these signals high enough to drive the speakers. Two designs were explored here: a common-emitter paired with a class AB amplifier, and a class D amplifier. Although testing was limited due to an unsuccessful 200V power supply design, the class D amplifier was found to be the more efficient solution with a wider voltage range. However, the class AB amplifier was much simpler and took up less space.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Starlogo Nova as a Classroom Assignment Orchestration Tool for Learning Computational Modeling in DC High Schools</title>
<link href="https://hdl.handle.net/1721.1/151401" rel="alternate"/>
<author>
<name>Zhang, Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/151401</id>
<updated>2023-08-01T03:52:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Using Starlogo Nova as a Classroom Assignment Orchestration Tool for Learning Computational Modeling in DC High Schools
Zhang, Ann
DC-Models, a 4-year NSF project, is exploring the incorporation of computational modeling (CM) into the science curriculum of high school students in the District of Columbia Public Schools (DCPS) to address the lack of relevant resources for computational thinking (CT) education. This project focuses on integrating the DC-Models science curriculum with StarLogo Nova (SLNova), a block-based programming/CM platform. Recognizing the limitations of SLNova’s existing functionality to effectively accomplish this goal, we explore designing and integrating a dedicated classroom assignment management platform within SLNova to address the gaps. The platform includes a centralized environment for students to view and teachers to manage classes and assignments, an assignment editor for seamlessly working on assignment questions and modifying SLNova models, and tools for monitoring student progress and answers. By reducing logistical barriers and unnecessary cognitive load, the platform enhances the educational experience of CM, allowing teachers to focus on teaching, students to focus on learning, and researchers to gain insights into student progress. Ultimately, this work contributes to the goal of providing all students access to CT education and skills for 21st century careers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Impact of Social Determinants of Health on Prediction of Clinical Outcomes in the Intensive Care Unit</title>
<link href="https://hdl.handle.net/1721.1/151399" rel="alternate"/>
<author>
<name>Yang, Ming Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/151399</id>
<updated>2023-08-01T03:08:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluating the Impact of Social Determinants of Health on Prediction of Clinical Outcomes in the Intensive Care Unit
Yang, Ming Ying
Social determinants of health (SDOH) – the conditions in which people live, grow, and age – play a crucial role in a person’s health and well-being. There is a large, compelling body of evidence in population health studies indicating that a wide range of SDOH is strongly correlated with health outcomes. Yet, a majority of the risk prediction models based on electronic health records (EHR) do not incorporate a comprehensive set of SDOH features, as they are often noisy or simply unavailable. Our work links a publicly available EHR database, MIMIC-IV, to well-documented SDOH features. We investigate the impact of such features on common EHR prediction tasks across different patient populations. We find that community-level SDOH features do not enhance the predictive accuracy of a model, but they can improve the model’s calibration and fairness. We further demonstrate that SDOH features are vital for conducting thorough audits of algorithmic biases beyond protected attributes. We hope the new integrated EHR-SDOH database will enable studies on the relationship between community health and individual outcomes and provide new benchmarks to study algorithmic biases beyond race, gender, and age.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating a Baseball Hitter’s Bat Speed Using One Camera</title>
<link href="https://hdl.handle.net/1721.1/151398" rel="alternate"/>
<author>
<name>Greve, Peyton</name>
</author>
<id>https://hdl.handle.net/1721.1/151398</id>
<updated>2023-08-01T04:00:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Estimating a Baseball Hitter’s Bat Speed Using One Camera
Greve, Peyton
Bat speed has become an increasingly popular metric in the baseball analytics community. Determining a player’s bat speed not only tells you how fast a player can swing but also allows you to measure how hard players can hit the baseball. Bat speed is very difficult to measure without attaching a tool to the bat, as the task requires multiple camera angles at precise distances from the hitter. This thesis presents a method to develop a tool that can estimate the bat speed of a swing captured on video by a single camera. This thesis also shows the success a regression model can have on a synthetic dataset of swings as a proof of concept.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fair Selective Regression</title>
<link href="https://hdl.handle.net/1721.1/151397" rel="alternate"/>
<author>
<name>Qu, Xiaoran (Steven)</name>
</author>
<id>https://hdl.handle.net/1721.1/151397</id>
<updated>2023-08-01T03:44:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fair Selective Regression
Qu, Xiaoran (Steven)
Selective regression allows for abstention from prediction when uncertainty is high, creating a tradeoff between coverage rate and prediction error. In this thesis, we consider how selective regression interacts with data that is partitioned into subgroups by a sensitive attribute. Specifically, we define two notions of fairness with respect to these subgroups: monotonic prediction error in the coverage rate, and similar prediction error between subgroups. In each case, we develop and analyze appropriate fairness constraints on the feature set that yield fair selective regression: a calibration condition for the former, and a local differential privacy condition for the latter.&#13;
&#13;
Based on our theoretical results, we design two novel inference algorithms for fair selective regression that enforce their respective feature set constraints via regularization in a neural network. Calibration is enforced with a contrastive loss for subgroup mean-squared error and local differential privacy is enforced with a mutual information approximation. We find that our algorithms effectively enforce fairness without significantly compromising accuracy on a variety of synthetic and real-world datasets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications and Implications of MIDI 2.0</title>
<link href="https://hdl.handle.net/1721.1/151396" rel="alternate"/>
<author>
<name>Hamelberg, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/151396</id>
<updated>2023-08-01T03:22:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Applications and Implications of MIDI 2.0
Hamelberg, Julian
Since its introduction in 1983, Musical Instrument Digital Interface (MIDI) has been the standard for connecting electronic music instruments, computers, and other audio devices to play, edit, and record music. The MIDI Association recently announced a new specification, MIDI 2.0, to add more flexibility to the MIDI protocol while still being backwards compatible with the MIDI 1.0 specification. This thesis presents an analysis of MIDI 2.0 by comparing it to previous versions of MIDI and the limitations of those specifications including keyboard bias, 12-tone bias, limited controller value resolution, and limited per note expression. In addition, we examine the core features of the MIDI 2.0 specification including MIDI Capability Inquiry (MIDI-CI) and Universal MIDI Packets (UMPs).&#13;
&#13;
To further demonstrate the capabilities of MIDI 2.0, we provide examples of MIDICI messages and implement a Python library for creating and sending UMPs using Apple’s CoreMIDI framework to explore creative use cases of UMPs. Several Python applications are presented to demonstrate the use of new features of MIDI 2.0 such as note attributes, new pitch representations, and per-note expression. Finally, we analyze MIDI 2.0 to investigate implications of the updated specification, how it can increase musical expression, and how it can be used creatively by independent developers and musicians.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overview of Non-Fungible Tokens: Key Features, Opportunities, Challenges, and Business Use Cases</title>
<link href="https://hdl.handle.net/1721.1/151395" rel="alternate"/>
<author>
<name>Pramniya, Krittamate</name>
</author>
<id>https://hdl.handle.net/1721.1/151395</id>
<updated>2023-08-01T03:05:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Overview of Non-Fungible Tokens: Key Features, Opportunities, Challenges, and Business Use Cases
Pramniya, Krittamate
Non-fungible tokens (NFTs) gradually moved into public awareness before exploding into mainstream adoption at the beginning of 2021. Although the cryptocurrency market experienced a downturn in 2022, the total volume of NFT sales continued to grow compared to the previous year. Nevertheless, NFT technology remains in its nascent stages, and many companies and individuals still do not understand how NFTs work or how to apply them to create value or solve real-world business problems.&#13;
&#13;
This paper starts by providing a comprehensive overview of NFTs, encompassing aspects such as their definition, underlying technology, essential properties, current market landscape, and primary concerns. Then, to delve deeper into NFTs, the author investigates business case studies across five industries: NFT digital art, NFT ticketing, NFT gaming, NFT digital wearables, and NFT digital real estate. Subsequently, a plus-and-minus analysis is conducted to better comprehend the opportunities and challenges posed by NFTs in the business world. The final section of the paper offers predictions on the future of NFTs, aiming to equip businesses and individuals with the necessary insights to fully harness the potential of NFTs while mitigating potential downsides.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of Control and Perception Subsystems for an Autonomous Surface Vehicle for Aquaculture</title>
<link href="https://hdl.handle.net/1721.1/151394" rel="alternate"/>
<author>
<name>Klahn, Daniel Asher</name>
</author>
<id>https://hdl.handle.net/1721.1/151394</id>
<updated>2023-08-01T03:31:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Implementation of Control and Perception Subsystems for an Autonomous Surface Vehicle for Aquaculture
Klahn, Daniel Asher
Aquaculture offers a sustainable seafood production alternative to overfishing in today’s vulnerable oceans. Ward Aquafarms LLC, a local New England seafood producer, farms oysters off the coast of Cape Cod. They grow their oysters in rigid plastic bags that must be flipped over every 7-10 days to prevent the growth and accumulation of bio-fouling, which can reduce the flow of oxygen and other nutrients to the growing crop of oysters. The farm manages arrays of hundreds of bags, and the arduous task of flipping each individual bag (each weighing up to 60 lbs) is difficult, unpleasant, and puts workers at risk of injury. The MIT Sea Grant lab is designing an autonomous surface vehicle (ASV) to automate the bag-flipping process and to reduce the strain on workers. This thesis focuses on the design, implementation, and testing of the electronics and power distribution system, emergency stop safety system, a dynamics- and optimization-based motor control system, and a computer vision system that detects and locates oyster baskets between the hulls of the ASV. These subsystems will enable the ASV to maneuver in its environment, to successfully interact with the oyster baskets, and to more accurately monitor its position and progress through the array.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced-Order Atmospheric Density Modeling for LEO Satellite Orbital Reentry Prediction</title>
<link href="https://hdl.handle.net/1721.1/151393" rel="alternate"/>
<author>
<name>Clark, Nicolette LeAnn</name>
</author>
<id>https://hdl.handle.net/1721.1/151393</id>
<updated>2023-08-01T04:20:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Reduced-Order Atmospheric Density Modeling for LEO Satellite Orbital Reentry Prediction
Clark, Nicolette LeAnn
Atmospheric density modeling and uncertainty quantification for fast and accurate orbit propagation are vital to drag force estimation for satellite reentry prediction and conjunction assessment in today’s increasingly cluttered Low Earth Orbit (LEO) environment. Current density models can be computationally expensive and often contain large errors, which makes density modeling a leading cause of uncertainty in drag estimation. Reduced-order atmospheric density models (ROMs) have shown potential to provide good predictive performance at a significantly lower computational cost by propagating a low-dimensional representation of the atmospheric density state instead of the entire density state.&#13;
&#13;
In this thesis, ROMs were implemented in a high-fidelity orbital propagator and tested on the problem of reentry modeling for LEO objects. First, uncertainty quantification was performed on a test case for three significant sources of uncertainty in reentry modeling to compare the impact of uncertainty from initial state, ballistic coefficient, and space weather indices on residual lifetime estimation. These results highlighted features of interest in ROM behavior relative to empirical models.&#13;
&#13;
Second, ROMs were used to predict reentry of three LEO object test cases. ROMs were found to provide residual lifetime estimation performance comparable to current empirical models such as JB2008 and NRLMSISE-00, with a run time reduction of up to 70% compared to the empirical models. ROMs were especially effective for longer predictions starting two or more days prior to LEO object reentry, where in some cases ROMs outperformed both empirical models while saving hours of run time. These findings validate the utility of ROMs for orbit propagation applications such as reentry prediction.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private Information Retrieval with Access Control</title>
<link href="https://hdl.handle.net/1721.1/151392" rel="alternate"/>
<author>
<name>Goyal, Pawan</name>
</author>
<id>https://hdl.handle.net/1721.1/151392</id>
<updated>2023-08-01T03:50:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Private Information Retrieval with Access Control
Goyal, Pawan
Private Information Retrieval (PIR) allows a user to query for a record from a remote database without revealing the query to the database server. However, PIR does not provide access control guarantees, allowing any user access to any record. Moreover, the database server cannot check access permissions through conventional techniques as they are fundamentally incompatible with PIR.&#13;
&#13;
In this thesis, we present Pirac—a novel framework for access control in PIR. In Pirac, only users who have permission to access a specific database record can retrieve it. Our constructions make black-box use of the underlying PIR schemes and therefore apply to both single-server and multi-server PIR.&#13;
&#13;
We evaluate our open-source implementation of Pirac when applied to state-of-the-art PIR schemes. For databases with roughly one million 4 KiB records, adding access control via Pirac incurs a 2.6× server-side computational overhead in single-server PIR and 3.1× in multi-server PIR, while keeping user processing and communication overheads at a minimum.&#13;
&#13;
We show that Pirac enables new applications of PIR, including privacy-preserving password breach lookups, multi-user databases with personal content, and private friend discovery, among others.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>HIPAAway: developing software for de-identification and exploring bias in name detection</title>
<link href="https://hdl.handle.net/1721.1/151391" rel="alternate"/>
<author>
<name>Lim, Shulammite</name>
</author>
<id>https://hdl.handle.net/1721.1/151391</id>
<updated>2023-08-01T03:46:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">HIPAAway: developing software for de-identification and exploring bias in name detection
Lim, Shulammite
De-identification, the process of removing identifiers, is a crucial step in the preparation of clinical data for use in biomedical research. Advances in natural language processing have increased interest in developing an accurate and adaptable automatic de-identification system for clinical text. Models for de-identification have been found successful but are largely unavailable for public use due to a lack of provided code and the cost associated with using commercial models. A lack of transparency in de-identification model training may bias the models against certain demographic groups; such biases are hidden in overall performance metrics and need to be evaluated due to the disproportionate potential harm to marginalized communities. In this thesis, we review current de-identification methods, present a new de-identification dataset, audit demographic biases in existing de-identification approaches, and develop an easy-to-use, open-source de-identification software package. This package would make clinical text de-identification more accessible to researchers and clinicians, alleviating the bottleneck of de-identification to free up more data for biomedical research. This would help make future research more robust and beneficial not only to the medical community, but also to people around the world.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Deep Learning to Financial Time Series Forecasting</title>
<link href="https://hdl.handle.net/1721.1/151390" rel="alternate"/>
<author>
<name>Camelo Sa, Lucas</name>
</author>
<id>https://hdl.handle.net/1721.1/151390</id>
<updated>2023-08-01T04:12:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Applications of Deep Learning to Financial Time Series Forecasting
Camelo Sa, Lucas
Deep learning has recently risen as a dominant technique in a variety of settings comprising large-scale and high-dimensional data. In the particular case of financial modeling, one of the most important data analysis problems consists of predicting the future volatility of a given asset. In this thesis, we investigate how the Transformer architecture performs at the task of volatility forecasting by comparing its performance against that of previously explored deep learning architectures such as the LSTM.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Creating Synthetic Data Testbeds for Research</title>
<link href="https://hdl.handle.net/1721.1/151389" rel="alternate"/>
<author>
<name>Oufattole, Nassim</name>
</author>
<id>https://hdl.handle.net/1721.1/151389</id>
<updated>2023-08-01T04:10:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Creating Synthetic Data Testbeds for Research
Oufattole, Nassim
Insurance datasets are generally private in order to protect user information, making it difficult for the ML research community to access and experiment with this data. To increase accessibility and innovation on private insurance data, we compile and share publicly available insurance datasets, analyze challenges inherent in these datasets, and propose, motivate, and evaluate a synthetic data sharing framework called the Synthetic Insurance Data (SID) Testbed that can be used to improve ML performance on tabular datasets by allowing collaborators to generate synthetic data for data augmentation. In addition to this framework, we recognize that tabular data augmentation is not a well-understood phenomenon, and we run controlled experiments to better understand how and when data augmentation improves machine learning performance in the setting of tabular data.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systemic Issues with US Army Talent Management and Retention</title>
<link href="https://hdl.handle.net/1721.1/151387" rel="alternate"/>
<author>
<name>Pinigis, Alexander J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151387</id>
<updated>2023-08-01T03:17:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Systemic Issues with US Army Talent Management and Retention
Pinigis, Alexander J.
The Officer Personnel Management Act of 1947 standardized the talent management of officers across all the DoD services and implemented an “up or out” system that incentivized top-performing officers for promotion while removing officers from service who did not possess the potential to serve at the next level. In practice, only the most talented officers would remain and continue serving in the US Army while poor performers were removed. However, this industrial-era one-size-fits-all promotion system leaves little flexibility for career progression and has quickly become outdated. The Army’s method of evaluating its officers with an Officer Evaluation Report (OER) is also plagued with biases and inconsistencies. The OERs that determined officer promotions led to a skewed distribution of representation and a lack of diversity in the upper ranks of the officer corps.&#13;
&#13;
In 2018, the FY19 National Defense Authorization Act granted nine new personnel management authorities, allowing the Army to offer more career flexibility and reward top performers. In conjunction with the new Army People Strategy, the Army Talent Management Task Force has made significant progress in revolutionizing how the Army manages its talent. New programs include a comprehensive assessment for battalion commanders, officers having more flexible options for their career timelines, and the Assignment Interactive Module (AIM) giving officers greater transparency and the ability to apply and compete for all positions available to them for their next assignment.&#13;
&#13;
While progress is being made, systemic issues within Army talent management remain that should be addressed. Additionally, recent talent management changes could be causing unforeseen adverse effects. The biggest challenge the Army will continue to face is retaining talent, regardless of how much it improves its talent management system. The Army’s top talent will continue to leave unless the Army addresses more significant problems that concern officer career satisfaction and the support of their families. This thesis evaluates current Army talent management practices and recent changes while recommending system improvements. As an all-volunteer force, the Army must adapt to societal changes and compete with private industry opportunities to effectively manage and retain its talent.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Semi-supervised Estimation of Distributions</title>
<link href="https://hdl.handle.net/1721.1/151386" rel="alternate"/>
<author>
<name>Erol, Hasan Sabri Melihcan</name>
</author>
<id>https://hdl.handle.net/1721.1/151386</id>
<updated>2023-08-01T04:26:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">On Semi-supervised Estimation of Distributions
Erol, Hasan Sabri Melihcan
We study the problem of estimating the joint probability mass function (pmf) over two random variables. In particular, the estimation is based on the observation of &#119898; samples containing both variables and &#119899; samples missing one fixed variable. We adopt the minimax framework with [notation] loss functions, and we show that the composition of uni-variate minimax estimators achieves minimax risk with the optimal first-order constant for &#119901; ≥ 2, in the regime &#119898; = &#119900;(&#119899;).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gaussian processes at the Helm(holtz): A more fluid model for ocean currents</title>
<link href="https://hdl.handle.net/1721.1/151385" rel="alternate"/>
<author>
<name>Berlinghieri, Renato</name>
</author>
<id>https://hdl.handle.net/1721.1/151385</id>
<updated>2023-08-01T03:02:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Gaussian processes at the Helm(holtz): A more fluid model for ocean currents
Berlinghieri, Renato
Oceanographers are interested in predicting ocean currents and identifying divergences in a current vector field based on sparse observations of buoy velocities. Since we expect current velocity to be a continuous but highly non-linear function of spatial location, Gaussian processes (GPs) offer an attractive model. But we show that applying a GP with a standard stationary kernel directly to buoy data can struggle at both current prediction and divergence identification – due to some physically unrealistic prior assumptions. To better reflect known physical properties of currents, we propose to instead put a standard stationary kernel on the divergence and curl-free components of a vector field obtained through a Helmholtz decomposition. We show that, because this decomposition relates to the original vector field just via mixed partial derivatives, we can still perform inference given the original data with only a small constant multiple of additional computational expense. We illustrate the benefits of our method on synthetic and real ocean data.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Broken Expectations, Broken Concepts: A New Diagnosis of Dark Patterns</title>
<link href="https://hdl.handle.net/1721.1/151383" rel="alternate"/>
<author>
<name>Caragay, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/151383</id>
<updated>2023-08-01T03:20:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Broken Expectations, Broken Concepts: A New Diagnosis of Dark Patterns
Caragay, Evan
Interest in analyzing and preventing dark patterns, commonly defined as design that is manipulative or deceptive, has been rising. Most analyses of dark patterns today are descriptive, cataloging examples of these patterns in user interfaces, which makes it difficult for stakeholders to identify the root causes of more structural and nuanced patterns. In this thesis, I propose a new definition: dark patterns emerge when a given design breaks users’ expectations and harms their interests. To formally define and analyze expectations, I use an existing design framework called concept design, which defines applications in terms of independent building blocks called concepts. I present a new type of design catalog built on concepts to empower organizations to set standards around expectations. I then introduce a formal definition of design extensions within concept design, bringing further precision to dark pattern analysis. This concept-based approach to analyzing dark patterns enables the identification of more subtle dark patterns at both a user interface level and a structural level, more precise analysis of the root cause of darkness within a design, and more nuanced consideration of the impact of cultural expectations on perceptions of darkness.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Many Americans Work Remotely? A Survey of Surveys and Their Measurement Issues</title>
<link href="https://hdl.handle.net/1721.1/151381" rel="alternate"/>
<author>
<name>TuYe, Hong-Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/151381</id>
<updated>2023-08-01T04:17:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How Many Americans Work Remotely? A Survey of Surveys and Their Measurement Issues
TuYe, Hong-Yi
Remote work surged during the Covid pandemic, but there is disagreement about the extent of the change. To address this question, we field a new, nationally representative survey: the Remote Life Survey (RLS). We find that in October 2020, 31.6 percent of the continuously employed workforce always worked from home (WFH) and 21.9 percent sometimes or rarely WFH, totaling 53.5 percent. We compare our results with alternative measurement approaches, with a focus on government surveys, and provide estimates of the impact of four factors: (a) differences among mail versus web-based survey respondents, (b) differences in the inclusion of self-employed workers, (c) the industry mix of the sample, and (d) the exclusion of people who were already remote pre-pandemic. We find that the last explanation (d) explains the bulk of the difference in estimates between the Current Population Survey (CPS) and other measures of remote work. Policymakers and researchers who turn to the BLS-CPS data series for an estimate of remote work prevalence in the American economy should note that it might be underestimating WFH levels by up to 25 percentage points. Under our preferred estimates, we find that about half of the U.S. workforce worked remotely at least one day each week as of December 2020.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Efficient Training &amp; Inference of Neural Differential Equations</title>
<link href="https://hdl.handle.net/1721.1/151379" rel="alternate"/>
<author>
<name>Pal, Avik</name>
</author>
<id>https://hdl.handle.net/1721.1/151379</id>
<updated>2023-08-01T03:53:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">On Efficient Training &amp; Inference of Neural Differential Equations
Pal, Avik
The democratization of machine learning requires architectures that automatically adapt to new problems. Neural Differential Equations have emerged as a popular modeling framework, enabling ML practitioners to design neural networks that can adaptively modify their depth based on the input problem. Neural Differential Equations combine differential equations with neural networks and rely on adaptive differential equation solvers for the forward process.&#13;
&#13;
The flexibility of automatically adapting the depths comes with the cost of expensive training and slower predictions. Several prior works have tried to accelerate training and inference. However, almost all of them have severe tradeoffs. Either these works rely on expensive training methods to accelerate predictions or use algorithms that are harder to integrate into existing workflows.&#13;
&#13;
This thesis will discuss two methods to accelerate Neural Differential Equations. We propose an Infinite Time Neural ODE, which paradoxically can be trained faster than integrating a Neural ODE to a fixed time-point. We also build upon prior works on regularized Neural ODEs and propose a stochastic local regularization scheme that can be used as a drop-in replacement for Neural ODEs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapidly Estimating Swarm Resource Needs Through Autonomous Simulation</title>
<link href="https://hdl.handle.net/1721.1/151377" rel="alternate"/>
<author>
<name>Young, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/151377</id>
<updated>2023-08-01T03:15:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Rapidly Estimating Swarm Resource Needs Through Autonomous Simulation
Young, Eric
The maritime industry spends significant time and resources accomplishing long-lasting collaborative tasks such as search and rescue or ocean surveying. Autonomous swarm ships’ ability to scale rapidly and operate with limited resources allows them to outperform conventional crewed ships at these collaborative operations. Despite their incredible potential, perpetually operating productive autonomous swarms creates significant logistical challenges. This thesis aims to solve these problems; specifically, it aims to maximize collaborative swarm productivity by predicting and managing robot resource needs using operations theory, simulation, and machine learning.&#13;
&#13;
Maximizing swarm productivity first requires developing a common scenario to measure productivity. Drawing from multi-robot patrol research, this thesis implements two resource-aware multi-robot patrol missions in MOOS-IvP. In each mission, vehicles perpetually patrol a grid and must periodically break patrol formation to refuel at a depot. Missions measure their performance based on how frequently robots visit each portion of the mission operating area (grid idle time) and how much area each robot controls (average Voronoi polygon area). With a common patrol scenario developed, this thesis then simulates patrol missions using different vehicle and depot parameters to generate a broad performance dataset. &#13;
&#13;
Finally, this thesis develops a method to predict future mission performance from the simulated productivity dataset. Simulated mission data is post processed and used to train XGBoost models. Compared to mission simulations, these models take far less time to produce while still showing planners what performance and vehicle output they can expect from a given mission.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Model-Based Systems Engineering Integration Challenges and Improvements</title>
<link href="https://hdl.handle.net/1721.1/151376" rel="alternate"/>
<author>
<name>Pandolf, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/151376</id>
<updated>2023-08-01T04:21:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigation of Model-Based Systems Engineering Integration Challenges and Improvements
Pandolf, Jennifer
Company X produces some of the world’s most sophisticated engineered products. Increased technological complexity has made design, testing, and implementation more difficult. Model-Based Systems Engineering (MBSE) promises to help by improving system understanding across stakeholders; managing traceability, complexity, and capacity for design reuse; and reducing risk through earlier system validation and verification. Despite those purported benefits, deployment and integrated use of MBSE have proven challenging, so its full benefits are not being realized in terms of quality, cost, and speed. The integration challenges associated with implementing MBSE within Design Center Y were studied with the goal of recommending project and organizational strategies to improve system model integration while progressing adoption throughout a heterogeneous organization. Ethnographic methods were used to create case studies about local MBSE deployment. A survey was run to understand how descriptive and analytical modeling methods have been deployed, in order to better understand the associated challenges locally and at scale. These data revealed five coupled views that describe the challenges of MBSE deployment within Design Center Y, as reflected in needs such as model reuse, data authority, schedule considerations, skills development, and both internal and external collaboration. These views are captured as data, model, project, supplier, and engineering management lenses. A comparison between Design Center Y’s experience and NASA programs supported the validity of the five-lenses framework’s explanatory power and suggested strategies for achieving success. These include modeling champions, model management and development planning, establishing specific project readiness criteria, integrated vision/strategy setting, and influencing relevant stakeholders related to process/methods, tools, and skills to enable scaled deployment.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Top-Down Synthesis for Library Learning</title>
<link href="https://hdl.handle.net/1721.1/151374" rel="alternate"/>
<author>
<name>Bowers, Matthew L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151374</id>
<updated>2023-08-01T04:24:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Top-Down Synthesis for Library Learning
Bowers, Matthew L.
This thesis introduces corpus-guided top-down synthesis as a mechanism for synthesizing library functions that capture common functionality from a corpus of programs in a domain specific language (DSL). The algorithm builds abstractions directly from initial DSL primitives, using syntactic pattern matching of intermediate abstractions to intelligently prune the search space and guide the algorithm towards abstractions that maximally capture shared structures in the corpus. We present an implementation of the approach in a tool called Stitch and evaluate it against the state-of-the-art deductive library learning algorithm from DreamCoder. Our evaluation shows that Stitch is 3-4 orders of magnitude faster and uses 2 orders of magnitude less memory while maintaining comparable or better library quality (as measured by compressivity). We also demonstrate Stitch’s scalability on corpora containing hundreds of complex programs that are intractable with prior deductive approaches and show empirically that it is robust to terminating the search procedure early—further allowing it to scale to challenging datasets by means of early stopping. We publish the code, the documentation, a tutorial, and a Python library for interfacing with our Rust implementation of Stitch.&#13;
&#13;
Tutorial &amp; Documentation (Python Library): https://stitch-bindings.readthedocs.io/en/stable/intro/tutorial.html &#13;
&#13;
Rust Implementation: https://github.com/mlb2251/stitch &#13;
&#13;
Artifact (Awarded: Reusable): https://github.com/mlb2251/stitch-artifact
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neutronic Analysis of Horizontal-Compact High Temperature Gas-cooled Reactor</title>
<link href="https://hdl.handle.net/1721.1/151373" rel="alternate"/>
<author>
<name>Kristina</name>
</author>
<id>https://hdl.handle.net/1721.1/151373</id>
<updated>2023-08-01T03:49:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Neutronic Analysis of Horizontal-Compact High Temperature Gas-cooled Reactor
Kristina
To address the significant cost challenges associated with advanced reactors, a 150 MWt horizontal compact high temperature gas-cooled reactor (HC-HTGR) has been proposed. The HC-HTGR has the potential to reduce the capital cost of a traditional vertically oriented HTGR by 20% through a reduction in reactor building volume. This benefit comes with a trade-off in control system design: thin control rods would sag in a horizontal layout, requiring the use of control drums instead. Control drums are commonly utilized in microreactors, but their use in reactors with power &gt;100 MWt must be thoroughly investigated. Parametric studies using OpenMC were carried out to ensure the feasibility of this design. With a uniformly enriched core, 12 rotating control drums with an outer radius of 23.4407 cm, a 0.5 cm thickness of 90% enriched B₄C, and 0.3 cm Incoloy cross supports achieved the highest shutdown margin (SDM) of 3.23%. A sensitivity study on fuel enrichment yielded an SDM of 6.29%, which satisfied the HTGR design requirement. The 2D radial and axial power peaking factors (PPFs) with the new enrichment pattern were found to be 1.847 and 1.344, respectively. A homogenization approach using the ring reactivity equivalent physical transformation (RRPT) method was developed to reduce the complexity of the core and showed good performance, with a 4 pcm difference in the steady-state calculation. A depletion analysis was performed to ensure the reliability of the new fuel enrichment pattern. The first-cycle core sustained criticality for 2.37 years with an average enrichment of 15.5%, which meets the design target of a 2-year cycle length. Overall, the neutronics assessment of the HC-HTGR core met the initial safety and design requirements.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Do Women Ask? Gender Differences in Applying for Internal Job Openings</title>
<link href="https://hdl.handle.net/1721.1/151371" rel="alternate"/>
<author>
<name>Mang, Audrey G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151371</id>
<updated>2023-08-01T04:09:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Do Women Ask? Gender Differences in Applying for Internal Job Openings
Mang, Audrey G.
Gender differences in application behavior can contribute to gender inequality in hiring outcomes. People are unlikely to be selected for jobs if they do not put themselves forward to be considered for positions. This paper focuses on understanding supply-side mechanisms that may stifle female advancement, in particular by responding to ideas about how women behave in the labor market that would lead us to suspect they are “leaning out” of opportunities. We study the internal labor market within a single firm to examine the extent of gender differences in application to internal job openings. Importantly, in determining the rates of application, we have the advantage of being able to observe the risk set of potential applications in this setting. Our findings show few differences in application rates by gender, even when considering variation in the hierarchical distance of the opportunity or the level from which the candidate is applying. Despite existing theories of constraints that differentially affect workers by gender, in this setting there is very little evidence that women are not leaning into advancement opportunities, or that they are leaning in less than men.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D Deep Learning Segmentation for Fiber Break Analysis of Carbon Fiber Reinforced Polymer Tomograms</title>
<link href="https://hdl.handle.net/1721.1/151367" rel="alternate"/>
<author>
<name>Vuong, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151367</id>
<updated>2023-08-01T03:03:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">3D Deep Learning Segmentation for Fiber Break Analysis of Carbon Fiber Reinforced Polymer Tomograms
Vuong, Daniel
Carbon fiber reinforced polymers (CFRPs) find extensive use in modern aerospace structures due to their adaptability in stiffness and strength for specialized applications. However, their heterogeneous composition and the microscopic scale of fiber and matrix lead to complex damage mechanisms and make failure difficult to predict, slowing CFRP adoption relative to traditional engineering materials such as metals. X-ray computed tomography allows for non-destructive, volumetric imaging of CFRPs under stress, enabling real-time 3D observation of the material’s failure. Due to the data-rich nature of the 3D scans at each time step, such experiments can result in thousands of 2D images per scan and multiple scans at different time- or loading-steps per test. This accumulates to hundreds of thousands of images following a typical test campaign. Human analysis of these scans is time- and resource-intensive, creating the need for an automated way to segment and analyze these images. Recent work has shown that applying 2D convolutional deep learning models to the identification of damage types in CFRP yields accuracy levels exceeding those of humans while requiring a fraction of the working time. However, similar research with deep learning models applied to medical images (MRIs, X-rays, etc.) has found 3D convolution to be strictly better, and it is now the standard.&#13;
&#13;
Here, a 2D vs. 3D deep learning model comparison of the segmentation of carbon fiber breaks, an imbalanced classification problem with less than 0.01% of the data being fiber breaks of interest, shows overall similar performance between 2D and 3D segmentation (e.g., IoU scores of 67.5% and 70.7%, respectively). Qualitative and quantitative analysis reveals that the 3D model is able to embed the third dimension of spatial information, such that 3D segmentation is evaluated as an improvement over 2D for the fiber break problem. This is also desirable for future applications in composite damage segmentation beyond fiber breaks, suggesting that 3D will be strictly better in the composite damage classification problem space.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Orchestral Conducting in Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/151366" rel="alternate"/>
<author>
<name>Kim, Nathaniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151366</id>
<updated>2023-08-01T04:17:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Orchestral Conducting in Virtual Reality
Kim, Nathaniel
Orchestral conducting is a complex task, and learning to conduct well requires thorough practice. However, live training with a real orchestra is difficult to obtain, especially for beginners. To bridge this gap, we built the Virtual Reality Conducting system (VRC), a virtual reality system in which a user conducts a simulated orchestra.&#13;
&#13;
VRC detects the user’s hand movements and algorithmically infers fundamental conducting gestures, such as timekeeping and accent signaling, with the use of carefully fine-tuned parameters. In turn, VRC adjusts the orchestra’s produced sound, modeling a real-life orchestra’s response as closely as possible. VRC is designed to encourage good conducting habits, especially as instructed by pedagogy for beginners.&#13;
&#13;
Trials demonstrate that VRC is effective in providing beginners with helpful, practical training in real-life conducting. Still, VRC has potential areas for improvement, especially in enabling users to realistically slow down the orchestra. VRC also illuminates interesting phenomena, especially regarding gestures and detection, that vary across pieces of different speeds and styles.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedding StarLogo Nova into WISE for a Seamless&#13;
Student Experience</title>
<link href="https://hdl.handle.net/1721.1/151359" rel="alternate"/>
<author>
<name>Xiao, Timmy</name>
</author>
<id>https://hdl.handle.net/1721.1/151359</id>
<updated>2023-08-01T04:23:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Embedding StarLogo Nova into WISE for a Seamless&#13;
Student Experience
Xiao, Timmy
Support for teaching computational thinking has been increasing throughout K-12 schools as computer technology becomes ever more pervasive in the world [2]. The Scheller Teacher Education Program (STEP) at MIT uses educational technologies to create innovative learning experiences. An example project is StarLogo Nova, a block-based programming environment that facilitates the creation of agent-based models to study complex systems [17]. Currently, StarLogo Nova is a website where students independently log in and make projects for their models. However, the overall experience of using StarLogo Nova can be improved, as there is no guidance when a student makes a model. In this thesis, we augment StarLogo Nova with a concept of activities: a user experience in which students receive instructions and answer questions while still being able to interact conveniently with a StarLogo project, such as by editing or viewing a model. To do this, we integrate StarLogo into a platform called WISE (Web-based Inquiry Science Environment).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Last Layer Retraining of Selectively Sampled Wild&#13;
Data Improves Performance</title>
<link href="https://hdl.handle.net/1721.1/151358" rel="alternate"/>
<author>
<name>Yang, Hao Bang</name>
</author>
<id>https://hdl.handle.net/1721.1/151358</id>
<updated>2023-08-01T04:06:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Last Layer Retraining of Selectively Sampled Wild&#13;
Data Improves Performance
Yang, Hao Bang
While AI models perform well in labs where training and testing data are in a similar domain, they experience significant drops in performance in the wild, where the data can lie in domains outside the training distribution. Out-of-distribution (OOD) generalization is difficult because these domains are underrepresented or non-existent in training data. The pursuit of a solution to bridge the performance gap between in-distribution and out-of-distribution data has led to the development of various generalization algorithms that target finding invariant/"good" features. Recent results have highlighted the possibility of poorly generalized classification layers as the main contributor to the performance difference, while the featurizer is already able to produce sufficiently good features.&#13;
&#13;
This thesis will verify this possibility over a combination of datasets, generalization algorithms, and training methods for the classifier. We show that we can improve the OOD performance significantly compared to the original models when evaluated in natural OOD domains by simply retraining a new classification layer using a small number of labeled examples. We further study methods for efficient selection of labeled OOD examples to train the classifier by utilizing clustering techniques on featurized unlabeled OOD data.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Student Interactions to Explore Systems&#13;
Thinking in Augmented Reality</title>
<link href="https://hdl.handle.net/1721.1/151357" rel="alternate"/>
<author>
<name>Weinstein, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/151357</id>
<updated>2023-08-01T03:05:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Designing Student Interactions to Explore Systems&#13;
Thinking in Augmented Reality
Weinstein, Anna
Educational practices are shifting to incorporate new curriculum guidelines and new technologies. At the same time, the field of augmented reality (AR) is rapidly expanding, enabling a new world of opportunities and requiring new approaches to UI/UX design. Incorporating augmented reality into classrooms provides a unique opportunity to create engaging, immersive, and transportive learning experiences. The following work explores the intersection of these threads, asking: how do we start designing for student interactions within the augmented classroom? These discussions are rooted in WIT, a project aimed at exploring how headset-based AR can be used to teach complex-systems learning in middle school classrooms, as well as at providing a platform to develop similar experiences. These concepts are also discussed at a broader scale, first presenting considerations based on the affordances of AR and current education practices, then diving into the technology underpinning these ideas.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiple-Path Generation to Improve Autonomous&#13;
Vehicle Planning</title>
<link href="https://hdl.handle.net/1721.1/151356" rel="alternate"/>
<author>
<name>Penubarthi, Vishnu</name>
</author>
<id>https://hdl.handle.net/1721.1/151356</id>
<updated>2023-08-01T03:19:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Multiple-Path Generation to Improve Autonomous&#13;
Vehicle Planning
Penubarthi, Vishnu
Path planning is an integral part of ensuring that autonomous vehicles become safer and more efficient, in order to facilitate greater adoption of the technology and make it a viable option for a wider range of applications. While there are many documented approaches to path planning on roads, path planning in an off-road environment is less studied and presents additional challenges, including traversing an unknown and unstructured map environment. This can result in scenarios where previously unknown obstacles are discovered along the path a vehicle is traversing, forcing the vehicle to re-route and take a potentially inefficient route to its final destination. We present an extensible framework to mitigate this issue in which we generate multiple paths, select an efficient path for the vehicle with respect to its navigation to the final destination, and incentivize the vehicle to adhere to the selected path. We also implement this framework within the Nebula team’s ROS-based autonomous vehicle software stack for DARPA’s RACER challenge and compare its performance to the current implementation. Through testing performed in simulation on topologies of interest to the Nebula group, we find that the proposed framework results in a 37% increase in average speed and a 24% decrease in time to reach the final destination compared to the current implementation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowledge Distillation for Interpretable Clinical Time Series Outcome Prediction</title>
<link href="https://hdl.handle.net/1721.1/151355" rel="alternate"/>
<author>
<name>Wong, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/151355</id>
<updated>2023-08-01T04:03:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Knowledge Distillation for Interpretable Clinical Time Series Outcome Prediction
Wong, Anna
A common machine learning task in healthcare is to predict a patient’s final outcome given their history of vitals and treatments. For example, sepsis is a life-threatening condition that happens when the body has an extreme response to an infection. Treating sepsis is a complicated process, and we are interested in being able to predict a sepsis patient’s final outcome. Neural networks are a powerful model to make accurate predictions on such outcomes, but a major drawback of these models is that they are not interpretable. Being able to accurately predict treatment outcomes while also being able to understand the model’s predictions is necessary for these models and algorithms to be used in the real world.&#13;
&#13;
In this thesis, we use knowledge distillation, which is a technique for taking a model with high predictive power (known as the "teacher model"), and using it to train a model that has other desirable traits such as interpretability (known as the "student model"). For our teacher model, we use an LSTM, which is a type of neural network, to predict mortality for sepsis patients, given information about their recent history of vital signs and treatments. For our student model, we use an autoregressive hidden Markov model to learn interpretable hidden states. To incorporate the knowledge from the teacher model into the student model, we use a similarity-based constraint. We evaluate a method from a previous work that uses variational inference to learn the hidden states, and also develop and evaluate an alternative approach that uses the expectation-maximization algorithm. We analyze the interpretability of the learned states. Our results show that, although there is room for improvement in maintaining the generative performance of the model after adding the similarity constraint, the expectation-maximization algorithm is successful in incorporating the constraint to achieve high predictive power similar to the teacher model, along with better interpretability when compared to the teacher model.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SAMoSSA: Multivariate Singular Spectrum Analysis with Stochastic Autoregressive Noise</title>
<link href="https://hdl.handle.net/1721.1/151354" rel="alternate"/>
<author>
<name>Mann, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/151354</id>
<updated>2023-08-01T03:37:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">SAMoSSA: Multivariate Singular Spectrum Analysis with Stochastic Autoregressive Noise
Mann, Sean
The well-established practice of time series analysis involves (i) estimating deterministic, non-stationary trend and seasonality components, followed by (ii) learning the residual stochastic, stationary components. Recently, it has been shown that one can learn the deterministic non-stationary components accurately using multivariate Singular Spectrum Analysis (mSSA) in the absence of a correlated stationary component; meanwhile, in the absence of deterministic non-stationary components, the Autoregressive (AR) stationary component can also be learnt readily, e.g. via Ordinary Least Squares (OLS). However, a theoretical underpinning of multi-stage learning algorithms involving both deterministic and stationary components has been absent in the literature despite its pervasiveness. We tackle this issue by establishing desirable theoretical guarantees for a natural two-stage algorithm, where mSSA is first applied to estimate the non-stationary components despite the presence of a correlated stationary AR component, which is subsequently learned from the residual time series. We provide a finite-sample forecasting consistency bound for the proposed algorithm, SAMoSSA, which is data-driven and thus requires minimal parameter tuning. To establish theoretical guarantees, we overcome three hurdles: (i) we characterize the spectra of Page matrices of stable AR processes, thus extending the analysis of mSSA; (ii) we extend the analysis of AR process identification in the presence of arbitrary bounded perturbations; (iii) we characterize the out-of-sample or forecasting error, as opposed to solely considering model identification. Through representative empirical studies, we validate the superior performance of SAMoSSA compared to existing baselines. Notably, SAMoSSA’s ability to account for AR noise structure yields improvements ranging from 5% to 37% across various benchmark datasets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Redistributive Tax Policies on Fuel Demand</title>
<link href="https://hdl.handle.net/1721.1/151353" rel="alternate"/>
<author>
<name>Tricot, Loan</name>
</author>
<id>https://hdl.handle.net/1721.1/151353</id>
<updated>2023-08-01T03:25:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Effects of Redistributive Tax Policies on Fuel Demand
Tricot, Loan
The objective of this study is to evaluate the efficiency of policies aimed at reducing fuel demand. A model is developed to illustrate the channels through which policies --- such as fuel taxes and electric vehicle subsidies --- affect fuel demand. The model is based on a consumer theory framework at the household level. I model consumption of fuel and vehicles simultaneously and study the consumer's choice between a combustion vehicle and an electric vehicle. The study underlines the role of income elasticity of vehicle miles traveled in consumers' vehicle choice, and explores the policy implications of this role. The National Household Travel Survey's data is used to uncover stylized facts of fuel demand, which I compare with those exhibited by my model. Studying this subject is crucial for understanding and enhancing the effectiveness of policies aimed at reducing fuel demand, which is key to addressing climate change by promoting sustainable transport options.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LightSpeed: A Framework to Profile and Evaluate&#13;
Inference Accelerators at Scale</title>
<link href="https://hdl.handle.net/1721.1/151351" rel="alternate"/>
<author>
<name>Williams, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/151351</id>
<updated>2023-08-01T03:19:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">LightSpeed: A Framework to Profile and Evaluate&#13;
Inference Accelerators at Scale
Williams, Christian
The massive growth of machine learning-based applications and the end of Moore’s law have created a pressing need to build highly efficient computing platforms from the ground up. Consequently, researchers and practitioners have been developing highly innovative cutting-edge architectures to meet today’s exponentially increasing demands for machine learning services.&#13;
&#13;
However, evaluating the performance gains of newly developed machine learning systems at scale is extremely challenging. Existing evaluation platforms are often specialized to a specific hardware target, such as GPUs, making them less amenable to novel designs. Moreover, evaluating the performance of a newly designed system at scale requires careful consideration of workload and traffic patterns.&#13;
&#13;
To address the above challenges, I introduce LightSpeed, a framework to profile and evaluate inference accelerators at scale. LightSpeed is an event-based simulator that enables users to compare the performance of their system to best-in-class accelerators at scale. LightSpeed profiles the computation and communication requirements of real-world deep neural networks through accurate measurements on hardware. It then simulates the service time of inference requests under a variety of accelerators and scheduling algorithms.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Compositional Image Decomposition with Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/151350" rel="alternate"/>
<author>
<name>Su, Jocelin</name>
</author>
<id>https://hdl.handle.net/1721.1/151350</id>
<updated>2023-08-01T03:26:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unsupervised Compositional Image Decomposition with Diffusion Models
Su, Jocelin
Our visual understanding of the world is factorized and compositional. With just a single observation, we can ascertain both global and local attributes in a scene, such as lighting, weather, and underlying objects. These attributes are highly compositional and can be combined in various ways to create new representations of the world. This paper introduces Decomp Diffusion, an unsupervised method for decomposing images into a set of underlying compositional factors, each represented by a different diffusion model. We demonstrate how each decomposed diffusion model captures a different factor of the scene, ranging from global scene descriptors (e.g. shadows, foreground, or facial expression) to local scene descriptors (e.g. constituent objects). Furthermore, we show how these inferred factors can be flexibly composed and recombined both within and across different image datasets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recreating Past Environments in Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/151346" rel="alternate"/>
<author>
<name>Villa, Eli</name>
</author>
<id>https://hdl.handle.net/1721.1/151346</id>
<updated>2023-08-01T04:16:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Recreating Past Environments in Virtual Reality
Villa, Eli
Reconstructing environments from a collection of images or videos could allow people to revisit locations from their past. Existing reconstruction methods impose strict constraints on input images, and require more data than most people have at their disposal. This work provides a pipeline that takes a short video from a user’s past and creates a virtual environment that they can experience in VR. In particular, we examine Photogrammetry and Neural Radiance Fields as ways of representing 3D environments that can be converted into meshes and exported to VR. A technical evaluation compares the two methods, and one is selected for use in our pipeline. Through a human-subjects study, we found that experiencing past environments as immersive walkable spaces, when compared to simply watching a video, improves users’ sense of presence and ability to recall memories of the space.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Truthfulness in Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/151345" rel="alternate"/>
<author>
<name>Liu, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/151345</id>
<updated>2023-08-01T04:00:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Truthfulness in Large Language Models
Liu, Kevin
Large language models (LLMs) have been experiencing a rapid rise in utility, accessibility, and popularity, but there are still many areas in which they can improve. One such area for improvement is their truthfulness. We seek to improve the truthfulness of LLMs by probing their internal representations. We find that a linear probe on the last hidden layer representation is able to improve a model’s accuracy by reducing its confidence in incorrect answers. However, this probe is less effective at perturbing the model to change its behavior and driving the model towards correct answers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capturing Worlds of Play: A Framework for&#13;
Educational Multiplayer Mixed Reality Simulations</title>
<link href="https://hdl.handle.net/1721.1/151344" rel="alternate"/>
<author>
<name>Wang, Ellen</name>
</author>
<id>https://hdl.handle.net/1721.1/151344</id>
<updated>2023-08-01T03:33:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Capturing Worlds of Play: A Framework for&#13;
Educational Multiplayer Mixed Reality Simulations
Wang, Ellen
Multiplayer AR participatory simulations offer highly engaging hands-on lessons to teach systems thinking to K-12 students. However, several challenges obstruct the implementation of these simulations. These simulations require that participating mixed reality devices share a global view of the simulation. Devices must share a robust map of the physical space to render virtual objects in the same physical location for every viewer. In addition, the environmental data of the room must be captured and processed to inform the simulation. Finally, the system must respond dynamically to player input. This thesis project aims to create a framework that abstracts the process of building these highly dynamic AR p-sims.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Don’t Over Think It: Mechanically Intelligent Manipulation</title>
<link href="https://hdl.handle.net/1721.1/151343" rel="alternate"/>
<author>
<name>Xie, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/151343</id>
<updated>2023-08-01T03:30:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Don’t Over Think It: Mechanically Intelligent Manipulation
Xie, Gregory
Developing capable, robust robots that can effectively operate in unstructured environments requires reasoning over how embodiment and cognition impact capability and complexity. In this thesis, we focus on the design of hands, finding a balance between capability and complexity through the lens of mechanical intelligence: the ability of the body to contribute to functionality. We develop two grippers that serve as examples of this idea: the Belt Orienting Phalanges and the Flexible Robust Observant Gripper.&#13;
&#13;
Belt Orienting Phalanges (BOP) enables in-hand manipulation through the addition of two belts on each finger of a parallel-jaw gripper, allowing control over the roll, pitch, and translation of a grasped object. We demonstrate how these motion primitives and other aspects of BOP’s morphology enable the simple adaptation of an existing planning framework to perform complex manipulation tasks.&#13;
&#13;
Flexible Robust Observant Gripper (FROG) eases perception and control through the structure of each finger, allowing for proprioception and robust grasping while being strong and remaining comparable in complexity to other soft grippers. We demonstrate how these features enable FROG to grasp gently and deal with shape and pose uncertainty.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Robustness of Vision Models and Humans to Occlusion-Based Corruptions</title>
<link href="https://hdl.handle.net/1721.1/151342" rel="alternate"/>
<author>
<name>Lu, David</name>
</author>
<id>https://hdl.handle.net/1721.1/151342</id>
<updated>2023-08-01T03:51:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding the Robustness of Vision Models and Humans to Occlusion-Based Corruptions
Lu, David
Humans are excellent object recognizers. Not only can they identify fully visible objects, but they can also recognize objects that are partially blocked from view (i.e., occluded). Moreover, vision models have made substantial progress in object recognition over the past decade. However, their proficiency in identifying occluded objects has not been thoroughly investigated. In this work, we analyze the robustness of models and humans to occlusions by building artificial occlusion transforms that mask out parts of images. We design occlusion transforms to model a diverse range of occlusion scenarios, varying two key factors: (1) the percentage of the image that is occluded, and (2) the granularity of the occlusion pattern, from large chunks to fine-grained pepper noise. We then evaluate the performance of humans and models on these occluded images. Our experiments yield several key findings. Intriguingly, pretrained models exhibit a U-shaped accuracy curve, with medium-granularity occlusions posing the greatest challenge. This pattern closely aligns with the one observed in our human experiments, which is particularly surprising, considering the substantial disparities between human visual systems and machine-based perception. Additionally, we explore whether performance losses caused by occlusions can be mitigated through two approaches: finetuning using occluded images and inpainting occluded pixels before classification. We discover that finetuning leads to a considerable increase in accuracy, but we suspect that finetuned models are relying on a different set of features. Inpainting helps significantly for mid- and high-frequency occlusions, but has the disadvantage of misleading both models and humans at low frequencies. Lastly, we introduce a new adversarial occlusion task, and propose two attack methods based on differential evolution and Grad-CAM. We find that occluding fewer than 10% of pixels is enough to fool vision classifiers. 
This demonstrates that adversarial attacks can be executed by eliminating image content rather than introducing perturbations. Complementing our analysis of a variety of state-of-the-art models, we offer our occlusion benchmark as a resource for researchers to evaluate the performance of future models intended for real-world deployment.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Modal Transit Time Prediction for E-Commerce Fulfillment Optimization and Carbon Emissions Reduction</title>
<link href="https://hdl.handle.net/1721.1/151341" rel="alternate"/>
<author>
<name>Angevine, Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/151341</id>
<updated>2023-08-01T04:22:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Multi-Modal Transit Time Prediction for E-Commerce Fulfillment Optimization and Carbon Emissions Reduction
Angevine, Kathryn
Consumers are purchasing an increasing amount of goods through digital channels as compared to brick-and-mortar stores, and expect fast, reliable delivery. At the same time, society is facing the urgent challenge of reducing carbon emissions to limit global warming to levels considered safe by climate scientists. A global sportswear retailer is investing in improving the digital consumer experience while meeting its aggressive 2030 carbon reduction goals. This work studies how machine learning can be used to both improve the retailer’s digital fulfillment operations and reduce their carbon emissions footprint. It focuses on enhancing the decision-making used to select a distribution center from which to fulfill a consumer’s order, and aims to do so by increasing the accuracy of a key input into that process. Specifically, the work targets accuracy improvement of transit time estimates, which quantify the number of days between a parcel’s carrier induction and delivery.&#13;
&#13;
Machine learning techniques are leveraged to develop a model for predicting transit times. Model development begins with data preparation, which includes sourcing, cleaning, sampling, and feature engineering. It then continues with a series of experiments that provide insights into favorable model design elements. A final model is created in consideration of the experimental results. This model achieves an accuracy of 67%, an improvement over the current-state accuracy of 45%. A counterfactual analysis is conducted to assess the impact of improved transit time estimates on key fulfillment metrics. On a one-month sample, the model enables improved fulfillment decisions, namely ones associated with a 4.5% decrease in lead time, a 3% reduction in CO2 emissions, and a 1.5% reduction in cost.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How To Pack Anything</title>
<link href="https://hdl.handle.net/1721.1/151340" rel="alternate"/>
<author>
<name>Rong, Victor</name>
</author>
<id>https://hdl.handle.net/1721.1/151340</id>
<updated>2023-08-01T04:09:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How To Pack Anything
Rong, Victor
3D printers can make anything, from household tools to rocket engines, so long as the models can be packed into a given enclosure. The denser the packing, the more efficient the print. Unfortunately, modern packing research has struggled with complex shapes, and even proprietary packing software cannot effectively process the wide variety of models created for 3D printing. A recent work (Spectral Packing [1]) uses the frequency space of a voxel grid to achieve impressive performance. However, voxel grids present their own challenges, as their resolution and complexity are intrinsically linked. Our proposed extension alleviates these issues by allowing sub-voxel positioning, enabling faster disassembly of loose objects, and incorporating a combinatorial search algorithm of tunable complexity. With these elements, we achieve improved results when packing arbitrary 3D objects.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Natural Language Processing to Facilitate Common Student Misconception Analysis</title>
<link href="https://hdl.handle.net/1721.1/151339" rel="alternate"/>
<author>
<name>Zaman, Azreen</name>
</author>
<id>https://hdl.handle.net/1721.1/151339</id>
<updated>2023-08-01T03:03:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Using Natural Language Processing to Facilitate Common Student Misconception Analysis
Zaman, Azreen
There is large variation in the educational backgrounds and goals of incoming university students. To improve the overall learning experience of these students, we can use natural language processing techniques such as topic modeling and sentiment analysis to facilitate the analysis of common student misconceptions. This project aims to develop an algorithm that semi-automatically extracts, from online feedback, the specific topics and common errors that students struggle with in class, allowing instructors to adjust lesson plans and place emphasis on topics of concern. Using these tools, we can study the effect on student grades when instructors take the information extracted by the model into account in their lesson plans. This project is aimed at MIT freshmen taking two semesters of physics.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigate and Analyze the Impact of Electronification in Fixed Income Bond Markets and Equity Stock Markets via ARIES Framework</title>
<link href="https://hdl.handle.net/1721.1/151329" rel="alternate"/>
<author>
<name>Uppal, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/151329</id>
<updated>2023-08-01T03:54:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigate and Analyze the Impact of Electronification in Fixed Income Bond Markets and Equity Stock Markets via ARIES Framework
Uppal, Abhishek
Electronic trading continues to increase and evolve within and across financial markets globally. This growth is primarily driven by market participants searching for greater transparency, operational efficiency, and regulatory-compliant trading solutions. Following tremendous growth in the equities domain, electronic trading is gaining prominence in the fixed income markets and is contributing to changes in its market structure, price discovery mechanisms, and strategies for accessing untapped and alternative liquidity sources. New electronic trading platforms, venues, and entrants have emerged that prioritize providing clients and investors with modernized trading solutions that are competitive and cost-effective and that challenge the status quo. Technological advancements empowered by data and analytics are enabling rapid dissemination of pre-trade, at-trade, and post-trade information in the financial ecosystem, leading to more integrated and efficient markets with reduced fragmentation. Electronic trading is also promoting greater use of algorithmic trading and is introducing numerous workflow automation technologies into various stages of the trading lifecycle. Innovative trading protocols supporting best execution for clients have come into existence and serve as strategic tools to attract and retain market share. Although electronification has much to offer, it may still pose challenges or operational resistance to some market participants.&#13;
&#13;
Through literature review and a series of semi-structured interviews this thesis investigates the emergence and impact of electronification in the fixed income bond markets and equities stock markets. It explores, examines, and discusses the value-propositions, drivers and motivations for transformation, challenges inhibiting growth and adoption, and the future outlook subject to electronification in bonds and stock markets. This thesis further applies key elements of the ARIES framework to generate unique perspectives and assess the impact of electronification, in its current state, across these select asset classes and instruments.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategies for Influential Interactivity in the Physical Domain</title>
<link href="https://hdl.handle.net/1721.1/151328" rel="alternate"/>
<author>
<name>Mun, Reina Suyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/151328</id>
<updated>2023-08-01T03:28:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Strategies for Influential Interactivity in the Physical Domain
Mun, Reina Suyeon
The proliferation and accessibility of technological tools have facilitated the emergence of innovative, creative practices that blur the boundaries between conventional design and art fields. At the crux of these practices lies the notion of interactivity, which has become ubiquitous in both theory and practice over the last half-century. Nevertheless, the frequent and indiscriminate usage of the term 'interactive' without well-defined parameters and pragmatic implementations has made me scrutinize the significance of interactivity and the authentic essence of interactive experiences. &#13;
&#13;
Despite several decades of development in interactive arts and design, there remains a pressing need to focus more intently on the affective and cognitive impacts these systems engender. The discourse surrounding design and media arts must give greater weight to the roles of cognitive and emotional factors, as they fundamentally shape our perceptions, our reactions to reality, and our processing of information on both emotional and intellectual levels. This thesis seeks to contribute to this critical emphasis by exploring the creative possibilities of interactivity beyond the mere construction of feedback loops with technology, thus transcending superficially fashionable approaches. &#13;
&#13;
In this thesis, crucial factors contributing to the formulation of efficacious interactive strategies are discerned through an analysis of influential interactivity disintegration, an investigation of the philosophical underpinnings of emergence, and a synthesis of multi-sensory experience extensions. By probing the complexities of components within both the interactants and the interactive system, the thesis advocates an open-ended, exploratory approach to making. This is accomplished by examining the interconnected nature of sensory modalities encompassing memories, social cues, and emotions while concurrently addressing technology's inherent limitations as an instrument for crafting interactive experiences.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case Study of Project Management of COVID-19 Vaccination in Japan</title>
<link href="https://hdl.handle.net/1721.1/151327" rel="alternate"/>
<author>
<name>Majima, Eishi</name>
</author>
<id>https://hdl.handle.net/1721.1/151327</id>
<updated>2023-08-01T04:18:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Case Study of Project Management of COVID-19 Vaccination in Japan
Majima, Eishi
COVID-19 vaccination played a critical role in preventing the spread of the disease during the global pandemic. Research on the operational mechanisms of COVID-19 vaccination projects is limited compared with epidemiologic and socio-economic studies. Japan is a unique country in that a wide range of operational information on COVID-19 vaccination is publicly available. This research evaluated vaccination trends in 49 countries and developed a model of vaccination trends in Japan to understand the operational mechanisms of national-scale projects. The international comparison revealed that, despite Japan’s slow vaccine authorizations, it was the 13th earliest and 3rd fastest country to achieve 70% full vaccination coverage. A global comparison of daily vaccination trends exhibited a slow pace in the first 80 days of Japan’s vaccination project. The study found different levels of ceiling effects of vaccine distributions on daily first-dose vaccinations by vaccine category in Japan. Based on these observations, the research developed a system dynamics model of vaccination trends with four operational factors: people willing to take vaccines, daily vaccine deliveries, vaccine stocks on sites, and human resource capacities. The model fit the actual 7-day smoothed daily vaccination trends with R-squared values of 0.943, 0.909, and 0.915 for the total, first, and second doses with Pfizer/BioNTech and Takeda/Moderna vaccines in the primary series in Japan. The simulation predicted cumulative vaccination trends with 70% coverage achievement period errors (percentage errors) of 10 days (4.24%), 12 days (5.41%), and 8 days (3.23%) for the total, first, and second doses, respectively. The developed model was applied to explore room for operational improvement in Japan for resource-saving and acceleration purposes. 
The experiment demonstrated potential savings of over 20 thousand healthcare worker recruitments, without delaying vaccination, under the vaccine supply constraints and with a modified team structure at sites with a nurse-to-doctor ratio of 3 or more. For acceleration purposes, the model estimated limited opportunities through human resource management under the vaccine supply constraints, shortening the period to 70% full vaccination coverage by only 3 days. This research provides performance metrics and a simulation tool for model-based project planning and management applicable by practitioners to future pandemics and public emergency responses.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economics of Renewable Electricity:  Lessons for potential investors from the California and Texas Electricity Markets</title>
<link href="https://hdl.handle.net/1721.1/151326" rel="alternate"/>
<author>
<name>de la Sierra Cauley, Carmen</name>
</author>
<id>https://hdl.handle.net/1721.1/151326</id>
<updated>2023-08-01T03:08:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Economics of Renewable Electricity:  Lessons for potential investors from the California and Texas Electricity Markets
de la Sierra Cauley, Carmen
This thesis describes a comparative analysis conducted to understand the similarities and differences between the California and Texas electricity markets from a regulatory standpoint. Each market has adopted unique policies to kickstart its green energy transition. While both regions have seen substantial renewable penetration, they have also started to experience hurdles in their grid operations due to the inherently intermittent nature of renewables.&#13;
&#13;
The comparative analysis was carried out through a flexibility model to reflect the uncertainty that exists in each market over time. The development of renewable electricity markets is highly dynamic in nature, so the period under analysis for the investment at hand was separated into three phases to reflect the effects renewable penetration has on the grid and to show how those effects have been, and could be, handled by the Texas and California grid operators.&#13;
&#13;
The comparison provides a recommendation for investors seeking to deploy capital into a 50 MW solar PV renewable asset under development, with a 30-year useful life segregated into the three phases. The critical metric steering that recommendation is the net present value of the cash flows from the project in each location. These cash flows directly reflect the policies in each market, given the evolution of power prices. The economics of renewables operating in electricity markets follow three distinct phases. At first, renewables expect high margins from high market prices, followed by a decrease in power prices driven by their own penetration. In the final phase, their intermittent nature creates a need for new capacity. Therefore, the long-term value of such investments is risky and requires careful analysis that reflects the highly dynamic and uncertain nature of power markets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Understanding Challenges in Preserving User Privacy</title>
<link href="https://hdl.handle.net/1721.1/151325" rel="alternate"/>
<author>
<name>Govada, Mervine Anand</name>
</author>
<id>https://hdl.handle.net/1721.1/151325</id>
<updated>2023-08-01T04:08:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Understanding Challenges in Preserving User Privacy
Govada, Mervine Anand
In recent years, enterprises' collection and processing of personal data has raised significant concerns about customer privacy. Ensuring customer privacy is vital for ethical data use and building trust. However, enterprises may need to enhance their efforts to safeguard customer privacy effectively.&#13;
&#13;
Customers have become increasingly aware of how businesses handle their personal information and the potential risks that come with it; they proactively seek businesses that prioritize privacy protection. However, customer trust in how enterprises protect customer data varies, emphasizing the need for businesses to be transparent and communicate clearly with customers about their data protection practices. Clear and concise communication can include privacy policies and obtaining informed consent from customers.&#13;
&#13;
Enterprises typically use anonymization, encryption, data masking, pseudonymization, and access control to protect customer privacy. The thesis explores two key technologies to enhance customer privacy and increase customer trust in enterprises: Federated Learning and Differential Privacy. &#13;
&#13;
Preserving customer privacy is essential for building trust with customers, ensuring ethical use of personal data, and compliance with regulations. Improving privacy from a technology standpoint might not necessarily result in the customers' desired outcome. Therefore, it is essential to take the entire system into account. A systems approach can aid in analyzing and understanding the challenges of holistically preserving customer privacy from the perspectives of the customer, enterprise, and other stakeholders. By adopting a systems approach, enterprises can identify potential risks and challenges within the system, gain a better understanding of interconnections and interdependencies, and develop more effective solutions.&#13;
&#13;
The systems approach involves identifying and analyzing subsystems, goals, and interactions, allowing enterprises to view their data practices holistically and identify potential privacy risks. By using a systems approach and leveraging technologies such as Federated Learning and Differential Privacy, enterprises can take a customer-centric approach to reduce privacy concerns.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information Design Considerations for Effective Communication of Sustainability Metrics</title>
<link href="https://hdl.handle.net/1721.1/151323" rel="alternate"/>
<author>
<name>Sternberg, Zachary</name>
</author>
<id>https://hdl.handle.net/1721.1/151323</id>
<updated>2023-08-01T03:49:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Information Design Considerations for Effective Communication of Sustainability Metrics
Sternberg, Zachary
As the world becomes more aware of the dangers posed by climate change, there is a growing conversation around the carbon footprint and how we reduce our impact. Everything we do contributes to our carbon footprint, from the food we eat to the planes we fly, but emissions are invisible, abstract, and ambiguous. It is quite difficult to conceptualize what a metric ton of carbon dioxide is or what it means for the environment. Digital eco-feedback solutions have emerged to help individuals quantify and visualize their footprint. These robust, data-driven interfaces display metrics on electricity usage or carbon emissions to help users track goals and identify strategic improvements. But why do designers assume that people know how to interpret the spikes and dips in a time-based line graph and draw the connection to their heating and air-conditioning or an inefficient dryer in the laundry room? This thesis analyzes eleven of these platforms, including carbon calculators, emissions trackers, smart meters, and smart thermostats, all of which share the goals of quantifying, communicating, and helping users reduce their consumption and emissions. The work looks through the collective lenses of information design, behavior science, and sustainability to evaluate the platforms with a framework of five clarities derived from secondary research: purpose, truth, mappings, affordance, and legibility. Input from key informants sheds light on further challenges faced in the design development process and the discipline as a whole. The output is a list of considerations for designers and product teams to build more effective tools for communicating emissions data, such that future visualizations result in a clear understanding of the subject matter, empowering users to take action.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering Abstractions from Language&#13;
via Neurosymbolic Program Synthesis</title>
<link href="https://hdl.handle.net/1721.1/151322" rel="alternate"/>
<author>
<name>Grand, Gabriel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151322</id>
<updated>2023-08-01T03:04:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Discovering Abstractions from Language&#13;
via Neurosymbolic Program Synthesis
Grand, Gabriel J.
Large language models (LLMs) are growing highly adept at language-guided program synthesis: translating natural language specifications into code to solve programming tasks. Nevertheless, current approaches require searching through a vast space of strings, often needing thousands of guesses to discover solutions to difficult tasks at inference time. In contrast, human programmers learn to solve problems on-the-fly by building up hierarchical libraries of abstractions: symbolic expressions that encapsulate reusable functionality. In this work, we draw on models of library learning from the programming languages (PL) literature, enriching them with the ability to perform search and abstraction learning with LLMs. We introduce Lilo, a neurosymbolic framework for Library Induction from Language Observations, which consists of three components: an LLM synthesizer, a symbolic compression module, and an auto-documentation (AutoDoc) procedure. Drawing on human language as a source of commonsense knowledge, Lilo learns abstractions that would be intractable to discover with traditional enumerative search. In our evaluations against DreamCoder, a state-of-the-art library learning algorithm, we find that Lilo solves more tasks while achieving faster search times and comparable computational costs. A central aspect of Lilo is a neurosymbolic integration between the LLM synthesizer and Stitch, a high-performance program compression algorithm that identifies useful abstractions in lambda calculus expressions. Lilo augments Stitch with AutoDoc, which generates human-readable names and docstrings for abstractions using an LLM. In addition to improving interpretability, we find that AutoDoc crucially assists Lilo’s synthesizer to infer the semantics of abstractions. 
In sum, Lilo offers an optimistic “better together” vision where human programmers work in tandem with LLMs and PL tools, building up shared libraries of abstractions to enable creative solutions to complex software problems. &#13;
&#13;
Code for this work is available at: github.com/gabegrand/lilo.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing an Investment Research System for Asset Management Based on Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/151319" rel="alternate"/>
<author>
<name>Chen, Yanzhang</name>
</author>
<id>https://hdl.handle.net/1721.1/151319</id>
<updated>2023-08-01T03:13:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Designing an Investment Research System for Asset Management Based on Natural Language Processing
Chen, Yanzhang
In recent years, the asset management industry has experienced rapid growth, with the global asset management scale continuously increasing. Conventionally, investment research in asset management entails the acquisition of data and information from a myriad of sources, which is then manually processed and analyzed. However, in the face of macroeconomic volatility, fierce competition, and a deluge of fragmented information, this traditional approach to investment research increasingly struggles to manage the sheer volume of financial market data and information.&#13;
&#13;
Natural Language Processing (NLP), an essential subset of artificial intelligence, has achieved significant breakthroughs in recent years. It enables automatic processing, analysis, and text generation for specific tasks, aiding investment institutions in rapidly integrating and analyzing vast volumes of information and automatically generating investment reports. This paper aims to trace the evolution of NLP, evaluate its prospective positive impact on asset management, and deliberate on designing an investment research system grounded in NLP technology.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Out-Of-Memory Sparse-Dense Matrix Multiplication</title>
<link href="https://hdl.handle.net/1721.1/151318" rel="alternate"/>
<author>
<name>Yue, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/151318</id>
<updated>2023-08-01T04:16:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing Out-Of-Memory Sparse-Dense Matrix Multiplication
Yue, Brandon
We will examine state-of-the-art approaches for sparse-dense matrix multiplication (SpMDM), with a focused application on graph machine learning workloads such as graph neural networks (GNNs), though this work is general enough that it should apply to any application running matrix multiplication workloads that cannot fit in memory. Specifically, we will conduct a thorough and in-depth analysis of various optimization strategies, including sparse matrix formats, tiling, load balancing, and data locality, and investigate how they affect performance. Based on this performance study, we will design and implement an out-of-core framework that supports massive graph datasets that cannot fit into memory. We foresee challenges in mitigating the overhead of accessing external storage, as well as in balancing performance against optimized CPU/GPU memory usage. We will compare our out-of-core solution with state-of-the-art in-memory solutions as well as distributed solutions, and analyze the algorithmic complexity and overall overhead involved in our implementation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NDF-Based API for Human-assisted Language Planning (HaLP)</title>
<link href="https://hdl.handle.net/1721.1/151317" rel="alternate"/>
<author>
<name>Fong, Alisha</name>
</author>
<id>https://hdl.handle.net/1721.1/151317</id>
<updated>2023-08-01T04:13:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">NDF-Based API for Human-assisted Language Planning (HaLP)
Fong, Alisha
Recent works have shown the promise of LLMs for generalizable task planning. Challenges in integrating LLMs for high-level planning include their tendency to output infeasible or sub-optimal plans, but their potential includes commonsense reasoning about high-level tasks on par with a human. Related works either generate free-form text that may contain actions inaccessible to the robot, or over-constrain the planner by providing it a static set of possible actions to select from, yielding mediocre plans. Humans can decide when a task is infeasible due to limitations in the action space and propose alternative plans; we show that LLMs are able to do so as well. We present an LLM planner with the ability to request online learning of skills in order to output and execute optimal tabletop manipulation plans, even when the initial set of robot skills is insufficient. We build a full-stack system and deploy our method in simulation and on hardware to demonstrate the capabilities of the planner and the preference for its plans over others in our ablation experiments. To support the learning of new skills, we present a low-level control API, conditioned on natural language using Neural Descriptor Fields (NDFs), for out-of-plane, category-level manipulation that is SE(3)-equivariant and highly data-efficient, enabling online learning.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tardigrade: A Hardware Accelerator for Sparse Matrix Multiplication and Sparse Convolution</title>
<link href="https://hdl.handle.net/1721.1/151316" rel="alternate"/>
<author>
<name>Attaluri, Nithya</name>
</author>
<id>https://hdl.handle.net/1721.1/151316</id>
<updated>2023-08-01T03:57:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Tardigrade: A Hardware Accelerator for Sparse Matrix Multiplication and Sparse Convolution
Attaluri, Nithya
Sparse matrix-sparse matrix multiplication (SpMSpM) and sparse convolution are critical primitive operations for scientific computing and deep learning. Prior work has proposed accelerators for each of these primitives, but these systems are often specialized to run either SpMSpM or sparse convolution efficiently. Although there are methods to run sparse convolution on an SpMSpM accelerator, and vice versa, this typically incurs unnecessary space overheads, higher memory traffic, or reduced performance. Ideally, a single hardware accelerator should provide native support for both operations. This work addresses this challenge through Tardigrade, a hardware accelerator for both SpMSpM and sparse convolution. Tardigrade extends the design of Gamma, a recent hardware accelerator for SpMSpM, to accelerate sparse convolution while retaining its SpMSpM capabilities. We compare Tardigrade’s performance against that of Gamma and recent accelerators for sparse convolutional neural networks (CNNs). Tardigrade shows comparable performance on SpMSpM and achieves a gmean 3.1× improvement in speed on sparse convolution.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Culturally-Integrative Encoding: A Human-Computer Interaction Approach to Cultural Learning Interfaces</title>
<link href="https://hdl.handle.net/1721.1/151315" rel="alternate"/>
<author>
<name>Prakash, Megan</name>
</author>
<id>https://hdl.handle.net/1721.1/151315</id>
<updated>2023-08-01T04:05:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Culturally-Integrative Encoding: A Human-Computer Interaction Approach to Cultural Learning Interfaces
Prakash, Megan
I introduce the Encoding and Decoding Model of Cultural Learning Experiences, which provides structure for analyzing how a creator transforms a source culture into an interactive pedagogical experience about that culture and how individual users interpret that experience. I synthesize principles from museology, cultural computing, cultural pedagogy, and computer-supported collaborative learning (CSCL) to create an approach broadly accessible to creators and human-computer interaction (HCI) researchers. My novel approach emphasizes how CSCL principles can be applied to leverage capabilities of social informal learning spaces; I also embed the ability to address prosocial concerns and to actively support inclusive design practices. Secondly, I propose the Culturally-Integrative Encoding Methodology for constructing an interactive cultural learning experience that (1) represents the culture in a manner that is acceptable and recognizable as accurate to members of that culture, (2) meets contemporary cultural pedagogy goals by supporting a diverse audience in concrete acquisition, abstract acquisition, perspective-taking, and possible perspective transformation, and (3) is usable and engaging for its target audience. I prototype the usage of this methodology by creating the Virtual Latin Quarter Experience (VLQE) in collaboration with the Universal Hip-Hop Museum, demonstrating that a collaborative design process described in the methodology can be used to create a culturally-integrative online learning interface. A pilot study (n=14) suggests that the VLQE may be able to support relevant pedagogical and usability goals. Future work includes exploring how the Encoding and Decoding Model and the Culturally-Integrative Encoding can scaffold applications of CSCL research, pedagogical frameworks, and inclusive design practices.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explicit Regularization for Overparameterized Models</title>
<link href="https://hdl.handle.net/1721.1/151314" rel="alternate"/>
<author>
<name>Huang, Tiffany Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/151314</id>
<updated>2023-08-01T03:41:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Explicit Regularization for Overparameterized Models
Huang, Tiffany Y.
In many learning problems, it is desirable to incorporate explicit regularization in the objective to avoid overfitting the data. Typically, the regularized objective is solved via weight decay. However, optimizing with weight decay can be challenging because we cannot tell if the solution has reached a global minimum. Further, weight decay can have large run-to-run variations and is sensitive to the choice of regularization hyperparameter. To this end, we propose a new approach to optimizing objectives with explicit regularization, called Regularizer Mirror Descent (RMD). In the overparameterized regime, where the number of model parameters exceeds the size of the data, RMD provably converges to a point “close” to a minimizer of the regularized objective. Additionally, RMD is computationally efficient and imposes virtually no overhead over standard gradient descent. We observe that RMD is remarkably robust and consistent compared to gradient descent with weight decay, despite solving for the same objective. We also illustrate the practical utility of RMD by applying it to learning problems with corrupted labels, where it can match or outperform state-of-the-art methods without requiring additional hyperparameter tuning or ad-hoc heuristics tailored for this task.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privacy-Preserving Natural Language Dataset Generation</title>
<link href="https://hdl.handle.net/1721.1/151313" rel="alternate"/>
<author>
<name>Chen, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/151313</id>
<updated>2023-08-01T03:13:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Privacy-Preserving Natural Language Dataset Generation
Chen, Ashley
As we depend on data more heavily to power the insights made by machine learning systems, it becomes imperative that we design guarantees for protecting the privacy of such data. Recent research has shown the ease with which attacks such as membership inference or model inversion can extract potentially sensitive training data given the model alone. To prevent curious or malevolent users from gleaning training data through these attacks, we propose the generation of private synthetic datasets to replace the original datasets in training and testing the model. These synthetic datasets will have the same semantic and statistical distribution as the original dataset, but will be differentially private, thus preventing individuals in the dataset from being identified. This would guarantee that no sensitive information from the original dataset can be extracted from the generated synthetic dataset. Compared to related works that dealt with either structured data or unstructured data separately, our work developed a pipeline for generating synthetic datasets given a complex dataset consisting of structured and unstructured text, as well as numerical data. We used a number of metrics to evaluate the generation pipeline according to its statistical similarity to the original dataset, its utility, and its privacy. Our experiments focused on varying the degree of privacy across the sub-modules of the pipeline. We found that we can generate differentially private synthetic datasets whose structured and unstructured components each achieve good performance in similarity, utility, and privacy.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simultaneous Localization and Calibration in a Wireless Network of Uncooperative Nodes</title>
<link href="https://hdl.handle.net/1721.1/151309" rel="alternate"/>
<author>
<name>Wan, Kai Yee</name>
</author>
<id>https://hdl.handle.net/1721.1/151309</id>
<updated>2023-08-01T03:25:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Simultaneous Localization and Calibration in a Wireless Network of Uncooperative Nodes
Wan, Kai Yee
Wi-Fi's fine-time measurement (FTM) protocol supports indoor localization with 1-2 m accuracy by allowing two devices to cooperatively measure their signal round-trip-time (RTT). But as of 2023, few commercially-deployed Wi-Fi access points (APs) actually support the protocol. &#13;
&#13;
Using a one-sided RTT measurement technique that does not require cooperation from the AP, a mobile device can obtain distance measurements with most APs in operation today. A major obstacle to using one-sided RTT for localization is that measurements have an unknown bias or offset quantity that is about two orders of magnitude larger than the RTT being measured. &#13;
&#13;
This thesis proposes an algorithmic solution enabling a mobile device to determine its position using only one-sided RTT measurements from uncooperative APs, without prior manual calibration for RTT offsets. Based on the Gauss-Newton method for non-linear least squares problems, it performs both calibration and localization by iteratively updating estimates of position and RTT offset. Experimental results show the solution can achieve roughly 5-meter two-dimensional accuracy within the area bounded by the APs. Additional characterizations of one-sided RTT range measurements, and of the effects of different geometries and frequency bands, are also presented.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surrogate Neural Networks for Efficient Simulation-based Trajectory Planning Optimization</title>
<link href="https://hdl.handle.net/1721.1/151308" rel="alternate"/>
<author>
<name>Ruff, Evelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/151308</id>
<updated>2023-08-01T04:16:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Surrogate Neural Networks for Efficient Simulation-based Trajectory Planning Optimization
Ruff, Evelyn
This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory. Simulation-based optimization is necessary when there is no analytical form of the system accessible, only input-output data that can be used to create a surrogate model of the simulation. Like many high-fidelity simulations, this trajectory planning simulation is very nonlinear and computationally expensive, making it challenging to optimize iteratively. Through gradient descent optimization, our approach finds the optimal reference trajectory for landing a hypersonic vehicle. In contrast to the large datasets used to create the surrogate models in the prior literature, our methodology is specifically designed to minimize the number of simulation executions required by the gradient descent optimizer. We demonstrated this methodology to be more efficient than the standard practice of hand-tuning the inputs through trial-and-error or randomly sampling the input parameter space. Due to the intelligently selected input values to the simulation, our approach yields better simulation outcomes, achieved more rapidly and to a higher degree of accuracy. Optimizing the hypersonic vehicle's reference trajectory is very challenging due to the simulation's extreme nonlinearity, but even so, this novel approach found a 74% better-performing reference trajectory compared to nominal, and the numerical results clearly show a substantial reduction in computation time for designing future trajectories.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Introducing Technologies to Bridge Inspection and Maintenance in Japan</title>
<link href="https://hdl.handle.net/1721.1/151307" rel="alternate"/>
<author>
<name>Nakajima, Kosuke</name>
</author>
<id>https://hdl.handle.net/1721.1/151307</id>
<updated>2023-08-01T03:55:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluation of Introducing Technologies to Bridge Inspection and Maintenance in Japan
Nakajima, Kosuke
This study investigates the current problems in bridge inspection and maintenance in Japan and explores how technologies can help address them. Chapter 2 identifies the potential problems and needs of bridge owners through literature reviews and interviews with them. Chapter 3 broadly explains the technologies, such as data collection devices, and introduces the digital twin (DT) concept as a potential future solution to the identified problems. While DT might work for bridge inspection systems, the data analytics capability of DT, such as machine learning, cannot yet provide outcomes accurate enough to replace manual inspectors’ ability, due to the insufficient amount of training data. Also, quantitative data such as displacements and acceleration have not been collected, since the devices have not yet been installed on bridges. Therefore, Chapters 4 and 5 explore the potential combinations of data collection devices that provide the best value to bridge owners by performing tradespace analyses. Chapter 4 provides the data sources of the technologies used in the tradespace study. Chapter 5 presents the outcome of the tradespace analysis and its insights. The insights from the interviews and the tradespace analysis are summarized as follows:&#13;
&#13;
Interviews&#13;
•	Small municipalities have more issues than large ones, such as lack of budget, technical personnel, and skills and experience.&#13;
•	The identified needs are increased budget, improved efficiency, improved quality, and a safer work environment. &#13;
&#13;
Tradespace&#13;
•	The utility curve increases as more technologies are adopted, so adopting technologies could provide new value to bridge owners. However, costs increase more significantly than utilities do, as the slope of the utility curve is close to flat. The tradespace model considers only a few PMs to evaluate the potential benefits of technologies, so the increase in utilities is modest compared to the increase in costs.&#13;
•	The high costs often result from expensive fixed devices such as displacement meters, accelerometers, etc. &#13;
&#13;
Based on the above insights, the author suggests recommendations, including improvement of knowledge and experience in bridge owners, effective uses of technologies, and increased financial support from the government. The author argues that an in-house inspection or a hybrid form of an in-house and outsourced inspection might work well for small municipalities with limited budgets and resources.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uranium Enrichment Signatures of Fluorinated&#13;
Epoxy</title>
<link href="https://hdl.handle.net/1721.1/151306" rel="alternate"/>
<author>
<name>Reinfurt, Daniel Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/151306</id>
<updated>2023-08-01T03:22:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Uranium Enrichment Signatures of Fluorinated&#13;
Epoxy
Reinfurt, Daniel Robert
Accounting for the production of fissile material is an important component of enforcing the international non-proliferation regime. However, while there are well-defined forensic techniques for estimating and verifying the production of plutonium, there are currently no such techniques for the production of enriched uranium, despite the fact that many nations have used uranium enrichment to acquire weapons. Through the use of Fast Scanning Calorimetry, this thesis shows that alpha radiation from uranium can cause a detectable change in the glass transition temperature of UV-cured fluorinated epoxy (a material which can be used in uranium enrichment cascades) at doses equivalent to enriching enough uranium to make on the order of 1 significant quantity (IAEA standard). This change in the glass transition temperature is likely due to chain scission, which degrades the polymer chains and results in less energy being needed to allow movement in the molecular structure. The change in &#119879;&#119892; can then be related to uranium enrichment and production. This offers a potential way to fill the gap in nuclear forensics regarding uranium enrichment, allowing more comprehensive verification of fissile material production for nations subject to the non-proliferation treaty.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing the Future Architecture of Steelmaking Enterprise in Japan</title>
<link href="https://hdl.handle.net/1721.1/151305" rel="alternate"/>
<author>
<name>Kawasaki, Toru</name>
</author>
<id>https://hdl.handle.net/1721.1/151305</id>
<updated>2023-08-01T03:50:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analyzing the Future Architecture of Steelmaking Enterprise in Japan
Kawasaki, Toru
The steel industry in Japan has supported the development of the Japanese economy together with Japanese domestic consumers, especially automobile companies, and NSC has built a sophisticated business system architecture. However, the environment surrounding the company and the needs of its stakeholders are rapidly changing, and NSC is faced with the need to transform its architecture. This study utilizes the ARIES framework to examine the strategy that NSC should adopt to pursue profitability, development, and efficiency for sustainable growth from an organizational design perspective.&#13;
&#13;
The objective of this study is to investigate the steelmaking enterprise against landscape changes and stakeholder needs and to generate resilient alternative architectures that are compatible with the NSC context. Furthermore, alternative architectures are evaluated in the context of several extreme scenarios, such as the rapid development of decarbonization technologies. The results indicate that a data-driven alternative architecture that brings together a new digital technology-savvy workforce within the company and a new HR department to manage this group may be the most rational option for NSC. Finally, this thesis provides an implementation plan for NSC's architectural team. The methodology of this study can be applied to other overseas steel mills, such as AM/NS India and G/GJ Steel, where NSC is expanding its operations, and identifies approaches that enable the steel industry to succeed in sustainable development in times of rapid change, including the development of decarbonization and digital technologies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operational Analysis and Mission Engineering: A strategy and framework to analyze any industrial ecosystem.</title>
<link href="https://hdl.handle.net/1721.1/151304" rel="alternate"/>
<author>
<name>Day, Robert L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151304</id>
<updated>2023-08-01T04:21:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Operational Analysis and Mission Engineering: A strategy and framework to analyze any industrial ecosystem.
Day, Robert L.
The field of systems engineering builds on the principles of system architecture, systems engineering, and product management while attempting to balance sociotechnical and socioeconomic impacts. Today's industries face the ever-increasing business dynamics of changing technologies, competition, and regulations that affect their products, services, and processes. Yet, they all continue spending large sums of money on R&amp;D during and after product launch. These products and services must meet critical financial, sales, and customer targets. The situation has become dire within all industries as they attempt to find answers by applying different product delivery processes such as stage-gate, spiral, waterfall, Agile, and Scaled Agile.&#13;
&#13;
Our research into enterprise strategy and its development shows that enterprises are not approaching it from a systems-thinking perspective. This thesis suggests that operational analysis, mission engineering, mission architecture, technology roadmapping, portfolio management, product development, order fulfillment, and lifecycle management have, for the most part, been siloed, with focus placed only on the product development perspective.&#13;
&#13;
This thesis will explore whether applying system design and management principles and practices upfront in the strategy development process, to identify key opportunities within the industrial ecosystem in which the enterprise resides, can reduce the risk of the product delivery process failing to meet the enterprise's financial, sales, and customer targets.&#13;
&#13;
We will explore the potential to apply operational analysis and mission engineering within the context of the industrial ecosystem in which the enterprise resides, to identify opportunities and their subsequent missions and relationships. We will also explore how, through operational analysis and mission engineering, we can further understand the socioeconomic and sociotechnical ramifications, providing additional inputs when developing the enterprise strategy. &#13;
&#13;
Through this framework, these building blocks will be critical to an enterprise strategy that reduces the risk to the outcomes of any product delivery process.&#13;
&#13;
Through this understanding, we can clarify how the enterprise sits in the ecosystem and identify our relationships to ensure our strategy and vision meet or exceed our business and customer needs. Through this approach, we also believe that enterprise efficiency, effectiveness, and market penetration can enable sustainable growth while embracing technology and minimizing the socioeconomic impact on society.&#13;
&#13;
We have limited the scope of this thesis to enterprise strategy. Future research must apply the framework and structure proposed within the thesis; it can provide an avenue into a more comprehensive understanding of the enterprise's economic benefits and the socioeconomic and sociotechnical impacts, as the demands of the 21st century will challenge us as professionals.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conquering the challenge of reliability: text mining&#13;
to map trends in reliability engineering literature</title>
<link href="https://hdl.handle.net/1721.1/151303" rel="alternate"/>
<author>
<name>Brown, Charles K.</name>
</author>
<id>https://hdl.handle.net/1721.1/151303</id>
<updated>2023-08-01T03:42:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Conquering the challenge of reliability: text mining&#13;
to map trends in reliability engineering literature
Brown, Charles K.
Reliability engineering faces many of the same challenges in 2023 that it did at its inception in the 1950s. The fundamental issue remains uncertainty in system representation, specifically related to performance model structure and parametrization. Details of a design are unavailable early in the development process and therefore performance models must either account for the range of possibilities or be wrong. Increasing system complexity has compounded this uncertainty. In this work, we seek to understand how reliability engineering literature has changed over time with the assumption that the focus of literature shifts in part due to challenges in the field. Illuminating this change provides reliability practitioners guidance for what they can do in the face of growing complexity. We build this understanding by executing a systematic literature review of 30,543 reliability engineering papers. Topic modeling was performed on the abstracts of those papers to identify 279 topics. Hierarchical topic reduction resulted in the identification of 8 top-level method topics (prognostics, statistics, maintenance, quality control, management, physics of failure, modeling, and risk assessment) as well as 3 domain-specific topics (nuclear, infrastructure, and software). We found that topics more associated with later phases in the development process (such as prognostics, maintenance, and quality control) have increased in popularity over time relative to other topics. We propose that this is a response to the challenges posed by the previously-discussed model uncertainty and increasing complexity. Through zero-shot classification by a large language model, we also found that papers are including more practical examples or case studies and that those topics associated with later phases typically include more practical examples. 
Thus, while reliability remains fundamentally difficult to predict early in the development process, the field has shifted focus to later-stage and more applicable activities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Historic Steel Beam Reuse: A Case Study of a 100-year-old Warehouse</title>
<link href="https://hdl.handle.net/1721.1/151301" rel="alternate"/>
<author>
<name>Wenger, Karissa J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151301</id>
<updated>2023-08-01T03:01:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Historic Steel Beam Reuse: A Case Study of a 100-year-old Warehouse
Wenger, Karissa J.
The production of steel accounts for approximately 9% of global CO₂ emissions today. The reuse of existing steel sections could drastically decrease this percentage. However, a lack of information about the material properties of existing steel sections can prove a barrier to steel reuse, especially for historic sections. This thesis explores coupon testing and full-scale beam testing as a way to obtain information about the ductility and strength of historic steel beams produced in the United States between 1894 and 1911. Four historic steel beams were obtained from the MIT Metropolitan Warehouse, located in Cambridge, Massachusetts. During full-scale beam testing, the steel beams were simply supported and a distributed load was applied at their center. The displacement of the beam’s bottom flange was measured throughout the test and the yield strength, ductility, and failure mode of the beams were determined. During coupon testing, three coupons were taken from the web of each beam and were subjected to tensile testing. The yield strength, ultimate strength, and ductility of the beams were determined. Finally, the beams’ experimentally determined allowable stresses were calculated and compared with the allowable stress values specified in historic building codes.&#13;
&#13;
All four beams have values for elastic modulus, yield strength, and ultimate strength that are comparable to those of A36 steel produced today. All the beams have an elastic modulus of about 29,000 ksi, indicating the beams are adequately ductile. The beams’ yield strengths vary between 34.8 ksi and 41.5 ksi, while the beams’ ultimate strengths vary between 55.5 ksi and 62.7 ksi. Additionally, the four beams have experimentally determined allowable stresses that are 35% to 55% higher than the allowable stresses specified in the historic building code. Using these experimentally determined allowable stresses instead of assuming the allowable stresses specified in the historic code could significantly decrease the amount of strengthening necessary during renovations. Finally, the load capacity of the floor system in the Metropolitan Warehouse was determined. It is estimated that the floor system can carry between 800 psf and 1,530 psf of distributed live load, in addition to its self-weight. The low estimate assumes no composite action between the steel beams and concrete slab, while the high estimate assumes complete composite action. Even if composite action is neglected, the floor system can carry eight times the standard distributed live load value of 100 psf used for design today.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mission and System Design for In-Situ Resource Utilization in the Outer Solar System using Nuclear Propulsion Technologies</title>
<link href="https://hdl.handle.net/1721.1/151300" rel="alternate"/>
<author>
<name>Julia Witham</name>
</author>
<id>https://hdl.handle.net/1721.1/151300</id>
<updated>2023-08-01T04:02:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mission and System Design for In-Situ Resource Utilization in the Outer Solar System using Nuclear Propulsion Technologies
Julia Witham
The parameter space of a theoretical in-situ propellant acquisition system is examined for a sample return mission from trans-Neptunian destinations (&gt;40 AU) using two theoretical spacecraft designs, which compare nuclear propulsion systems for impulsive and continuous thrust. Each available propellant substance is compared in each propulsion system. Propellant acquisition systems are shown to be less advantageous overall for continuous thrust missions, where the system mass must remain well below the roughly 6 tons of additional propellant otherwise necessary to effect the return journey in 5 years. &#13;
&#13;
While the spacecraft using impulsive thrust could return in 50 years with roughly equal propellant mass in a Hohmann maneuver, it required more than 70 tons of additional hydrogen propellant at launch to be capable of a comparable transit time to the electric spacecraft. For impulsive spacecraft, the optimal propellant mass for a return transit varies based on system and propellant characteristics. These optimal values are found for each scenario and each system. Even relatively inefficient (&lt; 20 mL/kWh) acquisition systems were found to be constrained by propellant tank size. The low density of propellants such as liquid hydrogen had a pronounced impact on collection strategy and ultimate production capacity, such that propellant tank capacity became a valuable resource for a spacecraft.&#13;
&#13;
Optimal parameters and collection goals for the propellant acquisition system are described for each potential destination, such that the mission time is minimized while making full use of the available propellant capacity. Among the available substances in the Kuiper belt, nitrogen propellants are the worst options in terms of mission time, propellant mass, and collection time requirements. Using raw methane as propellant instead of a source of hydrogen can offer significantly higher ∆V and lower total processing requirements in some scenarios. Hydrogen extraction from methane via pyrolysis presented the best overall performance for both propulsion systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Humanizing Urban Waters : Civilian led Water Corps to Strengthen Decentralized Water Systems in Western India</title>
<link href="https://hdl.handle.net/1721.1/151292" rel="alternate"/>
<author>
<name>Jagtap, Pramada</name>
</author>
<id>https://hdl.handle.net/1721.1/151292</id>
<updated>2023-08-01T03:14:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Humanizing Urban Waters : Civilian led Water Corps to Strengthen Decentralized Water Systems in Western India
Jagtap, Pramada
Over the past century, we have witnessed global water-based displacement owing to the climate crisis, and displacement caused by large scale water infrastructure such as dams, long-distance pipelines, promenades and river fronts. Urban waters have infamously been presented as disruptors within community and ecology, often perceived as a violent threat or unpredictable “hazard” to urbanism. This thesis uses policy, design, public dialogue and sensory engagement to redefine our experience of water as a ubiquitous fluid, intrinsic to settlement and the very ground on which urbanism dwells.&#13;
&#13;
Pune city in Western India is fertile ground for this exploration: beneath its dense urban settlement is a ground of flowing waters, both surface and subsurface, and a culture rich in indigenous techniques, historical waterfronts, stepped wells, aqueducts and water collection tanks. The Peshwas of the Maratha empire dominated a large portion of the Indian subcontinent from 1674 to 1818. During this time, they constructed numerous small-scale water systems, such as canals, step wells and temple tanks, in Western India, particularly in Pune, Maharashtra. While water infrastructure built in this era has been widely subjected to scrutiny for gender- and caste-based discrimination, one cannot ignore its close attention to community and geomorphology. India’s changing climate continues to have a significant impact on its water resources, which include rainwater, groundwater, and surface waters. This thesis proposes the revival and re-adoption of existing resources through a civilian-led water corps to design for long-term resilience. I use a multi-pronged approach that includes a) grassroots organizing to create water stewardship, b) conservation of traditional techniques and structures, c) multimedia representation of water, and d) adoption of new experiments in nature-based technologies.&#13;
&#13;
Beyond its portrayal of urban landscapes, the medium of film has not been drawn upon in architecture as a tool for advocacy and social change. This thesis adds to the field by using filmmaking as a method of inquiry, tapping into its potential to represent a diverse set of voices. Based on evidence collected through this community-sourced videography, the thesis proposes the creation of a water corps, a catalyzing force at the intersection of community and water. The corps will train and employ young adults to develop learning tools and lead the movement toward long-term resilience building, with communities and ecology at the forefront of urbanism. The creation of a civilian water corps in Pune can accelerate the implementation of nature-based solutions that enable equitable access to and supply of water, while reviving traditional water systems. By taking proactive steps now, India can work towards a more sustainable and resilient water future by supporting the socio-ecological design of urban waters.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assistive Personal Robots for Older Adults: Bridging the Divide&#13;
Between Robotic Technology Development and End-Users in Practical Applications</title>
<link href="https://hdl.handle.net/1721.1/151290" rel="alternate"/>
<author>
<name>Kim, Eunah</name>
</author>
<id>https://hdl.handle.net/1721.1/151290</id>
<updated>2023-08-01T04:10:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Assistive Personal Robots for Older Adults: Bridging the Divide&#13;
Between Robotic Technology Development and End-Users in Practical Applications
Kim, Eunah
The global demographic shift towards an aging population has led to a growing demand for healthcare options supporting older adults' independence and well-being. Robotics has emerged as a promising solution to address this challenge, serving as supportive complements or substitutes for human caregivers. However, designing robots that are suitable for older adults is a complex challenge, and there is a need to improve our understanding of users' expectations in this domain. Moreover, the gap between laboratory research on robot technology and its real-world applications needs to be addressed, as the technology needs to meet the needs and expectations of users in practical settings.&#13;
&#13;
The primary objective of this thesis is to bridge the divide between robotic technology development and end-users in practical applications. The research will explore the design, development, and implementation of personal robots, with an emphasis on understanding their unique characteristics, capabilities, and potential applications. The study aims to define how assistive personal robots for older adults should look, behave, and interact from a user's point of view. Additionally, the study aims to understand how older adults perceive these robots and what level and type of support they are comfortable with.&#13;
&#13;
The results of this study will contribute to identifying key dimensions of the overall user experience and systematic solutions that can serve as guidelines for developers of future robots. This study will not only contribute to academic research in the field but also have practical implications for the design and development of robots that contribute to improving the quality of life of older adults.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Enabled Inorganic Synthesis Planning and Materials Design</title>
<link href="https://hdl.handle.net/1721.1/151288" rel="alternate"/>
<author>
<name>Karpovich, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/151288</id>
<updated>2023-08-01T03:59:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Machine Learning Enabled Inorganic Synthesis Planning and Materials Design
Karpovich, Christopher
The discovery and design of materials is essential for addressing important societal problems in areas such as energy, biomedicine, and computing technology. Data-driven synthesis planning with machine learning is a key step in the design of novel inorganic compounds with desirable properties. Inorganic materials synthesis is often guided by heuristics and chemists' prior knowledge and experience, built upon experimental trial-and-error that can be both time- and resource-consuming. Recent developments in natural language processing (NLP) have enabled large-scale text mining of scientific literature, providing open-source databases of synthesis information for realized compounds, material precursors, and reaction conditions (temperatures, times). In this thesis, we employ supervised classification machine learning (ML) models to distinguish between solid-state, sol-gel, and solution (hydrothermal, precipitation) synthesis routes based on a specified reaction target material and/or precursor materials. We demonstrate regression ML models which are able to predict suitable temperatures and times for the crucial inorganic synthesis steps of calcination and sintering given the reaction target and precursor materials. We contrast this regression-based condition modeling with a conditional variational autoencoder (CVAE) neural network which can generate appropriate distributions for the synthesis conditions of interest. We evaluate model interpretability using the SHAP (SHapley Additive exPlanations) approach to gain insight into factors influencing the suitability of synthesis routes and reaction conditions. We find that the aforementioned models are capable of learning subtle differences in target material composition, precursor compound identities, and choice of synthesis route that are present in the inorganic synthesis space. 
Moreover, they generalize well to unseen chemical entities, outperform common heuristics in the field, and show promise for predicting appropriate reaction routes and conditions for previously unsynthesized compounds of interest.&#13;
&#13;
Another major obstacle to the realization of novel inorganic materials with desirable properties is efficient optimization over both the materials property and synthesis spaces. We propose two novel reinforcement learning (RL) approaches to inverse inorganic materials design which can efficiently identify promising compounds with specified properties and synthesizability constraints. Our models successfully learn chemical guidelines such as thermodynamic stability, charge neutrality, and electronegativity neutrality while maintaining high chemical diversity and uniqueness. We demonstrate a multi-objective reinforcement learning approach which can generate novel compounds with both desirable materials properties (formation energy, bulk modulus, shear modulus) and synthesis objectives (low sintering temperatures). Using this approach, the models can predict promising compounds of interest, while suggesting an optimized chemical design space for inorganic materials discovery.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experimental Design to Assess Team Performance Through Shared Mental Models</title>
<link href="https://hdl.handle.net/1721.1/151282" rel="alternate"/>
<author>
<name>Hallock, Neil K.</name>
</author>
<id>https://hdl.handle.net/1721.1/151282</id>
<updated>2023-08-01T04:12:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Experimental Design to Assess Team Performance Through Shared Mental Models
Hallock, Neil K.
Nearly every domain in the world is moving to a team-based environment. Regardless of application or desired outcomes, decisions must be made in groups. Individual decision-making presents a challenge in every domain, and this challenge grows exponentially more difficult when teams of individuals are forced to build a consensus.&#13;
&#13;
Joint decision-making is a complex system-of-systems operated by teams of teams. This thesis focuses on the challenges of establishing shared mental models within teams, their impact on team performance, and possible ways to accelerate the formation of these shared mental models, to achieve the best possible outcomes of the system.&#13;
&#13;
In order to assess the quality of shared mental models, a framework for an experiment is laid out, in which the quality of a team’s shared mental model is correlated to the team effectiveness during execution of high-stress, fast-paced tasks. Limitations, future research, and practical steps for the implementation of an experiment are outlined.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Study of Environmental Testing in Satellite Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/151279" rel="alternate"/>
<author>
<name>Johnson, Paul Mitchell</name>
</author>
<id>https://hdl.handle.net/1721.1/151279</id>
<updated>2023-08-01T03:26:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Parametric Study of Environmental Testing in Satellite Manufacturing
Johnson, Paul Mitchell
Low Earth Orbit satellite constellations have begun to provide consumers with internet access in remote and under-served areas. Firms have begun to manufacture satellites at a higher rate than ever seen before in the aerospace industry. These satellites require environmental qualification testing to meet the standards required to launch and operate in space. For a new product, it is unclear how much testing capability will be necessary. This paper provides an in-depth look at efforts to estimate the correct capital expenditure for environmental testing in satellite manufacturing. It uses uncertainty analysis and parametric studies to compare options for precision manufacturing in an unknown field of production. The study finds that testing machine requirements vary across different periods of production, challenging the producer to purchase more capacity than will ultimately be necessary. These results highlight the need to develop failure rate predictions at the lowest component level to accurately assess testing requirements for the overall system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Restricted Stock Grant (RSG) Issuance on Financial Performance of US Software and IT Companies</title>
<link href="https://hdl.handle.net/1721.1/151278" rel="alternate"/>
<author>
<name>Jia, Hongxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/151278</id>
<updated>2023-08-01T03:59:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Impact of Restricted Stock Grant (RSG) Issuance on Financial Performance of US Software and IT Companies
Jia, Hongxuan
This study examines the relationship between the issuance of Restricted Stock Grants (RSGs) and the financial performance of Software and Information Technology (IT) companies in the United States (US). Data pertaining to RSGs and financial performance are obtained from the companies' 10-K reports and the Refinitiv database, respectively. The sample comprises 30 publicly traded companies listed on US exchanges, with a study period spanning 2013 to 2022. To estimate the effect of RSG issuance on future corporate financial performance, multiple linear regression models are utilized. Empirical results show a significant positive relationship between the value of RSGs issued (RSGV) and 2-year forward return on assets (ROA). No significant relationships were discovered between RSGV and return on equity (ROE) or between RSGV and Tobin’s Q (TQ). The results emphasize the importance for corporate managers of tailoring their equity compensation schemes to better meet the needs of their employees. From an investor's perspective, the implications of this study signify the potential for integrating a novel metric into the evaluation and appraisal of a company's financial potential when making investment decisions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ThoughtLine Web Server for Mental Health Wellness and Psychotherapy</title>
<link href="https://hdl.handle.net/1721.1/151277" rel="alternate"/>
<author>
<name>Cantow, Michael R.</name>
</author>
<id>https://hdl.handle.net/1721.1/151277</id>
<updated>2023-08-01T03:17:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">ThoughtLine Web Server for Mental Health Wellness and Psychotherapy
Cantow, Michael R.
Mental health is a significant and growing public health concern, with depression representing the leading cause of disability worldwide. While ongoing research actively explores the potential of mobile technology and artificial intelligence to support mental health, limited attention has been given to the field of mental health wellness and prevention. Drawing inspiration from the preventive benefits of exercise in physical health, this project aims to develop a platform for mental health wellness and prevention in the form of an online web server for audiotherapy. The server architecture allows for the utilization of various audio files, including mindfulness meditation, storytelling, nature sounds, and neuromodulation. In this thesis, I describe the design and architecture of the server platform, emphasizing academic digital mental health research and future project reusability. In addition to providing access to audio files, the web application incorporates standard psychological clinical assessments for health screening and a chat bubble feature that employs a pre-existing NLP algorithm to identify signs of depression. By collecting data on audio file access history and tracking psychological assessments and NLP model outcomes over time, clinicians can use this platform to engage in formal quantitative research within this emerging research domain. Our research platform, ThoughtLine.org, includes a user-friendly smartphone and web interface powered by our containerized server infrastructure, DockMed. The initial prototype described here has been prepared for a future clinical study with MIT students; however, the platform can be used for a variety of clinical studies with support for individual account registration and user groups.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Concept Representations and their Transformations in Transformer Models</title>
<link href="https://hdl.handle.net/1721.1/151276" rel="alternate"/>
<author>
<name>Kearney, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/151276</id>
<updated>2023-08-01T03:31:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Concept Representations and their Transformations in Transformer Models
Kearney, Matthew
As transformer language models continue to be more widely used in a variety of applications, developing methods to understand their internal reasoning processes becomes more critical. One category of such methods, called neuron labeling, identifies salient directions in the model’s internal representation space and asks what features of the input these directions represent and how they evolve. While research using these methods has focused on finding and automating the labeling process, a prerequisite is first identifying which directions are the salient ones in the model’s computation. Theoretical arguments exist that the activations of the first layer of the multi-layer perceptrons (MLPs) in transformers are the salient basis for representing the information the model uses for computation. However, there are currently no empirical studies comparing these internal representations to others that have been used in prior work. This research addresses this gap by comparing several directions in the internal representation space of transformers in terms of how well they represent basic linguistic concepts we expect the model to be using in computation. We find that the empirical evidence does support the theoretical arguments and that the first layer of the MLP modules is the most representative basis for these concepts. We further extend this exploration by examining the connections between MLP neurons and developing a method of determining which neurons have the potential to communicate information with one another. In the process we discover specialized neurons for erasing and preserving information in the model’s hidden state and characterize this phenomenon.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Augmentation and Conformal Prediction</title>
<link href="https://hdl.handle.net/1721.1/151275" rel="alternate"/>
<author>
<name>Lu, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/151275</id>
<updated>2023-08-01T04:22:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Data Augmentation and Conformal Prediction
Lu, Helen
Conformal prediction is a popular line of research in uncertainty quantification. Conformal predictors output sets of predictions accompanied by a guarantee that the set contains the true label. Conformal prediction is particularly promising because it makes no distributional assumptions and requires only a black-box classifier to produce sets with this type of guarantee. Unfortunately, existing conformal predictors can produce uninformatively large prediction sets for certain examples, which limits their application in real-world contexts. In this thesis, we explore the impact of data augmentation, a popular computer vision technique, on the performance of conformal predictors. In particular, we present multiple ways of combining data augmentation with conformal prediction by introducing five methods of test-time-augmentation-enhanced conformal prediction (TTA-CP). We find that certain TTA-CP methods can improve upon the size and stability of prediction sets created by traditional conformal prediction. Using ImageNet and Fitzpatrick 17k, two datasets differing in size, complexity, and balance, we reveal dataset-dependent decisions that are key to improving performance in conformal prediction.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving the Performance of Parallel Loops in OpenCilk</title>
<link href="https://hdl.handle.net/1721.1/151273" rel="alternate"/>
<author>
<name>Govedic, Luka</name>
</author>
<id>https://hdl.handle.net/1721.1/151273</id>
<updated>2023-08-01T03:08:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Improving the Performance of Parallel Loops in OpenCilk
Govedic, Luka
For good performance, parallel loop scheduling must achieve low scheduling overheads and multidimensional locality in nested loops. This thesis explores both challenges and contributes an extension to randomized work-stealing for first-class loop support that reduces scheduling overheads.&#13;
&#13;
Randomized work-stealing schedulers traditionally execute parallel-for loops using parallel divide-and-conquer recursion, which is theoretically efficient and scalable but can incur substantial overheads in practice. This thesis extends randomized work-stealing with a custom work-stealing protocol called on-the-fly loop splitting. I introduce loop frames to make work stealing on parallel-for loops more efficient and flexible.&#13;
&#13;
Loop frames make two key changes to work stealing for parallel-for loops. First, loop frames extend work stealing by directly encoding information about intervals of loop iterations in the runtime. Loop frames add first-class support to work stealing for parallel-for loops that composes with classical randomized work stealing. Second, loop frames allow intervals of loop iterations to be split on-the-fly, such that worker threads attempt to steal half of the unexecuted loop iterations rather than a deterministically constructed partition of loop iterations. On-the-fly loop splitting allows for more flexible dynamic load balancing of loop iterations while keeping the work overheads low and maintaining the theoretical efficiency of divide-and-conquer.&#13;
&#13;
I evaluate loop frames in practice by implementing loop frames in the OpenCilk runtime system. In particular, loop frames augment the THE protocol from Cilk to coordinate updates to loop frames. I observe that loop frames and on-the-fly loop splitting incur substantially less overhead than the divide-and-conquer algorithm without sacrificing parallel scalability.&#13;
&#13;
Finally, I study the impacts of increased locality in more than one dimension in nested loop applications. Results show that both cache-aware and cache-oblivious reordering of nested loop iterations can yield performance improvements of up to 1.7×.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simplicity and Probability Weighting in Choice Under Risk</title>
<link href="https://hdl.handle.net/1721.1/151272" rel="alternate"/>
<author>
<name>Puri, Indira</name>
</author>
<id>https://hdl.handle.net/1721.1/151272</id>
<updated>2023-08-01T04:08:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Simplicity and Probability Weighting in Choice Under Risk
Puri, Indira
This is a reprint of work published in the American Economic Review: Papers and Proceedings during the degree.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Multiagent Trajectory Planning in Real-World Environments</title>
<link href="https://hdl.handle.net/1721.1/151269" rel="alternate"/>
<author>
<name>Kondo, Kota</name>
</author>
<id>https://hdl.handle.net/1721.1/151269</id>
<updated>2023-08-01T03:35:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Decentralized Multiagent Trajectory Planning in Real-World Environments
Kondo, Kota
In the rapidly evolving domain of unmanned aerial vehicle (UAV) applications, multiagent trajectory planning plays an indispensable role. The applications encompass search and rescue missions, surveillance, package delivery, and more. Each of these scenarios necessitates intricate coordination amongst multiple UAVs, driving the need for sophisticated multiagent trajectory planning.&#13;
&#13;
Although many centralized trajectory planners exist, they hinge on a single entity for trajectory planning, making them less scalable and challenging to deploy in real-world environments. To address this hurdle, the focus has shifted towards decentralized multiagent trajectory planners, where each agent independently plans its trajectory. In this thesis, we introduce two novel approaches, Robust MADER (RMADER) and PRIMER, aimed at further advancing the field of decentralized multiagent trajectory planning for UAVs. One of the primary hurdles in achieving a multiagent trajectory planner lies in the development of a system that is both scalable and robust, and can be effectively deployed in real-world environments. These environments present numerous challenges, including communication delays and dynamically moving obstacles. To counter these hurdles, we propose RMADER, a decentralized, asynchronous multiagent trajectory planner. RMADER is designed to be robust to communication delays by introducing (1) a delay check step and (2) a two-step trajectory-sharing scheme. RMADER guarantees safety by always keeping a collision-free trajectory and performing a delay check step, even under communication delay. To evaluate RMADER, we performed extensive benchmark studies against state-of-the-art trajectory planners and flight experiments using a decentralized communication architecture called a mesh network with multiple UAVs in dynamic environments. The results demonstrate RMADER’s robustness and capability to carry out collision avoidance in dynamic environments, outperforming existing state-of-the-art methods with a 100% collision-free success rate.&#13;
&#13;
While RMADER achieves highly scalable and robust multiagent trajectory planning, it requires agents to communicate to share their future trajectories. However, due to localization errors/uncertainties, trajectory deconfliction can fail even if trajectories are perfectly shared between agents. To address this issue, we first present PARM and PARM*, perception-aware, decentralized, asynchronous multiagent trajectory planners that enable a team of agents to navigate uncertain environments while deconflicting trajectories and avoiding obstacles using perception information. PARM* differs from PARM in that it is less conservative, using more variables to find closer-to-optimal solutions. Though these methods achieve state-of-the-art performance, they suffer from high computational costs, as they need to solve large optimization problems onboard, making it difficult for agents to replan at high rates. To overcome this challenge, we present PRIMER, a learning-based planner trained with imitation learning (IL) using PARM* as the expert demonstrator. PRIMER leverages the low computational requirements of neural networks at deployment and achieves much faster computation than optimization-based approaches.&#13;
&#13;
In summary, this thesis puts forth RMADER and PRIMER as innovative solutions in the realm of decentralized multiagent trajectory planning, enhancing scalability, robustness, and deployability in real-world UAV applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Kuiper Connectivity with Edge Computing to New Industry Verticals</title>
<link href="https://hdl.handle.net/1721.1/151268" rel="alternate"/>
<author>
<name>Koller, Scarlett</name>
</author>
<id>https://hdl.handle.net/1721.1/151268</id>
<updated>2023-08-01T03:02:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Applying Kuiper Connectivity with Edge Computing to New Industry Verticals
Koller, Scarlett
Large “megaconstellations” comprising hundreds or even thousands of satellites in Low Earth Orbit (LEO) to provide global connectivity coverage have been a goal of satellite operators for decades. Within the last 5 years, these projects have become technically feasible due to advances in communications and launch technology, finally making them economically viable compared to previous satellite communications systems. These types of projects, particularly Starlink, Kuiper, OneWeb and Telesat, have been hailed as ushering in an era of broadly accessible broadband internet, unrestricted by the availability of terrestrial fiber. The COVID-19 pandemic heightened awareness of the need for broadband internet connectivity, especially as schools and universities stopped in-person learning and students in underserved areas struggled to access remote learning. Starlink’s provision of ground terminals to regions of Ukraine suffering communications problems due to the Russian invasion emphasized the significance of connectivity in providing help and support to regions in need. Even SES’ new effort at a satellite communications constellation is explicitly named “O3b” for “other three billion”, citing the global “digital divide”.&#13;
However, creating a satellite communications constellation merely overcomes the need for terrestrial fiber infrastructure to connect homes, businesses &amp; communities that currently lie outside its reach. Developing the constellation alone cannot overcome other issues barring individuals from accessing the internet, such as affordability, low literacy rates, access to internet devices, access to reliable grid power, and a clear use case to encourage adoption. Non-consumers of internet services will not necessarily become consumers merely because a satellite constellation now covers their region.&#13;
This thesis therefore describes the process of developing a project that specifically addresses segments of non-consumption of broadband internet with a goal of generating internet use uptake in those segments. The selected segment was underserved rural Indian populations, and the project was focused on developing a concept that would generate a “pull” for internet services. This project was undertaken as part of Amazon’s Project Kuiper Business Operations during a Leaders for Global Operations internship. In service of Kuiper’s entry into India, the project focuses on using Kuiper’s ultra-compact, low-cost, low-power customer terminal option to deliver 100Mbps internet; the ground receive power will limit the available data rate. If successfully taken up by the initial target user of rural Indian Self-Help Groups, of which there are 7.8 million nationwide, the project could both help improve internet connectivity in rural areas and engender goodwill from Indian regulatory bodies.&#13;
Please note that all opinions are the author’s own and not those of Amazon, Inc.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Apartheid in Schaarbeek: Belgian Migrant Labor and Human Rights in Europe’s Carbon Transition, 1945-1973</title>
<link href="https://hdl.handle.net/1721.1/151267" rel="alternate"/>
<author>
<name>Khan, Rustam</name>
</author>
<id>https://hdl.handle.net/1721.1/151267</id>
<updated>2023-08-01T03:41:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Apartheid in Schaarbeek: Belgian Migrant Labor and Human Rights in Europe’s Carbon Transition, 1945-1973
Khan, Rustam
Scholars have regularly used histories of postwar migrant labor in Europe to narrate the fraught relationship between European nation-states and their former colonies, questions of citizenship and belonging, and ideas of multiculturalism and integration. The economic rise of the future European Economic Community and later the European Union went hand in hand with the rise of guest workers recruited from Europe’s impoverished southern regions, North Africa, and Turkey, among others. In Belgium, these people constituted an important labor force in old and newly emerging industries such as coal mining, car manufacturing plants, petrochemicals, and domestic work. However, many of these “temporary workers” were also denied access to basic living conditions and political participation, thus giving rise to the conventional idea of “permanent temporality” within the scholarly fields of refugee studies and postwar Europe. These debates, often built on research in state archives, mark how guest workers became a body politic without citizenship – a marker of political agency – situated outside the imagined borders of a nation – in this case Belgium.&#13;
&#13;
This paper problematizes this approach by focusing on how migrant laborers in Belgium and its neighboring countries to an extent publicly claimed and contested an emergent discourse of human rights during the 1970s. They did so by setting up grassroots community initiatives, organizing along with traditional labor unions, and allying with new left and student movements. In their work, they were deeply cognisant of their colonial past and present, something which often fell on deaf ears among unfriendly state policies, violent police institutions, anti-immigrant movements, and even left allies. Specifically, I examine how a language of universal human rights found its way among migrant activists and how they altered its meaning by bridging this discourse to histories of colonial oppression and the consequent necessity of reparations and inclusivity. To support this argument, I use personal testimonies and biographies, independent press from activist communities, and labor union archives in Belgium. I thus argue that migrant workers and refugees were far from passive subjects “stuck in permanent temporality,” and resorted to mobilization and political contestation early on.&#13;
Belgium offers a resourceful intellectual and empirical terrain to think through the afterlives of European empires, for several reasons. Throughout its history it has been an amorphous borderland zone between Europe’s major powers, before and after 1945. It became a major developer of infrastructure for the coal-to-oil transition on the continent – which also necessitated the mass recruitment of foreign workers. Finally, it offers a peculiar case study that can illuminate how (post)colonial and migrant identities were shaped in the context of mass migration from North Africa and the Middle East. This paper contributes to the ongoing discussion of how migrant workers and refugees construct counter-pasts. These communities created a different past and a hopeful (even if yet unrealized) future where migrants and refugees were not simply victims and lost fugitives.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frientelligent: Autonomous multi-agent collaboration, competition, and interaction curriculum for young children</title>
<link href="https://hdl.handle.net/1721.1/151265" rel="alternate"/>
<author>
<name>Alcantara, Raul</name>
</author>
<id>https://hdl.handle.net/1721.1/151265</id>
<updated>2023-08-01T03:06:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Frientelligent: Autonomous multi-agent collaboration, competition, and interaction curriculum for young children
Alcantara, Raul
AI has become a critical part of our day-to-day activities, often operating discreetly as a core component within our everyday devices. Despite its hidden nature, it is important to teach and learn about AI, as doing so provides transparency about how revolutionary it is, what its limitations are, and how one can leverage its potential responsibly and ethically.&#13;
&#13;
Moreover, introducing AI to students at a young age not only helps them have more understanding and appreciation of this technology, but it gives them a chance to contribute to the community later on. Furthermore, early education in AI can help students develop critical thinking skills, creative problem-solving abilities, and a heightened awareness of the ethical implications surrounding AI.&#13;
&#13;
Unfortunately, there is still a gap in the literature for methods to teach AI to young learners, even when considering the traditional lecture-based style of teaching. To fill this gap, this research creates a novel AI curriculum and pedagogy that delivers AI concepts related to multi-agent interaction to students aged 9-14. We evaluate the effectiveness of our curriculum in teaching AI concepts and in keeping students engaged throughout.&#13;
&#13;
In the curriculum, where multiple agents collaborate or compete, we teach the concepts of path planning and policy making. We do this by having students use a web-based interface where they can control the different policies a robot can follow and see how this makes a difference in its behavior. Students were given the opportunity to try two different versions of the same game: a virtual version, where all the interaction happens on a computer; and a physical version, in which they have to rearrange physical bots, "obstacles," and "rewards" (made of Legos) to build their own playfield.&#13;
&#13;
To evaluate the effectiveness of our curriculum, we gave students pre- and post-questionnaires to see how their knowledge of different AI concepts had changed. To evaluate how engaging students found the interface, we took observation notes, conducted a post-interview where we asked them about their experience, and recorded their interactions to look for signs of excitement or boredom.&#13;
&#13;
For the virtual mode, we found that the majority of students enjoyed the freedom the UI gave them to construct and manipulate their desired elements while witnessing the real-time replication of their actions on their friend’s screen. However, a few of them were overwhelmed by the multitude of available options. Likewise, for the physical mode, students really enjoyed being able to physically interact with different objects, see their changes detected in the interface, and observe the corresponding movements of the robots following the policies they had selected. Similarly, we noticed that overall students’ knowledge and confidence about the AI concepts increased after performing our activity.&#13;
&#13;
We conclude that using a collaborative web-based interface can be a useful way to teach AI concepts and that even though using physical objects for learning makes the experience more engaging, an all-virtual approach gives students more freedom for quick trial-and-error, which increases their learning as well.&#13;
&#13;
There are several ways in which this work can be extended. For instance, introducing customization options for each bot, such as allowing students to give them unique names, could greatly enhance student engagement. Additionally, expanding the library of themes and assets available would provide more opportunities for children to build according to their preferences and desires. By addressing these aspects, we can further enrich the learning experience and make a bigger impact on students’ learning and perception of AI.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feeling Images of the Sun on Earth</title>
<link href="https://hdl.handle.net/1721.1/151263" rel="alternate"/>
<author>
<name>Abou Ras, Ous</name>
</author>
<id>https://hdl.handle.net/1721.1/151263</id>
<updated>2023-08-01T03:28:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Feeling Images of the Sun on Earth
Abou Ras, Ous
We are solar societies with diverse relationships and practices that revolve around the Sun. But as mechanical systems became reliable, we retreated to thermally controlled environments away from the Sun. Energy became a utility to feed these systems. As the time spent indoors increased, energy demands rose alongside its greenhouse emissions; and the connection to our Star was reduced to a utility with a paradoxical duality: both as a valuable source of energy to be maximized, and a nuisance in the summer to be minimized. Dealing with this conundrum, architecture has divided sunlight into two experiential components. The aesthetic qualities of light are reserved for its visual phenomenon, while its thermal characteristic is either portrayed as a renewable energy source to be utilized outside of the design domain, or as a nuisance to be shaded from.&#13;
&#13;
In this thesis, I explore sunlight as a carrier of energy – where energy is seen as mass, and our visual and thermal experiences are determined by the intensity and contrast of the mass of the Sun falling on Earth. Building on landscape practices that created diverse microclimates with sunlight, solar-capturing techniques are analyzed and reimagined as parts of analogue machines that translate the homogeneous array of sunlight falling on an outdoor public site into a landscape of concentrated energies. The proposal is a temporal field of hot – and perhaps even burning – surfaces that provide warm moments, acting as urban hearths for people to collimate around in an otherwise cold, empty park in Cambridge, Massachusetts.&#13;
&#13;
Leveraging recent advancements in computer graphics, a ray-tracing tool is developed to estimate solar collector energy output and visualize the light concentration of different geometries and materials across varying solar positions. By repurposing solar technologies to heat small volumes for short periods of time, this thesis reimagines how we might view the Sun’s energy – from utility to a metaphysical cosmic mass – creating images that can be felt even on a cold, cloudy day.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Residential Real Estate Energy-Rating Systems in Germany, and their Applicability to the United States</title>
<link href="https://hdl.handle.net/1721.1/151262" rel="alternate"/>
<author>
<name>Naerger, Felix</name>
</author>
<id>https://hdl.handle.net/1721.1/151262</id>
<updated>2023-08-01T03:50:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluation of Residential Real Estate Energy-Rating Systems in Germany, and their Applicability to the United States
Naerger, Felix
Energy efficiency has become an increasingly important topic as economies globally race to combat climate change. Real estate is a major driver of global emissions, but it is also a key area of potential emissions reductions.&#13;
&#13;
In Germany, residential real estate has been identified as a major potential source of emissions savings, given the scale of the sector. As such, energy-rating systems have been implemented, providing transparency for property owners and renters. Motivated by cost savings, these systems address energy production, the energy mix used, and heating efficiency.&#13;
&#13;
The United States differs too greatly from Germany to allow for blind copying of the system in place there. Its geographic size, climatic range, and population densities require a system of its own. Nevertheless, the lessons learnt from the German systems can help inspire initial steps towards effective American systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying Bottlenecks through Process Consistency in High-Capacity Automated Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/151259" rel="alternate"/>
<author>
<name>Lux, Kyle J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151259</id>
<updated>2023-08-01T03:02:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Identifying Bottlenecks through Process Consistency in High-Capacity Automated Manufacturing
Lux, Kyle J.
This thesis explores the identification of bottlenecks in automated assembly lines through process consistency. In general, large automated assembly lines have processes grouped together that have a large effect on one another. They can only operate as fast as the slowest process (the bottleneck) in their group. These groups are established by placing buffers between several processes to decouple the slowest processes in different groups from one another and to attempt to level production in the manufacturing plant. When processes in the same group are compared to one another, even though every one of them operates at approximately the same speed, the process that operates most consistently at that speed is often the bottleneck process.&#13;
&#13;
Consistency-based bottleneck identification was applied at Nissan’s manufacturing plant in Canton, MS during the development of this thesis. Using the techniques described in this thesis, bottleneck identification was transformed from a multi-day process into a five-minute procedure.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing and testing a portable device for tracking small deviations in the hydration levels of a human body</title>
<link href="https://hdl.handle.net/1721.1/151258" rel="alternate"/>
<author>
<name>Bedi, Saloni</name>
</author>
<id>https://hdl.handle.net/1721.1/151258</id>
<updated>2023-08-01T04:18:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Developing and testing a portable device for tracking small deviations in the hydration levels of a human body
Bedi, Saloni
Maintaining proper hydration is widely recognized as essential for overall health, yet dehydration remains a prevalent issue, particularly among vulnerable populations such as the very young and elderly, and can contribute to increased morbidity and mortality rates. There have been numerous endeavors in both academia and industry to develop non-invasive methods for monitoring hydration status, specifically in non-clinical settings. However, a practical and reliable solution for routine hydration measurements is still lacking.&#13;
&#13;
To address this critical need, the hydration team at MIT has been actively working on the development of a non-invasive sensor to bridge the gap in hydration monitoring. This thesis focuses on the development of a wearable setup for the existing technology, enabling human studies to be conducted with individuals by comfortably integrating the hydration sensor into their daily routines for study periods of 24 hours and longer. The primary objective of the human studies is to investigate whether the sensor readings obtained during prolonged periods of dehydration can be distinguished from those obtained during normal (euhydrated) activities.&#13;
&#13;
Throughout the course of this research, a comprehensive wearable setup was meticulously developed. Human studies were conducted in two distinct cohorts, namely Euhydration and Dehydration, where participants wore the wearable setup to capture hydration data. This allowed for the analysis of hydration trends and would help with a comparative assessment of sensor readings under different conditions.&#13;
&#13;
The results obtained and the data analysis facilitate the evaluation of the wearable setup's efficacy and reliability in monitoring hydration status during prolonged periods of dehydration. Furthermore, the findings contribute to the advancement of hydration monitoring by shedding light on the potential of distinguishing between hydration states using the developed wearable setup.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient perturbative framework for coupling of radiative and&#13;
guided modes in nearly periodic surfaces</title>
<link href="https://hdl.handle.net/1721.1/151257" rel="alternate"/>
<author>
<name>Fisher, Sophie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/151257</id>
<updated>2023-08-01T03:09:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Efficient perturbative framework for coupling of radiative and&#13;
guided modes in nearly periodic surfaces
Fisher, Sophie E.
We present a semi-analytical framework for computing the coupling of radiative and guided waves in slowly varying (nearly uniform or nearly periodic) surfaces, which is especially relevant to the exploitation of nonlocal effects in large-area metasurfaces. Our framework bridges a gap in the theory of slowly varying surfaces: aside from brute-force numerical simulations, current approximate methods can model either guided or radiative waves, but cannot easily model their coupling. We solve this problem by combining two methods: the locally periodic approximation, which approximates radiative scattering by composing a set of periodic scattering problems, and spatial coupled-wave theory, which allows the perturbative modeling of guided waves using an eigenmode expansion. We derive our framework for both nearly uniform and nearly periodic surfaces, and we validate each case against brute-force finite-difference time-domain simulations, which show increasing agreement as the surface varies more slowly.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing the impact of vaccinations and AI based screening on cervical cancer prevention in low resource settings</title>
<link href="https://hdl.handle.net/1721.1/151255" rel="alternate"/>
<author>
<name>Rahemtulla, Jahanara</name>
</author>
<id>https://hdl.handle.net/1721.1/151255</id>
<updated>2023-08-01T03:39:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Assessing the impact of vaccinations and AI based screening on cervical cancer prevention in low resource settings
Rahemtulla, Jahanara
Cervical cancer disproportionately impacts lower-middle income countries (LMICs), with over 90% of cervical cancer related deaths occurring in these nations. Despite significant research and knowledge on how to prevent and manage cervical cancer, many women in low resource settings lack access to the necessary vaccinations, screening, and treatment. The WHO strategy for cervical cancer elimination recommends that: 90% of girls are vaccinated by the age of 15; 70% of women are screened using a high-performance test by the age of 35, and again at 45; and lastly that 90% of positively screened women are treated or their cancer is managed. These targets are optimistic relative to the current levels of prevention and treatment in LMICs.&#13;
&#13;
In this paper, we use HPVsim (an agent-based simulation model created by the Institute for Disease Modeling) to simulate the impact of vaccinations, screening, and treatment on health outcomes such as HPV prevalence, cervical cancer incidence, and mortality. We focus specifically on the impact of Automated Visual Evaluation (AVE), an AI-based screening technology developed by Global Health Labs that leverages machine learning models to diagnose precancer.&#13;
&#13;
Our results demonstrate that in the long term, HPV vaccination is more effective than screening and treatment strategies in reducing age-standardized cervical cancer incidence rates (ASIR). Vaccinations are predicted to reduce ASIR by 41%, compared to 12% for screening and treatment interventions over the next 35 years. Although the impact of vaccinations is greater than that of screening and treatment in the long run, the effects of vaccinations take years to be realized; screening is therefore critical in the short run. The paper also evaluates the impact of AI-based screening interventions (such as AVE). We find that in the long term (i.e., after 35 years), a 1% increase in screening probability is associated with a reduction in ASIR of 0.019, a 1% increase in treatment probability is associated with a reduction in ASIR of 0.015, and a 1% increase in AVE device sensitivity is associated with a reduction in ASIR of 0.09.&#13;
&#13;
We supplement our analysis with primary research interviews, which focused on best practices for deploying AI based cancer screening technologies. Our interview findings emphasize the importance of a systems approach and underscore the need to implement screening tools within the behavioral and social contexts of the societies being served. Overall, our study provides insights into the potential impact of cervical cancer prevention strategies and highlights the importance of tailored and context-specific approaches to screening and treatment in LMICs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Phygital Transformation: Adding Physical Devices to Digital Products to Improve the User Experience</title>
<link href="https://hdl.handle.net/1721.1/151254" rel="alternate"/>
<author>
<name>Gembali, Sahas</name>
</author>
<id>https://hdl.handle.net/1721.1/151254</id>
<updated>2023-08-01T03:31:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Phygital Transformation: Adding Physical Devices to Digital Products to Improve the User Experience
Gembali, Sahas
I see bright rectangles everywhere. Or, as Professor Hiroshi Ishii from the Tangible Media Group at the MIT Media Lab describes it, the “Pixel Empire.” Over the past two decades, digital screens have taken center stage in our lives, especially after the advent of the smartphone. While this has enabled incredible experiences that would not have been possible without the digital realm, we have lost tangibility in the process, along with the myriad affordances that physical objects provide and the richness of human interaction with the physical world. This thesis explores the concept of Phygital Transformation, the process of adding a physical device component to an existing digital product to improve the user experience by bringing some of the advantages of the physical world back to the digital world. It covers case studies of products currently on the market, ranging from fintech to fitness and healthcare, where Phygital Transformation has taken place successfully, and analyzes the factors behind their success in improving the user experience. It explores the benefits to the user and the business of a phygital user experience. Finally, it offers a framework for other digital products to evaluate their digital-only experience, build phygital concepts, and follow a systematic interdisciplinary process to add physical devices to their digital products, mapping the user-experience benefits to the business value gained.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Method to design and fabricate an octahedral-tetrahedral spaceframe from repurposed scaffolding</title>
<link href="https://hdl.handle.net/1721.1/151253" rel="alternate"/>
<author>
<name>Arul, Jerome</name>
</author>
<id>https://hdl.handle.net/1721.1/151253</id>
<updated>2023-08-01T03:03:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Method to design and fabricate an octahedral-tetrahedral spaceframe from repurposed scaffolding
Arul, Jerome
Spatial structures find many applications in architecture and construction as flat trusses, but few examples take advantage of the rich variety of configurations that a multi-layered octahedral-tetrahedral (octet) spaceframe can accommodate. The octet geometry is considered because of its inherent versatility, rigidity, and economy. This lattice has received renewed interest in the study of nano- and microscale cellular structures due to advances in materials science and additive manufacturing; we revisit the octet spaceframe in steel at the macroscale using repurposed components combined with accessible methods of fabrication.&#13;
&#13;
Available connection systems for octet lattices are complex and require intensive production, and existing structural systems are proprietary or purpose-engineered solutions. This provides an opportunity to simplify the art of both joint and strut system, and document an inexpensive and open technology with broad application in resource-strapped and remote environments where material efficiency and accessible assembly are essential. This thesis demonstrates a method to design and fabricate an octet spaceframe using repurposed scaffolding.&#13;
&#13;
A range of configurations and forms are modelled and generated within an octet point cloud. The structure can be evaluated in terms of human factors, utility, and stiffness. The members used are commoditized steel cross-bracing in a variety of commercially available sizes. The joints are fabricated from steel plate using computer numerically controlled (CNC) water-jet cutting and are welded together into an orthogonal gusset. We justify a scale that is appropriate for an individual or small group to handle, fabricate, and erect with only a few manual tools. A kit-of-parts for a multi-layered and multi-scale octet lattice is demonstrated, and FE methods to analyze and evaluate the structure are shown.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revamping Manufacturing Systems:&#13;
Utilization of Data Driven Models, Interpretable Machine Learning, and Data-Product Stakeholder Flow Analysis</title>
<link href="https://hdl.handle.net/1721.1/151252" rel="alternate"/>
<author>
<name>Adiwijaya, Zenia</name>
</author>
<id>https://hdl.handle.net/1721.1/151252</id>
<updated>2023-08-01T03:41:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Revamping Manufacturing Systems:&#13;
Utilization of Data Driven Models, Interpretable Machine Learning, and Data-Product Stakeholder Flow Analysis
Adiwijaya, Zenia
In a manufacturing environment, high volumes of data can easily be generated. However, to provide valuable insight, the right tools, media, and communication flows among stakeholders are crucial. This thesis presents a comprehensive exploration of developing data products in the manufacturing sector. It includes modeling industrial coffee roaster systems, improving the interpretability of machine learning models, and analyzing stakeholder flow to develop effective manufacturing data products.&#13;
&#13;
The first study involves modeling an industrial coffee roaster system. Using production data collected during the roasting process and multiple experiments, an 11-stacked long short-term memory (LSTM) neural network was developed and trained to model the dynamics of the industrial coffee roaster plant. The model was validated, and an initial closed-loop system was developed in MATLAB to further validate the model.&#13;
&#13;
The second study focused on improving the interpretability of machine learning models in a semiconductor fab. The SHapley Additive exPlanations (SHAP) methodology was applied to generate beeswarm and bar plots of the SHAP results, which identified the most important features for improving the throughput prediction. The study showed that Machine E utilization has a significant influence on the throughput prediction.&#13;
&#13;
Finally, the third study involved conducting a qualitative analysis of stakeholder flow in developing data products for manufacturing cases using interview methods and design structure matrix (DSM). The study found that the stakeholder flow varied based on the resources and stage of an organization, with the interaction with end-users being the main driver of the flow. The study also highlighted the importance of identifying and managing stakeholders with the highest coupled interaction for the development of data products.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Site Material Supply Chain Optimization</title>
<link href="https://hdl.handle.net/1721.1/151251" rel="alternate"/>
<author>
<name>Schleuter, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/151251</id>
<updated>2023-08-01T03:12:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Site Material Supply Chain Optimization
Schleuter, Lisa
A warehousing product traditionally falls under the category of “project-based” management due to heavy construction, multi-locational delivery, and relatively low production numbers. A different approach for material procurement and delivery must be taken in light of today’s supply chain environment to sustain the high rate of delivery demanded of a leading company in this field. &#13;
&#13;
This thesis explores alternatives to a project-based supply chain model as well as an evaluation of the “central warehouse” inventory model originally proposed by the company. The focus is on setting an inventory strategy for a product that is somewhat repeatable but constructed in a unique location for each delivery and not built on a traditional assembly line.&#13;
&#13;
By applying multi-echelon demand analysis and “physics of time” methods, and by creating a standardized method for risk and impact assessments, a basic framework is developed that a company in this unique situation can follow to set an initial inventory strategy. The framework is applied to this company as a case study.&#13;
&#13;
The strategy and amount of inventory proposed using this method significantly reduced inventory and associated storage costs compared to the central warehouse proposal, and will ensure more robust material availability than the current “project-based” ordering approach.&#13;
&#13;
The implication of this work for the wider industry is a proposed method for situations of high growth, minimal historical demand data, and product delivery that falls between high-rate assembly-line production and one-off construction projects.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Neural MMO Platform for Massively Multiagent Research</title>
<link href="https://hdl.handle.net/1721.1/151250" rel="alternate"/>
<author>
<name>Suarez, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/151250</id>
<updated>2023-08-01T04:14:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Neural MMO Platform for Massively Multiagent Research
Suarez, Joseph
Neural MMO is a computationally accessible research platform that combines large agent populations, long time horizons, open-ended tasks, and modular game systems. Existing environments feature subsets of these properties, but Neural MMO is the first to combine them all. We present Neural MMO as free and open source software with active support, ongoing development, documentation, and additional training, logging, and visualization tools to help users adapt to this new setting. Initial baselines on the platform demonstrate that agents trained in large populations explore more and learn a progression of skills. We raise other more difficult problems such as many-team cooperation as open research questions which Neural MMO is well-suited to answer. Finally, we discuss current limitations of the platform, potential mitigations, and plans for continued development.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Python-based tools for characterizing geosynchronous satellite behavior and evaluating maneuver prediction techniques</title>
<link href="https://hdl.handle.net/1721.1/151249" rel="alternate"/>
<author>
<name>Solera, Haley Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/151249</id>
<updated>2023-08-01T04:00:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Python-based tools for characterizing geosynchronous satellite behavior and evaluating maneuver prediction techniques
Solera, Haley Elizabeth
Geosynchronous (GEO) satellites maneuver frequently to maintain their Earth-relative position despite drift incurred from natural perturbations, but quantifying their diverse maneuver patterns can be challenging. Even for individual satellites, between one station-keeping cycle and the next, the frequency, magnitude, and direction of maneuvers can change. Additionally, there is very little accountability among operators to disclose detailed mission objectives and precise orbital data or to adhere to operational guidelines. This complicates the process of characterizing station-keeping control objectives, predicting maneuvers, and recognizing the early signs of a shift in a satellite's pattern of life (PoL). Characterizing PoLs for a diverse range of GEO satellites can help to contextualize historic on-orbit behaviors and behavior patterns, cultivate generalized maneuver prediction on a large scale, and help future behaviors to be quickly identified as anomalous, nominal, or indicative of a certain mission objective. This work presents two Python tools designed to address these challenges by improving the general accessibility of broad-scale PoL characterization and predictive aspects of Space Situational Awareness (SSA).&#13;
&#13;
First, a nomenclature for a generalizable PoL model is proposed, and a simple algorithm is introduced to enable PoL characterization according to this model. The algorithm is shown to efficiently process a large number of satellite histories by isolating PoL shifts - called nodes - even from sparse or low-precision position histories such as collections of two-line-element (TLE) sets. Then, a second simulation tool is described and demonstrated to evaluate probabilistic maneuver prediction models in the context of physical viewing constraints determined by user-defined surveillance scenarios. This work explores the potential of both tools to address data accessibility challenges and facilitate GEO satellite behavior characterization in order to foster a more cohesive and communicative SSA research community.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performing Trans-disciplinarity: Exploring Subjectivity and Objectivity in Knowledge Production</title>
<link href="https://hdl.handle.net/1721.1/151247" rel="alternate"/>
<author>
<name>Ishraki, Kazi</name>
</author>
<id>https://hdl.handle.net/1721.1/151247</id>
<updated>2023-08-01T03:35:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Performing Trans-disciplinarity: Exploring Subjectivity and Objectivity in Knowledge Production
Ishraki, Kazi
In this thesis, I make a case for challenging categorical thinking and draw upon non-human perspectives as a starting point to propose a pre-linguistic root to categorization, situating the concept of categorization within a sensory paradigm. I then highlight a selection of three artistic projects that look at human-non-human relationships, sensory augmentation, and bacterial consent to question methodologies of inquiry in relating beyond the human category. In conclusion, I present my thesis exhibition, Poetics of Inquiry, as a meditation on subjectivity, exploring how artistic research can offer an entry point into thinking beyond categorization. This thesis also attempts to situate my practice within a limited base of references as a contribution to the vast discourse of fellow knowledge producers within the field of Art, Culture, and Technology.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovative Supply Chain Cyber Risk Analytics: Unsupervised Clustering and Reinforcement Learning Approaches</title>
<link href="https://hdl.handle.net/1721.1/151244" rel="alternate"/>
<author>
<name>Siegel, Benjamin M.</name>
</author>
<id>https://hdl.handle.net/1721.1/151244</id>
<updated>2023-08-01T04:06:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Innovative Supply Chain Cyber Risk Analytics: Unsupervised Clustering and Reinforcement Learning Approaches
Siegel, Benjamin M.
The increasing frequency and severity of cyberattacks have made reliable cyber risk assessment a critical concern for organizations worldwide. Traditional cyber risk methodologies focus on the enterprise’s level of cyber maturity. Moreover, several commercial companies provide cyber ratings using information about the organization that is accessible to outside parties, often called outside-in ratings. However, merely focusing on the enterprise’s own cyber maturity may be insufficient given the increasing number of cyberattacks that exploit vulnerabilities in the organization’s supply chain. This thesis presents innovative approaches to cyber risk assessment that incorporate attributes of the digital supply chain.&#13;
&#13;
Chapter 2 is motivated by recent cyberattacks that relied on compromising software companies as a vector to attack their customers, illustrating the importance of going beyond the enterprise’s vulnerabilities and assessing potential threats from the supply chain. Taking into account this observation, the chapter presents a data-driven approach to identifying high risk software companies based on their relative position in the supply chain. The newly proposed approach is based on unsupervised clustering techniques applied to intuitive supply chain features of the respective software companies. The clustering approach is applied to a self-constructed dataset of over 4,600 software companies, and the model partitions the software companies into two clusters. Historical breach data that was not used in the clustering suggests that the second cluster, despite being smaller, has a significantly higher proportion of breached companies. Furthermore, feature differences between clusters reveal that the risky software companies tend to have many more customers and suppliers, particularly in the Technology and Business Services sectors. These findings highlight the importance of specific supply chain features as risk drivers in assessing the cybersecurity posture of software companies.&#13;
&#13;
In Chapter 3, we propose a novel approach to cyber risk assessment that directly incorporates an attacker model and in so doing are able to better predict enterprises’ vulnerabilities. We develop a theoretical attacking agent to randomly target a company and explore neighboring nodes in the supply chain graph. Deep reinforcement learning algorithms are used to train the attacker over time, identifying rewarding paths throughout the supply chain network. The fully trained attacker then simulates attacks, yielding a risk score for each individual company in the network. This score corresponds to the relative number of breaches the company experiences in simulation. This approach is empirically validated using a dataset of over 13,000 companies in the Retail sector, and the results are highly statistically significant when compared to real-world breach incident data and an existing outside-in ratings model. Because the theoretical attacker approach is validated by existing breach data and holds predictive power, this methodology can contribute to the development of more effective risk assessment strategies to combat the growing threat of cyberattacks.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Electrodes for an Electrostatically Actuated Mesh Reflector</title>
<link href="https://hdl.handle.net/1721.1/151242" rel="alternate"/>
<author>
<name>Overby, Kaleb D.</name>
</author>
<id>https://hdl.handle.net/1721.1/151242</id>
<updated>2023-08-01T03:04:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development of Electrodes for an Electrostatically Actuated Mesh Reflector
Overby, Kaleb D.
In-space manufacturing (ISM) of large structures with distributed actuators can enable radio frequency (RF) reflectors with previously inaccessible combinations of size and surface precision. A candidate approach for distributed actuation in large space structures is macroscale electrostatic actuation, which offers advantages in low power, high bandwidth, and low parasitic mass. Electrostatic actuation is commonly used in small-scale devices such as nano- and micro-electromechanical systems; however, its application in space structures, especially large structures fabricated via in-space manufacturing, presents novel challenges related to the exotic materials used in space, which must satisfy constraints from processability, environmental compatibility, and electro-thermo-mechanical structural performance. This thesis presents a detailed investigation of the design, processing, and performance of specialized electrodes compatible with an ISM technique termed Bend-Forming. A specific pair of electrodes is considered - a knitted metallic mesh electrode which serves as the RF reflector surface in a Bend-Formed antenna, and a deployable structural element which can be tiled to create a stiff command surface that manipulates the compliant mesh. The mechanical properties of the knitted mesh electrode are characterized via biaxial cyclic loading experiments, revealing anisotropic stiffness, lock-up at high strains, large hysteretic losses under cycling, and kinematic hardening. These insights are then used in a finite element simulation to determine a necessary pre-tension that results in uniform reaction forces at the attachments to a Bend-Formed support structure. The experiments and simulations are used together to determine the pre-tension for a 10 OPI mesh prototype, fabricated as part of this effort. This mesh also incorporates a specialized catenary system that mitigates wrinkling to maximize useful reflector area.&#13;
&#13;
Next, structural elements that comprise the command surface electrode scheme are designed, manufactured, and tested. Each structural element is a flattenable fiber-reinforced composite boom which could then be coiled and used as feedstock during Bend-Forming. The design process considered two optimization criteria – minimizing transverse deflection under electrostatic pressure and maximizing breakdown voltage – subject to constraints on coilability, compatibility with Bend-Forming, and overall structural performance. A parametric finite element study is used to characterize the effects of different cross-sectional features, e.g., local curvature, opening angle, and flange intersection points, on transverse deflection, with a view towards determining optimally stiff designs that minimize deflection. Separately, experiments and simulations are used to determine a stacking of conductors and dielectrics that can be applied to the boom surface such that each boom is a robust, individually addressable electrode.&#13;
&#13;
The thesis concludes with integration of this electrode pair in an electrostatically actuated mesh reflector. X-band RF testing on this reflector demonstrates in situ control over the reflector surface, enabling beam focusing through adjustment of the focal length and beam steering by tuning the bias voltage on individual electrodes in the command surface. This work provides a framework for the design of macro-scale electrostatic actuators, with applications ranging from the Bend-Formed mesh reflectors of present interest to a novel class of deployable electrostatically actuated space structures.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preventing WIPlash: Implementation of a Controlled Release Strategy to Improve Shop Performance</title>
<link href="https://hdl.handle.net/1721.1/151240" rel="alternate"/>
<author>
<name>Covell, David D.</name>
</author>
<id>https://hdl.handle.net/1721.1/151240</id>
<updated>2023-08-01T03:38:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preventing WIPlash: Implementation of a Controlled Release Strategy to Improve Shop Performance
Covell, David D.
Complex manufacturing systems are managed through their schedules, which help to make sure that the right part is in the right place at the right time. The challenge that many companies face is that the schedule is based on a series of assumptions, and the manufacturing system suffers when inevitable variability interferes with the best-laid plans. The challenge addressed in this paper involves how to consume an operation schedule and maximize output in a system with significant sources of variability.&#13;
&#13;
What follows is a case study on the use of well-documented methods in operations management to improve the performance of a manufacturing system. Through the use of Lean process mapping, Theory of Constraints focus on the bottleneck, and CONWIP methods of order release, I show that it is possible to achieve lower Work-In-Progress (WIP) levels and improved performance on key metrics relative to a purely push system. Successful implementation of the release strategy required not only an understanding of factory dynamics, but also an understanding of the dynamics surrounding implementation of changes within a balanced organization.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City as the Infrastructure of Innovation: &#13;
Insights and Proposals for Shaping Shenzhen's Innovation Districts and Knowledge-based Industry</title>
<link href="https://hdl.handle.net/1721.1/151235" rel="alternate"/>
<author>
<name>Huang, Kecheng</name>
</author>
<id>https://hdl.handle.net/1721.1/151235</id>
<updated>2023-08-01T03:01:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">City as the Infrastructure of Innovation: &#13;
Insights and Proposals for Shaping Shenzhen's Innovation Districts and Knowledge-based Industry
Huang, Kecheng
The innovation district has become a popular concept in urban planning and design practice in recent years. City planning departments around the world have adopted this methodology to promote the growth of knowledge-based industries and, at the same time, to improve interaction and collaboration between the innovation industry and the city.&#13;
&#13;
In this paper, the author explores the relationship between cities and innovation districts across multiple scales, taking Shenzhen and its typical innovation districts as the research subject. Through site investigation and comparative analysis with innovation districts in San Francisco, the author identifies the key issues and challenges in designing innovation districts in China’s context. In addition, the author proposes design strategies at both the urban design and architectural design levels to improve the functionality and adaptability of these districts.&#13;
&#13;
The findings provide insights for addressing the challenges faced by innovation districts in Shenzhen, China.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Murals</title>
<link href="https://hdl.handle.net/1721.1/151234" rel="alternate"/>
<author>
<name>Medrano, Mariana</name>
</author>
<id>https://hdl.handle.net/1721.1/151234</id>
<updated>2023-08-01T03:12:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Murals
Medrano, Mariana
This project emerges from the desire to make a mural for the interior space of an abortion clinic. In 2023, in the United States, reproductive agency is a threatened right, which makes abortion clinics spaces of resistance, alternative narratives, and radical care. The history of the care provided in clinics is older than what we may ever trace, because reproductive agency is a practice aided not only by contemporary medical professionals but also by the natural world itself. There is a historic plethora of herbs, roots, flowers, et cetera, that have been employed as abortifacients: plants that, when consumed, cause the uterus to contract, thus inducing a miscarriage, or abortion. This practice, which unfortunately today is largely lost as a collective social knowledge, intersects with murals in that plants also provide us with pigments. A mural for an abortion clinic holds the ambition of elevating this historic narrative of reproductive care and agency by encoding knowledge in pigments, a lost history in images. For this project, a catalog of abortifacient plants and their corresponding pigments was created to inform the making of a mural. The pigments themselves were extracted and synthesized from organic matter, and each plant was considered as rooted in intersectional histories of medicine, power, gender, colonialism, divinity, and color.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Facilitating Multi-Perspective-Taking in Adults: A Field Study</title>
<link href="https://hdl.handle.net/1721.1/151233" rel="alternate"/>
<author>
<name>Georgiadis, Mari</name>
</author>
<id>https://hdl.handle.net/1721.1/151233</id>
<updated>2023-08-01T04:06:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Facilitating Multi-Perspective-Taking in Adults: A Field Study
Georgiadis, Mari
Interpersonal collaboration is critical to solving complex problems and constitutes one of the most challenging aspects of effective teamwork. A significant amount of tension that leads to unproductive divisions can be traced back to the either-or thinking fallacy. This research explores what it takes to effectively facilitate multi-perspective-taking in adults in order to address this challenge. The results of the field study make a number of contributions: (i) they highlight the complexity and opportunity involved in facilitating multi-perspective-taking in adults; (ii) they offer structured guidelines for how to approach such training; and (iii) they exhibit initial evidence that reinforces the possibility and value of expanding the multi-perspective-taking capacity of adults.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Propensity to Borrow out of Expected Permanent Income</title>
<link href="https://hdl.handle.net/1721.1/151232" rel="alternate"/>
<author>
<name>Wilson, John</name>
</author>
<id>https://hdl.handle.net/1721.1/151232</id>
<updated>2023-08-01T04:16:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Propensity to Borrow out of Expected Permanent Income
Wilson, John
One prediction of the Permanent Income Hypothesis is that households who are illiquid may wish to borrow against future income when there is a positive shock to their future income process. We informally model this prediction in a two-period setting and then test it using Equifax data. We demonstrate that Democrats experience a strong, positive shock to their expectations about their future real income around the 2020 presidential election. Our difference-in-difference analysis finds that in response Democrats were 0.08% more likely to take on debt in order to buy a car, and that their outstanding auto loan balance increased by $104 on average. Compared to the rate at which Democrats purchased cars via loan prior to the election, this 0.08% increase represents a 1.37% increase in the purchase rate. We validate our results by showing that a similar result holds for installment loan purchases. We show that this result is robust to our empirical assumptions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Responsible Design: Design Methods for Anthropocentric Sustainable Futures</title>
<link href="https://hdl.handle.net/1721.1/151231" rel="alternate"/>
<author>
<name>Quirós Balma, Andrea</name>
</author>
<id>https://hdl.handle.net/1721.1/151231</id>
<updated>2023-08-01T03:55:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Responsible Design: Design Methods for Anthropocentric Sustainable Futures
Quirós Balma, Andrea
Finding a state of sustainability in which present and future human generations may have equal opportunity in perpetuity is an anthropocentric pursuit. It requires intergenerational equity in everything we do, including how we design the products, systems, and companies we build and use. Responsible Design is a new methodology that helps provide the structure designers need to develop sustainable solutions. It is an evolved version of Human-Centered Design, a methodology that, although well-intentioned, can deliver solutions with dangerous effects on the environment. Responsible Design uses frameworks that consider the current climate change crisis across scales, the ethical concerns it poses across generations, and the viability of solutions across environmental, social, and economic dimensions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ah New Riddim: A Marked (Black) Axiological Shift Across Space and Time</title>
<link href="https://hdl.handle.net/1721.1/151230" rel="alternate"/>
<author>
<name>Neptune, Christie</name>
</author>
<id>https://hdl.handle.net/1721.1/151230</id>
<updated>2023-08-01T04:05:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Ah New Riddim: A Marked (Black) Axiological Shift Across Space and Time
Neptune, Christie
Can the axiologies and stories oscillating the margins mark the discourse of Western logic positioned at the center, and how might this marking register in visual representations of the urban? Focusing on the West Indian community of East Flatbush, this thesis argues for the turn from the universal (uni-versus) towards the multiversal (multus-versus) within discursive urban space. First, this thesis demonstrates the potential of black popular culture within representational practices that shift the axes of power from the center to the margins. Second, it examines how frameworks of African temporality and the marked conventions of modern cinema and visual culture bring attention to a plurality of black subjectivity(s) across both dominant and marginal spatialities. Finally, this thesis considers the agency of marked axiological shifts within artistic interventions that foster the persistence of new knowledge formations "in relation to" a diversity of global perspectives.&#13;
&#13;
Keywords: Marked Axiological Shifts, African temporality, concentric storytelling, subjectivity, place, interactivity, Filmic Encounter
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecast-Driven Inventory Management for the Fast-Moving Consumer Goods Industry</title>
<link href="https://hdl.handle.net/1721.1/151229" rel="alternate"/>
<author>
<name>Al Mesfer, Abdulelah S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151229</id>
<updated>2023-08-01T03:05:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Forecast-Driven Inventory Management for the Fast-Moving Consumer Goods Industry
Al Mesfer, Abdulelah S.
This thesis investigates the development and evaluation of various demand forecasting models for the Fast-Moving Consumer Goods (FMCG) industry on real-world data to devise an inventory control policy for a third-party logistics provider. Demand forecasting is crucial in the retail industry, influencing supply chain management, inventory control, and pricing strategies. Accurately predicting demand is essential for optimizing resource allocation, reducing stockouts, and minimizing holding costs. In this study, we employ several time series models, including traditional time series models, such as ARIMA and SARIMA, and machine learning techniques, such as Random Forests, XGBoost, and Prophet, to forecast retail demand. The performance of these models is assessed using time series cross-validation techniques and accuracy measures, such as RMSE, MAPE, and MAE. Data preprocessing steps, including resampling, imputation of missing values and outliers, SKU prioritization, and feature engineering, are performed to enhance the reliability of the forecasting models. The results indicate that XGBoost outperforms the other models, showcasing its ability to generate accurate FMCG demand forecasts. Based on the forecasting error, a continuous review (s,Q) policy is formulated to improve inventory management for the third-party logistics provider. The proposed inventory control policy demonstrates the potential to minimize holding costs for the FMCG industry. Future research directions include the investigation of additional forecasting models, the integration of external factors, and the extension of the study to other retail contexts.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial Neural Networks for Programming Quantum Annealers</title>
<link href="https://hdl.handle.net/1721.1/151228" rel="alternate"/>
<author>
<name>Bosch, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/151228</id>
<updated>2023-08-01T04:12:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Artificial Neural Networks for Programming Quantum Annealers
Bosch, Samuel
Quantum machine learning is an emerging field of research at the intersection of quantum computing and machine learning. It has the potential to enable advances in artificial intelligence, such as solving problems intractable on classical computers. Some of the fundamental ideas behind quantum machine learning are very similar to kernel methods in classical machine learning. Both process information by mapping it into high-dimensional vector spaces without explicitly calculating their numerical values. Quantum annealers are mostly studied in the adiabatic regime, a computational model in which the quantum system remains in an instantaneous ground energy eigenstate of a time-dependent Hamiltonian. Our research focuses on the diabatic regime where the quantum state does not necessarily remain in the ground state during computation. Concretely, we explore a setup for performing classification on labeled classical datasets, consisting of a classical neural network connected to a quantum annealer. The neural network programs the quantum annealer's controls and thereby maps the annealer's initial states into new states in the Hilbert space. The neural network's parameters are optimized in a way that maximizes the distance of states corresponding to inputs from different classes and minimizes the distance between quantum states corresponding to the same class. Recent literature, which connected a small linear network to a quantum annealer and used it to learn small, linearly inseparable datasets, showed that at least some of the "learning" is due to the quantum annealer. In this study, we simulate this system to learn several common datasets, including those for image and sound recognition. We conclude that adding a small quantum annealer does not provide a significant benefit over just using a regular (nonlinear) classical neural network.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Make vs. Buy Optimization for Industrial Manufacturing &amp; Distribution Businesses</title>
<link href="https://hdl.handle.net/1721.1/151227" rel="alternate"/>
<author>
<name>Esposito, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/151227</id>
<updated>2023-08-01T04:11:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Make vs. Buy Optimization for Industrial Manufacturing &amp; Distribution Businesses
Esposito, Nicholas
In organizations producing products ranging from complex assemblies to individual components for customers, strategic decisions whether to make or buy the components required for the final product can have significant implications on the organization’s income statement, balance sheet, and value proposition in the market. Existing literature describes broad frameworks for evaluating these make vs. buy decisions, but a gap exists in how these decisions should be treated in organizations that are vertically integrated across manufacturing and distribution, especially with a commoditized product. Here we show the development of a broad strategic sourcing framework and detailed item-level analytical tool to aid in these make vs. buy decisions for manufacturer-distributors in commoditized markets. We show that the novel combination of internal capability and capacity data, external supplier segmentation, and a total cost of ownership approach to the financial impacts of a supplier choice can significantly aid in the identification and prioritization of strategic sourcing opportunities. We expect these new methods and tools to have a significant positive impact on the profitability of the partner organization, reduce the total level of inventory required in their network, and improve their value proposition to customers in the market.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Time Series Structure Learning:&#13;
Formulation of an Event Driven Prior Distribution</title>
<link href="https://hdl.handle.net/1721.1/151224" rel="alternate"/>
<author>
<name>Forman, David J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151224</id>
<updated>2023-08-01T03:19:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bayesian Time Series Structure Learning:&#13;
Formulation of an Event Driven Prior Distribution
Forman, David J.
We study the prior distribution over structures of a Bayesian time series structure learning model—the Temporal Interaction Model (TIM) of Siracusa and Fisher III. We develop a new method for setting the hyperparameters of the TIM structure prior. Our contribution enables more consistent inference performance as the number of interacting nodes in the time series increases, which we show analytically and with synthetic experiments. Secondly, we prove that the form of the prior distribution is within the curved exponential family. Finally, we test our developments empirically. Because traffic dynamics are comparatively accessible to common knowledge, we choose traffic time series as a test case to examine general behaviors of TIM inference and in particular our parameterization of the structure prior.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Manta: An In-Situ Debugging Tool for Programmable Hardware</title>
<link href="https://hdl.handle.net/1721.1/151223" rel="alternate"/>
<author>
<name>Moseley, Fischer Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/151223</id>
<updated>2023-08-01T04:18:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Manta: An In-Situ Debugging Tool for Programmable Hardware
Moseley, Fischer Jay
Designing and debugging digital hardware has traditionally used vendor-provided tools, which are large and platform-constrained. Designers expend considerable effort accommodating toolchains for Field Programmable Gate Array (FPGA) development, which include utilities for debugging logic on the FPGA itself. As an alternative, this work proposes Manta, a lightweight, modular, platform-independent, and intuitive tool for debugging digital logic on FPGAs. Manta is designed to supplement vendor tools, and includes a logic analyzer, block memory interface, and the ability to measure and control individual signals on the FPGA. These tools are shown to build faster and consume fewer on-chip resources than equivalent vendor offerings, without any restrictions on chip family or vendor. Ethernet and UART interfaces provide convenient and high bandwidth communication between the host machine and target FPGA, and an extensible Python API allows for easy development of custom applications. This complete system produces an accessible and equitable FPGA development experience for use in educational, professional, and hobbyist environments alike.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Long-Range Underwater Backscatter via Van Atta Acoustic Networks</title>
<link href="https://hdl.handle.net/1721.1/151222" rel="alternate"/>
<author>
<name>Rademacher, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/151222</id>
<updated>2023-08-01T03:59:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Long-Range Underwater Backscatter via Van Atta Acoustic Networks
Rademacher, Jack
We present the design, implementation, and evaluation of Van Atta Acoustic Backscatter (VAB), a technology that enables long-range, ultra-low-power networking in underwater environments. At the core of VAB is a novel, scalable underwater backscatter architecture that bridges recent advances in RF backscatter (Van Atta architectures) with ultra-low-power underwater acoustic networks. Our design introduces multiple innovations across the networking stack, which enable it to overcome unique challenges that arise from the electro-mechanical properties of underwater backscatter and the challenging nature of low-power underwater acoustic channels. We implemented our design in an end-to-end system, and evaluated it in over 1,500 real-world experimental trials in a river and the ocean. Our evaluation demonstrates that VAB achieves a communication range that exceeds 300m in round trip backscatter across orientations (at a BER of 10⁻³). We compared our design head-to-head with past state-of-the-art systems, demonstrating a 15x improvement in communication range at the same throughput and power. By realizing hundreds of meters of range in underwater backscatter, this paper presents the first practical system capable of coastal monitoring applications. Finally, our evaluation represents the first experimental validation of underwater backscatter in the ocean.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Systems and Human-Centered Design Approach for Awareness, Early Diagnosis and Treatment Adherence of ADHD and ADD for Children of India</title>
<link href="https://hdl.handle.net/1721.1/151218" rel="alternate"/>
<author>
<name>Goyal, Akshita</name>
</author>
<id>https://hdl.handle.net/1721.1/151218</id>
<updated>2023-08-01T04:25:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Integrated Systems and Human-Centered Design Approach for Awareness, Early Diagnosis and Treatment Adherence of ADHD and ADD for Children of India
Goyal, Akshita
This thesis explores and compares pre- and post-COVID-19 pandemic awareness, understanding, and challenges related to ADHD/ADD across external stakeholders (parents, teachers, doctors) in a child’s ecosystem in India. It includes three independent stakeholder research studies and a collective system analysis to define the barriers to diagnosis and treatment and intentions to support child mental health. We emphasize the role of cultural, demographic, and socio-economic factors in Indian parents’ and teachers’ responses to ADHD/ADD. We explore how different areas of medical practice, like psychiatry, psychology, and pediatrics, engage with children with ADHD/ADD. The thesis also includes challenges doctors face in providing appropriate care to children with mental health difficulties. Through system analysis and systems thinking, we identify the collective pain points and needs of the ecosystem within which a child with the potential for ADHD/ADD lives. We also address post-COVID-19 shifting trends in societal acceptance of and willingness to support child mental health through quantitative and qualitative data analysis.&#13;
 &#13;
Using insights from the primary stakeholder and secondary background research, this thesis takes a human-centered design, system design, and platform strategy approach to create engaging, easy-to-implement, and educational solution concepts for pediatric ADHD/ADD. The work aims to promote awareness, early diagnosis, and treatment adherence for ADHD/ADD in India. The proposed solution, SMURF, is a multi-stakeholder, culturally sensitive, accessible platform concept designed to support children with ADHD/ADD. The platform's primary goal is to create awareness about ADHD/ADD amongst stakeholders and help periodically observe symptoms in children for early diagnosis. It aims to support doctors in information aggregation, generating prescreening reports and gentle nudges to promote treatment adherence. The initial platform blueprint has been successfully validated by psychiatrists and psychologists in India. The thesis provides guidelines for the future design and implementation of an equitable platform and the need for governmental support to help children with ADHD/ADD across different socio-economic backgrounds.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Utopic Déjà Vu: The Power of the Public Hallucination in the UAE</title>
<link href="https://hdl.handle.net/1721.1/151217" rel="alternate"/>
<author>
<name>Benton, Christopher Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/151217</id>
<updated>2023-08-01T03:19:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Utopic Déjà Vu: The Power of the Public Hallucination in the UAE
Benton, Christopher Joshua
Every year, the United Arab Emirates processes millions of resident visas for those who chase the Dubai dream. In fact, the country has the largest migrant population per capita in the world. Much like Disney, Dubai deals in feeding the imagination, chiefly of immigrant-workers who make up 90% of the population. The promise of tax-free living, year-round sunny skies, and a bespoke lifestyle materializes the luxury skyscrapers and spectacular infrastructure projects for which the city is known.&#13;
&#13;
From an inconsequential fishing village 50 years ago to one of the world’s great metropolises, Dubai presents a type of future shock for so-called locals, expatriates, and migrant workers who all negotiate the city in profoundly different ways. This thesis will lay out a framework for utopic déjà vu, a new critical term to describe how the guest worker survives in a city undergoing rapid change: of a past under constant regeneration, a present that is always in flux, and a future that demands constant resources and psycho-spatial attention. By investigating narratives of national history and historiography, urban architecture, government policy, critical theory, and my own artwork, we will explore a new mechanism for understanding time and space in the city of the future, as well as the ethics and aesthetics that it propagates.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privacy Risk Mitigation Strategies for Drone Package Delivery</title>
<link href="https://hdl.handle.net/1721.1/151216" rel="alternate"/>
<author>
<name>Ding, Geoffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/151216</id>
<updated>2023-08-01T03:44:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Privacy Risk Mitigation Strategies for Drone Package Delivery
Ding, Geoffrey
Uncrewed aerial vehicles (UAVs), or drones, are increasingly used to deliver goods. In an emerging business model, a drone operator partners with multiple businesses to offer drone delivery as a service. Due to regulations requiring drones to broadcast position information, this business model results in a privacy risk: Third-party observers may use broadcast drone trajectories to link customers to the vendors from which they order, with a wide range of potential consequences. We propose a probabilistic definition of privacy risk based on the likelihood of inferring which customer receives a delivery from which vendor. Next, we quantify these risks and evaluate the impacts of the number of orders, drone capacity, decoy vendors, and delivery time requirements on privacy. We then discuss how privacy risk may be integrated into the vehicle routing problem or explicitly optimized on its own. Finally, we show the geographical dependence of the trade-off between privacy and efficiency.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Architecting the Future of Construction Enterprise for Intrapreneurship</title>
<link href="https://hdl.handle.net/1721.1/151215" rel="alternate"/>
<author>
<name>Osugi, Tatsuya</name>
</author>
<id>https://hdl.handle.net/1721.1/151215</id>
<updated>2023-08-01T03:33:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Systems Architecting the Future of Construction Enterprise for Intrapreneurship
Osugi, Tatsuya
Japan’s construction industry faces a severe problem of an aging and shrinking workforce. Construction companies seek to diversify their revenue base and promote intrapreneurship, but results have been limited. This thesis analyzes the factors that led to the current situation and the architecture that promotes intrapreneurship, using the ARIES framework to propose an architecture that a future enterprise should adopt, along with an implementation plan.&#13;
&#13;
A literature review is conducted to study the factors necessary to promote intrapreneurship and to determine the thesis direction. The factors that hinder intrapreneurship are clarified through landscape analysis, stakeholder analysis, ten view elements model, SWOT analysis, and X-matrix analysis of the target enterprise, and the actual state of the enterprise’s current architecture is determined.&#13;
&#13;
From the results, five directions of envisioned future are extracted. Then, tradespace models are created and evaluated by multi-attribute utility and implementability. The generated architectures are narrowed down to four by three down-selections, and the most robust architecture is comprehensively selected through scenario-based testing, quantitative SWOT analysis, and risk analysis.&#13;
&#13;
The final architecture offers top and middle managers a leadership program for innovation, a reward design, an agile development lifecycle, a PMO, a separate unit, a project-oriented structure, and a platform for accumulating intrapreneurial know-how and facilitating stakeholder communication. The X-matrix analysis is again conducted and reveals that this architecture fulfills the business strategy and stakeholder values to a great extent. Then, elements of the implementation plan for the new architecture are identified from the element anatomy, and an implementation plan is created. The results of this thesis can be used to design an architecture for intrapreneurship promotion, which could be adapted to construction and adjacent markets.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Product to Platform Strategy: Transitioning COVID-19 Citizen Tracing Product to Centralized Personal Health Record (PHR) Platform in Indonesia</title>
<link href="https://hdl.handle.net/1721.1/151214" rel="alternate"/>
<author>
<name>Listyo, Sabrina Woro Anggraini</name>
</author>
<id>https://hdl.handle.net/1721.1/151214</id>
<updated>2023-08-01T03:14:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Product to Platform Strategy: Transitioning COVID-19 Citizen Tracing Product to Centralized Personal Health Record (PHR) Platform in Indonesia
Listyo, Sabrina Woro Anggraini
Institutions are increasingly realizing the importance of transitioning from stand-alone digital products to platform-based models. This shift presents several challenges, as platforms need to build an ecosystem that involves multiple third-party partners. This study examines the Satu Sehat app (“Healthy United” in Indonesian) from the Indonesian Ministry of Health. It serves as a health service platform that enables citizens to access their Personal Health Records (PHR) and improves the patient journey. The app evolved from a stand-alone digital product, the previous national COVID-19 tracing app, Peduli Lindungi. The Satu Sehat Mobile platform has a unique model compared to other commercial platforms developed by private companies, as it was created by a government institution, the Digital Transformation Team (DTO) of the Ministry of Health. As a result, its key metrics and success factors differ from those of commercial platforms. Through a combination of literature review and interviews with the platform development team, this thesis explores why a product should evolve into a platform, what key factors drive a successful transition from product to platform, and how platform owners can effectively collaborate with third-party complementors. One key advantage of a platform-based model is the potential to create network effects, where the value of the platform increases as more users and third-party complementors join and interact with it. Effective collaboration with third-party complementors is also critical to the success of a platform-based model. This involves providing the necessary tools and resources to enable complementors to build on the platform and monetize their services. Drawing from the thesis findings, institutions can better assess whether adopting a platform-based model is necessary to remain competitive and relevant in today's market.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trustworthy Learning and Uncertainty Quantification under Constraints</title>
<link href="https://hdl.handle.net/1721.1/151210" rel="alternate"/>
<author>
<name>Shen, Maohao</name>
</author>
<id>https://hdl.handle.net/1721.1/151210</id>
<updated>2023-08-01T03:08:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Trustworthy Learning and Uncertainty Quantification under Constraints
Shen, Maohao
Machine learning techniques have become increasingly important in a wide range of fields, including medicine, finance, and autonomous driving. While state-of-the-art machine learning models can achieve promising prediction performance, there is an increasing need for reliable and trustworthy machine learning techniques, which requires the models to possess other capabilities, such as privacy preservation, computational efficiency, interpretability, robustness, and uncertainty quantification. This thesis focuses on proposing novel techniques for critical requirements of trustworthy machine learning models, including uncertainty quantification, computational efficiency, and privacy preservation. Within this realm, we focus on aspects of uncertainty quantification problems for different settings and tasks, as well as applications with privacy or computational constraints in the form of limited access to training data and model internals. In particular, this thesis investigates and develops methods to address three important sub-problems of trustworthy machine learning, namely post-hoc uncertainty learning; reliable gradient-free and likelihood-free prompt tuning; and trustworthy unsupervised multi-source-free domain adaptation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Change Conversations with Children: Making Sustainability Meaningful, Tangible, and Actionable</title>
<link href="https://hdl.handle.net/1721.1/151208" rel="alternate"/>
<author>
<name>Crease, Alexander</name>
</author>
<author>
<name>Singhasaneh, Natha</name>
</author>
<id>https://hdl.handle.net/1721.1/151208</id>
<updated>2023-08-01T04:00:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Climate Change Conversations with Children: Making Sustainability Meaningful, Tangible, and Actionable
Crease, Alexander; Singhasaneh, Natha
For many of us, negative emotions surface when confronted with the environmental issues we face as a society. Young people in particular can experience high levels of eco-anxiety due to climate change. Adults do not feel informed enough to facilitate constructive conversations with young people about climate change, and often feel stressed and disempowered from doing so. A myriad of issues, including lack of confidence, friction in the education system, pessimistic messaging, misinformation, and polarization perpetuate a “spiral of silence” around the subject, making it one that adults do not like bringing up with each other, let alone their children. Value-based behavioral change surrounding sustainability at the individual, communal, and societal levels is essential for an environmentally resilient future. It is our responsibility to equip the next generation with the values, mindsets, and habits that prepare them for the environmental challenges they will face in the future. To help break the spiral of silence, we examined how conversations and action around sustainability can be normalized. Specifically, we explored how we might make discussion of sustainability meaningful, tangible, and actionable for children, while providing adults with an approachable, adaptable, and empowering resource for these conversations. Our work reviews academic research on eco-anxiety and the effectiveness of various communication pedagogies with existing solutions and their implications. In addition to our secondary research, we engaged a wide range of stakeholders – including parents, educators, researchers, and children – through interviews and workshops to develop the Sustainability Communication Framework. This framework includes Design Elements critical for engaging children in sustainable thinking and action, as well as Design Principles which guide how ideas can be most effectively communicated. 
Case studies are provided to demonstrate how effective solutions can be viewed through the lens of the framework. The framework can be applied in a variety of settings to guide experiences that normalize and reinforce values around sustainability. By supporting adults in making sustainability education age-appropriate to children, we hope to create a long-term impact by changing the way that children view and interact with the world, nurturing them into critical thinkers and active changemakers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VisText: A Benchmark for Semantically Rich Chart Captioning</title>
<link href="https://hdl.handle.net/1721.1/151207" rel="alternate"/>
<author>
<name>Tang, Ben Jun-Hong</name>
</author>
<id>https://hdl.handle.net/1721.1/151207</id>
<updated>2023-08-01T03:02:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">VisText: A Benchmark for Semantically Rich Chart Captioning
Tang, Ben Jun-Hong
Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities. However, current approaches for automatically generating such captions struggle to articulate the perceptual or cognitive features that are the hallmark of charts (e.g., complex trends and patterns). In response, we introduce VisText: a dataset of 12,441 pairs of charts and captions that describe the charts’ construction, report key statistics, and identify perceptual and cognitive phenomena. In VisText, a chart is available as three representations: a rasterized image, a backing data table, and a scene graph — a hierarchical representation of a chart’s visual elements akin to a web page’s Document Object Model (DOM). To evaluate the impact of VisText, we fine-tune state-of-the-art language models on our chart captioning task and apply prefix-tuning to produce captions that vary the semantic content they convey. Our models generate coherent, semantically rich captions and perform on par with state-of-the-art chart captioning models across machine translation and text generation metrics. Through qualitative analysis, we identify six broad categories of errors that our models make that can inform future work.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Systems Theory to Analyze Cyber Resiliency of Naval Engineering Systems</title>
<link href="https://hdl.handle.net/1721.1/151206" rel="alternate"/>
<author>
<name>Montvydas, Ryan G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151206</id>
<updated>2023-08-01T03:08:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Applying Systems Theory to Analyze Cyber Resiliency of Naval Engineering Systems
Montvydas, Ryan G.
The U.S. Coast Guard, like many other maritime organizations, is increasingly adapting to a technologically advanced new age of cutters – vessels greater than 65 feet in length. This new technology means that more systems on board ships and cutters have cyber interactions and interfaces than before. As our systems at sea grow more connected to the digital and cyber worlds, our naval engineering systems face new threats that could compromise and damage essential engineering components. Access to these physical systems by threat actors through cyber interactions could potentially render a ship useless if a proper response is not taken. Therefore, these physical systems need to adopt cyber resiliency as a new capability in light of their technological advancements.&#13;
&#13;
This research explores cyber resilience as a novel concept in the U.S. Coast Guard naval engineering field. The research defined the difference between cyber resiliency and cybersecurity, assessed how cyber resiliency can be measured for awareness and optimization, and explored how the System-Theoretic Process Analysis created by Dr. Nancy Leveson can be harnessed to identify potential resiliency gaps in these cyber-physical systems. Further, systems theory for cyber resiliency was applied to a U.S. Coast Guard cutter to explore its cyber resiliency capabilities and needs. Identifying the cyber resiliency needs of a cutter enabled the creation of a set of recommended system cyber resiliency requirements surrounding the automated controls of a naval propulsion system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The compatibility of the salesman with his customers</title>
<link href="https://hdl.handle.net/1721.1/151158" rel="alternate"/>
<author>
<name>Bernheimer, Walter Samuel.</name>
</author>
<id>https://hdl.handle.net/1721.1/151158</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The compatibility of the salesman with his customers
Bernheimer, Walter Samuel.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1964
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dyadic behavior in small groups</title>
<link href="https://hdl.handle.net/1721.1/151157" rel="alternate"/>
<author>
<name>Levy, Steven David.</name>
</author>
<id>https://hdl.handle.net/1721.1/151157</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Dyadic behavior in small groups
Levy, Steven David.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1963; Includes bibliographical references (leaf [76]).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An experimental measurement of the thermal conductivity of cesium vapor</title>
<link href="https://hdl.handle.net/1721.1/151156" rel="alternate"/>
<author>
<name>Sununu, John H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151156</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">An experimental measurement of the thermal conductivity of cesium vapor
Sununu, John H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1963; Includes bibliographical references (leaf 19).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The wetting of alumina by vitreous abrasive wheel bonds</title>
<link href="https://hdl.handle.net/1721.1/151154" rel="alternate"/>
<author>
<name>Sidhwa, Almitra Pheroze.</name>
</author>
<id>https://hdl.handle.net/1721.1/151154</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">The wetting of alumina by vitreous abrasive wheel bonds
Sidhwa, Almitra Pheroze.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1959; Includes bibliographical references (leaves 51-52).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A digital phase-locked ultrastable wideband FM oscillator.</title>
<link href="https://hdl.handle.net/1721.1/151148" rel="alternate"/>
<author>
<name>Burns, Ivan Raymond.</name>
</author>
<id>https://hdl.handle.net/1721.1/151148</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">A digital phase-locked ultrastable wideband FM oscillator.
Burns, Ivan Raymond.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1970; Bibliography: leaf 78.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning in unstable environments : directional planning as a practical alternative to strategic planning.</title>
<link href="https://hdl.handle.net/1721.1/151147" rel="alternate"/>
<author>
<name>Saltiel, Jack.</name>
</author>
<id>https://hdl.handle.net/1721.1/151147</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Planning in unstable environments : directional planning as a practical alternative to strategic planning.
Saltiel, Jack.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1977; Bibliography: leaves 123-125.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new analysis in the continuum theory of thermal ignition</title>
<link href="https://hdl.handle.net/1721.1/151146" rel="alternate"/>
<author>
<name>Lermant, Jean-Claude.</name>
</author>
<id>https://hdl.handle.net/1721.1/151146</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">A new analysis in the continuum theory of thermal ignition
Lermant, Jean-Claude.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1983; Includes bibliographical references.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis and evaluation of the Brazilian railway system.</title>
<link href="https://hdl.handle.net/1721.1/151144" rel="alternate"/>
<author>
<name>De Souza, Luis Claudio Garcia.</name>
</author>
<id>https://hdl.handle.net/1721.1/151144</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">An analysis and evaluation of the Brazilian railway system.
De Souza, Luis Claudio Garcia.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1975; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Connections of the olfactory tracts in the bullhead catfish I̲c̲ṯa̲ḻu̲ṟu̲s̲ ṉe̲ḇu̲ḻo̲s̲u̲s̲.</title>
<link href="https://hdl.handle.net/1721.1/151037" rel="alternate"/>
<author>
<name>Finger, Thomas Emanuel.</name>
</author>
<id>https://hdl.handle.net/1721.1/151037</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Connections of the olfactory tracts in the bullhead catfish I̲c̲ṯa̲ḻu̲ṟu̲s̲ ṉe̲ḇu̲ḻo̲s̲u̲s̲.
Finger, Thomas Emanuel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Psychology, 1973; Bibliography: leaves 24-26.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an oyster cultch material for three dimensional oyster aquaculture.</title>
<link href="https://hdl.handle.net/1721.1/151034" rel="alternate"/>
<author>
<name>Fisher, John Walker.</name>
</author>
<id>https://hdl.handle.net/1721.1/151034</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Development of an oyster cultch material for three dimensional oyster aquaculture.
Fisher, John Walker.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1973; Bibliography: leaves 59-60.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The fuel cycle economics of PWR extended burnup</title>
<link href="https://hdl.handle.net/1721.1/151032" rel="alternate"/>
<author>
<name>Fieldhack, Randall W.</name>
</author>
<id>https://hdl.handle.net/1721.1/151032</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">The fuel cycle economics of PWR extended burnup
Fieldhack, Randall W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1986; Bibliography: leaves 115-117.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reynolds number effects on the unsteady aerodynamic behavior of an NACA 0012 airfoil</title>
<link href="https://hdl.handle.net/1721.1/151029" rel="alternate"/>
<author>
<name>Fletcher, Michael Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/151029</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Reynolds number effects on the unsteady aerodynamic behavior of an NACA 0012 airfoil
Fletcher, Michael Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1986; Bibliography: leaves 93-95.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The analysis and design of a flexible robotic fixturing and drilling system</title>
<link href="https://hdl.handle.net/1721.1/151028" rel="alternate"/>
<author>
<name>Fields, Antony Jonathan.</name>
</author>
<id>https://hdl.handle.net/1721.1/151028</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">The analysis and design of a flexible robotic fixturing and drilling system
Fields, Antony Jonathan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1986; Bibliography: leaves 89-91.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A role-playing simulation for the planning and control of a multiple-product salesforce</title>
<link href="https://hdl.handle.net/1721.1/151027" rel="alternate"/>
<author>
<name>Flint, Brilsford B.</name>
</author>
<id>https://hdl.handle.net/1721.1/151027</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">A role-playing simulation for the planning and control of a multiple-product salesforce
Flint, Brilsford B.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Vita.; Bibliography: leaves 98-100.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The control of a high-speed, low-jitter pulse generator</title>
<link href="https://hdl.handle.net/1721.1/151026" rel="alternate"/>
<author>
<name>Fitzpatrick, T. A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151026</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">The control of a high-speed, low-jitter pulse generator
Fitzpatrick, T. A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1986; Bibliography: leaf 40.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electricity shortage planning : a regional perspective</title>
<link href="https://hdl.handle.net/1721.1/151025" rel="alternate"/>
<author>
<name>Finelli, Francis A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151025</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Electricity shortage planning : a regional perspective
Finelli, Francis A.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 185-189.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approaching a new market : the activities of multinational pharmaceutical firms in China</title>
<link href="https://hdl.handle.net/1721.1/151023" rel="alternate"/>
<author>
<name>Fickle, Kathleen Anne.</name>
</author>
<id>https://hdl.handle.net/1721.1/151023</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Approaching a new market : the activities of multinational pharmaceutical firms in China
Fickle, Kathleen Anne.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 274-281.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo simulation for the pricing of GNMA securities</title>
<link href="https://hdl.handle.net/1721.1/151022" rel="alternate"/>
<author>
<name>Fetter, Robert J.
            (Robert Jonathan)</name>
</author>
<id>https://hdl.handle.net/1721.1/151022</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Monte Carlo simulation for the pricing of GNMA securities
Fetter, Robert J.
            (Robert Jonathan)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Includes bibliographical references (leaves 76-77).
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An economic study of the radium industry</title>
<link href="https://hdl.handle.net/1721.1/151020" rel="alternate"/>
<author>
<name>Eckert, James Edmund.</name>
</author>
<id>https://hdl.handle.net/1721.1/151020</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">An economic study of the radium industry
Eckert, James Edmund.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1959; Includes bibliographical references (leaf [ix]).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmed cruise control</title>
<link href="https://hdl.handle.net/1721.1/151016" rel="alternate"/>
<author>
<name>Gatley, Vernon R.
            (Vernon Rowe)</name>
</author>
<id>https://hdl.handle.net/1721.1/151016</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>1954-01-01T00:00:00Z</published>
<summary type="text">Programmed cruise control
Gatley, Vernon R.
            (Vernon Rowe)
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1954; Includes bibliographical references (leaf 62).
</summary>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Labor legislation in India</title>
<link href="https://hdl.handle.net/1721.1/151015" rel="alternate"/>
<author>
<name>Ramakrishnan, P. R.</name>
</author>
<id>https://hdl.handle.net/1721.1/151015</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1953-01-01T00:00:00Z</published>
<summary type="text">Labor legislation in India
Ramakrishnan, P. R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1953; Bibliography: leaves [180-182].
</summary>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of an active suspension system for an automobile,</title>
<link href="https://hdl.handle.net/1721.1/151013" rel="alternate"/>
<author>
<name>Fields, Gene Michael.</name>
</author>
<id>https://hdl.handle.net/1721.1/151013</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Design of an active suspension system for an automobile,
Fields, Gene Michael.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the economies to be obtained by the operation of gas and oil-electric rail motor cars on the Boston and Maine Railroad</title>
<link href="https://hdl.handle.net/1721.1/150970" rel="alternate"/>
<author>
<name>Kinzer, Howard A.</name>
</author>
<id>https://hdl.handle.net/1721.1/150970</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1933-01-01T00:00:00Z</published>
<summary type="text">A study of the economies to be obtained by the operation of gas and oil-electric rail motor cars on the Boston and Maine Railroad
Kinzer, Howard A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1933; Appendix contains numerous pamphlets.
</summary>
<dc:date>1933-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reaction between copper reverberatory slag and refractories</title>
<link href="https://hdl.handle.net/1721.1/150969" rel="alternate"/>
<author>
<name>Kocatopcu, Şahap Şefkati.</name>
</author>
<id>https://hdl.handle.net/1721.1/150969</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">Reaction between copper reverberatory slag and refractories
Kocatopcu, Şahap Şefkati.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1944; Includes bibliographical references (leaf 59).
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Viscosity of some polar vapor mixtures</title>
<link href="https://hdl.handle.net/1721.1/150965" rel="alternate"/>
<author>
<name>Sinanoğlu, Oktay.</name>
</author>
<id>https://hdl.handle.net/1721.1/150965</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">Viscosity of some polar vapor mixtures
Sinanoğlu, Oktay.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1957; Includes bibliographical references (leaves 41-42).
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation of a fixed-bed methanation reactor.</title>
<link href="https://hdl.handle.net/1721.1/150901" rel="alternate"/>
<author>
<name>Kinoshita, Goro.</name>
</author>
<id>https://hdl.handle.net/1721.1/150901</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Simulation of a fixed-bed methanation reactor.
Kinoshita, Goro.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Turbulent interchange in triangular array rod bundles.</title>
<link href="https://hdl.handle.net/1721.1/150899" rel="alternate"/>
<author>
<name>Kirchner, Walter Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/150899</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Turbulent interchange in triangular array rod bundles.
Kirchner, Walter Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interfacing a dynamic measuring system with a computer.</title>
<link href="https://hdl.handle.net/1721.1/150895" rel="alternate"/>
<author>
<name>Fitzgibbons, Michael Radcliffe.</name>
</author>
<id>https://hdl.handle.net/1721.1/150895</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Interfacing a dynamic measuring system with a computer.
Fitzgibbons, Michael Radcliffe.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrodeposition of gallium phosphide.</title>
<link href="https://hdl.handle.net/1721.1/150893" rel="alternate"/>
<author>
<name>Flanders, Roger Donald.</name>
</author>
<id>https://hdl.handle.net/1721.1/150893</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Electrodeposition of gallium phosphide.
Flanders, Roger Donald.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis and control of a mass-transit problem.</title>
<link href="https://hdl.handle.net/1721.1/150892" rel="alternate"/>
<author>
<name>Finder, Kenneth Alan.</name>
</author>
<id>https://hdl.handle.net/1721.1/150892</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Analysis and control of a mass-transit problem.
Finder, Kenneth Alan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Is prepaid dental care the solution?</title>
<link href="https://hdl.handle.net/1721.1/150891" rel="alternate"/>
<author>
<name>Findley, James Judson.</name>
</author>
<id>https://hdl.handle.net/1721.1/150891</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Is prepaid dental care the solution?
Findley, James Judson.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1973; Bibliography: leaves 63-67.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of store-door delivery &amp; collection and its application to Boston and Maine Railroad territory</title>
<link href="https://hdl.handle.net/1721.1/150886" rel="alternate"/>
<author>
<name>Kamy, Harry Donald.</name>
</author>
<id>https://hdl.handle.net/1721.1/150886</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A study of store-door delivery &amp; collection and its application to Boston and Maine Railroad territory
Kamy, Harry Donald.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1932; Includes bibliographical references.
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proton-recoil neutron spectrometry in a fast reactor blanket.</title>
<link href="https://hdl.handle.net/1721.1/150881" rel="alternate"/>
<author>
<name>Kennerley, Robert John.</name>
</author>
<id>https://hdl.handle.net/1721.1/150881</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Proton-recoil neutron spectrometry in a fast reactor blanket.
Kennerley, Robert John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Project managers in the construction industry.</title>
<link href="https://hdl.handle.net/1721.1/150876" rel="alternate"/>
<author>
<name>Kispert, Robert Gordon.</name>
</author>
<id>https://hdl.handle.net/1721.1/150876</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Project managers in the construction industry.
Kispert, Robert Gordon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Bibliography: leaves 203-204.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A model for investigating the reliability of train-to-train connections in railroad freight yards.</title>
<link href="https://hdl.handle.net/1721.1/150875" rel="alternate"/>
<author>
<name>Kerr, Peter Alexander.</name>
</author>
<id>https://hdl.handle.net/1721.1/150875</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">A model for investigating the reliability of train-to-train connections in railroad freight yards.
Kerr, Peter Alexander.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Bibliography: leaves 120-121.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental investigation of frequency multiplication by means of iron core coupled circuits</title>
<link href="https://hdl.handle.net/1721.1/150874" rel="alternate"/>
<author>
<name>Rumsey, Paul Truman.</name>
</author>
<id>https://hdl.handle.net/1721.1/150874</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1928-01-01T00:00:00Z</published>
<summary type="text">Experimental investigation of frequency multiplication by means of iron core coupled circuits
Rumsey, Paul Truman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1928
</summary>
<dc:date>1928-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Rotation-Equivariant Deep Learning to Cloud and Road Segmentation in Satellite and Aerial Imagery</title>
<link href="https://hdl.handle.net/1721.1/150767" rel="alternate"/>
<author>
<name>Meredith, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/150767</id>
<updated>2023-05-18T03:16:31Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Applying Rotation-Equivariant Deep Learning to Cloud and Road Segmentation in Satellite and Aerial Imagery
Meredith, Alex
Satellite and aerial images have many applications – images of clouds and roads are of particular relevance to this thesis. Satellite images of clouds are frequently used for climate monitoring, weather tracking, satellite instrument calibration, and on-orbit autonomy; satellite and aerial images of roads are frequently used for mapping flooded areas, predicting illegal logging, traffic monitoring, and route planning.&#13;
&#13;
Cloud detection in satellite imagery is key for autonomously taking and downlinking cloud-free images of a target region as well as studying cloud-climate interactions and calibrating microwave radiometers. Many existing state-of-the-art cloud detection algorithms require multispectral inputs and sometimes confuse clouds with snow, ice, or cold water. We propose deep learning models trained on visible-spectrum, long-wave infrared (LWIR), and short-wave infrared (SWIR) imagery for on-orbit cloud detection. Rotation-equivariant deep learning models are equivariant to rotations, meaning that when an input to the model is rotated, the model output will be equivalently rotated. We compare rotation-equivariant deep learning models to non-equivariant models, and also present comparisons to rule-based methods for cloud segmentation. Additionally, we compare models trained on visible-spectrum (VIS), LWIR, and SWIR imagery to models trained on only VIS and LWIR, on only VIS and SWIR, and on only VIS imagery and make recommendations for imaging bands to prioritize during instrument selection for resource-constrained missions. We find that augmenting VIS imagery with SWIR imagery is most useful for missions where false positives (non-cloud pixels misidentified as cloud) are extremely costly, and we find that augmenting with LWIR imagery is most useful for missions where false negatives (cloud pixels misidentified as non-cloud) are extremely costly.&#13;
&#13;
A secondary focus of this thesis is evaluating rotation-equivariant deep learning models on the road detection domain. Road detection in satellite and aerial imagery can map safe evacuation routes from areas affected by natural disaster or predict deforestation by identifying roads constructed for the purpose of illegal logging. We present the results of rotation-equivariant and non-equivariant models on road segmentation of aerial imagery, and make recommendations for integrating rotation-equivariance into current state-of-the-art road detection algorithms.&#13;
&#13;
We find that our C₈-equivariant dense U-Net, a rotation-equivariant deep learning model, outperforms our other deep learning models on both cloud and road segmentation, and also outperforms rule-based algorithms on cloud segmentation. The C₈-equivariant dense U-Net achieves an F₁ score of 0.9806 on the cloud segmentation dataset when evaluated with a 2 pixel buffer at the cloud boundaries, and achieves an F₁ score of 0.9342 on the road segmentation dataset when evaluated with a 4 pixel buffer at the road boundaries.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fixation of zirconium fuel reprocessing wastes in vitrified igneous rocks and industrial slags</title>
<link href="https://hdl.handle.net/1721.1/150751" rel="alternate"/>
<author>
<name>Ketcham, David Leroy.</name>
</author>
<id>https://hdl.handle.net/1721.1/150751</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Fixation of zirconium fuel reprocessing wastes in vitrified igneous rocks and industrial slags
Ketcham, David Leroy.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1960; Includes bibliographical references (leaves 75-79).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An accurate audio oscillator utilizing digital control</title>
<link href="https://hdl.handle.net/1721.1/150750" rel="alternate"/>
<author>
<name>King, Brian Dennis.</name>
</author>
<id>https://hdl.handle.net/1721.1/150750</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">An accurate audio oscillator utilizing digital control
King, Brian Dennis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaf 85).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Ka-band free-electron laser amplifier</title>
<link href="https://hdl.handle.net/1721.1/150749" rel="alternate"/>
<author>
<name>Legorburu, Peter Papavaritis.</name>
</author>
<id>https://hdl.handle.net/1721.1/150749</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">A Ka-band free-electron laser amplifier
Legorburu, Peter Papavaritis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1991; Includes bibliographical references (leaves 75-76).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The mechanism of crack initiation in silicon-iron under elevated temperature fatigue conditions.</title>
<link href="https://hdl.handle.net/1721.1/150742" rel="alternate"/>
<author>
<name>Kim, Jae-Hak.</name>
</author>
<id>https://hdl.handle.net/1721.1/150742</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The mechanism of crack initiation in silicon-iron under elevated temperature fatigue conditions.
Kim, Jae-Hak.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Eight leaves of illustrations not numbered.; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Helicopter noise experiments in an urban environment.</title>
<link href="https://hdl.handle.net/1721.1/150741" rel="alternate"/>
<author>
<name>Kinney, Wayne A.</name>
</author>
<id>https://hdl.handle.net/1721.1/150741</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Helicopter noise experiments in an urban environment.
Kinney, Wayne A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Processing of cardiograms for pattern recognition.</title>
<link href="https://hdl.handle.net/1721.1/150738" rel="alternate"/>
<author>
<name>Akant, Adnan.</name>
</author>
<id>https://hdl.handle.net/1721.1/150738</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Processing of cardiograms for pattern recognition.
Akant, Adnan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1974; Includes bibliographical references.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of the development and administration of Section 314(d) of the Public Health Service Act, as amended.</title>
<link href="https://hdl.handle.net/1721.1/150737" rel="alternate"/>
<author>
<name>King, Richard Maurice.</name>
</author>
<id>https://hdl.handle.net/1721.1/150737</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">An analysis of the development and administration of Section 314(d) of the Public Health Service Act, as amended.
King, Richard Maurice.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Financial hedging in international markets for commodity producers.</title>
<link href="https://hdl.handle.net/1721.1/150736" rel="alternate"/>
<author>
<name>Akant, Adnan.</name>
</author>
<id>https://hdl.handle.net/1721.1/150736</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Financial hedging in international markets for commodity producers.
Akant, Adnan.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Waste Reduction in Amazon Robotics Sortable High Velocity Fulfilment Using Six-Sigma and Product Design Methods</title>
<link href="https://hdl.handle.net/1721.1/150721" rel="alternate"/>
<author>
<name>Peleg, Tamir</name>
</author>
<id>https://hdl.handle.net/1721.1/150721</id>
<updated>2023-05-16T03:32:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Waste Reduction in Amazon Robotics Sortable High Velocity Fulfilment Using Six-Sigma and Product Design Methods
Peleg, Tamir
Amazon Robotics Sortable High Velocity Fulfilment, or ARS-HVF, is a program designed to mitigate the waste generation associated with fulfilling high-velocity (high rate of sales over time) items. Large variability in item velocities in the ARS network causes the existing process to generate waste when dealing with high-velocity items. Three forms of this waste are packaging material waste, delivery expenses, and non-value-adding labor. Looking at the four leading Amazon Devices in 2020, the waste associated with these products amounts to over 100k [m^3] of packaging, comparable to 40 Olympic swimming pools, millions of dollars in excess delivery expenses as well as carbon emissions from unnecessary van rides due to volume added by over-boxing, and over 100k hours of non-value-adding labor. Together, these waste forms represent a potential savings of $20M for the year 2022 over these four products alone. Six-Sigma and product design and development methods were used in this work to methodically evaluate and solve the problem of waste generation in the current Amazon FC process. Internal Subject Matter Experts (SMEs) and external vendors were engaged to inform the design process, provide feedback on designs, manufacture samples, and provide industrial-grade solutions. Waste generation reduction is an opportunity to design a new fulfillment path. The designed path is built from three components: package, process, and machine. These are tailored together to remove the use of the over-box, eliminate excess delivery expenses, and reduce non-value-adding labor by more than 95%. The new e-commerce primary package design is compatible with a physical store presence, fully recyclable, and can be shipped as is, eliminating the need for an over-box and downsizing shipped package volume to the actual size of the item’s unit.
It also serves as an enabler for the new process, which bypasses the current ARS Fulfilment Center (FC) process's three waste-generating stations: Stow, Pick, and Pack. The process design includes a new multi-unit package, known as a master-shipper, which, together with the newly designed fulfilment dispenser machine, realizes cycle-time reduction by pooling multiple units into one task instead of performing multiple tasks processing one unit. The three components together result in an average cost-per-unit reduction of $0.8-1.5. With potential expansion to additional high-velocity items, the estimated savings potential amounts to $75-110M for 2022, with expected growth of 30% YoY. These results illustrate that sustainability efforts can greatly benefit all involved stakeholders, including the company, its customers, and the communities in which they both live.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Machine Learning Models for Early Detection of Pregnancy Risk</title>
<link href="https://hdl.handle.net/1721.1/150711" rel="alternate"/>
<author>
<name>Utsumi, Yuria</name>
</author>
<id>https://hdl.handle.net/1721.1/150711</id>
<updated>2023-05-16T03:51:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Explaining Machine Learning Models for Early Detection of Pregnancy Risk
Utsumi, Yuria
Care management programs for high-risk pregnancies aim to detect pregnant women with pregnancy risk factors early so they can receive proper care or preventative treatment. To find these women, pregnant members are first identified, then checked for high-risk diagnosis codes or fed into a risk prediction algorithm. Members predicted to be most at risk are outreached and provided guidance on how to manage or monitor symptoms.&#13;
&#13;
In this thesis, we work with the high-risk pregnancy care management team at Independence Blue Cross to (1) build a pregnancy identification algorithm to detect pregnant women earlier in their pregnancy, (2) model impactable pregnancy risk factors, and (3) explain these models’ predictions. We introduce a new framework for thinking about explainability methods in healthcare – one that accounts for the prior understanding a clinician may have about the patient and works with high-dimensional, redundant data – and we conduct a user study to examine the deployability and impact of these algorithms.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>2.5D: Novel Material Dimensions with 3D Printing on Fabric</title>
<link href="https://hdl.handle.net/1721.1/150707" rel="alternate"/>
<author>
<name>Lee, En-Han Thaddeus</name>
</author>
<id>https://hdl.handle.net/1721.1/150707</id>
<updated>2023-05-16T03:21:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">2.5D: Novel Material Dimensions with 3D Printing on Fabric
Lee, En-Han Thaddeus
Much of the design for architecture and objects we encounter today is built around the paradigms for manufacturing either two-dimensional, flat goods or three-dimensional forms. We achieve an astounding proficiency in producing paper, fabric, or sheet goods, whilst often encountering the familiar problem of logistics and assembly in creating anything in three dimensions. What if we were to combine the proficiency we have with the former to produce volume, form, and structure?&#13;
&#13;
2.5D is a proposal for a hybrid approach that applies 3D printing onto textiles and film material. The resultant method hopes to meld the design vocabularies of 2D and 3D design whilst presenting new possibilities with existing materials and technologies. Building on preceding research in 4.154 Interactive Intelligent Skins, this thesis takes the body as the most immediate context for architecture to present three objects as case studies.&#13;
&#13;
The first is a reformable bag that suggests how fabric behavior might be modified with 3D printing. The second, a therapeutic garment that explores new materialities for variable flexibility and structure. And last, a shoe concept that addresses possibilities of mass customization and distributed manufacturing.&#13;
&#13;
Responding to the challenges faced in the design of these, this thesis also puts forth a prototype design for a wide-format 3D printer capable of working with novel flexible filaments in the context of roll-to-roll textile and film manufacturing. When coupled with the techniques presented, this approach offers the tantalizing possibility to manufacture objects with complex structures and material behaviors, but in a manner that achieves accessibility with high volume output at relatively low costs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>WEB 3.0 Disruption and Adoption in Real Estate</title>
<link href="https://hdl.handle.net/1721.1/150690" rel="alternate"/>
<author>
<name>Zhang, Sherina</name>
</author>
<id>https://hdl.handle.net/1721.1/150690</id>
<updated>2024-08-06T20:26:33Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">WEB 3.0 Disruption and Adoption in Real Estate
Zhang, Sherina
This research evaluates the distribution and application of Web 3.0 technologies in the real estate industry. Through investigations in technical advancement, adoption, innovation, and financial support, this research aims to provide a roadmap for real estate professionals to incorporate Web 3.0 technologies in their daily operations or invest in emerging startups.&#13;
&#13;
The real estate sector has traditionally been slow to adopt innovative technologies, but with rising interest in PropTech and growing demand for technological integration, the industry is ready to embrace disruption from the latest innovations. Web 3.0, as the next iteration of the internet, is poised to significantly reshape human interactions and the environment. Despite its potential, the application and use cases of Web 3.0 in the real estate sector are not as well documented and analyzed as in other industries.&#13;
&#13;
This research investigates Web 3.0 disruption in real estate by first clarifying the conceptual understanding of Web 3.0 and the fundamental building blocks of the Semantic Web. The study examines adoption and technical advancement of key Web 3.0 technologies by quantifying public awareness, patents, R&amp;D, ventures, and financial support. Technologies with better technical and adoption metrics are further studied to elaborate on their use cases and applications in the real estate sector. A startup tracker is used to list venture backed Web 3.0 enabled companies that are disrupting the real estate industry. The summary of the studies provides a comprehensive view of emerging areas of disruption in real estate driven by Web 3.0 technologies.&#13;
&#13;
In conclusion, the research provides guidance and techniques for real estate developers and asset managers to integrate the technologies studied into their operations. For real estate investors, the research highlights promising sectors and startups within the industry as potential investment opportunities.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superconducting Electronics for Breakthrough Starshot Communications</title>
<link href="https://hdl.handle.net/1721.1/150688" rel="alternate"/>
<author>
<name>Sorenson, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/150688</id>
<updated>2023-05-16T03:24:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Superconducting Electronics for Breakthrough Starshot Communications
Sorenson, Andrew
Gram-scale sailcraft for the Breakthrough Starshot project are currently being designed to travel to Proxima Centauri, 4.24 light years away, and transmit back images and data [1]. In order to meet the size, weight, and power constraints of the mission, superconducting electronics should be considered for onboard processing and interfacing with the communications system. Previous research has shown that superconducting nanowire electronics consume over 100× less switching energy than 7 nm CMOS electronics [2]. In order to pursue application of superconducting electronics on Starshot probes, fundamental questions must be answered regarding the suitability of superconducting materials in the interstellar environment. To investigate this suitability, we performed numerical analysis of the effects of both radiation and temperature on superconducting electronics. We also designed, simulated, and tested superconducting nanowire devices tailored to Starshot operations. We found that with an edge-on sail transit configuration, the equilibrium temperature of the sail may be below the critical temperature of common superconductors and that the anticipated error rate from radiation is 1.23 × 10⁻¹⁸ µm⁻² ns⁻¹. We also present the design and simulation of a circuit modification that may drastically reduce the error rate in exposed superconducting nanowires. By finding that there are no immediate show-stoppers for using superconducting electronics onboard, we hope to inspire future investigations into the use of superconducting nanowire electronics for Starshot and other deep space missions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Workflow Evaluation of Key Work Packages in Drug Product Technologies</title>
<link href="https://hdl.handle.net/1721.1/150591" rel="alternate"/>
<author>
<name>Azolaty, Elnaz</name>
</author>
<id>https://hdl.handle.net/1721.1/150591</id>
<updated>2023-05-04T03:43:06Z</updated>
<published>2020-09-01T00:00:00Z</published>
<summary type="text">Workflow Evaluation of Key Work Packages in Drug Product Technologies
Azolaty, Elnaz
The biopharmaceutical industry is growing more competitive both in the variation of product offerings and in speed to patients. Amgen’s increasing pipeline of promising drugs coupled with their continued focus on delivering drugs to patients faster than before creates an imperative for identifying opportunities to increase efficiency and speed with reduced process development timelines.&#13;
&#13;
The research presents analysis at different altitudes of the Drug Product Technologies (DPT) processes: (1) current state analysis of commercialization milestones across 10 products; (2) current state analysis and discussion of end-to-end study workflows; (3) a case study evaluating a proposal to implement automation to increase efficiency and consistency of study execution. This research incorporates data across all three altitudes to be used in conjunction with methods of process flow mapping, value and non-value-added activity identification from Lean practices and critical path analysis to inform improved understanding of current state processes to help inform future-state investment decisions as discussed with the automation case study for moisture analysis.&#13;
&#13;
Using these methods, the project highlights the report generation process for studies as an opportunity area for enhanced efficiency. On a broader level, we recommend applying process flow mapping, critical path analysis, and the investment evaluation framework at all three levels more frequently, and using the identified data sources to build out the infrastructure needed to analyze cycle times and critical paths dynamically as the processes within work packages evolve. By evaluating work packages at these three levels, we build the case for extending these methods to other studies throughout DPT and using these tools to inform systematic decision-making around proposed investments, potentially increasing efficiency for projects under consideration.
</summary>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of gamma radiation on certain pure vitamins in dilute solution</title>
<link href="https://hdl.handle.net/1721.1/150590" rel="alternate"/>
<author>
<name>Kim, Kyoung-Sook.</name>
</author>
<id>https://hdl.handle.net/1721.1/150590</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The effect of gamma radiation on certain pure vitamins in dilute solution
Kim, Kyoung-Sook.
Thesis: M.S., Massachusetts Institute of Technology, Department of Food Technology, 1960; Includes bibliographical references (leaves 46-51).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A heuristic approach to alternate routing in a job shop.</title>
<link href="https://hdl.handle.net/1721.1/150583" rel="alternate"/>
<author>
<name>Russo, Francis John.</name>
</author>
<id>https://hdl.handle.net/1721.1/150583</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">A heuristic approach to alternate routing in a job shop.
Russo, Francis John.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investment promotion policies within the development context : the case of Greece</title>
<link href="https://hdl.handle.net/1721.1/150574" rel="alternate"/>
<author>
<name>Melenikiotou, Georgia S.</name>
</author>
<id>https://hdl.handle.net/1721.1/150574</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Investment promotion policies within the development context : the case of Greece
Melenikiotou, Georgia S.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1984; Bibliography: leaves 121-123.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Market behavior for bonds of bankrupt railroads.</title>
<link href="https://hdl.handle.net/1721.1/150573" rel="alternate"/>
<author>
<name>Sparks, Bradley Earl.</name>
</author>
<id>https://hdl.handle.net/1721.1/150573</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Market behavior for bonds of bankrupt railroads.
Sparks, Bradley Earl.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaf 43.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magnetothermal Modulation of Nerve Growth</title>
<link href="https://hdl.handle.net/1721.1/150562" rel="alternate"/>
<author>
<name>Field, Hannah M.</name>
</author>
<id>https://hdl.handle.net/1721.1/150562</id>
<updated>2023-04-26T03:52:18Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Magnetothermal Modulation of Nerve Growth
Field, Hannah M.
Magnetic nanoparticles (MNPs) provide several mechanisms for wireless neuromodulation. MNPs under applied AC magnetic fields (AMFs) exhibit hysteresis loss, which can activate heat-sensitive ion channels such as TRPV1. This magnetothermal modulation requires AMFs with peak field strengths of up to 40 kA/m and frequencies of up to 580 kHz. While air-gap magnetic cores can achieve the necessary field parameters, their small size limits them to in vitro experiments. A 10 cm coreless solenoid design generates the desired field parameters and is suitable for in vivo experiments but requires several kilowatts of power. Here, we show the construction of a resonant tank inverter capable of delivering 6000 watts of power at 600 V and 10 A to the tank circuit and generating the requisite AMF field strength and frequency inside the coil.&#13;
&#13;
As a first experiment, we use the apparatus to demonstrate wireless, magnetothermal modulation of dorsal root ganglia (DRG) explants, sensory neuronal structures that are a critical target for nerve therapy. Calcium influx into neurons plays a key role in many processes necessary for axonal regeneration. Using magnetothermal modulation, we stimulate calcium uptake into DRG cells via TRPV1 ion channels, which are endogenously expressed and heat sensitive. By adjusting the pulse pattern of magnetic stimulation, we find the optimal conditions for inducing neurite outgrowth in DRG cultures.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the neurological effects of SARS-CoV-2 infection on the brain</title>
<link href="https://hdl.handle.net/1721.1/150559" rel="alternate"/>
<author>
<name>Kabani, Malek</name>
</author>
<id>https://hdl.handle.net/1721.1/150559</id>
<updated>2023-04-26T03:30:41Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Investigating the neurological effects of SARS-CoV-2 infection on the brain
Kabani, Malek
SARS-CoV-2 and its associated disease, COVID-19, remains a major public safety concern three years into the global pandemic. While effective vaccines are readily available to protect against infection, major questions surround the long-term effects of SARS-CoV-2. Several patients report suffering from long-term neurological symptoms of COVID-19, including memory loss and attention problems. Studying the molecular changes that occur in the brain upon infection, and the other cellular pathways that may be impacted, will help us further understand the mechanism of neuroinvasion and which cell types are most vulnerable, in the hopes of developing therapies that target these markers. To this end, we used post-mortem brain samples to first see if we could detect the presence of SARS-CoV-2 in the brain. We then used RT-qPCR to determine whether the blood-brain barrier (BBB) was affected upon infection. Furthermore, we used a BBB in vitro model to determine which cells are infected by SARS-CoV-2. We also used a pseudovirus to assess whether it recapitulates the same infection pattern as the live virus. Finally, we were interested in which cellular pathways this infection could also perturb. We hypothesized that SARS-CoV-2 could alter the Aβ uptake of pericytes and tested this hypothesis in our BBB model system using both the live virus and recombinant spike protein.&#13;
&#13;
These results indicate that SARS-CoV-2 alters homeostasis in the brain and open the door to future research on other cellular pathways altered by COVID-19 brain infection.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An evaluation of beryllium-oxide as a moderator in a nuclear rocket application</title>
<link href="https://hdl.handle.net/1721.1/150545" rel="alternate"/>
<author>
<name>Badgley, Robert H.</name>
</author>
<id>https://hdl.handle.net/1721.1/150545</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">An evaluation of beryllium-oxide as a moderator in a nuclear rocket application
Badgley, Robert H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1961; Includes bibliographical references (leaf 45).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Color display by red and white and its potential applications</title>
<link href="https://hdl.handle.net/1721.1/150541" rel="alternate"/>
<author>
<name>Asano, Shintaro.</name>
</author>
<id>https://hdl.handle.net/1721.1/150541</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">Color display by red and white and its potential applications
Asano, Shintaro.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1961; Includes bibliographical references (leaves 47-48).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of the development bank in the industrialization of Ghana</title>
<link href="https://hdl.handle.net/1721.1/150534" rel="alternate"/>
<author>
<name>King, Kenneth.</name>
</author>
<id>https://hdl.handle.net/1721.1/150534</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The role of the development bank in the industrialization of Ghana
King, Kenneth.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1960; Includes bibliographical references (leaves 113-115).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intact stability study programmed for a digital computer</title>
<link href="https://hdl.handle.net/1721.1/150531" rel="alternate"/>
<author>
<name>Adams, C. W.
            (Charles William)</name>
</author>
<id>https://hdl.handle.net/1721.1/150531</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">Intact stability study programmed for a digital computer
Adams, C. W.
            (Charles William)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1949; Bibliography: leaf 113.
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategies for writers of puts and calls.</title>
<link href="https://hdl.handle.net/1721.1/150526" rel="alternate"/>
<author>
<name>Rosen, Martin Nathan.</name>
</author>
<id>https://hdl.handle.net/1721.1/150526</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Strategies for writers of puts and calls.
Rosen, Martin Nathan.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of an externally powered artificial elbow for electromyographic control.</title>
<link href="https://hdl.handle.net/1721.1/150524" rel="alternate"/>
<author>
<name>Rothchild, Ronald Dennis.</name>
</author>
<id>https://hdl.handle.net/1721.1/150524</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Design of an externally powered artificial elbow for electromyographic control.
Rothchild, Ronald Dennis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1965; Bibliography: leaf 45.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System design - optical character recognition with weighted area masks.</title>
<link href="https://hdl.handle.net/1721.1/150523" rel="alternate"/>
<author>
<name>Rueckwald, Ronald Frederick.</name>
</author>
<id>https://hdl.handle.net/1721.1/150523</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">System design - optical character recognition with weighted area masks.
Rueckwald, Ronald Frederick.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic tests of model steel structures</title>
<link href="https://hdl.handle.net/1721.1/150520" rel="alternate"/>
<author>
<name>Rowe, Pierce Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/150520</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Dynamic tests of model steel structures
Rowe, Pierce Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1965; Bibliography: leaf 126.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revolution and counterrevolution : U.S.-Nicaraguan relations subsequent to the 1979 Nicaraguan revolution</title>
<link href="https://hdl.handle.net/1721.1/150516" rel="alternate"/>
<author>
<name>Cushman, Joanne Margaret.</name>
</author>
<id>https://hdl.handle.net/1721.1/150516</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Revolution and counterrevolution : U.S.-Nicaraguan relations subsequent to the 1979 Nicaraguan revolution
Cushman, Joanne Margaret.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1987; Bibliography: leaves 145-150.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uses of time intervals to model digital behavior</title>
<link href="https://hdl.handle.net/1721.1/150515" rel="alternate"/>
<author>
<name>Crystal, Michael Roger.</name>
</author>
<id>https://hdl.handle.net/1721.1/150515</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Uses of time intervals to model digital behavior
Crystal, Michael Roger.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Bibliography: leaves 68-69.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A young entrepreneur's guide to buying a small business</title>
<link href="https://hdl.handle.net/1721.1/150514" rel="alternate"/>
<author>
<name>Cronin, Thomas C.
            (Thomas Christopher)</name>
</author>
<id>https://hdl.handle.net/1721.1/150514</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">A young entrepreneur's guide to buying a small business
Cronin, Thomas C.
            (Thomas Christopher)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Title as it appears in M.I.T. Graduate List 1987: The young entrepreneur's guide to purchasing a small business.; Bibliography: leaves 107-116.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An optimization model for marketing and production planning</title>
<link href="https://hdl.handle.net/1721.1/150513" rel="alternate"/>
<author>
<name>Covert, Karen B.
            (Karen Bartlett)</name>
</author>
<id>https://hdl.handle.net/1721.1/150513</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">An optimization model for marketing and production planning
Covert, Karen B.
            (Karen Bartlett)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Title as it appears in the M.I.T. Graduate List, June 1987: A model combining marketing and logistics planning for a consumer goods product.; Bibliography: leaf 61.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Competitive advantage in subsidiaries of multinationals : the case of EG&amp;G Sealol, Canada</title>
<link href="https://hdl.handle.net/1721.1/150511" rel="alternate"/>
<author>
<name>Crowe, Susan E.
            (Susan Elizabeth)</name>
</author>
<id>https://hdl.handle.net/1721.1/150511</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Competitive advantage in subsidiaries of multinationals : the case of EG&amp;G Sealol, Canada
Crowe, Susan E.
            (Susan Elizabeth)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Title as it appears in M.I.T. Graduate List, June 1987: Parent-subsidiary relationships in multinational firms : the case of EG&amp;G Sealol, Canada.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global sourcing strategies for data processing services</title>
<link href="https://hdl.handle.net/1721.1/150510" rel="alternate"/>
<author>
<name>Crow, Karen L.
            (Karen Lisa)</name>
</author>
<id>https://hdl.handle.net/1721.1/150510</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Global sourcing strategies for data processing services
Crow, Karen L.
            (Karen Lisa)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Bibliography: leaves 69-70.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying spreadsheet concepts to new knowledge acquisition tools</title>
<link href="https://hdl.handle.net/1721.1/150509" rel="alternate"/>
<author>
<name>Cowan, Richard A.
            (Richard Alan)</name>
</author>
<id>https://hdl.handle.net/1721.1/150509</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Applying spreadsheet concepts to new knowledge acquisition tools
Cowan, Richard A.
            (Richard Alan)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Title as it appeared in M.I.T. Graduate List, June 1987: Applying spreadsheet techniques to new knowledge acquisition tools.; Bibliography: leaves [82]-[84].
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An experimental low Reynolds number comparison of a Wortmann FX67-K170 airfoil, a NACA 0012 airfoil, and a NACA 64-210 airfoil in simulated heavy rain</title>
<link href="https://hdl.handle.net/1721.1/150505" rel="alternate"/>
<author>
<name>Craig, Anthony Paul.</name>
</author>
<id>https://hdl.handle.net/1721.1/150505</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">An experimental low Reynolds number comparison of a Wortmann FX67-K170 airfoil, a NACA 0012 airfoil, and a NACA 64-210 airfoil in simulated heavy rain
Craig, Anthony Paul.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1987; Bibliography: leaves 102-103.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A statistical analysis of vehicular traffic</title>
<link href="https://hdl.handle.net/1721.1/150503" rel="alternate"/>
<author>
<name>Buller, Paul Kinzbruner.</name>
</author>
<id>https://hdl.handle.net/1721.1/150503</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">A statistical analysis of vehicular traffic
Buller, Paul Kinzbruner.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaves [60]-62).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drug repurposing : design, emulation and analysis of synthetic in-silico clinical trials using electronic health records and modern data analytics</title>
<link href="https://hdl.handle.net/1721.1/150468" rel="alternate"/>
<author>
<name>Xu, Shenbo,
            author.</name>
</author>
<id>https://hdl.handle.net/1721.1/150468</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Drug repurposing : design, emulation and analysis of synthetic in-silico clinical trials using electronic health records and modern data analytics
Xu, Shenbo,
            author.
Cancer has been a worldwide health issue, and its burden is expected to increase in the future. For most cancer disorders, success with current therapies has been limited. Even after huge investments in drug development, the need for therapeutic advances remains high. As effective anti-cancer drugs are in high demand, drug repurposing, using existing drugs for other diseases, has sparked growing interest. Drug repurposing presents a striking opportunity and potentially significant cost savings in the future treatment of cancer. The cost and complexity of conducting randomized clinical trials (RCT), the growth of electronic health record (EHR) sources, and thriving technological advances in modern data analytics create an unparalleled opportunity to develop a systematic approach for drug repurposing, using EHR data and sophisticated analytical methods. In this thesis, by leveraging enriched high-dimensional EHR data with diagnosis, drug prescription and lab test information, we aim to develop a systematic approach to emulate clinical trials regarding various drugs and diseases based on modern data analytics. Specifically, we take a data-driven approach to repurpose anti-diabetic drugs for several types of cancer incidence and mortality risks among the aging population, through the lenses of optimization, statistics, and machine learning. We start by introducing background knowledge for this study, including cancer, drug repurposing, anti-diabetic drugs and clinical trials, in Chapter 1. In Chapter 2, we describe the UK primary care database Clinical Practice Research Datalink (CPRD) along with its data structure for data preprocessing. Methods and mechanisms for missing data in clinical studies are also discussed, as they influence model robustness, statistical significance and directional results. 
In Chapter 3, we discuss alternative frameworks for survival analysis and causal inference, with emphasis on modelling how physicians prescribe drugs using propensity scores. Several Cox regression-based semi-parametric methods are also reviewed for survival analysis. Chapter 4 offers baseline characteristics for a comprehensive in-silico randomized controlled trial with a total of 640 model specifications. Chapter 5 presents numerical risk ratio results for 10 sub-studies and discussions of covariate balance evaluation and sensitivity analyses among 64 schemes within each sub-study. Through this work, we have made preliminary contributions to repurposing anti-diabetic drugs for cancer incidence and mortality risks. More importantly, we have offered a systematic approach with the potential to be used to repurpose drugs for other diseases of interest. This use of modern data analytics offers tremendous potential to meet healthcare challenges in this era of rapid technological change.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 219-227).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A modular and stretchable electronic system for on-body health monitoring applications</title>
<link href="https://hdl.handle.net/1721.1/150466" rel="alternate"/>
<author>
<name>Núñez López, Carlos,
            author.</name>
</author>
<id>https://hdl.handle.net/1721.1/150466</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">A modular and stretchable electronic system for on-body health monitoring applications
Núñez López, Carlos,
            author.
Most current wearable devices used for health monitoring (e.g. Fitbit) are composed of bulky rigid electronics that are not customizable and are too rigid for the skin. To overcome these limitations, a modular system based on thin and stretchable electronic modules was proposed. To link modules together, a novel four-pin sliding connector was designed, fabricated, integrated into a stretchable electronic circuit and characterized. The first part of the thesis focused on investigating different stretchable conductive materials that could be integrated into soft rubber substrates. Two materials were tested. First, a commercial silver ink was deposited onto polyurethane rubber (PUR), showing high conductivity but minimal stretchability (below 3% strain). Second, serpentine-shaped FPCs were designed and integrated into a silicone substrate, showing stretchability up to 160-170% strain with minimal changes in conductivity (below 30%). Additionally, a tensile cycling test showed stable electromechanical behavior up to 3,500 cycles at 30% maximum tensile strain. The second part of this work addressed the design, fabrication and testing of a novel system for modular stretchable electronics. A four-pin sliding connector to enable I2C communication was fabricated by assembling 3D-printed parts with brass components manufactured with an EDM cutter. The mechanism could be easily integrated within the previously made stretchable FPC serpentines and demonstrated excellent electromechanical performance. A sample module could be stretched until complete serpentine failure (120% strain) with resistance values across the four pins lower than 2 Ω. Furthermore, device evaluation on a treadmill showed changes in resistance lower than 4.27 Ω during the 15-minute experiments.
Thesis: S.M. in Media Technology, Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 69-74).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a low-cost high-performance flexural six degree-of-freedom positioning stage</title>
<link href="https://hdl.handle.net/1721.1/150465" rel="alternate"/>
<author>
<name>Owen, Elliot Douglas,
            author.</name>
</author>
<id>https://hdl.handle.net/1721.1/150465</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Design of a low-cost high-performance flexural six degree-of-freedom positioning stage
Owen, Elliot Douglas,
            author.
This thesis introduces a new family of high-performance, low-cost, six-degree-of-freedom positioning systems for high-dynamic, high-accuracy applications. This machine is designed to achieve sub-micron (~50 nm) accuracy with meso-scale (~1 mm) motions with simple machine elements and commercially available components. The motion platform presented is a parallel kinematic machine supported by six flexural legs that each ride on a ball screw assembly. The required degrees of freedom and range of motion for the flexural and rolling elements are analyzed. The stiffness and stress analysis of each element is presented along with a model for system-level stiffness, inertia, and natural frequency. A forward kinematic model describes how the motions of individual actuators can be directly related to the motions of the output stage, unlike a traditional hexapod, which requires inverse kinematics. This thesis presents initial sizing rules of thumb, fundamental scaling laws, simulation and experimental results, and practical considerations for manufacturing. Prototypes were built and tested to confirm the models presented and inform the design of future machines. The operating theory of this machine can be extended beyond the exact elements described for applications which require a larger range of motion, higher load capacity, or only quasi-static operation.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 161-162).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A multi-stage stochastic ordering method for wildfire preparedness and response</title>
<link href="https://hdl.handle.net/1721.1/150463" rel="alternate"/>
<author>
<name>McCleneghan, Megan Rose,
            author.</name>
</author>
<id>https://hdl.handle.net/1721.1/150463</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">A multi-stage stochastic ordering method for wildfire preparedness and response
McCleneghan, Megan Rose,
            author.
Following an unprecedented wildfire season in 2017, management of Sierra Gas &amp; Electric (SG&amp;E), an undisclosed utility company, issued a new standard directing that all new and replacement electric transmission line (T-Line) poles be made from steel wherever possible to mitigate liability. This new standard necessitates that some inventory be held locally in anticipation of emergencies and quality issues, as steel poles have significantly longer lead times than wood. The variability of poles makes ordering for an emergency inventory difficult, as steel poles come in more than 60 common strength/length combinations. This thesis focuses on assessing the risk wildfire poses to SG&amp;E's wood T-Line poles, and simulating an estimated yearly demand to determine order quantities that optimize pole replacement preparedness. In general, this work presents a two-stage process for determining necessary inventory levels for non-perishable products when the products needed change with the location of an event. A Markov Chain Monte Carlo simulation was developed using empirical sampling of prior fire data over 2,000 iterations to create simulated wildfires throughout the state of California. Combining this with geospatial analysis allowed for modeling of approximate distributions of SG&amp;E poles in the footprints of fires. Given the probabilistic demand for poles of different types, the two stages were defined as the period before an emergency has occurred and the period after, once the location of a fire is known. Optimization problems were set up based on both aggregate and location-specific data to inform the service levels used for ordering poles at each stage. This model offers realistic insight into how the varied nature of SG&amp;E's pole infrastructure across the state affects ordering decisions, as well as how the company can leverage its extensive geospatial data and forecasting abilities to make ordering decisions.
Thesis: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management, 2019; Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; Cataloged from PDF version of thesis. "The pagination in this thesis reflects how it was delivered to the Institute Archives and Special Collections. The Table of Contents does not accurately represent the page numbering"--Disclaimer page.; Includes bibliographical references (page 83).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and evaluation of an abrasive saw kickback machine</title>
<link href="https://hdl.handle.net/1721.1/150461" rel="alternate"/>
<author>
<name>Burcat, Steven,
            author.</name>
</author>
<id>https://hdl.handle.net/1721.1/150461</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Design and evaluation of an abrasive saw kickback machine
Burcat, Steven,
            author.
Injuries from power saw kickback are often fatal. However, only woodcutting saws have regulations and assessment methodologies for kickback. These regulations do not apply to metal and masonry saws, as the cutting mechanism and dominant kickback mode are different from those of woodcutting saws. In this work, the kickback of abrasive saws is investigated by combining theoretical and experimental tools. A theoretical model developed based on frictional engagement during a pinch-based kickback event is shown to predict the resultant kickback energy for various saws in good agreement with experimental measurements. These measurements were obtained using a specialized machine that was designed to generate pinch-based kickback and to measure the resultant kickback energy for both chainsaws and cutoff saws. While the model can predict the resultant kickback energy for a saw given known cutting conditions (i.e. cutting angle and pinch force), it does not predict the maximum possible kickback energy given any cutting angle of a saw, because it does not account for the change in speed of the cutting blade. Upon validation of the physics model, two commonly used representative saws, a Stihl TS420 and an ICS 695XL, were tested using this kickback machine to evaluate their comparative kickback risk. This work demonstrates that pinch-based kickback can be a major safety risk for abrasive saw operators, and it provides a machine and analytical framework for evaluating this risk.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020; Supervised by Alexander H. Slocum. Cataloged from the PDF version of thesis.; Includes bibliographical references (page 65).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human intent prediction for adaptive lighting based on a limited data scenario</title>
<link href="https://hdl.handle.net/1721.1/150458" rel="alternate"/>
<author>
<name>Sun, Jiamin
            (Researcher in architecture),
            author.</name>
</author>
<id>https://hdl.handle.net/1721.1/150458</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Human intent prediction for adaptive lighting based on a limited data scenario
Sun, Jiamin
            (Researcher in architecture),
            author.
An adaptive environment involves various ubiquitous computing and computer-aided technologies. It provides users with environmental supports such as lighting, air conditioning, and motion assistance. Among the different control schemes, lighting is an essential element because it strongly affects people's visual experience and work productivity. Generalized residential lighting systems are limited in their ability to create a personalized and responsive environment. Additionally, multiple and complex light sources make it difficult for users to obtain optimized lighting configurations. In general, an intelligent control system requires an extensive database of user habits in order to infer different user intents. In this work, we present a new personalized lighting control method that can learn explicit and implicit context through knowledge-based background and interactions. Instead of collecting a large amount of personal data, we explore the possibility of achieving a valid control method based on a limited data scenario. We consider language one of the most important inputs from users when they are interacting with a smart environment. Although there has been a large amount of work in automatic control based on speech recognition, the situation is different when using language to control lights according to different preferences. In our study, on the one hand, multiple dimensions of representation of lighting status are studied and organized in a way that can be derived from people's language input. We have generated a learning model and a small database based on the hierarchy of different lighting settings. On the other hand, besides the learning part, we explore how users can directly teach the lighting system. That is, through continuous interactions, the control system learns users' profiles through limited interaction data and gradually becomes consistent with specific personal preferences. 
In addition to lighting control methods, we also introduce the different components of typical lighting systems and networks. This work contributes to fundamental knowledge in the areas of ubiquitous computing and home automation.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: S.M. in Architecture Studies, Massachusetts Institute of Technology, Department of Architecture, 2019; Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 75-80).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Electrochemical Sensor Development Platform: Applications of System Identification to Biological Sensing in Evolving Fluids</title>
<link href="https://hdl.handle.net/1721.1/150433" rel="alternate"/>
<author>
<name>Aling, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/150433</id>
<updated>2023-04-07T03:42:37Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">An Electrochemical Sensor Development Platform: Applications of System Identification to Biological Sensing in Evolving Fluids
Aling, Michael
Biologically active fluids dominate living systems, from human blood to global agriculture. Characterizing the growth of microorganisms within these biofluids is of key importance, from scientific roles identifying pathogens and their biochemical behavior to industrial applications in food safety, healthcare, and pharmaceuticals. Compact, low-cost electrochemical sensors that monitor microorganism population and growth have attracted attention as replacements for days-long plate counting, potentially delivering results within hours or seconds by monitoring model parameters from diverse physical and chemical phenomena as they trace sigmoidal growth curves under microbial influence. Within the measurement framework of electrochemical impedance spectroscopy, this work proposes extracting additional information from biofluid systems by harnessing a nonlinear dynamic electrochemical model. A modular laboratory platform has been developed to perform parallel, temperature-controlled two-electrode electrochemical experiments from DC conditions to 10 MHz on compact hardware amenable to a low-cost sensor format for end users. An outline of a general-purpose, black-box technique to characterize fluids and predict their evolution over time is also presented, along with platform commissioning tests and preliminary data and analysis.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What is the Value of the Postal Service?</title>
<link href="https://hdl.handle.net/1721.1/150431" rel="alternate"/>
<author>
<name>Billingsley, Michael Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/150431</id>
<updated>2023-04-07T03:16:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">What is the Value of the Postal Service?
Billingsley, Michael Allen
The United States Postal Service (USPS or Postal Service) has a storied history that predates the Constitution and founding of America. Through war, economic boom and bust, the industrial revolution, and technological innovation, the USPS has evolved its operations and remains a beacon of consistency that binds Americans together. As the world adopted electronic technology as its primary means of communication, USPS experienced precipitous declines in mail volume. Large institutions have turned to electronic alternatives for consumer outreach, and the share of the population that has never entered a Post Office or mailed a letter is growing. Has the Postal Service finally met its match in electronic diversion? How long until mail volume becomes too low to justify the Postal Service’s universal service obligation? Will package delivery alone sustain a ubiquitous, government-mandated shipper? These questions are not new and will not go away for the foreseeable future. With 640,000 current employees, the USPS remains a vital part of the economy, with the promise of job stability and enriched benefits still intact. If losing the Postal Service is not a sustainable option for the economy, and most certainly not for either political party, what is the way forward? This thesis will provide a framework to analyze the value of the Postal Service through a variety of lenses. It will highlight the value that the Postal Service has provided for the United States historically and examine its more recent history through a financial microscope. A wide-angle overview of regulations and mandates that shape the USPS’s financial condition sets the stage. I then turn to a brief review of the core financial valuation concepts that underpin the analysis. An assessment of the value of balanced scorecards in a corporate setting provides the foundation for key metric recommendations. I end with a proposed scorecard intended for public consumption that captures the value of the Postal Service.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>StreetSmart: Reinventing Retail Through Smarter Small Business</title>
<link href="https://hdl.handle.net/1721.1/150426" rel="alternate"/>
<author>
<name>James, Rhett M.</name>
</author>
<id>https://hdl.handle.net/1721.1/150426</id>
<updated>2023-04-07T03:15:46Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">StreetSmart: Reinventing Retail Through Smarter Small Business
James, Rhett M.
Before the COVID-19 pandemic, the retail market in U.S. cities was widely acknowledged to be overbuilt by about one billion square feet¹. The wave of lockdowns and restrictions on retail operations further exacerbated the distress in the retail market, with bankrupt major retail chains closing stores and sending rents plummeting. Simultaneously, the rapid rise in eCommerce coupled with the COVID-19 pandemic exacerbated inequalities faced by minority-owned businesses, including in ownership rates, revenue, and access to financing. The pandemic also led to the closure of street retail, increased demand for eCommerce and logistics space, and resulted in retail supply chain operational challenges and disruptions that disproportionately affected minority businesses. It is also estimated that the US may need an additional one billion square feet of logistics real estate to meet growing eCommerce demand. Taken together, these trends can have important implications for U.S. cities. Brick-and-mortar retail, storefronts, and street commerce are critically important to the vibrancy of cities and a key social attraction of urban centers. This thesis explores a solution, Venture Design, that responds to changes and opportunities in the retail landscape and is designed to enhance the competitiveness of minority-owned small businesses. The Venture Design framework developed by MITdesignX was applied to create a proposed hybrid-retail approach as a venture opportunity. This thesis makes the case that minority small businesses are uniquely positioned to leverage such a solution to remain competitive in the changing retail landscape.&#13;
&#13;
¹Cody, Kevin. Retail Market May Be Overbuilt to the Tune of 1 Billion Square Feet, April 14, 2021.&#13;
https://www.costar.com/article/1488563638/retail-market-may-be-overbuilt-to-the-tune-of-1-billion-square-feet.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Optimal Error Resilience of Interactive Communication Over Binary Channels</title>
<link href="https://hdl.handle.net/1721.1/150312" rel="alternate"/>
<author>
<name>Zhang, Rachel Yun</name>
</author>
<id>https://hdl.handle.net/1721.1/150312</id>
<updated>2023-04-01T03:35:32Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Optimal Error Resilience of Interactive Communication Over Binary Channels
Zhang, Rachel Yun
In interactive coding, Alice and Bob wish to compute some function f of their individual private inputs x and y. They do this by engaging in a non-adaptive (fixed order, fixed length) interactive protocol to jointly compute f(x,y). The goal is to do this in an error-resilient way, such that even given some fraction of adversarial corruptions to the protocol, both parties still learn f(x,y). &#13;
&#13;
We study the optimal error resilience of such a protocol in the face of adversarial bit flips or erasures. While the optimal error resilience of such a protocol over a large alphabet is well understood, the situation over the binary alphabet has remained open. Over the binary alphabet, there has remained a substantial gap in error resilience between the best protocol construction and the best known upper bound, for both bit flips and erasures.&#13;
&#13;
In this thesis, we construct protocols meeting the known upper bounds for both types of errors, thereby closing this gap and resolving the question of optimal error resilience for binary channels. Our schemes for both types of errors have positive rate and are computationally efficient.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk-Aware Neural Navigation for Interactive Driving</title>
<link href="https://hdl.handle.net/1721.1/150311" rel="alternate"/>
<author>
<name>Jiwani, Suzanna</name>
</author>
<id>https://hdl.handle.net/1721.1/150311</id>
<updated>2023-04-01T03:06:42Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Risk-Aware Neural Navigation for Interactive Driving
Jiwani, Suzanna
Safety has been a key goal for autonomous driving since its inception, and we believe recognizing and responding to risk is a key component of safety. In this work, we aim to answer the question, "How can explainable risk representations be used to produce accurate and safe trajectories?". Previous work answers this question by using risk metrics to formulate an optimization problem. In contrast, our work is based on research showing the usefulness of grids as a representation for generating image-based risk maps through a trained neural network. We propose a novel method of determining risk from a bird’s eye view (BEV) of an autonomous vehicle’s surroundings. Our method consists of (1) a Risk Map Generator, which is trained using a modified loss to encourage recognizing risk associated with nearby agents, (2) value iteration using the risk map to learn a policy, and (3) a Trajectory Sampler, which samples from this policy to generate a trajectory. We uniquely evaluate our planner in an interactive manner, adjusting the surroundings at each time step, and find significant improvements in its overall ability to mimic human driving, with an 86.56% decrease in average displacement error and an 87.72% decrease in the average distance from the goal, while maintaining comparable safety statistics when compared with baseline methods. A self-ablation study also reveals the potential for fine-tuning the behavior of the planner given a designer’s needs.
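The value-iteration step of the pipeline can be illustrated with a minimal sketch on a toy risk grid (the 3x3 grid, function name, and costs below are hypothetical illustrations, not the planner's actual implementation):

```python
# Minimal value iteration over a toy risk map (hypothetical example).
# Each cell's cost is its risk; we compute the minimal cumulative risk
# of reaching the goal cell, from which a greedy policy can be read off.

def value_iteration(risk, goal, iters=50):
    rows, cols = len(risk), len(risk[0])
    V = [[float("inf")] * cols for _ in range(rows)]
    V[goal[0]][goal[1]] = 0.0
    for _ in range(iters):
        for r in range(rows):
            for c in range(cols):
                if (r, c) == goal:
                    continue
                best = float("inf")
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        best = min(best, risk[r][c] + V[nr][nc])
                V[r][c] = best
    return V

risk = [[0.1, 0.9, 0.1],
        [0.1, 0.9, 0.1],
        [0.1, 0.1, 0.1]]
V = value_iteration(risk, goal=(0, 2))
# The low-risk detour around the high-risk middle column is cheaper
# than cutting straight across it.
```

In the actual planner the cost map comes from the trained Risk Map Generator and the policy is then sampled by the Trajectory Sampler; this sketch only shows the dynamic-programming core.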
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combining Masked Autoencoding and Neural Fields for Multi-band Satellite Understanding</title>
<link href="https://hdl.handle.net/1721.1/150309" rel="alternate"/>
<author>
<name>Huang, Kuan Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/150309</id>
<updated>2023-04-01T03:27:19Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Combining Masked Autoencoding and Neural Fields for Multi-band Satellite Understanding
Huang, Kuan Wei
Multi-spectral satellite remote sensing is a primary way to monitor planet-scale events such as deforestation, land-cover change, fire, and flooding. Unfortunately, incomplete spatial coverage and sparse temporal sampling make it challenging to develop a unified understanding of the environment. We aim to solve these challenges by creating a curated multi-modal satellite remote sensing dataset and presenting a novel architecture that learns a unified representation across large-scale heterogeneous remote sensing data by solving an image completion task. We equip our model with temporal, spectral, and global positioning information in addition to local positional encoding. This allows our algorithm to learn a unified, high-resolution, and time-varying representation across the entire survey area. Unlike prior work, our architecture does not require data with uniform coverage, temporal resolution, or paired bands, and through prompting, it can act as a method for satellite infilling, temporal prediction, and cross-band translation. We train and evaluate our approach on a multi-modal remote sensing dataset and show that it outperforms baselines across satellite completion and cross-band translation tasks. In addition, we show that the neural feature field learned by our method is more effective than baselines for transfer learning to predict Amazon rainforest deforestation.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an End-to-End Pipeline for Custom Key-Value Extraction from Commercial Invoices</title>
<link href="https://hdl.handle.net/1721.1/150308" rel="alternate"/>
<author>
<name>Mohan, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/150308</id>
<updated>2023-04-01T03:51:53Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Development of an End-to-End Pipeline for Custom Key-Value Extraction from Commercial Invoices
Mohan, Abhishek
Inefficiencies in manual extraction of information from business documents have resulted in the development of automated processing solutions. Within the scope of business documents, commercial invoices present additional complexities due to the diversity of document layouts and the variation in quality of scanned documents. Commercially available solutions have been built to perform invoice extraction, yet they do not provide flexibility in accomplishing tasks unique to a particular dataset and its associated complications. Using sample documents provided by a leading electronic component distributor, we researched different approaches capable of extracting key-value information from a complex dataset of invoices. The thesis provides a detailed look into the development of a highly accurate, end-to-end data pipeline accomplishing this task. A multi-module approach integrating image processing, optical character recognition, custom algorithms, and machine learning-based matching was built and compartmentalized into continuous stages, allowing for effective and efficient key-value extraction of information from invoice documents. In conjunction with an intuitive web interface, the custom pipeline provides a solution with strong performance and the flexibility to be generalized for extraction of additional business documents in future efforts.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Not So Correct: Rebuilding with the Fragments of Memories</title>
<link href="https://hdl.handle.net/1721.1/150307" rel="alternate"/>
<author>
<name>Oh, Yoonjae</name>
</author>
<id>https://hdl.handle.net/1721.1/150307</id>
<updated>2023-04-01T03:16:09Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Not So Correct: Rebuilding with the Fragments of Memories
Oh, Yoonjae
This thesis was motivated by a question my grandfather (1928-2022) asked, "Can you rebuild my hometown?". How can I rebuild a place that remains in memory? Defining the success of rebuilding is highly subjective, as people value various elements in memory differently. Furthermore, verifying the result of rebuilding is difficult in the absence of the original author. Thus, the goal of this thesis is not to provide universal criteria to evaluate rebuilding. Rather, it is to explore different approaches and elements that can be used to rebuild places in memory by recreating my grandfather’s hometown based on conversations I had with him. &#13;
&#13;
I partially recreated his hometown’s landscape based on a story of the day he left his hometown; he left his house to avoid being drafted into the Korean War in 1952, but what was meant to be a 7-day hide-out ended up becoming a 75-year leave. I recreate his footprints, the spaces he stepped on and the landscape he saw, as if I were closely following him. I create key artifacts of that day by intertwining fragments of my grandfather’s memories, layering my interpretations of real data and anecdotal details from his recollection. The hypothetical world created through compilation of these artifacts, closely linked by a thread of imaginations and real memories, represents sentimental and physical qualities of the places that remain in his memory. &#13;
&#13;
Although somewhat enigmatic and obscure, the recreation of my grandfather’s cherished hometown was my way of bidding farewell to my grandfather. This project shows that nostalgia can be a powerful source of inspiration, even when memories are fragmented and fuzzy.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Statistical and Computational Methods to Dissect Ancestry-Biased Germline Effects in Lung Cancer</title>
<link href="https://hdl.handle.net/1721.1/150306" rel="alternate"/>
<author>
<name>Ismoldayeva, Assel</name>
</author>
<id>https://hdl.handle.net/1721.1/150306</id>
<updated>2023-04-01T03:47:09Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Statistical and Computational Methods to Dissect Ancestry-Biased Germline Effects in Lung Cancer
Ismoldayeva, Assel
Lung cancer is a complex disease influenced by a variety of genetic and environmental factors. The germline mutations associated with the disease vary greatly between the East Asian and the European populations. We explore these differences by analyzing genome-wide association study summary statistics from European and Japanese biobanks. Using stratified linkage disequilibrium regression in conjunction with gene expression-based and epigenetic annotations, we derive cell types and biological processes associated with lung cancer and smoking in both populations.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Applications for Time Series Data: Motor Anomaly Detection and Mean Arterial Blood Pressure Estimation</title>
<link href="https://hdl.handle.net/1721.1/150305" rel="alternate"/>
<author>
<name>Zheng, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/150305</id>
<updated>2023-04-01T03:47:00Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Machine Learning Applications for Time Series Data: Motor Anomaly Detection and Mean Arterial Blood Pressure Estimation
Zheng, Jessica
In recent years, Machine Learning (ML) on edge computing devices has gained attention from both the academic world and industry due to the enormous potential of various applications. Although advancements in hardware and algorithm optimization techniques have helped accelerate the pace of bringing ML onto edge devices, the resource constraints on such devices remain challenging for ML applications. This work analyzes the feasibility and efficacy of different ML algorithms for two such applications using time-series data: (1) TinyML for Anomalous Motor Operation Detection, and (2) Estimation of Mean Arterial Blood Pressure (MAP) from ultrasound measurements. In the first application, we explore different algorithms for detecting anomalous fan motor operation on a small microcontroller unit (MCU). Results show that a CNN model can maintain 99% accuracy for anomaly detection even with a small memory footprint of 2.9K parameters (under 6kB of memory). In the second application, we compare different algorithms to optimize the accuracy of MAP estimation from ultrasound data. We find that 1D-CNN and Transformer algorithms using the blood pressure shape and blood flow velocity waveforms can both achieve 8.8mmHg average standard deviation of the prediction error without anthropometric data, and the CNN model can achieve 7.9mmHg when anthropometric data is added as inputs, improving upon a baseline of 9.5mmHg using only anthropometric data. The code for these two projects will be made available at the following links: &#13;
(1) https://github.com/mit-han-lab/anomaly-detection &#13;
(2) https://github.com/mit-han-lab/ml-blood-pressure
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Staff minimization strategy for micro-reactors</title>
<link href="https://hdl.handle.net/1721.1/150300" rel="alternate"/>
<author>
<name>Naranjo De Candido, Isabel</name>
</author>
<id>https://hdl.handle.net/1721.1/150300</id>
<updated>2023-04-01T03:35:16Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Staff minimization strategy for micro-reactors
Naranjo De Candido, Isabel
How to achieve decarbonization of all economic sectors remains an open and difficult question. For example, remote communities and industrial or mining activities detached from the main electric grid heavily rely on fossil fuels, diesel fuel in particular, for heat and power production. A combination of renewables and energy storage is often unfeasible due to the climate conditions and maintenance challenges typical of those locations. Urban and industrial micro-grids with combined heat and power are also often unsuitable for renewable energy solutions due to their intermittency and large land requirements.&#13;
&#13;
Until now, fossil fuels have been the solution of choice. In a historical period in which carbon emissions were not a concern and fossil fuels were quite inexpensive and broadly available, the main advantage of nuclear power was limited to its independence from the need to transport and store large amounts of fuel onsite, with the organizational, security, and economic benefits deriving from it.&#13;
&#13;
However, the nuclear industry now has a historic opportunity to bring the concept of very small reactors with a flexible purpose from the ideation phase to commercialization, and potentially transform the energy sector for decades to come. Modern materials and instrumentation and controls can support the development of compact and reliable plug-and-play nuclear reactors with low and predictable maintenance needs and a very low probability of consequential accidents. Moreover, automated decision making enabling autonomous operation can help lower the need for human operators, thus reducing the cost of the energy products. These very small nuclear reactors are generally referred to as micro-reactors and have a thermal power output of less than 20 MW.&#13;
&#13;
To move from the prototype to the commercialization phase, micro-reactors need a strong business case. In fact, fossil fuels are still relatively inexpensive, and in the near term carbon credits will continue to be available to virtually compensate for the emissions. Thus, the energy price at which micro-reactors will compete is uncertain and depends on the application, the location served, fossil fuel costs, and carbon credit prices. One of the positive aspects of micro-reactors is that, compared to fossil fuel power generators, their fuel costs are a much smaller share of the total cost. Thus, once a reactor is built, the cost of the energy it produces tends to be more predictable and stable than with fossil fuel power generators. This can be a valuable feature for customers and investors, who can make more accurate predictions of the future economic viability of their energy assets.&#13;
&#13;
In this study, we have focused on operation and maintenance cost, and in particular on whether it is possible to optimize the number (and thus the cost) of two kinds of staff: maintenance workers and plant operators. Specifically, we have investigated whether and how, with the aid of proper technologies, it is possible to reduce onsite staff while relying on a fleet-type centralized service business unit that shares staff among multiple reactors and locations. This staff organization is completely different from current nuclear power plant operating experience and brings micro-reactors closer to the operation and maintenance model of small aero-derivative gas turbines and similar small transportable ‘plug-and-play’ power units. We first identify a reference staffing scenario, which represents the minimum staffing level that must be present onsite to allow the micro-reactor to operate with limited offsite assistance, similarly to the current fleet of large nuclear power plants. Then, we identify the optimal staffing scenario, which should be the eventual goal for micro-reactor operation. In this case, no personnel are permanently onsite; reactor operation is monitored remotely, and maintenance workers go onsite only for programmed activities. We then describe the enabling technologies for this second scenario: autonomous operation, remote monitoring, and predictive maintenance. We finally estimate the cost of the personnel and the technologies, and make cost comparisons. In this report, we show that the change in staff organization from onsite personnel to offsite personnel could translate to an annual cost saving of ~25% for these two operation and maintenance (O&amp;M) cost items. We also argue that relying more on the use of technology for plant health monitoring will also mean a higher degree of safety.&#13;
&#13;
Ultimately, to move from a traditional fully-manned, onsite personnel approach to an unmanned, remote personnel approach, robust operating experience and the approval of the regulator will be needed. It will be important to achieve a high level of confidence in the reliability of the installed technology and a solid understanding of the possible malfunctions and failure modes that may arise. Finally, it will be important to achieve an appropriate level of confidence among the population with regard to these new O&amp;M approaches.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Continuous Pareto Exploration in Multi-Task Learning</title>
<link href="https://hdl.handle.net/1721.1/150297" rel="alternate"/>
<author>
<name>Ma, Pingchuan</name>
</author>
<id>https://hdl.handle.net/1721.1/150297</id>
<updated>2023-04-01T03:31:11Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Efficient Continuous Pareto Exploration in Multi-Task Learning
Ma, Pingchuan
Tasks in multi-task learning often correlate, conflict, or even compete with each other. As a result, a single solution that is optimal for all tasks rarely exists. Recent papers introduced the concept of Pareto optimality to this field and directly cast multi-task learning as a multi-objective optimization problem, but the solutions returned by existing methods are typically finite, sparse, and discrete. We present a novel, efficient method that generates locally continuous Pareto sets and Pareto fronts, which opens up the possibility of continuous analysis of Pareto optimal solutions in machine learning problems. We scale up theoretical results in multi-objective optimization to modern machine learning problems by proposing a sample-based sparse linear system, for which standard Hessian-free solvers in machine learning can be applied. We compare our method to state-of-the-art algorithms and demonstrate its use in analyzing local Pareto sets on various multi-task classification and regression problems. The experimental results confirm that our algorithm reveals the primary directions in local Pareto sets for trade-off balancing, finds more solutions with different trade-offs efficiently, and scales well to tasks with millions of parameters.
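The notion of Pareto optimality invoked here can be illustrated with a discrete non-dominated filter (a generic textbook sketch; the thesis works with continuous Pareto sets, not this discrete filter, and the loss values below are made up):

```python
# Generic Pareto filter for minimization problems (illustrative only).

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

losses = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(losses)
# (3.0, 3.0) is dominated by (2.0, 2.0), so it is filtered out.
```

In multi-task learning each coordinate would be one task's loss; the thesis's contribution is to trace continuous families of such non-dominated solutions rather than enumerate a discrete set like this.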
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometrical Optimization of Planar Nano Vacuum Channel Transistors</title>
<link href="https://hdl.handle.net/1721.1/150296" rel="alternate"/>
<author>
<name>Bechhofer, Adina R.</name>
</author>
<id>https://hdl.handle.net/1721.1/150296</id>
<updated>2023-04-01T03:12:56Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Geometrical Optimization of Planar Nano Vacuum Channel Transistors
Bechhofer, Adina R.
Nano vacuum devices have demonstrated tunneling emission at low voltages due to their 10 nm-scale gaps, which create electric fields on the order of 10 GV/m with just 10 V. The small gaps give rise to ballistic transport through the channel, which, combined with the low capacitances of the electrodes, gives rise to ultrafast response times. Nano vacuum channel devices have also demonstrated robustness in the face of extreme radiation and temperature conditions [28, 18]. The design of nano vacuum devices is unintuitive due to the complicated and partially unknown physics governing their operation. In this thesis, we present an approach to performing shape optimization on nano vacuum channel devices based on an adaptation of a simulated-annealing [55] algorithm. We defined figures of merit to maximize the current in a diode, minimize the off-to-on current ratio in a transistor, and minimize the gate leakage current in a transistor. We implemented a finite element electrostatic simulation to calculate the emission-current-density profiles on emitting tips in diodes and transistors. We also implemented a particle-tracking heuristic to speed up the simulations and optimization of transistors.&#13;
&#13;
Using the optimization framework developed in this work, we are able to reach device designs that achieve a 6-orders-of-magnitude performance improvement compared to the initial geometry in approximately 10,000 optimization steps. For each emission model assumed, we uncover unique geometrical features that enhance the performance of devices for the figure of merit of interest.&#13;
&#13;
This work establishes a free and open-source framework for electronic device optimization. Using this framework, device designers and engineers can spend less time, money, and research effort on developing efficient and high-performing devices.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Decentralized Multi-Agent Learning in Asymmetric Bipartite Queuing Systems</title>
<link href="https://hdl.handle.net/1721.1/150295" rel="alternate"/>
<author>
<name>Weng, Wentao</name>
</author>
<id>https://hdl.handle.net/1721.1/150295</id>
<updated>2023-04-01T03:44:25Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Efficient Decentralized Multi-Agent Learning in Asymmetric Bipartite Queuing Systems
Weng, Wentao
We study decentralized multi-agent learning in bipartite queuing systems, a standard model for service systems. In particular, &#119873; agents request service from &#119870; servers in a fully decentralized way, i.e., by running the same algorithm without communication. Previous decentralized algorithms are restricted to symmetric systems, have performance that degrades exponentially in the number of servers, require communication through shared randomness and unique agent identities, and are computationally demanding. In contrast, we provide a simple learning algorithm that, when run in a decentralized manner by each agent, leads the queuing system to have efficient performance in general asymmetric bipartite queuing systems while also having additional robustness properties. Along the way, we provide the first provably efficient UCB-based algorithm for the centralized case of the problem.
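The UCB idea mentioned for the centralized case can be illustrated with a textbook UCB1 sketch (a generic Bernoulli-bandit stand-in for choosing among servers; the success rates, horizon, and function name below are made up and this is not the thesis's queuing algorithm):

```python
import math
import random

def ucb1(success_rates, horizon, seed=0):
    """Textbook UCB1 on Bernoulli arms: play the arm maximizing
    empirical mean + sqrt(2 ln t / n_i). Illustrative only."""
    rng = random.Random(seed)
    k = len(success_rates)
    counts = [0] * k
    rewards = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                      # play each arm once to initialize
            arm = t - 1
        else:
            arm = max(range(k), key=lambda i: rewards[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        rewards[arm] += 1.0 if rng.random() < success_rates[arm] else 0.0
    return counts

counts = ucb1([0.3, 0.5, 0.8], horizon=2000)
# The arm with the highest success rate ends up played most often.
```

The exploration bonus shrinks as an arm is sampled more, so play concentrates on the best arm while still occasionally revisiting the others; the thesis's contribution is making this style of guarantee work in the harder queuing setting.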
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microstructural Analysis of Garnet-Heavy Metal Heterostructures</title>
<link href="https://hdl.handle.net/1721.1/150294" rel="alternate"/>
<author>
<name>Gross, Miela Josephine</name>
</author>
<id>https://hdl.handle.net/1721.1/150294</id>
<updated>2023-04-01T03:58:11Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Microstructural Analysis of Garnet-Heavy Metal Heterostructures
Gross, Miela Josephine
Rare earth iron garnets (REIGs) are a class of ferrimagnetic insulators desirable for their tunable magnetic properties. Saturation magnetization, perpendicular magnetic anisotropy (PMA), coercivity, and Gilbert damping are just a few characteristics that can be altered via elemental substitution or strain on the film [1,2]. These adjustable features make REIGs desirable magnetic materials for spintronics [3]. Spintronics devices aim to provide CMOS-compatible microelectronics that utilize electric charge and magnetic moments to store and process information. &#13;
&#13;
REIGs exhibit a variety of magnetic interactions, both bulk and interfacial, which are valuable for spintronics. Interfacial phenomena such as spin transfer torque, spin orbit torque, and the interfacial Dzyaloshinskii-Moriya interaction have all been clearly observed in REIG-heavy metal heterostructures [4,5]. These interactions allow the magnetization of the REIG to be switched without an external magnetic field at very low power. Their fast switching is showcased in the ultrafast domain wall velocities reported in Bi-substituted yttrium iron garnet and thulium iron garnet [6]. In addition, their strong magneto-optical behavior allows REIGs to be studied via optical techniques such as magneto-optical Kerr effect imaging and Brillouin light scattering [7].&#13;
&#13;
While these magnetic interactions show promise for REIGs in spintronics applications, integrating these materials on Si requires further microstructural analysis. Polycrystalline dysprosium iron garnet (DyIG) has been successfully grown on Si with PMA through an external rapid thermal anneal after pulsed laser deposition [8]. However, crystallization is limited to thicknesses above 20 nm. Furthermore, Pt is often used as the heavy metal for investigating interfacial phenomena in REIGs [9,10,11]. However, REIG film growth occurs in a kinetically active, high-temperature, oxygen-rich environment, and Pt durability during this process has yet to be studied. This thesis involves designing, building, and characterizing a heterostructure to achieve 10 nm thin DyIG and Y-substituted DyIG on Si. A paramagnetic garnet, gadolinium gallium garnet, is used as a templating layer for the REIG, and a thin Pt diffusion barrier is sputtered between the two garnet layers. Microstructural and magnetic characteristics are investigated.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Predicate Learning</title>
<link href="https://hdl.handle.net/1721.1/150293" rel="alternate"/>
<author>
<name>Li, Amber</name>
</author>
<id>https://hdl.handle.net/1721.1/150293</id>
<updated>2023-04-01T03:53:10Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Active Predicate Learning
Li, Amber
Planning in robotics environments is difficult in part due to continuous state and action spaces. One approach to this challenge is to use bilevel planning, where decision-making occurs at multiple levels of abstraction. However, the efficacy and efficiency of bilevel planning rely on the underlying set of state and action abstractions. It is impractical to assume these abstractions are given (i.e., hand-designed by humans), so instead the agent should learn them, for example by exploring and interacting with its environment under the guidance of a teacher, from whom the robot may query expert knowledge. This is more difficult than the typical active learning problem setting because the robot must take actions to get to a state before it can make useful queries of the teacher about that state. This work develops an active learning framework for learning state abstractions (predicates) for effective and efficient task and motion planning. Given the names and arguments of the predicates in an environment and very few pre-labeled examples, the agent is able to learn predicate classifiers that enable it to successfully complete test tasks.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensemaking: An Analysis of Participatory and Automated Methods</title>
<link href="https://hdl.handle.net/1721.1/150292" rel="alternate"/>
<author>
<name>Jean-Charles, Sandy</name>
</author>
<id>https://hdl.handle.net/1721.1/150292</id>
<updated>2023-04-01T03:41:30Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Sensemaking: An Analysis of Participatory and Automated Methods
Jean-Charles, Sandy
Participatory research involves the iterative process of action and reflection, with the involvement of community members rather than solely researchers. Several projects under the Center for Constructive Communication (CCC), such as RealTalk@MIT, involve community members in facilitated conversations to gain more insight into their experiences and perspectives. However, these participants do not have continued involvement in the next step in the process, sensemaking, where themes and subthemes from the conversations are generated. To evaluate a new format of sensemaking that is more accessible to participants and incorporates their involvement, I invited participants from RealTalk@MIT to make sense of their own conversation and noted the differences (small or significant) between the themes produced by the researchers involved in the original sensemaking team and the workshop participants. From there, I evaluated the effectiveness of the new workshop structure in terms of allowing participants to interact with their own voices, the differences between codes produced by both groups, and how those codes compare to computer-generated codes.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Profile Creation with Topic Modeling and Semantic Analysis from Conversations about COVID-19 among U.S. Older Adults</title>
<link href="https://hdl.handle.net/1721.1/150291" rel="alternate"/>
<author>
<name>Le, Joie</name>
</author>
<id>https://hdl.handle.net/1721.1/150291</id>
<updated>2023-04-01T03:24:56Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Profile Creation with Topic Modeling and Semantic Analysis from Conversations about COVID-19 among U.S. Older Adults
Le, Joie
Coding of qualitative data in social science research is a process that involves categorizing individual units of data to facilitate analysis. It requires a great deal of manual labor and time to produce codes with high validity and inter-coder reliability.&#13;
&#13;
In an ongoing study, MIT AgeLab researchers analyzed focus group and interview transcripts containing conversations about the impact of the COVID-19 pandemic on Black and white U.S. older adults’ preventive health behavior and healthcare use. To facilitate the qualitative coding process, we propose an approach for automated topic extraction with sentiment analysis using a natural language processing technique known as topic modeling. While automated methods for quantitative data are common, methods for qualitative data, especially focus group text, have not been rigorously explored. &#13;
&#13;
This thesis compares two topic modeling algorithms, LDA and GSDMM, and tests a variety of pseudo-document methods to divide the text transcripts into smaller documents. After the transcripts are split by race, COVID-19 vaccination status, and relationship to a local community, global topics and sentiment-based topics are extracted from the text and labeled by human researchers.&#13;
&#13;
Direct comparisons between profiles within an axis uncover differences warranting further analysis. The results produced from topic modeling can be used to derive an initial codebook before coding begins and motivate further investigation of topic modeling used in tandem with human coding during qualitative text analysis.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visibility-Aware Navigation Among Movable Obstacles</title>
<link href="https://hdl.handle.net/1721.1/150289" rel="alternate"/>
<author>
<name>Muguira Iturralde, Jose</name>
</author>
<id>https://hdl.handle.net/1721.1/150289</id>
<updated>2023-04-01T03:32:29Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Visibility-Aware Navigation Among Movable Obstacles
Muguira Iturralde, Jose
We examine the problem of visibility-aware robot navigation among movable obstacles (VANAMO). A variant of the well-known NAMO robotic planning problem, VANAMO puts additional visibility constraints on robot motion and object movability. This new problem formulation lifts the restrictive assumption that the map is fully visible and the object positions are fully known. We provide a formal definition of the VANAMO problem and propose the Look and Manipulate Backchaining (LaMB) algorithm for solving such problems. LaMB has a simple vision-based API that makes it more easily transferable to real-world robot applications and scales to large 3D environments. To evaluate LaMB, we construct a set of tasks that illustrate the complex interplay between visibility and object movability that can arise in mobile base manipulation problems in unknown environments. We show that LaMB outperforms NAMO and visibility-aware motion planning approaches as well as simple combinations of them on complex manipulation problems with partial observability.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Assessment of the Societal Cost of PtL and LH₂ as Aviation Fuels</title>
<link href="https://hdl.handle.net/1721.1/150287" rel="alternate"/>
<author>
<name>Abel, James M.</name>
</author>
<id>https://hdl.handle.net/1721.1/150287</id>
<updated>2023-04-01T03:14:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Comparative Assessment of the Societal Cost of PtL and LH₂ as Aviation Fuels
Abel, James M.
Alternative fuels produced from green hydrogen will play a critical role in decarbonization of aviation, owing to their globally scalable production and mission flexibility. However, there is no consensus whether it is preferable to liquefy hydrogen and use it directly as a fuel (LH₂) or to combine it with CO₂ captured from point sources or the atmosphere to create a synthetic hydrocarbon fuel (PtL). Much of this disagreement is a result of the wide range of uncertainty that exists in the variables affecting the cost and environmental impact of each fuel. To determine which of these uncertainties are the most critical to driving this decision, a parametric system-level model of the well-to-wake life-cycle of each fuel was constructed, which combines the economic and environmental cost of flight with each fuel into a single societal cost metric. A series of sensitivity analyses were conducted to quantify the impact of each input assumption on the total societal cost of flight with each fuel. The results showed that the relative favorability of future hydrogen aviation fueling strategies will depend on how technological developments and scientific knowledge evolve, specifically in the fields of direct air capture (DAC), LH₂ aircraft design, and warming from persistent contrail formation.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Workforce Housing in Miami</title>
<link href="https://hdl.handle.net/1721.1/150284" rel="alternate"/>
<author>
<name>Weiss, Francis J.</name>
</author>
<id>https://hdl.handle.net/1721.1/150284</id>
<updated>2023-04-01T03:02:48Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Increasing Workforce Housing in Miami
Weiss, Francis J.
Miami is at the center of the housing affordability crisis: ranking for the first time ever above San Francisco and New York, it recently became the least affordable housing market in the United States. Rising rates, record prices, insurance premiums, and the unprecedented migration to Southeast Florida due to the pandemic exacerbated housing costs for Miami’s households. Since the recent migration, over half of households are now spending more than 30 percent of their income on housing. In this paper we investigate potential solutions through public policy and private capital markets to help increase the supply of workforce housing, whilst producing a commercial rate of return.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revealing the Illusion of Explanatory Depth Can Hinder Persuasion</title>
<link href="https://hdl.handle.net/1721.1/150283" rel="alternate"/>
<author>
<name>Humiston, Graelyn B.</name>
</author>
<id>https://hdl.handle.net/1721.1/150283</id>
<updated>2023-04-01T04:06:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Revealing the Illusion of Explanatory Depth Can Hinder Persuasion
Humiston, Graelyn B.
People often believe that they understand complex phenomena much better than they actually do, and this “illusion of explanatory depth” can be undermined by asking them to explain the phenomenon step-by-step. A recent attempt to harness this process as an intervention to reduce political attitude polarization found that participants’ policy positions were less extreme after an attempt to explain the policies in detail; however, this finding has failed to consistently replicate. We hypothesized that revealing the illusion of explanatory depth for a political policy could instead increase openness to persuasion -- that a persuasive argument would have a greater impact on policy support if it was preceded by an attempt to explain the policy in detail. First, we replicated the decrease in participants’ self-rated understanding of a policy after being asked to explain it, with a much larger sample than in prior studies. We then ran an experiment crossing this illusion of explanatory depth treatment with a persuasion treatment, and predicted that the persuasion treatment would have a greater impact for participants who completed the explanation task. However, the results showed the opposite pattern: Participants changed their position less if they were asked to explain the policy beforehand, compared to those in the control condition. Our findings suggest that, counterintuitively, demonstrating a lack of understanding may decrease openness to persuasion, and that we may be wrong about the relationship between persuasion and understanding.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Fuel-Climate Tradeoffs in Contrail Avoidance</title>
<link href="https://hdl.handle.net/1721.1/150282" rel="alternate"/>
<author>
<name>Elmourad, Jad A.</name>
</author>
<id>https://hdl.handle.net/1721.1/150282</id>
<updated>2023-04-01T03:23:37Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Evaluating Fuel-Climate Tradeoffs in Contrail Avoidance
Elmourad, Jad A.
Contrails, the line-shaped clouds that can form behind airplanes, have been estimated to be a major contributor to aviation-induced climate change. Operational contrail avoidance via flight re-routing may be an effective and efficient mitigation approach due to the geometry of contrail-forming regions. Contrail avoidance strategies will result in fuel burn penalties and, consequently, additional climate warming from carbon dioxide emissions; therefore, the climate benefit from avoiding contrails must be evaluated against the carbon dioxide penalties. Prior work has estimated the fuel burn and climate tradeoffs associated with contrail avoidance by focusing on a small set of routes or weather conditions, targeting only specific regions of the world, focusing on minimizing contrail length without quantifying the contrail impact, limiting deviations to horizontal or fixed altitude changes, or not using a fuel-optimal baseline for comparison. In this work, we evaluated the fuel-climate tradeoffs on a large scale by considering global coverage of flights with a full-year simulation accounting for daily and seasonal variation in meteorological conditions. We applied full altitude optimization of flight trajectories, focusing mainly on two contrail avoidance strategies: avoiding only nighttime or avoiding all contrails. The net climate impact of these strategies was evaluated by simulating individual contrail plumes and their radiative forcing impact, comparing trajectories to a fuel-optimal baseline in order to properly isolate the additional fuel requirements of contrail avoidance. We found that nearly 100% of contrail length can be avoided using vertical re-routing exclusively for a fuel burn penalty of 1.3–1.4% on the flights that perform maximum contrail avoidance.
However, since only a fraction of the flights need to perform contrail avoidance, the fuel burn penalty averaged over the entire fleet was found to be 0.5% to avoid all contrails and 0.16% to avoid just the nighttime ones. A 5% limit on the fuel burn penalty per flight reduced the avoided contrail length to 97%, whereas limiting the per-flight fuel burn penalty to the mean value of 1.4% limited the reduction in contrail length to 70%. Regarding the climate impact of contrail avoidance strategies, we found that on the flights that applied contrail avoidance, the net climate impact was reduced by 93% for the sample avoiding all contrails and 92% for the sample avoiding only nighttime contrails. We found that on flights that formed contrails the energy forcing of contrails was an order of magnitude larger than that of carbon dioxide for a time horizon of 100 years. In terms of the overall net climate impact across the fleet, i.e., including flights without contrail avoidance, we found that by avoiding all contrails on flights that form them, the entire fleet’s net climate impact was reduced by 80%. On the other hand, by avoiding only nighttime contrails, the entire fleet’s net climate impact was reduced by 28%. Therefore, it is best to avoid all contrails unless avoidance decisions can be made on a per-contrail basis. The flight-by-flight distribution and the seasonal variation of the fuel-climate tradeoffs were also analyzed.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>International Investments in Luxury Real Estate: An Evaluation of International Real Estate Investors and Developers Entering a Cross-Continental Market</title>
<link href="https://hdl.handle.net/1721.1/150279" rel="alternate"/>
<author>
<name>Roberts, Shermika S.</name>
</author>
<id>https://hdl.handle.net/1721.1/150279</id>
<updated>2023-04-01T03:30:36Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">International Investments in Luxury Real Estate: An Evaluation of International Real Estate Investors and Developers Entering a Cross-Continental Market
Roberts, Shermika S.
International investments in luxury real estate can be a complex and challenging endeavor, as investors and developers must navigate a variety of issues, including cultural differences, legal and regulatory frameworks, and market conditions. The goal of this research was to investigate how success can be achieved in a cross-continental market and the role of international investors and developers in evaluating the risks and opportunities presented by the market, as well as building relationships with local partners and experts who can help them navigate the local landscape. The study also considered Dubai and the United States as two major players in the global luxury real estate market, offering a range of opportunities for investors. In Dubai, the market is primarily driven by demand for high-end properties from wealthy individuals looking for a second or vacation home, as well as investors seeking to take advantage of the city's strong rental market. The United States has a more diverse luxury real estate market, with demand coming from a variety of sources. The research was based on interviews with two industry executives from the United States: John Grossman, the Chief Executive Officer of Grossman Company Properties, and Scott Lockwood, Regional Vice President of Development at Cambria Hotels by Choice Hotels International. The findings indicate that their experience in the local market, as well as relying on thorough market research, enables them to understand local trends and conditions. The study also found that local laws and regulations have played a role in shaping the luxury real estate markets, with some countries, states, and cities offering more favorable conditions for investors than others.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intrinsic Reward and Temporal Disruption: When workers leave their meaningful work</title>
<link href="https://hdl.handle.net/1721.1/150277" rel="alternate"/>
<author>
<name>Minster, Arrow</name>
</author>
<id>https://hdl.handle.net/1721.1/150277</id>
<updated>2023-04-01T03:20:21Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Intrinsic Reward and Temporal Disruption: When workers leave their meaningful work
Minster, Arrow
Many workers pursue opportunities for their intrinsic rewards as well as their extrinsic rewards. Intrinsic rewards like meaningfulness – having a positive impact on others – are also strong commitment devices. Even in the face of low pay or unsafe working conditions, workers can justify staying with meaningful work. Workers can justify leaving their meaningful work when factors external to their job, like family commitments, require their time and attention; without such external factors, workers are likely to stay in seemingly meaningful jobs even in the absence of extrinsic rewards. Based on interviews with nightlife entertainers during the pandemic in 2020, this paper identifies a case of workers leaving an intrinsically rewarding job. Specifically, workers’ embodied experience of meaningfulness is related to their decisions to leave or continue with entertainment work during the pandemic. This author submits that work itself is not always rewarding; rather, workers develop their own ways of knowing whether the work is rewarding. The presence and intensity of meaningfulness indicators are the reward, and workers must justify pursuing work when such indicators are absent. In the case of nightlife entertainers, these workers justified leaving their work when the pandemic disrupted their embodied knowledge. The pandemic was disruptive by disconnecting entertainers from the recognizable sensations of crafting a good “vibe” and by obscuring when work could happen in nightlife venues again. As opposed to a purely cognitive assessment of extrinsic rewards, evaluating intrinsic rewards like meaningfulness relies on cognitive and affective knowledge that one accumulates with experience.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advances in single component adhesive enable the production of high-performance rubber composites containing 40 wt% rubber waste and 95 wt% rubber waste when supplemented with devulcanization</title>
<link href="https://hdl.handle.net/1721.1/150273" rel="alternate"/>
<author>
<name>Troyano-Valls, Clara</name>
</author>
<id>https://hdl.handle.net/1721.1/150273</id>
<updated>2023-04-01T03:07:49Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Advances in single component adhesive enable the production of high-performance rubber composites containing 40 wt% rubber waste and 95 wt% rubber waste when supplemented with devulcanization
Troyano-Valls, Clara
Tire waste management is a significant and unsolved issue that continues to threaten human and environmental health. Here, a single component adhesive developed by our group is further optimized to produce rubber composites containing 40 wt% recycled rubber crumb and an unused polybutadiene matrix. The best-performing 40 wt% samples outperform those produced in the previous study, which contain only 15 wt% recycled content. To push the recycled content in the composite even further, the single component adhesive was combined with devulcanization to produce a 95 wt% sample that also outperforms the 15 wt% samples from the predecessor study. Finally, scanning electron microscopy and elemental mapping enabled a new understanding of the architecture of the composites, showing that the adhesive is not acting as a coating on the rubber crumb but is instead shearing off substantially and distributing itself throughout the composite.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systemic View to Acquiring Innovation: How the US Air Force Selects and Integrates Private Sector Innovation</title>
<link href="https://hdl.handle.net/1721.1/150271" rel="alternate"/>
<author>
<name>Porter, Orson S.</name>
</author>
<id>https://hdl.handle.net/1721.1/150271</id>
<updated>2023-04-01T03:32:12Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Systemic View to Acquiring Innovation: How the US Air Force Selects and Integrates Private Sector Innovation
Porter, Orson S.
The United States Air Force (USAF) is seeing a shift in global powers. In order to stay ahead of the new threat of near-peer adversaries, the Chief of Staff of the Air Force established a plan, Accelerate Change or Lose. In this plan, General Charles Q. Brown asks USAF personnel to become innovative problem solvers. One way to bridge this gap is to use the Small Business Innovation Research (SBIR) program. Through this program, the USAF funds small businesses with innovative products, technologies, or services and then integrates them into current USAF systems.&#13;
&#13;
The USAF started Pitch Day in 2019 to rapidly fund and contract with innovative companies, with a goal to integrate their products, technologies, or services for end users within the USAF. Shortly afterwards, the USAF created AFWERX to standardize the process.&#13;
&#13;
Both USAF employees and private sector companies were interviewed in a semi-structured format. USAF employees overseeing each of the processes were interviewed to fully understand the USAF’s approach. Fifty-two companies were also interviewed, and the semi-structured format allowed for open-ended questions and provided unfiltered responses.&#13;
&#13;
The information provided by the interviews was then analyzed and overarching themes were identified. The baseline processes are mapped, and the various inputs and outputs required by the USAF and the private sector companies are then added to the baseline process. Social behavior is also added to the process models, which creates a holistic view of the entire system. The social behaviors include both enablers and barriers, spanning both formal and informal processes. The models help explain how companies move through the entire AFWERX system from Phase 1 to Phase 3, explain how the contracting process works for each solicitation and contract, and explain the possible factors that can lead to the “Valley of Death.”&#13;
&#13;
Within each of these different process flow models, feedback and balancing loops explain how informal requirements actually impact the formal requirements, both positively and negatively. The purpose of this thesis is to propose a new framework that can better enable the USAF and companies to understand the complexities of the government-to-industry innovation system.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliminating the Thorn in the United States’ Side: Media Propaganda and the Grenada Experiment</title>
<link href="https://hdl.handle.net/1721.1/150270" rel="alternate"/>
<author>
<name>Sands, Annis R.</name>
</author>
<id>https://hdl.handle.net/1721.1/150270</id>
<updated>2023-04-01T03:51:05Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Eliminating the Thorn in the United States’ Side: Media Propaganda and the Grenada Experiment
Sands, Annis R.
This thesis explores the media propaganda surrounding the 1979 Grenada Revolution and the subsequent 1983 US invasion.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning for detection of cyberattacks on industrial control systems</title>
<link href="https://hdl.handle.net/1721.1/150269" rel="alternate"/>
<author>
<name>Kalra, Geet</name>
</author>
<id>https://hdl.handle.net/1721.1/150269</id>
<updated>2023-04-01T03:43:28Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Machine learning for detection of cyberattacks on industrial control systems
Kalra, Geet
Senior executives for industrial systems are increasingly facing the need to reassess their cyber risk as cyberattacks are on a steep rise. This is driven by the rapid digitalization of traditional industries, whose systems were designed to operate for decades at a time when security was not a priority. Simultaneously, the available tools to detect these attacks have also increased. This thesis aims to help researchers and industry leaders understand how to implement machine learning (ML) as an early detection tool for anomalies (cyberattacks being a subset of anomalies) in their processes. With learnings from an end-to-end implementation of some state-of-the-art machine learning models and a literature survey, this thesis highlights the critical focus areas for managers looking to implement ML tools. The thesis also helps managers to understand research metrics and converts them into business goals that would allow for better decision-making and resource allocation.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety in U.S. Navy Navigation Applying STAMP Processes to Surface Ship Collisions</title>
<link href="https://hdl.handle.net/1721.1/150268" rel="alternate"/>
<author>
<name>Canady, Andrew Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/150268</id>
<updated>2023-04-01T03:39:50Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Safety in U.S. Navy Navigation Applying STAMP Processes to Surface Ship Collisions
Canady, Andrew Michael
The collisions and accidents occurring throughout the U.S. Navy surface fleet warranted the appointment of a 34-person review team to analyze the three ship collisions and one grounding in 2017.  These accidents resulted in 17 U.S. Navy sailors' deaths and damage to the operational ships.  There were 12 incidents between 2007 and 2017; this increase in frequency drove the need to conduct the review. The concern is that the fundamental causal factors were not adequately addressed and that a future collision is imminent without further corrective action. &#13;
&#13;
This thesis uses Dr. Nancy Leveson’s Systems-Theoretic Accident Model and Process (STAMP) model of accident causation to analyze two U.S. Navy ship collisions in 2017.  The Causal Analysis based on STAMP (CAST) is conducted on both collisions, and an analysis of the results is compared with the traditional U.S. Navy findings.  CAST examines the system’s safety control structure to assess why the designed controls were inadequate to prevent the accident.  The goal of this thesis is to determine whether a STAMP approach to accident analysis would add value to the U.S. Navy.  If so, it seeks to determine what new factors the CAST analysis provides and how it may be used to prevent future mishaps.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Modeling and Optimization of Hydrogen Supply Chain for Aviation Demand</title>
<link href="https://hdl.handle.net/1721.1/150267" rel="alternate"/>
<author>
<name>Cybulsky, Anna Nadia</name>
</author>
<id>https://hdl.handle.net/1721.1/150267</id>
<updated>2023-04-01T03:49:51Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Techno-Economic Modeling and Optimization of Hydrogen Supply Chain for Aviation Demand
Cybulsky, Anna Nadia
The aviation industry is under increasing pressure to find solutions to decarbonize its operations by 2050. Few viable solutions exist for the sector to reach net zero emissions – the main ones under consideration are synthetic fuels (sustainable aviation fuels from biomass or power-to-liquid fuels from green hydrogen), and liquid hydrogen. This work considers the liquid hydrogen pathway and a flight network in Europe consisting of five countries and flights within 1000 nmi. The DOLPHYN energy systems capacity expansion and economic dispatch model considers existing and future technologies under a strict emissions cap for the year 2040, while optimizing the overall system for cost. This work highlights the importance of utilizing multiple technology options in order to achieve decarbonization targets, such as nuclear expansion, carbon capture and storage, and natural gas reforming with carbon capture for hydrogen production. The lowest cost system is achieved when nuclear power is allowed to expand, whereas the highest cost system arises when carbon capture and storage is not developed. Average system-wide levelized cost of hydrogen is projected at below €2/kg, demonstrating pathways for Europe to achieve cost-competitive domestic production, when hydrogen is deployed at large scale in coordination with power sector expansion.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shipping decarbonization through sea route optimization &amp; Vortex generator resistance reduction</title>
<link href="https://hdl.handle.net/1721.1/150266" rel="alternate"/>
<author>
<name>Mogilevsky, Igor</name>
</author>
<id>https://hdl.handle.net/1721.1/150266</id>
<updated>2023-04-01T03:24:03Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Shipping decarbonization through sea route optimization &amp; Vortex generator resistance reduction
Mogilevsky, Igor
The IMO has set a target to reduce greenhouse gas (GHG) emissions by 40% by 2030, with the aim of pursuing efforts towards a 70% reduction by 2050. In order to achieve this goal, accurate prediction of ship resistance and identification of effective resistance reduction measures are crucial. This thesis presents a ship resistance evaluation model that aims to accurately predict the added resistance experienced by a ship in high seas based on its physical characteristics and operating conditions. The model is developed as part of the ship decarbonization movement with the goal of helping to optimize ship performance and reduce fuel consumption.&#13;
&#13;
The model is developed using a combination of experimental data and computational evaluations. The performance of the model is validated against sea trial measurements. The results show that the proposed model is able to accurately predict ship resistance for a wide range of vessel types and operating conditions, and estimate the added resistance based on a reliable sea state database (ERA5 from ECMWF). The model is applied to evaluate the added resistance for a cross-pacific route, providing valuable insights for ship operators working towards the goal of reducing GHG emissions by 70% by 2050.&#13;
&#13;
In addition to the model development, this thesis also discusses and tests the use of vortex generators (VGs) as an innovative tool for ship resistance reduction. The theory shows that the use of VGs is an effective method for potentially reducing a ship’s form factor resistance, with an expected reduction of up to 30% of total resistance.&#13;
&#13;
While the research shows that the use of vortex generators can be an effective method for reducing ship resistance, further research is needed to optimize the size, shape, and position of these generators in order to maximize their effectiveness. Overall, the model and the use of vortex generators are expected to be valuable tools for ship designers, operators, and researchers working towards the decarbonization of the shipping industry.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation of Irradiation of a Molten Salt Loop at the MIT Reactor</title>
<link href="https://hdl.handle.net/1721.1/150265" rel="alternate"/>
<author>
<name>Carayannopoulos, Loukas</name>
</author>
<id>https://hdl.handle.net/1721.1/150265</id>
<updated>2023-04-01T03:50:09Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Simulation of Irradiation of a Molten Salt Loop at the MIT Reactor
Carayannopoulos, Loukas
Over the past decade or so, molten salt reactors (MSR) have gained a lot of interest within the nuclear energy community, with two of the ten reactor designs supported by the Department of Energy’s (DOE) Advanced Reactor Development Program being MSRs. Commercializing MSR technology requires new experimental facilities to improve understanding of molten salt behavior in reactor-like environments. The Nuclear Energy University Program approved a project to design and build a molten salt test bed that will be irradiated by the MIT Reactor (MITR), which will provide testing capabilities that have not existed since the shutdown of the Molten Salt Reactor Experiment (MSRE) in 1969. The main experimental goals of the project are (1) to build a molten salt loop that operates between 550°C and 700°C and is irradiated by neutrons from the MITR to duplicate conditions in a molten salt reactor, (2) to produce data regarding the transport, diffusion, and dissolution of tritium and radionuclides in molten salts, and (3) to provide a facility for testing chemistry control, salt cleanup, tritium control, and instrumentation.&#13;
&#13;
To safely design this facility, an understanding of the radioactivity generated in the loop is needed. The purpose of this work is to model the irradiation of the salt loop, using MCNP and Serpent, to determine the necessary amount of radiation shielding and the activation of different salts proposed for use in MSRs. The salts considered are FLiBe, LiF-BeF₂-ZrF₄-UF₄ used in the MSRE, LiF-BeF₂-UF₄ suggested as a fuel salt by Terrestrial Energy and Flibe Energy, LiF-BeF₂-ThF₄ suggested as a blanket salt by Flibe Energy, and NaCl-UCl₃ suggested as a fuel salt by TerraPower. FLiBe will generate 1.413 ± 0.024 Ci/hr of tritium, with all other fluoride salts generating similar quantities, and a total of 1413 ± 24 Ci over a 1000-hour period – the maximum continuous operation time proposed for the facility. The maximum radioactivity produced in 1000 hours of operation is 546.9 Ci in the chloride salt. The quantities of tritium and other radioactive products formed in each of the salts make this facility an important tool for the development of MSR technology.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Not as Planned</title>
<link href="https://hdl.handle.net/1721.1/150263" rel="alternate"/>
<author>
<name>Dueñas Gerritsen, Patricia</name>
</author>
<id>https://hdl.handle.net/1721.1/150263</id>
<updated>2023-04-01T03:03:46Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Not as Planned
Dueñas Gerritsen, Patricia
Architecture assumes too much. It assumes it can provide solutions that will always work. It assumes people will live in particular ways. It assumes buildings and conditions will last forever unaltered. These assumptions mean that buildings too often restrict change. The certainty of building has been taken for granted and as a result, buildings and their embodied materials are cast as collateral damage to new plans and new ideas. In place of this certainty, we need to accept that more often than not things do not go as planned.&#13;
&#13;
The Providence Gas Company Purifier House serves as the physical and conceptual site for this thesis. It has undergone failure after failure—technological obsolescence, environmental disasters, economic collapse—making it both peculiar and paradigmatic. And while architecture does not have the agency to change these realities, it can respond. By considering a series of architectural propositions instead of a singular custom-fit design, this thesis asks: Can we think through multiple states and their transitions? What might this mean for sites of architecture and existing modes of practice?&#13;
&#13;
To design differently, architects need to be more unsure and unassuming. Resigning control of future events means abandoning overdesigned and overdetermined systems of the past. This thesis presents a methodology of contingency plans that explores loose-fits and odd adjacencies resulting from existing forms. An embrace of nonlinear design has the potential to challenge standards, conventional programming, unproductive preservationist ideals, developer-driven notions of efficiency, and uncompromising aesthetic agendas.&#13;
&#13;
Presuming that future plans will fail, yet still maintaining hope in them is a call to think through precarity. This means that architects must anticipate that their work will be rewritten, edited, and converted. An expanded notion of practice could also enable new tools of documentation, deconstruction, and maintenance. This thesis reimagines the architect as caretaker and argues for an architecture of revision and repair through the design of multiple futures.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mutual Information as a Predictor of Group Performance: Application to Soccer Teams</title>
<link href="https://hdl.handle.net/1721.1/150262" rel="alternate"/>
<author>
<name>Miura, Hirotaka</name>
</author>
<id>https://hdl.handle.net/1721.1/150262</id>
<updated>2023-04-01T03:27:10Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Mutual Information as a Predictor of Group Performance: Application to Soccer Teams
Miura, Hirotaka
Predictors of group performance based on observational data about interactions among group members could be useful for many organizations. However, existing methods to formulate such predictors tend to be complicated or miss key relationships in the data. We demonstrate that a simple measure derived from information theory called mutual information is predictive of group performance. Mutual information captures the probabilistic dependency between two states, and can incorporate the time dimension to quantify dynamic interactions. Here we apply mutual information to analyze the pattern of passing between members of 11-player soccer teams in approximately 2,000 matches. We employ a modern econometric technique called debiased machine learning to estimate predictive effects of mutual information on game outcomes, controlling for many features including player-level events taking place on the field as well as opponent actions. Holding all other variables constant, we find that a 0.01 unit increase in mutual information, roughly equivalent to moving a team from the bottom of the metric to the average, is associated with approximately a 4% increase in the likelihood of winning a game and 0.07 more goals during a game. As a comparison, all else equal, home games are associated with about 0.26 more goals, implying that the effect size of mutual information on number of goals is equal to about a quarter of the effect size observed for home games. Stratifying by time, we find that around 50% of the effect of mutual information on number of goals for the entire match is observed during the middle of the game, suggesting that mutual information could be a leading indicator of group performance. These effects are separate from the impact of number of passes, which we find has a net zero effect on wins, losses, and draws, and a negative effect on number of goals.
Together, these results suggest that mutual information could provide a simple way of predicting group performance using observational data.
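For illustration (a toy sketch, not code from the thesis), mutual information between two discrete event streams, here hypothetical passer and receiver labels, can be estimated directly from joint frequencies:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired observations."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical pass events: who holds the ball and who receives it.
passers   = ["A", "A", "B", "C", "A", "B"]
receivers = ["B", "C", "C", "A", "B", "C"]
mi = mutual_information(passers, receivers)   # dependency in bits
```

Perfectly dependent binary streams give 1 bit; independent streams give 0. The thesis additionally incorporates the time dimension and uses debiased machine learning for effect estimation, which this sketch omits.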
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effects of Government Legislation and Regulation in the 20th Century on the Evolution of Commercial Real Estate as an Investment Vehicle</title>
<link href="https://hdl.handle.net/1721.1/150260" rel="alternate"/>
<author>
<name>Bonomo, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/150260</id>
<updated>2023-04-01T03:43:42Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Effects of Government Legislation and Regulation in the 20th Century on the Evolution of Commercial Real Estate as an Investment Vehicle
Bonomo, Gregory
We live within, modify, design, create, develop and admire these tangible things we call houses, offices, shopping malls, coffee shops, restaurants, warehouses and apartments, with their basic creation arising from a mere need, throughout history, to serve their principal purpose: to provide a space to live, work and play. The physical and locational evolution of these spaces over time is of equal importance to their evolution as a major vessel of value creation for individuals, corporations, public pensions and countries throughout history. This paper will highlight the evolution of commercial real estate as an investment vehicle in the United States throughout arguably the most crucial time period in its development, the 20th century. During this century the asset class evolved into one of the most sought-after alternative investments and attracted all levels of investors from the individual retail investor up to the largest and most sophisticated institutional investors of the era.&#13;
&#13;
Over time, these properties were physically erected with stone, steel, concrete and wood, but they have been cloaked in a history of legislation and regulation, a history of booms and busts, and a history of securitizations, resulting in what the asset class has become today. The intangible aspects that, with the stroke of a pen, have transformed physical space into a major arbiter of wealth deserve as much importance as the tangible materials that create commercial real estate assets.&#13;
&#13;
This paper was designed, at the start, to trace the who in terms of major capital allocators to commercial real estate during the 20th century. What has come about as a result of this research has been an additional focus on the how and the why. Much of the evolution of this investment vehicle, as well as the capital flows both into and out of the asset class, can be traced to major legislation passed during turbulent economic times throughout the century. It is of great importance to understand the reactions and development of this asset class during its initial evolution as well as the resulting impacts these developments had on the types of investors that made their way into this unique asset class.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heterogeneous Satellite Constellation Design for Cislunar Space Situational Awareness Using Real Options Analysis</title>
<link href="https://hdl.handle.net/1721.1/150259" rel="alternate"/>
<author>
<name>Wachs, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/150259</id>
<updated>2023-04-01T03:56:02Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Heterogeneous Satellite Constellation Design for Cislunar Space Situational Awareness Using Real Options Analysis
Wachs, Jordan
The number of commercial satellites in Low Earth Orbit (LEO) increased more than four-fold in the decade between 2011 and 2021, a trend that is likely to continue due to the combination of increasingly capable technologies and business models [23] [24]. Building on the success of the LEO economy, there is potential for the establishment of a cislunar economy, supporting NASA and providing commercial goods and services well beyond LEO [20].&#13;
&#13;
Successful creation of a cislunar economy will require commercial and government cooperation at an altogether new scale. The vastness of this new domain, however, makes the implementation of an adequate Space Situational Awareness (SSA) capability both necessary and extraordinarily difficult. Future aerospace leaders may find themselves breaking from traditional paradigms for architecture definition, systems engineering, and program costing [43]. Rather than wholesale abandonment of the processes that resulted in decades of successful innovation, however, future architects will benefit from the adoption of a new framework of systems thinking that combines the hard-won knowledge from the Aerospace domain with similarly deep lessons learned from other domains such as Finance and Real Options.&#13;
&#13;
This thesis explores a new framework for the analysis of large, dynamic, heterogeneous satellite constellation architectures by focusing on a multi-mode cislunar SSA capability. The combination of technical performance models, financial models, and real options analyses into a single tool allows system architects to identify and maintain multiple paths to program success by preserving flexibility in design and implementation throughout the program life cycle. The analyses presented in this thesis explore how Real Options can be used to increase the probability of mission success from as low as 2% to as high as 72% despite wide-ranging uncertainty in system cost and performance.
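The value of preserved flexibility can be illustrated with a toy Monte Carlo (an assumption-laden sketch, not the thesis's actual models; the budget, cost distribution, and mid-program descope option below are all hypothetical):

```python
import random

random.seed(1)

def success_prob(flexible, trials=100_000):
    """Toy model: a program succeeds if its realized cost stays under a
    hypothetical budget of 200. A 'real option' lets the architect switch
    to a cheaper descoped design when cost growth runs high."""
    wins = 0
    for _ in range(trials):
        growth = random.uniform(0.0, 2.0)       # uncertain cost growth factor
        cost = 100 * (1 + growth)               # realized program cost
        if flexible and cost > 200:
            cost = 180 + random.uniform(0, 40)  # exercise the descope option
        wins += cost <= 200
    return wins / trials

p_fixed = success_prob(False)   # rigid architecture, no options
p_flex = success_prob(True)     # architecture with the descope option
```

Even this crude setup shows the qualitative effect the thesis quantifies: holding an option to adapt mid-program raises the probability of staying within budget.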
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MIT-Middle East Multi-Party Collaboration</title>
<link href="https://hdl.handle.net/1721.1/150258" rel="alternate"/>
<author>
<name>Ali, Hassaam</name>
</author>
<id>https://hdl.handle.net/1721.1/150258</id>
<updated>2023-04-01T03:45:26Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">MIT-Middle East Multi-Party Collaboration
Ali, Hassaam
The Middle East comprises much of the Arab world, including Egypt and Israel. It is the birthplace of three major world religions. The modern history of the region has been marked with a series of geopolitical upheavals that have fragmented the countries that make up this place. In August of 2020, however, there was an historic shift away from fragmentation when an agreement of diplomatic, economic, trade and cultural collaboration was signed between Israel and the UAE. This was soon followed by a similar agreement between Israel and Bahrain, which further sparked normalization agreements with Sudan and then with Morocco. Combined, this series of agreements became known as the Abraham Accords. This is important for multiple reasons:&#13;
&#13;
• This transition to normalization occurred after a long history of absent or minimal diplomatic and trade relations between countries in the same region.&#13;
• The Abraham Accords have the potential to open doors for the remaining countries in the region, among which Saudi Arabia is the largest and one of the richest in the world, to also normalize relations with Israel and thus pave the way for a collaborative network of innovation ecosystems in the region.&#13;
• Viewed from the MIT lens of entrepreneurship and innovation, Israel stands out as the clear powerhouse in the region, followed by a very distant second, the UAE, which in recent years has been doubling down on building and expanding its own entrepreneurial ecosystem.&#13;
&#13;
The idea of a ‘Middle East Multi Partner Initiative’ was explored from within MIT International Science and Technology Initiatives (MISTI) while aiming to engage across the Institute. The initiative to build on recent geopolitical developments in the region to reach across borders to make regional impact is in its nascent stages of development. Its goal is to promote collaborative solutions to major challenges in the region that often share the same set of geographical, economic, and social hurdles, but lack bridges for knowledge sharing as well as symbiotic and sustainable partnerships.&#13;
&#13;
My work explores the current state of the region from the lens of potential entrepreneurial and economic collaboration, the key areas to focus on, and the most effective framework for MIT to build bridges between multiple countries by leveraging MIT’s programs, researchers, students, and the alumni body who are either based in the region or involved in research or coursework in these countries. For the 200+ MIT students who take part in professional experiences in the region in a typical year, this initiative could bring together new and existing partners in multi-partner engagement while simultaneously offering transformative educational experiences for MIT students and meaningful research opportunities for MIT faculty in the region.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Public Housing: Architecture as (Prop)aganda</title>
<link href="https://hdl.handle.net/1721.1/150256" rel="alternate"/>
<author>
<name>Kim, Jayson</name>
</author>
<id>https://hdl.handle.net/1721.1/150256</id>
<updated>2023-04-01T03:26:03Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Towards Public Housing: Architecture as (Prop)aganda
Kim, Jayson
Capitalism sustains crisis, most acutely the housing crisis. The only way out is to de-commodify housing, since a building alone, despite architects’ aspirations, cannot tackle a system. This thesis claims that built architecture alone ineffectively challenges regulatory systems. Resistance’s power, then, lies not in the physical, but in the psychological. To work towards de-commodifying housing, architects need to envision themselves as engaging in psychological warfare. We need to adopt propaganda as a method to rewrite the cultural narrative where single-family homeownership antagonizes public housing. We need to shift paradigms from homeownership to home-usership. This thesis proposes collaborations with tenant advocacy groups to create propaganda in the form of guerrilla theater that engages in psychological warfare against prevailing conceptions of homeownership. The architect designs stage sets that advocacy groups will weaponize. Orchestrated together, the guerrilla theater performs at town hall meetings. Borrowing Bertolt Brecht’s techniques of the epic theater, in particular critical distancing, each act works with architectural props to challenge the role we, inside the belly of the whale, play in this theater of the housing crisis. The thesis proposes sets for three acts. Act One, “A City is Born,” depicts the optimism and prosperity that came with the suburbanization experiment of post-WWII United States. Act Two, “Commodification of Domesticity,” describes housing today. Using approaches from the volumes Neighborhood Defenders and Thoughts on Building Strong Towns, this act paints a portrait of competing interests involved in the landscape of housing in Los Angeles. Act Three, “All That is Solid Melts into Air,” sketches the inevitable doom that forces the audience to reconsider homeownership in favor of home-usership. 
Questioning representations of public housing as matters of economic necessity, Towards Public Housing works to destigmatize public housing and home-usership and emphasize a more equitable future of American housing.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecture Ad Lib: A Field Guide</title>
<link href="https://hdl.handle.net/1721.1/150255" rel="alternate"/>
<author>
<name>McKinlay, Sasha</name>
</author>
<id>https://hdl.handle.net/1721.1/150255</id>
<updated>2023-04-01T03:47:47Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Architecture Ad Lib: A Field Guide
McKinlay, Sasha
The role of the architect has increasingly been to draw a project before its conception and coordinate for all the eventualities of construction. But as we know, the practice of architecture is at the whim of a whole host of contingencies, whether it be climate change, economic crisis, natural disasters, changes in the global supply chain, or simply an erratic client. This way of thinking inherently positions the architect in the role of diligent planner, charged with an impossible task.&#13;
&#13;
If there’s one thing an architect can count on, it’s that nothing goes according to plan.&#13;
&#13;
The noble plan put forward by the architect is thus a necessarily living process that must adjust to the obstacles we find in our path, a fact we often choose to ignore. We instead find ourselves defaulting to an exploitative resource culture in order to wiggle, shake, squeeze, hammer, or slam our plans to fit, all in the pursuit of perpetuating the myth of ‘according to plan’. This thesis demands an alternative mode of working to address the very crises that make our practice vulnerable. What kind of architecture emerges when we embrace uncertainty, rather than resist it?&#13;
&#13;
Explored here is a series of alternatives to the ‘ideal’ architectural story that results in a typical plywood shed. Each project responds to a new story, where something has gone gravely awry. Rather than resist these constraints, each project welcomes the opportunities found in leveraging its respective obstacles. These fables yield important morals, which in turn lend a set of strategies. Together, they provide a non-procedural toolset that helps leverage the trials and tribulations of practicing in an increasingly unpredictable world. This is Architecture Ad Lib, a field guide to those very moments of architectural uncertainty.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Silicate-Based Packaging Material for 3-D Heterogeneous Integration of Microsystems</title>
<link href="https://hdl.handle.net/1721.1/150254" rel="alternate"/>
<author>
<name>Benz, Ryan Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/150254</id>
<updated>2023-04-01T03:31:18Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Silicate-Based Packaging Material for 3-D Heterogeneous Integration of Microsystems
Benz, Ryan Timothy
3D heterogeneous integration (HI) and wafer-level packaging (WLP) represent the cutting edge of microelectronics design and fabrication. However, the current limitations of microelectronics packaging materials inhibit the widespread implementation of these processes, which presents an opportunity to develop new packaging solutions using innovative materials. Building upon previous work, this thesis develops and characterizes an improved silicate-based packaging material for 3D HI and WLP of microelectronics, with a focus on millimeter wave (mmWave) radio frequency (RF) applications. A sodium-free material formulation and suitability with scalable deposition techniques demonstrated compatibility with complementary metal-oxide-semiconductor (CMOS) fabrication. The material was tested and qualified for use with common microelectronics fabrication processes such as positive and negative photolithographic patterning, wet etching, and most chemical cleans (excluding highly basic chemicals). It was polishable to a surface roughness of 197 ± 29 Å, thermally stable up to 400°C, compatible with high vacuum exposure down to 1.5 ± 0.4 × 10⁻⁵ torr, and filled gaps with aspect ratios as high as 18.6:1 using certain techniques. The packaging material could be deposited at thicknesses ranging from single-digit microns up to several millimeters, and exhibited excellent adhesion to silicon at thicknesses below ~30 µm. Residual stress generation in films of the silicate packaging material was less than 10 MPa at thicknesses of ~20 - 90 µm. The coefficient of thermal expansion was 2.968 ± 0.054 ppm/°C from 50 - 400°C, and the dielectric constant and loss tangent were ~4 and ~0.0002 at 110 GHz. Successful fabrication of a coplanar waveguide and prototype reconstituted wafers demonstrated that the packaging material can be used to make RF devices and to enable 3D HI and WLP.
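As a rough back-of-the-envelope reading of the reported numbers (illustrative only, not an analysis from the thesis; the silicon CTE value is an outside assumption):

```python
# Thermal strain implied by the reported CTE over the stated range.
cte_ppm_per_c = 2.968                  # ppm/°C, reported for 50-400 °C
delta_t = 400 - 50                     # °C temperature swing
strain_ppm = cte_ppm_per_c * delta_t   # ~1039 ppm, i.e. roughly 0.1% expansion

# For context: a commonly cited CTE for silicon is ~2.6 ppm/°C (assumed here,
# not stated in the abstract). The small mismatch strain is consistent with
# the low residual stresses reported for films on silicon substrates.
si_cte_ppm_per_c = 2.6
mismatch_strain_ppm = abs(cte_ppm_per_c - si_cte_ppm_per_c) * delta_t  # ~129 ppm
```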
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced-Carbon Envelope Systems for More Sustainable Industrial Properties: A Cost Analysis of Reducing&#13;
Greenhouse Gas Emissions</title>
<link href="https://hdl.handle.net/1721.1/150252" rel="alternate"/>
<author>
<name>Ganitsky White, Raquel</name>
</author>
<id>https://hdl.handle.net/1721.1/150252</id>
<updated>2023-04-01T03:42:08Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Reduced-Carbon Envelope Systems for More Sustainable Industrial Properties: A Cost Analysis of Reducing&#13;
Greenhouse Gas Emissions
Ganitsky White, Raquel
Building operations and the construction industry account for 49% of global carbon emissions. Although different approaches have tried to lower the emissions of building operations, there have not been many initiatives to reduce the carbon emitted in construction. Currently, the embodied carbon of buildings in construction is lower than the operational carbon over their useful life. Nevertheless, if not enough attention is directed toward making the construction industry more sustainable, embodied carbon is expected to become the largest environmental hazard in real estate.&#13;
&#13;
Driven by growth in e-commerce and international problems with supply chains, industrial buildings have experienced the largest increase in demand in the past few years compared to other property types. Yet, initiatives to make construction systems for industrial buildings more sustainable are not well developed, and thus are not commonly used or known.&#13;
&#13;
This report aims to analyze reduced-carbon materials and systems currently used in industrial construction. Carbon emissions and prices will be compared to establish a cost for every ton of CO₂ not released into the environment. The report will apply a case study to approximate a real-life scenario and thus attempt to understand how much more expensive it is to build reduced-carbon industrial structures. The goal of this thesis is to understand how clear the path is to a much-needed, more sustainable industrial building market.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ESG Leverage for TOD. General Framework and the Quantitative Underwriting, Governance, and Policy case of Union Square</title>
<link href="https://hdl.handle.net/1721.1/150251" rel="alternate"/>
<author>
<name>Huicochea Mason, Juan</name>
</author>
<id>https://hdl.handle.net/1721.1/150251</id>
<updated>2023-04-01T03:43:39Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">ESG Leverage for TOD. General Framework and the Quantitative Underwriting, Governance, and Policy case of Union Square
Huicochea Mason, Juan
There is a global need for over 6 trillion in annual infrastructure commitment to achieve the UN and COP 21 sustainability goals by 2050, representing a 50-75% increase over current budgetary trends. Despite a notable increase in available capital, there is a lack of "bankable projects" to invest in. In contrast, recent years have witnessed an increasing demand for ESG; this appears as an opportunity to develop more sustainable projects.&#13;
&#13;
The first part of this work aims to provide developers, public officials, and planners with a universal framework to leverage ESG mechanisms within project finance and funding structures, focusing on Transit Oriented Development as a case that can contribute to closing the current sustainability gap from both the infrastructure and the real estate side.&#13;
&#13;
The second part of this work focuses on a case using the framework described above: Union Square in Somerville, within the Boston Metropolitan Area. Given a real 2.4 million sqft development project, the analysis considers the project as it is under the current policy and market framework. Focusing specifically on the environmental side of the project, the analysis considers the green benefits and construction costs (greeniums) of the certified project, together with its CO2 offsets and capital cost discounts. The analysis then develops two recommended policies given the specifics of the Somerville context: density bonuses and entitlement expeditiousness. The accounting of revenues, the policy analysis, and a public-private partnership will affect the project's financial performance in a methodical way.&#13;
&#13;
The input variables will be assessed under pessimistic, fair, and optimistic scenarios, shedding light on which variables and policies deserve more attention regarding their elastic impact on key financial indexes. Following this structure, a context-specific policy and green development strategy can be drawn to foster TOD developments in the Boston area.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Molecular Structures with Intrinsic Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/150250" rel="alternate"/>
<author>
<name>Corso, Gabriele</name>
</author>
<id>https://hdl.handle.net/1721.1/150250</id>
<updated>2023-04-01T03:45:33Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Modeling Molecular Structures with Intrinsic Diffusion Models
Corso, Gabriele
Since its foundation more than one hundred years ago, the field of structural biology has striven to understand and analyze the properties of molecules and their interactions by studying the structures they take in 3D space. However, a fundamental challenge with this approach has been the dynamic nature of these particles, which forces us to model not a single structure but a whole distribution of structures for every molecular system.&#13;
&#13;
This thesis proposes Intrinsic Diffusion Modeling, a novel approach to this problem based on combining diffusion generative models with scientific knowledge about the flexibility of biological complexes. The knowledge of these degrees of freedom is translated into the definition of a manifold over which the diffusion process is defined. This manifold significantly reduces the dimensionality and increases the smoothness of the generation space, allowing for faster and more accurate generative processes.&#13;
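As a toy sketch of the idea (not the thesis's implementation), diffusion restricted to a manifold of flexible degrees of freedom, here torsion angles, can be imitated by wrapped Gaussian noising, which keeps every sample on the circle rather than in full 3D coordinate space:

```python
import math
import random

def forward_diffuse(angles, sigma):
    """One forward-diffusion step on torsion angles (radians): add Gaussian
    noise, then wrap back into [-pi, pi) so the process stays on the manifold."""
    return [((a + random.gauss(0.0, sigma)) + math.pi) % (2 * math.pi) - math.pi
            for a in angles]

random.seed(0)
angles = [0.0, math.pi / 2, -math.pi / 2]   # hypothetical torsion angles
noised = forward_diffuse(angles, sigma=0.1)
```

The dimensionality benefit is the point: a molecule with a handful of rotatable bonds is described by that handful of angles instead of three coordinates per atom.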
&#13;
We demonstrate the effectiveness of this approach on two fundamental tasks in computational chemistry and biology: molecular conformer generation and molecular docking. In both tasks, we construct the first deep learning method to outperform traditional computational approaches, achieving an unprecedented level of accuracy among scalable programs.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Heteroepitaxial Photodetectors</title>
<link href="https://hdl.handle.net/1721.1/150248" rel="alternate"/>
<author>
<name>Marzen, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/150248</id>
<updated>2023-04-01T04:05:03Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Integrated Heteroepitaxial Photodetectors
Marzen, Stephanie
Optical detection in the near-infrared and telecommunication bands has historically been performed using single-crystal bulk Ge, but the development of Ge-on-Si epitaxy reduced fabrication costs and opened doors for usage in applications including optical communications and infrared imaging. To reap the benefits of monolithic integration and incorporation in the back-end-of-line (BEOL) stack, low processing temperatures (&lt; 450°C) are required. Using novel processing methods and strategic anneals, we have demonstrated that low temperature Ge growths on silicon can achieve the low defect densities required for high performance. In this work, Ge-on-Si p-i-n photodetectors illuminated under normal incidence have demonstrated comparable responsivity and dark current density to devices processed at high temperatures. Relatively low temperature anneals (500°C) increased performance, but even as-grown diodes showed a responsivity of 0.11 A/W and [formula]. Annealing at 500°C for 3 hr improved this performance to 0.15 A/W and [formula].&#13;
&#13;
In the mid-wave infrared (MWIR), photodetection has been successfully implemented for decades using the II-VI material set, Hg₁₋ₓCdₓTe. Extensive research pushed HgCdTe to nearly reach its theoretical performance limit, while also highlighting its inherent shortcomings for commercialization. An upcoming material set, [formula], has the potential to overcome such barriers while promising comparable performance. In this work, Lumerical simulations were performed to optimize a waveguide-integrated photodetector that incorporated an [formula] homojunction and was straightforward to fabricate, assuming successful epitaxy growths. The photodetector design promoted 30% light absorption after 20 &#120583;m propagation into the detection region.&#13;
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Second Harmonic Generation in Silicon Photonic Crystal Resonators for Quantum Optic Applications</title>
<link href="https://hdl.handle.net/1721.1/150247" rel="alternate"/>
<author>
<name>Azzouz, H.</name>
</author>
<id>https://hdl.handle.net/1721.1/150247</id>
<updated>2023-04-01T03:06:38Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Second Harmonic Generation in Silicon Photonic Crystal Resonators for Quantum Optic Applications
Azzouz, H.
A possible way of achieving all-photonic classical and quantum logic gates is with dynamically coupled photonic crystal cavities. A silicon architecture that has octave-separated resonances with high quality factor, low mode volume, and high nonlinear coupling is implemented in the form of a one-dimensional photonic crystal nanobeam cavity for efficient second harmonic generation.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City Image: a dynamic perspective using machine learning and natural language processing</title>
<link href="https://hdl.handle.net/1721.1/150245" rel="alternate"/>
<author>
<name>Wang, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/150245</id>
<updated>2023-04-01T04:01:32Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">City Image: a dynamic perspective using machine learning and natural language processing
Wang, Rui
The city image is a collective mental image of the city elements that can be perceived and interpreted by the public. The broader our understanding of the city image, the better the urban design we can produce. Over half a century ago, Kevin Lynch innovatively introduced this idea and summarized five physical elements – node, path, edge, landmark, and district – guiding the practice of urban design. Many subsequent studies confirmed its stability and further expanded the physical elements from a static viewpoint. However, cities are complex adaptive systems that involve both physical and subjective (affinity and reactions) aspects of temporal change.&#13;
&#13;
This thesis focuses on a dynamic perspective of not only physical but also subjective city images during both day and night, and in different timeframes. Taking advantage of machine learning, this thesis measures how the public values the city based on hundreds of thousands of geo-tagged photos and their textual descriptions, demonstrating the possibility of large-scale studies on the city image. To identify its subjective associations, natural language processing is applied to extract frequently used words and perform sentiment analysis, reflecting the public’s affinity and reactions, positive and negative. Results are presented in the form of data visualization maps and charts. Case studies examined two major US cities and their representative elements – Boston (Fort Point Channel and Boston Common) and New York City (World Trade Center, High Line, and Brooklyn Bridge Park). The main conclusion is that a subjective city image does exist, based on dynamic analysis of Lynch’s physical elements, and that it plays a key role in an in-depth understanding of the city image. Drawing on these state-of-the-art technologies and perspectives, the thesis sheds light on a comprehensive understanding of the city image, formulating a new criterion as a potential guide for urban planning.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Carbon Accounting for Sustainable Computing in Cloud Provisioned Data Centers</title>
<link href="https://hdl.handle.net/1721.1/150244" rel="alternate"/>
<author>
<name>McMillan, Khaalid</name>
</author>
<id>https://hdl.handle.net/1721.1/150244</id>
<updated>2023-04-01T03:54:07Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Carbon Accounting for Sustainable Computing in Cloud Provisioned Data Centers
McMillan, Khaalid
Enterprise digital operations require data-driven and scalable solutions to monitor and manage the increasing environmental impact caused by manufacturing hardware and operating large data center warehouses. Data centers are complex systems comprising heating, ventilation, cooling, power distribution, and workspaces to support employees managing the facility's operation. These workloads drive the global financial sectors, critical supply chains, big data analytics supporting consumer buying habits, and the management of digital records for healthcare and payroll that are critical to the function of modern society. &#13;
&#13;
Although these digital operations are hidden within the confines of these large data centers, out of sight and out of mind, their impact on the environment is not negligible. The world's IP traffic is expected to exceed 2.3 zettabytes. Data centers consume significantly more energy per unit of floor space than typical commercial buildings, accounting for 1.8% of the total energy used in the United States and 1% of energy worldwide. This drives the need to measure and understand the impact of these workloads on the environment so that innovation and optimization can be leveraged to allow us to grow these technologies and digital opportunities sustainably. &#13;
&#13;
To sustainably grow our digital presence, we need a method for tracking the environmental impact that the industry can adopt at scale while allowing for continued growth and development of digital technologies in parallel. There is interest from several government organizations to standardize a method to achieve this. This thesis attempts to illustrate a standardized and flexible way to measure the environmental impacts of digital solutions that combine the embodied emissions caused by manufacturing hardware and operational emissions from both the ICT and non-ICT systems in the data center. This solution would provide a means to track the actual ecological influence of digital operations that can be applied to information technology systems at any scale ranging from small technology systems to large cloud-provisioned data center operations. This work will focus on embodied emissions combined with operational emissions, reporting methodologies, and how that information can be used to distribute workloads using a multi-armed bandit algorithm using Thompson Sampling. &#13;
&#13;
This work illustrates that embodied emissions can constitute anywhere from five to thirty percent of a server's total environmental impact. Emissions can vary by more than a factor of 11 between workloads given the same set of hardware in a data center, and choosing the right configuration can reduce emissions by a factor of 4. Data center location significantly affects operational emissions: simply shifting the region where a workload is executed can reduce emissions by 24% or more. Using a data-driven approach, the spatial distribution of workloads can be optimized against environmental objectives in as few as 30 iterations. The risk from regulation can be reduced, and a competitive advantage gained, by implementing a sustainability-focused digital architecture.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal Collectives Architectural Imaginaries Beyond Modern Comfort</title>
<link href="https://hdl.handle.net/1721.1/150238" rel="alternate"/>
<author>
<name>Cousin, Tim</name>
</author>
<author>
<name>Faber, Olivier</name>
</author>
<id>https://hdl.handle.net/1721.1/150238</id>
<updated>2023-04-01T03:57:05Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Thermal Collectives Architectural Imaginaries Beyond Modern Comfort
Cousin, Tim; Faber, Olivier
The era of abundance is over.&#13;
&#13;
The urgent need for CO2 emission reductions, combined with the rising price of energy and building materials, as well as restrictions on construction waste, call for alternative modes of building and inhabiting our cities.&#13;
&#13;
The notion of “comfort” implies practices of consumption that have shaped our cultural and sensorial experience of domesticity. But “modern comfort”, the one we know today, is a recent construct that was shaped in the aftermath of the post-war economic boom. Modern comfort is characterized by the transition from the tactical heating of human bodies in space to the global and uniform conditioning of spaces themselves — at all times and across all seasons. This was rendered possible by the development of fuel-intensive HVAC systems and supported by complex curtain wall envelopes that have resulted in the industry-wide abandonment of thermal intelligence and its associated material practices.&#13;
&#13;
In a near future context of fuel scarcity, a group of people come together to confront the rising difficulty of maintaining their comfort. Their vision for living together in an alternative mode of dwelling calls for new forms of abundance in a world of scarcity, achieved through thermal intelligence. Their manifesto outlines the following fundamentals: &#13;
&#13;
• Living with thermal properties and climate&#13;
• Collectivizing living spaces&#13;
• Applying thermal intelligence to material ethics in construction and maintenance. &#13;
&#13;
The group surveys the numerous stranded modern office buildings on the outskirts of Paris. They acquire one of them at a bargain, and commission an experienced thermal architect to design the major spatial and infrastructural rearrangements to unlock the building’s passive thermal capacity. In support of the Thermal Collective’s new way of dwelling, the residents share their skills and build and maintain the interior fit out. We will follow the stories of some of the inhabitants as they construct, live in and care for their Thermal Collective.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling siting criteria and identifying alternative licensing pathways for micro-reactors within the existing regulatory framework</title>
<link href="https://hdl.handle.net/1721.1/150236" rel="alternate"/>
<author>
<name>Garcia, Edward J.</name>
</author>
<id>https://hdl.handle.net/1721.1/150236</id>
<updated>2023-04-01T04:02:22Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Scaling siting criteria and identifying alternative licensing pathways for micro-reactors within the existing regulatory framework
Garcia, Edward J.
Micro-reactors are a relatively recent subset of Gen IV reactors currently being developed, and due to their design features (i.e., no onsite refueling, easy transport, and mass production), micro-reactors have the potential to tap into broader markets never before accessed by the nuclear sector, such as micro-grids, industrial heat generation, containerized farming, and electric vehicle (EV) charging, to mention a few. Micro-reactors are an order of magnitude smaller than small modular reactors in terms of thermal power output and utilize alternative coolant systems such as heat pipes. More importantly, micro-reactors are slated to utilize non-traditional (i.e., low enriched uranium) fuels in significantly smaller quantities than the existing reactor fleet in the United States. As a result, the existing regulatory framework and siting criteria are largely unsuited for micro-reactors, and in order for micro-reactors to enter these markets, proper regulatory requirements and siting criteria must be established. The resulting improvement in siting flexibility is important because these broader markets tend to be near or within large population centers. In this thesis we present an argument for scaling micro-reactor criteria and requirements to reflect those of research reactors, specifically for deployment in densely populated urban environments. Our findings reflect a systematic examination of the design bases and regulatory environments for both power and non-power reactors. In the process, we identify key issues and outline policy actions that can be taken within the existing regulatory framework. Moreover, we find that the main difference between a commercial micro-reactor and a research reactor is simply the end destination of their products, which should not warrant a substantially different regulatory treatment of the two classes of reactors.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Architectural Coincidence: guessing consciously, gauging unconsciously</title>
<link href="https://hdl.handle.net/1721.1/150234" rel="alternate"/>
<author>
<name>Wang, Yun</name>
</author>
<author>
<name>Wu, Haotian</name>
</author>
<id>https://hdl.handle.net/1721.1/150234</id>
<updated>2023-04-01T04:06:02Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Architectural Coincidence: guessing consciously, gauging unconsciously
Wang, Yun; Wu, Haotian
Gauging and guessing is a metaphor that represents the duality in the process of architectural design, between logic and intuition, reasoning and chancing, the explicit and the implicit. Throughout history, this duality implies not only different design methodologies but also deeply-rooted design mentalities, the conscious and the unconscious. Gauging is associated with consciousness, representing a will to pursue certain results based on intentional thought processes. Guessing, on the other hand, is associated with unconsciousness, representing an aleatoric chancing to fulfill one’s inner possibilities. However, “gauging consciously” and “guessing unconsciously” inevitably happens on a spectrum with two extremes, either limiting or diluting the discipline of architecture.&#13;
&#13;
This thesis investigates the opposite situations of gauging consciously and guessing unconsciously, experimenting with new ways of involving technologies in the design process to look for the possibility of paradigm shifts. In the first phase, the two research projects “Fake Fake-hill” and “Data-Matter to Data” are attempts to examine the notions of gauging unconsciously and guessing consciously respectively. Using naturally formed rock art and intuitive, hands-on model-making as prompts in combination with digital tools, we aim to show the possibility of pursuing a precise design result without the limit of human consciousness, and of pursuing a natural result without the necessity of unconsciousness. In other words, being naturally precise and precisely natural. Based on this research, the second phase of the thesis tested the methodology of combining the two sets of paradoxes with two design proposals, in search of the “architectural coincidence” that remains oscillating in between.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diffusion Probabilistic Modeling of Protein Backbones in 3D for the Motif-Scaffolding problem</title>
<link href="https://hdl.handle.net/1721.1/150230" rel="alternate"/>
<author>
<name>Yim, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/150230</id>
<updated>2023-04-01T03:49:32Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Diffusion Probabilistic Modeling of Protein Backbones in 3D for the Motif-Scaffolding problem
Yim, Jason
Construction of a scaffold structure that supports a desired motif, conferring protein function, shows promise for the design of vaccines and enzymes. But a general solution to this motif-scaffolding problem remains open. Current machine-learning techniques for scaffold design are either limited to unrealistically small scaffolds (up to length 20) or struggle to produce multiple diverse scaffolds. We propose to learn a distribution over diverse and longer protein backbone structures via an E(3)-equivariant graph neural network. We develop SMCDiff to efficiently sample scaffolds from this distribution conditioned on a given motif; our algorithm is the first to theoretically guarantee conditional samples from a diffusion model in the large-compute limit. We evaluate our designed backbones by how well they align with AlphaFold2-predicted structures. We show that our method can (1) sample scaffolds up to 80 residues and (2) achieve structurally diverse scaffolds for a fixed motif.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Frequency Distributions in Data Streams</title>
<link href="https://hdl.handle.net/1721.1/150228" rel="alternate"/>
<author>
<name>Chen, Justin Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/150228</id>
<updated>2023-04-01T03:47:32Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Estimating Frequency Distributions in Data Streams
Chen, Justin Y.
Streaming algorithms allow for space-efficient processing of massive datasets. The distribution of the frequencies of items in a large dataset is often used to characterize that data: e.g., the data is heavy-tailed, the data follows a power law, or there are many elements that appear only once or twice. In this thesis, we focus on the problem of estimating the profile (a vector representation of the frequency distribution). Given a sequence of m elements from a universe of size n, its profile is a vector φ whose i-th entry φᵢ represents the number of distinct elements that appear in the stream exactly i times. A classic paper by Datar and Muthukrishnan from 2002 gave an algorithm which estimates any entry φᵢ up to an additive error of ±εD using O(1/ε² log(nm)) bits of space, where D is the number of distinct elements in the stream. We considerably improve on this result by designing an algorithm which estimates the whole profile vector φ, up to overall error ±εm, using O(1/ε² log(1/ε) + log(nm)) bits. More formally, we give an algorithm that computes an approximate profile φˆ such that the L₁ distance ‖φ − φˆ‖₁ is at most εm. In addition to bounding the error across all coordinates, our space bound separates the terms that depend on 1/ε and those that depend on n and m. Furthermore, we give a lower bound showing that our bound is optimal up to constant factors.&#13;
&#13;
To achieve these results, we introduce two new techniques. First, we develop hashing-based sketches that keep very limited information about the identities of the hashed elements. As a result, elements with different frequencies are mixed together, and need to be unmixed using an iterative “deconvolution” process. Second, we reduce the randomness used by the algorithms in a somewhat subtle way: we first use Nisan's generator to ensure that the random variables of interest are O(1)-wise independent, and then we analyze those variables by calculating their moments. (In our setting, using Nisan's generator alone would not yield the desired space bound.) The latter technique seems quite versatile, and has already been used for other streaming problems [Ano23].
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>No-Regret Learning in General Games</title>
<link href="https://hdl.handle.net/1721.1/150227" rel="alternate"/>
<author>
<name>Fishelson, Maxwell K.</name>
</author>
<id>https://hdl.handle.net/1721.1/150227</id>
<updated>2023-04-01T03:38:16Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">No-Regret Learning in General Games
Fishelson, Maxwell K.
This thesis investigates the regret performance of no-regret learning algorithms in the competitive, though not fully-adversarial, environment of games. We establish exponential improvements on previously best-known external and internal regret bounds for these settings.&#13;
&#13;
We show that Optimistic Hedge – a common variant of multiplicative-weights-updates with recency bias – attains poly(log T) regret in multi-player general-sum games. In particular, when every player of the game uses Optimistic Hedge to iteratively update her strategy in response to the history of play so far, then after T rounds of interaction, each player experiences total regret that is poly(log T). Our bound improves, exponentially, the O(T¹ᐟ²) regret attainable by standard no-regret learners in games, the O(T¹ᐟ⁴) regret attainable by no-regret learners with recency bias [Syr+15], and the O(T¹ᐟ⁶) bound that was recently shown for Optimistic Hedge in the special case of two-player games [CP20]. A corollary of our bound is that Optimistic Hedge converges to coarse correlated equilibrium in general games at a rate of [formula].&#13;
&#13;
We then extend this result from external regret to internal and swap regret, thereby establishing uncoupled learning dynamics that converge to an approximate correlated equilibrium at the rate of [formula]. This substantially improves over the prior best rate of convergence for correlated equilibria of O(T⁻³ᐟ⁴) due to Chen and Peng (NeurIPS ‘20), and it is optimal up to polylogarithmic factors in T.&#13;
&#13;
The results presented here originate from my works [DFG21] and [Ana+22].
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Restoring Eye Contact in Video Conferencing</title>
<link href="https://hdl.handle.net/1721.1/150226" rel="alternate"/>
<author>
<name>Kim, Jin Woo</name>
</author>
<id>https://hdl.handle.net/1721.1/150226</id>
<updated>2023-04-01T03:29:08Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Restoring Eye Contact in Video Conferencing
Kim, Jin Woo
In recent years, especially during the COVID-19 pandemic, video conferencing applications have become widely adopted, enabling remote work and virtual learning. Despite the convenience, video conferencing has made it challenging and even unnatural to establish eye contact, which is a critical component in visual communication. To create the perception of eye contact in a video conferencing call, a user would need to look directly into the camera, but the user typically looks at the participant displayed on the screen while the camera is located at the top of the screen. This physical deviation results in the perception by the other participant that the user is looking elsewhere.&#13;
&#13;
This work proposes the application of a Convolutional Neural Network (CNN) based 3D face reconstruction technique, Position Map Regression Network (PRNet), on 2D images from a single RGB webcam found in consumer-grade computers to create a newly synthesized video stream where the video conferencing user’s face becomes oriented towards the webcam, resolving the physical deviation between the webcam location and the location of the other participant shown on the screen. Unlike previous approaches, this work fits a pre-trained model onto the specific user to leverage more accurate 3D face reconstruction results.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Propagation from meteorological drought to agricultural drought under climate change</title>
<link href="https://hdl.handle.net/1721.1/150225" rel="alternate"/>
<author>
<name>Gannon, Meriah J.</name>
</author>
<id>https://hdl.handle.net/1721.1/150225</id>
<updated>2023-04-01T03:40:55Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Propagation from meteorological drought to agricultural drought under climate change
Gannon, Meriah J.
Persistent precipitation deficits, or meteorological droughts, often but not necessarily lead to persistent soil moisture deficits, also known as agricultural droughts. For the same amount of accumulated precipitation deficit, an agricultural drought may or may not be triggered depending on a number of catalytic co-factors. In this study we use satellite observations to quantify how coincident warm temperature anomalies make the transition from one drought type to the other occur more frequently. We then show that climate model historical simulations correctly capture this phenomenon. Finally, we show that future climate simulations reveal a marked intensification of the drought propagation from one type to another. With this intensification under climate change, equal precipitation deficits can cause a greater decrease in soil moisture and hence more severe agricultural droughts. We find that elevated temperatures nearly always exacerbate agricultural drought for a given precipitation deficit across multiple models, and future agricultural droughts are likely to be more extreme across the Americas, Europe, and Australia. Projected precipitation increases appear to offset heating amplification of agricultural drought in Africa and southern Asia, but not in North America or Europe. This underscores that changes in precipitation alone are insufficient for estimating drought propagation to soil moisture, which is the pathway to impacts on crops, natural ecosystems, and water resources (e.g., aquifer recharge).
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sailing in the Pirate Sea of Art</title>
<link href="https://hdl.handle.net/1721.1/150224" rel="alternate"/>
<author>
<name>Lee, Tzu-Tung</name>
</author>
<id>https://hdl.handle.net/1721.1/150224</id>
<updated>2023-04-01T03:46:12Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Sailing in the Pirate Sea of Art
Lee, Tzu-Tung
“Sailing in the Pirate Sea of Art” is a manual for the public to replicate and innovate artist Tzu Tung’s projects. Tzu Tung sees zer art as a creative common for public use and an open-source path for creators to innovate together. The manual includes the following: Writing the Time Lag (2016–), an experimental, participatory documentary about field-researching among women and queer people within Indigenous activist groups. #Ghostkeepers (2018–), ritualistic social media protocols for writers to research and create avatars for people who have passed away due to political violence. Positive Coin (2019–), an economy circle designed to increase people’s relation to AIDS-related stigma. Lastly, Forkonomy() (2020–), a contract workshop co-created with Hong Kong artist Winnie Soon, gathers relevant stakeholders to discuss the question “How does one buy/own one milliliter of the South China Sea?” and drafts a contract in response. &#13;
&#13;
Natural and cultural commons have been appropriated through deceitful contracts and the settler’s legal system over the last 500 years; these projects call attention to how participatory projects can be used to pirate the current national property regime and so release these commons to the public. The manual provides examples of the artistic use of participatory and technological forms for more people to cautiously re-conceptualize nationalism, property rights, and different modes of identity. The manual “Sailing in the Pirate Sea of Art” is a commoning ship for individuals and communities to effect artistic and social change by making tangible spaces and intangible digital platforms into places to deposit cultural commons in the art open sea.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disastrous Opportunities: Designing for post-hurricane adaptivity using low carbon construction methods on the destroyed site of Belle Creole, St Martin: a construction research center</title>
<link href="https://hdl.handle.net/1721.1/150223" rel="alternate"/>
<author>
<name>Moreau, Sacha</name>
</author>
<id>https://hdl.handle.net/1721.1/150223</id>
<updated>2023-04-01T03:19:33Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Disastrous Opportunities: Designing for post-hurricane adaptivity using low carbon construction methods on the destroyed site of Belle Creole, St Martin: a construction research center
Moreau, Sacha
Disasters are perceived as singular apocalyptic events, and recovery relies on vernacular shifts that are often too slow to adapt before the next hurricane hits. This thesis redefines hurricanes as an opportunity to innovate in new ways of building that take advantage of the ensuing scarcities and limitations. The site is located on the island of St Martin, where I grew up. After speaking to multiple actors in the government and the building industry, their concerns denote a need for adaptivity to future disasters to enable a new sustainable economy that is not constrained by European and French regulations and empowers the local government to act on its own to explore methodologies of resilience. This thesis answers some of these issues by re-designing the site of Belle Creole, an abandoned hotel resort, and transforming it into a material and disaster resilience research center for construction testing and certification. By focusing on an existing site that has undergone many hurricanes, I propose a symbiotic intervention for building re-use where new, low-carbon and site-specific resources come to support existing damaged architecture. The proposal blends the old with the new to preserve history while innovating for the future and tackling its challenges. New architectural interventions using local materials become an opportunity to tackle systemic issues while reducing material waste and labor. The new center will help expand European regulation, grow skills and jobs in the construction and sustainability industry, and explore solutions to climate change’s impact on the territory. The thesis is defined in three parts: an overview of the territory; the site and its constraints; and the architectural intervention and its evolution through time.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Doodlebugging: A Bayesian Methodology of Design</title>
<link href="https://hdl.handle.net/1721.1/150222" rel="alternate"/>
<author>
<name>SadeghiKivi, Ardalan</name>
</author>
<id>https://hdl.handle.net/1721.1/150222</id>
<updated>2023-04-01T03:38:01Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Doodlebugging: A Bayesian Methodology of Design
SadeghiKivi, Ardalan
Divination can be seen as a systematic method of organizing what appears to be disjointed, random facets of existence such that they provide insight into a problem at hand.¹ Divination practices are epistemic technologies² aiming to obtain factually accurate information about the world in order to inform decisions and actions by supernatural or magical means, and they have been extremely common, possibly even universal.³ Despite this prevalence, the specific methods of divination exhibit substantial variability: deciphering gods' messages from the flight patterns of birds (augury), inquiring about one's fate through the position of the stars and planets at the time of one's birth (astrology), or identifying the cause of a disease by feeding poison to a chicken (chicken oracle). This thesis questions whether modeling enterprises can further contribute to understanding divination as a technology by incorporating it into a design methodology. What I refer to as divination in this project includes inferring meaning from a dowsing wand, traditionally used to find or locate water, in order to inquire about the design that is in equilibrium with the flow and direction of energies in the universe. Furthermore, I will assemble and construct a probabilistic model based on the dowsing experiments to investigate the complexity of HVAC systems.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Enclosure</title>
<link href="https://hdl.handle.net/1721.1/150221" rel="alternate"/>
<author>
<name>Tasistro-Hart, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/150221</id>
<updated>2023-04-01T03:44:43Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Building Enclosure
Tasistro-Hart, Benjamin
Enclosure is usually understood by architects as a material system that creates a distinct thermodynamic atmosphere. The past century of Modernism reified the façade as the dominant architectural element of enclosure. In the Anthropocene, an expanded understanding of enclosure redraws the boundary of shelter beyond the façade to include the territories of land used to produce architecture.&#13;
&#13;
This thesis situates itself in the forests of the Southeastern United States. These predominantly pine forests are usually overlooked in favor of the iconic National Forests of the West Coast. Almost entirely privately owned, the forests in the Southeast are intensely managed, caught between being part nature preserve and part timber stand. Nearly two-thirds of timber construction material in the nation comes from these forests, which have more recently attracted the interest of architects in the form of engineered timber but have for decades lurked in the walls of vernacular dwellings.&#13;
&#13;
Since the early twentieth century, the average dwelling has more than doubled in built area. Larger, more numerous rooms require more material and lead to a reciprocal increase in the intensity of commercial timber operations. Building enclosures that can accommodate the growing populations of the Southeast will need to reframe the static views of dwelling and timber stand as reciprocal sites embedded with cycles of material generation. This project proposes three interventions, each located within periods of material abundance and scarcity found in the timber dwelling cycle. Each intervention engages the dweller with the past, present, and future world of the Southeastern forest.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Approach to Improve Optical Fiber Manufacturing: Focus on Core Deposition</title>
<link href="https://hdl.handle.net/1721.1/150220" rel="alternate"/>
<author>
<name>Sardet, Maëlle J.</name>
</author>
<id>https://hdl.handle.net/1721.1/150220</id>
<updated>2023-04-01T04:06:17Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Data-Driven Approach to Improve Optical Fiber Manufacturing: Focus on Core Deposition
Sardet, Maëlle J.
This thesis presents an in-depth investigation of the characterization of optical fiber preform core manufacturing and the identification of underlying trends in measured production data. While walking through the different operations involved in the process, we explain the challenges associated with ensuring refractive index profile precision and glass purity. Starting with unsupervised learning, process by process, we applied linear and nonlinear dimensionality reduction algorithms (PCA and t-SNE) to feature matrices created from time series data and were able to connect data clusters with context information such as machines or month of the year. Then, considering the core fabrication process as a whole, we studied the propagation of trends in the datasets up to quality measurements, using Dice’s statistic to gauge similarities between sample sets. Finally, we developed data-driven regression models in order to predict the refractive index measured at the end using data from all processes. Kernel algorithms performed the best, and almost as well on raw statistics from all processes as on encoded information about machine sequences and dates. This supervised approach demonstrated great potential for the development of prediction tools which could help design an optimized production line. An underlying objective is to support Sterlite Technologies Limited in applying a data-driven approach to process control for its plants in Waluj and Shendra, starting by implementing good practices for variable measurement, logging, and tracking.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Humans to Detect and Fix Representation Misalignment</title>
<link href="https://hdl.handle.net/1721.1/150218" rel="alternate"/>
<author>
<name>Peng, Andi</name>
</author>
<id>https://hdl.handle.net/1721.1/150218</id>
<updated>2023-04-01T03:40:43Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Leveraging Humans to Detect and Fix Representation Misalignment
Peng, Andi
As robots are increasingly deployed in real-world environments, a key question becomes how best to teach them to accomplish the tasks that end users want. A critical problem suffered by current robot reward and imitation learning approaches is that of representation misalignment, where the robot’s learned task representation does not fully capture the end user’s true task representation. In this work, we contend that because human users will be the ultimate evaluators of system performance in the world, it is crucial that we explicitly focus our efforts on leveraging them to detect and fix representation misalignment prior to attempting to learn their desired task. We advocate that current representation learning approaches can be studied under a single unifying formalism: the representation alignment problem. We mathematically operationalize this problem, define its desiderata, and situate current robot learning methods within this formalism. We then explore the feasibility of applying this formalism to robots trained end-to-end on visual input, where deployment failures can be caused by two types of error: errors due to an inability to infer the user’s true reward vs. errors due to not knowing how to take correct actions in the desired state. We develop a human-in-the-loop framework—DFA (Diagnosis, Feedback, Adaptation)—to query for user feedback and perform efficient policy adaptation. In experiments with real human users in both discrete and continuous control domains, we show that our framework helps users diagnose the underlying source of representation misalignment more accurately than from robot behaviour alone. To conclude, we show how to leverage this feedback to improve model performance while minimizing human effort and discuss open challenges of using humans to detect and fix representation misalignment.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Numerical Synthesis of Arbitrary Multi-Qubit Unitaries with low &#119879;-Count</title>
<link href="https://hdl.handle.net/1721.1/150214" rel="alternate"/>
<author>
<name>Davis, Marc Grau</name>
</author>
<id>https://hdl.handle.net/1721.1/150214</id>
<updated>2023-04-01T03:49:33Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Numerical Synthesis of Arbitrary Multi-Qubit Unitaries with low &#119879;-Count
Davis, Marc Grau
Quantum gate synthesis based on numerical optimization produces efficient circuits for NISQ (Noisy Intermediate-Scale Quantum) computing by minimizing the number of two-qubit gates. The requirements for fault-tolerant quantum computing are significantly different in that some single-qubit gates require magic state distillation and gate teleportation, which are resource intensive. Here, we propose an approach to adapt numerical optimization to error-corrected quantum circuits by using sequential two-pass multistart numerical optimization to reduce the number of &#119877;z gates that must be approximated with Clifford+&#119879; circuits. This technique allows NISQ synthesis based on numerical optimization to be applied to fault-tolerant circuits as well.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Proposal to Improve Korea’s Project Financing Market Using Mixed Methods: Qualitative Approach and AHP Analysis</title>
<link href="https://hdl.handle.net/1721.1/150212" rel="alternate"/>
<author>
<name>Kim, Taeyong</name>
</author>
<id>https://hdl.handle.net/1721.1/150212</id>
<updated>2023-04-01T03:34:01Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Proposal to Improve Korea’s Project Financing Market Using Mixed Methods: Qualitative Approach and AHP Analysis
Kim, Taeyong
The real estate finance industry is exposed to various risks due to diverse economic factors. As such, project financing (PF), which relies on the future cash flow from an asset, can play a positive role in diversifying risk. As in other markets around the world, the PF market in South Korea has expanded over the last decade. However, concerns over PF risks have recently surged in the face of a crisis in the real estate market driven by rising interest rates and increasing material costs. &#13;
&#13;
Insolvency problems in PF have received attention in Korea since the Asian Financial Crisis of 1997 and the Global Financial Crisis of 2008. There are several key factors causing insolvency problems in the Korean PF markets. First, most development projects use the presale method, in which the prepayments from the buyers are used to cover development expenses. Second, credit enhancements from general contractors have been overused and distribution of risk by market participants has remained misaligned. Third, most developers are undercapitalized and use excessive leverage. Fourth, there are issues with the project evaluation systems in terms of professionalism, dependability, and openness.  These factors underlie the potential risks that could adversely affect the stability of the financial system. &#13;
&#13;
This study aims to examine and suggest practical ways to improve the stability of the Korean PF market. To this end, the research exploits qualitative and quantitative research methods.  &#13;
&#13;
In the initial qualitative research section, the study uses a review of the relevant literature and case studies. Specifically, it investigates the primary PF issues associated with “Policy”, “Risk Sharing Structure”, “Developers”, and “Project Evaluation System”. In addition, the thesis proposes four improvement plans: first, “Enhancing the Institutional and Policy Framework”; second, “Activating Risk-sharing Structures”; third, “Improving the Capacity of Developers”; and fourth, “Transforming the Project Evaluation System”. These four measures are classified as high-level improvement plans, and each plan is further divided into three low-level subplans. Thus, we propose twelve (4×3) proposals in total. &#13;
&#13;
Next, in the quantitative research part, an Analytic Hierarchy Process (AHP) analysis is applied to evaluate the relative importance of each of the improvement plans and to determine the priorities of real estate PF participants, who were divided into four groups: “Developers”, “General Contractors”, “Financial Institutions”, and “Other Groups”. Surveys were distributed to 60 experts from across these groups.&#13;
&#13;
The results revealed that among the high-level classification of proposed industry improvement plans, “Enhancing the Institutional and Policy Framework” was the most important and “Improving the Capacity of Developers” was the second most important. At lower levels of classification, the importance of each plan was ranked in the following order: “Limiting Credit Enhancement Measures of General Contractors”, “Strengthening Risk Management System and Regulatory Measures”, and “Increasing Participation of Financial Investor (FI) in PF Market.” The results further show that the ranking of importance for the different reforms varied among the different stakeholder groups. Based on these findings, this study discusses how to improve the Korean PF market and suggests that further qualitative research is needed to find a compromise and reconcile the differing perspectives of its stakeholders.&#13;
&#13;
The contribution of this study is that it identifies the fundamental problems of the PF market in Korea and proposes practical plans for reform. Additionally, by determining the priorities among them, it offers valuable data to guide the direction for the future development of this market.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wrinkles</title>
<link href="https://hdl.handle.net/1721.1/150210" rel="alternate"/>
<author>
<name>Zhang, Daisy Ziyan</name>
</author>
<id>https://hdl.handle.net/1721.1/150210</id>
<updated>2023-04-01T04:04:26Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Wrinkles
Zhang, Daisy Ziyan
As simple as natural law, no physical object will last forever - we are all in a process of perpetual senescence. While our eyes are sensitive to the life cycle of human bodies, they are less attuned to the way buildings age. A building’s pristine original state, if it ever exists, marks only an infinitesimal, ephemeral point in the coordinates of time. Time is in fact the most significant part of the vitality of what we call - architecture.&#13;
&#13;
This thesis uses “wrinkles” as a conceptual thread to study the correlation between a building’s life cycle and the life cycles of the humans who inhabit it. It seeks an alternative to break the binary of before/after, and to redefine the beginning and the end of architectural design. It is also an attempt to learn from the quotidian, the ordinary, and to gently question established forms of architectural authorship. &#13;
&#13;
This thesis is rooted in an anonymous residential building in the center of Mexico City, one with 70 years of history being born, lived, earthquake-destructed, abandoned, repaired, rejuvenated, cared for, and so on. It dances between observation and imagination, and creates storytelling that unearths architecture as a living object, and unwraps its hidden layers of complexity. &#13;
&#13;
Borrowing viewpoints from our allied disciplines of photography, filmmaking, and landscape architecture, this thesis uses the digital camera to access a series of spatial tools, such as photogrammetry, data processing software, the plotting machine, and, most importantly, our eyes. It is an attempt to build an alternative literacy, a set of representations for the fundamental elements architecture is entangled with - time and people. Solutionism is not the offering of this thesis, but rather an outlook for re-discovering architecture as a life-long project.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dislocation</title>
<link href="https://hdl.handle.net/1721.1/150208" rel="alternate"/>
<author>
<name>Liu, Huben</name>
</author>
<id>https://hdl.handle.net/1721.1/150208</id>
<updated>2023-04-01T03:19:28Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Dislocation
Liu, Huben
Price dislocations are common in fixed-income markets. I propose a dislocation factor (DIS), the first principal component of three fixed-income market dislocation measures: covered interest parity, the on-/off-the-run spread, and the Treasury noise measure. DIS surges when the market is under stress and indicates broad market conditions such as liquidity, volatility, and credit. DIS offers insights for understanding asset prices both in the time series and in the cross-section. In the time series, DIS has both explanatory and predictive power for the performance of equity long-short strategies: high DIS is usually followed by lower returns and higher co-movement. In the cross-section, DIS is a priced risk factor and helps explain return variation: more negative exposure to DIS results in a higher average return, compensating for correlated risks.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Machine Learning in Process Control in Optical Fiber Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/150207" rel="alternate"/>
<author>
<name>Othman, Mohamed</name>
</author>
<id>https://hdl.handle.net/1721.1/150207</id>
<updated>2023-04-01T03:36:53Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Application of Machine Learning in Process Control in Optical Fiber Manufacturing
Othman, Mohamed
The current era of big data and IoT has propelled the manufacturing industry into the era of “Industry 4.0”. This thesis presents an approach to manufacturing process control through the use of machine learning models in the optical fiber manufacturing industry. Utilizing measured production data from the fiber drawing tower, a long short-term memory (LSTM) neural network is used to find the correlation between the inputs and outputs of the process. Different experiments were conducted on the physical draw tower and the simulation to gauge the accuracy of the model and how well it mimics the plant’s performance. This thesis then presents an in-depth investigation into the deployment of the digital twin model on an industrial PLC in order to control the diameter of the produced optical fiber at a given setpoint. The model can predict and anticipate changes in the diameter and adjust the gains on the PLC to keep the process under control. This could potentially replace the iterative and laborious process of controller tuning and serve as a tool to be widely utilized in manufacturing settings.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Deep Reinforcement Learning and GAN-Based Market Simulation for Derivatives Pricing and Dynamic Hedging</title>
<link href="https://hdl.handle.net/1721.1/150206" rel="alternate"/>
<author>
<name>Qian, Samson</name>
</author>
<id>https://hdl.handle.net/1721.1/150206</id>
<updated>2023-04-01T03:29:53Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Multi-Agent Deep Reinforcement Learning and GAN-Based Market Simulation for Derivatives Pricing and Dynamic Hedging
Qian, Samson
Advancements in computing capabilities have enabled machine learning algorithms to learn directly from large amounts of data. Deep reinforcement learning is a particularly powerful method that uses agents to learn by interacting with an environment of data. Although many traders and investment managers rely on traditional statistical and stochastic methods to price assets and develop trading and hedging strategies, deep reinforcement learning has proven to be an effective method to learn optimal policies for pricing and hedging. Machine learning removes the need for various parametric assumptions about underlying market dynamics by learning directly from data. This research examines the use of machine learning methods to develop a data-driven method of derivatives pricing and dynamic hedging. Nevertheless, machine learning methods like reinforcement learning require an abundance of data to learn. We explore the implementation of a generative adversarial network-based approach to generate realistic market data from past historical data. This data is used to train the reinforcement learning framework and evaluate its robustness. The results demonstrate the efficacy of deep reinforcement learning methods to price derivatives and hedge positions in the proposed systematic GAN-based market simulation framework.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivity of the ozone layer, climate, and public health to changes in the location of aviation emissions</title>
<link href="https://hdl.handle.net/1721.1/150205" rel="alternate"/>
<author>
<name>Kim, Joonhee</name>
</author>
<id>https://hdl.handle.net/1721.1/150205</id>
<updated>2023-04-01T03:25:13Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Sensitivity of the ozone layer, climate, and public health to changes in the location of aviation emissions
Kim, Joonhee
Aviation’s impact on the ozone layer, climate, and air quality varies based on the location of emissions. Changes from subsonic aircraft emissions due to regional growth, and the potential re-introduction of supersonic transport flying in the stratosphere, present new scenarios that regulations do not currently address. To quantify the atmospheric impacts of aviation emissions, past studies have used global chemistry-transport models. However, these models are not practical in the context of policy analysis because of their high computational costs and lack of uncertainty quantification to support decision-making. Using atmospheric emission sensitivities derived from the GEOS-Chem chemistry-transport model, I develop a new reduced-order, probabilistic model to calculate the ozone, climate, and air quality impacts from aircraft emissions for a full range of possible flight altitudes and latitudes. The current model reports results based on the average of five years of atmospheric impacts. Applying this model to multiple emission scenarios, this thesis explores the variation in environmental impacts across subsonic flights on the basis of flight distance, and across potential supersonic flights with differing cruise altitudes.&#13;
&#13;
Results show that short-haul flights have the greatest air quality-related health impacts per unit of NOx emissions compared to mid- and long-haul flights. These differences are driven by surface PM2.5 changes, which lead to ~8400 premature mortalities per Tg NOx from short-haul emissions, about 1.6-1.8 times greater than estimates from mid- and long-haul NOx emissions. The results from the subsonic and supersonic fleets indicate that the ozone, climate, and air quality impacts from NOx are most sensitive to changes in the altitude of emissions. Subsonic emissions are estimated to increase the global ozone column by 0.33 Dobson Units (DU) per Tg NOx, while a supersonic fleet flying at 18-21 km causes 6.6 DU of ozone destruction per Tg NOx. However, this stratospheric ozone depletion also leads to ~13,000 fewer mortalities per Tg NOx from decreased population exposure to surface ozone. As changing aircraft emissions introduce a variety of new environmental consequences and tradeoffs, understanding the sensitivity of atmospheric impacts to the emission location is essential to inform policies and future aircraft technologies.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Predictability of Wind Power Generation</title>
<link href="https://hdl.handle.net/1721.1/150204" rel="alternate"/>
<author>
<name>Zhang, Vivienne Jiao</name>
</author>
<id>https://hdl.handle.net/1721.1/150204</id>
<updated>2023-04-01T04:06:28Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Improving Predictability of Wind Power Generation
Zhang, Vivienne Jiao
Wind energy plays an important role in decarbonizing the economy and accounts for a growing share of electricity supply in the United States. However, the availability of wind resources is highly dependent on variable factors such as weather and local geography, making wind power generation forecasting a particularly difficult task. This adds to the challenge of grid management, which requires that the supply of electricity equal demand at all times. Complicating the effort to improve wind power predictability is a lack of empirical data, since wind power generation data are proprietary and often considered business secrets. To address this lack of empirical study, this thesis uses actual generation data from 2016 to 2021 from seven anonymized wind farms in the Midwestern United States that range from 50MW to 235MW in size. The experiments demonstrate how machine learning methods can be used to forecast wind power generation at different time intervals, and how the accuracy of forecasting can be significantly improved by using a combination of newly extracted weather forecast data and weather measurement data. The economic benefits of more accurate forecasting are then studied using a simulation with market data from the Midcontinent Independent System Operator and the Southwest Power Pool. The thesis then explores whether the predictability of wind power generation can be improved by placing weather stations closer to the wind forecast sites. These findings can inform investment decisions regarding weather monitoring stations and forecasting models, which can help electricity market participants adapt to a grid with an increasing share of renewable resources.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Zephyr: a Data-Centric Framework for Predictive Maintenance of Wind Turbines</title>
<link href="https://hdl.handle.net/1721.1/150202" rel="alternate"/>
<author>
<name>Hartwell, Frances R.</name>
</author>
<id>https://hdl.handle.net/1721.1/150202</id>
<updated>2023-04-01T03:29:18Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Zephyr: a Data-Centric Framework for Predictive Maintenance of Wind Turbines
Hartwell, Frances R.
Because wind turbines often operate through harsh weather events, under variable operating conditions, and in difficult-to-access locations, turbine maintenance is often challenging and costly. In this thesis, we present Zephyr, a flexible machine learning framework for predictive maintenance of wind energy assets. Manual analysis of wind turbine data is difficult and time-consuming due to its volume, variety, and, most importantly, the need for quick detection of issues. Machine learning (ML) methods are able to automate large-scale data analysis. However, the enormous amount of contextual information required to actually understand the data impedes the ability of ML frameworks to provide actionable insights. To this end, Zephyr enables Subject Matter Experts (SMEs) to incorporate their knowledge at various stages of ML model development. The Zephyr framework consists of a signal-processing-based featurization library; a data labeling algorithm, which helps analyze operational data and maintenance events in order to create labels for machine learning problems; and a set of automated machine learning pipelines for predicting outcome types. SMEs incorporate their expertise by providing labeling functions, bands for frequency domain-based featurization, and several other inputs in an intuitive way. We demonstrate the efficacy of this framework through two case studies involving maintenance operation data from wind turbines. Moreover, we show that incorporating domain expertise can increase ML performance by as much as 48%.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Charting of Classified Audio Data</title>
<link href="https://hdl.handle.net/1721.1/150201" rel="alternate"/>
<author>
<name>Archer, William</name>
</author>
<id>https://hdl.handle.net/1721.1/150201</id>
<updated>2023-04-01T03:23:09Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Visual Charting of Classified Audio Data
Archer, William
To analyze hundreds of hours of audio recorded in field testing, it is useful to apply previously trained machine learning algorithms that classify the data into discrete time-stamped events. However, this is only a first step. A programmatic method is needed to make sense of the thousands of classified events. In a field testing environment, new charts need to be generated quickly so that conclusions can be drawn while the field test is still ongoing. The generation of these charts also needs to be flexible enough to accommodate on-the-fly changes. This thesis describes the development of a highly customizable method to visually chart labeled audio data in an easily understandable format.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometry-Sensitive Swarm Algorithms</title>
<link href="https://hdl.handle.net/1721.1/150200" rel="alternate"/>
<author>
<name>Cai, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/150200</id>
<updated>2023-04-01T03:10:37Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Geometry-Sensitive Swarm Algorithms
Cai, Grace
Insect colonies, birds, and fish successfully coordinate themselves to make collective decisions through purely local interactions. Their behaviors have inspired the development of algorithms for robotic swarms. Robotic swarms consist of many simple and often identical agents that interact solely via local sensing and communication. Swarm algorithms seek to produce collective behaviors within the swarm such as aggregation, consensus achievement, and task allocation.&#13;
&#13;
For both biological and robotic swarms, simulation is a powerful tool for analyzing and improving swarm models. While simulating swarm algorithms is very useful, detailed and realistic simulations can take a long time to run and gather feedback from. This often leads to simplifications, especially in the geometry of the problem, that are not representative of what swarms may see in the real world [4, 13, 44]. In this thesis, we present a new discrete modelling framework for swarm algorithms that allows agents to synchronously transition on a discrete grid. Using such a grid makes it easier to add geometric qualities to the agents’ environment and allows for parallel speedups in simulation time when developing swarm algorithms or models of biological swarms.&#13;
&#13;
Using our new framework, we study the existing swarm problems of house hunting and task allocation, and provide new algorithms which are more geometrically-sensitive than previous work [4, 13, 23, 32, 42, 44].&#13;
&#13;
In Chapter 3, we develop a house hunting algorithm that is able to choose the best nest even when it is very far away from the swarm’s home nest or is being blocked by other poorer quality candidates. Our algorithm chooses the best quality nest much more consistently than previous work, which had not considered these geometrically challenging setups.&#13;
&#13;
In Chapter 4, we develop two new task allocation algorithms for agents in an environment with unknown task locations and demands, and test these algorithms in environments of varying task density. We show that one of our algorithms, inspired by the communication of house-hunting agents via a home nest, outperforms Levy flight foraging in environments with sparse task density. Our other algorithm, inspired by communication via virtual pheromones, completes tasks even faster and performs
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Effective Platform for Assessing Cognitive Health</title>
<link href="https://hdl.handle.net/1721.1/150199" rel="alternate"/>
<author>
<name>Cook, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/150199</id>
<updated>2023-04-01T03:54:31Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">An Effective Platform for Assessing Cognitive Health
Cook, Jack
Over the last two decades, mortality rates due to Alzheimer’s disease and related dementias (ADRD) have more than doubled in the United States. Currently available treatments for Alzheimer’s disease are more effective in the disease’s earlier stages, but making early diagnoses remains difficult. The most common method for diagnosing early-stage ADRD involves routine cognitive assessments under the supervision of a medical professional, a costly and time-consuming process. Furthermore, these assessments are often administered with pen and paper, making it difficult to measure many behaviors expressed by the patient.&#13;
&#13;
In this work, we develop an app for a tablet computer and stylus that administers novel variations of three established cognitive assessments: the Clock Drawing Test, the Maze Test, and the Symbol Digit Test. The app aims to replicate the role of a human administrator by providing instructions and correcting mistakes as patients complete each assessment. It also collects a wealth of data, such as pen strokes and patient movements, that can be used to aid medical professionals in making an accurate diagnosis. Combined, these innovations make it easy for patients to routinely complete assessments at home, on their own devices. We hope this reduces barriers toward diagnosing early-stage ADRD.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BP-Tree: Overcoming the Point-Range Operation Tradeoff for In-Memory B⁺-trees</title>
<link href="https://hdl.handle.net/1721.1/150198" rel="alternate"/>
<author>
<name>Li, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/150198</id>
<updated>2023-04-01T03:23:23Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">BP-Tree: Overcoming the Point-Range Operation Tradeoff for In-Memory B⁺-trees
Li, Amanda
This thesis presents the BP-tree, an efficient concurrent key-value store based on the B⁺-tree that uses large leaf nodes to optimize for range-query performance without sacrificing update speed. B⁺-trees are a fundamental data structure for implementing in-memory indexes in databases and storage systems. B⁺-trees support both point operations (i.e., inserts and finds) and range operations (i.e., iterators and maps). However, there is an inherent tradeoff between point and range operations, since the optimal node size for point operations is much smaller than the optimal node size for range operations. To avoid any slowdown in point operations, this thesis introduces a novel insert-optimized array called the buffered partitioned array (BPA) to efficiently organize data in leaf nodes.&#13;
&#13;
Using the buffered partitioned array, the BP-tree overcomes the decades-old tradeoff between point and range operations in B⁺-trees. Experiments show that on 48 hyperthreads, the BP-tree supports slightly faster (by about 1.1×) point operations than the best-case configuration of B⁺-trees for point operations, while supporting between 1.4×–1.7× faster range operations. On workloads generated from the Yahoo! Cloud Serving Benchmark (YCSB), the BP-tree is faster (by about 1.1×) on all point operation workloads compared to the B⁺-tree, and slower (by about 1.15×) on the short range operation workload. Furthermore, this work extends the YCSB to include large scan and map workloads, commonly found in database applications, and finds that the BP-tree is between 1.2×–1.4× faster than the B⁺-tree on these workloads. This thesis contains my joint work with Helen Xu, Brian Wheatman, Manoj Marneni, and Prashant Pandey.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient simulation of Large-Scale Superconducting Nanowire Circuits</title>
<link href="https://hdl.handle.net/1721.1/150197" rel="alternate"/>
<author>
<name>El Dandachi, Tareq “Torque”</name>
</author>
<id>https://hdl.handle.net/1721.1/150197</id>
<updated>2023-04-01T04:02:52Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Efficient simulation of Large-Scale Superconducting Nanowire Circuits
El Dandachi, Tareq “Torque”
As the size of superconducting nanowire devices increases and the influence of second-order effects, such as thermal or electrostatic coupling, becomes more significant, the models required to accurately and efficiently simulate a device’s behavior become increasingly complex. Traditional circuit simulators used for superconducting devices tend to focus on frequency-domain simulation and are not optimized for simulating superconducting nanowire geometries in the time domain. This thesis presents an integrated simulator environment designed with the goal of simulating superconducting nanowires. The work presented in this thesis introduces:&#13;
&#13;
1. an integrated environment for SPICE software that extends its modeling capabilities optimized for superconducting nanowire devices and accompanying experiments;&#13;
&#13;
2. a simple procedure to measure the stability of circuit models used to present an improved nanowire SPICE model; and&#13;
&#13;
3. an efficient Julia-based simulator optimized for superconducting nanowire devices and nonlinear microwave circuits.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improvements to LEO Tracking on The Portable Telescope for Lasercom (PorTeL)</title>
<link href="https://hdl.handle.net/1721.1/150195" rel="alternate"/>
<author>
<name>Harburg, Jacob F.</name>
</author>
<id>https://hdl.handle.net/1721.1/150195</id>
<updated>2023-04-01T03:02:23Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Improvements to LEO Tracking on The Portable Telescope for Lasercom (PorTeL)
Harburg, Jacob F.
Small satellites are getting more advanced and generating more data.  This has put a strain on their communications systems.  Laser communications can offer higher bandwidth at the same size, weight, and power (SWaP) as conventional radio frequency (RF) communications.  Laser communications also avoid the strict spectrum regulation which RF communications must comply with.  These factors make laser communications particularly advantageous for small satellites.&#13;
&#13;
Laser communications can outperform RF communications due to the higher carrier frequency and consequently narrower beams.  These narrower beams, however, impose strict pointing requirements between transmitters and receivers.  Another issue that laser communications must contend with is weather.  The optical wavelengths used in laser communications are heavily attenuated by clouds.  This can be a significant problem, as cloud cover can render ground stations inoperable.  This motivates the need for a network of affordable and portable ground stations that can be easily redeployed to respond to changing weather conditions.&#13;
&#13;
The Portable Telescope for Lasercom (PorTeL) is an optical ground station that was developed at MIT to address the need for affordable and portable ground stations.  PorTeL was designed using a low cost commercial off-the-shelf (COTS) architecture.  This thesis focuses on updates made to PorTeL to enable it to receive optical downlinks from the CubeSat Laser Infrared CrosslinK (CLICK) mission satellites in low-Earth orbit (LEO).  We show that PorTeL is able to track CLICK-A with an accuracy of 11.1 arcseconds, well within its requirement of 21.8 arcseconds.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete Approximate Information States in Partially Observable Environments</title>
<link href="https://hdl.handle.net/1721.1/150192" rel="alternate"/>
<author>
<name>Yang, Lujie</name>
</author>
<id>https://hdl.handle.net/1721.1/150192</id>
<updated>2023-04-01T03:39:00Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Discrete Approximate Information States in Partially Observable Environments
Yang, Lujie
The notion of approximate information states (AIS) was introduced in [31] as a methodology for learning task-relevant state representations for control in partially observable systems. The authors proposed particular learning objectives which attempt to reconstruct the cost and next state and provide a bound on the suboptimality of the closed-loop performance, but it is unclear whether these bounds are tight or actually lead to good performance in practice. Here we study this methodology by examining the special case of discrete approximate information states (DAIS). In this setting, we can solve for the globally optimal policy using value iteration for the DAIS model, allowing us to disambiguate the performance of the AIS objective from the policy search. Going further, for small problems with finite information states, we reformulate the DAIS learning problem as a novel mixed-integer program (MIP) and solve it to its global optimum; in the infinite information states case, we introduce clustering-based and end-to-end gradient-based optimization methods for minimizing the DAIS construction loss. We study DAIS in three partially observable environments and find that the AIS objective offers relatively loose bounds for guaranteeing monotonic performance improvement and is sufficient but not necessary for implementing optimal controllers. DAIS may even prove useful in practice by itself or as part of mixed discrete- and continuous-state representations, due to its ability to represent logical state, its potential interpretability, and the availability of these stronger algorithms.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computationally Efficient Reinforcement Learning under Partial Observability</title>
<link href="https://hdl.handle.net/1721.1/150191" rel="alternate"/>
<author>
<name>Rohatgi, Dhruv</name>
</author>
<id>https://hdl.handle.net/1721.1/150191</id>
<updated>2023-04-01T03:43:04Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Computationally Efficient Reinforcement Learning under Partial Observability
Rohatgi, Dhruv
A key challenge in reinforcement learning is the inability of the agent to fully observe the latent state of the system. Partially observable Markov decision processes (POMDPs) are a generalization of Markov decision processes (MDPs) that model this challenge. Unfortunately, planning and learning near-optimal policies in POMDPs is computationally intractable. Most existing algorithms either lack provable guarantees, require exponential time, or only apply under stringent assumptions about either the dynamics of the system or the observation model.&#13;
&#13;
This thesis shows that the computational intractability of planning and learning in worst-case POMDPs is fundamentally due to degeneracy in the observation model, in that making an appropriate assumption about the informativeness of the partial observations (of the latent state) mitigates intractability. Specifically, we show that planning and learning are both possible in quasi-polynomial time for gamma-observable POMDPs, where gamma-observability is the assumption that c-well-separated distributions over the latent states induce (gamma*c)-well-separated distributions over observations. These are the first sub-exponential time planning and learning algorithms for POMDPs under reasonable assumptions. While falling just short of polynomial time, it turns out that quasi-polynomial time is optimal for gamma-observable POMDPs under standard complexity assumptions.&#13;
&#13;
The main technical innovation driving our algorithmic results is a new quantitative connection between gamma-observability and the stability of posterior distributions for the latent state in Hidden Markov Models and (more generally) POMDPs. Essentially, stability implies that old observations have limited relevance to the current state, and hence "short-memory" policies that only depend on a short window of recent observations are nearly optimal.&#13;
&#13;
This connection has several applications beyond planning and learning POMDPs. Leveraging gamma-observability, we give a quasi-polynomial time algorithm for (improperly) learning overcomplete HMMs that does not require a full-rankness assumption on the transition matrices. We also give a quasi-polynomial time algorithm for planning coarse correlated equilibria in partially observable Markov games.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating Interactive Experiences and Visualizing Computer Science Concepts to aid Student Understanding</title>
<link href="https://hdl.handle.net/1721.1/150190" rel="alternate"/>
<author>
<name>Meza, Adrian Leonardo</name>
</author>
<id>https://hdl.handle.net/1721.1/150190</id>
<updated>2023-04-01T03:08:05Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Creating Interactive Experiences and Visualizing Computer Science Concepts to aid Student Understanding
Meza, Adrian Leonardo
Computer Science courses at MIT typically include lectures, recitations, and a handful of problem sets, quizzes, and exams. We believe that an essential middle step is missing among these: one between the stage of teaching where concepts are initially introduced, and the stage of learning where mastery of those concepts is applied &amp; tested. This middle step should be interactive, fun, and engaging, and should allow students to play around with the concepts they learn without having to build their own tools from scratch. For example, when we first introduce graph search in an intro to computational thinking class, although we use a variety of visual aids, we never give the students a way to run and visualize the algorithms in action on some examples. We have them write code that eventually builds up to that point; but we argue that offering them a tool to master concepts, such as the patterns of various graph search algorithms, before they have to code it up, would lead to a better grasp of the material. In this thesis we introduce a set of tools called Sandboxes to provide this functionality for the key concepts covered in the course 6.100B: Introduction to Computational Thinking and Data Science.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Z-Order Indexes with Dynamic Bit Allocation</title>
<link href="https://hdl.handle.net/1721.1/150189" rel="alternate"/>
<author>
<name>Gao, Jenny</name>
</author>
<id>https://hdl.handle.net/1721.1/150189</id>
<updated>2023-04-01T04:06:10Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Learning Z-Order Indexes with Dynamic Bit Allocation
Gao, Jenny
The Z-order curve is a space-filling curve that maps multi-dimensional data to single-dimensional values. Z-order has been used in databases to sort multi-dimensional data. Modern data management systems such as Amazon Redshift and Databricks Delta Lake give users the ability to sort on multiple columns using Z-order. However, the Z-order is difficult to tune, with tunable parameters such as which columns to include in the Z-order. Currently, users must specify the columns for Z-order when using these systems and will not necessarily achieve the best performance, as the choice of columns has a significant impact on performance. Another drawback is that these systems give equal weight to the columns, which often does not result in the best performance due to the unequal impact columns have on query performance. Our work aims to automatically determine the best Z-order configuration for a particular dataset and workload. In this thesis, we introduce learned Z-order indexes using an approach we refer to as dynamic bit allocation, which considers not only which columns to include, but also the weight to put on each column. Our learned Z-order indexes outperform existing techniques by up to 11× in query time and up to 30.2× in rows scanned, revealing the potential of tuning Z-order to improve query performance.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unified Compilation for Lossless Compression and Sparse Computing</title>
<link href="https://hdl.handle.net/1721.1/150186" rel="alternate"/>
<author>
<name>Donenfeld, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/150186</id>
<updated>2023-04-01T03:17:53Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Unified Compilation for Lossless Compression and Sparse Computing
Donenfeld, Daniel
Achieving high performance for computations on tensors depends heavily on the formats used to store them. While sparse tensors are very common, there are more general patterns in data which can sometimes be better captured using lossless compression.&#13;
&#13;
We show how to extend sparse tensor algebra compilers to support lossless compression techniques, including variants of run-length encoding and Lempel-Ziv compression. We develop new abstractions to represent losslessly compressed data as a generalized form of sparse tensors, with repetitions of values (which are compressed out in storage) represented by non-scalar, dynamic fill values. We then show how a compiler can use these abstractions to emit efficient code that computes on losslessly compressed data. By unifying lossless compression with sparse tensor algebra, our technique is able to generate code that computes with both losslessly compressed data and sparse data, as well as generate code that computes directly on compressed data without needing to first decompress it.&#13;
&#13;
We evaluate two implementations of our techniques, using a prototype compiler based on TACO, and an implementation of our formats within Finch. Our evaluation using our TACO compiler shows our technique generates efficient image and video processing kernels that compute on losslessly compressed data. We find that the generated kernels are up to 16.3× faster than equivalent dense kernels generated by TACO, a tensor algebra compiler, and up to 16.1× faster than OpenCV, a widely used image processing library. Using our Finch formats, we see compression ratios up to 25× with run-time speedups up to 3.1× over dense computation for reduction computations.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GSTACO: A Generalized Sparse Tensor Algebra Compiler</title>
<link href="https://hdl.handle.net/1721.1/150185" rel="alternate"/>
<author>
<name>Dima, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/150185</id>
<updated>2023-04-01T03:03:19Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">GSTACO: A Generalized Sparse Tensor Algebra Compiler
Dima, Alexandra
Many applications in engineering and computer science are characterized by sparse multi-dimensional data. Therefore, optimizations for sparse tensor algebra have received a lot of attention lately. Several hardware and software solutions have emerged in order to speed up the computation of certain tensor expressions, but none of them provides an interface that is general and comprehensive enough to meet the requirements of complex applications like graph analysis. In this work we attempt to identify where previous solutions fell short and build the Generalized Sparse Tensor Algebra Compiler (GSTACO), a new compiler aiming to fill in the engineering gaps of efficient sparse computation.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motivated Agents</title>
<link href="https://hdl.handle.net/1721.1/150184" rel="alternate"/>
<author>
<name>Pandit, Shreya</name>
</author>
<id>https://hdl.handle.net/1721.1/150184</id>
<updated>2023-04-01T03:02:16Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Motivated Agents
Pandit, Shreya
Motivation is a powerful force that drives human action and behavior. It drives us to pursue our goals and aspirations and can significantly impact our decision-making processes. In the field of artificial intelligence, the most common method for modeling human action and decision-making is through reinforcement learning, which relies on external reward-based learning mechanisms to influence the agent’s behavior. While rewards are a primary incentive for learning both in the brain and in machines, recent studies have shown that reward signals in the brain influence motivated behavior in a way that is distinct from learning. In this paper, we design a motivated agent that makes decisions based on individual motivation, rather than learning. To do this, we set out to demonstrate that a motivated agent can outperform a learning agent in a sparse reward environment. We also propose a framework for a goal sustaining mechanism based on dopamine firing, and demonstrate how this component immediately impacts the agent’s behavior in a grid environment without relying on learning. In summary, our work aims to contribute to the understanding of motivation and its role in decision-making, both in humans and in artificial intelligence. By designing a motivated agent that can make decisions based on individual motivation, we hope to shed light on how this fundamental aspect of human psychology can be modeled and utilized in artificial intelligence.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Based Flood Risk Modeling Using Features from Satellite, Socioeconomic, Geographic, and Building Data</title>
<link href="https://hdl.handle.net/1721.1/150182" rel="alternate"/>
<author>
<name>Ray, Anushka</name>
</author>
<id>https://hdl.handle.net/1721.1/150182</id>
<updated>2023-04-01T03:58:49Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Machine Learning Based Flood Risk Modeling Using Features from Satellite, Socioeconomic, Geographic, and Building Data
Ray, Anushka
Due to the effects of climate change coupled with increased urbanization, many cities will be experiencing more frequent and intense flooding in the future. As a result, it would be very beneficial for urban planners to have a low-cost and efficient modeling tool that can determine the flood risk at a granular level such as the census tract. Boston is one such coastal urban city that will experience an increase in flooding. Since each census tract in Boston is unique and varies in population and land use, urban planners and policy makers must know which areas in Boston are the most vulnerable to provide them with resources. This research proposes a machine learning based model that evaluates the flood risk of census tracts in Boston. The overall flood risk of a census tract is determined by aggregating relevant features such as land cover data from aerial satellite imagery via semantic segmentation methods, elevation, slope, and flow accumulation. In addition to these flood hazard features, we also integrate flood vulnerability features from socioeconomic data and building information for each census tract.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computing Fibers: Architectures and Applications</title>
<link href="https://hdl.handle.net/1721.1/150181" rel="alternate"/>
<author>
<name>Cheung, Henry Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/150181</id>
<updated>2023-04-01T03:29:34Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Computing Fibers: Architectures and Applications
Cheung, Henry Y.
Fibers and fabrics worn on the body every day are in close contact with a wealth of valuable physiological signals, from heartbeats to temperature to blood pressure, providing insight into a wearer’s health status. However, conventional methods for physiological monitoring rely on rigid wearables with small areas of contact and added physical and mental burden on the user, limiting both the breadth of data and the length of time it can be acquired. In this thesis, we demonstrate how thermally drawn polymeric fibers capable of sensing, storing, processing, and communicating information while employing complex algorithms can contextualize physiological data into valuable health insights. We show the development of flexible interposers which can remap electrical contacts of complex integrated circuits to be more amenable to the convergence thermal draw process. These integrated circuits are then used to design an expandable system architecture that can accommodate multiple sensors along with optical and wireless links that can create networks of fiber computers on fabric, while maintaining re-programmability of the computing elements. Lastly, we show how on-fabric networks of computing fibers can enable complex applications such as blood pressure monitoring and frostbite detection by collecting and processing biometric data, conferring with multiple fibers across the fabric, and deciding on actions for the wearer.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monolithically 3-D Printed, Quadrupole Mass Filter for High-Precision, Compact, CubeSat Mass Spectrometry</title>
<link href="https://hdl.handle.net/1721.1/150180" rel="alternate"/>
<author>
<name>Diaz, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/150180</id>
<updated>2023-04-01T03:46:07Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Monolithically 3-D Printed, Quadrupole Mass Filter for High-Precision, Compact, CubeSat Mass Spectrometry
Diaz, Alejandro
Mass spectrometry is the gold standard for quantitative chemical analysis. However, mainstream mass spectrometers are large, heavy, and power hungry, restricting their ability to be deployed in in-situ, portable, and hand-held scenarios, e.g., drones and CubeSats (measurements in the atmosphere are crucial for monitoring climate change). Miniaturization has been attained at the expense of great loss in performance, caused in part by fabricating non-ideal electrode shapes and having low relative assembly precision. Via additive manufacturing, it is possible to create complex objects, e.g., mass filters with better-shaped and better-arranged electrodes. We report the design, fabrication, and characterization of the first monolithically 3D-printed, hyperbolic, compact RF quadrupole mass filters. The devices were made using an advanced, multi-material extrusion printer. The devices are tested as RF-only collision cells for use in miniaturized triple quadrupole mass spectrometers and as quadrupole mass filters for miniaturized mass spectrometers (we detected argon at a resolution of 5 and a 1-250 amu mass range). We also developed compact electronics to drive the quadrupoles that are compatible with the size, weight, and power constraints of deployable platforms, such as CubeSats (&lt;3W, up to 400 Vpp sinusoidal amplitude, 1-3MHz, &gt;2000 voltage steps for 100:1 resolution). This work provides an opportunity for more precise, low-power, deployable, and compact mass spectrometry systems.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Continuation Stealing in Julia</title>
<link href="https://hdl.handle.net/1721.1/150176" rel="alternate"/>
<author>
<name>Trollback, August</name>
</author>
<id>https://hdl.handle.net/1721.1/150176</id>
<updated>2023-04-01T03:17:29Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Continuation Stealing in Julia
Trollback, August
Work stealing schedulers are widely used by parallel programming platforms to distribute tasks across multiple processors. Memory blowup from scheduling a program with work stealing can be bounded by using continuation stealing when new tasks are spawned. Continuation stealing is opposed to child stealing, a different method for spawning tasks that is simpler to implement but comes at the expense of potentially unbounded memory use. An extension to the Julia programming language that adds support for optimizable spawn and sync parallel constructs has been proposed, but it does not currently support continuation stealing. In my thesis, I implement continuation stealing in Julia using two different metaprogramming approaches. One approach uses Julia’s macro system, while the other uses the Julia compiler’s intermediate representation (IR) of functions. My results show that the IR-based approach uses less memory than the child stealing implementation in the proposed extension to Julia, while having similar speed.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Early Plume Development and NOx Chemistry in LOx/H₂ and LOx/CH₄ Liquid Rocket Engines</title>
<link href="https://hdl.handle.net/1721.1/150174" rel="alternate"/>
<author>
<name>Hagström, China G.</name>
</author>
<id>https://hdl.handle.net/1721.1/150174</id>
<updated>2023-04-01T03:21:43Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Early Plume Development and NOx Chemistry in LOx/H₂ and LOx/CH₄ Liquid Rocket Engines
Hagström, China G.
Projected rocket launch demand over the next 10 years implies an order-of-magnitude increase in CO₂-equivalent gases. Despite this expected increase in launch frequency, the on-Earth environmental impacts of off-Earth missions are still understudied, partially due to the lack of combustion and emissions data for these vehicles.&#13;
&#13;
Existing rocket combustion and emission models do not account for the altitude dependence of anthropogenic NOₓ formation from rocket engines, and this dependence has not been adequately evaluated. I model the altitude-dependent chemical composition of rocket emissions from 10 km to 40 km. Reaction chemistry in the combustion chamber, nozzle, and plume is modeled with a focus on the implications of NOₓ formation. Analysis of the combustion chamber, nozzle, and post-exhaust chemistry for the Space Shuttle Main Engine (SSME) and SpaceX Raptor Engine (SRE) is performed.&#13;
&#13;
The resulting estimation of altitude dependent NOₓ formation in multiple vehicles can be used in global atmospheric models. In the future, results of this work will inform environmental harm reduction strategies and guidelines.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-Aware AI-Assistant</title>
<link href="https://hdl.handle.net/1721.1/150173" rel="alternate"/>
<author>
<name>La, Ngoc</name>
</author>
<id>https://hdl.handle.net/1721.1/150173</id>
<updated>2023-04-01T04:07:49Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Human-Aware AI-Assistant
La, Ngoc
In many complex situations like high-demand kitchens or busy emergency rooms, humans often have to make high-quality decisions under high pressure in a short amount of time. Having an AI-assistant with the ability to support humans in those scenarios can help reduce their workload and stress, thus improving their performance. In this thesis, we aim to design and implement an AI-assistant that has the ability to provide useful recommendations when necessary. To achieve this goal, the AI-assistant needs to be able to plan good actions according to the situation, predict humans’ behaviors, and utilize this information to provide useful recommendations to humans when necessary. With these requirements, the AI-assistant is designed with three components: planning, inference, and communication. A simulated kitchen environment with two levels of actions, subtask and primitive action, is used as a platform for designing, implementing, and testing the AI-assistant. Six supervised learning methods and two Deep Q Network structures are trained and analyzed to find the best models for the AI-assistant’s planning and inference systems. The results of training and testing different methods suggest using the DQN models as planners for simple scenarios without accidents, and Decision Tree classifiers as planners for more complicated scenarios. The AI-assistant’s inference system is built with Decision Tree classifiers. Two communication protocols, discrete and extended, are carefully studied to make sure the AI-assistant has the ability to provide recommendations just-in-time. While the discrete protocol is easier to tune, the extended protocol performs better in some cases. In conclusion, the AI-assistant with three components is successfully built and proven to help improve agents’ performance in multiple Overcooked scenarios.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forced Response Measurements of Cavitation Dynamics in a Rocket Engine Turbopump Inducer</title>
<link href="https://hdl.handle.net/1721.1/150170" rel="alternate"/>
<author>
<name>Campbell, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/150170</id>
<updated>2023-04-01T03:52:42Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Forced Response Measurements of Cavitation Dynamics in a Rocket Engine Turbopump Inducer
Campbell, Matthew
Cavitation is a serious concern in liquid fueled launch vehicle propulsion systems which operate at high speeds to meet the engine thrust requirements. There are several types of cavitation instabilities, but of concern in this work is one-dimensional planar oscillations known as cavitation surge. In a launch vehicle the resulting dynamic behavior can lead to thrust oscillations and couple with the structures of the launch vehicle, leading to POGO instability (named after the POGO jumping stick) that can cause catastrophic failure.&#13;
&#13;
This thesis introduces a foundation for a method of forced response characterization of cavitating inducers and presents, for the first time, damping ratio and natural frequency measurements of surge dynamics using a forced response system identification approach over different cavitation numbers. Cavitation dynamics have been experimentally characterized in the past via transmission matrices. In this work, dynamic pressure and velocity measurements were used to form transfer functions of the cavitating inducer. Experimental guidelines were developed to increase the stiffness of the structure to isolate fluid perturbations and to condition the flow downstream of the inducer using honeycomb and wire mesh elements, which increased the signal-to-noise ratio of the velocity measurements by 28%; signal processing techniques were applied to average and smooth the measurements.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to drive Thailand Developers Toward Net Zero: Lessons Learned from the Developer's Perspective and the Global Studies</title>
<link href="https://hdl.handle.net/1721.1/150169" rel="alternate"/>
<author>
<name>Lertpunyaroj, Ravisara</name>
</author>
<id>https://hdl.handle.net/1721.1/150169</id>
<updated>2023-04-01T03:49:02Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">How to drive Thailand Developers Toward Net Zero: Lessons Learned from the Developer's Perspective and the Global Studies
Lertpunyaroj, Ravisara
In recent years, there has been a growing concern regarding climate change risks to real estate in developed and developing countries. Climate change refers to long-term shifts in temperatures and weather patterns. Since the 1800s, human activities have been the key driver of climate change due to burning fossil fuels such as coal, oil, and gas, which generate greenhouse gas emissions that trap the sun's heat and raise temperatures. Climate change disrupts national economies and affects lives globally. Thailand is among the countries most vulnerable to changes in weather patterns and natural disasters; it was ranked as one of the ten most flood-affected countries in the world, and extreme weather could impact 2 million lives by 2035–2044 (Tan &amp; Zheng, 2022). Keeping global warming below 1.5 °C is a challenging task. The Paris Agreement aims at achieving net zero carbon dioxide (CO2) emissions in the second half of this century, and Thailand is one of many countries that have committed to this goal. At a press conference, Prime Minister Prayut Chan-ocha announced that the country is committed to net zero emissions by 2065. However, net zero development in Thailand is new, there is not much information about net zero practice in the region, and there is no study about the adoption of net zero in the real estate sector. This qualitative study analyzes the reasons developers adopt net zero and how the public sector can help developers achieve net zero goals, based on developers' perspectives. The rationale for conducting this research is to bring Thai voices into the global real estate sector's net zero transition conversation. From the research, I found that competitive advantage, branding and marketing, and lower green technology costs are the key elements that influence Thai real estate developers to adopt the net zero practice. 
However, the developers need support from the government, including technological advancement, government incentives, and guidelines. The findings and discussion can help developers adopt the necessary responses to reduce negative impacts, and policymakers can learn from the recommendations for appropriate policies for the real estate sector.&#13;
&#13;
Keywords: net zero; real estate development; developer; decarbonization; pathway to reduce greenhouse gas; Thailand; commercial; residential
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Autonomous Vehicles on the Real Estate Housing Market in the United States</title>
<link href="https://hdl.handle.net/1721.1/150168" rel="alternate"/>
<author>
<name>Whang, Soojin</name>
</author>
<id>https://hdl.handle.net/1721.1/150168</id>
<updated>2023-04-01T03:34:29Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Impact of Autonomous Vehicles on the Real Estate Housing Market in the United States
Whang, Soojin
The mode of transportation has a significant impact on the design and layout of cities, and the advent of autonomous vehicles (AVs) is expected to bring another shift in transportation that will likely affect the way cities are designed. The initial adoption of AVs is likely to occur in the form of shared autonomous vehicles (SAVs), or driverless ride-hailing services. This thesis analyzes the impact of AVs on the housing market at both the macro and micro levels. The macro-level analysis examines the complementary impact of SAVs on public transportation systems and identifies the metropolitan areas most likely to experience significant changes as a result of the deployment of SAVs. The micro-level analysis examines the demand side of housing price changes in San Francisco, using a historical rent gradient model that accounts for changes in commute cost and time. On the supply side, the analysis explores the potential conversion of parking spaces into housing.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Design Workshops: A System Model of Engagement for Sustainable Community Development</title>
<link href="https://hdl.handle.net/1721.1/150162" rel="alternate"/>
<author>
<name>Hecht, Bruce Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/150162</id>
<updated>2023-04-01T03:02:00Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Urban Design Workshops: A System Model of Engagement for Sustainable Community Development
Hecht, Bruce Allen
Cities present challenges arising from the dynamics of growth for people and the places they inhabit. A system model of engagement for urban design workshops proposes functions, processes, and roles for local stakeholders interacting with planning experts. Three mechanisms are outlined for effective workshops: stakeholders contribute inputs and experiences in defining the problem space; balancing attention allocation promotes the generation of a range of options; and resulting choices prioritize solutions as desired outcomes. An experiment is outlined to test the proposed model and applicability to sustainable land use at a site selected for the potential of transit-oriented development. The research addresses cities’ future growth and resilience, leveraging local and lived experience with professional and technical expertise.&#13;
&#13;
Keywords: Urban design workshops, transit-oriented development, sustainable development, stakeholder engagement, teamwork mechanisms, urban planning, urban science, transdisciplinary engineering
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Gen-Z College Student Needs Regarding Social Media Apps through a Case Study on Bondit, a Social Media App for College Students</title>
<link href="https://hdl.handle.net/1721.1/150160" rel="alternate"/>
<author>
<name>Lee, Chiwon</name>
</author>
<id>https://hdl.handle.net/1721.1/150160</id>
<updated>2023-04-01T03:12:34Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Understanding Gen-Z College Student Needs Regarding Social Media Apps through a Case Study on Bondit, a Social Media App for College Students
Lee, Chiwon
Gen-Z college students are the first generation of college students that did not experience a world without the internet (Chillakuri, 2020). They have access to social media platforms to connect with peers, and they have access to multiple websites that their college provides as campus resources. Despite the wealth of resources they enjoy, the college retention rate of Gen-Z students is lower than that of previous generations due to poor mental health (Selingo, 2018). Existing research attributes this phenomenon to a lack of sense of belonging (Thomas et al., 2020) that is induced by existing popular social media platforms, such as Instagram (Knight-McCord et al., 2016), and the absence of features specifically designed to promote that feeling amongst college peers. College is not merely a place for accumulating knowledge, but also a place to meet and socialize with peers and to inspire creations that could resolve some of humanity’s biggest challenges. How might we help Gen-Z college students better bond with college peers through social media so that they can have a more positive college experience?&#13;
&#13;
This study aims to learn the needs of Gen-Z college students and identify social media app features that could promote college bonding through using Bondit, a new social media app for college student bonding, as a case study. This research also contributes with functional requirements and design recommendations for social media platforms that aim to create better college bonding experiences for Gen-Z college students.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Great Ideas (Don't) Sell Themselves: The Disclosure Paradox in Digital Startups Auctions</title>
<link href="https://hdl.handle.net/1721.1/150159" rel="alternate"/>
<author>
<name>Gius, Luca</name>
</author>
<id>https://hdl.handle.net/1721.1/150159</id>
<updated>2023-04-01T03:57:44Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Great Ideas (Don't) Sell Themselves: The Disclosure Paradox in Digital Startups Auctions
Gius, Luca
An inventor may be hesitant to reveal her idea to potential buyers because she fears it will be stolen. I show that this disclosure paradox can result in only the worst ideas being sold, leading to an ineffective commercialization of the best ideas. This distortion at the top is especially significant when gains from trade are concentrated among a few, high-quality ideas. Next, I investigate the disclosure paradox in an online marketplace for early-stage digital start-ups. Sellers can protect their unpatentable ideas through confidentiality agreements (NDAs), and they can have the exchange verify their advertising revenues. I provide evidence that these disclosure technologies are used by high-quality inventors to assuage information frictions, sell their ideas more easily, and capture more value. Surprisingly, confidentiality agreements discourage many potential buyers and are less effective than revenue verification. I estimate that the disclosure paradox leads to significant welfare losses: going from a scenario in which every seller has access to revenue verification to one where nobody has access to it destroys 63% of the potential gains from trade and reduces the number of sold start-ups by 17%. Through simulations, I show that the losses are magnified by the skewed distribution of ideas.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Argument for the More Widespread Use of Ground Leases in the United States: How to Align Pertinent&#13;
Interests and Strategically Implement on an Impactful Scale</title>
<link href="https://hdl.handle.net/1721.1/150158" rel="alternate"/>
<author>
<name>Carr, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/150158</id>
<updated>2023-04-01T03:02:13Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">An Argument for the More Widespread Use of Ground Leases in the United States: How to Align Pertinent&#13;
Interests and Strategically Implement on an Impactful Scale
Carr, Christopher
As the most senior financial claim in the capital stack of a commercial real estate investment, ground leases have been deployed as an ownership strategy for centuries. Only in recent years has the singular investment thesis of the modern ground lease been proven in both the public and private markets by companies such as Safehold, Haven Capital and Montgomery Street Partners. As the nascent industry grows and matures, these companies aim to eliminate the opacity associated with uncertain future ground lease cash flow streams to both enhance the value of the leasehold interest to lessors and lessees and provide a bond-like security for investors. This strategy has yielded strong risk-adjusted returns to lessors, as evidenced by Safehold’s public valuation at a significant premium to its underlying net asset value, a rarity in the current market environment. Yet, the ground lease sector of commercial real estate remains small on a relative basis, and some market participants continue to be wary of transacting on ground leased properties in the same manner that they would transact on properties owned in fee simple.&#13;
&#13;
This paper argues that there is significant, scalable opportunity for new entry into the ground lease sector of commercial real estate and further identifies multiple new structural components to be included in the modern ground lease. These new clauses should further align the interests of all property-level participants as well as increase liquidity for both the leased fee interest and the leasehold interest. Through industry research, an academic literature review and a simple, case-based numerical model, this paper demonstrates the viability of an expanded ground lease market and outlines the threshold stress limits that impact profitability. In times of relatively high interest rates, such as late 2022, this paper argues that ground leases with the structure proposed herein should be viewed as an attractive alternative financing source.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultra-Miniaturized, Secure Wake-Up Receiver Based on THz Carrier Wave</title>
<link href="https://hdl.handle.net/1721.1/150157" rel="alternate"/>
<author>
<name>Lee, Eunseok</name>
</author>
<id>https://hdl.handle.net/1721.1/150157</id>
<updated>2023-04-01T03:42:04Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Ultra-Miniaturized, Secure Wake-Up Receiver Based on THz Carrier Wave
Lee, Eunseok
Devices have become smaller over the last few decades, and billions of devices are estimated to be connected in the 2030s. Researchers are developing small-scale, massively deployable wireless nodes for various applications that can work together to collect information and build large networks. These miniaturized wireless nodes require various functionalities, including communication, sensing, actuation, and energy harvesting. There is a growing need for mm²-sized wake-up receivers to prolong battery life on these devices. The mm-wave/THz spectrum is a promising candidate for millimeter-scale wake-up receiver designs as it is compatible with on-chip antenna integration. &#13;
&#13;
A prototype wake-up receiver using THz carrier wave was fabricated using TSMC 65nm technology. The wake-up receiver, which includes on-chip integrated patch antennas, captures the THz signal, which is then rectified and passed through amplifier-filter stages and digitized by a comparator. It authenticates wake-up patterns, generates wake-up signals, and updates the cryptographically randomized tokens. The system operates at 0.8 V and consumes 2.88 &#120583;W, with a sensitivity of -48 dBm at a data rate of 1.02 kbps. Power consumption can be reduced to 750 nW with within-bit duty cycling. The WuRx has been tested at a distance of several meters and paired with a beam-steerable THz reflectarray, demonstrating its potential for real-world applicability.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Expressiveness and Generalization of Hypergraph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/150156" rel="alternate"/>
<author>
<name>Luo, Zhezheng</name>
</author>
<id>https://hdl.handle.net/1721.1/150156</id>
<updated>2023-04-01T03:15:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">On the Expressiveness and Generalization of Hypergraph Neural Networks
Luo, Zhezheng
Graph Neural Networks have demonstrated their success on many applications, including analyzing molecules and social networks. Although these graph neural networks can effectively determine pairwise connections between nodes, the data structure in reality sometimes goes beyond pairwise relations and can be complicated, involving multiple nodes. This requires the graph neural networks to be extended to hypergraphs to deal with higher-order relations. It is critical to understand what type of problems these hypergraph neural networks can solve and effectively learn from data.&#13;
&#13;
In this thesis, we describe how we use Neural Logical Machines as a unified framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions that they can realize. Our result is a hierarchy of problems they can solve, defined in terms of various hyperparameters such as depths and edge arities. Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by the empirical results.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Extreme Heat Risk in Urban Areas Using Computer Vision and Data Analysis</title>
<link href="https://hdl.handle.net/1721.1/150155" rel="alternate"/>
<author>
<name>Xu, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/150155</id>
<updated>2023-04-01T03:35:59Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Modeling Extreme Heat Risk in Urban Areas Using Computer Vision and Data Analysis
Xu, Katherine
Climate change is one of the greatest threats facing humanity, impacting the social and environmental determinants of human health. Urban areas suffer the effects of urban heat islands, which exacerbate the temperature rise from extreme heat events because they have more paved surfaces, less vegetation, and more heat created from human activities. As a result, heat risk modeling aims to reduce heat risk for vulnerable communities by assisting urban planners and policymakers in efficiently and effectively identifying regions within cities that may need more heat adaptation amenities. However, current heat risk modeling approaches are limited because they may not completely utilize available data sources or be easily generalizable to multiple cities. To overcome these limitations, we create a weighted sum model that estimates the extreme heat risk of an urban area at the granular census tract level. To construct this model, we leverage semantic segmentation techniques to extract relevant risk factors from aerial scene images, and we incorporate additional heat hazard and heat vulnerability factors from publicly available land surface temperature, building, and socioeconomic datasets. As a proof of concept, this research focuses on developing a heat risk model for Boston, which experiences intense urban heat islands.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing and Improving Garbage Collection Performance in the Julia Programming Language</title>
<link href="https://hdl.handle.net/1721.1/150154" rel="alternate"/>
<author>
<name>Netto, Diogo Correia</name>
</author>
<id>https://hdl.handle.net/1721.1/150154</id>
<updated>2023-04-01T03:27:30Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Assessing and Improving Garbage Collection Performance in the Julia Programming Language
Netto, Diogo Correia
With the increasing popularity of the Julia programming language for memory-intensive applications, garbage collection (GC) is becoming a performance bottleneck, with reports of poor GC performance ranging from differential equation solvers to large database benchmarks.&#13;
&#13;
There have been several GC optimizations (such as the implementation of a generational collector) targeting the Julia GC over the last decade, but none of them has moved in the direction of a multithreaded GC.&#13;
&#13;
This thesis assesses GC performance in the Julia programming language and implements optimizations focusing on parallelizing automatic memory management routines.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Loomings: the sleep of reason produces monsters</title>
<link href="https://hdl.handle.net/1721.1/150152" rel="alternate"/>
<author>
<name>Geltman, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/150152</id>
<updated>2023-04-01T03:28:58Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Loomings: the sleep of reason produces monsters
Geltman, Julian
To Architect is to always work with others. Working with others, though, can be hell, as friction abounds in the process of multiple voices and stances coalescing into one.&#13;
&#13;
I design as a form of co-authorship, a processional lineage of the works of others. I build off of foundations previously laid, and am never alone in this process, for there is always someone looking over my shoulder. The things that I create enter into dialogues with the relics and artifacts that reside in the archives of architectural knowledge. In this sense, me and my ghosts live in a symbiotic relationship. I steal their work and mutate it into a new context, a different proposition, an optimistic visioning; in turn, they get to live on as afterimage.&#13;
&#13;
This thesis is an exploration, instantiation and reflection on my own personal design method as I have come to understand how I work in graduate school. It is both a method and an attitude or ethos towards design. In working through this stance, one enters into active participation in architecture.&#13;
&#13;
I have chosen a number of projects that haunt me to use as a basis for this project. I do not run from ghosts, but instead embrace living amongst them. These are all remnants of utopias. These projects all have something to say about pragmatism, or idealism, or sometimes both. They are ideologically fraught, some saying something about place, some about polis, about politics, about being. They are all housing projects. They all have something to say about being together. They all are about collectivity; what can architecture say about the city? Is architecture distinct from city? How will we all be together? What is the space between us? What is this sea, and how did we become stranded together apart on separate islands?&#13;
&#13;
These projects are all massive. They all have much to signify. They purport and carry their cumbersome baggage as pilgrims. Together we set out to sea so as to salvage a design method of rework from the murky depths.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Julia in WebAssembly</title>
<link href="https://hdl.handle.net/1721.1/150151" rel="alternate"/>
<author>
<name>Huffman, Raymond Minor</name>
</author>
<id>https://hdl.handle.net/1721.1/150151</id>
<updated>2023-04-01T03:30:31Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Julia in WebAssembly
Huffman, Raymond Minor
WebAssembly is a modern binary instruction format that enables highly performant program execution in sandboxed execution environments. WebAssembly modules can be run natively in web browsers, or within lightweight, isolated, server-side runtimes.&#13;
&#13;
This project explores applications of WebAssembly using the Julia programming language and how languages that leverage just-in-time compilation can function under the particular architectural limitations of WebAssembly. It demonstrates the feasibility of multiple strategies for compiling Julia code to WebAssembly, and details the future work required to run the Julia compiler entirely in WebAssembly.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capacity Expansion Modeling of Hydrogen and Electricity with Sector Coupling in New England</title>
<link href="https://hdl.handle.net/1721.1/150150" rel="alternate"/>
<author>
<name>Landler, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/150150</id>
<updated>2023-04-01T03:51:26Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Capacity Expansion Modeling of Hydrogen and Electricity with Sector Coupling in New England
Landler, Anna
Alternative fuels are necessary to achieve decarbonization, especially in sectors outside of electricity supply. Hydrogen is a promising energy carrier due to its flexibility of storage and transmission and its applicability to a wide range of sectors. We model a case study of New England to investigate how infrastructure might be adapted and built out to support such a system. We then incorporate a model of the current electrical system to investigate synergies between the two. We find that projected technology improvements are insufficient to promote electrolysis at the utility scale; further cost reductions in electrolysis or cheap imports of clean energy will be needed. Notably, steam methane reforming (SMR) with carbon capture and storage (CCS) is a dominant method of meeting hydrogen demand; the specifics of CCS are an important avenue for future work. We find that the region is very sensitive to the availability and cost of imports. As such, any considerations for energy development in New England ought to be put in the context of developments in the surrounding areas.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsimulability, Universality, and Undecidability in the Gizmo Framework</title>
<link href="https://hdl.handle.net/1721.1/150149" rel="alternate"/>
<author>
<name>Ani, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/150149</id>
<updated>2023-04-01T03:06:00Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Unsimulability, Universality, and Undecidability in the Gizmo Framework
Ani, Joshua
The gizmo framework is a recent development of the gadget framework used for proving computational complexity results for videogames and other motion planning problems. This thesis explores three aspects of the gizmo framework: unsimulability (the inability of one gizmo to simulate another gizmo), universality (the ability of a gizmo to simulate all gizmos in its simulability class), and undecidability (the inability to decide whether a maze made of a gizmo is solvable). We give a proof that the 1-toggle cannot simulate the 2-toggle, as the proof contains important techniques. We explore a class of gizmos called dicrumbler variants, and give partial results for which ones simulate which others. We give universal gizmos for the simulability classes Reg and DAG, and explore the concept of finding all the gizmos that simulate a particular gizmo, with partial results given for the dicrumbler. We show that reachability for a gizmo representing a counter in a counter machine is undecidable, and show several gizmo simulations. We give a proof that generalized New Super Mario Bros. is undecidable using one of the undecidable gizmos.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>B-Cell Epitope Prediction for Improved Antibody Docking</title>
<link href="https://hdl.handle.net/1721.1/150148" rel="alternate"/>
<author>
<name>Rontogiannis, Aristofanis</name>
</author>
<id>https://hdl.handle.net/1721.1/150148</id>
<updated>2023-04-01T03:54:23Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">B-Cell Epitope Prediction for Improved Antibody Docking
Rontogiannis, Aristofanis
Predicting how antibodies bind to their targets is a fundamental problem of immunology, and a critical step in accelerating the development of vaccines and therapeutics against foreign pathogens. In particular, the task of predicting the 3D structure of an antibody-target complex, otherwise known as docking, is an important tool in drug design, providing valuable insights such as ways to increase antibody potency or methods to limit the likelihood of a mutational escape. State-of-the-art models of antibody docking treat the task as a regression problem, outputting a single prediction. We hypothesized that while the performance after a single try might be poor, the likelihood of producing a good docking pose in &#119870; tries could be significantly higher. To achieve this without having to alter existing docking models, we propose to first train a B-Cell epitope predictor and to subsequently use it to produce a diverse set of candidate binding sites. Our epitope predictor achieves state-of-the-art performance, with an ROC-AUC score of 76. We then show that, by properly post-processing the epitope model’s predictions to select &#119870; promising candidate docking sites, the success rate of a docking model on an independent test set can be increased by a factor of almost 10, with as few as 10 tries. Our approach is compatible with any docking model and offers an alternative to pure generative modeling, while being able to guarantee a diverse set of solutions, without the need to leverage complex sampling strategies.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“A resource in and of itself”: Grid-scale Batteries and the Politics of Storage</title>
<link href="https://hdl.handle.net/1721.1/150147" rel="alternate"/>
<author>
<name>White-Nockleby, Caroline Celeste</name>
</author>
<id>https://hdl.handle.net/1721.1/150147</id>
<updated>2023-04-01T03:40:19Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">“A resource in and of itself”: Grid-scale Batteries and the Politics of Storage
White-Nockleby, Caroline Celeste
From Tesla’s experimental ‘Virtual Power Plants’ to the US’s Energy Storage Grand Challenge, grid-scale batteries – which attach to the electricity grid to buffer supply and demand – are sites of intensifying research, speculation and legislation. They are increasingly positioned as a transformative means to mitigate and adapt to climate change. Indeed, batteries are not the only form of storage in the spotlight: A variety of stored forms, including seed banks, metals stockpiles, and sequestered carbon dioxide have become central in generating, and ameliorating, anxieties about environmental futures. ‘Storage’ offers a potent analytic to analogize phenomena across scales and contexts, in part because of the increasingly visible status of its emic instantiations. As a means to store electricity, a uniquely ephemeral commodity, batteries, like other stored forms, both mediate power and capital and can defuse political potency. Though batteries can smooth the integration of renewable energy into the grid by disciplining the unruly schedules of sun and wind, their potentials (and proponents) extend to the fossil fuel industry as well: They are ‘fuel-neutral’, allowing all kinds of electrons to become more cost-efficient. In these multivalent contexts, I suggest, securing the status and value of a battery’s stored electricity, or trading on its ambiguity, can signal and effect political agendas, even as such arbitrations can recast politics in a techno-juridical domain.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework To Guide the Buildup of Simulation Capabilities for Heterogenous Urban Search and Rescue (USAR) Multi-Robot Simulation</title>
<link href="https://hdl.handle.net/1721.1/150146" rel="alternate"/>
<author>
<name>Law, Heng Huan Allan</name>
</author>
<id>https://hdl.handle.net/1721.1/150146</id>
<updated>2023-04-01T03:30:25Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Framework To Guide the Buildup of Simulation Capabilities for Heterogenous Urban Search and Rescue (USAR) Multi-Robot Simulation
Law, Heng Huan Allan
This framework presents a systematic methodology of architecting a simulation setup that simulates a heterogenous multi-robot system which organizations can use to develop Robotics Urban Search and Rescue (USAR) Operations. The target audience would include system engineers or project managers who are looking to build up simulation capabilities to facilitate the development of heterogenous multi-robot systems. &#13;
&#13;
The framework does this by systematically charting the basic architecture required in a Robotics USAR Operation and mapping these requirements onto robot architectures. These robot architectures are then used as requirements for the simulation architecture needed to simulate these heterogenous multi-robot systems. The simulation architecture is in turn used to derive the changes needed in the organization to support such a capability build-up, as well as to monitor the effects of different simulation requirements on the difficulty of building up the capability.&#13;
&#13;
The framework is applied to a representative scenario consisting of USAR requirements drawn from previous disasters and current robotics technologies found on the market. The resulting simulation framework is then analyzed, and courses of action to build up the simulation capabilities are recommended.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SOCIALIC: A novel role-playing simulation exercise for ethics teaching in higher education institutions</title>
<link href="https://hdl.handle.net/1721.1/150145" rel="alternate"/>
<author>
<name>Agüera Reneses, Javier</name>
</author>
<id>https://hdl.handle.net/1721.1/150145</id>
<updated>2023-04-01T03:31:33Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">SOCIALIC: A novel role-playing simulation exercise for ethics teaching in higher education institutions
Agüera Reneses, Javier
Traditionally, instruction in ethics was considered an essential component across all disciplinary curricula in higher education institutions. This tendency saw a drastic change during the last century due to a significantly higher specialization of programs and a significant expansion of the number of students. Most educational institutions and accreditation boards today recognize the importance and societal demands to improve ethics education, and are in the process of redesigning and expanding their programs in this regard, notably in STEM (Science, Technology, Engineering and Math) fields. This thesis analyzes the merits of various instruction methods from a historical perspective, concluding that most have been proven inadequate to effect a positive change in students’ expected moral behavior and equip them with the real-world skills required to conduct their professions ethically. We remark that role-play simulations are one of the most promising instruction methods, and highlight the potential to augment their positive impact by using online interactive environments and artificial intelligence. Finally, we employ the Human Centered Design (HCD) methodology to propose a prototype for a role-play simulation exercise (named SOCIALIC) that may be used to incorporate the teaching of core ethical principles into a non-ethics-focused graduate or undergraduate course.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Mental Model in Older Adults’ Experience of Digital Games</title>
<link href="https://hdl.handle.net/1721.1/150144" rel="alternate"/>
<author>
<name>Shu, Shi</name>
</author>
<id>https://hdl.handle.net/1721.1/150144</id>
<updated>2023-04-01T03:34:34Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Impact of Mental Model in Older Adults’ Experience of Digital Games
Shu, Shi
As digital gaming becomes more popular among older adults, many researchers have studied elements that affect older adults’ adoption of digital games, including motivation (Ijsselsteijn et al., 2007), potential health benefits (Hall et al., 2012), and preferences (Brown, 2012). This paper aims to explore the impact of the mental model on older adults’ experience of digital games. The study conducted in-person interviews with participants of different ages and self-rated game experience levels. Each participant filled out a rating of their preference for each game category and their comfort level with its rules. Nine participants met the criteria and were included in the study. After analyzing the data, the study found a correlation between older adults’ comfort level with game rules and game preference, as well as discrepancies among different categories due to participants’ previous experience. The results show that older adults’ previous experience with games forms mental models which affect their comfort level with different game rules. Their previous experience also forms mental models about games in general and about themselves, which affect their adoption of various digital games. The results of the study give valuable implications for digital game designers seeking to make the digital experience more accessible for the expanding aging demographic.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing project scope attributes and their influence on the software estimation process</title>
<link href="https://hdl.handle.net/1721.1/150143" rel="alternate"/>
<author>
<name>Garg, Dipti</name>
</author>
<id>https://hdl.handle.net/1721.1/150143</id>
<updated>2023-04-01T03:13:49Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Characterizing project scope attributes and their influence on the software estimation process
Garg, Dipti
Software estimation is a valuable practice, allowing organizations to predict and adapt for better project performance concerning cost and resources. Inaccurate estimates can lead to misalignment between stakeholders and loss of confidence by management in the software development process. This thesis focuses on software estimation processes and the relationship between scope attributes and their usefulness to the estimation process.&#13;
&#13;
Semi-structured interviews were conducted with industry practitioners to collect data on their experiences of the estimation process. The experts characterized key steps involved in an estimation process and responded to a five-point scale survey to gauge the consideration of scope attributes, i.e., functionality, dependencies, and newness during estimation. Open-ended interview style questions were asked to understand the meaningfulness of system topology during estimation.&#13;
&#13;
Based on the interviews, it was identified that subjective assessment based on expert judgment is the most commonly used estimation method. Additionally, release-level estimation processes appeared to be informal and intuition-based, lacking analysis of the systemic characteristics of project scope, especially dependencies. Dependencies were often missed or insufficiently considered during the estimation phase because they required more thorough analysis, and they were considered only at later stages of project planning. Findings from the interviews also suggest that estimation mostly considers the prioritized scope items, making it inadequate for a topological assessment of dependencies. These and other findings are limited to the interviews conducted as part of this thesis. This opens avenues for researchers and practitioners to expand on the work by conducting more structured interviews or case studies to gather data on estimation processes, especially the representation and assessment of scope dependencies within such processes. Additionally, experimental research can be conducted to determine how knowledge of dependencies impacts estimation processes.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration and Implementation of ESG Strategies for Real Estate Companies</title>
<link href="https://hdl.handle.net/1721.1/150141" rel="alternate"/>
<author>
<name>Zhao, Chen</name>
</author>
<id>https://hdl.handle.net/1721.1/150141</id>
<updated>2023-04-01T03:52:12Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Integration and Implementation of ESG Strategies for Real Estate Companies
Zhao, Chen
At no time in the history of capitalism have real estate investors and managers cared as much about “doing good” as they do today. While the sole pursuit of financial return has undoubtedly been the dominant driver for real estate investments, Environmental, Social, and Governance (ESG) considerations now play an increasingly vital role in decision-making for real estate companies. However, as the regulations and capital markets around ESG are still in their nascent phase, the real estate industry has to rely on diverse sources of information and unverified assumptions to determine what, how, and why it should approach ESG.&#13;
&#13;
This study examines how real estate owners, asset managers and developers approach asset-level and portfolio-level ESG issues through deep-dive interviews with ESG leaders of major market players. Based on the interviews, the paper identified various patterns of methodologies for how those companies 1) integrate ESG into their investment process, 2) define ESG targets and metrics, 3) collect and manage data, 4) prioritize among ESG strategy options, and 5) perceive the impacts of those practices.&#13;
&#13;
Beyond providing a structured overview of the current ESG practices by major US real estate companies, the study also intends to unveil the rationales behind those efforts. By mapping the results across the companies’ attributes, including ownership structure, investment strategy, international exposure, and asset class, it also sheds light on potential explanations for the divergence of their perspectives.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sourcing Cheaper and Greener Capital for Transit Oriented Developments</title>
<link href="https://hdl.handle.net/1721.1/150140" rel="alternate"/>
<author>
<name>La, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/150140</id>
<updated>2023-04-01T03:23:02Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Sourcing Cheaper and Greener Capital for Transit Oriented Developments
La, Steven
The concept of a Transit Oriented Development (TOD) today lies firmly in the urban planning realm as a fixture of sustainable development, smart growth, and new urbanism. What is missing is the ability to use TODs not merely as an urban planning tool but as one that can yield financial benefit for real estate developers, by focusing on the environmental benefits of facilitating the modal shift from single-occupancy vehicles to greener commuting options. This thesis advances the hypothesis that if the environmental benefits of a TOD could be accurately quantified and modelled, this could pave the way for real estate developers to source cheaper and greener capital by qualifying ESG gains for impact investors.&#13;
&#13;
First, this thesis explores different technologies available in the market that could potentially offer this capability as a service. Second, it proposes a pathway for certifying the quantification by suggesting amendments to the LEED certification framework in order to resolve the information asymmetry between real estate developers and financiers. Finally, it establishes a hypothetical case study for a new TOD in the Fort Point area of Boston to demonstrate the financial outcomes of applying this newly proposed financial tool.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fibers and Fragments: Weaving local resources into the Arabian Gulf's modern material culture.</title>
<link href="https://hdl.handle.net/1721.1/150139" rel="alternate"/>
<author>
<name>Al Khayat, Latifa Khalil Yaqoob</name>
</author>
<id>https://hdl.handle.net/1721.1/150139</id>
<updated>2023-04-01T03:16:20Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Fibers and Fragments: Weaving local resources into the Arabian Gulf's modern material culture.
Al Khayat, Latifa Khalil Yaqoob
Considering the constraints of using solely local materials of the Arabian Gulf, this thesis explores two components that constitute a future construction practice: concrete in compression (mined from demolition sites) and carbon fibers in tension.&#13;
&#13;
The discovery of oil in 1932 accelerated the use of reinforced concrete in the Gulf, which was first spurred by British officials and economic agents in Bahrain. Ninety years later, the construction industry has yet to find a replacement for François Coignet’s steel reinforcement bar. Its corrosive nature is exacerbated in harsh climates and weakens reinforced concrete. This thesis responds to this challenge by drawing lessons from the practices of craftworkers before the era of oil extraction in the 1940s. The woven and mortared dwellings made of palm fibers, clay, and stone provide productive analogs for the possibilities of using synthetic fibers and concrete in future construction practices. &#13;
&#13;
The Crown Jewels feature a construction system of post-tensioned concrete rubble. Piercing, stringing, threading, weaving, and splicing lead to a more effective combination of carbon fibers and concrete fragments. These processes tie the two contrasting materials together:&#13;
&#13;
(1) Concrete derived from demolition of modernist blocks, which is frequently a devalued ‘waste’ material destined for landfills, and&#13;
(2) Carbon fiber, which is a highly valued and energy-intensive counterpart.&#13;
&#13;
Although a technical endeavor, this thesis operates in a geography where Gulf states are trying to reinvent their economies and building practices. Yet these states still maintain an affinity and adherence to British regulations set during their time as protectorates. To that end, the proposed systems and materials align with a nationalist, developmental narrative, one untethered from foreign norms and rooted instead in the prior material practices and building cultures of the land.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultra-Smooth</title>
<link href="https://hdl.handle.net/1721.1/150138" rel="alternate"/>
<author>
<name>Rajkumar, Vijay Gautham</name>
</author>
<id>https://hdl.handle.net/1721.1/150138</id>
<updated>2023-04-01T03:22:13Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Ultra-Smooth
Rajkumar, Vijay Gautham
The title of this thesis, ultra-smooth, is an invented term -- a neologism defined by the work produced through this study. Through physical prototypes of formwork shaped by gravity for concrete floors and through the observation of gravity’s effect on the generation of such forms through high-speed photography and slow-motion film, this thesis challenges the notion of architecture as an act of defying gravity and instead embraces it. While smoothness, which draws attention to skin and surface, favors the architect as a designer of images whose work is translated from drawings and digital models, ultra-smooth finds its form, like clothing on a body. Leaving the computer and pen and paper behind, ultra-smooth positions the architect in a mode of surrender rather than control, giving form to natural forces, present but not yet visible.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Theory Approach to Cybersecuring a Supervised Machine Learning System</title>
<link href="https://hdl.handle.net/1721.1/150132" rel="alternate"/>
<author>
<name>Parada, Jose Ignacio</name>
</author>
<id>https://hdl.handle.net/1721.1/150132</id>
<updated>2023-04-01T03:37:23Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Systems Theory Approach to Cybersecuring a Supervised Machine Learning System
Parada, Jose Ignacio
Machine learning is a rapidly growing field with many applications in areas such as healthcare, finance, and transportation. As machine learning becomes more prevalent, it is important to ensure that these systems are secure and can resist attacks from malicious actors. This is particularly difficult because machine learning has become a black box: the models used to perform machine learning tasks can be very complex and might include millions or billions of parameters. This complexity makes it difficult to understand how a model makes decisions or predictions, and it can be hard to explain why the model produced a particular output. It is here that a systems approach can be helpful, since it analyzes complex systems and their interactions as a whole. It involves considering the relationships and interactions between the parts of a system, rather than just the individual parts themselves.&#13;
&#13;
This thesis aims to adopt a systems approach to security in machine learning systems using System-Theoretic Process Analysis for Security (STPA-Sec). Due to the broadness of the field, this thesis focuses on Supervised Machine Learning Systems and provides generalized recommendations.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Entrepreneurship and Translation in the University Landscape</title>
<link href="https://hdl.handle.net/1721.1/150131" rel="alternate"/>
<author>
<name>Drutchas, Jake</name>
</author>
<id>https://hdl.handle.net/1721.1/150131</id>
<updated>2023-04-01T03:56:19Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Entrepreneurship and Translation in the University Landscape
Drutchas, Jake
University research has become a core driver of innovation. Every year, governments around the globe invest in new and potentially groundbreaking discoveries. For this novel research to drive impact, it must be translated from the research setting into the market. This translation process is complex and challenging, and many hurdles and roadblocks stand in the way of success. This paper explores the process of translation, focusing on how academic participants such as students, researchers, principal investigators, and professors decide to invest their time and effort in bringing a product to market, and on the steps involved in spinning research out of the lab and into the market. By examining the variables leading into translation and the early steps of the process, this research provides a playbook that these students, researchers, and staff can use to reduce the friction of entrepreneurship. This research aims to increase the quantity and success rates of startups emerging from the university setting.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Doppler Radar Lock-in Demodulation Algorithm for Machine Vibration Sensing</title>
<link href="https://hdl.handle.net/1721.1/150125" rel="alternate"/>
<author>
<name>Wampler, Lois</name>
</author>
<id>https://hdl.handle.net/1721.1/150125</id>
<updated>2023-04-01T04:05:04Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Doppler Radar Lock-in Demodulation Algorithm for Machine Vibration Sensing
Wampler, Lois
Data-driven predictive maintenance of modern machinery has the potential to increase equipment lifespan and decrease manufacturing costs. Among various condition monitoring techniques, vibration analysis can effectively diagnose potential problems in machines. Doppler radar can serve as a sensor that provides non-contact, inexpensive, real-time data collection without necessitating line-down time. Conventional Fast Fourier Transform (FFT)-based vibration analysis requires large amounts of data to achieve the high spectral resolution necessary for fault detection, especially with radio-frequency sampling, which can be computationally too expensive for analysis. In this work, we propose to use a sweeping lock-in amplifier to achieve high frequency resolution with small amounts of data by processing windowed sections of Doppler-shifted radio signals. This algorithm can reliably measure the Doppler shift frequency corresponding to the traveling speed of a low-frequency moving object and identify small-amplitude oscillation frequencies, the latter widely present in machine vibration. The distinguishing condition between the two cases is mathematically derived. The proposed algorithms are verified in simulation with a triangular displacement waveform for simplicity of analysis and a sinusoidal waveform for generic applications. For experimental verification, speaker vibration at a known frequency is analyzed, achieving accuracy within 0.025 Hz of the known vibration frequency. This method is robust to the presence of noise frequencies and capable of detecting multiple frequencies.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Self-Collision Avoidance for Dynamic Legged Robots</title>
<link href="https://hdl.handle.net/1721.1/150122" rel="alternate"/>
<author>
<name>Gonzalez Diaz, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/150122</id>
<updated>2023-04-01T03:17:42Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Real-Time Self-Collision Avoidance for Dynamic Legged Robots
Gonzalez Diaz, Daniel
Avoiding self-collisions is particularly challenging for legged robots, yet critical for them to avoid falling and damaging themselves. Unlike standard obstacle avoidance where the obstacles in the environment are relatively static, in self-collision avoidance the "obstacles" are the robot’s limbs which are more dynamic. Enforcing self-collision avoidance as a constraint can conflict with other control objectives, such as stability or foot placement. Ensuring that these conflicts are resolved in real-time is critical for hardware deployment. This work presents a reactive collision avoidance framework that combines Control Barrier Functions with a Whole-Body Controller that can reason about the robot’s full dynamics to guarantee collision-free motions when tracking motions from a high-level dynamics planner. The effectiveness of this approach is validated in simulation with walking experiments showing that adding Control Barrier Functions avoids leg self-collisions when the high-level planner’s footstep location or swing trajectory is infeasible for the real robot. Additionally, the approach generates feasible arm motions that improve disturbance recovery in real-time. Finally, the framework is extended for hardware implementation on the MIT Humanoid with an additional controller that solves for joint velocities to avoid swing-leg collisions in hardware experiments.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Topography: Remapping Appalachia</title>
<link href="https://hdl.handle.net/1721.1/150121" rel="alternate"/>
<author>
<name>Koskey, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/150121</id>
<updated>2023-04-01T03:02:52Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Beyond Topography: Remapping Appalachia
Koskey, Katherine
The field of architecture has been operating under a false sense that we only construct things. Robin Evans might be right that architects make drawings, not buildings, but through these drawings, we’re also moving mountains. The geography of Appalachia has become collateral damage in architecture’s focus on our urban centers through mountaintop removal coal mining. Architecture’s conversations around sustainable architecture have neglected the design interventions that operate at our sites of resource extraction. We design buildings for a ~100-year lifespan, but extraction marks the landscape for millennia.&#13;
&#13;
Architects, as we’re trained, are ill-equipped to address the wide-reaching impacts of extraction, but we do have a number of tools that can be used for more critical interrogation. We are comfortable with scaling ideas, with form, with figure to confront issues that deal with society at large. Recognizing that architects are inadvertently working with mountains and topography, this thesis proposes a design methodology for architectural operations through these geologic scales of place and time. With media as method, this exploratory research presents a range of approaches to telling the story of resource extraction through experimentation with time-based form transformations, mapping, scale shifting, and multimedia storytelling. Materiality, time, and non/scalability are topics that converge (and diverge) through artifacts, both found and made.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What’s in a Poche?</title>
<link href="https://hdl.handle.net/1721.1/150120" rel="alternate"/>
<author>
<name>AlMulla, Nada</name>
</author>
<id>https://hdl.handle.net/1721.1/150120</id>
<updated>2023-04-01T03:03:54Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">What’s in a Poche?
AlMulla, Nada
Where did it come from? And how was it acquired?&#13;
&#13;
In the wake of an increased desire for restitution, set against perpetual efforts of claiming and reclaiming, governmental entities begin to actively take a stance on unearthing looted cultural properties and responding to repatriation requests.&#13;
&#13;
Through the design of architectural moments that allow artifacts to covertly emerge from and disappear into the poche, the project addresses the issue of repatriation, and imagines a space located beneath the National Archives building in Washington D.C., adding to the fabric of secret underground tunnels in the National Mall. The below-ground facility will act as a transitional space where repatriated art will be securely housed before it is transported back to its home country. &#13;
&#13;
Historically, the solid mass in a poche referred to the necessary structure in masonry buildings, as well as the unique articulation of interior spaces and buffer zones that are different from the exterior. On some occasions, it was also used as an indication of hidden spaces, such as servant corridors.&#13;
&#13;
Since the problem of structure is rendered void by the paradigmatic shift in modern technology, it is no longer necessary to build thick walls with a solid mass of material. That leaves both the flexibility of articulating each interior space distinctly, and the creation of secret and concealed spaces as ways to adapt poche in contemporary architecture.&#13;
&#13;
The project is interested in exploring Architecture’s ability to operate beyond its traditional requirements to incorporate affective qualities such as obscurity and mystery through the design and organization of a space, and the adaptation of reclaimed or contemporary poche.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Archidrome</title>
<link href="https://hdl.handle.net/1721.1/150119" rel="alternate"/>
<author>
<name>Boscolo, Arthur</name>
</author>
<id>https://hdl.handle.net/1721.1/150119</id>
<updated>2023-04-01T03:02:15Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Archidrome
Boscolo, Arthur
Buildings are only one in a family of facades that define any architectural project. Advancing through the rungs of academia and practice, Architects become diligent estheticians, building and maintaining carefully moisturized semblances. As mediatic cosmetologists, they are acutely aware of the public relations exercises that build their identities as practitioners.&#13;
&#13;
Every drawing, gesture, and public appearance delicately preens a group of ideologically loaded semiotic bodies. A design project is not only a response to a set of site conditions - context, public, environment - but a gesture constructing the constellation of images that build up the public profile of the Architect. These gestures are not purely made in careful consideration of the stakeholders of a project but act simultaneously as a performative act - profile building. After construction, the brand built by the Architect and embodied in a structure is no longer only part of their project to build themselves but is appropriated by other stakeholders; it is now a profile-building tool for those who own it and inhabit it, for the municipality which funded it, and for the urban project it constructs. Architects then design purposeful objects less than they act as mediatic figures that manicure embodied advertisements of ideological positions.&#13;
&#13;
Design gestures become signifiers of a specific politic, but they can only act as such: pageantry. The focus of this thesis is the layer of discursive and psychological strata, the psychic skin, which is termed Faciality. The project is to graft this virtual skin for analysis and demonstrate how it builds the brand of a built project, the brand of the user, and the brand of the Architect through analysis and experiments. Things are more than just vehicles for physical properties in our society of lexical objects.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidating Concentration and Temperature-dependent Energy Limitations of a Novel Fluorinated-organosulfur Catholyte for Li Primary Batteries</title>
<link href="https://hdl.handle.net/1721.1/150117" rel="alternate"/>
<author>
<name>Sevilla, Alejandro R.</name>
</author>
<id>https://hdl.handle.net/1721.1/150117</id>
<updated>2023-04-01T03:23:45Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Elucidating Concentration and Temperature-dependent Energy Limitations of a Novel Fluorinated-organosulfur Catholyte for Li Primary Batteries
Sevilla, Alejandro R.
Increasing the energy density of lithium (Li) primary batteries requires exploring novel electrochemical reactions. Herein, the novel Li–carbon cell with a liquid catholyte based on the reactant 4-nitrophenylsulfur pentafluoride (NO₂-Ph-SF₅) is studied in detail, with emphasis on its energy limitations under practical cell conditions. While it has been demonstrated that the novel cell design can surpass the gravimetric energy density of the industry leader, Li–carbon monofluoride (Li–CFₓ) (1085 vs ~1000 Wh kg⁻¹) at 50 °C, calculations detailed in this study suggest that greater energy density (~20% greater) may still be possible. The energy shortfall arises from incomplete reduction of the cathode material at high reactant concentrations (4-5 M). Such concentrations are necessary for future practical applications of the cell. This study first investigates the effect of reactant concentration on capacity and energy density, followed by characterization of reaction intermediates and products, including a major product, lithium fluoride (LiF). Despite its electronically insulating nature, LiF was not found to induce carbon surface passivation under the conditions studied. Instead, the energy shortfall is shown to be constrained by the solubility of polysulfide-like intermediates whose electrochemical activity is hindered under concentrated catholyte conditions. Furthermore, the rate capability of the novel cell design is studied in the ~20–50 °C range. It is found that temperatures below 50 °C significantly inhibit the energy density obtained at high current densities (&gt; 1 mA cm⁻²), affecting the cell’s power delivery at room temperature. Despite efforts to improve the electrolyte ionic conductivity and the transport of reactant species through catholyte engineering (i.e., varying species concentrations and solvent), performance remains limited under these conditions, and further study is required to elucidate the effect of temperature on the reaction.
Nonetheless, this study reveals key design parameters that can inform future iterations of the promising Li–NO₂-Ph-SF₅ primary battery.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Qualitative Preferences to Guide Schedule Optimization</title>
<link href="https://hdl.handle.net/1721.1/150116" rel="alternate"/>
<author>
<name>Wells, Tesla</name>
</author>
<id>https://hdl.handle.net/1721.1/150116</id>
<updated>2023-04-01T03:56:54Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Using Qualitative Preferences to Guide Schedule Optimization
Wells, Tesla
As robots and computers become more capable, we see more mixed human-robot-computer teams. One strategy for coordinating mixed teams is standardizing priorities across agents. However, the encoding of priorities may lead to discrepancies in interpretation across different types of agents. For example, a machine may have difficulty generating a plan that incorporates the subtleties of a natural language description. A human, given a purely quantitative encoding, may have difficulty intuiting trends or generating a plan without a computational aid. These discrepancies in priority interpretation may lead to a plan that only one class of agent deems optimal. In this case of distributed planning, differing interpretations may lead to significant differences in the plans produced by different classes of agents. What’s more, low-fidelity interpretation of priorities can prohibit model correction or refinement. A robot may not recognize that its learned physics model conflicts with underlying assumptions in a natural language description of priorities. For humans, it is difficult to identify incorrect or unrefined priority models from the same streams of numerical relations computers find useful.&#13;
&#13;
Most strategies currently in use for bridging this gap involve machines learning human preferences from large data sets or require labor-intensive custom utility encodings to be written, explained, and revised by a trained expert. The former often homogenizes models of human preferences and fails to incorporate corrections accurately, efficiently, and in context. The latter prohibits the usage of robots or computer aids in casual or dynamic settings without the presence/supervision of a human trained in working with the system. In both settings, this inhibits average humans from having personalized, human-computer or human-robot interactions.&#13;
&#13;
In this thesis, we attempt to improve human-robot interactions by encoding utility as a series of qualitative, ceteris paribus preference statements over a design space. We posit this formalism is both readily understood by human agents and easily reasoned over by machines. Previous work establishes the ability to compute machine-readable utility functions from said statements by efficiently generating topological orderings. We detail our implementation of a machine “agent” that uses said utility function to compute optimal schedules for Conditional Temporal Problems for a mixed-agent team. We then build off of the procedure for generating utility functions to generate admissible heuristics that increase performance.&#13;
&#13;
We show this encoding enables us to explain which human preferences differentiate feasibly scheduled, high-utility plans. We present a suite of algorithms capable of explaining model behavior according to five different relevance standards. Additionally, we construct an algorithm for identifying where additional preference specification would resolve assumptions used in underspecified areas of the model. These explanations not only improve human understanding, but also facilitate identification of inaccuracies in the machine’s utility model. We build off our explanations to present options for model repair. We show that preference addition and preference relaxation informed by explanation result in specific, targeted plan changes.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring and Manipulating State Representations in Neural Language Models</title>
<link href="https://hdl.handle.net/1721.1/150114" rel="alternate"/>
<author>
<name>Li, Belinda Zou</name>
</author>
<id>https://hdl.handle.net/1721.1/150114</id>
<updated>2023-04-01T03:04:13Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Measuring and Manipulating State Representations in Neural Language Models
Li, Belinda Zou
Modern neural language models (LMs) are typically pre-trained with a self-supervised objective: they are presented with texts that have piece(s) withheld, and asked to generate the withheld portions of the text. By simply scaling up such training, LMs have been able to achieve remarkable performance on many language reasoning benchmarks. However, sentences generated by LMs often still suffer from coherence errors: they describe events and situations inconsistent with the state of the world described by preceding text. One account of the successes and failures of LM generation states that LMs are simply modeling surface word co-occurrence statistics. However, we provide evidence for an alternative account (not mutually exclusive with the first): LMs represent and reason about the world they describe. In BART and T5 transformer LMs, we identify contextual word representations that function as models of entities and situations as they evolve throughout a discourse. These neural representations have functional similarities to linguistic models of dynamic semantics: they support a linear readout of each entity’s current properties and relations, and can be manipulated with predictable effects on language generation. Our results indicate that prediction in pretrained LMs is supported, at least in part, by dynamic representations of meaning and implicit simulation of entity state, and that this behavior can be learned with only text as training data. Consequently, when LMs fail to generate coherent text, the failure can be attributed to either errors in inferring state from context or errors in generating next sentences consistent with the inferred state. We describe a procedure for distinguishing these two types of errors. In models with correctable errors of the first type, we show that targeted supervision can address them. We introduce two procedures for using explicit representations of world state as auxiliary supervision. 
These procedures efficiently improve LM coherence, in some cases providing the benefits of 1,000–9,000 training examples with only 500 state annotations.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harnessing External Data in Public and Private Market Investing</title>
<link href="https://hdl.handle.net/1721.1/150109" rel="alternate"/>
<author>
<name>Nahari, Adam D.</name>
</author>
<id>https://hdl.handle.net/1721.1/150109</id>
<updated>2023-04-01T03:59:26Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Harnessing External Data in Public and Private Market Investing
Nahari, Adam D.
Data science is playing an increasingly central role in fundamental investing. This development has been driven by growing awareness of the ways in which external data can be leveraged to create unique insights and support decision-making across the investment lifecycle. Two important trends have undergirded this development. The first has been the rapid growth in the volume and granularity of data available to investors, either publicly or through data vendors. The second has been the growth in technical talent with expertise in such analyses (for example, the relatively recent emergence of roles like data engineers and data scientists), and technologies that make such analyses easier to perform. These trends have enabled investors to examine and evaluate company performance more transparently, and to suggest new insights and levers for improvement. &#13;
&#13;
In this thesis, we propose a generalizable method of evaluating the value of an external data source. We focus specifically on customer review data, and investigate whether customer review data provides a signal of stock price movement that is not accurately or immediately priced by the stock market. We also evaluate whether this signal varies  meaningfully across industries, and whether it can be monetized effectively in a trading strategy.  We evaluate these questions by examining data from companies’ Google reviews and analyzing the subsequent price action of the respective companies’ shares. Our approach can be easily extended to examine other sources of data and their impact on a number of performance metrics such as earnings per share, free cash flow, revenue, etc.&#13;
&#13;
As mentioned, we seek to determine whether an investor can derive significant value from online Google maps reviews. To test our ideas, we create a backtesting engine that simulates buying or short-selling stocks based on positive or negative changes in review scores, respectively, over varying time horizons, and holding the long or short positions for various lengths of time prior to exiting the position. &#13;
&#13;
Ultimately, we find that a backtesting framework that goes long or short on each stock in a portfolio based on whether the respective company’s rating increased or decreased over a period of time can indeed produce above-market returns at below-market risk. &#13;
&#13;
There is notable variation in returns and standard deviations of returns as the score-change and holding-period parameters vary, with the most favorable returns and risk-return trade-offs concentrated in strategies that employ longer score-change and holding periods. In addition, we find substantial variation in returns across industries, and this also changes considerably from one strategy parameter set to another. Moreover, the returns from long positions were considerably better than those from short positions, indicating that increases in review scores are much more reliable predictors of increases in stock prices than decreases in review scores are as predictors of declines in stock prices; however, this result is somewhat misleading, since the backtest period was a time of immense growth in equities across the board. We recognize that this analysis leaves ample opportunity to improve upon its strategy and to develop further strategies for various other data sources.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Oxygen Production of the Mars Oxygen ISRU Experiment (MOXIE) through Feedback Control of Pressure Sensor 4</title>
<link href="https://hdl.handle.net/1721.1/150106" rel="alternate"/>
<author>
<name>Horn, Kyle J.</name>
</author>
<id>https://hdl.handle.net/1721.1/150106</id>
<updated>2023-04-01T03:49:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Adaptive Oxygen Production of the Mars Oxygen ISRU Experiment (MOXIE) through Feedback Control of Pressure Sensor 4
Horn, Kyle J.
The Mars Oxygen ISRU Experiment (MOXIE) has demonstrated the ability of a system to produce oxygen on the surface of Mars by means of solid oxide electrolysis from atmospheric carbon dioxide. This work builds on the mission goals of MOXIE, which runs only intermittently and with much manual planning for each run, to develop control algorithms that will lay the foundation for fully autonomous and continuous operation of future systems. Through modeling and experimentation on the MOXIE FlatSat system at MIT Haystack Observatory, the robustness of the pressure sensor feedback control loop was validated. The maximum oxygen production rate achieved during the investigation was 6.07 grams per hour.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embroidered Multi-Modal Sensing Arrays for Tactual Perception</title>
<link href="https://hdl.handle.net/1721.1/150103" rel="alternate"/>
<author>
<name>Foshey, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/150103</id>
<updated>2023-04-01T03:58:34Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Embroidered Multi-Modal Sensing Arrays for Tactual Perception
Foshey, Michael
Humans possess a peripheral nervous system that gives them the ability to sense and interpret tactile information, normal and shear force, and vibrations. This provides us with the ability to perceive changes in our surroundings and react to them, allowing us to complete complex tasks. Bestowing these sensory modalities on robotic systems would enable them to complete complex manipulation and assembly tasks that are trivial for humans. In this work, we present new types of robotic skins that give robots the ability to sense normal force, shear force, and vibration. Fabricated with a highly automated process, our sensing systems can be manufactured inexpensively. Furthermore, we can capture forces over a wide range by employing multi-gain capturing techniques. We demonstrate our sensing systems’ capabilities by designing and fabricating a set of devices and utilizing them for human wearables and robotics applications.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Novel Laser-Skived Microbridges for Improved Characterization of REBCO Superconductor</title>
<link href="https://hdl.handle.net/1721.1/150102" rel="alternate"/>
<author>
<name>Tang, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/150102</id>
<updated>2023-11-09T07:56:29Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Evaluation of Novel Laser-Skived Microbridges for Improved Characterization of REBCO Superconductor
Tang, Kevin
Recent advances in the performance and manufacturing of high-temperature superconductors (HTS) such as rare earth barium copper oxide (REBCO) are enabling a new generation of large-scale superconducting magnets operating at magnetic fields in excess of 20 T. Characterizing REBCO tape current as a function of temperature (T), magnetic field (B), and magnetic field angle (ϕ) is an essential activity for providing magnet design and operational data as well as for quality control; however, the high critical current of REBCO makes such characterization difficult. To overcome this issue, a process called bridging is frequently employed such that a well-defined subsection of the tape width is used as a proxy for the full-width REBCO tape. The benefit is that the total current required for characterization scales with the subsection width, enabling accurate, simpler, and more accessible characterizations, especially at high performance. Two downsides are the traditional bridge manufacturing process and the potential for the bridge to sample regions of higher or lower than width-averaged critical current.&#13;
&#13;
Bridging is traditionally achieved through a process of chemical etching and photolithography, which is labor- and time-intensive, can inadvertently damage the superconductor if the REBCO coatings are thick, and can only be done on specially prepared REBCO samples without copper surface layers instead of the commercially manufactured tapes used in actual magnets [1]. To improve upon this method, this thesis proposes and qualifies a laser-based microbridging technique known as skiving, in which a precision laser is used to cut an optimized subsection of REBCO for accurate critical current characterization at significantly reduced test currents. From the fundamental power law describing superconductivity, the critical current and power law index (the so-called “n-value”) were the two main parameters used in assessing the performance of the technique and microbridge designs [2]. Bridges were tested at the MIT Plasma Science and Fusion Center (77 K, self-field), Commonwealth Fusion Systems (15 K - 77 K, self-field - 12 T), and at the High Field Laboratory for Superconducting Materials (HFLSM) at Tohoku University in Japan, where the 40 µm bridges were exposed to the most extreme conditions (4 K - 50 K, 0 T - 20 T) over two research campaigns in October 2019 and January 2020. &#13;
&#13;
400 µm single channel bridges were created and upon testing, produced an average post-bridge to pre-bridge n-value ratio of 0.67 ± 0.01 and an average bridge efficiency of 0.99 ± 0.01, serving as the baseline for subsequent tests. Different designs were tested to capture as much of the tape width as possible and provide a more accurate prediction of the full width critical current at very small (down to 40 µm) bridge widths, resulting in two variations that achieved an average n-value ratio of 0.64 ± 0.01 and an efficiency of 0.88 ± 0.01, providing good confidence in the use of such bridges to characterize full width tape. 40 µm bridges were shown to be significantly more accurate than chemically etched bridges in testing at the HFLSM and key lessons about what to include and exclude in bridge designs were learned. Future work is recommended for further investigation into characterizing samples from different manufacturers with varying layer thicknesses as well as improving the accuracy and robustness of the bridges in predicting full-width REBCO tape performance.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nudging Permanence</title>
<link href="https://hdl.handle.net/1721.1/150101" rel="alternate"/>
<author>
<name>Loescher-Montal, Angela</name>
</author>
<id>https://hdl.handle.net/1721.1/150101</id>
<updated>2023-04-01T03:14:59Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Nudging Permanence
Loescher-Montal, Angela
Across Berlin’s history, the street – as image, as space, as imaginary, as activity – has been, and continues to be, continuously appropriated and contested by stakeholders across the city: residents, owners, shopkeepers, tourists, and others. Top-down politicians and public entities have long been grappling with how to position themselves (and their own desires) within this tension, using tools such as regulations and publicly funded projects as a means of developing an “appropriate” Straßenbild (street-image) to produce a desirable and cosmopolitan Stadtbild (city-image). As retail regulations constrain retail to interiors and developers favor larger, longer-term retail contracts over smaller short-term “stunts”, I have begun to trace a shifting and unresolved paradigm: permanence privileged over temporality, certainty over uncertainty. Recent regulatory changes do not fall short of mentioning how current flying trade (flea markets, food trucks, etc.) “undermines” existing retail offerings. &#13;
&#13;
This project questions the typical process of gentrification under the ideological norms of “highest and best use” and takes up a large area of land in Friedrichshain currently slated for re-development to re-imagine temporary mentalities. This project is offered as a template for similar projects, and in the spirit of temporary uses (transient, sedentary, and inhabited), most of the tactics – building included – can be adapted and moved across the city. By formalizing their existence, the thesis traces the legal and economic framework that many resident-driven retail, exchange, and re-use initiatives navigate to exist in the city. Ultimately, this thesis remains unfinished, but its aims remain two-fold: to investigate temporary uses in relation to their regulatory and formal tactics and to reinforce existing temporary practices through a supporting imaginary. It hopes to shift the fantasy of tectonic retail in this existing development, and in doing so, asks the question: can we nudge the imaginary of permanence?
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Noise-Protected Superconducting Qubits</title>
<link href="https://hdl.handle.net/1721.1/150097" rel="alternate"/>
<author>
<name>An, Junyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/150097</id>
<updated>2023-04-01T03:36:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Engineering Noise-Protected Superconducting Qubits
An, Junyoung
Improving the lifetime of qubits is crucial to achieving reliable quantum computation with superconducting qubits. One way to improve the qubit lifetime is to engineer the circuit design and parameters to protect the qubit from environmental noise. Some noise-protected superconducting qubits have the potential to overcome the coherence limitations of transmons, whose lifetime is often dominated by energy relaxation. Here we study the zero-pi qubit, a superconducting circuit-based qubit that can provide simultaneous protection against dephasing and relaxation. Although the noise-protection property of the zero-pi qubit is appealing, it has stricter design parameter constraints than other superconducting qubits, and coherent control of the qubit is challenging. In this thesis, we propose several methods to enable fast, robust control of the zero-pi qubit. Additionally, we introduce some preliminary measurement results of the zero-pi qubit and discuss how to mitigate the challenges we faced during the measurement.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Success of Emerging Space Actors: Effective Strategies in the NewSpace Era</title>
<link href="https://hdl.handle.net/1721.1/150096" rel="alternate"/>
<author>
<name>Erkel, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/150096</id>
<updated>2023-04-01T03:50:12Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Success of Emerging Space Actors: Effective Strategies in the NewSpace Era
Erkel, Daniel
In an era of ubiquitous space technology, policy makers around the globe face a choice: become a space power or become obsolete. Recent armed conflicts showed the strategic importance of satellite imagery and communication enabled by commercial satellites and consequently strengthened an already growing impetus for actors to enter or increase their engagement in the space domain. Research previously demonstrated the socio-economic benefits of space technology in emerging space nations. However, circumstances have changed since. While "traditional" avenues to join the industry still exist, the paradigm shift of the so-called NewSpace era introduced new opportunities. These opportunities come with new risks. Today's space technology is more affordable than ever before and project life cycles appear faster. However, rapid and high returns on investment are not guaranteed, nor are societal benefits from space. We have yet to see strong and, more importantly, widespread financial and socio-economic returns in the NewSpace ecosystem. Emerging actors must therefore systematically examine trends in the domain and develop resilient and flexible strategies. This is the gap this thesis aims to address. First, the thesis discusses the paradigm shift of the NewSpace industry along with an analysis of its key trends, drawing attention to some less-discussed risks present today. Following this, the thesis explores the classification of emerging space actors and presents a framework to chart their development. Finally, a systems engineering-focused approach to space strategy development is presented along with a novel approach in agile aerospace engineering, enabling cost-effective and risk-reduced microgravity experiments for emerging space actors as a possible stepping stone to join the space ecosystem.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Have the private equity real estate funds out-performed REITs on a risk adjusted basis over time?</title>
<link href="https://hdl.handle.net/1721.1/150095" rel="alternate"/>
<author>
<name>Lai, Qiaojun</name>
</author>
<id>https://hdl.handle.net/1721.1/150095</id>
<updated>2023-04-01T03:05:02Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Have the private equity real estate funds out-performed REITs on a risk adjusted basis over time?
Lai, Qiaojun
Investors often face tough decisions when it comes to allocating their limited investment capital. Traditionally, private equity real estate funds (“PERE funds”) have been a popular choice, but since the 1990s, real estate investment trusts (“REITs”) have also become a widely available option. Given the range of options available, investors may be wondering which option will provide the best returns on their investment.&#13;
&#13;
In this paper, we provide a comparison of the return performance between REITs and PERE Funds’ three main strategies – Core, Value-Added, and Opportunistic – from 2000 to 2020. The study analyzes a series of data from Preqin, NCREIF, NAREIT, and SBBI. Our study features the use of the Treynor ratio to measure investment performance on a risk-adjusted basis, providing a rigorous quantitative approach to comparing the total returns of both PERE funds and REITs. Overall, we find that listed REITs slightly outperformed PERE Funds over the past 21 years. The study concludes with an implied recommendation on investment strategies reflecting our findings.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Treatment of Anal Fistulas and Endovascular Drug Delivery for Peripheral Arterial Disease</title>
<link href="https://hdl.handle.net/1721.1/150089" rel="alternate"/>
<author>
<name>Bowman, Bo (Heather Genevieve)</name>
</author>
<id>https://hdl.handle.net/1721.1/150089</id>
<updated>2023-04-01T03:16:25Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Novel Treatment of Anal Fistulas and Endovascular Drug Delivery for Peripheral Arterial Disease
Bowman, Bo (Heather Genevieve)
ABSTRACT #1: Peripheral Arterial Disease Drug Delivery&#13;
&#13;
Peripheral arterial disease affects 6.5 million people over the age of 40 in the USA. This blood circulation disorder blocks peripheral arterial blood vessels, restricting oxygen-rich blood flow to the limbs and increasing chances of limb loss. Unfortunately, although treatments exist to open the blood vessel, 12% of patients still experience recurrence. The cause of this is intimal hyperplasia. The drug Amlexanox has been shown to reduce intimal hyperplasia in dogs. In this study we propose a stretch-release drug delivery platform for delivering Amlexanox. This platform consists of a cyclodextrin acrylamide gel coating loaded with drug and inserted onto angioplasty balloons. Drug measurements show that this coating releases statistically significant amounts of drug into mock vessels upon balloon expansion. The coating can also stretch up to 21 times its original length. Successful bonding of the gel to commercial latex and silicone balloon catheters was achieved with the addition of oxygen scavengers and benzophenone, paired with UV light at a close distance. Future work entails coating integrity testing, drug dosing analysis, and in-vivo rat studies. In conclusion, this platform coating technology has been shown to release drugs during balloon expansion and has promise for enabling delivery of novel drug treatments. &#13;
&#13;
ABSTRACT #2: Novel Treatment of Anal Fistulas &#13;
&#13;
There are considerable unmet needs for treating anorectal fistulas. The current gold standard of care, an advancement flap, has only a 45-76% success rate and a 7% incontinence rate. Other effective treatments exist, such as the LIFT procedure or fistulotomy, but many of these procedures still put the sphincter muscles at risk, most work only for specific situations, and none are without complications. In this study, a novel approach of closing the internal fistula opening is discussed. By closing the opening with a tissue-compatible adhesive patch, fewer bacteria enter the fistula from the anal or rectal cavity, and the internal opening is fully sealed off to facilitate healing. Preliminary results for rectal, anal, and vaginal tissue show that this patch can withstand more than 3-5 times the maximum estimated shear stress that the patch would be subjected to for hard feces (1673 Pa). This technology is of interest to Crohn’s patients, for whom interventional surgery is increasingly risky. This technology has significant clinical translational opportunity since it is low risk, non-invasive, and reversible on-demand. Literature already supports the in-vivo biocompatibility of this patch. Future work includes in-vivo studies to confirm clinical efficacy. In sum, this newly proposed technology shows promise as an alternative that could outperform advancement flaps for anorectal fistula treatment.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crises Learning Under Diagnosticity</title>
<link href="https://hdl.handle.net/1721.1/150088" rel="alternate"/>
<author>
<name>Zhu, Jiulei</name>
</author>
<id>https://hdl.handle.net/1721.1/150088</id>
<updated>2023-04-01T03:19:29Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Crises Learning Under Diagnosticity
Zhu, Jiulei
We analyse ways in which diagnostic expectation in macro forecasting contaminates posterior inference and interferes with Bayesian parameter learning. Conversely, we study scenarios where parameter uncertainty dampens or magnifies extrapolation. We characterise two unique implications of such an interaction with supporting evidence from the SPF: 1) State-dependence of extrapolation even after controlling for shocks - more aggressive extrapolation when surprises are mean-reverting. 2) Asymmetric extrapolation to positive versus negative surprises - reorienting the axis to control for state effects yields a unified bias pattern across macro indicators. Additionally, oblivious agents who extrapolate are found to learn parameters more slowly and consistently underestimate the persistence of the underlying process. We question the crude use of the predictability of forecast errors as quantifiers of departure from rationality and offer an alternative approach which treats extrapolative tendencies as state-dependent and which distinguishes between two sources of error: biases and parameter confusion.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Big Data Needs Small Data: Exploring Digital Adaptability of Restaurants in the context of Covid-19 in Boston</title>
<link href="https://hdl.handle.net/1721.1/150087" rel="alternate"/>
<author>
<name>Shi, Huiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/150087</id>
<updated>2023-04-01T04:04:51Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Big Data Needs Small Data: Exploring Digital Adaptability of Restaurants in the context of Covid-19 in Boston
Shi, Huiwen
Using a combination of quantitative and qualitative research methods, this research explores the relationship between digital engagement levels reflected through online reviews with restaurant digital adaptability in the context of the Covid-19 pandemic in Boston.&#13;
&#13;
First, the project scraped 523,348 reviews for the 3,325 restaurants in Boston from 2004 to February 2022. Using K-means clustering to analyze time series data based on positive review quantities, restaurants are clustered into four typologies of digital engagement level. Second, three neighborhoods are selected as study areas for their different distributions of store typologies. Finally, site visits and interviews were conducted with store owners/managers in the sampled neighborhoods. The findings reveal, first, that the clustering result based on Yelp.com reflects restaurants' digitalization strategies. Second, they identify that business digital adaptability is crucial for restaurants’ business resilience regardless of business type. Last but not least, the research discovers the limitations of using a single-sourced user-generated dataset due to market segmentation, identifying the necessity of ground-truthing exercises to validate the quality of the data.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unassisted Humans Infer Personal Traits from Facebook Group Memberships: An Empirical Study with Implications for Employers and State Entities</title>
<link href="https://hdl.handle.net/1721.1/150085" rel="alternate"/>
<author>
<name>Paeth, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/150085</id>
<updated>2023-04-01T03:49:28Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Unassisted Humans Infer Personal Traits from Facebook Group Memberships: An Empirical Study with Implications for Employers and State Entities
Paeth, Kevin
The practice of using online social network (OSN) profiles and other internet-based records by third-parties in order to evaluate individuals for various purposes, known as cybervetting, is growing more popular. The United States State Department now requires all non-immigrant and immigrant visa applications to supply OSN profile identifiers operated by applicants on various platforms, including Facebook, Instagram, and Twitter (referred to as the “social media registration” requirement). Employers and recruiters regularly use OSN profiles and related information to screen or monitor employees and job candidates.&#13;
&#13;
In these contexts, certain personal traits of individuals may be considered especially sensitive, especially where human reviewers are decision-makers. Visa applicants may not wish to disclose information about themselves that is not explicitly required (such as religious and spiritual beliefs or sexual preference) for fear of discrimination. Similarly, job applicants may wish to keep private certain personal traits (such as race, ethnicity, gender, and age), even if their influence in decision-making would constitute illegal discrimination.&#13;
&#13;
The aim of this research is to determine if Facebook Group memberships can disclose users' information that may be considered sensitive, private, and/or legally protected to human reviewers. It is motivated by the observation that the non-hidden Facebook Group memberships of any user are publicly discoverable (with some effort), which may contradict users’ expectations of the privacy of their aggregate group membership information, and therefore not have been treated as a potential source of public data disclosure.&#13;
&#13;
We first collected real Facebook profile information from 32 users with diverse demographic backgrounds. We then conducted an empirical study with 63 participants to measure the ability of humans to infer eight personal traits (race and ethnicity, gender, age, religious and spiritual beliefs, relationship status, highest level of education, employment status, and income) of these users based exclusively on their Facebook Group memberships.&#13;
Our results show that certain traits are more inferable by human reviewers than others. Participants were able to infer race and ethnicity identities of 50% of subjects more than 88% of the time, and gender identities of 50% of subjects more than 70% of the time.&#13;
&#13;
We discuss the implications of our findings in the context of current regulations that prohibit employers from requesting sensitive demographic data as well as formal governmental processes that require foreign nationals to disclose their social media profiles.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Closure Models for Chaotic Dynamical Systems</title>
<link href="https://hdl.handle.net/1721.1/150084" rel="alternate"/>
<author>
<name>Jalan, Aman</name>
</author>
<id>https://hdl.handle.net/1721.1/150084</id>
<updated>2023-04-01T03:37:25Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Neural Closure Models for Chaotic Dynamical Systems
Jalan, Aman
An important challenge in the problem of producing accurate forecasts of multiscale dynamics, including but not limited to weather prediction and ocean modeling, is that these dynamical systems are chaotic in nature. A hallmark of chaotic dynamical systems is that they are highly sensitive to small perturbations in the initial conditions and parameter values. As a result, even the best physics-based computational models, often derived from first principles but limited by varied sources of errors, have limited predictive capabilities for both shorter-term state forecasts and for important longer-term global characteristics of the true system. Observational data, however, provide an avenue to increase predictive capabilities by learning the physics missing from lower-fidelity computational models and reducing their various errors. Recent advances in machine learning, and specifically data-driven knowledge-based prediction, have made this a possibility but even state-of-the-art techniques in this area have not been able to produce short-term forecasts beyond a small multiple of the Lyapunov time of the system, even for simple chaotic systems such as the Lorenz 63 model. In this work, we develop a training framework to apply neural ordinary differential equation-based (nODE) closure models to correct errors in the equations of such dynamical systems. We first identify the key training parameters that have an outsize effect on the learning ability of the neural closure models. We then develop a novel learning algorithm, broadly consisting of adaptive tuning of these parameters, designing dynamic multi-loss objective functions, and an error-targeting batching process. 
We evaluate and showcase our methodology on the chaotic Balance Equations in an array of increasingly difficult learning settings: first, only the coefficient of one missing term in one perturbed equation; second, one entire missing term in one perturbed equation; third, two missing terms in two perturbed equations; and finally, the previous case but with a perturbation two orders of magnitude larger than the state, thereby resulting in a completely different attractor. In each of these cases, our new multi-faceted training approach drastically increases both state-of-the-art state predictability (up to 15 Lyapunov times) and attractor-reproducibility. Finally, we validate our results by comparing them with the predictability limit of the chaotic BE system under different magnitudes of perturbations.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RGB-D Likelihood for 3D Inverse Graphics</title>
<link href="https://hdl.handle.net/1721.1/150082" rel="alternate"/>
<author>
<name>Gothoskar, Nishad</name>
</author>
<id>https://hdl.handle.net/1721.1/150082</id>
<updated>2023-04-01T03:11:04Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">RGB-D Likelihood for 3D Inverse Graphics
Gothoskar, Nishad
A central challenge in 3D scene perception via inverse graphics is robustly modeling the gap between 3D graphics and real-world data. We propose a novel 3D Neural Embedding Likelihood (3DNEL) over RGB-D images to address this gap. 3DNEL uses neural embeddings to predict 2D-3D correspondences from RGB and combines this with depth in a principled manner. 3DNEL is trained entirely from synthetic images and generalizes to real-world data. To showcase this capability, we develop a multi-stage inverse graphics pipeline that uses 3DNEL for 6D object pose estimation from real RGB-D images. Our method outperforms the previous state-of-the-art in sim-to-real pose estimation on the YCB-Video dataset, and improves robustness, with significantly fewer large-error predictions. Unlike existing bottom-up, discriminative approaches that are specialized for pose estimation, 3DNEL adopts a probabilistic generative formulation that jointly models multi-object scenes. This generative formulation enables easy extension of 3DNEL to additional tasks like object and camera tracking from video, using principled inference in the same probabilistic model without task specific retraining.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multispecies Syntopia: Collaborative Survival in the Nuclear Anthropocene</title>
<link href="https://hdl.handle.net/1721.1/150077" rel="alternate"/>
<author>
<name>Liu, Wa</name>
</author>
<id>https://hdl.handle.net/1721.1/150077</id>
<updated>2023-04-01T03:32:03Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Multispecies Syntopia: Collaborative Survival in the Nuclear Anthropocene
Liu, Wa
My thesis paper argues that we—a multispecies assemblage of humans and non-humans occupying different ecologies—are living in the era of the “Nuclear Anthropocene.” From nuclear contamination to climate change, we have found ourselves enmeshed in accumulating ecological crises since the Cold War. To achieve our collaborative survival with other species, this paper proposes to reimagine the planetary ecosystem through the perspective of plants, in particular, plants that embody the aftermath of nuclear militarization. Through a two-fold structure, this thesis questions humans’ dominant role in knowledge production while eliciting an eco-centric way of world-making. It draws upon new materialism and post-humanism to reflect on cross-species communication between human and non-human agents. In conclusion, it proposes the concept of “multispecies syntopia”—a combination of “syn,” which means “together,” and “topia,” which means “place.” It is the encounter and symbiosis of different species in the same habitat. Instead of anthropomorphizing the plants, the thesis seeks to expand our sensibilities and vocabularies in an attempt to understand the global ecosystem on a planetary time scale. Through the stories of plants and a comparative analysis of various artworks, I aspire to stitch us back into the tapestry of the multispecies assemblage and to feel more in a world grander than ourselves.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Homebuilder’s Songbook</title>
<link href="https://hdl.handle.net/1721.1/150076" rel="alternate"/>
<author>
<name>May, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/150076</id>
<updated>2023-04-01T03:44:39Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Homebuilder’s Songbook
May, Samuel
This thesis considers the potentials of cultural practices in transforming single-family housing in the present-day United States through a series of speculative designs. An archival assemblage documenting historical modifications to houses, along with music and other cultural artifacts, forms the working material for these speculations.&#13;
&#13;
In examining the role architects have played in imaging the single-family house, this thesis first explores a paper trail of pattern books, catalogs, and manuals. This lineage of disseminated media shows how designers have continually recast the house in response to larger social and technological changes through largely consistent representational strategies.&#13;
&#13;
A parallel examination of archives like the Historic American Buildings Survey demonstrates how communities have continually leveraged accessible building methods to make alterations to their own houses, developing new modes of building and dwelling in what bell hooks described as “Architecture as Cultural Practice.”&#13;
&#13;
In the past century, cultures of expertise have driven decisions about housing further and further from those impacted, limiting the cultural reconstruction of housing by its inhabitants. Left unchanged, inherited neighborhoods of single-family dwellings fail to meet the pluralistic needs of the communities left with them, and those excluded from them.&#13;
&#13;
Questioning the architect’s historic role in single-family housing, this thesis moves away from the prescriptive format of the catalog and towards a cultural anthology. Architecture can learn from methods of music production and scholarship for their potential to celebrate the subjectivity of voice, acknowledge co-authorship, and reflect cultural diversity. Where the catalog offers the available options, the songbook provides material with which to play. Through design and collaboration with active musicians and songwriters, this thesis speculates as to how architects, as participants and facilitators, might enliven homes with song.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Earthly Forces: Rethinking the Potential Energies of the Episodic, Dispersed, and Unpredictable</title>
<link href="https://hdl.handle.net/1721.1/150075" rel="alternate"/>
<author>
<name>Pearl, Natalie Pascale</name>
</author>
<id>https://hdl.handle.net/1721.1/150075</id>
<updated>2023-04-01T04:02:57Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Earthly Forces: Rethinking the Potential Energies of the Episodic, Dispersed, and Unpredictable
Pearl, Natalie Pascale
The Earth is active. Mountains are violent, rivers relentless, and even the slightest movement of matter, drops of rain or grains of sand, can, when compounded, have exponential ramifications. Geology has a long history of being chained to the economies and ambitions of material and energy revolutions, and to regimes of architecture and construction. This thesis project opposes the modern desires for consistency, reliability, and scalability that come at the expense of resource extraction and depletion. It imagines a world where humans might work with the rate at which the earth builds through erosion, transport, and accumulation. This research recognizes the force of water in its multitude of forms: turbulent, seeping, and crystalline. It explores ways in which the potential energies of gravity and earthly forces could create new forms of infrastructure that hold and store potential energy.&#13;
&#13;
Among the many forces of nature, this thesis identifies and develops floods, rockfall, and avalanches as forces with which design interventions can be paired. Each force can lift weight and hold mass at height as potential energy, as well as collect debris for architectural structures. By acknowledging the potential energy of the earth’s forces, this project reflects on how we can begin to imagine a world where these forces are seen as sublime and productive rather than terrifying and destructive. This research project proposes that we modulate our energetic use and architectural output and calibrate these with the episodic, dispersed, and unpredictable ebb and flow of the Earth’s metabolism.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NEO-FOXCONN: Analysis and Redesign of Foxconn Campus</title>
<link href="https://hdl.handle.net/1721.1/150072" rel="alternate"/>
<author>
<name>Fan, Zekun</name>
</author>
<id>https://hdl.handle.net/1721.1/150072</id>
<updated>2023-04-01T03:04:31Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">NEO-FOXCONN: Analysis and Redesign of Foxconn Campus
Fan, Zekun
Beginning with Obama’s “we can make change” and Trump’s “make America great again,” the US has been attempting to reindustrialize by bringing the manufacturing industry back, especially from East Asian countries. Foxconn, the world’s largest technology manufacturer, is expanding its factories in America. However, Foxconn’s East Asian mode will not disappear if transplanted without reconsideration. These problematic practices are not only the result of a cultural context; they are also the result of a complicit architecture.&#13;
 &#13;
Instead of Foxconn's labor-intensive "closed campus”, what kind of architecture would emerge if they built innovation-driven "open blocks" drawn from the American context? This thesis explores the uncomfortable territory between corporate production and architectural delight that resists the status quo of industry over humanity. &#13;
 &#13;
The approaches include studying the architectural typology of twelve typical Foxconn “campuses” across mainland China, analyzing the Longhua campus in Shenzhen and illustrating the narratives of individual laborers, researching the evolution of collective housing from the utopian socialists to the present, and designing an experimental Neo-Foxconn project in South Boston.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonplanar Nanofabrication via Interface Engineering</title>
<link href="https://hdl.handle.net/1721.1/150066" rel="alternate"/>
<author>
<name>Spector, Sarah O.</name>
</author>
<id>https://hdl.handle.net/1721.1/150066</id>
<updated>2023-04-01T03:47:15Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Nonplanar Nanofabrication via Interface Engineering
Spector, Sarah O.
This thesis develops a platform for scalable fabrication of suspended, ultrathin nanostructures as building blocks of nanoelectromechanical systems by extending conventional planar techniques to nonplanar designs. We achieve this by engineering interface forces through a patterned molecular monolayer to enable controlled delamination of a deposited thin-film in predetermined locations. This allows us to form nonplanar structures with thicknesses &lt; 10 nm and nanogaps reaching &lt; 10 nm – features traditionally challenging to achieve. Our approach, which builds on standard, wafer-scale, and conventionally-compatible techniques, is versatile, tunable, and compatible with diverse materials. As a result, the technique opens up new opportunities for applications such as miniaturized nanoelectromechanical devices, including ultrathin mechanical resonators, which are demonstrated in this work.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Investigation of the Blowby Effect on the Three-Piece Oil Control Ring and Subsequent Oil Transport in Transient Engine Working Conditions</title>
<link href="https://hdl.handle.net/1721.1/150063" rel="alternate"/>
<author>
<name>Li, Mo</name>
</author>
<id>https://hdl.handle.net/1721.1/150063</id>
<updated>2023-04-01T03:14:11Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Experimental Investigation of the Blowby Effect on the Three-Piece Oil Control Ring and Subsequent Oil Transport in Transient Engine Working Conditions
Li, Mo
Lubrication Oil Consumption (LOC) has long been a critical issue for vehicle and engine manufacturers. It can generate harmful Oil Emission (OE) as well as deteriorate the aftertreatment system. During transient engine operations, OE can increase, but some of the mechanisms remain unclear. Thus, a thorough understanding of oil transport in the ring pack is needed to improve engine design.&#13;
&#13;
In this work, an optical engine with the 2D Laser Induced Fluorescence (2D-LIF) technique was used to study oil transport in a ring pack equipped with a Three-Piece Oil Control Ring (TPOCR). It was found that the engine load that results in zero blowby is the separation line dividing two drastically different oil flow patterns. Running the engine with a load lower than this line can result in increased oil accumulation inside the TPOCR groove, followed by upward oil transport into the third land and the second land, and finally oil droplets escaping through the top ring gap. The time needed for oil leakage to pass the different rings was investigated with a combination of numerical models that analyze the ring dynamics, gas flow, and oil–gas interaction. Given long enough time running below the blowby separation line, a sudden increase of engine load can result in increased oil leakage to the combustion chamber, coming from the oil accumulated in the second land and top ring groove. Furthermore, the effect of drain holes and of the relative location between ring gaps was studied to examine local oil distribution and overall LOC. One obvious practical takeaway is either to always run the engine with positive blowby or to limit the duration of time with zero blowby. These findings can help improve engine design and calibration to minimize LOC as well as OE in Spark Ignition (SI), hydrogen, and gas engines equipped with a TPOCR.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Physiologic Tremor Characterization, Mitigation, and Modeling</title>
<link href="https://hdl.handle.net/1721.1/150062" rel="alternate"/>
<author>
<name>Magaña-Salgado, Uriel</name>
</author>
<id>https://hdl.handle.net/1721.1/150062</id>
<updated>2023-04-01T03:05:52Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Methods for Physiologic Tremor Characterization, Mitigation, and Modeling
Magaña-Salgado, Uriel
Physiologic tremors exist in all healthy individuals, and occur in moments of excitement, anxiety, or muscle activation. They have been commonly observed in surgeons and other occupations requiring precise movements, but unlike tremors from Parkinson’s disease or Essential Tremor, they are often disregarded in clinical and research settings, as they are not linked to any neurological disease, nor do they disturb one’s daily life. The magnitude of physiologic tremors has been observed to increase through stressful or fatiguing experiences, and decrease through relaxation exercises or medication; however, the combined mechanical, electrical, and physiological changes in the body render it a challenging phenomenon to study.&#13;
&#13;
Advancements in physiological monitoring have allowed researchers to characterize tremors. Accelerometry and surface electromyography are often used to measure tremor patterns, both practical methods for measuring surface-level changes. Imaging modalities like ultrasound, on the other hand, are less prevalent techniques for these scenarios. Combining these methods can present a more holistic observation of physiological changes involved in an emerging tremor, potentially guiding research towards a more complete understanding of their cause and impact on the body.&#13;
&#13;
This thesis highlights approaches to detect and characterize physiologic tremors in the upper limbs. It describes the methods used to process the signals and images acquired via the various modalities, and relates the changes among modalities to each other. Finally, the results of this analysis are highlighted, presenting strategies to mitigate tremors and a new understanding of the biomechanics behind them.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semi-autonomous Magnetic Manipulation for Endovascular Navigation</title>
<link href="https://hdl.handle.net/1721.1/150060" rel="alternate"/>
<author>
<name>Choe, Jaehun</name>
</author>
<id>https://hdl.handle.net/1721.1/150060</id>
<updated>2023-04-01T03:03:31Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Semi-autonomous Magnetic Manipulation for Endovascular Navigation
Choe, Jaehun
Endovascular navigation is one of the significant barriers to stroke treatment. Due to the geometrical complexity of the cerebral arteries, navigation becomes more challenging as the destination moves further toward the distal region of the brain. Magnetic manipulation is one solution that can resolve this difficulty. By utilizing magnetic force and torque, magnetic manipulation enables active steering of the distal tip of the guidewire, thereby enhancing navigation capabilities. As a platform for magnetic manipulation, a single robot arm equipped with a permanent magnet has advantages over other magnetic field generators in terms of footprint and cost. However, manipulating the permanent magnet attached to the end of the robot arm is not a trivial task. During the operation, a neurosurgeon has to consider the relative position and orientation of the end effector with respect to the current position of the guidewire. Moreover, the user must avoid collision between the robot and the patient in order to ensure the patient’s safety. Without any assistance, these considerations place the cognitive burden on the user. In order to solve this problem, this thesis presents a semi-autonomous magnetic manipulation system for the single-robot-arm platform. In the system, an appropriate position and orientation of the magnet are obtained by aligning the desired field direction with the center axis of the magnet. For the calculated position and orientation, a corresponding robot configuration is obtained by solving the inverse kinematics problem of the robot arm. A three-step motion planner is applied to find a valid robot trajectory to the optimal robot configuration while securing the patient’s safety. The system is integrated into an interface, and the performance of the semi-autonomous system is verified through an experiment on an artificial cerebral artery model.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Into the Rhino-verse</title>
<link href="https://hdl.handle.net/1721.1/150058" rel="alternate"/>
<author>
<name>Gruber, Paul</name>
</author>
<id>https://hdl.handle.net/1721.1/150058</id>
<updated>2023-04-01T03:32:51Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Into the Rhino-verse
Gruber, Paul
Architects and designers are fully reliant on digital tools to complete their work. The need to constantly produce creative work has reached exhausting speeds. These programs are exponentially expanding and consuming each other, creating even more complex relationships. Architects are thrown into this technological jumble, expected to keep up with the rapidly evolving modes of production and it is challenging to access and utilize this seemingly infinite digital landscape. &#13;
&#13;
Rhinoceros 3D, a popular architectural modeling software, is a force in the digital design market. Over time, its parent company developed a platform that houses a large group of animal-named plug-ins to expand the capabilities of the Rhino Universe. Every day, new plug-ins are added, downloaded and modified for Rhino users and it is overwhelming to keep up with the changes and additions. The zoological theme allows for a playfulness in architectural design and provides a language to expand the digital scope of the architect. This thesis examines and embraces the absurdist digital landscape to make sense of the relationship between architects and parametric design tools, while also introducing newly imagined animals to control even more aspects of the architectural process. &#13;
&#13;
The project envisions a not-so-distant future where the role of the architect is collapsed to choosing the right combinations of animals to produce a desired, prescribed output. While the abilities of the architect expand through the usage of these tools, there is immense knowledge collapsed within a Grasshopper component that the architect no longer needs to possess. This thesis views the digital ecosystem as more than just a branding mechanism: it is a useful tool for understanding the possibilities and limitations of digital design thinking in the sometimes-dreary, isolating, and confusing technological environment architects and designers inhabit daily.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Last Mile of Broadband: Estimating the Economic Impacts of Connect America Fund Initiative</title>
<link href="https://hdl.handle.net/1721.1/150052" rel="alternate"/>
<author>
<name>Munyikwa, Zanele</name>
</author>
<id>https://hdl.handle.net/1721.1/150052</id>
<updated>2023-04-01T03:17:59Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The Last Mile of Broadband: Estimating the Economic Impacts of Connect America Fund Initiative
Munyikwa, Zanele
I use a county-level panel dataset from 2013 to 2019 to assess the impacts of a federal program that provided massive subsidies to facilitate the expansion of broadband infrastructure: the Connect America Fund Phase II Program. This program incentivized telecommunications carriers to provide broadband access to high-cost areas in the United States (typically rural and other underserved communities). I study the impact of this "last mile" of broadband access on local employment outcomes. I find that program funding in a geographic area has a positive effect on weekly wages, and potentially a positive impact on population growth.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Regional Sources of Atmospheric Polycyclic Aromatic Hydrocarbon Pollution and Associated Human Cancer Risk</title>
<link href="https://hdl.handle.net/1721.1/150046" rel="alternate"/>
<author>
<name>Trivedi, Disha</name>
</author>
<id>https://hdl.handle.net/1721.1/150046</id>
<updated>2023-04-01T03:47:04Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Assessing Regional Sources of Atmospheric Polycyclic Aromatic Hydrocarbon Pollution and Associated Human Cancer Risk
Trivedi, Disha
Atmospheric pollution from organic matter burning poses health risks to humans across the globe, making it crucial to assess and regulate such pollution’s sources. One class of atmospheric pollutants, the toxic, carcinogenic polycyclic aromatic hydrocarbons (PAHs), are globally ubiquitous chemicals of interest to domestic and international policy decision-makers. Previous studies of PAHs use a single carcinogenic PAH, benzo[a]pyrene (BAP), to perform policy-relevant analyses of PAH sources and health effects. However, a recent study by Kelly et al. (2021) suggests that BAP may not represent the pollution and health effects of other carcinogenic PAHs and their degradation products. Kelly et al.’s findings suggest that previous BAP-based analyses may not capture the PAH emissions sources responsible for PAH concentrations and cancer risk in policy-relevant regions. This thesis uses Kelly et al.’s extended version of the global, three-dimensional GEOS-Chem atmospheric chemistry model to simulate concentrations of BAP and of 48 PAHs and degradation products in three policy-relevant regions: the Arctic Circle, the continental United States, and East Asia. This thesis performs source–receptor analysis by simulating BAP (“the BAP simulation”) and simulating 48 PAHs (“the 48-PAH simulation”). In the Arctic, simulating 48 PAHs demonstrates that outside-Arctic sources, primarily from sub-Arctic Russia and continental Europe, contribute a higher percentage of Arctic PAH pollution and cancer risk than simulating BAP alone attributes. However, a high percentage of Arctic PAH concentrations come from within-Arctic sources in both the BAP simulation and the 48-PAH simulation. In the United States, simulating 48 PAHs identifies single-digit contributions from outside-US sources, predominantly from sub-Arctic Canada and the Arctic, that are missed when simulating BAP alone.
Geospatial mapping of these simulations indicates that Arctic wildfire emissions are likely responsible for these sub-Arctic Canadian and Arctic contributions to US PAHs, as well as for the within-Arctic sources of Arctic PAHs. In East Asia, simulating BAP and simulating 48 PAHs both show that within-East Asian sources contribute almost all of East Asian PAH concentrations and associated cancer risk. Overall, these source attribution findings demonstrate the salience of multi-PAH assessments, such as the 48-PAH simulation, for identifying outside-region sources of interest to stakeholders like the United Nations Economic Commission for Europe’s Convention on Long-Range Transboundary Air Pollution and the Arctic Council.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Draft Resolution Supporting The Municipal Authority to Rearrange: A Non-Optimized Methodology for Doing Less</title>
<link href="https://hdl.handle.net/1721.1/150045" rel="alternate"/>
<author>
<name>Wissemann, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/150045</id>
<updated>2023-04-01T03:41:12Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">A Draft Resolution Supporting The Municipal Authority to Rearrange: A Non-Optimized Methodology for Doing Less
Wissemann, Emily
Buildings are outdoor objects. They are enclosures that are detached and extended outward from the body in order to warm, cool, or otherwise insulate persons, animals, or objects deemed worthy of protection. Containers of space and air, building enclosures create barriers to temperature, moisture, and to strangers. Single-family homes are one such type of enclosure. As a building typology, they are institutions of outsized stature in both the American imaginary and the physical presence on the landscape. Clad with layered accumulations of extracted material, single-family homes are designed to extend to the brims of their structures and often their plots. This thesis proposes the partial deconstruction and reenclosure of unused and oversized housing stock by local governments, in order to mine post-extraction material for reuse as well as reduce the volume of temperature-controlled air in homes over 2,687 square feet.&#13;
&#13;
The three case studies presented in this thesis are set in Rutland, Vermont, a setting chosen for its warm, wet summers and freezing, snowy winters, as well as its above-average home sizes, which stand in contrast to a history of declining population. As opposed to research into engineered solutions for sealed envelopes or design-for-disassembly approaches, which focus on ground-up construction, this thesis proposes pathways to less that are outside of the typical cost- or material-driven calculus of architecture and construction. Instead, this research centers on the reduction of the volume of temperature-controlled air as a driver of adaptive reuse schemes. The aim here is to provide a set of equally valid architectural possibilities for reducing the impact of the built world on the environment.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The impact of Federal Reserve’s policies on the residential mortgage markets</title>
<link href="https://hdl.handle.net/1721.1/150044" rel="alternate"/>
<author>
<name>Raipelly, Rahul Sharad</name>
</author>
<author>
<name>Wamakima, Corazon</name>
</author>
<id>https://hdl.handle.net/1721.1/150044</id>
<updated>2023-04-01T03:14:04Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">The impact of Federal Reserve’s policies on the residential mortgage markets
Raipelly, Rahul Sharad; Wamakima, Corazon
Post Global Financial Crisis (GFC), the Federal Reserve adopted a new tool for temporary quantitative easing (QE) by purchasing agency mortgage-backed securities and Treasury securities from the market in an attempt to resurrect the economy. This unconventional method of restoring the capital markets proved innovative and effective. During the COVID pandemic, the Federal Reserve continued to purchase more securities to stabilize the markets resulting in a vast expansion in its balance sheet.&#13;
&#13;
In the current inflationary environment (2022), the Federal Reserve is running losses on the balance sheet as it increases the interest rates as part of its quantitative tightening policies; the Federal Reserve must consider whether to sell its current MBS holdings according to its plan or hold on to the portfolio. The residential mortgage market faces additional liquidity pressures and uncertainty with limited Federal Reserve support.&#13;
&#13;
Inspecting the spread between the 10-year Treasury yield and fixed 30-year mortgage rates during times of crisis and of stable markets, this thesis investigates current market uncertainty and the impact of shocks that increase the spread, which translates into higher mortgage rates and lower affordability for the borrower. This thesis concludes that the asymmetry of Federal Reserve policies may never work as intended and instead adds uncertainty to the residential mortgage markets.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forward to the Past: Redesigning the form and flow of C2C Marketplace</title>
<link href="https://hdl.handle.net/1721.1/150041" rel="alternate"/>
<author>
<name>Sim, Jinyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/150041</id>
<updated>2023-04-01T03:16:41Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Forward to the Past: Redesigning the form and flow of C2C Marketplace
Sim, Jinyoung
The nature of luxury is multifaceted and deeply intertwined with our society and culture. It is not just about exclusivity and high cost, but also about experiences and values. Luxury can be found in a bespoke suit tailored to fit your every curve, in a lavish and secluded resort nestled in the heart of the jungle, or in a simple yet perfectly crafted piece of furniture that tells a story. It is a feeling of indulgence, of being able to fully appreciate and enjoy the finer things in life. It is a state of mind, a way of living, and a celebration of the human spirit.&#13;
&#13;
This thesis proposes a particular design model, or a prototype of architecture, that both embodies and facilitates the transaction of secondhand luxury goods, or pre-owned luxury goods sold by individuals, in the form of a consumer-to-consumer marketplace. The prototype is a special type of warehouse store that integrates numerous design elements addressing the characteristics of luxury and its interrelationship with humans, machines, and the built environment. In a way, the thesis brings luxury and architecture together as in a delicate dance, each enhancing the other in a way that is both subtle and profound. At their core, both luxury and architecture seek to evoke a sense of wonder and desire, drawing us in with their beauty and craftsmanship. They are mutually reinforcing, each elevating the other to new heights of splendor and exclusivity.&#13;
&#13;
In the end, this thesis attempts to reconstruct material form and the built environment in the specific context of human interest and behavior: what captivates us, what we buy, and what we live for. It recognizes the profound impact that the built environment has on the human experience and the ways in which it shapes and defines our actions and behaviors. In this way, the built environment becomes a reflection of who we are and what we value, and has the power to shape and enhance the human experience in meaningful and enduring ways.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of superconducting domain structures using optical polarization techniques.</title>
<link href="https://hdl.handle.net/1721.1/148716" rel="alternate"/>
<author>
<name>Rosen, Lowell.</name>
</author>
<id>https://hdl.handle.net/1721.1/148716</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">A study of superconducting domain structures using optical polarization techniques.
Rosen, Lowell.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highway construction cost model for sector planning in developing countries</title>
<link href="https://hdl.handle.net/1721.1/148713" rel="alternate"/>
<author>
<name>Aw, Wee Beng.</name>
</author>
<id>https://hdl.handle.net/1721.1/148713</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Highway construction cost model for sector planning in developing countries
Aw, Wee Beng.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1981; Bibliography: leaves 174-179.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automation and internal labor market structure : a study of the Caterpillar Tractor Company</title>
<link href="https://hdl.handle.net/1721.1/148712" rel="alternate"/>
<author>
<name>Stanovsky, Clinton Sebastian.</name>
</author>
<id>https://hdl.handle.net/1721.1/148712</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Automation and internal labor market structure : a study of the Caterpillar Tractor Company
Stanovsky, Clinton Sebastian.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1981; Includes bibliographical references.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Tail-Integrated Boundary-Layer Ingesting Propulsion System for Turbo-Electric Aircraft</title>
<link href="https://hdl.handle.net/1721.1/148614" rel="alternate"/>
<author>
<name>Chen, Zhibo</name>
</author>
<id>https://hdl.handle.net/1721.1/148614</id>
<updated>2023-03-18T03:32:25Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Tail-Integrated Boundary-Layer Ingesting Propulsion System for Turbo-Electric Aircraft
Chen, Zhibo
In this thesis, we present conceptual design guidelines and results for a tail-integrated propulsion system for turbo-electric aircraft with boundary layer ingestion (BLI). This includes (i) definition of tail BLI electric fans and (ii) integration of the BLI propulsors&#13;
on an aircraft tail, to meet the propulsive power requirements and performance goals, i.e. separation-free and shock-free operation with fuel burn reduction, compared with a baseline aircraft for the same mission. The assessment of BLI benefits incorporates&#13;
CFD and TASOPT analyses, with emphasis placed on utilizing these analyses not only to identify potential challenges for integration of the BLI propulsors, but also to characterize the underlying mechanisms and thus establish the physical rationale for resolving these challenges.&#13;
&#13;
The conceptual design resulting from the guidelines has nine BLI propulsors with electric fans on an axisymmetric tail, which is installed on a baseline single-aisle aircraft with twin underwing turbofans without BLI. For the tail-integrated BLI electric&#13;
fans, the guidelines include the required fan loss buckets, and non-axisymmetric stators, to mitigate the fan efficiency drop due to rotor inlet incidence distortion. The design of the tail-integrated propulsor illustrates the aerodynamics of the propulsor inlet, nacelle, and nozzle that enable separation-free and shock-free operation at the cruise condition. The benefit of the defined tail BLI and twin underwing turbofan aircraft configuration is 10.4% in Propulsion Fuel Energy Intensity (PFEI) at a cruise Mach number of 0.8 and an altitude of 35100 ft, compared to a baseline twin underwing turbofan configuration. The sensitivity study shows that a 1% increase in installed (i.e. with BLI) fan efficiency translates to a 0.8% rise in the PFEI benefit.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Assessment of Electrolytic Hydrogen Production under Dynamic Operations</title>
<link href="https://hdl.handle.net/1721.1/148613" rel="alternate"/>
<author>
<name>Chung, Doo Hyun Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/148613</id>
<updated>2023-03-18T03:35:11Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Techno-Economic Assessment of Electrolytic Hydrogen Production under Dynamic Operations
Chung, Doo Hyun Mark
Green hydrogen has an important role to play in the future decarbonized world as it has the potential to decarbonize many sectors of the economy. Proton Exchange Membrane (PEM) electrolysis is one of the production pathways for green hydrogen. This study uses a techno-economic optimization model that combines a high-level electrolyzer model and a cost model to evaluate the current (2021) and future (2040) economics of running a PEM electrolyzer under various operating conditions, including dynamic operations and differential-pressure operations. The results show that dynamic operation can reduce the levelized cost of hydrogen (LCOH) by 9% compared to steady operation at a nominal current density. Using direct electrochemical compression without the need for a mechanical compressor is an economically viable solution in the future, while a hybrid approach is preferred in the current scenario. Finally, the LCOH projections show that continued efforts to reduce the capital cost, improve the electrolyzer performance, and integrate more low-cost renewables into the electricity market are necessary for green hydrogen to reach cost-parity with hydrocarbon-based hydrogen.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image Compression using Sum-Product Networks</title>
<link href="https://hdl.handle.net/1721.1/148612" rel="alternate"/>
<author>
<name>Jayashankar, Tejas Kumar</name>
</author>
<id>https://hdl.handle.net/1721.1/148612</id>
<updated>2026-01-08T14:13:13Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Image Compression using Sum-Product Networks
Jayashankar, Tejas Kumar
An estimated 79 zettabytes (10²¹ bytes) of data was generated worldwide in 2021, with even more data expected to be produced in the future. The effective storage and communication of such large amounts of data is an important problem. Data compression lies at the heart of the solution to this issue.&#13;
&#13;
The two aspects of data compression — data modeling and coding — are typically jointly designed. As a result, it is difficult to evolve compression standards without a complete modification of the entire architecture. Recently, a model-code separation architecture for compression was proposed with a model-free encoder and model-adaptive decoder. The architecture uses a data independent encoder, and it employs a probabilistic graphical model (PGM) to model the source structure in the decoder. Decoding is performed by running belief propagation over the graphical models representing the modeling and coding aspects of compression.&#13;
&#13;
In practical settings where we deal with naturally occurring data, e.g., CIFAR-10 images, the PGM underlying the source data is unknown. Existing structure learning algorithms for PGMs are inefficient for learning from large datasets and place additional constraints on the graphical model structure that diminish a PGM’s representational power. Due to the difficulty of inference and learning in complex PGMs, the current model-code separation architecture is limited in its use for many real-world applications.&#13;
&#13;
In this thesis, we develop a new separation architecture based on recently proposed sum-product networks (SPNs), a class of tractable probabilistic generative models, to model the source distribution. Our architecture strikes a balance between efficient learning of source structure and fast lossless decoding. We show that SPNs admit efficient parameter learning via gradient descent to learn statistical structure in synthetic and naturally occurring images. Furthermore, through modifications to the SPN architecture, we describe a procedure to assimilate external beliefs about the source and compute the marginal probabilities of all the source nodes in a single forward and backward pass of the SPN architecture. By using an SPN source model in place of a PGM, we obtain a new model-code separation architecture for compression. &#13;
&#13;
Throughout this thesis, we focus on the efficient implementation of our compression architecture. We take advantage of modern deep learning frameworks and GPUs to implement our entire architecture using parallelized tensor operations. As a result, we are able to bridge the gap between traditional statistical inference algorithms and modern deep learning models by carefully developing the SPN source-code belief propagation algorithm for source decoding. The resulting algorithm can decode grayscale sources in under 0.04 seconds.&#13;
&#13;
This work applies the proposed architecture for the lossless compression of binary and grayscale images. We compare our architecture against some of the most commonly used compression systems of today and theoretical limits.&#13;
&#13;
We show that our architecture achieves a 1.7× gain in compression rate over the state-of-the-art JBIG2 compressor on the binarized MNIST dataset. Furthermore, our architecture does not incur a performance penalty on grayscale sources and is still able to achieve a 1.4× gain in compression rate on the grayscale CIFAR-10 and the Fashion MNIST datasets, as compared against some of the best universal compressors. Extensive analysis on synthetic binary sources shows that our architecture can achieve near theoretical limits of compression and match the performance of baseline separation architectures with known PGM structure.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harvest-Time Optimal Path Planning in Dynamic Flows</title>
<link href="https://hdl.handle.net/1721.1/148611" rel="alternate"/>
<author>
<name>Bhabra, Manmeet Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/148611</id>
<updated>2023-03-18T03:23:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Harvest-Time Optimal Path Planning in Dynamic Flows
Bhabra, Manmeet Singh
The past decade has seen an increasing use of autonomous vehicles (propelled AUVs, ocean gliders, solar vehicles, etc.) in marine applications. For the operation of these vehicles, efficient methods for path planning are critical. Path planning, in the most general sense, corresponds to a set of rules to be provided to an autonomous robot for navigating from one configuration to another in some optimal fashion. Increasingly, having autonomous vehicles that optimally collect/harvest external fields from highly dynamic environments has grown in relevance. Autonomously maximizing the harvest in minimum time is our present path planning objective. Such optimization has numerous impactful applications. For instance, in the case of energy optimal path planning where long endurance and low power are crucial, it is important to be able to optimally harvest energy (solar, wind, wave, thermal, etc.) along the way and/or leverage the environment (winds, currents, etc.) to reduce energy expenditure. Similarly, autonomous marine cleanup or collection vehicles, tasked with harvesting plastic waste, oil spills, or seaweed fields, need to be able to plan paths that maximize the amount of material harvested in order to optimize the cleanup or collection process.&#13;
&#13;
In this work, we develop an exact partial differential equation-based methodology that predicts harvest-time optimal paths for autonomous vehicles navigating in dynamic environments. The governing differential equations solve the multi-objective optimization problem of navigating a vehicle autonomously in a highly dynamic flow field to any destination with the goal of minimizing travel time while also maximizing the amount harvested by the vehicle. Using Hamilton-Jacobi theory for reachability, our methodology computes the exact set of Pareto optimal solutions to the multi-objective path planning problem. This is completed by numerically solving a reachability problem for the autonomous vehicle in an augmented state space consisting of the vehicle’s position in physical space as well as its harvest state. Our approach is applicable to path planning in various environments; however, we primarily present examples of navigating in dynamic ocean flows. The following cases, in particular, are studied. First, we validate our methodology using a benchmark case of planning paths through a region with a harvesting field present in a halfspace, as this case admits a semi-analytical solution that we compare to the results of our method. We next consider a more complex unsteady environment as we solve for harvest-time optimal missions in a quasi-geostrophic double-gyre ocean flow field. Following this, we provide harvest-time optimal paths to the highly relevant issue of collecting harmful algae blooms. Our final case considers an application to next generation offshore aquaculture technologies. In particular, we consider in this case path planning of an offshore moving fish farm that accounts for optimizing fish growth. Overall, we find that our exact planning equations and efficient schemes are promising to address several pressing challenges for our planet.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The dynamics of interruptions in engineering project task execution</title>
<link href="https://hdl.handle.net/1721.1/148442" rel="alternate"/>
<author>
<name>Lanni, Francesco,
            1969-</name>
</author>
<id>https://hdl.handle.net/1721.1/148442</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2005-01-01T00:00:00Z</published>
<summary type="text">The dynamics of interruptions in engineering project task execution
Lanni, Francesco,
            1969-
The performance of engineering project organizations can be characterized generally by the organization's ability to deliver projects "on time", "under budget" and "right the first time". In industries where demand is very unpredictable it is not uncommon for organizations to operate with very "lean" project engineering staffs. Members of those engineering staffs are in very high demand by many other groups within the organization, leading to frequent interruptions. These interruptions can have significant impacts on productivity. Moreover, the productivity impacts often lead to degrading project cost and schedule performance, increased workload and stress, more mistakes, and ultimately contribute to the compromise of business cash flow and profit performance. Because of the dynamic complexity of the project task execution process and its relationship to the larger business goals, it is difficult to understand the real impacts of interruptions and devise effective policies to prevent the impacts from affecting performance adversely. Policies which appear to make sense in the short term may have long term repercussions that are not intuitive. Ultimately, the engineering staff and resource management leadership becomes the target of significant criticism. This thesis provides an overview of the dynamic impacts of interruptions on engineering project task execution processes, and identifies system structures, policies, and behaviors that may contribute to the chronic inability of engineering organizations to deliver results "on time", "under budget" and "right the first time". A specific organization in the defense industry was selected for the contextual basis of this research. Interviews with stakeholders of the project execution process were conducted extensively. Major themes identified in the interview process were used, in conjunction with social network analysis techniques, to provide guidance for development of a formal system dynamics model.
Model simulation results are presented, with insights into the effects of interruptions on the larger business operations, as well as suggestions for further work.
Thesis: S.M., Massachusetts Institute of Technology, System Design and Management Program, 2005; Includes bibliographical references (p. 128-130).
</summary>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calcutta University library architectural study</title>
<link href="https://hdl.handle.net/1721.1/148436" rel="alternate"/>
<author>
<name>Rahman, Habibur.</name>
</author>
<id>https://hdl.handle.net/1721.1/148436</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">Calcutta University library architectural study
Rahman, Habibur.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1944; Includes bibliographical references (leaf [41]).
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frame-to-frame extrapolation of television fields</title>
<link href="https://hdl.handle.net/1721.1/148432" rel="alternate"/>
<author>
<name>Kelleher, James Joseph,
            1937-</name>
</author>
<id>https://hdl.handle.net/1721.1/148432</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Frame-to-frame extrapolation of television fields
Kelleher, James Joseph,
            1937-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaves 37-38).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of automotive engineering managers</title>
<link href="https://hdl.handle.net/1721.1/148431" rel="alternate"/>
<author>
<name>Kehrl, Howard H.
            (Howard Harmon)</name>
</author>
<id>https://hdl.handle.net/1721.1/148431</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Development of automotive engineering managers
Kehrl, Howard H.
            (Howard Harmon)
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1960; Includes bibliographical references (leaves [69]-71).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The organizational structure of engineering systems</title>
<link href="https://hdl.handle.net/1721.1/148429" rel="alternate"/>
<author>
<name>Katz, Jerahmiel.</name>
</author>
<id>https://hdl.handle.net/1721.1/148429</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The organizational structure of engineering systems
Katz, Jerahmiel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1960; Includes bibliographical references (leaves 71-72).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A comparative study of the STOL and VTOL type aircraft in the take-off regime</title>
<link href="https://hdl.handle.net/1721.1/148427" rel="alternate"/>
<author>
<name>Kaplan, Robert L.
            (Robert Lewis)</name>
</author>
<id>https://hdl.handle.net/1721.1/148427</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">A comparative study of the STOL and VTOL type aircraft in the take-off regime
Kaplan, Robert L.
            (Robert Lewis)
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1960; Includes bibliographical references (leaves 58-59).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of rolling practice and microstructure on the notched-bend fracture transition of steel</title>
<link href="https://hdl.handle.net/1721.1/148426" rel="alternate"/>
<author>
<name>Kapadia, Behram M.
            (Behram Maneck)</name>
</author>
<id>https://hdl.handle.net/1721.1/148426</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Effect of rolling practice and microstructure on the notched-bend fracture transition of steel
Kapadia, Behram M.
            (Behram Maneck)
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1960; Includes bibliographical references (leaves 27-28).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The evolution of the diesel fireman issue</title>
<link href="https://hdl.handle.net/1721.1/148423" rel="alternate"/>
<author>
<name>Davis, Jerry Ray.</name>
</author>
<id>https://hdl.handle.net/1721.1/148423</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">The evolution of the diesel fireman issue
Davis, Jerry Ray.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1976; Bibliography: leaves 94-96.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modern Web Scraping and Data Analysis Tools to Discover Historic Real Estate Development Opportunities</title>
<link href="https://hdl.handle.net/1721.1/148288" rel="alternate"/>
<author>
<name>Berry, Nile</name>
</author>
<id>https://hdl.handle.net/1721.1/148288</id>
<updated>2023-03-03T03:32:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Modern Web Scraping and Data Analysis Tools to Discover Historic Real Estate Development Opportunities
Berry, Nile
The National Parks Service (NPS) is responsible for a database that catalogs every nationally recognized historic building and historic district across the United States. If listed as historic, the property can qualify for both state and federal historic tax credits which can subsidize up to 45% of the Qualified Rehabilitation Expenses (QREs) of the project, depending on the specific state. These tax incentives can significantly increase the financial return profile of these redevelopment projects. However, finding these qualifying sites across the United States is challenging given the size of the NPS database. With over 97,000 rows of static data, the NPS database is unwieldy and difficult to maneuver. Moreover, there is no way to proactively use the tool to find acquisition opportunities.&#13;
&#13;
This thesis project aims to solve this problem by creating an acquisition analytics funnel that aggregates data from multiple online sources and layers it to create a dynamic way to source new historic redevelopment projects. The initial focus area of the thesis is the state of Maine, and the subject of the thesis is Historic Tax Credit View (HTC View), a digital data analytics platform conceived, built, and owned by the author. The platform combines the NPS database with automated web-scraping algorithms to parse publicly available census and market demand data that indicate whether certain markets are of higher investment value than others. Through the development of HTC View, the author and outside partners have raised funds to purchase a historically recognized former Milling Site in Skowhegan, Maine that was originally identified as an opportunity by the platform. The results of this research demonstrate the effectiveness of adopting web-scraping technologies and their overall usefulness to real estate development professionals.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Possible method for determining particle size in fog flow</title>
<link href="https://hdl.handle.net/1721.1/148190" rel="alternate"/>
<author>
<name>Liberace, Richard.</name>
</author>
<id>https://hdl.handle.net/1721.1/148190</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Possible method for determining particle size in fog flow
Liberace, Richard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1964; Includes bibliographical references (leaf 81).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ship design for optimum economic return by computer techniques</title>
<link href="https://hdl.handle.net/1721.1/148184" rel="alternate"/>
<author>
<name>Li, Cheng-Huai.</name>
</author>
<id>https://hdl.handle.net/1721.1/148184</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Ship design for optimum economic return by computer techniques
Li, Cheng-Huai.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1964; Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 45-47).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The heats of mixing of binary liquid solutions</title>
<link href="https://hdl.handle.net/1721.1/148181" rel="alternate"/>
<author>
<name>Katz, Gerald.</name>
</author>
<id>https://hdl.handle.net/1721.1/148181</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The heats of mixing of binary liquid solutions
Katz, Gerald.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Includes bibliographical references (leaves 72-74).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determination of design parameters for a novel electroacoustical transducer</title>
<link href="https://hdl.handle.net/1721.1/148179" rel="alternate"/>
<author>
<name>Ball, Norman Addison,
            1939-</name>
</author>
<id>https://hdl.handle.net/1721.1/148179</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">Determination of design parameters for a novel electroacoustical transducer
Ball, Norman Addison,
            1939-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1961; Includes bibliographical references (leaves 41-42).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The automorphism tower of a group with trivial center</title>
<link href="https://hdl.handle.net/1721.1/148176" rel="alternate"/>
<author>
<name>Kandall, Geoffrey Allan.</name>
</author>
<id>https://hdl.handle.net/1721.1/148176</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The automorphism tower of a group with trivial center
Kandall, Geoffrey Allan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1960; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Separation of a coarse solid inert heat carrier from a fluidized bed</title>
<link href="https://hdl.handle.net/1721.1/148175" rel="alternate"/>
<author>
<name>Kaminsky, George J.
            (George John)</name>
</author>
<id>https://hdl.handle.net/1721.1/148175</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Separation of a coarse solid inert heat carrier from a fluidized bed
Kaminsky, George J.
            (George John)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Includes bibliographical references (leaf 57).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear process heat and the paper industry- with special reference to New England</title>
<link href="https://hdl.handle.net/1721.1/148173" rel="alternate"/>
<author>
<name>Joshi, Arun.</name>
</author>
<id>https://hdl.handle.net/1721.1/148173</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Nuclear process heat and the paper industry- with special reference to New England
Joshi, Arun.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1960; Includes bibliographical references (leaves [157]-160).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Zeeman effect in promethium-147</title>
<link href="https://hdl.handle.net/1721.1/148172" rel="alternate"/>
<author>
<name>Johnson, Larry Claud,
            1956-</name>
</author>
<id>https://hdl.handle.net/1721.1/148172</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Zeeman effect in promethium-147
Johnson, Larry Claud,
            1956-
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1960; Includes bibliographical references (leaves 35-36).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Internal cooling of a high speed supercharged diesel engine by manifold water injection.</title>
<link href="https://hdl.handle.net/1721.1/148169" rel="alternate"/>
<author>
<name>Hamilton, Frederick Morris.</name>
</author>
<id>https://hdl.handle.net/1721.1/148169</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Internal cooling of a high speed supercharged diesel engine by manifold water injection.
Hamilton, Frederick Morris.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1971; Bibliography: leaf 80.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The treatment of copper concentrates by wet method.</title>
<link href="https://hdl.handle.net/1721.1/148168" rel="alternate"/>
<author>
<name>Lee, Kam-Fong.</name>
</author>
<id>https://hdl.handle.net/1721.1/148168</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">The treatment of copper concentrates by wet method.
Lee, Kam-Fong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1945; Bibliography: leaves 48-50.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An economic and engineering assessment of plasma-sprayed ceramic coatings</title>
<link href="https://hdl.handle.net/1721.1/148162" rel="alternate"/>
<author>
<name>Botton, Olivier de.</name>
</author>
<id>https://hdl.handle.net/1721.1/148162</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">An economic and engineering assessment of plasma-sprayed ceramic coatings
Botton, Olivier de.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1988; Bibliography: leaves 86-87.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shear resistance of discontinuities in rock.</title>
<link href="https://hdl.handle.net/1721.1/148161" rel="alternate"/>
<author>
<name>Nelson, Jeffrey William.</name>
</author>
<id>https://hdl.handle.net/1721.1/148161</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Shear resistance of discontinuities in rock.
Nelson, Jeffrey William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1977; Bibliography: leaves 132-134.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithms for locomotive scheduling</title>
<link href="https://hdl.handle.net/1721.1/148159" rel="alternate"/>
<author>
<name>Chandra, Anurag, 1977-</name>
</author>
<id>https://hdl.handle.net/1721.1/148159</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">Algorithms for locomotive scheduling
Chandra, Anurag, 1977-
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001; Includes bibliographical references (leaves 79-80).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some aspects of the foreign operations of American construction companies</title>
<link href="https://hdl.handle.net/1721.1/147966" rel="alternate"/>
<author>
<name>Joubert, Benoit G.</name>
</author>
<id>https://hdl.handle.net/1721.1/147966</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Some aspects of the foreign operations of American construction companies
Joubert, Benoit G.
Thesis: M.S., Massachusetts Institute of Technology, Department of Building Engineering and Construction, 1960; Includes bibliographical references (leaves 147-148).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High velocity impacts</title>
<link href="https://hdl.handle.net/1721.1/147965" rel="alternate"/>
<author>
<name>Jones, Arfon H.</name>
</author>
<id>https://hdl.handle.net/1721.1/147965</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">High velocity impacts
Jones, Arfon H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1960; Includes bibliographical references (leaves 72-79).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electronic curve tracing</title>
<link href="https://hdl.handle.net/1721.1/147964" rel="alternate"/>
<author>
<name>Jones, John Jay.</name>
</author>
<id>https://hdl.handle.net/1721.1/147964</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Electronic curve tracing
Jones, John Jay.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaf 80).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The scatter of fast neutral particles in a gas target</title>
<link href="https://hdl.handle.net/1721.1/147962" rel="alternate"/>
<author>
<name>Johnson, Byron Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/147962</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The scatter of fast neutral particles in a gas target
Johnson, Byron Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaves 77-78).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A retaliatory force system study</title>
<link href="https://hdl.handle.net/1721.1/147958" rel="alternate"/>
<author>
<name>Jacobson, Arnold Edwin.</name>
</author>
<id>https://hdl.handle.net/1721.1/147958</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">A retaliatory force system study
Jacobson, Arnold Edwin.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1960; Includes bibliographical references (leaf [68]).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An interactive mathematical optimization approach to locomotive dispatching on the Grand Trunk Western Railroad</title>
<link href="https://hdl.handle.net/1721.1/147954" rel="alternate"/>
<author>
<name>Sheaffer, Warren W.</name>
</author>
<id>https://hdl.handle.net/1721.1/147954</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">An interactive mathematical optimization approach to locomotive dispatching on the Grand Trunk Western Railroad
Sheaffer, Warren W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1986; Bibliography: leaves 127-130.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some simulation tests of an intrametropolitan location model.</title>
<link href="https://hdl.handle.net/1721.1/147952" rel="alternate"/>
<author>
<name>Hammons, Glenn Terrill.</name>
</author>
<id>https://hdl.handle.net/1721.1/147952</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Some simulation tests of an intrametropolitan location model.
Hammons, Glenn Terrill.
Thesis: M.S., Massachusetts Institute of Technology, Department of Economics, 1971; Bibliography: leaf 82.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quality control circles</title>
<link href="https://hdl.handle.net/1721.1/147949" rel="alternate"/>
<author>
<name>Wong, James.</name>
</author>
<id>https://hdl.handle.net/1721.1/147949</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Quality control circles
Wong, James.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1981; Bibliography: leaves 129-132.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Duality of Ground: Re-envisioning Space of Death in New York City</title>
<link href="https://hdl.handle.net/1721.1/147914" rel="alternate"/>
<author>
<name>Lo, Kuang-Chun</name>
</author>
<author>
<name>Prachasartta, Jariyaporn</name>
</author>
<id>https://hdl.handle.net/1721.1/147914</id>
<updated>2023-02-07T03:41:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Duality of Ground: Re-envisioning Space of Death in New York City
Lo, Kuang-Chun; Prachasartta, Jariyaporn
The territory of death spans a wide range of space and time scales. New York City has, for centuries, been developed around and above the remains of the dead. Cemeteries, therefore, serve as a constant reminder of the mutability of urban ground and its profound notion of impermanence and perpetuity. Cemeteries, nonetheless, reveal as much about the living as they do about the dead. Today, their relevance has been challenged by the spatial constraints and the ever-changing perception of cultural and religious discourses and practices surrounding the place and space of death.&#13;
&#13;
New Yorkers, religious and not, have started to favor cremation over burial for various reasons. This signifies that the spatial and socio-cultural relationship toward cemetery environs will evolve as societal norms diversify. Existing cemeteries, anachronistic and stagnant, mark a separation between the activities of the living and the places of the dead, resulting in their transition into monofunctional ground.&#13;
&#13;
Individual death, although personal and intimate, makes up the discursive territory that enfolds identities, stories, and connections to the public whole. Therefore, the problems call for the need for communities to respond collectively. By taking the Cemetery Belt, a conglomeration of cemeteries at the border of Brooklyn and Queens, as a site of investigation, the thesis aims to re-envision the space of death and its role. The thesis proposes a design that mediates the balance and dynamism between conventional burial sites and new forms of engagement to create an urban experience that serves the journey of the dead and the living.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Specious Materials</title>
<link href="https://hdl.handle.net/1721.1/147911" rel="alternate"/>
<author>
<name>Wu, Jie</name>
</author>
<author>
<name>Xu, Zhifei</name>
</author>
<id>https://hdl.handle.net/1721.1/147911</id>
<updated>2023-02-07T03:09:50Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Specious Materials
Wu, Jie; Xu, Zhifei
Our world is saturated by digital media that is manipulated, spread as fact, and sorted algorithmically, creating many different “facts” for each niche community and increasingly blending the “faked” into our physical reality. As a result, the modernist sense of truth is on the verge of collapse.&#13;
&#13;
As solid and real as architecture might always have been conceived to be, the field of architecture is not immune to the question of reality. The faking of one architectural material with the image and texture of another has long existed in our field for various purposes, but it has been uncritically thought about and indifferently perceived. Today, digital media adds another dimension on top of the simple binary of real and fake, making the authenticity of the building increasingly confusing.&#13;
&#13;
This thesis proposes to see fake materials not as an ethical problem (the betrayal of the classical modernist paradigm of “truth to materials”) or as ready-made industrial products made for their economic or performance benefits, but instead as contemporary mediums that blend digital media into physical reality, and as new design areas for architects to intervene in with agency. What we would like to explore then becomes: what would we make of a material that embodied multiple different materialities? Can fake material stand its own ground against its “real” counterparts? Freed from the real-versus-fake dichotomy, can these specious materials bring forth a new aesthetic and convey critical meaning?
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Manufacturing of the Filament Collection and Diameter Measurement Systems of Fiber Extrusion Device for Educational Purposes</title>
<link href="https://hdl.handle.net/1721.1/147910" rel="alternate"/>
<author>
<name>Li, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/147910</id>
<updated>2023-02-07T03:27:25Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design and Manufacturing of the Filament Collection and Diameter Measurement Systems of Fiber Extrusion Device for Educational Purposes
Li, Rui
The Fiber Extrusion Device (FrED) is a desktop fiber extrusion system that mimics the continuous fiber draw process for hands-on learning and/or laboratory experience in data acquisition, control systems, and smart manufacturing; it allows users to perform experiments, vary manufacturing parameters, change the control system, and collect data. There are different variants of FrED at different price points (low cost to high cost). The existing FrED has issues such as poor portability, high cost, and poor manufacturability, so there is a need to make it cost efficient, portable, and thus accessible to remote learners. In this project, the TecMIT team applied the agile methodology, the V-model methodology, and design for manufacturing and assembly (DFMA) to design a low-cost version of FrED. FrED was divided into six sub-systems, and this thesis details the design and manufacturing of the filament collection and diameter measurement systems. Through five design iterations, a production model with a unit cost of $268.71 was developed and prototyped. A pilot run of 10 FrEDs, four rounds of user testing, and numerous functionality tests were conducted to finalize the production model. In the end, a total of 25 FrEDs (10 test runs and 15 production runs) were produced, and a user's manual and process plans were constructed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Prototype to Production: Scaling an On-Farm Milk Analyzing Device to Low Volume Production Using Design for Manufacturability and Assembly</title>
<link href="https://hdl.handle.net/1721.1/147909" rel="alternate"/>
<author>
<name>Thomson, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/147909</id>
<updated>2023-02-07T03:18:37Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">From Prototype to Production: Scaling an On-Farm Milk Analyzing Device to Low Volume Production Using Design for Manufacturability and Assembly
Thomson, Benjamin
The dairy industry is a complex and storied one that must innovate to survive. Dairy farmers have traditionally been unable to tap into the vast wealth of data contained within the milk of each cow. Labby, a startup company, seeks to change that paradigm with its Internet-of-Things Milk Analyzer. Labby has developed a proof-of-concept Milk Analyzer that optically analyzes the milk quality and milk composition of each cow, providing dairy farmers with rich insights into the health and performance of their herd. A prototype has been developed to validate the sensing performance, and Labby is now poised to create a commercially-ready industrial device for production. In this thesis, a Milk Analyzer product is designed and developed in collaboration with Labby using an iterative product design process, with a focus on design for manufacturability and assembly (DFMA), so that low-volume production of the device can commence. The product development and DFMA process is detailed, and the manufacturing and assembly methods of low-volume electronic device production are explored. Additionally, an add-on product to enhance the compatibility of the Milk Analyzer is developed and prototyped.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Incomplete Domestic Landscape</title>
<link href="https://hdl.handle.net/1721.1/147908" rel="alternate"/>
<author>
<name>Boes, Taylor</name>
</author>
<author>
<name>Ma, Florence</name>
</author>
<id>https://hdl.handle.net/1721.1/147908</id>
<updated>2023-02-07T03:01:27Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">The Incomplete Domestic Landscape
Boes, Taylor; Ma, Florence
Our environments are failing us. Modernism’s relentless pursuit of efficiency, control, and purity has birthed a sterilization that has weakened our bodies bit by bit, little by little, to the point that they have almost been lost.&#13;
 &#13;
We seek to unpack our relentless reality: its cold corners and its hard edges, its celebration of virtual futures and its endlessly scrolling present, its crumbling concrete and rusting re-bar, its fluorescent lighting and dying house plants, its unflinching march towards a singular Progress.&#13;
 &#13;
We question the modernist notions of control and efficiency as tools for better living.&#13;
 &#13;
We are Duo.&#13;
&#13;
We seek to squat, to occupy, to co-opt. We are subversive. We operate on and within.&#13;
 &#13;
We are soft surfaces and unexpected jungles.&#13;
 &#13;
We are talking about your house plants and your pillows, the faux fur you walk by in the fabric store and caress but never purchase, the fake Ficus in your mom’s entry way that she’s been watering for years, that carefully placed curtain that undulates between your back and your bed as your zoom call drones on, and the things that call for you to hold them and be held by them.&#13;
 &#13;
But we are not tame nor subtle. We may love soft things, but this is not a soft touch. It is an explosion. We design, cultivate, and share supplements: shifts in viewing, entanglements, soft creations and strange installations. We work through examinations and operations within archives. Open to all, the resulting landscapes do not require heavy machinery and perhaps you should be impaired when you try them.&#13;
&#13;
We want to feel. We want you to feel.&#13;
 &#13;
While you have been otherwise occupied, so have we. Here is that occupation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Medship: Affective Computing for Building Empathetic Behaviors Toward Patients with Substance Use Disorders</title>
<link href="https://hdl.handle.net/1721.1/147904" rel="alternate"/>
<author>
<name>Harris, Caleb M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147904</id>
<updated>2023-02-07T03:25:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Medship: Affective Computing for Building Empathetic Behaviors Toward Patients with Substance Use Disorders
Harris, Caleb M.
Opioid use is on the rise, with overdose deaths quadrupling since 1999. Consequently, so is substance use disorder (SUD), an illness caused by repeated use of substances such as alcohol or drugs that result in clinically significant impairment. Although physician interactions with patients with SUDs are dramatically increasing in frequency, the majority of medical training still fails to address the importance of building empathy and minimizing stigma in such clinical interactions. Furthermore, physicians receive only minimal instruction regarding the expression of empathy and its role in building rapport and eliciting positive responses from patients with SUDs. Such strategies not only improve the immediate clinical interaction by contributing to a warm, stigma-free environment, but also improve the long-term outcomes of the patient by driving them toward care instead of away from it. This thesis both identifies the affective features and expressions most attributed to positive clinical perception, from the perspective of actual patients with SUDs, and introduces Medship, a web-application-based tool embedded with affective computing models to provide real-time affective training for medical school students and physicians in simulated clinical interactions with patients with SUDs.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallel and Distributed Just-in-Time Shell Script Compilation</title>
<link href="https://hdl.handle.net/1721.1/147902" rel="alternate"/>
<author>
<name>Mustafa, Tammam</name>
</author>
<id>https://hdl.handle.net/1721.1/147902</id>
<updated>2023-02-07T03:03:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Parallel and Distributed Just-in-Time Shell Script Compilation
Mustafa, Tammam
In the past several years, the shell has received renewed interest from the research community. This thesis describes the work I did to advance the performance and capabilities of the current state-of-the-art shell-script parallelization systems. In the first half of this thesis, I focus on my contributions to PaSh-JIT, a JIT compiler for parallelizing POSIX shell scripts. In the second half, I explore the design and implementation of Distributed-PaSh, a shell that can utilize distributed computing resources and easily interface with distributed storage systems to efficiently execute data-processing pipelines. Distributed-PaSh analyzes the dataflow graph of a given script to create highly parallel data pipelines and execute those pipelines in a distributed cluster while giving special attention to data locality and movement. Distributed-PaSh achieves higher performance than single machine sequential and parallel shells.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speculative Friction: Seven Stories from the Geneva Freeport</title>
<link href="https://hdl.handle.net/1721.1/147901" rel="alternate"/>
<author>
<name>Song, Alice Jia Li</name>
</author>
<author>
<name>Yacoby, Yaara</name>
</author>
<id>https://hdl.handle.net/1721.1/147901</id>
<updated>2023-02-07T03:50:01Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Speculative Friction: Seven Stories from the Geneva Freeport
Song, Alice Jia Li; Yacoby, Yaara
Speculative Friction uses storytelling to explore the line between fact and fiction, implicating the construction of reality in the construction of speculative futures. This project is interested in the Geneva Freeport (Switzerland) as its central character. This Special Economic Zone legally operates outside of global trade taxation laws as a free-market tool to expedite the import and export of commercial goods. While there are hundreds of modern freeports around the world, the Geneva Freeport is unique in allowing “passing” objects to be stored indefinitely in its storage spaces. As a result of this state of stasis, as well as Swiss confidentiality laws, the Freeport has been the storage facility for anything considered to be of value. Grains, gold bars, art objects, and illegally extracted antiquities are all stored in the Freeport without public access and without taxation, even as ownership is exchanged. It is estimated that there are as many as 1.2 million objects in storage.&#13;
&#13;
This thesis opens the conversation to the banal and absurd capitalist reality at the Geneva Freeport and looks at this uncanny world from within. What are the objects and their entanglements with the world outside? What happens when the objects begin to push back on their container?
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bernini Started It</title>
<link href="https://hdl.handle.net/1721.1/147900" rel="alternate"/>
<author>
<name>Clement, Ryan</name>
</author>
<author>
<name>Matthai, Charlotte</name>
</author>
<id>https://hdl.handle.net/1721.1/147900</id>
<updated>2023-02-07T03:24:54Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Bernini Started It
Clement, Ryan; Matthai, Charlotte
Bernini’s altar at St. Peter’s Basilica is a sacred focal point and manifestation of divine power and glory - the Church at the height of its power and an authoritative flaunting of papal infallibility.&#13;
&#13;
The altar rests upon the tomb of St. Peter from which the Pope traces his legitimacy. At this place of sacrifice, wine is transubstantiated to blood and bread to flesh. Yet beneath the sacrificial locus - at the foot of the Pope - are lurking flesh and fluids not-so-divine. Sculpted into the Barberini crest on the piers of the Baldacchino are the face and genitals of a woman in labor. Viewed in a circumambulatory procession, the rhythms of contractions are depicted through her facial contortions and vaginal transmutation. Her genitals are thinly disguised as the face of a satyr that emerges in the final scene, no longer a virgin and hungry for more. This is not the vagina of Mary; this was not an immaculate conception.&#13;
&#13;
Bernini has inserted an other, a queer, an abject form of the revered sacrifices upon the altar. They’re all fluids; they’re all flesh. Yet the ones above hold a godly and masculine power, whereas the feminine fluids below are execrated and feared within the western Catholic heteronormative tradition.&#13;
&#13;
The altar provokes fantasies of queer desire and expression, yet it is stunted and entangled in a tradition of Western sexual monolingualism. What would it look like to unleash the altar from these constraints, to reorient “straightness” by centering the queer? How can we tap into this papal-sacrificial-dogmatic-lineage of legitimacy? How can we seize ecclesiastical power and appropriate TRANSubstantiation to affirm earthly human bodies, to legitimate bodies beyond the binary, and to acknowledge the sacrifice of bodies beyond human?
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of freight train operation on the Boston and Maine Railroad with special reference to Tonnage Rating and utilization of motive power</title>
<link href="https://hdl.handle.net/1721.1/147739" rel="alternate"/>
<author>
<name>Ramsey, John P. (John Patterson)</name>
</author>
<id>https://hdl.handle.net/1721.1/147739</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1931-01-01T00:00:00Z</published>
<summary type="text">A study of freight train operation on the Boston and Maine Railroad with special reference to Tonnage Rating and utilization of motive power
Ramsey, John P. (John Patterson)
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1931
</summary>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain walls in the two dimensional Ising strip</title>
<link href="https://hdl.handle.net/1721.1/147738" rel="alternate"/>
<author>
<name>Marsh, Adam Jonathan.</name>
</author>
<id>https://hdl.handle.net/1721.1/147738</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Domain walls in the two dimensional Ising strip
Marsh, Adam Jonathan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1992; Includes bibliographical references (leaves 53-54).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanics of rupture in blended yarns</title>
<link href="https://hdl.handle.net/1721.1/147735" rel="alternate"/>
<author>
<name>Machida, Kazuo.</name>
</author>
<id>https://hdl.handle.net/1721.1/147735</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Mechanics of rupture in blended yarns
Machida, Kazuo.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1963; Includes bibliographical references (leaves 70-71).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of an information entity-- the InfoCo-- for the electric power industry under restructuring</title>
<link href="https://hdl.handle.net/1721.1/147734" rel="alternate"/>
<author>
<name>Tsuchida, Toshiki Bruce, 1967-</name>
</author>
<id>https://hdl.handle.net/1721.1/147734</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">Design of an information entity-- the InfoCo-- for the electric power industry under restructuring
Tsuchida, Toshiki Bruce, 1967-
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2001; Includes bibliographical references (p. 108-111).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of the Fama-French Model to Singapore REITs</title>
<link href="https://hdl.handle.net/1721.1/147732" rel="alternate"/>
<author>
<name>He, Fan</name>
</author>
<author>
<name>Neo, Kok Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/147732</id>
<updated>2023-01-27T03:28:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Application of the Fama-French Model to Singapore REITs
He, Fan; Neo, Kok Tong
The paper applies the Fama French 3-Factor Model to the Singapore REIT market to determine if the model has strong explanatory power over Singapore REITs’ excess returns over an 11-year period from 2009 to 2019. Several previous studies have illustrated that the Fama French Model has superior predictive power compared to the traditional Capital Asset Pricing Model (CAPM) in many stock markets worldwide. Over the past two decades, the Singapore REIT market has grown significantly, with over 34 S-REITs and a total market capitalization of around S$107 billion, and is increasingly becoming one of Asia’s global REIT hubs.&#13;
&#13;
We have utilised existing Fama French factors for APAC region stock markets (except Japan) to conduct a series of multivariate regressions. Specifically, the study has implemented longitudinal and cross-sectional regressions over four stages for a period of 11 years, testing the efficacy of the Fama French 3-Factor Model for the statistical significance of Market Risk Premium, Size and Value Premiums for both single REITs and REIT portfolios in Singapore from 2009 to 2019.&#13;
&#13;
Our results indicate that the Fama French factors exhibit strong statistical significance in capturing and accounting for Singapore REITs’ excess returns. The market risk premium, size, and book-to-market value factors are shown to have significant explanatory power over excess returns. Our main finding, contrary to existing applications of the Fama French 3-Factor Model, is the presence of a negative SMB coefficient, which constitutes a reversal of the “size effect”. Further, we also find that REITs’ own-volatility (including unique firm-specific risk and systematic market risk) is a statistically significant factor, and that Carhart’s momentum factor is absent in the Singapore stock market. We propose several plausible explanations, together with an in-depth analysis and review of the existing literature, for the reverse size premium effect, the own-volatility factor, and the absence of a momentum factor in the Singapore stock market.&#13;
&#13;
In summary, the Fama French 3-Factor Model is successful, with strong explanatory power in accounting for excess returns across the entire time period and individual sub-periods for Singapore REITs over an 11-year period from 2009 to 2019.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System-Theoretic Approach to Risk Analysis</title>
<link href="https://hdl.handle.net/1721.1/147729" rel="alternate"/>
<author>
<name>Gregorian, Dro J.</name>
</author>
<author>
<name>Yoo, Sam M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147729</id>
<updated>2023-01-27T03:15:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A System-Theoretic Approach to Risk Analysis
Gregorian, Dro J.; Yoo, Sam M.
Traditional safety risk assessment methods focus on component failures instead of the hazards present before a failure occurs. A widespread assessment tool is the risk matrix, which measures the probability and severity of a particular risk, qualitatively assessing the problem and determining its impact categorically. The problem with this methodology is that any underlying system components or hazards that cannot be quantified are overlooked and may not appear until an accident or performance issue occurs. As a result, most analysis and reporting is conducted after an undesirable event happens, and the lessons learned are used to prevent future losses.&#13;
&#13;
However, a newer analysis method can identify the hazards and possible scenarios that lead to those losses before they occur. The technique is called System-Theoretic Process Analysis (STPA). STPA utilizes a qualitative approach to analyze the emergent properties of a system by finding unsafe control actions and determining their resultant loss scenarios.&#13;
&#13;
This thesis examines the DoD risk matrix's current use and then leverages STPA to improve the outputs. The authors’ research is also widely applicable outside of the DoD. The thesis provides two approaches to apply STPA in risk assessment, but both use a measure of mitigation effectiveness as a proxy for probability. A new STPA-Informed Risk Matrix (SRM) is introduced as an alternative for the MIL-STD-882E risk matrix. By combining the strengths of STPA and traditional risk assessment methods, decision-makers will be more equipped to determine risk levels associated with their projects, specifically concerning system safety. New DoD developmental programs are incredibly complex systems that require risk mitigation at each phase, from design to operation. STPA is applicable and scalable in any phase and yields actionable results that will prevent losses from occurring.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Mobility in Sierra Leone During COVID-19 Using Call Detail Records</title>
<link href="https://hdl.handle.net/1721.1/147728" rel="alternate"/>
<author>
<name>Li, Yanchao</name>
</author>
<author>
<name>Ran, Ziyu</name>
</author>
<id>https://hdl.handle.net/1721.1/147728</id>
<updated>2023-01-27T03:15:25Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Understanding Mobility in Sierra Leone During COVID-19 Using Call Detail Records
Li, Yanchao; Ran, Ziyu
Call Detail Records (CDR) can provide an essential resource for understanding mobility patterns in data-poor environments, though they are most often applied to transportation problems. This thesis seeks to use CDR to understand the impact of government policies during fast-moving public health threats. To test the usefulness of CDR data, we apply it to two research questions: 1) How did human mobility patterns change after travel restriction policies during COVID-19, and how was that change related to socioeconomic status? 2) How did travel policies and socioeconomic status affect mobile users' accessibility to services during COVID-19? A big data analysis pipeline is developed to answer these two questions. For the analysis of changes in mobility patterns, a series of mobility metrics are generated, including radius of gyration, trip purpose, regularity of movement, and motif types. The mobile users are then clustered into four typologies based on these metrics to determine how different groups of people changed their travel behavior during COVID-19. To measure the impacts of travel policies and socioeconomic status on mobile users' accessibility to services (i.e., food and healthcare), accessibility metrics are derived, including travel distance, the rate of discretionary trips, the entropy of trip duration, and the cumulative number of food/healthcare services. Users' accessibility behaviors are then classified into four distinct types to inform more targeted and effective policies. From our analysis, we find that 1) travel activity decreased sharply during the lockdown period and partially rebounded during the travel restriction period; 2) users living in more impoverished areas generally traveled longer distances before COVID-19 but reduced their travel sharply during the lockdown; and 3) users of higher socioeconomic status were less likely to be affected by travel restrictions when obtaining food/healthcare resources, while users of lower socioeconomic status were more easily affected by travel policies. This thesis interprets Sierra Leone's accessibility and mobility from multiple perspectives, providing analysis to support the local government in coping with the pandemic. The big data analysis pipeline created in this thesis can also be applied to future research in other data-poor countries. The research can be integrated with other fields such as epidemiology, sociology, and economics to provide more information to inform policy decision-making.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conectividad Alegal: Remaking and Resilience in the Bay of Havana</title>
<link href="https://hdl.handle.net/1721.1/147727" rel="alternate"/>
<author>
<name>Igarzábal, Lucas F.</name>
</author>
<author>
<name>Waddle, Marisa Concetta</name>
</author>
<id>https://hdl.handle.net/1721.1/147727</id>
<updated>2023-01-27T03:17:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Conectividad Alegal: Remaking and Resilience in the Bay of Havana
Igarzábal, Lucas F.; Waddle, Marisa Concetta
This thesis investigates how rapidly changes in ownership, production, and policy have reshaped the Bay of Havana. In 2009, the Cuban government designated a single free port on the island, just 40 kilometers west of the Bay. The Mariel free port marked an era of economic restructuring, a common occurrence in the past century. These policy changes aim to ease the day-to-day lives of Cuban citizens but also leave them vulnerable to foreign industries that seek to mine the area for its unregulated resources, cheap labor, and proximity to US trade flows. The Bay, as a site of this intense geopolitical speculation and aging infrastructure, is emblematic of Cuba as a whole.&#13;
&#13;
The Bay, bracketed by an inoperable oil refinery and a degrading thermoelectric plant, is currently characterized by abandoned industry. While these forgotten sites restrict pedestrian access and foster pollution, they provide a critical connection to the shoreline, and therefore to the world at large. The project speculates on a future that returns this site to its citizenry. It argues for the Cuban philosophy of resolver, leveraging the resilient culture of Havana’s citizens against foreign opportunism. It explores the transformation of the site over the next five decades as it adapts to the ever-changing economic, social, and political landscape of the country. The project salvages key components of the site rather than depleting it of its resources. It develops new industries along the entire shore, adapted from abandoned factories, which circumvent material scarcity and access restrictions. The thesis operates between Havana’s historic ebb and flow of scarcity and surplus, defining a new vernacular of grassroots urbanism.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Efficient “Virtual Sales Organization”</title>
<link href="https://hdl.handle.net/1721.1/147726" rel="alternate"/>
<author>
<name>Khabibulin, Roman</name>
</author>
<author>
<name>Mahmood, Hamad</name>
</author>
<id>https://hdl.handle.net/1721.1/147726</id>
<updated>2023-01-27T03:37:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Building Efficient “Virtual Sales Organization”
Khabibulin, Roman; Mahmood, Hamad
Working virtually is the new normal. Many employers think that workplace culture in their organizations is ready for a change; however, employees may be apprehensive and think differently. Closing this perception gap will yield substantial benefits for companies and their employees.&#13;
&#13;
Our research will focus on the information technology industry; our areas of interest will be SaaS (Software as a Service) and DaaS (Data as a Service). Within these two segments, we will look at sales organizations and their operational behavior and build a framework/roadmap showing a successful transition into a virtual world.&#13;
&#13;
We will try to answer the burning questions that every sales leader faces at the end of each quarter, such as “How was your quarter?” or “How can we help to improve?”, by laying out a framework that can guide C-suite leadership and the sales organization to understand each other’s points of view and to build future strategies based upon those findings.&#13;
&#13;
We will also define success (financially and personally) for both the business and employees. Several interviews will be conducted with industry leaders, and the findings will be correlated with the virtual sales organization frameworks that we build.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeing Labor: Fabrication Porn from Behind the Scenes</title>
<link href="https://hdl.handle.net/1721.1/147724" rel="alternate"/>
<author>
<name>Griffin, Daniel</name>
</author>
<author>
<name>Ow, Inez</name>
</author>
<id>https://hdl.handle.net/1721.1/147724</id>
<updated>2023-01-27T03:47:10Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Seeing Labor: Fabrication Porn from Behind the Scenes
Griffin, Daniel; Ow, Inez
The notion of high-tech oftentimes inadvertently expresses the conviction that industrial progress means the abolition of manual labor from our industry. Within the last 20 years, this veil has been maintained by one prominent body of work in particular: the experimental research pavilion. These 1:1 works allege to be beacons of progress, signaling novel directions for future constructions. Though they are marketed as high-tech, video documentation available on YouTube and Vimeo reveals that they are assembled manually on site, often by large teams of people who receive minimal recognition for the repetitive actions of their bodies. We classify many of these works as “fabrication porn,” semantically hinging upon the similarly exploitative dynamics that exist in the worlds of porn production and theatre stage sets.&#13;
&#13;
Backstage labor has become overwhelmingly disassociated from the very conventions by which we have been trained to present and consume architecture. Plans, sections, elevations, axonometrics, renders and detail drawings — they represent the artifact to be built, but not the actions or subjective experience required to build it. When we recognize labor as a potential source of value, instead of reducing it to semi-automatic manual execution, it becomes clear that labor constitutes a large part of building authorship that the prevailing culture of sensationalism fails to properly credit. Worse still, when operating under this system, which thrives on cropping inconvenient realities of labor out of view, we become complicit in propagating abuse, neglect, and injury.&#13;
&#13;
Our thesis recognizes that a more sympathetic relationship between architectural output and labor cannot be realized under the classical model of authorship in architecture, and contends that it is within the agency of the architect to embed labor accounting within the design of the physical artifact itself. We enter our thesis as test subjects in a series of self-administered labor accounting studies, as a way of attuning ourselves to the fact that every architectural design move necessarily implicates bodies. We then propose an architectural take on labor accounting, where physical artifacts serve as ledgers. Our vision: every trace of labor becomes non-fungible — the experience of architectural space and the experience of implicated laborers are rendered inseparable. Seeing Labor makes the case that every architect should retrain themselves to use their role in the organizing of bodies for advocacy rather than exploitation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The spirit in the science: wild rice conservation through tribal-university partnerships in Minnesota</title>
<link href="https://hdl.handle.net/1721.1/147601" rel="alternate"/>
<author>
<name>van Deelen, Grace C.</name>
</author>
<id>https://hdl.handle.net/1721.1/147601</id>
<updated>2023-01-21T03:28:09Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The spirit in the science: wild rice conservation through tribal-university partnerships in Minnesota
van Deelen, Grace C.
In 2018, an unlikely partnership formed at the University of Minnesota, called the “Kawe Gidaa-naanaagadawendaamin Manoomin” (“First we must consider Manoomin / Psiη”) collaboration. The research collaboration, whose express purpose is to protect wild rice, includes both tribal and non-tribal institutions and members. Since 2018, the group has grown to include social scientists, graduate students, and undergraduate researchers. The research partners are now probing scientific and ethical questions about wild rice decline in the upper Midwest and the role of genetic research at the university.&#13;
&#13;
The “First” collaboration is one of a handful of tribal/university research partnerships that have sprung up around the country to include tribal perspectives and traditional knowledge in mainstream ecological research. More than stakeholders, tribal partners involved in these projects share power and governance over research alongside mainstream institutional partners. However, such partnerships are not isolated from the complicated histories of European settler colonialism that persist through US institutions today. Finding a way to reconcile those histories is crucial to the success of the partnerships themselves. In the case of the “First” collaboration, reconciling these histories means deeply investing in relationships — relationships to the land and between people — even if that means putting data collection on hold.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>'Burning Issues': Incarcerated Firefighting Programs in the U.S.</title>
<link href="https://hdl.handle.net/1721.1/147599" rel="alternate"/>
<author>
<name>Foehringer Merchant, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/147599</id>
<updated>2023-01-21T03:03:09Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">'Burning Issues': Incarcerated Firefighting Programs in the U.S.
Foehringer Merchant, Emma
There are at least 15 states in the U.S. that use incarcerated people to fight wildland fires: Arizona, California, Colorado, Georgia, Idaho, Montana, Nevada, New Mexico, North Carolina, Oregon, South Dakota, Tennessee, Virginia, Washington, and Wyoming. This thesis outlines the broad adoption and ad-hoc nature of these programs, as well as the wide variation in data available about their operation. Though incarcerated men and women have been fighting fires in the U.S. for decades, many of these programs have received very little public scrutiny.&#13;
&#13;
The impacts of climate change, such as drought and warmer temperatures, have increased the likelihood of wildfires and the portion of the year when those fires are likely to spark. As climate change intensifies and the costs of disasters increase -- in 1990, the U.S. spent $390 million fighting wildfires and in 2021, the nation spent $2.3 billion -- the U.S. will have a growing need for firefighting labor. Meanwhile, the federal government is struggling to hire enough firefighters to meet the demand. Though numerous variables contribute to the creation, maintenance, and size of incarcerated firefighting programs, increasing and more dangerous fire activity could push states to consider using this labor more often. That makes it essential to understand the scope of these programs as well as their ultimate effect on participants.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Toxic Legacy of the Gold Rush</title>
<link href="https://hdl.handle.net/1721.1/147598" rel="alternate"/>
<author>
<name>Campbell, Leah</name>
</author>
<id>https://hdl.handle.net/1721.1/147598</id>
<updated>2023-01-21T03:13:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Toxic Legacy of the Gold Rush
Campbell, Leah
The California Gold Rush has often been told as a story of a brief, environmentally benign, even romantic ‘rush’ that ended as quickly as it began. In truth, gold mining in California was highly extractive and industrial, and continued well into the 20th century. Over time, miners developed increasingly invasive means of getting at the gold, adding chemical additives like mercury and cyanide to make the process more efficient and bringing up toxic heavy metals like arsenic and lead in the process. These contaminants persist in the environment and are known to be harmful to human health.&#13;
&#13;
Today, there are 47,000 abandoned mines littered across California, many of which are gold mines concentrated in the appropriately named Gold Country region of the western Sierra Nevada mountains. Most of these sites were abandoned before federal and state laws required any sort of remediation of mining operations, and, in most cases, the companies and individuals who operated these sites are long gone. Though only a small percentage of these abandoned mines are contaminated, cleaning up toxic mines is a significant logistical, financial, and technical challenge.&#13;
&#13;
The ongoing efforts by government officials and community groups to clean up contaminated gold mines in Gold Country highlight many of the larger challenges of environmental remediation. At Argonaut Mine, an EPA Superfund site, the project manager contends with a “cultural blindness” to the impacts of gold mining and dangerously high levels of contamination that will take several years and millions of dollars to address. At Lava Cap Mine, another EPA Superfund site, those challenges are exacerbated by an ongoing legal battle to hold accountable those who contributed to the problem. Meanwhile, in Nevada City, community groups like Sierra Fund and Sierra Streams Institute are tackling the challenge of the thousands of smaller sites that will never make EPA’s Superfund list. They’re also illuminating the health risks facing residents of Gold Country and the state’s failure to regulate the buying and selling of abandoned mines.&#13;
&#13;
In an era of climate change, with new mining proposals under consideration, California must finally confront the toxic legacy of the Gold Rush.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When the Waters Came</title>
<link href="https://hdl.handle.net/1721.1/147597" rel="alternate"/>
<author>
<name>Rose, Maria Parazo</name>
</author>
<id>https://hdl.handle.net/1721.1/147597</id>
<updated>2023-01-21T03:12:01Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">When the Waters Came
Rose, Maria Parazo
In March 2019, record-breaking floods swept through the Midwest, leaving cities and farmland razed, broken, and drowned. Hundreds of people were displaced, and millions of acres of agricultural land were submerged. Thousands of livestock died; people reported carcasses floating in the currents. Silos were crushed and stored grains were contaminated, losing seasons of labor in a matter of days. But even when the water receded and returned to the confines of its banks, there was no relief for farmers like Jeremy Mahon and his family. Ranchers and farmers are still struggling, three years and hundreds of thousands of lost dollars later.&#13;
&#13;
As climate change exacerbates weather variability and storm severity, areas like the Midwest are expected to see more, and worse, floods and other disasters. Agriculture is crucial to the region’s economic success and residents’ livelihoods, so it will be increasingly important to prioritize conservation and adaptation-focused farming practices to ensure the industry’s safe continuity. But there’s a challenge: for social, cultural, and financial reasons, various people and communities simply don’t want to adapt. As disasters intensify, this resistance may be one of our biggest obstacles to successfully preparing for climate change impacts — the worst of which are still to come.&#13;
&#13;
This thesis explores the long-term impacts of the March 2019 floods on agricultural production, specifically in northeast Nebraska. Though the state at large has mostly recovered, small rural towns and farming communities are still dealing with the repercussions. The thesis goes on to ask what holds people back from adopting adaptive farming practices, an important question given that climate risk is increasing, that the consequences of disasters are severe and long-lasting as well as immediately damaging, and that farming is vital to this area.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Solution In The Sea: New York recently legalized commercial kelp farming. Will it help solve the state's environmental and economic woes?</title>
<link href="https://hdl.handle.net/1721.1/147596" rel="alternate"/>
<author>
<name>Crawford, Iris M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147596</id>
<updated>2023-01-21T03:24:41Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Solution In The Sea: New York recently legalized commercial kelp farming. Will it help solve the state's environmental and economic woes?
Crawford, Iris M.
In 1985, an algae bloom, fueled by nitrogen pollution, transformed the Long Island Sound into a dead zone. Though the area has somewhat recovered, the people and marine life that rely on these waters still feel the impact decades later. Many are hoping that kelp farms can help. Kelp farming is a nature-based mitigation strategy that removes pollutants from ocean water while also providing a commercial crop that can be eaten and used in products ranging from pharmaceuticals to fertilizers.&#13;
&#13;
With support from an ocean farming nonprofit called GreenWave, kelp farms have popped up across the country, but it's only recently that New York has embraced this form of aquaculture. Last December, the New York State Senate passed a bill that legalizes farming certain kelp species during winter months on 110,000 acres of underwater land in Peconic Bay and in Gardiners Bay nearby. The bill has broad support from farmers and environmental groups, but problems with permitting and lack of infrastructure raise questions about how much economic or environmental impact the crop will have statewide.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Think ‘Zebra’</title>
<link href="https://hdl.handle.net/1721.1/147595" rel="alternate"/>
<author>
<name>Evergreen, Shel</name>
</author>
<id>https://hdl.handle.net/1721.1/147595</id>
<updated>2023-01-21T03:33:17Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Think ‘Zebra’
Evergreen, Shel
Thousands of rare diseases affect 300 million people globally, but a potential breakthrough in one sheds light on the systemic barriers to research and diagnosis.&#13;
&#13;
Ehlers-Danlos Syndrome (EDS) has thirteen subtypes, and, to date, all but one have at least one identified genetic marker. In 2021, researchers at the Medical University of South Carolina announced they may have found the first genetic marker for Hypermobile Ehlers-Danlos Syndrome (hEDS). This subtype is the most common and is widely assumed to be less severe than other types of EDS, which is not the case. Further, recent research shows hEDS may not be rare at all, a misconception that is potentially a consequence of systemic underdiagnosis that affects both patient lives and the flow of research funding.&#13;
&#13;
Through stories of scientific research, healthcare providers, and patient experiences, this thesis illustrates the interplay among the difficulty of rare disease diagnosis, the systemic barriers that prevent diagnosis, and the effects both have on institutional research into rare disease.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scrambling for Care: Autism in Rural America</title>
<link href="https://hdl.handle.net/1721.1/147594" rel="alternate"/>
<author>
<name>Zia, Shafaq</name>
</author>
<id>https://hdl.handle.net/1721.1/147594</id>
<updated>2023-01-21T03:33:42Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Scrambling for Care: Autism in Rural America
Zia, Shafaq
The path to autism diagnosis and intervention has always been arduous, but families in small towns with limited access to healthcare services are at a much greater disadvantage. Amid the Covid-19 pandemic, the surge in virtual services for autism has offered clinicians and researchers a unique opportunity to bring care to the home of geographically underserved communities. Through interviews with autism researchers, health experts, and family members of autistic individuals, this two-part project takes a look at the barriers to autism care in rural America, and the potential and limitations of telehealth to expand access to services.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Primary Prevention: What’s causing the rise in Type 1 diabetes—and can it be stopped?</title>
<link href="https://hdl.handle.net/1721.1/147593" rel="alternate"/>
<author>
<name>Dinneen, James</name>
</author>
<id>https://hdl.handle.net/1721.1/147593</id>
<updated>2023-01-21T03:04:19Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Primary Prevention: What’s causing the rise in Type 1 diabetes—and can it be stopped?
Dinneen, James
Decades of research have failed to identify the environmental factors behind rising rates of Type 1 diabetes. However, the search has made Type 1 diabetes one of the best studied autoimmune diseases, with a network of clinics and laboratories dedicated to understanding the interplay of genetic and environmental factors behind the disease. This has enabled clinicians to begin testing treatments to prevent diabetes in high-risk patients at the “primary” phase when all there is to go on is genetic risk. This thesis discusses the search for environmental determinants of diabetes in the context of a primary prevention clinical trial underway at the Institute for Diabetes Research in Munich, Germany. The trial and others underway represent a possible answer for the millions of people at high genetic risk for developing Type 1 diabetes, and other associated autoimmune conditions like celiac disease and allergies. They also offer an early view into the promise and pitfalls of precision medicine.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Correlating Displacement Sensors and In-Situ Optical Imaging for the Layer Management in a Laser Powder Bed Fusion Process</title>
<link href="https://hdl.handle.net/1721.1/147574" rel="alternate"/>
<author>
<name>Sabarad, Satvik Irappa</name>
</author>
<id>https://hdl.handle.net/1721.1/147574</id>
<updated>2023-01-20T03:37:21Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Correlating Displacement Sensors and In-Situ Optical Imaging for the Layer Management in a Laser Powder Bed Fusion Process
Sabarad, Satvik Irappa
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be produced with traditional manufacturing methods. The strategy of this project is to use various sensors in tandem with the machine's built-in camera to manage the pertinent powder layer. This study was carried out by a group of four students in collaboration with the partner company.&#13;
&#13;
For this study, the on-machine camera has been used for in-process monitoring. This involves capturing data in the form of images and post-processing them. Two different methods - one based on mean intensity and one on machine learning - were explored. Gauge Repeatability and Reproducibility (GR&amp;R) studies were conducted for the mean intensity-based method, and the results showed that the process is repeatable across different setups. A laser triangulation sensor was used to correlate with the camera images. The second method, a Convolutional Neural Network (CNN)-based machine learning model, classified the images with 94% accuracy. Both methods were deployed on the machine through a user-friendly interface.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combining density functional theory and machine learning for optimization of multicomponent oxide electrocatalysts</title>
<link href="https://hdl.handle.net/1721.1/147573" rel="alternate"/>
<author>
<name>Karaguesian, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/147573</id>
<updated>2023-11-30T12:36:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Combining density functional theory and machine learning for optimization of multicomponent oxide electrocatalysts
Karaguesian, Jessica
Multicomponent metal oxides, such as perovskite oxides, hold promise for use as sustainable alternatives to Ir-, Ru-, and Pt-based electrocatalysts at scale. Perovskites can accommodate a wide variety of elements in their A- and B-sites, enabling tuning of their structural and electronic properties through compositional alloying. These properties, which are obtainable from density functional theory (DFT) calculations, can be used as low-dimensional descriptors that correlate with experimental stability and activity in, for example, the oxygen evolution reaction (OER). Established descriptors of stability include energy above convex hull and energy above Pourbaix hull, while those for catalytic activity include oxygen 2p- and B-site metal d-band centers, for example. We are therefore presented with a combinatorial problem of determining which A- and B-site compositions optimize such descriptors. The compositional search space of &#119860;ₓ&#119860;′₁₋ₓ&#119861;ᵧ&#119861;′₁₋ᵧ&#119874;₃ perovskites with up to two different elements in A- and B-sites is at least &#119874;(10⁶), making it intractable to calculate descriptors exhaustively using DFT. We therefore combine high-throughput DFT calculations with crystal-based graph neural networks to screen multicomponent perovskites. Using a high-throughput virtual screening platform, a DFT-simulated dataset of over 5,000 multicomponent perovskites was generated, with varied A- and B-site alloying ratios and over 3,000 unique cationic combinations. Leveraging this dataset, alongside calculations available in the literature, graph convolutional neural networks (GNNs) were trained to predict the aforementioned crystal descriptors from unrelaxed cubic structures and used to predict descriptors for &#119874;(10⁶) &#119860;ₓ&#119860;′₁₋ₓ&#119861;ᵧ&#119861;′₁₋ᵧ&#119874;₃ perovskites. 
GNNs were also combined with baseline estimates of multicomponent perovskite properties calculated as interpolations of constituent &#119860;&#119861;&#119874;₃ perovskites, thereby achieving improved model performance. Moreover, impacts of varied cationic ordering were modelled, showing that different decorations of cations within the perovskite lattice can modulate resulting properties to the same degree as—or more than—varying compositional ratios. Equivariant message passing neural networks were thus implemented to achieve cation decoration-aware property predictions. Lastly, GNNs predicting per-site properties were established, encoding local chemical environments to provide physical insights about each atom in a crystal lattice. The presented work provides the community with a benchmark multicomponent perovskite dataset, improved machine learning models, and physical insights to be used in further studies of alloyed perovskites, and thus lays groundwork for improved design of multicomponent oxide electrocatalysts.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Design for Optimal Shift Intervention in Causal Model</title>
<link href="https://hdl.handle.net/1721.1/147571" rel="alternate"/>
<author>
<name>Zhang, Jiaqi</name>
</author>
<id>https://hdl.handle.net/1721.1/147571</id>
<updated>2023-01-20T03:47:37Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Experimental Design for Optimal Shift Intervention in Causal Model
Zhang, Jiaqi
Transforming a causal system from a given initial state to a desired target state is an important task permeating multiple fields including control theory, biology, and materials science. In causal models, such transformations can be achieved by performing a set of interventions. When the space of possible interventions is large, making an exhaustive search infeasible, experimental design strategies are needed. In this context, encoding the causal relationships between the variables, and thus the effect of interventions on the system, is critical in order to identify desirable interventions more efficiently. In this thesis, we develop an iterative causal method to identify optimal interventions, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean. We formulate an active learning strategy that uses the samples obtained so far from different interventions to update the belief about the underlying causal model, as well as to identify the samples that are most informative about optimal interventions and thus should be acquired in the next batch. The approach employs a Bayesian update for the causal model and prioritizes informative interventions using a carefully designed, causally informed acquisition function. Moreover, the introduced acquisition function is evaluated in closed form, allowing for efficient optimization. The resulting algorithms are also theoretically grounded with information-theoretic bounds and provable consistency results. We illustrate the method on both synthetic data and real-world biological data, more precisely gene expression data from Perturb-CITE-seq experiments. In this case the goal is to identify optimal perturbations to induce a specific cell state transition; the proposed causal approach is observed to achieve better sample efficiency compared to several baselines. 
In both cases, we observe that the causally informed acquisition function notably outperforms existing criteria, enabling optimal intervention design with significantly fewer experiments.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal Control for Wireless Software Defined Networks: Theory and Implementation</title>
<link href="https://hdl.handle.net/1721.1/147569" rel="alternate"/>
<author>
<name>Nguyen, Quang Minh</name>
</author>
<id>https://hdl.handle.net/1721.1/147569</id>
<updated>2023-01-20T03:35:20Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimal Control for Wireless Software Defined Networks: Theory and Implementation
Nguyen, Quang Minh
Wireless Software Defined Network (SDN) has emerged as a new programmable network paradigm that facilitates flexibility in robust control and management. Toward production-level network deployment at scale, there has been a surge of interest in distributed architectures of wireless SDN. Despite the inherent importance, optimal network control for either centralized or distributed wireless SDN has remained an open problem, where previous works either fail to account for wireless interference constraints, or are only sub-optimal in throughput due to quasi-static shortest path routing. Though throughput-optimal and well-established in the literature, the BackPressure (BP) algorithm is not compatible with wireless SDN architecture. In contrast, the recently developed Universal Max-Weight (UMW) policy also achieves throughput-optimality, yet permits an algorithmic structure more congruent with SDN’s requirements. Unlike BP, UMW pre-computes a fixed route per packet upon arrival, which can be integrated with the flow installation phase of SDN, and uses novel easy-to-track virtual queues in place of physical queues (of backlogged packets), whose operations are not supported by SDN switches. In this thesis, we propose novel UMW-based optimal control frameworks for both centralized and distributed wireless SDN that achieve the full network capacity and support an arbitrary mix of multi-type traffic.&#13;
&#13;
For centralized wireless SDN, we develop a Mininet-based implementation of the UMW framework to evaluate its performance. In order to improve robustness in dynamic wireless environments, we modify the UMW algorithm to enable re-routing around failed links. Compared against conventional SDN shortest path routing, our algorithm improves throughput by over 100% and significantly reduces average per-packet delay in the high-throughput regime. We further present the Randomized UMW (RUMW) algorithm, which performs scheduling in linear time yet still maintains throughput-optimality in dynamic network settings.&#13;
&#13;
For distributed wireless SDN, our proposed Distributed Universal Max-Weight (DUMW) algorithm is throughput-optimal and non-trivially extends the UMW policy to permit distributed control and optimal inter-domain scheduling under heterogeneously delayed network state information. Furthermore, we design controller synchronization strategies that resolve the problem of multi-domain flow installation and are tailored to DUMW for maintaining throughput-optimality with negligible communication overhead. Extensive simulations validate the throughput-optimality and demonstrate the superior scalability of our framework.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structuring Representations in Deep Learning: Symmetries and Linear Models</title>
<link href="https://hdl.handle.net/1721.1/147568" rel="alternate"/>
<author>
<name>Lawrence, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/147568</id>
<updated>2023-01-20T03:32:32Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Structuring Representations in Deep Learning: Symmetries and Linear Models
Lawrence, Hannah
The ability of deep neural networks to learn rich data representations is considered paramount to understanding their behavior and empirical success. In particular, imposing known structure on learned representations via careful architecture choice has proven impactful for problems with underlying symmetries. Conversely, discovering the similarity structure between different representations — even in the absence of such explicit priors — provides a valuable tool for comparing the architectures which gave rise to them. In this thesis, we study three aspects of deep learning theory through the lens of structured representations: architecture optimization, approximation, and comparison. First, we examine the implicit bias of gradient descent on linear group convolutional networks (G-CNNs), which provide a model for learning highly structured representations. For such architectures, we prove that gradient descent implicitly minimizes the net’s Schatten norm in Fourier space [Lawrence et al., 2022]. While the explicit bias of equivariant nets is the main reason for their usage, this result indicates that a structured implicit bias may impact the types of functions they learn as well. Next, we expand on existing universality results for equivariant architectures. In contrast to their exponential dependence on dimension, we demonstrate that certain smooth subclasses of invariant functions, analogous to Barron classes of functions, can be efficiently approximated using architectures which capture invariant representations. Finally, we define a new metric for probing the structure of arbitrary learned representations [Boix-Adserà et al., 2022]. In particular, we embed trained representations into a shared metric space, based on the principle that representations are “close” if they behave similarly on downstream linear regression tasks. 
This metric, termed gulp, is invariant under unitary transformations, and empirically provides an effective method for comparing learned representations on different architectures.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of High-Performance Piezoelectric Transformer-Based DC-DC Converters</title>
<link href="https://hdl.handle.net/1721.1/147567" rel="alternate"/>
<author>
<name>Ng, Elaine</name>
</author>
<id>https://hdl.handle.net/1721.1/147567</id>
<updated>2023-01-20T03:35:36Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design of High-Performance Piezoelectric Transformer-Based DC-DC Converters
Ng, Elaine
Piezoelectric transformers (PTs) are a promising energy storage alternative for power converter miniaturization. PTs offer galvanic isolation and voltage transformation like traditional magnetic transformers, but have more advantageous power scaling properties at low volumes. Despite these advantages, most attempts at magnetic-less PT-based dc-dc converters have achieved only limited whole-converter efficiencies. These attempts typically rely on standard resonant converter topologies not designed to effectively utilize PTs, along with off-the-shelf PTs not optimized for dc-dc power conversion.&#13;
&#13;
In this thesis, we propose a two-pronged approach to designing high-efficiency, high-power-density PT-based converters: (1) circuit-level design strategies and (2) component-level design strategies to optimize PT components for power conversion. At the circuit level, we select high-efficiency topologies and switching sequences that most efficiently utilize PTs as the sole energy storage components in dc-dc converters. At the component level, we present geometry conditions that can aid designers in achieving PT components with both high efficiency and high power density. The strategies and analyses presented in this thesis are validated through simulation of an example PT-based converter design.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Machine Learning Techniques on Satellite Data to Predict the Effect of Urbanization on Avian Biodiversity</title>
<link href="https://hdl.handle.net/1721.1/147565" rel="alternate"/>
<author>
<name>Tynan, Savannah</name>
</author>
<id>https://hdl.handle.net/1721.1/147565</id>
<updated>2023-01-20T03:34:50Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Using Machine Learning Techniques on Satellite Data to Predict the Effect of Urbanization on Avian Biodiversity
Tynan, Savannah
Assessing the relationship between environmental and socio-economic conditions of an urban area and local urban biodiversity loss is integral to informing policy decisions, urban design, and community action plans.&#13;
&#13;
Though some previous research explores the relationship between urban areas and biodiversity loss, the limited available studies are often specific to only one city or region. Those that do model this phenomenon in multiple cities are limited to environmental variables, and rarely examine the socio-economic conditions of a city, such as average GDP or population density. To our knowledge, no studies analyze the complex underlying relationship between socio-economic as well as environmental conditions within urban areas and biodiversity, which is necessary for strategically protecting the most at-risk regions.&#13;
&#13;
This work aims to leverage satellite datasets to predict cities’ risk exposure for bird biodiversity loss. This research aspires to develop an analytical approach that can be used for various types of biodiversity, though we restrict our initial analysis to birds, as they offer a broad range of data and can be used as an indicator for other species. We aim for our approach to be applicable to all urban areas, so this research leverages a globally representative sample of cities with robust survey data. The overarching goal of this project is to design a methodology to better advise resource allocation for the protection of global biodiversity. &#13;
&#13;
We process 9 publicly available satellite datasets to create environmental and socio-economic features for every functional urban area (FUA), as classified by the OECD, totalling over 9,000 FUAs. We train and test 3 models: linear regression, random forest regression, and a hybrid supervised and unsupervised model. We analyze the predictive power of these approaches as well as the relative importance assigned to each input feature. We find that all 3 approaches can accurately predict biodiversity loss, and all of them find that the maximum land modification value of each FUA is the most useful feature in determining biodiversity loss. Finally, we discuss the implications of these findings and our models’ ability to inform resource allocation.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Prototype to Production: Focus on Manufacturing for Low Volume Production of an On-Farm Milk Analyzing Device</title>
<link href="https://hdl.handle.net/1721.1/147559" rel="alternate"/>
<author>
<name>Bataille, Henri J.</name>
</author>
<id>https://hdl.handle.net/1721.1/147559</id>
<updated>2023-01-20T03:52:35Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">From Prototype to Production: Focus on Manufacturing for Low Volume Production of an On-Farm Milk Analyzing Device
Bataille, Henri J.
The dairy industry is one of the foundations of the food industry; improving it can therefore enhance the efficiency of the food market as a whole. The project developed in this thesis aims to join the connectivity revolution now occurring in almost every field. Labby is a start-up that has developed a technology enabling farmers to monitor their milk on-site. However, this young company had not previously employed design and manufacturing engineers. This thesis therefore focuses on the manufacturing that follows the design of the milk-analyzing product. The central challenge was that the part must be cheap, robust, and scalable, and both the design of the product and its operating environment must be considered for manufacturing. For that reason, two condensed chapters illustrate the path from ideation to production of the product. Since the final goal is to sell the product to dairy farms, this thesis also focuses on the tests that have been run throughout the months. Moreover, it explains how and why each part is manufactured in its particular way, at times comparing alternative methods.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Warehouse Automation: Improvements for the Precise Placement of Irregular Pallets</title>
<link href="https://hdl.handle.net/1721.1/147558" rel="alternate"/>
<author>
<name>Rodriguez Cabrera, Luis Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/147558</id>
<updated>2023-01-20T03:39:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Warehouse Automation: Improvements for the Precise Placement of Irregular Pallets
Rodriguez Cabrera, Luis Fernando
Anonymous Lumber Co. (ALC) has decided to implement a new automated warehouse to feed its Continuous Drying System (CDS). The new warehouse’s job is to autonomously move pallets, also known as “units,” of stacked green lumber through the warehouse and onto the dry kilns. However, avoiding large gaps after placing the units on the drying kiln’s carts has proven to be more difficult than anticipated. This work dives into the current system implemented by ALC, the problems faced, several proposed solutions, and a detailed exploration into a proposal using LIDAR to measure irregularities in the units to control their precise positioning. The project’s main goal was to develop a solution which would improve ALC’s productivity while being quick and easy to implement. In the end, restricted access and time constraints postponed the implementation of this solution at ALC’s facility. However, a small-scale proof-of-concept was constructed to gather data and extrapolate what the expected performance of the chosen solution could be.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Analysis of a Numerical Well Design Optimization Process</title>
<link href="https://hdl.handle.net/1721.1/147556" rel="alternate"/>
<author>
<name>Hicks, Andre</name>
</author>
<id>https://hdl.handle.net/1721.1/147556</id>
<updated>2023-01-20T03:32:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">System Analysis of a Numerical Well Design Optimization Process
Hicks, Andre
The emergence of advanced engineering technologies creates the opportunity to improve existing manual and siloed workflows within integrated energy companies. The processes used for drilling engineering have not evolved at the pace of cutting-edge technology advancements over the last 20 years. The most significant shifts in classical well development are standardized design methods, advanced disciplinary analysis, improved knowledge transfer systems, Excel-based workflows, and structured employee training.&#13;
&#13;
As Industry 4.0 progresses, technologies in Model-Based Systems Engineering are emerging to enhance existing well design processes, yet the step change is insufficient to close the technology gap. This research contributes to existing drilling engineering and well design advancements by developing a system optimization architecture for the well design process. A random search algorithm coupled with a stochastic optimization methodology for multi-objective optimization emerges through the relationships defined within a system Design Structure Matrix (DSM). The optimization method includes the evaluation of the algorithms’ computational efficiency, design diversity, and convergence. The development of a numerical solution for well design will provide the framework necessary to implement advanced analysis of well design that can accurately predict the quality of engineering decision-making.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing the Mantle Transition Zone beneath the Central Pacific Region Using PP Precursors</title>
<link href="https://hdl.handle.net/1721.1/147555" rel="alternate"/>
<author>
<name>Jian, Jing</name>
</author>
<id>https://hdl.handle.net/1721.1/147555</id>
<updated>2023-01-20T04:05:20Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Probing the Mantle Transition Zone beneath the Central Pacific Region Using PP Precursors
Jian, Jing
The mantle transition zone (MTZ) of the Earth is bounded by globally observed discontinuities at depths of about 410 and 660 km, which are mainly attributed to the spinel and post-spinel phase transitions of olivine. The temperature-pressure relationships of phase transitions at the discontinuities, resulting in changes in seismic properties, make the MTZ a robust indicator of the mantle’s thermal and chemical conditions. With the hypothesis of an existing mantle plume under the Hawaii-Emperor seamount chain, some studies have analyzed SS precursors, underside S-wave reflections at the deep discontinuities, to constrain the thermal and compositional heterogeneities of the mantle beneath the Central Pacific. However, the analysis of PP precursors (P-wave counterparts of SS precursors) in this area and elsewhere is scarce. In this study, we compare field observations of PP precursors with results from thermodynamic modeling at various thermal and compositional conditions to estimate the average adiabatic mantle temperature and thickness of the MTZ beneath the Central Pacific, based on the best matches of both travel times and amplitude ratios of PP precursors. We image the mantle transition zone beneath the Central Pacific using ~155,000 broadband PP waveforms and generate synthetic waveforms from thermodynamic modeling based on velocity-density profiles that represent harzburgitic (80% olivine), mixed (64% olivine), and pyrolitic (60% olivine) models at temperatures of 1,200-1,600 °C. With the curvelet transform, an array processing technique, we improve the signal-to-noise ratio of the PP precursors by suppressing the energy from interfering phases in images from both observed and synthetic waveforms, and reveal the precursors over a more extensive epicentral distance range. The observations show strong reflections at the 410- and 520-km discontinuities but a weak one at the 660-km discontinuity; thus, we focus on P410P and P520P. 
We conduct differential time analysis by measuring the difference between the arrival times of P410P and P520P for both the observed and synthetic waveforms, and the comparisons, along with pressure-temperature data from mineral physics, suggest an average mantle temperature of 1,200-1,250 °C with no differentiation among compositional types. A more promising method measures and compares the peak amplitude ratios of the precursors to the main PP phase between the observed and synthetic waveforms around the predicted arrival times. The amplitude analysis shows that the model that best resembles the observations combines a pyrolitic model for the upper mantle with a mixture of 80% harzburgite and 20% basalt deeper in the MTZ at 1,350-1,450 °C, suggesting an average adiabatic mantle temperature of 1,400±50 °C and a composition of ~60-64% olivine. We also locate the depths of the 410- and 660-km discontinuities from the velocity-density profiles and calculate the thickness of the MTZ, which is 253±3 km. We calculate the differential travel time of P410P and P520P laterally across the region and conclude that the MTZ temperature is generally lower to the west and southwest and higher to the north and east of Hawaii.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Manufacturing Methods for a Curved All-Around Camera-Based Tactile Sensor</title>
<link href="https://hdl.handle.net/1721.1/147554" rel="alternate"/>
<author>
<name>Tippur, Megha H.</name>
</author>
<id>https://hdl.handle.net/1721.1/147554</id>
<updated>2023-01-20T03:07:40Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design and Manufacturing Methods for a Curved All-Around Camera-Based Tactile Sensor
Tippur, Megha H.
In recent years, the use of camera-based tactile sensors in robotics has grown in popularity as the dexterous and in-hand manipulation tasks we aim to accomplish become more difficult and as the environments a robotic hand must interact with become more complex. Past works have shown that by giving robots the sense of touch, we can greatly improve their ability to complete a variety of sophisticated tasks effectively and efficiently. However, many tactile sensors capable of producing accurate contact localizations, force readings, or 3D reconstructions of the sensor’s surface have a flat 2D shape; this may not be an ideal configuration for robots working in a 3D world. In this work, we present three design and manufacturing methods that were explored to build a curved, all-around, camera-based tactile sensor capable of producing accurate surface deformation depth maps and contact localizations. Additionally, we build on previous GelSight sensors’ photometric stereo lighting methods to develop an orthogonal, cross-shaped illumination structure that has the potential to be transferred to different curved sensor geometries (with some changes to the lighting and camera parameters required). By providing roboticists with an all-around tactile sensor which can be configured to fit specialized tasks on different types of robotic hands, we hope to help reduce some of the constraints considered when approaching a manipulation problem.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Survey of Superfund Chemicals in Massachusetts Farms</title>
<link href="https://hdl.handle.net/1721.1/147553" rel="alternate"/>
<author>
<name>Riedinger, Kristen A.</name>
</author>
<id>https://hdl.handle.net/1721.1/147553</id>
<updated>2023-01-20T03:40:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Survey of Superfund Chemicals in Massachusetts Farms
Riedinger, Kristen A.
Heavy metals and organic pollutants in the environment can present a public health threat, and understanding how different pollutants are distributed in our agricultural resources and in the food chain is fundamental to predicting and preventing their negative health impacts. The purpose of this study was to measure the concentrations of 22 inorganic elements (including arsenic and lead) and 334 organic compounds (including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs), and volatile organic compounds (VOCs)) in water, soil, and produce samples from organic farms in Massachusetts, USA. Measured concentrations were compared to regional and federal limits where applicable; few exceedances of these limits were found. Comparisons of compound levels across water, soil, and produce at individual sites displayed no significant patterns (i.e., correlations related to fundamental partitioning between media). Lastly, a distance analysis showed little indication that detected compounds migrated from nearby EPA Superfund sites. These results indicate that there is no systemic contamination associated with transport between historic EPA Superfund sites and contemporary organic farms in Massachusetts. The low concentrations of these elements found in this study suggest little health risk from locally grown produce.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation into Topological Crystals and Flat Band Systems</title>
<link href="https://hdl.handle.net/1721.1/147551" rel="alternate"/>
<author>
<name>Debbas, Maximilien</name>
</author>
<id>https://hdl.handle.net/1721.1/147551</id>
<updated>2023-01-20T03:47:48Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">An Investigation into Topological Crystals and Flat Band Systems
Debbas, Maximilien
The field of condensed matter physics is characterized by a vast array of physical phenomena which may be investigated through measurements of crystalline materials. This thesis investigates the synthesis of 3-dimensional systems under two broad classes: topological materials and flat band materials. The topological systems investigated include the topological insulator Cu₂Ti, the Weyl semimetal VMg₂O₄, and the nodal semimetal PtSeTe. The flat band systems investigated all incorporate the 2-dimensional Kagome lattice in their 3-dimensional structures, and include the materials Zr₂Fe₃(Si,Ge) and Fe₆Ge₆Zr. Synthesis campaigns for these candidate materials included the implementation of solid state reactions, flux growths, and chemical vapor transport growths. For these systems, the predicted physics, as well as the design, execution, and characterization of quantum material synthesis will be presented.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Impacts of Mobility-as-a-Service in Prototypical North American Cities via Agent-based Simulation</title>
<link href="https://hdl.handle.net/1721.1/147546" rel="alternate"/>
<author>
<name>DeSoto, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/147546</id>
<updated>2023-01-20T03:59:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Evaluating the Impacts of Mobility-as-a-Service in Prototypical North American Cities via Agent-based Simulation
DeSoto, Emma
Mobility-as-a-Service (MaaS) is a natural evolution of trends in shared mobility. As new mobility options emerge, users are faced with an increasingly splintered mobility landscape. By unifying emerging on-demand modes with traditional mobility options, MaaS can increase users’ transportation accessibility, empowering users to pick between and use different modes. Furthermore, by offering users incentives, MaaS has the potential to shift users towards more sustainable mobility patterns, improving network congestion and emissions. The objective of this thesis is to analyze the potential demand for MaaS in cities in the United States using agent-based simulation. Three different MaaS menus are designed, comprising MaaS plans varying in price and incentive structure. The adoption of, subscription to, and impacts of these MaaS menus are simulated in three major city-types present in the U.S. using the state-of-the-art simulation laboratory developed by the MIT Intelligent Transportation Systems (ITS) Lab, SimMobility. SimMobility is a multi-scale, multi-modal activity- and agent-based simulation software suitable for high-fidelity simulation of hypothetical transportation scenarios. This thesis demonstrates that MaaS has the potential to penetrate the United States market, capturing users of both sustainable and unsustainable modes. MaaS has the potential to improve transportation sustainability by promoting car-agnostic modes such as mass transit and Mobility on Demand to car users, while also having the potential to trigger switches away from sustainable active mobility modes.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Exploit Generation for Cross-Language Attacks</title>
<link href="https://hdl.handle.net/1721.1/147544" rel="alternate"/>
<author>
<name>Mihretie, Yosef E.</name>
</author>
<id>https://hdl.handle.net/1721.1/147544</id>
<updated>2023-01-20T03:29:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Automatic Exploit Generation for Cross-Language Attacks
Mihretie, Yosef E.
Memory corruption is an essential component of most computer exploits. At the same time, a significant portion of legacy system software is written in C/C++, which are known to be memory-unsafe. This has led to an arms race between attackers devising increasingly clever ways to execute memory corruption and developers engineering mitigation techniques to either prevent or raise the alarm when memory is corrupted. This has come to be known as “The Eternal War in Memory”. Recently, however, software programmers have shifted to using programming languages that are memory-safe by design, like Go and Rust. These languages are especially favorable because they provide an easy interface that allows them to interact with the widely established C/C++ based infrastructure. Underlying this design approach is the assumption that replacing parts of a largely memory-unsafe software program with memory-safe code will raise the overall security of the program. Recent work, however, has shown that this assumption is flawed. In fact, mixing sections with different threat models into one program can lead to attacks that would not have been possible in the two sections individually. These attacks are called Cross-Language Attacks (CLA). At the same time, analyzing large binary programs to construct CLA exploits is a tedious process. In this thesis, we present ACLEG, which automatically generates CLAs for the case of double-free exploits. ACLEG can help researchers and engineers understand the extent of CLA vulnerabilities in commercially deployed software programs. Moreover, it can help find bugs in software programs before they are deployed, as part of the debugging toolset.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In-situ Characterization of Sea State with Improved Navigation on an Autonomous Underwater Glider</title>
<link href="https://hdl.handle.net/1721.1/147537" rel="alternate"/>
<author>
<name>Burgess, Gregory A.</name>
</author>
<id>https://hdl.handle.net/1721.1/147537</id>
<updated>2023-01-20T03:16:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">In-situ Characterization of Sea State with Improved Navigation on an Autonomous Underwater Glider
Burgess, Gregory A.
This thesis presents an Autonomous Underwater Glider (AUG) architecture with improved onboard navigation and acoustics-based sensing intended to enable basin-scale unattended surveys of our Earth's most remote oceans. Traditional AUGs have long been an important platform for oceanographic surveys due to their high endurance and autonomy, yet they lack the operational flexibility to operate in many regions of scientific interest and the sensing capability to capture scientific data at the air-sea interface. Of particular interest are the marginal ice zones (MIZ) of the Arctic and the Southern Ocean, as both are vitally important to understanding global climate trends, yet prohibitively expensive to persistently monitor with support vessels. To fill this observational gap, the sensing, navigation, and adaptability of AUGs must be improved. This is possible by employing onboard acoustic sensing for sea state observation and navigation, as well as incorporating vehicle improvements targeting maneuverability and intelligent adaptability to evolving environmental states.&#13;
&#13;
To enable persistent monitoring of both the water-column and air-sea interface, this thesis proposes an improved vehicle architecture for a more capable AUG, a real-time DVL-aided navigation process that leverages ocean current sensing to limit localization error, and a subsea acoustics-based sea state characterization method capable of analyzing wave spectra under-ice and with zero surface expression. These methods are evaluated with respect to extensive laboratory experiments and field data collected during in-situ implementation.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factory and Material Flow Design for Mass Production of an Advanced Process Control Educational Device</title>
<link href="https://hdl.handle.net/1721.1/147536" rel="alternate"/>
<author>
<name>Rojrungsasithorn, Tanach</name>
</author>
<id>https://hdl.handle.net/1721.1/147536</id>
<updated>2023-01-20T03:36:15Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Factory and Material Flow Design for Mass Production of an Advanced Process Control Educational Device
Rojrungsasithorn, Tanach
A desktop Fiber Extrusion Device (FrED) was developed primarily for teaching smart manufacturing and feedback control systems. As an educational kit, FrED was designed to be compact, safe, and low-cost while also providing feature-rich data. However, the current cost of FrED was still too high, requiring further design and development to make it affordable for individual learners. One part of FrED development was to build a FrED factory for mass production, in order to provide physical kits for both offline and online classes. This thesis proposed a factory design based on collected user needs, which included office and production areas to effectively support mass production. The material flow and production line were designed and modeled by understanding and performing time studies on all manufacturing processes and logistics required for each component. Scheduling of the part fabrication process was also conducted to minimize the overall production time. With the proposed production line model, the production times for one FrED and for five FrEDs were expected to be 1 day 5 minutes and 1 day 163.75 minutes, respectively. This preliminary study of FrED production can be used to estimate the production requirements for larger batches and to further improve manufacturing processes, reducing the production time required and thus yielding a higher throughput rate in future mass production.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of Additive Manufacturing with CNC Sheet Metal Fabrication for Hybrid Fixtures: Design and Implementation of Sheet Metal Tooling Supports</title>
<link href="https://hdl.handle.net/1721.1/147535" rel="alternate"/>
<author>
<name>Ajami, Hassan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/147535</id>
<updated>2023-01-20T03:46:41Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Integration of Additive Manufacturing with CNC Sheet Metal Fabrication for Hybrid Fixtures: Design and Implementation of Sheet Metal Tooling Supports
Ajami, Hassan H.
The objective of this project was to facilitate the integration of additive manufacturing and CNC sheet metal fabrication to create hybrid check fixtures. In this case, the tool comprises a sheet metal base and a powder bed fusion cover. Using the Agile product development framework, the team conducted a series of sprints, going from concept models to a final production tool in just over two months. Additive manufacturing investigations conducted to converge on the optimal production solution included studies on dimensional process capability, additive process type, material tradeoffs, and business factors. Moreover, several sheet metal and tubing structures were tested to achieve a highly accurate base for the additively manufactured surface. The integration of these parts was enabled by elastic-averaging-based connector geometries that also evolved throughout the different sprints in conjunction with results from efficient simulation models. The production hybrid fixture presented a range of benefits for the automotive OEM and project sponsor, General Motors (GM). Compared to traditional fixtures, the lead time was shortened by 92%, the cost was reduced by 65%, and recyclability increased from 59% to 100%. These benefits were achieved while meeting all product owner requirements and technical specifications. Given the increasing demand for check fixtures owing to shortening product lifecycles, it is expected that the savings generated can scale up significantly. Moreover, many of the techniques developed can be applied to other types of fixtures, such as those used for welding and subassembly. The project was also successful at fulfilling an internal company goal of generating sufficient traction to launch a series of collaborative initiatives between the sheet metal fabrication and additive manufacturing teams at GM.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learned Interpolation for Better Streaming Quantiles with Worst Case Guarantees</title>
<link href="https://hdl.handle.net/1721.1/147533" rel="alternate"/>
<author>
<name>Schiefer, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/147533</id>
<updated>2023-01-20T03:31:50Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Learned Interpolation for Better Streaming Quantiles with Worst Case Guarantees
Schiefer, Nicholas
An ε-approximate quantile sketch over a stream of n inputs approximates the rank of any query point q—that is, the number of input points less than q—up to an additive error of εn, typically with probability at least 1 − 1/poly(n), while consuming o(n) space. While the celebrated KLL sketch of Karnin, Lang, and Liberty is a provably optimal quantile approximation algorithm over worst-case streams, the approximations it achieves in practice are often far from optimal. Indeed, the technique most commonly used in practice is Dunning’s t-digest, which often achieves much better approximations than KLL on real-world data but is known to have arbitrarily large errors in the worst case. We apply interpolation techniques to the streaming quantiles problem to achieve better approximations than KLL on real-world data sets while maintaining similar worst-case guarantees.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Studies for Future EAD-Propelled Aircraft</title>
<link href="https://hdl.handle.net/1721.1/147532" rel="alternate"/>
<author>
<name>Perovich, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/147532</id>
<updated>2023-01-20T04:01:40Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design Studies for Future EAD-Propelled Aircraft
Perovich, Nicholas
Following the first flight of a heavier-than-air EAD-propelled aircraft in 2018 by Xu et al., this thesis contributes to designing and building the next iteration of EAD-propelled aircraft. The nominal design is based on aircraft parameters produced by a geometric programming aircraft design optimizer. This thesis presents an experimental campaign to validate and improve the models used by the optimizer, namely the mass, drag, electrical, and structural models. Preliminary experiments involve wing bending, charge buildup on flight components, and large-scale thrust measurements. Detailed design of the aircraft components is completed and a full airframe is manufactured. A flight test campaign consisting of both indoor and outdoor tests is completed to quantify the flight characteristics of the airframe and evaluate thrust. Results are presented in the context of several technical, operational, and programmatic suggestions made to improve efforts to design, build, and fly future EAD-propelled aircraft.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative methods for multiplexed cellular engineering and directed evolution</title>
<link href="https://hdl.handle.net/1721.1/147531" rel="alternate"/>
<author>
<name>Padia, Umesh Janak</name>
</author>
<id>https://hdl.handle.net/1721.1/147531</id>
<updated>2023-01-20T03:07:56Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Quantitative methods for multiplexed cellular engineering and directed evolution
Padia, Umesh Janak
Multiplexed screening through pooled libraries has gained traction in engineering biological behavior. In particular, it has been effective at controlling cells, designing proteins, determining targets for gene therapy, and developing small molecule drugs. This thesis contributes to cellular engineering and multiplexed screening in three projects. First, this thesis introduces a sequence-aware probabilistic model for cellular transcription that may be applied to library-scale cellular engineering screens. The model outperforms recently published single-cell models on key classification metrics. Next, this thesis introduces a multiplexed in vivo pipeline to engineer T-cell migration to solid tumors. The application of this pipeline recapitulates known homing factors associated with T-cell migration to melanoma. Finally, this thesis demonstrates a versatile distributed system to guide the design of proteins in directed evolution experiments and is generally applicable to all multiplexed library screens.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Active Object-Based Navigation</title>
<link href="https://hdl.handle.net/1721.1/147530" rel="alternate"/>
<author>
<name>Killy, S. Violet</name>
</author>
<id>https://hdl.handle.net/1721.1/147530</id>
<updated>2023-01-20T03:49:05Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Towards Active Object-Based Navigation
Killy, S. Violet
This thesis investigates the design and implementation of an active object-based navigation system. Localization refers to the capability of a mobile robot to determine its position relative to a prior map of the environment. Localization based on fiducial markers, such as AprilTags, has been a popular technique used widely in many laboratories for over a decade; however, it is desirable to enable a mobile robot to navigate based on the objects that occur naturally in a given environment. The goal of using real objects as “geometric beacons” for robot navigation was identified many years ago. In practice, however, it has been challenging to develop metrically accurate robot localization systems that use semantic, object-based maps. Recent years have seen tremendous progress in the use of machine learning techniques to recognize objects from camera images, including 6 degree-of-freedom (DOF) object pose estimation. Using these machine learning capabilities for robot navigation is an important application, yet research on real-time object-based localization systems has been limited. One difficulty is that to obtain accurate 6DOF object pose estimates, the object typically needs to be centered in the camera’s field of view and observed from a favorable viewing distance. To help meet this need, this thesis developed an actively controlled pan-tilt camera mount and deployed it on a small mobile robot to explore capabilities for improved object-based navigation. By running a detector in real time as the robot navigates through the world, the pan-tilt head can actively track an object of interest; 6DOF pose estimates of the object are used to estimate the robot’s pose. The performance of the system is analyzed using ground truth provided by an OptiTrack motion capture system. Future work will extend the system to track multiple objects and to perform active SLAM.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Reinforcement Learning and Evolutionary Computation for Games with Stochasticity and Incomplete Information</title>
<link href="https://hdl.handle.net/1721.1/147529" rel="alternate"/>
<author>
<name>Zhou, Xinhe</name>
</author>
<id>https://hdl.handle.net/1721.1/147529</id>
<updated>2023-01-20T03:09:36Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Investigating Reinforcement Learning and Evolutionary Computation for Games with Stochasticity and Incomplete Information
Zhou, Xinhe
This thesis presents an application of reinforcement learning and evolutionary computation for solving complex games with incomplete information and stochasticity. Although there has been significant recent progress on AI game players, traditional deep reinforcement learning methods have mainly shown success in games with simpler properties. In this thesis, we evaluate two deep reinforcement learning methods: policy gradient and evolutionary strategies for training the neural network behind the AI players for Ticket to Ride, a complex strategic board game. By comparing AI players’ performance and policies with existing heuristics players, we show that the AI players learn well under both training algorithms. Furthermore, the results indicate that training the AI players under the complete information game environment has a positive influence on their performance under the incomplete information game environment as well.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decoding Invisible 3D Printed Tags with Convolutional Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/147528" rel="alternate"/>
<author>
<name>Yotamornsunthorn, Veerapatr</name>
</author>
<id>https://hdl.handle.net/1721.1/147528</id>
<updated>2023-01-20T03:01:08Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Decoding Invisible 3D Printed Tags with Convolutional Neural Networks
Yotamornsunthorn, Veerapatr
Imperceptible tags embedded in three-dimensional (3D) objects have recently shown promising utility in applications such as augmented and virtual reality interactions, logistics tracking, and robotics. The InfraredTag is a newly developed tag that is imperceptible to the eye and can be 3D-printed as part of an object; it can be detected by an infrared (IR) camera. A common problem with IR images is insufficient resolution, which may render the embedded tag unreadable, so image processing is required to increase contrast. Current image processing techniques use a different set of parameters for each filter and can take several seconds to finish, making it challenging to read InfraredTags in real time. To reduce processing time, this thesis seeks to eliminate the need to try all parameter sets. It instead uses convolutional neural networks (CNNs) to quickly convert an IR image into a binary image, from which the embedded code can be readily read.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experiments To Improve Behavior of Electrowetting Surfaces in Microhydraulic Actuators</title>
<link href="https://hdl.handle.net/1721.1/147527" rel="alternate"/>
<author>
<name>Liu, Isabelle Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/147527</id>
<updated>2023-01-20T04:01:58Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Experiments To Improve Behavior of Electrowetting Surfaces in Microhydraulic Actuators
Liu, Isabelle Y.
Microhydraulic actuators based on the principles of electrowetting were developed at MIT Lincoln Laboratory. The actuators consist of two parts, a rotor (droplet array) and a stator (electrode array), that move relative to each other. This thesis focuses on the latest two versions of the actuators, MHA5 and MHA6, which differ in geometry and fabrication process. MHA5 demonstrated successful electrical actuation but had stability issues; the initial version of MHA6 (MHA6A) could not actuate due to high friction between the rotor and stator surfaces. This thesis investigated possible causes of the high friction in MHA6 through quantitative friction drag experiments and contact angle experiments, and determined fabrication processes that produce low-friction surfaces. The findings were incorporated into a subsequent version of MHA6 (MHA6B), which demonstrated successful electrical actuation. This thesis also calculated the actuators’ expected torque and performed the first quantitative torque measurements.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Smartphone-Based Computer Vision and Machine Learning Platform for Identification of Surgical Site Infections</title>
<link href="https://hdl.handle.net/1721.1/147526" rel="alternate"/>
<author>
<name>Wang, Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/147526</id>
<updated>2023-01-20T03:59:58Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Integrated Smartphone-Based Computer Vision and Machine Learning Platform for Identification of Surgical Site Infections
Wang, Lilian
The infection of surgical wounds, also known as surgical site infection (SSI), not only represents a significant financial cost for health care systems worldwide but also poses a threat to the life and health of women in developing countries who give birth by Cesarean section (C-section). To help monitor and detect SSI in women who recently underwent C-section births, this thesis presents the design and development of an integrated smartphone application that can be used by community health workers (CHW) to help detect SSI from a smartphone camera image. The mobile application includes four main components: (1) a computer vision image capture algorithm with automated image scaling, cropping, and rotation; (2) automated image quality assessment to provide real-time feedback to the CHW; (3) an image processing pipeline to perform image sampling, color correction, and brightness adjustment; and (4) integrated image-based machine learning prediction, making use of a previously developed convolutional neural network (CNN) model. The integrated smartphone application, created with the Android Java SDK, is primarily designed to operate in rural parts of the world where Internet access is lacking. However, the mobile application is also designed to connect and synchronize data with a remote electronic medical record (EMR) server developed at MIT, known as the PyMed EMR server. In this thesis, I describe the design and implementation of the main components of the mobile application and the complete application workflow. I also discuss the performance of the application on different mobile phone models as well as the performance trade-off between online and offline wound infection prediction.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speech-Based Artificial Intelligence Emotion Biomarkers in Frontotemporal Dementia</title>
<link href="https://hdl.handle.net/1721.1/147525" rel="alternate"/>
<author>
<name>Parllaku, Fjona</name>
</author>
<id>https://hdl.handle.net/1721.1/147525</id>
<updated>2023-01-20T03:13:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Speech-Based Artificial Intelligence Emotion Biomarkers in Frontotemporal Dementia
Parllaku, Fjona
Acoustic speech markers are well characterized in Frontotemporal Dementia (FTD), a heterogeneous spectrum of progressive neurodegenerative diseases that can affect speech production and comprehension as well as higher-order cognition, behavior, and motor control. While profound apathy and deficits in emotion processing are also common symptoms, emotional content has yet to be explored in acoustic models of speech. We retrospectively analyze a dataset of standard elicited speech tasks from 69 FTD patients and 131 healthy elderly controls seen at the University of Melbourne. We develop two ResNet50 models to classify FTD vs. healthy elderly controls using spectrograms of speech samples: 1) a naive model, and 2) a model pretrained on an emotional speech dataset. We compare the validation accuracies of the two models on different speech tasks. The pretrained model better classifies FTD vs. healthy elderly controls, and the behavioral variant of FTD (bvFTD) vs. healthy elderly controls, with validation accuracy scores of 79% and 84%, respectively, in the monologue speech task, and 93% and 90% in the picture description task. When considered singly, the ‘happy’ emotion best discriminates between FTD and healthy elderly controls compared to other latent emotions. Pretraining acoustic models on latent emotion increases classification accuracy for FTD. We demonstrate the greatest improvement in model performance on elicited speech tasks with greater emotional content. Considered more broadly, our findings suggest that the inclusion of latent emotion in acoustic classification models provides a benefit in neurologic diseases that affect emotion.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Global Object Pose from Tactile Images</title>
<link href="https://hdl.handle.net/1721.1/147521" rel="alternate"/>
<author>
<name>Bronars, Antonia</name>
</author>
<id>https://hdl.handle.net/1721.1/147521</id>
<updated>2023-01-20T03:35:14Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Estimating Global Object Pose from Tactile Images
Bronars, Antonia
This work evaluates Tac2Pose, an object-specific approach to tactile pose estimation for known objects. Given the object geometry, we learn a perception model in simulation that estimates a probability distribution over possible object poses given a tactile observation. To do so, we simulate the contact shapes that a dense set of object poses would produce on the sensor. Then, given a new contact shape obtained from the sensor, we match it against the pre-computed set using an object-specific embedding learned using contrastive learning. We obtain contact shapes from the sensor with an object-agnostic calibration step that maps RGB tactile images to binary contact shapes. This mapping, which can be reused across object and sensor instances, is the only step trained with real sensor data. Tac2Pose produces pose distributions and can incorporate additional pose constraints coming from other perception systems, multiple contacts, or priors. &#13;
&#13;
We provide quantitative results for 20 objects. Tac2Pose provides high accuracy pose estimations from distinctive tactile observations while regressing meaningful pose distributions to account for those contact shapes that could result from different object poses. We test Tac2Pose in multi-contact scenarios where two tactile sensors are simultaneously in contact with the object, as during a grasp with a parallel jaw gripper. We further show that when the output pose distribution is filtered with a prior on the object pose, Tac2Pose is often able to improve significantly on the prior. This suggests synergistic use of Tac2Pose with additional sensing modalities (e.g. vision) even in cases where the tactile observation from a grasp is not sufficiently discriminative. Given a coarse estimate, even ambiguous contacts can be used to determine an object’s pose precisely.&#13;
&#13;
We also test Tac2Pose on object models reconstructed from a 3D scanner, to evaluate the robustness to uncertainty in the object model. We show that even in the presence of model uncertainty, Tac2Pose is able to achieve fine accuracy comparable to when the object model is the manufacturer’s CAD model. Finally, we demonstrate the advantages of Tac2Pose compared with three baseline methods for tactile pose estimation: directly regressing the object pose with a neural network, matching an observed contact to a set of possible contacts using a standard classification neural network, and direct pixel comparison of an observed contact with a set of possible contacts.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework to Accelerate Parameter Development for Laser Powder&#13;
Bed Fusion</title>
<link href="https://hdl.handle.net/1721.1/147520" rel="alternate"/>
<author>
<name>Graybill, Benjamin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/147520</id>
<updated>2023-01-20T04:01:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Framework to Accelerate Parameter Development for Laser Powder&#13;
Bed Fusion
Graybill, Benjamin C.
Metal Laser Powder Bed Fusion (M-LPBF) is a method of additive manufacturing that enables the fabrication of complex components that would not be possible through conventional manufacturing methods. M-LPBF is well suited for aerospace applications not only because of its ability to fabricate complex and efficient components, but also because it can reduce program cost and schedule. Recent advancements in material development could open the design space even further for aerospace applications, but the initial process of evaluating a new material on an M-LPBF printer can be time-consuming and costly. In this work, a framework to improve the efficiency and structure of M-LPBF process development is proposed. First, simulations of the melt pool were performed to understand the impact of primary process parameters on the dimensions of the melt pool. Then, tools to model the melt pool were tested and used in combination with analytical equations to identify an acceptable processing window for the M-LPBF process. Following this process parameter filtering, physical experiments were executed to investigate the impact of process and design parameters on various outputs connected to the melt pool, density, dimensional accuracy, and surface roughness of the printed coupons. Optimal parameter ranges can then be determined according to different design and process priorities. The framework developed in this project enables a material- and machine-agnostic approach to process parameter selection in less time and at a lower cost.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Prototype to Production: Product Development of a Modular Automated Milk Sampling Device for Conventional Dairy Farms</title>
<link href="https://hdl.handle.net/1721.1/147519" rel="alternate"/>
<author>
<name>Li, Xiaomeng</name>
</author>
<id>https://hdl.handle.net/1721.1/147519</id>
<updated>2023-01-20T03:06:25Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">From Prototype to Production: Product Development of a Modular Automated Milk Sampling Device for Conventional Dairy Farms
Li, Xiaomeng
Milk sampling and testing are critical in the dairy industry, as they provide farmers the information needed to make accurate decisions about nutrition or medication for their cattle. Currently, the milk sampling process is both costly and labor-intensive. Therefore, an automated milk sampler was developed to reduce the time and labor needed to collect milk samples. This thesis discusses in detail the product development process of the automated milk sampler, tracing its path from design schematic to working alpha prototype. The automated sampler consists of an infrared sensor that monitors the flow in the milking pipeline, a custom sampling vial that stores the milk sample, and two custom pinch valves that control the flow of air and milk accordingly. The path to low-volume production of the automated sampler is laid out after the alpha prototype is demonstrated to work in lab conditions.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interference Purcell Filter for Fast, Modular, and Hardware-Efficient Quantum Measurement</title>
<link href="https://hdl.handle.net/1721.1/147518" rel="alternate"/>
<author>
<name>Yen, Alec</name>
</author>
<id>https://hdl.handle.net/1721.1/147518</id>
<updated>2023-01-20T03:30:38Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Interference Purcell Filter for Fast, Modular, and Hardware-Efficient Quantum Measurement
Yen, Alec
This thesis proposes a method to suppress Purcell decay for fast, modular, and hardware-efficient quantum measurement that we call an “interference” Purcell filter. Superconducting qubits experience many decay channels, one of which is Purcell decay, or leakage of the qubit state into the readout line. The proposed work suppresses Purcell decay by coupling the readout resonator at two points on the readout line to create a destructive interference effect, enabling a small and space-efficient footprint. The Purcell suppression is compatible with large resonator decay rates, making it a suitable design as quantum error correction schemes move toward faster readout. Unlike many existing methods to suppress Purcell decay, the proposed design does not require an “open” or weakly-coupled port, the removal of which would improve modularity and expedite the design of many-qubit systems.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>THz Cryo-CMOS Link for Quantum Computing</title>
<link href="https://hdl.handle.net/1721.1/147517" rel="alternate"/>
<author>
<name>Wang, Jinchen</name>
</author>
<id>https://hdl.handle.net/1721.1/147517</id>
<updated>2023-01-20T03:17:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">THz Cryo-CMOS Link for Quantum Computing
Wang, Jinchen
In this thesis, a terahertz (THz) cryo-CMOS link for quantum computing and other cryogenic applications is designed. This THz wireless link can efficiently deliver the control signals and data between room temperature and cryogenic environment. Its operation allows for a small antenna aperture size, high data rate, and minimal interference with the operation of the qubits.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unwanted Project: Speculative Design for Circularity</title>
<link href="https://hdl.handle.net/1721.1/147516" rel="alternate"/>
<author>
<name>Zhu, Ziyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/147516</id>
<updated>2023-01-20T03:47:35Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Unwanted Project: Speculative Design for Circularity
Zhu, Ziyuan
More than 360 million tons of plastic are produced each year (2020 global plastics report), and only 9% is recycled properly. With worsening climate change and the inevitable single-use waste stream produced in our everyday lives, plastic pollution is rapidly outpacing current efforts to stop it. We can no longer ignore the impact of the ‘throwaway living’ we adopted almost seven decades ago. Thus many efforts, from commercial and official bodies to grassroots plastic-recycling groups, have acted through policy change, awareness campaigns, design, and innovation to explore the potential of turning the current linear plastic material lifecycle into circular loops. These initiatives have laid the foundation for wide-reaching voluntary cooperation and the repurposing of wasted materials into an upcycled future.&#13;
&#13;
While official and commercial efforts play a significant role in easing plastic pollution, we cannot ignore the grassroots, bottom-up efforts of unofficial organizations that help to create these closed loops. Rather than addressing plastic pollution as a global issue, this thesis, ‘Unwanted Project (UP)’, examines the possibility of plastic repurposing at the individual and community level, and investigates what can be done to realize circularity at the in-home level from a creative, repurposing perspective. The thesis first investigated material repurposing and reuse at multiple levels, from official recycling programs to the grassroots global plastic recycling community, and then examined communities and individuals through qualitative and quantitative research to evaluate the potential for closing the material flow loop, enhancing collection capacity, and creating production capability. The results are three design scenarios. Through individual-level manufacture, ranging from 3D printing to heat pressing, ‘UP’ created scenarios that individuals can follow from their homes and communities to repurpose the plastic waste they collect in their daily lives. These three scenarios are modularized and streamlined to provide a standard basis for implementation, while leaving creative space for individuals to upcycle plastic with flexibility and purpose.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Fabrication of an Electric-Field Induction Motor</title>
<link href="https://hdl.handle.net/1721.1/147515" rel="alternate"/>
<author>
<name>Kieu, Quang</name>
</author>
<id>https://hdl.handle.net/1721.1/147515</id>
<updated>2023-01-20T03:19:59Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design and Fabrication of an Electric-Field Induction Motor
Kieu, Quang
This thesis explores a meso-scale micro-actuator of rotational motion based on electric induction. This electric induction motor (EIM) has a cylindrical geometry with a radial air gap between the stator and the rotor. A significant novelty is that the motor does not contain metal, but is rather manufactured using only 3D printed plastic and injected conducting epoxy. It therefore has the potential to have very little mass, and hence exhibit high torque and power to mass ratios. A traveling potential wave applied to electrodes on the cylindrical stator surface induces and attracts charges on the rotor surface to provide torque. An electric model is developed to predict the average torque of the motor when driven with an ideal sinusoidal potential wave. Using this model, a motor is designed with volume and power comparable to a mesoscale dielectric elastomer actuator, a state-of-the-art technology for micro-actuators. Harmonic decomposition is applied to a realistic drive potential to reveal major harmonics that contribute to the potential wave. The ideal model is then applied to each harmonic, yielding a more realistic estimate of the motor performance given the design parameters. A prototype motor and high-voltage variable-frequency drive circuit are fabricated to confirm the theory. With limited manufacturing capabilities, the motor has an uncertain air-gap separation and rotor surface conductivity, which must be experimentally estimated. Due to manufacturing difficulties with the bearing system and the rotor surface conductivity, the prototype was not functional. The thesis speculates as to the failure mechanism, and solidifies the understanding of the challenges surrounding the cylindrical EIM. Two critical areas of future research are identified: bearing development suitable for very small air gaps, and rotor surface conductor management. 
If these challenges can be met, then the analysis of this thesis indicates that torque densities approaching 1 mNm/ml are possible.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Data Shaping and Evaluation via Mutual Information Estimation</title>
<link href="https://hdl.handle.net/1721.1/147511" rel="alternate"/>
<author>
<name>Wu, William</name>
</author>
<id>https://hdl.handle.net/1721.1/147511</id>
<updated>2023-01-20T03:06:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Neural Data Shaping and Evaluation via Mutual Information Estimation
Wu, William
Machine learning in sensitive domains like healthcare currently faces a major bottleneck due to the scarcity of publicly available data. Privacy protection regulations such as HIPAA and GDPR, together with recent progress in the information estimation literature, motivate us to investigate the issue from an information-theoretic perspective. In this thesis, we propose InfoShape, an encoder training scheme that aims to maintain privacy while also preserving utility for downstream prediction tasks. We achieve this by utilizing mutual information neural estimation (MINE) [2] to estimate two quantities: the privacy leakage, i.e., the mutual information between the original inputs and the encoded representations, and the utility score, i.e., the mutual information between the encoded representations and the intended labeling information for classification. We train a neural network as our encoder by using our privacy and utility measures in a Lagrangian optimization. We show empirically on Gaussian-generated data that InfoShape is capable of altering encoded sample outputs such that the privacy leakage is reduced and the utility score increases. Moreover, we observe that the classification accuracy of downstream models has a meaningful connection with the utility score, which improves with a trained encoder compared to an untrained one. This work has significant implications for privacy-preserving machine learning and could serve as a pivotal tool for advancing AI in areas like healthcare.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DPR Cluster: An Automated Framework for Deploying Resilient Stateful Cloud Microservices</title>
<link href="https://hdl.handle.net/1721.1/147510" rel="alternate"/>
<author>
<name>Raicevic, Nikola</name>
</author>
<id>https://hdl.handle.net/1721.1/147510</id>
<updated>2023-01-20T03:36:42Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">DPR Cluster: An Automated Framework for Deploying Resilient Stateful Cloud Microservices
Raicevic, Nikola
Recent advances in distributed recovery protocols enable application builders to achieve strong prefix recovery guarantees in distributed systems of cache-stores (pairs of a fast cache backed by persistent storage to answer storage requests) with low overhead. Specifically, Distributed Prefix Recovery (DPR) is a general-purpose protocol that implements the prefix recovery guarantee for an arbitrary cluster of cache-stores with the help of a centralized management node. However, deploying such a cluster is still challenging, as it involves timely detection and restart of failed nodes, incremental roll-out of new cache-store implementations and deployments, and routing requests in a dynamic cluster with failures. Cluster administrators must manually configure DPR with this information and program cache-stores with the necessary capabilities in a fault-tolerant manner. In this thesis, we introduce DPR Cluster, an automated framework for quickly and easily deploying clusters of DPR-enhanced cache-stores. DPR Cluster utilizes Kubernetes as its cluster manager and features a declarative Python management API for scripting. Cluster administrators merely specify the desired cluster, and Kubernetes automatically deploys and manages the relevant components and restarts them on failure. Clients can dynamically discover a cluster and its components and communicate with them through DPR Cluster’s dynamic, fault-tolerant networking layer based on DNS. Additionally, DPR Cluster implements a suite of functionalities for fault tolerance beyond cache-store consistency, such as automatic reconnects. Our evaluation shows that DPR Cluster is highly resilient and functional with a simple API, and significantly lowers the barrier to entry for DPR deployments.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Self-Supervised Object Representations and 3D Scene Graph Based Navigation</title>
<link href="https://hdl.handle.net/1721.1/147509" rel="alternate"/>
<author>
<name>Peng, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/147509</id>
<updated>2023-01-20T03:12:59Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Towards Self-Supervised Object Representations and 3D Scene Graph Based Navigation
Peng, Lisa
3D Scene Graphs are powerful hierarchical representations of environments that combine spatial and semantic information into multiple levels of abstraction. 3D Scene Graphs are useful for a wide range of planning tasks in robotics that benefit from high-level semantic knowledge, and also capture dense low-level 3D geometry which is useful to support robot navigation. However, current methods for 3D Scene Graph construction result in sparse and sometimes spurious node instances and incorrect annotations. This is due to their reliance on 2D semantic segmentation networks that may perform poorly outside their training domain. This thesis advances the state of the art in dense 2D semantic segmentation and 3D object pose estimation to improve scene graph construction and enable navigation in real-life environments.&#13;
&#13;
First, we tackle the scalability problem of data annotation for deep semantic segmentation and introduce a simple training approach for dense 2D object instance segmentation. The approach uses model-based synthetic data for training, and augments it with a small amount of real-world training data. We show that with this approach, our segmentation network needs 20x fewer real-world annotated images and achieves higher-quality pixel-level segmentation on real-world test data.&#13;
&#13;
Second, we address the problem of data annotation in 3D object pose estimation and model fitting by proposing a novel self-supervised training framework that uses corrector and certification modules. Our architecture successfully trains a model to predict poses of partial point clouds without any ground truth pose annotations on real data, and with certifications of correctness and non-degeneracy – characterizing both quality of model fit and uniqueness of the solution. We provide extensive experiments, evaluating performance on both simulated and real world data, and show that the proposed approach matches the performance of fully supervised baselines.&#13;
&#13;
Lastly, we introduce a novel application of 3D Scene Graphs to an object search task. We show how 3D Scene Graphs can be used in a reinforcement learning framework to guide autonomous navigation and discuss how hierarchical information and dense semantics improves the effectiveness of the learned policy.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Inertial Odometry with Sparse Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/147507" rel="alternate"/>
<author>
<name>Saat, Berke</name>
</author>
<id>https://hdl.handle.net/1721.1/147507</id>
<updated>2023-01-20T03:53:05Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Visual Inertial Odometry with Sparse Deep Learning
Saat, Berke
In the field of indoor navigation, visual and inertial sensors are essential to localizing a robot. Visual Inertial Odometry (VIO) systems estimate the position and orientation of a robot by analyzing data from cameras and inertial sensors. Although state-of-the-art VIO systems achieve remarkable accuracy, there are still improvements to be made in the performance of such real-time systems. This paper presents an end-to-end VIO implementation that combines inertial measurements with wheel velocity information while using visual odometry to further optimize the pose estimates. On the visual odometry side, the paper presents a training module with an automated data collection system and a sparser graph neural network to train with. We also show how the integration of this deep learning model impacts the performance of a real-time VIO system. On the inertial side, a commonly used sensor like an Inertial Measurement Unit (IMU) has noise and bias, which cause errors that grow quickly as they are integrated over time. This paper uses factor graph structures and incremental optimization to minimize these errors, and wheel velocity information to stabilize the linear outputs. In the end, we show how each VIO component and each modification to our deep learning model impact the accuracy of our pose estimation and the performance of our end-to-end VIO estimation.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soft, round, high resolution tactile fingertip sensors for dexterous robotic manipulation</title>
<link href="https://hdl.handle.net/1721.1/147504" rel="alternate"/>
<author>
<name>Romero, Branden Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/147504</id>
<updated>2023-01-20T03:39:13Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Soft, round, high resolution tactile fingertip sensors for dexterous robotic manipulation
Romero, Branden Robert
In this work we introduce a non-planar, soft, high-resolution tactile sensor. An iteration of the GelSight sensors, it enables future GelSights to have more complicated form factors, such as a humanoid fingertip. To do this we introduce a novel method for achieving directional lighting along the entirety of a curved sensor using light piping. Light piping uses total internal reflection and a semi-specular membrane to constrain the path of the light inside the sensor until the sensing membrane is deformed. By using this new membrane and changing the geometry, we introduce a new bidirectional reflectance distribution function and new optics. This requires new calibration procedures in the form of a fisheye projection model and a neighborhood- and location-based continuous look-up table that maps the relationship between RGB value and surface normal orientation of the membrane at a point. Finally, we perform two dexterous manipulation tasks with feedback from the sensors: controlled rolling of an object on a support surface, and lid removal from a jar. We also give instructions on how to manufacture the sensor, as well as how to increase the durability of the membrane for all GelSight sensors.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Age of information for broadcast and collection in spatially distributed wireless networks</title>
<link href="https://hdl.handle.net/1721.1/147501" rel="alternate"/>
<author>
<name>Rao, Chirag R.</name>
</author>
<id>https://hdl.handle.net/1721.1/147501</id>
<updated>2023-01-20T04:07:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Age of information for broadcast and collection in spatially distributed wireless networks
Rao, Chirag R.
We consider a wireless network with a base station broadcasting and collecting time-sensitive data to and from spatially distributed nodes in the presence of wireless interference. The Age of Information (AoI) is the time that has elapsed since the most recently delivered packet was generated, and captures the freshness of information. In the context of broadcast and collection, we define the Age of Broadcast (AoB) to be the amount of time elapsed until all nodes receive a fresh update, and the Age of Collection (AoC) as the amount of time that elapses until the base station receives an update from all nodes. We quantify the average broadcast and collection ages in two scenarios: 1) instance-dependent, in which the locations of all nodes and interferers are known, and 2) instance-independent, in which they are not known but are located randomly, and expected age is characterized with respect to node locations. In the instance-independent case, we show that AoB and AoC scale super-exponentially with respect to the radius of the region surrounding the base station. Simulation results highlight how expected AoB and AoC are affected by network parameters such as network density, medium access probability, and the size of the coverage region.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal Oblivious RAM with Integrity</title>
<link href="https://hdl.handle.net/1721.1/147499" rel="alternate"/>
<author>
<name>Mathialagan, Surya</name>
</author>
<id>https://hdl.handle.net/1721.1/147499</id>
<updated>2023-01-20T03:59:15Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimal Oblivious RAM with Integrity
Mathialagan, Surya
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM ’96), is a protocol that allows a client to perform RAM computations on a server without revealing any information about the underlying data, even via the access pattern. For a memory of size N, well-known lower bounds show that a multiplicative overhead of Ω(log N) in the number of RAM operations is necessary. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO 2021) with O(log N) worst-case overhead and O(1) client storage.&#13;
&#13;
However, this optimal ORAM construction is only known to be secure in the semi-honest setting, where an adversary is allowed to observe the access patterns but not modify the contents of the memory. If an adversary is allowed to tamper with the database, this construction, as well as many existing ORAM constructions, in fact become insecure.&#13;
&#13;
In this work, we construct an ORAM protocol with worst-case O(log N) overhead and O(1) client storage that also protects against tampering adversaries. This matches the efficiency of the best known ORAM constructions while additionally providing security against tampering. We achieve this by adapting the construction of Asharov et al. in a non-black-box way by using a combination of online and offline memory checking techniques, as introduced by Blum et al. (Algorithmica, 1994).
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Situational Cueing for Trust Calibration in Automated Systems</title>
<link href="https://hdl.handle.net/1721.1/147498" rel="alternate"/>
<author>
<name>Forsey-Smerek, Alexandra M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147498</id>
<updated>2023-01-20T03:37:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Situational Cueing for Trust Calibration in Automated Systems
Forsey-Smerek, Alexandra M.
Appropriately calibrated human trust is essential for safe and successful interactions between humans and automation. While undertrust in a system can lead to system disuse and suboptimal task performance, overtrust in a system can result in reduced user situation awareness and susceptibility to consequences of system failure. In dynamic domains, fluctuations in automation performance demand that user trust adapts appropriately. Recent attention has been focused on the presentation of trust cues as an interruptive behavioral intervention method to assist users in appropriate trust calibration in domains where system transparency information alone does not suffice. This thesis expands the application space of trust cues through the presentation and experimental evaluation of a novel trust cue method, situational trust cues (STCs). In the STCs framework, cues are presented if a situational update, such as a change in the environmental conditions or task type, significantly affects how the user should trust the automated system. Theory behind the presentation, design, and effectiveness of STCs is presented. &#13;
&#13;
STCs were experimentally validated in an in-person experiment with 64 participants to investigate the effectiveness of STCs in mitigating user overtrust and undertrust in automation in a dynamic mission operations environment. In general, participants reported that STCs were helpful but not required. Additional findings highlighted negative consequences of inappropriately presenting trust cues too frequently on the perceived utility of cues, suggesting appropriate presentation of trust cues of any type is critical for cues to retain their impact on behavior. Additionally, a post-hoc analysis of participant strategy on interacting with the automated system uncovered significant effects of participant mental model inaccuracies and individual biases on appropriate trust calibration. While limitations of the experiment administration prevented further conclusions about the effects of STCs on trust calibration, findings lay a clear path for promising future work on the evaluation of STCs and provide valuable context for design of trust cues of all types.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Programming meets Fine-grained Complexity</title>
<link href="https://hdl.handle.net/1721.1/147497" rel="alternate"/>
<author>
<name>Mao, Xiao</name>
</author>
<id>https://hdl.handle.net/1721.1/147497</id>
<updated>2023-01-20T03:35:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Dynamic Programming meets Fine-grained Complexity
Mao, Xiao
Since the term was coined by Richard Bellman in the 1950s, Dynamic Programming (DP) has remained one of the most popular techniques in theoretical computer science, and has found applications in a wide range of problems.&#13;
&#13;
In this thesis, I summarize my three recent works covering applications of DP to three fundamental problems in fine-grained complexity. The first application is a sub-cubic time algorithm for unweighted tree edit distance (TED), the second application is an improved FPTAS (Fully Polynomial-Time Approximation Scheme) for Partition, and the third application is an improved FPTAS for Knapsack.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Neural Network Pruning’s Effect on Generalization</title>
<link href="https://hdl.handle.net/1721.1/147496" rel="alternate"/>
<author>
<name>Jin, Tian</name>
</author>
<id>https://hdl.handle.net/1721.1/147496</id>
<updated>2023-01-20T03:15:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">On Neural Network Pruning’s Effect on Generalization
Jin, Tian
Practitioners frequently observe that pruning improves model generalization. A longstanding hypothesis attributes such improvement to model size reduction. However, recent studies on over-parameterization characterize a new model size regime, in which larger models achieve better generalization. A contradiction arises when pruning is applied to over-parameterized models – while theory predicts that reducing size harms generalization, pruning nonetheless improves it. Motivated by such a contradiction, I re-examine pruning’s effect on generalization empirically.&#13;
&#13;
I demonstrate that pruning’s generalization-improving effect cannot be fully accounted for by weight removal. Instead, I find that pruning can lead to better training, improving model training loss. I find that pruning can also lead to stronger regularization, mitigating the harmful effect of noisy examples. Pruning extends model training time and reduces model size, which improves training and strengthens regularization respectively. I empirically demonstrate that both factors are essential to explaining pruning’s benefits to generalization fully.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Parallel Performance with Work and Span in the OpenCilk Compiler</title>
<link href="https://hdl.handle.net/1721.1/147495" rel="alternate"/>
<author>
<name>Reddy, Nikhil</name>
</author>
<id>https://hdl.handle.net/1721.1/147495</id>
<updated>2023-01-20T03:40:14Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimizing Parallel Performance with Work and Span in the OpenCilk Compiler
Reddy, Nikhil
OpenCilk is the modern iteration of Cilk, a multithreaded programming environment designed for high-performance multicore computing. OpenCilk consists of an LLVM fork called Tapir and a runtime scheduler, which together allow for OpenCilk’s high performance in practice. However, there are many opportunities to improve on the implementation of OpenCilk. In particular, current grainsize calculations for OpenCilk rely on a notion of work that is limited in a few ways.&#13;
&#13;
In this thesis, I propose an alternate implementation that creates a first-class notion of work and span within the OpenCilk compiler that is then used to inform optimizations within OpenCilk. I then analyze the current formulas for grainsizes. I identify key scenarios where the compiler is unable to determine a suitable grainsize and use this to suggest improvements. Finally, I construct a benchmark of one of these scenarios using a Twitter follower dataset and empirically analyze optimal grainsizes for it.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Vision-Based Navigation for Pedestrian Environments</title>
<link href="https://hdl.handle.net/1721.1/147494" rel="alternate"/>
<author>
<name>Anderson, Connor William</name>
</author>
<id>https://hdl.handle.net/1721.1/147494</id>
<updated>2023-01-20T03:42:33Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Implementation of Vision-Based Navigation for Pedestrian Environments
Anderson, Connor William
Autonomous navigation has rapidly grown to become a predominant field of study, drawing on recent advances in robotics and artificial intelligence. Most autonomous navigation methods rely on expensive and complex sensor arrays such as Lidar, which pose practical limitations on the widespread deployment of these devices. This thesis presents an end-to-end implementation of a vision-based navigation pipeline for autonomous navigation in pedestrian environments, utilizing only a single front-facing RGB-D camera and a tracking camera as perception devices. The pipeline combines 3D monocular object tracking with an advanced Kalman-filter-based geometric tracking scheme to track nearby pedestrians, together with full SLAM for localization and a reinforcement-learning-based navigation stack to navigate through challenging dynamic multi-agent environments. The functionality of this pipeline is demonstrated through a series of pedestrian tracking and navigation experiments with many pedestrians. The tracking module is able to correctly localize pedestrians within 0.4 meters in simple scenarios and 0.6 meters in challenging multi-pedestrian stress-testing cases inside a 12-meter space, despite a limited field of view and reliance on only inexpensive RGB camera images. Full end-to-end navigation was demonstrated in a crowded environment with 5 pedestrians, with only one collision in 13 trials.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Invisible Issue of Task Underspecification in Deep Reinforcement Learning Evaluations</title>
<link href="https://hdl.handle.net/1721.1/147493" rel="alternate"/>
<author>
<name>Jayawardana, Vindula Muthushan</name>
</author>
<id>https://hdl.handle.net/1721.1/147493</id>
<updated>2023-01-20T03:15:52Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">An Invisible Issue of Task Underspecification in Deep Reinforcement Learning Evaluations
Jayawardana, Vindula Muthushan
Performance evaluations of Deep Reinforcement Learning (DRL) algorithms are an integral part of the scientific progress of the field. However, standard performance evaluation practices for assessing algorithmic generalization of DRL methods within a task can be unreliable and misleading if not applied carefully. An important source of possible error lies in the reliance of the reported outcomes on often arbitrarily selected point Markov decision processes (point MDPs), stemming from task underspecification. A large class of DRL tasks, particularly in real-world decision problems, induce a family of MDPs, each of which---perhaps confusingly---shares the same high-level problem definition. As a demonstrative example, the classic pendulum control task could be represented by a family of possible MDPs, each with a different pendulum mass, but is typically represented as a single MDP. This thesis argues that for reliable downstream decision-making, performance evaluations on a task in DRL should be carried out over a family of MDPs rather than a point MDP, which may be subject to bias. The thesis first illustrates the pitfalls of point-MDP-based evaluations through benchmark DRL control tasks and a real-world case study in traffic signal control. Then, significant inconsistencies between conclusions derived from point-MDP-based evaluations and MDP-family-based evaluations are presented. Subsequently, to overcome the prohibitive cost of training DRL models on entire families of MDPs, a series of recommendations is provided to perform accurate yet efficient performance evaluations under a computational budget. This work contributes to bolstering the empirical rigor of reinforcement learning, especially as the outcomes of DRL trickle into downstream decision-making in real-world contexts.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference and Task Planning over Spatially Complex Problems</title>
<link href="https://hdl.handle.net/1721.1/147491" rel="alternate"/>
<author>
<name>Cuellar, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/147491</id>
<updated>2023-01-20T03:14:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Inference and Task Planning over Spatially Complex Problems
Cuellar, Alex
One core problem of robot viability in many sectors is retrainability; if a robot’s task can change without changing code, automation becomes feasible for a wider set of applications. To advance robot retrainability, this thesis introduces a learning from demonstrations (LfD) framework allowing a robot to learn and execute task-level plans in spatially complex environments. To achieve this goal, we introduce a propositional logic framework to encode spatial relationships between objects and an inference scheme to identify important relationships between defined object classes. Finally, we present a search-based algorithm to synthesize required class relationships into a task-level plan. As a representative problem for this context, we focus on box packing, wherein the robot must learn specific rules surrounding how to place objects in a box according to a demonstrator’s wishes.&#13;
&#13;
DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.&#13;
&#13;
This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>S*: Geometric Multimodal Trajectory Optimization via Apex Interpolating Spiro Splines</title>
<link href="https://hdl.handle.net/1721.1/147490" rel="alternate"/>
<author>
<name>Chang, Christopher W.</name>
</author>
<id>https://hdl.handle.net/1721.1/147490</id>
<updated>2023-01-20T03:01:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">S*: Geometric Multimodal Trajectory Optimization via Apex Interpolating Spiro Splines
Chang, Christopher W.
Non-heuristic multimodal trajectory optimization is widely considered an intractable holy grail for real-time robotic systems, with the existing state-of-the-art standing at a heuristic hierarchical approach that stacks upstream search-based or sampling-based behavior planning on top of downstream local numerical trajectory optimization. In this thesis, we present (i) the S* algorithm, a novel geometric trajectory optimization method for autonomous ground vehicles in dynamic environments that uses apex interpolating Spiro splines to optimize orders of magnitude fewer variables than numerical optimization, and (ii) an anytime best-first multimodal variant of S* using a parallel optimistic branch-and-bound on homology classes. We demonstrate a preliminary implementation of this algorithm integrated into MIT Driverless’s autonomous racing stack on a full-size Roborace Devbot 2.0 racecar navigating mixed-reality obstacle courses at up to 100 mph.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithmic Approaches to Interfacing Different Materials Using Inkjet Multi-Material 3D Printers</title>
<link href="https://hdl.handle.net/1721.1/147489" rel="alternate"/>
<author>
<name>Blazes, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/147489</id>
<updated>2023-01-20T03:41:00Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Algorithmic Approaches to Interfacing Different Materials Using Inkjet Multi-Material 3D Printers
Blazes, Christopher
As the field of multi-material additive manufacturing (AM) advances, fabricated designs have become increasingly complex. As resolution improves, new algorithmic methods are needed to keep up with the exponentially increasing computational demands. To address this problem, the shader-like "fablet" was developed to procedurally compute patterns voxel-by-voxel on the scale of billions of voxels [11, 12]. Fablets are powerful programs that allow impressive manipulations of the internal structure of voxelized meshes. However, using fablets to process features on the surface of a mesh is a relatively unexplored idea, even though the results can be incredibly useful in additive manufacturing. For example, a persistent challenge in this area of study is how best to interface two materials with different properties across their shared surface when they may not bond well on their own. Work has been done on algorithmic approaches to creating composite materials with additive manufacturing; these studies focus on biomimetic patterns that can be applied over a volume to maximize interconnection or realize a particular property [2, 3]. However, little work has been done on fitting patterns like these over the entire surface of a mesh. In this paper, algorithmic approaches are introduced that allow a user to easily design parts with customized, intricate surface patterns that are much more complex than the input meshes themselves, specifically using Inkbit’s Vista printer and software. Using these approaches, a user can define a wide variety of procedural or handmade patterns to be applied to the surface of a mesh in order to produce a voxelized representation of the part, as needed by the printer.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Different Image Representations for Image Retrieval</title>
<link href="https://hdl.handle.net/1721.1/147488" rel="alternate"/>
<author>
<name>Favela, Manuel Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/147488</id>
<updated>2023-01-20T03:28:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Investigating Different Image Representations for Image Retrieval
Favela, Manuel Alejandro
As image and video databases become larger and more prevalent, so too must the tools for navigating and searching them. Image retrieval, the process of finding one particular image within a set of images, is one such tool. Image retrieval relies on searching over the images in the database. Rather than operating on the full images, image retrieval systems use image representations that allow the search to happen faster while retaining enough image information to retrieve the correct image. The MIT Data Systems Group (DSG) created Seesaw, a system for interactive ad-hoc searches in image data sets with no assumption of pre-defined search queries. This thesis investigates different image representations to see which can improve image retrieval in the case of Seesaw. It examines segmentation models and region proposal networks from Mask R-CNNs to see whether these trained models can provide comparable or better performance in Seesaw than the non-trained image representation system it currently uses. We find that the region proposal network representation performs on par with the representation used in Seesaw, and additionally that segmentation models can be used to eliminate some vectors in the Seesaw representation without compromising performance.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering apomixis in plants to stabilize intergenerational hybrid vigor</title>
<link href="https://hdl.handle.net/1721.1/147486" rel="alternate"/>
<author>
<name>Gorestki, David</name>
</author>
<id>https://hdl.handle.net/1721.1/147486</id>
<updated>2023-01-20T03:05:17Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Engineering apomixis in plants to stabilize intergenerational hybrid vigor
Gorestki, David
Apomixis is the asexual production of seeds in plants, whereby clonal progeny have a genotype identical to that of the mother plant (1). In natural apomicts, mechanisms of reproduction are very diverse but carry potential for use in agriculture. If apomixis were engineered in crops, it could be used to stabilize hybrid vigor across multiple generations without requiring continual hybrid crosses for seed production. Moreover, apomixis could unlock hybrid seed production in many species and lines where it is not yet possible. Here, I discuss apomictic mechanisms and their associated gene regulation and review research on engineering apomixis.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing the role of top-down techniques for improving regional estimates of artisanal and small-scale gold mining mercury emissions</title>
<link href="https://hdl.handle.net/1721.1/147481" rel="alternate"/>
<author>
<name>Dlamini, Thandolwethu</name>
</author>
<id>https://hdl.handle.net/1721.1/147481</id>
<updated>2023-01-20T03:32:44Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Assessing the role of top-down techniques for improving regional estimates of artisanal and small-scale gold mining mercury emissions
Dlamini, Thandolwethu
Artisanal and small-scale gold mining (ASGM) is the world’s largest source of anthropogenic Hg emissions and is common in Latin America, Sub-Saharan Africa, South Asia, and East Asia. However, the amount of mercury emitted from ASGM, and its contribution to global mercury emissions, is subject to substantial uncertainty. Bottom-up studies have quantified sources of Hg, including ASGM, using data on underlying activities to estimate regional and global totals. In contrast, top-down studies have used atmospheric concentration measurements and models to constrain Hg emissions; however, no top-down estimates have yet been calculated for ASGM emissions. With GEOS-Chem, a global-scale chemical transport model for Hg, we investigate whether and how ASGM-related Hg emissions can be quantified from existing regional measurement sites for gaseous elemental mercury (GEM). By combining our top-down method with existing bottom-up data, we improve estimates of Hg emissions from ASGM activities, using Peru and the Madre de Dios region of South America as case studies. We find that quantitative constraints on ASGM emissions are better provided by information on the shape of the probability distribution of GEM concentrations, such as the interquartile range and the 95% range, suggesting possible design guidelines for monitoring networks. The model-based analysis offers insights into improving regional estimates of ASGM emissions.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controlling Directions Orthogonal to a Classifier</title>
<link href="https://hdl.handle.net/1721.1/147480" rel="alternate"/>
<author>
<name>Xu, Yilun</name>
</author>
<id>https://hdl.handle.net/1721.1/147480</id>
<updated>2023-01-20T03:50:19Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Controlling Directions Orthogonal to a Classifier
Xu, Yilun
We propose to identify directions invariant to a given classifier so that these directions can be controlled in tasks such as style transfer. While the orthogonal decomposition is directly identifiable when the given classifier is linear, we formally define a notion of orthogonality for the non-linear case. We also provide a surprisingly simple method for constructing the orthogonal classifier (a classifier utilizing directions other than those of the given classifier). Empirically, we present three use cases where controlling orthogonal variation is important: style transfer, domain adaptation, and fairness. The orthogonal classifier enables desired style transfer when domains vary in multiple aspects, improves domain adaptation under label shift, and mitigates unfairness when used as a predictor.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Approach for Predicting and Understanding Braking Conditions of Aircraft Landings</title>
<link href="https://hdl.handle.net/1721.1/147478" rel="alternate"/>
<author>
<name>Trávník, Marek</name>
</author>
<id>https://hdl.handle.net/1721.1/147478</id>
<updated>2023-01-20T03:32:11Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Data-Driven Approach for Predicting and Understanding Braking Conditions of Aircraft Landings
Trávník, Marek
Traditional runway condition reporting is limited by its reliance on runway contamination information and pilot reports of braking action. A database of 4.9 million aircraft landings from Aviation Safety Technologies, labeled with runway condition codes computed from aircraft sensor outputs, provides a unique opportunity to enhance and modernise condition reporting using data-driven methods.

This thesis presents an ensemble model trained on this landing database to predict runway condition codes using a cascading XGBoost architecture. The method uses a novel multiple-ROC threshold-setting procedure for linked classifiers which maintains the shape of the runway condition code distribution. A forecast-focused version of the model requires only weather information from METAR reports, a description of the runway, and the aircraft type as input. The method is validated on a collection of 30 historical runway excursions, assigning at best "Medium to Poor" braking action to all cases with reduced friction. Feature importance is computed using SHAP values, showing that relative humidity, temperature, precipitation, and aircraft type are the features that guide model predictions the most.

The model can be used to create decision aids for aircraft operators, to complement traditional condition reporting, and/or as a forecasting tool to inform runway maintenance decisions.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Large-scale Multi-agent Control with Safety Certificates</title>
<link href="https://hdl.handle.net/1721.1/147477" rel="alternate"/>
<author>
<name>Qin, Zengyi</name>
</author>
<id>https://hdl.handle.net/1721.1/147477</id>
<updated>2023-01-20T04:03:23Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Learning Large-scale Multi-agent Control with Safety Certificates
Qin, Zengyi
Multi-agent intelligence in autonomous systems has fascinated roboticists for decades. Recent advances in machine learning have created unprecedented opportunities for achieving ultimate multi-agent intelligence and full autonomy in a data-driven way. However, a fundamental bottleneck of machine learning-based methods is their safety and reliability in controlling autonomous systems at large scale, due to the lack of formal safety guarantees. To address these challenges, we develop: (1) a machine learning-based large-scale multi-agent control framework with safety certificates, which simultaneously enjoys the versatility of machine learning and the assurance of safety; (2) a multi-agent trajectory tracking framework with convergence and safety guarantees; and (3) a general method to learn safe controllers for black-box systems with unknown dynamics. Comprehensive experiments show that the proposed methods achieve notable performance in terms of safety rate, task completion rate, computational efficiency, and large-scale scalability.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynast: Inclusive and efficient quantification of metabolically labeled transcripts in single cells</title>
<link href="https://hdl.handle.net/1721.1/147475" rel="alternate"/>
<author>
<name>Min, Kyung Hoi (Joseph)</name>
</author>
<id>https://hdl.handle.net/1721.1/147475</id>
<updated>2023-01-20T03:46:36Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Dynast: Inclusive and efficient quantification of metabolically labeled transcripts in single cells
Min, Kyung Hoi (Joseph)
Single-cell RNA velocity, defined as the time derivative of gene expression, is a powerful concept that can predict the future transcriptional state of a cell. Traditionally, RNA velocity estimation relied on the distinction between spliced and unspliced mRNA in single-cell RNA-seq (scRNA-seq) data, resulting in noisy and biased approximations. Recent advancements in metabolic labeling enabled the direct, unbiased measurement of nascent RNA, yielding significantly improved RNA velocity estimates. However, there is still no standardized computational framework to process these data. This study introduces Dynast, a pipeline to comprehensively and efficiently quantify metabolically labeled and spliced transcripts from high-throughput, metabolic labeling-enabled scRNA-seq.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Interaction with Various Elliptical Constraints</title>
<link href="https://hdl.handle.net/1721.1/147474" rel="alternate"/>
<author>
<name>Arons, Nicolas</name>
</author>
<id>https://hdl.handle.net/1721.1/147474</id>
<updated>2023-01-20T04:02:37Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Human Interaction with Various Elliptical Constraints
Arons, Nicolas
Despite worse actuators and long feedback delays, humans outperform robots in a number of tasks, including tool use and physical interaction. In order to study the human controller, this work explores interaction with three different elliptical kinematic constraints. The shapes of these constraints were based on prior work: the circle was the same size as the crank in a previous crank-turning study; the aligned ellipse was based on estimates of the zero-force trajectory during circular crank-turning; the anti-aligned ellipse was a rotated version of the aligned ellipse. Subjects were given visual feedback on their velocity to decrease variability across trials, constraint shapes, and subjects. The velocity they were asked to track had a speed-curvature profile consistent with the two-thirds power law, which is widely reported in human motion. &#13;
&#13;
Hypotheses about the experiment were made by modeling human physical interaction with a Norton equivalent network. The anti-aligned ellipse was hypothesized to evoke normal force and velocity error values higher than the circle, and the circle was hypothesized to evoke higher normal force and velocity error values than the aligned ellipse. &#13;
&#13;
Statistical analysis revealed that the anti-aligned ellipse did evoke higher normal force and velocity errors than both the circle and the aligned ellipse. There was no significant difference between the circle and the aligned ellipse for either normal force or velocity error. The experiment did succeed, however, in demonstrating that constraint shape affects the force that subjects exert on the constraint. This result motivates several further studies on human interaction with various kinematic constraints.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High Frequency Acoustic Propagation and Modeling in Stratified Estuaries</title>
<link href="https://hdl.handle.net/1721.1/147473" rel="alternate"/>
<author>
<name>Swanda, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/147473</id>
<updated>2023-01-20T03:37:31Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">High Frequency Acoustic Propagation and Modeling in Stratified Estuaries
Swanda, Nicholas
Acoustic propagation measurements are made in a highly variable, stratified estuary using high-frequency (120 kHz) transducers on tripods placed across the main channel of the river flow. The measurements are taken in the Connecticut River across several tidal cycles, as the flood tide causes a wedge of seawater to press up the river bed, beneath the fresh water, and then be eroded and pushed back out during the ebb. BELLHOP, a beam/ray tracing method implemented via Matlab, is used to model the acoustic propagation in this environment using collected temperature, salinity, and depth data. Multiple modeling comparisons are made over three full tidal cycles, totaling a thousand separate modeling runs compiled into a time series. Arrival time measurements from the transducer system could be accurately modeled, validating BELLHOP as a useful tool for modeling this very dynamic and challenging acoustic environment.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen Storage Potential of the Salina Group, Appalachian and Michigan Basins</title>
<link href="https://hdl.handle.net/1721.1/147472" rel="alternate"/>
<author>
<name>Coyle, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/147472</id>
<updated>2023-01-20T03:03:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Hydrogen Storage Potential of the Salina Group, Appalachian and Michigan Basins
Coyle, Sarah
With rapidly changing technology and increasing social-political demand for decarbonization, the energy system is evolving globally and domestically. Adoption of hydrogen at scale as an energy carrier and a storage medium is a key strategy discussed by the DOE and in the literature for decarbonization of the electricity grid and the transportation industry.  However, large volumes of hydrogen storage are necessary to enable hydrogen adoption at scale. Geologic hydrogen storage is expected to be a critical tool for scaled hydrogen because of its projected lower cost and higher capacity than aboveground and non-geologic storage options. &#13;
&#13;
However, the existing benchmarking of cost, capacity, and geographic potential for hydrogen storage is restricted chiefly to idealized salt properties and geometries. In practice, the suitability of salt for hydrogen storage will vary by geologic setting, and the related costs of hydrogen storage will differ significantly with salt thickness and depth. Subsurface projects intrinsically carry more uncertainty and risk than above-ground projects. Geologic characterization will be a critical part of understanding geographic variation in the cost and availability of hydrogen storage. &#13;
&#13;
This thesis focuses on salt cavern hydrogen storage through evaluation of the hydrogen storage potential and its associated costs of the Salina Group evaporites in two adjacent sedimentary basins, the Appalachian Basin and the Michigan Basin. The Michigan Basin Salina Group contains three evaporite units suitable for hydrogen storage, the A1, A2, and B evaporites. The Appalachian Basin includes a single unit, the F4 evaporite. The cost to store hydrogen within these four intervals varies much more widely than previously benchmarked in the literature, with salt thickness and depth serving as key cost drivers. &#13;
&#13;
Of course, future hydrogen economics and infrastructure will play an essential role in the ultimate value of subsurface storage. Local resources that control the availability of different types and sources of hydrogen and nearby end-use demand will shape the value chain of hydrogen storage locally and regionally. In future work, evaluation of local and regional supply resources and market potential should be coupled with basin-specific geologic storage potential like the characterization done in this work.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy Transition Impacts for Workers: A Comparative Analysis of Differences in Energy Transition Policies in Germany and Appalachia and their Impact on Coal Employment Outcomes</title>
<link href="https://hdl.handle.net/1721.1/147471" rel="alternate"/>
<author>
<name>Barnes, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/147471</id>
<updated>2023-01-20T03:32:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Energy Transition Impacts for Workers: A Comparative Analysis of Differences in Energy Transition Policies in Germany and Appalachia and their Impact on Coal Employment Outcomes
Barnes, Matthew
Energy transitions are occurring across the globe as natural gas and renewable energy increasingly compete with and displace coal-fired electricity, and the need to reduce greenhouse gas emissions to combat climate change becomes more urgent. As energy sources transition, so too does the entire energy system in which they operate. For the coal industry, the energy transition leads to significant structural changes in the communities that are losing coal-based employment. Through a comparative analysis of the energy policies of Germany and the United States using a transdisciplinary framework, this thesis identifies potential policy actions to overcome barriers to a just transition and improve outcomes for workers through durable legislative policy. An extensive review of literature, including policies, analyses, commentary, and publicly available data, is employed to contextualize the energy transition in Germany and Appalachia. Germany, with a long history of energy transition policies and similarities between its coal regions and those of Appalachia, provides a useful study of policy strategies. This thesis suggests that within the context of the United States, durable legislated policy, not executive action, is paramount to sending the stable policy signals required to encourage further development of policy actions to manage the energy transition.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tropicalizing the Portable Radio</title>
<link href="https://hdl.handle.net/1721.1/147468" rel="alternate"/>
<author>
<name>Ruamcharoen, Chayanon</name>
</author>
<id>https://hdl.handle.net/1721.1/147468</id>
<updated>2023-01-20T03:14:54Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Tropicalizing the Portable Radio
Ruamcharoen, Chayanon
During the Second World War, the U.S. military found that fungal growth put its portable radio equipment out of use at an alarmingly fast rate in the tropics. This paper follows radio engineers and biologists as they made sense of “tropical deterioration” and devised techniques of “tropicalization” to counteract it. By tracking multiple materializations of air that carried not only radio signals, but also fungal spores, it shows how the categories of the portable radio and the tropics became recast in their encounter. If the portable radio was imagined to condition spatiotemporal experience so as to fold the tropical environment into the smooth space of military logistics, tropical deterioration ran counter to this imaginary. As air mixed radio and fungi, the decaying portable radio served as a trope around which these scientists and engineers pitched the mechanical time of radio technology against the organic time of fecund tropical nature, which ran faster than in the temperate zone. To protect the portable radio from dangerous tropical air, radio engineers came to see hermetic sealing as a preferred method for tropicalization—a choice that evinces their aspiration to keep technology and the environment apart.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modern Portfolio Theory Applied to Institutional Real Estate Investment</title>
<link href="https://hdl.handle.net/1721.1/147467" rel="alternate"/>
<author>
<name>Gastelú Bárcena, Emilio</name>
</author>
<id>https://hdl.handle.net/1721.1/147467</id>
<updated>2023-01-20T04:06:50Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Modern Portfolio Theory Applied to Institutional Real Estate Investment
Gastelú Bárcena, Emilio
What is the optimal capital allocation to institutional-grade Real Estate that investment managers should pursue to achieve the highest risk-adjusted return? As Real Estate keeps evolving, institutionalizing, and becoming an asset class that is paramount to a well-balanced portfolio, the question of which product types and markets will provide the highest return, lowest volatility, and greatest diversification remains open. This research aims to find the optimal capital allocation in Real Estate that will generate the highest Sharpe Ratio.&#13;
&#13;
This research will use endorsed Real Estate research platforms and conduct one-on-one interviews with institutional Real Estate investment managers to understand how to formulate an investment thesis and capital deployment strategy. Using a Mean-Variance analysis, this study will first illustrate what allocation to Real Estate will deliver the highest risk-adjusted return within a diversified portfolio. Afterward, this study will strive to create a simplified portfolio allocation tool for asset managers to use while formulating their investment decisions.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Career Pathways Within an Organization Based on the Assessment of Prior Experiences</title>
<link href="https://hdl.handle.net/1721.1/147465" rel="alternate"/>
<author>
<name>Geiger, Kurt Drew</name>
</author>
<id>https://hdl.handle.net/1721.1/147465</id>
<updated>2023-01-20T03:34:58Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Exploring Career Pathways Within an Organization Based on the Assessment of Prior Experiences
Geiger, Kurt Drew
“People are our most valuable resource” is a statement championed by organizations both large and small. It would therefore stand to reason that great care should go into crafting a career progression that aims to maximize the development of employees. Yet current research shows there is minimal consensus on how to effectively manage human capital in the sense of assessing the experience gained in a position and using that assessment to guide an individual’s next career moves. The question asked in this thesis is whether a model based on the observable tasks expected to occur in a position differs meaningfully from a model based solely on time in a position. To answer this question, a model representative of a system currently in place is constructed, and different scenarios are tested by changing the decision criteria that govern the career progression of individuals within the chosen organization. Our exploration uses the Navy Explosive Ordnance Disposal officer community as a framework for modeling and the analytic hierarchy process to systematically evaluate positions based on experiences gained through specific tasks. Results from this effort support the idea that considering the observable tasks expected to occur in a position may help better develop individuals in key leadership attributes.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NeuroModular - A Modular Backend for Fiber-Based Wireless Bioelectronic Interfaces</title>
<link href="https://hdl.handle.net/1721.1/147462" rel="alternate"/>
<author>
<name>Allen, Harrison</name>
</author>
<id>https://hdl.handle.net/1721.1/147462</id>
<updated>2023-01-20T04:00:15Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">NeuroModular - A Modular Backend for Fiber-Based Wireless Bioelectronic Interfaces
Allen, Harrison
Multifunctional microelectronic fibers are a new class of bioelectronic interfaces that combine the scalability and customization afforded by fiber drawing with the functional maturity of solid-state microdevices. Wireless operation of multifunctional fiber-based devices would allow neuromodulation in the central and peripheral nervous systems of awake, behaving animals, allowing more naturalistic behaviors compared to wired operation. In this work I present a modular, versatile, and miniature wireless control platform that supports an array of capabilities in multifunctional fibers. The device is designed with two sub-circuits: the primary module circuit and multiple fiber control circuits. The primary module circuit communicates with a user-controlled graphical user interface (GUI) via Bluetooth Low Energy (BLE) and controls the fiber control circuits. The fiber control circuits have two implementations, one for simple fiber control (v1.0) and the other for more advanced fiber control (v1.1). These circuits can each operate up to three functional "channels" simultaneously and independently. Each channel can support microscale light-emitting diodes (µLEDs) for in-vivo optogenetics, microscale temperature sensors, and thermal actuators, and can be extended to accommodate additional functionalities. The modules can operate using different power solutions depending on experimental needs: batteries of various sizes and capacities for wireless operation, or wired power for indefinite run time.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonizing the Global Shipping Industry: Evaluating Pathways for Alternative Fuels</title>
<link href="https://hdl.handle.net/1721.1/147461" rel="alternate"/>
<author>
<name>Hong, Seoyeon Tara</name>
</author>
<id>https://hdl.handle.net/1721.1/147461</id>
<updated>2023-01-20T03:15:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Decarbonizing the Global Shipping Industry: Evaluating Pathways for Alternative Fuels
Hong, Seoyeon Tara
Achieving net-zero emissions across all sectors, including the shipping industry, which relies heavily on fossil fuels and traditional internal combustion engines for propulsion, is critical to mitigating climate change and limiting global temperature rise. This thesis evaluates decarbonization pathways for the global shipping industry through alternative fuels. The decarbonization pathways for shipping are constructed by considering significant system decisions, including powertrains, fuel types, and feedstocks. Each pathway is assessed based on cost and multi-attribute utility using system-level metrics relevant to shipping. For alternative fuels, fuel cost models have been developed to estimate the levelized cost of production based on varying electricity prices, natural gas prices, and capital and operating expenditure assumptions. With the fuel cost model results, total cost of ownership models of bulk carrier vessels have been developed to calculate and compare the lifetime cost of operating vessels for various alternative fuel pathways. The cost models provide insights into the cost markup of alternative fuel pathways relative to the conventional fuels of maritime ships. MIT’s Economic Projection and Policy Analysis (EPPA) model has been enhanced to represent a low-emission shipping option, assess the economic impact, and make projections on the market share of the alternative fuel pathways through 2050. The investment required to enable low-emission shipping to enter the market has been estimated using the EPPA model. Combining findings from the multi-attribute utility analysis, including lifecycle emissions of alternative fuels, with the economic modeling results, near-term, medium-term, and long-term pathways for low-emission shipping are proposed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Management of R&amp;D Capabilities with Agent Based Modeling</title>
<link href="https://hdl.handle.net/1721.1/147459" rel="alternate"/>
<author>
<name>Paul, Jason V.</name>
</author>
<id>https://hdl.handle.net/1721.1/147459</id>
<updated>2023-01-20T03:30:51Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Strategic Management of R&amp;D Capabilities with Agent Based Modeling
Paul, Jason V.
R&amp;D efforts produce more than just the tangible prototype or patent. These efforts also create intangible attributes that are manifested as a capability, giving an organization the endogenous skills needed to traverse a technological landscape more efficiently and successfully than the competition. The challenge for leadership is aligning resources and policy to balance the tangible and intangible to meet organizational strategy. This is difficult since the intangible outputs are not easily quantifiable. However, Agent Based Modeling (ABM) provides one method to simulate R&amp;D activities and to quantify and compare the relative gains between the tangible and intangible outputs. This type of modeling incorporates agents that move about an environment while making decisions based on their interactions with the environment and other agents. Due to the stochastic foundation of this model, a Monte Carlo approach is used, and the results are shown as a cumulative distribution function that allows leadership to compare the relative impacts of different R&amp;D strategies or policies. This work presents a model with agents in the form of researchers pursuing an innovation goal. They traverse a technology landscape, ultimately creating a path to realize the innovation goal. The landscape is littered with technical impediments and potential serendipitous discoveries. The researchers must overcome these barriers through individual or collaborative research efforts, or through avoidance. The model is exercised for self-consistency and cross-consistency through three scenarios to increase confidence. It is then expanded to include technology areas of lower maturity, with higher densities of technical barriers and serendipitous discovery sites, and a scenario in which researchers conducting basic research work in conjunction with researchers conducting applied research.
In these last two scenarios, the data highlight the tradespace in a realistic setting, giving leadership the information needed to determine the best resourcing decisions to achieve organizational strategic goals. Specifically, in both scenarios, the time required to achieve the primary innovation goal is greater than in the base case, but organizational knowledge gains and serendipitous discoveries increase markedly.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of a Desktop Fiber Manufacturing Device</title>
<link href="https://hdl.handle.net/1721.1/147450" rel="alternate"/>
<author>
<name>Dhar, Shreya</name>
</author>
<id>https://hdl.handle.net/1721.1/147450</id>
<updated>2023-01-20T03:30:03Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design and Analysis of a Desktop Fiber Manufacturing Device
Dhar, Shreya
This thesis analyses a desktop fiber manufacturing device and discusses design changes that expand its functionality and usability. The device consists of an extruder that pushes a preform, a hot glue stick in this case, into a heater. The softened preform can then be drawn into fiber, which is spooled after being cooled in a water bath. Design changes have been made so that the height of the extruder and the distance of the spool are easily adjustable using simple reattachment mechanisms and bearings. The extrusion system has also been modified to allow repositioning of the idle extrusion gear so that the device can extrude fiber from preforms of different diameters. The spooling system has been improved by making the spool easily removable and by increasing the run-time of the device. A different way of distributing the manufactured fiber over the spool has also been explored, and a wire management system has been implemented. Power and tension sensors have been added, and the data collected from them have been analyzed. Additionally, the performance of the fiber extrusion device has been studied, and the effects of different motor speeds on the manufactured fiber’s diameter have been analyzed to identify settings that can reduce variation. These modifications expand and build on the current device to further develop it as a platform for education and research related to manufacturing, data analysis, controls, and design.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dense Spin Arrays with Low Cross-talk Operations for Quantum Network Applications</title>
<link href="https://hdl.handle.net/1721.1/147448" rel="alternate"/>
<author>
<name>Wang, Hanfeng</name>
</author>
<id>https://hdl.handle.net/1721.1/147448</id>
<updated>2023-01-20T03:48:30Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Dense Spin Arrays with Low Cross-talk Operations for Quantum Network Applications
Wang, Hanfeng
In this master’s thesis, we propose a quantum repeater architecture based on a spin array with an efficient optical interface. Single-qubit control and multi-qubit gates can be realized by localized electric fields with low cross-talk and low power consumption. The thesis also describes how we use electric fields for spectral addressing and frequency multiplexing to increase quantum repeater performance. We evaluate the performance of our design in comparison to a routing-tree design and show an improved entanglement generation rate that scales into the thousands-of-qubits regime. Our results enable high-fidelity control of dense quantum emitter arrays for scalable networking.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Efficient Acoustic Interfaces for Quantum Emitters in Diamond</title>
<link href="https://hdl.handle.net/1721.1/147447" rel="alternate"/>
<author>
<name>Raniwala, Hamza Hussain</name>
</author>
<id>https://hdl.handle.net/1721.1/147447</id>
<updated>2023-01-20T03:24:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design of Efficient Acoustic Interfaces for Quantum Emitters in Diamond
Raniwala, Hamza Hussain
Solid-state atomic defects in diamond, known as quantum emitters, are a valuable technology for quantum networking and computing due to their optically active transitions, which interface on-chip systems with flying photons, as well as their long-lived spin transitions, which function as quantum memories. These advantages motivate the development of quantum emitter interfaces that allow other technologies, such as superconducting circuits, nanomechanical resonators, and telecom optical cavities, to interact with quantum emitters. Here, we propose two devices that allow these systems to interact efficiently via spin-phonon interactions with group-IV silicon vacancy (SiV⁻) centers in diamond. First, we design and simulate a spin-optomechanical interface with ultrasmall mechanical and optical mode volumes ([formula] and [formula], respectively) to interface SiV⁻ centers with a telecom optical mode for quantum networking. Next, we design and simulate an electromechanical transducer that generates tripartite strong coupling from a superconducting circuit and an SiV⁻ electron spin to an intermediary phonon mode, with ultra-high cooperativities (~10³ and ~10², respectively). Finally, we discuss the deployment of these two devices in quantum information protocols: heralded entanglement using our spin-optomechanical interface, and superconducting circuit-to-spin quantum transduction, information storage, and networking using our spin-electromechanical transducer.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep-learning Enabled Accurate Bruch’s Membrane Segmentation in Ultrahigh-Resolution Spectral Domain and Ultrahigh-Speed Swept Source Optical Coherence Tomography</title>
<link href="https://hdl.handle.net/1721.1/147445" rel="alternate"/>
<author>
<name>Lin, Junhong</name>
</author>
<id>https://hdl.handle.net/1721.1/147445</id>
<updated>2023-01-20T04:04:52Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Deep-learning Enabled Accurate Bruch’s Membrane Segmentation in Ultrahigh-Resolution Spectral Domain and Ultrahigh-Speed Swept Source Optical Coherence Tomography
Lin, Junhong
Age-related macular degeneration (AMD) and diabetic retinopathy (DR), leading causes of significant vision loss worldwide, alter the retinal structure and capillary blood flow in the eye. Optical coherence tomography (OCT) and OCT angiography (OCTA), the gold-standard imaging modalities in ophthalmic clinics, enable micrometer-scale visualization of retinal structure and vasculature and provide the ability for early detection and progression monitoring of retinal disease. The ultrahigh-resolution spectral domain OCT prototype (UHR SD-OCT) and ultrahigh-speed swept source OCT prototype (UHS SS-OCT) developed by our group enable visualization of the fine structural changes in the outer retina and the vascular changes in the retina, respectively, which occur with disease progression. Several of the most important clinical findings in AMD and DR, such as drusen and choriocapillaris (CC) blood flow deficits, are located adjacent to Bruch’s membrane (BrM). BrM is a very thin (2–6 µm) extracellular matrix, which is generally not resolved in commercial OCT instruments and is therefore challenging to segment and analyze. It is even more challenging when pathologic changes in the retina distort its appearance and contrast. To qualitatively and quantitatively assess the pathologic changes adjacent to BrM, an accurate segmentation is required for robust analysis. This thesis presents an advanced automatic, deep learning-based segmentation framework. The study aims to generate an accurate BrM segmentation for quantitative analysis. The performance of the segmentation is evaluated on both healthy eyes and eyes with retinal diseases, and reproducibility and repeatability are assessed through consecutive repeated imaging sessions as well as longitudinal imaging of patients. This study will facilitate the investigation of in vivo early structural and vascular biomarkers for AMD and DR progression.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Variational Autoencoders for Discovering Influential Latent Factors</title>
<link href="https://hdl.handle.net/1721.1/147441" rel="alternate"/>
<author>
<name>Hu, William</name>
</author>
<id>https://hdl.handle.net/1721.1/147441</id>
<updated>2023-01-20T03:06:59Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Variational Autoencoders for Discovering Influential Latent Factors
Hu, William
Generative modeling is increasingly used to simulate or generate new, unseen data instances by modeling the statistical distribution of data. Generative modeling falls under the broad area of representation learning, which aims to discover the representations required for feature detection, classification, and other ways of understanding data. In this vein, variational autoencoders (VAEs) and their variants are one technique of generative modeling (and therefore representation learning) that uses variational inference under the assumption that the underlying data distribution is composed of a few latent random variables. For example, a VAE (or some other generative model) might learn that an image of a person can be generated from hair color, face shape, and background color. By decomposing the data into latent factors, we can generate and explore new, unseen data, enabling us to investigate what certain data looks like in different environments. However, VAEs are not perfect, and the trained latent factors can contain redundant information. In this thesis, we propose to apply VAEs as an unsupervised technique (i.e., in the absence of any external metadata) to investigate the extent to which we can discover a disentangled representation of tabular data and use these factors to generate new data.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Algorithms For String Problems</title>
<link href="https://hdl.handle.net/1721.1/147440" rel="alternate"/>
<author>
<name>Jin, Ce</name>
</author>
<id>https://hdl.handle.net/1721.1/147440</id>
<updated>2023-01-20T03:25:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Quantum Algorithms For String Problems
Jin, Ce
We design near-optimal quantum query algorithms for two important text processing problems: Longest Common Substring and Lexicographically Minimal String Rotation. Specifically, we show that:&#13;
&#13;
- Longest Common Substring can be solved by a quantum algorithm in Õ(n²⸍³) time, improving upon the Õ(n⁵⸍⁶)-time algorithm by Le Gall and Seddighin (2022). Moreover, given a length threshold 1 ≤ d ≤ n, our algorithm decides in n²⸍³⁺⁰⁽¹⁾/d¹⸍⁶ time whether the longest common substring has length at least d, almost matching the Ω(n²⸍³/d¹⸍⁶) quantum query lower bound.&#13;
&#13;
- Lexicographically Minimal String Rotation can be solved by a quantum algorithm in n¹⸍²⁺⁰⁽¹⁾ time, improving upon the Õ(n³⸍⁴)-time algorithm by Wang and Ying (2020), and almost matching the Ω(√n) quantum query lower bound.&#13;
&#13;
Our algorithm for Lexicographically Minimal String Rotation is obtained by speeding up a divide-and-conquer algorithm using nested Grover search and quantum minimum finding. Combining this divide-and-conquer idea with the deterministic sampling algorithm of Vishkin (1991) and Ramesh and Vinay (2003), we achieve a quantum speed-up of the String Synchronizing Set technique introduced by Kempa and Kociumaka (2019). Our algorithm for Longest Common Substring applies this string synchronizing set in the quantum walk framework.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SafeGENIE: Secure and federated linear mixed model association tests</title>
<link href="https://hdl.handle.net/1721.1/147439" rel="alternate"/>
<author>
<name>Chen, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/147439</id>
<updated>2023-01-20T03:57:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">SafeGENIE: Secure and federated linear mixed model association tests
Chen, Jeffrey
Privacy-preserving algorithms for genome-wide association studies (GWAS) promise to facilitate data sharing across silos to accelerate new discoveries. However, existing approaches either do not support an important, prevalent class of methods known as linear mixed model (LMM) association tests or provide only limited privacy protection, due to the high computational burden of LMMs under existing secure computation frameworks. Here we introduce SafeGENIE, an efficient and provably secure algorithm for LMM-based association studies, which allows multiple entities to securely share their data to jointly compute association statistics without leaking any intermediary results. We overcome the computational burden of LMMs by leveraging recent advances in LMMs and secure computation, as well as a novel scalable dimensionality reduction technique. Our results show that SafeGENIE obtains accurate association test results comparable to a state-of-the-art centralized algorithm (REGENIE), and achieves practical runtimes even for large datasets of up to 100K individuals. Our work unlocks the promise of secure and distributed algorithms for collaborative genomic studies.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-cell differential splicing of Alzheimer’s Disease in 1.9 million cells across 416 individuals</title>
<link href="https://hdl.handle.net/1721.1/147438" rel="alternate"/>
<author>
<name>Hwa, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/147438</id>
<updated>2023-01-20T03:49:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Single-cell differential splicing of Alzheimer’s Disease in 1.9 million cells across 416 individuals
Hwa, Christian
Alzheimer’s Disease (AD) is a polygenic disease with variable phenotypic response among patients. While single-cell differential expression data can provide important information regarding the disease’s pathology, it only captures steady-state cell behavior. By contrast, RNA velocity analysis allows inference of both the direction and magnitude of regulatory changes occurring over disease progression. However, despite the success of existing RNA velocity analyses, the pace of disease progression remains uncharacterized, as previous work focused primarily on macro-scale trajectory diagrams and failed to capture trajectories of individual genes relative to global pathological changes.&#13;
&#13;
Here, we use scRNA-seq profiling of 1.9 million cells from dorsolateral prefrontal cortex of 416 post-mortem human samples to model temporal regulatory changes of AD. We use RNA velocity to infer spliced and unspliced counts for each gene and a beta-binomial distribution to model the unspliced/spliced ratio, and we identify differentially-spliced genes across six brain cell types and differentially-spliced genes between AD and control cohorts. Of 16,948 expressed genes, 1158 genes show significant differential splicing (DS) effects with FDR&lt;0.05%. We find that cell-type specific and globally-altered genes are significantly enriched in AD-associated processes, including dysregulation of the TNF pathway in astrocytes, dysregulation of PD-L1 expression across all cell-types, cellular transport, and lipid dyshomeostasis.&#13;
&#13;
Overall, our study provides a novel method to analyze and compare RNA velocity between AD and non-AD cohorts, and identifies a new set of differentially-spliced AD genes, whereas previous analyses focused primarily on gene expression changes.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Portable Handheld Fine-Grained RFID Localization System with Complex-Controlled Polarization</title>
<link href="https://hdl.handle.net/1721.1/147437" rel="alternate"/>
<author>
<name>Dodds, Laura</name>
</author>
<id>https://hdl.handle.net/1721.1/147437</id>
<updated>2023-01-20T03:36:19Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Portable Handheld Fine-Grained RFID Localization System with Complex-Controlled Polarization
Dodds, Laura
There is significant interest in fine-grained RFID localization systems. Existing systems require infrastructure support, either in the form of extensive reference tags deployed in the environment or a deployed antenna infrastructure, such as antenna arrays, to localize RFID tags within radio range. Yet, there remains a need for fine-grained RFID localization solutions in a compact, portable, mobile form that users can carry as they walk around and map areas such as retail stores, warehouses, or manufacturing plants.&#13;
&#13;
We present the design, implementation, and evaluation of EveryFind, a portable handheld system for fine-grained RFID localization. Our system introduces two key innovations that enable robust, accurate, and real-time localization of RFID tags in challenging environments. The first is complex-controlled polarization (CCP), a mechanism for localizing RFIDs at all orientations through software-controlled polarization of two linearly polarized antennas. The second is joint tag discovery and localization (JTDL), a method for simultaneously localizing and reading tags with zero overhead regardless of the tag’s orientation. Building on these two techniques, we develop an end-to-end handheld system that addresses a number of practical challenges in self-interference cancellation, efficient inventorying, and self-localization. Our evaluation over hundreds of RFID locations demonstrates that EveryFind achieves a median accuracy of a few centimeters in each of the x/y/z dimensions and a 90th percentile accuracy in practical indoor environments.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>HyperSketch : Language for Implementing Generic Neuro-Symbolic Program Synthesizers</title>
<link href="https://hdl.handle.net/1721.1/147436" rel="alternate"/>
<author>
<name>Serafimov, Kliment</name>
</author>
<id>https://hdl.handle.net/1721.1/147436</id>
<updated>2023-01-20T03:31:13Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">HyperSketch : Language for Implementing Generic Neuro-Symbolic Program Synthesizers
Serafimov, Kliment
Recent developments in neuro-symbolic learning, including program synthesis and deep learning, have shown surprising growth in the scale, scope, and variety of models used and problems solved. However, existing frameworks are usually specialized for one style of model learning or synthesis, and integrating them with other representational modalities requires substantial engineering effort.&#13;
&#13;
In this paper we propose a sketch-based meta-programming framework for developing generic synthesis algorithms and integrating different learning modalities (particularly constraint solving and gradient descent) over a common intermediate representation.&#13;
&#13;
We introduce HyperSketch, a high-level language that empowers developers to program model-agnostic neuro-symbolic learning and synthesis algorithms. HyperSketch does this by providing the developer compact access to the internal representation and solvers of the Sketch (Solar-Lezama et al.) program synthesis system, allowing the user to manipulate sketches at runtime. With a few primitives, we give the developer the ability to encode prior knowledge about the order of concretization of unknown model structure, and let the developer choose which solvers (and hyperparameters) to use for solving different parts of the sketch. The core innovation of our language is a family of primitives for runtime manipulation of sketches that can perform the following tasks: (1) call Sketch’s solver on a particular sketch, (2) concretize a sketch function, (3) clone a sketch function, (4) rewire the call-graph of the sketch, (5) manipulate input files, and (6) evaluate sketches. We expose these primitives as part of an imperative, dynamically typed language base with Python/C++-style syntax.&#13;
&#13;
We use HyperSketch to implement several synthesis strategies of varying complexity for several problem domains: Boolean function synthesis, synthesis of Karel programs, and synthesis of timeseries programs. We particularly explore best-effort synthesis and synthesis-through-unification (STUN) in a tutorial-style guide. We also demonstrate a case study of using HyperSketch in an industry research setting on the task of synthesizing predicates over labeled graphs. We demonstrate that using HyperSketch to implement synthesizers can reduce developer effort by 3x-10x compared to previous approaches.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Engineering Expertise in Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/147435" rel="alternate"/>
<author>
<name>Ackerman, Liam J.</name>
</author>
<id>https://hdl.handle.net/1721.1/147435</id>
<updated>2023-01-20T03:37:06Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Leveraging Engineering Expertise in Deep Reinforcement Learning
Ackerman, Liam J.
Deep reinforcement learning has been used to craft robust and performant control policies for legged robotics. However, the engineering processes to create these policies are often plagued by long training times that slow down engineering iteration. This thesis suggests that model-based controllers offer a wealth of successful computation that may be used within reinforcement learning control pipelines to improve learning efficiency. Two ideas incorporate this engineering expertise to increase reinforcement learning efficiency. First, successful model-based computations are pre-processed and incorporated directly into network observations. Introducing these terms into the reinforcement learning architecture is shown to increase learning speeds and policy performance dramatically. Next, inspired by model-based task hierarchies, more structure is added to the reinforcement learning objective function to activate and deactivate reward terms based on an agent’s state. This structure is intended to avoid local minima which impede learning. This reward restructure is shown to avoid local minima during training but degrades final policy performance at edge-cases.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An ECDSA Nullifier Scheme and a Proof of Identity Application</title>
<link href="https://hdl.handle.net/1721.1/147434" rel="alternate"/>
<author>
<name>Gupta, Aayush</name>
</author>
<id>https://hdl.handle.net/1721.1/147434</id>
<updated>2023-01-20T03:41:31Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">An ECDSA Nullifier Scheme and a Proof of Identity Application
Gupta, Aayush
ZK-SNARKs (Zero Knowledge Succinct Noninteractive ARguments of Knowledge) are one of the most promising new applied cryptography tools: proofs allow anyone to prove a property about some data without revealing that data. Largely spurred by the adoption of cryptographic primitives in blockchain systems, ZK-SNARKs are rapidly becoming computationally practical in real-world settings, as shown by, e.g., tornado.cash and rollups. These have enabled ideation for new identity applications based on anonymous proof-of-ownership. One of the primary technologies that would enable the jump from existing apps to such systems is the development of deterministic nullifiers.&#13;
&#13;
Nullifiers are used as a public commitment to a specific anonymous account, to forbid actions like double spending, or allow a consistent identity between anonymous actions. We identify a new deterministic algorithm that both uniquely identifies the keypair and keeps the account identity secret. In this work, we will define the full construction, and prove uniqueness, secrecy, and existential unforgeability. We will then demonstrate a proof of concept of the nullifier.&#13;
&#13;
To help further zero knowledge identity systems, we additionally explore a construction for zero knowledge proof of email ownership. We show how relying on existing mail server infrastructure can allow us to bootstrap new anonymity sets and prove subsets of emails in zero knowledge.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long Term Measurement of Bandgap Voltage and System Level Integral Non-Linearity Drift</title>
<link href="https://hdl.handle.net/1721.1/147432" rel="alternate"/>
<author>
<name>Chaney, Colin P.</name>
</author>
<id>https://hdl.handle.net/1721.1/147432</id>
<updated>2023-01-20T03:29:40Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Long Term Measurement of Bandgap Voltage and System Level Integral Non-Linearity Drift
Chaney, Colin P.
All modern precision measurement systems require voltage references that are stable across all variables, namely temperature and time. Most of these references are built around bandgap voltage circuits. While it is impossible to eliminate long-term drift in these circuits, it is industry standard to collect long-term drift data so that customers can know when their references may drift out of specification. While this is done for voltage references, there are complicated system-level performance metrics, such as integral non-linearity, that may also drift over time but for which data is not collected. Most long-term drift measurement systems are equipped only for analog voltage measurement and are unable to measure these other system-level performance metrics. This thesis details the development of a new long-term drift measurement system architecture that supports these system-level measurements while preserving the ability to measure analog data such as voltage reference drift. The architecture was designed with configurability in mind, so that many types of measurements and product lines can be supported with the development of the appropriate sub-boards.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Transformer for scATAC-scRNA Translation</title>
<link href="https://hdl.handle.net/1721.1/147430" rel="alternate"/>
<author>
<name>Jin, Roger</name>
</author>
<id>https://hdl.handle.net/1721.1/147430</id>
<updated>2023-01-20T03:12:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Transformer for scATAC-scRNA Translation
Jin, Roger
scATAC-seq gives a comprehensive picture of the chromatin accessibility profile of a cell, covering not only protein-coding regions but also non-coding regulatory regions that are in theory missed by scRNA-seq. However, scATAC-seq data is high-dimensional and noisy, aspects which, when compounded with data scarcity, present challenges for modeling even seemingly simple downstream tasks such as cell-type prediction. As such, researchers may benefit from access to a large library of models to evaluate. While we do not demonstrate state-of-the-art results in any capacity, we provide an implementation of a simple representation of sparse tabular data that allows it to be input into the popular transformer family of architectures, and use this representation to train a transformer that predicts scRNA-seq given scATAC-seq. Our code is available at https://github.com/rogershijin/GANOLI.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Environment and Real Estate: How Boston Developers Are Responding to an Evolving Landscape</title>
<link href="https://hdl.handle.net/1721.1/147427" rel="alternate"/>
<author>
<name>Land, Carson Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/147427</id>
<updated>2023-01-20T04:05:46Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Environment and Real Estate: How Boston Developers Are Responding to an Evolving Landscape
Land, Carson Christopher
Today, the impacts of climate change and the effort to diminish its drivers and mitigate its consequences are disrupting the global economy. Given the inherent intersection between real estate and the economy, this disruption presents unique challenges and opportunities within the field of real estate development. In fact, the real estate industry may be at an inflection point towards a more sustainable and resilient future. Yet, with change comes confusion as stakeholders across the real estate spectrum work to respond to this emerging reality.&#13;
&#13;
This paper provides a broad overview of the evolving landscape of real estate development in the Boston market as it relates to sustainability and the environment. Through its investigation, this paper seeks to elucidate how best-in-class developers in the Boston market are responding to these new market changes. Through this analysis, this paper endeavors to provide a contemporary high-level summary, albeit partial, of the intersection between real estate development, sustainability and environmental risk. The ultimate hope is that this work, and future investigations, will enable the sharing of best practices across the industry.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thesis Topic: What makes a successful RV Park in the US</title>
<link href="https://hdl.handle.net/1721.1/147423" rel="alternate"/>
<author>
<name>Zhao, Yue (Mia)</name>
</author>
<id>https://hdl.handle.net/1721.1/147423</id>
<updated>2023-01-20T03:01:11Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Thesis Topic: What makes a successful RV Park in the US
Zhao, Yue (Mia)
Creatively seeking alternative investments in RV parks, rather than limiting itself to traditional commercial real estate sectors, enabled Sun Communities REIT to generate significantly higher profits for its shareholders in 2021. “The RV segment continues to deliver strong results producing same community NOI growth of nearly 31 percent in the quarter, as we benefit from the demand for outdoor experienced coming from existing and new Sun customers”, said Sun Communities CEO Gary Shiffman in the Q3 2021 report.&#13;
&#13;
Recreational Vehicle (RV) park campgrounds have long been a neglected investment opportunity for real estate developers and investors due to the perception that campgrounds are low-income accommodations. Only recently has the product type begun to grab the attention of commercial real estate developers and investors. The most likely cause of this new attention is the enormous growth of the mobile workforce and the attraction of the RV way of life since the onset of the COVID-19 pandemic. Investments in RV parks currently enjoy high cap rates, low maintenance costs, and high growth potential: a nearly perfect combination for real estate investors.&#13;
&#13;
The COVID-19 pandemic has had a considerable impact on travel over the past two years. While hotels and airlines were negatively disrupted, campgrounds recorded record-breaking years during and after the pandemic. In 2021, more than half of travelers planned to camp at some point during their trip. During the pandemic, camping accounted for 40% of all leisure travel. American RV ownership has reached a historical high point: many travelers could not find an RV to buy and also needed to book a campground months in advance in order to secure a spot for their RV.&#13;
&#13;
Many forward-looking real estate investors see opportunities in this traditionally mom-and-pop dominated industry and have recently been attempting to enter it, with the intention of increasing their revenue streams while syndicating these assets with their current land holdings. This thesis conducts a thorough study of the campground industry in North America to equip a real estate developer with the necessary knowledge of the camping industry and to provide guidance on making an investment in this asset class.&#13;
&#13;
The methodology of the thesis includes a review of secondary sources, including literature reviews, paid and unpaid industry reports from associations and governments, and recent news and forums, as well as primary sources from field studies and interviews with private equity funds, architects, city planners, agents, and developers.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oceanic Ambient Noise in the Arctic on the Chukchi Shelf: Broadband Characteristics and Environmental Drivers</title>
<link href="https://hdl.handle.net/1721.1/147416" rel="alternate"/>
<author>
<name>Fung, LTJG Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/147416</id>
<updated>2023-01-20T03:39:31Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Oceanic Ambient Noise in the Arctic on the Chukchi Shelf: Broadband Characteristics and Environmental Drivers
Fung, LTJG Kathryn
This thesis encompasses an analysis of underwater ambient noise collected by the yearlong Canada Basin Acoustic Propagation Experiment (CANAPE) on the Chukchi Shelf of the Arctic. This location contained the Beaufort Duct, a significant effect of climate change on the Arctic’s underwater soundscape. A study of statistical and probability metrics was conducted on a frequency band of 50-1900 Hz to examine the relation between environmental drivers and noise patterns. The presence of ice typically decreases broadband ambient noise when compared to ice-free seas. However, the Beaufort Duct under ice increases ambient noise levels below 1 kHz. The relationship between ambient noise and the environment is further explored by studying the link between distant ice movements and ambient levels. Correlation between the two is found to exist from 300-1500 Hz, as distant (500 km) ice drift motion appears to drive noise levels at the receiver. [Work supported by the Office of Naval Research]
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Material Handling for Continuous Lyophilization Process</title>
<link href="https://hdl.handle.net/1721.1/147415" rel="alternate"/>
<author>
<name>Flores, Ryan Maximiliano</name>
</author>
<id>https://hdl.handle.net/1721.1/147415</id>
<updated>2023-01-20T04:00:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Material Handling for Continuous Lyophilization Process
Flores, Ryan Maximiliano
This paper details solutions to two main challenges with a lyophilization machine: load-lock door design for isolating process chambers from the environment and other process chambers operating at different conditions and vacuum seals for complex geometries that are robust to significant temperature variation.&#13;
&#13;
The load-lock door design solution detailed in this paper is the wedge door. Using a linear actuation motion and a large sealing-surface angle with respect to the horizontal, the wedge door requires less space than the previous rotary designs, the four-bar linkage and simple pivot doors. The wedge door has a length-normalized leak rate of [formula] and a lifetime of 41,992 cycles before service is required, which corresponds to 9.2 months of operation of the lyophilization machine.&#13;
&#13;
In order to seal the complex geometry of the interface between the linear motion system and the load-lock and drying chambers, an injectable vacuum seal was designed. The injectable seal uses Self-leveling Green sealant from Av-DEC injected into a rectangular groove 2 mm deep and 9.5 mm wide, with sandblasted sealing surfaces. To fully fill the groove around the linear motion system module, a maximum injection pressure of 45 PSI is required. The injectable seal has a length-normalized leak rate of [formula]. The injectable seal is robust to temperature variation from 10 °C to 25 °C, owing to the increased adhesion of the sealant to the sealing surfaces resulting from the increased roughness of the sandblasted surfaces.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concept of Operations and Failure Analysis for a Complex Deployable CubeSat Antenna Payload</title>
<link href="https://hdl.handle.net/1721.1/147413" rel="alternate"/>
<author>
<name>Ammons, Kristen J.</name>
</author>
<id>https://hdl.handle.net/1721.1/147413</id>
<updated>2023-01-20T03:07:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Concept of Operations and Failure Analysis for a Complex Deployable CubeSat Antenna Payload
Ammons, Kristen J.
AERO-VISTA is a NASA H-TIDeS funded mission composed of two 6U CubeSats. Each CubeSat is outfitted with a novel vector sensor payload that will facilitate the characterization of the Earth’s radio aurora. Additionally, each spacecraft contains two Auxiliary Sensor Package (ASP) units that are used to image the vector sensor payloads. Full vector sensor deployment is integral to fulfilling mission requirements. Thus, it becomes important to perform a failure analysis on the deployment sequence and mechanism to best inform further testing efforts as well as to develop the deployment concept of operations. For this purpose, a Fault Tree Analysis is performed for each stage of the deployment sequence. After constructing these fault trees, available telemetry points for the identification of these failures are indicated, allowing failures to be categorized into those that can be identified while in orbit versus those that cannot. Additionally, each Fault Tree can be presented as an Event Tree to better visualize the sequence of events needed for a successful deployment. With these two tools in hand, a deployment timeline and a checklist for spacecraft operators are constructed. The wide angle camera on the ASP provides a key piece of telemetry for confirming each stage of the antenna deployment. Software testing for the ASP is performed from a system-level perspective. Additional testing is done to confirm that the camera can clearly image the antenna in full sunlight conditions while remaining within parameter constraints placed by the software. Finally, a sensitivity analysis of the ASP camera demonstrates the possibility of imaging auroral events at the 10 kilorayleigh signal level.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast, Metadata-private Anonymous Broadcast</title>
<link href="https://hdl.handle.net/1721.1/147411" rel="alternate"/>
<author>
<name>Langowski, Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/147411</id>
<updated>2023-01-20T03:17:10Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Fast, Metadata-private Anonymous Broadcast
Langowski, Simon
This thesis presents Trellis: a mix-net based anonymous broadcast system with cryptographic security guarantees. Trellis can be used to anonymously publish documents or communicate with other users, all while assuming full network surveillance. In Trellis, users send messages through a set of servers in successive rounds. The servers mix and post the messages to a public bulletin board, hiding which senders sent which messages.&#13;
&#13;
Trellis hides all network-level metadata, remains robust to changing network conditions, guarantees availability to honest users, and scales with the number of mix servers. Trellis provides three to five orders of magnitude faster performance and better network robustness compared to Atom, the state-of-the-art anonymous broadcast system with a similar threat model.&#13;
&#13;
In achieving these guarantees, Trellis contributes: (1) a simpler theoretical mixing analysis for a routing mix network constructed with a fraction of malicious servers, (2) anonymous routing tokens for verifiable random paths, and (3) lightweight blame protocols built on top of onion routing to identify and eliminate malicious parties.&#13;
&#13;
We implement and evaluate Trellis in a networked deployment. With 32 servers located across four geographic regions, Trellis achieves a throughput of 200 bits per second with 100,000 users. With 64 servers, Trellis achieves a throughput of 320 bits per second. Trellis’s throughput is only 100 to 1000× slower than that of Tor (which has 2M daily users) and is therefore potentially deployable at a smaller “enterprise” scale. Our implementation is open-source.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of extending the length of MIT’s Introduction to Computer Science course on the performance of students with little programming experience</title>
<link href="https://hdl.handle.net/1721.1/147410" rel="alternate"/>
<author>
<name>Zárate, Marcos Rubén</name>
</author>
<id>https://hdl.handle.net/1721.1/147410</id>
<updated>2023-01-20T03:41:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Effects of extending the length of MIT’s Introduction to Computer Science course on the performance of students with little programming experience
Zárate, Marcos Rubén
Since Fall 2007, the Department of Electrical Engineering and Computer Science at MIT has offered every semester an introduction to computer science course roughly split into two smaller parts. The first part, labeled 6.0001 since Fall 2014, has the goal of teaching students basic computational thinking and programming skills and the ability to read, craft, and understand simple algorithms; the second part, labeled 6.0002 since Fall 2014, has the goal of introducing students to data science and teaching the skills required to reason about, perform, and interpret computational experiments. Each course runs for half a standard MIT semester, which requires that introductory concepts be taught at a very fast pace; this may prove very difficult for students with little prior programming experience. Starting in Fall 2021, the Department has offered a parallel new course, 6.S061, whose overall goal is exactly the same as 6.0001 but which is designed to run for an entire semester and is aimed at students with little to no prior experience in the area. This thesis analyzes the impact of offering the introductory course at a slower pace by studying the overall performance of students in these two courses as well as in a follow-up course, along with any effects on students who started taking one version of the course and later changed to the other.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Deep Learning and Signal Processing Architecture Using&#13;
Frequency-Encoded RF Photonics</title>
<link href="https://hdl.handle.net/1721.1/147409" rel="alternate"/>
<author>
<name>Davis, Ronald A.</name>
</author>
<id>https://hdl.handle.net/1721.1/147409</id>
<updated>2023-01-20T03:03:15Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Deep Learning and Signal Processing Architecture Using&#13;
Frequency-Encoded RF Photonics
Davis, Ronald A.
Deep neural networks have become ubiquitous due to their ability to perform arbitrary tasks more accurately than manually-crafted systems. This ability has created substantial demand for more complex models processing larger amounts of data. However, the traditional computing architecture has reached a bottleneck in processing performance due to data movement. Considerable efforts have been made to create custom hardware to accelerate deep neural network training and inference. Among these efforts are optical neural networks, a promising approach that excels at linear operations but struggles with nonlinear implementations. Here, we propose our multiplicative analog frequency transform optical neural network (MAFT-ONN) that computes matrix products using frequency-encoded signals and implements the nonlinearity for each layer using a single Mach-Zehnder modulator. We experimentally demonstrate a 3-layer DNN for inference of MNIST digits, showing a scalable, fully analog end-to-end ONN. This architecture is also the first deep neural network hardware accelerator suited for direct inference of time-based signals without digitization.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Carbon capture technology for natural gas power plants: selection techniques and implementation strategies for a real-world scenario</title>
<link href="https://hdl.handle.net/1721.1/147408" rel="alternate"/>
<author>
<name>Schwab, William</name>
</author>
<id>https://hdl.handle.net/1721.1/147408</id>
<updated>2023-01-20T03:03:25Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Carbon capture technology for natural gas power plants: selection techniques and implementation strategies for a real-world scenario
Schwab, William
Although renewable energy solutions are improving rapidly, the majority of electricity is still generated by burning fossil fuels, including natural gas, which generates greenhouse gases (GHGs). These GHGs have been linked to significant climate changes that are projected to have far-reaching impacts across the planet if emissions are not drastically reduced. While renewable energy sources are part of the solution, thought must be given to how to reduce the emissions from legacy power infrastructure like natural gas power plants, since it is generally infeasible to completely abandon all of the current and in-development fossil fuel plants in favor of renewable technologies. One possible solution for utilizing these large-capacity plants while reducing their carbon dioxide (CO2) emissions is carbon capture and storage (CCS) technology. By capturing the CO2 produced by these natural gas facilities before it reaches the atmosphere, overall emissions can be lowered significantly while the power produced can still be provided to the communities that rely on it.&#13;
&#13;
This thesis answers one overarching question: how can we determine the most preferred technology and implementation strategy for utilizing CCS to reduce the carbon footprint from natural gas electricity generation while continuing to meet ever-growing demand? To create a real-world scenario, this question was viewed through the lens of a realistic stakeholder: an electricity producer trying to navigate the complex environmental and regulatory landscape while making prudent fiscal decisions. A number of System Design tools were used to explore this topic. A novel, hybrid Design Structure Matrix (DSM) was created to select the most appropriate attributes to include in the tradespace utility function. Two tradespaces were developed to determine the most preferred CCS technology from a technical standpoint. Then, a flexibility analysis was conducted to assess the most profitable economic strategy for implementing the most preferred technology into a natural gas power plant project. &#13;
&#13;
For the given stakeholder’s priorities and value drivers, the most preferred technology was post-combustion capture using monoethanolamine (MEA). However, a number of technologies were on the Pareto frontier and could have been equally good options given alternative stakeholder requirements. The most beneficial strategy for implementing this technology was to build in the optionality to retrofit CCS capability and then install the CCS facility 11 years after initial plant start-up. This research shows that a tradespace is a viable tool for selecting the most preferred CCS technology, and that a flexibility analysis is a prudent strategy for determining the economic value of a CCS-based project. Importantly, the ability to select the most preferred alternatives can be adapted to any stakeholder and CCS project by changing the priorities and value drivers utilized.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Understanding Gender Inequity in Engineering</title>
<link href="https://hdl.handle.net/1721.1/147407" rel="alternate"/>
<author>
<name>Papageorge, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/147407</id>
<updated>2023-01-20T03:48:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Understanding Gender Inequity in Engineering
Papageorge, Katherine
Gender inequity is a difficult topic to grapple with, as most existing articles, books, and opinion pieces on the subject focus on one particular problem or issue. Gender inequity in STEM has been a hot topic of conversation for years, with people pointing fingers at potential root causes such as a limited talent pipeline, parental planning, lack of retention, and many others. These are not single-source issues, however, and cannot be treated as such if meaningful change and progress are to result across the STEM workforce. As STEM itself is extremely broad, this thesis focuses on gender inequity in engineering specifically, assessing and dissecting issues and opportunities through a systemic approach.&#13;
&#13;
By leveraging learnings and processes relevant to Systems Management and Systems Design, the research in this thesis pursues multiple lines of inquiry into the system-level makeup of the engineering world and how it does or does not support gender equity. This research analyzes existing data sets on working women in many disciplines and incorporates input from a set of interviewees composed of female engineers. Defining the relationships between many seemingly separate issues may lend insights into how academic institutions, corporations, and society as a whole can implement staged solution sets to improve gender parity and equity throughout the engineering field.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case for Pre-trained Language Models in Systems Engineering</title>
<link href="https://hdl.handle.net/1721.1/147405" rel="alternate"/>
<author>
<name>Lim, Shao Cong</name>
</author>
<id>https://hdl.handle.net/1721.1/147405</id>
<updated>2023-01-20T03:34:05Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Case for Pre-trained Language Models in Systems Engineering
Lim, Shao Cong
Modern engineered systems are immensely complex. Extensive sets of natural language requirements guide the development of such systems. As such, tools to assist systems engineers in managing and extracting information from these requirements must also scale to match the complexity of these systems. However, the systems engineering community has lagged in adopting advanced natural language processing techniques. Pre-trained language models, such as BERT, represent the state of the art in the field. This thesis seeks to understand whether these pre-trained language models can achieve higher model performance at a lower computational and manpower cost than earlier techniques. The results show that adapting these language models through task-adaptive pretraining leads to consistent improvements in model performance and greater model robustness. These results indicate the potential of applying such language models in the systems engineering domain. However, much work remains to improve model performance and expand possible applications.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Holistic View of Factors Impacting the Adoption of Lessons Learned Management Systems</title>
<link href="https://hdl.handle.net/1721.1/147404" rel="alternate"/>
<author>
<name>Kittipeerapat, Thitisak</name>
</author>
<id>https://hdl.handle.net/1721.1/147404</id>
<updated>2023-01-20T03:44:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Holistic View of Factors Impacting the Adoption of Lessons Learned Management Systems
Kittipeerapat, Thitisak
The benefits of a lessons learned (LL) program are widely recognized. A lessons learned management system (LLMS) is the software designed to support the LL program and is essential to the program’s value delivery. The need for knowledge management (KM) in the energy sector appears to be increasing as energy companies focus on the energy transition while continuing digital transformation, performance improvement, and decarbonization of conventional energy production. Digital transformation has increased the accessibility of intelligent, well-designed LLMSs. However, challenges to adoption remain. This study aims to explore factors impacting the adoption of LLMSs within the context of the upstream oil and gas sector. First, a comprehensive literature review is conducted to understand existing barriers and enablers. Semi-structured interviews are then conducted with employees from a large global oil and gas company to uncover existing barriers and factors leading to the adoption of an LLMS. The thematic analysis method is used to analyze the interview data, from which five themes are presented: understand existing barriers, incorporate enabling processes and governance, facilitate the people element supporting the LL programs, satisfy users with important LLMS characteristics, and improve the LL process with an LLMS. Finally, practical recommendations for organizations to increase the adoption of an LLMS are summarized.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chrono urbanism and its relationship with the hybrid working culture: Real Estate opportunities and perspectives from NYC</title>
<link href="https://hdl.handle.net/1721.1/147397" rel="alternate"/>
<author>
<name>Rodriguez Escalante, Luis Raul</name>
</author>
<id>https://hdl.handle.net/1721.1/147397</id>
<updated>2023-01-20T04:05:01Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Chrono urbanism and its relationship with the hybrid working culture: Real Estate opportunities and perspectives from NYC
Rodriguez Escalante, Luis Raul
In recent months, major news about redevelopment projects and policy changes to incorporate remote work culture has hit the headlines, reflecting the ever-evolving lifestyles of people in urban areas around the world. During the ongoing COVID-19 pandemic, people fled major urban areas for more appealing settings. Now, as reported in media outlets, they are coming back, drawn by corporations but demanding a new work culture that is being called hybrid.&#13;
&#13;
Well before the pandemic, cities around the world had been fostering the notion of chrono-urbanism. In this paper, I discuss recent chrono-urbanism approaches applied to the hybrid lifestyles of people returning to urban areas, where work-from-home has become more prevalent. I examine how these lifestyle changes will adjust the spatial landscape of the real estate market, observing this phenomenon specifically in New York City. I intend to combine evidence-based research with opinions from key stakeholders in urban planning and the real estate industry, through different attributes of the remote-work-culture incorporation such as commuting and urban activity patterns, the transformation of new working spaces, and the synergy between the two.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Study of Transfer Learning on LSTM Recurrent Neural Networks for Fiber Manufacturing Commercialization</title>
<link href="https://hdl.handle.net/1721.1/147394" rel="alternate"/>
<author>
<name>Sawant, Nilay</name>
</author>
<id>https://hdl.handle.net/1721.1/147394</id>
<updated>2023-01-20T03:24:26Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Feasibility Study of Transfer Learning on LSTM Recurrent Neural Networks for Fiber Manufacturing Commercialization
Sawant, Nilay
This thesis explores business pathways to commercialize Device Realization Lab’s technology that uses deep reinforcement learning for optical fiber manufacturing control systems. A viable business solution is proposed based on feedback from venture capital investors. The solution comprises developing cloud-based software that can generate digital twins for fiber manufacturing companies. These digital twins can serve as anomaly detectors and suggest optimal input parameters that reduce production variation and tolerance, improving quality and decreasing scrap rate. Efforts to define a minimum viable product (MVP) for this business solution began with the creation of a long short-term memory recurrent neural network (LSTM RNN) model for a desktop fiber extrusion system that mimics the fiber extrusion process on the manufacturing floor. Transfer learning on the LSTM RNN was then implemented to explore the feasibility of reusing a well-developed machine learning (ML) model for a fiber material (e.g. glass fiber) to construct an ML model for a separate fiber material (e.g. nylon fiber) for which a relatively low amount of data is available. The study found that applying transfer learning reduced the mean squared error of the new fiber material model by over 40% compared to developing the model without transfer learning. This thesis strives to reveal the innovative applications of the technology that can benefit the fiber manufacturing field and defines an MVP that can be shared with venture capital investors as a first step toward commercializing this technology.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Manufacturing of the Extrusion Assembly for an Advanced Process Control Educational Device</title>
<link href="https://hdl.handle.net/1721.1/147393" rel="alternate"/>
<author>
<name>Levi, Aviva Jesse</name>
</author>
<id>https://hdl.handle.net/1721.1/147393</id>
<updated>2023-01-20T04:04:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design and Manufacturing of the Extrusion Assembly for an Advanced Process Control Educational Device
Levi, Aviva Jesse
FrED (Fiber Extrusion Device) is a desktop fiber extrusion system developed to teach advanced process control in a heuristic manner. There was a need to reduce the cost of the existing version of FrED to make it accessible to remote learners. The device was broken down into various sub-assemblies, and each sub-assembly was investigated in detail to identify opportunities for cost reduction. This thesis outlines the design and manufacturing of the extrusion assembly. The V-model methodology was adopted, and the functional requirements of the extrusion assembly were defined. Four iterations of prototyping were done, and the prototypes were validated against the functional requirements. User testing was conducted to check how user-friendly the design was. A pilot run of 10 parts was done to check for potential design improvements. Lastly, the production-level design was prepared along with a process plan, which was used to carry out a pilot production run of 15 FrEDs.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable sketching and indexing algorithms for large biological datasets</title>
<link href="https://hdl.handle.net/1721.1/147392" rel="alternate"/>
<author>
<name>Ekim, Bariş C.</name>
</author>
<id>https://hdl.handle.net/1721.1/147392</id>
<updated>2023-01-20T04:05:08Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Scalable sketching and indexing algorithms for large biological datasets
Ekim, Bariş C.
DNA sequencing data continues to progress towards longer sequencing reads with increasingly lower error rates. In order to efficiently process the ever-growing collections of sequencing data, there is a crucial need for more time- and memory-efficient algorithms and data structures. In this thesis, we propose several ways to represent DNA sequences in order to mitigate some of these challenges in practical biological tasks. Firstly, we expand upon an existing k-mer-based approach (a k-mer is a substring of length k), the universal hitting set (UHS), to sample a subset of locations on a DNA sequence. We show that UHSs can be efficiently constructed using a randomized parallel algorithm, and propose ways in which UHSs can be used in sketching and indexing sequences for downstream analysis. Secondly, we introduce the concept of minimizer-space sequencing data analysis, where a set of minimizers, rather than DNA nucleotides, are the atomic tokens of the alphabet. We propose that minimizer-space representations can be seamlessly applied to the problem of genome assembly, the task of reconstructing a genome from a collection of DNA sequences. By projecting sequences into ordered lists of minimizers, we claim that we can achieve orders-of-magnitude improvement in runtime and memory usage over existing methods without much loss of accuracy. We expect these approaches to be essential for downstream bioinformatics applications, such as read mapping, metagenomics, and pangenomics, as well as to provide ways to better store, search, and compress large collections of sequencing data.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decoding Neural Processing of Linguistic Features From Large-Scale Intracranial Recordings and Naturalistic Language Stimuli</title>
<link href="https://hdl.handle.net/1721.1/147391" rel="alternate"/>
<author>
<name>Rosenfarb, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/147391</id>
<updated>2023-01-20T03:15:20Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Decoding Neural Processing of Linguistic Features From Large-Scale Intracranial Recordings and Naturalistic Language Stimuli
Rosenfarb, Dana
Previous research has discovered a set of areas in the brain that appears to represent information about linguistic meaning. However, no study has yet produced a comprehensive survey of how semantic information is represented in the brain utilizing high-resolution neural recordings. Using data from the Brain TreeBank, a large-scale multimodal dataset of recorded brain activity, we fit Generalized Linear Models (GLMs) to map language and vision stimuli to induced brain activity. This framework allows us to localize processing areas in the brain per feature, as well as to explore the temporal dynamics of this processing. Findings include maps relating neural activity across the brain over time and across different linguistic, auditory, and visual tasks, and indications of neural activity per word pre-onset for different regressors, such as part-of-speech, surprisal, and delta RMS.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantized Guessing Random Additive Noise Decoding - A Universal Quantized Soft-Decoder</title>
<link href="https://hdl.handle.net/1721.1/147390" rel="alternate"/>
<author>
<name>Gabhart, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/147390</id>
<updated>2023-01-20T03:47:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Quantized Guessing Random Additive Noise Decoding - A Universal Quantized Soft-Decoder
Gabhart, Evan
Guessing Random Additive Noise Decoding (GRAND) has proven to be a universal, maximum likelihood decoder. Multiple extensions of GRAND have been introduced, giving way to a class of universal decoders. GRAND itself describes a hard-detection decoder, so a natural extension was to incorporate the use of soft-information. The result was Soft Guessing Random Additive Noise Decoding (SGRAND). SGRAND assumes access to complete soft information, proving itself to be a maximum-likelihood soft-detection decoder. Physical limitations, however, prevent one from having access to perfect soft-information in practice.&#13;
&#13;
This thesis proposes an approximation to the optimal performance of SGRAND: Quantized Guessing Random Additive Noise Decoding (QGRAND). I describe the algorithm and evaluate its performance compared to hard-detection GRAND, SGRAND, and another approach to approximating SGRAND, Ordered Reliability Bits GRAND (ORBGRAND). QGRAND can also be tailored to an arbitrary number of bits of soft information, and I show that performance improves as the number of bits increases. I then use the GRAND algorithms discussed to evaluate the error correction potential of different channel codes, particularly Polarization-Adjusted Convolutional (PAC) codes, CA-Polar codes, and CRCs.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Verification of Distributed Systems with Grove</title>
<link href="https://hdl.handle.net/1721.1/147388" rel="alternate"/>
<author>
<name>Sharma, Upamanyu</name>
</author>
<id>https://hdl.handle.net/1721.1/147388</id>
<updated>2023-01-20T03:01:40Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Modular Verification of Distributed Systems with Grove
Sharma, Upamanyu
Grove is a new framework for machine-checked verification of distributed systems. Grove focuses on modular verification: it enables developers to state and prove specifications for their components (e.g. an RPC library), and to use those specifications when proving the correctness of components that build on them (e.g. a key-value service built on RPC).&#13;
&#13;
To enable modular specification and verification in distributed systems, Grove uses the idea of ownership from separation logic. Using Grove, we built a verified unreliable RPC library, in which we captured unreliability in the formal specification by using duplicable ownership. We also built a verified exactly-once RPC library, in which we reasoned about ownership transfer from client to server (and back) over an unreliable network by using the escrow pattern.&#13;
&#13;
Overall, we developed and verified an example system written in Go consisting of the RPC libraries, a sharded key-value store with support for dynamically adding new servers and rebalancing shards, a lock service, and a bank application that supports atomic transfers across accounts that live in different shards, built on top of these services. The key-value service scales well with the number of servers and the number of cores per server. The proofs are mechanized in the Coq proof assistant using the Iris library and Goose.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Future of Technology Bargaining in the Information Age</title>
<link href="https://hdl.handle.net/1721.1/147381" rel="alternate"/>
<author>
<name>Atto, Anthony R.</name>
</author>
<id>https://hdl.handle.net/1721.1/147381</id>
<updated>2023-01-20T03:31:44Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Future of Technology Bargaining in the Information Age
Atto, Anthony R.
This thesis was motivated by the advent of digital technologies and their effects on workers. Using technology as a substitute for labor is a common industrial practice and the effects of substitution (though not the decision to substitute) will continue to dominate collective bargaining in the information age.&#13;
&#13;
More relevant than substitution is the effect that digital technologies are having on the production process itself. It is becoming 'taskified' and 'digitized.' Decomposed into smaller, well-defined tasks that are also digitally compatible with each other, the entire production process is becoming easier to direct, control, and monitor. Each new task presents management with a new lever of power in two forms -- the decision to substitute and the ability to monitor.&#13;
&#13;
While substitution of technology for labor remains an issue, the complementarity of labor and technology contributions in the production process is becoming more important; apportioning value created to a factor in the production process is especially complicated and subjective with large numbers of complementary tasks.&#13;
&#13;
The need for labor education is amplified by these factors and argues for a renewed interest in technology bargaining. A policy examination of various labor-related organizations and informant discussions suggest that labor recognizes this and is responding accordingly. Bargaining agreements have data and technology clauses, 'digital union representatives' are being hired, and education on digital technologies is being prioritized.&#13;
&#13;
Keywords: Complementarity, Decision bargaining, Effects bargaining,  Industrial relations, Labor education, Labor substitution, Power, Production process, Taskification, Technology bargaining, Worker monitoring
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Workplace Change in an Age of Insecurity: Evidence from a U.S. Automaker</title>
<link href="https://hdl.handle.net/1721.1/147380" rel="alternate"/>
<author>
<name>McKenna, Claire C.</name>
</author>
<id>https://hdl.handle.net/1721.1/147380</id>
<updated>2023-01-20T03:21:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Workplace Change in an Age of Insecurity: Evidence from a U.S. Automaker
McKenna, Claire C.
What accounts for the low adoption of high-performance work practices in the U.S.? This study explores a political explanation: their survival depends on the cooperation of different social actors in organizations. Through a qualitative interview study of two underused work reforms at a U.S. auto plant, I explore how interpretations of workplace change vary across interest groups, with the goal of helping to explain what limits diffusion broadly. The first reform permits certain skilled employees, as team members, to cross job boundaries. The second reform redistributes job tasks from skilled to production employees. In insecure contexts, the concepts of opportunity and threat are a useful framework for understanding actor responses to change. Teams were most commonly viewed as opportunities, while interpretations of task redistributions were mixed: non-skilled national union representatives viewed them as an opportunity for the majority production workforce to share in the gains of technological change, whereas skilled representatives viewed them as threats and resisted them. Adoption of both reforms seemed to depend on the endorsement of local skilled leadership. This study contributes to the small literature that uses a political framework to understand the implementation of work organization changes. Further, it provides insight into other forms of insecurity informing actor responses to change--not job insecurity per se, but longer-term labor market insecurity rooted in declining occupational status. This has implications for which complementary measures might spark greater workforce commitment in firms trying to advance work reforms. Further, this study highlights enduring sources of leverage and occupation-based conflict within traditional U.S. labor unions today.
Finally, it contributes to future of work discussions by highlighting the bureaucratic structures and group interests with which firms adopting new technologies contend, suggesting that job or skill outcomes of technological change are not fixed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analysis of the Cost-Benefit of Sustainable Transformation</title>
<link href="https://hdl.handle.net/1721.1/147378" rel="alternate"/>
<author>
<name>Menda, Mihir Manoj</name>
</author>
<id>https://hdl.handle.net/1721.1/147378</id>
<updated>2023-01-20T03:41:39Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">An Analysis of the Cost-Benefit of Sustainable Transformation
Menda, Mihir Manoj
The real estate industry is a behemoth by any measure of scale: it contributes 10% of global GDP and 40% of the planet’s carbon emissions. Sustainable real estate has been on the rise over the last decade, with almost all stakeholders across the real estate lifecycle now demanding sustainably built environments. Sustainable transformation encompasses intervention in any phase(s) of the real estate lifecycle, so both an asset under construction in its design-development phase and an operational core asset considering a sustainable retrofit can undergo sustainable transformation.&#13;
&#13;
As countries make commitments to go green and achieve carbon neutrality, sustainable transformation across industries is required to achieve the bold goals set in international arenas like COP26. While no explicit industry-wide regulations yet mandate sustainable real estate globally, some regulatory bodies are pushing sustainable transformation through penalties for non-compliance, such as New York’s Local Law 97. Until governing bodies mandate regulation for sustainable real estate, the industry depends on incentivizing asset owners to sustainably transform their assets. Capital and equity markets awarding sustainably superior real estate with capital premiums at asset purchase is crucial to the support of sustainable real estate.&#13;
&#13;
In the private sector, the final decision for the sustainable transformation of an asset lies with its asset owner. Hence, an asset owner’s buy-in is integral for a real estate asset to undergo sustainable transformation. This thesis proposes a framework that aids real estate owners in evaluating investment in the sustainable transformation of their real estate assets. While the thesis only assesses the cost-benefit of sustainable transformation through building HVAC electrification, the framework can, in general, aid decision-making for any application involving building transformation, as long as the corresponding input data are available.&#13;
&#13;
The framework is applied to two strikingly different office buildings in two cities that differ in climate, regulation, and market: an office asset of 6.9 million square feet in its design-development phase in Hyderabad, India, and an indicative office asset of 0.9 million square feet in New York, US.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Peak Load and Heat Stress under Heat Waves by Scheduling Cooling and Energy Storage Systems</title>
<link href="https://hdl.handle.net/1721.1/147376" rel="alternate"/>
<author>
<name>Zhang, Zhujing</name>
</author>
<id>https://hdl.handle.net/1721.1/147376</id>
<updated>2023-01-20T04:04:25Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Mitigating Peak Load and Heat Stress under Heat Waves by Scheduling Cooling and Energy Storage Systems
Zhang, Zhujing
As the climate changes, heat waves are becoming more frequent and severe. Exposure to heat waves can lead to heat stress. Heat waves also intensify cooling demand and reduce air conditioner efficiencies, causing peaks in electricity demand that pose operational challenges to the power grid. This thesis provides simulation-based methods to mitigate peak load and heat stress under heat waves by adjusting the schedules of the cooling and energy storage systems in buildings. It demonstrates three scheduling methods: (1) cooling system scheduling, (2) energy storage system scheduling, and (3) combined cooling and energy storage system scheduling. The cooling system scheduling method involves (1) generating baseline and training data with EnergyPlus (&#119864;&#119875;) simulations, (2) fitting surrogate models that relate cooling system adjustments to the perturbations in purchased power and Standard Effective Temperature (&#119878;&#119864;&#119879;*, a comprehensive measure of thermal comfort that is found to be a useful measure of heat stress), and (3) embedding the &#119864;&#119875; data and trained models in an optimizer to schedule cooling system adjustments. These methods closely predict the optimized solutions at less computational cost than solving the problem by brute-force &#119864;&#119875; simulations. The energy storage system scheduling method relocates the power discharged from the energy storage system based on the &#119864;&#119875;-simulated baseline grid purchased power and its average value. The scheduling of the cooling and energy storage systems combines the two methods above. Case studies of these methods on a single building and a six-building neighborhood in the climates of Miami and Kuwait are offered. In these case studies, the methods reduce peak load significantly while maintaining &#119878;&#119864;&#119879;* within comfortable ranges.
Among the three methods, the combined scheduling of cooling and energy storage systems is able to reduce peak power the most.
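The storage scheduling idea (relocating discharge to the hours when baseline purchased power exceeds its average) can be sketched as follows; the load profile and storage capacity are illustrative placeholders, not the EnergyPlus case-study data:

```python
import numpy as np

# Hypothetical baseline grid purchased power over 24 hours (kW).
baseline = np.array([40, 38, 37, 36, 36, 38, 45, 55, 60, 65, 70, 78,
                     85, 90, 95, 92, 88, 80, 70, 60, 52, 48, 44, 42],
                    dtype=float)

def schedule_storage(baseline, capacity_kwh=60.0):
    """Discharge the storage system during hours when the baseline exceeds
    its average, shaving the highest peaks first toward the average until
    the stored energy is exhausted (1-hour timesteps assumed)."""
    avg = baseline.mean()
    discharge = np.zeros_like(baseline)
    remaining = capacity_kwh
    for t in np.argsort(baseline)[::-1]:   # highest-load hours first
        if baseline[t] <= avg or remaining <= 0:
            continue
        d = min(baseline[t] - avg, remaining)
        discharge[t] = d
        remaining -= d
    return baseline - discharge

shaved = schedule_storage(baseline)
```

With these numbers the full 60 kWh is spent on the two highest hours, lowering the daily peak from 95 kW to 90 kW; a real schedule would also respect charging, round-trip efficiency, and power limits.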
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building an Open Source Platform for Forensic Medical Documentation of Human Rights Violations</title>
<link href="https://hdl.handle.net/1721.1/147374" rel="alternate"/>
<author>
<name>Monsalve, Felipe</name>
</author>
<id>https://hdl.handle.net/1721.1/147374</id>
<updated>2023-01-20T03:50:07Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Building an Open Source Platform for Forensic Medical Documentation of Human Rights Violations
Monsalve, Felipe
MediCapt is a virtual platform for secure documentation of medical records, currently deployed by Physicians for Human Rights (PHR) in Kenya and the Democratic Republic of the Congo to forensically and clinically document sexual violence. Its use in documenting sexual violence has allowed for the collection of court-admissible evidence for the prosecution of perpetrators, even in areas with limited access to healthcare and low levels of literacy, where perpetrators might otherwise go free due to a lack of registered physical evidence. We have rewritten an early version of MediCapt with the guidance of PHR, with the aim of building the first widely deployable, open-source, in-the-field health data collection platform focused on forensic and clinical documentation of sexual violence and other human rights violations in remote areas. To this end, we developed a new automated server-side infrastructure and frontend for MediCapt that scales automatically to any foreseeable level of demand, is compliant with the latest security and privacy regulations and best practices, and will require little to no maintenance in the coming years. Locally, we designed a caching system for offline functionality that is easy to integrate, modular, and secure, ensuring proper functioning of the platform in the low-connectivity environments where MediCapt will be deployed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>&#120590;OS: Elastic Realms for Multi-Tenant Cloud Computing</title>
<link href="https://hdl.handle.net/1721.1/147373" rel="alternate"/>
<author>
<name>Szekely, Ariel</name>
</author>
<id>https://hdl.handle.net/1721.1/147373</id>
<updated>2023-01-20T03:35:17Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">&#120590;OS: Elastic Realms for Multi-Tenant Cloud Computing
Szekely, Ariel
Despite the enormous success of cloud computing, programming and deploying cloud applications remain challenging. Application developers are forced either to explicitly provision resources or to limit the types of applications they write to fit a serverless framework such as AWS Lambda.&#13;
&#13;
&#120590;OS is a new multi-tenant cloud operating system that allows providers to manage resources for tenants while simplifying application development. A key contribution of &#120590;OS is its novel abstraction: realms. Realms present tenants with the illusion of a single-system image and abstract boundaries between physical machines. Developers structure their applications as processes, called procs in &#120590;OS. Much like a time-sharing OS multiplexes users’ processes across a machine’s cores, &#120590;OS multiplexes tenants’ procs across the cloud provider’s physical machines. Since each tenant tends to plan for peak load, realms can improve data center utilization by enabling providers to transparently reallocate partial machines to another tenant’s realm when load dips.&#13;
&#13;
An evaluation of &#120590;OS demonstrates that a &#120590;OS-based MapReduce (&#120590;OS-MR) implementation grows quickly from 1 core to 32 and scales near-perfectly, achieving a 15.26× speedup over the same implementation running on 2 cores. Similarly, an elastic key-value service built on &#120590;OS (&#120590;OS-KV) cooperates with &#120590;OS to scale the number of kvd servers and balance shards across them according to client load. &#120590;OS also achieves high resource utilization when multiple tenants’ realms compete for a shared group of machines. For example, when &#120590;OS multiplexes a long-running &#120590;OS-MR job in one realm and a &#120590;OS-KV service with varying numbers of clients in another realm, &#120590;OS keeps utilization above 90% and transparently moves partial machines between the realms as the &#120590;OS-KV client load changes.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Arty: Expressive timbre transfer using articulation detection for guitar</title>
<link href="https://hdl.handle.net/1721.1/147372" rel="alternate"/>
<author>
<name>Franjou, Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/147372</id>
<updated>2023-01-20T03:55:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Arty: Expressive timbre transfer using articulation detection for guitar
Franjou, Sebastian
In this work, we propose a novel approach to timbre transfer. Timbre transfer is the transformation of an instrument’s timbre to match that of another instrument while preserving key musical information like pitch and loudness. Current attempts tend to rely either on MIDI pitch and velocity information or on deep learning networks. The former approach requires discarding a lot of information and hence suffers from a loss of expressivity, while the latter results in expressive but unstable and difficult-to-tune systems.&#13;
&#13;
Arty aims to address this problem by adding expression data to the collected MIDI. By detecting instrument-specific playing techniques called articulations, and transcribing these articulations as MIDI data, Arty attempts to provide an expressive yet flexible alternative to the methods above for timbre transfer from guitar. The use of MIDI allows for integration with other music performance systems and doesn’t impose a particular sound synthesis method.&#13;
&#13;
We created a new dataset, the Arty dataset, and used it in conjunction with existing data to train a model to classify right-hand and left-hand guitar playing techniques. We implemented a website as a user interface to allow users to easily convert their guitar playing to MIDI. Arty achieved fairly high accuracy on the dataset, but the user study showed that Arty’s real-world accuracy is much lower, in part because real-world data is different from, and more diverse than, our dataset. The user study did, however, reveal a strong interest in such a system among advanced virtual instrument users.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Thinking Applied to Digital Divide</title>
<link href="https://hdl.handle.net/1721.1/147370" rel="alternate"/>
<author>
<name>Singla, Akshit</name>
</author>
<id>https://hdl.handle.net/1721.1/147370</id>
<updated>2023-01-20T03:41:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Systems Thinking Applied to Digital Divide
Singla, Akshit
More than half of human society is offline, and a majority of those who are online have very limited access to the internet for a multitude of reasons. Internet technology had the promise of bridging societal inequalities by enabling interactions across all sections of society at low cost, but things did not turn out that way - what happened? As with the birth of all societal inequalities, the reinforcing feedback loop of adding value to those who already have access grew stronger under the economic incentive structures, while the reinforcing feedback loop of bringing people online took the opposite direction. Fortunately, the online market size is showing signs of saturation, and new use cases such as autonomous vehicles are emerging that shift the discussion towards better connectivity. The recent pandemic has also reminded governments across the world of their responsibility towards the lower sections of society, and of the internet's promise to enable them.&#13;
&#13;
"Digital Divide" is a term that is commonly used to mean many different things. The lack of standardization and shared vocabulary limits collaboration and creates barriers to entry for new entrants, to say nothing of awareness among those who have not yet been exposed to the issue. Due to this fragmentation between stakeholders, reporting on the issue is inconsistent across sources, and the efforts being implemented by stakeholders across the globe are rarely able to learn from each other’s successes and failures. The "technological determinism" mindset, although widely acknowledged, remains embedded in the measurements, and there is a lack of acknowledgement of the complex identity structures in modern society.&#13;
&#13;
This thesis aims to tackle these challenges by leveraging a systems thinking approach. It provides a beneficiary-first perspective on the issue and derives a revised definition that is relevant for the current timeframe (2022). Finally, a root-cause analysis model, in conjunction with an assessment framework, is proposed to empower stakeholders with the right tools for assessing the issue and finding solutions.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Continuous Tensor-Train Methods for Optimal Control Problems with the Ornstein-Uhlenbeck Operator</title>
<link href="https://hdl.handle.net/1721.1/147369" rel="alternate"/>
<author>
<name>White, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/147369</id>
<updated>2023-01-20T03:38:38Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Analysis of Continuous Tensor-Train Methods for Optimal Control Problems with the Ornstein-Uhlenbeck Operator
White, Joshua
Continuous tensor-train decomposition methods have been found to approximate optimal control policies well, under certain conditions, for dynamic programming problems within the Compressed Continuous Computation (C3) framework. We aim to utilize numerically approximated control solutions found using such tensor-train methods as importance functions for adaptive multilevel splitting algorithms, in order to investigate rare event probabilities related to stochastic dynamical systems, and thereafter to evaluate the variances and relative errors of the resulting statistical estimators. We also evaluate the efficacy of Compressed Continuous Computation in solving time-dependent Hamilton-Jacobi-Bellman partial differential equations for a limited class of systems whose solutions are exactly known, by calculating point-wise squared deviations and L2 errors between computed and analytical solutions. We find that Compressed Continuous Computation produces fruitful importance functions for adaptive multilevel splitting algorithms, resulting in probability estimators with significant reductions in variance and relative error; however, we also find that, due to significant numerical error, the C3 framework is not suitable for accurately solving time-dependent partial differential equations.
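For reference, the discrete analogue of such a decomposition (the TT-SVD of Oseledets) can be sketched as follows; this is a minimal illustration of the tensor-train format itself, not of the C3 framework's continuous, functional variant:

```python
import numpy as np

def tt_svd(A, rank):
    """Decompose a d-way tensor into a tensor train with TT-ranks <= rank
    by sequential truncated SVDs of its unfoldings."""
    dims = A.shape
    cores, r_prev = [], 1
    M = A.reshape(r_prev * dims[0], -1)
    for k, n in enumerate(dims[:-1]):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, n, r))       # core k
        M = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))           # last core
    return cores

def tt_reconstruct(cores):
    """Contract the train of 3-way cores back into a full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# A rank-1 test tensor is represented exactly at any TT-rank >= 1.
rng = np.random.default_rng(1)
A = np.einsum('i,j,k->ijk', rng.normal(size=4),
              rng.normal(size=5), rng.normal(size=6))
cores = tt_svd(A, rank=2)
```

For a low-rank tensor the truncated cores reproduce it to machine precision, which is the property the continuous methods exploit for value functions with low-rank structure.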
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Organization of Multicellular Living Systems</title>
<link href="https://hdl.handle.net/1721.1/147368" rel="alternate"/>
<author>
<name>Yang, Haiqian</name>
</author>
<id>https://hdl.handle.net/1721.1/147368</id>
<updated>2023-01-20T03:31:17Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Spatial Organization of Multicellular Living Systems
Yang, Haiqian
Cells cooperate as groups to achieve functions at the tissue level, and specific structural characteristics emerge from the local organization of neighboring cells. Analogous to classical physics, where transformations in the local structure give rise to phases and phase transitions, changes in the local structures of multicellular assemblies can be essential for a variety of vital processes, including morphogenesis, wound healing, and cancer. In this work, we use the two invariants (volume and shear) of the deformation tensor of Delaunay triangles as a pair of quantities to define the local microstates of multicellular living systems. In chapter 3, we develop configurational fingerprints based on these local structures and extract two parameters, namely the volumetric and shear order parameters, that reflect the transitions of local order in the systems. Theoretically, these two parameters form a complete and unique pair of signatures for the local structural order of a multicellular system. The evolution of these two order parameters offers a robust and experimentally accessible way to map the phase transitions in expanding cell monolayers, and during embryogenesis and invasion of epithelial spheroids. In chapter 4, we show by both simulations and experiments that the volume invariant follows a k-Gamma distribution and the shear invariant follows an exponential distribution. We further propose two temperature-like quantities for cell assemblies, in which sense we show that the periphery of an extravasating epithelial monolayer is ‘hotter’ than the core.
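The triangulation step can be sketched with SciPy on synthetic cell centroids (the full deformation-tensor invariants additionally require a reference configuration, which is omitted here; the per-triangle area stands in for the volumetric measure):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
points = rng.random((50, 2))   # synthetic cell-centroid positions

# Delaunay triangulation of the centroids; each simplex is one local
# "microstate" built from three neighboring cells.
tri = Delaunay(points)
p = points[tri.simplices]      # shape (n_triangles, 3, 2)

# Triangle areas from the 2D cross product of two edge vectors; the
# area plays the role of the volumetric quantity per triangle.
v1 = p[:, 1] - p[:, 0]
v2 = p[:, 2] - p[:, 0]
areas = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
```

Fitting a k-Gamma distribution to `areas` (and an exponential to the shear measure) would then follow the chapter 4 analysis.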
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction of greenhouse gas emissions using the sustainable systems-thinking approach by utilizing cost-effective hydrogen production with a lower environmental footprint</title>
<link href="https://hdl.handle.net/1721.1/147366" rel="alternate"/>
<author>
<name>Tanzharikov, Arman</name>
</author>
<id>https://hdl.handle.net/1721.1/147366</id>
<updated>2023-01-20T03:43:35Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Reduction of greenhouse gas emissions using the sustainable systems-thinking approach by utilizing cost-effective hydrogen production with a lower environmental footprint
Tanzharikov, Arman
Hydrogen (H₂) is an important energy carrier that can fuel the future as society seeks an energy transition with sustainable, emission-free solutions. Hydrogen is a clean-burning fuel: only water is produced when it is burned. Currently, nearly all commercially available hydrogen worldwide is produced via Steam Methane Reforming (SMR) and electrolysis ("green" hydrogen). However, SMR is responsible for associated CO₂ emissions, contributing significantly to global greenhouse gas (GHG) emissions.&#13;
&#13;
This thesis techno-economically assesses whether emission-free Methane Pyrolysis technology for hydrogen production could be operationalized in the energy industry as a sustainable technology supporting a clean energy transition strategy that reduces potent GHG emissions. It starts by providing background on the impact of GHG emissions on climate change, as covered in Marginal Abatement Cost Curve (MACC) portfolios. The Object Process Methodology (OPM) is used to compare the two leading hydrogen production technologies, SMR and electrolysis, with a relatively new technology called Methane Pyrolysis, a thermal decomposition of methane that produces clean hydrogen (without CO₂ emissions) and solid carbon. For hydrogen demand, the eXtremOS simulation model was used to forecast European socio-technical energy system scenarios and clean energy transformation pathways. The forecasted H₂ demand was used as an input assumption for a simple Levelized Cost of Hydrogen (LCOH) model to estimate and cross-reference hydrogen production costs for the three technologies, with their different capital expenses (CAPEX), fixed and variable operational expenses (OPEX), feedstock, energy requirements, and carbon emissions.&#13;
&#13;
The continuous search for emission-free operations creates an opportunity for the new technology to position itself as a direct replacement for SMR. The final sections of the thesis examine the sensitivity of critical uncertainties and the current levels of technology and market readiness, and provide the author's views and recommendations on the complexity and feasibility of Methane Pyrolysis technology and its prospects.
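A simple LCOH calculation of the kind described can be sketched as follows (all cost figures below are illustrative placeholders, not the thesis's inputs or results):

```python
def lcoh(capex, opex_fixed, opex_var_per_kg, feedstock_per_kg,
         annual_kg, lifetime_years=20, discount_rate=0.08):
    """Simple levelized cost of hydrogen ($/kg): annualize CAPEX with a
    capital recovery factor, add fixed annual costs, then per-kg costs."""
    growth = (1 + discount_rate) ** lifetime_years
    crf = discount_rate * growth / (growth - 1)   # capital recovery factor
    annualized = capex * crf + opex_fixed         # $/year
    return annualized / annual_kg + opex_var_per_kg + feedstock_per_kg

# Placeholder inputs for three production routes (illustrative only):
# (CAPEX $, fixed OPEX $/yr, variable OPEX $/kg, feedstock $/kg, kg H2/yr)
routes = {
    "SMR":               lcoh(5e8, 2.0e7, 0.10, 0.70, 3e8),
    "Electrolysis":      lcoh(8e8, 3.0e7, 0.05, 2.50, 3e8),
    "Methane pyrolysis": lcoh(6e8, 2.5e7, 0.08, 0.90, 3e8),
}
```

With these placeholders SMR comes out cheapest and electrolysis most expensive, matching the usual ordering that motivates pyrolysis as a middle path; a carbon price or solid-carbon revenue term would shift the comparison.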
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of Additive Manufacturing with CNC Sheet Metal Fabrication for Hybrid Fixtures: Design and Implementation of Powder Bed Fusion Tooling Surfaces</title>
<link href="https://hdl.handle.net/1721.1/147357" rel="alternate"/>
<author>
<name>Cunningham, Andrew T.</name>
</author>
<id>https://hdl.handle.net/1721.1/147357</id>
<updated>2023-01-20T04:00:33Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Integration of Additive Manufacturing with CNC Sheet Metal Fabrication for Hybrid Fixtures: Design and Implementation of Powder Bed Fusion Tooling Surfaces
Cunningham, Andrew T.
The objective of this project was to facilitate the integration of additive manufacturing and CNC sheet metal fabrication to create hybrid check fixtures. In this case, the tool comprises a sheet metal base and a powder bed fusion cover. Using the Agile product development framework, the team conducted a series of sprints, going from concept models to a final production tool in just over two months. Additive manufacturing investigations conducted to converge on the optimal production solution included studies on dimensional process capability, additive process type, material tradeoffs, and business factors. Moreover, several sheet metal and tubing structures were tested to achieve a highly accurate base for the additively manufactured surface. The integration of these parts was enabled by elastic-averaging-based connector geometries that also evolved throughout the different sprints in conjunction with results from efficient simulation models. The production hybrid fixture presented a range of benefits for the automotive OEM and project sponsor, General Motors (GM). Compared to traditional fixtures, the lead time was shortened by 92%, the cost was reduced by 65%, and recyclability increased from 59% to 100%. These benefits were achieved while meeting all product owner requirements and technical specifications. Given the increasing demand for check fixtures owing to shortening product lifecycles, it is expected that the savings generated can scale up significantly. Moreover, many of the techniques developed can be applied to other types of fixtures, such as those used for welding and subassembly. The project was also successful in fulfilling an internal company goal of generating sufficient traction to launch a series of collaborative initiatives between the sheet metal fabrication and additive manufacturing teams at GM.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laser Powder Bed Fusion Process Characterization: Design of Experiments for Dimensionally Accurate Thin Walls</title>
<link href="https://hdl.handle.net/1721.1/147356" rel="alternate"/>
<author>
<name>Flam, Rachael M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147356</id>
<updated>2023-01-20T03:22:48Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Laser Powder Bed Fusion Process Characterization: Design of Experiments for Dimensionally Accurate Thin Walls
Flam, Rachael M.
Metal Laser Powder Bed Fusion (M-LPBF) is a method of additive manufacturing that enables the fabrication of complex components that would not otherwise be possible with conventional manufacturing techniques. M-LPBF is well suited for aerospace applications because of its ability to fabricate geometrically complex and efficient components. It can also reduce program costs and schedules. Recent advancements in material development have the potential to widen the design space even further for aerospace applications, but the initial process of evaluating a new material on an M-LPBF printer can be time-consuming and costly. In this thesis, a framework to improve the efficiency and structure of M-LPBF process development is proposed. First, simulations of the melt pool were performed to understand the impact of primary process parameters on the dimensions of the melt pool. Then, tools to model the melt pool were tested and used in combination with analytical equations to identify an acceptable processing window for the M-LPBF process. Following this process parameter filtering, physical experiments were executed to investigate the impact of process and design parameters on various outputs connected to the melt pool, density, dimensional accuracy, and surface roughness of the printed coupons. Optimal parameter ranges can then be determined according to different design and process priorities. The framework developed in this project enables a material- and machine-agnostic approach to process parameter selection in less time and at lower cost.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Clinical Trials Operations: Supply Chain Management and Framework Development</title>
<link href="https://hdl.handle.net/1721.1/147355" rel="alternate"/>
<author>
<name>Ait Mbiriq, Imane</name>
</author>
<id>https://hdl.handle.net/1721.1/147355</id>
<updated>2023-01-20T04:06:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Remote Clinical Trials Operations: Supply Chain Management and Framework Development
Ait Mbiriq, Imane
Remote clinical trials present a new approach to revolutionizing traditional clinical trials in order to decrease costs, accelerate processes, and improve the experience for participants and trial staff. The Covid-19 pandemic significantly encouraged the implementation of remote clinical trials, since it became harder to reach participants and patients. Tufts Medical Center aims to adopt remote clinical trial practices for its future clinical trials. These trials include (1) a phase 2b clinical trial testing the effectiveness of niclosamide in shortening the Covid-19 contagious period in children and adolescents and (2) a data collection trial with participants suffering from Long Covid symptoms.&#13;
&#13;
Although clinical trials can have different parameters and processes, this thesis suggests a general framework that can guide Tufts Medical Center in planning future remote trials, including trial design, participant recruiting, management of the trial supply inventory, and data collection during and at the end of the trial. The thesis also discusses limitations and points of failure of the generalized framework.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric Properties of Learned Representations</title>
<link href="https://hdl.handle.net/1721.1/147353" rel="alternate"/>
<author>
<name>Wang, Tongzhou</name>
</author>
<id>https://hdl.handle.net/1721.1/147353</id>
<updated>2023-01-20T03:32:24Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Geometric Properties of Learned Representations
Wang, Tongzhou
In machine learning, representation learning refers to optimizing a mapping from data to some representation space (usually generic vectors in Rᵈ for some pre-determined d much lower than the data dimension). While such training often uses no supervised labels, the learned representations have proved very useful for solving downstream tasks. Such successes sparked an enormous amount of interest in representation learning methods among both academic researchers and practitioners. Despite this popularity, it is not always clear what representation learning objectives are optimizing for, and how to design representation learning methods for new domains and tasks (such as reinforcement learning). In this thesis, we consider the structures captured by two geometric properties of learned representations: invariances and distances. From these two perspectives, we start by thoroughly analyzing the widely adopted contrastive representation learning, uncovering that it learns certain structures and relations among data. Then, we describe two new representation learning methods for reinforcement learning and control, which respectively capture the optimal planning cost (a distance) and the information invariant to environment noises.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature Disparity Comparisons for Campus Heat Vulnerabilities</title>
<link href="https://hdl.handle.net/1721.1/147350" rel="alternate"/>
<author>
<name>Futami, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/147350</id>
<updated>2023-01-20T04:06:09Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Temperature Disparity Comparisons for Campus Heat Vulnerabilities
Futami, Lauren
For the past several decades, Earth has experienced the detrimental effects of climate change, such as heightened temperatures, extreme weather, and wildlife loss, and it continues to experience these consequences today. Because of this, people at various levels of society – global, country, state, city, community – need to prepare the next steps forward regarding how to live and build in this new and changing environment. To prepare, it becomes vital to measure how the current ecosystem and landscape are reacting to climate change, both for human comfort and for more efficient energy usage. In this research, climate modules were built and distributed across MIT’s campus to measure air temperature, ground temperature, humidity, pressure, and light every three minutes over 24 hours. The campus temperatures were compared with established temperature measurements around the surrounding Cambridge city area as well as Boston Logan Airport gathered by Weather Underground. Areas on MIT’s campus measured average temperatures comparable to those measured by Weather Underground throughout the 24-hour measurement intervals, but recorded higher maximum daytime temperatures. Within MIT’s campus, the climate modules also recorded varying temperatures, signaling that MIT’s campus does not have a single uniform temperature, but rather disparate temperature readings depending on the surrounding area and materials. As a result, this research informs MIT’s future decisions on possible energy allocation among existing buildings as well as planning for subsequent construction of new structures.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Practical Search with Voronoi Distributed Autonomous Marine Swarms</title>
<link href="https://hdl.handle.net/1721.1/147349" rel="alternate"/>
<author>
<name>Evans, Nicholas Craig</name>
</author>
<id>https://hdl.handle.net/1721.1/147349</id>
<updated>2023-01-20T03:28:54Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Practical Search with Voronoi Distributed Autonomous Marine Swarms
Evans, Nicholas Craig
The search for underwater threats in littoral regions is a problem that has been researched for nearly a century. However, recent developments in autonomy and robotics have made this issue more complex. The advent of capable autonomous underwater vehicles lends a 21st-century flair to this traditional problem. These vehicles can be smaller, quieter, and expendable; therefore, new methods and tactics are needed to detect and track them. The use of a swarm of marine robots can increase the likelihood of uncovering these threats. This thesis provides various Voronoi partition-based methods to autonomously control a swarm of identically capable autonomous surface vessels in a limited coverage and tracking problem. These methods increase the probability of interdiction of an adversary vehicle crossing a defined region. The results achieved from Monte Carlo simulations demonstrate how different protocols of swarm movement can improve detection probability compared to a stationary swarm, provided the detection capability does not change. The swarm control algorithms are deployed on Clearpath Heron USVs to validate the autonomy algorithms.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Clinical Trial Operations: Patient Education for Medical and Wearable Device Use</title>
<link href="https://hdl.handle.net/1721.1/147343" rel="alternate"/>
<author>
<name>Smith, Carly Madeleine</name>
</author>
<id>https://hdl.handle.net/1721.1/147343</id>
<updated>2023-01-20T04:02:42Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Remote Clinical Trial Operations: Patient Education for Medical and Wearable Device Use
Smith, Carly Madeleine
Consumer wearable devices with the capability to remotely collect longitudinal physiological data used for machine learning and artificial intelligence are set to revolutionize healthcare, including by enabling remote clinical trials. Yet there is no regulatory framework in place to standardize their utilization. In traditional clinical studies, user-related error is minimized, as a designated clinician performs all physiological measurements on each subject. In remote clinical settings, however, this standardization is lost, as each participant becomes responsible for collecting their own physiological data. Patient education materials for remote studies must be designed intentionally to minimize user-related factors such as misuse and nonuse of the device, as these mistakes introduce heterogeneity into, and devalue, longitudinal physiological data sets. This thesis project addresses the current state of remote clinical trial operations and provides a framework for human-subjects researchers to establish their own standardized remote clinical trial operations. Specifically, it focuses on the creation of intentional patient education materials grounded in fundamental principles of human cognition to reduce user-related error in wearable device operation.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Existential Belief and Epistemic Modals</title>
<link href="https://hdl.handle.net/1721.1/147342" rel="alternate"/>
<author>
<name>Močnik, Maša</name>
</author>
<id>https://hdl.handle.net/1721.1/147342</id>
<updated>2023-07-03T05:44:11Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Existential Belief and Epistemic Modals
Močnik, Maša
The thesis, based on Močnik (2019a,b,c), proposes a new semantics for epistemic modals and doxastic attitudes, based on the behaviour of epistemic modals embedded under doxastic attitudes. The core data comes from Slovenian, where the existential belief verb dopuščati (‘allow for the possibility’) does not embed universal epistemic force.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating the Design Process Through Natural Language Processing-based Idea Filtering</title>
<link href="https://hdl.handle.net/1721.1/147338" rel="alternate"/>
<author>
<name>Edwards, Kristen M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147338</id>
<updated>2023-01-20T03:40:43Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Accelerating the Design Process Through Natural Language Processing-based Idea Filtering
Edwards, Kristen M.
The following treatise explores the use of natural language processing to accelerate the design process in various domains by automating idea filtering. During the design of products or programs, a bottleneck often arises when experts need to filter through an exorbitant number of ideas, searching for those that are most innovative, creative, relevant, or that best exhibit any number of other subjective characteristics. We observe this bottleneck when filtering entrepreneurial ideas for innovation, when filtering early-stage design concepts for creativity and usefulness, and when filtering literature for relevance toward policy-informing evidence syntheses. Motivated by the common challenge of idea filtering in various design domains, my research explores the use of natural language processing (NLP) for accelerating design through automated idea filtering. My team and I investigate the possibility of using machine learning to predict expert-derived creativity assessments of design ideas from more accessible non-expert survey results. We demonstrate the ability of machine learning models to predict design metrics from the design itself and textual survey information. Our results show that incorporating NLP improves prediction results across design metrics, and that clear distinctions in the predictability of certain metrics exist. We go on to explore the effectiveness of using NLP to accelerate literature screening for designing evidence-based policies and programs. In this research, we introduce the use of transformer models for idea filtering and evaluation. Transformer-based models have produced state-of-the-art results in NLP tasks such as language translation, question answering, reading comprehension, and sentiment analysis. Our results show that the fine-tunable transformer-based models achieve the highest text classification accuracy, 79%, accurately evaluating and filtering our textual dataset.
Furthermore, we observe that model accuracy improves with training data size, with diminishing marginal effect. These findings can facilitate informed decision-making regarding the trade-off between model accuracy and manual labeling effort, increasing efficiency. After demonstrating the effectiveness of using NLP to accelerate literature screening, we next aimed to decrease the level of effort required of expert reviewers to generate training data. To train an idea-filtering model, we need a labeled dataset of ideas; however, obtaining labeled data is a challenge for engineering and design applications, as experts are expensive and, therefore, expert-labeled datasets are as well. We were motivated to explore avenues to decrease the amount of training data needed by using active learning (AL). AL is the concept that a machine learning algorithm can perform better with less training data if it is allowed to choose the data from which it learns. We find that data selection techniques that incorporate active learning result in higher F1 scores, a more balanced training set, and fewer necessary labeled training instances. These results suggest that active learning is effective in decreasing expert level of effort for NLP-based idea filtering with highly imbalanced data. We ultimately find that NLP can accelerate design processes in various domains by automating idea filtering and further decreasing the level of effort required of human experts.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affordable Housing Production in the Metropolis: Potential Options and Implications of Successors to New York City's 421-a Tax Exemption</title>
<link href="https://hdl.handle.net/1721.1/147334" rel="alternate"/>
<author>
<name>Katz, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/147334</id>
<updated>2023-01-20T04:02:17Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Affordable Housing Production in the Metropolis: Potential Options and Implications of Successors to New York City's 421-a Tax Exemption
Katz, Ashley
On June 15, 2022, New York City’s largest tax incentive to build affordable housing, the 421-a(16) Affordable New York Housing Program, expired. The 421-a program offered private capital increased returns through a tax abatement in exchange for a number of affordable units at varying levels of affordability. 421-a gave those with low incomes access to affordable homes in neighborhoods typically out of reach. With no legislative agreement for renewal or a modified future program, the pipeline of affordable housing development in New York City will be diminished.&#13;
&#13;
This thesis offers an analytical tool and framework to determine outcome potentials for a successor program to 421-a. Using two case study financial analyses in neighborhoods representative of the range in market-rate rents citywide, this thesis: (1) comparatively examines returns based on the recently lapsed 421-a program, the Governor of New York’s proposal for a replacement, and a completely market-rate development without subsidy, (2) performs a sensitivity analysis determining outcome returns for private capital at a range of affordability requirements, (3) tests these outcomes for industry feasibility, (4) aggregates and analyzes survey response data to develop a conceptual threshold of program requirements and find the optimal policy point at which the greatest number of units and deepest affordability is feasible for private capital to consider for development.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Stochastic Reduced-Order Modeling for Autonomous Ocean Platforms</title>
<link href="https://hdl.handle.net/1721.1/147333" rel="alternate"/>
<author>
<name>Ryu, Young Hyun (Tony)</name>
</author>
<id>https://hdl.handle.net/1721.1/147333</id>
<updated>2023-01-20T03:37:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Adaptive Stochastic Reduced-Order Modeling for Autonomous Ocean Platforms
Ryu, Young Hyun (Tony)
Onboard forecasting and data assimilation are challenging but essential for unmanned autonomous ocean platforms. Due to the numerous operational constraints for these platforms, efficient adaptive reduced-order models (ROMs) are needed. In this thesis, we first review existing approaches and then develop a new adaptive Dynamic Mode Decomposition (DMD)-based, data-driven, reduced-order model framework that provides onboard forecasting and data assimilation capabilities for bandwidth-disadvantaged autonomous ocean platforms. We refer to the new adaptive ROM as the incremental, stochastic Low-Rank Dynamic Mode Decomposition (iLRDMD) algorithm. Given a set of high-fidelity and high-dimensional stochastic forecasts computed in remote centers, this framework enables i) efficient and accurate send and receive of the high-fidelity forecasts, ii) incremental update of the onboard reduced-order model, iii) data-driven onboard forecasting, and iv) onboard ROM data assimilation and learning. We analyze the computational costs for the compression, communications, incremental updates, and onboard forecasts. We evaluate the adaptive ROM using a simple 2D flow behind an island, both as a test case to develop the method, and to investigate the parameter sensitivity and algorithmic design choices. We develop the extension of deterministic iLRDMD to stochastic applications with uncertain ocean forecasts. We then demonstrate the adaptive ROM on more complex ocean fields ranging from univariate 2D, univariate 3D, and multivariate 3D fields from multi-resolution, data-assimilative Multidisciplinary Simulation, Estimation, and Assimilation Systems (MSEAS) reanalyses, specifically from the real-time exercises in the Middle Atlantic Bight region. We also highlight our results using the Navy’s Hybrid Coordinate Ocean Model (HYCOM) forecasts in the North Atlantic region.
We then apply the adaptive ROM onboard forecasting algorithm to interdisciplinary applications, showcasing adaptive reduced-order forecasts for onboard underwater acoustics computations and forecasts, as well as for exact time-optimal path-planning with autonomous surface vehicles.&#13;
&#13;
For stochastic forecasting and data assimilation onboard the unmanned autonomous ocean platforms, we combine the stochastic ensemble DMD method with the Gaussian Mixture Model - Dynamically Orthogonal equations (GMM-DO) filter. The autonomous platforms can then perform principled Bayesian data assimilation onboard and learn from the limited and gappy ocean observation data and improve onboard estimates. We extend the DMD with the GMM-DO filter further by incorporating incremental DMD algorithms so that the stochastic ensemble DMD model itself is updated with new measurements. To address some of the inefficiencies in the first combination of the stochastic ensemble DMD with the GMM-DO filter, we further introduce the GMM-DMD algorithm. This algorithm not only uses the stochastic ensemble DMD as a computationally efficient forward model, but also employs the existing decomposition to fit the GMM to and perform Bayesian updates on. We demonstrate this incremental stochastic ensemble DMD with GMM-DO and GMM-DMD using a real at-sea application in the Middle Atlantic Bight region. We employ a 300-member set of stochastic ensemble forecasts for the “Positioning System for Deep Ocean Navigation - Precision Ocean Interrogation, Navigation, and Timing” (POSYDON-POINT) sea experiment, and highlight the capabilities of reduced data assimilation using simulated twin experiments.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building an urban life sciences district in Midtown Cleveland: An opportunistic development proposal that requires private and public collaboration</title>
<link href="https://hdl.handle.net/1721.1/147332" rel="alternate"/>
<author>
<name>Vaughn, Zachary T.</name>
</author>
<id>https://hdl.handle.net/1721.1/147332</id>
<updated>2023-01-20T03:13:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Building an urban life sciences district in Midtown Cleveland: An opportunistic development proposal that requires private and public collaboration
Vaughn, Zachary T.
The life science industry in the United States has experienced exceptional growth in the past half-decade. Private and public funding continues to pour into life sciences; top talent is the key source of new products and innovations; and lab space is one of the fastest pre-leased asset classes in all of real estate. With the number two best hospital in the world — accompanied by other top hospitals and medical universities — it is hard to believe that Cleveland, Ohio has not yet overcome its nickname as the “Mistake on the Lake." This thesis aims to validate and propose an urban life sciences ecosystem in Midtown Cleveland by amalgamating Cleveland’s existing life science infrastructure and talent with the international demand for new biotech and pharmaceutical research and development.&#13;
&#13;
The thesis is sectioned into six chapters, the first chapter being the explanation of the thesis and its overall framework. Chapters II and III provide overviews of both Cleveland and the life science industry, whereas Chapter IV melds these two subjects together and explains the potential opportunity for life science development in Cleveland. This chapter also discusses the recent Cleveland Innovation District and the public-nonprofit partnerships currently in place. Finally, Chapter V presents a development proposal of a ten-acre site in Midtown Cleveland, focusing attention on one laboratory building in particular. The thesis concludes in Chapter VI with the writer’s final thoughts and key financial considerations.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging Time Preferences and Social Preferences</title>
<link href="https://hdl.handle.net/1721.1/147331" rel="alternate"/>
<author>
<name>Chen, Xi (Cathy)</name>
</author>
<id>https://hdl.handle.net/1721.1/147331</id>
<updated>2023-01-20T03:41:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Bridging Time Preferences and Social Preferences
Chen, Xi (Cathy)
While an immense volume of research has been conducted on time preferences and social preferences, few studies have investigated whether there is any interaction and/or trade-off between these two important preferences in human decision-making. In this work, we fill this gap in the literature by investigating how time preferences (with a focus on procrastination) are affected by tasks’ prosocial nature. To do so, we designed an experiment to test the hypothesis that people have varied preferences for timing flexibility in deadlines when working on prosocial tasks vs. self-interested tasks. We propose that people may procrastinate more on prosocial tasks, because they might procure positive diagnostic utilities of a superior self-image by merely committing to a prosocial task, instead of completing it. Analyzing individuals’ preferences for different working contracts via conjoint analysis with three features – workload, earnings, and deadlines – we found suggestive evidence that supports the hypothesis. Limitations of the current studies and plans for future research are also discussed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Communication of Key Concepts in Commercial Real Estate Analysis and Investment</title>
<link href="https://hdl.handle.net/1721.1/147329" rel="alternate"/>
<author>
<name>Demirchelie, Elaheh</name>
</author>
<id>https://hdl.handle.net/1721.1/147329</id>
<updated>2023-01-20T03:03:23Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Visual Communication of Key Concepts in Commercial Real Estate Analysis and Investment
Demirchelie, Elaheh
The book “Commercial Real Estate Analysis and Investments” by David M. Geltner, Norman G. Miller, Jim Clayton, and Piet Eichholtz is a comprehensive text developed by pioneers in the industry that covers fundamental concepts in real estate investment and analysis.&#13;
&#13;
While the book uses some visual exhibits to communicate fundamental definitions, most of the content is written in text format with minimal visual communication strategies. This thesis aims to create an original visual translation of the book’s content by creating new visual diagrams and communication strategies. This visual research is designed to deliver the book’s lessons to a broader audience active in the real estate industry who come from varying degrees of academic and professional background in real estate. This unique investigation serves as the basis for a short book that will be published in the future, based on the learnings derived from this body of work.&#13;
&#13;
This thesis provides new insights and methodologies into visualizing commercial real estate analysis concepts through the use of 2D diagrams and illustrations. There is currently no evidence of this type of research or product in the real estate industry in a comprehensive format. Thus, it was necessary to dive into the basic essentials of diagramming to explore possibilities for future development and possible challenges that one would face in translating real estate fundamentals visually.&#13;
&#13;
Academic literature and industry research provide us with many resources that suggest visual communication increases attention span, recognition, and memorization of ideas. While many other industries have books and articles with visual pedagogies in place, through analysis of the broader real estate development pedagogies, I found a lack of visual literacy in teaching fundamental concepts in the real estate industry. This thesis aims to deliver the first product of its kind to fill in the gap in teaching essential concepts in the real estate industry through visual communication methodologies.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ethnography as Craft: Rendering the ‘Emic’ Space of a Server Farm using a 3-D printer</title>
<link href="https://hdl.handle.net/1721.1/147327" rel="alternate"/>
<author>
<name>Gonzalez, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/147327</id>
<updated>2023-01-20T03:57:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Ethnography as Craft: Rendering the ‘Emic’ Space of a Server Farm using a 3-D printer
Gonzalez, Steven
Drawing on ethnographic data from server farms, this Master’s thesis explores the promise of sculpture as a method for ethnographic representation and anthropological scholarship. Inspired by Paul Atkinson’s ethnographic study of glassblowers, I frame ethnography as a craft practice, introducing an experimental method called ethnographic sculpture to generate insights about spatiality and culture that purely textual accounts cannot. To reproduce the lived experience of my research participants and the social world of data centers in which they are found, I propose a three-dimensional (3D) approach to crafting an ethnographic sculpture: 1) the material, 2) the semiotic, and 3) the phenomenal. In what follows, I enlist my archive of field notes and interview transcripts to flesh out the material, semiotic, and phenomenal aspects of life inside data centers. I explore themes including the embodied knowledge and sensory habitus of workers, oceanic and biotic metaphors in the narration of thermodynamics, the invisibility of air as a medium, and the unreliability of numbers for apprehending the complexity of servers, air conditioners, and the ensemble of technical systems in data centers. I then walk readers through my creative process of rendering my ethnography into a sculpture: a 3D-printed model of a data center augmented with additional elements (fabrics, clays, lighting) to simulate the ‘emic’ world of data centers. I conclude with a reflection on the affordances of sculpture as a medium for ethnographic representation that is as experiential as it is descriptive.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bandit Problems under Censored Feedback</title>
<link href="https://hdl.handle.net/1721.1/147326" rel="alternate"/>
<author>
<name>Guinet, Gauthier Marc Benoit</name>
</author>
<id>https://hdl.handle.net/1721.1/147326</id>
<updated>2023-01-20T03:36:03Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Bandit Problems under Censored Feedback
Guinet, Gauthier Marc Benoit
In this thesis, we study sequential decision-making models where the feedback received by the principal depends on strategic uncertainty (e.g., agents’ willingness to follow a recommendation) and/or random uncertainty (e.g., loss or delay in arrival of information). Such challenges often arise in AI-driven platforms, with applications in recommender systems, revenue management, or transportation. We model and study this class of problems through the lens of multi-armed and contextual bandits evolving in censored environments. Our goal is to estimate the performance loss due to censorship in the context of classical algorithms designed for uncensored environments. Our main contributions include the introduction of a broad class of censorship models and their analysis in terms of the effective dimension of the problem – a natural measure of its underlying statistical complexity and main driver of the regret bound. In particular, the effective dimension allows us to maintain the structure of the original problem at first order, while embedding it in a bigger space, and thus naturally leads to results analogous to uncensored settings. Our analysis involves a continuous generalization of the Elliptical Potential Inequality, which we believe is of independent interest. We also discover an interesting property of decision-making under censorship: a transient phase during which initial misspecification of censorship is self-corrected at an extra cost, followed by a stationary phase that reflects the inherent slowdown of learning governed by the effective dimension.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Thermo-chemo-mechanically Coupled Phenomena in Frontal Polymerization</title>
<link href="https://hdl.handle.net/1721.1/147324" rel="alternate"/>
<author>
<name>Li, Xuanhe</name>
</author>
<id>https://hdl.handle.net/1721.1/147324</id>
<updated>2023-01-20T03:09:26Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Investigation of Thermo-chemo-mechanically Coupled Phenomena in Frontal Polymerization
Li, Xuanhe
Frontal polymerization is a proven energy-saving and rapid method of synthesizing polymers with good mechanical properties. With a small energy input, a reaction front propagates rapidly through the sample, transforming monomer (liquid or soft gel) into polymer (stiff solid), and is self-sustained by heat released from the polymerization reaction itself. A thermo-chemically coupled model has been proposed to describe the frontal polymerization process, and recent experiments have shown that such coupling could lead to an unstable propagating front. However, the influence of mechanical forces has been absent in previous analyses of frontal polymerization, which could be significant considering the local volume change caused by thermal expansion and chemical shrinkage as the front propagates. In this thesis, we will explore the mechanical behavior and potential thermo-chemo-mechanically coupled effects that may emerge during frontal polymerization of soft gels. We will show that a non-uniform residual stress distribution could be generated due to differences in thermal and chemical properties on the two sides of the propagating front. Our experiments further confirm that the emergence of stress could in turn influence the propagation of the front, and our model describes such coupling effects to predict the dynamics of a propagating front in agreement with experimental observation. Our findings suggest that mechanical effects need to be taken into consideration for industrial applications of frontal polymerization at large scales.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of Additive Manufacturing with CNC Sheet Metal Fabrication for Hybrid Fixtures: Design and Implementation of Precision Assembly Interfaces</title>
<link href="https://hdl.handle.net/1721.1/147323" rel="alternate"/>
<author>
<name>El Khatib, Ibrahim H.</name>
</author>
<id>https://hdl.handle.net/1721.1/147323</id>
<updated>2023-01-20T04:00:21Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Integration of Additive Manufacturing with CNC Sheet Metal Fabrication for Hybrid Fixtures: Design and Implementation of Precision Assembly Interfaces
El Khatib, Ibrahim H.
The objective of this project was to facilitate the integration of additive manufacturing and CNC sheet metal fabrication to create hybrid check fixtures. In this case the tool is a hybrid of a sheet metal base and an additive check surface. Using the Agile product development framework, the team conducted a series of sprints going from concept models to a final production tool in just over two months. Additive manufacturing investigations to converge on the best production solution included studies in dimensional process capability, additive process type, material tradeoffs, and business factors. Moreover, several sheet metal and tubing structures were tested to achieve a highly accurate base for the additively manufactured surface. The integration of these parts was enabled by elastic-averaging-based connector geometries that also evolved throughout the different sprints in conjunction with results from simple simulation models. The production hybrid fixture presented a range of benefits for the automotive OEM and project sponsor, General Motors (GM). Compared to traditional fixtures, lead time was shortened by 92%, cost was reduced by 65%, and recyclability increased from 59% to 100%. These benefits were achieved while meeting all product owner requirements and technical specifications. Given the increasing demand for check fixtures owing to shortening product lifecycles, it is expected that the savings generated can scale up significantly. Moreover, many of the techniques developed can be applied to other types of fixtures such as those used for welding and subassembly. The project was also successful at fulfilling an internal company goal of generating sufficient traction to launch a series of collaborative initiatives between the sheet metal fabrication and additive manufacturing teams at GM.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of the Surface Roughness of Overhangs manufactured by Laser Powder Bed Fusion Process using Design of Experiments</title>
<link href="https://hdl.handle.net/1721.1/147322" rel="alternate"/>
<author>
<name>Lim, Xuan Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/147322</id>
<updated>2023-01-20T03:02:43Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Characterization of the Surface Roughness of Overhangs manufactured by Laser Powder Bed Fusion Process using Design of Experiments
Lim, Xuan Yi
Metal Laser Powder Bed Fusion (M-LPBF) is a method of additive manufacturing that enables the fabrication of complex components that would not be possible through conventional manufacturing methods. M-LPBF is well suited for aerospace applications not only because of its ability to fabricate complex and efficient components, but also because it can reduce program cost and schedule. Recent advancements in material development could open the design space even further for aerospace applications, but the initial process of evaluating a new material on an M-LPBF printer can be time-consuming and costly. In this work, a framework to improve the efficiency and structure of M-LPBF process development is proposed. First, simulations of the melt pool were performed to understand the impact of primary process parameters on the dimensions of the melt pool. Then, tools to model the melt pool were tested and used in combination with analytical equations to identify an acceptable processing window for the M-LPBF process. Following this process-parameter filtering, physical experiments were executed to investigate the impact of process and design parameters on various outputs connected to the melt pool, density, dimensional accuracy, and surface roughness of the printed coupons. Optimal parameter ranges can then be determined according to different design and process priorities. The framework developed in this project enables a material- and machine-agnostic approach to process parameter selection in less time and at a lower cost.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Machine Learning Training Jobs</title>
<link href="https://hdl.handle.net/1721.1/147321" rel="alternate"/>
<author>
<name>Wang, Weiyang</name>
</author>
<id>https://hdl.handle.net/1721.1/147321</id>
<updated>2023-01-20T03:46:08Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Machine Learning Training Jobs
Wang, Weiyang
This thesis explores a novel approach for building direct-connect DNN training clusters. The proposed system, called TopoOpt, co-optimizes the distributed training process across three dimensions: computation, communication, and network topology. TopoOpt uses a novel alternating optimization technique and a group theory-inspired algorithm to find the best network topology and routing plan, together with parallelization strategy, for distributed DNN training. To motivate this research, we measure the communication patterns of distributed DNN workloads at Meta. Simulations with six real distributed training models show that, compared to similar-cost Fat-tree interconnects, TopoOpt reduces DNN training time by up to 3.4× on a 128-server cluster. Importantly, TopoOpt’s performance matches an ideal network using an abstract full bisection bandwidth switch, which costs 3.2× more. Experiments with a 12-node prototype demonstrate the feasibility of TopoOpt. The prototype shows that with 4×25 Gbps interfaces, TopoOpt’s training throughput is comparable to the ideal baseline of a 100 Gbps full bisection bandwidth network. TopoOpt is the first system with entirely commodity hardware that co-optimizes topology and parallelization strategy for DNN workloads and is currently being evaluated for deployment at Meta.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trade-Space Analysis of Liquid Hydrogen Propulsion Systems for Electrified Aircraft</title>
<link href="https://hdl.handle.net/1721.1/147320" rel="alternate"/>
<author>
<name>White, Andrew Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/147320</id>
<updated>2023-01-20T03:38:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Trade-Space Analysis of Liquid Hydrogen Propulsion Systems for Electrified Aircraft
White, Andrew Scott
This thesis assesses the feasibility of turbo-, hybrid-, and fully-electric aircraft propulsion systems to enable more efficient air transport. A modular optimization framework was developed to quantify system performance for single-aisle transport aircraft with a mission similar to a Boeing 737 MAX 8. Various propulsion systems leveraging superconducting motors, boundary layer ingestion, high-temperature PEM fuel cells, and liquid hydrogen fuel were examined. Aviation turbine fuel (ATF) and liquid hydrogen were compared using the payload-fuel energy intensity (PFEI), defined as the fuel energy required per product of range and payload.&#13;
&#13;
For a given mission, it was found that a hydrogen-fueled fully-electric configuration required similar fuel energy compared to an ATF-burning turbo-fan propulsion system (PFEI = 5.0). Relative to these systems, a hydrogen-fueled turbo-fan had 14% lower PFEI, an ATF-burning turbo-electric propulsion system had 23% higher PFEI, a hydrogen-fueled turbo-electric propulsion system had 8% lower PFEI, and a hydrogen-fueled hybrid-electric had 3% lower PFEI for the same mission.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Nocturnal Itch And Its Impact On Sleep Using Machine Learning And Radio Signals</title>
<link href="https://hdl.handle.net/1721.1/147319" rel="alternate"/>
<author>
<name>Ouroutzoglou, Michail</name>
</author>
<id>https://hdl.handle.net/1721.1/147319</id>
<updated>2023-01-20T03:54:02Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Quantifying Nocturnal Itch And Its Impact On Sleep Using Machine Learning And Radio Signals
Ouroutzoglou, Michail
Today, chronic itch affects up to 15% of the population and is associated with over $90 billion in annual expenditures in the US. Despite the interest in this area, there is still no solution for quantifying nocturnal scratching and its impact on patients’ sleep quality in an objective, sensitive, and privacy-preserving way. In this work we collect a large nocturnal scratching dataset, consisting of 370 nights of infrared footage, radio-frequency (RF) data, and human annotations of scratching. Using this data, we develop a neural network model that can detect occurrences of nocturnal scratching using only radio signals. The developed model achieves very high accuracy in measuring meaningful scratching metrics across a diverse population of patients. Additionally, by utilizing prior art on extracting sleep stages from radio signals, we can gain insights into the effect of itch on the sleep quality of chronic itch patients, especially relative to healthy individuals.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework Development for Remote Clinical Trials: Assembly Process Design</title>
<link href="https://hdl.handle.net/1721.1/147318" rel="alternate"/>
<author>
<name>Lin, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/147318</id>
<updated>2023-01-20T03:48:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Framework Development for Remote Clinical Trials: Assembly Process Design
Lin, Ryan
Clinical trials are facing new difficulties after the COVID-19 outbreak, with enrollment decreasing by about half after January 2020. Tufts Clinical and Translational Research Center has partnered with the Massachusetts Institute of Technology to develop a framework for a “clinical-trial-in-a-box” for use in fully remote clinical trials, which would reduce the risk of infection by removing the need for physical contact. The goal is to increase enrollment and create a safe system in which to conduct medical research while requiring as little knowledge of technology from the participant as possible, in order to include the most potential participants and eventually expand into rural regions.&#13;
&#13;
This thesis focuses on the assembly process of this framework, detailing the steps from receiving trial materials to shipping out the finished trial box. For more depth into the other aspects of this project, please refer to the other theses from the project group: Imane Ait Mbiriq, Carly Smith, and Joyce Noh.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved Guarantees for Learning GMMs</title>
<link href="https://hdl.handle.net/1721.1/147317" rel="alternate"/>
<author>
<name>Liu, Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/147317</id>
<updated>2023-01-20T03:38:31Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Improved Guarantees for Learning GMMs
Liu, Allen
Mixtures of Gaussians (GMMs) are one of the most commonly used statistical models. They are typically used to model data coming from two or more heterogeneous sources and have applications in a wide variety of fields including statistics, biology, physics and computer science. A fundamental task at the core of many of these applications is to learn the parameters of a mixture of Gaussians from samples. Starting with the seminal work of Karl Pearson in 1894 [81], there has been a long line of work on this problem [32, 6, 93, 48, 63, 78, 59, 49, 46, 39, 13, 65].&#13;
&#13;
Despite extensive work, several important questions have remained open for decades. We address two of those here. First, we study the problem of clustering in polynomial time, in terms of both the dimension d and the number of components k. While an exponential dependence on k is necessary for learning in the worst case, it is possible to achieve a better dependence if one assumes that the components are clusterable. More precisely, for a mixture of k isotropic Gaussians in R^d, as long as the means are separated by Ω(√log k), it is information-theoretically possible to cluster and learn the parameters in polynomial time. Despite recent advances [67, 55, 46], existing polynomial-time algorithms all require a larger separation of Ω(k^δ) for some δ &gt; 0. In this work, we give an algorithm that has runtime and sample complexity poly(k, d) and provably works with essentially minimal separation.&#13;
&#13;
Second, we seek to address robustness. In particular, real data generally does not come from a distribution that is exactly a mixture of Gaussians, but rather a distribution that is close to a mixture of Gaussians. To address this, we consider a more challenging setting, that is now ubiquitous in the field of robust statistics, where an ε-fraction of the datapoints may be arbitrarily altered, potentially adversarially. There has been a flurry of recent work towards developing robust algorithms for learning mixtures of Gaussians [39, 13, 65], but these results all require restrictions on the class of mixtures considered. In this work, we give an algorithm that attains provable robustness guarantees and works in full generality.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations</title>
<link href="https://hdl.handle.net/1721.1/147316" rel="alternate"/>
<author>
<name>Mei, Lingjie</name>
</author>
<id>https://hdl.handle.net/1721.1/147316</id>
<updated>2023-01-20T03:55:15Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations
Mei, Lingjie
We present a meta-learning framework for learning new visual concepts quickly, from just one or a few examples, guided by multiple naturally occurring data streams: simultaneously looking at images, reading sentences that describe the objects in the scene, and interpreting supplemental sentences that relate the novel concept with other concepts. The learned concepts support downstream applications, such as answering questions by reasoning about unseen images.&#13;
&#13;
In this thesis, we introduce FALCON. FALCON represents individual visual concepts, such as colors and shapes, as axis-aligned boxes in a high-dimensional space (the “box embedding space”). Given an input image and its paired sentence, our model first resolves the referential expression in the sentence and associates the novel concept with particular objects in the scene. Next, our model interprets supplemental sentences to relate the novel concept with other known concepts, such as “X has property Y” or “X is a kind of Y”. Finally, it infers an optimal box embedding for the novel concept that jointly 1) maximizes the likelihood of the observed instances in the image, and 2) satisfies the relationships between the novel concept and the known ones. We demonstrate the effectiveness of our model on both synthetic and real-world datasets.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pushing the Limits of RF and Underwater Backscatter Systems</title>
<link href="https://hdl.handle.net/1721.1/147315" rel="alternate"/>
<author>
<name>Rodriguez, Osvy</name>
</author>
<id>https://hdl.handle.net/1721.1/147315</id>
<updated>2023-01-20T03:02:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Pushing the Limits of RF and Underwater Backscatter Systems
Rodriguez, Osvy
Backscatter communication systems, in both air and water, are great alternatives to traditional communication systems due to their low power requirement. The fact that backscatter systems can communicate at near-zero power makes them desirable for long-term applications such as environmental monitoring, asset tracking, and batteryless localization. However, backscatter systems are usually constrained by a limited operational range of a few meters. This limitation is generally due to the low power nature of the backscattered signal. In the case of batteryless backscatter systems, this range becomes more limited by the harvested energy efficiency of these systems. In this thesis, I explore different aspects of in-air and underwater backscatter communication. In airborne media, I explore the limitations of an end-to-end radio frequency (RF) backscatter localization system through the characterization of each component and validate a theoretical model to estimate its limited range under different conditions. In the context of underwater backscatter, I propose an ultra-wideband underwater backscatter system based on a novel metamaterial transducer and evaluate the system in controlled and uncontrolled environments, showing that it achieves more range and more throughput than prior state-of-the-art underwater backscatter designs. In doing so, this thesis extends our understanding of RF backscatter and advances the capabilities of underwater backscatter for next-generation energy-efficient communications.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LightShow: Abstract Representations of Music Lighting In Python</title>
<link href="https://hdl.handle.net/1721.1/147314" rel="alternate"/>
<author>
<name>Wilson, Benton B.</name>
</author>
<id>https://hdl.handle.net/1721.1/147314</id>
<updated>2023-01-20T03:38:04Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">LightShow: Abstract Representations of Music Lighting In Python
Wilson, Benton B.
This thesis explores music lighting and ways in which music lighting can be generated automatically. We attempted to use videos of prior concerts as training data for a machine learning model, but ultimately this proved unsuccessful. Instead, a useful abstraction for representing, designing, and implementing light shows based on audio was designed, implemented in Python, and used to generate lighting in a few contexts. The abstraction designed in this thesis ultimately focuses on allowing developers to easily expand on the package and reuse code, with the restriction that audio data must be known ahead of time. While the current abstraction does not support live audio streams, the future work section outlines how this could be implemented.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformation Tolerance and Demographic Robustness of Machine-based Face Recognition Systems</title>
<link href="https://hdl.handle.net/1721.1/147313" rel="alternate"/>
<author>
<name>Verma, Ashika</name>
</author>
<id>https://hdl.handle.net/1721.1/147313</id>
<updated>2023-01-20T03:11:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Transformation Tolerance and Demographic Robustness of Machine-based Face Recognition Systems
Verma, Ashika
Face recognition is widely acknowledged to be a very complex visual task for both humans and computers. Previous studies analyzing the robustness of facial recognition systems have revealed that the ability to recognize faces becomes worse as the blur level of face images increases, and that naturalistic color is important for facial recognition at high blur levels. Additionally, previous studies of current state-of-the-art face recognition technologies have found bias in face recognition amongst different races, resulting in worse recognition performance for people of color. In this study, we evaluate the performance and robustness of a current state-of-the-art facial recognition neural network architecture (ResNet-101) trained on an augmented facial identity dataset (Augmented Casia Webface) and perform a thorough comparison between White, Black, and East Asian identities. We created a full-color dataset, a grayscale dataset, and many hue-shifted datasets, then Gaussian-blurred each dataset at different intensities and compared how AI systems perform relative to humans and across the different races.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Phenomena Induced by Magnon-Magnon and Magnon-Spin Coupling</title>
<link href="https://hdl.handle.net/1721.1/147312" rel="alternate"/>
<author>
<name>Hu, Zhongqiang</name>
</author>
<id>https://hdl.handle.net/1721.1/147312</id>
<updated>2023-01-20T03:20:58Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Novel Phenomena Induced by Magnon-Magnon and Magnon-Spin Coupling
Hu, Zhongqiang
Realization of novel topological phases in magnonic band structures represents a new opportunity for the development of spintronics and magnonics with low power consumption. While several approaches have been proposed for generating topological magnonic surface states, they usually require materials with either special crystal symmetries or artificially modulated structures that demand advanced nanofabrication techniques, both of which bring in inevitable difficulties in experiments.&#13;
&#13;
In this thesis, I show that in antiparallelly aligned magnetic multilayers, the long-range, chiral dipolar interaction between propagating magnons generates bulk bands with non-zero Chern integers and magnonic surface states carrying chiral spin currents. The surface states are highly localized and can be easily toggled between non-trivial and trivial phases through an external magnetic field. The realization of chiral surface spin currents in this dipolarly coupled heterostructure represents a magnonic implementation of the coupled wire model that has been extensively explored in electronic systems. My work presents an easy-to-implement system for realizing topological magnonic surface states and low-dissipation spin current transport in a tunable manner. Beyond these novel topological phases induced by magnon-magnon coupling, I also explore the possibility of realizing a hybrid magnon-spin coupled system for state-of-the-art quantum computing.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Memory Controller Side Channels</title>
<link href="https://hdl.handle.net/1721.1/147311" rel="alternate"/>
<author>
<name>Deutsch, Peter William</name>
</author>
<id>https://hdl.handle.net/1721.1/147311</id>
<updated>2023-01-20T03:41:34Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Mitigating Memory Controller Side Channels
Deutsch, Peter William
Memory timing side channels, where attackers utilize contention within DRAM controllers to infer a victim’s secrets, pose an important challenge to secure computation in shared memory environments. Attacks utilizing these side channels are broad and highly effective, as memory controllers offer a shared attack surface across all cores on a machine. Attacks have been demonstrated in the wild to leak cryptographic keys and other secret data, emphasizing the importance of employing mitigations to block the ability of an attacker to leak information. Existing state-of-the-art memory timing side channel mitigations have several key performance and security limitations. Prior schemes require onerous static bandwidth partitioning, extensive profiling phases, or simply fail to protect against attacks which exploit fine-grained timing and bank information.&#13;
&#13;
In this thesis we present DAGguise, a defense mechanism which fully protects against memory timing side channels while allowing for dynamic traffic contention in order to achieve good performance. DAGguise utilizes a novel abstract memory access representation, the Directed Acyclic Request Graph (rDAG for short), to model memory access patterns which experience contention. DAGguise shapes a victim’s access patterns according to a publicly known rDAG obtained through a lightweight profiling stage, completely eliminating information leakage. &#13;
&#13;
We formally verify the security of DAGguise, proving that it maintains strong security guarantees. Moreover, by allowing dynamic traffic contention, DAGguise achieves a 12% overall system speedup relative to Fixed Service, which is the state-of-the-art mitigation mechanism, with up to a 20% relative speedup for co-located applications which do not require protection. We further claim that the principles of DAGguise can be generalized to protect against other types of scheduler-based timing side channels, such as those targeting on-chip networks, or functional units in SMT cores.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Quantum Network with Waveguide Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/147310" rel="alternate"/>
<author>
<name>Almanakly, Aziza</name>
</author>
<id>https://hdl.handle.net/1721.1/147310</id>
<updated>2023-01-20T03:15:14Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Towards a Quantum Network with Waveguide Quantum Electrodynamics
Almanakly, Aziza
Over the past twenty years, the field of quantum computing has progressed from the investigation of individual quantum systems towards the implementation of many-qubit processors. Distributing information processing over a quantum network consisting of many nodes that communicate via itinerant photons is one potential framework for achieving modular and extensible quantum computation. Systems of superconducting qubits strongly coupled to a continuum of photonic modes in 1D coplanar waveguides, described by the formalism known as waveguide Quantum Electrodynamics (wQED), are emerging as a promising platform for quantum communication. In this work, we develop a quantum module comprised of superconducting qubits strongly coupled to a 1D waveguide that can bidirectionally emit and absorb propagating microwave photons on demand. These modules can be tiled in series along a waveguide to form an all-to-all, extensible quantum network.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing DoS-Resilience for Cross-Protocol Proxies</title>
<link href="https://hdl.handle.net/1721.1/147308" rel="alternate"/>
<author>
<name>Farhat, Amir</name>
</author>
<id>https://hdl.handle.net/1721.1/147308</id>
<updated>2023-01-20T03:27:34Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Increasing DoS-Resilience for Cross-Protocol Proxies
Farhat, Amir
Industry is reporting increasingly damaging and popular application-layer Denial-of-Service (DoS) attacks. Therefore, now more than ever, it is important to develop mitigations to DoS attacks generally, and application-layer DoS attacks in particular. The challenge of this work is that application-layer Internet-of-Things (IoT) systems integrated with cloud services exhibit a distinctive DoS vulnerability. The cloud services are accessed using HTTPS, but typically the small, under-resourced IoT devices only have the capacity to support the simplified, HTTP(S)-like CoAP(S) protocol, requiring protocol translation to occur in a proxy somewhere. This project addresses questions about how to reduce the vulnerability of such a proxy to DoS attacks. The contributions of this work are twofold. Firstly, we provide meaningful conclusions about the DoS-resilience of configuration parameters and compare our optimal settings with the defaults of the most substantial and widely used open-source implementation of the CoAP(S) protocol proxy and auxiliary utilities. Our optimal settings result in substantial resilience against DoS attacks. Specifically, we cut mean client response time by two thirds, increase the number of messages that clients successfully send by a factor of 3.6, reduce proxy memory usage by 20%, and cut proxy CPU utilization in half. We additionally provide an architectural design proposal for the proxy which is likely to drastically increase its ability to maintain good performance for clients during an attack. Secondly, running experiments on DeterLab presents challenges regarding the collection and handling of experiment results without impinging on the performance of the experiments themselves. We provide our findings on solving the issues of impingement-free data collection and experiment storage and analysis in the form of an experiment management toolkit.
To conclude, the research both demonstrates a viable reconfiguration of the proxy that simultaneously improves performance and reduces vulnerability to DoS attacks, and shows the effectiveness of the experiment management toolkit developed during our research.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Morphology-Agnostic Control for Soft Robots</title>
<link href="https://hdl.handle.net/1721.1/147307" rel="alternate"/>
<author>
<name>Srinivasan, Suraj S.</name>
</author>
<id>https://hdl.handle.net/1721.1/147307</id>
<updated>2023-01-20T03:32:15Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Towards Morphology-Agnostic Control for Soft Robots
Srinivasan, Suraj S.
The advent of soft robots promises to fundamentally shift the landscape of robotic systems as they offer several advantages over the current paradigm of rigid bodies. Most notably, they provide adaptability to uncertain environments and look to bridge the gap between humans and machines. However, determining the optimal structure of a soft robot for a given task is difficult and complicated by the fact that soft robots have a design-dependent control profile. Thus, existing approaches have relied on human intuition or biomimicry. Co-design has been introduced as an approach to developing soft robots and involves jointly optimizing over the design and control of compliant bodies. An iterative design optimization routine suggests new morphologies while a control optimization subprocess determines a controller for each unique body. However, in its current form, co-design is a lengthy process due to the control optimization step being computationally expensive. Moreover, this step must be carried out separately for every unique morphology. This thesis discusses the development of MANTIS: a Morphology-Agnostic Controller for Soft Robots. We evaluate MANTIS against expert controllers using a soft robotic benchmarking suite (EvoGym) and demonstrate proficiency in zero-shot generalization to unseen morphologies. Importantly, this work makes strides towards universal control for soft robots, an objective which will greatly accelerate the rate of research in soft robotics.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Funding of Emission Reducing Projects of a Crude Oil Refining Plant</title>
<link href="https://hdl.handle.net/1721.1/147306" rel="alternate"/>
<author>
<name>Ravassipour, Amir</name>
</author>
<id>https://hdl.handle.net/1721.1/147306</id>
<updated>2023-01-20T03:01:36Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Strategic Funding of Emission Reducing Projects of a Crude Oil Refining Plant
Ravassipour, Amir
This thesis seeks to provide an economic method for the strategic funding of emission-reducing projects at a crude oil refining plant. The economic model uses mathematical optimization of profits as the justification for funding projects. The optimization justifies the minimization of operating cost and, more specifically, the minimization of emission cost as a means to maximize profits. The emission cost is measured as the sum of the carbon charge associated with the plant’s emissions and the cost of funding emission-reducing projects. The strategy of the plant is to fund projects that contribute to emission abatement at a higher economic value than their respective costs of implementation.&#13;
&#13;
The costs of potential projects are levelized by measuring each project’s total cost against its lifecycle emission abatement and introducing the levelized cost of carbon as the measure for comparing projects to one another. Projects with a lower levelized cost of carbon than the carbon charge will create economic value for the firm and will be funded.&#13;
&#13;
The total emission abatement of the plant is determined through the relationship between the marginal cost of the emission-reducing projects and their cumulative emission abatement. The project whose levelized cost of carbon equals the carbon charge determines the funding threshold for all other projects. This thesis bounds these projects at the lower end by a no-projects-funded scenario and at the upper end by a negative-emissions levelized cost of carbon.&#13;
&#13;
Inputs for the discount rate, interest rate, and utility rate of the facility support a sensitivity analysis of their respective impacts on the levelized cost of carbon. The future-emissions discount rate affects the lifecycle emission abatement of projects; the two carry an inverse relationship. The plant can reduce its capital exposure by gaining access to cost-effective capital to fund its emission-reducing projects, and the interest rate of these loans impacts the levelized cost of carbon. The utility rate of the facility, and its future state, will determine whether capital-intensive or operating-expense-intensive projects are favored.&#13;
&#13;
This optimization strategy must be tailored to each plant’s set of projects and their respective added constraints. This thesis explores constraints such as a budgetary cap and minimum emission abatement requirements to understand how the firm’s strategy changes under them.&#13;
&#13;
The overall strategy of the firm depends on three main components: first, analyze emission-reducing projects and determine their cost and emission abatement; second, forecast the cost of carbon; and third, fund projects that create emission abatement and carbon-charge savings greater than the cost of funding them.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiphoton Parallel Transmit MRI for Flip Angle Mitigation Without SAR Concerns</title>
<link href="https://hdl.handle.net/1721.1/147305" rel="alternate"/>
<author>
<name>Drago, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/147305</id>
<updated>2023-01-20T03:44:25Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Multiphoton Parallel Transmit MRI for Flip Angle Mitigation Without SAR Concerns
Drago, John M.
High-field magnetic resonance imaging (MRI) excitation performed using a standard birdcage volume coil suffers from a flip angle inhomogeneity problem. For example, in 7 T brain MRI, such an excitation has a flip angle as much as three-fold higher in the center of the head than near the periphery. This is due to a wave interference effect, whereby the wavelength of the applied excitation field (B1) becomes comparable to the dimensions of the human body creating a pattern of constructive and destructive interference from current elements around the head. This interference can be partially mitigated by separating the sources into multiple, individually-controllable elements in a process known as parallel transmit (pTx). While these multiple high-power, high-frequency transmit channels can solve the flip angle inhomogeneity problem, they are expensive and complicate the management of electromagnetic power dissipation in the body, known as the specific absorption rate (SAR). In pTx, SAR strongly depends on how the array elements are energized and must be modeled for each excitation pulse using a body model that accurately approximates the subject.&#13;
&#13;
This thesis presents a novel use of the seldom-considered multiphoton excitation phenomenon. Our multiphoton pTx (MP-pTx) method uses a conventional birdcage transmit coil (a single high-frequency channel) to apply an on-resonant B1 field that is supplemented with an array of low-frequency z-directed coils, in order to address the spatial flip-angle inhomogeneity problem. Using only low-frequency coils in the array helps lower cost and significantly simplifies SAR management, since SAR is negligible at low frequencies, independent of how the array is energized. This thesis presents the characterization of the multiphoton method in experiments and simulations and develops an optimization framework for MP-pTx pulses. A proof-of-concept simulation of the resulting excitation shows that MP-pTx can create a more homogeneous excitation at high field strengths.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Optimization Centered Upstream Petroleum Operations in the Denver-Julesburg Basin</title>
<link href="https://hdl.handle.net/1721.1/147301" rel="alternate"/>
<author>
<name>Lehman, Jason J.</name>
</author>
<id>https://hdl.handle.net/1721.1/147301</id>
<updated>2023-01-20T03:27:19Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Driving Optimization Centered Upstream Petroleum Operations in the Denver-Julesburg Basin
Lehman, Jason J.
The world is focused on advancing a lower carbon future. Despite this focus, global energy requirements continue to grow at a rapid pace. The transition to cleaner, renewable energy sources cannot happen overnight, and many aspects of daily life involve products derived from petroleum. The organizations producing petroleum and petroleum products must focus on minimizing the impacts of supplying these products and controlling the environmental impacts of their production.&#13;
&#13;
In this thesis, Operations Research and Systems Thinking are coupled to suggest practical optimization techniques through which upstream petroleum producers may gain additional revenue streams to fund a lower carbon future. Current challenges for these firms include global competition to supply petroleum energy and increased difficulty obtaining institutional investors. While many optimization efforts in the industry have focused on reservoir management, strategic asset planning, and production flows, this thesis aims to provide additional value by demonstrating coupled approaches of Operations Research and Systems Thinking toward resource routing and surface resource planning.&#13;
&#13;
Three classic optimization problems are applied to an upstream petroleum producer to demonstrate the use of optimization and its potential applications to resource constraints and the sociotechnical system using Operations Research and Systems Thinking: the Traveling Salesman Problem (TSP), the Vehicle Routing Problem (VRP), and the Facility Location Problem (FLP). Architectural decisions in the system are varied and represented as constraints across numerous optimization scenarios to produce value-robust designs. Further design analysis entails multi-attribute tradespace exploration to help decision-makers determine the perceived value of the tradeoffs between needs and constraints.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flexible Design Approach to Fleet Management</title>
<link href="https://hdl.handle.net/1721.1/147299" rel="alternate"/>
<author>
<name>DiPietro, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/147299</id>
<updated>2023-01-20T04:03:06Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Flexible Design Approach to Fleet Management
DiPietro, Joshua
This thesis proposes an actionable implementation plan for the Coast Guard to manage their small boat fleet acquisition. It features a contract process to extend boat service life and reduce acquisition and operating costs.&#13;
&#13;
Today, the increased pace of fielding new technologies necessitates rapid generational shifts to keep ahead of component equipment obsolescence. The Coast Guard has not been able to keep up with these requirements, and by 2024, 40% of boat classes will be required to operate well beyond their planned service lives.&#13;
&#13;
This thesis used Flexibility in Engineering Design (FED) to analyze ways to implement the Coast Guard’s ten-year strategic plan to improve fleet efficiency and performance. FED shows great promise for improving portfolio management for vehicles, aircraft, ships, and facilities. In essence, FED recognizes the role of uncertainty in current fleet challenges and accepts that uncertainty cannot be avoided. FED addresses uncertainty proactively by building in options as insurance against risks.&#13;
&#13;
The analysis was based on insights provided by many subject matter experts and extensive archival research into the complex institutional environment and performance of the Coast Guard boat acquisition system. A Pareto Analysis was used to explore over 400 concepts of likely fleet design features. The results indicated that planned boat service life was the design attribute that most significantly impacts system performance in terms of cost, mission execution, and risk. Based on this result, the FED analysis was focused on flexible strategies to extend service life. Two categories of flexibility were studied: flexibility “on” acquisition programs through contract options to purchase data/build rights; and flexibility “in” project through incorporating flexible design attributes into future boat classes.&#13;
&#13;
The FED analysis recommends a plan that could save $33 million over 30 years per class acquisition. The plan calls for including in future boat acquisitions a contract option that gives the Coast Guard the right, but not the obligation, to buy full data rights up to five years after delivery of the first hull of a new class of boats. The ability to rapidly ramp up production to replace hulls as needed allows the Coast Guard to plan for service life elongation. Even if never exercised, buying the right to build the current generation is akin to buying an insurance policy. It allows the Coast Guard to relax the conservative 10-year service life policy and adopt a longer service life that more closely aligns with experts’ expectations of the service lives boats can achieve.&#13;
&#13;
The thesis proposes an implementation plan to achieve the recommended results of the FED analysis. This plan is tailored to iteratively phase flexibility options into targeted classes and projected to enable the reduction of boat obsolescence, operational churn loss, and long-term program cost at a minimal upfront cost.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable Model for Reaction Outcome Prediction and One-step Retrosynthesis with a Graph-to-Sequence Architecture</title>
<link href="https://hdl.handle.net/1721.1/147297" rel="alternate"/>
<author>
<name>Tu, Zhengkai</name>
</author>
<id>https://hdl.handle.net/1721.1/147297</id>
<updated>2023-01-20T03:45:46Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Scalable Model for Reaction Outcome Prediction and One-step Retrosynthesis with a Graph-to-Sequence Architecture
Tu, Zhengkai
Synthesis planning and reaction outcome prediction are two fundamental problems in computer-aided organic chemistry for which a variety of data-driven approaches have emerged. Natural language approaches that model each problem as a SMILES-to-SMILES translation lead to a simple end-to-end formulation, reduce the need for data preprocessing, and enable the use of well-optimized machine translation model architectures. However, SMILES representations are not efficient for capturing information about molecular structures, as evidenced by the success of SMILES augmentation to boost empirical performance. Here, we describe a novel Graph2SMILES model that combines the power of Transformer models for text generation with the permutation invariance of molecular graph encoders that mitigates the need for input data augmentation. In our encoder, a directed message passing neural network (DMPNN) captures local chemical environments, and the global attention encoder allows for long-range and intermolecular interactions, enhanced by graph-aware positional embedding. As an end-to-end architecture, Graph2SMILES can be used as a drop-in replacement for the Transformer in any task involving molecule(s)-to-molecule(s) transformations, which we empirically demonstrate leads to improved performance on existing benchmarks for both retrosynthesis and reaction outcome prediction.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case Study for Cyber Incident Report in Industrial Control Systems</title>
<link href="https://hdl.handle.net/1721.1/147296" rel="alternate"/>
<author>
<name>Ang, Kim Whatt Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/147296</id>
<updated>2023-01-20T03:08:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Case Study for Cyber Incident Report in Industrial Control Systems
Ang, Kim Whatt Gary
In recent times, Cyber Incidents have increased in frequency and complexity.  These incidents have come from a wide range of sources, from lone individuals to complex state-sponsored teams. In particular, these cyber-crime organizations have used a variety of tactics, techniques, and procedures (TTP) from exploiting well-known vulnerabilities to navigating highly sophisticated zero-day pathways in order to attack systems, sabotage critical services, commit financial crimes, and gather sensitive information for political gain. &#13;
&#13;
Industrial Control Systems (ICSs) have been used in critical infrastructure sectors such as nuclear reactors for power generation. These ICSs have evolved to connect with the enterprise systems for centralized management, opening up new risks. The risks of ICS Cyber Incidents have been increasing, some of which have brought severe consequences. Although governments have classified these risks as a matter of national security, the successful prevention and mitigation of such incidents will increasingly depend on the ability of  organizations to share cyber threat information and use it to improve their security posture.&#13;
&#13;
New regulations, such as the Cyber Incident Reporting for Critical Infrastructure Act 2022 (CIRCIA), emphasize the need and urgency of reporting relevant details of a Cyber Incident. These reports will allow the relevant authorities (e.g. Cybersecurity and Infrastructure Security Agency (CISA)) to spot trends and quickly share critical information with network defenders to warn other potential victims. Can organizations that rely on ICSs improve their cybersecurity posture through Cyber Incident Reports? What are the necessary ingredients for Cyber Incident Reports to be effective?&#13;
&#13;
This research aims to answer these questions by studying the current state of Cyber Incident Reporting in terms of definition, purposes, regulations and more. This research also seeks to understand the current Cyber Incident Reports formats available to the public and map out their advantages and disadvantages based on National Institute of Standards and Technology (NIST) Cybersecurity recommendations on Cyber Incident Reporting. In addition, this research evaluates the use of the MITRE ATT&amp;CK (Adversarial Tactics, Techniques &amp; Common Knowledge) Framework for ICS in a Cyber Incident report. This research could help ICS organizations improve their process of Cyber Incident reporting.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>X Disease | Disease X: Medical Mystery-Solving and Epidemiological Change</title>
<link href="https://hdl.handle.net/1721.1/147295" rel="alternate"/>
<author>
<name>Robbins, Gabrielle</name>
</author>
<id>https://hdl.handle.net/1721.1/147295</id>
<updated>2023-01-20T03:02:11Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">X Disease | Disease X: Medical Mystery-Solving and Epidemiological Change
Robbins, Gabrielle
This thesis addresses epidemiological knowledge-making in the face of new, unknown, emerging diseases. It uses two cases, Australian X Disease of the early 1900s and Disease X of the dawning 2000s, to broadly interrogate how medical mystery-solvers marshal forms of experimentation and classification to identify and contain unknown diseases. While dominant theories of millennial disease preparedness emphasize treating emergent disease like other global, virulent epidemics such as Zika and Ebola – comparisons of scale and scope – this thesis uses the Australian X Disease to argue for historical approaches to the medical unknown – comparisons through space and time. Given that epidemiological practice conditions what can be known as much as what is overlooked in the face of the unknown, such long-ranging investigative energy can be instructive contra the pitfalls of established medical practice.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Fingertip Sensors and 7-Dof Hands for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/147294" rel="alternate"/>
<author>
<name>Guo, Menglong</name>
</author>
<id>https://hdl.handle.net/1721.1/147294</id>
<updated>2023-01-20T04:04:44Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design of Fingertip Sensors and 7-Dof Hands for Robotic Manipulation
Guo, Menglong
The goal of this work is to enable robots to one day enter the home environment to do household tasks at human-like speeds. Reacting to unexpected external contacts is the main challenge in designing systems that can forcefully manipulate objects. This research focuses on the design of robotic hands and sensors for quick and reactive manipulation, using high-bandwidth sensing and a robotic manipulation platform with high DOFs.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Guidelines for Sulfonyl/Sulfamoyl Fluoride Additives to Modulate Lithium Anode Coulombic Efficiency</title>
<link href="https://hdl.handle.net/1721.1/147293" rel="alternate"/>
<author>
<name>Jiang, Kyle S.</name>
</author>
<id>https://hdl.handle.net/1721.1/147293</id>
<updated>2023-01-20T03:47:21Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Design Guidelines for Sulfonyl/Sulfamoyl Fluoride Additives to Modulate Lithium Anode Coulombic Efficiency
Jiang, Kyle S.
The lithium metal anode has a high theoretical capacity (3860 mAh / g) and low electrochemical potential (‐3.04 V vs SHE), making it an ideal anode material for high energy density Li batteries. However, the high reactivity of Li metal with electrolytes results in the formation of a solid electrolyte interphase (SEI). In conventional electrolytes, the SEI is unstable, leading to continuous capacity loss and low Coulombic efficiencies (CE). Successful principles for Li metal electrolyte design to achieve high CE have largely focused on promoting the sacrificial reduction of anions (such as lithium bis(fluorosulfonyl)imide (LiFSI)) believed to be beneficial for the SEI. Alternatively, additive development for Li metal has identified several chemical classes that have been shown to effectively modify either the Li plating morphology or the SEI chemistry, and consequently increase CE. Motivated by the high CE of LiFSI systems and the performance of previously studied additives for Li metal, sulfonyl/sulfamoyl fluorides (R‐SO2F and R‐R’ NSO2F, respectively) are examined as a model class of functional electrolyte additives for Li cycling. This thesis examines what parameters govern the performance of this model class of additives, including the additive chemical structure and baseline electrolyte solvent. The effects of additives on CE were evaluated in select high‐CE electrolytes consisting of LiFSI dissolved in representative organic solvents, as well as the commercially relevant carbonate electrolyte LiPF6 dissolved in EC:DEC (LP40). The observed variations in CE, which suggest competitive reactions among solvents, anions, and additives, are then rationalized by characterization of post‐cycling SEI chemical compositions, gas evolution, and Li plating morphologies. The results identify the function of the nitrogen center unique to sulfamoyl fluorides in promoting Li+ coordination and preventing structural fragmentation of the additive during cycling.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Random Access for Information&#13;
Freshness in Spatially Distributed Wireless Networks</title>
<link href="https://hdl.handle.net/1721.1/147292" rel="alternate"/>
<author>
<name>Jones, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/147292</id>
<updated>2023-01-20T03:39:00Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimizing Random Access for Information&#13;
Freshness in Spatially Distributed Wireless Networks
Jones, Nicholas
We analyze Age of Information (AoI) in wireless networks where nodes use a spatially adaptive random access scheme to send status updates to a central base station. We show that the set of achievable AoI in this setting is convex, and design policies to minimize weighted sum, min-max, and proportionally fair AoI by setting transmission probabilities as a function of node locations. We show that under the capture model, when the spatial topology of the network is considered, AoI can be significantly improved, and we obtain tight performance bounds on weighted sum and min-max AoI. Finally, we design a policy where each node sets its transmission probability based only on its own distance from the base station, when it does not know the positions of other nodes, and show that it converges to the optimal proportionally fair policy as the size of the network goes to infinity.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Legged Locomotion by Physics-based Initialization: Motion Imitation from Model-Based Optimal Control</title>
<link href="https://hdl.handle.net/1721.1/147283" rel="alternate"/>
<author>
<name>Miller, Adam Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/147283</id>
<updated>2023-01-20T03:32:55Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Learning Legged Locomotion by Physics-based Initialization: Motion Imitation from Model-Based Optimal Control
Miller, Adam Joseph
The development of legged robots capable of navigating in and interacting with the world is quickly advancing as new methods and techniques for sensing, decision-making, and control expand the capabilities of state-of-the-art systems. Model-based methods, empowered by greater computing capacity and clever formulations, are imbuing systems with further physics-based understanding, while machine learning techniques, enabled by parallelized data generation and more efficient training, are imparting greater robustness to noise and the ability to handle poorly defined world features. Together these tools constitute the two major paradigms of legged robot research, and while both have shortcomings, their limitations are complementary and can be offset by the other’s strengths.&#13;
&#13;
We propose MIMOC: Motion Imitation from Model-Based Optimal Control. MIMOC is a Reinforcement Learning (RL) locomotion controller that learns agile locomotion by imitating reference trajectories from model-based optimal control. MIMOC mitigates challenges faced by other motion imitation-based RL approaches because the generated reference trajectories are dynamically consistent, require no motion retargeting, and include torque references that are essential to learn dynamic locomotion. As a result, MIMOC does not require any fine-tuning to transfer the policy to real robots. MIMOC also overcomes key issues with model-based optimal controllers. Since it is trained with simulated sensor noise and domain randomization, MIMOC is less sensitive to modeling and state estimation inaccuracies. We validate MIMOC on the Mini-Cheetah in outdoor environments over a wide variety of challenging terrain and on the MIT Humanoid in simulation. We show that MIMOC can transfer to the real world and to different legged platforms. We also show cases where MIMOC outperforms model-based optimal controllers, and demonstrate the value of imitating torque references.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Do Externally-Hired Managers Increase Innovation? Evidence from the U.S. Government</title>
<link href="https://hdl.handle.net/1721.1/147282" rel="alternate"/>
<author>
<name>Nguyen, Christina Angie</name>
</author>
<id>https://hdl.handle.net/1721.1/147282</id>
<updated>2023-01-20T03:49:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Do Externally-Hired Managers Increase Innovation? Evidence from the U.S. Government
Nguyen, Christina Angie
Although the U.S. government spends nearly $40 billion on intramural research and development, little is known about its own scientists’ innovative output and the managers who are responsible for their performance. Using individual-level data from 2000 to 2013, I investigate the impact of externally-hired federal managers on the innovation output of employees in science, technology, engineering, math, health, and social science occupations. By leveraging the variation in agencies' hiring of external managers, I find positive effects on scientists' number of publications, citations, and outside collaborations following an agency's shift to external management for treated scientists compared to matched controls. In addition, the impact varies for scientists in different occupational fields and is particularly large and positive for citations of shorter-tenured scientists and for outside collaborations of longer-tenured scientists. Altogether, these findings could inform strategies for hiring public managers and their potential influences on the scientific and technological progress of their organizations.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the structure of transmission matrix in lower dimensions</title>
<link href="https://hdl.handle.net/1721.1/147279" rel="alternate"/>
<author>
<name>Ghosh, Irin</name>
</author>
<id>https://hdl.handle.net/1721.1/147279</id>
<updated>2023-01-20T03:37:12Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Characterizing the structure of transmission matrix in lower dimensions
Ghosh, Irin
Multimode optical fibers are promising candidates for revolutionizing telecommunications because of their high data rates, and for endoscopic applications because of their ability to capture information similar to single-mode fibers with a much smaller footprint and hence less invasiveness. The major difficulty to be overcome is determining how data transmitted through the fiber gets scrambled, which is described by the Transmission Matrix (TM) of the fiber. Methods so far have succeeded in determining a good approximation of the TM only for weakly deformed fibers. We will look into calibration-free, real-time methods to determine the TM of a fiber with flexibility in conformation. The proposed method is to find a lower-dimensional representation of the TM which reduces the number of unknown parameters of the TMs for different fiber configurations, such that they can be determined in real time.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Role of the Source Dataset in Transfer Learning</title>
<link href="https://hdl.handle.net/1721.1/147278" rel="alternate"/>
<author>
<name>Khaddaj, Alaa</name>
</author>
<id>https://hdl.handle.net/1721.1/147278</id>
<updated>2023-01-20T03:27:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">On the Role of the Source Dataset in Transfer Learning
Khaddaj, Alaa
It is commonly believed that in transfer learning, including more pre-training data translates into better performance. However, recent evidence suggests that removing data from the source dataset can actually help too. In this work, we take a closer look at the role of the source dataset's composition in transfer learning and present a framework for probing its impact on downstream performance. Our framework gives rise to new capabilities such as pinpointing transfer learning brittleness as well as detecting pathologies such as data leakage and the presence of misleading examples in the source dataset. In particular, we demonstrate that removing detrimental datapoints identified by our framework improves transfer learning performance from ImageNet on a variety of target tasks.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structured Diffusion Processes in Deep Generative Models</title>
<link href="https://hdl.handle.net/1721.1/147277" rel="alternate"/>
<author>
<name>Jing, Bowen</name>
</author>
<id>https://hdl.handle.net/1721.1/147277</id>
<updated>2023-01-20T03:52:44Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Structured Diffusion Processes in Deep Generative Models
Jing, Bowen
Diffusion generative models have emerged as a powerful, versatile, and elegant generative modeling framework for diverse data modalities. However, the high computational cost of inference relative to other frameworks remains a chief limitation of such models. At the same time, the design space of a key component in their formulation—the forward diffusion process—has been underexplored. This thesis proposes a paradigm to accelerate and improve diffusion generative models by tailoring structured forward diffusion processes to the generative modeling problem at hand.&#13;
&#13;
Case studies of structured diffusion processes are developed and presented for (1) natural images and (2) molecular conformers. First, the subspace structure in images is exploited to develop subspace diffusion, a forward diffusion process that restricts the diffusion via projections to subspaces of decreasing dimensionality. Second, chemical constraints in molecular conformers are exploited to develop torsional diffusion, a forward process that preserves those constraints by operating over a lower-dimensional, non-Euclidean space. Both approaches simultaneously improve sample quality and reduce inference runtime while preserving existing capabilities—and developing new ones—of diffusion generative models.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated High-Throughput Characterization of Perovskite Photovoltaic Devices</title>
<link href="https://hdl.handle.net/1721.1/147276" rel="alternate"/>
<author>
<name>Motes, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/147276</id>
<updated>2023-01-20T03:09:46Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Automated High-Throughput Characterization of Perovskite Photovoltaic Devices
Motes, Brandon
Materials science is a critical discipline that supports all other science and engineering disciplines; however, current research processes are inefficient and limit the pace of research. Automated high-throughput characterization shows potential to accelerate this pace. This thesis presents a solution for automated high-throughput characterization, discusses its key attributes and design elements, and demonstrates a working prototype.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VisuaLint: Sketchy In Situ Annotations of Chart Construction Errors</title>
<link href="https://hdl.handle.net/1721.1/147275" rel="alternate"/>
<author>
<name>Hopkins, Aspen K.</name>
</author>
<id>https://hdl.handle.net/1721.1/147275</id>
<updated>2023-01-20T03:17:39Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">VisuaLint: Sketchy In Situ Annotations of Chart Construction Errors
Hopkins, Aspen K.
Chart construction errors, such as truncated axes or inexpressive visual encodings, can hinder reading a visualization, or worse, imply misleading facts about the underlying data. These errors can be caught by critical readings of visualizations, but readers must have a high level of data and design literacy and must be paying close attention. To address this issue, we introduce VisuaLint: a technique for surfacing chart construction errors in situ. Inspired by the ubiquitous red wavy underline that indicates spelling mistakes, visualization elements that contain errors (e.g., axes and legends) are sketchily rendered and accompanied by a concise annotation. VisuaLint is unobtrusive: it does not interfere with reading a visualization, and its direct display establishes a close mapping between erroneous elements and the expression of error. We demonstrate five examples of VisuaLint and present the results of a crowdsourced evaluation (N = 62) of its efficacy. These results contribute an empirical baseline proficiency for recognizing chart construction errors, and indicate near-universal difficulty in error identification. We find that people more reliably identify chart construction errors after being shown examples of VisuaLint, and prefer more verbose explanations for unfamiliar or less obvious flaws.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CollaboRanger: Coordinating Differences of Individuals in Group Coordination</title>
<link href="https://hdl.handle.net/1721.1/147274" rel="alternate"/>
<author>
<name>Zhang, Qianqia</name>
</author>
<id>https://hdl.handle.net/1721.1/147274</id>
<updated>2023-01-20T03:57:00Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">CollaboRanger: Coordinating Differences of Individuals in Group Coordination
Zhang, Qianqia
People form habits in the way they work, and prior research on personal task and information management has found that these personal preferences vary drastically. In collaborative settings, these differing habits can make it challenging to coordinate among teammates. In this work, we investigate how to enable a smooth transition from the personal working sphere to group coordination. Through a workshop study with 11 knowledge workers, including program coordinators and administrators, we examined how they bridge personal differences to achieve group coordination. Our findings indicate that even mundane, basic coordination tasks, such as scheduling a meeting, involve several underlying conflicts, such as fear of being judged and of overstepping others’ contributions. Coordinators instead focus on accommodating the differences of each participant (e.g., in the case of scheduling, the tools each person uses to keep up with their schedule) and spend a substantial amount of time aggregating information submitted in different formats by each participant.&#13;
&#13;
We propose a system called CollaboRanger, in which coordination participants do not have to change their habits for each coordination task, yet their information remains easy to combine. Using CollaboRanger, coordination participants can collaboratively gather responses in a table and summarize their decisions. To evaluate our system, we conducted a within-subjects experiment (N=18) with knowledge workers. We found that when teams used our system, they were able to make sense of distinct responses, comprising personal preferences and tool choices, more easily and quickly than when using email. These results indicate that one does not have to totalize each individual’s response when coordinating, yet the group can still efficiently make decisions. We conclude with design implications and opportunities for bridging gaps between personal work routines and groupware designs.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prospects for Quantum Equivariant Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/147273" rel="alternate"/>
<author>
<name>Castelazo, Grecia</name>
</author>
<id>https://hdl.handle.net/1721.1/147273</id>
<updated>2023-01-20T04:06:02Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Prospects for Quantum Equivariant Neural Networks
Castelazo, Grecia
Convolutional neural networks (CNNs) exploit translational invariance within images. Group equivariant neural networks comprise a natural generalization of convolutional neural networks by exploiting other symmetries arising through different group actions. Informally, a linear map is equivariant if it transfers symmetries from its input space into its output space. Equivariant neural networks guarantee equivariance for arbitrary groups, reducing the system design complexity. Motivated by the theoretical and experimental development of quantum computing, and in particular by the quantum advantage derived from quantum algorithms and subroutines for group-theoretic and linear-algebraic problems, we explore the potential of quantum computers to realize these structures in machine learning. This work reviews the necessary mathematical machinery from group representation theory, surveys the theory of equivariance, and combines results from non-commutative harmonic analysis and geometric deep learning. Convolutions and cross-correlations are examples of functions that are equivariant to the actions of a group. We present efficient quantum algorithms for performing linear finite-group convolutions and cross-correlations on data stored as quantum states. Potential implementations and quantizations of the infinite-group cases are also discussed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inductive Cell Voltage Balancer and Model of Battery Cells and Cell Balancers</title>
<link href="https://hdl.handle.net/1721.1/147270" rel="alternate"/>
<author>
<name>Stafford, Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/147270</id>
<updated>2023-01-20T03:48:08Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Inductive Cell Voltage Balancer and Model of Battery Cells and Cell Balancers
Stafford, Logan
To ensure that the cells of a battery perform at their best and last as long as possible, a circuit known as a cell balancer is used. While there are many types of balancer, certain topologies of active capacitor and inductor balancers have advantages that others do not. Initial PCBs of the 2-cell and 4-cell capacitive cell balancers were assembled and tested using power supplies and benchtop loads. An inductive cell balancer PCB was then designed and tested for comparison with the capacitor-based balancer. A PSPICE model of an ANR26650M1B battery cell was created using default PSPICE components; this model is meant to simulate the cell across frequency and temperature.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Riemannian Metric Learning via Optimal Transport</title>
<link href="https://hdl.handle.net/1721.1/147268" rel="alternate"/>
<author>
<name>Scarvelis, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/147268</id>
<updated>2023-01-20T03:26:42Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Riemannian Metric Learning via Optimal Transport
Scarvelis, Christopher
We introduce an optimal transport-based model for learning a metric tensor from cross-sectional samples of evolving probability measures on a common Riemannian manifold. We neurally parametrize the metric as a spatially-varying matrix field and efficiently optimize our model's objective using backpropagation. Using this learned metric, we can nonlinearly interpolate between probability measures and compute geodesics on the manifold. We show that metrics learned using our method improve the quality of trajectory inference on scRNA and bird migration data at the cost of little additional cross-sectional data.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Interconnection between Net-Zero Building Code and Rental Housing Affordability in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/147264" rel="alternate"/>
<author>
<name>Tiwari, Himanshu</name>
</author>
<id>https://hdl.handle.net/1721.1/147264</id>
<updated>2023-01-20T03:30:59Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Interconnection between Net-Zero Building Code and Rental Housing Affordability in Massachusetts
Tiwari, Himanshu
With the continued focus of policymakers on achieving net-zero targets, there has been increased emphasis on decarbonizing housing markets through net-zero energy building codes such as the new “Stretch Code” introduced in Massachusetts. In the quest to achieve declared targets on the “Environmental” front, the “Social” front sometimes gets sacrificed; for instance, the incremental cost of net-zero construction may be passed on to landlords and tenants, thus hurting housing affordability.&#13;
&#13;
The existing literature is unclear about the impact, in both direction and magnitude, of net-zero energy building codes on rental housing affordability, comprising rents and utility payments. Accordingly, careful consideration is warranted to address the issue. This thesis intends to establish a quantitative relationship between the two by exploring the existing regulatory and market frameworks. The research analyzes Home Energy Rating System (HERS) data in Massachusetts to establish the impact of HERS ratings on housing rents and utility expenses, and ultimately on rental housing affordability.&#13;
&#13;
The analysis shows that the new net-zero building code in Massachusetts, namely DOER’s Straw proposal, will lower utility expenses by an average of 0.78% for a medium-sized (1,000 sf to 2,500 sf) duplex to 7.99% for a medium-sized (up to 1,000 sf) low-rise multifamily apartment. Considering the premium for market-rate rental housing, overall housing costs for renters will increase by 2.48% for a large (greater than 2,500 sf) duplex to 6.20% for a large single-family rental house. The variation in rental affordability is significant across locations, ages, and sizes of units. Although the utility expense saving is higher for the low-rise multifamily apartments most exposed to worsening affordability, the overall housing cost increase is substantial given the market rent premium associated with adherence to the net-zero building code. For a rent-controlled housing unit, rental affordability as defined by the US Department of Housing and Urban Development (housing costs over household income) is expected to improve on average by between 0.32% and 0.54%, whereas for a market-rate unit, rental affordability is expected to worsen significantly, by between 0.72% and 10.22%. The affordability changes differ across sizes and types of houses, but more importantly with changes in income as well.&#13;
&#13;
Considering the findings, any incentive program should follow a differentiated approach of incentivizing the households that are not only more susceptible to worsening rental affordability due to substantial increases in housing costs but also where the household income is unlikely to absorb the increased housing costs.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies on Organophosphorus Catalyzed C(SP³)–H Amination for the Synthesis of Benzimidazoles</title>
<link href="https://hdl.handle.net/1721.1/147263" rel="alternate"/>
<author>
<name>Pombar, Gisselle</name>
</author>
<id>https://hdl.handle.net/1721.1/147263</id>
<updated>2023-01-20T03:38:19Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Studies on Organophosphorus Catalyzed C(SP³)–H Amination for the Synthesis of Benzimidazoles
Pombar, Gisselle
A PIII/PV=O-catalyzed C(sp³)–H amination has been realized for (dihydro)benzimidazole synthesis. This work reports: (1) optimization of organophosphorus-catalyzed C(sp³)–H functionalization; (2) scope studies to benzimidazoles by in situ oxidation of the corresponding dihydrobenzimidazole; and (3) insight into the reaction mechanism through in situ spectroscopic monitoring under catalytic conditions and Hammett linear free energy relationship studies. The synthetic method and mechanistic information provide insight into design principles for the expansion of C(sp³)–H functionalization reactions through PIII/PV=O O-atom transfer reactivity.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assisting Technology: Disability Expertise and Labor in Artificial Intelligence (AI) Data Work in China</title>
<link href="https://hdl.handle.net/1721.1/147261" rel="alternate"/>
<author>
<name>Wu, Di</name>
</author>
<id>https://hdl.handle.net/1721.1/147261</id>
<updated>2023-01-20T03:48:16Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Assisting Technology: Disability Expertise and Labor in Artificial Intelligence (AI) Data Work in China
Wu, Di
In recent years, people with disabilities in China have been explicitly enrolled by government programs, corporations, and NGOs to classify and label training data for AI systems. This thesis offers an ethnographic account of one of these programs, combining insights from science and technology studies (STS), critical disability studies, and digital labor scholarship. Run by a disabled persons’ organization (DPO), the examined program is staffed predominantly with blind, low-vision, and physically impaired data workers, tasked with sorting data for an AI-based internet of things (IoT) system. While existing scholarship on digital labor tends to focus on how technology empowers or exploits disabled people, this thesis asks how disabled people’s labor in turn transforms technology. Centering the experience of disabled data workers and the inner workings of the sociotechnical processes with which they are bound up, I argue that people with disabilities working in AI data annotation effectively assist the technology, not just the other way around.&#13;
&#13;
In this study, the DPO outperformed its non-disabled competitors and became the exclusive data annotation contractor for a major AI company in China. I show that the obscure and iterative nature of classifying the contextless intentions and unclear sounds generated by the virtual assistant system necessitates a standing workforce of data annotators with rich tacit knowledge, good institutional memory, and a strong working relationship with the developers. Disabled workers in China, pushed out of a wide range of job opportunities by structural ableism, supplied that initial stability for the AI company. In the meantime, through their disability-informed, non-normative knowledge of flourishing in uninhabitable worlds, or what anthropologist Cassandra Hartblay calls “disability expertise,” disabled workers have reshaped the often-dehumanizing conditions of microwork in the AI data pipeline, leading many workers to stay and produce higher-quality data. The intervention of this thesis is not only to lay bare the use and abuse of disability as a resource in contemporary AI systems, but also to elevate crip technoscience by teasing out the disability expertise actually entailed in the production of AI.&#13;
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Holistic Technology Impact Model</title>
<link href="https://hdl.handle.net/1721.1/147260" rel="alternate"/>
<author>
<name>Hanschke, Hans</name>
</author>
<id>https://hdl.handle.net/1721.1/147260</id>
<updated>2023-01-20T03:09:16Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">The Holistic Technology Impact Model
Hanschke, Hans
This research outlines the path towards creating a tool that enables profit-driven real estate developers to analyse, at an early stage, which sustainable innovations can provide the greatest value for their project.&#13;
&#13;
In elaborating on the urgency for change towards sustainable development approaches within the real estate industry, this work identified existing barriers between the approaches proposed by academic scientists and those of industry practitioners. To remove these barriers, it is proposed to build a Holistic Technology Impact Model, based on findings from academic and industry sources, which utilizes Monte Carlo simulation techniques to provide the user with a customized sustainable development strategy.&#13;
&#13;
After the user provides early-stage information about the targeted project, such as location or a raw timeline, the model calculates a business case for a project without any green features and compares it to business cases of implementing one of the ten sustainable technologies included in the sample. Using Monte Carlo Simulation, the model can effectively account for future developments of variables such as energy prices, emissions penalties, economies of scale or a tenant’s willingness to pay.&#13;
&#13;
The outcome of the model not only provides customized implementation suggestions for the real estate developer but also enables researchers to analyse risk and return patterns, as well as the emissions performance of the distinct technologies across varying project characteristics, which is illustrated via three sample projects in this thesis. It thereby provides strategic insights not only for the real estate developer but also for the public sector and for technology suppliers. In sum, the model combines practical and academic approaches to foster the transformation towards a green real estate industry.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Nanoengineering of Multifunctionality into an Advanced Composite Laminate</title>
<link href="https://hdl.handle.net/1721.1/147256" rel="alternate"/>
<author>
<name>Patel, Palak B.</name>
</author>
<id>https://hdl.handle.net/1721.1/147256</id>
<updated>2023-01-20T03:06:07Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Experimental Nanoengineering of Multifunctionality into an Advanced Composite Laminate
Patel, Palak B.
Advanced structural fiber composite materials are lightweight and have multi-directional, tailorable properties, which are vital for weight-critical applications such as aerospace vehicles. Nanoengineered aerospace-grade composites have been developed to have integrated multifunctionalities while ensuring maintained, or even enhanced, mechanical properties, without significant changes in the dimensions or weight of the composite system. While integrating individual multifunctionalities into such composites has been demonstrated in a limited set of cases, integrating more than one multifunctionality has not yet been explored. This thesis focuses on the manufacturing and characterization of a nanoengineered integrated multifunctional composite (IMC) that would enable the inclusion of more than one multifunctional capability while maintaining or enhancing structural function. To this end, a glass fiber reinforced polymer (GFRP) unidirectional-ply composite laminate was nanoengineered with carbon nanotubes (CNTs) in the composite’s interlaminar regions and surfaces to produce the IMC. A preliminary study, focusing on the compatibility of CNTs in GFRP and on enhancing the laminate’s structural function, determined the preferred CNT architectures to reinforce the interlaminar regions and enable various multifunctionalities. The IMC integrated a commercial CNT film on the outer surfaces and two preferred architectures in the interlaminar region: a 10 µm aligned carbon nanotube (A-CNT) film (termed nanostitch) and a patterned and coherently buckled A-CNT film (termed nanostitch 2.0). The resulting IMCs had quality equivalent to the baseline GFRP system (no detectable voids, insignificant difference in laminate thickness and interlaminar thickness), while demonstrating maintained or enhanced mechanical performance.
Relative to the baseline, the IMCs have enhanced (∼5%) interlaminar shear strength (ILSS) and maintained notched tensile strength with equivalent damage progression as revealed through in situ testing using synchrotron radiation computed tomography (SRCT). The IMCs demonstrated here, with electrically and thermally conductive interlaminar regions and surfaces, support future demonstrations of multifunctionalities through a composite system designed to serve independent yet synergistic functionalities in life-cycle enhancement, energy savings during manufacturing, in situ cure (manufacturing) monitoring, Joule heating ice protection system (IPS) applications, and in-service damage sensing, among others.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ery Urbanism: Framework for Water Inclusive Urban Growth in Chennai</title>
<link href="https://hdl.handle.net/1721.1/147251" rel="alternate"/>
<author>
<name>VijayKumar, Mona</name>
</author>
<id>https://hdl.handle.net/1721.1/147251</id>
<updated>2024-01-18T04:46:01Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Ery Urbanism: Framework for Water Inclusive Urban Growth in Chennai
VijayKumar, Mona
Chennai’s urban history has evolved and flourished around an elaborate cascading system of water tanks, locally known as ‘erys’. Traditionally, in an agrarian society, these erys were ecological commons that engendered biodiversity and community building. They served as essential water reservoirs and flood-control systems, collectively managed by the communities.&#13;
&#13;
Over the last two decades, Chennai’s built-up area increased by 71%, indiscriminately expanding over these erys and transforming largely wet, permeable land into impervious concrete. This has not only degraded the natural ecology of the city but also increased the flooding risks for the communities that currently live around the water bodies. With the proposed eightfold urban expansion of Chennai’s territory by 2026 and the lack of guidelines to conserve and manage the water bodies, the survival of these erys, and of the livelihoods around them, is highly challenged.&#13;
&#13;
In contrast to the current development model that disregards erys, this thesis re-imagines erys as a way of organizing urbanism. It proposes a spatial framework that prioritizes both ecology and community to address the complex challenges of urbanization, floods, and droughts. To fully understand the potential of erys under current urban conditions, the project navigates between three scales: 1) watershed, 2) sub-watershed, and 3) neighborhood. The proposed interventions seek to create water-inclusive urban growth for Chennai that protects and preserves the indigenous water systems and engages the communities that live around them.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-Centered System Design for an Aging Population: An Experimental Study of Footwear Design</title>
<link href="https://hdl.handle.net/1721.1/147249" rel="alternate"/>
<author>
<name>Lee, Sheng-Hung</name>
</author>
<id>https://hdl.handle.net/1721.1/147249</id>
<updated>2023-01-20T03:49:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Human-Centered System Design for an Aging Population: An Experimental Study of Footwear Design
Lee, Sheng-Hung
Population projections indicate that by 2050, people aged 65+ will account for 25% of the population in Europe and Northern America, and the number of people aged 80+ will triple to 426 million. With technological and biomedical advances, people now expect to live not only longer but also better, demanding improved quality of living and working environments to support later adulthood. This new longevity presents opportunities for designers and engineers to engage in participatory, system-oriented design thinking and processes to meet the wants and needs of older users. This master’s thesis develops an innovative methodology to address the complex systemic social-technological design challenges that this new longevity presents and applies this methodology to a novel case study: the design of indoor footwear for older adults.&#13;
&#13;
We propose a Human-Centered System Design (HCSD) approach, combining Human-Centered Design (HCD) and Design Thinking (DT) with select Systems Engineering (SE) approaches and Systems Thinking (ST). The methodology was applied to a case study of designing and prototyping indoor footwear for older adults, following a process from inspiration to ideation and then to implementation. Data collection included targeted user and expert interviews coupled with surveys and market research, as well as the facilitation of two hybrid participatory workshops. We also used Ultra-Wideband (UWB) assistive technology and computational design tools to prototype future human-centered indoor footwear designs for older adults. We use four lenses to distill and synthesize the results: 1) people; 2) product; 3) platform; and 4) process. The proposed design solution considers not only people’s wants and needs, technological feasibility, and commercial viability, but also the adaptability, scalability, and novelty of using HCSD to address them. Further, we examine the role of the HCSD framework and the participatory design process in contributing to the development of empathy and service tools in pursuit of an age-inclusive society. The study concludes with the proposed solutions, including product design, service model, user experience, and technology, as well as a provisional set of design principles for future research on indoor footwear for an aging population.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermodynamic modeling and design of high-performance adsorption-based atmospheric water harvesting devices</title>
<link href="https://hdl.handle.net/1721.1/147248" rel="alternate"/>
<author>
<name>Li, Adela Chenyang</name>
</author>
<id>https://hdl.handle.net/1721.1/147248</id>
<updated>2023-01-20T04:02:53Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Thermodynamic modeling and design of high-performance adsorption-based atmospheric water harvesting devices
Li, Adela Chenyang
Water scarcity is a grand global challenge, since more than two-thirds of the world’s population experiences water shortage. Atmospheric water harvesting (AWH) addresses this challenge by enabling decentralized freshwater supply in water-stressed and infrastructure-limited areas. Adsorption-based AWH, in particular, overcomes the climate limitations of conventional AWH technologies and has the potential to further expand clean water access to extremely arid regions. Despite innovations in adsorbent materials, however, fundamental understanding of the physical processes involved in the AWH cycle, and of how material properties set the theoretical limits of AWH, is lacking.&#13;
&#13;
In this thesis, we develop a generalized thermodynamic framework to elucidate the interplay between adsorbent properties and operating conditions for optimal AWH performance. Our analysis considers the temperature dependence of adsorption, which is critical but has been largely overlooked in past work. Using metal-organic frameworks (MOFs) as an example, we show that the peak energy efficiencies of single-stage and dual-stage AWH devices, after accounting for temperature-dependent adsorption, increase by 30% and 100%, respectively, compared with previous work. Moreover, in contrast to common understanding, we show that the adsorption enthalpy of MOFs can also be optimized, further improving the peak energy efficiency by 40%.&#13;
&#13;
To guide the practical design of next-generation adsorption-based AWH devices, we also perform initial modeling and characterization of select subcomponents for enhanced device performance. For atmospheric air delivery, we show that both the Dyson V9 motor fan and miniature drone propellers are powerful and compact solutions. However, after taking the power consumption into account, we identify the Noctua industrial DC fan as the best candidate overall for air supply. In addition, we show that significant heat sink enhancement is needed to maintain the condenser at close to ambient temperature and sustain the high flux of vapor condensation prescribed by high water productivity. This work bridges important knowledge gaps between adsorbent materials development and device design, providing insights toward high-performance adsorption-based AWH technologies.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AC-RL: A Framework for Real-Time Control, Learning &amp; Adaptation</title>
<link href="https://hdl.handle.net/1721.1/147247" rel="alternate"/>
<author>
<name>Guha, Anubhav</name>
</author>
<id>https://hdl.handle.net/1721.1/147247</id>
<updated>2023-01-20T03:34:41Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">AC-RL: A Framework for Real-Time Control, Learning &amp; Adaptation
Guha, Anubhav
This paper considers the problem of real-time control and learning in dynamic systems subjected to parametric uncertainties. A combination of Adaptive Control (AC) in the inner loop and a Reinforcement Learning (RL) based policy in the outer loop is proposed, such that in real time the inner-loop model reference adaptive controller contracts the closed-loop dynamics towards a reference system, while the RL policy in the outer loop directs the overall system towards approximately optimal performance. This AC-RL approach is developed for a class of control-affine nonlinear dynamical systems, and employs extensions to systems with multiple equilibrium points, systems with input magnitude constraints, and systems in which a high-order tuner is required for adequate performance. In addition to establishing a stability guarantee with real-time control, the AC-RL controller is also shown to lead to parameter learning with persistent excitation. Numerical validations of all algorithms are carried out using a quadrotor landing task on a moving platform. These results point out the clear advantage of the proposed integrative AC-RL approach.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Value of Flexibility: Case of Execution and Technology Choice for Carbon Capture</title>
<link href="https://hdl.handle.net/1721.1/147246" rel="alternate"/>
<author>
<name>Tozzi, Mark J.</name>
</author>
<id>https://hdl.handle.net/1721.1/147246</id>
<updated>2023-01-20T03:01:43Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Strategic Value of Flexibility: Case of Execution and Technology Choice for Carbon Capture
Tozzi, Mark J.
This thesis addresses the strategic issue of how best to develop large-scale investments in carbon capture and storage (CCS). It compares flexible designs with real options for execution and technology choice, to fixed investments associated with a proposed CCS ‘hub’. Climate change and global aspirations to reduce CO2 emissions are generating increasing interest in accelerating investment in CCS. This thesis focuses on how to value and frame development of one of the world’s largest proposed CCS hubs in the Gulf Coast industrial corridor.&#13;
&#13;
Flexibility in design and real options analysis were applied to this proposed hub, as a case study, to demonstrate that progressive technology investment and pilot-based capture deployments with scale optionality generate the best project outcomes across the wide range of policy and commercial uncertainties impacting CCS development. Through system decomposition, the proposed hub was modeled as a dynamic techno-economic system. Monte Carlo simulation and multi-dimensional project evaluations were performed for a range of potential scenarios.&#13;
 &#13;
Results of this analysis indicate that a flexible strategy that deploys capacity as policy and technology improve, and scales capacity as costs decline, is preferable to large-scale, fixed investments in a CCS hub. This flexible, conditions-based approach to development mitigates the extent and probability of downside outcomes, de-risks sensitive capture costs, and enables value-accretive scaling – all while achieving similarly high cumulative volumes of CO2 captured. &#13;
&#13;
The intent of this effort was to illustrate the high-level tradeoffs and opportunities within CCS development through creation and use of a screening model. This front-end modeling approach to strategy development is recommended in the early stages of CCS (or analog) projects, where identification and implementation of flexibility and real options can yield the largest value to the prospective project opportunity. Model assumptions and results are intended to be notional in nature and not for investment or decision-making purposes. Future work could focus on improving the accuracy of the model parameters and relationships to reconfirm insights with project-specific data, as well as further test these recommendations through extension to analog cases supporting decarbonization.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asymmetric network ties to elite American universities create differential access to venture capital in Africa</title>
<link href="https://hdl.handle.net/1721.1/147245" rel="alternate"/>
<author>
<name>Fayulu, Milain D.</name>
</author>
<id>https://hdl.handle.net/1721.1/147245</id>
<updated>2023-01-20T03:04:54Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Asymmetric network ties to elite American universities create differential access to venture capital in Africa
Fayulu, Milain D.
Why is the venture capital (VC) flowing into Africa concentrated in a handful of countries? I reject the popular notion that market size is the primary reason why VC investments end up in only a few countries. Because ties with elite universities are the primary way in which African startup founders build social capital with U.S. VC firms, which are the largest purveyors of venture funds on the African continent, I advance a novel network theory. It is predicated on a new form of core-periphery relationship wherein capital domiciled largely in the United States drives uneven startup ecosystem growth on the African continent. I argue that this market failure, characterized by concentrated VC investments, stems from an asymmetry in elite U.S. school network ties between the top four recipient countries (Nigeria, Kenya, South Africa, and Egypt) and the other 50 African nations. I collected statistical evidence of the funding differential between countries and the imbalance in U.S. elite university attendance by country. I used a linear model to show that attending a school in the top 20 of the U.S. News &amp; World Report ranking of best universities might significantly increase the venture capital a graduate’s home country is expected to receive. My research contributes to the literature on network capitalism and development.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Photovoltaics Detection in Satellite Imagery using Deep Learning and Remote Sensing</title>
<link href="https://hdl.handle.net/1721.1/147238" rel="alternate"/>
<author>
<name>Ravishankar, Rashmi</name>
</author>
<id>https://hdl.handle.net/1721.1/147238</id>
<updated>2023-01-20T03:02:34Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Photovoltaics Detection in Satellite Imagery using Deep Learning and Remote Sensing
Ravishankar, Rashmi
The global push for renewables coupled with the steadily decreasing cost of each unit of solar energy produced has resulted in a dramatic rise in photovoltaic deployment at both residential and commercial levels. Currently, the global solar energy capacity doubles every 18 months, of which expanding solar energy facilities are the biggest contributor. The inherently decentralized nature of photovoltaic deployment has resulted in a dearth of reliable or verifiable information on solar energy generation at both a granular and a global scale. In parallel, there is an increase in the availability of high-resolution satellite imagery and advancement in state-of-the-art learning techniques. Together, this presents a unique opportunity to harvest previously scarce high-resolution satellite data and deploy state-of-the-art detection techniques for renewable energy applications. In this thesis, I propose, optimize, and validate several Deep Learning frameworks to detect and map residential as well as commercial solar installations. The best performing residential model achieved a precision of 96.9% and recall of 90.0%, comparable to the 93.1% precision and 88.5% recall achieved by the current state of the art, DeepSolar. Notably, this was achieved with significantly reduced computational complexity: 89,000 trainable parameters compared to DeepSolar’s 21.8 million. A method is proposed for extending the custom-trained CNN to different geographies at low cost by using incremental training data; performance of the model on a new geography is found to saturate with roughly 15% incremental training data. Further, a study in resolution sensitivity showed that the optimal ground sample distance (GSD) for this problem lies in the range [0.3, 0.7] m.
For solar farms, a model based on a semantic segmentation neural network was trained on a dataset created by collecting satellite imagery of several major solar farms in the US and tested on images of solar farms unseen by the model. Objectively, the model achieved highly competitive performance indicators, including a mean accuracy of 96.87% and a Jaccard index (intersection over union of classified pixels) score of 95.5%. Subjectively, it was found to detect spaces between panels, producing a segmentation output better than human labeling. As a final step in the pipeline, a multi-step capacity evaluation model to estimate the number of panels and energy generation capacity of the detected solar energy facilities was proposed, and generation capacities were compared against publicly available electricity generation data reported by various sources. The capacity evaluation model is the first of its kind, while deep learning applied specifically to the detection and mapping of solar farms is among the first for the United States. Overall, this work fits into a longer-term goal of creating a granular global database of solar energy production which could serve as a single source of truth for industries and policymakers to inform decision-making. In the future, this approach could be used as a replacement for conventional sources of knowledge, or as a secondary source of intelligence for the cross-validation of reported figures.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Strain on Activated-Aluminum–Water Reactions</title>
<link href="https://hdl.handle.net/1721.1/147237" rel="alternate"/>
<author>
<name>Moriarty, Daniel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/147237</id>
<updated>2023-01-20T03:17:58Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Effects of Strain on Activated-Aluminum–Water Reactions
Moriarty, Daniel P.
Activated aluminum is a fuel source that promises safe hydrogen energy storage with volumetric energy densities twice that of diesel fuel and 45 times that of lithium-ion batteries. Aluminum is activated when liquid eutectic gallium-indium disrupts the aluminum’s passive oxide layer, enabling a reaction with water to release hydrogen gas and heat. This thesis seeks further understanding of this reaction by exploring the effect of residual stresses in the aluminum. Annealing and cold rolling of 1100-alloy aluminum plate developed engineering strain levels up to -0.7. Reactions in both DI water and water with 3.5% NaCl salinity and 0.1 M caffeine dopant showed no correlation between total hydrogen yield and strain, but an aggressive acceleration of the reaction with increased strain levels. Some reactions produced unreacted aluminum in the product, most notably for high-strain (-0.4 to -0.6) reactions in DI water. The unreacted products, confirmed to be aluminum by SEM-EDS, fully reacted over 24 hours. Using SEM to inspect the first stages of microstructural reaction mechanisms, higher amounts of exfoliated aluminum were expelled from the bulk at high (-0.6) strain levels compared to unstrained (0.0) samples. These observations further our understanding of how strain conditions affect activated-aluminum reactions and help delineate ideal operating conditions for reactor design.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language in Medical Worlds: Hearing Technology for Deaf Jordanian Children</title>
<link href="https://hdl.handle.net/1721.1/147236" rel="alternate"/>
<author>
<name>Loh, Yui Leh Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/147236</id>
<updated>2023-01-20T04:02:23Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Language in Medical Worlds: Hearing Technology for Deaf Jordanian Children
Loh, Yui Leh Timothy
Bringing together medical and linguistic anthropology, I examine the provision of hearing technology, such as cochlear implants, to deaf Jordanian children, a project animated by an imperative to make deaf children speak. Drawing upon ethnographic fieldwork at a cochlear implantation initiative and an audiology department in Amman, I argue that this imperative must be understood in relation to anxieties about the status of Arabic in Jordan and the historical value of orality in the Middle East. This case shows that more attention must be paid to the role of language ideologies in co-constituting medical encounters between clinicians, parents, and patients.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atmospheres of Change: Virtual Production</title>
<link href="https://hdl.handle.net/1721.1/147235" rel="alternate"/>
<author>
<name>Kamat, Srushti</name>
</author>
<id>https://hdl.handle.net/1721.1/147235</id>
<updated>2023-01-20T04:07:01Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Atmospheres of Change: Virtual Production
Kamat, Srushti
There is much we cannot see about change as it occurs. Virtual production indicated a shift in traditional workflows of filmmaking at a time when the world was going through massive upheaval: the COVID-19 pandemic. What made this case a rich resource for theorizing was that it offered an opportunity to examine change as it occurs. The technology had vast possibilities, even if many of these possibilities were yet to be realized. Drawing from trade and fan publications, I begin by exploring how sound was first introduced to film production in 1930, shedding light on pre-existing representations of production atmospheres. I then turn to contemporary artifacts from 2020 which show how shifts in production pipelines through virtual production were being represented. I conclude by offering a medium-based approach to the LED screen as a way to articulate how technological objects can be mediums for relational exchange. Building upon existing theories of technological frames as developed by social constructivists, I propose an “atmospheric frame.” The atmospheric frame gives rise to situations of negotiation, and these situations are jumping-off points for theorizing around labor, economics, and policies in adopting new technologies. My thesis is meant to be a start and not an end. How do we understand change as it occurs? What could be this ongoing heuristic?
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Technology Management in the Energy Transition: Evidence from the Oil &amp; Gas Industry</title>
<link href="https://hdl.handle.net/1721.1/147234" rel="alternate"/>
<author>
<name>Radelet, Benjamin S.</name>
</author>
<id>https://hdl.handle.net/1721.1/147234</id>
<updated>2023-01-20T03:04:23Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Quantifying Technology Management in the Energy Transition: Evidence from the Oil &amp; Gas Industry
Radelet, Benjamin S.
Technology plays a critical role in how companies manage and strategically reposition during periods of change, including the current Energy Transition to lower-carbon energy. While previous research has indicated associations between technology management and the Energy Transition, the ability to quantify the relationship and its characteristics has been limited by a lack of differentiation in the public data.&#13;
&#13;
This thesis explores the degree to which technology management has shifted during the Energy Transition for twelve representative companies in the Oil &amp; Gas industry. A novel method was developed to differentiate technology patents based on the Cooperative Patent Classification’s Y02–Y04 schema for tagging Climate Change Mitigating Technology (CCMT), resulting in a three-tiered subclassification. Results of this method show that high-value innovation in the Oil &amp; Gas industry can be categorized, on average, as 89.4% Incremental Energy, 8.3% Sustaining CCMT, and 2.3% Disruptive CCMT.&#13;
&#13;
Next, this study utilized the differentiated patent data to perform Spearman rank-order correlation analysis to establish the association between technology trends, corporate R&amp;D metrics, net sales, and oil price. Findings show positive correlation between Disruptive CCMTs and both Sustaining CCMTs (rₛ[202] = .55, p &lt; .001) and Total R&amp;D Patenting (rₛ[202] = .49, p &lt; .001), indicating internal R&amp;D spillover between teams.&#13;
&#13;
Finally, the differentiated CCMT patent data and the correlation analyses were evaluated alongside global patenting trends to assess the rate of technological change in the Energy Transition. The findings indicate that the Oil &amp; Gas industry has produced high-value innovations on par with the broader Energy Transition, exhibiting an Average Annual Growth Rate of 24.9% for Disruptive CCMTs and 21.4% for Sustaining CCMTs compared with an average of 24.6% for Global CCMTs. The findings also highlight an ongoing period of transition with indications of future demarcation in technology strategies. As a result of these investigations, suggestions have been identified for future research.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformer-Maze</title>
<link href="https://hdl.handle.net/1721.1/147233" rel="alternate"/>
<author>
<name>Heuser, Annika</name>
</author>
<id>https://hdl.handle.net/1721.1/147233</id>
<updated>2023-01-20T03:01:38Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Transformer-Maze
Heuser, Annika
Psycholinguists study online language processing to gain insight into both the different mental representations of various sentence types and the computational resources required to build those representations. Psycholinguists have a number of tools available to them, the most prevalent being eye-tracking and self-paced reading (SPR). However, a lesser-known tool called the Maze task, more specifically G(rammatical)-Maze, is arguably a better choice for detecting and localizing differences in processing difficulty from word to word. In G-Maze, a participant must choose between each successive word in a sentence and a distractor word that does not make sense given the preceding context. If a participant chooses the distractor as opposed to the actual word, then the trial ends and they may not complete the sentence. Like SPR, G-Maze can be cheaply run on a crowdsourcing platform, but it does a better job of localizing effects and filtering out noisy data. Still, the effort required to pick contextually inappropriate distractors for hundreds of words might cause an experimenter to hesitate before picking this method. Boyce et al. (2020) remove this hesitation with A(uto)-Maze, a tool that automatically generates distractors using a computational language model. In this thesis, we introduce the next generation of A-Maze: T(ransformer)-Maze. Transformer models are the current state of the art in natural language processing, and thousands, pretrained in a variety of languages, are freely available on the internet, specifically through Hugging Face’s Transformers package. In our validation experiment, T-Maze proves itself to be as effective as G-Maze with handmade materials run in a lab. We are excited to provide psycholinguists with a new tool that allows them to easily gather high-quality online sentence-processing data in many different languages.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An experimental multi-model approach to instrument the sensemaking process at the team-level</title>
<link href="https://hdl.handle.net/1721.1/147232" rel="alternate"/>
<author>
<name>Vazquez Rodarte, Ignacio Salvador</name>
</author>
<id>https://hdl.handle.net/1721.1/147232</id>
<updated>2023-01-20T03:31:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">An experimental multi-model approach to instrument the sensemaking process at the team-level
Vazquez Rodarte, Ignacio Salvador
Just like Julius Caesar’s Gaul, any engineering challenge can be divided into three parts: (1) the problem, (2) the solution, and (3) the design spaces. The interaction between solution and design, and the degree of influence that any given team has upon them, will depend on the capacity of said team to make sense of the problem. This thesis presents a framework to evaluate the process of team-level sensemaking: how a small group of individuals shows emotion, converses with each other, and interacts with the engineering problem at hand. This integrated view is tested with a small-n experiment to demonstrate the possible insights and data that can be generated and analyzed. As a contribution to collective intelligence and teamwork, the ability to objectively judge a team’s performance (via Pareto ranks), measure the conversation dynamics (using graph theory and voice recognition), assess the average emotional content displayed by the team members (using facial recognition), and estimate team entanglement (with physiological signals captured by smartwatches) gives a deep dive into each team’s sensemaking process.&#13;
&#13;
Pending reproduction of the experiment, this first iteration seems to indicate that emotions play a role in a team’s motivation to perform, as does the timing of the conversations the team has. Also, there are heuristics that emerged from the teams when they had to judge which of their proposed in-game designs was better —even though none of them actually met the requirements.&#13;
&#13;
The work presented in this thesis is the enactment of one specific sensemaking framework, Weick’s seven properties, applied to a bounded system where the problem under analysis is fully understood and the teams operate in a game bubble in which they all have access to the same, yet purposefully limited, information. And just like the participants in this experiment, this thesis might not have crossed the finish line set by its goals, but it certainly moved closer to it.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Mixed-Methods Approach to Force Estimation in Military Operations Other Than War</title>
<link href="https://hdl.handle.net/1721.1/147231" rel="alternate"/>
<author>
<name>Rippy, Julian T.</name>
</author>
<id>https://hdl.handle.net/1721.1/147231</id>
<updated>2023-01-20T03:36:07Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">A Mixed-Methods Approach to Force Estimation in Military Operations Other Than War
Rippy, Julian T.
This thesis presents a new method for estimating force size and composition for Military Operations Other than War (MOOTW). While military planners have tools for planning these kinds of operations, they are largely inaccessible or unsuitable for civilian use. The most common tool for force estimation in MOOTW, force ratios, is inaccurate and based on questionable assumptions. The new method presented here, operational inference, is a mixed-methods approach which uses a multivariate distance measure to determine which military operations are similar to each other. Using this information, a researcher can identify similar cases for focused comparison, allowing for both qualitative and quantitative improvements in force estimates.&#13;
&#13;
The utility of the method is demonstrated for two separate forms of MOOTW. It is applied to humanitarian military intervention by estimating a force for a hypothetical EU intervention in Libya. It is then applied to noncombatant evacuation operations by estimating the forces required for the American evacuation of Afghanistan in August 2021, showing its ability to mimic real-world decision-making. The method produced estimates that were more accurate than those produced by force ratio methods, and in both cases the method and the campaign analysis it enabled were able to answer important, policy-relevant questions.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sea Level Rise and Commercial Office Markets in Southeast Florida</title>
<link href="https://hdl.handle.net/1721.1/147230" rel="alternate"/>
<author>
<name>Salvatori, Katherine G.</name>
</author>
<id>https://hdl.handle.net/1721.1/147230</id>
<updated>2023-01-20T03:20:54Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Sea Level Rise and Commercial Office Markets in Southeast Florida
Salvatori, Katherine G.
Sea level rise is an indisputably mounting predicament whose consequences are especially acute in Southeast Florida. In this thesis, we explore the impacts of sea level rise risk on commercial office markets in Miami-Dade County. We examine 560 commercial office properties with sale price records from 2000 to 2020, and 497 commercial office rental properties from the first quarter of 1988 through the fourth quarter of 2020. For both sales and rental properties, we analyze each sample comprehensively, then we isolate the respective samples first by historic flood amount and then by flood risk metrics. We conclude by segregating properties in high-risk areas by historic flood amount to eliminate property location as a confounding variable.&#13;
&#13;
Our results suggest that properties that have historic exposure to flooding from either or both major recent hurricanes, Katrina in 2005 and Irma in 2017, have lower sales prices and rental values when compared to properties that have not experienced historic hurricane flooding in comparable flood risk zones. Our results also indicate that generally, commercial office properties that are more concentrated near waterfront areas have experienced greater historic flooding and have larger predicted flood risk than properties farther inland.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Computer Vision in Evaluating the Effects of New Housing Projects</title>
<link href="https://hdl.handle.net/1721.1/147217" rel="alternate"/>
<author>
<name>Thung, You Xuan</name>
</author>
<id>https://hdl.handle.net/1721.1/147217</id>
<updated>2023-01-20T03:42:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Applications of Computer Vision in Evaluating the Effects of New Housing Projects
Thung, You Xuan
Cities are laden with visual clues. Tapping into the large volume of street view imagery (SVI) made available in the last decade, we investigate how modern computer vision tools can characterize the visual quality and linguistic diversity of cities, and we leverage these novel metrics to study the impact of new housing projects.&#13;
&#13;
Streets form a public space, and how they look plays an important role in shaping how walkable they are, how safe people perceive them to be, and the general quality of living in the urban environment. To provide useful metrics to quantify the quality of streets, we construct a scalable process with state-of-the-art machine learning models to generate second-order metrics which capture both physical and perceptual features in an urban environment. Recognizing that the abundance of linguistic features littered across streetscapes gives us clues about underlying individual and social preferences, we also seek to quantify the linguistic diversity in cities. To that end, we construct a language detection model supporting English, Swedish, Arabic, and Chinese that outperforms existing optical character recognition (OCR) tools. We evaluate visual interpretability with gradient-weighted class activation maps (Grad-CAM) and find that our model is both accurate and interpretable. We apply these tools to our case study of Stockholm and find intuitive spatiotemporal characterizations of the city. We also advance the application of these metrics by using them in a difference-in-differences (DID) setting to study the effects of newly completed housing projects on the built environment. We find that these projects generate spillover effects, as evident in the increase in enclosure and linguistic diversity in their immediate surroundings.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecture Evaluation for Extended Reality Devices</title>
<link href="https://hdl.handle.net/1721.1/147216" rel="alternate"/>
<author>
<name>Soto, Pedro</name>
</author>
<id>https://hdl.handle.net/1721.1/147216</id>
<updated>2023-01-20T03:55:09Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Architecture Evaluation for Extended Reality Devices
Soto, Pedro
Many technology companies have initiated significant digital transformation efforts for their enterprises to satisfy current business needs and reach larger markets, with investments in Intelligent platforms (Cloud &amp; AI), Intelligent automation (RPA), and the Internet of Things (IoT). VR/AR/MR technologies are also considered part of this digital transformation, specifically through the use of hand gestures, voice, hand controllers, and gaze to control the projection of experiences in the Augmented, Virtual, and Mixed Reality worlds.&#13;
&#13;
There are many product designs and potential manufacturing improvements for creating head-mounted devices. However, the best product architecture and “go-to-market” approach are still unclear to the industry.&#13;
&#13;
Microsoft with HoloLens has a complex architecture with a high-quality immersion experience, focusing on the enterprise and government markets with sophisticated healthcare, manufacturing, and military/defense applications. However, their initial price to market is well above their competitors’. Meta with Oculus has focused their efforts on consumer electronics with gaming applications. Others such as Magic Leap, Samsung with Gear VR, HTC Vive, Google Cardboard, Google Lens, Sony with PlayStation VR controllers, and Apple with AR glasses have interesting product proposals in retail, advertising, education, gaming, and social media environments.&#13;
&#13;
This thesis analyzes product design and functionality strategies for XR head-mounted devices, evaluates a broad range of variables, and suggests a range of architectures that could meet users' future market needs. The result of this analysis is summarized in two main architectures: Spiral #1, the Stable Intermediate Use Case, and Spiral #2, the Future Use Case. Spiral #1 is an excellent option to use as a stable intermediate architecture; it minimizes downside while capturing upside opportunities in size, comfort, and cost until Spiral #2 can be fully implemented.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LEU-HEU Mixed Core Conversion Thermal-hydraulic Analysis and&#13;
Coolant System Upgrade Assessment for the MIT Research Reactor</title>
<link href="https://hdl.handle.net/1721.1/147212" rel="alternate"/>
<author>
<name>Zhao, Yinjie</name>
</author>
<id>https://hdl.handle.net/1721.1/147212</id>
<updated>2023-01-20T03:16:29Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">LEU-HEU Mixed Core Conversion Thermal-hydraulic Analysis and&#13;
Coolant System Upgrade Assessment for the MIT Research Reactor
Zhao, Yinjie
The MIT Research Reactor (MITR) is in the process of converting from the current 93%-enriched U-235 highly enriched uranium (HEU) fuel to low-enriched uranium (LEU, &lt;20%-enriched U-235) fuel, as part of global non-proliferation initiatives. A high-density, monolithic uranium-molybdenum (U-10Mo) fuel matrix is chosen. The fuel element design is changed from 15-plate finned HEU fuel to 19-plate unfinned LEU fuel with the same geometry. The reactor power increases from 6.0 MW to 7.0 MW thermal, and the primary coolant flow rate increases from 2000 gpm to 2400 gpm. Detailed analyses were completed for the initial LEU core with 22 fuel elements, and demonstrated that both neutronic and thermal-hydraulic safety requirements are met throughout equilibrium cycles. An alternative conversion strategy is proposed which involves a gradual transition from an all-HEU core to an all-LEU core by replacing 3 HEU fuel elements with fresh LEU fuel elements during each fuel cycle. The objectives of this study are to demonstrate that the primary coolant system can be safely modified for 2400 gpm operation, and to perform steady-state and loss-of-flow (LOF) transient thermal-hydraulic analyses for the MITR HEU-LEU transitional mixed cores to evaluate this alternative conversion strategy. The primary technical challenge for the 20% increase in primary flow rate with the existing piping system is flow-induced vibration. Several experiments were performed to measure and quantify vibration acceleration and velocity on three main hydraulic components to determine whether higher flow rates cause excessive vibration. The test results show that the maximum vibration velocity is 9.70 mm/s and the maximum vibration acceleration is 0.98 G at the current flow rate of 2000 gpm, with no significant spectral change in the vibration profile at 2550 gpm. Therefore, it can be concluded that the existing piping system can safely support 2400 gpm primary flow operation.&#13;
&#13;
Thermal-hydraulic analysis was performed using the RELAP5 MOD3.3 and STAT7 codes. The MITR transitional mixed-core input models were constructed to simulate the reactor primary system. Two scenarios, steady state and a loss-of-flow transient, were simulated at a power level of 6 MW. RELAP5 results show that during steady state, there is a significant safety margin (&gt; 10 °C) to the onset of nucleate boiling for both HEU and LEU fuel. The maximum core temperature occurs in the HEU fuel in Mix-core 3, where the maximum wall temperature reached was 89 °C. For the LOF transient case, the results show that the HEU fuel element is more limiting than the LEU in transitional cores. Nucleate boiling is predicted to occur only in the HEU hot channel during the first 50 seconds after the pump coastdown. The peak cladding temperatures are much lower than the fuel temperature safety limit of UAlx fuel plates, which is 450 °C. The STAT7 calculation results show that the operational limiting power at which onset of nucleate boiling (ONB) occurs has, in all cases, a significant margin from the Limiting System Safety Setting (LSSS) over-power level. The lowest margin for the LEU element during the mixed-core transition is at Mix-core 7, 11.43 MW, with a 4.03 MW power margin. For the HEU element, the lowest margin during the transition is at Mix-core 2, 8.51 MW, with a 1.11 MW power margin. ONB is always expected to occur at F-plate stripes 1 and 4 for the LEU fuel element and at the side plate for the HEU fuel element, with the HEU element always being more limiting.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Los Delivreros: Labor, Platforms, and Transnational Flows of Information in Latin American Gig Workers</title>
<link href="https://hdl.handle.net/1721.1/147209" rel="alternate"/>
<author>
<name>Reyes-Lopez, Ambar</name>
</author>
<id>https://hdl.handle.net/1721.1/147209</id>
<updated>2023-01-20T03:01:52Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Los Delivreros: Labor, Platforms, and Transnational Flows of Information in Latin American Gig Workers
Reyes-Lopez, Ambar
This thesis investigates the transnational modes of community-building and network formation and how these are instrumental for delivreros (food delivery workers) in New York City to exercise agency, forge their own narrative, and resist platform control through their use of digital social networks and communication technologies. Scholars such as Gray (2019) and Rosenblat (2017) have shown how the gig economy ecosystem is underpinned by long-standing tensions between companies and workers; I argue that migrant delivery workers defy information and knowledge asymmetries by repurposing the technology that has been built as a means for control. Marginalized, misrepresented, or ignored by mainstream media and governmental actors, delivery workers use information and communication technologies to bypass traditional channels, disseminate their own stories, and create community. Overall, my research illuminates how the flow of information through different spaces and times enables delivery workers to construct a place for subversion and negotiation with roles assigned to them by broader socio-political forces.&#13;
&#13;
In the first chapter, building on ethnographic fieldwork in NYC with delivery workers, I examine the relationship between digital technologies and labor in the platform ecosystem. I argue that a way to regulate work and workers within the gig economy is through time uncertainty and gamification. Yet, I also contend that workers use social media platforms as tools for resistance and subversion. To do that, I outline how delivery workers strategize and learn through social media platforms. Much of the literature on platforms and the gig economy has focused on the typically precarious working conditions. By shifting the focus to workers’ concern about their lack of control over their time, I seek to complement these analyses and to understand the different factors and actors that might affect workers’ lives.&#13;
&#13;
In the second chapter, I map how delivery workers communicate and engage collectively both in the physical and the digital worlds. My research reveals two digital platforms that workers use to share information: one that operates inwards (WhatsApp) and another that operates outwards (Facebook). These two forms of communication represent opposite sides of the spectrum between public and private communication as well as ephemeral and permanent information. Delivery workers use Facebook to livestream accidents, upload information about bike robberies, and document their actions. I identify three objectives of livestreaming: it helps workers construct their own narrative, it maintains transnational ties, and it establishes public credibility and reputation. And they use WhatsApp to coordinate, request help, and mobilize with one another in real time. I analyze how public and private means of communication facilitate and constrain social forms of organization. These layers of communication synergize to form a transnational distributed knowledge network and to shape and interpret the collective identity of Latin American delivery workers. Thus, I argue that delivreros’ use of technology provides a unique glimpse into the convergence of social networks, media culture, and social movements within the context of contemporary gig labor and migrant organization.&#13;
&#13;
I conclude my thesis with insights about how delivery workers are adapting older indigenous practices to a context of cities and technology. I argue that migration moves ideas, memories, knowledge, stories, and forms of organization. I finish by thinking about migration as a medium and the way the social forms of organization that I observed in NYC are reminiscent of a long history of self-organized tactics, which have moved along with the delivreros I met. Latin American delivery workers’ experiences in NYC are not unique; rather, gig workers all over the world are undergoing similar organizational patterns and transformations. I strengthen my case by focusing on urban safety and millennial modes of organization; I strive to depict a bigger picture beyond labor, platforms, and worker resistance. Overall, I bridge theories of platforms and labor with media and migration.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capabilities of parity check codes for nonprime alphabets</title>
<link href="https://hdl.handle.net/1721.1/147181" rel="alternate"/>
<author>
<name>Levy, Joseph Elliot.</name>
</author>
<id>https://hdl.handle.net/1721.1/147181</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Capabilities of parity check codes for nonprime alphabets
Levy, Joseph Elliot.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detection of an unknown but repeated signal</title>
<link href="https://hdl.handle.net/1721.1/147178" rel="alternate"/>
<author>
<name>Barbour, David Ramsay.</name>
</author>
<id>https://hdl.handle.net/1721.1/147178</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">Detection of an unknown but repeated signal
Barbour, David Ramsay.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1961; Includes bibliographical references (leaf 80).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oxidation of carbon monoxide on pre-irradiated zinc oxide catalyst</title>
<link href="https://hdl.handle.net/1721.1/147174" rel="alternate"/>
<author>
<name>Itahara, Seiji.</name>
</author>
<id>https://hdl.handle.net/1721.1/147174</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Oxidation of carbon monoxide on pre-irradiated zinc oxide catalyst
Itahara, Seiji.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1960; Includes bibliographical references (leaves 59-60).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The oil film in the short journal bearing</title>
<link href="https://hdl.handle.net/1721.1/147173" rel="alternate"/>
<author>
<name>Ishii, Akira.</name>
</author>
<id>https://hdl.handle.net/1721.1/147173</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The oil film in the short journal bearing
Ishii, Akira.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1960; Includes bibliographical references (leaf 29).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic characterization of thermal death of bacterial spores</title>
<link href="https://hdl.handle.net/1721.1/147171" rel="alternate"/>
<author>
<name>Humphrey, Arthur E.
            (Arthur Earl)</name>
</author>
<id>https://hdl.handle.net/1721.1/147171</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Dynamic characterization of thermal death of bacterial spores
Humphrey, Arthur E.
            (Arthur Earl)
Thesis: M.S., Massachusetts Institute of Technology, Department of Food Technology, 1960; Includes bibliographical references (leaves 97-99).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigations of avalanche multiplication and current oscillations in indium antimonide</title>
<link href="https://hdl.handle.net/1721.1/147170" rel="alternate"/>
<author>
<name>Hurwitz, Charles E.</name>
</author>
<id>https://hdl.handle.net/1721.1/147170</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Investigations of avalanche multiplication and current oscillations in indium antimonide
Hurwitz, Charles E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaves 53-55).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A torque magnetometer for crystalline anisotropy measurements</title>
<link href="https://hdl.handle.net/1721.1/147167" rel="alternate"/>
<author>
<name>Hunt, Robert P.
            (Robert Parrott)</name>
</author>
<id>https://hdl.handle.net/1721.1/147167</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">A torque magnetometer for crystalline anisotropy measurements
Hunt, Robert P.
            (Robert Parrott)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaf 73).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nature &amp; extent of corporate strategic investment from East Asia into U.S. high technology firms</title>
<link href="https://hdl.handle.net/1721.1/147160" rel="alternate"/>
<author>
<name>Ussher, Bernard Donal.</name>
</author>
<id>https://hdl.handle.net/1721.1/147160</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Nature &amp; extent of corporate strategic investment from East Asia into U.S. high technology firms
Ussher, Bernard Donal.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaf 129).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global corporate telecommunication networks : marketing suppliers' strategies</title>
<link href="https://hdl.handle.net/1721.1/147159" rel="alternate"/>
<author>
<name>Valentiny, Dominique.</name>
</author>
<id>https://hdl.handle.net/1721.1/147159</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Global corporate telecommunication networks : marketing suppliers' strategies
Valentiny, Dominique.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references.
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Downsizing the employee workforce : human resource implications and considerations</title>
<link href="https://hdl.handle.net/1721.1/147158" rel="alternate"/>
<author>
<name>Usery, Jerry Craig.</name>
</author>
<id>https://hdl.handle.net/1721.1/147158</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Downsizing the employee workforce : human resource implications and considerations
Usery, Jerry Craig.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaf 107).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Localization of Japanese management in the United States</title>
<link href="https://hdl.handle.net/1721.1/147157" rel="alternate"/>
<author>
<name>Udoh, Tatsuo.</name>
</author>
<id>https://hdl.handle.net/1721.1/147157</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Localization of Japanese management in the United States
Udoh, Tatsuo.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 103-106).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design reuse as a strategy for incremental new product development : a study of software industry</title>
<link href="https://hdl.handle.net/1721.1/147156" rel="alternate"/>
<author>
<name>Upadhyay, Vandana.</name>
</author>
<id>https://hdl.handle.net/1721.1/147156</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Design reuse as a strategy for incremental new product development : a study of software industry
Upadhyay, Vandana.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 62-67).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can Urban Gardening be a Case for Neighborhood Infrastructure Reparation? The Case for Cambridge, Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/147141" rel="alternate"/>
<author>
<name>Halaby, Lamice</name>
</author>
<id>https://hdl.handle.net/1721.1/147141</id>
<updated>2023-01-18T03:26:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Can Urban Gardening be a Case for Neighborhood Infrastructure Reparation? The Case for Cambridge, Massachusetts
Halaby, Lamice
Community gardens are cultivated in many North American cities and play a crucial role in neighborhood revitalization. In Cambridge, Massachusetts, urban food gardens are an important social, cultural, and environmental practice. They have the potential to offer many benefits that shape how we envision and participate in the future of the neighborhood: serve as an instrument to support land banks and community-led land management, as well as enhance the quality of public spaces; incentivize community engagement; influence neighborhoods’ efforts to provide green infrastructure; and, particularly, support the involvement of citizens who are at risk of being isolated from civic participation, such as senior citizens and the unemployed and underemployed. Food gardens, if properly planned, can become an important transitional zoning tool to design new spaces, and also an important social place that fosters civic life (what sociologists call a “third place”). They are not the solution for all urban problems but can enhance the livability of our neighborhoods. This thesis analyzes the potential of urban food gardens in Cambridge, Massachusetts, drawing on the Cambridge Mobile Data Set to better understand communities' priorities as well as what they like and dislike about civic life. Insights for this study are drawn from interviews with senior citizens, particularly community leaders, who work with MIT’s AgeLab to tackle various challenges that seniors face in the city, with a focus on independent-living facilities for retired communities. The author, drawing on interviews conducted virtually with 10 residents of Cambridge Cohouse, a housing development with intergenerational groups who cultivate a community garden, worked with the “lifestyle leaders” (65+) to create a framework to better think of design programs that would facilitate food and “therapy gardens,” and to assess their interest in managing food gardens.
Some of these gardens were also studied to better understand the benefits of “therapy gardens” for encouraging active lifestyles while aging, and thereby improving both mental and physical health. The work highlights the general aspirations of the residents living in retirement communities to remain socially connected and be engaged with their communities, and it shows the value of urban gardens as “third places” that build community. The thesis also sheds light on the different ways by which urban land used for food production has been addressed in municipal plans and incorporated into business practices. Examining gardens in urban neighborhoods has the potential to foster an understanding of the future of a “third place” as crucial to strengthening social infrastructure and civic life; whilst integrating urban gardens in civic participation plans improves social infrastructure and supports civic health and social and environmental ecosystems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Thermal Augmentation and Ultra Dense MEMS-Based Electrospray Thrusters</title>
<link href="https://hdl.handle.net/1721.1/147138" rel="alternate"/>
<author>
<name>Corrado, Matthew Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/147138</id>
<updated>2023-01-18T03:45:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Active Thermal Augmentation and Ultra Dense MEMS-Based Electrospray Thrusters
Corrado, Matthew Nicholas
Ionic liquid electrospray thrusters, a highly efficient form of electric space propulsion, have several advantages over traditional chemical forms of space propulsion as well as competing forms of electric propulsion, including their unique scalability down to extremely small sizes, their use of nontoxic propellants that do not require special storage or pressurization, and their ability to be operated in a bipolar mode, eliminating the need for bulky and complex neutralizers. Electrosprays still lag behind other forms of electric propulsion, such as Hall Effect Thrusters and Gridded Ion Engines, in thrust density, a key figure of merit for propulsion systems intended for small spacecraft that have limited surface area available for propulsion systems. A path forward to ultimately improve electrospray thrust density is proposed, and proofs of concept are tested. Increasing thrust density requires accomplishment of at least one of two feats: increasing the number of ion emission sites per unit area, or increasing the magnitude of current capable of being extracted per emission site. Advances in microelectromechanical systems (MEMS) fabrication techniques have enabled the former, and an ultra-dense silicon-based ionic liquid electrospray thruster with record-breaking emitter density is tested. The densified electrospray thruster is successfully fired, exhibiting emission in the pure ionic regime and performance characteristics comparable to the state of the art. The latter can be achieved by thermally augmenting the current output of an electrospray thruster, leveraging the temperature dependence of propellant properties and fluid mechanics of propellant transport. The applications of such a system are discussed and analyzed, and a prototype for a thermally augmented electrospray thruster is designed and tested, verifying the concept of current augmentation at elevated temperatures.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an Ingestible Fluid Wicking Gastric Electrical Stimulation Platform for Hormone Modulation</title>
<link href="https://hdl.handle.net/1721.1/147137" rel="alternate"/>
<author>
<name>McRae, James</name>
</author>
<id>https://hdl.handle.net/1721.1/147137</id>
<updated>2023-01-18T03:09:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Development of an Ingestible Fluid Wicking Gastric Electrical Stimulation Platform for Hormone Modulation
McRae, James
Dysregulation of the gut-brain axis affects hundreds of millions of people annually, often resulting in motility, autoimmune, mood, and neurological disorders. Colloquially referred to as an “electroceutical,” electrical stimulation of the GI tract for modulation of this axis has been explored as a potential therapeutic for GI motility disorders. Thus far, these methods have utilized invasive implant procedures in order to stimulate the outer muscle layers of the stomach. The development of non-invasive stimulation approaches requires that these systems be in an ingestible form factor that instead stimulates the inner mucosal layer of the stomach. However, stimulation of the mucosal layer remains challenging due to gastric fluid that can disrupt targeted stimulation. In this work we first elucidate and establish the relationship between gastric electrical stimulation (GES) and the production of ghrelin, a hormone associated with hunger, in a pig model. Next, we translate this stimulation approach into a non-invasive capsule system that is then optimized through rapid iteration enabled by 3D printing and an in vitro system replicating the stomach’s mechanical and electrical properties. Finally, inspired by the fluid-wicking skin of the Moloch horridus, we developed and integrated fluid-wicking surface structures into the capsule that can displace fluid in order to mitigate the challenge it poses to targeted stimulation. The optimized capsule was administered in vivo in pigs and demonstrated the ability to modulate plasma ghrelin levels. The developments around the surface structure and properties to enable fluid displacement have broad-ranging applications in low size, weight, and power (SWaP) ingestible mucoadhesive and fluid sampling systems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experiments on the stability of an axisymmetric water jet</title>
<link href="https://hdl.handle.net/1721.1/146934" rel="alternate"/>
<author>
<name>Levine, Andrew.</name>
</author>
<id>https://hdl.handle.net/1721.1/146934</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Experiments on the stability of an axisymmetric water jet
Levine, Andrew.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1964; Includes bibliographical references (leaves 33-35).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The coherence of optical instruments</title>
<link href="https://hdl.handle.net/1721.1/146933" rel="alternate"/>
<author>
<name>Lerman, Steven Harold.</name>
</author>
<id>https://hdl.handle.net/1721.1/146933</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The coherence of optical instruments
Lerman, Steven Harold.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1964; Includes bibliographical references (leaf 59).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mathematical relation between baroreceptor neural activity and arterial pressure</title>
<link href="https://hdl.handle.net/1721.1/146932" rel="alternate"/>
<author>
<name>Lercari, Robert Francis.</name>
</author>
<id>https://hdl.handle.net/1721.1/146932</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Mathematical relation between baroreceptor neural activity and arterial pressure
Lercari, Robert Francis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Includes bibliographical references (leaves 49-50).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reaction kinetics of deuterium-tritium mixtures</title>
<link href="https://hdl.handle.net/1721.1/146928" rel="alternate"/>
<author>
<name>Huss, William Norman.</name>
</author>
<id>https://hdl.handle.net/1721.1/146928</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Reaction kinetics of deuterium-tritium mixtures
Huss, William Norman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1960; Includes bibliographical references (leaf 61).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of environment on shear strength</title>
<link href="https://hdl.handle.net/1721.1/146927" rel="alternate"/>
<author>
<name>Hoyt, Terry S.</name>
</author>
<id>https://hdl.handle.net/1721.1/146927</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Effects of environment on shear strength
Hoyt, Terry S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1960; Includes bibliographical references (leaf x).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies in semiconducting glasses</title>
<link href="https://hdl.handle.net/1721.1/146926" rel="alternate"/>
<author>
<name>Hulst, Jean F.</name>
</author>
<id>https://hdl.handle.net/1721.1/146926</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Studies in semiconducting glasses
Hulst, Jean F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1960; Vita.; Includes bibliographical references (leaf 42).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Segregation control of binary size particles in a comminution device</title>
<link href="https://hdl.handle.net/1721.1/146924" rel="alternate"/>
<author>
<name>Annis, Karen Julia.</name>
</author>
<id>https://hdl.handle.net/1721.1/146924</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Segregation control of binary size particles in a comminution device
Annis, Karen Julia.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1991; Includes bibliographical references (leaves 52-54).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies of the differential spin exchange scattering of K and I.</title>
<link href="https://hdl.handle.net/1721.1/146923" rel="alternate"/>
<author>
<name>Ku, William Hsin Min.</name>
</author>
<id>https://hdl.handle.net/1721.1/146923</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Studies of the differential spin exchange scattering of K and I.
Ku, William Hsin Min.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The operational and manpower impact of the interdivisional run-through agreement on the Union Pacific Railroad</title>
<link href="https://hdl.handle.net/1721.1/146920" rel="alternate"/>
<author>
<name>O'Hara, C. Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/146920</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">The operational and manpower impact of the interdivisional run-through agreement on the Union Pacific Railroad
O'Hara, C. Edward.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1981; Bibliography: leaf 115.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The relationship between railroad work rules and operating plans.</title>
<link href="https://hdl.handle.net/1721.1/146919" rel="alternate"/>
<author>
<name>Morgenbesser, Martin Jay.</name>
</author>
<id>https://hdl.handle.net/1721.1/146919</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The relationship between railroad work rules and operating plans.
Morgenbesser, Martin Jay.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 73-74.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of electrostatic fields on forced convection heat transfer</title>
<link href="https://hdl.handle.net/1721.1/146918" rel="alternate"/>
<author>
<name>Levy, Edward K.
            (Edward Kenneth)</name>
</author>
<id>https://hdl.handle.net/1721.1/146918</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The effects of electrostatic fields on forced convection heat transfer
Levy, Edward K.
            (Edward Kenneth)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1964; Includes bibliographical references (leaves 21-22).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hardware Security with Electromagnetic Side Channels</title>
<link href="https://hdl.handle.net/1721.1/146860" rel="alternate"/>
<author>
<name>Ashok, Maitreyi</name>
</author>
<id>https://hdl.handle.net/1721.1/146860</id>
<updated>2022-12-14T03:32:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Hardware Security with Electromagnetic Side Channels
Ashok, Maitreyi
While much of integrated circuit development over the last few decades has focused on power, performance, and area, hardware security is rapidly gaining prominence as a major consideration during the design process. Particularly, physical side channels that allow reverse engineering of inputs, operation states, and private information must be characterized and protected against. In this thesis, we focus on electromagnetic (EM) physical side channels. EM side channel measurements using a novel quantum diamond microscope are used to protect the integrated circuit supply chain by detecting hardware trojans with high spatial resolution, a wide field of view, and high sensitivity. A hardware trojan detection framework is developed that allows for automated and unbiased detection, using convolutional neural networks instead of principal component analysis for higher accuracy. In addition, the EM side channel is considered as an attack method to gain sensitive data from analog-to-digital converters (ADCs). To this end, we propose a method of protection using conversion randomization that reduces both power and EM side channel leakage with minimal area and accuracy overhead. Due to the long wait times between samples in Internet of Things applications, we are able to make a trade-off between conversion time and security. Finally, we consider the use of better routing and placement to augment EM side channel resilience for general digital circuits and capacitive digital-to-analog converters with no first-order power, area, or performance overhead.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automation of NC Programming with Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/146859" rel="alternate"/>
<author>
<name>Lunny, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/146859</id>
<updated>2022-12-14T03:06:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automation of NC Programming with Artificial Intelligence
Lunny, Michael
With the advent of artificial intelligence (AI) in business operations of various industries in recent decades, manufacturing firms are embracing intelligent, data-driven methods of making their processes more efficient. In particular, AI-driven automation of computer numerically controlled (CNC) programming, the process by which cutting tool and operation parameters governing CNC machines are determined, has potential to yield dramatic benefits to machining companies. Within the context of Midwest-based machining firm Orizon, two approaches to programming automation were developed. Geometry Rule-based Automation of Programming (GRAP) is a rule-based system with the ability to recognize hole and pocket features and automatically create an associated program, albeit a suboptimal one. Deep Learning for Automated Tool Selection (DLATS) is a machine learning algorithm with the ability to select the appropriate cutting tool for a hole drilling process with 32% accuracy, which is over 300 times better than random selection. Motivation, results, and implementation findings for both GRAP and DLATS are presented.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advanced Functionality of Digital Mining Predictive Analytics and Insights Platform</title>
<link href="https://hdl.handle.net/1721.1/146858" rel="alternate"/>
<author>
<name>Sanghani, Kunal</name>
</author>
<id>https://hdl.handle.net/1721.1/146858</id>
<updated>2022-12-14T03:02:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Advanced Functionality of Digital Mining Predictive Analytics and Insights Platform
Sanghani, Kunal
CR Digital is a digital technology business specializing in the development of mining technology software and services. It is a subsidiary of CR Mining, a technology-enabled mining company that enhances customer productivity and performance globally. CR Digital has a goal to be the leader in the mining industry when it comes to providing not only mining tools and equipment but also digital capabilities. CR Digital develops its products to be widely connected in the mining data ecosystem, using open API concepts to drive potential for data interoperability. CR Digital has three main products: a) Titan 3330, a load haul optimization solution that provides real-time payload information, b) Thunderbird, a drill efficiency indicator solution that CR Digital acquired in 2019, and c) GET Trakka, a GET loss detection system that CR Digital acquired in 2020. CR Digital uses Orion, an analytics portal, to display meaningful insights from the data generated by the three products and dashboards that provide a better visual representation of the data.&#13;
&#13;
CR’s high-level goal is to increase revenue for its digital products in the Americas (North America and South America) by increasing the value, especially productivity measured in tons moved per unit time, that its digital offering brings to its customers. CR will increase digital revenue by providing impactful data analytics insights as part of its CR Digital offering, enabling tangible improvements in customer mining operations, and generating substantial value for those customers. CR’s data analytics insights must be deliverable in a scalable manner, in all&#13;
regions of the global mining industry, in particular the Americas. Using Orion, CR Digital has rolled out Titan analytics and will roll out Thunderbird in Q4 2021. GET Trakka integration will happen in Q1 2022 because CR Tech does not have it in the roadmap for 2021.&#13;
&#13;
As a result, as part of CRD’s Analysis and Improvement Service, this project will focus on developing a suite of advanced analytics solutions for Titan (in Q3 2021), using machine learning techniques such as linear regression, logistic regression, classification trees, random forests, neural networks, and/or XGBoost. These machine learning methods will be used to build scalable insights that drive productivity increases for the mines. The project will be deemed successful if the supervisors and operators at the mines are not only able to be proactive in making&#13;
decisions but also able to make real-time decisions while operating the machines. The project will also include analyzing the current business model of the three products and optimizing it to keep the customer experience seamless while also&#13;
generating recurring revenue for the digital business. With predictive analytics, CRD has the potential to add an incremental $9.7 million in annual revenue, and a mine’s expected throughput could increase by $15 million.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiple-frequency antenna arrays</title>
<link href="https://hdl.handle.net/1721.1/146724" rel="alternate"/>
<author>
<name>Lenahan, Terrence A.,
            1941-</name>
</author>
<id>https://hdl.handle.net/1721.1/146724</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Multiple-frequency antenna arrays
Lenahan, Terrence A.,
            1941-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Includes bibliographical references (leaf 53).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the driving-point impedance functions with multiple singularities</title>
<link href="https://hdl.handle.net/1721.1/146721" rel="alternate"/>
<author>
<name>Katz, Mordecai D.</name>
</author>
<id>https://hdl.handle.net/1721.1/146721</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1953-01-01T00:00:00Z</published>
<summary type="text">On the driving-point impedance functions with multiple singularities
Katz, Mordecai D.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1953; Bibliography: leaf 85.
</summary>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analytic investigation of flow and hemolysis in peristaltic-type blood pumps,</title>
<link href="https://hdl.handle.net/1721.1/146719" rel="alternate"/>
<author>
<name>Meginniss, J. R.
            (James R.)</name>
</author>
<id>https://hdl.handle.net/1721.1/146719</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">An analytic investigation of flow and hemolysis in peristaltic-type blood pumps,
Meginniss, J. R.
            (James R.)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1970; Bibliography: leaves 107-109.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grounded-grid power amplifiers,</title>
<link href="https://hdl.handle.net/1721.1/146718" rel="alternate"/>
<author>
<name>Chang, Chen-Tung.</name>
</author>
<author>
<name>Zhang, Tong.</name>
</author>
<id>https://hdl.handle.net/1721.1/146718</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Grounded-grid power amplifiers,
Chang, Chen-Tung.; Zhang, Tong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1945; Bibliography: leaves 81-85.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collective bargaining in the railway industry: progress and problems of the 1960's projected to the 1970's.</title>
<link href="https://hdl.handle.net/1721.1/146716" rel="alternate"/>
<author>
<name>Webb, Herbert Gerald.</name>
</author>
<id>https://hdl.handle.net/1721.1/146716</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Collective bargaining in the railway industry: progress and problems of the 1960's projected to the 1970's.
Webb, Herbert Gerald.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1973; Bibliography: leaves 131-136.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The impact of Boston Little City Halls on city management.</title>
<link href="https://hdl.handle.net/1721.1/146714" rel="alternate"/>
<author>
<name>Goudsmit, Frank.</name>
</author>
<id>https://hdl.handle.net/1721.1/146714</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">The impact of Boston Little City Halls on city management.
Goudsmit, Frank.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1971
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Particle size distributions and stability of aqueous aerosols.</title>
<link href="https://hdl.handle.net/1721.1/146712" rel="alternate"/>
<author>
<name>Seid, Arnold.</name>
</author>
<id>https://hdl.handle.net/1721.1/146712</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Particle size distributions and stability of aqueous aerosols.
Seid, Arnold.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1975; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Strategies for Wide Scale Replacement of Human Inspection with Machine Vision</title>
<link href="https://hdl.handle.net/1721.1/146709" rel="alternate"/>
<author>
<name>Sakerka, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/146709</id>
<updated>2022-12-01T03:17:21Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Evaluating Strategies for Wide Scale Replacement of Human Inspection with Machine Vision
Sakerka, Lauren
A stable and cost-effective workforce is key to manufacturing life-saving medical devices. However, an ongoing global labor shortage is creating national economic challenges and leaving companies with significant workforce shortages, delaying operations and production activities. Additionally, human visual inspections of medical devices are less reliable and effective than new technological inspections with machine and artificial intelligence vision systems. This research explores the efficiency of human visual inspections, the value new technology, such as machine and AI vision, can add, how to lead technological change, and an approach to implementing this change at a medical device manufacturing company.&#13;
&#13;
Specifically, it examines best practices and a specific strategy for identifying machine and AI vision opportunities at a large manufacturing company where quality is extremely important. It also examines strategies to quickly identify improvement areas and build enthusiasm in manufacturing for new technology. Finally, it compares a traditional field visit approach to a data-driven opportunity identification approach. Ultimately, it proposes a data-driven approach using visual tools to communicate opportunities to management in order to gain the buy-in to proceed with these technological improvements.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Short Duration Job Scheduling and Assignment using Staged Mixed Integer Programs</title>
<link href="https://hdl.handle.net/1721.1/146708" rel="alternate"/>
<author>
<name>Michaels, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/146708</id>
<updated>2022-12-01T03:33:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Short Duration Job Scheduling and Assignment using Staged Mixed Integer Programs
Michaels, Christina
As part of large-scale digital transformation efforts, Atlantic Utility’s electric field force recently introduced a mobile work dispatch solution aimed at reducing inefficiencies associated with daily work. The application retired many of the manual, paper-based processes previously employed by field crews and supervisors to complete daily short-cycle (&lt;6 hrs) jobs; it also introduced new capabilities that allow supervisors to review accumulated jobs in their operational region and strategize for their completion. Current operations find supervisors left with a long list of jobs to sift through when attempting to make daily work assignments and when scheduling work for one or more days in the future. Application users must manually identify jobs to schedule or assign while considering the distance to the job, required completion date, duration, and other factors. These factors contribute to the job priority level, but without a simple way to aggregate these considerations into a clear set of prioritized jobs, supervisors are challenged to identify which work packets are highest priority and should be completed first. Daily scheduling and assignment is further complicated by the trade-off supervisors face when determining how to balance reduction of accumulated historical jobs with new jobs coming in at variable rates.&#13;
&#13;
This thesis formulates two proof-of-concept mixed integer programs that perform staged scheduling and assignment of short duration jobs. The objective functions include use of a metric indicative of a job’s total number of days past due or coming due. In this way, the formulations incorporate the real world trade-off supervisors face between historical and newly-created jobs subject to constraints on daily crew availability, increasing their utility as a future in-app aid for supervisors. Results of the scheduling stage over 2- to 6-day planning horizons indicate increased backlog reduction in comparison to naive or random strategies. Variation of user-defined inputs shows the scheduling formulation can be tuned to prioritize either jobs past due or those coming due in greater proportions, subject to the preferences of individual supervisors. When using both scheduling and assignment stages in sequence, results over 1- and 3-month simulated trials show consistently better performance in reducing job accumulation in comparison to historical records observed across operational barns of varying sizes.&#13;
&#13;
These results provide justification for a full operational pilot, and recommendations for how to deploy production-ready algorithms are included in this thesis. The results also suggest that greater improvement in barn operations is possible without assuming increased crew capacity. Use of the staged formulations in the mobile work dispatch solution could introduce greater uniformity in how short-duration jobs across the Atlantic Utility network are prioritized and completed, and may lead to enhanced customer service. These improvements could be realized by incorporating these formulations as an automatic in-app aid for supervisors and field crews. Further, application of the staged approach to workforce allocation can be considered in industries outside utilities, including those that involve logistics and delivery operations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Standardization in a Developing Manufacturing Environment</title>
<link href="https://hdl.handle.net/1721.1/146707" rel="alternate"/>
<author>
<name>Smolinski, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/146707</id>
<updated>2022-12-01T03:08:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Effects of Standardization in a Developing Manufacturing Environment
Smolinski, Stephanie
Growing a small-to-medium sized company while balancing financial health and operational excellence is a topic of significant interest across industries. It is generally accepted that when scaling operations, the practices that led to the success of a small company cannot simply be carried forward. Both people and the environment must adapt to reach success at the next level. &#13;
&#13;
This study examines the three primary challenges associated with the growth of a small, trailer chassis manufacturing company, PRATT Industries. First, the impact of culture and morale were assessed as they relate to employee retention and hiring practices. Second, statistical process control was applied to quality defect data. In conjunction with additional data analysis, this was utilized to identify the most meaningful root cause corrective actions. Lastly, inventory management was explored as it relates to ordering practices and factory floor organization. Underpinning all the above themes is the emphasis on the development of standardized, repeatable processes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytics to Make Hybrid Work, Work</title>
<link href="https://hdl.handle.net/1721.1/146706" rel="alternate"/>
<author>
<name>Tindall, Andrew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/146706</id>
<updated>2022-12-01T03:01:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analytics to Make Hybrid Work, Work
Tindall, Andrew J.
Hybrid work is a coordination problem at heart—how frequently and on which days of the week should hybrid employees come into the office? The COVID-19 pandemic accelerated a remote work revolution and caused the hybrid model—where employees split time between in-office and remote work—to become the norm as employees return to the office in 2022 and beyond. The shift to fully remote work during the pandemic highlighted numerous remote work benefits. To name a few: zero commute costs, more focus time, and more flexibility. The challenge is that remote collaboration is more difficult and time consuming to orchestrate—potentially decreasing innovation. &#13;
&#13;
Acknowledging that remote and in-person work have different, and at many times complementary goals, our study tests whether employee collaboration data can help organizations solve the coordination problem inherent in hybrid work. We find that collaboration data can align work groups to maximize in-person collaboration gains while minimizing the number of days in office per week. We use data to recommend the optimal in-office frequency and find that offices will be 60% under capacity when employees return. Most importantly, we think about offices as networks—the value of being in the office scales non-linearly as users increase. We find that organizations can use collaboration data to model employee networks and appropriately align work communities. Ultimately, we develop a scheduling system that will help stabilize office space demand in 2022 and beyond.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment and Operationalization of Automation in Final Product Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/146705" rel="alternate"/>
<author>
<name>Tresansky, Andrew C.</name>
</author>
<id>https://hdl.handle.net/1721.1/146705</id>
<updated>2022-12-01T03:26:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Assessment and Operationalization of Automation in Final Product Manufacturing
Tresansky, Andrew C.
Clinical trials final product manufacturing is a high-mix, low-volume process that is difficult to automate and consequently highly manual. Furthermore, automation technologies often require substantial capital investments, and it is desirable to evaluate them in silico prior to major capital outlays. This project used Simio (a modeling software) to develop a model of clinical trial autoinjector labeling and packaging, which was used to identify candidate steps for automation. Prototype solutions were then developed for the candidate steps, data were collected from the prototypes, and the results were analyzed using the same Simio model to determine displacement or productivity effects. These economic effects were then analyzed using a discounted cash flow/net present value analysis, and further analyzed using real options analysis. Finally, this project aimed to codify the model-identify-prototype-model process for application to other potential automation projects. This project identified up to $380k of combined positive NPV between two automation projects given certain utilization assumptions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Growth in a Middle-Market Job Shop Environment</title>
<link href="https://hdl.handle.net/1721.1/146704" rel="alternate"/>
<author>
<name>Page, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/146704</id>
<updated>2022-12-01T03:45:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Enabling Growth in a Middle-Market Job Shop Environment
Page, Nicholas
No single playbook can be a universal template for success. Each corporation is a living entity, complete with a history, context, culture, market, and group of people that make it up. Understanding the needs of the business now, with an eye to the goals it seeks to later accomplish, is critical to developing a tailored approach and being an effective leader.&#13;
&#13;
This research explores a series of initiatives and the interplay amongst them, across four key focus areas at PSI Power &amp; Control, a middle-market job shop that assembles custom electrical distribution and control products for industrial consumers. These areas include data models, inventory &amp; supply chain, production operations, and processes &amp; strategic planning. Optimizing for simple, effective solutions to break down barriers to growth led, in part, to the successful development of live, actionable, and interactive data visualizations; predictive inventory management solutions; visual management operational implementations; and a formal new product design process.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simple, sustainable, water straight from the sun - batteryless electrodialysis desalination</title>
<link href="https://hdl.handle.net/1721.1/146702" rel="alternate"/>
<author>
<name>Bessette, Jonathan Tae-Yoon</name>
</author>
<id>https://hdl.handle.net/1721.1/146702</id>
<updated>2022-12-01T03:03:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Simple, sustainable, water straight from the sun - batteryless electrodialysis desalination
Bessette, Jonathan Tae-Yoon
There is a need for reliable, low-maintenance off-grid desalination for drinking water in resource-constrained regions. However, current off-grid desalination systems rely on large solar arrays and battery capacity for sufficient power and energy storage; such systems greatly increase capital costs, operating costs, complexity, and maintenance. Electrodialysis is a flexible technology with significant energy and water efficiency in comparison to other thermal and membrane processes and thus provides a significant reduction in solar array capacity; however, it has not been demonstrated off-grid without significant energy storage. This work proposes and validates a simple, robust, maximal-water-production-rate control scheme that enables batteryless off-grid desalination. The proposed control scheme involves cascade control with an outer PID loop tracking power and commanding flow rate, and a coupled inner model-based control loop that always produces the maximum allowable current and thus the maximum desalination rate for the real-time power. This control scheme is applicable and adaptable to any continuous power system but can be most advantageous in direct-drive variable power situations, such as with solar panels. The controller is extremely simple, computationally efficient, and robust to implement: it relies on two sensors (a flow meter and a conductivity meter), one equation, and a PID controller. We demonstrate and conduct initial validation of this capability in a field pilot using direct-drive photovoltaic batch electrodialysis. We demonstrate a battery reduction of 99.4% from comparable prior art (20 kWh to 120 Wh) on a 2 kWh system at a control speed of 100 milliseconds and utilization of 79% and 91% of total solar energy on two separate days of testing. This control scheme enables significant reduction and even elimination of batteries and is a step towards minimal-maintenance, high-production off-grid desalination.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prototyping of Injection EVA Foam Footwear Midsoles</title>
<link href="https://hdl.handle.net/1721.1/146701" rel="alternate"/>
<author>
<name>Galgali, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/146701</id>
<updated>2022-12-01T03:07:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Prototyping of Injection EVA Foam Footwear Midsoles
Galgali, Amit
During the research and development (R&amp;D) process for new footwear, prototypes of various components are made for functional testing and analysis. Nike makes prototypes at its World Headquarters Product Creation Center (PCC) and also outsources to suppliers at several development centers across Asia (ADC). One important component is the midsole, often made from a type of ethylene-vinyl acetate (EVA) foam called injection-phylon (IP). IP midsole prototyping is currently exclusively outsourced; however, there is a desire for a supplementary internal capability at the PCC. This thesis develops a plan for how the PCC could achieve that.&#13;
&#13;
A study was conducted to test the PCC’s current capabilities with the entire IP development process, benchmark a top ADC partner, and identify any gaps. It can be difficult to make IP parts that meet specifications due to the significant scale and non-uniform nature of how they expand after molding. Dealing with this expansion requires certain steps in the IP process and this study isolated three key aspects: expansion ratio (ER) grading, mold design, and part-making. The PCC then developed mold tools and three sets of midsoles to compare its capabilities in each of these aspects to a mold tool and set of midsoles sent from the ADC partner.&#13;
&#13;
Results showed that the PCC was able to make IP midsoles that met many of the part specifications. The PCC also acquired new mold design, injection molding, and stabilization best practices. An ER grading method was also developed, with initial results showing promise; it should be tested further.&#13;
&#13;
The results also highlighted some gaps and additional research studies that could be conducted to bridge them, including further analysis of these mold tools, testing the ER grading method on more midsole varieties, and thoroughly mapping the PCC’s upstream stakeholders’ requirements. The PCC should then undertake complete IP projects specifically chosen to test its IP capabilities in various midsole design complexities, formulations, development speeds, and quality levels. All of these efforts will help the PCC implement an IP prototyping capability that can effectively supplement existing ADC capabilities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equipment Installation Quality Improvement</title>
<link href="https://hdl.handle.net/1721.1/146700" rel="alternate"/>
<author>
<name>Amlani, Jen</name>
</author>
<id>https://hdl.handle.net/1721.1/146700</id>
<updated>2022-12-01T03:23:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Equipment Installation Quality Improvement
Amlani, Jen
In an Amazon Fulfillment Center (FC), associates work alongside Material Handling Equipment (MHE) to move products and packages efficiently within the building. As new FCs are constructed and launched to meet changing fulfillment demands, it is essential for Amazon and its vendors to ensure the highest quality installation and integration of MHE to minimize operational issues following site launch. To continuously improve the FC construction and launch process, the Amazon Operations Engineering team seeks to optimize MHE installation qualification by improving existing quality processes while decreasing the time and resources required to complete them.&#13;
&#13;
This project launched two separate initiatives to move the needle towards increased quality and lean operation within Amazon’s MHE inspection and qualification process. &#13;
&#13;
The first initiative focuses on prioritizing inspection tasks according to the risk of post-launch equipment failure, using a new unsupervised machine learning approach. Three unique machine learning algorithms were created to connect disparate databases containing equipment inspection tasks and operational equipment failure data. The outputs of each algorithm were compared to understand which model provided the best fit. The new, best-fitting model can be used in the future to identify the highest-impact equipment inspection tasks, and to simplify business decisions on how inspection tasks should be prioritized or throttled (not completed).&#13;
&#13;
The second initiative seeks to improve the equipment inspection tasks themselves, and explores the impact of increased task standardization using three techniques:&#13;
&#13;
• Including photos with examples of good- and poor-quality equipment installations&#13;
• Including descriptive measurements and specific ’how-to’ instructions for inspection tasks&#13;
• Separating tasks based on unique resolution actions&#13;
&#13;
A pilot test was completed to determine the impact of these techniques on vendor inspection outcome accuracy, and results showed that the increased standardization provided benefit to inspection outcomes overall.&#13;
&#13;
Both of these projects introduce novel quality improvements in order to decrease maintenance technician re-work and reduce post-operational equipment failures. This thesis captures the successful techniques that were used in the case study, rationale for why they were chosen, and highlights the general use cases for which these quality improvement methodologies can be applied.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamlining Financial Analysis for Novel Robotics Concepts</title>
<link href="https://hdl.handle.net/1721.1/146698" rel="alternate"/>
<author>
<name>Livingston, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/146698</id>
<updated>2022-12-01T03:38:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Streamlining Financial Analysis for Novel Robotics Concepts
Livingston, Timothy
Over the past decade, e-commerce shipment volumes have risen dramatically, and online retailers, led by Amazon, have invested significantly in robotics technologies to meet capacity needs. Before making these investments, firms consider both operational and financial feasibility to ensure that the technology meets requirements and earns a return on&#13;
investment.&#13;
&#13;
In this work, we present Robotics Concept Modeler (RCM), a software program that streamlines and standardizes the financial analysis process for novel robotics concepts at Amazon.&#13;
RCM augments Amazon’s existing process of sequentially evaluating operational and financial feasibility, a process that has acted as a governor on the company’s pace of robotics innovation.&#13;
&#13;
We outline the needs of the stakeholders involved in the financial analysis process, discuss how those needs shaped our development process, and give an overview of the final functionality of the developed software, including an exploration of the innovative graphical model construction methodology that drives many of RCM’s benefits.&#13;
&#13;
We show that using RCM introduces significant time savings in preparing financial analyses, expands the base of individuals capable of performing this analysis, and reduces ambiguity and variation in analysis structure.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Thread and Analytics Model to Improve Quality Controls in Surgical Stapler</title>
<link href="https://hdl.handle.net/1721.1/146697" rel="alternate"/>
<author>
<name>Hau, Han-Ching Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/146697</id>
<updated>2022-12-01T03:35:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Digital Thread and Analytics Model to Improve Quality Controls in Surgical Stapler
Hau, Han-Ching Elizabeth
Ethicon, Inc. currently collects data in various stages of its supply chain, but the information is fragmented across the end-to-end chain, resulting in a reactive supply chain. This study seeks to understand the data maturity of Ethicon's surgical stapler through exploratory data analysis and experimental data modeling with machine learning techniques in order to provide recommendations on strategies for digital readiness in a medical device and outline potential opportunities digitization can bring. &#13;
&#13;
The goals of this project are:&#13;
1. Enable end-to-end visibility into the current supply chain by building a digital thread for a surgical stapler product&#13;
2. Create visualizations to provide visibility and insight into the existing production process&#13;
3. Use advanced analytics models to identify key components or measurements that affect the product's Force to Fire final quality inspection results&#13;
&#13;
The digital thread and models built laid the groundwork for the Ethicon team to understand the current state of their systems and will be used as the team conducts experiments to further understand the actual devices being built.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework and Analytics for Emissions Forecasting and Planning</title>
<link href="https://hdl.handle.net/1721.1/146696" rel="alternate"/>
<author>
<name>Chiang, Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/146696</id>
<updated>2022-12-01T03:38:52Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Framework and Analytics for Emissions Forecasting and Planning
Chiang, Luke
Amgen has recently committed to achieving 100% carbon neutrality, 40% water reduction, and 75% waste reduction relative to its 2019 baseline by 2027. To help reach these goals, Amgen has taken a science-based approach and has assembled a Sustainability Analytics Team to develop tools based on analytical insights that site leads and executives can use to work towards the 2027 Sustainability Goal. A key business gap identified was the ability to precisely forecast future emissions growth stemming from the long-term growth of the company. &#13;
&#13;
This thesis presents a methodology to develop a framework that breaks down Amgen's emissions profile and utilizes analytics to understand key emissions drivers for long-term growth within a vertical of the framework. The key emissions drivers are then incorporated into an Excel model to build upon and supplement Amgen's current forecasting methods. Using the framework and analysis, drug substance (DS) production within the manufacturing vertical of the framework is used as a case study to demonstrate the validity and value of this approach. The initial hypothesis was that increases in DS production will not materially increase Amgen's carbon emissions. Conducting regression analysis on four facilities with DS plants to find correlations between emissions drivers and energy usage revealed that (1) energy usage with respect to DS production was largely insensitive to changes in production volume for sites with large building areas and (2) as DS production intensifies and requires less space, increases or decreases in DS production may materially impact a site's energy usage. &#13;
&#13;
These learnings were incorporated into an Excel tool to forecast Amgen's carbon emissions against current sustainability plans, to help executives better understand whether Amgen was on pace to reach carbon neutrality by 2027 despite expected business growth and to make strategic decisions to ensure the sustainability goals are met. Moreover, these learnings can help Amgen prioritize sustainability initiatives that would help meet business needs while limiting or even reducing environmental impact. Examples include but are not limited to increasing cleanroom efficiency to reduce fixed energy usage, debottlenecking current processes before building new plants to limit increases in carbon emissions, and utilizing more energy-efficient equipment or processes to reduce variable production energy usage as production intensifies and requires less space.  &#13;
&#13;
Beyond helping Amgen reach their sustainability goals, the methodology for developing a framework and conducting analysis can be utilized by companies across industries to meet their own sustainability goals. The framework and type of analysis can be adjusted for different business activities and needs to develop a holistic model that forecasts a company's emissions and drives strategic decisions to minimize environmental impact.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics and Algorithms in Time of Flight Based Computational Imaging</title>
<link href="https://hdl.handle.net/1721.1/146695" rel="alternate"/>
<author>
<name>Sadhu, Venkata Subhash Chandra</name>
</author>
<id>https://hdl.handle.net/1721.1/146695</id>
<updated>2022-12-01T03:28:57Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Physics and Algorithms in Time of Flight Based Computational Imaging
Sadhu, Venkata Subhash Chandra
Imaging systems work on an interplay of physics, electronics, and algorithms. Designing imaging systems by exploiting all these layers of abstraction can give rise to novel applications and interesting solutions to existing problems.&#13;
&#13;
In this thesis, I will explore this design philosophy in two problems: automatic calibration of cameras that can see around corners, and high-resolution optical measurement of ultrasound. For the automatic calibration project, I will utilize advanced electronics hardware such as femtosecond lasers and Single Photon Avalanche Photodiode (SPAD) detectors, along with Python software tools built for machine learning. For optical ultrasound measurement, I will use relatively simple hardware: solid state lasers, bench-top optics, Field Programmable Gate Arrays (FPGAs), and a synchronised global-shutter CMOS camera.&#13;
&#13;
Both projects are aimed at solving practical problems in their respective areas. Automatic calibration aims to improve image fidelity by algorithmically correcting calibration errors after measurements are taken in cameras that can see around corners. The optical ultrasound project aims to capture medical ultrasound images at very low cost and high resolution using ideas from optics, laser vibrometry, and signal processing.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operations Strategy for Evolving Customer Profiles</title>
<link href="https://hdl.handle.net/1721.1/146694" rel="alternate"/>
<author>
<name>Vangala, Pranav</name>
</author>
<id>https://hdl.handle.net/1721.1/146694</id>
<updated>2022-12-01T03:43:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Operations Strategy for Evolving Customer Profiles
Vangala, Pranav
ResMed is a respiratory medical device manufacturer based in San Diego, CA. Founded in 1989, the company was a pioneer in the field of sleep apnea therapy, using positive airway pressure machines that patients operate at home. ResMed has seen remarkable growth and success across their existing customer base - historically made up of medical equipment suppliers who buy products in bulk, distribute them to patients, and interface with the patient during the therapy setup process. Recently, ResMed has entered sales channels outside of its conventional B2B, medical supplier-focused roots that move it a step closer to the end customer. These channels involve shipping products directly to customers on behalf of medical suppliers and retailers, and taking more responsibility for the therapy start and setup experience. Sales growth in these new channels places a fresh set of demands on ResMed teams across product, operations, sales and beyond. ResMed’s operations strategy needs to adapt to this increased focus on customer-facing sales – this will require changes to people, systems and processes. This research examines how ResMed can succeed in the medical device world when selling directly to consumers and in-store retailers, assesses ResMed’s current stage of development across the functional areas that will contribute to this success, and then proposes methods to close the remaining gaps.&#13;
&#13;
ResMed’s two guiding goals are to ensure that customers can start therapy in a timely manner, and that patients adhere to their treatment regimen to maximize therapeutic benefits. The functional areas that most contribute to these goals are fulfilment and distribution operations; supply chain and manufacturing; and product design for setup. Underpinning success across all of these is a requirement for efficient internal communications to deliver results for the patient. Given these new customer profiles, a restricted global supply landscape and prevailing demand uncertainty throughout the COVID-19 pandemic, ResMed stands to benefit from being more agile in making operational changes that affect their guiding goals.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Playbook - A Novel Approach to Identifying Opportunity for On Machine Measurement and Adaptive Machining Projects</title>
<link href="https://hdl.handle.net/1721.1/146693" rel="alternate"/>
<author>
<name>Higgins, Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/146693</id>
<updated>2022-12-01T03:27:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Playbook - A Novel Approach to Identifying Opportunity for On Machine Measurement and Adaptive Machining Projects
Higgins, Luke
As industries advance into the future of manufacturing, companies are increasingly seeking to bring inspection capabilities as close as possible to machining centers. Technological advancement has now made On Machine Measurement (OMM) and adaptive machining technology a reality for manufacturers. OMM technologies offer many potential benefits, which raises the question: which parts and machines are suitable for these new technologies? Bell (among others) is seeking an answer to this question and wanted to build a framework for considering OMM / adaptive machining deployments within their manufacturing centers. Bell facilities provide a perfect opportunity for testing and deploying advanced manufacturing capabilities given the high precision demanded of aircraft parts, their in-house manufacturing capabilities, and the Bell commitment to pushing the boundaries of the possible.&#13;
&#13;
The Playbook was developed for Bell to provide a method for identification and deployment of OMM and adaptive machining projects. The Playbook utilizes data, Subject Matter Expert (SME) knowledge, and lessons learned, combined with a structured approach, to break down the key elements of successful part and machine selection for OMM and adaptive machining technologies. These parts and machines can then be analyzed using the Playbook to identify the work to be performed for project implementation.&#13;
&#13;
The Playbook was deployed on a test part family for testing and development purposes. The test part family provided feedback for further Playbook development and confirmed the direction in which the Playbook is heading. While OMM and adaptive machining remain novel in the high-tolerance manufacturing environment, the Playbook has assisted Bell in considering the future of these technologies within their manufacturing facilities.&#13;
&#13;
Continuation of research in the field of OMM and adaptive machining technologies allows for the further integration of measurement and inspection into the manufacturing process. The fusion of fabrication and inspection further improves manufacturing system control and reduces the time and money required to fabricate complex parts.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can traditional Japanese companies reinvigorate middle managers to improve their competitive advantages in a world of uncertainty?</title>
<link href="https://hdl.handle.net/1721.1/146692" rel="alternate"/>
<author>
<name>Ikegami, Daisuke</name>
</author>
<id>https://hdl.handle.net/1721.1/146692</id>
<updated>2022-12-01T03:25:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Can traditional Japanese companies reinvigorate middle managers to improve their competitive advantages in a world of uncertainty?
Ikegami, Daisuke
Today's world of uncertainty requires more and more innovation for companies to survive and thrive. Innovation can be realized when middle managers bridge the gap between top management's ideal and frontline staff's reality. This is what Nonaka and Takeuchi call “middle up-down management” in The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Unfortunately, many middle managers, who used to be the engine driving innovation, have been struggling with middle up-down management due to rapid internal and external changes as well as companies’ failure to adapt to those changes by articulating the core role of middle managers. As a result, the companies fail to train, develop, and reward such managers. Hence, it is high time to re-analyze Japanese companies' competitive advantage from the standpoint of middle managers.&#13;
&#13;
My diagnostic analysis based on a survey of nineteen Japanese Sloan Fellows and interviews of twenty middle managers in some traditional Japanese companies reveals five critical root causes contributing to middle managers' struggle with middle up-down management, including 1) deficient articulation of strategic priorities by top management, 2) organizational cultures that prevent middle managers from succeeding at middle up-down management, 3) lack of clarity and articulation of core roles and capabilities of middle managers in an uncertain world, 4) insufficient support systems to build the core capabilities of middle managers, and 5) inconsistent diversity and inclusion policy.&#13;
&#13;
My extensive literature review shows that while the operational management capabilities at which Japanese companies excel remain necessary and important for exploiting existing business, those capabilities alone are not enough to support long-term competitive advantage in a world of uncertainty. Therefore, in addition to operational management, middle managers must enable junior managers to explore, exploit, and export new business models, products, and service ideas, using the core capabilities of coaching, connecting, communicating, removing obstacles together, and storytelling.&#13;
&#13;
The cases of four advanced companies (Microsoft, IBM, Toyota and Itochu) suggest that articulating strategic business priorities, consistent messaging and action by top management to foster a growth mindset and a diverse and inclusive culture, redefining the core role of middle managers, and building support systems for middle managers are critical to a company’s sustainable competitive advantage.&#13;
&#13;
Consequently, I generate six key methods for traditional Japanese companies to reinvigorate middle managers: 1) to reinvent the purpose and mission, 2) to re-articulate strategic business priorities by top management, 3) to foster a growth mindset in an organizational culture, 4) to re-define middle managers’ core roles and capabilities in a world of uncertainty, 5) to develop capability-building support systems for middle managers, and 6) to develop a diverse and inclusive culture.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fulfillment Simulation and Inventory Location Optimization</title>
<link href="https://hdl.handle.net/1721.1/146691" rel="alternate"/>
<author>
<name>Krishnamachar, Anjali</name>
</author>
<id>https://hdl.handle.net/1721.1/146691</id>
<updated>2022-12-01T03:25:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Fulfillment Simulation and Inventory Location Optimization
Krishnamachar, Anjali
For a large, global retailer, the expansion towards digital business has motivated a shift in supply chain strategy. The retailer is in the process of transitioning from traditionally using national hubs for fulfillment to specifically targeting digital demand hotspots as new fulfillment centers. As this strategy is developed, the retailer requires the capability to quickly test out potential network configurations in a simulated manner before implementing these system changes in the distribution network. The retailer would also like to explore how key system parameters, such as inventory placement, can be optimized to improve fulfillment performance throughout the network.&#13;
&#13;
This thesis focuses on two main topics: the improvement and automation of an existing simulation framework used for simulating digital fulfillment and the development of an optimization formulation for recommending inventory positions that will improve fulfillment metrics. We demonstrate a pilot test of both topics by using the improved simulation framework to compare several fulfillment scenarios, including one with inventory positions that are dictated by the developed optimization method. First, the inventory optimization is executed over a historical dataset of orders to find recommended inventory positions. Then, the simulation framework is used to simulate digital fulfillment under two types of scenarios: inventory placed at the recommended positions and inventory placed at the original historical positions. The resulting simulated fulfillment performance is compared between the scenarios and we show that the scenario with recommended inventory positions is able to achieve a 6.7% reduction in unit shipping cost as well as a 25% reduction in the percent of total fulfilled orders that require split shipments. These initial results are promising in their demonstration of potential avenues for improving digital fulfillment via inventory optimization as well as the use of simulation as a tool for digital fulfillment network comparison.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sigma Ratings Case Study</title>
<link href="https://hdl.handle.net/1721.1/146690" rel="alternate"/>
<author>
<name>Fouilland, Gaspard</name>
</author>
<id>https://hdl.handle.net/1721.1/146690</id>
<updated>2022-12-01T03:36:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Sigma Ratings Case Study
Fouilland, Gaspard
Pivoting is an essential step for the majority of successful entrepreneurial ventures. This case study focuses on MIT startup and delta v alumnus Sigma Ratings, a non-credit rating agency founded in 2017 by Gabrielle Haddad and Stuart Jones Jr. It immerses the reader in the company's decision-making process after its business model revealed major flaws, necessitating a pivot to ensure its survival and long-term profitability. This specific story sparks a wider reflection on the constant need for adaptability and the importance of resilience in an entrepreneurial journey.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial Intelligence and Machine Learning Capabilities and Application Programming Interfaces at Amazon, Google, and Microsoft</title>
<link href="https://hdl.handle.net/1721.1/146689" rel="alternate"/>
<author>
<name>Liu, Boyan</name>
</author>
<id>https://hdl.handle.net/1721.1/146689</id>
<updated>2022-12-01T03:00:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Artificial Intelligence and Machine Learning Capabilities and Application Programming Interfaces at Amazon, Google, and Microsoft
Liu, Boyan
With the continuous development of artificial intelligence (AI) and machine learning (ML), cloud-based AI and ML have attracted intense interest in recent years. Cloud-based services and products have become a strategic weapon for giant tech companies. However, each major provider's competitive strategy and focus differ, leading to fierce competition as market share and the competitive landscape shift.&#13;
&#13;
This thesis starts with the overall development of AI and ML and introduces the history and status of cloud-based AI and ML development at technology companies. Then, by examining official websites, open API interfaces, and their documentation, I analyze the internal applications and external ecosystems of Amazon, Microsoft, and Google and compare the three companies' AI and ML platform development strategies. Finally, I predict the development direction of AI and ML platforms, including future business models and growth trends, and analyze these three companies' corresponding platform development strategies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redesigning Marketing for Traditional Chinese Medicine Clinics in China</title>
<link href="https://hdl.handle.net/1721.1/146688" rel="alternate"/>
<author>
<name>Liu, Dahai</name>
</author>
<id>https://hdl.handle.net/1721.1/146688</id>
<updated>2022-12-01T03:36:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Redesigning Marketing for Traditional Chinese Medicine Clinics in China
Liu, Dahai
With China's ongoing development, Chinese people hold increasingly high standards for quality of life and are becoming more concerned about their health. Traditional Chinese medicine (TCM) also has a positive image among Chinese people and has received strong support from Chinese medical regulations, resulting in a rapid increase in the number of TCM clinics in China in recent years. As a result, TCM clinics have a lot of room for growth. However, for historical reasons, TCM clinics do not devote enough attention to marketing, and this lack of awareness leaves them unable to meet Chinese customers' growing demand for TCM. Given this knowledge gap, TCM clinics' marketing initiatives are becoming increasingly vital.&#13;
&#13;
This study summarizes and categorizes TCM clinics and doctors in China to serve as a foundation for future marketing strategies and approaches. Previous TCM clinic marketing focused solely on the consumer side and ignored the doctor side; as a result, this paper's marketing strategy and tactics target both customers and doctors, redesigning existing TCM clinic marketing. First, this study examines TCM clinics' business environment and marketing issues in China, concluding that TCM clinics primarily suffer from unclear marketing strategies, simplistic marketing methods, and a lack of marketing organizations. Second, the marketing environment of TCM clinics is examined, with the conclusion that, in terms of political policy, economy, social environment, and technical means, China is presently friendly to the development of TCM clinics, providing fertile ground for marketing. Finally, TCM clinic marketing strategies and tactics are examined in order to identify key market segments and provide five marketing methods in terms of products, services, marketing channels, prices, and promotions. Embracing new technologies, comprehensively constructing digital TCM clinics, and improving accurate digital marketing to consumers and doctors are a few examples. By redesigning the marketing of TCM clinics in China, this study proposes a complete marketing strategy and set of tactics for TCM clinics. It improves the overall marketing level of TCM clinics, allowing them to run more efficiently. Furthermore, it provides consumers with improved TCM services as well as a better working environment and compensation for TCM doctors.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Data-Driven Decisions to Improve Key Financial&#13;
and Operational Metrics in Semiconductor Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/146687" rel="alternate"/>
<author>
<name>Cubra, Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/146687</id>
<updated>2022-12-01T03:06:46Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automating Data-Driven Decisions to Improve Key Financial&#13;
and Operational Metrics in Semiconductor Manufacturing
Cubra, Chris
Semiconductor manufacturing is a complex, non-linear process. The processing order of wafer lots in a semiconductor fab is determined by thousands of decisions that must be made each day. Each decision impacts the cycle time of a lot, an effect that compounds as the lot passes through up to 700 steps. Operators do not readily have access to the data they need to make optimal decisions. &#13;
&#13;
This thesis focuses on automating data-driven decisions to empower operators to increase their productivity. By acquiring the right data and determining the key business decisions, lots can be prioritized more effectively to improve the fab’s KPIs. &#13;
&#13;
We begin by performing a current state analysis to understand the fab’s performance to date. We then determine the decisions that drive outcomes in the fab. Data is then aggregated to properly inform those decisions. Next, we create a heuristic model that we hypothesize will improve the fab’s performance.&#13;
&#13;
Although not completely optimal, the heuristic prioritization model was found to deliver significant process, performance, and visual management improvements. With the heuristic, lots are properly prioritized 50% more often, reducing cycle time by 3.8 days for a single step in the process.&#13;
&#13;
We conclude this thesis by discussing how to implement an optimized scheduler for the next iteration of improving lot prioritization.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing the Supply Chain Design for Sourcing and Supply of Critical Materials</title>
<link href="https://hdl.handle.net/1721.1/146686" rel="alternate"/>
<author>
<name>Feole, Michelle Angela</name>
</author>
<id>https://hdl.handle.net/1721.1/146686</id>
<updated>2022-12-01T03:21:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optimizing the Supply Chain Design for Sourcing and Supply of Critical Materials
Feole, Michelle Angela
A significant supply disruption occurred in 2019 from a packaging component supply shortage, impacting sites and products globally across the AstraZeneca (AZ) network. Supply to patients continued; however, a team was subsequently created to manage the supply of critical materials. These materials are typically single sourced and used commonly across multiple AZ sites and brands, meaning that a disruption could impact patient supply and AZ revenue across multiple brands. This thesis focuses on providing a framework for evaluating risk and vulnerabilities in the sourcing of the critical material supply chain design, with a focus on primary packaging. With this methodology, users can identify opportunities for developing a more flexible and resilient supply chain. &#13;
&#13;
After analyzing a subset of Stock-Keeping Units (SKUs) and segmenting them based on complexity and criticality, we applied the Time-to-Survive (TTS) and Time-to-Recover (TTR) framework to identify high-risk materials and supply nodes. TTR is the time a supply chain needs to recover after a disruption at a particular node. TTS is the time the supply chain can continue operations based on demand and inventory levels. A TTS/TTR tool was created to index and sort the high-risk materials, supplemented by a process for interpreting the outputs and mitigations. After identifying the areas of risk, we also proposed a method for analyzing the trade-off between dual-sourcing and holding increased inventory by evaluating the potential return on assets (ROA) ratio.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Convertible Trade Credits: a new way of creating value</title>
<link href="https://hdl.handle.net/1721.1/146684" rel="alternate"/>
<author>
<name>Sanabria, Pedro A.</name>
</author>
<id>https://hdl.handle.net/1721.1/146684</id>
<updated>2022-11-30T19:41:14Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Convertible Trade Credits: a new way of creating value
Sanabria, Pedro A.
This paper explores the possibility of a vendor supporting a firm’s growth opportunities in sequential financing rounds in exchange for shares in the firm’s common stock. We started from an initial setup in which a firm has a positive-NPV growth opportunity and easy access to financing (Myers and Read, 2020). We then set up a scenario in which the firm had difficulty raising funding for the same real call option and asked whether it would be advantageous to rely on key vendors to finance it. The answer was affirmative.&#13;
&#13;
We found that the proposed Convertible Trade Credits contract creates value for both parties in the transaction by allowing the firm to realize the growth opportunity (optimal investment policy) while sharing risks and returns with the vendor (diversification). We relied on discrete binomial event trees to price real call options and on the constant dividend growth model to price the firm’s stock under different states of nature. The example we provide assumes a world with no corporate taxes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Replenishment Efficiency Utilizing Unit of Measure and Planogram Settings</title>
<link href="https://hdl.handle.net/1721.1/146683" rel="alternate"/>
<author>
<name>Sidell, Ben</name>
</author>
<id>https://hdl.handle.net/1721.1/146683</id>
<updated>2022-12-01T03:21:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Advancing Replenishment Efficiency Utilizing Unit of Measure and Planogram Settings
Sidell, Ben
Throughout the pandemic, Target has experienced many changes in the retail landscape. Rapid sales growth and a significant transition to online shopping have stressed the limits of their distribution network, exposing some of the weaknesses of the existing supply chain. Despite the industry changes, Target remains committed to improving the guest experience through product availability and customer interaction. The surge in sales has led to large swings in inventory throughout the network. Their adoption of an omnichannel distribution network provides the flexibility to succeed in this environment, but the network is extremely complex, utilizing a variety of stores, distribution centers, and suppliers. Excess inventory at the store creates an unnecessary burden of storage and shelf refills, reducing time for customer engagement. With Target’s customer focus at the heart of their continuous improvement efforts, they are examining the levers at their disposal throughout the supply chain to reduce inventory spillover, such as planogram (POG) design and Unit of Measure.&#13;
&#13;
To help deepen the understanding of how the supply chain settings can be manipulated to unlock better performance, this paper examines product characteristics at a more granular level to identify when and where Target can take action. A categorization technique is utilized to identify which setting is creating the risk of systematic spillover. This technique is then applied to historic data to understand trends and identify opportunities. The paper then defines a method of identifying demand variability and proposes a shelf optimization technique for the stable demand items. The intent of this research is to reduce the systematic risk of inventory spillover, thus enhancing store performance for the guest.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Financial Model to Assist New Therapeutics Development Decision Making</title>
<link href="https://hdl.handle.net/1721.1/146682" rel="alternate"/>
<author>
<name>Wang, Cong</name>
</author>
<id>https://hdl.handle.net/1721.1/146682</id>
<updated>2022-12-01T03:05:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Financial Model to Assist New Therapeutics Development Decision Making
Wang, Cong
Biotech is an industry heavily reliant on a risky research and development process. While biotech projects are most often valued with Discounted Cash Flow (DCF) methods, the risk and the phased development process of the biotech industry make them well suited to a real options approach. This thesis uses the binomial lattice based Real Options Valuation (ROV) technique to evaluate Aduhelm, a recent high-profile failure in the biotech industry, and compares the valuation with Discounted Cash Flow (DCF) and Decision Tree (DT) methods. All three models generated similar overall predictions of the change in Biogen’s market capitalization, but ROV and DT performed better in earlier stages by capturing the option value in the process. None of the models would have automatically predicted Aduhelm’s market failure without the manager changing critical valuation assumptions. Regardless of the valuation model used, managers should apply careful judgment and adjust assumptions as more information is gathered during the development process to ensure optimal investment decision making.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of changes in the investment strategies of real estate funds for multifamily/single-family houses after the pandemic</title>
<link href="https://hdl.handle.net/1721.1/146681" rel="alternate"/>
<author>
<name>Xu, Miao</name>
</author>
<id>https://hdl.handle.net/1721.1/146681</id>
<updated>2022-12-01T03:29:21Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analysis of changes in the investment strategies of real estate funds for multifamily/single-family houses after the pandemic
Xu, Miao
A global economic recession followed the pandemic outbreak, and real estate values fell with it. However, the real estate recession lasted only two months and was followed by a V-shaped rebound. The goal of this thesis is to analyze investment in multifamily/single-family housing in the United States from an institutional investor’s perspective, based on an analysis of the benefits and constraints of the current market and economic environment, as well as a study of how investment strategies have changed since the pandemic began.&#13;
&#13;
The pandemic is a double-edged sword: the recession presents both negative impacts and opportunities. It created social and political restrictions, economic pressures such as a lack of skilled labor and a high unemployment rate, and development issues such as high construction costs. At the same time, it provided opportunities such as low capital costs, a desire for better living conditions, and volatility in other capital markets. This thesis examines these impacts and opportunities through data analysis, considering the timing of multifamily/single-family investments, capital structure, and strategy changes from the institutional investor's perspective. We also take Blackstone Real Estate Income Trust (BREIT) as an example and discuss institutional investors' investment strategies for an in-depth evaluation.&#13;
&#13;
By comparing the investment strategies of real estate funds before, during, and after the pandemic, as well as the strategies of different funds, this thesis identifies the reasons behind the pandemic's impacts on, and opportunities for, real estate funds, and outlines the outlook for investment trends, investment strategies, and fund structures of multifamily and single-family housing after the pandemic. The analysis also has practical significance as the current epidemic situation normalizes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automotive Inventory Delivery Location Optimization</title>
<link href="https://hdl.handle.net/1721.1/146680" rel="alternate"/>
<author>
<name>O'Donnell, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/146680</id>
<updated>2022-12-01T03:47:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automotive Inventory Delivery Location Optimization
O'Donnell, Sean
Automotive supply chains are large, complex networks that are frequently examined for cost-reduction opportunities. While research shows many ways to optimize routing and inventory quantities to save money, it tends to address systems where locations are distant. This thesis identifies cost-reduction opportunities in systems with multiple nearby delivery locations for any part, such as an automotive vehicle assembly plant with a warehouse within a few miles. A mixed-integer linear optimization model was applied to data from Nissan’s Smyrna, TN assembly plant. The model takes advantage of warehouse management cost variability for each part. By optimizing the delivery location for every part to be either the warehouse or the factory, annual warehouse management cost savings of greater than 20% are possible. This result is discussed further, along with ways to successfully implement this type of model at other factories.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agritech Innovations in India</title>
<link href="https://hdl.handle.net/1721.1/146679" rel="alternate"/>
<author>
<name>Jammanahalli Mahesh, Sharan</name>
</author>
<id>https://hdl.handle.net/1721.1/146679</id>
<updated>2022-11-30T19:40:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Agritech Innovations in India
Jammanahalli Mahesh, Sharan
Nearly every industry is innovating rapidly to improve efficiency and adapt to new challenges. However, the pace of innovation in agriculture focused on the needs of smallholder farmers is too slow. Farmers in emerging economies like India face many challenges, including changing weather conditions, a declining water table, soil degradation, rising farm input costs, and declining income.&#13;
&#13;
Agriculture employs about 60 percent of India's population. Hence, it is important to develop technological solutions addressing the needs of smallholder farmers, who account for about 86 percent of farmers in India. Multiple successful startups are helping to address the needs of farmers and other key stakeholders in India. However, adoption is still limited, and further innovations (both process and product) in agri-tech are needed to accelerate the efficiency and growth of the industry. Emerging technologies like blockchain, computer vision, robotics, and artificial intelligence can accelerate the rate of innovation and the scale of adoption. These innovations will help farmers maximize yields while conserving resources, thereby increasing their income and net profit.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study of Livestream Shopping’s Role in the Customer Journey</title>
<link href="https://hdl.handle.net/1721.1/146678" rel="alternate"/>
<author>
<name>Li, Shu Ran</name>
</author>
<id>https://hdl.handle.net/1721.1/146678</id>
<updated>2022-12-01T03:03:34Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Study of Livestream Shopping’s Role in the Customer Journey
Li, Shu Ran
Livestream shopping is a form of commerce and marketing that combines entertainment, interaction, and product introductions. Following live commerce’s boom in China in 2019, many US brands and platforms eagerly started experimenting with this new marketing format, but with lukewarm success at best, even with the impact of COVID-19. To understand why it became so successful in China, apart from external factors such as COVID-19, 5G infrastructure, and smartphone and mobile-payment penetration, an analysis of livestream shopping’s role within the customer journey was conducted through case studies using the AIDAS model. It was found that in pre-purchase stages, when used alongside network targeting and multi-channel marketing, shopping livestreams can be extremely effective in spreading awareness and arousing interest. The key is to leverage learnings from big data to design and recommend content and streamers best aligned with the characteristics of the targeted audience, the platform, and the brand image. Then, because the audience is either already interested in the product or trusts the key opinion leader (KOL), it is easy to stimulate the impulse to purchase with coupons, lowest-price guarantees, scarcity marketing, conformity, and FOMO. In the purchase stage, capabilities that allow customers to easily complete order placement without interrupting the livestream are key to a high conversion rate, which gives e-commerce platforms a natural advantage: they already have access to customers’ payment information, eliminating an extra barrier. While advocacy comes naturally with livestreaming’s social and word-of-mouth nature, satisfaction is the weakest link of this form of marketing, as concerns over quality and after-sales service worry at least 50% of customers who are unwilling to adopt livestream shopping.
While livestream shopping is forecast to grow rapidly in the US over the next few years, it is unlikely to become as mainstream as it is in China, given a post-pandemic mentality driving people to shop offline, lower mobile-payment and 5G coverage, and different shopping habits. Nonetheless, monetary incentives, tailored content, multi-channel marketing, integrated payment and in-live shopping capabilities, and trustworthy after-sales service can drive livestream shopping growth in the US.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying, Characterizing, and Mitigating Wind and Solar Resource Shortages Across the Continental United States</title>
<link href="https://hdl.handle.net/1721.1/146676" rel="alternate"/>
<author>
<name>Hwang, Shannon Y.S.</name>
</author>
<id>https://hdl.handle.net/1721.1/146676</id>
<updated>2022-12-01T03:39:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Identifying, Characterizing, and Mitigating Wind and Solar Resource Shortages Across the Continental United States
Hwang, Shannon Y.S.
Many plans for decarbonizing society envision future electricity systems that are heavily reliant on wind and solar generation. However, wind and solar resources are variable, and supplying energy reliably and cost-effectively requires the use of other technologies. In particular, recent studies suggest that the energy infrastructure capacities required for reliably meeting electricity demand in wind- and solar-heavy electricity systems can be determined by rare periods of particularly low wind and solar power generation. Thus, it is important to better understand the characteristics of such resource shortage events. That understanding can help in developing and evaluating methods, such as energy technology development, transmission infrastructure deployment, and demand response, that can reduce the costs of reliably providing wind and solar energy during shortage events. In this work, we identify renewable energy shortages from 1980–2020 with geographical resolution across the continental United States by simulating the operation of cost-optimal systems that use wind and solar power and energy storage to reliably supply electricity. We then explore the characteristics of the identified shortages and of cost-optimal system operation, and quantify the potential value and limits of approaches that could mitigate the impacts of such shortage events.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capacity Multipliers: Rapidly Scaling Production through Line Balancing and Critical Path Reduction</title>
<link href="https://hdl.handle.net/1721.1/146675" rel="alternate"/>
<author>
<name>Eggleston, Tyler</name>
</author>
<id>https://hdl.handle.net/1721.1/146675</id>
<updated>2022-12-01T03:25:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Capacity Multipliers: Rapidly Scaling Production through Line Balancing and Critical Path Reduction
Eggleston, Tyler
Sustained and rapid growth in manufacturing companies frequently generates challenges in scaling production to meet demand. As demand approaches and exceeds production capacity, the backlog of orders increases and drives up lead times. Long lead times are unattractive to customers and threaten to inhibit continued growth. Companies in this situation need to scale production quickly to sustain that growth.&#13;
&#13;
The researcher hypothesized that shortening the duration of the critical path and balancing resource utilization could rapidly increase the throughput of a capacity-constrained factory. The investigation uses ShopSabre CNC as a case study to investigate techniques including the base-stock inventory system, the kanban inventory system, workflow analysis diagrams, push-pull barrier definition, and division and coordination of labor to reduce the critical path and balance utilization. The effectiveness&#13;
of the techniques is measured by the observed or calculated increase in throughput capacity.&#13;
&#13;
The research finds that all techniques that effectively shorten the critical path or balance utilization, particularly by alleviating the production bottleneck, increase aggregate throughput capacity. The investigation culminates with a simulation of the final assembly, calibration, and test sequence, which demonstrates that doubling the factory footprint while applying the techniques in the investigation yields a five-fold capacity increase.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mobile-Payments in the U.S. and China</title>
<link href="https://hdl.handle.net/1721.1/146674" rel="alternate"/>
<author>
<name>He, Liuning</name>
</author>
<id>https://hdl.handle.net/1721.1/146674</id>
<updated>2022-12-01T03:36:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Mobile-Payments in the U.S. and China
He, Liuning
Mobile payment is a non-cash payment method that uses mobile terminals as a medium. The United States, the world’s most financially advanced country, has lagged far behind China in mobile payment innovation, and the reasons for this have not been well explained in past studies. Using a historical and comparative research approach, this thesis examines the development history and lineage of the payment industry infrastructure in the U.S. and China (the banking industry, the payment clearing system, the bank card industry, and the payment industry as a whole) and identifies four main factors behind the differences between the two countries’ mobile payment industries. First, the high concentration of China’s banking industry and the singularity of its payment clearing system lowered the threshold for mobile payment startups to expand their business in the early stage. Second, the relative backwardness of China’s Internet and bank card industries led the Chinese to move directly from cash payments to electronic payments, and the lack of supporting hardware spawned innovation in QR code payments. Third, mobile payment is a natural multi-sided platform model, and the endowment of Chinese innovators in the e-commerce and social user "sides" plays a decisive role in the direction of innovation and the probability of success. Fourth, fees in the Chinese payments chain are much lower than in the U.S., reducing the incentive for stakeholders to impede industry change and objectively facilitating mobile payment innovation and development.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Analytics for Improved Supply Chain Operations</title>
<link href="https://hdl.handle.net/1721.1/146673" rel="alternate"/>
<author>
<name>Muller, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/146673</id>
<updated>2022-12-01T03:37:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Leveraging Analytics for Improved Supply Chain Operations
Muller, Alexander
Decision making under ambiguity is a central challenge for supply chain organizations. Key obstacles to optimal decisions include insufficient data, opaque and complex processes, and uncertainty about the future. Traditionally, organizations have relied upon internal heuristics and “gut feel” to make decisions, with mixed results. This work demonstrates how simple models can be leveraged to provide clarity at operational and strategic levels. Specifically, it focuses on the development and implementation of models at two American Industrial Partners (AIP) portfolio companies: Commonwealth Rolled Products (CRP) and AHF Products (AHF). &#13;
&#13;
CRP is an aluminum rolling mill that produces both common alloy (CA) and automotive body sheet (ABS) aluminum products. The organization has struggled with managing high ingot inventory levels that have led to high holding costs and order delinquency. This work demonstrates how applications of simple flow improvements, the base stock model, and capacity analytics were used to reduce ABS delinquencies by 34%, eliminate ~$13M in stagnant inventories, and scope a strategic expansion with a net present value (NPV) of ~$10M. &#13;
&#13;
AHF is a flooring company that faces a strategic challenge in forecasting the price of its core raw material. There is no currently available commercial forecast, so the procurement team relies on consensus guesses that have been consistently inaccurate. This work builds a novel pricing model that combines linear regression and time series analysis to develop a robust tool that has predicted within 1% of actual prices after four months. This model has been deployed to improve the budgeting and pricing strategy of the company. &#13;
&#13;
We will present in more detail the operational challenges faced by the supply chain organizations of CRP and AHF. Following this discussion, we will examine the model formulations and analytical techniques used to address these problems. Next, we will detail the pragmatic choices needed to create sustainable models that can be maintained by the organization. Finally, we will discuss the results of these models at the respective companies and their future extensions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Algorithm for Target Inventory and the Impact on Replenishment Strategy</title>
<link href="https://hdl.handle.net/1721.1/146672" rel="alternate"/>
<author>
<name>Tsontzos, Lampros</name>
</author>
<id>https://hdl.handle.net/1721.1/146672</id>
<updated>2022-12-01T03:31:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Dynamic Algorithm for Target Inventory and the Impact on Replenishment Strategy
Tsontzos, Lampros
This thesis proposes a novel dynamic replenishment algorithm that minimizes total inventory and maximizes customer experience by avoiding stockouts. The algorithm achieves this by optimally setting the days of coverage for every item in every store on a given day, based on a number of features. Traditional approaches rely on the base stock model, calculating cycle stock from the demand forecast and safety stock from demand variability combined with service levels set by the business. This thesis focuses on non-traditional approaches that do not rely on the base stock model, which makes it difficult to dynamically set optimal target stock levels.&#13;
&#13;
The proposed solution is an optimization-based heuristic that calculates the optimal days of coverage for a given item-store-date combination by combining the existing static estimate for days of coverage with a number of features designed to capture customer behavior, store characteristics, and item attributes. The optimized days of coverage metric is then combined with Zara’s demand forecast for each item to compute target stock levels. Finally, the new target stock values are run through a simulation to understand the impact on stockouts and total inventory.&#13;
&#13;
The heuristic is compared with the baseline approach currently used by Zara and outperforms it by 2.2%. This translates to a 2.2% reduction in total inventory with minimal (&lt;0.1%) impact on stockouts. The heuristic consistently outperforms the baseline across a wide range of scenarios in different countries, departments, and time frames. Features such as item importance, price, and model type are highly predictive of future demand and help set days of coverage closer to optimal. This heuristic demonstrates that incorporating additional data and calculating target stock dynamically has the potential to enhance inventory replenishment processes at retailers using non-traditional policies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Attraction of China's Deep Tech Entrepreneurial Ecosystem for Chinese STEM Ph.D. Students Studying in The United States to Start Their Own Businesses Back Home</title>
<link href="https://hdl.handle.net/1721.1/146671" rel="alternate"/>
<author>
<name>Wang, Chongyang</name>
</author>
<id>https://hdl.handle.net/1721.1/146671</id>
<updated>2022-12-01T03:43:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Attraction of China's Deep Tech Entrepreneurial Ecosystem for Chinese STEM Ph.D. Students Studying in The United States to Start Their Own Businesses Back Home
Wang, Chongyang
In today's era, deep-tech innovation capability has increasingly become an important source of national competitiveness. Against the background of geopolitical competition between China and the United States, emerging technologies such as AI, robotics, biotech, and semiconductors have become decisive in great-power competition and future development. At the same time, the United States still leads the world in basic science and technology. A large number of Chinese students study STEM in the United States, and some are considering using these technologies to start businesses in China. Chinese governments, from the central to the local level, are providing policy support for such returning-student entrepreneurial projects, and China's venture capital firms are also keen to invest in them. For Chinese STEM students in the United States, whether to return to China and whether to start a business are major decisions. On the one hand, the United States has better academic soil and a better innovation environment; on the other hand, in terms of entrepreneurship, returning to China offers a higher upper limit and a higher possibility of entrepreneurial success, with the support of the government and VCs.&#13;
&#13;
Through this research, I analyze the attractiveness of China's current deep-tech industry for students considering returning to China to start businesses, from the perspectives of China's government, venture capital, and society.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managing Diversity in the Modern European Workplace</title>
<link href="https://hdl.handle.net/1721.1/146670" rel="alternate"/>
<author>
<name>Bründermann, Hendrik</name>
</author>
<id>https://hdl.handle.net/1721.1/146670</id>
<updated>2022-12-01T03:01:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Managing Diversity in the Modern European Workplace
Bründermann, Hendrik
Diversity in society and the workplace is consistently increasing with globalization and internationalization. This brings new challenges and opportunities for organizations adapting to the new environment, as internal heterogeneity in teams and organizations can lead to communication issues and conflict. Externally, organizations simultaneously need to change their approach towards diverse customers and markets.&#13;
&#13;
This stems from people's inherent tendency to be drawn towards others who share attributes with themselves. Such behaviour often leads to the formation of in-groups and out-groups, which frequently causes friction and, consequently, loss of performance. Nonetheless, it is clear that diverse companies perform better, because individuals from various backgrounds offer new perspectives, leading to increased creativity and innovation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Carbon Allocation Methodology across Multiple Business Teams and Activities with Interdependencies</title>
<link href="https://hdl.handle.net/1721.1/146669" rel="alternate"/>
<author>
<name>Ogawa, Mariko</name>
</author>
<id>https://hdl.handle.net/1721.1/146669</id>
<updated>2022-12-01T03:28:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Building a Carbon Allocation Methodology across Multiple Business Teams and Activities with Interdependencies
Ogawa, Mariko
To prevent the negative effects of climate change, companies around the world are setting and committing to net-zero carbon targets. Achieving this goal comes with operational challenges, e.g., establishing a standardized method to hold internal business teams accountable for their carbon emissions, and empowering individual teams to decarbonize. Especially for large companies with multiple interdependent business teams and functions, allocating the carbon emissions arising from business activities and decisions is complex and not straightforward. &#13;
&#13;
Amazon announced the Climate Pledge in 2019 and committed to achieving net-zero carbon emissions by 2040 by physically decarbonizing its business activities and offsetting residual emissions. Amazon’s supply chain is complex, which creates many interdependencies among internal business teams. These teams often share responsibility, both internally and externally, for the emissions of a single asset or decision. This project aims to develop a carbon allocation methodology that allows those business teams to understand their contribution to carbon emissions, informing their incremental decarbonization strategies and the cross-business collaboration needed to accelerate physical decarbonization. We focus on transportation businesses within Amazon, create multiple use cases and allocation logics using available activity data, and then recommend a way to scale the logic to non-transportation businesses such as buildings, devices, and servers.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cloud Service Strategies and Competition in the Chinese Market Among Major Technology Companies</title>
<link href="https://hdl.handle.net/1721.1/146668" rel="alternate"/>
<author>
<name>Li, Sipei</name>
</author>
<id>https://hdl.handle.net/1721.1/146668</id>
<updated>2022-12-01T03:22:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Cloud Service Strategies and Competition in the Chinese Market Among Major Technology Companies
Li, Sipei
Regardless of the area of their core business -- marketplace, search engine, gaming, or social networking -- many major technology companies find cloud services a new area of growth. Amazon launched Amazon Web Services in 2006, followed by Microsoft’s Azure in 2010. On the other side of the world, a similar story unfolded in China: Alibaba launched its cloud services in 2008. This thesis first analyzes why these three giant technology companies all chose to enter the cloud service market and how their strategies differ. Based on their capabilities, the companies focus on different mixes of IaaS, PaaS, and SaaS. And rather than being single product providers, most chose to build product platforms that integrate internal resources and external partnerships. The thesis also discusses how cloud services affect each company’s financial performance.&#13;
&#13;
The Chinese cloud market is projected to grow at a stunning speed, revealing huge potential. At the same time, competition among the players has been fierce. This section focuses on the strategies of the three companies in this competition. The analysis starts with a general market overview and then examines each company's marketing strategies and competencies in this particular market. The thesis also includes recommendations for the companies as well as the future outlook of the market.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biodiversity and Business: who will save whom?</title>
<link href="https://hdl.handle.net/1721.1/146667" rel="alternate"/>
<author>
<name>Destailleur, Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/146667</id>
<updated>2022-12-01T03:22:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Biodiversity and Business: who will save whom?
Destailleur, Marie
Environmental concerns have become a central challenge for business, but they are far too often reduced to the climate question. Another crisis is looming, often described as the “sister crisis” of climate: biodiversity. This thesis explores how business and biodiversity are interdependent and can sustain each other. First, it establishes that biodiversity will save business: biodiversity provides the necessary conditions for conducting business through ecosystem services, and it also provides resources for innovation through biomimicry. Second, the thesis highlights how business can save biodiversity by accurately measuring and managing its impact on nature. Finally, the thesis explores the intersection of a nature-based and a positive economy, and the changes needed for the emergence of companies that are simultaneously from and for nature.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sequential Optimization for Prospective Customer Segmentation and Content Targeting</title>
<link href="https://hdl.handle.net/1721.1/146666" rel="alternate"/>
<author>
<name>Groszman, Kenny</name>
</author>
<id>https://hdl.handle.net/1721.1/146666</id>
<updated>2022-12-01T03:26:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Sequential Optimization for Prospective Customer Segmentation and Content Targeting
Groszman, Kenny
ResMed is a global leader in medical devices for the treatment of obstructive sleep apnea (OSA). Due to the high prevalence and underdiagnosis of OSA, a key pillar of ResMed's business strategy is to increase awareness of the disease and encourage treatment. This work seeks to optimize an emerging OSA awareness channel for ResMed: online paid advertising. Specifically, a sequential optimization approach (batched sequential model-based algorithm configuration, or B-SMAC) is developed to automatically and intelligently target online advertisements through iterative batch experimentation. The result, verified through simulation and field experiment, is the maximization and characterization of ad performance over a search space of 960 mutually exclusive customer segments. Further, re-aggregation methods are developed and tested in order to transform the outputs of B-SMAC into an economically viable targeting strategy for an online ad platform, leading to improved ad effectiveness when compared to baseline strategies. These results are a proof-of-concept for sequential optimization-based ad targeting and represent a promising future direction for increasing the number of patients entering ResMed's diagnostic funnel and receiving life-altering OSA treatment.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to Improve the Performance of M&amp;As: from the Cultural Clash Perspective</title>
<link href="https://hdl.handle.net/1721.1/146665" rel="alternate"/>
<author>
<name>Cheng, Cheng</name>
</author>
<id>https://hdl.handle.net/1721.1/146665</id>
<updated>2022-12-01T03:26:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">How to Improve the Performance of M&amp;As: from the Cultural Clash Perspective
Cheng, Cheng
Entrepreneurs and founders are often interviewed on culture-related topics, especially when their organizations are undergoing strategic expansion or transformation. Because culture is sensitive to organizational change, corporate culture has been a long-debated concern over decades: organizational evolution may change existing cultural attributes and lead to positive or harmful effects. In parallel, as mergers and acquisitions (M&amp;As) have grown into one of the most desirable strategies amid technological and economic deepening, more companies choose to purchase or merge with existing companies to increase market penetration rather than bear the cost of incubation or product innovation.&#13;
&#13;
This thesis intends to provide suggestions to companies interested in M&amp;A activities on how to improve outcomes from a cultural perspective. Drawing on a literature review and data analysis, this paper introduces the definitions of culture and culture clash and explains why they matter in practice. We then reference research published by Marchetti (2019) to present the key factors in cultural compatibility that influence M&amp;A outcomes around the deal announcement and acquirers’ shareholder value. Finally, we use secondary survey results to identify effective ways to improve M&amp;A performance after the deal announcement.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Predictive Model for Pancreatic Cancer Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/146664" rel="alternate"/>
<author>
<name>Xiong, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/146664</id>
<updated>2022-12-01T03:17:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Predictive Model for Pancreatic Cancer Diagnosis
Xiong, Thomas
Pancreatic ductal adenocarcinoma (PDAC), a specific type of pancreatic cancer, has a five-year survival rate of 8.5% and is the third-deadliest cancer in the United States. However, earlier detection can raise survival rates dramatically. In this thesis, we investigate the hypothesis that predictive models from a variety of model classes can use different indicators from electronic health record (EHR) data in order to predict PDAC diagnosis. We find that logistic regression, random forest, and XGBoost models perform the best when using patients’ unique diagnoses, lab test frequencies, medication frequencies, and race and ethnicity as data, with our best logistic regression model achieving an AUROC of 0.801 on a held-out test set. To better approximate these models’ use case in practice, we construct a time-dependent regime for model evaluation. Overall, we found that model performance decreased in the time-dependent regime as compared to the time-independent regime, suggesting the possibility of concept drift in our dataset. Moreover, through ℓ₀ regularization, we found that lab test frequencies tended to be the most important features in the best logistic regression model. The intended use for our deployed model is to serve as a prescreening tool to deliver an enriched population for further targeted PDAC screening. Our best model for this purpose delivers a sensitivity of 0.46 at a specificity of 0.9. According to our medical collaborators, this combination of sensitivity and specificity qualifies this model as suitable for our intended prescreening use. In this context, the ability of our model to work only with information derived from electronic health records, collected as part of routine medical care, is a significant advantage. We describe the steps taken to begin model deployment into an existing federated EHR database.
In this scenario, we envision that our model would be integrated into hospital EHR systems and routinely and automatically run over broad patient populations as EHR data is collected, producing a history of patient risk scores as data becomes available. Patient selection for further targeted PDAC screening can then consider both absolute scores and their evolution.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Application of Lean Processes Enhanced by Digital Archiving in Precision Subtractive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/146663" rel="alternate"/>
<author>
<name>Borchik, Daniel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/146663</id>
<updated>2022-12-01T03:38:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Exploring the Application of Lean Processes Enhanced by Digital Archiving in Precision Subtractive Manufacturing
Borchik, Daniel J.
Subtractive precision manufacturing produces components for many industries: aerospace, astrospace, defense, biotech, medical device, and more. This paper discusses the effects of applying lean principles, enhanced by digital archiving, to subtractive precision manufacturing. An analysis of the subtractive manufacturing process and relevant subprocesses was conducted, and two opportunities for intervention, fixturing and cut plans, were identified. The introduction of lean, supported by a digital archive, at a representative company led to significant efficiency gains. The paper concludes by discussing additional opportunities for future efficiency gains and research.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Price Competition Reduction Strategies in Chinese B2C&#13;
E-Commerce Markets: A Case Study</title>
<link href="https://hdl.handle.net/1721.1/146662" rel="alternate"/>
<author>
<name>Liu, Kaiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/146662</id>
<updated>2022-12-01T03:10:19Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Price Competition Reduction Strategies in Chinese B2C&#13;
E-Commerce Markets: A Case Study
Liu, Kaiwen
The internet has transformed the way people search for products and significantly reduced search costs. This paper reviews past literature on price competition theory and examines the competitive landscape of major B2C firms in China’s e-commerce industry, where online retail markets have seen exponential growth over the last two decades. It shows that firms in Chinese online retail markets face heightened price competition due to reduced search costs for consumers and minimal differentiation in product offerings. Evidence shows that firms around the world practice price fixing and price obfuscation to reduce price competition for higher profits. This paper cites several examples from Tmall and JD to argue that both the platforms and the individual retailers selling on them implement innovative methods of price obfuscation with the goal of limiting customers’ ability to compare prices and thereby reducing price competition. Some of these anti-competitive practices are commonly seen both in China and in other parts of the world, while others are unique to the Chinese online marketplace. Finally, the paper proposes several approaches that regulators could consider to prevent price increases and protect consumer welfare.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Decision-Making Framework for Carbon: Incorporating Carbon into Optimized Business Objectives</title>
<link href="https://hdl.handle.net/1721.1/146661" rel="alternate"/>
<author>
<name>Wyler, Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/146661</id>
<updated>2022-12-01T03:26:33Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Developing a Decision-Making Framework for Carbon: Incorporating Carbon into Optimized Business Objectives
Wyler, Paige
Carbon emission reductions have been at the center of decision-making for companies across industries, particularly the automotive industry. As electric vehicles are introduced to the market, there is great potential for reducing emissions associated with transportation. However, the process of building and using these vehicles is itself carbon intensive.&#13;
&#13;
This thesis explores mechanisms to enable effective decision-making to meet carbon reduction goals. Carbon accounting is a new concept for most original equipment manufacturers, and they lack a framework for weighing the importance of carbon alongside competing business objectives. Additionally, most mechanisms used in other industries require a high level of data availability and knowledge management that may not exist in a dynamic, fast-growing company launching its first products. &#13;
&#13;
In this thesis, we test carbon decision-making mechanisms in a high-growth business environment. A framework is developed for carbon decision-making, and the use of a carbon price is tested for specific business use cases by adding this cost to the objective function of a cost-optimized battery storage system. Using the results from these use cases, recommendations are made for applying carbon pricing to future decisions as knowledge management infrastructure evolves. This work provides a model for decision-making in a high-growth environment and creates clear conditions under which carbon pricing can effectively impact decisions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Industrial Internet of Things Architecture and Business Strategy for Digital Substation Asset Management</title>
<link href="https://hdl.handle.net/1721.1/146660" rel="alternate"/>
<author>
<name>Kim, Hunjoo</name>
</author>
<id>https://hdl.handle.net/1721.1/146660</id>
<updated>2022-12-01T03:32:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Development of Industrial Internet of Things Architecture and Business Strategy for Digital Substation Asset Management
Kim, Hunjoo
MR is a medium-sized, family-owned company that is a market leader in electrical substation component manufacturing, especially on-load tap changers for transformers. However, as changes in the industry create new competitors and demand for higher quality and efficiency, there is an increased need to employ new technology focused on connectivity and data in order to maintain that leadership. Application of the Industrial Internet of Things (IIoT) in the context of substation asset management is thus considered as a possible business value proposition. A market assessment based on customer interviews showed that while many utilities were eager to adopt IIoT technologies in their operations, a lack of standardization and data literacy hindered adoption. An IIoT architecture for MR, based on existing industry standards, was created to guide the company in deploying IIoT-based services to its customers. A potential use case, anomaly detection for on-load tap changer vibroacoustic monitoring sensors using an artificial neural network autoencoder, is explored as an application enabled by the IIoT technology. The autoencoder was able to discern artificially introduced anomalies in real vibroacoustic monitoring data.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>End-to-End Artificial Intelligence Lifecycle Management</title>
<link href="https://hdl.handle.net/1721.1/146659" rel="alternate"/>
<author>
<name>Yajamanam Kidambi, Sravani</name>
</author>
<id>https://hdl.handle.net/1721.1/146659</id>
<updated>2022-12-01T03:22:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">End-to-End Artificial Intelligence Lifecycle Management
Yajamanam Kidambi, Sravani
In this digital era, companies are battling at the forefront of innovation to share the next transformative idea with the world. Many of these ideas apply Artificial Intelligence (AI) to solve important societal problems. However, companies may face difficulties understanding the core problem, understanding the data, preparing the data, building models, evaluating them, and finally deploying the AI technologies.&#13;
&#13;
In this thesis, we propose an AI Ecosystem that enables end-to-end AI lifecycle management. This ecosystem enables teams to transition easily from concept to prototype and from prototype to deployment. We identify three key pillars for a successful AI project: process, people, and platform. We also discuss the ethical and regulatory considerations of building AI technologies in this space. The study was performed at Boston Scientific with two use cases: the interventional cardiology team actively developing an AI solution using Intravascular Ultrasound (IVUS) images, and the supply chain team exploring AI solutions for demand forecasting. We demonstrate how an AI Ecosystem can enable such teams to focus on their core responsibility: developing innovative medical solutions that improve patients’ lives.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meat No Longer Requires Animal Slaughter – Valuing an Alternative Protein Player</title>
<link href="https://hdl.handle.net/1721.1/146658" rel="alternate"/>
<author>
<name>Pagel, Maximilian</name>
</author>
<id>https://hdl.handle.net/1721.1/146658</id>
<updated>2022-12-01T03:03:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Meat No Longer Requires Animal Slaughter – Valuing an Alternative Protein Player
Pagel, Maximilian
Conventional animal protein production faces numerous challenges today. Meanwhile, alternative proteins have improved significantly in recent years, and plant-based proteins are about to reach cost parity. Against this backdrop, foreseeing a rosy future for alternative protein producers and expecting high returns from investing early may seem plausible. Yet there is still considerable uncertainty about whether and how fast the industry will take off and how profitable the business may become. Valuing Beyond Meat, one of the industry’s leading companies, under multiple scenarios (ranging from pessimistic to very optimistic) can thus shed light on whether not only its own valuation but arguably also valuations in alternative proteins more broadly are currently overhyped, reasonable, or even offer room for further upside. It turns out that the firm’s current share price is a sensible average of Beyond’s potential value across the different scenarios.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating ESG Factors to Equity Valuation</title>
<link href="https://hdl.handle.net/1721.1/146657" rel="alternate"/>
<author>
<name>Singh, Inderpreet</name>
</author>
<id>https://hdl.handle.net/1721.1/146657</id>
<updated>2022-12-01T03:45:19Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Integrating ESG Factors to Equity Valuation
Singh, Inderpreet
ESG stands for Environmental, Social, and Governance. Investors increasingly consider these nonfinancial ESG factors to identify material risks and growth opportunities. According to the Global Sustainable Investment Alliance (GSIA), the global ESG investing market has grown by 55%, from USD 22.8 trillion in 2016 to USD 35.3 trillion in 2020. The growth is not only in absolute terms but also in relative terms: its share of the total investing market has been constantly expanding over the years. However, traditional company valuation methods, including the Discounted Cash Flow Model and Comparable Multiple Analysis deployed by various actors in the investment industry, consider only financial variables. A practical framework that allows the integration of ESG factors into traditional methods would therefore be valuable to the financial community.&#13;
&#13;
Hence, the aim of this paper is threefold: first, to understand the drivers of the extraordinary growth of the ESG market, such as the evolving definition of fiduciary duty, the enhanced financial performance of ESG portfolios, technological disruption, and the changing preferences of investors; second, to briefly examine some of the challenges of ESG integration; and finally, to explore the literature and professional practices in order to develop a valuation framework that can integrate ESG information into the financial valuation of a company.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Internet Hospitals in China - Exploration of Business Models and Marketing Strategies</title>
<link href="https://hdl.handle.net/1721.1/146656" rel="alternate"/>
<author>
<name>Yang, Xi</name>
</author>
<id>https://hdl.handle.net/1721.1/146656</id>
<updated>2022-12-01T03:03:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Internet Hospitals in China - Exploration of Business Models and Marketing Strategies
Yang, Xi
The development of China’s Internet hospitals has not been ideal in terms of quality despite rapid growth in quantity. Tech-based Internet hospitals operate with an Internet mindset, resulting in high operating costs and difficulties achieving profitability. Traditional hospital-based Internet hospitals suffer from a lack of technical capabilities and operational talent, resulting in high operating costs and challenges in delivering better services. This paper explores the business models and market strategies of China’s Internet hospitals by comparing the healthcare systems and the current state of telemedicine in China and the United States, and by using the PEST and SWOT analysis frameworks to analyze China’s Internet healthcare industry through interviews with industry experts, quantitative questionnaires, and data analysis, combined with an analysis of the core pain points of stakeholders in healthcare activities. Unlike the patient-centered business model of most Internet hospitals today, this paper proposes building an Internet hospital platform with hospitals and medical staff at its core. The profitable, multi-win business model proposed here charges patients, medical students and young doctors, pharmaceutical companies, insurance companies, and medical device companies; doctors receive consultation and treatment fees, and hospitals can use the hospital-side products for free. Finally, we propose a market strategy of building a sustainable platform product with traditional hospital resources at its core. The medical-side product targets mainly medical students and medical workers under 50 years old; the hospital-side product targets mainly the top 100 medical institutions with medical schools; and the patient-side product targets mainly the generation born under the "One Child Policy" to facilitate patients’ access to medical care.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Venture Capital and Human Capital Patterns in Dual Use&#13;
Hardware Startups in the United States and United Kingdom</title>
<link href="https://hdl.handle.net/1721.1/146655" rel="alternate"/>
<author>
<name>McLeod, Margaret W.</name>
</author>
<id>https://hdl.handle.net/1721.1/146655</id>
<updated>2022-12-01T03:23:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Venture Capital and Human Capital Patterns in Dual Use&#13;
Hardware Startups in the United States and United Kingdom
McLeod, Margaret W.
With the prospect of a near-peer competitor, the United States (U.S.) government has started to reevaluate its strategy for how best to integrate technology into the government. Part of this reevaluation has included shifting its technology strategy from large private-sector defense behemoths to entrepreneurial startups with dual use technology.&#13;
&#13;
To identify what patterns exist for dual use hardware startups, I divide my analysis into two pieces: people and money. I study the financing models of ten early-stage dual use technology companies in strategically important industries. I look closely at where the startups seek funding: federal grant money, private-sector venture capital funds, or other financing options. I also study key employees associated with the companies to understand the academic backgrounds and professional experiences of the Chief Executive Officer and Chief Technology Officer, who likely influence strategic decisions. To provide a comparison, I identify a twin pair for each company in the United Kingdom (U.K.) and examine their funding patterns and employee characteristics to see what differences exist between the U.S. and the U.K.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards the Future of Work: Managing the Risks of AI and&#13;
Automation</title>
<link href="https://hdl.handle.net/1721.1/146654" rel="alternate"/>
<author>
<name>Man, James</name>
</author>
<id>https://hdl.handle.net/1721.1/146654</id>
<updated>2022-12-01T03:25:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards the Future of Work: Managing the Risks of AI and&#13;
Automation
Man, James
Many believe in a vision of the future where almost all work is automated. A first step already underway involves Robotic Process Automation (RPA) technology, which firms use to automate standardized computer work. The larger step that needs to be taken towards this vision lies in connecting RPA to AI, so that Machine Learning (ML) algorithms can be used to automate human “intelligence” and decision making in companies. &#13;
&#13;
Management research surrounding the concept of Intelligent Automation (IA) is nascent and spans multiple domains. This thesis consolidates the fragmented research landscape through a Systematic Literature Review addressing four research questions: 1) What use cases is IA fulfilling? 2) Which ML algorithms and technologies are employed? 3) What risks are associated with IA? and 4) What risk mitigation techniques exist? The findings paint a picture of what is needed to advance the value that IA delivers to firms and shore up professional practices.&#13;
&#13;
Results show that the bulk (66%) of cases centered on document processing and chatbots. ML models tended to be uninterpretable, posing transparency and risk challenges. The systematic coding of 77 key sources yielded 36 risks that fell into eight clusters, which are explored in depth. Corresponding risk mitigation measures covered far less ground, leaving many risks unaddressed. The risk registry derived in this thesis offers a starting point for the structured approach to managing emergent risks that is necessary for IA to deliver on its promise to improve work.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Enhanced Data Availability Affects Multi-Channel Marketing Attribution</title>
<link href="https://hdl.handle.net/1721.1/146653" rel="alternate"/>
<author>
<name>Facen, Taylor</name>
</author>
<id>https://hdl.handle.net/1721.1/146653</id>
<updated>2022-12-01T03:26:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">How Enhanced Data Availability Affects Multi-Channel Marketing Attribution
Facen, Taylor
Individuals can engage with businesses via various marketing channels throughout the sales journey. At Boston Scientific, members of the sales and marketing teams use many applications to develop strategies for, and analyze the results of, these channels. However, data is not always entered in these applications as cleanly as possible, and work to integrate the data from these tools is not always prioritized. To support prioritization efforts, management needs an example case study demonstrating how improvements to data availability could positively affect the kinds of analysis that can be created and used throughout the organization.&#13;
&#13;
There is no shortage of literature that defines, designs, and advocates for effective data architecture. There are also studies that dive into detail about the various types of marketing analyses one can perform with channel metrics. This project combines components from both areas to demonstrate the effects of data pipeline improvements on downstream projects. First, it describes how a new connector was built to sync Zoom webinar data to the organization’s data warehouse. It then uses the newly produced dataset to compare and contrast insights created with and without the data. More specifically, the project used both heuristic and stochastic multi-channel marketing attribution models to showcase the types of insights that can be drawn with access to more channel activity data. The final result is a feedback loop through which managers can begin to understand how this type of analysis helps them advocate for resources within their organization for data architecture improvements.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oversized Package Placement Optimization in Warehouses</title>
<link href="https://hdl.handle.net/1721.1/146652" rel="alternate"/>
<author>
<name>Jiang, Run</name>
</author>
<id>https://hdl.handle.net/1721.1/146652</id>
<updated>2022-12-01T03:48:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Oversized Package Placement Optimization in Warehouses
Jiang, Run
Amazon Logistics’ delivery stations process two types of packages: oversized packages (OV) and non-OV packages. Currently, the placement of OV packages on OV racks is coupled with the placement of non-OV packages on non-OV racks, because Amazon’s algorithm does not explicitly assign an OV package to a rack but instead places it at the rack holding the non-OV package with the closest delivery destination. The result is frequent overflow from OV racks onto the floor in some areas while other racks hold very few packages, leading to severely inefficient use of space and uneven labor distribution.&#13;
&#13;
This project identifies the root cause of the problem and develops four solutions to eliminate overflow and evenly distribute packages within a delivery station. Different configurations and current package placement assignment methodologies are considered, and current industry work on optimally assigning and picking products of different types is reviewed. Based on this research, we modeled package placement using integer linear programming, dynamic averaging, and pooling.&#13;
&#13;
This thesis provides general frameworks for optimizing the placement and picking of different product types in a warehouse setting through a two-product-type case. The approach can be extended to scenarios with multiple product types in general supply chain and logistics systems, where efficient and fair use of resources has been a constant challenge.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Operational Efficiency of a Small Manufacturing Maintenance Organization</title>
<link href="https://hdl.handle.net/1721.1/146651" rel="alternate"/>
<author>
<name>Poler, Colin</name>
</author>
<id>https://hdl.handle.net/1721.1/146651</id>
<updated>2022-12-01T03:00:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Improving Operational Efficiency of a Small Manufacturing Maintenance Organization
Poler, Colin
Many small manufacturers struggle with poor maintenance efficiency, resulting in high maintenance costs and/or frequent equipment breakdowns. Existing literature addresses which tasks to prioritize and how to measure results, but there is little prior work on how to accomplish more maintenance work overall with the same resources and reduce maintenance wastes. We develop a framework for conceptualizing maintenance operational efficiency as a complement to maintenance strategy, focusing on the primary maintenance process: backlog, diagnosis, planning, getting parts, executing, and observing effects. We apply this framework to a small Michigan manufacturing facility. We estimate the cost of equipment breakdowns at the facility using a novel cross-referencing between maintenance breakdowns and production bottlenecks. Finally, we propose several improvements to target wastes in each step of the primary maintenance process: shared ownership of equipment between maintenance and production, more accessible documentation, a work order system, proximal spare parts storage, and solving problems permanently.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Demand Re-Allocation under Fixed Capacity Commitments</title>
<link href="https://hdl.handle.net/1721.1/146650" rel="alternate"/>
<author>
<name>Quintella Correia, Felipe</name>
</author>
<id>https://hdl.handle.net/1721.1/146650</id>
<updated>2022-12-01T03:32:52Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optimizing Demand Re-Allocation under Fixed Capacity Commitments
Quintella Correia, Felipe
The automotive industry is suffering from large supply-chain disruptions and restrictions, and sourcing contracts are signed during concept and design phases, fixing capacity commitments years ahead of time based on highly inaccurate demand forecasts. Companies struggle with demand variation and the inability of the supply chain to react, with suppliers having limited ability to respond to updated demand signals and varying market conditions. This thesis proposes an algorithmic solution that optimizes the allocation of demand order volumes to different product types when supply capacity is constrained, while preserving the originally requested mix shares and volumes as much as possible.&#13;
&#13;
The model allows the company to approach supply-chain flexibility through a different lens, and the results demonstrate the volume and financial improvements over current processes: the analyzed use cases show a 10% increase in volume output and profit increases of up to 11% over a manual re-allocation process, or 7% increases when the model is constrained to a lower volume output. Finally, a few additional model improvements are suggested as grounds for future work.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Adoption of Large-Format Additive Manufacturing in Aerospace Tooling</title>
<link href="https://hdl.handle.net/1721.1/146649" rel="alternate"/>
<author>
<name>Stehr, Connor</name>
</author>
<id>https://hdl.handle.net/1721.1/146649</id>
<updated>2022-12-01T03:17:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Accelerating Adoption of Large-Format Additive Manufacturing in Aerospace Tooling
Stehr, Connor
Ascent Aerospace is a mid-sized industrial business that specializes in the manufacturing of aerospace tooling and capital equipment. The nature of the industry results in a high-mix, low-volume production environment where quality and precision are important to the customer.&#13;
&#13;
In 2020, just prior to the COVID-19 global pandemic, Ascent invested in additive manufacturing technology by purchasing a Large Scale Additive Manufacturing machine (LSAM) from Thermwood Corporation. The LSAM serves a novel purpose in Ascent’s product portfolio, allowing it to meet customer needs for tools with quicker turn times and/or lower cost that do not carry the strict requirements and high quality standards of Invar or other metal or composite tools.&#13;
&#13;
This thesis begins by reviewing the current state of Ascent Aerospace and the commercial aerospace tooling market, followed by an overview of Ascent’s current product portfolio and how the LSAM fits in. Next, the finite element modeling procedures used to ensure adequate performance under static and thermal loading are outlined. Subsequently, a proposed alternative build process that can make LSAM-printed tools a more competitive choice for customers is described, followed by several miscellaneous operational initiatives, results, and conclusions.&#13;
&#13;
Taken as a whole, this thesis can serve as a guideline for companies intending to roll out 3D printing or other nascent technologies to broaden their product portfolios.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomanufacturing Automation Plug &amp; Play</title>
<link href="https://hdl.handle.net/1721.1/146648" rel="alternate"/>
<author>
<name>Mikkelson, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/146648</id>
<updated>2022-12-01T03:29:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Biomanufacturing Automation Plug &amp; Play
Mikkelson, Andrew
In the biopharmaceutical industry today, the software of manufacturing equipment requires custom coding and lengthy efforts to integrate any new piece of equipment with others in a process. Plug-and-play is an evolving concept that would enable new equipment to quickly connect and seamlessly integrate with the existing automation control system, saving precious time when developing processes for new drugs and enabling unprecedented operational flexibility. Through a variety of biopharma and process industry consortia, efforts toward this goal are well underway using a standardized data file called the Module Type Package (MTP). Most reports of plug-and-play development focus on using MTP to facilitate connection of whole equipment assemblies. However, the plug-and-play concept might also be applied to individual components, such as sensors, within those assemblies. The interchangeability of individual components, while related to the integration of whole assemblies with MTP, represents a distinct capability that is not widely addressed in existing industry literature. This thesis proposes a framework to differentiate the two capabilities as Reconfigurability and Interchangeability and broadly studies the possible use cases associated with each. After establishing the framework, this thesis focuses specifically on Interchangeability and investigates the design considerations for, and the potential business value of, an interchangeable sensor.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of an Agile Framework in Assessing and Aligning Digital Twin Use Cases Across Product Classes in a Large Organization</title>
<link href="https://hdl.handle.net/1721.1/146647" rel="alternate"/>
<author>
<name>Miller, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/146647</id>
<updated>2023-05-18T03:41:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Application of an Agile Framework in Assessing and Aligning Digital Twin Use Cases Across Product Classes in a Large Organization
Miller, Jeffrey
In order to fully realize the benefits of the Fourth Industrial Revolution, companies must migrate legacy software and processes to state-of-the-art systems that fully incorporate technologies enabled by the Internet of Things, namely Digital Twin and Digital Thread. Digital Twin technologies hold the promise of significant advantages in design and manufacturing, particularly for complex products with a long lifecycle. In an early effort to adopt these technologies, large organizations are investing resources in defining and iterating upon their nascent Digital Twin capabilities. However, particularly in a large organization, use cases for Digital Twin technologies can differ according to product class. This thesis presents as a case study one such migration process in a large aerospace manufacturing organization. The research defined and applied an Agile framework to a team’s current processes to assist them in better understanding the Digital Twin use cases present across the full enterprise, and to align their efforts accordingly. In applying the Agile framework to improve team alignment and gather additional feedback, a measurable change in Digital Twin representation specifically within the context of configuration definition and change management was effected, making Digital Twin configuration representations and change management tools more broadly applicable across product classes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reform of Chinese State-owned Entities in Financial Sector</title>
<link href="https://hdl.handle.net/1721.1/146646" rel="alternate"/>
<author>
<name>Luan, Jizheng</name>
</author>
<id>https://hdl.handle.net/1721.1/146646</id>
<updated>2022-11-30T19:38:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Reform of Chinese State-owned Entities in Financial Sector
Luan, Jizheng
By 2021 there were 97 SOEs in mainland China, controlling more than $80 trillion in assets. They control the country's resources, play buy-side roles in the economy, and contribute more than 40% of tax revenue. However, these SOEs generate less than 30% of new technologies and only 10% of new job opportunities. Although successful in nominal terms, they are inefficient, with low profitability and rigid management systems. The big four banks in China are all Chinese financial state-owned entities, and all rank among the top ten banks worldwide. They are known for their large asset bases but poor financial performance and inefficient management. A consensus has been reached that structural reforms of these big banks are needed. However, the big four banks are not only profit-driven but also bear social responsibility for the stability of the Chinese financial market. To look beneath the surface, this thesis illustrates the priorities of Chinese financial state-owned entities through historical analysis and compares them with American government-sponsored enterprises. Because there are many similarities between the establishment and development of Chinese state-owned entities and American government-sponsored enterprises, the American experience may indicate a path for the future development of Chinese SOEs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supply Chain Sustainability Opportunities in the Utility Industry</title>
<link href="https://hdl.handle.net/1721.1/146645" rel="alternate"/>
<author>
<name>Hardin, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/146645</id>
<updated>2022-12-01T03:22:28Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Supply Chain Sustainability Opportunities in the Utility Industry
Hardin, Alexandra
The impact of COVID-19 has shone a spotlight on global supply chains. There is a renewed focus both on the effectiveness of supply chains and on how to operate them sustainably. Governments, corporations, and policymakers around the world are pushing for more sustainable procurement, and there is increasing pressure for companies to set carbon-reduction targets and pay more attention to the social impact of their operations on their communities.&#13;
&#13;
Recognizing this push, National Grid, an investor-owned utility operating in the United States and United Kingdom, released its Responsible Business Charter (RBC) in October 2020, outlining how the business as a whole would approach creating more sustainable business practices in the areas of environment, community, people, economy and governance.&#13;
&#13;
As with many utilities, National Grid’s business model places its Procurement organization in a unique position to have impact on the initiatives set out by the RBC. Therefore, the Supply Chain Sustainability and Corporate Social Responsibility team within Procurement sought to develop a Supply Chain Sustainability Charter. The purpose of this thesis was to begin the process of developing the charter by assessing potential focus areas for supply chains in the utility industry.&#13;
&#13;
Through thorough benchmarking research, surveys, workshops, and interviews both inside and outside the business, two themes were identified for the charter. “A Path to Net Zero” and “Community Impact” will be the Procurement group’s focus areas in taking ownership of its own sustainability goals and initiatives. With the themes and charter outline decided on, next steps will include testing, implementation, reporting, and final charter delivery.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Proactive Quality in Commercial Airplanes using Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/146644" rel="alternate"/>
<author>
<name>Allinson, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/146644</id>
<updated>2022-12-01T03:48:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Enabling Proactive Quality in Commercial Airplanes using Natural Language Processing
Allinson, Christian
Quality management systems traditionally draw insight from structured, often numerical, sources of data; unstructured, free-text representations of quality data are less frequently employed despite having high informational value, and often require additional human effort to prepare their contents for use. An ability to extract and proactively employ this information enables a richer analysis of quality performance.&#13;
&#13;
The primarily free-text reports generated by Boeing Commercial Airplanes' "in-service investigation" (ISI) process are taken as an example of such quality data. We investigate both an unsupervised clustering method and a supervised classification method to group these reports by the broader "quality topic" they pertain to, using semantic relationship-maintaining text "embeddings" as features. We find success in supervised classification, and describe a method to relate ISI records with quality records from other parts of the commercial airplane value stream via standardized "code" metadata.&#13;
&#13;
We extend the use of similarity techniques to investigation execution and propose a "helper" tool that automates parts of the manual data collection and relationship-finding process. The benefits of using such a tool over traditional keyword searches are described through an illustrated example.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Electric Vehicles for Grid Services: Capacity Available and Applications for Electric Utility Commercialization</title>
<link href="https://hdl.handle.net/1721.1/146643" rel="alternate"/>
<author>
<name>Castillo Jr., Gustavo</name>
</author>
<id>https://hdl.handle.net/1721.1/146643</id>
<updated>2022-12-01T03:31:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Using Electric Vehicles for Grid Services: Capacity Available and Applications for Electric Utility Commercialization
Castillo Jr., Gustavo
As electric vehicle (EV) ownership increases, utilities face high strain on electricity demand when vehicles charge at peak hours. EV grid services such as managed charging (V1G) and bidirectional charging could enable electric vehicles' untapped energy storage capacity to improve grid resiliency. This thesis presents a detailed case study of EVs for grid services. Using Florida Power and Light residential charging data, the thesis lays out a method to estimate V1G and vehicle-to-grid (V2G) capacity and finds that EV grid services are most readily available in the early morning and evening. An aggregation algorithm designed for demand response is outlined to coordinate the discharge of vehicles during a dispatch event to meet an operator-defined target load reduction, and the resulting performance is highlighted. A V1G algorithm for residential chargers is proposed and highlighted as an opportunity to increase customer participation in offering their vehicles for grid services. A strategy is introduced to build on these concepts to create a simulation for utility planning of EV grid services. The thesis concludes with a road map for adoption of EV grid services and potential commercialization opportunities. Key risks and technical challenges are highlighted, and final recommendations for utilities are provided.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Railroad operating plans : development and evaluation.</title>
<link href="https://hdl.handle.net/1721.1/146350" rel="alternate"/>
<author>
<name>McCarren, James Reilly.</name>
</author>
<id>https://hdl.handle.net/1721.1/146350</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Railroad operating plans : development and evaluation.
McCarren, James Reilly.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 149-150.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minority shareholders in close corporations : ways and limits of protection in their dilemma of no control and no ready market</title>
<link href="https://hdl.handle.net/1721.1/146349" rel="alternate"/>
<author>
<name>Esser, Angelika Marie Charlotte.</name>
</author>
<id>https://hdl.handle.net/1721.1/146349</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Minority shareholders in close corporations : ways and limits of protection in their dilemma of no control and no ready market
Esser, Angelika Marie Charlotte.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of controlled-potential electrolysis with a granular electrode</title>
<link href="https://hdl.handle.net/1721.1/146348" rel="alternate"/>
<author>
<name>Stedman, Harold Frank.</name>
</author>
<id>https://hdl.handle.net/1721.1/146348</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Applications of controlled-potential electrolysis with a granular electrode
Stedman, Harold Frank.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1960; Includes bibliographical references (leaf x).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>All Analog CNN Accelerator with RRAMs for Fast Inference</title>
<link href="https://hdl.handle.net/1721.1/146297" rel="alternate"/>
<author>
<name>Chao, Minghan</name>
</author>
<id>https://hdl.handle.net/1721.1/146297</id>
<updated>2022-11-11T03:11:24Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">All Analog CNN Accelerator with RRAMs for Fast Inference
Chao, Minghan
As AI applications become more prevalent and powerful, the performance demands on deep neural networks grow, and the need for fast, energy-efficient circuits for computing them is urgent. Most current research proposes dedicated hardware that reuses data thousands of times. However, while re-using the same hardware to perform the same computation repeatedly saves area, it comes at the expense of execution time. This presents another critical obstacle, as the need for real-time, rapid AI requires a fundamentally faster approach to implementing neural networks. The focus of this thesis is to duplicate the key operation, the multiply-accumulate (MAC) computation unit, in hardware so that there is no hardware re-use, enabling the entire neural network to be physically fabricated on a single chip. As neural networks today often require hundreds of thousands to tens of millions of MAC computation units, this requires designing the smallest possible MAC unit so that all of the operations fit on chip.&#13;
&#13;
Here, we present an initial analysis of a convolutional neural network (CNN) accelerator that implements such a system, optimized for inference speed. The accelerator duplicates all of the computation hardware, eliminating the need to fetch data back and forth while reusing the same hardware. We propose a novel design for memory cells using resistive random access memory (RRAM) and computation units exploiting the analog behavior of transistors. This circuit classifies one CIFAR-10 image in 6µs (160k frames/s) with 2.4µJ of energy per classification at an accuracy of 85%. It contains 7.5 million MAC units and achieves 5 million MAC/mm².
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pronounced Absurdity: The Wedding-scape Outside a Conical Field</title>
<link href="https://hdl.handle.net/1721.1/146293" rel="alternate"/>
<author>
<name>Sun, Yutan</name>
</author>
<id>https://hdl.handle.net/1721.1/146293</id>
<updated>2022-11-11T03:14:59Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Pronounced Absurdity: The Wedding-scape Outside a Conical Field
Sun, Yutan
Usually taking on an exotic appearance and appearing as a defocused, cropped backdrop in a picture frame, a wedding park is an architectural complex that provides spectacular, romanticized scene settings, like a proscenium stage, for wedding photographs. In order to satisfy a bourgeois lifestyle fantasy, architectural symbols removed from their in situ context are extensively deployed to create a sense of elsewhere in a wedding park, leading to a misalignment between a wedding park’s pictorial presence and its physical reality.&#13;
&#13;
A wedding park as a real estate typology is an architectural response to both the prosperous wedding economy and the visual consumption fever in China. Xiamen, a city branding itself as the international wedding capital, calls for a new typology of wedding park that conforms to its highly dense urban fabric and its city image built around weddings.&#13;
&#13;
The double-image phenomenon in a wedding park is paralleled in a dummy cake, a cake whose sponge has been wholly or partially replaced with polystyrene blocks. The dramatic counterpose between its sumptuous profile and inedibility becomes a metaphor for the duality of wedding spaces.&#13;
&#13;
This thesis understands a dummy cake as a political and cultural artifact that echoes the double-image of architecture and critiques the misalignment of imageability and physicality in Xiamen’s wedding spaces. This thesis imagines a set of wedding infrastructures inserted into the highly dense urban fabric of the Shapowei district, building up fantasies through the appropriation of architectural symbols. The concentration of diversely themed wedding scenes signifies an efficient, inhumane, and consumeristic image-making mechanism. By replicating alienated symbols and juxtaposing the fantasized construct with the realistic urban context, this thesis creates spectacles for visual consumption and simultaneously foregrounds the absurdity of both the construct per se and its uncanny collision with the existing urban ambience. In this way, the rationality of wedding infrastructures exists only in the conical field of a camera, and the dysfunctional, disordered, and obscure physical reality behind a flawless wedding photo becomes a critique of visual consumerism.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diné Bizaad Bitsisiléí Bóhoo’aah: A Basis for learning Navajo</title>
<link href="https://hdl.handle.net/1721.1/146292" rel="alternate"/>
<author>
<name>Denny, Devon</name>
</author>
<id>https://hdl.handle.net/1721.1/146292</id>
<updated>2022-11-11T03:31:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Diné Bizaad Bitsisiléí Bóhoo’aah: A Basis for learning Navajo
Denny, Devon
This guide is meant to provide a brief introduction to the Navajo language, Diné Bizaad. It is understood that most present-day Diné youth speak English as their only language, as the population of Diné Bizaad speakers increases in age. This guide assumes an English-speaking background and attempts to make basic connections between English and the Diné language. In order for these connections to be clear, the guide uses basic English grammar principles to relate to Diné Bizaad grammar. Each chapter introduces a grammatical concept in English to enhance the understanding of new concepts that may exist in Diné Bizaad. Influenced by ideas from the Comprehensible Input and Holistic Learning methodologies, this guide intends to encourage learners to situate themselves in Diné Bizaad basics and to develop a sense of direction upon completion. A simplified look at the structure of Diné Bizaad words and basic sentences should allow learners to recognize patterns in the language as they encounter them in the future.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incorporation of Carbon Nanoparticles in Polyaryletherketone Matrices for High Performance Liquid Chromatography Applications</title>
<link href="https://hdl.handle.net/1721.1/146291" rel="alternate"/>
<author>
<name>Williams, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/146291</id>
<updated>2022-11-11T03:25:50Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Incorporation of Carbon Nanoparticles in Polyaryletherketone Matrices for High Performance Liquid Chromatography Applications
Williams, Jonathan
Polyaryletherketone (PAEK) plastics are an option for rotary shear valves in High Performance Liquid Chromatography (HPLC) applications owing to their high strength, thermal resilience, and chemical inertness. In this study, the effects of adding carbon nanoparticles at a 1% weight loading to 30% carbon-fiber-loaded PAEK plastics via a simple melt-blending process were explored and compared to neat PAEK blends. Experimental blends were tested for their Young’s modulus, relaxation under sustained load and temperature, yield strength, and internal defects. It was found that OH-functionalized graphene nanoparticles increased the moduli of PEEKCF30 and PEKEKKCF30 by 10.8% and 4.7% respectively. These results were statistically significant by hypothesis tests at a 95% confidence level. Further, a neat PEKKCF30 blend showed a statistically significant 30.2% increase over the neat PEEKCF30 blend. Under sustained load at ambient temperature, graphene oxide reduced the relaxation force by 10% in a PEEKCF30 blend, while the neat PEKKCF30 blend reduced this force by 36%; both results were statistically significant. Under sustained load at elevated temperatures, the PEKKCF30 blend performed well compared to neat PEEKCF30 at all temperatures, with a detriment at 90 °C of only -1.2% and improvements of 7.6% and 3.1% at 110 °C and 130 °C respectively. Yield strength calculations revealed a 10% and 15% improvement for PEEKCF30 with functionalized graphene and neat PEKKCF30 respectively. Finally, microscopic analysis revealed that void formation and contaminants were more common in blends with carbon nanoparticles. From this screening analysis, the neat PEKKCF30 base polymer and OH-functionalized graphene additive are recommended for further exploration.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Flexibility in Naval Ship Design</title>
<link href="https://hdl.handle.net/1721.1/146073" rel="alternate"/>
<author>
<name>Hein, Christopher N.</name>
</author>
<id>https://hdl.handle.net/1721.1/146073</id>
<updated>2022-11-02T03:40:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Quantifying Flexibility in Naval Ship Design
Hein, Christopher N.
Warships are highly complex systems that are designed in the present to address problems that will exist decades in the future. Because of the significant time gap between preliminary design and the end of a ship’s service life, it is critical that ships be built with flexibility in mind. In this context, flexibility is the measure of a ship’s ability to be upgraded quickly and cheaply to efficiently respond to a known or unknown perturbation. With enough time and money, a warship can be altered to respond to almost any eventuality. The difficulty lies in constructing warships with future uncertainty in mind and in such a manner that they can be upgraded quickly and cheaply. Put more simply, the difficulty lies in designing warships to be flexible.&#13;
&#13;
Taken a step further, the challenge is to design a ship that is “flexible enough” and in the right areas. As it stands, while flexibility is a desirable characteristic in ship design, the Navy does not have a robust methodology to design for it. While it is often a consideration or recognized as an emergent property, there are no recognized, quantitative metrics, KPIs (Key Performance Indicators), or processes that designers can rely upon to ensure that a ship is flexible on delivery. This study provides a new way to think of flexibility that will allow ship designers to quantify and categorize the measure of a warship’s flexibility in the face of future uncertainty through the use of a “flexibility framework” that is both quantitative and repeatable.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fully-Implantable Low-Noise EMI-Resistant Piezoelectric-Polymer Microphone and Amplifier for the Middle Ear</title>
<link href="https://hdl.handle.net/1721.1/146071" rel="alternate"/>
<author>
<name>Yeiser, Aaron</name>
</author>
<id>https://hdl.handle.net/1721.1/146071</id>
<updated>2022-11-02T03:30:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Fully-Implantable Low-Noise EMI-Resistant Piezoelectric-Polymer Microphone and Amplifier for the Middle Ear
Yeiser, Aaron
We present a fully implantable piezoelectric microphone designed to operate with a cochlear implant. This thesis details the design, fabrication, and potential surgical implantation scheme for a fully differential and shielded cantilever made from polyvinylidene difluoride (PVDF)—a common piezoelectric polymer, as well as a low noise differential charge amplifier designed for small capacitance sensors. The amplifier and sensor combination has a noise floor of 385 e− (0.062 fC) over its bandwidth of 100 Hz to 20 kHz, equivalent to 0.015 nm of displacement. When implanted, we achieve a pressure sensitivity of 80–100 fC/Pa referenced to ear canal pressure below 2 kHz and 8–10 fC/Pa above 4 kHz. We expect this sensitivity at high frequency to substantially improve when measured relative to free-field sound pressure, as the horn-like outer ear and ear canal provide up to 20 dB pressure gain above 1 kHz. Our design also provides significant EMI protection—we measured a sensitivity to external electric potentials of only 0.6 fF compared to over 200 fF for an unshielded 4 mm-diameter sphere. We believe this microphone design is competitive with commercial electret microphones used for cochlear implants, especially since our design is fully implantable and interfaces with the existing middle ear structures.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Damped Double Dipole UHF RFID Antenna with Application to Wireless Chemiresistive Gas Sensing</title>
<link href="https://hdl.handle.net/1721.1/146070" rel="alternate"/>
<author>
<name>Peraire-Bueno, Alexander (Olek)</name>
</author>
<id>https://hdl.handle.net/1721.1/146070</id>
<updated>2022-11-02T03:28:56Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A Damped Double Dipole UHF RFID Antenna with Application to Wireless Chemiresistive Gas Sensing
Peraire-Bueno, Alexander (Olek)
Ultra High Frequency (UHF) Radio Frequency Identification (RFID) tags provide an inexpensive framework for distributed sensing. Materials such as functionalized carbon nanotubes (CNTs) have been engineered to change in resistance when exposed to a variety of analytes. These materials have been added to RFID tags to create low cost sensors that work at a fixed reader-tag separation distance. This thesis proposes a novel approach to create UHF RFID sensing tags that work independent of distance (within the operating range), and are able to sense changes in resistance of a sensing element with a conductivity similar to that of CNT networks. Simulations of the proposed design show two methods of operation, either by comparing the damping between two resonant peaks, or by shifting the resonant frequency of the RFID tag. The first of the two methods of operation is validated experimentally with surface mount resistors, showing a relative change in &#120591; of 0.2 for a 35% change in resistance of the sensing element. Then, a printing process is developed for liquid inks comprising CNTs, and RFID tags are fabricated with functionalized CNTs as the active elements. The functionalized CNTs exhibit an irreversible 65% change in resistance at 100ppm NH₃, resulting in the tags demonstrating a relative change in &#120591; of 0.5 when exposed to 1000ppm NH₃.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smooth Flow Control for On-Chip Pneumatic Micropumps</title>
<link href="https://hdl.handle.net/1721.1/146069" rel="alternate"/>
<author>
<name>Lenhard, Allison N.</name>
</author>
<id>https://hdl.handle.net/1721.1/146069</id>
<updated>2022-11-02T03:39:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Smooth Flow Control for On-Chip Pneumatic Micropumps
Lenhard, Allison N.
Advancements in cellular biology, microfabrication methods, and the field of microfluidics allow biologists to closely replicate in vivo environments on organ-on-a-chip devices. In order to reproduce physiological conditions and processes as accurately as possible, it is necessary to generate the same flow profiles found in vivo. This thesis presents the development, implementation, testing, and iterative improvement of the hardware and software that compose a flow control system producing smooth flow for on-chip pneumatic micropumps. By establishing a flow control system that can achieve smooth flow, the fluidic conditions of microphysiological systems can be controlled to accurately mimic biological conditions. Biological experiments can require flow profiles anywhere on the spectrum from smooth to highly pulsatile flow. A smooth flow profile can be modified with pumping delays to make it as pulsatile as desired.&#13;
&#13;
The main outcome of the work presented in this thesis is three flow control approaches that can achieve smooth flow profiles at flow rates up to 1 &#120583;L/s. Two different iterative learning control (ILC) algorithms, relying on either direct or indirect sensing methods, were developed to enable feedback-driven flow control. A packaged open-loop flow control platform was also developed; it is less complex than its ILC-driven counterparts and can be used without modifying the chips, since sensing is not required. These systems all perform consistently and maintain accurate flow rates while producing smooth flow profiles. Use of these flow control systems in future biological experiments will provide insight into the effects of smooth on-chip flow.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>a taste of home</title>
<link href="https://hdl.handle.net/1721.1/145970" rel="alternate"/>
<author>
<name>Arenas, Ana P.</name>
</author>
<author>
<name>Rodrigues, Carol-Anne V.</name>
</author>
<id>https://hdl.handle.net/1721.1/145970</id>
<updated>2022-10-25T03:41:26Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">a taste of home
Arenas, Ana P.; Rodrigues, Carol-Anne V.
A border is both a physical location and a political condition that divides two countries. A border, however, is not impassable, nor is it only a fixed location; it is also an immaterial trace that is carried forth by everyone who crosses one. The border lives beyond the thin demarcation between nations, and thickens to accommodate the stories of all those who wear its mark as they travel from their origin point to their final destination. This thesis proposes that the border does not exist only at a land’s edge, but also within our immediate surroundings, where we see borders all across the city of Boston.&#13;
&#13;
From newly formed migrant groups to the communities of locals who have lived here for years, Boston’s diversity defines its inner neighborhoods and outer suburbs. Across the city, we cross borders when we enter neighborhoods, enter a store, dine at a restaurant, or arrive at a home. These stories of borders manifest in the collective sharing and exchange of food at the table. Immigrant-owned restaurants across Boston offer their cuisine to celebrate their culture and create a sense of home in an unfamiliar place.&#13;
&#13;
On the other hand, borders are also experienced across the city through inequitable access to food. Charitable organizations throughout the Greater Boston area work to bridge the gap between food excess and food scarcity, but a divide remains. In a city rich with diverse cuisines yet lacking equitable access to food, how can architecture help bring equality to the sharing of food and dignify the experience of immigrants?&#13;
&#13;
This thesis proposes a network of “Food Embassies,” a new institution that celebrates food from across borders and bridges the gap between excess and scarcity. As an embassy serves its people in a foreign place, we propose Food Embassies across Greater Boston to provide accessibility, to create community, and to provide tastes of home.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of the washing problems in a modern power laundry</title>
<link href="https://hdl.handle.net/1721.1/145879" rel="alternate"/>
<author>
<name>Smith, Charles M.</name>
</author>
<id>https://hdl.handle.net/1721.1/145879</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1926-01-01T00:00:00Z</published>
<summary type="text">Study of the washing problems in a modern power laundry
Smith, Charles M.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1926
</summary>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A method for measuring the specific oxidation rate of linseed oil</title>
<link href="https://hdl.handle.net/1721.1/145877" rel="alternate"/>
<author>
<name>Turner, Luther B.</name>
</author>
<id>https://hdl.handle.net/1721.1/145877</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1931-01-01T00:00:00Z</published>
<summary type="text">A method for measuring the specific oxidation rate of linseed oil
Turner, Luther B.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1931; Includes bibliographical references (leaf 29).
</summary>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A re-entry control system using delta-modulated accelerometers and a DDA-computer</title>
<link href="https://hdl.handle.net/1721.1/145875" rel="alternate"/>
<author>
<name>Millers, Hans-Fredrik.</name>
</author>
<id>https://hdl.handle.net/1721.1/145875</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">A re-entry control system using delta-modulated accelerometers and a DDA-computer
Millers, Hans-Fredrik.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1964; Includes bibliographical references (leaves 88-89).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative analysis of patent policy : technology acquisition and technology protection</title>
<link href="https://hdl.handle.net/1721.1/145874" rel="alternate"/>
<author>
<name>Tōya, Hiroki,
            1963-</name>
</author>
<id>https://hdl.handle.net/1721.1/145874</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Comparative analysis of patent policy : technology acquisition and technology protection
Tōya, Hiroki,
            1963-
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1993; Includes bibliographical references (p. 189-194).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental investigation of low aspect ratio aerofoils.</title>
<link href="https://hdl.handle.net/1721.1/145872" rel="alternate"/>
<author>
<name>Bertrand, John Edwin.</name>
</author>
<id>https://hdl.handle.net/1721.1/145872</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Experimental investigation of low aspect ratio aerofoils.
Bertrand, John Edwin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of acoustic streaming from a resonant bubble on the action potential characteristics of a nerve.</title>
<link href="https://hdl.handle.net/1721.1/145871" rel="alternate"/>
<author>
<name>Mecca, Roger Samuel.</name>
</author>
<id>https://hdl.handle.net/1721.1/145871</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The effects of acoustic streaming from a resonant bubble on the action potential characteristics of a nerve.
Mecca, Roger Samuel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Numbers 133-135 omitted in paging.; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of electron drift velocity in MOSFET inversion layers at high electric fields</title>
<link href="https://hdl.handle.net/1721.1/145868" rel="alternate"/>
<author>
<name>Bair, Lawrence A.</name>
</author>
<id>https://hdl.handle.net/1721.1/145868</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Measurement of electron drift velocity in MOSFET inversion layers at high electric fields
Bair, Lawrence A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1986; Bibliography: leaves 64-65.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A strategic analysis of the advanced ceramics industry for heat engine applications</title>
<link href="https://hdl.handle.net/1721.1/145866" rel="alternate"/>
<author>
<name>Assanis, Dennis N.</name>
</author>
<id>https://hdl.handle.net/1721.1/145866</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">A strategic analysis of the advanced ceramics industry for heat engine applications
Assanis, Dennis N.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 81-85.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Directed Energy Deposition Additive Manufacturing Supplier Sourcing for Aerospace</title>
<link href="https://hdl.handle.net/1721.1/145773" rel="alternate"/>
<author>
<name>Huang, Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/145773</id>
<updated>2022-10-08T03:24:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Directed Energy Deposition Additive Manufacturing Supplier Sourcing for Aerospace
Huang, Yu
Pratt &amp; Whitney (P&amp;W) is a major aerospace Original Equipment Manufacturer (OEM) of gas turbine engines for both commercial and military sectors. Historically, aerospace part designs have required more iterations than designs in other industries due to their high level of complexity and the need to minimize weight on the aircraft. P&amp;W has been utilizing Additive Manufacturing (AM) for rapid prototyping and Research &amp; Development (R&amp;D) cost reduction for three decades. However, most past metal additive manufacturing applications have utilized Powder Bed Fusion technology, which is limited in build size despite its unique capabilities. P&amp;W wants to explore the potential of Directed Energy Deposition (DED), an AM technology that could be used for bigger parts and for adding features to existing parts.&#13;
&#13;
The project objective is to bring on DED suppliers to enable the acquisition of development metal hardware for P&amp;W’s advanced programs. We do this first by learning about the technology and gathering information on prominent suppliers via virtual interviews and site visits. We then develop a list of criteria based on P&amp;W’s advanced-program outsourcing needs and evaluate the suppliers against those criteria. The final product of this project is a report to P&amp;W documenting all findings on the suppliers and their final scores against the criteria we developed for evaluation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Public housing, Private priorities: The invisible dynamics in low-income housing allocation in urban Peru, the case of CSP-Techo Propio</title>
<link href="https://hdl.handle.net/1721.1/145772" rel="alternate"/>
<author>
<name>Belli Ferro, Fiorella</name>
</author>
<author>
<name>Orensanz, Mora</name>
</author>
<id>https://hdl.handle.net/1721.1/145772</id>
<updated>2022-10-08T03:24:51Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Public housing, Private priorities: The invisible dynamics in low-income housing allocation in urban Peru, the case of CSP-Techo Propio
Belli Ferro, Fiorella; Orensanz, Mora
This thesis analyzes Techo Propio, Peru’s leading affordable housing program for the last 20 years. Following the neoliberal turn in housing policies, the Peruvian government reduced its role to solely subsidizing low-income housing demand, while housing production and delivery were left entirely in the hands of the real-estate industry. We specifically analyze the component Construcción en Sitio Propio (CSP), which fully subsidizes the construction of 35 m2 houses on family-owned lots. Given the limited information and studies available on this subprogram, we were keen to understand how CSP is currently being implemented and, especially, how subsidies are allocated to families and what the city-wide implications are.&#13;
&#13;
Through a combination of spatial analysis and in-depth interviews with diverse actors in the Techo Propio ecosystem, this thesis elucidates the housing allocation process as it is being implemented, beyond the official narrative. Our findings identify which families actually become beneficiaries and the spatial consequences of this model at the neighborhood, city, and national scales. We hope our findings and conclusions can help reflect on potential improvements for this and similar programs, and ultimately contribute to discussions on the roles the public and private sectors should have in the provision of affordable housing across Latin America.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From linear to exponential: How SMEs in emerging markets can adopt innovation-driven strategies</title>
<link href="https://hdl.handle.net/1721.1/145771" rel="alternate"/>
<author>
<name>Vega Sanchez, Anahi</name>
</author>
<id>https://hdl.handle.net/1721.1/145771</id>
<updated>2022-10-08T03:33:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">From linear to exponential: How SMEs in emerging markets can adopt innovation-driven strategies
Vega Sanchez, Anahi
SMEs play an essential role in the economic development and prosperity of emerging markets. This thesis explores whether traditional entrepreneurs can adopt strategies from Innovation-Driven Enterprises to sustain periods of high growth and possibly increase the positive impact of the SME sector. The study was conducted in Mexico and Indonesia, two developing countries with similar entrepreneurial ecosystems. Data were collected through interviews using a semi-structured questionnaire and later analyzed to define three major proposed strategies. The findings show that SMEs can benefit from technology integration, refinement of their purpose and main value proposition, and the development of strong communities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The dilemmas of United Nations peacekeeping in the post Cold War era</title>
<link href="https://hdl.handle.net/1721.1/145767" rel="alternate"/>
<author>
<name>Carey, Elizabeth Ann,
            1975-</name>
</author>
<id>https://hdl.handle.net/1721.1/145767</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">The dilemmas of United Nations peacekeeping in the post Cold War era
Carey, Elizabeth Ann,
            1975-
Thesis: S.M., Massachusetts Institute of Technology, Department of Political Science, 2001; Includes bibliographical references (p. 135-137).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of property rights protection on economic growth and environmental pollution : a cross-sectional time-series analysis</title>
<link href="https://hdl.handle.net/1721.1/145766" rel="alternate"/>
<author>
<name>Wickboldt, Anne-Katrin,
            1970-</name>
</author>
<id>https://hdl.handle.net/1721.1/145766</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">The effect of property rights protection on economic growth and environmental pollution : a cross-sectional time-series analysis
Wickboldt, Anne-Katrin,
            1970-
Thesis: S.M., Massachusetts Institute of Technology, Department of Political Science, 2001; Includes bibliographical references (p. 90-96).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>State collapse : causes, dynamics and linkages to conflict</title>
<link href="https://hdl.handle.net/1721.1/145765" rel="alternate"/>
<author>
<name>McHugh, Gerard Paul,
            1967-</name>
</author>
<id>https://hdl.handle.net/1721.1/145765</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1998-01-01T00:00:00Z</published>
<summary type="text">State collapse : causes, dynamics and linkages to conflict
McHugh, Gerard Paul,
            1967-
Thesis: S.M., Massachusetts Institute of Technology, Department of Political Science, 1998; Includes bibliographical references (leaves 113-118).
</summary>
<dc:date>1998-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information networking as an instrument of sustainable development : the photovoltaic example</title>
<link href="https://hdl.handle.net/1721.1/145764" rel="alternate"/>
<author>
<name>Funk, Karina.</name>
</author>
<id>https://hdl.handle.net/1721.1/145764</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Information networking as an instrument of sustainable development : the photovoltaic example
Funk, Karina.
Thesis: M.S., Massachusetts Institute of Technology, Technology and Policy Program, 1997; Includes bibliographical references (p. 127-130).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Markets for energy efficiency--development, challenges, and opportunities : an analysis of the joint impacts of regulation and market forces on efficient residential and commercial end-use equipment</title>
<link href="https://hdl.handle.net/1721.1/145763" rel="alternate"/>
<author>
<name>Levin, Jeremy Ben.</name>
</author>
<id>https://hdl.handle.net/1721.1/145763</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Markets for energy efficiency--development, challenges, and opportunities : an analysis of the joint impacts of regulation and market forces on efficient residential and commercial end-use equipment
Levin, Jeremy Ben.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1997; Includes bibliographical references (p. 116-124).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The rationalization of load factors</title>
<link href="https://hdl.handle.net/1721.1/145762" rel="alternate"/>
<author>
<name>Crossland, Charles Wilfred.</name>
</author>
<id>https://hdl.handle.net/1721.1/145762</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">The rationalization of load factors
Crossland, Charles Wilfred.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1932; Includes bibliographical references.
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The paradoxes of industrial strategies : neoliberal reform and state intervention in Argentine industry from ISI to Martínez de Hoz.</title>
<link href="https://hdl.handle.net/1721.1/145761" rel="alternate"/>
<author>
<name>Dominguez, Ricardo Mario.</name>
</author>
<id>https://hdl.handle.net/1721.1/145761</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1996-01-01T00:00:00Z</published>
<summary type="text">The paradoxes of industrial strategies : neoliberal reform and state intervention in Argentine industry from ISI to Martínez de Hoz.
Dominguez, Ricardo Mario.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1996; Includes bibliographical references (p. 81-87).
</summary>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uniformly valid asymptotic solutions of the Orr-Sommerfeld equation and their application to plane Poiseuille flow</title>
<link href="https://hdl.handle.net/1721.1/145757" rel="alternate"/>
<author>
<name>Hershenov, Joseph,
            1935-</name>
</author>
<id>https://hdl.handle.net/1721.1/145757</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">Uniformly valid asymptotic solutions of the Orr-Sommerfeld equation and their application to plane Poiseuille flow
Hershenov, Joseph,
            1935-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1961; Vita.; Includes bibliographical references (leaf 61).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The future of Bahrain as a financial center</title>
<link href="https://hdl.handle.net/1721.1/145756" rel="alternate"/>
<author>
<name>Al Qassim, Abdul Razak Abdulla Hassan.</name>
</author>
<id>https://hdl.handle.net/1721.1/145756</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1994-01-01T00:00:00Z</published>
<summary type="text">The future of Bahrain as a financial center
Al Qassim, Abdul Razak Abdulla Hassan.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1994; Includes bibliographical references (leaves 86-88).
</summary>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bahrain's competitiveness in the aluminum industry</title>
<link href="https://hdl.handle.net/1721.1/145755" rel="alternate"/>
<author>
<name>Al Noaimi, Ahmed Saleh.</name>
</author>
<id>https://hdl.handle.net/1721.1/145755</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1994-01-01T00:00:00Z</published>
<summary type="text">Bahrain's competitiveness in the aluminum industry
Al Noaimi, Ahmed Saleh.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1994; Includes bibliographical references (leaves 111-113).
</summary>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic plan for a new research and education institution in the Middle East</title>
<link href="https://hdl.handle.net/1721.1/145753" rel="alternate"/>
<author>
<name>Nasrallah, May.</name>
</author>
<author>
<name>Salty, Samer.</name>
</author>
<id>https://hdl.handle.net/1721.1/145753</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Strategic plan for a new research and education institution in the Middle East
Nasrallah, May.; Salty, Samer.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1993; Includes bibliographical references (leaves 187-191).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydraulic servomechanism developments.</title>
<link href="https://hdl.handle.net/1721.1/145751" rel="alternate"/>
<author>
<name>Forrester, Jay W.</name>
</author>
<id>https://hdl.handle.net/1721.1/145751</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Hydraulic servomechanism developments.
Forrester, Jay W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1945; Bibliography: leaf [86].
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spherical reflector feed field synthesis by geometrical optics and spherical wave expansion.</title>
<link href="https://hdl.handle.net/1721.1/145750" rel="alternate"/>
<author>
<name>McCann, Edward Frances.</name>
</author>
<id>https://hdl.handle.net/1721.1/145750</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">Spherical reflector feed field synthesis by geometrical optics and spherical wave expansion.
McCann, Edward Frances.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1970; Bibliography: leaves 121-125.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical transitions associated with steady flow in collapsible tubes with varying wall stiffness</title>
<link href="https://hdl.handle.net/1721.1/145749" rel="alternate"/>
<author>
<name>Jaekle, Donald E.
            (Donald Edwin)</name>
</author>
<id>https://hdl.handle.net/1721.1/145749</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Critical transitions associated with steady flow in collapsible tubes with varying wall stiffness
Jaekle, Donald E.
            (Donald Edwin)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1987; Bibliography: leaves 91-92.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new resolver-to-digital conversion technique.</title>
<link href="https://hdl.handle.net/1721.1/145748" rel="alternate"/>
<author>
<name>Goldman, Kenneth Alan.</name>
</author>
<id>https://hdl.handle.net/1721.1/145748</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">A new resolver-to-digital conversion technique.
Goldman, Kenneth Alan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1971; Includes bibliographical notes.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information technology and sustainable development : understanding linkages in theory and practice</title>
<link href="https://hdl.handle.net/1721.1/145743" rel="alternate"/>
<author>
<name>Haghseta, Farnaz Saboori,
            1974-</name>
</author>
<id>https://hdl.handle.net/1721.1/145743</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2003-01-01T00:00:00Z</published>
<summary type="text">Information technology and sustainable development : understanding linkages in theory and practice
Haghseta, Farnaz Saboori,
            1974-
Thesis: S.M., Massachusetts Institute of Technology, Technology and Policy Program, 2003; Includes bibliographical references (leaves 83-86).
</summary>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Yenidze Oriental Tobacco and Cigarette Factory : an example of Islamic Ornamental architecture in Germany</title>
<link href="https://hdl.handle.net/1721.1/145742" rel="alternate"/>
<author>
<name>Shaikh, Ayesha Usman.</name>
</author>
<id>https://hdl.handle.net/1721.1/145742</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The Yenidze Oriental Tobacco and Cigarette Factory : an example of Islamic Ornamental architecture in Germany
Shaikh, Ayesha Usman.
Stylistic elements of "Oriental" architecture became a popularized design feature for industrial buildings in Europe during the 19th century--especially within the context of Germany. As this Islamic ornamentation program unfolded, the production and dissemination of pattern books and building manuals by French and British amateur architects and designers became important factors in establishing this stylistic trend. This thesis investigates Islamic ornamentation and its occurrence and utilization in Prussian, Bavarian, and Saxonian industrial architecture. The Yenidze Oriental Tobacco and Cigarette Factory (1909) serves as a model for emphasizing the implementation and extrapolation of these 19th-century pattern books and building manuals. In shedding light on the ways in which the tobacco factory continues a self-referential legacy, this architectural example ultimately illuminates the stylistic trend's own discursivity. Interpreting the continuation of this design trajectory, the thesis further contextualizes the tobacco factory's architecture within its own geo-agricultural and geo-political associations with regard to broader historical, mercantile, and imperial precedents.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: S.M., Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 142-149).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical Performance and Prototyping of a Liquid Lens Laser Communications Transceiver</title>
<link href="https://hdl.handle.net/1721.1/145576" rel="alternate"/>
<author>
<name>Kacker, Shreeyam</name>
</author>
<id>https://hdl.handle.net/1721.1/145576</id>
<updated>2022-10-04T03:22:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optical Performance and Prototyping of a Liquid Lens Laser Communications Transceiver
Kacker, Shreeyam
Laser communications can enable more efficient and higher-bandwidth links than conventional radio frequency (RF) systems. Free-space optical beams typically have much narrower divergence than RF beams and require precise pointing, acquisition, and tracking (PAT) systems to establish and maintain links. Several technologies for beam steering exist, including MEMS fast-steering mirrors (FSMs), gimbals, and photonic integrated circuit (PIC) devices. However, these may not meet steering, aperture, power handling, or size, weight, and power (SWaP) requirements for small spacecraft. &#13;
&#13;
The Miniature Optical Steered Antenna for Intersatellite Communications (MOSAIC) aims to utilize liquid lenses to provide miniaturized non-mechanical beam steering, allowing wide field-of-view communications and multiple access capabilities. MOSAIC uses three liquid lenses in total: one lens is on-axis to provide divergence control, whereas the other two are offset in +x and +y respectively to provide steering. Previous work has focused on qualifying liquid lenses for the space environment, showing a clear path for evaluating their optical performance and constructing prototypes. Lenses from Corning Varioptic (France) and Optotune (Switzerland) are both considered in this work.&#13;
&#13;
The liquid lenses undergo environmental testing, including microgravity testing, radiation exposure, and quantification of how much power the lenses can effectively couple. An analytic formulation of beam steering is presented, which can be used as a feedforward controller and to optimize system size to fit in constrained spaces, such as CubeSats. Simulation work in Zemax is presented to characterize the transmit and receive gain parameters, and a complete optical link budget is constructed from simulation results. Simulation results are validated with experiments showing beam profiles. Diffusers are evaluated to assess the trade of increasing numerical aperture (NA) at the expense of beam quality. Transmit and receive capability is also demonstrated experimentally using two laboratory prototypes. Transceiver architecture trades are discussed, introducing the baseline design, strategies to incorporate diffusers, the benefits of apodization, optimization of the receive path, and strategies to beacon using nutation. Prototype 1550 nm (optical C-band) optimized lenses from Corning Varioptic are also characterized. Preliminary simulation results for steering multiple beams using a single optical train of variable-focus lenses are also presented.&#13;
&#13;
The liquid lenses from both Corning Varioptic and Optotune show excellent power handling capabilities, with no visible damage to either lens at input powers of up to 2 W continuous wave (CW) at 1550 nm. Neither set of lenses shows an increased degradation rate due to radiation exposure. The visible-spectrum Corning Varioptic and Optotune lenses decrease in transmission from 100% to 91% (Corning Varioptic) and 85% (Optotune) after exposure to radiation equivalent to 10 years in low Earth orbit (LEO) with 0.5 mm aluminum shielding. Additional tip/tilt and coma aberration is measured on the lenses in gravitational environments due to the optical fluid sagging. Tip/tilt changes by 0.74 mrad and 4.05 mrad for the Corning Varioptic A-39N0 and Optotune EL-16-40-TC lenses, respectively. Beam quality significantly improves in microgravity for the Optotune EL-16-40-TC lenses, with significantly decreased coma aberration. The Corning Varioptic A-39N0 lenses maintain excellent beam quality throughout gravitational regimes and show a slight decrease in coma aberration in microgravity. Extended environmental testing qualifies these lenses to level 5-6 on NASA's Technology Readiness Level (TRL) scale. Optical link budgets show that with a reference system, the Corning Varioptic A-39N0 and Optotune EL-16-40-TC lenses can maintain a 25 Mbps 16-PPM link with 4 W of input power at 1550 nm, with hemispherical steering, up to 40 km (Corning Varioptic) and 220 km (Optotune).
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a User Interface for Counterfactual Simulations of Adaptive Treatment Strategies</title>
<link href="https://hdl.handle.net/1721.1/145575" rel="alternate"/>
<author>
<name>Jusiega, Violetta</name>
</author>
<id>https://hdl.handle.net/1721.1/145575</id>
<updated>2022-10-04T03:35:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Designing a User Interface for Counterfactual Simulations of Adaptive Treatment Strategies
Jusiega, Violetta
AI research has demonstrated the potential to revolutionize the healthcare industry by introducing AI-enabled tools that can aid a clinician’s decision-making process in diagnosing and treating patients. It can be especially useful in the field of personalized medicine, particularly for simulating the outcomes of adaptive treatment strategies applied to specific patients. In order for such a tool to be used in practice, however, it needs to be properly designed. Most decision support tools fail to reach the hands of doctors because they fail to address the needs of the user. The research in this paper uses principles of human-computer interaction (HCI) to design a tool that clinicians and researchers can use to simulate such strategies. The results show that it is possible to design a medical tool that addresses the needs of multiple users, and they offer insights into the most important aspects to consider when designing such tools.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-Optimal Path Planning in the Portugal-Azores-Madeira Ocean Region</title>
<link href="https://hdl.handle.net/1721.1/145574" rel="alternate"/>
<author>
<name>Dahill, Clara</name>
</author>
<id>https://hdl.handle.net/1721.1/145574</id>
<updated>2022-10-04T03:33:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Time-Optimal Path Planning in the Portugal-Azores-Madeira Ocean Region
Dahill, Clara
For intelligent ocean exploration and sustainable ocean utilization, the need for smart autonomous underwater vehicles (AUVs), surface craft, and small aircraft is rapidly increasing. The challenge of creating an optimal navigation route between departure and destination points for these vehicles has many applications, including ocean data collection, transportation and distribution of goods, naval operations, search and rescue, marine pollution, ocean cleanup, conservation, and solar-wind-wave energy harvesting, among others. Our computational approach uses our MSEAS time-optimal path planning theory and schemes based on exact Hamilton–Jacobi PDE and Level Set methods to predict time-optimal trajectories. This approach allows for multiple applications in optimal path planning for autonomous vehicles in the ocean and coastal environment. Employing our multi-resolution ocean modeling and data assimilation in the Portugal-Azores-Madeira region of the Northern Atlantic, we compute time reachability sets and optimal paths, and examine the sensitivity to variations in vehicle type, speed, start time, voyage direction, and operating depths. Our study illustrates how navigational paths vary with these parameters, and how the ocean dynamics and variability in the Portuguese ocean regions affect the time optimization, as compared to a direct voyage in the absence of any ocean flow and currents. Further in this work, we extend these methods by adding constraints to find minimum arrival times for vehicles travelling through archipelago areas that contain many land masses, as well as by executing interception routes between two vehicles. These computations will allow for expanded capacity of ocean AUVs to execute their missions, by reducing travel time and energy use, and extending range.
The methods in this thesis focus on data-driven multi-resolution ocean modeling and simulations as well as new time-optimal path planning and reachability studies for autonomous ocean vehicles in the Portugal-Azores-Madeira ocean region.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solutions of the differential equation u''' + [pi]²zu' + 3α[pi]²u = 0</title>
<link href="https://hdl.handle.net/1721.1/145450" rel="alternate"/>
<author>
<name>Hershenov, Joseph, 1935-</name>
</author>
<id>https://hdl.handle.net/1721.1/145450</id>
<updated>2022-09-16T03:25:10Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">Solutions of the differential equation u''' + [pi]²zu' + 3α[pi]²u = 0
Hershenov, Joseph, 1935-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1957; On t.p. "[pi]" is represented by the mathematical symbol.; Includes bibliographical references (leaf 33).
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology development and transfer : a study of the process at the General Motors technical staffs</title>
<link href="https://hdl.handle.net/1721.1/145448" rel="alternate"/>
<author>
<name>Baker, Milton.</name>
</author>
<id>https://hdl.handle.net/1721.1/145448</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Technology development and transfer : a study of the process at the General Motors technical staffs
Baker, Milton.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 117-120.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sliding friction between some commercial polymers under low load and velocity conditions in the limit of zero wear</title>
<link href="https://hdl.handle.net/1721.1/145447" rel="alternate"/>
<author>
<name>Aylward, Lesa L.</name>
</author>
<id>https://hdl.handle.net/1721.1/145447</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Sliding friction between some commercial polymers under low load and velocity conditions in the limit of zero wear
Aylward, Lesa L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1986; Bibliography: leaf 43.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Merging corporate cultures : a case study</title>
<link href="https://hdl.handle.net/1721.1/145446" rel="alternate"/>
<author>
<name>Bachor, Rosanne Elizabeth.</name>
</author>
<id>https://hdl.handle.net/1721.1/145446</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Merging corporate cultures : a case study
Bachor, Rosanne Elizabeth.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 118-120.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deformable membrane spatial light modulator : a charge coupled approach</title>
<link href="https://hdl.handle.net/1721.1/145444" rel="alternate"/>
<author>
<name>Osterberg, Peter Maynard.</name>
</author>
<id>https://hdl.handle.net/1721.1/145444</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Deformable membrane spatial light modulator : a charge coupled approach
Osterberg, Peter Maynard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion analysis of flexible ureteroscopic techniques by urologic surgeons</title>
<link href="https://hdl.handle.net/1721.1/145245" rel="alternate"/>
<author>
<name>Wollin, Daniel Arthur.</name>
</author>
<id>https://hdl.handle.net/1721.1/145245</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Motion analysis of flexible ureteroscopic techniques by urologic surgeons
Wollin, Daniel Arthur.
To surgically remove kidney stones from patients who suffer from this painful condition, urologic surgeons perform a common procedure known as flexible ureteroscopy. During this operation, the surgeon uses a 3mm-diameter flexible camera passed through the urinary tract to fragment, manipulate, and remove kidney stones. The flexible ureteroscope has a non-intuitive control mechanism, including a thumb-actuated lever and various wrist rotations, to direct the end effector. Numerous methodologies exist to evaluate, understand, and train proper surgeon movement when operating this device, although the current literature suggests that urologists cannot sufficiently define correct or successful device interaction. In this study, we employed infrared motion capture in combination with standard video analysis to characterize surgeon movement variables in a simulated clinical scenario. A ureteroscopic simulation box was used by 12 practicing urologists at various skill levels to perform a number of ureteroscopic tasks. Demographic, motion, and task-specific data were recorded and analyzed to delineate associations between measures of ureteroscopic efficiency and success. This project suggests that certain surgeon movement data, including measures of economy of motion and wrist rotation, trend with efficient ureteroscopic manipulation and warrant additional study. These variables could potentially serve as a basis for improvement in device development and in urologic surgical training and evaluation.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2020; Cataloged from PDF version of thesis. "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available. The images contained in this document are of the best quality available"--Disclaimer Notice page.; Includes bibliographical references (pages 51-52).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Role of master data management in large organizations</title>
<link href="https://hdl.handle.net/1721.1/145244" rel="alternate"/>
<author>
<name>Soni, Rupreet Singh.</name>
</author>
<id>https://hdl.handle.net/1721.1/145244</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2013-01-01T00:00:00Z</published>
<summary type="text">Role of master data management in large organizations
Soni, Rupreet Singh.
These days, nearly all data generated is stored as digital data. Whatever action happens in the world, the outcome tends to be digital data. From social networking sites to customer management applications, and further across the different business segments and operational streams of organizations, everything ends up as digital data. For organizations, data is generated from different business sectors and varied operational segments. The question that is prevalent and hard to answer is how organizations can make the best holistic decisions on data sets coming from varied applications and source systems. To make effective decisions, organizations need to collate the data coming from different source systems and create unified master data. Decisions taken on such master records are more meaningful and impactful. Master Data Management is a technology that helps organizations collate varied data sets originating from different source systems and create a unified master data set. This master data set can then be used by organizations for effective analytics, operational benefits, streamlined reporting, and even for adhering to regulatory requirements.; Organizations can collate several different types of data entities, and hence Master Data Management can be applied to different domains such as customers, suppliers, products, and vendors. Which domain's data sets need to be collated and mastered depends on the organization's requirements. The thesis is divided into three segments. The first describes Master Data Management technology and gives an overview of its architecture and a snapshot of industry adoption. The second describes the motivational factors for organizations to use Master Data Management. The third and last describes a strategic framework for implementing Master Data Management in an organization. Finally, I draw a few conclusions that help place the thesis in its proper context.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2013; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 61-64).
</summary>
<dc:date>2013-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Countering disinformation : using systems thinking to develop an integrated approach</title>
<link href="https://hdl.handle.net/1721.1/145243" rel="alternate"/>
<author>
<name>Tan, Brendan Weijian.</name>
</author>
<id>https://hdl.handle.net/1721.1/145243</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Countering disinformation : using systems thinking to develop an integrated approach
Tan, Brendan Weijian.
Although disinformation has been used by malicious actors throughout the ages, technological advances and social changes in the Internet era have made it a more potent problem than ever before. Several key elections in 2016 were heavily influenced by disinformation -- including the Brexit referendum and US presidential elections -- which made it evident that information is increasingly being used as a weapon. Fittingly, the Center for European Policy Analysis has highlighted that the "age of information is fast becoming the age of disinformation". Although disinformation campaigns can have a significant impact on the population, the manpower and costs required to carry them out are disproportionately low. It is thus imperative that nations are adequately prepared to deal with this asymmetric and dangerous threat using a suite of countermeasures, as there is no one silver bullet to tackle this problem. Countermeasures that have been proposed or implemented around the globe are analyzed, and a systems thinking methodology is used to develop an integrated approach to deal with this complex issue at the national level. To guide the entire thought process, the ARchitecting Innovative Enterprise Strategy (ARIES) framework is utilized. Finally, a case study on the annexation of Crimea by Russia serves to qualitatively validate the proposed system and ascertain its applicability in a real scenario. This particular case study is chosen as it is a prime example of how disinformation can be a powerful tool in the hands of adversaries.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis. Page 126 blank.; Includes bibliographical references (pages 101-119).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Humanistic co-design of a solution for the rehabilitation of children suffering from cortical visual impairment</title>
<link href="https://hdl.handle.net/1721.1/145242" rel="alternate"/>
<author>
<name>Ray Barua, Priyanka.</name>
</author>
<id>https://hdl.handle.net/1721.1/145242</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Humanistic co-design of a solution for the rehabilitation of children suffering from cortical visual impairment
Ray Barua, Priyanka.
Cortical Visual Impairment (CVI) is defined as impaired vision caused by bilateral dysfunction of the optic radiations, the visual cortex, or both. It is a major cause of low vision in children in the developed and developing world due to the increasing survival of babies from complicated deliveries in pediatric and neonatal care. The prevalence of CVI in developed countries is increasing due to the successful management of childhood blindness resulting from cataract and retinopathy of prematurity. In addition, the increasing survival of children with brain injury has contributed to increased cases of CVI. The prevalence of visual impairment in children under 16 years of age is between 10 and 22 cases per 10,000 births in developed countries and 40 per 10,000 births in developing countries, but this may be an underestimate owing to visual behavioral difficulties going undetected and to undiagnosed cases. However, many children with CVI have improved their sight and mental and physical strength with proper care. Better quantitative tools for measuring vision are needed to assess these children, to allow measurement of their visual deficit, and to monitor their response to treatment and rehabilitation. To truly meet the unique learning needs of young children with cortical visual impairment (CVI), it is critical to accurately define the population in order to create and implement quality, responsive support services. The most important part of CVI treatment is the daily exercises. Even if a child has the possibility of improving his or her condition, missing the daily exercises might permanently take away the chance of acquiring better sight someday. This research is about finding a solution for financially struggling families in developing countries like India that would help them do the daily exercises without having to travel to the hospital or meet a teacher for the visually impaired on a regular basis.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 100-107).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for humanity : a design-phase tool to identify and assess inadvertent effects of products and services</title>
<link href="https://hdl.handle.net/1721.1/145241" rel="alternate"/>
<author>
<name>Leung, Jennifer Chung Yan.</name>
</author>
<id>https://hdl.handle.net/1721.1/145241</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Design for humanity : a design-phase tool to identify and assess inadvertent effects of products and services
Leung, Jennifer Chung Yan.
Products and services are created by makers of various backgrounds and different degrees of understanding of design methodology. At best, makers use human-centred methodologies that narrow down the target users to a set of needs. Solutions are designed out of that narrowed set of needs, tested against that set of needs, and put into production. There is very little incentive or effort to examine the potential inadvertent effects that a solution may have on the user, their community, and society as a whole once it is placed in the real world. In this thesis, I created a framework that helps makers evaluate the effects a proposed product or service will have on the user and society. This framework is meant to be used as a tool to help evaluate potential solutions before production. It breaks down the analysis of potential inadvertent effects into smaller pieces, allowing makers to analyze how their solution may interact with the world on an individual, communal, and societal level. The intention of the framework is to surface both beneficial and detrimental inadvertent effects and to inspire action for the maker.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 39-40).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Use of machine learning in radio frequency integrated circuits (RFIC) development</title>
<link href="https://hdl.handle.net/1721.1/145240" rel="alternate"/>
<author>
<name>Cui, Qiang (Computer engineer).</name>
</author>
<id>https://hdl.handle.net/1721.1/145240</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Use of machine learning in radio frequency integrated circuits (RFIC) development
Cui, Qiang (Computer engineer).
This Master's Thesis starts with an introduction to the radio frequency integrated circuits (RFICs) industry and a discussion on the key problem of the existing RFIC development process: the need for multiple trial and error iterations due to inaccurate simulations. This simulation inaccuracy happens because the existing electronic design automation (EDA) software, and the underlying physics-based IC models, fail to fully capture the nonlinear, frequency-dependent RF parasitic effects. To overcome this problem, in this thesis we propose the use of machine learning in RFIC development. Machine learning uses statistical models to recognize hidden patterns from sample data points, known as "training"; generalize patterns; and make predictions based on new data. In theory, machine learning can capture the nonlinear, frequency-dependent RF parasitic effects very well thanks to the large variety of nonlinear modelling techniques at its disposal, such as polynomial regressions and neural networks. Therefore, this thesis investigates for the first time the feasibility of using machine learning in RFIC development to solve the problem of inaccurate RFIC simulation.; Chapter two describes how to represent and collect the RF and spec data to be able to use them in machine learning. The data needs to be represented in the format of {X: design parameters, Ysim: EDA simulation results, Ytrue: test results}. Ideally, large datasets should be collected by testing fabricated ICs. However, in this thesis we used electromagnetic-enabled mixed mode simulation data as an alternative to actual test data for demonstration purposes. 
Chapter three summarizes the existing RFIC development flow and describes the three different blocks in the flow where machine learning could be added: (1) between the customer specifications and the circuit design, or specs-to-design; (2) between EDA simulation and circuit fabrication, or simulation-to-fabrication; and (3) between lab test results and design revision, or test-to-re-design. Chapter four studies each of these blocks in detail by applying two basic machine learning techniques: polynomial regression (PR) and neural networks (NN). Chapter five provides case studies for developing RF switches using machine learning. The results show that machine learning can significantly improve the prediction accuracy, which proves the feasibility of using machine learning in RFIC development.; The research developed in this Thesis has strong potential to impact the RFIC industry. Unlike digital circuit design, where the high accuracy of EDA simulations allows for highly automated circuit development, the RFIC design industry suffers from significant simulation inaccuracies. Hence, RFIC development typically requires multiple time-consuming and costly design-fabrication iterations. Some researchers have already used machine learning to improve the step between the initial specifications and design, but those solutions are not very effective because of their large computational complexity. Those researchers ran hundreds (if not thousands) of simulations using existing EDA tools and used those simulations to train a neural network model so that it could learn how to design circuits. Those hundreds of simulations are what cause the computational complexity. In many cases, circuit designs using these techniques take even longer than existing solutions in the industry. In contrast, this thesis focuses on the use of machine learning to optimize block #2, that is, between simulation and fabrication, to provide accurate predictions. 
The task of accurate prediction in this thesis requires fewer computational resources yet is more helpful to RFIC development. This thesis shows that the simulation accuracy can be improved by 98%, which will dramatically reduce the need for multiple design-fabrication iterations. This improvement means significant time and cost reductions for RFIC products.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (page 50).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning large-scale agile development using a dependency structure mapping model</title>
<link href="https://hdl.handle.net/1721.1/145239" rel="alternate"/>
<author>
<name>Bajpai, Siddharth.</name>
</author>
<id>https://hdl.handle.net/1721.1/145239</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Planning large-scale agile development using a dependency structure mapping model
Bajpai, Siddharth.
Efforts to plan and coordinate complex software development using large-scale agile coordination techniques are recent phenomena. Some methodologies such as the Scaled Agile Framework (SAFe) attempt to provide a set of processes to help engineering teams who work on complex projects with significant coordination needs plan together while preserving agility at the team level. Are there existing system engineering techniques which can prove helpful as part of these methodologies? This thesis looks at one such technique - Dependency Structure Mapping (DSM) - in order to address a set of identified limitations of the program increment (PI) planning activity within SAFe. In doing so, this thesis illustrates how dependency data are assembled into a DSM model for a PI. It then applies the proposed model within a field study and discusses the results.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (page 38).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems architecture perspective on digital transformation for financial services</title>
<link href="https://hdl.handle.net/1721.1/145238" rel="alternate"/>
<author>
<name>Zaichkowsky, Tamara Miller.</name>
</author>
<id>https://hdl.handle.net/1721.1/145238</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Systems architecture perspective on digital transformation for financial services
Zaichkowsky, Tamara Miller.
The evolution of technology has had a dramatic impact on the way organizations conduct business. These changes have been felt throughout all industries, and financial services is not immune to this transformation. However, financial services has unique conditions that have shaped the evolution and use of technology within its organizations. These conditions include external factors (being a highly regulated industry, the impacts of globalization, the interconnectivity of the global financial markets, and the capital-intensive nature of the industry) in addition to internal factors (a highly risk-averse culture, slowness to progress or change, and bureaucracy by design). These factors and many others are the key elements adding to the complexity and challenges of evolving this industry. Demands from the market and competitors are forcing financial services organizations to rethink their digital strategy and evolve at a pace closer to the expectations of the market. The challengers, mainly FinTechs and large technology companies, are looking to get into financial services, whether through partnerships or subsidiaries. There is significant concern across the industry that a newcomer will take market share and/or significantly reduce the profitability of these organizations. These concerns have been the driving force for the majority of the industry's focus on digital transformation.; Digital transformation has a very broad scope of implications and considerations. Despite the industry's need for and focus on digital transformation, there is still a notable lack of progress within financial services. The focus of this thesis is on which architectural components, and which drivers for or against change, are affecting progress in financial services, and what considerations should be made for a future state that allows for digital transformation in the financial services industry through architectural design.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 87-94).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of energy delivery sector malware attack response mechanisms</title>
<link href="https://hdl.handle.net/1721.1/145237" rel="alternate"/>
<author>
<name>Sapienza, Michael Louis.</name>
</author>
<id>https://hdl.handle.net/1721.1/145237</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Analysis of energy delivery sector malware attack response mechanisms
Sapienza, Michael Louis.
Recent cyberattacks on the electricity grids in the U.S. and Ukraine, the rise of malware tailored to industrial control systems, failure of basic sanitary and life-saving systems after prolonged power outages, economic losses numbering in the billions: these are the consequences of malware attacks on critical infrastructure sectors across the globe. New and continuously evolving cyber threats demand new and better response mechanisms to mitigate their effects. However, critical infrastructure sectors, and the electricity subsector in particular, are faced with the enormous challenge of identifying gaps in their extremely complex cyber incident response mechanisms. This thesis takes a novel, systems-level approach to pinpoint deficiencies in incident response mechanisms of the U.S. electricity sector. An analysis of current and future external influences on the electricity sector validates that malware threats and vulnerabilities are rapidly evolving and are already outpacing the sector's ability to adapt its cyber incident response mechanisms. Using the Architecting Innovative Enterprise Strategies (ARIES) Framework to explore current incident response mechanisms reveals that the traditional, all-hazards approach to major incident response is insufficient to keep the grid secure. Instead, improvements in cyber incident response strategies, processes, organizations, information flow, products, and services are all necessary to overcome the disparity. Most importantly, the systems-level approach exposes that the culture of cybersecurity in the sector is the systemic driver of those shortfalls and must be the primary consideration for improving the electricity sector's cyber incident response mechanisms.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2019; Cataloged from PDF version of thesis. "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available. The images contained in this document are of the best quality available"--Disclaimer Notice page.; Includes bibliographical references (pages 165-180).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A smart diaper wetness detection sensor : concept, design and ethical considerations</title>
<link href="https://hdl.handle.net/1721.1/145236" rel="alternate"/>
<author>
<name>Sen, Pankhuri.</name>
</author>
<id>https://hdl.handle.net/1721.1/145236</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A smart diaper wetness detection sensor : concept, design and ethical considerations
Sen, Pankhuri.
Human waste contained in diapers is a rich source of medical health data. Embedding low-cost, wearable sensors in disposable diapers provides an opportunity for self-health monitoring and advancing preventative medicine. Diaper users include infants, the elderly, disabled individuals, and hospital patients. Diaper wetness-based alerting can enhance care of this population by improving incontinence management, preventing rashes and infections, and avoiding embarrassment. A practical implementation of this consumer-oriented system will directly impact users, their habits, and the ecosystem around them; enhance access to health-based information; and disrupt existing business models. Integrating the ethical dimensions of privacy, safety, security, sustainability, and socio-economic implications is essential to responsible technology development.; In this thesis, we realize a novel sensor for moisture detection that leverages the material properties of the water-absorbing polymer gel common to most disposable diapers. The proposed UHF RFID based sensor utilizes hydrogel for moisture sensing and as an antenna element, thus creating a hybrid design uniquely composed of metal and hydrogel. The design, optimized for the smallest baby diaper geometry, achieves a 1-meter read range and a bend radius of &lt;20mm, is insensitive to sensor orientation relative to the reader, is low-cost, and can be integrated with existing diaper manufacturing units. An outlook on the health applications enabled by a powerful diaper sensing system establishes the need to ground future research in ethical considerations. We present a condensed narrative on understanding ethics, applied ethics, the interplay of technology and society, and the role of creators, and subsequently detail the ethical considerations pertaining to the development of IoT products and services. A method to administer the ethical considerations is extended and exemplified through our work on the diaper moisture sensor.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2021; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 85-90).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting the enterprise for innovation : transformation for technology in support of national security</title>
<link href="https://hdl.handle.net/1721.1/145235" rel="alternate"/>
<author>
<name>O'Reilly, Patrick S.</name>
</author>
<id>https://hdl.handle.net/1721.1/145235</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Architecting the enterprise for innovation : transformation for technology in support of national security
O'Reilly, Patrick S.
Technology is fundamental to national security. Therefore, making wise and timely investments in the most relevant research and development questions, to address real and pressing needs, is critical. This thesis investigates the application of Enterprise Architecting frameworks and Systems Design methods to explore novel enterprise concepts, alternative enterprise architectures, and relevant values, in order to determine whether they enhance and add measurable value to the enterprise architecture of the early R&amp;D collaboration, teaming, proposal refinement, selection, and funding processes at MIT Lincoln Laboratory. The overall aim is to answer the holistic question of whether we can create and identify emergent value by applying an enterprise framework to the challenge of enterprise transformation. Particular emphasis is placed on examining the elements of culture as they relate to adopting new technology and processes to enable enterprise transformation.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 108-110).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-centered fashion</title>
<link href="https://hdl.handle.net/1721.1/145234" rel="alternate"/>
<author>
<name>Mui, Melody Lok Yee.</name>
</author>
<id>https://hdl.handle.net/1721.1/145234</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Human-centered fashion
Mui, Melody Lok Yee.
Nowadays, large fashion corporations are slowly adopting the human-centered design (HCD) process to connect with end users. For example, Levi used this design process to find out what feelings a new line of jackets should elicit. And at the retail sales point, the iconic handbag brand Kate Spade ideated dozens of concepts to enhance the in-store experience of guests. However, the precise process of integrating human-centered design with fashion design is ambiguous. This creates challenges for fashion designers who wish to use this process to create new fashion items that meet the emotional needs of millennials. In this research, I document how to use the human-centered design process to hone a fashion product through a rigorous iteration process. From "need" to "wear", three products were created to solve a pain point of women: the need to carry their heels seamlessly while commuting. Five major iterations were conducted to produce a line of innovative products that are wearable.; Through using the HCD process, it was discovered that women had an unconscious habit of keeping their shoes hidden, which hindered me from further developing an accessible and convenient means of carrying the heels. The important pivoting point of this research came after integrating the "avant garde" approach of fashion design into the product. This step brought in a new aesthetic element to tackle the initial unacceptability of an emotion-based need. The mission statement and values of the product also emerged through the process. The outcome is an enhanced framework that should come in handy for designers in the fashion industry who wish to practice social science in their inventions.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; "February 2020." Cataloged from PDF version of thesis.; Includes bibliographical references.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems architecting the future of U.S. Coast Guard operational logistics : a framework for enhancing mission support responsiveness</title>
<link href="https://hdl.handle.net/1721.1/145233" rel="alternate"/>
<author>
<name>McGuinness, Eugene D.</name>
</author>
<id>https://hdl.handle.net/1721.1/145233</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Systems architecting the future of U.S. Coast Guard operational logistics : a framework for enhancing mission support responsiveness
McGuinness, Eugene D.
Enterprises that are successful over the long term are compelled to continuously transform in order to adapt to complexity and challenges. Often, many of these transformation efforts fail to achieve the desired future state objectives. The United States Coast Guard (USCG) initiated a major modernization effort over a decade ago that created a Mission Support enterprise responsible for acquiring and maintaining the complex surface, aviation, shore, and IT assets and systems while supporting workforce readiness to conduct both planned and contingency operations. Within the modernization milestones, the Director of Operational Logistics (DOL), Logistics and Service Centers, and regional Bases were established to provide a single point of accountability to the field. After several years of operating in this construct, several barriers still obstruct the Mission Support enterprise from achieving integrated, customer-centric, optimized delivery of operational logistics.; This thesis argues that a "systems approach" to architecting the enterprise can posture Mission Support to enhance responsiveness to operational demands. The thesis applies the Architecting Innovative Enterprise Strategies (ARIES) Framework as the methodology for demonstrating the systems approach to leveraging complex enterprise interfaces for desired value delivery. Use of the seven-step ARIES process model illustrates that a new enterprise architecture can be conceived, evaluated, and selected from a set of generated alternatives. This future state architecture must align resources, communication, and coordination to deliver the levels of service needed to achieve strategic enterprise and Service goals. 
The process activities provide a holistic approach to architecting an enterprise, identifying key drivers for change, detailing the envisioned future, recommending a "To-Be" architecture of best fit for desired outcomes, and prescribing an implementation plan that will most effectively and efficiently transform the enterprise. This research and subsequent findings provide USCG Mission Support leadership with a glide path to transform the DOL, Logistics and Service Centers, and Bases into an integrated and responsive enterprise for delivery of operational logistics in both contingency and steady state paradigms.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2019; Cataloged from PDF version of thesis. "DISCLAIMER: Views expressed in this thesis are those of the author and do not reflect the official policy or the position of the United States Coast Guard, the Department of Homeland Security, or the United States Government"--Disclaimer page.; Includes bibliographical references (pages 153-158).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new approach to prevent accidents in the steel industry</title>
<link href="https://hdl.handle.net/1721.1/145232" rel="alternate"/>
<author>
<name>López De la Toba, Paulo Francisco.</name>
</author>
<id>https://hdl.handle.net/1721.1/145232</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">A new approach to prevent accidents in the steel industry
López De la Toba, Paulo Francisco.
The steel industry has faced extraordinary changes over recent years, incorporating new technologies and processes to make it more competitive and safer based on market requirements, regulations, and community concerns. Safety, in particular, has been an important topic, and the industry has put remarkable effort into improving its performance. Traditional safety models currently used to analyze and prevent accidents have been in use for decades. However, the complexity of systems has substantially increased over this time and has reshaped the way people perform their activities. The limitations of traditional models are becoming more evident as system complexity increases, especially when it comes to understanding the interactions between many system elements, incomplete or otherwise flawed requirements, design errors, and human behavior and contextual factors. Today, there is general recognition that a new approach is needed to address the complexity of modern systems and to uncover the deeper systemic causes that lead to accidents. This thesis evaluates and demonstrates how new approaches based on the Systems-Theoretic Accident Model and Processes (STAMP) can be applied using a real case from the steel industry. Using these approaches, organizations can gain a broader perspective to understand the full range of factors contributing to accidents and create more effective measures to prevent future accidents.; This thesis examines a high-risk incident in a steel plant and compares a traditional Root Cause Analysis that was performed with a new systems approach called Causal Analysis based on System Theory (CAST). Causes and recommendations from both methods are compared. In addition, a systems approach for hazard analysis called Systems Theoretic Process Analysis (STPA) is evaluated to determine whether it could have anticipated the behaviors and contextual factors that led to the incident, and whether the incident could have been prevented. 
These methods were found to be extremely effective in analyzing past accidents and in preventing future accidents, providing significant insights for organizations to understand the reasons behind accidents and to define the necessary steps to prevent them.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2019; Cataloged from PDF version of thesis. "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available. The images contained in this document are of the best quality available"--Disclaimer Notice page.; Includes bibliographical references (pages 159-160).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecturing the future national security space domain awareness acquisition enterprise</title>
<link href="https://hdl.handle.net/1721.1/145231" rel="alternate"/>
<author>
<name>Kelly, Aaron Joseph
            (Researcher in engineering and management).</name>
</author>
<id>https://hdl.handle.net/1721.1/145231</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Architecturing the future national security space domain awareness acquisition enterprise
Kelly, Aaron Joseph
            (Researcher in engineering and management).
Space-based capabilities are increasingly critical to prosperity in the United States (US) and around the globe. They are deeply embedded within society's vital functions and even more tightly intertwined with US national security capabilities across all warfighting domains. Disruption of these capabilities severely diminishes US warfighting capability. Space Domain Awareness (SDA) systems provide space environment and activity information to decisionmakers to enable action. These systems are a prerequisite for the defense of US space capabilities. Adversaries see US dependence on space as an attractive target and are rapidly developing capabilities to deny US access to these systems at the time the nation would need them most. These threatening capabilities are eroding the longstanding US technological advantage in space. The US must react with capability development efforts that outpace the threat, but acquisitions, including SDA capabilities, have been stymied by a fragmented enterprise, risk aversion, burdensome acquisition processes, and oversight requirements that delay capability delivery. Concurrently, staggering growth in commercial space activity is making the space environment more congested and increasing the difficulty of the SDA mission. Fortunately, commercial capabilities that can support the NSS SDA acquisition enterprise have also evolved.; This thesis argues that a systems approach to architecting the NSS SDA acquisition enterprise provides architecture concepts that position the enterprise to overcome these issues and maintain its technological lead over adversaries. The thesis applies the Architecting Innovative Enterprise Strategy (ARIES) Framework as the systems architecting context to understand and address the enterprise's complexity. The ARIES process model and view elements logically transform data from research into a comprehensible description of the current architecture and a holistic vision of the desired future state. 
Finally, it guides the process of generating and suggesting an architecture concept that delivers the quality, interoperability, responsiveness, reliability, transparency, scalability, evolvability, and affordability necessary to meet warfighter needs. The concept focuses on delivering analysis and ingestion software capabilities that maximize the enterprise's ability to leverage external SDA data sources. Incorporation of that data enables the enterprise to prioritize acquisition of NSS-specific capabilities: those that track and characterize space objects.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 178-184).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mathematical analysis of uncertainty in machine learning and deep learning</title>
<link href="https://hdl.handle.net/1721.1/145230" rel="alternate"/>
<author>
<name>Kashimura, Takuya.</name>
</author>
<id>https://hdl.handle.net/1721.1/145230</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Mathematical analysis of uncertainty in machine learning and deep learning
Kashimura, Takuya.
In this paper, we study uncertainty in machine learning and deep learning from a mathematical point of view. Uncertainty is involved in many real-world situations, and Bayesian modelling can handle such uncertainty in the machine learning community. However, traditional deep learning models fail to express uncertainty in their outputs. Recently, at the intersection of Bayesian modelling and deep learning, a new framework called Bayesian deep learning (BDL) has been proposed and studied, which enables us to estimate the uncertainty of deep learning models. As an example, we review the results of Yarin Gal, in which the famous dropout method can be seen as a form of Bayesian modelling. We also examine the overfitting problem of this framework, which arises from a property of the KL divergence, and review a modified algorithm using the α-divergence, which generalizes the KL divergence. We also study a confidence band to assess the uncertainty of a kernel ridge regression estimator. We formulate the problem of obtaining a confidence band as a convex optimization, which enables us to use existing algorithms such as the primal-dual interior point method. The proposed method yields a more accurate and faster confidence band than a bootstrap algorithm. We demonstrate the effectiveness of our proposed method both for function approximation and for estimation on a real dataset.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 69-72).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultra-low noise and low temperature drift power supply system design for RF applications</title>
<link href="https://hdl.handle.net/1721.1/145229" rel="alternate"/>
<author>
<name>Kalluru, Vivek Venkata.</name>
</author>
<id>https://hdl.handle.net/1721.1/145229</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Ultra-low noise and low temperature drift power supply system design for RF applications
Kalluru, Vivek Venkata.
This thesis started off with an investigation of the noise performance of bench power supplies and existing commercial off-the-shelf Low Drop Out (LDO) regulators used as power sources in radio frequency applications. Power supply noise contributes to the phase noise of RF transceivers, and temperature drift affects the precision of the analog baseband. Though some commercial LDOs show very good noise performance relative to bench power supplies, they cannot be readily integrated into the cost-effective and widely used CMOS process, because they are realized in expensive technologies and need off-chip components to filter low-frequency noise. A novel circuit technique to correct the temperature drift and achieve low noise performance is proposed. The temperature drift caused by the negative second-order temperature coefficient in a traditional voltage reference is effectively compensated by generating a positive second-order temperature coefficient using MOS transistor current. The simplicity of the scheme contributes to power efficiency and low noise. This topology is simulated in a 65nm CMOS process, which makes it a readily integrated solution for chip-scale RF applications requiring a high degree of precision. A temperature coefficient of 2ppm/°C is achieved in simulation, a 7X improvement over commercially available solutions. Another major advantage is the ultra-low power consumption of this topology: the current consumed is less than 1µA, which makes it ideal for battery-powered systems. The simulated integrated peak-to-peak noise is 0.1µV in the 0.1Hz - 10Hz frequency band, a 10X improvement over commercially available parts. This design does not use any discrete components to reduce low-frequency noise. A power supply rejection ratio of 76dB is reported in simulation, showing excellent immunity to noise from the input voltage.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 47-48).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing the impact of digital transformation on business</title>
<link href="https://hdl.handle.net/1721.1/145228" rel="alternate"/>
<author>
<name>Jha, Robin.</name>
</author>
<id>https://hdl.handle.net/1721.1/145228</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Analyzing the impact of digital transformation on business
Jha, Robin.
Digital Transformation (DT) is defined as the "use of technology to radically improve performance or reach of enterprises" [35]. The never-ending acceleration towards faster and cheaper computing resources has enabled software to be delivered on increasingly compact timelines - measured in weeks or even days. In parallel, leaps in technologies such as analytics, social media, mobile computing, etc. are changing the business landscape. As a result of these internal and external factors, businesses are continuously under pressure to undergo transformation to stay competitive. However, facilitating DT is a challenging undertaking for any organization and needs to be understood and executed with clarity in vision and business direction. The goal of this study is to analyze existing DT frameworks and introduce a pragmatic DT framework which would allow organizations to stay up-to-date with the latest trends while improving efficiency and flexibility, ultimately resulting in increased customer satisfaction and higher revenue. The scope for this study encompasses evaluation of different architectures, technology stacks, development, testing, deployment, and operational methodologies as well as changes in cultural and business processes required to realize this goal. Throughout the journey, the impact of DT on the organization is constantly measured using custom-defined metrics and Key Performance Indicators (KPIs). We conclude by discussing the implications of this model while raising questions for future work to further validate this model across other business domains.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 63-66).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital health innovation &amp; commercialization framework</title>
<link href="https://hdl.handle.net/1721.1/145227" rel="alternate"/>
<author>
<name>Jain, Umesh.</name>
</author>
<id>https://hdl.handle.net/1721.1/145227</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Digital health innovation &amp; commercialization framework
Jain, Umesh.
Digital health is transforming healthcare by improving our ability to accurately diagnose and treat diseases, enabling innovative care models, and making healthcare services more accessible. It holds the potential to deliver against the quadruple aim of healthcare -- improving the health of populations, enhancing the experience of care for individuals, lowering the cost of health care, and enhancing the experience of clinicians.¹ The broad scope of digital health encompasses many technologies that can deliver breakthrough solutions to consumers, health systems, providers, healthcare suppliers, life science companies, etc. Thus, digital health falls at the intersection of healthcare IT, consumer healthcare, medical devices, pharmaceutical products, and other allied products and services. While there are frameworks and established practices to develop and commercialize innovation in these respective categories, similar practices and knowledge are either limited or fragmented in the digital health space. This thesis aims to explore and create a framework for developing and commercializing digital health innovation by leveraging and integrating established and emerging practices from the traditional medical industries and modern digital industries.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 67-69).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A generic framework for detecting interpretable real-time anomalies in network traffic data</title>
<link href="https://hdl.handle.net/1721.1/145226" rel="alternate"/>
<author>
<name>Dowmon, Nicholas H.</name>
</author>
<id>https://hdl.handle.net/1721.1/145226</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A generic framework for detecting interpretable real-time anomalies in network traffic data
Dowmon, Nicholas H.
The goal of this research is to develop a framework for detecting anomalies in network traffic data on highly complex computer networks. In this research, I present the Ensemble Outlier Detection System, a new framework for detecting anomalies in multidimensional network traffic data. The system meets six design requirements which ensure that it can meet the needs of the sponsor organization's cybersecurity teams both now and in the future. In particular, this system improves on many existing anomaly detection systems by maintaining scalability for extremely large computer networks and resiliency to non-stationary data, re-establishing its own baselines as the network changes over time. I also present the Explorer tool, designed for cybersecurity analysts to interpret the cause of high anomaly scores on certain data points and to annotate each data point atomically. I ensure scalability by treating all fields in a data point as independent of one another. Preliminary results suggest that this treatment will not affect system performance, as many anomalous data points exhibit multiple anomalous fields at a time, increasing the outlier predictions for the data point through recursive aggregation. The system successfully detects and presents interpretations of various anomalies in network traffic from the sponsoring institution's dataset, and achieves performance sufficient to detect anomalies in enterprise computer networks in real time.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 89-92).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework for selecting a system design approach</title>
<link href="https://hdl.handle.net/1721.1/145225" rel="alternate"/>
<author>
<name>Chiverton, Kelly A.</name>
</author>
<id>https://hdl.handle.net/1721.1/145225</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Framework for selecting a system design approach
Chiverton, Kelly A.
Recent discussions within the Department of Defense highlight the growing need for US military systems to rapidly respond to new missions, threats, and operational environments that the warfighter can and cannot anticipate. In an effort to respond to the government-wide emphasis on fielding Department of Defense systems smarter and faster, this thesis examines the engineering fundamentals of system design options. The thesis analyzes two umbrella categories of design strategies, static vs. flexible, and explores subcategories of the two design approaches: optimized, robust, real options, and adapt. Relevant literature is used to define the design strategies, understand the benefits and penalties of each approach, and explore historical examples of each design's use within the Department of Defense. Based on the literature review, the thesis proposes a decision framework for selecting an optimal design approach that characterizes system tradeoffs between dynamic market needs, the rate of technology change, and a system's future operating environment against the value of the proposed design, with the goal of choosing the most cost-effective and responsive system design under a given set of objectives and uncertainties. A series of interviews with Air Force Field Grade Officers is used to assess the usefulness and understandability of the decision framework. The interviews also highlight framework limitations. Ultimately, the interview responses solidify a recommendation for the Air Force to implement this framework prior to a system's development.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 93-98).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using learning analytics to evaluate design changes in MOOCs : a case study on assessing course pacing</title>
<link href="https://hdl.handle.net/1721.1/145224" rel="alternate"/>
<author>
<name>Bilal, Ahmed.</name>
</author>
<id>https://hdl.handle.net/1721.1/145224</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Using learning analytics to evaluate design changes in MOOCs : a case study on assessing course pacing
Bilal, Ahmed.
Experimentation on course design in MOOCs can determine causal factors that promote learning and can identify aspects of the course where revision is needed. The presence of heterogeneous samples of learners, the difficulty of defining success metrics, and the lack of shared cross-course data are a few of the challenges course designers face in evaluating MOOCs. In this thesis, we present a data-driven framework to evaluate design changes in MOOCs. We explore a change from multiple angles (process, proficiency, and perception) and apply various analytical methods (temporal, causal, and predictive) to map out the outcome of instruction along multiple dimensions of learning. We demonstrate the application of this framework by evaluating course pacing on a repeated run of a supply chain MOOC by MITx. Self-pacing caused the completion rate (-6%), pass rate (-10%), and engagement score (-7%) to drop, although students' satisfaction with the course remained unchanged. The impact of pacing on student outcomes was not uniform, with some students experiencing no change while others encountered a steep fall. The most striking difference was seen in the longitudinal trajectories, with instructor-paced students mostly taking the same trajectory and self-paced students pursuing their own individually paced trajectories. We showed that these trajectories are correlated with student grade, and that students with certain characteristics are inclined to pursue a specific trajectory. From these and other observations, we were able to provide practical guidance to course designers on which instructional materials and practices are satisfactory and where change is needed.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 143-147).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital transformation and its influence on platform business</title>
<link href="https://hdl.handle.net/1721.1/145223" rel="alternate"/>
<author>
<name>Ganesan, Vedavinayagam.</name>
</author>
<id>https://hdl.handle.net/1721.1/145223</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Digital transformation and its influence on platform business
Ganesan, Vedavinayagam.
The platform business model is increasingly gaining popularity among academics and practitioners. From new start-ups to established incumbents, companies have shifted or are adjusting their business model from a traditional linear to a platform-based approach. A platform is a business model that creates value by connecting multiple interdependent participants and facilitating exchanges among them. Digital technologies such as connectivity, cloud computing, big data analytics, machine learning, and artificial intelligence play an important role in making these multi-party connections and exchanges possible. There is a considerable amount of literature published on platform business and digital transformation. The platform business literature often discusses strategies for platform businesses and various methods of designing and setting up a platform business. The digital transformation literature often discusses various digital technologies and ways of using those technologies to improve efficiency and performance. The main purpose of this study is to empirically analyze the relationship between the degree of digital transformation and the platform business model, and to contribute to the literature with insights gained from the results of the analysis.; This study analyzes 753 US-based active public nonfinancial firms from 16 industries that existed in 2018. The degree of digitalization, measured as the number of digital technologies involved in operations and products, was related to the existence of a platform business. The analysis was done for both product platforms and industry platforms. This study finds that the degree of digital transformation is significantly positively related to the likelihood of the existence of both a product platform and a platform business. The study also finds that, of the 16 industries studied, six are more likely to have platform businesses. This study also related other company characteristics to platform business. 
The findings include: platformization is positively related to firm size and the number of complementors a firm has; firm age is negatively related to platformization; digitalization of the value chain has a positive relationship with product platforms but no relationship with industry platforms; and R&amp;D spending does not influence platformization.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2019; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 77-79).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital marketing : role in new age marketing - revolutionary or evolutionary</title>
<link href="https://hdl.handle.net/1721.1/145222" rel="alternate"/>
<author>
<name>Sreebashyam, Ruthu.</name>
</author>
<id>https://hdl.handle.net/1721.1/145222</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2014-01-01T00:00:00Z</published>
<summary type="text">Digital marketing : role in new age marketing - revolutionary or evolutionary
Sreebashyam, Ruthu.
This paper is a result of my passion for marketing and a humble attempt to understand the changes surrounding, and impacting, how companies communicate value to their customers and partners. It outlines the evolution of marketing from a minuscule function in large companies to a key gatekeeper for most large companies in the ad-tech space today, including tech giants like Google and Facebook that earn the vast majority of their revenue through digital marketing. It is also an attempt to examine external economic and technological changes that may or may not have caused a change in the way businesses position, segment, or target consumers or, more currently appropriate, end users. Amidst the jargon thrown around marketing analytics, big data, &amp; audience profiling, the underlying intention is to consider whether digital marketing is another step in the evolution of marketing or a disruption that changes the way marketing is scientifically applied. The paper aims to ignite a discussion around where the bandwagon marches, what the next big wave in reaching your customers will be, and how we communicate value that customers find essential.; This research touches upon key trends and hallmark points in the evolution of marketing as a function and in its scope. The introduction of new marketing channels and the inclusion of data modelling are examples of new methods that help marketing evolve as a practice, although this research does note that the core concepts of marketing hold their firm ground. I would like to forewarn that this is purely qualitative research. Coming with a great deal of experience in digital marketing at Google, I realize the importance of marketing analytics; it is a subject that is widely written about, so I attempt here to take on the qualitative side of this business. Qualitative research is more perceptive and, therefore, more intriguing to me. 
The evolution of marketing is itself based on how customer preferences have changed over time, whether in what customers expect of a car or in what they expect companies to tell them about the car. Love it or hate it, there is a revolution happening under the pretext of an evolution, or a long-predicted evolution taking place with revolutionary impact. It is a paradigm shift in the way we communicate and the way we interact with one another and in groups. It is the palpable effect of guzzling exabytes of information and producing awe-inspiring, state-of-the-art marketing analytics that companies use to woo their customers. It is this game of thrones that inspires more and more research in this field. Everybody has an opinion on it, and here is mine.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014; Cataloged from PDF version of thesis.; Includes bibliographical references.
</summary>
<dc:date>2014-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed dataflow machine controllers</title>
<link href="https://hdl.handle.net/1721.1/145214" rel="alternate"/>
<author>
<name>Read, Jake Robert.</name>
</author>
<id>https://hdl.handle.net/1721.1/145214</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Distributed dataflow machine controllers
Read, Jake Robert.
Workflows in Digital Fabrication require coordination across heterogeneous computing systems, from the design tools used to describe component geometries to the embedded control systems used to interact with the physical world in order to produce those components. In the state of the art, workflows are typically static and opaque, especially within embedded controllers themselves. This makes them difficult to modify or develop, and places barriers between high-level computing and low-level control. An opportunity exists to develop an open platform for interoperability and reconfigurability that spans low- and high-level workflow components, one that could collapse much of the heterogeneity found in these systems into cohesive representations. To do so, this thesis develops a systems architecture based on reconfigurable graphs of dataflow objects. It embeds virtual dataflow graphs of modular software elements within physical dataflow graphs of modular hardware elements, recasting heterogeneous systems as cohesive graphs all the way down. The architecture is reduced to practice across high-level browser computing and low-level embedded control, through mixed networking links. It is deployed on two machine systems: one that collapses path planning and path execution for a small milling machine, marking a departure from the historic use of G-codes, and another that aligns computer-vision-based measurement with low-level motor control and sensor acquisition, to open access to materials measurement.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February, 2021   [for thesis before June 2021]; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 97-99).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling of modern excitation control systems</title>
<link href="https://hdl.handle.net/1721.1/145207" rel="alternate"/>
<author>
<name>Orta, Conrado.</name>
</author>
<id>https://hdl.handle.net/1721.1/145207</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Modeling of modern excitation control systems
Orta, Conrado.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lymphoid cell populations in the New Zealand black mouse : changes in the spleen, thymus, and peritoneal eluate cells as age increases</title>
<link href="https://hdl.handle.net/1721.1/145206" rel="alternate"/>
<author>
<name>Opperman, Julianne Elizabeth Radkowski.</name>
</author>
<id>https://hdl.handle.net/1721.1/145206</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Lymphoid cell populations in the New Zealand black mouse : changes in the spleen, thymus, and peritoneal eluate cells as age increases
Opperman, Julianne Elizabeth Radkowski.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1980; Bibliography: leaves 56-57.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The relative dose response to a small oral dose of vitamin A in cystic fibrosis</title>
<link href="https://hdl.handle.net/1721.1/145205" rel="alternate"/>
<author>
<name>Openshaw, Thomas Henry.</name>
</author>
<id>https://hdl.handle.net/1721.1/145205</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">The relative dose response to a small oral dose of vitamin A in cystic fibrosis
Openshaw, Thomas Henry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of excess dietary zinc on absorption and tissue storage of iron in the rat</title>
<link href="https://hdl.handle.net/1721.1/145204" rel="alternate"/>
<author>
<name>O'Neil, Mary Ann.</name>
</author>
<id>https://hdl.handle.net/1721.1/145204</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Effects of excess dietary zinc on absorption and tissue storage of iron in the rat
O'Neil, Mary Ann.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1980; Bibliography: leaves 78-81.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy Absorption and Dynamic Behaviors of Architectured Interpenetrating Phase Composites</title>
<link href="https://hdl.handle.net/1721.1/145186" rel="alternate"/>
<author>
<name>Taylor, Spencer V.</name>
</author>
<id>https://hdl.handle.net/1721.1/145186</id>
<updated>2022-08-30T04:08:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Energy Absorption and Dynamic Behaviors of Architectured Interpenetrating Phase Composites
Taylor, Spencer V.
Novel interpenetrating phase composites show promise for structural aerospace components, but their structure-property relations are not well understood. In this work, we explore the effects of mesoscale geometry on mechanical behaviors of architectured interpenetrating phase composites. We first investigate the tensile behavior of a composite construction termed the chain lattice, which is a hierarchical porous structure comprising two interpenetrating cellular solids. Through tension testing, we demonstrate that combined interphase action results in damage delocalization and an order-of-magnitude improvement in strain-to-failure over the fully dense base material. These experiments validate a micromechanics-based model of tensile specific energy absorption, which we then use in a parametric study on the effects of geometric parameters and matrix properties on tensile behavior. We predict that ceramic chain lattices can achieve an order-of-magnitude improvement in tensile specific energy absorption over the fully dense material. We next examine the macroscale and fine-scale dynamic response of interpenetrating phase composites comprising a body-centered cubic steel lattice embedded in an aluminum matrix. Through plate impact simulations, we find that the complex mesoscale geometry reduces shock velocity relative to monolithic constituents, slowing and spreading the shock front via reflection and redirection. At the fine scale, we can predict several aspects of the pressure and longitudinal velocity responses by tracking internal wave reflections. Finally, we observe that the post-shock maximum temperature increases with structural openness, and temperature hotspots form at interfaces parallel to the shock direction. 
The findings in this work 1) highlight the ability to tailor energy absorption of interpenetrating phase composites by controlling mesoscale geometry; and 2) provide novel structure-property linkages in the dynamic response of architectured interpenetrating phase composites.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Zoning Assessment of Unconventional Aircraft</title>
<link href="https://hdl.handle.net/1721.1/145184" rel="alternate"/>
<author>
<name>Austin, Samuel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/145184</id>
<updated>2022-08-30T03:34:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Computational Zoning Assessment of Unconventional Aircraft
Austin, Samuel P.
The protection of aircraft from lightning strikes, both triggered and intercepted, is an essential component of the aircraft development process. In the past, lightning strikes to aircraft have caused catastrophic accidents that have prompted studies into the mechanisms behind lightning effects and their mitigations. These studies have led to protective measures in the form of wire mesh and diverter strips on nonmetallic surfaces, removing sources of spark-triggered ignition in fuel systems, and route management to avoid thunderstorms.&#13;
&#13;
While significant progress has been made in understanding these phenomena, much of what we know about lightning strikes to aircraft comes from historical experience and testing. This knowledge is currently used to drive requirements for new aircraft designs, which may not conform to the same assumptions under which models for existing aircraft are valid.&#13;
&#13;
As the aviation industry has evolved, so too has our ability to predict the onset of aircraft triggered lightning. At present, computational tools are used extensively in lightning protection scenarios to adapt old models to new designs. Specifically, computational zoning analysis predicts points on the aircraft at which a lightning leader will likely initiate.&#13;
&#13;
In this work, a new pipeline for the computational zoning assessment of novel aircraft using free, open source software is developed. This methodology is discussed and compared with other zoning techniques that have been used in the past.&#13;
&#13;
The zoning methodology presented here predicts positive and negative leader attachment points for arbitrary orientations of the ambient electric field. Additionally, analysis of the optimal aircraft charge, allowing the aircraft to sustain higher fields, is presented.&#13;
&#13;
The details of the software, which uses an implementation of the three-dimensional Galerkin finite element method, are covered. Analysis examples of the MIT D8 Double Bubble, a Blended Wing Body, and a conventional transport aircraft are presented.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Iterative LQR Method for Addressing Model Uncertainty in the Mars Entry Problem</title>
<link href="https://hdl.handle.net/1721.1/145183" rel="alternate"/>
<author>
<name>Farrar, Allegra</name>
</author>
<id>https://hdl.handle.net/1721.1/145183</id>
<updated>2022-08-30T03:30:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Iterative LQR Method for Addressing Model Uncertainty in the Mars Entry Problem
Farrar, Allegra
Since the moon landing in 1969 fired the proverbial starting gun for efforts to expand the frontiers of space exploration, there has been unparalleled effort to enhance the technologies for doing so. While human presence in Earth orbit has boomed over the past decade, crewed planetary missions have yet to reach desired goals. Set by NASA as a future destination beyond Earth orbit, Mars presents significant challenges to the entry, descent, and landing sequence. Missions including sample return and human exploration require precise landing accuracy. Additionally, entry vehicle dynamics and atmospheric parameters at time of flight are hard to predict. This, along with the advancement of mission objectives, invokes the need for a reliable, robust, and computationally reasonable method with certifiable guarantees of safe landing.&#13;
&#13;
Therefore, this paper presents a closed-loop trajectory optimizer capable of incorporating the atmospheric models and navigational data uncertainty for the nonlinear dynamics of hypersonic entry by applying an iterative Linear-Quadratic Regulator (iLQR). iLQR is an efficient and powerful method for trajectory optimization derived from Differential Dynamic Programming (DDP) principles, which have been applied successfully in cases of robotic movement to locally improve upon a single trajectory through second-order convergence to a locally optimal trajectory. iLQR takes this method a step further by iteratively linearizing the system dynamics, converging to an optimal trajectory by minimizing the performance cost and the uncertainty in the dynamics model.&#13;
&#13;
To demonstrate its effectiveness, the algorithm is tested against a series of realistic simulations to evaluate model performance against mission requirements, such as high-altitude and precision landing. Results show an efficient data-driven algorithm capable of learning how to successfully control a 40-ton crewed-scale spacecraft for Mars entry under dynamical uncertainties in the state model. Additionally, given system performance parameters, the covariance, or landing accuracy, of the final position can be determined from the algorithm, and the results can be used to determine safe parameter ranges that achieve the desired accuracy.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Persistent Costs of Disclosure Exemption Regulation</title>
<link href="https://hdl.handle.net/1721.1/145181" rel="alternate"/>
<author>
<name>Voelcker, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/145181</id>
<updated>2022-08-30T04:05:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Persistent Costs of Disclosure Exemption Regulation
Voelcker, Gabriel
This paper investigates the long-term costs of size-based disclosure exemption regulation. Prior literature documents that companies react to exemption thresholds by sacrificing resources to actively lower their size and avoid compliance with reporting requirements. Exploiting the Smaller Reporting Companies’ (SRCs) threshold update in 2018, I examine whether the investment-sacrificing behavior of companies is reversed in a timely manner once the SRC threshold is lifted. I hypothesize and find evidence that size manipulation is not reversed until at least two years after the SRC threshold update, imposing persistent costs on smaller companies that have been previously overlooked by the literature. As such, the effects of a size-based threshold may not be trivially undone, adding another layer of complexity to size-based regulatory interventions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Legitimacy-Centric Regulatory Disruption: Revitalizing Communities and Competition in a Mature, Regulated Market</title>
<link href="https://hdl.handle.net/1721.1/145180" rel="alternate"/>
<author>
<name>Rixey V, Eppa</name>
</author>
<id>https://hdl.handle.net/1721.1/145180</id>
<updated>2022-08-30T03:45:33Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Legitimacy-Centric Regulatory Disruption: Revitalizing Communities and Competition in a Mature, Regulated Market
Rixey V, Eppa
As basic functions of society are increasingly coordinated through private firms, how can we ensure they support broadly shared prosperity in our communities? Regulation is a common, albeit contested, answer to this question, in part because the largest firms use coercive means to limit and capture regulation via rent-seeking processes, undermining regulatory efforts and harming both entrepreneurship and local communities. This longitudinal case study shows that some small and innovative firms also successfully influence regulation, but, instead of directly challenging or capturing their regulatory environment, they embrace social responsibilities and collaboratively develop systems of accountability to support adaptive regulation. By shifting authority from a state agency to local regulators and adopting a pragmatic rather than strictly neo-liberal market orientation, craft brewers opened a mature, regulated market to change and locally shared value creation. Establishing prosocial reputations via experimental efforts to open taprooms or beer gardens and promoting reinterpretation of these new organizational forms in the wake of successful experiments, craft breweries shifted local perceptions of their businesses from liabilities needing control to community assets worthy of promotion. Together these tactics constitute a new process of legitimacy-centric regulatory disruption, which is elaborated in a model. This discretionary, legitimacy-centric process offers a new way to conceive of how firms promote regulatory change and contributes to our understanding of the relationships between corporate social innovations and industry regulations. The potential for discrimination is also discussed by analyzing how female and minority owned breweries experience this process of creating both financial and community value.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trajectory Specification to Support High-Throughput Continuous Descent Approaches</title>
<link href="https://hdl.handle.net/1721.1/145179" rel="alternate"/>
<author>
<name>Fasoro, Titilayo</name>
</author>
<id>https://hdl.handle.net/1721.1/145179</id>
<updated>2022-08-30T03:08:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Trajectory Specification to Support High-Throughput Continuous Descent Approaches
Fasoro, Titilayo
Continuous descent approaches (CDAs) have demonstrated the ability to reduce aircraft fuel burn and noise, while trajectory-based operations (TBO) have been shown to improve the predictability and throughput of aircraft flows. Prior work has recognized the difficulty of implementing CDAs in high-density terminal areas due to an increase in uncertainty, which can result in a decrease in throughput. This thesis investigates whether the increased throughput afforded by trajectory-based operations can be combined with continuous descent approach profiles to achieve high-throughput CDA operations. The proposed method first determines a CDA profile via trajectory optimization, and then locates waypoints with required time of arrival (RTA) constraints along this profile, to optimize a combination of throughput and fuel burn. For representative terminal-area descent profiles at Hartsfield-Jackson Atlanta International Airport (ATL), we find that specifying intermediate waypoints with RTAs can increase throughput by as much as 70%, while incurring an additional fuel burn penalty of 2% per flight.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Big Data and Firm Risk</title>
<link href="https://hdl.handle.net/1721.1/145178" rel="alternate"/>
<author>
<name>Paine, Fiona</name>
</author>
<id>https://hdl.handle.net/1721.1/145178</id>
<updated>2022-08-30T03:30:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Big Data and Firm Risk
Paine, Fiona
This paper investigates the impact of firm data collection, and the analysis of collected data, on the riskiness of firm cash flows. I use a scraped data set of the third-party resources loaded on firms’ websites as a measure of firm data collection and analysis practices. I find that firm use of less effective web analytics is associated with an increase in the variance of sales, inventory, and both fixed and variable costs. This effect holds despite no change in the levels of these variables. Looking at the effect of treatment on the treated, there is higher profit and sales variance during times of higher uncertainty. I use differences in web analytics technology and a change in their relative effectiveness as my identification strategy. As a case study of a large negative demand shock, I examine differences in firm reactions to COVID-19 based on their web analytics usage.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Customer Search and Product Returns</title>
<link href="https://hdl.handle.net/1721.1/145177" rel="alternate"/>
<author>
<name>Ibragimov, Marat</name>
</author>
<id>https://hdl.handle.net/1721.1/145177</id>
<updated>2022-08-30T03:15:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Customer Search and Product Returns
Ibragimov, Marat
Online retailers are challenged by frequent product returns. High return rates significantly decrease companies’ profits, which makes managing product returns very important from a practical standpoint. Typically, practitioners study returns in connection with purchase decisions or as part of customer behavior or type. In this paper, we show that the events which precede the purchase decision are related to the return decision. Generally, this information is readily available to online retailers and thus provides a low-cost opportunity to better understand and predict product returns.&#13;
&#13;
Based on data provided by a large apparel retailer, we demonstrate that the way customers search for a product is indicative of product returns. We find correlational evidence that using search filters, spending more time, and purchasing the last item searched are negatively associated with the probability of return. We propose a joint model of search, purchase, and return, grounded in an analytic model of these decisions. Our model is consistent with the findings in the data and provides insight into how search and returns are related. Finally, using a machine learning framework, we demonstrate that adding search data improves the prediction accuracy of individual-level return rates above and beyond prior models.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Targeting Seasonal Marketing Campaigns: Rebalancing Exploration and Exploitation</title>
<link href="https://hdl.handle.net/1721.1/145174" rel="alternate"/>
<author>
<name>Li, Keyan</name>
</author>
<id>https://hdl.handle.net/1721.1/145174</id>
<updated>2022-08-30T04:02:19Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Targeting Seasonal Marketing Campaigns: Rebalancing Exploration and Exploitation
Li, Keyan
Once a firm has a targeting policy, the firm incurs an opportunity cost when varying its actions to learn how to improve that policy. This results in what is classically considered an exploration vs. exploitation tradeoff, which is widely studied in online learning domains. However, in many marketing channels, such as seasonal marketing campaigns and salesperson marketing, firms are forced to learn in batches that occur infrequently. For example, when demand is seasonal, marketing campaigns often occur annually, with retailers using data from last year to train this year’s policy. This essay identifies an information externality when assigning actions to customers in the same batch: the incremental information contributed by the focal customer depends upon the assignment decisions for other customers. This essay investigates how to optimally rebalance exploration (more variation) and exploitation (direct implementation) in these settings by leveraging this externality. The algorithm this essay proposes balances the expected value and opportunity cost of new information from each new batch. This essay validates the findings using data from a field experiment.¹&#13;
&#13;
¹This essay is based on joint work with Duncan Simester.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Unintended Inevitable: How Housing Fell through the Cracks in Venice Beach's Transition to Community Planning, and What It Might Take to Build an Imagination for the Future</title>
<link href="https://hdl.handle.net/1721.1/145171" rel="alternate"/>
<author>
<name>Schuessler, Anna M.</name>
</author>
<id>https://hdl.handle.net/1721.1/145171</id>
<updated>2022-08-30T03:12:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Unintended Inevitable: How Housing Fell through the Cracks in Venice Beach's Transition to Community Planning, and What It Might Take to Build an Imagination for the Future
Schuessler, Anna M.
Today, U.S. cities large and small are grappling with housing shortages and pressure from property owners to limit development and adopt policies allowing few to no changes in their neighborhoods. Studies showing the disproportionate impact property owners have on local housing policies also provide evidence that these influences have severely impeded housing production over time. Unless changes are made, they will continue to do so, leaving an increasing number of U.S. cities with a worsening housing shortage.&#13;
&#13;
This thesis studies the community planning that took place in Venice in the late 1960s and early 1970s, reflecting the deleterious effects of a hyper-local planning focus on both current and future residents. Using archival research methods and a liberatory memory framework, I attempted to trace the dynamics underlying and surfacing during Venice Beach’s community planning process in the late 1960s and early 1970s, a time when concerns about unregulated development and a community planning process deemed inadequate by almost all stakeholders shaped a community plan allowing little growth or change. A set of secondary sources informed my understanding of the agency community groups and leaders believed they had to influence this community planning process, and helped track the cumulative effects of local municipalities enacting slow-growth land use policy. This analysis showed that traditional planning processes, many of which have been in use for decades, privilege the sentiments of socially and economically dominant community voices. A regional approach to housing production can address the inequities produced by this dynamic — by widening our lens to think about what happens when most neighborhoods or cities in a region reject new housing production, issues with parochial planning are exposed. Efforts to set regional goals for housing, and a regulatory structure to ensure those within a region contribute to them, offer a path toward addressing housing shortages. However, as we widen that lens beyond the loudest voices in the room, I believe we need to be vigilant not to lose the voices of the communities that have historically been marginalized by these processes and who resist oppression and plan for the future on their own terms.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cultivating Capacity in the Northeast's Native Seed and Plant Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/145170" rel="alternate"/>
<author>
<name>Allen, Eve B.</name>
</author>
<id>https://hdl.handle.net/1721.1/145170</id>
<updated>2022-08-30T03:40:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Cultivating Capacity in the Northeast's Native Seed and Plant Supply Chain
Allen, Eve B.
The United States Northeast is turning to nature to prepare for climate change and mitigate the economic, societal, and environmental challenges caused by urbanization and industrialization. Cities and suburbs across the megalopolis are replanting forests, softening coastlines, restoring wetlands, harnessing plants and microbes to remediate brownfield sites, and planting native vegetation on rooftops and old elevated railway lines. These activities, spanning from the micro-scale (e.g., street tree plantings) to the macro-scale (e.g., coastal restoration), require seeds and plant propagules. This physical living material forms the foundation of both natural and constructed landscapes. Vegetation plays a critical role in providing an array of regulating, provisioning, and cultural ecosystem services that greatly benefit urban regions. However, largely missing from the discourse is how the chronic commercial shortage, or even unavailability, of most native plant species as seeds or nursery materials constrains efforts to reestablish biodiverse self-sustaining populations, assemblages, and communities that improve ecosystem functioning, support pollinators and wildlife, and are durable enough to withstand the impacts of climate change.&#13;
&#13;
This thesis research uses a mixed-method multi-level case study approach to understand the structure of the social network — government agencies, academic institutions, nonprofit organizations, private companies, and local citizens — as a first step in understanding viable pathways to strengthen the Northeast’s native seed and plant material supply chain, which is a prerequisite for achieving the multiple objectives of current and future restorative activities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Down then Out: Basement Apartments and Housing Insecurity in the Face of Flood Risks</title>
<link href="https://hdl.handle.net/1721.1/145169" rel="alternate"/>
<author>
<name>Silva, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/145169</id>
<updated>2022-08-30T03:36:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Down then Out: Basement Apartments and Housing Insecurity in the Face of Flood Risks
Silva, Stephanie
On September 1st, 2021, Hurricane Ida struck New York City, bringing record rainfall of 3.15 inches in one hour and prompting New York’s National Weather Service’s first-ever Flash Flood Emergency. By the time the storm cleared, 13 New Yorkers had been killed. Eleven of these individuals were in basement apartments, and five of the six homes where these fatalities occurred were illegally converted units.&#13;
&#13;
Though the City of Boston has not faced fatal floods like those in New York, residents are experiencing a similar dual crisis of housing affordability and increasing flood risk. All the while, little to no conversation is taking place about the specific vulnerability of those living in formal (legally registered) and informal (unregistered) basement apartments. Residents appear to be driven “below ground” by displacement pressures brought on by a lack of alternative affordable housing, or by ineligibility for subsidized housing. Often these apartments serve as a “last resort” for residents seeking to remain in their neighborhood. For those in informal units, fear of displacement by city inspectors severely limits access to assistance with poor living conditions or landlord disputes, leading to further precarity.&#13;
&#13;
Through a series of interviews and analyses, this thesis provides a rough assessment of the scale and areas where residents face the most extreme version of this “compounding risk” in a single neighborhood: East Boston. The presence of this form of housing is a response to the affordable housing crisis, thus I offer five recommendations for the city to expand their understanding and planning efforts. Recognizing the “paradox of exposure” faced by many residents in this informal housing, these recommendations offer a means of expanding safety and security without displacing residents.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Shaping Mechanisms Prototyping of PneuKnit Systems</title>
<link href="https://hdl.handle.net/1721.1/145168" rel="alternate"/>
<author>
<name>AlHajri, Maryam A.</name>
</author>
<id>https://hdl.handle.net/1721.1/145168</id>
<updated>2022-08-30T03:32:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Self-Shaping Mechanisms Prototyping of PneuKnit Systems
AlHajri, Maryam A.
Our surroundings are constantly in flux, whether through changes in the environment or changes in those who inhabit it. However, most of our spaces and building components are designed for permanence and durability without acknowledging the nuanced fluctuations of the user’s behavior and lifestyle, or changes in the natural environment. This drive for building permanence, designed to resist change, contributes to the 100 million tons of material wasted annually through recurring renovations and remodeling that inevitably address these fluctuations. What if our parts were active and could sense, react, respond, adapt, and co-evolve with their inhabitants and surrounding context? Rather than building with static, dormant components, this alternative presents opportunities to advance the built environment and rethink its interrelations with its users and its context, resulting in spaces that are performative and attuned to user needs.&#13;
&#13;
This thesis seeks to develop a typology of lightweight, adaptable systems that are rapid and affordable to manufacture. It investigates the fabrication of responsive self-actuating mechanisms; specifically, hybrid pneumatic-knitted (pneu-knit) systems that are autonomous and adaptable to changes within the environment through embedded sensors. The integrated sensors detect the input stimuli (in this particular case study, user proximity), transmitting the data to a signal processor and interpreter, which then generates output values for the air pressure settings. This acts as a direct informer and physical shaper of the pneu-knit system, whereby the differentiated shaping is generated through the structure and the design of the pneumatic component. The contributions of this work include developing a fabrication framework and method to integrate the knitted, pneumatic, and sensing components for the assembly of affordable, adaptable, lightweight material systems that are attuned to their surroundings.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating New Value from Laboratory Testing and Services in Value-Based Healthcare: Investigating Data Monetization Strategies from Clinical Laboratories</title>
<link href="https://hdl.handle.net/1721.1/145165" rel="alternate"/>
<author>
<name>Garcia, Christopher A.</name>
</author>
<id>https://hdl.handle.net/1721.1/145165</id>
<updated>2022-08-30T03:01:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Creating New Value from Laboratory Testing and Services in Value-Based Healthcare: Investigating Data Monetization Strategies from Clinical Laboratories
Garcia, Christopher A.
In the U.S. healthcare system, much effort is being spent to decrease healthcare costs while improving patient outcomes and the health of the entire population. This transition from a fee-for-service payment model to one that allows for pay-for-performance (generally referred to as value-based payment) has been gradual but is widely recognized as a key strategy for the American healthcare system. Different stakeholders in the healthcare industry are transforming their identities, organizations, and services to compete in the changing healthcare market.&#13;
&#13;
The clinical laboratory is not usually considered a key contributor in value-based healthcare (VBHC) models, yet it is well situated to contribute in meaningful ways due to the nature of laboratory testing, the digitally native environment in which modern labs operate, and the growing acceptance of at-home testing. This thesis investigates how clinical laboratories are creating new value in the VBHC market using data-enabled, digital strategic initiatives, while also validating the applicability of data monetization frameworks developed at the MIT Center for Information Systems Research (CISR). Four real-world examples of laboratory services created to support value-based care were collected through interviews with leaders of their respective laboratory companies. After analysis, all four examples were clear cases of data monetization, with the framework highlighting key factors that helped each laboratory generate new value from its data assets. Key factors included leadership support; an understanding of how clients create and capture value in value-based arrangements; and personnel to translate laboratory data into actionable information that supports the value-based healthcare initiatives of their clients.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Architecting a Space Force Enterprise</title>
<link href="https://hdl.handle.net/1721.1/145164" rel="alternate"/>
<author>
<name>Landsberg, John N.</name>
</author>
<id>https://hdl.handle.net/1721.1/145164</id>
<updated>2022-08-30T03:01:52Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Systems Architecting a Space Force Enterprise
Landsberg, John N.
The Space Force is the newest addition to the Department of Defense since the creation of the Air Force. The advent of space-based technologies, both government and commercial, created the need for a new military service focused on the space domain. Not yet fully operational, the Space Force mostly consists of former Air Force organizations and is challenged with establishing a new enterprise while simultaneously providing space-based capabilities. As an enterprise, the Space Force comprises a complex set of organizations responsible for providing space-based capabilities to a diverse range of stakeholders. Furthermore, the political considerations in establishing the Space Force are essential in determining its long-term success. Systems thinking enables an enterprise architect to address these considerations in a holistic manner.&#13;
&#13;
Additionally, complexity is another important consideration when architecting a new enterprise like the Space Force. The organizations, interactions, and topology of the enterprise all contribute to the overall complexity. Using organizational complexity methodology, the relative complexity of different enterprise architectures can be compared. Though complexity should be minimized, important trade-offs emerge when comparing enterprise alternatives. Increasing the organizational size and interactions between entities can increase enterprise capability, but also its complexity.&#13;
&#13;
This thesis applies the ARIES methodology to generate and evaluate different enterprise architecture alternatives for the Space Force. Through research and stakeholder interviews, both the challenges and opportunities are addressed to develop a future enterprise focused on value delivery. To supplement the evaluation, enterprise complexity analysis provided a more thorough understanding of the effects of the different alternatives. As a result, the recommended architecture focuses on the integration of space-based capabilities across the enterprise landscape. A more integrated Space Force can provide a better sense of identity and the expertise necessary for long-term value delivery. While it does not create new value pathways for the Space Force, providing additional integration capabilities can enable the more efficient use of space-based capabilities and decrease overall enterprise complexity. Though complexity is only one of many evaluation criteria, the methodology used here provides a foundation for future research on enterprise complexity.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous Vehicle Implementation into Existing Garrison Infrastructure</title>
<link href="https://hdl.handle.net/1721.1/145163" rel="alternate"/>
<author>
<name>Yoon, Edmund J.</name>
</author>
<id>https://hdl.handle.net/1721.1/145163</id>
<updated>2022-08-30T03:37:46Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Autonomous Vehicle Implementation into Existing Garrison Infrastructure
Yoon, Edmund J.
Military installations exist as hybrid communities and staging areas that support the overarching mission to protect and defend the Constitution of the United States of America. Specifically, the Army is currently reevaluating how installations fit into the battle space in multi-domain operations. A component of this is adapting and implementing technological advancements, such as the rapidly growing field of connected and autonomous vehicle (CAV) technology. CAVs, with their array of sensors and software, are able to capture, store, and analyze remarkable amounts of data. Military installations need to develop an understanding of the data systems involved in CAV deployments.&#13;
&#13;
This thesis explores the basics of autonomous vehicle technology and the regulatory space within which autonomous vehicles will operate on military installations. This research also analyzes an autonomous vehicle shuttle pilot at a military installation as a case study and provides recommendations for future testing and implementation of CAVs on military installations. The recommendations and analysis span seven research lines of effort and aim to lay a foundation for further research and development to optimize and inform the integration of this technology. This thesis aims to enhance understanding of the process, infrastructure, human factors, and data systems that AV deployments must consider for successful implementation into existing garrison architecture.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for Informational Needs Among Small Producers in Panama: A Human-Centered Approach</title>
<link href="https://hdl.handle.net/1721.1/145162" rel="alternate"/>
<author>
<name>Chung Chung, Michelle Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/145162</id>
<updated>2022-08-30T03:39:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Designing for Informational Needs Among Small Producers in Panama: A Human-Centered Approach
Chung Chung, Michelle Marie
According to the UN, agriculture can lift more people out of poverty than any other sector. As of 2019, agriculture accounted for only 2.14% of the economy in Panama; however, the agricultural sector serves as one of the primary sources of income for communities living in poverty. Small farmers who own less than 10 hectares account for 82% of the total farmers in Panama and are responsible for the majority of the agricultural production in the country. While only 32% of farmers are below 45 years old, this younger generation is highly interested in adopting new technologies to maximize their production. In particular, the spread of smartphones and Internet-based digital tools creates greater opportunities for improvement in agriculture, such as more precise technologies, better data collection and analysis, and more effective information dissemination.&#13;
&#13;
This study utilizes the human-centered design process to explore the informational needs of small-scale producers in Panama, the existing ways they get information, and the tools that can help them improve their decision-making and productivity. Interviews with 30 producers and agricultural experts demonstrated that farmers need reliable, easily accessible, and up-to-date information. Many farmers agreed that, in their experience, information from government sources has proved limited and hard to find. AgroInfo is a digital mobile platform that aims to help farmers easily find relevant and updated information while expanding interaction and knowledge exchange within the agricultural community. This study addresses the difficulties that Panamanian small farmers encounter while searching for information and seeks to explore informational opportunities to increase their efficiency and productivity.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Impact of Play: Designing for Well-being through Playful Digital Mediums for Older Adults in Thailand using the Human-Centered Design Process</title>
<link href="https://hdl.handle.net/1721.1/145161" rel="alternate"/>
<author>
<name>Kasemsri, Jitt</name>
</author>
<id>https://hdl.handle.net/1721.1/145161</id>
<updated>2022-08-30T03:44:14Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Exploring the Impact of Play: Designing for Well-being through Playful Digital Mediums for Older Adults in Thailand using the Human-Centered Design Process
Kasemsri, Jitt
This thesis explores play and how we might design digital products for play. The concept of play is mainly studied in the context of children, and with older adults, play is often used as a mechanism to achieve specific goals and behaviors. In Thailand, the concept of play seems to permeate different aspects of everyday life. The word ‘play’ is used to describe actions that are not typically viewed as playful in Western culture; some examples are resting, gambling, and stock trading. Furthermore, with technology, internet access, and smartphones becoming more accessible to older Thai adults, I intend to investigate the intersection between play and technology.&#13;
&#13;
Using iterative human-centered design processes, this study explores how older adults in Thailand play, if and how play impacts overall well-being, and how a playful digital product may be designed to serve their play and well-being needs.&#13;
&#13;
Through interviews with target users, frameworks for well-being and play dimensions and personas that list each potential user’s context, needs, and challenges were created. A digital product concept was ideated, and an interactable playful digital prototype was designed to serve the users' needs.&#13;
&#13;
Results suggested that the product can potentially serve the well-being and play needs of Thai older adults. Design improvements and further research are recommended to ensure that the findings in this research are applicable across different markets, geographies, and cultural contexts.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steady-State and Transient Thermal Modeling of Solid Electrolysis (SOXE) within the Mars Oxygen In-Situ Resource Utilization Experiment</title>
<link href="https://hdl.handle.net/1721.1/145160" rel="alternate"/>
<author>
<name>Schultz, Justine Nikole</name>
</author>
<id>https://hdl.handle.net/1721.1/145160</id>
<updated>2022-08-30T03:36:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Steady-State and Transient Thermal Modeling of Solid Electrolysis (SOXE) within the Mars Oxygen In-Situ Resource Utilization Experiment
Schultz, Justine Nikole
Humankind has always felt the need to understand our place in the universe. The most direct next step toward the colossal task of understanding and exploring our place in the solar system is to send people to Mars. This ambitious task requires improved understanding and performance of in-situ resource utilization on Mars’ surface as humans prepare to visit the planet. The Mars Perseverance Rover, which landed on the Martian surface on 18 February 2021, carried the Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) as an experimental payload to demonstrate the capabilities of in-situ resource utilization by producing oxygen (O2) from the abundant carbon dioxide (CO2) that makes up the majority of the Martian atmosphere. &#13;
&#13;
Accurate, high-fidelity modelling of internal temperatures of the Solid Oxide Electrolysis (SOXE) stack is crucial to understanding the operational performance of MOXIE. Weight, energy, space, and complexity constraints precluded adding internal temperature sensors to the MOXIE flight instrument. Tests are being conducted on the Martian surface with limited sensor data available to understand the degradation and performance of the SOXE stack under various operational conditions. A high-fidelity model has been created in COMSOL to understand the thermal impact of ambient conditions and empirical data at any given location of the SOXE stack, both internal to the flow path and external. This transient model was validated against data from JPL’s MOXIE testbed laboratory, and model validation continues as new data is down-linked from the MOXIE flight model aboard NASA’s Perseverance Rover.&#13;
&#13;
This thesis gives an overview of the thermal system and the corresponding thermal and multi-physics modelling of MOXIE. Since MOXIE is an experimental instrument confined to the Martian surface with limited sensors, accurate modelling of detailed thermal data can provide insight into the instrument’s performance. Similarly, analytical experiments can be conducted using the multi-physics model to predict the results of a warm-up routine and an oxygen-producing run prior to experimenting in the harsh and unforgiving Martian atmosphere. The model will contribute to understanding the performance and thermal response of creating oxygen on the Martian surface to aid in human exploration.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rethinking Consumption &amp; Production: Systems Design for Sustainable Lifestyles in the Global North</title>
<link href="https://hdl.handle.net/1721.1/145159" rel="alternate"/>
<author>
<name>Liu, John C.</name>
</author>
<id>https://hdl.handle.net/1721.1/145159</id>
<updated>2022-08-30T03:39:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Rethinking Consumption &amp; Production: Systems Design for Sustainable Lifestyles in the Global North
Liu, John C.
Climate change &amp; sustainable development are two of the greatest challenges of the 21st century. The dominant narrative for addressing the crisis revolves around technological innovation, which presents an incomplete framing of the problem and produces solutions that only address symptoms and not the root cause of our existential predicament. The purpose of this thesis is to investigate the drivers of unsustainable lifestyles in the Global North by drawing upon scholarship in the field of Sustainable Consumption &amp; Production. The research methods include qualitative secondary research and the application of systems thinking in the social sciences to represent a system of consumption &amp; production.&#13;
&#13;
The output of this research is a framework titled ‘The Forces that Shape Consumption &amp; Production’, which assists system designers in mapping the relationships and interactions between four primary actors: the individual, community, enterprise, and government. A core argument of this paper is that the choices made available by a system of consumption &amp; production determine the lifestyles that emerge. The framework is also used to conduct a macro-level analysis of transnational corporations, with special attention paid to the United States. The findings reveal six drivers of unsustainable consumption &amp; production that have undermined progress on sustainable development. To address these issues, twelve design solutions are identified in three intervention categories (practice, cultural, and legal) that can be applied to leverage points within the system. Lastly, I propose using the framework as an analytical tool to complement human-centered design methodologies and to create a bridge between academia &amp; industry.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulated Study Platform To Test Smart Home Technologies For Older Adults</title>
<link href="https://hdl.handle.net/1721.1/145158" rel="alternate"/>
<author>
<name>Trivedi, Yash</name>
</author>
<id>https://hdl.handle.net/1721.1/145158</id>
<updated>2022-08-30T03:31:19Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Simulated Study Platform To Test Smart Home Technologies For Older Adults
Trivedi, Yash
The current global population is undergoing an unprecedented demographic change with a rapid increase in older age groups. One of the greatest challenges facing our aging population is the need to age independently, preferably at home. Smart home technologies such as connected lighting, security systems, and sensors have been suggested to provide older adults with better health, safety, and peace of mind. While these smart home technologies promise great benefits, their adoption is low among older adults, and identifying the potential barriers remains an open question in academia and industry. Existing research efforts, however, lack a framework to test and identify these human factors and potential barriers in a scalable and low-cost manner. &#13;
&#13;
This thesis explores the possibility of using a simulated smart home experience, designed to demonstrate technology integration in the typical daily routine of an older adult, to observe and test older adults’ interactions with various smart home technologies. The overall objective of this thesis is to design a simulated smart home experience based on a framework describing different levels of home automation, followed by a user study to observe and evaluate older adults’ interactions with technologies in the domains of energy management, health and wellness, housework support, and safety &amp; security. The outcome of this study is a framework for testing various smart home technologies in a simulated smart home environment, validated through a study simulating a traditional home (L1) and a networked home (L3) to derive insights and design considerations for increasing the adoption of smart home technologies among older adults.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Part of the Furniture: Envisioning Furniture Futures Through Qualitative Research and Design</title>
<link href="https://hdl.handle.net/1721.1/145157" rel="alternate"/>
<author>
<name>Risueño Dominguez, Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/145157</id>
<updated>2022-08-30T03:03:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Part of the Furniture: Envisioning Furniture Futures Through Qualitative Research and Design
Risueño Dominguez, Maria
Furniture is a bridge between our bodies and the world. Through furniture, we can live in a way we wouldn’t otherwise be able to. Since we evolved into humans, we’ve been designers and consumers of furniture, from the rocks in ancient caves to kings’ thrones to the millions of couches worldwide: furniture is our tool for living. Chairs, tables, beds... hundreds of objects surround us in our everyday lives, yet how much time do we spend rethinking them? The phrase being ‘part of the furniture’ refers to something that has been somewhere so long that it seems a permanent, unquestioned, or invisible feature of the landscape. Part social and behavioral research project and part furniture design exploration, this thesis aims to understand what furniture means to people and invites us to challenge and rethink what we know about furniture and envision new ways of designing it in the future.&#13;
&#13;
Eight out of ten furniture pieces end up in landfills during their lifetime. The sustainability of furniture is a complex issue due to its multifaceted nature. It involves not only material selection, recyclability, and quality standards, which impact longevity, but also consumer behavior, emotional needs, and furniture disposal. To understand what we can do to tackle this challenge, we reviewed relevant literature across disciplines and conducted qualitative field research. Our review examines such factors as emotional durability, product attachment, material perception, and adaptability as different approaches to extending longevity. In addition, in-depth interviews were conducted with furniture users and stakeholders in the furniture industry, including subject-matter experts.&#13;
&#13;
This research aims to identify the gap between our furniture needs and the products available on the market. To understand this gap, we first looked at how people think about furniture and what they value. Secondly, we identified current practices and implications in the furniture industry. Thirdly, we explored areas of opportunity for the future of furniture. In the study, adaptability and the ability to change furniture over time were the top needs among furniture users. Industry stakeholders recognized longevity as a critical opportunity in the future. Yet, so much of the furniture we see on the market does not reflect these needs.&#13;
&#13;
Along with this thesis document, we envisioned a system of products that proposes an alternative way of thinking about furniture. We developed experimental prototypes of long-lasting furniture joints that allow for multiple affordances, adapting to different functions and needs over time. As the industry moves forward, new challenges will arise; this study identifies opportunities for designing, marketing, and imagining furniture in the future.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Inference: Heterogeneous Effects and Non-stationary Environments</title>
<link href="https://hdl.handle.net/1721.1/145155" rel="alternate"/>
<author>
<name>Slavov, Stanislav</name>
</author>
<id>https://hdl.handle.net/1721.1/145155</id>
<updated>2022-08-30T03:28:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Causal Inference: Heterogeneous Effects and Non-stationary Environments
Slavov, Stanislav
The capability of large businesses and eCommerce platforms to utilize vast amounts of customer data has unlocked the possibility of using advanced analytics methods to customize marketing strategies. We consider the stage of conversion in a marketing funnel, where a customer has arrived on the platform and chooses whether to purchase one of the options offered.  In this thesis, we present two lines of work that address the question of whether showing more options improves purchase probability on two levels: population and individual. Results are centered around data from a field experiment run by an online platform. In the setting of the experiment, the population and individual level effects can be understood through the lens of causal inference and the estimation of treatment effects.&#13;
&#13;
First, we use the experiment data to build causal models that aim to maximize the probability of purchase by customizing the number of options shown to each customer. We show that even when advanced analytics and careful model selection procedures are used, the produced models can fail to generalize well to new data. We conclude this first section by showing strong evidence that the I.I.D. assumption, fundamental for generalization of machine learning models, is violated for the data we consider.&#13;
&#13;
In the second part, we address the problem of estimating treatment effects in non-stationary data. In this setting, using old data to make inferences can lead to unreliable results. We propose a novel procedure that helps smooth out the data non-stationarity by providing a way to resample previous data to match the distribution in the current time period. We demonstrate its effectiveness on the experiment data and conduct sensitivity analysis. Finally, we validate the procedure through experiments on simulated data.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soundscapes as Urban Transformation: Introducing a notational language that represents the shifting relationships between sound, space, and movement</title>
<link href="https://hdl.handle.net/1721.1/145153" rel="alternate"/>
<author>
<name>Oikonomaki, Eleni Styliani</name>
</author>
<id>https://hdl.handle.net/1721.1/145153</id>
<updated>2022-08-30T03:03:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Soundscapes as Urban Transformation: Introducing a notational language that represents the shifting relationships between sound, space, and movement
Oikonomaki, Eleni Styliani
Even though we have advanced technology that can reveal the complexity of cities, urban planners typically turn to the physical attributes of the built environment alone to design them. Instead, this research views cities as a system of continuous, temporal changes that determine how people actually experience and move through cities in their everyday lives. I argue that sound, an integral experience of cities often treated as no more than urban pollution, conveys vital information about the practices, events, boundaries, and characters of neighborhoods and streetscapes. Urban sound is ubiquitous, yet we have not developed an adequate language to describe it. Through a case study in Cambridge, Massachusetts, I introduce a computational tool that can be used to understand and represent temporal, sonic changes occurring during different phases of the COVID-19 pandemic. More broadly, this work offers a notational system as a new language for representing the changing relationships between sound, space, and movement, embodying the complexity of the urban environment.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verik: Reinterpreting Kotlin as a Hardware Description Language</title>
<link href="https://hdl.handle.net/1721.1/145151" rel="alternate"/>
<author>
<name>Wang, Francis</name>
</author>
<id>https://hdl.handle.net/1721.1/145151</id>
<updated>2022-08-30T03:21:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Verik: Reinterpreting Kotlin as a Hardware Description Language
Wang, Francis
This work introduces Verik, a hardware description language (HDL) for designing and verifying digital integrated circuits. Verik aims to be a drop-in replacement for SystemVerilog that leverages the modern software stack to improve engineer productivity. Verik builds upon Kotlin, a modern general-purpose programming language with a clean and expressive syntax. Verik is Kotlin reinterpreted with the semantics of an HDL. The Verik toolchain consists of two parts, the compiler and the importer, which serve to bridge the gap between the Kotlin and SystemVerilog environments. Verik is translated to SystemVerilog by the Verik compiler. This translation process is direct, typically with one-to-one correspondence between the input and output source files. Verik generates readable SystemVerilog output similar to what an engineer would have written. Conversely, SystemVerilog declarations can be imported into the Kotlin environment with the Verik importer. This allows us to make use of SystemVerilog libraries such as the Universal Verification Methodology (UVM) framework directly in Verik. We demonstrate Verik on a number of examples, including a RISC-V core and several UVM testbenches, and show that it compares favorably against other popular HDLs used in academia and industry. Finally, we recount the experience of using Verik with the Xilinx Vivado platform for a month-long FPGA workshop class.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Environment Diagram Assessment for Introductory CS Education</title>
<link href="https://hdl.handle.net/1721.1/145149" rel="alternate"/>
<author>
<name>Noble, Caleb</name>
</author>
<id>https://hdl.handle.net/1721.1/145149</id>
<updated>2022-08-30T03:18:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automated Environment Diagram Assessment for Introductory CS Education
Noble, Caleb
Code tracing is a valuable skill that many beginning programmers lack. Environment diagrams visually represent the state of a program to help introductory students develop a notional model of execution, and such drawings are often used in CS1 courses. This thesis describes a tool that enables students to construct diagrams with a drag-and-drop interface and submit them for automatic assessment. Students instantly receive hints to help them correct misunderstandings, allowing even large courses to give individualized feedback. Instructors can easily create questions by providing code that is interpreted into a solution diagram. In a CS1 course, 87% of students felt more confident in answering diagramming questions after using the tool, and 83% found the automated hints helpful.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information-Theoretic Algorithms and Identifiability for Causal Graph Discovery</title>
<link href="https://hdl.handle.net/1721.1/145148" rel="alternate"/>
<author>
<name>Compton, Spencer</name>
</author>
<id>https://hdl.handle.net/1721.1/145148</id>
<updated>2022-08-30T03:44:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Information-Theoretic Algorithms and Identifiability for Causal Graph Discovery
Compton, Spencer
It is a task of widespread interest to learn the underlying causal structure for systems of random variables. Entropic Causal Inference is a recent framework for learning the causal graph between two variables from observational data (i.e., without experiments) by finding the information-theoretically simplest structural explanation of the data. In this thesis, we develop theoretical techniques that enable us to show how Entropic Causal Inference permits learnability of causal graphs with particular information-theoretically simple structure. We show the first theoretical guarantee for finite-sample learnability with Entropic Causal Inference for pairs of random variables. Later, we extend this guarantee to show the first result for Entropic Causal Inference in systems with more than two variables: proving learnability of general directed acyclic graphs over many variables (under assumptions on the generative process). We implement and experimentally evaluate Entropic Causal Inference on synthetic and real-world causal systems. Moreover, we improve the best-known approximation guarantee for the Minimum Entropy Coupling problem. This information-theoretic algorithmic problem has direct relevance to Entropic Causal Inference and is also of independent interest. In totality, this thesis develops algorithmic and information-theoretic tools that shed light on how information-theoretic properties enable learning of causal graphs from both a practical and theoretical perspective.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling with Attention in Demand Forecasting and Beyond</title>
<link href="https://hdl.handle.net/1721.1/145147" rel="alternate"/>
<author>
<name>Ocejo Elizondo, Clemente</name>
</author>
<id>https://hdl.handle.net/1721.1/145147</id>
<updated>2022-08-30T03:36:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Modeling with Attention in Demand Forecasting and Beyond
Ocejo Elizondo, Clemente
Time series forecasting is an important task in many fields, from supply chain management to weather forecasting. Traditionally, many simple models have extrapolated trends and seasonal patterns from individual time series in order to forecast future values, but only recently have deep neural networks (DNNs) been leveraged to capture complex relationships both between and within time series. Recent advances in Transformer architectures have shown promising results in this domain but have yet to show success in retail settings, where forecasts are needed at granular levels (product-store) and the data is quite sparse. In this work we develop new Transformer-based models to successfully predict the demand of a retailer in a medical device manufacturing setting. We do this by proposing new positional encoding methods that aim to capture trends that specific medical products follow. We also propose new attention mechanisms that attend over features and time series independently to generate more descriptive interactions. Ultimately, we hope to combine this Transformer with more traditional time series models such as Holt-Winters as a way to alleviate some of the predictive responsibility from the Transformer, which requires relatively large amounts of training data compared to traditional time series methods.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Multi-Agent Reinforcement Learning and Coevolution in Cybersecurity Simulations</title>
<link href="https://hdl.handle.net/1721.1/145146" rel="alternate"/>
<author>
<name>Turner, Matthew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/145146</id>
<updated>2022-08-30T03:46:08Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analyzing Multi-Agent Reinforcement Learning and Coevolution in Cybersecurity Simulations
Turner, Matthew J.
Cybersecurity simulations can offer deep insights into the behavior of agents in the battle to secure computer systems. We build on existing work modeling the competition between an attacker and defender on a network architecture in a zero-sum game using a graph database linking cybersecurity attack patterns, vulnerabilities, and software. To support these simulations, we introduce a data-driven approach to generate enterprise network samples. We apply coevolution to this challenging environment, and, in a novel modeling approach for this problem, interpret each population as a distribution over fixed strategies to form a mixed strategy Nash equilibrium. We compare the results to solutions generated by multi-agent reinforcement learning and show that evolutionary methods demonstrate a greater degree of robustness to hyperparameter misspecification in this environment. Our results suggest that coevolution may prove to be a satisfactory benchmark for hyperparameter tuning of adversarially trained reinforcement learning agents in the absence of other metrics for solution optimality.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SmartPitch: Applied Machine Learning for Professional Baseball Pitching Strategy</title>
<link href="https://hdl.handle.net/1721.1/145144" rel="alternate"/>
<author>
<name>Otremba Jr., Stephen Eugene</name>
</author>
<id>https://hdl.handle.net/1721.1/145144</id>
<updated>2022-08-30T03:24:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">SmartPitch: Applied Machine Learning for Professional Baseball Pitching Strategy
Otremba Jr., Stephen Eugene
The stateful nature of baseball has made it a prime candidate for exploring the topics of planning and strategy optimization. Nearly every moment of the game - from the first pitch to the final out - can be described by a collection of well-known state variables that even casual fans should be familiar with. Markov Decision Processes and dynamic programming techniques have previously been applied to this space in order to research the areas of offensive player selection (lineup creation) and player substitution, but they have rarely been studied in the context of one of the most complicated parts of the sport: the minigame between the pitcher and the batter. Even this component of the sport is dictated by a progression of states, as the battle between a pitcher and batter is often tracked using a simple tuple of information known as the count, which captures the number of balls and strikes the pitcher has thrown. Using the count, we’re able to directly map the states of this pitcher-batter match-up to a Markov Decision Process, with state-transition probabilities estimated from supervised machine learning models trained on publicly released data collected through Major League Baseball’s statistics arm. In this thesis, we will discuss how this model of baseball can be used to evaluate optimal pitching strategies that can exploit the tendencies of specific batters and leverage a pitcher’s arsenal of available actions to minimize offensive production. We will explore the application of well-known reinforcement learning algorithms to calculate these pitching policies and will analyze the effectiveness of dimensionality reduction and artificial neural networks in estimating the components needed to construct our Markovian model of baseball.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pretraining Table Embeddings for Knowledge Graph Based Provenance Systems</title>
<link href="https://hdl.handle.net/1721.1/145143" rel="alternate"/>
<author>
<name>Yang, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/145143</id>
<updated>2022-08-30T03:46:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Pretraining Table Embeddings for Knowledge Graph Based Provenance Systems
Yang, Steven
We aim to build a knowledge graph based provenance system for data objects across institutions and teams. The world of data objects and systems is complex and heterogeneous. For effective collaboration, a shared data model is needed. Specifically, this work examines the problem of provenance subgraph classification: given a coarse, low-level provenance subgraph that is not easily digestible by humans, we want to annotate the subgraph with human-readable labels describing the operations done on each data object. This work first involves creating the infrastructure needed to select and label subgraphs. Next, it focuses on producing table embedding techniques using the pretrain-and-finetune paradigm, with an emphasis on the downstream task of Operator Classification.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open Coding for Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/145142" rel="alternate"/>
<author>
<name>Price, Magdalena</name>
</author>
<id>https://hdl.handle.net/1721.1/145142</id>
<updated>2022-08-30T03:33:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Open Coding for Machine Learning
Price, Magdalena
Data-driven decisions have an unavoidable influence on people’s lives [5], and despite being marketed as fair decision-making tools, predictive models can easily perpetuate the same biases they hope to counteract. Some approaches to reducing this bias include incorporating interactive machine learning techniques, modifying the input features of the algorithm, or improving the pre-processing of the dataset [35]. However, even if the prediction model is fair and the raw dataset is fair, unfair labels still present the possibility of adding bias to the system [25].&#13;
&#13;
In particular, predictive models for subjective observations are trained on correlative metrics that may not accurately reflect the nuanced nature of what is being predicted; such a phenomenon may be understood as goal misspecification. Large datasets in particular can fall victim to this phenomenon [35], as the time and cost required demand alternative, less thorough methods of labeling. Thus, we take an approach that analyzes current methods of labeling big data, looking to reduce goal misspecification by modifying the labeling process.&#13;
&#13;
Grounded coding theory [12] presents a modern approach to effectively labeling data from a human perspective, dividing the exploratory process into several stages that encourage thoughtful interaction with text corpora. In order to support effective data labeling, we draw explicit inspiration from some of the methodologies presented. Then, we build on these methodologies by augmenting them with machine learning techniques, providing support for effective and scalable data labeling.&#13;
&#13;
Thus, by providing a space for qualified individuals to effectively and efficiently create custom labels, our research better enables quality correlative goals for predictive models. Combining social science methodology with semi-supervised learning, we present a scalable annotation interface that serves as an effective alternative to current data labeling practices.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MEng Thesis: Incorporating Structured Commonsense into Language Models</title>
<link href="https://hdl.handle.net/1721.1/145141" rel="alternate"/>
<author>
<name>Yin, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/145141</id>
<updated>2022-08-30T03:30:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">MEng Thesis: Incorporating Structured Commonsense into Language Models
Yin, Claire
Machine learning has a wide variety of applications in the field of natural language processing (NLP). One such application is fine-tuning large pre-trained models for a wide variety of tasks. In this work, we propose methods to enhance these large language models by infusing them with information found in commonsense knowledge bases. Commonsense is basic knowledge about the world that humans are expected to have and that is needed for efficient communication. Oftentimes, to understand a text, a person must use their commonsense to make implicit inferences based on what is explicitly presented. We harness the power of relational graph convolutional networks (RGCNs) to encode meaningful commonsense information from graphs and introduce three simple methods to inject this knowledge to improve the contextual language representations of transformer-based language models. We show that the representations learned from the RGCN are useful for the task of link prediction in a commonsense knowledge base. Additionally, we show that the methods we introduce to combine the representations of structured commonsense information with a transformer-based language model show promising results in a downstream information retrieval task and, in most types of combinations, give better performance than a baseline transformer-based language model. Lastly, we show that the representations learned from an RGCN, although trained on considerably less data, still prove useful in a downstream information retrieval task when combined with a transformer-based language model.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supervised Calibration and Uncertainty Quantification of Subgrid Closure Parameters using Ensemble Kalman Inversion</title>
<link href="https://hdl.handle.net/1721.1/145140" rel="alternate"/>
<author>
<name>Hillier, Adeline</name>
</author>
<id>https://hdl.handle.net/1721.1/145140</id>
<updated>2022-08-30T03:44:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Supervised Calibration and Uncertainty Quantification of Subgrid Closure Parameters using Ensemble Kalman Inversion
Hillier, Adeline
Data-driven approaches are increasingly being used to identify and remove structural biases in dynamical models for real-world systems. However, because model updates alter the dependency of a model on its free parameters, evidence about structural biases is often muddied by the variable influences of inadequately tuned parameters on the model solution. We elaborate a framework for model development that combines calibration, sensitivity analysis, and uncertainty quantification of free parameters to shed light on where structural biases are likely to exist in a model, and where the model may be unnecessarily complex. The approach is useful for general applications because it is easy to implement, derivative-free, robust against model instabilities, and computationally inexpensive, requiring a modest number of model evaluations. A diffusive closure for turbulence penetrated by air-sea fluxes of the ocean surface, presently called the “Convective Turbulent Kinetic Parameterization," is developed as a testbed and proof of concept for the approach. Modifications to the traditional Ensemble Kalman Inversion [1] algorithm are devised to improve convergence during the calibration phase of this process. Further, the Calibrate Emulate Sample [2] framework for uncertainty quantification is validated with modifications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Audience-Controlled Live Storytelling Technologies</title>
<link href="https://hdl.handle.net/1721.1/145139" rel="alternate"/>
<author>
<name>Roman, Anthony</name>
</author>
<id>https://hdl.handle.net/1721.1/145139</id>
<updated>2022-08-30T03:19:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Interactive Audience-Controlled Live Storytelling Technologies
Roman, Anthony
When designing an entertainment experience, audience engagement is a crucial aspect of its success. One way to increase audience involvement is to give the audience a way to control an element of the experience. This can be facilitated through a variety of interactive technologies. While some forms of entertainment, such as theme park attractions, have used many of these technologies over the years, live storytelling events, such as theater productions, have made limited use of technology to facilitate audience interactions. The goal of this project is first to analyze how interactive technologies have been used to affect the stories told in theme park attractions. This analysis allows us to propose a method for incorporating more of these technologies into the theater experience, based on the successes found within theme park interactivity.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Virtual Reality Rehabilitation Interface with Augmented Sensing, Interaction, and Visualization Techniques</title>
<link href="https://hdl.handle.net/1721.1/145124" rel="alternate"/>
<author>
<name>Lei, Yuxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/145124</id>
<updated>2022-08-30T03:03:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Virtual Reality Rehabilitation Interface with Augmented Sensing, Interaction, and Visualization Techniques
Lei, Yuxuan
With the advancement of multimodal sensing and rendering technologies, Virtual Reality (VR) has attracted enormous interest in unsupervised physical rehabilitation owing to its decisive advantages in turning traditional physical touchpoints into digitally simulated empathy machinery. The shift from treatment rooms to the VR realm allows the scarce resource of rehabilitation services to reach a wider population. Whereas traditional rehabilitation design focused on the external physical environment, virtual displays provide users with self-awareness through virtual avatars and multisensory feedback. Thus, extensive research has investigated innovative sensory input techniques, particularly motion tracking and mapping. &#13;
&#13;
In response to this inverted design methodology, the primary objective of this thesis is to survey an effective design and engineering paradigm for virtual rehabilitation spaces, including sensing technologies, interaction methods, and augmented feedback. The thesis investigates a VR rehabilitation simulator that integrates muscle engagement sensing inputs, conventional motion simulation, and immersive VR displays. It is a three-in-one system consisting of two input systems, a high-precision, low-latency optical motion capture system and a wearable Electrical Impedance Tomography (EIT) device, and an output system, a virtual rehabilitation environment that allows real-time visualization of and interaction with muscle engagement and motion feedback. To validate the functionality and efficiency of this system, two user studies were conducted. Study 1 evaluated how the enhanced system helped participants improve therapeutic exercise completion accuracy, while Study 2 measured how the system improved remote physical therapist evaluation quality without in-clinic diagnosis. The results showed that muscle engagement visualization substantially improved the accuracy of therapeutic exercise (~15%) and facilitated the therapist's remote assessment quality. Finally, the thesis discusses a range of alternative low-cost technologies, the future implications of the VR program as an at-home rehabilitation training tool, and further research directions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>"Scraping and Bloodletting": Xiamen Dada and the Self-Renewing System of Reform-Era Art</title>
<link href="https://hdl.handle.net/1721.1/145123" rel="alternate"/>
<author>
<name>Xu, Qianyue</name>
</author>
<id>https://hdl.handle.net/1721.1/145123</id>
<updated>2022-08-30T03:07:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">"Scraping and Bloodletting": Xiamen Dada and the Self-Renewing System of Reform-Era Art
Xu, Qianyue
In 1986, Dada rhetoric surfaced in the coastal city of Xiamen in southeast China to be taken up by the art collective Xiamen Dada (xiamen dada 厦门达达). Active between 1986 and 1989, the collective targeted agents in the institutional framework of art. Their activities included a burning event where they disassembled their previously exhibited artworks and set them on fire in front of the museum; a “surprise attack” in the form of a ready-made exhibition where they moved objects found around the museum into the building and called that an “event exhibition”; and a project blueprint for pulling the National Art Museum of China with four thousand meters of hemp rope.&#13;
&#13;
This thesis deconstructs Xiamen Dada’s much-heralded radicality as exchanges, negotiations, and engineered equilibrium between control and calculated freedom in the self-renewing system that was reform-era art in China. This is done by dismantling the false dichotomy between the supposedly monolithic art establishment and the radical avant-garde which is thought to work outside its set constraints. I open with an interrogation of the dynamic canonization of Luo Zhongli’s 罗中立 Father (fuqin 父亲, 1980), made around the time the collective’s de facto leader Huang Yong Ping 黄永砅 produced his audacious thesis project The Spray Gun Series (penqiang xilie 喷枪系列, 1981). I argue that the discourse of tolerance that produced this process exemplifies the symbiotic relationship between the Chinese art establishment and limited deviations from artistic canon that provide opportunities for calculated self-renewal. I then compare Xiamen Dada’s trajectory vis-à-vis art museums with that of the Stars (xingxing 星星), highlighting the enabling role played by sympathetic gatekeepers of the art establishment in both groups’ interventions and the obscured fact that Xiamen Dada accessed and capitalized on material resources and networks provided by the art establishment. I push this observation further to expose Xiamen Dada’s collectivism as a survival strategy and problematize the concept of collectivity.&#13;
&#13;
The radicality of Xiamen Dada lies not in the illusory breakaway from the art establishment, but in their repeated testing and recalibration of the optimal amount of stress that would stimulate the body of the art establishment to seek out a new form of health without triggering a complete destruction of its unity. Recuperation, reintegration, and reconfiguration—these are the vocabularies of reform.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Student’s Problem-solving Approaches in MOOCs using Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/145121" rel="alternate"/>
<author>
<name>Kong, ByeongJo</name>
</author>
<id>https://hdl.handle.net/1721.1/145121</id>
<updated>2022-08-30T04:07:43Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analyzing Student’s Problem-solving Approaches in MOOCs using Natural Language Processing
Kong, ByeongJo
Problem-solving processes are an essential part of learning. Knowing how students approach solving problems can help instructors improve their instructional designs and effectively guide students’ learning processes. This thesis proposes a natural language processing (NLP) driven method to capture online learners’ problem-solving approaches while using Massive Open Online Courses (MOOCs) as a learning platform. It employs an online survey to gather data, NLP techniques, and existing educational theories to investigate this question through the lens of both computer science and education.&#13;
&#13;
The thesis considers survey responses from students enrolled in a computer programming course taught on edX in Spring 2021. A total of 7,482 free-text responses were selected from 44,864 responses collected through the survey. The thesis shows how NLP techniques, i.e., preprocessing, topic modeling, and text summarization, must be tuned to extract information from a large-scale text corpus. The proposed method discovered 18 problem-solving approaches in the text data, such as using pen and paper, peer learning, and trial and error. By using datasets from both 2020 and 2021, we also learned that some strong topics recur across years, such as clarifying code logic and watching videos. Lastly, we used existing educational theories to discuss the findings from an educational viewpoint.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factors Affecting Technology Adoption and Productivity in a Digital Era: a Framework Based on Literature Review and Future Agenda</title>
<link href="https://hdl.handle.net/1721.1/145120" rel="alternate"/>
<author>
<name>Shoji, Yoshiki</name>
</author>
<id>https://hdl.handle.net/1721.1/145120</id>
<updated>2022-08-30T03:53:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Factors Affecting Technology Adoption and Productivity in a Digital Era: a Framework Based on Literature Review and Future Agenda
Shoji, Yoshiki
Japan's productivity growth has slowed since the 2000s, and in 2020 it ranked last among the G7 countries. Assuming that the reason for this is the rapid pace of technological change, this study uses a literature review and data analysis to identify the factors that contribute to technology adoption, which are often described as cultural differences. By summarizing the supporting and restraining forces within a company and visualizing the process of technology adoption, a framework of technology adoption is constructed, with a focus on the aspect of acquiring human resources. A data analysis based on data from Orbis, LinkedIn, Indeed, Zephyr, D&amp;B Hoovers, and the Ci Technology Data Set, covering large manufacturing companies in the U.S., finds that the more employees feel they are learning something in their work, the higher their adoption of technology will be. Surprisingly, however, no relationship is found between technology adoption and productivity, which suggests a new hypothesis: that technology adoption is a necessary but not sufficient condition for productivity growth. Based on the various data limitations identified in the process of conducting the analysis, the report summarizes a future research agenda to overcome these limitations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Digital Engineering Initialization Framework</title>
<link href="https://hdl.handle.net/1721.1/145119" rel="alternate"/>
<author>
<name>Vilcans, Kristen Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/145119</id>
<updated>2022-08-30T03:48:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards a Digital Engineering Initialization Framework
Vilcans, Kristen Marie
Digital engineering is emerging in the US Department of Defense (DoD) as a means to improve system flexibility, counteract system obsolescence, and reduce defects in system design. However, common digital engineering definitions and frameworks have not yet converged across DoD and government engineering contractors. This thesis leverages government publications, memorandums, and presentations, journals and conference proceedings, and systems engineering and defense industry association reports to assess current guidance and capability for digital engineering within the DoD and among the DoD’s engineering contractor base.&#13;
&#13;
A digital engineering lexicon and relationship map are proposed to provide clarity and convergence on key digital engineering terms. The lexicon and relationship map highlight the roles and interactions of key digital engineering elements such as the authoritative source of truth, digital thread, and digital twin.&#13;
&#13;
Through an analysis of digital engineering assessment surveys, stakeholder analysis, and systems analysis, a framework for initializing the digital engineering ecosystem is also proposed. The proposed Digital Engineering Ecosystem Initialization Framework aims to provide a tailorable framework for establishing a digital engineering environment, particularly applicable to small to medium sized government engineering contracting organizations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>No Pressure!: Designing Mobile Interventions to Improve Pressure Relief Adherence for Individuals with Spinal Cord Injury through Diary Studies</title>
<link href="https://hdl.handle.net/1721.1/145118" rel="alternate"/>
<author>
<name>Oh, Hannah (Hye Yeon)</name>
</author>
<id>https://hdl.handle.net/1721.1/145118</id>
<updated>2022-08-30T03:13:40Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">No Pressure!: Designing Mobile Interventions to Improve Pressure Relief Adherence for Individuals with Spinal Cord Injury through Diary Studies
Oh, Hannah (Hye Yeon)
Pressure injuries, otherwise known as decubitus ulcers or ‘bedsores’, affect an estimated 1 to 3 million people in the United States each year (Mondragon and Zito). Individuals with spinal cord injury (SCI) are particularly vulnerable to pressure injuries due to the reduced mobility of their wheelchair-bound lifestyle and partial loss of sensation. As a preventive measure, individuals with SCI are taught to perform regular pressure reliefs throughout the day to offload areas of high pressure. However, as with many health maintenance activities, pressure-relieving exercises are often neglected or easily forgotten. This thesis explores how mobile interventions can be designed to improve pressure relief adherence by individuals with SCI.&#13;
&#13;
To this end, literature on habit formation and adherence apps was studied to inform the design of a prototypical pressure relief notification system, which was tested with individuals with SCI. A 10-day diary study was conducted to capture their experiences in real-time, resulting in a collection of quantitative and qualitative data. The analysis led to a set of key considerations for designing such mobile interventions: 1) technology with a human touch – through humour and positive reinforcement; 2) identification of specific user groups and use cases; 3) personalisation of settings and providing flexibility for universal lifestyles and needs. These will inform the further development of this mobile intervention to improve pressure relief adherence by individuals with SCI, and ultimately help reduce pressure injury incidences.&#13;
&#13;
The field of mobile health interventions continues to grow with the development of technology. The resulting human-centred design considerations and use of diary study methods from this thesis can be extended to other preventive health applications through mobile interventions, such as that of medication adherence or exercise.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Deficiencies in Asian American Pacific Islander (AAPI) Hate Crime Reporting: Designing a Solution for Community Needs</title>
<link href="https://hdl.handle.net/1721.1/145116" rel="alternate"/>
<author>
<name>Xie, Kerry Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/145116</id>
<updated>2022-08-30T03:05:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Addressing Deficiencies in Asian American Pacific Islander (AAPI) Hate Crime Reporting: Designing a Solution for Community Needs
Xie, Kerry Y.
Coinciding with the initial spread of the COVID-19 pandemic, 2020 marked the beginning of a wave of anti-AAPI (Asian American Pacific Islander) hate incidents and violence in the US, as documented in both governmental and NGO data. According to available data, the majority of these incidents occurred in public spaces and often to more vulnerable populations such as women and the elderly. Yet, underreporting remains an issue, limiting the utility of the data for developing solutions to address anti-AAPI hate. In order to increase opportunities for funding, cooperation across organizations, and new community solutions, AAPI community leaders have highlighted a need for improved systems of hate incident reporting. This thesis utilizes the principles of human-centered design to develop an understanding of the target audience. We explore key user roles and their respective needs within the AAPI community based on observational research, literature review, and stakeholder interviews. This understanding is then applied towards the design of a feature set and user interface for an anti-AAPI hate reporting solution that supports the needs of the AAPI community.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An IoT-based Pressure Injury Prevention System</title>
<link href="https://hdl.handle.net/1721.1/145114" rel="alternate"/>
<author>
<name>Zhou, Jonathan Pu</name>
</author>
<id>https://hdl.handle.net/1721.1/145114</id>
<updated>2022-08-30T03:28:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An IoT-based Pressure Injury Prevention System
Zhou, Jonathan Pu
Pressure injuries cost the medical system billions of dollars every year, and current methods of preventing them are inefficient and labor-intensive. This thesis presents market research on pressure injuries and summarizes market segments and user needs. An IoT device is designed and prototyped for caregivers to help prevent pressure injuries by tracking, logging, and monitoring a patient’s activity. The system includes an innovative smart fabric-based sensor mat to measure sitting and lying pressure between a human body and a mattress or sitting surface, a battery-powered control and data transmission unit, and a suite of cloud-based software functions including data transmission, a storage database, analytical algorithms, and a data portal. The system measures pressure in real time, uses machine learning to interpret a patient’s posture, and logs the posture history in a cloud database. Using the system, caregivers can manage multiple patients’ repositioning schedules easily and remotely with a smartphone or tablet. Active patients’ self-repositioning can be logged accurately, making their care schedules less labor-intensive for caregivers. The system increases overall operating efficiency for caregivers and care quality for wheelchair- and bed-bound patients.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imagined Common Ground: Rethinking on Language, Translation and Technology</title>
<link href="https://hdl.handle.net/1721.1/145113" rel="alternate"/>
<author>
<name>Jiang, Weihan</name>
</author>
<id>https://hdl.handle.net/1721.1/145113</id>
<updated>2022-08-30T03:43:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Imagined Common Ground: Rethinking on Language, Translation and Technology
Jiang, Weihan
When waking up from the dream of one-world-ness, how do we talk to each other?&#13;
&#13;
Technological development and the hegemonic definition of modernity that emerged from it have been under interrogation for decades. In a diminishingly globalized world, we are prompted to reflect on what our connectedness has brought us. There is a world free to roam for some, but not for all. Transnational corporations export their imagined world of synchrony, although it was also such trades and exchanges throughout history that cultivated new imaginations (and sometimes violence). Could it be our sense of entitlement to the “feel-at-home”, the immediacy promised by the newest technologies, and our inability to hear and talk without aided translations, that contribute to a singular world under the name of “international”?&#13;
&#13;
The end of one world marks the emergence of many worlds, seen or unseen.&#13;
&#13;
This thesis is an attempt to respond to those questions of the global imagination and concern through its projection on a contemporary “nation,” China. It starts with the formation of the Han Chinese identity, itself a construct that cannot be reduced to a singular image. The case I write about is the distinct culture of Sichuan, formed by the past millennial influx of immigration. The thesis continues to unravel the complexity of the Chinese language, namely the separation of the oral and the written, the hierarchy of the vernacular oral (dialects) and the official oral (Mandarin), and the transience of the vernacular written. Translation happens on multiple levels, yet through the untranslatability of the unwritten to the written, a culture shaped by locality survives and mutates. The thesis also investigates how technology shapes the Chinese language in the digital age, and how standardization could possibly curb the liveliness of the unwritten or alter its living trajectory. The text ends with a discussion of personal anecdotes, weaving the writing and artistic practices together.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patterns of Moments Reasoning about Space Video via Pattern Language of Human Behavior by Extracting MultiAction Activities via Machine Learning Video</title>
<link href="https://hdl.handle.net/1721.1/145112" rel="alternate"/>
<author>
<name>Wu, Ngai Hang (Charles)</name>
</author>
<id>https://hdl.handle.net/1721.1/145112</id>
<updated>2022-08-30T03:12:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Patterns of Moments Reasoning about Space Video via Pattern Language of Human Behavior by Extracting MultiAction Activities via Machine Learning Video
Wu, Ngai Hang (Charles)
Architecture shapes our perception of space through scale, material, shape, and structure. The design of these elements guides us toward certain behaviors within and around the space, and it plays a significant role in our everyday lives. Experience-oriented spatial design supports sustainable development and improves people's material and physical satisfaction, well-being, and overall quality of life. It is reductive to think of architecture as a merely visual subject; rather, it is a medium where purposeful design of stimuli can lead to specific social behaviors in humans.&#13;
&#13;
This thesis investigates the relationship between the built environment and human behavior through a data-driven method using on-location videos and machine learning. It is intended to provide a crucial means to understand the future opportunities that lie within responsive architecture and human-centered design. Human-centered design is conventionally a top-down approach that is highly dependent on architects’ subjective pedagogy and experience of a specific space and their dwellers’ and passengers’ immediate needs. For example, Christopher Alexander published a collection of design patterns that encourages everyday users to become consciously aware of their living patterns around specific architectural setups. However, his prescriptive proposal outlines only his empirical insight, without further exploration into the dimensions of culture, community, and time. The ability to understand human activities in space more thoroughly is lacking.&#13;
&#13;
The research method is to observe and quantify human events and the types of spaces accommodating them and compare the behavioral difference within various spatial settings through short video clips. Initially, field data is collected by observing and recording human behaviors in public. Data-driven Computer Vision techniques are adopted, such as event recognition, scene attribute extraction, and dynamic analysis. Low-level features of human actions such as typing, drinking, stirring, and chewing are recognized, as well as the features of the surrounding space such as greenery, traffic, and enclosure. These low-level understandings discover behavioral patterns in different spaces with various features, providing insights into high-level human-centered spatial design.&#13;
&#13;
After tests and analysis of a case conducted on street café designs, certain correlations between the properties of built environments and user behaviors were discovered. This case study demonstrated the adequacy of the proposed methodology to understand human behavior in space with the help of data-driven machine learning models. It can potentially be used to build a computational human-centered design system that designs by experience. For instance, such a system can help refitting a residential space to better-fit home office for work during pandemic situations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Intent-based Neural Monte Carlo Tree Search Framework for Synthesis of Printed Circuit Boards</title>
<link href="https://hdl.handle.net/1721.1/145108" rel="alternate"/>
<author>
<name>Kaphle, Arpan</name>
</author>
<id>https://hdl.handle.net/1721.1/145108</id>
<updated>2022-08-30T03:05:35Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Intent-based Neural Monte Carlo Tree Search Framework for Synthesis of Printed Circuit Boards
Kaphle, Arpan
PCB synthesis is a difficult joint optimization problem that has eluded automation in the Electronic Design Automation (EDA) industry until now. Approaches that intelligently learn to solve these problems have not yet been widely explored. Cadence Design Systems, affiliated with the author, works on solving such problems. This paper proposes augmenting Monte Carlo Tree Search (MCTS) to improve search and self-generate datasets, in a process called the Learning Feedback System (LFS). This process uses past data to accelerate MCTS with deep RL models on new or similar board configurations. The datasets are consumed by dataset-based Reinforcement Learning (RL) algorithms, known as 'Offline' and 'Off-Policy' algorithms, to solve this problem in a useful and simplified scope. The problem scope begins where other approaches have left a design with constraint violations. The paper establishes baselines with both an algorithmically improved version of MCTS and a version further accelerated with PPO, a purely online, non-demonstrator-based deep RL algorithm. The results show that MCTS allows smooth self-generation of datasets, a process inspired by AlphaGo Zero. In addition, we find that off-policy and expert-based RL algorithms such as Adversarial Inverse Reinforcement Learning (AIRL) and Generative Adversarial Imitation Learning (GAIL) can use the generated dataset to improve board solving over time and, once properly tuned, perform far better than the baseline trained for the same amount. We also find that the complexity of the problem was related to the performance of the baseline. In our exploration of offline CQL within this MCTS-connected environment, performance was not up to par, but the model was still able to generalize reasonable actions.
We found that all approaches can be tuned to further accelerate MCTS’s decision making and help it prune better for larger state spaces, based on a comparison of overall actions per episode. The results indicate that, among the methods tried, the proposed neural-accelerated MCTS feedback loop performs best with expert-based RL methods.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Exertion for American Football Linemen via Force, Acceleration, and Heart Rate Measurements</title>
<link href="https://hdl.handle.net/1721.1/145107" rel="alternate"/>
<author>
<name>Nielan, Maya Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/145107</id>
<updated>2022-08-30T03:14:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Quantifying Exertion for American Football Linemen via Force, Acceleration, and Heart Rate Measurements
Nielan, Maya Katherine
Understanding exertion during exercise helps athletes prevent injuries and train at an optimal level. Existing metrics for determining exertion levels are specific to individual activities that are mostly dynamic in nature. American football linemen spend most of their energy maintaining static loads and thus need a new exertion metric. To design this metric, acceleration, force, and heart rate data are recorded over different weightlifting, running, and football-specific activities. From this data, a dimensionless external load value is calculated as [equation] and an internal load, or exertion, value is calculated as [equation]. These external and internal load values are compared within the football-specific activity experiments and across all experiments of different activities. The relationship between these values is represented through a power fit [equation], suggesting that a relative change in external load gives rise to a proportional relative change in the body’s exertion levels.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Data-Driven Cognitive Disease Classification using Machine Learning and the Digital Symbol Digit Test</title>
<link href="https://hdl.handle.net/1721.1/145106" rel="alternate"/>
<author>
<name>Kim, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/145106</id>
<updated>2022-08-30T03:46:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards Data-Driven Cognitive Disease Classification using Machine Learning and the Digital Symbol Digit Test
Kim, Evan
There is no cure for Alzheimer’s and other cognitive diseases; however, there are treatments that help slow disease progression. Early detection of neurological dysfunction is often caught through screening tests like the Symbol Digit Test. Researchers at MIT and Lahey have been administering the Symbol Digit Learning Test using a digitizing pen that records data and pen strokes made by patients. This new data allows additional insights to be uncovered that a clinician may miss in a physical examination. We are the first group to perform analysis on this data for the digital Symbol Digit Learning Test – with this comes a number of challenges to build a working model that can aid clinicians in diagnosing this class of diseases. One challenge is creating an accurate multi-digit classifier that generalizes well when given messy digits drawn by patients with cognitive diseases. Additionally, more classifiers will need to be created to analyze the digital pen time-series data. Our research provides a computational approach to detecting cognitive diseases and brings to light novel insights such as diagnostic signals when subjects switch to a new row during the test and the impaired subject’s inability to match the healthy controls’ performance across the delayed recall task. Lastly, we developed a logistic regression classifier that flags dementia, Parkinson’s disease, and healthy subjects with an area under the curve of 0.93, 0.97, and 0.89 respectively. These new findings can be used in the clinical setting when administering these tests and can be fruitful for future machine learning model development.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bio-Signal Analysis for Personalized Pilot Training</title>
<link href="https://hdl.handle.net/1721.1/145105" rel="alternate"/>
<author>
<name>Powell, Stuart D.</name>
</author>
<id>https://hdl.handle.net/1721.1/145105</id>
<updated>2022-08-30T03:00:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Bio-Signal Analysis for Personalized Pilot Training
Powell, Stuart D.
Quantitative data on pilot performance can help increase efficiency in pilot training programs by informing when students are prepared to take on more difficult challenges. Previous research indicates attentional and cognitive load differences between novice pilots and more experienced pilots. These differences may manifest as changes in certain physiological signals as pilots of differing experience levels are over-, under-, or adequately challenged in a given task. This thesis analyzes the effectiveness of measured electrodermal activity (EDA) and electromyography (EMG) signals in indicating trends in pilot experience, task performance, and challenge difficulty across N = 29 subjects.&#13;
&#13;
EMG and EDA features, as well as accelerometry and joystick data, are considered over the entire task and over shorter windows at the beginning and end of each task. Significant differences, with a p-value less than 0.05, are seen in EMG features based on difficulty, experience, and trial attempt, and in EDA features based on experience and trial attempt.&#13;
&#13;
Using these features, tasks by a subject are classified using a logistic regression model with forward step-wise feature selection. The performance of the model in classifying the easiest against the hardest difficulty level reaches an AUC of 0.99, reduced to 0.83 with the dominant joystick feature removed. Classifying a run as performed by an expert against a novice (using a cutoff of 30 flight hours) in the hardest difficulty level reaches an AUC of 0.89, reduced to 0.67 with the dominant joystick data removed. Lastly, the model performance when classifying the first against the last attempt of a subject at a given difficulty level reaches an AUC of 0.85 across all subjects.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantification of Spalart-Allmaras Turbulence Modeling Uncertainties for Hypersonic Flows Utilizing Output-Based Grid Adaptation</title>
<link href="https://hdl.handle.net/1721.1/145104" rel="alternate"/>
<author>
<name>Waligura, Carter John</name>
</author>
<id>https://hdl.handle.net/1721.1/145104</id>
<updated>2022-08-30T03:54:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Quantification of Spalart-Allmaras Turbulence Modeling Uncertainties for Hypersonic Flows Utilizing Output-Based Grid Adaptation
Waligura, Carter John
In this thesis, uncertainty in the Spalart-Allmaras (SA) turbulence model with the compressible Reynolds-averaged Navier-Stokes (RANS) equations is quantified for steady non-reacting hypersonic flows using a coarse-grained uncertainty metric. Output-based adaptation is utilized to guarantee negligible numerical error with complex flow features, such as shock wave-boundary-layer interactions (SBLI). The adapted meshes are generated using the MIT Solution Adaptive Numerical Simulator (SANS) software, which is able to adapt high-order unstructured meshes using a modified Continuous Galerkin (CG) finite element method (FEM) discretization. The meshes are iteratively adapted by minimizing the error estimate of a given output functional, such as integrated drag or heat flux, over a boundary. The goal of the study is to quantify the expected uncertainty bounds when using the SA model with modifications to the key assumptions of a linear eddy viscosity constitutive relation and incompressible flow. The uncertainty comparison is made between specific areas of hypersonic geometries, such as the pre-compression flat plate region and the post-compression shocked-wedge region of a compression corner. Ultimately, this study improves the determination of uncertainty bounds in engineering design involving turbulent flow, provides more insight into exemplary meshing practices for high-speed flow involving SBLI, and highlights where additional work is needed for the development of turbulence models in the hypersonic regime.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Art: Introduction, Challenges, Implications</title>
<link href="https://hdl.handle.net/1721.1/145103" rel="alternate"/>
<author>
<name>Seow, Olivia</name>
</author>
<id>https://hdl.handle.net/1721.1/145103</id>
<updated>2022-08-30T03:18:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Neural Art: Introduction, Challenges, Implications
Seow, Olivia
Neural art refers to visual art that is created at least partially by a neural network. While neural networks have gone in and out of favor among AI researchers over the last 80 years, neural art has recently surged in popularity due to advances that have enabled the easy generation of aesthetically pleasing and coherent visual outputs. We are at the precipice of paradigmatic change, both within the art community and in any domain that visual media touches. As we hurtle toward a strange new future for art and creativity, it is important for us to collectively shape it with a consciousness of the values being embedded in it. However, there is a lack of discourse in this area. This thesis is an attempt to introduce the concepts behind neural art and the concerns surrounding it, as a jumping-off point for increased understanding, discussion, and collaboration. Writing for a non-technical reader, I start with a primer on neural art. Some techniques are further developed in five personal works in the next chapter. This is followed by findings from a 43-participant survey regarding societal concerns about neural art, and a discussion of the latest pertinent challenges. Based on the research, ten implications are discussed. The most updated version of this manuscript can be retrieved at https://github.com/oliviaseow/pixels2picasso.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpretable Tumor Localization in Bladder Cancer Histopathology Using Deep Multiple Instance Learning</title>
<link href="https://hdl.handle.net/1721.1/145102" rel="alternate"/>
<author>
<name>Nair, Karthik</name>
</author>
<id>https://hdl.handle.net/1721.1/145102</id>
<updated>2022-08-30T03:04:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Interpretable Tumor Localization in Bladder Cancer Histopathology Using Deep Multiple Instance Learning
Nair, Karthik
Deep learning has emerged in cancer histopathology as a tool for predicting clinical and molecular properties of a patient’s disease, thereby connecting slide with function. This concept is especially relevant to bladder cancer, where molecular and histopathologic heterogeneity is known to impact oncogenesis and disease progression, though the underlying properties governing these features are incompletely known. Traditional tile-based deep learning approaches to analyze bladder cancer (and other cancer) histopathology images do not integrate information across whole slides, potentially forfeiting accuracy and interpretability on more complex pathology tasks that require global slide context beyond local morphology, such as tumor subtyping and mutation prediction. To this end, we compare CLAM, a recently developed multiple instance learning model designed to address these limitations, to a tile-based computer vision model on over 1,500 hematoxylin and eosin (H&amp;E)-stained bladder cancer (urothelial carcinoma) slides. We found that CLAM was more robust against overfitting to spurious confounders when compared with a traditional approach, resulting in more interpretable outputs. Additionally, we generated high-resolution tumor localization maps for a previously unstudied cohort using CLAM. Taken together, our results demonstrate CLAM to be a promising approach for tackling difficult digital pathology tasks previously hindered by traditional approaches.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhanced Potts Models for Improved Computational Protein Design</title>
<link href="https://hdl.handle.net/1721.1/145101" rel="alternate"/>
<author>
<name>Lu, Mindren D.</name>
</author>
<id>https://hdl.handle.net/1721.1/145101</id>
<updated>2022-08-30T04:07:08Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Enhanced Potts Models for Improved Computational Protein Design
Lu, Mindren D.
Proteins are the fundamental building blocks of life, contributing to the structure, function, and regulation of all living cells. The ability to computationally design proteins to serve specific functions is thus of particular interest to the bioengineering and biomedical fields. TERMinator is a recently-developed neural protein design framework that outperforms state-of-the-art models in native sequence recovery. For a target structure, the model outputs a Potts model, an energy table describing the self and pairwise energetic contributions for all amino acids at all positions.&#13;
&#13;
In this thesis, I investigate approaches for enhancing TERMinator’s outputted Potts models for improved computational protein design. I find that direct regularization of the Potts model parameters leads to higher native sequence recovery. In addition, I use experimental energetic data to benchmark TERMinator’s zero-shot ability to predict the physical properties of proteins. Furthermore, I test the use of this experimental data with a correlational loss function to successfully perform finetuning to improve TERMinator’s performance on orthogonal energetic benchmarks. Finally, I detail an observed disconnect between accuracy on energetic benchmarks and native sequence recovery, illustrating the deficiency of only using native sequence recovery to measure model performance.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Design and Control of Autonomous Underwater Vehicles</title>
<link href="https://hdl.handle.net/1721.1/145100" rel="alternate"/>
<author>
<name>Salazar, Juan</name>
</author>
<id>https://hdl.handle.net/1721.1/145100</id>
<updated>2022-08-30T03:31:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Computational Design and Control of Autonomous Underwater Vehicles
Salazar, Juan
Design of autonomous underwater vehicles (AUVs) currently relies heavily on rule-of-thumb techniques shared among those in the field, with many designs ending up with thousands of custom-made components that are not readily transferable to other designs made for different tasks. This is largely because of the complexity of designing a vehicle for the harsh ocean environment, which results in many interdependencies across design parameters. For the past few years, we have been developing hardware and software for SoFi, an autonomous soft robotic fish platform originally developed at MIT. Throughout our development process, we encountered a number of hardware-related obstacles concerning SoFi’s many custom components, which considerably slowed down our design iteration cycle. An automated pipeline that can synthesize designs from a pre-defined component library relieves the design cycle of the drawbacks of rules of thumb and highly customized components, and enables fast computational generation and evaluation of optimal-performance underwater vehicles. In this work, we first describe our design work and evaluation on the SoFi platform. Next, we present an automatic graph-grammar-based AUV design framework that produces locally optimal AUV designs (including structure and control) based on a task specification. Our framework randomly samples the space of AUV topologies defined by the graph grammar and simultaneously optimizes structure and control with a gradient-based method using a differentiable simulator. Finally, we summarize the results from evaluating our autonomous soft robotic platform and from running our AUV design framework on a simulated mission, and show that the design framework carries massive potential for accelerating the future design of SoFi and other types of AUVs, conventional or non-conventional.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning boiling properties of materials</title>
<link href="https://hdl.handle.net/1721.1/145099" rel="alternate"/>
<author>
<name>Lu, Kerri</name>
</author>
<id>https://hdl.handle.net/1721.1/145099</id>
<updated>2022-08-30T04:00:14Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning boiling properties of materials
Lu, Kerri
Boiling is an important physical process with applications to power generation, thermal management, and water treatment. In this work, we designed and implemented machine learning algorithms to predict a material surface’s boiling properties, including its critical heat flux and boiling curve. Our dataset contained over 200 sandblasted copper surfaces and corresponding boiling data collected from experiments.&#13;
&#13;
Starting with high-level input features, we found that the sandblasting parameters used to manufacture the material are not predictive of boiling properties. This motivated us to investigate lower-level features: we developed and tested an algorithm to detect cavities in a surface’s height profile and extract features such as cavity depth, radius, and size. We found a direct correlation between sandblasting particle size and these cavity features, matching our intuition that larger particles result in rougher surfaces.&#13;
&#13;
However, this effect does not propagate forward to the surface’s boiling properties, as the cavity features are not predictive of boiling curve coefficients. We hypothesized that this discrepancy arose because most cavities are not “active nucleation sites” where air is trapped during boiling. Building upon our cavity detection procedure, we designed an algorithm to identify active nucleation sites on a surface profile, in order to extract more fine-grained features for boiling curve prediction.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incentivizing Collaboration on Space Sustainability: Detectability, Identifiability, and Trackability of Space Missions</title>
<link href="https://hdl.handle.net/1721.1/145098" rel="alternate"/>
<author>
<name>Slavin, Maya</name>
</author>
<id>https://hdl.handle.net/1721.1/145098</id>
<updated>2022-08-30T03:59:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Incentivizing Collaboration on Space Sustainability: Detectability, Identifiability, and Trackability of Space Missions
Slavin, Maya
The world has increasingly come to rely on satellites to provide services such as navigation, global communications, banking, national security, and weather forecasting. However, as satellites are launched into space at increasing rates, the risk of collision between active payloads or with pieces of debris rises exponentially. One of the initiatives to combat congestion is the Space Sustainability Rating. The Space Sustainability Rating is a rating system commissioned by the World Economic Forum in 2018 that scores a space mission on how sustainable it is for the long-term usability of the space environment, particularly in regards to debris mitigation and collision avoidance. It aims to incentivize more responsible design decisions by satellite operators and encourage the acceleration and establishment of sustainable norms of behavior. One of the six scoring modules in the Space Sustainability Rating is the Detectability, Identifiability, and Trackability (DIT) module. This thesis builds on the earlier work that was done to develop the first version of the DIT module and makes three primary contributions to it. First, it investigates using the previously proposed concept of orbital zip codes for the Identifiability scoring process and then suggests an alternative scoring methodology based on constructing Cypher queries that count the number of similar space objects that could make identifying a given object more difficult. Second, this thesis demonstrates how ASTRIAGraph, a knowledge-graph database that combines data from multiple space data sources, can be used to facilitate parts of the DIT analysis. Finally, it conducts a multi-case study to examine how missions from regions outside of the United States and Europe score in the DIT module and whether there are factors related to the national contexts in which they were developed that impact their scores.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning-based Scheduling</title>
<link href="https://hdl.handle.net/1721.1/145097" rel="alternate"/>
<author>
<name>Nayak, Siddharth Nagar</name>
</author>
<id>https://hdl.handle.net/1721.1/145097</id>
<updated>2022-08-30T03:01:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning-based Scheduling
Nayak, Siddharth Nagar
Integer programs provide a powerful abstraction for representing a wide range of real-world scheduling problems. Despite their ability to model general scheduling problems, solving large-scale integer programs (IP) remains a computational challenge in practice. The incorporation of more complex objectives such as robustness to disruptions further exacerbates the computational challenge. With the advent of deep learning in solving various hard problems, this thesis aims to tackle different computationally intensive aspects of scheduling with learning-based methods. First, we apply reinforcement learning (RL) to the Air Force crew-scheduling problem and compare it against IP formulations which explicitly optimize for minimization of overqualification and maximization of training requirements completed. We show that the RL agent is as effective as its IP counterpart when the reward function is engineered according to the objective we want to optimize. We also show that the RL formulation is able to optimize for multiple objectives with simple modifications to the reward structure, whereas the IP methods require separate formulations for their objective functions. Then we present Neural network IP Coefficient Extraction (NICE), a novel technique that combines reinforcement learning and integer programming to tackle the problem of robust scheduling. More specifically, NICE uses reinforcement learning to approximately represent complex objectives in an integer programming formulation. We use NICE to determine assignments of pilots to a flight crew schedule so as to reduce the impact of disruptions. We compare NICE with (1) a baseline integer programming formulation that produces a feasible crew schedule, and (2) a robust integer programming formulation that explicitly tries to minimize the impact of disruptions. 
Our experiments show that NICE produces schedules that are more robust to disruptions than the baseline formulation, with computation times that are lower than those of the corresponding robust integer program.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydra: A Spatial Perception Engine for Constructing and Optimizing 3D Scene Graphs in Real-time</title>
<link href="https://hdl.handle.net/1721.1/145096" rel="alternate"/>
<author>
<name>Hughes, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/145096</id>
<updated>2022-08-30T03:10:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Hydra: A Spatial Perception Engine for Constructing and Optimizing 3D Scene Graphs in Real-time
Hughes, Nathan
3D scene graphs have emerged as a powerful map representation for robotics. Scene graphs encode spatial and semantic concepts at multiple levels of abstraction as nodes in a graph, and use edges to represent relationships. Such representations offer an efficient way to model diverse environments, and can be used as an aid for planning tasks requiring semantic knowledge. However, current approaches that construct scene graphs lack the ability to operate in real-time on robots. This thesis addresses this research gap, investigating how to build scene graphs from sensor data.&#13;
&#13;
We first introduce the concept of a 3D scene graph, and then detail our contributions to Kimera, the first work to build a hierarchical 3D scene graph directly from visual-inertial sensor data in post-processing. We discuss experiments that explore the runtime performance of Kimera, and produce scene graphs for real-life environments.&#13;
&#13;
We then introduce Hydra, a real-time spatial perception system that can construct 3D scene graphs incrementally from visual-inertial sensor data, overcoming several limitations of Kimera. We also propose an approach for enhancing traditional vision-based loop closure detection using scene graphs, and detail the first method for correcting a scene graph in response to odometric drift. We provide an extensive evaluation of Hydra, including comparing the produced scene graphs to the scene graphs produced by Kimera. We discuss the suitability of Hydra for usage on mobile robots; towards this we show a scene graph produced by data collected on a real robot and evaluate the runtime performance of Hydra on an embedded processor.&#13;
&#13;
Finally, we turn to applications of Hydra to planning and navigation tasks in robotics. We detail how 3D scene graphs can be used as observations in a reinforcement learning framework. Experiments evaluating this framework show that the hierarchy of the scene graph improves the effectiveness of the learned policy. We then discuss the role that Hydra could play in deploying learned policies to real robots.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Catalytic Reactions of Organoboranes</title>
<link href="https://hdl.handle.net/1721.1/145091" rel="alternate"/>
<author>
<name>Seim, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/145091</id>
<updated>2022-08-30T03:47:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Catalytic Reactions of Organoboranes
Seim, Alexander
Organoboranes represent a privileged class of intermediates and are integral to the syntheses of numerous medicinally relevant molecules. The synthesis of organoboranes usually proceeds through two-electron polar methods, which often involve expensive and toxic reagents used in stoichiometric quantities. We proposed that one-electron radical chemistry can be used to generate new organoboranes under catalytic conditions, allowing for more controllable and efficient processes. Herein we report our efforts towards developing new catalytic methodologies for the synthesis of organoboranes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Socio-Environmental Sensor Networks for Community Sensing</title>
<link href="https://hdl.handle.net/1721.1/145088" rel="alternate"/>
<author>
<name>Rico Medina, Andrés</name>
</author>
<id>https://hdl.handle.net/1721.1/145088</id>
<updated>2022-08-30T04:07:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Socio-Environmental Sensor Networks for Community Sensing
Rico Medina, Andrés
We are living in a time of extraordinary urban changes. Research has shown that cities can bring economic wealth and improved quality of life by fostering diverse economies, dense knowledge exchanges, and efficient district performance. However, it is also true that scientists have associated cities with crowding, segregation, environmental degradation, and other significant challenges. Sensors, Data, and Artificial Intelligence can lead to a better understanding of urban settings and their challenges by providing opportunities for insight into their social and environmental performance. Many of these sensing initiatives are carried out in a top-down fashion. Top-down sensing generates datasets that capture large-scale patterns across populations. This data could be complemented by bottom-up community-based approaches that capture more granular information emerging from the specific needs of individuals. Through a series of case studies, this thesis illustrates how to use a variety of community-scale sensor and machine intelligence implementations to measure aspects of socio-environmental cycles that emerge in different urban and environmental contexts. These studies explore possibilities for providing communities with access to localized information about socio-environmental systems that, if fully deployed, could enable bottom-up transformation of collective behavior, policies, and infrastructure to address the great challenges that future cities will face.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Driven Transit Oriented Development Planning: Using Montreal’s New Transit System as a Case Study</title>
<link href="https://hdl.handle.net/1721.1/145086" rel="alternate"/>
<author>
<name>Owen, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/145086</id>
<updated>2022-08-30T03:42:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Data Driven Transit Oriented Development Planning: Using Montreal’s New Transit System as a Case Study
Owen, Jordan
The goal of transit-oriented development (TOD) is to encourage personal movement via walking or cycling to/from transit stations, increase the usage of shared means of transit, reduce highway, street, and parking congestion, and thereby instill both personal and environmental benefits. There has been a significant amount of research on the methods regional and city planners can use to identify and pursue opportunities for TOD. The primary focus of most prior research has centered on population density, walkability, land use diversity, and parking around potential transit nodes to identify which ones are best suited for TOD. Studies frequently aggregate these factors into a single TOD index. &#13;
&#13;
However, several key considerations have been omitted in past research and applications related to TOD, such as real estate development capacity and market potential. This thesis aims to assess TOD potential quantitatively from both city planners’ and real estate developers’ perspectives, for an existing transit network and for new networks in the planning phase. As a result, city planners and real estate developers could co-create maps and indices to identify which existing stations would best serve as a new polycentric node, and where new transit lines should be placed to maximize the benefits associated with TOD.&#13;
&#13;
To address this gap in the prior literature and its practical applicability, this research proposes a unique methodological approach that focuses on development potential around major transit nodes and on market potential. This proposed methodology produces a spatial index with three distinct layers: Walkability, Potential Densification, and Real Estate Market. Along with presenting the methodological approach, the methodology is applied in a case study, using the city of Montreal as an illustrative example.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The War on Who? An Analysis of Drug Possession Arrests in Four U.S. Cities</title>
<link href="https://hdl.handle.net/1721.1/145085" rel="alternate"/>
<author>
<name>Simon, Asher H.</name>
</author>
<id>https://hdl.handle.net/1721.1/145085</id>
<updated>2022-08-30T03:31:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The War on Who? An Analysis of Drug Possession Arrests in Four U.S. Cities
Simon, Asher H.
For over half a century, the “War on Drugs” has entailed strict control and policing of illicit drug use in American cities. Despite this policy of criminalization and punishment, large numbers of Americans from all backgrounds continue to use illegal substances, despite the risk of arrest or incarceration. However, the burden of enforcement is not borne evenly across different demographic groups. In particular, black men appear to suffer from disproportionate levels of arrest for drug possession.&#13;
&#13;
This thesis seeks to contribute to the existing understanding of inequities in drug possession arrests, especially as related to race, while explicitly addressing the role of the distribution of illicit drug use across different groups in determining patterns of arrests for possession. By combining drug possession arrest data from four U.S. cities (Los Angeles, Chicago, New York City, and Dallas) with national survey data estimating illicit drug use and population data, I create a series of multiple linear regression models that estimate the relationship between the propensity of arrest for drug possession and age, sex, racial background, and estimated illicit drug use. I find that, even after controlling for the estimated distribution of illegal drug use, along with demographic factors, significant disparities continue to exist in all four cities studied – specifically, black men are most likely to be arrested. These results provide further evidence that differences in use by identity cannot explain relative levels of arrest, lending support to theories that attribute these disparities to either police bias or differences in social or neighborhood context. I also find evidence suggesting that specific policy changes in two cities – Propositions 47 and 64 in Los Angeles and the end of Stop-and-Frisk in New York City – appear to have significantly reduced the magnitude of disparities in drug possession arrests. This further evidences the salience of enforcement strategy in driving disparate outcomes and implies that further changes in illicit drug enforcement policy have the potential to ameliorate existing inequities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A methodology for using eBPF to efficiently monitor network behavior in Linux Kubernetes clusters</title>
<link href="https://hdl.handle.net/1721.1/145083" rel="alternate"/>
<author>
<name>Zavarella, Timothy D.</name>
</author>
<id>https://hdl.handle.net/1721.1/145083</id>
<updated>2022-08-30T03:02:08Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A methodology for using eBPF to efficiently monitor network behavior in Linux Kubernetes clusters
Zavarella, Timothy D.
With the rise of container orchestration systems such as Kubernetes and of microservice-based application architectures, there has been a corresponding growth in tools aimed at monitoring these systems. As monitoring approaches have evolved, the implementation of instrumentation has shifted from the application level to the platform level. The extended Berkeley Packet Filter (eBPF) can enable high-performance, low-overhead collection for platform-level monitoring. Existing commercial eBPF monitoring systems are often tightly integrated, with large dependencies and little flexibility for integration into alternative monitoring systems. This thesis presents a methodology for developing modular, self-contained eBPF monitoring systems which are portable across various kernel versions, Container Network Interface (CNI) plugins, and cluster configurations. This methodology recommends the choice of stable hook points and the BPF CO-RE approach to development using the libbpf or Cilium/ebpf loaders. A proof-of-concept monitor was developed which captures network traffic on a cluster using the stable Traffic Control direct-action hook point. Packet capture at pod virtual ethernet network interfaces was selected to allow for CNI-independent correlation of packets to cluster workloads. The prototype provides a suitable platform on which to implement additional monitoring functionality, and was integrated with an existing NetApp cloud monitoring system.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Attack Planner: Systematization and Expansion of Persistence Knowledge</title>
<link href="https://hdl.handle.net/1721.1/145082" rel="alternate"/>
<author>
<name>Jiang, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/145082</id>
<updated>2022-08-30T03:01:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Attack Planner: Systematization and Expansion of Persistence Knowledge
Jiang, Eric
The internet has become a component of society’s critical infrastructure. However, the benefits of using the internet have been accompanied by an increasing volume of cyberattacks. Although documentation of these cyberattacks does exist, it is not readily machine-processable and is often in a form that is hard even for people to understand. In order to protect systems against these attacks, companies have to hire penetration testers to help them find vulnerabilities within their systems. However, this can be very expensive and time-consuming. It is also very hard to be completely thorough and comprehensive with penetration testing, as there are so many different types of attacks.&#13;
&#13;
The Attack Planner is a tool developed at CSAIL that allows users to easily understand the flow of an attack campaign as well as the different ways adversaries can achieve their goals, by representing cyberattacks in the form of trees called attack trees. CALDERA, another tool developed in parallel with the Attack Planner, assists in this project. My focus in this project is to expand the Attack Planner’s plan repertoire and its capabilities. There are many different purposes to which cyberattacks are put; this thesis focuses on the persistence aspect of attacks. By persistence, we assume that the attacker has already penetrated the system and can execute a malicious process, but the attacker’s goal is to implant an "advanced persistent threat" (APT) that can survive system reboot and continue exploiting the system over sustained periods of time.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smoothed Complexity of Network Coordination Games</title>
<link href="https://hdl.handle.net/1721.1/145081" rel="alternate"/>
<author>
<name>Viera, Julian T.</name>
</author>
<id>https://hdl.handle.net/1721.1/145081</id>
<updated>2022-08-30T03:00:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Smoothed Complexity of Network Coordination Games
Viera, Julian T.
The problem of finding or computing Nash equilibria has been an important problem in economics and computer science for decades. Classical worst-case and expected-case analyses have shown that in many cases, for many types of games, computing Nash equilibria is intractable. However, it has been empirically shown that in many instances, approximate Nash equilibria can be computed efficiently. Thus, there is growing interest in the smoothed complexity of games, that is, the complexity of computing Nash equilibria when the inputs to the problem are confined to look more like real-world inputs. This thesis provides a further analysis of the smoothed complexity of network coordination games. We specifically look at the smoothed complexity of the 2-Flip algorithm. While we do not prove that using the 2-Flip algorithm on 2-Flip-Max-Cut achieves smoothed quasipolynomial time, we discuss multiple attempts at this goal, and hope to provide other researchers with the inspiration to prove quasipolynomial time.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transforming dependency parses into ternary expressions for enhanced indexing and matching</title>
<link href="https://hdl.handle.net/1721.1/145080" rel="alternate"/>
<author>
<name>Hu, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/145080</id>
<updated>2022-08-30T03:22:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Transforming dependency parses into ternary expressions for enhanced indexing and matching
Hu, Henry
Advancements in dependency parsing allow machines to quickly and accurately analyze natural language sentences; however, these parses often require non-trivial manipulation to be useful for many applications. This thesis describes Astroparse, a system for producing ternary expression (subject–relation–object triple) parses by building on existing third-party dependency parsers. I present a design which uses a previously-studied training-example framework with additional augmentations to expand its parsing abilities. I analyze some ways that dependency parse representations fail to capture important relationships in sentences and present algorithms to recover ternary expressions despite those failures. I evaluate my system by examining its outputted ternary expressions manually as well as by qualitatively analyzing its learned transformations. On sentences from high-quality articles in Wikipedia, Astroparse achieves an average precision of up to 93.4% and an estimated recall of about 88.1%, and recovers an average of 35.3% more relations than raw dependency parses alone. My system is also flexible to changes in the underlying dependency parsers and produces human-readable explanations for each ternary expression it produces.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contention Bounds for Locking Computations</title>
<link href="https://hdl.handle.net/1721.1/145079" rel="alternate"/>
<author>
<name>Li, Wanlin</name>
</author>
<id>https://hdl.handle.net/1721.1/145079</id>
<updated>2022-08-30T03:29:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Contention Bounds for Locking Computations
Li, Wanlin
This thesis quantifies lock contention in multithreaded programs by expanding the theoretical model of task-parallel execution traces to account for mutual exclusion locks. While lock profiling and contention detection tools abound in software, empirical measurements of contention suffer from wide fluctuations across different executions of the same code due to scheduling variation and processor availability. In this work we present analytical bounds on the maximum possible contention incurred by a given program over all possible execution schedules, even when running alongside other programs in a busy environment or when scheduled by an adversary. Although we show that computing the exact optimum is NP-hard for general task graphs, in the restricted case of fork-join (series-parallel) computations with &#119899; strands and a single lock we offer a Θ(&#119899;²) exact algorithm as well as a Θ(&#119899;) 2-approximation for worst case contention. In proving these bounds linking maximum contention to the antichain sizes of a program’s parallel trace, we establish graph-based properties of worst case execution schedules that also apply directly to the related single processor scheduling problem of average response time under task precedence constraints. In addition, our analysis of worst case contention offers improved estimates for the completion time of locking computations under the execution trace model.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beatty: Automatic Tempo Curve Synthesis for Expressive MIDI Track Playback</title>
<link href="https://hdl.handle.net/1721.1/145078" rel="alternate"/>
<author>
<name>Wong, Madeline</name>
</author>
<id>https://hdl.handle.net/1721.1/145078</id>
<updated>2022-08-30T03:30:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Beatty: Automatic Tempo Curve Synthesis for Expressive MIDI Track Playback
Wong, Madeline
Beatty is a sequence-to-sequence machine learning model to predict expressive timing decisions for excerpts of classical solo piano music. Composed of a bidirectional encoder LSTM and decoder LSTM with attention, Beatty predicts tempo labels based on input note sequences. The input note sequence is obtained by transforming a MIDI file representation of the musical score into a series of one-hot note vectors, which encode the MIDI note pitches, velocities, and durations, and are augmented with additional harmonic tension information. The target output is a sequence of tempo labels, represented as ratios of the sequence’s initial starting tempo. We demonstrate that the harmonic tension augmentation, as well as learning from filtered tempo label sequences, improve model performance. In qualitative evaluation, the model output receives positive feedback when its predicted tempo sequence is subtle and smooth and criticism when it fluctuates too greatly, suggesting areas for future exploration and improvement.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Power Communication Circuits for Net-Zero-Energy IoT Nodes</title>
<link href="https://hdl.handle.net/1721.1/145077" rel="alternate"/>
<author>
<name>Jung, Jaeyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/145077</id>
<updated>2022-08-30T03:44:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Low-Power Communication Circuits for Net-Zero-Energy IoT Nodes
Jung, Jaeyoung
While Internet of Things (IoT) devices are increasingly widespread thanks to the lower cost of semiconductors, the reach of IoT is limited by the fundamental need of IoT devices for power. Currently, IoT devices are powered by a battery, which must be periodically replaced or recharged, or powered from the electrical grid, which limits mobility and requires a permanent wired connection. This thesis presents a solution in the form of the Net-Zero-Energy Device, an IoT node that can power itself by harvesting energy from 5G radiation. To enable data transmission from weak 5G signals, the energy harvester was optimized for sensitivity, and the active blocks were optimized to minimize their leakage power. The focus of this thesis is the design of the encoder block and backscatter circuit, which enable the IoT node to transmit data.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Open Source Machine Learning Software Shapes AI</title>
<link href="https://hdl.handle.net/1721.1/145076" rel="alternate"/>
<author>
<name>Langenkamp, Maximillian</name>
</author>
<id>https://hdl.handle.net/1721.1/145076</id>
<updated>2022-08-30T03:01:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">How Open Source Machine Learning Software Shapes AI
Langenkamp, Maximillian
If we want a future where AI serves a plurality of interests, then we should pay attention to the factors that drive its success. While others have studied the importance of data, hardware, and models in directing the trajectory of AI, I argue that open source software is a neglected factor shaping AI as a discipline. I start with the observation that almost all AI research and applications are built on machine learning open source software (MLOSS). This thesis presents four contributions. First, it quantifies the outsized impact of MLOSS by using GitHub contributions data. By contrasting the costs of MLOSS and its economic benefits, I find that the average dollar of MLOSS investment corresponds to at least $100 of global economic value created, corresponding to $30B of economic value created this year. Second, I leverage interviews with AI researchers and developers to develop a causal model of the effect of open sourcing on economic value. I argue that open sourcing creates value through three primary mechanisms: standardization of MLOSS tools, increased experimentation in AI research, and creation of communities. Third, I analyze the various incentives behind MLOSS by examining three key factors: business strategy, sociotechnical factors, and ideological motivations. In the last section, I explore how MLOSS may help us understand the future of AI and make a number of probabilistic predictions. I intend this thesis to be useful for technologists and academics who want to analyze and critique AI, and policymakers who want to better understand and regulate AI systems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of Generative Adversarial Networks for Predicting Central Hemodynamics</title>
<link href="https://hdl.handle.net/1721.1/145075" rel="alternate"/>
<author>
<name>Khambete, Mihir Prasad</name>
</author>
<id>https://hdl.handle.net/1721.1/145075</id>
<updated>2022-08-30T04:02:54Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of Generative Adversarial Networks for Predicting Central Hemodynamics
Khambete, Mihir Prasad
A comprehensive assessment of cardiovascular function requires measurement of central hemodynamic values (e.g., pressures in the chambers of the heart and the pulmonary vessels). The current gold standard for obtaining these values is right heart catheterization, an invasive procedure that entails some level of risk for the patient. By contrast, other information routinely used in medicine, namely electrocardiograms (ECGs), routine labs, demographics, family history, and echocardiograms, may contain information that can be leveraged to infer hemodynamic quantities. While there is interest in using such proxy signals to infer central hemodynamics, no existing studies have leveraged these data to provide a comprehensive assessment of patient hemodynamics. This thesis investigates the use of generative adversarial networks (GANs) in combination with cardiovascular mechanistic models to estimate central hemodynamic values from ECG and tabular data. Three hemodynamic quantities form the focus of this work: mean pulmonary artery pressure (mPAP), mean pulmonary capillary wedge pressure (mPCWP), and cardiac output (CO). In addition, we consider methods for evaluating the performance of our networks at the patient level in order to assess their applicability in medical practice. Our models did not achieve good performance on the regression task. However, GANs performed comparably with or slightly better than existing deep feedforward networks on the binarized task, attaining AUC of up to 0.81±0.01 and 0.80±0.01 for detecting elevated mPAP and mPCWP, respectively. We hope that this work will ultimately lead to a tool for clinicians to noninvasively estimate a patient’s central hemodynamics in a variety of clinical settings.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerated Channel Operating Margin for Automated Context and Applications to Design Optimization</title>
<link href="https://hdl.handle.net/1721.1/145070" rel="alternate"/>
<author>
<name>Gromko, Zackary</name>
</author>
<id>https://hdl.handle.net/1721.1/145070</id>
<updated>2022-08-30T03:49:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Accelerated Channel Operating Margin for Automated Context and Applications to Design Optimization
Gromko, Zackary
Rapidly becoming a cornerstone of signal integrity metrics, Channel Operating Margin (COM) provides a highly desirable single figure of merit encapsulating performance guarantees and providing a variety of byproducts yielding insight on channel behavior. This metric has been anticipated as an effective tool for automated design problems from root cause analysis to design optimization. However, in practice it has largely been mired in the manual design regime, requiring a great depth of tooling and expert input to assess efficiently. The most substantial such bottleneck derives from the dual design problem of finding optimal equalizer settings for general transmitters and receivers given a channel description, which has historically been approached with an expert-guided grid search. We tackle this issue by introducing a practical Bayesian optimization method accelerated by a lightweight transfer learning framework. We additionally present our methods in developing an unsupervised flow including derivation of channel descriptions using automated PowerSI tooling, fast time-domain assessment, and flexible frequency-space channel simulation. Finally, we discuss briefly the state of applications to design optimization. In sum, we develop a method for automated COM analysis from design to metric permitting the rapid analysis of entire PCBs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Modeling of Volatile Organic Compound Measurements</title>
<link href="https://hdl.handle.net/1721.1/145069" rel="alternate"/>
<author>
<name>Quaye, Jessica A.</name>
</author>
<id>https://hdl.handle.net/1721.1/145069</id>
<updated>2022-08-30T03:08:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Predictive Modeling of Volatile Organic Compound Measurements
Quaye, Jessica A.
The Internet of Things (popularly referred to as "IoT") has revolutionized our daily lives with widespread access to devices such as smart watches, voice-controlled virtual assistants, and smart security systems. Many of these devices are reliant on sensors that measure signals including temperature, humidity, pressure, acceleration, and light intensity. Today, these devices are becoming more intelligent and capable of interacting with and responding to the world in which they operate. In order to make accurate inferences and predictions about the future, the devices require context on historical data. Intelligent applications using volatile organic compound (VOC) data have not been as heavily investigated as the aforementioned signals. This thesis takes a first-principles approach to analyzing historic VOC data in order to make predictions about future VOC values. The central question that the project seeks to address is: Can simple predictive models be applied to VOC data?&#13;
&#13;
This thesis focuses on building simple predictive models for forecasting VOC concentration, with the ultimate goal of predicting the flow of human traffic in a given space during different times of the day. We chose to monitor VOC concentration because it can be used as a proxy for CO2 concentration as well as other environmental signals. Additionally, VOC sensors are relatively inexpensive. We explore the use of VOC signals as an indicator of human presence rather than other popular techniques such as vision-based techniques (which can be obstructed by occlusions) or wireless sensing techniques (which require significant modifications to hardware). Our predictive models are trained with a combination of mathematical properties such as probability distribution, gradient, and correlation between signals. Each model is assessed with standard forecasting analysis metrics: Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE).
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preliminary Investigation of Productivity Tools for Memory Profiling in Parallel Programs</title>
<link href="https://hdl.handle.net/1721.1/145068" rel="alternate"/>
<author>
<name>Zou, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/145068</id>
<updated>2022-08-30T03:09:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Preliminary Investigation of Productivity Tools for Memory Profiling in Parallel Programs
Zou, Elizabeth
As computing efficiency becomes constrained by hardware scaling limitations, code optimization grows increasingly important as an area of research. The impact of certain optimizations depends on whether a program is compute-bound or memory-bound. Memory-bound computations especially benefit from program transformations that improve their data locality, to better exploit modern memory hierarchies. Reuse distance is a useful measure for analyzing data locality in an architecture-agnostic way, i.e., independent of specific cache sizes. Previous work has researched different ways to calculate reuse distance, ranging from deterministic to probabilistic and using different definitions of reuse distance.&#13;
&#13;
This thesis investigates the use of static compiler instrumentation tools to implement memory analysis tools for parallel programs. I show how the comprehensive static instrumentation (CSI) framework can be used to compute the reuse distance of memory locations in a sequential execution of a program. For analyzing parallel programs, it is necessary to contextualize the memory access patterns with the logical parallel structure of the code. To this end, I show how reuse distance calculations can be organized according to the logical parallel structure of the program by building a series-parallel tree using CSI. I present several potential algorithms for using this instrumentation to calculate statistics for average and peak memory bandwidth in parallel codes. Although these instrumentation tools remain prototypes, they constitute a compelling proof-of-concept for the use of CSI to perform memory analysis in parallel codes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Analysis for a Nuclear-Powered Commercial Merchant Ship</title>
<link href="https://hdl.handle.net/1721.1/145067" rel="alternate"/>
<author>
<name>Hagen, Megan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/145067</id>
<updated>2022-08-30T03:57:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Feasibility Analysis for a Nuclear-Powered Commercial Merchant Ship
Hagen, Megan J.
This thesis evaluated advanced reactor designs to determine the best option for installation onboard a civilian container ship in support of the IMO’s strategy for reducing greenhouse gas emissions from ships.  Greenhouse gas reduction has been a major challenge within commercial maritime shipping as the industry has recently expanded significantly.  Although the industry continues to grow, these emissions must be reduced well below current values by 2030 for climate change to be meaningfully combated.&#13;
&#13;
Concurrently, the global nuclear power industry has made tremendous advancements in developing new technologies throughout the last two decades, and at the forefront of these efforts are seven Generation IV reactor designs.  These designs were created to improve economics, increase passive safety measures, and expand fuel cycle control, and so far, research efforts have been so successful that the idea of incorporating these designs into the overall climate change solution has been met with significant support from both public and private organizations throughout the world.  &#13;
&#13;
Therefore, Generation IV reactor designs were evaluated based on their technical feasibility and projected commercial readiness to determine the best option to supply propulsive and electrical power onboard a commercial container ship.  Related to commercial shipping, a nuclear-powered container ship has been considered and analyzed thoroughly in the past, but the introduction of Generation IV technology presents an interesting opportunity to revisit the concept and determine if using these designs would increase real-world implementation feasibility.&#13;
&#13;
To fully understand the research space, past nuclear-powered maritime applications as well as previously conducted research related to this topic were reviewed to validate this concept’s feasibility.  The licensing and regulatory process was also analyzed to understand the feasibility of final commercialization, and from this analysis, liquid metal reactors (specifically, Sodium Cooled Reactors, Molten Salt Fast Reactors, and Lead-cooled Fast Reactors) were selected as optimal solutions for implementation onboard container ships.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Electrical Standards on MVDC Shipboard Power Cable Size</title>
<link href="https://hdl.handle.net/1721.1/145066" rel="alternate"/>
<author>
<name>Malone, Joshua James</name>
</author>
<id>https://hdl.handle.net/1721.1/145066</id>
<updated>2022-08-30T03:35:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Impact of Electrical Standards on MVDC Shipboard Power Cable Size
Malone, Joshua James
In recent years there has been rising interest in medium voltage direct current (MVDC) power systems for several reasons, including compatibility with DC loads, reduction of induced currents and magnetic signatures, and avoidance of alternating current (AC) frequency synchronization issues when combining outputs from multiple sources [1]. However, few MVDC systems are currently in operation, and little is known in terms of design and test parameters in comparison to the vast wealth of knowledge available for medium voltage alternating current (MVAC) systems. There is currently only one standard governing MVDC applications, the Institute of Electrical and Electronics Engineers (IEEE) Standard 1709, recommendations for shipboard MVDC systems [2].&#13;
&#13;
The goal of this study is to provide recommendations for shipboard MVDC power cable design and test values, and to examine how existing MVAC and MVDC standards affect MVDC cable size. A review of published standards and guidelines for MVAC cable design and test values was conducted, collecting recommended lightning-impulse Basic Insulation Level (BIL), short-duration overvoltage Withstand Voltage Test, and cable insulation thickness values. The collected MVAC cable design and test values are compared to each other as well as to the sole MVDC standard in order to provide more informed suggested MVDC cable design and test values as well as MVDC cable sizing. It was found that there is a tradeoff with the thickness of cable insulation, where a small reduction in cable system size can be achieved by a reduction in insulation thickness but at a penalty of substantially greater electric stresses within the insulation. The collected results provided a basis for a proposed MVDC cable design process and for example shipboard MVDC cable system layouts rated for 75, 100, and 125 MW power levels and 12 and 18 kV Nominal System Voltages.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Perturbation-free Identification of Human Standing Balance</title>
<link href="https://hdl.handle.net/1721.1/145065" rel="alternate"/>
<author>
<name>Sugimoto Dimitrova, Rika</name>
</author>
<id>https://hdl.handle.net/1721.1/145065</id>
<updated>2022-08-30T04:04:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards Perturbation-free Identification of Human Standing Balance
Sugimoto Dimitrova, Rika
Balance disorders affect millions in the United States alone. Despite the large body of literature in the field, we do not yet fully understand the neuromuscular control of balance. A method to estimate natural, unperturbed standing balance dynamics would provide a necessary tool to examine the apparent control mechanisms employed by healthy individuals, and lead to insight and inspiration for developing effective rehabilitation and assistive technologies for those with balance impairments.&#13;
&#13;
In this thesis, a correlation-based system identification method has been investigated as a candidate for perturbation-free identification of human standing balance. The method was tested in simulation to understand its strengths and limitations, and was successfully validated on a hardware system. However, existing human quiet standing data revealed that the posture control process cannot be modelled by a stationary process at the time scales of interest, as required by the system identification method. Accordingly, the perturbation-free system identification of balance dynamics and control remains an area for ongoing research.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Success Classification for Object Navigation</title>
<link href="https://hdl.handle.net/1721.1/145058" rel="alternate"/>
<author>
<name>Yue, Albert</name>
</author>
<id>https://hdl.handle.net/1721.1/145058</id>
<updated>2022-08-30T03:02:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Success Classification for Object Navigation
Yue, Albert
Object navigation is the embodied task of navigating to an instance of a specified object in unseen environments. Previous work has made impressive progress on the problem, but there remains much room for improvement with current state-of-the-art methods reaching a success rate of less than one in three. In this work, we evaluate a state-of-the-art approach, identifying false positives in object detection as the main point of failure. We propose introducing a new module to verify success when the agent attempts to stop. We introduce a learning-based classifier that learns and compares embeddings for visual observations and object categories and find that it works well at predicting success, outperforming both naive baselines and a heuristic-based classifier. We also find no improvement when using an ensemble model for semantic segmentation, although we believe there is more to be tested before arriving at a conclusive judgement.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Approach to Departure and Arrival Noise Abatement Flight Procedure Development</title>
<link href="https://hdl.handle.net/1721.1/145055" rel="alternate"/>
<author>
<name>Mahseredjian, Ara</name>
</author>
<id>https://hdl.handle.net/1721.1/145055</id>
<updated>2022-08-30T03:05:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Approach to Departure and Arrival Noise Abatement Flight Procedure Development
Mahseredjian, Ara
An aircraft noise modeling framework is presented and used to perform a data-driven exploration of factors correlating with measured aircraft community noise and a model-based validation of the variables found to have the greatest noise impact. Aggregate departure and arrival noise and flight procedures were examined so that factors correlating with measured noise could be isolated. Operational flights at Seattle-Tacoma International Airport were examined using a framework that includes ADS-B data, a force balance kinematics model to estimate aircraft performance, and noise monitor recordings. Variation in measured noise within the network was examined as a function of observed data, including aircraft type, aircraft trajectory, airline, wind, temperature, and relative humidity; and inferred variables, including aircraft configuration, weight, and thrust. Airline-specific departure procedures were shown to impact noise measurements. Departure procedures with higher thrust and higher initial climb gradients were observed to have lower measured noise. Arrival procedures that delayed their deceleration were observed to have lower measured noise in some cases. Ambient environmental conditions, including wind, temperature, and relative humidity, were found to impact noise variation. A model-based evaluation of the factors correlating with aircraft noise followed the data-driven exploration. The delayed deceleration approach, a procedure in which aircraft maintain higher speeds, remain cleanly configured, and fly with lower thrust levels for a longer period of time, was identified as having noise reduction potential beyond 8 nm from the airport. Noise from operational Boeing 737, Airbus A320, and Embraer E190 flights at Boston Logan and Seattle-Tacoma airports was modeled using the NASA Airplane Noise Prediction Program and was compared with ground noise monitor measurements.
When corrected for atmospheric conditions, modeled noise results were consistent with noise monitor readings under reasonable flap deployment assumptions during various early, intermediate, and delayed deceleration approach procedures. Measured noise results indicated that compared to aircraft that decelerated early, aircraft performing delayed deceleration approaches reduced noise by an average of 3-6 dB across different aircraft types.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery-Free Subsea Internet-of-Things</title>
<link href="https://hdl.handle.net/1721.1/145054" rel="alternate"/>
<author>
<name>Afzal, Sayed Saad</name>
</author>
<id>https://hdl.handle.net/1721.1/145054</id>
<updated>2022-08-30T03:25:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Battery-Free Subsea Internet-of-Things
Afzal, Sayed Saad
Bringing massive connectivity to low-cost, low-power ocean sensors is important for numerous oceanographic applications (across climate/weather modeling, marine biology, aquaculture, and defense). However, standard IoT technologies (e.g., Bluetooth, WiFi, GPS) cannot operate underwater, which has left 70% of our planet (the ocean) beyond their reach. In this thesis, I describe how we can change this reality by introducing IoT technologies that are inherently designed for the ocean. Specifically, I show how by rethinking the entire IoT technology stack in the context of oceans, we introduced low-cost (&lt; $100), net-zero-power, scalable connectivity technologies that seamlessly operate underwater and pave the way for massive underwater sensing, networking, localization, and machine learning.&#13;
&#13;
The thesis makes four fundamental contributions: First, it introduces ultra-wideband underwater backscatter, a technology that enables scalable, battery-free underwater communication. Second, it demonstrates how we can push the network throughput of underwater backscatter through a family of techniques including higher-order modulation, self-interference cancellation, and multi-access protocols. Third, it shows how we can leverage our underwater backscatter nodes to enable a battery-free underwater GPS for localization and navigation. Finally, it demonstrates the feasibility of battery-free inference and machine learning in underwater environments by developing a task-specific deep neural network (DNN) model and deploying it on our battery-free underwater nodes.&#13;
&#13;
I deliver these contributions by designing and building new algorithms, systems and protocols for ultra low-power and scalable underwater sensing, networking, localization and inference. I also implement and evaluate these systems in real underwater environments (including rivers and lakes) and challenging weather conditions (including snow and rain), and discuss how they pave the way for new applications in ocean climate monitoring, underwater navigation, ocean exploration, robotics, aquaculture, and marine discovery.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Singlet Fission Organic Solar Cell with Long-Wavelength Absorption Using Non-fullerene Acceptors</title>
<link href="https://hdl.handle.net/1721.1/145053" rel="alternate"/>
<author>
<name>Wu, Alice Q.</name>
</author>
<id>https://hdl.handle.net/1721.1/145053</id>
<updated>2022-08-30T03:40:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Singlet Fission Organic Solar Cell with Long-Wavelength Absorption Using Non-fullerene Acceptors
Wu, Alice Q.
Organic solar cells made with singlet exciton fission materials are able to convert a single photon into two charges. Paired with a long-wavelength absorbing acceptor material, singlet fission materials enable solar cells that exhibit the quantum efficiency benefits of a multi-junction solar cell, with the simplicity of a single-junction structure. We investigated bilayer solar cells using tetracene, a singlet fission material, and Y6, a long-wavelength non-fullerene acceptor material. We characterized their current-voltage characteristics, internal and external quantum efficiency, and change in photocurrent under a magnetic field. We demonstrated successful triplet exciton dissociation at the donor-acceptor interface and a positive contribution to the photocurrent from singlet fission. In addition, we investigated coupling a thin layer of tetracene to a PM6:Y6 bulk heterojunction solar cell, which is a well-known OPV structure. We found that the addition of tetracene boosts external quantum efficiency at its peak absorption wavelengths and the overall power conversion efficiency. Further improvements to the device structure and materials should yield even more promising results.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis</title>
<link href="https://hdl.handle.net/1721.1/145052" rel="alternate"/>
<author>
<name>Ko, Ching-Yun</name>
</author>
<id>https://hdl.handle.net/1721.1/145052</id>
<updated>2022-08-30T03:03:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis
Ko, Ching-Yun
As a seminal tool in self-supervised representation learning, contrastive learning has gained unprecedented attention in recent years. In essence, contrastive learning aims to leverage pairs of positive and negative samples for representation learning, which relates to exploiting neighborhood information in a feature space. However, as self-supervised learning methods, current contrastive learning methods implicitly encode priors on the downstream classification tasks. In this thesis, by investigating the connection between contrastive learning and neighborhood component analysis (NCA), we provide a novel stochastic nearest neighbor viewpoint of contrastive learning and subsequently propose a series of contrastive losses that outperform the existing ones. Under our proposed framework, we show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Orbit Pointing Risk Mitigation for the Agile MicroSat (AMS) CubeSat Laser Guidestar Payload</title>
<link href="https://hdl.handle.net/1721.1/145050" rel="alternate"/>
<author>
<name>Thieu, Albert</name>
</author>
<id>https://hdl.handle.net/1721.1/145050</id>
<updated>2022-08-30T03:10:54Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">On-Orbit Pointing Risk Mitigation for the Agile MicroSat (AMS) CubeSat Laser Guidestar Payload
Thieu, Albert
The AMS Beacon is a ½-U laser guidestar payload, scheduled to launch in April 2022 aboard the Agile MicroSat (AMS) 6-U CubeSat. The payload carries a laser with a wavelength of 976 nm and a power of up to 500 mW. The AMS Beacon will be the first to provide an active lasing low Earth orbit reference for high-angle rate adaptive optics (AO).  During the science phase of the mission, it will establish a space-to-ground link with an AO-equipped ground station.  Due to budget constraints and size, weight, and power (SWaP) limitations, the AMS Beacon was designed without gimbals or fast-steering mirrors, to utilize only open-loop body-pointing and generic CubeSat attitude control software. This thesis presents the pre-launch work, the radiometric link analysis, and a novel on-orbit scanning procedure to reduce mission risk due to a static pointing error and, if necessary, to recover and recharacterize the orientation of the laser beam relative to the spacecraft body. Within the limits of the attitude determination and control system (ADCS), our search mode can accommodate over 1.75° of pointing error during a single pass, and has the capability to search larger areas by concatenating data from multiple successive passes. As our expected pointing error is approximately 0.1°, this search mode is a fail-safe in case of larger than expected pointing shifts during launch and deployment. The recharacterization scan can also re-determine the beam orientation to within 0.1°.  Our scheme utilizes AMS’s body-pointing capability, AMS telemetry, and ground-based radiometric readings to recover and re-characterize beam alignment knowledge on-orbit. We simulate these data sources using the AMS flatsat and our link model. We then validate our on-orbit scanning procedure and analysis pipeline for recharacterizing the beam orientation.
Because this procedure relies on standard CubeSat pointing capabilities and telemetry, we believe that our procedure could be used for future laser guidestar CubeSat payloads.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>One Size Does Not Fit All: Individualizing Climate Action Plans in Southern California</title>
<link href="https://hdl.handle.net/1721.1/145045" rel="alternate"/>
<author>
<name>Moore, Danielle</name>
</author>
<id>https://hdl.handle.net/1721.1/145045</id>
<updated>2022-08-30T03:39:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">One Size Does Not Fit All: Individualizing Climate Action Plans in Southern California
Moore, Danielle
The City of Imperial Beach, a coastal Southern California city, is under severe threat from climate change. Sea level rise, flooding, extreme heat, and environmental pollution all pose risks to the city and especially its vulnerable residents. This thesis applies an environmental justice lens to Imperial Beach’s climate action plan. Specifically, it analyzes the city’s plan for how it will protect marginalized community members from climate change while successfully reducing the city’s emissions by 2050. It also advocates for a more tailored climate plan that acknowledges different community needs based on identity (e.g., race, class, and language). After analyzing the city’s plan alone, the thesis then zooms out to compare it to neighboring cities’ plans to understand the regional context. Multiple policy recommendations across different scales are then made for the city itself, the state of California, the U.S. federal government, and the Mexican government. These recommendations include further community engagement, stronger top-down climate goals, increasing meeting accessibility, making funding more available for Imperial Beach from California, and more. Lastly, a roadmap to 2050 that includes these recommendations alongside emissions goals is presented for the city.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformable Discreet Log Contracts</title>
<link href="https://hdl.handle.net/1721.1/145041" rel="alternate"/>
<author>
<name>Patel, Shwetark</name>
</author>
<id>https://hdl.handle.net/1721.1/145041</id>
<updated>2022-08-30T03:03:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Transformable Discreet Log Contracts
Patel, Shwetark
Since Bitcoin and the Unspent Transaction Output (UTXO) model were introduced by Satoshi Nakamoto over a decade ago, many important issues have been identified with the UTXO model; the most important being that it is hard to extend the model to accommodate more complex use cases, such as those related to decentralized finance. Currently, Ethereum has many decentralized exchanges which allow users to seamlessly make trades. Performing a trade on chain on Bitcoin is quite difficult; currently, the most elegant way is to set up a Discreet Log Contract (DLC) between you and your counterparty. However, DLCs currently have many downsides; for example, they are not transferable (i.e., once Alice and Bob sign up for the DLC, they are stuck in the DLC until settlement or until they both interactively agree to leave). We fix this by introducing the Transformable Discreet Log Contract (TDLC), which allows a third party, Carol, to swap in for either Alice or Bob midway through the contract with reduced interaction, and the Truly Transformable Discreet Log Contract (TTDLC), which allows multiple parties to seamlessly trade the contract around between them. With both the TDLC and the TTDLC, the party swapping into the contract only has to interact with the single party swapping out. The end goal for the work presented in this thesis is to help improve the usability of Bitcoin for advanced use cases such as those relevant to decentralized finance.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Workflow Discovery in Provenance Graphs</title>
<link href="https://hdl.handle.net/1721.1/145040" rel="alternate"/>
<author>
<name>Yue, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/145040</id>
<updated>2022-08-30T03:29:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Unsupervised Workflow Discovery in Provenance Graphs
Yue, Kevin
Data is an ever-expanding part of life in today’s world. Understanding the origin and the history of data, a concept known as data provenance, can thus be extremely important. In this thesis, we first address the need for a data provenance knowledge graph system, then address the need to recover, in an unsupervised manner, the workflows that exist in such provenance networks. Along with evaluating the effectiveness of existing unsupervised community and motif detection methods, we also suggest a novel approach that augments standard motif detection. Our research shows weak precision and recall numbers for almost all considered approaches, but provides a promising basis for future experimentation using more multifaceted methods.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gradient Subgroup Scanning for Distributionally and Outlier Robust Models</title>
<link href="https://hdl.handle.net/1721.1/145039" rel="alternate"/>
<author>
<name>Jung, Luann</name>
</author>
<id>https://hdl.handle.net/1721.1/145039</id>
<updated>2022-08-30T03:57:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Gradient Subgroup Scanning for Distributionally and Outlier Robust Models
Jung, Luann
Traditional machine learning methods such as empirical risk minimization (ERM) frequently encounter the issue of achieving high accuracy on average but low accuracy on certain subgroups, especially when there exist spurious correlations between the input data and label. Previous approaches for reducing the discrepancy between average and worst-group accuracies typically require expensive known subgroup annotations for either every training data point (as is the case in group distributionally robust optimization (DRO)), or every validation data point. Furthermore, these distributionally robust approaches tend to show reduced performance when outliers are also present in the data. Unfortunately, existing methods for improving subgroup performance cannot be simply combined with prior approaches for excluding outliers, as they often directly conflict. This work proposes a method for addressing both group robustness and outlier exclusion when training machine learning models that requires no previous knowledge about subgroups or outliers within the data. We focus on attempting to balance these two traditionally clashing goals by clustering the gradients of the losses with respect to the model parameters. In doing so, we find minority subgroups and exclude outliers under the assertion that gradients within groups behave similarly while outliers exhibit more randomized behavior. This work demonstrates an improvement to both average and worst-group accuracies compared to baselines and other previous methods when applied to the Waterbirds image dataset.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A First Step Towards Understanding Sperm Whale Communication and Behavior</title>
<link href="https://hdl.handle.net/1721.1/145038" rel="alternate"/>
<author>
<name>Jacobson-Schulte, Finnian</name>
</author>
<id>https://hdl.handle.net/1721.1/145038</id>
<updated>2022-08-30T03:04:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A First Step Towards Understanding Sperm Whale Communication and Behavior
Jacobson-Schulte, Finnian
We present two tools for annotating and preparing animal audio and movement data for downstream modeling tasks. The first is a modification of VisionTransformer that detects and classifies animal communication into an annotated format. The second is a visualization tool that creates estimates of animal movement patterns and provides a platform for experts to assign behavioral labels to time periods. As a use case example, we show experiments training a factored Hidden Markov Model to discover patterns in the animal communication, both across the entire dataset of communication and split across different dominant behavior patterns. Each section of this work uses sperm whales as the species of focus; however, the work is easily adaptable and applicable to any species for which audio or movement data can be collected.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring image difficulty under limited presentation time: towards building better test sets for object recognition</title>
<link href="https://hdl.handle.net/1721.1/145037" rel="alternate"/>
<author>
<name>Lin, Xinyu</name>
</author>
<id>https://hdl.handle.net/1721.1/145037</id>
<updated>2022-08-30T04:03:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Measuring image difficulty under limited presentation time: towards building better test sets for object recognition
Lin, Xinyu
Datasets are crucial to computer vision and broader machine learning. In particular, with the advance of techniques that are less well understood theoretically, raw performance on datasets such as ImageNet has been the main driver of development and feedback in the field. However, the sources of data that datasets draw on today are highly biased; for example, object classes are correlated with backgrounds, and many phenomena are omitted. In addition, objects mostly appear in stereotypical rotations with little occlusion. The resulting datasets themselves are similarly biased. Thus, performance on these datasets is limited as a predictor of the performance users can expect on their own tasks. To approach this problem, datasets such as ObjectNet were built with images that more closely resemble real-world scenarios by controlling for object backgrounds, rotations, imaging viewpoints, etc. In this thesis, we further address this problem by proposing a novel difficulty metric that reflects how well humans recognize images. We derive this metric by conducting extensive psychophysics experiments to determine the minimal time humans need to recognize an image. This new metric can be used to construct datasets that control for the difficulty of the different scenes and views humans see on a daily basis. Models' performance on these datasets will also better represent the performance of humans on their own tasks. However, collecting these labels can be costly, so we also propose machine proxies that can effectively estimate human difficulty for different images and datasets.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Thinking for Social Changemakers</title>
<link href="https://hdl.handle.net/1721.1/145036" rel="alternate"/>
<author>
<name>Varma, Preeti</name>
</author>
<id>https://hdl.handle.net/1721.1/145036</id>
<updated>2022-08-30T03:43:33Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Systems Thinking for Social Changemakers
Varma, Preeti
Social changemakers are intuitive systems thinkers, but to be successful they need to develop more intentional system-level perspectives to coordinate and implement more meaningful and effective solutions that result in positive social and environmental outcomes. The social impact space is full of well-intentioned, under-resourced initiatives striving to make positive change. System change has proven to be an elusive goal, despite much discussion and writing on the topic. One contributing reason is that social changemakers (students, social entrepreneurs, civic and community leaders) currently don't have sufficient contextual education in system dynamics to understand how systems thinking might apply to their challenges. Though scattered resources exist to teach system dynamics, few resources specifically tie the engineering practice of systems thinking to social issues, the areas most relevant for social changemakers.&#13;
&#13;
This thesis develops a meta-level curriculum on systems thinking for social entrepreneurs, delivered through a diverse set of teaching methods, to train orchestrators of system change toward positive system-level impact. Our goal is to bridge the current gap: to make systems thinking accessible and relevant for social changemakers by rendering a traditionally complex topic clear, digestible, simple, and contextual, so that the community of practitioners shares a common language and an elevated understanding. Ideally, social entrepreneurs and their funders would use these tools to move down a "systems thinking" learning curve that often takes years of experimentation and implementation to descend. The urgency of the problems we face requires more accelerated deployment of deeply intentional systems change.&#13;
&#13;
We have prepared a set of educational tools as a bridge, leveraging the expertise of MIT (a leading seat of systems thinking in the US) and Harvard Business School (with deep strength in social entrepreneurship). We will build content and intellectual property that equips HBS and MIT with accessible tools, and we will test and refine these tools in several settings.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Voter Registration: A Security and Cryptography Perspective</title>
<link href="https://hdl.handle.net/1721.1/145035" rel="alternate"/>
<author>
<name>Gerbaud, Andrés Fábrega</name>
</author>
<id>https://hdl.handle.net/1721.1/145035</id>
<updated>2022-08-30T03:23:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Voter Registration: A Security and Cryptography Perspective
Gerbaud, Andrés Fábrega
Security and transparency of voter registration systems are crucial properties that any electoral system must satisfy: without robust guarantees on the underlying voter data, trust in election results—and the system as a whole—is severely impacted.&#13;
&#13;
In this thesis, we study two fundamental problems related to the security of voter registration. First, we formalize voter registration systems by providing a set of high-level definitions that characterize these systems in a general sense. To our knowledge, this is the first formal treatment of this sub-field of election security, which is (surprisingly) often neglected by the academic community. By abstracting away low-level implementation details, our work provides a clearer understanding of these complex systems; furthermore, it lays the formal groundwork and definitions useful for designing secure technical protocols. Thus, we hope to pave the way for more research in this area.&#13;
&#13;
Second, we give a brief overview of ongoing work on a new design for voter registration systems with stronger transparency guarantees, in which voters are able to independently verify that their data has not been tampered with, even in the presence of untrusted election officials. We hope that our eventual system increases voter confidence in the electoral system and helps detect (and thus mitigate) attacks that target voter registration databases.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Syntactic Transfer for Low-Resource Machine Translation with Contextual Parameter Generation</title>
<link href="https://hdl.handle.net/1721.1/145034" rel="alternate"/>
<author>
<name>O'Connor, Joe</name>
</author>
<id>https://hdl.handle.net/1721.1/145034</id>
<updated>2022-08-30T03:08:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Syntactic Transfer for Low-Resource Machine Translation with Contextual Parameter Generation
O'Connor, Joe
The advent of large pretrained models has led to paradigm-shifting improvements throughout natural language processing. For many tasks, state-of-the-art results are now achieved by taking one of these large pretrained models and adapting it in some way for use on the desired task. While this approach has been successful on a broad range of tasks, that success is not evenly distributed within tasks—most of the gains are in high-resource settings, i.e., tasks and languages for which there is a large amount of labeled data available. Some tasks—and many languages—lack sufficient labeled data for these approaches to work well. Recently, there has been much interest in methods that could potentially close this gap and improve performance in low-resource settings. In this work, I demonstrate a novel method for adapting large pretrained models that involves dynamically generating additional parameters for the model based on an informative representation of the task and show that this method works especially well on the task of low-resource machine translation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Study on the Tradeoffs of Action Recognition Models for Industry</title>
<link href="https://hdl.handle.net/1721.1/145033" rel="alternate"/>
<author>
<name>Snowdon, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/145033</id>
<updated>2022-08-30T03:56:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Empirical Study on the Tradeoffs of Action Recognition Models for Industry
Snowdon, Jack
Action recognition has attracted intense attention in the last decade. Advances in deep learning and the availability of large-scale video datasets have drastically improved its capabilities, attracting interest from industry with a variety of use cases. My work at the MIT-IBM Watson AI Lab presents a survey analyzing the performance of various existing action recognition methods in the context of construction site analysis and workplace safety. The analyzed pretrained action recognition models were developed in the lab and encompass a range of popular techniques in the field, each with its own strengths. In addition to a general pipeline for training these models on novel datasets, an easy-to-follow guide is presented that makes model recommendations for both the construction site and workplace safety tasks. Although definitive recommendations are difficult without knowing the specific hardware constraints, the results obtained and the accompanying discussion offer insight into what is feasible and effective with current technology in these problem spaces.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CellMincer: Self-Supervised Denoising of Functional Imaging</title>
<link href="https://hdl.handle.net/1721.1/145032" rel="alternate"/>
<author>
<name>Wang, Brice</name>
</author>
<id>https://hdl.handle.net/1721.1/145032</id>
<updated>2022-08-30T03:51:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">CellMincer: Self-Supervised Denoising of Functional Imaging
Wang, Brice
All-optical electrophysiology offers accessibility and scalability in observing neuronal activity beyond what can feasibly be achieved with patch clamp techniques. However, imaging platforms like Optopatch suffer from excessive detection noise, photobleaching, and an inability to organically segment and isolate neurons of interest. These drawbacks preclude its use as a true substitute for direct electrophysiological measurement, but recent advances in deep neural network inference may enable computation to recover the difference in data quality. To date, few robust denoising algorithms have been designed and implemented for voltage imaging data, in part because the lack of ground truth imaging complicates the task of training such a model. This thesis introduces CellMincer, a self-supervised deep neural network for denoising functional imaging. By exploiting a combination of spatiotemporally local contexts and precomputed global features, CellMincer outperforms comparable algorithms at denoising several modes of optical electrophysiology on a range of metrics, including measures of biologically relevant features.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Up Fact-Checking Using the Wisdom of Crowds</title>
<link href="https://hdl.handle.net/1721.1/145031" rel="alternate"/>
<author>
<name>Allen, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/145031</id>
<updated>2022-08-30T03:42:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Scaling Up Fact-Checking Using the Wisdom of Crowds
Allen, Jennifer
Professional fact-checking, a prominent approach to combating misinformation, does not scale easily. Furthermore, some distrust fact-checkers because of alleged liberal bias. We explore a solution to these problems: using politically balanced groups of laypeople to identify misinformation at scale. Examining 207 news articles flagged for fact-checking by Facebook algorithms, we compare accuracy ratings of three professional fact-checkers who researched each article to those of 1128 Americans from Amazon Mechanical Turk who rated each article’s headline and lede. The average ratings of small, politically balanced crowds of laypeople (i) correlate with the average fact-checker ratings as well as the fact-checkers’ ratings correlate with each other and (ii) predict whether the majority of fact-checkers rated a headline as “true” with high accuracy. Furthermore, cognitive reflection, political knowledge, and Democratic Party preference are positively related to agreement with fact-checkers, and identifying each headline’s publisher leads to a small increase in agreement with fact-checkers.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resilience and Its Discontents: Risk, Temporality, and a Climate Change Crisis</title>
<link href="https://hdl.handle.net/1721.1/145028" rel="alternate"/>
<author>
<name>Shi, Kevin Kaiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/145028</id>
<updated>2022-08-30T03:32:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Resilience and Its Discontents: Risk, Temporality, and a Climate Change Crisis
Shi, Kevin Kaiwen
In the aftermath of Hurricane Sandy in 2012, significant funding and numerous initiatives have sought to prepare the United States for future storms and flooding. Resilience has emerged as one of the most important concepts in discourse about climate change and urban planning, central to decisions about investments, urban development, and budgeting. It has also taken shape as a large industry, with conferences, professionals and advisors, and indices and metrics all meant to improve the preparedness and resiliency of entities ranging from individuals and companies to cities and regions. However, resilience is not the purely technocratic and objective metric it is often presented as. This thesis examines resilience as a political and economic project, a technology for governing risks associated with climate change. In the process of this governance, the assumptions and understandings implicit within resilience, and deeply held by those working in the field, produce uneven outcomes. Unlike other paradigms such as sustainability and mitigation, resilience aims not to solve the problem of global climate change but rather to protect against its impacts in the short term. Engaging with fields such as geography; science, technology, and society (STS); and anthropology, I argue that resilience has a temporal element: rather than aiming to solve the problem of climate change, it postpones that future in an attempt to preserve an endless present. This present is portrayed as a crisis, and resilience is framed as the way to prevent further destabilization. Crisis, however, much like Naomi Klein's disaster capitalism complex, is enormously profitable for a number of elite stakeholders, including real estate developers and insurance corporations. In opposition to a resilience project in New York City, opponents have called for true resiliency.
My thesis attempts to hold resilience accountable and create space for meaningful responses, ones that center solutions and climate justice.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>REMEMORY: Territorial Justice in Both Americas</title>
<link href="https://hdl.handle.net/1721.1/145027" rel="alternate"/>
<author>
<name>Hoyle, Rajan</name>
</author>
<id>https://hdl.handle.net/1721.1/145027</id>
<updated>2022-08-30T03:02:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">REMEMORY: Territorial Justice in Both Americas
Hoyle, Rajan
This thesis examines the resistance tactics used by collectives of Black and Indigenous women in the Americas to fight for housing, land, and territorial justice. I put organizers from the Black Fraternal Organization of Honduras (OFRANEH) in conversation with ancestral miners in Colombia's Cauca region, and finally with Moms 4 Housing in West Oakland, to reveal themes and opportunities for solidarity and knowledge sharing across their struggles and the diaspora. Specifically, I work to tease out the limits and possibilities of property, land, and territory as viewed by each of these collectives, and what cues planning might take from these insights. My research takes a journalistic and documentarian approach and leans on theory from the traditions of Black Feminist Geography, Decolonial and Postcolonial Thought, as well as recent literature on property rights. In Chapter 1, I outline the structure of the paper, provide a review of the literature, and discuss my methods. Chapter 2 develops a thorough case study of Triunfo de la Cruz v. Honduras, a case that OFRANEH took all the way to the Inter-American Court of Human Rights. Chapter 3 is a case study of La Toma, Colombia, and the Black artisanal miners who organized La Marcha de los Turbantes, in which 80 women marched for 10 days and 350 miles to Bogotá to demand an end to unpermitted industrial mining along the Ovejas River. Chapter 4 is a case study of West Oakland, USA, and Moms 4 Housing, who occupied a real-estate-owned residential property to bring attention to the real estate speculation crisis in Oakland. Finally, in Chapter 5, I conclude and posit that the planning work from below undertaken by Black women's collectives is under-interrogated, and that the Black women who have led this work for generations have too often been erased from the narrative of struggle.
I advocate for a recentering of resistance narratives, the development of solidarity networks across and throughout the Black diaspora, and more expansive and culturally responsive approaches to planning around property, land, and territorial justice in Black communities in the Americas and beyond.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>(Re)envisioning Land and Power: A Fight for Community Ownership + Control in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/145026" rel="alternate"/>
<author>
<name>García López, César</name>
</author>
<id>https://hdl.handle.net/1721.1/145026</id>
<updated>2022-08-30T03:28:35Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">(Re)envisioning Land and Power: A Fight for Community Ownership + Control in Massachusetts
García López, César
This thesis describes a client-based project I undertook in service to the Healthy Neighborhood Study Research Consortium (HNSRC) to help them develop a toolkit for community ownership + control. The toolkit explores what it takes to carry out community-led transformation for collective spaces. The toolkit came together through a participatory action research (PAR) process that built on years of neighborhood-level work carried out by folks committed to transforming their communities.&#13;
&#13;
My role in helping develop the toolkit was three-fold. First, I undertook archival and case study research to understand the historic and present-day conditions that produced one parcel of vacant land in Boston, as a way of adding contextual information to the toolkit. Second, I facilitated Learning and Innovation for Neighborhood Change (LINC) Lab working sessions with resident researchers, members of the HNSRC who served as my advising committee, to surface insights into community land issues. Third, I synthesized my archival and case study research together with HNS insights to draft the toolkit itself. The result is a joint-authored, action-oriented HNS toolkit that aims to help community members better understand and navigate the process of taking community control over land.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visualizing Oil in Aramco World Magazine: Public Relations and Corporate Photography from 1949-1960</title>
<link href="https://hdl.handle.net/1721.1/145022" rel="alternate"/>
<author>
<name>Elnozahy, Mariam</name>
</author>
<id>https://hdl.handle.net/1721.1/145022</id>
<updated>2022-08-30T03:33:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Visualizing Oil in Aramco World Magazine: Public Relations and Corporate Photography from 1949-1960
Elnozahy, Mariam
In 1949, almost fifteen years after the first oil well was discovered by a group of American wildcatters on the Eastern Arabian Peninsula, the Arabian American Oil Company (Aramco) ramped up its public relations campaign by launching a company house magazine with a strong photographic program. The publication, Aramco World Magazine, was initially produced out of the New York office as a way to communicate company activities to employees. As the decade progressed, the magazine expanded its scope to serve a broader audience, covering trivia about the Middle East, world events, lifestyle tips, and, of course, oil operations.&#13;
&#13;
This thesis examines how the images circulated in the magazine worked towards the company’s public relations aim of creating a “favorable business climate” for the oil company and the industry. It demonstrates how, through its circulation, the magazine concretized a culture of corporate citizenship among employees and countered negative sentiment directed at the company both from within and abroad. It concludes that the images depicting a range of company operations, from compound activities and personnel portraits to infrastructure projects and oil machinery, conjured a fantasy of oil, one that could appeal to American expatriate employees and their families and buttress the company’s role as a leading global enterprise. The analysis in this thesis not only reveals how the world of Aramco was visualized through the magazine, but also identifies the actors who engineered this visualization and the corporate community that was influenced by these visuals.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hyperbolic graph embedding of magnetoencephalography brain networks to study brain alterations in patients with subjective cognitive decline</title>
<link href="https://hdl.handle.net/1721.1/145021" rel="alternate"/>
<author>
<name>Baker, Cole</name>
</author>
<id>https://hdl.handle.net/1721.1/145021</id>
<updated>2022-08-30T03:23:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Hyperbolic graph embedding of magnetoencephalography brain networks to study brain alterations in patients with subjective cognitive decline
Baker, Cole
Identifying subtle changes in brain activity in the early stages of pathology is crucial for gaining understanding of the causes and mechanisms of neurodegenerative diseases such as Alzheimer’s disease (AD). Mapping high dimensional brain connectivity information to a lower dimensional latent space can allow quantitative analysis of the subtle changes in brain activity and create information-rich inputs to downstream classification tasks. Using a Hyperbolic Graph Convolutional Network (HGCN), we embed functional brain connectivity graphs derived from magnetoencephalography data to a Poincare disk instead of traditional Euclidean space. The Poincare disk is a negatively curved unit disk that encourages a continuous tree-like (and low-dimensional) embedding where paths between sibling nodes pass through a more central parent node. This model allows scale-free graphs to be embedded into 2 dimensions with low distortion while maintaining a conformal mapping of angles to Euclidean space. The Poincare model is particularly useful for neuroscientific analysis, as brain networks are generally scale-free, and the low dimensional mappings can facilitate learning despite the typically small datasets that are available in the field. The embeddings provide a parsimonious description of both similarity and hierarchy, which can be used to study the role of individual brain regions and known functional subnetworks, such as the default mode network (DMN) and ventral attention network (VAN). We used the hyperbolic embeddings to assess MEG brain network alterations in subjects with Subjective Cognitive Decline, a pre-clinical precursor to AD in which the subject cannot be objectively diagnosed through traditional neuropsychological testing. Poincare embeddings were used to classify subjects’ disease state and identify functional changes in the interconnectivity of several subnetworks as well as the overall hierarchical placement of those networks.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning hydrodynamic coefficient databases for vortex induced vibration prediction of marine risers using sparse sensor measurements</title>
<link href="https://hdl.handle.net/1721.1/145020" rel="alternate"/>
<author>
<name>Mentzelopoulos, Andreas P.</name>
</author>
<id>https://hdl.handle.net/1721.1/145020</id>
<updated>2022-08-30T03:21:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning hydrodynamic coefficient databases for vortex induced vibration prediction of marine risers using sparse sensor measurements
Mentzelopoulos, Andreas P.
Semi-empirical models are currently the state-of-the-art technology for flexible cylinder vortex induced vibration (VIV) predictive modelling. Accurate prediction of the structural response relies heavily on the accuracy of the acquired hydrodynamic coefficient database. Due to the large number of inputs required, the construction of systematic hydrodynamic coefficient databases from rigid cylinder forced vibration experiments can be time-consuming or even intractable. An alternative approach has been implemented in this work to improve flexible cylinder VIV prediction by machine-learning optimal parametric hydrodynamic databases using physical measurements along the structure. The methodology is applied to a straight riser in uniform flow and extended to non-straight riser configurations and non-uniform incoming flow profiles. Moreover, database inference is extended to using direct sparse sensor measurements along the structure. Specifically, a 19-dimensional parametric hydrodynamic coefficient database is obtained for: (i) a straight riser in uniform flow (using either displacement or strain data); (ii) a straight riser in stepped uniform flow; (iii) a straight riser in sheared flow; (iv) a catenary riser in uniform flow at various incidence directions between the catenary plane and the incoming flow stream; and (v) a stepped (2-diameter) riser in uniform flow. The predicted amplitude and frequency responses, using the extracted databases, are compared with observed experimental results.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solutions to the Generalized UAV Delivery Routing Problem for Last-Mile Delivery with Societal Constraints</title>
<link href="https://hdl.handle.net/1721.1/145019" rel="alternate"/>
<author>
<name>Gaba, Farri T.</name>
</author>
<id>https://hdl.handle.net/1721.1/145019</id>
<updated>2022-08-30T03:11:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Solutions to the Generalized UAV Delivery Routing Problem for Last-Mile Delivery with Societal Constraints
Gaba, Farri T.
Unmanned aerial vehicles (UAVs) are becoming an increasingly popular transportation modality for improving efficiency, cutting costs, and raising customer service levels amongst last-mile industry players and academics alike. UAV operations planning is uniquely challenging because of UAV capacity and range constraints, the host of aeronautic regulations UAVs are subject to, and the significant externalities UAVs impart on the communities they operate amongst, which could materialize into additional operational restrictions. Previous research contributions have focused on the vehicle routing, environmental life-cycle analysis, economics, and policy implications of unmanned aerial vehicles for last-mile delivery (UAV-LMD), typically in isolation.&#13;
&#13;
This thesis complements previous efforts by adopting an inter-disciplinary perspective on anticipated UAV-LMD operations. It first performs a survey of the most significant societal and regulatory barriers facing UAV-LMD today and in the coming decades and offers insight into potential regulatory pathways that could constrain operations. Second, it extends existing UAV routing methodologies to capture these constraints and UAV-specific routing features in three competing routing models, offering a comparative analysis of their performance and identifying performance advantages of a heuristics-based routing approach. Finally, this thesis performs a sensitivity analysis of societal and regulatory constraint intensity, technology progression, and demand density on realistic demand instances. It finds that, independent of demand density, societal and regulatory constraint intensity as well as UAV technology progression levels drive UAV-LMD operational costs, with the potential to render UAV-LMD uncompetitive compared to traditional fulfillment modalities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Align the Supports of Distributions</title>
<link href="https://hdl.handle.net/1721.1/145018" rel="alternate"/>
<author>
<name>Tong, Shangyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/145018</id>
<updated>2022-08-30T04:02:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning to Align the Supports of Distributions
Tong, Shangyuan
This thesis studies the problem of aligning the supports of distributions. Compared to the existing work on distribution alignment, support alignment does not require the densities to be matched. We propose symmetric support difference as a divergence measure to quantify the mismatch between supports. We show that select discriminators (e.g. discriminator trained for Jensen–Shannon divergence) are able to map support differences as support differences in their one-dimensional output space. Following this result, our method aligns supports by minimizing a symmetrized relaxed optimal transport cost in the discriminator 1D space via an adversarial process. Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance. We quantitatively evaluate the method across domain adaptation tasks with shifts in label distributions. Our experiments show that the proposed method is more robust against these shifts than other alignment-based baselines.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shear Wall Layout Optimization in Coordination with Architectural Floor Plans</title>
<link href="https://hdl.handle.net/1721.1/145017" rel="alternate"/>
<author>
<name>Philps, Davis Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/145017</id>
<updated>2022-08-30T03:56:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Shear Wall Layout Optimization in Coordination with Architectural Floor Plans
Philps, Davis Sebastian
The cement industry represents the third largest source of carbon-dioxide emissions in the world. A majority of this cement is used in reinforced concrete construction for the creation of building structures and infrastructure. The increasing urbanization of cities is driving the need to build significantly more tall buildings. Consequently, the production of concrete is continuing to increase. As buildings grow taller the lateral system becomes a more significant component of the structural system. Shear walls are a prominent lateral system, but they are large and cumbersome components and thus present architects with a challenge when trying to position shear walls in their floor layouts. Optimizing the lateral system of a tall building is critical, as its material usage increases exponentially with height. Currently, the shear wall design process is inefficient and very cyclical. As a shear wall layout’s structural behavior is dependent on its topology, when the architect is developing the floor layout they have limited insight into how the associated shear wall layout will perform structurally. Thus, it is unlikely that the solution arrived upon will be optimal. The goal of this research is to create an optimization method to reduce the material usage of shear wall layouts that operates quickly enough that it could be integrated into a design tool to mitigate the cyclical design process currently being used. This paper implements a variation on the level set method. As the problem is very discontinuous due to the nature of the shear walls, we can use the level set method to create a more continuous objective function. This will help with the computational performance of the optimization, as well as allow us to later add additional functionality by being able to access the gradients of the objective function. The method presented in this paper is tested by performing method experiments and creating design applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable methods for navigating large annotation collections in NB</title>
<link href="https://hdl.handle.net/1721.1/145012" rel="alternate"/>
<author>
<name>Schoen, Alizee</name>
</author>
<id>https://hdl.handle.net/1721.1/145012</id>
<updated>2022-08-30T03:04:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Scalable methods for navigating large annotation collections in NB
Schoen, Alizee
NB is an online tool where students can annotate readings and lecture notes while also discussing them with classmates and instructors. Currently, classes using NB have hundreds of students, which results in thousands of annotations per document. After discussing with NB users and looking at other platforms, we identified methods for students to navigate through these large collections of annotations. These methods include per-document statistics and the ability to endorse a comment, follow authors, and minimize the number of comments on a document. Once these features were implemented, we studied their impact on NB by collecting user engagement data and feedback.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experiment in Piety: The Three Domed Suhrawardy Tombs at Uchch Sharif</title>
<link href="https://hdl.handle.net/1721.1/145011" rel="alternate"/>
<author>
<name>Nisar, Muhammad Hasan</name>
</author>
<id>https://hdl.handle.net/1721.1/145011</id>
<updated>2022-08-30T03:59:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Experiment in Piety: The Three Domed Suhrawardy Tombs at Uchch Sharif
Nisar, Muhammad Hasan
The extant Khanqahs and Dargahs at Uchch archive the cultural imprint left by Jalāl al-Dīn Bukhāri (d. 1291) and his descendants on the urban fabric of the city. After the arrival and settlement of Bukhāri in the city, at the orders of his spiritual teacher Bahā’ al-Dīn Zakariya at Multan, the city emerged as a stronghold of the Suhrawardy Sufi order. Owing to its location on the peripheries of the Sultanate, and on the convergence of various major trade and military routes, the city was also a popular refuge for travelers, Sufis, deposed princes, artisans, and poets either escaping the onslaught of the Mongols in Central Asia during 13th century or seeking new patronage.&#13;
&#13;
Though enigmatic in their own right, the 14th-15th century monumental tombs of Bahā’ al-Halīm, Bibī Jiwindī, and Ustād Nuriyā at Uchch have not been the subject of any dedicated study and have generally been ignored in scholarship. To remedy this dismissal, this thesis will analyze the monumental tomb architecture to demonstrate the Suhrawardy order’s attempt at entrenching itself at Uchch and establishing it as the new center of Suhrawardy learning and pilgrimage. Formally, this corpus operates within the Central Asian building traditions of Islamic mausolea, drawing inspiration from Seljuq and Samanid monuments. However, these Central Asian forms are made local through the distinct use of brick, differences in the structural fabric, and an indigenous program of ornamentation that predates the arrival of Islam in the Indus Valley.&#13;
&#13;
The orthodoxy of Jalāl al-Dīn Bukhāri, and of his grandson who was also the future leader of the Khanqah Makhdūm Jahāniyān Jahāngasht, can be correlated to the modest tomb architecture used during their lifetime in which they were ultimately interred. Rejecting the indulgent behaviors of their Multani Suhrawardy counterparts, who commissioned monumental architecture of commemoration, the architecture at Uchch for the first three generations of the Bukhāri line of saints maintained a material modesty. However, their leadership relented in its orthodoxy when the Makhdūm Jahāniyān Jahāngasht’s younger brother, Rajān Qattāl, assumed leadership of the Khanqah in the late fourteenth century. This thesis will demonstrate that this revised attitude towards monumentality happened due to the unorthodoxy of the order’s leadership. Additionally, the introduction of this monumental program of architecture was a conscious attempt on the part of the leadership to establish Uchch as the new center for Suhrawardy learning.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Viscous Grounds: Planning for Friction across the Trans-Alaska Pipeline, 1968-1981</title>
<link href="https://hdl.handle.net/1721.1/145010" rel="alternate"/>
<author>
<name>Rau, Lasse</name>
</author>
<id>https://hdl.handle.net/1721.1/145010</id>
<updated>2022-08-30T03:08:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">On Viscous Grounds: Planning for Friction across the Trans-Alaska Pipeline, 1968-1981
Rau, Lasse
This thesis interrogates the planning and mediation processes of environmental protection, temporary worker accommodation, and indigenous land claims prior to, during, and after the construction of the Trans-Alaska Pipeline in the mid-1970s. By studying three different evidentiary regimes, this thesis posits that the viscous grounds of these environmental, architectural, and cultural terms were modeled to be included as frictions in a predisposed system of planning. Unraveling at the height of cultural shifts around gender, indigenous identity, and conservation, the planning of the Trans-Alaska Pipeline represents a shift in the negotiation of infrastructures from a regulating process of approval to a calibrating process of expertise, coercion, and predisposition.&#13;
&#13;
Revealing the pipeline as a paradigmatic infrastructure valued in terms of the viscosity of oil, the thesis argues that it provided the solid grounds to negotiate the far more fluid externalities of environment, comfort, and cultures. Its process of mediation is read as bringing different types of mobility to clash: The fixity of the pipeline, the fluctuating uncertainties of boom-and-bust cycles, the shifting grounds of environmental protection, the temporariness of workers and their accommodations, and the relationship of indigenous Alaskans to land. Situated amidst these regimes of temporality and tenure, this thesis analyzes three discourses that protruded from their clash: (1) The management of the environment and its crisis through economic models, new legal systems, and corporate publicity, (2) The urban and architectural control of culture within workers’ housing owing to the biopolitics of comfort, and (3) The trading-off of indigenous knowledge through anthropological mapping of native land use. In doing so, the thesis submits planning across the Trans-Alaska Pipeline and its aftermath as a force that valorizes the contractual relationship between states and their subjects.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the design of the retail payment system: Focusing on the retail payment sector in Japan</title>
<link href="https://hdl.handle.net/1721.1/145009" rel="alternate"/>
<author>
<name>Sugio, Yuya</name>
</author>
<id>https://hdl.handle.net/1721.1/145009</id>
<updated>2022-08-30T04:08:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigating the design of the retail payment system: Focusing on the retail payment sector in Japan
Sugio, Yuya
Globally, a variety of innovative technologies are emerging and traditional economic activities are gradually shifting to the digital economy. Among these, in the retail payment sector, which deals with customer contact and payment information, a trend to review interbank retail payment systems is occurring in many countries. The UK initiative has led the way, and similar efforts to instantiate and remake retail payment systems into new systems have been underway in various countries. One of the reasons for the reviews is that, from the user's point of view, there are many aspects of payment services provided by companies that are not user-friendly. In the Japanese retail payments sector, there are various issues such as a lack of interoperability, and the government and the banking industry are working to improve these issues.&#13;
&#13;
This paper focuses on providing recommendations for the Japanese case. It examines the state of the retail payment systems, considering the payment systems as a quasi-public social infrastructure that can affect all industries, rather than simply a system in the financial sector. More specifically, this paper focuses on the interbank retail fast payment systems and mobile payments based on it, while taking a broad view of the retail payment system, including its regulatory framework. There are various stakeholders with different perspectives in the retail payment system, and the central bank, with its neutral perspective, may be the best entity to provide the system. In reviewing the retail payment system, it is desirable for stakeholders to compare multiple design options and make decisions after clarifying the performance and functions they need. In the Japanese case, the best design option in the short term would be to utilize the banking industry's Cotra system while applying regulations to ensure interoperability, and in the long term, the central bank could provide the system, including the issuance of Central Bank Digital Currency (CBDC). This paper aims to provide a new perspective to stakeholders of the Japanese retail payment system and contribute to the discussion on the future review of it.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The story behind the output: Enhancing transparency in design research through flexible and adapted strategies</title>
<link href="https://hdl.handle.net/1721.1/145008" rel="alternate"/>
<author>
<name>Chíncaro Donayre, Angélica Graciela</name>
</author>
<id>https://hdl.handle.net/1721.1/145008</id>
<updated>2022-08-30T03:28:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The story behind the output: Enhancing transparency in design research through flexible and adapted strategies
Chíncaro Donayre, Angélica Graciela
Transparency is an integral element in achieving trustworthiness in qualitative research. Current transparency standards are generally built for academic contexts. They often require a substantial investment of time in explaining the rationale behind every step of the research process, synthesis being recognized as the most complex stage to follow, document, and communicate. &#13;
&#13;
Design consultancies deal with different constraints from those faced by academic researchers, including tight deadlines and clients’ participation, creating potential barriers to adopting the same kind of transparency strategies in the consultancy context. With this in mind, the exploration began by questioning: are transparency strategies currently being applied in the design consultancy world? If so, how? If not, why not? And how could design researchers be helped to do so? &#13;
&#13;
The thesis used semi-structured online interviews with design research professionals to determine a broad set of themes relevant to the research questions. The subsequent identification of challenges and opportunities to apply transparency in online research led to (1) a new definition of transparency adapted to the design consultancy context, (2) a set of motivations to be intentionally transparent, and (3) a toolbox with strategies to integrate and improve transparency in different remote project situations a design researcher may face. This toolbox is meant to be collaborative, flexible, and fun while providing a stepping stone to purposefully start, encourage, and continue the transparency conversation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Integrated Design and Management Program for Taiwan</title>
<link href="https://hdl.handle.net/1721.1/145007" rel="alternate"/>
<author>
<name>Hsieh, Chieh</name>
</author>
<id>https://hdl.handle.net/1721.1/145007</id>
<updated>2022-08-30T04:05:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Integrated Design and Management Program for Taiwan
Hsieh, Chieh
As technology continues to advance, the problems that companies encounter in product development are becoming more and more complex, and they can no longer be solved with technology or knowledge from a single field. To meet the need for cross-domain problem-solvers, many top universities in Europe and the United States have created combined design, engineering, and business programs. This research names this kind of program an 'integrated design' program; it aims to train leaders with cross-disciplinary talents. As the 21st largest economy globally, Taiwan has been widely known for its excellent OEM and IC manufacturing in the past few decades. In recent years, both the government and private enterprises have invested heavily in creating brands, hoping to take these brands international for greater economic benefit. This has substantially increased the demand for integrated design talent in the product design and production process. This research analyzes whether Taiwan needs this type of program from several perspectives: the individual needs of students, both personal and professional, and Taiwan's brand potential, given the significant demand for integrated design talent in Taiwan's business community. After confirming that Taiwan is suitable for developing this program, the research reviews similar programs at several well-known universities globally, formulates a program structure tailored to Taiwan, and writes an implementation plan. Finally, a case study is conducted on the NTU D-school, the program in Taiwan most similar to an integrated design program.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring a novel cable based architecture for an agricultural robotics platform</title>
<link href="https://hdl.handle.net/1721.1/145006" rel="alternate"/>
<author>
<name>Mohan Kumar, Jayanth</name>
</author>
<id>https://hdl.handle.net/1721.1/145006</id>
<updated>2022-08-30T03:50:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Exploring a novel cable based architecture for an agricultural robotics platform
Mohan Kumar, Jayanth
This thesis examines a potential new architecture for agri-robotics. A series of architectural decisions are examined and evaluated against a pre-populated problem space focusing on labour-intensive, low-land-use crops. The evaluation also includes performance metrics at an L1 level and an evaluation of the solution space using methods from MDO. The architecture is then compared to existing and near-future alternatives, and future work is suggested.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Everyone Deserves a Seat at the Table: The Role of Participatory Design in Reimagining our Food Systems for Greater Equity and Resilience</title>
<link href="https://hdl.handle.net/1721.1/145005" rel="alternate"/>
<author>
<name>Martin, Cierra</name>
</author>
<id>https://hdl.handle.net/1721.1/145005</id>
<updated>2022-08-30T03:37:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Everyone Deserves a Seat at the Table: The Role of Participatory Design in Reimagining our Food Systems for Greater Equity and Resilience
Martin, Cierra
This study seeks to explore how design that is participatory and people-powered can confront one of the most pressing challenges of our time: ending hunger. The main research objectives are to develop a deeper understanding of why the United States is failing to address hunger, to explore if and how participatory design practices are being used today to advance community leadership within local food systems and to better understand what barriers and opportunities exist when leading this work today. &#13;
&#13;
To meet these goals, a literature review was conducted along with a mixed-methods study that included qualitative interviews with three core populations: food leaders, participatory designers, and leaders using participatory design in the food system. Interviews were followed by two surveys to expand and validate interview findings.&#13;
&#13;
Participants suggested that food insecurity is not about food, but rather income inequality, systemic racism, and a lack of social services. While individuals and organizations are motivated to address these root causes, too many actors benefit from the current system, creating a lack of political will for change. Organizations also expressed a desire to center people with lived expertise, but what this looks like and what constitutes participation varies widely. &#13;
&#13;
What became increasingly evident through this research is that there is no “recipe” for meaningful, authentic food systems change. Participatory design is just one tool for understanding the problem and creating sustainable impact. Truly equitable, participatory design is a journey that has to be designed to fit the needs of each individual community. However, certain “ingredients” are essential to every process and cannot be left out or substituted, including the need to establish trust, build relationships, and show up authentically in ways that acknowledge and confer power. &#13;
&#13;
The outcome of this thesis is a participatory mise en place for making equitable change in the food system, which includes a set of ingredients that food system and design leaders could incorporate as they seek to cook up change in the food system. &#13;
&#13;
This study serves as an actionable resource for understanding the prerequisites for community-driven change as more organizations strive to work directly with people with lived experience, and aspire to redesign food systems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operational Scheduling of Deep Space Radars for Resident Space Object Surveillance</title>
<link href="https://hdl.handle.net/1721.1/145004" rel="alternate"/>
<author>
<name>Blanks, Lindsey</name>
</author>
<id>https://hdl.handle.net/1721.1/145004</id>
<updated>2022-08-30T03:19:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Operational Scheduling of Deep Space Radars for Resident Space Object Surveillance
Blanks, Lindsey
As space becomes increasingly congested and contested, new capabilities of rivals to threaten vital assets and exploit the area for military advantages make it more important than ever for the United States to proficiently track and monitor space traffic and debris. However, currently the system of radars used by the Department of Defense to track objects in deep space operates in a way that is labor intensive, uncoordinated, and inefficient. In this thesis, we address these issues by automating and coordinating the radar scheduling process. We consider several complex radar systems that operate in an asynchronous, distributed environment and target space objects with varying priority levels, time windows, arrival frequencies, and task mission requirements.&#13;
&#13;
We develop a mixed integer program capable of intelligently distributing task requests and building radar slew plans in a way that aligns with user objectives and system characteristics. We solve the optimization problem repeatedly over time, all while receiving and incorporating updated information, new task requests, and available feedback throughout the planning process. We test our methodologies on various tactical military scenarios and show that an optimization-based approach allows us to maintain custody of more space objects, better prioritize high value objects, and reduce operating costs when compared to a baseline greedy algorithm. We conclude that an automatic, centralized way of scheduling is viable and beneficial for use in the Space Situational Awareness (SSA) mission.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Orion – A Machine Learning Framework for Unsupervised Time Series Anomaly Detection</title>
<link href="https://hdl.handle.net/1721.1/145001" rel="alternate"/>
<author>
<name>Alnegheimish, Sarah(Sarah Abdulaziz)</name>
</author>
<id>https://hdl.handle.net/1721.1/145001</id>
<updated>2026-02-02T14:53:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Orion – A Machine Learning Framework for Unsupervised Time Series Anomaly Detection
Alnegheimish, Sarah(Sarah Abdulaziz)
With the recent proliferation of temporal observation data comes an increasing demand for time series anomaly detection. New methods to detect anomalies using machine learning are continuously emerging. However, algorithms alone only solve one aspect of the problem – finding anomalies. Existing systems often fail to encompass an end-to-end detection process, to facilitate comparative analysis of various anomaly detection methods, or to incorporate human knowledge to refine output. This precludes current methods from being used in real-world settings by practitioners who are not machine learning experts.&#13;
&#13;
In this thesis, we introduce Orion, a machine learning framework for unsupervised time series anomaly detection. The framework supports all the steps of the anomaly detection process. It includes a pipeline hub to maintain many state-of-the-art approaches for time series anomaly detection, including statistical and machine learning based methods. Orion logs the entire anomaly detection journey, providing detailed documentation of the status of a signal and anomalies over time. It enables users to analyze signals, compare methods, and investigate anomalies through an interactive visualization tool, where they can annotate events by modifying existing events, creating new ones, and removing them. Using these annotations, the framework aims to leverage human knowledge to improve the performance of the pipeline. We demonstrate the effectiveness and efficiency of Orion through a series of experiments from benchmarking to AutoML on three public time series datasets: NASA, Yahoo, and Numenta. In addition, we showcase the usability of our framework through a study conducted on a real-world use case involving spacecraft experts tasked with anomaly analysis. Orion’s framework, code, and datasets are open-sourced at https://github.com/sintel-dev/Orion.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Professionals in a Soviet America” Federal Housing Policy, the Popular Front, and Architects in Los Angeles, 1919–1947</title>
<link href="https://hdl.handle.net/1721.1/145000" rel="alternate"/>
<author>
<name>Heard, James</name>
</author>
<id>https://hdl.handle.net/1721.1/145000</id>
<updated>2022-08-30T03:02:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">“Professionals in a Soviet America” Federal Housing Policy, the Popular Front, and Architects in Los Angeles, 1919–1947
Heard, James
In the wake of the first Red Scare, Marie Meloney, editor of the women’s magazine “The Delineator,” founded Better Homes in America, a national organization to promote the ideal American home through publications, model homes, and local events. Herbert Hoover, the then-Secretary of Commerce, served as the organization’s first president, operated it as the propaganda wing of the United States’ Commerce Department, and relied on it to rhetorically hitch “American values” to detached, single-family dwellings. After becoming President in 1929, Hoover began to intervene in housing through increasingly direct measures. The complementary trajectories of propaganda and policy coincided at the President’s Conference on Home Building and Home Ownership in 1931, out of which the Federal Home Loan Bank Act was drafted, legislatively establishing the national framework for mortgage lending and normalizing the detached single-family dwelling. This conjunction of form and finance reverberated through congressional discourse and eventually influenced housing restrictions established through the Federal Housing Administration—particularly racial, formal, and stylistic controls.&#13;
&#13;
By the mid-1930s, galvanized by the Great Depression, the Communist Party USA had started organizing a left-liberal Popular Front with architects figured as a vanguard of the professional class. Over the following decade, this network challenged the increasingly hegemonic suburban model of housing. This included labor unions like the Federation of Architects, Engineers, Chemists, and Technicians union, which agitated for public works provisions in New Deal policy and the Hollywood Independent Citizens Committee of the Arts, Sciences, and Professions, a cultural organization that incited architects—alongside other professionals—to protest for an alliance between the United States and the Soviet Union, global nuclear disarmament, and modern housing throughout California. As the second Red Scare accelerated in the postwar period, institutions like the First Unitarian Church of Los Angeles intervened to provide meeting spaces for embattled organizations. While the Popular Front coalition in Los Angeles unravelled by the mid-1950s under the burden of state and federal surveillance, it left behind a built legacy of politically motivated developments throughout the city.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parts-In-Progress</title>
<link href="https://hdl.handle.net/1721.1/144999" rel="alternate"/>
<author>
<name>Kaiser, Kimball</name>
</author>
<id>https://hdl.handle.net/1721.1/144999</id>
<updated>2022-08-30T03:04:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Parts-In-Progress
Kaiser, Kimball
Construction and demolition materials contribute significantly to the waste stream in the United States, with the EPA noting in 2018 that the construction and demolition industries generated 600 million tons of debris. Climate change, continued urbanization with population growth, and increasing demands for new and renovated buildings leave architects with the daunting responsibility of dealing with the repercussions of material usage. Many architectural projects have been developed to recoup construction waste streams, turning discarded materials into useful building materials. However, there is an alternate strategy to address these issues. Design can instead start from the other end of the material stream, accepting that buildings are built with finite life cycles to plan for the disassembly of architecture.&#13;
&#13;
Parts-In-Progress is a design methodology centered around assembly and disassembly, using standard dimensional lumber connected with custom digitally fabricated parts. The assemblies of this design experiment are at the scale of architectural components and furniture. These prototypes are constructed from the same set of materials with a range of connections and joints. Digitally fabricated parts are used as smart jigs that are tools for fabricating said assemblies and guide bolted connections. The fabrication techniques of Parts-In-Progress require minimal manipulation of stock materials, in order to preserve them for maximum reassembly possibilities or alternative reuse.&#13;
&#13;
As a parts project, serialization is seen as an advantage. However, the effectiveness of serialization is not found in the reproduction of a singular part, but is instead hijacked from the existing mass-produced parts of the existing building materials logistics network. Lastly, standardized joints between materials are designed as a range of possible connections around specific dimensional constraints. These variations in connections allow for unpredictable outcomes in their respective assemblies, making it possible to construct anything from the most standard-appearing assemblies to the most abnormal ones. In the terms of Parts-In-Progress, this is the concept of “calculated precarity.”
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Trajectory of Bitcoin using System Dynamics</title>
<link href="https://hdl.handle.net/1721.1/144998" rel="alternate"/>
<author>
<name>Gopalakrishnan, Vignesh</name>
</author>
<id>https://hdl.handle.net/1721.1/144998</id>
<updated>2022-08-30T03:01:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Modeling the Trajectory of Bitcoin using System Dynamics
Gopalakrishnan, Vignesh
Cryptocurrencies today have a cumulative market cap in the trillions of dollars, consume more energy annually than many countries, and show explosive volatility in their prices. Despite the extensive literature studying their rise, few attempts have been made to use a structural or mechanistic modeling approach to describe the dynamics of the ecosystem. In this study, a model has been built using System Dynamics to formulate and describe the mechanisms and decisions affecting the production side, which consists of the mining of blocks into the blockchain, and the market side, which involves actors using cryptocurrency units to serve different ends.&#13;
&#13;
Bitcoin (BTC), the first cryptocurrency, is now over a decade old, and is still the most popular. It has a market cap larger than all the others. It serves well as a template or point of comparison to build models that seek to understand the dynamics of this system, and is hence used as the basis of this study.&#13;
&#13;
The supply or production side of the system focuses on the mechanisms and interactions that determine the mining of blocks awarding BTC. The demand or market side looks to explain the decision-making mechanisms of the different users in the system – classified as ‘chartists’, ‘fundamentalists’, and ‘transacters’ in addition to the miners. The model looks to capture the feedbacks generated by the mining process in the network, the inherent design of the BTC protocol, supply-demand balancing that drives changes in the BTC price, and exogenous factors such as the price of mining hardware, transaction volume and fees, and the cost of energy, and to connect these to the decisions made by the different actors in the system based on an analysis of their costs and benefits.&#13;
&#13;
The model is then calibrated to historic data. Results from this calibration process are discussed. Further refinements and scenarios to be analyzed are suggested.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classification of Auscultation Sounds Using a Smart System</title>
<link href="https://hdl.handle.net/1721.1/144997" rel="alternate"/>
<author>
<name>Kanji, Zahra</name>
</author>
<id>https://hdl.handle.net/1721.1/144997</id>
<updated>2022-08-30T03:06:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Classification of Auscultation Sounds Using a Smart System
Kanji, Zahra
Respiratory diseases are a leading cause of death worldwide. Despite modern medicine, treatment of lung diseases is limited by the tools available to diagnose these disorders, especially in low-resource settings. While tools such as chest X-ray and CT scans are highly accurate, their high cost puts them out of reach for many patient populations. The physical exam is a long-standing, tried-and-true method that provides a low-cost means of diagnosing many common lung diseases, including pneumonia. However, this method is subjective and its sensitivity is limited by operator ability.&#13;
&#13;
Lung sound classification using a digital stethoscope can provide an immediate diagnostic aid for respiratory diseases. The International Conference on Biomedical and Health Informatics (ICBHI) created a sound database in 2017 that is annotated with physician classifications of the lung sounds. In this thesis, artificial intelligence libraries are used in a deep learning architecture to identify and classify the lung sounds. The data set was split into training and test data and evaluated using standard performance metrics: precision 92.3%, accuracy 87.3%, sensitivity (recall) 87.1%, specificity 87.5%, and F1 score 0.89. Because the data set is skewed, the best evaluation metric is the F1 score, which is the harmonic mean of precision and sensitivity. The F1 score was found to be better than other comparable known attempts on this same data set.&#13;
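As a quick sanity check on the reported metrics, the F1 score can be recomputed from precision and sensitivity; this snippet is only illustrative and is not part of the thesis's evaluation pipeline.

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall (sensitivity)."""
    return 2 * precision * recall / (precision + recall)

# Precision and sensitivity reported above: 92.3% and 87.1%.
f1 = f1_score(0.923, 0.871)
print(round(f1, 3))  # 0.896, consistent with the reported F1 of roughly 0.89
```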
&#13;
The space for new, innovative, portable, and affordable diagnostic devices that help patients toward pulmonary health and wellness will likely push further development and acceptance of electronic auscultation. As telemedicine grows, it will also drive up demand for such devices. Other holistic measures used in medicine will likely also be developed as the landscape of healthtech changes what is possible.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Attitudes of Incumbent Manufacturing Workers  &#13;
toward Training Opportunities</title>
<link href="https://hdl.handle.net/1721.1/144996" rel="alternate"/>
<author>
<name>Killada, Lakshmi Amrutha</name>
</author>
<id>https://hdl.handle.net/1721.1/144996</id>
<updated>2022-08-30T03:47:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Understanding the Attitudes of Incumbent Manufacturing Workers  &#13;
toward Training Opportunities
Killada, Lakshmi Amrutha
The number of manufacturing industry jobs has declined in the United States over the past decades. In 1979, there were 19.6 million of these jobs at the manufacturing industry's peak, but by 2019 that number had decreased to 12.8 million—a 35 percent decrease from its peak. Moreover, automation and other innovations in the industry have created a need for a new type of manufacturing worker with a different set of skills. Solutions such as recruiting younger workers, creating diverse pathways for manufacturing jobs, and providing training to incumbent workers have been proposed and implemented over the years; however, some workers are not taking advantage of these training opportunities.&#13;
&#13;
To explore the question of why these workers are not taking advantage of training opportunities, we combined a literature review with quantitative research that included an analysis of a nationally representative survey of manufacturing workers, followed by recommendations to improve the intention to train among manufacturing workers. The intended goal of this investigation was not to determine whether a worker is motivated, but rather to understand the factors that influence workers' motivation to participate in training. &#13;
&#13;
The findings of this study support the idea that simply offering training to all employees is not equally effective: although workers across the board are motivated to take training, their decisions to do so are influenced by various sets of factors. The study identifies these factors and the nature of their influence on the intention to train, with recommendations for how managers can address them to increase worker interest in training.&#13;
&#13;
This study provides a resource for organizations looking to transform their training programs or develop new initiatives by understanding key factors affecting workers' training decisions. It also provides critical recommendations for addressing these factors, bolstering the intention to train among the organization's workers.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal Targeting under Gender Fairness</title>
<link href="https://hdl.handle.net/1721.1/144995" rel="alternate"/>
<author>
<name>Niu, Yumeng</name>
</author>
<id>https://hdl.handle.net/1721.1/144995</id>
<updated>2022-08-30T03:31:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optimal Targeting under Gender Fairness
Niu, Yumeng
While targeted marketing campaigns offer high potential for increased firm profit, they often lack due consideration for fairness among protected demographic groups. We investigate methods to mitigate gender disparities in both the firm's actions and the benefit outcomes in the setting of offer allocations for targeted marketing campaigns. We develop and compare four optimization models to identify optimal policies that maximize the firm's financial return while concurrently satisfying relevant gender fairness conditions. Our results reveal that regulating only the gender disparity in the firm's actions is not sufficient to guarantee that responding customers of either gender receive a similar level of discount benefit. Hence, we recommend that firms design policies by directly solving for the same level of benefit outcomes, rather than the same actions, across genders. Among the four models developed in this thesis, the optimal transport model is the only one that simultaneously meets both the group fairness condition in aggregate and the conditional demographic parity condition within each socioeconomic segment. Our results in the empirical setting show that the optimal policies from the optimal transport model achieve the lowest gender disparity in overall benefit outcomes. These policies also demonstrate the minimum level of firm manipulation across the four models, and provide the most discounts to the most female-concentrated neighborhoods.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uniform Sampling over Level Sets</title>
<link href="https://hdl.handle.net/1721.1/144987" rel="alternate"/>
<author>
<name>Chiu, Erica</name>
</author>
<id>https://hdl.handle.net/1721.1/144987</id>
<updated>2022-08-30T03:34:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Uniform Sampling over Level Sets
Chiu, Erica
In this thesis, we present an MCMC-based method to extract near-uniform samples from a level set of a provided function &#119891; : Rᵈ → Rᵏ . We propose a sequence of unnormalized distributions over Rᵈ with asymptotic convergence to the Hausdorff measure of the level set, therefore resulting in uniform samples. Beyond our formulation’s asymptotic convergence, we demonstrate its practicality by using MCMC to sample a distribution in the sequence for some analytical functions. Finally, we test our sampling method on representative applications related to machine learning, including extracting geometry from neural implicit representations and multi-objective optimization.
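The sampling idea can be sketched in miniature: target an unnormalized density that concentrates mass in a band of width ε around the level set, and run random-walk Metropolis on it. The Gaussian penalty, step sizes, and unit-circle example below are illustrative choices, not the thesis's exact sequence of distributions.

```python
import numpy as np

def metropolis_level_set(f, c, x0, eps=0.05, step=0.2, n_steps=20000, seed=0):
    """Random-walk Metropolis targeting exp(-(f(x)-c)**2 / (2*eps**2)),
    an unnormalized density concentrated near the level set f(x) = c."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    log_p = lambda y: -(f(y) - c) ** 2 / (2 * eps ** 2)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Example: the unit circle in R^2 as the zero level set of f(x) = |x| - 1.
f = lambda x: np.linalg.norm(x) - 1.0
samples = metropolis_level_set(f, c=0.0, x0=[1.0, 0.0])
radii = np.linalg.norm(samples[5000:], axis=1)  # discard burn-in
print(radii.mean())  # concentrates near radius 1
```

By rotational symmetry of this example, the chain spends its time in a thin annulus around the circle; shrinking eps tightens the band around the level set.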
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Transferable are Video Representations Based on Synthetic Data?</title>
<link href="https://hdl.handle.net/1721.1/144986" rel="alternate"/>
<author>
<name>Kim, Yo-whan</name>
</author>
<id>https://hdl.handle.net/1721.1/144986</id>
<updated>2022-08-30T03:01:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">How Transferable are Video Representations Based on Synthetic Data?
Kim, Yo-whan
Action recognition has improved dramatically with massive-scale video datasets. Yet these datasets are accompanied by issues related to curation cost, privacy, ethics, bias, and copyright. By comparison, relatively little effort has been devoted to exploring the potential of synthetic video data. In this work, as a stepping stone toward addressing these shortcomings, we study the transferability of video representations learned solely from synthetically generated video clips instead of real data. We propose a novel benchmark for action recognition in which a model is pre-trained on synthetic videos rendered by various graphics simulators and then transferred to a set of downstream action recognition datasets containing different categories than the synthetic data. Our extensive analysis on this benchmark reveals that the simulation-to-real gap is closed for datasets with low object and scene bias, where models pre-trained with synthetic data even outperform their real-data counterparts. We posit that the gap between real and synthetic action representations can be attributed to contextual bias and static objects related to the action, rather than the temporal dynamics of the action itself.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep pockets: The economics of deep learning and the emergence of new AI platforms</title>
<link href="https://hdl.handle.net/1721.1/144985" rel="alternate"/>
<author>
<name>Borge, Nicholas J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144985</id>
<updated>2022-08-30T03:45:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Deep pockets: The economics of deep learning and the emergence of new AI platforms
Borge, Nicholas J.
Organizations are increasingly faced with decisions about whether, and at what level, to invest in artificial intelligence (AI) in the development of new products and services. Invariably the business case is based on the performance and costs observed from an initial pilot or proof-of-concept, but these projects can be expensive and time consuming. This is particularly true for deep learning, which is the most important machine learning technique of the past decade. Also, the benefits and costs of deep learning systems scale differently with performance and deployment size, which leads to different organizations implementing systems of differing levels of capability.&#13;
&#13;
This thesis addresses two questions. First, we show how the net benefit of implementing deep learning can be calculated a priori, based on prior research on scaling laws for performance. To help motivate and illustrate the analysis, we present a case study of a real deep learning application. Second, we explore the implications of the economics of individual investment decisions for the broader market dynamics.&#13;
&#13;
We show that there are cutoffs whereby higher performance requires larger deployment sizes to be economically viable, that there is an optimal performance level that maximizes economic benefit given a fixed deployment size, and that these dynamics lead to concentration of market demand and the emergence of new platforms.
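The cutoff argument can be illustrated with a toy model in the spirit of the analysis (all constants below are hypothetical, not estimates from the thesis): error falls as a power law in training compute, benefit scales with deployment size times accuracy, and cost grows linearly with compute, so the profit-maximizing compute level grows with deployment size.

```python
import numpy as np

# Toy scaling-law economics; all constants are hypothetical.
a, b = 1.0, 0.3                     # error(C) = a * C**(-b), power-law scaling
unit_value, unit_cost = 100.0, 1.0  # value per deployed unit of accuracy; cost per unit compute

def net_benefit(compute, deployment_size):
    """Deployment-wide value of accuracy minus training cost."""
    error = a * compute ** (-b)
    return deployment_size * unit_value * (1.0 - error) - unit_cost * compute

compute_grid = np.logspace(0, 6, 601)
best = {n: compute_grid[np.argmax(net_benefit(compute_grid, n))] for n in (10, 1000)}
print(best)  # larger deployments justify more training compute (higher performance)
```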
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Naval Submarine Maintenance: An Examination of Areas of Potential Availability Execution Risk</title>
<link href="https://hdl.handle.net/1721.1/144984" rel="alternate"/>
<author>
<name>Valcourt, Matthew T.</name>
</author>
<id>https://hdl.handle.net/1721.1/144984</id>
<updated>2022-08-30T03:04:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Naval Submarine Maintenance: An Examination of Areas of Potential Availability Execution Risk
Valcourt, Matthew T.
In the growing 'Great Power Competition' of the 21st century, the US Navy has faced near-peer competition that it has not experienced in several decades. This competition has ultimately resulted in increased operational strains on the submarine fleet, which have in turn trickled down to affect the nuclear submarine maintenance enterprise. Despite recognition of that strain, problems persist that have significant ramifications for overall submarine fleet readiness. The urgency to consistently complete maintenance availabilities on time, in order to provide combatant commanders with the submarine assets they need when they need them, has become a primary concern of the fleet.&#13;
&#13;
The goal of this thesis is to explore potential areas of execution risk within the submarine maintenance enterprise. The US Navy has a strong incentive to better understand how submarine availability durations can be minimized and execution risk better managed throughout the lifecycle of an asset. In support of that incentive, this thesis first examines the current state of the submarine maintenance enterprise, including the initiatives currently being undertaken to improve performance. Second, it analyzes additional ways in which more efficient submarine maintenance processes can be realized, through the lens of a flexible hose case study involving a comprehensive lifecycle analysis and service life evaluation. In doing so, the thesis investigates supply chain composition, as well as the history of flexible hose employment and service life policy, by way of extensive literature review and stakeholder analysis. Additionally, flexible hose replacement data is quantitatively analyzed to ascertain expected service life and to understand the cost savings and benefits that may be achieved by extending flexible hose service life to achieve parity with non-nuclear surface ships.&#13;
&#13;
The results of this thesis highlight the existence of a number of potential risk areas that can be extrapolated to the enterprise as a whole. Inadequate and incomplete maintenance data structures, sub-optimal maintenance scheduling policies, and lack of employment of innovative technology all threaten to exacerbate the ongoing issues exhibited by the enterprise. However, they also present an opportunity for the Navy to adopt new processes and improve the efficiency of submarine maintenance in the decades to come.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of System Templating into the Rapid Ship Design Environment</title>
<link href="https://hdl.handle.net/1721.1/144981" rel="alternate"/>
<author>
<name>Patterson, Natasha</name>
</author>
<id>https://hdl.handle.net/1721.1/144981</id>
<updated>2022-08-30T03:38:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Integration of System Templating into the Rapid Ship Design Environment
Patterson, Natasha
Navy ship mission systems are increasingly power-intensive and integrated and thus are increasingly dependent on ship system performance, especially the electrical distribution, thermal management, and data control systems. In recognition of this,&#13;
the U.S. Navy has recently worked with the Electric Ship Research and Development Consortium to develop Smart Ship Systems Design (S3D), a ship system design software environment fully integrated with the Navy’s early-stage ship design toolkit. In addition, the associated templating process provides a level of automation to system design, thus providing a capability for the design and analysis of ship systems much earlier in the design process than was previously possible.&#13;
&#13;
Research and an experimental study were performed to construct a flexible, user-friendly methodology that integrates S3D and its templating tools into the Navy’s Rapid Ship Design Environment (RSDE). This project establishes specified use cases and examples that demonstrate this implementation. The use cases represent common functions that RSDE users seek to implement in the ship design process. The targeted use cases include mission system, propulsion train and electrical system design, and associated full ship studies for design exploration. This research is pivotal to the design process and allows common systems and/or plant configurations to be accessible in a familiar format.&#13;
&#13;
To develop this methodology and implement S3D templating in future projects, the methods, steps, and tools used are recorded and analyzed with feedback from various end-state users and technical experts.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic Analysis and Design of CO₂ Capture and Conversion Systems</title>
<link href="https://hdl.handle.net/1721.1/144980" rel="alternate"/>
<author>
<name>Rufer, Simon B.</name>
</author>
<id>https://hdl.handle.net/1721.1/144980</id>
<updated>2022-08-30T03:44:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Technoeconomic Analysis and Design of CO₂ Capture and Conversion Systems
Rufer, Simon B.
Carbon capture and conversion technologies must become economically viable and scale to the gigaton level by 2050 to avoid the most serious effects of a climate crisis. Here we present a techno-economic analysis of two promising capture and conversion technologies: CO₂ capture from ocean water via an electrochemical pH swing, and electrochemical conversion of CO₂ into valuable chemicals. We identify cost drivers of the proposed direct ocean capture process and suggest future work to reduce costs and technological risks. Finally, we examine the sensitivity of the cost of CO₂ conversion with regard to the design of electrode gas diffusion layers. We design and construct a CO₂ conversion reactor for testing next-generation gas diffusion layers. Strong baseline performance of the reactor is validated with a 47% Faradaic efficiency toward C₂H₄ at 200 mA/cm².
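Faradaic efficiency, the headline figure above, is the fraction of total charge passed that ends up in a given product; each C₂H₄ molecule takes 12 electrons in CO₂ reduction. The cell area, run time, and product amount below are hypothetical, chosen only to illustrate the arithmetic.

```python
FARADAY = 96485.33  # Faraday constant, C/mol of electrons

def faradaic_efficiency(n_product_mol, electrons_per_mol, current_A, time_s):
    """Fraction of the total charge passed that went into the product."""
    q_product = n_product_mol * electrons_per_mol * FARADAY
    q_total = current_A * time_s
    return q_product / q_total

# Hypothetical run: 200 mA/cm^2 over a 1 cm^2 electrode for 1 hour.
current = 0.200   # A
time_s = 3600.0
# Moles of C2H4 (12 e- each) corresponding to a 47% Faradaic efficiency.
n_c2h4 = 0.47 * current * time_s / (12 * FARADAY)
print(round(100 * faradaic_efficiency(n_c2h4, 12, current, time_s)))  # 47
```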
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable Synthesis of Solid-State Electrolytes Using Flame-Assisted Spray Pyrolysis</title>
<link href="https://hdl.handle.net/1721.1/144979" rel="alternate"/>
<author>
<name>Muldoon, Valerie L.</name>
</author>
<id>https://hdl.handle.net/1721.1/144979</id>
<updated>2022-08-30T03:58:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Scalable Synthesis of Solid-State Electrolytes Using Flame-Assisted Spray Pyrolysis
Muldoon, Valerie L.
As the impacts of climate change become more apparent, demands for reliable and safe energy storage options are rapidly increasing. The decarbonization of several sectors that are responsible for large amounts of annual greenhouse gas emissions, such as the electricity and transportation sectors, can significantly benefit from improved energy storage options. In recent decades, lithium-ion batteries (LIBs) have become a prominent contender for storing energy in a variety of applications. However, the safety, lifespan, power capability, and energy density need to improve in order for LIBs to be adopted on a wider scale. Unfortunately, LIB technology is reaching its performance limits due to the inclusion of a flammable liquid electrolyte. One solution being considered to improve LIB performance is to replace the liquid electrolyte with a non-flammable solid-state electrolyte (SSE), which enables the use of high voltage cathode materials and a lithium metal anode, thereby greatly improving the power and energy density of the battery. Of the various chemistries being considered for solid-state electrolytes, oxide-based SSEs are advantageous due to their safety and electrochemical and thermal stability. To ensure good rate performance and energy density, oxide-based SSEs must be manufactured in a thin, dense format. However, many current methods used to synthesize oxide-based SSEs are either too expensive and complex or produce powders that require many post-processing steps, which precludes the commercialization of oxide-based SSEs. In this work, flame-assisted spray pyrolysis (FASP), an inexpensive, scalable synthesis method, was used to produce SSE powders which can be further processed to fabricate thick pellet or thin-tape solid-state electrolyte samples. Li6.25Al0.25La3Zr2O12 (Al-doped LLZO) was synthesized due to its impressive electrochemical stability and relatively high ionic conductivity. 
The effect of FASP parameters on the as-synthesized Al-doped LLZO powder and on the quality of pellets and thin-tapes was investigated, and a stand-alone, all-oxide-based SSE having a total ionic conductivity of 3.5 × 10⁻⁶ S/cm was synthesized. The results show that FASP parameters can be tailored to produce solid-state electrolytes in an inexpensive, scalable way.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reconstructing 3D ocean temperature fields from real-time satellite and buoy surface measurements</title>
<link href="https://hdl.handle.net/1721.1/144978" rel="alternate"/>
<author>
<name>Champenois, Bianca</name>
</author>
<id>https://hdl.handle.net/1721.1/144978</id>
<updated>2022-08-30T03:35:42Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Reconstructing 3D ocean temperature fields from real-time satellite and buoy surface measurements
Champenois, Bianca
Despite advancements in computational science, nonlinear geophysical processes still present important modeling challenges. Physical sensors (such as satellites, AUVs, or buoys) can collect data at specific points or regions, but are often scarce or inaccurate. Here, we present a framework to build improved spatiotemporal models that combine dynamics inferred from high-fidelity numerical models with measurements from sensors. Specifically, we are interested in ocean temperature which can serve as a useful indicator for ocean acidification, and we are motivated by a data set of sensor measurements only available at the surface of the ocean. We first apply standard principal component analysis (PCA) at every ocean surface coordinate to a numerical simulation of a 3D temperature field (reanalysis data) over time. For each horizontal location, the vertical structure of the field can be represented with just two PCA modes and their corresponding time coefficients, significantly reducing the dimensionality of the data. Next, a conditionally Gaussian model implemented through a temporal convolutional neural network (TCN) is built to predict the time coefficients of the PCA modes, as well as their variance, as a function of the surface temperature. The full 2D surface temperature field is estimated by a multi-fidelity Gaussian process regression scheme, for which the buoys have the highest fidelity and the satellite measurements have lower fidelity. The surface temperature is then inputted into the neural network to obtain probabilistic predictions for the PCA coefficients, which are used to stochastically reconstruct the full 3D temperature field. The techniques described provide a framework for building less expensive and more accurate models of conditionally Gaussian estimates for full 3D fields, and they can be applied to geophysical systems where data from both sensors and numerical simulations are available. 
We implement these techniques to estimate the full 3D temperature field of the Massachusetts and Cape Cod Bays, an area with a significant ocean economy. We compare the predictions with in-situ measurements at all depths. Finally, we discuss how the developed ideas can be leveraged to make more informed decisions about optimal in-situ sampling and path planning.
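The reduction of each vertical profile to two PCA coefficients can be sketched with synthetic data (the profiles below are fabricated; the thesis uses reanalysis temperature fields):

```python
import numpy as np

rng = np.random.default_rng(0)
n_profiles, n_depths = 500, 40
depth = np.linspace(0, 1, n_depths)
# Synthetic vertical temperature profiles: two smooth modes plus small noise.
coeffs = rng.standard_normal((n_profiles, 2))
profiles = (coeffs[:, :1] * np.cos(np.pi * depth)
            + coeffs[:, 1:] * np.sin(2 * np.pi * depth)
            + 0.01 * rng.standard_normal((n_profiles, n_depths)))

# PCA via SVD of the mean-centered data; keep the two leading modes.
mean = profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
modes = Vt[:2]                        # (2, n_depths) spatial modes
scores = (profiles - mean) @ modes.T  # (n_profiles, 2) coefficients per profile
recon = mean + scores @ modes         # reconstruction from just two modes

rel_err = np.linalg.norm(recon - profiles) / np.linalg.norm(profiles)
print(rel_err)  # small: two modes capture nearly all the variance
```

In the thesis's setting the two coefficients are what the temporal convolutional network predicts from surface temperature, so reconstructing the full 3D field only requires inverting this projection.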
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy and Design Courses of Action to Improve Resilience of Proliferated Low Earth Orbit Constellations Against Adverse Solar Weather</title>
<link href="https://hdl.handle.net/1721.1/144977" rel="alternate"/>
<author>
<name>Novak, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/144977</id>
<updated>2022-08-30T03:54:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Policy and Design Courses of Action to Improve Resilience of Proliferated Low Earth Orbit Constellations Against Adverse Solar Weather
Novak, Jonathan
There are three main questions answered by this thesis: 1) Would an extreme event on a scale commensurate to historically observed events induce catastrophic failures to current New Space mega-constellations? 2) How do increasing levels of constellation proliferation alter resilience to adverse solar weather? And 3) How do increasing levels of constellation proliferation alter the effectiveness of courses of action to improve resilience?&#13;
&#13;
In order to answer these questions, solar weather effects are modeled using a unique process of correlating solar weather event intensities to radiation effects leading to failure. Representative constellation populations are dynamically altered with respect to a Monte Carlo based stochastic simulation of solar cycle 25. The performance degradation, value, and resilience of each system is recorded throughout a solar cycle in baseline cases and then compared to cases employing alternative designs or policy criteria.&#13;
&#13;
The results of this thesis show that New Space architectures are resilient to the radiation effects of even extreme-case solar weather scenarios. The results also show that, of the parameters tested, shielding is among the most effective for improving the resilience of highly proliferated systems. They also imply that increasing manufacturing and launch timelines is most effective at improving resilience when the two are increased together, rather than one parameter in isolation. Finally, through a net present value analysis, this thesis demonstrates how policies may be valued and assessed. A sample valuation for an emergency launch insurance policy is shown in the results, along with evidence supporting “Careful COTS” as a viable and effective methodology for ensuring resilience of COTS-enabled, proliferated systems. All code, datasets, and sample results depicted are provided in a GitHub repository linked at the end of the thesis.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implications of heating electrification on distribution networks and distributed energy resources</title>
<link href="https://hdl.handle.net/1721.1/144976" rel="alternate"/>
<author>
<name>Lee, Tony L.</name>
</author>
<id>https://hdl.handle.net/1721.1/144976</id>
<updated>2022-08-30T03:52:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Implications of heating electrification on distribution networks and distributed energy resources
Lee, Tony L.
The electricity sector’s transformation towards renewables, combined with technological improvements in electric appliances, has enabled electrification as a decarbonization pathway for other sectors. Electric heat pumps are an attractive solution for space heating, which is still largely served by direct combustion of fossil fuels. While current deployments are still low, widespread adoption of heat pumps is required to achieve emissions reductions consistent with global climate targets. &#13;
&#13;
This thesis explores the impacts of residential heat pumps on electric distribution systems operations and planning. In the network studied, we find that heat pumps increase winter loads and reduce summer loads, leading to a winter-peaking system above 25% adoption and indicating near-term potential for “beneficial” heating electrification. Distribution issues emerge above 45% adoption on the network studied: substation-level transformers are the most immediate and costly investment need, and voltage quality becomes an issue near full adoption. Time-varying rate designs can induce consumers to shift heating demand and reduce peak impacts, increasing electrification levels on a constrained network from 45% to 50–60% with the rates tested. As heat pumps reshape residential electricity demand, accurate price signals for distributed energy resources are essential for efficient grid operation and investments.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Win-Win-Win? Evaluating the Climate, Health, and Equity Benefits of Retrofitting Low Income Housing in the US</title>
<link href="https://hdl.handle.net/1721.1/144975" rel="alternate"/>
<author>
<name>Caswell, Helena</name>
</author>
<id>https://hdl.handle.net/1721.1/144975</id>
<updated>2022-08-30T03:58:43Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Win-Win-Win? Evaluating the Climate, Health, and Equity Benefits of Retrofitting Low Income Housing in the US
Caswell, Helena
Large scale energy efficiency and electrification of the US residential sector are needed to meet the Paris Agreement goal of limiting warming below two degrees Celsius. Energy efficiency retrofits can also provide significant health and economic benefits, especially for low-income residents. These non-energy benefits are often excluded from policy analysis of energy efficiency programs because they are not well quantified, especially at a local level. In this thesis I quantify a subset of these public and private health benefits and develop a cost-benefit analysis at the county level across the continental US of a retrofit policy for low-income households that includes electrification and efficiency measures. The retrofit policy yields a positive average net present value in 29% of counties when considering only the private benefits of reduced energy consumption. However, retrofits yield positive net present value in all US counties when public and private health benefits are included. I also explored the potential impacts of a retrofit policy on household energy burden (household energy expenditures divided by household income). In most counties, retrofits would more than offset the additional cost of energy from a $51 per metric ton carbon price, the current social cost of carbon estimated by the federal government. The gap between public and private retrofit benefits prevents low-income households from implementing energy efficiency measures. Programs to subsidize low-income retrofits would reduce economic deadweight loss, climate emissions, and energy costs, while improving human health and wellbeing for low-income households that currently suffer disproportionately from inefficient housing.
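The county-level comparison rests on a standard net-present-value calculation; a minimal sketch with hypothetical cash flows (not the thesis's county-level inputs) shows how including health benefits can flip the sign:

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow stream, where cashflows[0] is the
    upfront (usually negative) cost at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical retrofit: $10,000 upfront, $600/yr private energy savings,
# plus $900/yr public and private health benefits, over 20 years at 3%.
private_only = npv(0.03, [-10_000] + [600] * 20)
with_health = npv(0.03, [-10_000] + [600 + 900] * 20)
print(round(private_only), round(with_health))  # negative vs. positive NPV
```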
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying a Range of Global Air Pollution Projections and Health Impacts under the Paris Agreement’s Temperature Targets</title>
<link href="https://hdl.handle.net/1721.1/144974" rel="alternate"/>
<author>
<name>Atkinson, William</name>
</author>
<id>https://hdl.handle.net/1721.1/144974</id>
<updated>2022-08-30T03:13:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Quantifying a Range of Global Air Pollution Projections and Health Impacts under the Paris Agreement’s Temperature Targets
Atkinson, William
Air pollution is a key sustainability challenge with similar emissions sources to anthropogenic climate change – making it critical to assess the effect of climate and air quality actions on pollutant emissions, the resulting health impacts, and broader sustainability metrics. This thesis responds to these needs by developing a new Tool for Air Pollution Scenarios (TAPS) and applying it to example policy effects on emissions, health impacts, and alternative metrics that are consistent with a stock-based sustainability framework of inclusive wealth. In Chapter 2, we develop and implement TAPS with three components: recent global anthropogenic emissions inventories, emitting activity scenarios from the MIT Economic Projection and Policy Analysis model, and emissions intensity trends based on recent scenario data from the Greenhouse Gas – Air Pollution Interactions and Synergies model. Initial results show the limits of existing policy and the importance of different policy levers for different pollutants – including climate action to reduce fossil fuel related air pollutants (such as sulfur and nitrogen oxides), and other air quality controls to reduce pollutants such as ammonia and organic carbon. Chapter 3 connects the tool’s emissions results to health impacts, focusing on the difference between two pollution control scenarios under the common assumption that the Paris Agreement’s climate targets are met. We find major differences in ambient fine particulate matter concentrations as well as impacts on premature mortality and morbidity – showing that climate action alone does not guarantee a clean-air future. We also find distributional differences between different measures of national impacts, especially when comparing standard or monetized health endpoints with our alternative that focuses on healthy life years. 
Finally, Chapter 4 concludes with future considerations for scenario development, analytical choices, and stakeholder considerations for integrating the health impacts of air pollution into sustainability decisions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clamp-On Magnetic Energy Harvesting</title>
<link href="https://hdl.handle.net/1721.1/144973" rel="alternate"/>
<author>
<name>Monagle, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/144973</id>
<updated>2022-08-30T03:26:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Clamp-On Magnetic Energy Harvesting
Monagle, Daniel
Large-scale deployment of low-power sensing and computing units calls for unique power management solutions to overcome the inconvenience, costs, and waste problems associated with batteries. Energy harvesting offers an exciting solution to the battery problem, enabling circuits that can power themselves on-site from available ambient energy. Magnetic energy harvesters, configured as current transformers (CTs), extract energy from the magnetic fields surrounding current-carrying power lines. This thesis proposes generalized analytical methods for modeling magnetic energy harvester behavior and validates these methods along with existing circuit-model techniques. A simplified magnetic core permeability characterization method is introduced. This thesis is motivated by assessing the feasibility of a split-core magnetic energy harvester to power a microcontroller unit, and the models are experimentally validated for multiple harvester cores. A completely self-powered system is implemented with an energy enhancement strategy that significantly increases the harvested power in the presence of core saturation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced Indium Tin Oxide as a Transparent Superconductor</title>
<link href="https://hdl.handle.net/1721.1/144972" rel="alternate"/>
<author>
<name>Batson, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/144972</id>
<updated>2022-08-30T03:55:33Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Reduced Indium Tin Oxide as a Transparent Superconductor
Batson, Emma
Absorption of optical light in superconducting electronics is a major limitation on the quality of circuit architectures that integrate optical components with superconducting components. Such absorption causes losses in the optics and quasiparticle generation in the superconductor, decreasing the performance of both [1]. However, integration of optical and superconducting components will be crucial for the development of electro-optical transducers for quantum networking [2], scalable readout of single photon detectors [3], and neuromorphic computing [4]. Ideally, we could fabricate the superconducting electronics in these systems out of a material that is transparent to the wavelengths used by the optical components.&#13;
&#13;
Few conductive materials, though, are transparent at optical wavelengths, let alone superconducting ones. Typical metals have a high carrier concentration and no band gap, resulting in strong absorption for light below x-ray frequencies [5]. However, certain degenerately doped semiconductors known as transparent conductive oxides have ultraviolet band gap energies, high mobilities, and low carrier concentrations, thus allowing for both good conduction and optical transparency. Under the right conditions, these materials may superconduct as well. One such material, indium tin oxide (ITO), has been shown to superconduct with a maximum transition temperature of about 4 K when doped to carrier concentrations of about 10²¹ cm⁻³ [6]. In particular, arbitrary samples of ITO can superconduct when sufficiently doped by electrochemical reduction [7].&#13;
&#13;
In this thesis, we characterize the effects of electrochemical reduction on the electronic properties, structure, and composition of ITO and evaluate its suitability for superconducting electronics. First, in Chapter 1, we outline the theory of transparent superconductivity and review existing work on such materials. Then in Chapter 2 we describe the basic theory and design of our electrochemical cell and discuss the characterization techniques we will use to evaluate our films. In Chapter 3 we present our findings on the electronic properties, structure, and composition of ITO reduced to different total reduction charge densities. In Chapter 4 we quantify the optical properties of reduced ITO and compare it to niobium, a common material for superconducting electronics. In Chapter 5 we consider different methods for fabricating electronics on reduced ITO and evaluate the resulting microwires. Finally, in Chapter 6 we discuss the implications of our findings and future directions for work on transparent superconductors.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Analysis of Time of Day Pricing for Residential Consumers</title>
<link href="https://hdl.handle.net/1721.1/144969" rel="alternate"/>
<author>
<name>Nejad, Saba</name>
</author>
<id>https://hdl.handle.net/1721.1/144969</id>
<updated>2022-08-30T03:02:34Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Analysis of Time of Day Pricing for Residential Consumers
Nejad, Saba
Time-of-day (or dynamic time-of-use, dToU) pricing is a mechanism by which system operators try to lower stress on the grid in times of high demand. The price for high-demand periods is pre-set, but the times of day at which it is applied are dynamic. Data on how residential consumers respond to the pricing scheme can inform more accurate models of consumption to maintain the integrity of the grid while lowering consumers' utility bills and optimizing renewable use. In this thesis, I analyze the data from a time-of-day pricing trial in London to see whether the treatment was effective in lowering consumption. I perform this analysis using four different models and compare their accuracy and results: an aggregated linear regression model, a multi-linear regression model, an aggregated multi-linear regression model, and a random forest regression time-series model. I found that the time-of-day pricing during the trial was effective in lowering consumption and costs. A dependence on households' socio-economic status was also observed.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Modeling and Validation of the Deformation and Failure Response of Human Metastatic Vertebrae</title>
<link href="https://hdl.handle.net/1721.1/144967" rel="alternate"/>
<author>
<name>Xu, Michelle</name>
</author>
<id>https://hdl.handle.net/1721.1/144967</id>
<updated>2022-08-30T03:06:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Computational Modeling and Validation of the Deformation and Failure Response of Human Metastatic Vertebrae
Xu, Michelle
Metastatic cancer, the spread of cancer cells from the primary cancer site, is responsible for many cancer-related complications and deaths. Particularly, metastatic cancer affecting the vertebra leads to the degradation of bone quality and architecture, which weakens the vertebra's load carrying capacity and puts the patient at high risk of skeletal adverse events (SAE), such as pathological vertebral fractures (PVF) and spinal cord compression. Therefore, there remains a strong need for the accurate assessment of fracture risk of metastatic vertebrae. &#13;
&#13;
In this thesis, we propose a comprehensive computational framework for modeling and validating the deformation and failure response of human metastatic vertebrae. First, we develop an image-based pipeline for the generation of patient-specific finite element (FE) models starting from CT images of human metastatic vertebrae. We adopt a viscoelastic, viscoplastic cortical bone model for the elastic and plastic response of bone and a damage model for the softening of bone. We utilize the SUMMIT computational solid mechanics framework to perform large-scale, parallel simulations of these vertebral models under compression loading.&#13;
&#13;
We validate the computational framework against an experimental dataset of 10 metastatic human vertebrae, consisting of 1) CT image data and 2) experimental load-displacement curves obtained from uniaxial compression testing, provided by researchers at the University of Bern. Using our proposed pipeline, we generate homogenized finite element (hFE) models from the obtained CT images and calibrate the material model parameters by solving the boundary value problem for a selected model using trial values of the parameters until we obtain a simulated load-displacement curve that approximately matches the corresponding experimental curve. Then, we conduct simulations of the other 9 models and compare the simulated and experimental load-displacement curves to assess the validity of our models.&#13;
&#13;
We show that the proposed approach provides quantitative predictions of the experimental stiffnesses and failure strengths of metastatic vertebrae with good accuracy regardless of the type of metastases the vertebrae exhibit. In addition, we show that by capturing the unique, spatially varying bone volume density in the vertebrae, we are able to obtain detailed descriptions of the local stress and damage responses. From this, we achieve a better understanding of the role metastases play in the deformation and damage response of metastatic vertebrae.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework of a Power Management System for a Hybrid Electric VTOL Aircraft using Optimal Control</title>
<link href="https://hdl.handle.net/1721.1/144965" rel="alternate"/>
<author>
<name>Pham, Duc Ngoc</name>
</author>
<id>https://hdl.handle.net/1721.1/144965</id>
<updated>2022-08-30T03:44:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Framework of a Power Management System for a Hybrid Electric VTOL Aircraft using Optimal Control
Pham, Duc Ngoc
The operation of a hybrid fixed-wing vertical takeoff and landing unmanned aerial vehicle (VTOL UAV) is assessed using an optimal control framework for the most energy-efficient and time-efficient trajectories. The UAV is equipped with a modular hybrid propulsion system (MHPS) in which the electrical and carbon-fuel system components are interchangeable on a mission-to-mission basis, enabling aircraft performance flexibility. The framework is used to assess the effects of MHPS electric power hybridization, energy hybridization mass ratio, and peak power output on UAV performance during demanding phases of flight, which include takeoff, landing, and hover. Results showed that the most time- and energy-efficient takeoff trajectories involved minimizing the vertical displacement gained during the transition from vertical to horizontal flight. The power management strategy for minimum energy consumption during takeoff, landing, and hover was largely dictated by propulsion component efficiencies; maximizing the electric motor power and minimizing the carbon-fuel power would reduce energy consumption. As peak power output and electric power hybridization increased, takeoff and landing energy consumption decreased. Minimum-time takeoff and landing trajectories and power management strategy depended only on power-to-weight ratio; a higher peak power output reduced takeoff and landing time. The power management strategy for efficient hover mirrored that of takeoff and landing; a higher peak power output resulted in less energy consumed. A preliminary assessment of the tradeoffs of electrification was conducted using a takeoff/cruise endurance non-dimensional performance group, given as the ratio of the product of peak power output and cruise endurance to the total energy consumed during takeoff. Increasing electric power hybridization adversely affected overall aircraft performance, due to the reduction in cruise range and endurance. 
For each selected value of electric power hybridization, there is an optimal peak power output that strikes the best balance between takeoff and cruise performance. For short, demanding mission segments, like takeoff or landing, the energy hybridization mass ratio has a smaller impact than electric power hybridization, since range or mass changes are not of concern when analyzing independent takeoff or landing performance.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Immersive Art Experience - an exploration of visuals and sounds -</title>
<link href="https://hdl.handle.net/1721.1/144964" rel="alternate"/>
<author>
<name>Murao, Mieko</name>
</author>
<id>https://hdl.handle.net/1721.1/144964</id>
<updated>2022-08-30T03:51:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Designing Immersive Art Experience - an exploration of visuals and sounds -
Murao, Mieko
Art is a form of communication. It can come in many forms and be expressed from different perspectives. This thesis uses literature review and field research to survey the immersive art landscape and explore various perspectives offered through examples of practices in the field and each creator's kodawari, defined as "the craftsperson's creed that manifests in craftsmanship." I then use the insights from these sources to help distill my ideas and kodawari into a prototype. Next, based on a human-centered design concept, I test the prototype and collect user feedback to understand the general public's thoughts. Finally, I take that feedback into account and contemplate possible futures for Immersive Art (IA).&#13;
&#13;
The thesis centers around the question, what technologies can we incorporate to make art experiences more immersive and accessible to those who otherwise feel alienated from the art world?&#13;
&#13;
For the production of IA space, I used projection mapping, spatial sound, AR, haptic sensors, audio-reactive video synthesis, MIDI controllers, and incense.&#13;
&#13;
There are five main topics covered: (i) existing ways of consuming/experiencing art; (ii) literature review, field research, and best practices; (iii) building a prototype design iteration for an IA space; (iv) user feedback and what it tells us; and (v) possible futures for IA.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Method for Airfield Pavement Condition Index Determination</title>
<link href="https://hdl.handle.net/1721.1/144963" rel="alternate"/>
<author>
<name>Pietersen, Randall A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144963</id>
<updated>2022-08-30T03:29:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automated Method for Airfield Pavement Condition Index Determination
Pietersen, Randall A.
Infrastructure inspection and maintenance is a necessary, and often costly, process required for civil engineering structures throughout a project's life-cycle to ensure continued safety and serviceability. While many of these procedures have seen the introduction of technologies to assist, augment, or automate traditional methods of inspection, current practices for assessing airfield pavement serviceability remain predominantly manual. Though roadway inspection has benefited from automation through various types of sensor arrays attached to automobiles, the characteristics of airfields and their pavements have prompted research into the use of drones as a flexible, low-cost solution for automating aspects of the inspection process. As one of the largest owners and operators of airfield pavement across the globe, the United States Air Force has a unique interest in implementing such a process in a way that is both compliant and compatible with current institutional guidelines. Funded by the US Air Force Civil Engineering Center, this research proposes a novel method for conducting an automated airfield pavement condition index (PCI) survey on Air Force-owned airfields using drone-mounted imaging technology. Intermediate results from different stages of field testing over an auxiliary airfield located at the Air Force Academy in Colorado Springs, CO are presented and discussed in detail. Ultimately, the automated data collection and analysis developed by this study produced a PCI value of 56.5, which strongly agrees with manual inspection results that calculated a PCI value of 54 for the same runway. Also presented is a fiscal analysis of the proposed autonomous method. Using uncertainty analysis and Monte Carlo simulation, cost estimates are given for replacing manual PCI inspections with an autonomous solution across a large number of airfield pavement assets. These estimates provide economic insights into factors that affect technological development and implementation and suggest that replacing manual methods with an autonomous system could reduce inspection costs by roughly 25%.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating an Optical Neural Network for Deep Learning in Edge Computing</title>
<link href="https://hdl.handle.net/1721.1/144962" rel="alternate"/>
<author>
<name>Cochrane, Jared</name>
</author>
<id>https://hdl.handle.net/1721.1/144962</id>
<updated>2022-08-30T03:37:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Simulating an Optical Neural Network for Deep Learning in Edge Computing
Cochrane, Jared
Deep learning has risen to prominence in fields from medicine to autonomous vehicles. This rise has been driven by improvements in parallel computing from graphics processing units (GPUs) as well as large data sets. Applying deep learning to edge computing is challenging because deep neural network (DNN) hardware must not only possess the needed computational power but must also satisfy size, weight, and power (SWaP) constraints for practical deployment. Many DNNs require a GPU or data center to run, both of which are too large to fit onto edge devices. Here, an optical neural network (ONN) accelerator called netcast is simulated on two real-world machine vision applications: MNIST digit classification and scene recognition. The netcast ONN enables large DNNs to run on SWaP-limited edge devices with significantly less energy needed to run inference compared to digital models. Software simulations are used to assess netcast’s performance on MNIST classification and scene recognition relative to digital networks. Using an accuracy per energy consumption figure of merit (FOM), the simulations indicate that netcast is able to outperform digital electronics on average by over three orders of magnitude. Netcast’s strong performance relative to its digital counterparts indicates that it will enable the novel deployment of large DNNs to edge applications in a way that would be infeasible using current digital electronics. Netcast’s novel applications give rise to a host of policy challenges, one of which focuses on defining and applying acceptable performance metrics to optically enabled deep learning.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Informational Analysis on US &amp; China Platform Strategy: A Comparative Analysis</title>
<link href="https://hdl.handle.net/1721.1/144961" rel="alternate"/>
<author>
<name>Yu, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/144961</id>
<updated>2022-08-30T03:18:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Informational Analysis on US &amp; China Platform Strategy: A Comparative Analysis
Yu, Catherine
The US and China remain the two largest economies in the world, with US nominal GDP at $20T and China's at $14T. Meanwhile, platform technology continues to dominate the world’s largest companies, with four of the top five companies by market capitalization being platform technology companies.&#13;
&#13;
Beyond the business itself, understanding the platform economy also comes from living in and understanding the culture of both countries. To that end, this thesis features 43 informational interviews, two deep-dive case studies on US- and China-based platform economies, and an analysis of consumption and each country's investment strategy. Informational Analysis on US &amp; China Platform Strategy: A Comparative Analysis aims to highlight these growing areas of the world and how history can affect future trends.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanded Cinema and War; Trauma in Hyper-Documented Age</title>
<link href="https://hdl.handle.net/1721.1/144960" rel="alternate"/>
<author>
<name>Šabanović, Faruk</name>
</author>
<id>https://hdl.handle.net/1721.1/144960</id>
<updated>2022-08-30T03:35:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Expanded Cinema and War; Trauma in Hyper-Documented Age
Šabanović, Faruk
This thesis investigates ways to wed Gene Youngblood’s notions of expanded cinema with the popular appeal of dramatic films whose narrative structures are driven by the conflict-crisis-resolution system. By examining alternatives to conventional narratives surrounding war-torn sites of conflict, this thesis analyzes a series of experimental animation and kinetic and nonlinear story-driven artworks to develop an alternative grammar for a possible visual, narrative, and technical storytelling style for a future society of global peace. Drawing upon my personal history as a Bosnian animation filmmaker who experienced the conflict first-hand as a casualty of war, these experiments seek to shed light on the imperative of simultaneously seeking a popular anti-war language while avoiding the mass appeal of conflict-driven narrative structures. I have developed a series of open-source, easily reproducible expanded cinematic experiments using everyday materials to address this issue.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Working in Seoul: Integrating Public Infrastructure into the Metaverse</title>
<link href="https://hdl.handle.net/1721.1/144959" rel="alternate"/>
<author>
<name>Ha, Ji Ye</name>
</author>
<id>https://hdl.handle.net/1721.1/144959</id>
<updated>2022-08-30T03:39:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Co-Working in Seoul: Integrating Public Infrastructure into the Metaverse
Ha, Ji Ye
Hybrid working has now become the new normal. Besides working from home, there is increasing demand for a space that is neither home nor the traditional office. While the home office has many benefits, such as flexibility, work-life balance, and reduced transportation costs, employees still need a space that is detached from household chores and noise. This demand is being met by various forms of working environment, such as co-working offices, dispersed offices, satellite offices, and metaverse offices. This societal demand for a new remote workspace is also happening in conjunction with digitization, the rise of the metaverse, and the changing ways people engage with public infrastructure.&#13;
&#13;
This project looks at Seoul, South Korea as an example of this societal shift and finds opportunities in two types of public infrastructure: post offices and the welfare and administrative centers located in every administrative district in Seoul. With digitization, the number of post offices in Seoul is decreasing every year, and in some cases extra space is being leased to the private sector. With the Seoul Metropolitan Government releasing a five-year plan to build an intricate metaverse platform, it is expected that more and more physical infrastructure within Seoul will be made available for alternative uses from 2026 onward.&#13;
&#13;
Matching the societal demand for flexible remote working environments with a growing supply of public space available for alternative use, this thesis explores ways of reappropriating portions of the existing public infrastructure in Seoul as remote workspaces. The proposed designs seek to provide public goods that cater to the needs of locals while at the same time creating a new revenue stream for the public sector.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inheritance Geographies: Black Presence and the Making of London</title>
<link href="https://hdl.handle.net/1721.1/144958" rel="alternate"/>
<author>
<name>Kettner, Katharine</name>
</author>
<id>https://hdl.handle.net/1721.1/144958</id>
<updated>2022-08-30T03:27:14Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Inheritance Geographies: Black Presence and the Making of London
Kettner, Katharine
Blackness has been fundamental in the making of Western cities. This thesis takes London as a site of focus through which to explore Black spatial practices. All too often, the disciplines of architecture and planning attempt to adopt apolitical, ahistorical approaches to physical space – the reality, however, is that no such space exists. Traditional pedagogies struggle to accept built interventions which occur outside strict disciplinary boundaries. By extension, these fields devalue, trivialize, or refuse to acknowledge the influence of racialized Others in shaping the built environment. Although Black people have lived and worked in Western cities for centuries, within the dominant discourse Black people are hardly ever recognized as active agents of spatial transformation and creation. This is exacerbated by visual and discursive norms which fixate on representing space in particular ways —which are not necessarily representative of the ways in which racialized groups exist, use, and make space.&#13;
&#13;
This thesis rejects the minimization of Blackness in the Western canon. It calls on the disciplines of architecture and planning to expand their pedagogical horizons and to challenge normative ways of reading and understanding the built environment. Two broad case studies, British transatlantic slavery and the Windrush migration, serve as the lens through which London is investigated and mapped. In doing so, we can complicate traditional readings of space, and better recognize the roles of Blackness in creating the city – through Black presence simultaneously physical and psychological, tangible and intangible. A celebration and exploration of the richness of Black contributions in London allow us to engage the city as an ever-evolving historical, political, and social archive. The project considers some of the ways in which Black people — those departed, those present, and those future — have transformed London, which for centuries sat at the heart of a global empire, and which today remains a site of contestation. Ultimately, the soul of the project is simple: London is what it is because of Black presence.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anomaly Detection in Database Operating System</title>
<link href="https://hdl.handle.net/1721.1/144955" rel="alternate"/>
<author>
<name>Xia, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/144955</id>
<updated>2022-08-30T03:03:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Anomaly Detection in Database Operating System
Xia, Brian
Database Operating System (DBOS) is a new operating system (OS) framework that replaces the traditional file-based system with a high-performance database management system (DBMS). This design choice addresses the needs of a rapidly evolving software and hardware landscape that cannot be met by a traditional, mainstream OS. However, DBOS is a relatively new project under active development, with some missing secondary capabilities. In particular, the provenance capture system has not been fully explored with respect to real-time anomaly detection. To that end, Nectar Network (NN) was developed on top of DBOS as a public web application to generate real-world traffic and provenance data. In this thesis, I present a machine learning (ML) model to label anomalous provenance data captured by the NN, in the form of HTTP logs, in real time. The model consists of two components: tokenization and classification. In the tokenization step, Byte-level Byte Pair Encoding (BBPE) breaks down the input bytes into token bytes that hold semantic meaning. In the classification step, a Convolutional Neural Network (CNN) takes the token bytes as input and outputs the predicted probability of anomaly. The model achieved strong performance, with an F1 score of 0.99951. Importantly, this work serves as a proof of concept for future endeavors to develop real-time security analysis features on top of DBOS systems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Police and Criminal Court Data Transparency in the United States: A Case Study</title>
<link href="https://hdl.handle.net/1721.1/144954" rel="alternate"/>
<author>
<name>Elbashir, Ahmed</name>
</author>
<id>https://hdl.handle.net/1721.1/144954</id>
<updated>2022-08-30T03:43:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Improving Police and Criminal Court Data Transparency in the United States: A Case Study
Elbashir, Ahmed
Effective reform of criminal justice in the United States, including how to understand and fight inequity and structural racism embedded in the system, is hampered by the complexity and opaqueness of America’s criminal justice system. While extensive data is generated and recorded on individual police encounters, arrests, charges, court cases, sentences, and more, most of it is inaccessible to the public, or to researchers who could study this data and produce actionable insights. In the past decade, government officials in many major American municipalities have signaled an intent to collect and publish more data online to promote transparency and improve community relations. However, these efforts have been hindered by bureaucratic difficulties, the eclectic nature of the authorities who generate critical policing and criminal justice data, and the publication of online data in incomplete or non-useful formats. This paper will review the published criminal justice data in Philadelphia, New York City, and Chicago, describe the state of their criminal justice open data publications, and make recommendations for those and other municipalities on how to record and publish data. For data producers, these recommendations emphasize the importance of developing coherent tabular datasets for different criminal justice data topics, releasing visualizations which allow the general public to interface with the data, and publishing downloadable tabular datasets with as much detail as possible without infringing on individual privacy. For government organizations, this paper recommends that legislators pass laws requiring police departments, district attorneys, and courts to collect and publish data, and provide those organizations with the necessary funding to do so.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial optimization of an existing, low-cost sensor network for air pollution in London</title>
<link href="https://hdl.handle.net/1721.1/144953" rel="alternate"/>
<author>
<name>Herrera, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/144953</id>
<updated>2022-08-30T03:24:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Spatial optimization of an existing, low-cost sensor network for air pollution in London
Herrera, Alex
Air pollution sensors are rapidly decreasing in cost and can provide measurements with higher spatial and temporal resolution. In this paper we combine two different Gaussian process methods to spatially optimize an existing low-cost sensor network for air pollution in London. We demonstrate the practical utility of these combined algorithms using a cross-validation approach, applied to air pollution data obtained from 75 sensors within the London Air Quality Network (LAQN) in 2011. The analysis steps were as follows. First, based on a training subset of the original data, we trained a spatio-temporal variational Gaussian process model to quantify the uncertainty within the area of London for a year at daily intervals. A second Gaussian process algorithm was then used to optimize sensor placements by maximizing mutual information to recommend relocating a subset of the existing sensors. Evaluating a second training subset of the original data, as if sensors were relocated to the new recommended locations, we find (on average) that the second model reflecting our new recommended locations increases the mutual information across the area of London by 27.3% while maintaining the same performance on prediction root mean square error within a third subset of the original data (the validation set). We then apply this procedure to a model trained on all 75 sensors to generate an optimized redistribution of the LAQN air pollution sensors. We conclude with ideas for further extensions of this work.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local Algorithms for Sparsification of Average-case Graphs</title>
<link href="https://hdl.handle.net/1721.1/144952" rel="alternate"/>
<author>
<name>Cao, Ruidi</name>
</author>
<id>https://hdl.handle.net/1721.1/144952</id>
<updated>2022-08-30T03:42:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Local Algorithms for Sparsification of Average-case Graphs
Cao, Ruidi
Given an input graph G, a Local Computation Algorithm (LCA) for sparse spanning graphs provides query access to a sparse subgraph G′ ⊆ G, where G′ maintains the connectivity and/or distances in G, by making a sublinear number of probes to the input G for each query to G′. It is known that worst-case graphs require Ω(√n) probes in order to detect whether a specific edge e ∈ G′. We show that, in expectation, this task can be accomplished much faster by considering average-case graphs such as Erdős-Rényi random graphs and the Preferential Attachment model. We first present an LCA which, on an Erdős-Rényi graph input G with edge parameter p ≥ Ω(log(n)/n), gives fast access to a sparsification G′ of G, such that G′ is connected and has n + o(n) edges. Queries to G′ are answered in O(Δ log²(n)) probes to G (where Δ = O(pn) is the maximum degree). We then show an LCA that, for an Erdős-Rényi graph G with edge parameter p ≥ Ω(log(n)/√n), gives access to a 4-spanner G′ of G in O(log²(n)) probes in expectation per query, such that G′ has at most 2n edges. Finally, we give an LCA that runs on a Preferential Attachment graph G with edge parameter Θ(log(n)), which gives fast access to a sparsification G′ of G where G′ is connected and has n + o(n) edges. Each query to G′ takes an expected O(log³(n)) probes to G.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning</title>
<link href="https://hdl.handle.net/1721.1/144951" rel="alternate"/>
<author>
<name>Hamilton, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/144951</id>
<updated>2022-08-30T03:35:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning
Hamilton, Mark
Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can use to explain a search engine’s behavior. We show that the theory of fair credit assignment provides a unique axiomatic solution that generalizes several existing recommendation- and metric-explainability techniques in the literature. Using this formalism, we show when existing approaches violate “fairness” and derive methods that sidestep these shortcomings and naturally handle counterfactual information. More specifically, we show existing approaches implicitly approximate second-order Shapley-Taylor indices and extend CAM, GradCAM, LIME, SHAP, SBSM, and other methods to search engines. These extensions can extract pairwise correspondences between images from trained opaque-box models. We also introduce a fast kernel-based method for estimating Shapley-Taylor indices that requires orders of magnitude fewer function evaluations to converge. Finally, we show that these game-theoretic measures yield more consistent explanations for image similarity architectures.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Methodology for an Ultra-High Efficiency Coreless Resonant Power Transformer</title>
<link href="https://hdl.handle.net/1721.1/144950" rel="alternate"/>
<author>
<name>Salk, Noah J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144950</id>
<updated>2022-08-30T03:02:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design Methodology for an Ultra-High Efficiency Coreless Resonant Power Transformer
Salk, Noah J.
Coreless resonant power transformers, operating at high frequency, have several advantages over the traditional iron core transformer. They have a simple structure, are lighter, cheaper, and more efficient due to the elimination of core losses. For a given cooling capacity, pushing the efficiency of these devices by as little as a fraction of a percent can lead to a substantial increase in power throughput capability. In order to achieve ultra-high efficiency designs, several advanced conductor topologies are explored with the development of corresponding experimentally validated modeling techniques to capture extra losses due to non-ideal conductor construction and elliptically rotational magnetic fields. In consideration of industrial economics, care is taken throughout this work to minimize conductor complexity. The variety of modeling techniques developed in this work allow for fast design space exploration as well as accurate loss predictions for down-selected conductors. An optimization is performed to choose a final design for an ultra-high efficiency (&gt;99%) 40 kW transformer with a 4× voltage ratio. The transformer was constructed and thermal comparisons at partial load were made with a lower efficiency transformer of the same magnetic design built using solid conductors. Results demonstrate a &gt;2× reduction in loss and a subsequent coil efficiency &gt;99%.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ally: Designing Interfaces for Human + AI Collaborative Creativity for Computer Aided Design (CAD) Applications</title>
<link href="https://hdl.handle.net/1721.1/144948" rel="alternate"/>
<author>
<name>Chong, Isabelle</name>
</author>
<id>https://hdl.handle.net/1721.1/144948</id>
<updated>2022-08-30T04:01:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Ally: Designing Interfaces for Human + AI Collaborative Creativity for Computer Aided Design (CAD) Applications
Chong, Isabelle
Creativity is an essential component of design and is often seen as intrinsically human. As the world continues to become more digital, computers become ever more present in the world of creativity. However, the relationship between humans and technology does not have to be adversarial. My research for Ally builds on the work of Paper Dreams, an adaptive drawing canvas platform with the objective of augmenting creativity using machine learning and multimodal inputs. In collaboration with PTC, Ally expands Paper Dreams, taking digitally drawn sketches using a Sketch-A-Net model or webcam images using a YOLOv5 model to recognize user input and build a Computer Aided Design (CAD) scene that has been collaboratively created by a human and an algorithm. The devised model is ultimately able to correctly identify the desired part for the user’s design in one of the top five most similar results at a rate better than chance on a test set of possible user input data. Using the CAD capabilities of PTC Onshape and the Sketch-A-Net/YOLOv5 models, an extension to Paper Dreams has been built that will hopefully be usable in industry to create "digital twins" and allow humans and machines to design together.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long Term Policy Goals Under Electoral Competition Given Varied Temporal Discount Rates Among Voters</title>
<link href="https://hdl.handle.net/1721.1/144947" rel="alternate"/>
<author>
<name>Nicholas, Sara</name>
</author>
<id>https://hdl.handle.net/1721.1/144947</id>
<updated>2022-08-30T03:33:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Long Term Policy Goals Under Electoral Competition Given Varied Temporal Discount Rates Among Voters
Nicholas, Sara
Many important issues facing the world involve temporal tradeoffs, requiring costly investment in the short term for payoffs that accrue much later. However, politicians facing frequent elections are held accountable to the preferences of voters, who tend to be myopic, making investment difficult. Further, the costs and benefits of an investment are often distributed unevenly across voters. Thus investments take a two-dimensional shape, creating both cross-sectional and inter-temporal distributions of utility. This paper examines under what conditions investment can occur, and studies how different levels and types of investments distribute utility. We develop a model involving multiple voting districts with variable time preferences and model a game involving legislative action followed by an election, solving the game via backward induction to find the subgame perfect Nash equilibria.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>TCAD-Informed Surrogate Models of Semiconductor Devices</title>
<link href="https://hdl.handle.net/1721.1/144946" rel="alternate"/>
<author>
<name>Chinnery, Samuel B.</name>
</author>
<id>https://hdl.handle.net/1721.1/144946</id>
<updated>2022-08-30T03:29:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">TCAD-Informed Surrogate Models of Semiconductor Devices
Chinnery, Samuel B.
Extensive research has been conducted over the last half-century to develop models of semiconductor devices for use in circuit analysis and simulation. Such models typically fall into one of two categories: “Cheap” analytical models that can be solved quickly but introduce significant error, and “expensive” physics-based models that achieve high accuracy at the price of prohibitive computation time. As electronic circuits grow to contain billions of active devices, there is a pressing need for new models that are both accurate and fast to compute.&#13;
&#13;
In this thesis, we introduce Semiconductors.jl, a new semiconductor simulation tool written in the Julia programming language. We use Semiconductors.jl to implement performant surrogate models that approximate the behavior of fine-grained technology computer-aided design (TCAD) device models using a coarsified grid. The resulting surrogate models are shown to approximate the current-voltage characteristics of the fine-grained models to within a maximum error of 0.1% while using less than one tenth as many discretization nodes as the fine-grained baseline model.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalizable Robot Manipulation through Task and Motion Planning and Interactive Perception</title>
<link href="https://hdl.handle.net/1721.1/144945" rel="alternate"/>
<author>
<name>Fang, Xiaolin</name>
</author>
<id>https://hdl.handle.net/1721.1/144945</id>
<updated>2022-08-30T03:29:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Generalizable Robot Manipulation through Task and Motion Planning and Interactive Perception
Fang, Xiaolin
For a robot operating in a daily household environment, generality is of great importance. It should be able to generalize to different tasks that involve different objects in varying backgrounds and configurations.&#13;
&#13;
In this thesis, we will move towards this goal from two perspectives. We will first present a strategy for designing a robot manipulation system that can generalize to a wide range of goals, environments, and objects. Such generality is achieved through task and motion planning with affordances estimated by both learned and engineered modules. We demonstrate that this strategy can enable a single policy to perform a wide variety of real-world manipulation tasks. Next, we will present an interactive perception solution to deal with the uncertainty in the estimated affordances, with a focus on the segmentation of objects. We adopt an object-based belief representation to estimate the uncertainty coming from predicted segmentation, and select actions to reduce that uncertainty efficiently. Our experiments show that our system can generalize better to different environments and reduce uncertainty more efficiently compared to our baselines.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bluefish: A Grammar of Discrete Diagrams</title>
<link href="https://hdl.handle.net/1721.1/144944" rel="alternate"/>
<author>
<name>Pollock, Joshua Maxwell</name>
</author>
<id>https://hdl.handle.net/1721.1/144944</id>
<updated>2022-08-30T03:07:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Bluefish: A Grammar of Discrete Diagrams
Pollock, Joshua Maxwell
Discrete diagrams show collections of data objects and the discrete relationships between them, which include not just nominal and ordinal relations, but also more general ones such as the parent-child relation in a tree and the friend-friend relation in a social network.&#13;
&#13;
Perceptual grouping principles – such as spatial proximity, nesting, and linking – are central to reading a discrete diagram. Yet, despite their importance, these principles are typically only implicit when creating one. As a result, a diagram author must either build their own abstractions on top of a grammar-of-graphics visualization toolkit or pre-commit to a structural representation supported by a domain-specific diagramming tool.&#13;
&#13;
The key idea of this paper is that perceptual groups visualize discrete relations. We operationalize this insight in Bluefish, a grammar of discrete diagrams, in which diagram authors can use perceptual grouping principles to construct visual encodings of discrete relations.&#13;
&#13;
A prototype of Bluefish has been implemented as a JavaScript library that can be embedded in an Observable notebook. We evaluate Bluefish by comparing it to a direct manipulation editor, a library inspired by the Grammar of Graphics, and a domain-specific diagramming tool.&#13;
&#13;
More broadly, this explicit connection between discrete relations and perceptual groups provides insight into the effectiveness of statistical charts, suggests new interfaces for accessibility, and may prompt new ideas for visualization recommendation, analysis, and synthesis.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rate-1 non-interactive arguments for batch-NP</title>
<link href="https://hdl.handle.net/1721.1/144943" rel="alternate"/>
<author>
<name>Devadas, Lalita</name>
</author>
<id>https://hdl.handle.net/1721.1/144943</id>
<updated>2022-08-30T03:11:35Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Rate-1 non-interactive arguments for batch-NP
Devadas, Lalita
Succinct non-interactive arguments for batch-NP computations, called BARGs (Choudhuri, Jain and Jin, STOC 2021), have emerged as a powerful tool to construct succinct non-interactive arguments (SNARGs) for expressive classes of computations such as all deterministic computations (P), time-space bounded non-deterministic computations (NTISP), and so on. A BARG gives us a way to prove k NP statements where the size of the proof (resp. the verification time) is proportional to the size of a single witness (resp. the time for a single NP verification).&#13;
&#13;
We present a rate-1 construction of a publicly verifiable non-interactive argument system for batch-NP (also called a BARG), under the LWE assumption. Namely, a proof corresponding to a batch of k NP statements each with an m-bit witness, has size m + poly(λ). In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size m · poly(λ) (Kalai, Paneth, and Yang, STOC 2019, and Choudhuri, Jain, and Jin, STOC 2021). The soundness of our construction relies on the learning with errors (LWE) assumption.&#13;
&#13;
We also observe that we can obtain an incrementally verifiable computation (IVC) scheme for arbitrary deterministic computations, even beyond P; a multi-hop BARG scheme for NP; and a multi-hop aggregate signature scheme, in the standard model, with unbounded and universal aggregation. Prior to this work, IVC schemes were only known for P under a bilinear map assumption, and beyond P only under non-standard knowledge assumptions or in the random oracle model; multi-hop BARGs were only known under non-standard knowledge assumptions or in the random oracle model; and aggregate signatures were only known under indistinguishability obfuscation (and RSA) or in the random oracle model.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single cell landscape of innate and adaptive immunity in metastatic melanoma treated with immunotherapy</title>
<link href="https://hdl.handle.net/1721.1/144938" rel="alternate"/>
<author>
<name>Fu, Ruiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/144938</id>
<updated>2022-08-30T03:45:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Single cell landscape of innate and adaptive immunity in metastatic melanoma treated with immunotherapy
Fu, Ruiwen
Immune checkpoint inhibitors (ICIs) have revolutionized cancer care and extended survival for many patients. While ICIs have shown astonishing clinical benefits, less than 50% of patients experience a durable response. To find better biomarkers for ICI response and understand the diverse cellular players in the tumor, we performed a multi-omic study on a metastatic melanoma cohort with RNA sequencing (39 samples), single cell RNA sequencing (222,351 cells; 39 samples), and single nucleus transposase-accessible chromatin sequencing (45,478 cells; 15 samples). We identified marker genes and functional modules associated with ICI response (e.g. cell-adhesion) and resistance (e.g. oxidative phosphorylation). Through single cell study of ICI resistant tumors, we revealed how changes in cell adhesion and ribosomal activity in adaptive immunity could reflect tumor-level therapeutic failure. We further characterized the T cell diversity in the tumors and discovered an early activated state and a terminally exhausted T cell state with therapeutic potential. Among the innate immune mediators of the tumor microenvironment, we detected a mature dendritic cell state as a powerful predictor of survival and extensively studied its differentiation trajectory, transcriptome signatures, and epigenome landscape. With functional roles in strengthening cancer immunity, many of the molecular and cellular mediators found in this study are relevant to the efficacy of all ICI regimens, and could potentially be extended to targeted inhibitors or other immunotherapies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Algorithms for Bounded-Range LIS Approximation</title>
<link href="https://hdl.handle.net/1721.1/144937" rel="alternate"/>
<author>
<name>Sawettamalya, Pachara</name>
</author>
<id>https://hdl.handle.net/1721.1/144937</id>
<updated>2022-08-30T03:59:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Fast Algorithms for Bounded-Range LIS Approximation
Sawettamalya, Pachara
We introduce an improvement to additive approximation of the Longest Increasing Subsequence (LIS) of a sequence with a bounded number of unique elements. In particular, for a sequence f of length n with r unique elements and additive error parameter ε, we present an algorithm that approximates the size of f’s LIS within ±εn using O(rε⁻²) · poly(log ε⁻¹) samples and O(rε⁻²) · poly(log r, log ε⁻¹) runtime. Our approach introduces small adjustments to the previously known algorithm for this problem, due to [5], resulting in a polynomial-runtime algorithm which uses fewer queries by a factor of ε⁻¹. Similar approaches can also be applied to estimating edit distance to monotonicity in 2-dimensional arrays and the L₁ edit distance of a sequence within sublinear time using poly(r, ε⁻¹) queries.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perception and Motion Planning for Autonomous Surface Vehicles in Aquaculture</title>
<link href="https://hdl.handle.net/1721.1/144936" rel="alternate"/>
<author>
<name>Zhang, Jerry</name>
</author>
<id>https://hdl.handle.net/1721.1/144936</id>
<updated>2022-08-30T04:02:35Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Perception and Motion Planning for Autonomous Surface Vehicles in Aquaculture
Zhang, Jerry
The "Oystermaran" USV was designed and developed by students at MIT SeaGrant to resolve the oyster basket flipping bottleneck that slows down oyster farming at Ward Aquafarms. In its current state, the USV requires remote operation within close distance of the vessel. In this thesis, we present an automated solution that will enable the Oystermaran to autonomously depart from its parked location, navigate to its destination, execute the flipping tasks, and return to a designated location with little to no human intervention. The details explored in this project and discussed in the thesis focus on the perception and motion planning aspects of the proposed autonomous system. Our results show a capable basket detection algorithm based on our collected dataset. The system’s path planning approach also proves sufficient in simulation. Additional data collection with further testing may be required to fully realize the system on board the Oystermaran.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Vector Instruction Selection for Digital Signal Processing</title>
<link href="https://hdl.handle.net/1721.1/144935" rel="alternate"/>
<author>
<name>Root, Alexander James</name>
</author>
<id>https://hdl.handle.net/1721.1/144935</id>
<updated>2022-08-30T03:22:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optimizing Vector Instruction Selection for Digital Signal Processing
Root, Alexander James
Digital signal processing applications benefit from fast implementations of vectorized inner kernels. Existing compilers rely on brittle pattern-matching or search-based methods with poor scalability for vector instruction selection – techniques which are limited by a reliance on the syntax of the input code. These techniques struggle to utilize the efficient fused instructions that exist on modern hardware.&#13;
&#13;
This thesis extends the Rake synthesis-based optimizing compiler to target the ARM Neon ISA via the design of a high-level intermediate representation for vector computation, with each component of the IR unifying multiple concrete instructions for the target ISA. This technique relies on the semantics of the input code, rather than the syntax alone, allowing for powerful equivalent rewrites that existing compilers are currently incapable of performing.&#13;
&#13;
On 11 real-world benchmarks, our system achieves up to a 65% faster runtime (geometric mean of 12%) than the Halide and LLVM vector instruction selectors that have been developed over the past decade.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Communicating with Care: Design strategies to empower caregivers to improve interactions with healthcare providers</title>
<link href="https://hdl.handle.net/1721.1/144934" rel="alternate"/>
<author>
<name>Kurian, Nihara Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/144934</id>
<updated>2022-08-30T03:37:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Communicating with Care: Design strategies to empower caregivers to improve interactions with healthcare providers
Kurian, Nihara Rachel
Today, more than one in five Americans are informal or family caregivers, and these numbers are increasing at an unprecedented rate. Tasked with the burden of advocacy and responsibility of care, these caregivers also navigate the implications of this care on the emotionally charged relationship with their care recipient. Further, although otherwise saturated with innovation, the healthcare space seems to neglect these caregivers, with healthcare providers and patients taking precedence. Often playing an integral role in the medical care management of their care recipient, caregivers struggle with short appointment times, leaving with unanswered questions and anxiety about providing care until the next visit.&#13;
 &#13;
In partnership with the MIT AgeLab, this thesis utilizes the design research process through remote in-depth interviews to identify and uncover methods used by the caregivers to record and share symptoms and progression with healthcare providers (primary and other associated appointments). The study seeks to understand the most significant challenges they face in interacting with healthcare providers and how it connects and translates to their caregiving approach. &#13;
 &#13;
The outcome of this work explores the need for a framework for effective caregiver-healthcare provider communication. By building self-awareness of their evolving needs across the caregiver journey, the study looks to arm caregivers with adapted skills and tools within the limitations of the system, so that they can anticipate and help healthcare providers get the whole picture. This would help caregivers navigate interactions with healthcare providers beyond the appointment, building confidence in their caring abilities off the clock, without supervision or official training, and lightening the emotional load.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trust Machines: Cryptocurrencies, Blockchains, and Humans in Cultures of Mistrust</title>
<link href="https://hdl.handle.net/1721.1/144933" rel="alternate"/>
<author>
<name>Guarna, Tomás Andrés</name>
</author>
<id>https://hdl.handle.net/1721.1/144933</id>
<updated>2022-08-30T03:05:19Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Trust Machines: Cryptocurrencies, Blockchains, and Humans in Cultures of Mistrust
Guarna, Tomás Andrés
Network technologies allow individuals to participate in technological market systems that can mediate trust independently from traditional public institutions. This presents a novel idea of governance that is distinct from the one in liberal democracies. I explore the use of cryptocurrencies (digital currencies based on cryptography) in Argentina to shed light on the social dynamics underlying technological market systems that mediate trust. These social dynamics include ideas, perceptions, and emotions, as well as specific practices that determine different relations to traditional institutions. I study how cryptocurrencies and blockchain (decentralized records that rely on cryptography) technologies are understood by Argentine enthusiasts and developers, and how communities of enthusiasts generate adequate social environments for the transmission of information and for emotional support. I highlight the discursive and social aspects of the phenomenon. Based on these findings, I describe the imaginary of participatory institutions (a vision where individuals engage with public institutions providing limited information on a consensual basis) and I describe how a city government in Argentina is interpreting this imaginary.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Simultaneous Localization and Mapping in Perceptually Aliased Underwater Environments</title>
<link href="https://hdl.handle.net/1721.1/144932" rel="alternate"/>
<author>
<name>Singh, Kurran</name>
</author>
<id>https://hdl.handle.net/1721.1/144932</id>
<updated>2022-08-30T03:50:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Active Simultaneous Localization and Mapping in Perceptually Aliased Underwater Environments
Singh, Kurran
The problem of semantic simultaneous localization and mapping (SLAM) is especially difficult in underwater environments due to sensor characteristics and terrain. The primary underwater sensor, sonar, is subject to multipath reflections, as well as an elevation angle ambiguity that makes it difficult to integrate its data into SLAM frameworks. Furthermore, the lack of training data makes it difficult to accurately obtain object detections from sonar for semantic, or object-based, SLAM. Finally, the technique of actively choosing trajectories that can take into account data association ambiguities between semantic landmarks is still an open research area. This work comprises two main contributions: the design and implementation of a Gaussian mixture representation for data association of semantic object detections in environments perceived with sonar, and the design and implementation of a path planning algorithm that allows a vehicle to actively seek trajectories that disambiguate and elucidate the robot's position and its map of the surrounding environment. These two techniques are tested in various experimental settings, with results showing the novel ability to actively navigate with an awareness of semantic object landmarks' data association ambiguities. Future work will involve further experimental evaluation on combining the underwater mapping techniques with the active navigation techniques developed in this thesis, as well as the development of more techniques for designing and training object detectors for sonar.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stakeholder mental model alignment influence on mid-stage performance of new product engineering teams</title>
<link href="https://hdl.handle.net/1721.1/144931" rel="alternate"/>
<author>
<name>Krehbiel, Nathan E.</name>
</author>
<id>https://hdl.handle.net/1721.1/144931</id>
<updated>2022-08-30T03:53:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Stakeholder mental model alignment influence on mid-stage performance of new product engineering teams
Krehbiel, Nathan E.
The engineering of complex systems-of-systems requires management of dependencies and coordination between multidisciplinary teams-of-teams. Coordination strategies and the understanding of supporting mechanisms are critical to execution. Mental models are cognitive structures used to describe system form and function and predict outcomes. The concept of shared mental models, the influence of shared mental models, and the methods of elicitation and analysis are all relatively recent and active areas of study. This thesis reviews past work in these domains and proposes a treatment to stimulate the development of two attributes of shared mental models in the context of mid-stage project development of R&amp;D teams. This work explores the influence of structured context tools on the sharedness and breadth of team shared mental models and, in turn, the influence of team shared mental models on team performance within the mid-stage R&amp;D context. An experiment was designed placing participants in a team-based, role-play scenario where they were asked to work as a team-of-teams developing a system-of-systems, in particular the verification and validation activity of that development. Information was collected on team mental models utilizing a concept similarity rating elicitation method and analyzed utilizing Pathfinder network and pairwise comparison methods. Performance data were collected via custom software and analyzed utilizing a Pareto fitness ranking method. Trends were detected in the data indicating the importance of shared mental model development. Limitations, recommendations, and future areas of study are provided, including recommendations to: adjust the application of the treatment, include additional instrumentation in the experiment, and explore additional attributes of the teams’ mental models.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards More Generalizable Neural Networks via Modularity</title>
<link href="https://hdl.handle.net/1721.1/144929" rel="alternate"/>
<author>
<name>Boopathy, Akhilan</name>
</author>
<id>https://hdl.handle.net/1721.1/144929</id>
<updated>2022-08-30T03:49:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards More Generalizable Neural Networks via Modularity
Boopathy, Akhilan
Artificial neural networks have become highly effective at performing specific, challenging tasks by leveraging a large amount of training data. However, they are unable to generalize to diverse, unseen domains without requiring significant retraining. This thesis quantifies the generalization difficulty of a task as the amount of information content in the inductive biases required to solve a task, and demonstrates that generalization difficulty relies crucially on the number of dimensions of generalization. Inspired by the modularity of biological learning systems, this thesis then demonstrates theoretically and empirically that modularity promotes generalization by providing a powerful inductive bias. Finally, the thesis proposes a new challenging spatial navigation benchmark that requires a broad degree of generalization from a small amount of training data. This benchmark is presented as a test of the generalization capability of learning algorithms; based on the results of this thesis, modularity is expected to promote generalization on this benchmark.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Learned and Rule-Based Policies for Hospital Bed Assignment</title>
<link href="https://hdl.handle.net/1721.1/144928" rel="alternate"/>
<author>
<name>Wong, Hallee E.</name>
</author>
<id>https://hdl.handle.net/1721.1/144928</id>
<updated>2022-08-30T03:30:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Evaluating Learned and Rule-Based Policies for Hospital Bed Assignment
Wong, Hallee E.
In many complex sequential decision making problems in healthcare such as hospital bed assignment, resources are limited and shared between patients. Hospital bed assignment is an important decision making problem because a patient's bed assignment influences their medical outcomes, including their risk of developing a healthcare associated infection (HAI). In this thesis, we consider the problem of assigning patients to hospital beds with the goal of reducing the incidence of HAIs. We propose a two part approach to this task: first, use reinforcement learning to learn a function from logged data for assessing different patient and bed pairs, then use this function to design policies for sequentially assigning batches of patients to beds. We develop a simulation to demonstrate this approach and conduct experiments exploring how assumptions about the environment affect the performance of learned and rule-based policies. We examine the performance of weighted importance sampling for off-policy evaluation. Our results show that policies that prioritize patients with the highest risk of poor outcomes outperform purely greedy policies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Testing, Learning, and Optimization in High Dimensions</title>
<link href="https://hdl.handle.net/1721.1/144927" rel="alternate"/>
<author>
<name>Gatmiry, Khashayar</name>
</author>
<id>https://hdl.handle.net/1721.1/144927</id>
<updated>2022-08-30T03:52:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Testing, Learning, and Optimization in High Dimensions
Gatmiry, Khashayar
In this thesis we study two separate problems: (1) What is the sample complexity of testing the class of Determinantal Point Processes? and (2) Introducing a new analysis for optimization and generalization of deep neural networks beyond their linear approximation. For the first problem, we characterize the optimal sample complexity up to logarithmic factors by proposing almost matching upper and lower bounds. For the second problem, we propose a new regime for the parameters and the algorithm of a three-layer network model which goes beyond the neural tangent kernel (NTK) approximation; as a result, we introduce a new data-dependent complexity measure which generalizes the NTK complexity measure introduced by [Arora et al., 2019a]. We show that despite nonconvexity, a variant of stochastic gradient descent (SGD) converges to a good solution for which we prove a novel generalization bound that is proportional to our complexity measure.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>As the Curtain Falls</title>
<link href="https://hdl.handle.net/1721.1/144921" rel="alternate"/>
<author>
<name>Cunningham, Joel</name>
</author>
<id>https://hdl.handle.net/1721.1/144921</id>
<updated>2022-08-30T03:03:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">As the Curtain Falls
Cunningham, Joel
For the last century, architects have embraced the efficiencies of the curtain wall. As a technological solution that mediates between our interior desires and the realities of the outside world, these envelope systems have been liberally applied to buildings across the globe. Regardless of longitude and latitude, minimal vitreous enclosures have grown to represent progress and modernisation - the triumph of capitalist logic over all else.&#13;
&#13;
Today, however, as concerns surrounding climate change are pulled to the forefront of contemporary culture, the myopic tendencies with which these enclosures were designed are starting to become apparent. With use-lives rarely exceeding 50 years, many curtain walls are now struggling to keep pace with contemporary change, not only falling short of ever more stringent performance standards, but also of rapidly evolving cultural demands. With a vast number of these envelopes set to fail in the not-so-distant future, it is now simply a matter of time until the world’s first generation of crystalline skylines are either erased or replaced.&#13;
&#13;
When considering the sheer quantity of curtain walls that have been assembled over the last fifty years, in urban centres as diverse as New York and New Delhi, the true magnitude of this issue starts to become apparent. As a generation of young architects, we are set to inherit an inventory of large buildings possessing perfectly sound structures, but fundamentally flawed envelopes. Concurrently sitting in the midst of what has come to be known as a “climate crisis”, it seems an appropriate time to question our current paradigm of enclosure design. Do we really need more short-term solutions, or a fundamental shift in the way we perceive and produce the outer inches of our architecture?
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thin shell foundations: Embodied carbon reduction through materially efficient geometry</title>
<link href="https://hdl.handle.net/1721.1/144920" rel="alternate"/>
<author>
<name>Feickert, Kiley Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/144920</id>
<updated>2022-08-30T03:11:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Thin shell foundations: Embodied carbon reduction through materially efficient geometry
Feickert, Kiley Anne
Due to increasing global population, floor area is expected to double by 2060. At the same time, the building sector contributes 11% of global greenhouse gas emissions annually as a result of current construction processes. Therefore, if global warming is to be limited to 1.5°C above pre-industrial levels, reducing embodied carbon will play a key role and business-as-usual construction processes must be reconsidered. This research aims to reduce carbon emissions associated with reinforced concrete structural elements while addressing the need for a significant increase in adequate housing due to rapid urbanization.&#13;
&#13;
The structural floor system, frame and foundations represent the systems with the most potential to limit emissions, as they are the biggest contributors to embodied carbon in a building. In contexts where labor costs drive construction costs, particularly in the Global North, material is consumed excessively at the expense of time. This research proposes shell foundations in lieu of spread foundations, drawing from historical applications such as Félix Candela’s Customs Warehouse, built in 1953. Shells distribute loads more efficiently through their cross-section, reducing the quantity of material required structurally which ultimately reduces their embodied carbon.&#13;
&#13;
In this research, existing analytical equations are applied in a parametric design workflow to evaluate the environmental impact of conventional prismatic foundations and shell foundations for the same design load. For a 2MN column load on clay soil, shells reduce embodied carbon in foundations by 48%. By applying this approach systematically, insights are gained regarding their applicability to various building typologies and site conditions. For high applied loads, and soils with low bearing capacity, shells significantly outperform their prismatic counterparts. Foundations are then considered within the context of a whole building to determine the potential downstream savings when multiple systems are shape optimized. When floor slabs are shape-optimized in addition to using shell foundations, the embodied carbon of a building can be reduced by 72%.&#13;
&#13;
Digital fabrication offers a pathway to economically build materially efficient foundations while addressing the additional time and labor often associated with more complex geometry. For example, advances in 3D printing earth suggest local soil can act as formwork if printed in the required shape to receive the shell geometry. Additionally, subtractive methods are explored, where earth is compacted and milled to create formwork for a shell foundation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smart Remote Personal Health Monitoring System: Addressing Challenges of Missing and Conflicting Data</title>
<link href="https://hdl.handle.net/1721.1/144918" rel="alternate"/>
<author>
<name>Zhu, Ye</name>
</author>
<id>https://hdl.handle.net/1721.1/144918</id>
<updated>2022-08-30T03:29:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Smart Remote Personal Health Monitoring System: Addressing Challenges of Missing and Conflicting Data
Zhu, Ye
Clinical usage of Remote Patient Monitoring (RPM) systems has surged during the past two years. Driven by an increase in demand during the COVID-19 pandemic, Internet of Medical Things (IoMT) systems are becoming much more diverse and prevalent. They are excellent candidates for monitoring patients’ health status and disease state, for predicting patients’ response to treatment, for alerting extraordinary, acute, or emergency events, for analyzing and managing large datasets, and for preventing disease progression and symptom manifestation.&#13;
&#13;
Although healthcare technology has improved over the years, two main challenges to enhancing remote patient monitoring or improving professional telehealth programs continue to be interoperability and data handling. Many new solutions have been under investigation as the pandemic shifts the world’s perspective on how healthcare should be performed to treat acute, chronic, psychological and infectious diseases.&#13;
&#13;
This thesis first focuses on using a system thinking approach to design and architect a low-cost and scalable RPM system for general applications. The key challenges and potential solutions are discussed from the system design and architecture points of view. &#13;
&#13;
To examine in depth the particular data challenges faced by researchers implementing big-data analytics for remote monitoring, the second part of this thesis selects the remote smart cardiac health monitoring system as the example system for detailed technical analysis. Several deep learning methods, including RNN-, BRITS-, GAN-, and DeepAR-based methods, are applied to address the missing data issues during remote monitoring. The methods are tested and compared to each other. A federated learning approach is also explored and proposed to be implemented in distributed remote patient monitoring systems for improving privacy and security.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Thinking for Prioritizing Technology Research &amp; Development in Public Administration</title>
<link href="https://hdl.handle.net/1721.1/144917" rel="alternate"/>
<author>
<name>Makino, Yuya</name>
</author>
<id>https://hdl.handle.net/1721.1/144917</id>
<updated>2022-08-30T03:30:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Systems Thinking for Prioritizing Technology Research &amp; Development in Public Administration
Makino, Yuya
In Japan's policies on science and technology, resources may be distributed without any clear theoretical basis. The Cabinet Office, which prepares the basic plan for science and technology policy, and the ministries and agencies that implement science and technology policy under it have divergent evaluations of projects.&#13;
&#13;
For this reason, we reviewed past papers, drew on their ideas, and created an evaluation formula. Based on it, we evaluate five projects in the environment and energy sectors. These projects were chosen primarily because many other projects have not yet prepared the indicators necessary for this type of evaluation.&#13;
&#13;
Building on this, we create indicators for setting priorities in the basic plan and discuss how they could be used within the Cabinet Office. This will allow the Cabinet Office, which formulates the Science and Technology Basic Plan, to manage the projects included in that plan. Project priorities can be set more clearly than before based on the indicators, resulting in a more rational allocation of limited budget and human resources. Until now, explicit prioritization has not been part of the budget assessment, but this indicator is one of the evaluations that will contribute to that action.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telehealth in Sub-Saharan Africa: A Human-Centered Design Approach to Bridging Gaps in Healthcare and Wellbeing Across the African Diaspora</title>
<link href="https://hdl.handle.net/1721.1/144916" rel="alternate"/>
<author>
<name>Onuoha, Chinelo</name>
</author>
<id>https://hdl.handle.net/1721.1/144916</id>
<updated>2022-08-30T03:46:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Telehealth in Sub-Saharan Africa: A Human-Centered Design Approach to Bridging Gaps in Healthcare and Wellbeing Across the African Diaspora
Onuoha, Chinelo
The advances in telehealth have allowed us to reimagine access to healthcare. Doctors and health providers can reach more patients in more flexible ways at reduced costs. The rise of telehealth has also presented more opportunities to design community-focused solutions and interventions that address the needs of communities in sub-Saharan Africa and the diaspora. A few of the needs identified are access to affordable care providers in rural/remote areas, healthcare management and tracking of individual health outcomes over time, healthcare management for family members in different areas of the world, and monetary support for medical procedures for family, friends, or community members.&#13;
&#13;
In partnership with DoctorNow, a telehealth startup based in Ghana, our aim for this analysis is to understand how we can use human-centered design to educate and empower individuals and families in Africa on their healthcare and management options using telehealth as well as better connect and support the wellbeing of families and communities across the African diaspora.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solving the Traveling Salesman Problem via Semantic Segmentation with Convolutional Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/144914" rel="alternate"/>
<author>
<name>Chin, J. K. Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/144914</id>
<updated>2022-08-30T03:01:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Solving the Traveling Salesman Problem via Semantic Segmentation with Convolutional Neural Networks
Chin, J. K. Samuel
The Traveling Salesman Problem (TSP) is a problem that has been formally studied since the 1930s and attracts great theoretical and practical interest. The theoretical aspects are particularly interesting as the TSP is an NP-hard problem and exact solutions for large TSPs are difficult to obtain. On the practical side, the growth of e-Commerce has resulted in more deliveries and it is increasingly important to obtain higher quality routes to increase efficiency. To that end, we introduce a Human Inspired Heuristic (HIH) that converts a road network semantic map into a truncated distance matrix that can be passed to a traditional TSP solution algorithm. The HIH can be further augmented in the image domain with our proposed novel Convolutional Neural Network (CNN). Our proposed CNN takes as input this original road network semantic map and outputs a reduced road network of plausible paths that are learned from near-optimal route instances. Through extensive numerical experiments, we find that the additional pre-processing done by the CNN does not improve the performance of the HIH. In the context of real-world applications, an HIH designed based on physical constraints already works well. While the CNN in its current form generally fails to outperform the HIH, we did find one instance where it did. This suggests that there is potential for CNNs to outperform, and a promising research direction is to predict semantic maps in an autoregressive manner and reduce the reliance on, or remove entirely, the HIH.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ground Station Mixed-Signal PCB and SFP Ethernet-to-Optical Connector for the Deployable Optical Receiver Aperture (DORA) CubeSat</title>
<link href="https://hdl.handle.net/1721.1/144913" rel="alternate"/>
<author>
<name>Arnold, Julia Marshall</name>
</author>
<id>https://hdl.handle.net/1721.1/144913</id>
<updated>2022-08-30T03:02:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Ground Station Mixed-Signal PCB and SFP Ethernet-to-Optical Connector for the Deployable Optical Receiver Aperture (DORA) CubeSat
Arnold, Julia Marshall
The Deployable Optical Receiver Aperture (DORA) project at the Jet Propulsion Laboratory (JPL), in collaboration with Arizona State University (ASU), aims to demonstrate a 1 Gbps data rate for crosslink communications among small spacecraft. There are numerous applications for this technology, including satellite swarms/constellations and surface-to-orbit communications. DORA allows a satellite’s primary mission to continue without requiring the satellite to reorient for communication, thus enabling missions to use low-cost, off-the-shelf Attitude Determination and Control Systems (ADCS). DORA meets these specifications by replacing the spacecraft’s traditional receiving telescope with incident-angle-sensitive photodiodes to steer the onboard transmitting laser in the corresponding direction. The DORA optical ground terminal (OGT) will be stationed on the ground to communicate with the DORA CubeSat in flight. This thesis project addresses the ground station analog and digital electronics and data transfer needed to deliver data received from the DORA CubeSat to its final destination for processing and storage. The ground station mixed-signal PCB (GS-MSPCB) will serve as the interface between the terminal’s FPGA and the optical ground station control components. The FPGA will connect via an enhanced small form-factor pluggable (SFP+) port to an SFP switch and Wi-Fi 6 router so that data may be accessed wirelessly.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Asymptotic &#119905;-Wise Independence of Substitution-Permutation Networks</title>
<link href="https://hdl.handle.net/1721.1/144912" rel="alternate"/>
<author>
<name>Pelecanos, Angelos</name>
</author>
<id>https://hdl.handle.net/1721.1/144912</id>
<updated>2022-08-30T03:07:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Non-Asymptotic &#119905;-Wise Independence of Substitution-Permutation Networks
Pelecanos, Angelos
In this thesis, we study the &#119905;-wise independence of block ciphers following the Substitution-Permutation Network design to prove resilience against cryptanalytic attacks and show non-asymptotic bounds for two widely-used ciphers. There are two main contributions of this thesis.&#13;
&#13;
In the first part of this thesis, we study the pairwise independence of AES. Replacing the INV &#119878;-box with an ‘ideal’ variant, we are able to compute tight convergence properties and prove that this ideal AES is pairwise independent in 5 rounds. As a corollary, we show how to simulate the ideal AES variant using the true AES, after silencing parts of some AES rounds. We call the resulting construction censored AES and we prove that it is pairwise independent in 92 rounds. Since this variant is modeled after AES, but does not perform a significant fraction of the mixing steps, we believe that our result is evidence that the true AES is pairwise independent in less than 100 rounds.&#13;
&#13;
In the second part of this thesis, we study the &#119905;-wise independence of the MiMC cipher. In particular, we use exponential sums results from algebraic number theory to show that 7&#119905;+&#119900;(&#119905;) rounds of MiMC on a prime order field are &#119905;-wise independent. This result is tight up to constant factors and is the first proof of &#119905;-wise independence for any concrete cipher.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Automated Assessment of Crowdsourced Crisis Reporting for Enhanced Crisis Awareness and Response</title>
<link href="https://hdl.handle.net/1721.1/144911" rel="alternate"/>
<author>
<name>Lewis, Dylan R.</name>
</author>
<id>https://hdl.handle.net/1721.1/144911</id>
<updated>2022-08-30T03:28:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards Automated Assessment of Crowdsourced Crisis Reporting for Enhanced Crisis Awareness and Response
Lewis, Dylan R.
The availability of information during a climate crisis event is critical for crisis managers to assess and respond to crisis impact. During crisis events, affected residents post real-time crisis updates on platforms such as RiskMap and Twitter. These updates provide localized information, which has the potential to enhance crisis awareness and response. However, with limited resources, crisis managers may endure information overload from the inundation of these updates. Prior work has demonstrated the potential of machine learning (ML) methodologies to mitigate this problem. We have identified limitations in the prior work, including the lack of involvement of crisis managers in the development and evaluation of an ML methodology.&#13;
&#13;
To address these limitations, we propose a novel framework and ML methodology which investigate the efficacy of various ML methods in enhancing crisis awareness and response beyond model performance metrics. This framework aims to iteratively embed the information needs and priorities of crisis managers during crisis into the design of the ML methodology. We cooperated with crisis managers in Fukuchiyama City (FC), a city in Japan which is susceptible to flood events, and analyzed crowdsourced crisis image and text data from past FC flood events. We devised the Flood Presence image classification task, constructed Train/Dev/Test splits, and annotated images from FC. We report a weighted F1 score of 92.1% on the test split and 82.5% on the FC images. Using the results of our image analysis ML methodology and the insights we gained from crisis managers, we iterated on the design of our text analysis ML methodology. This led to the creation of the Human Risk text classification task which is tailored to a subset of the identified information needs of the crisis managers. To align with the priorities of crisis managers for this task, we determined the model evaluation metric to be the F2 score. We report an F2 score of 92.8% on an FC crisis text test dataset, which is a significant improvement over the baseline score of 43.4%.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Cross-Platform Bridging Library for Native Mobile SDKs</title>
<link href="https://hdl.handle.net/1721.1/144910" rel="alternate"/>
<author>
<name>Li, Yanlin</name>
</author>
<id>https://hdl.handle.net/1721.1/144910</id>
<updated>2022-08-30T03:32:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Building a Cross-Platform Bridging Library for Native Mobile SDKs
Li, Yanlin
Mobile application development frequently requires usage of third-party software development toolkits (SDKs), which usually come in two variants, Android and iOS. This necessitates two separate workflows and expertise to create the application on both platforms, rendering the native development process both time-intensive and costly. Although the emergence of cross-platform frameworks like React Native attempts to streamline the process by allowing developers to share code on multiple operating systems via a single code base, many SDKs provide no React Native integration — a bridge must be built out of the native versions.&#13;
&#13;
In this thesis, we present a React Native bridging library for Cambridge Mobile Telematics’ Android and iOS SDKs. We explore and evaluate abstraction and implementation strategies for mitigating differences between the two SDKs and platforms. Accompanying the library is the React Native Sample App, which demonstrates the library in action. We then evaluate the React Native Sample App by comparing first its line count and secondly its performance against native versions of sample applications previously built by CMT. Finally, we suggest possible software design choices that can facilitate cross-platform development. Our results suggest that a bridging library, on a cross-platform framework, is promising in improving code portability and thus reducing development effort in building mobile applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Origin of the Lunar Ultramafic Glasses Constrained by Experiments and Models</title>
<link href="https://hdl.handle.net/1721.1/144909" rel="alternate"/>
<author>
<name>Guenther, Megan E.</name>
</author>
<id>https://hdl.handle.net/1721.1/144909</id>
<updated>2022-08-30T03:20:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Origin of the Lunar Ultramafic Glasses Constrained by Experiments and Models
Guenther, Megan E.
To place further constraints on the origin of the lunar ultramafic glasses and the evolution of the lunar interior, phase equilibrium experiments are carried out on two synthetic compositions which represent hybridized, bulk source compositions of (1) high-titanium and (2) very low-titanium/high-aluminum primary magmas. The compositions are designed to produce liquids compositionally similar to the high-Ti Apollo 14 Black (A14B) glass (16.4 wt.% TiO₂) and the high-Al Apollo 14 Very Low Titanium (A14VLT) glass (0.61 wt.% TiO₂, 9.60 wt.% Al₂O₃). Experiments on the synthetic source composition “HiTi1” at pressures of 1.5-2.0 GPa, temperatures of 1380-1460°C, and degrees of melting ≈ 30% produce the best fitting melts to A14B. Experiments on the synthetic source composition “VLTCum1” at pressures of 1.8-2.0 GPa, temperatures of 1460-1480°C, and degrees of melting ≈ 30-45% produce the best fitting melts to A14VLT. The forward melting experiments performed on both source compositions contain equilibrium mineral assemblages that match those obtained through inverse melting experiments on the glass compositions. Experimental conditions that produced good fits to the target glass compositions overlap with conditions corresponding to previously determined olivine-orthopyroxene multiple saturation pressures and temperatures for the glasses. Our experimental results confirm melting from a compositionally heterogeneous lunar mantle source that was hybridized through cumulate mantle overturn. Using a petrogenetic mass balance model, we suggest source components which could have been involved in the production of these hybridized source regions. We also calculate density as a function of pressure for several high-Ti glasses as well as the A14VLT glass and determine if these liquids are positively buoyant at their hypothesized depth of origin relative to their mantle residue.
We find that the A14VLT glass is always positively buoyant at relevant depths, while some of the high-Ti glasses are positively buoyant only at depths corresponding to more oxidizing source region compositions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GRAND-assisted Optimal Modulation</title>
<link href="https://hdl.handle.net/1721.1/144908" rel="alternate"/>
<author>
<name>Ozaydin, Basak</name>
</author>
<id>https://hdl.handle.net/1721.1/144908</id>
<updated>2022-08-30T04:02:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">GRAND-assisted Optimal Modulation
Ozaydin, Basak
For Gaussian channels with peak and average power constraints, the optimal modulation (OM) schemes are known to have nonuniform probability distributions over the signal points. An established way to obtain these distributions is to assign different numbers of bits to different constellation points. However, this method leads to challenges in demodulation: if a symbol is identified falsely, the differing bit lengths of the symbols can cause bit insertions or deletions, which may in turn cause error propagation. Hence, the difficulty of realizing channel-optimal distributions on constellation signals has impeded OM from becoming widely utilized in communication systems. In this thesis, we propose a practical system for OM that uses only a simple padding scheme instead of the complex mechanisms in the current literature. A guess-based error correction demodulator lies at the core of the proposed system. Together with the padding scheme of our choice, our novel lightweight variant of the Guessing Random Additive Noise Decoding (GRAND) demodulator protects the system against insertions and deletions. We show that with our approach an overall gain of up to 2 dB in energy per bit over noise spectral density (&#119864;&#119887;/&#119873;0) is achievable compared to Quadrature Amplitude Modulation (QAM) with the same number of points.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving electricity supply in the Indian state of Odisha using under-the-grid micro-grid technology</title>
<link href="https://hdl.handle.net/1721.1/144907" rel="alternate"/>
<author>
<name>Kulkarni, Aparna Ravikumar</name>
</author>
<id>https://hdl.handle.net/1721.1/144907</id>
<updated>2022-08-30T03:39:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Improving electricity supply in the Indian state of Odisha using under-the-grid micro-grid technology
Kulkarni, Aparna Ravikumar
On 28 April 2018, Narendra Modi, the Prime Minister of India, announced that every single village in India had been electrified. Despite continuous efforts by the Indian government through various rural electrification programs, the issue of reliable and affordable electricity persists in the rural areas of India. In addition, the distribution companies (DISCOMs) that supply power to the rural areas are in poor financial health due to high operating costs, high AT&amp;C losses, power theft, and subsidized consumer tariffs. In the state of Odisha, the frequent occurrence of cyclones and floods and the presence of wildlife activity make grid maintenance a complex affair, so the DISCOMs incur exceptionally high O&amp;M costs. The objective of this thesis is to study issues that result in systematic economic losses in localized areas for Tata Power Odisha (TPO), which has recently acquired distribution licenses for Odisha, and consequently to propose measures that can mitigate or solve the problem. Multiple hypothetical scenarios were devised considering the potential decisions of TPO and other stakeholders involved in the distribution operation. A techno-economic model was used to simulate these scenarios for a cluster of villages in Odisha, looking at the possible outcome for each of them. A reference electrification model was used to obtain the local grid design. Using the best data available to us, the results showed that when performance-based incentives on target reliability are not applied, local generation operating completely off-grid should be used as an alternative once the annual O&amp;M cost for the cluster of villages rises above a certain value. If incentives are applied, operating the local generation set in a connected microgrid-under-the-grid model gives the best operating benefits for reliability, performance, and business viability.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dataset Deduplication with Datamodels</title>
<link href="https://hdl.handle.net/1721.1/144905" rel="alternate"/>
<author>
<name>Liao, Yunxing</name>
</author>
<id>https://hdl.handle.net/1721.1/144905</id>
<updated>2022-08-30T03:29:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Dataset Deduplication with Datamodels
Liao, Yunxing
Large curated datasets have been essential to the development of deep learning models across many disciplines. Consequently, the properties of these datasets have a large impact on the behavior of these models. As machine learning pipelines increasingly leverage more unlabelled datasets—which tend to undergo less curation than labelled datasets—controlling data quality becomes even more important. We focus on a particular aspect of data quality: train-test leakage or duplicate examples. These can cause overestimation of models’ performance on benchmarks among other issues. In this work, we apply datamodels, a framework for analyzing the behavior of a model class as a function of its training data, to deduplicate unlabelled datasets. Inspired by the recent CLIP model, we focus on detecting duplicates between YFCC15M and the ImageNet validation dataset. Our results demonstrate how to adapt datamodels effectively for these filtering tasks in unsupervised, large-scale settings. We finish by discussing the challenges of our method and duplicate detection more broadly.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Gaseous Emboli Mimics in an ECMO Flow Phantom</title>
<link href="https://hdl.handle.net/1721.1/144904" rel="alternate"/>
<author>
<name>Liu, Sabrina</name>
</author>
<id>https://hdl.handle.net/1721.1/144904</id>
<updated>2022-08-30T03:02:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Generating Gaseous Emboli Mimics in an ECMO Flow Phantom
Liu, Sabrina
Patients undergoing extracorporeal membrane oxygenation (ECMO) therapy are prone to developing emboli. These unattached masses of solid blood clots and gaseous air bubbles have the potential to occlude blood vessels and lead to complications such as neurological damage. Existing ultrasound methods for detecting and characterizing them are designed and tested on data sets that often are small, are not representative of clinical conditions, or lack a ground truth to compare the results to. We aim to construct a flow phantom that fills these gaps.&#13;
&#13;
We build upon prior work on this project by mixing a translucent fluid that mimics the acoustic and rheological properties of blood. We explore various bubble generation designs and produce gaseous emboli mimics with diameters as small as 250 µm. In addition, we experimentally confirm a monotonic dependence between bubble diameter and peak backscattered power under no flow conditions, which can help with sizing emboli. Finally, we investigate interactions between ultrasonic acoustic waves and emboli through simulations in k-Wave. This work makes progress towards ultimately developing a well-tested Doppler ultrasound system that can detect and characterize emboli in ECMO.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monkey: A Distributed Orchestrator for a Virtual Pseudo-Homogenous Computational Cluster Consisting of Heterogeneous Sources</title>
<link href="https://hdl.handle.net/1721.1/144903" rel="alternate"/>
<author>
<name>Stallone, Matthew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144903</id>
<updated>2022-08-30T03:48:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Monkey: A Distributed Orchestrator for a Virtual Pseudo-Homogenous Computational Cluster Consisting of Heterogeneous Sources
Stallone, Matthew J.
As machine learning research becomes increasingly ubiquitous, novel algorithms and state-of-the-art models are progressing to an advanced state with considerably more complex and involved procedures. That is, to achieve groundbreaking results in such a climate, a researcher increasingly depends upon immense computational requisites to develop, train, and evaluate such algorithms. As a result, research labs are faced with the challenge of providing ample computational resources, and researchers are diverted from their core research in order to design, code, and configure experiments for the disparate computational resources provided.&#13;
&#13;
The framework proposed herein, therefore, strives to bridge the gaps between research labs, researchers, and computational resources by abstracting and automating the standard process of designing, training, and evaluating an algorithm. This framework, built upon the preexisting Monkey framework, will provide a fault-tolerant, decentralized system that is capable of scheduling and reproducing research training jobs. The framework maintains a virtual pseudo-homogenous cluster built on top of existing heterogeneous computational clusters. Moreover, the framework, designed to be flexible and cost-effective, also prioritizes user accessibility by providing access to an integrated machine learning toolkit with hyperparameter optimizers and a visualization dashboard.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learned String Index Structures for In-Memory Databases</title>
<link href="https://hdl.handle.net/1721.1/144902" rel="alternate"/>
<author>
<name>Spector, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/144902</id>
<updated>2022-08-30T03:36:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learned String Index Structures for In-Memory Databases
Spector, Benjamin
Within the field of machine learning for systems, learning-based methods have brought a new perspective to indexing by reframing it as a cumulative distribution function (CDF) modeling problem. The burgeoning field, despite its nascence, has brought with it many opportunities and efficiencies. However, most work in this area has focused on efficiently indexing numerical keys, as the additional challenges posed by indexing strings have prevented the effective application of these techniques to string domains. We hypothesize that the machine learning approaches which have, in recent years, made significant strides in scalar indexing applications can also be effectively adapted to string applications. First, we introduce the RadixStringSpline (RSS) learned index structure for efficiently indexing strings. RSS is a tree of learned radix splines, each indexing a fixed number of bytes. RSS achieves better performance than other structures by first using the minimal string prefix sufficient to distinguish the data, followed by a contextual learned model to predict its location. Additionally, the bounded-error nature of RSS accelerates the last-mile search and also enables a memory-efficient hash-table lookup accelerator. Second, we benchmark RSS against existing algorithms on several real-world string datasets and study its performance in depth. RSS approaches or exceeds the performance of traditional string indexes while using up to 300× less memory, suggesting this line of research may be promising for future memory-intensive database applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open Intent Generation Through Unsupervised Semantic Clustering of Task-Oriented Dialog</title>
<link href="https://hdl.handle.net/1721.1/144901" rel="alternate"/>
<author>
<name>Wagner, Julia N.</name>
</author>
<id>https://hdl.handle.net/1721.1/144901</id>
<updated>2022-08-30T03:03:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Open Intent Generation Through Unsupervised Semantic Clustering of Task-Oriented Dialog
Wagner, Julia N.
The natural language processing field has seen task-oriented dialog systems emerge as a strong area of interest in research and industry over the past years. However, the limited existence of complex and sufficiently annotated training data still places a bottleneck on the development of more advanced, domain-agnostic chatbots. Novel domains require extensive time and manual effort from experts when creating intents for new datasets to support dialog systems. This thesis analyzes a two-stage unsupervised semantic clustering and intent generation approach with multiple dataset-adaptive interchangeable methods. We examine various pre-trained embeddings, scoring objectives for the number of clusters, unsupervised clustering algorithms, intent generation techniques, and utterance tokenization schemes. We then run experiments with these combinations on three datasets: SNIPS, MultiWOZ, and real-world chat data. This is followed by quantitative metric and in-depth qualitative cluster-based evaluation. We show the benefits of bigram frequency intent generation as dataset irregularity increases and confirm the success of the universal sentence encoder embeddings with K-Means clustering. Additionally, our examination of real-world data underlines the importance of fine-grained utterance tokenization and points to the feasibility of these research methods on unpublished data. Altogether, this thesis provides a comprehensive analysis covering the abilities of the two-stage pipeline components to support open intent discovery for a variety of dataset characteristics, offering alternative solutions where beneficial for real-world applications. This gives insight into the optimal configuration to automatically generate a novel dialog training dataset from unstructured, unlabeled chat utterances. The code for this thesis can be found at https://github.com/jnwagner53/dialog-intent-generation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Domain Coincidence Processing and Memory Architecture for Real-Time Geiger Mode LiDAR</title>
<link href="https://hdl.handle.net/1721.1/144899" rel="alternate"/>
<author>
<name>McGuire, Jacob T.</name>
</author>
<id>https://hdl.handle.net/1721.1/144899</id>
<updated>2022-08-30T04:07:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Multi-Domain Coincidence Processing and Memory Architecture for Real-Time Geiger Mode LiDAR
McGuire, Jacob T.
Geiger-Mode LiDAR is a powerful time-of-flight range sensing technology that enables rapid, wide-area three-dimensional mapping with the unique capability of foliage penetration. These sensor arrays produce very high data rates on the order of 5 Gbps, requiring high-bandwidth motion compensation and coincidence processing to correlate the range returns and locate the modes in three-dimensional space. This thesis proposes a multi-processor system architecture and memory management techniques for performing orientation-compensated histogram generation and peak detection to filter the LiDAR data stream, removing redundancy and spurious outputs. The multi-processor design, employing custom logic in concert with multiple CPUs, offers a reduction in system size, weight, and power (SWaP) by several orders of magnitude when compared to existing CPU-only real-time coincidence processor designs. Behavioral simulations and hardware-in-the-loop testing offer partial proof of functionality for this design, which is capable of reducing the data rate by a factor of approximately 300, with output in the form of Cartesian coordinates that can be directly integrated into a point cloud data structure for viewing. This promising result warrants further development work on LiDAR system designs incorporating these concepts.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Influence of Turbulence Intensity on Energy Production at the Vineyard Wind 1 Farm</title>
<link href="https://hdl.handle.net/1721.1/144896" rel="alternate"/>
<author>
<name>Condon, Emily P.</name>
</author>
<id>https://hdl.handle.net/1721.1/144896</id>
<updated>2022-08-30T03:42:33Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Characterizing the Influence of Turbulence Intensity on Energy Production at the Vineyard Wind 1 Farm
Condon, Emily P.
Turbulence in the atmospheric boundary layer mitigates wake losses between turbines and is critical to power generation by wind farms. As offshore wind energy development increases in the United States, it is necessary to understand the impact turbulence intensity uncertainty has on predicting the annual energy production (AEP) of a wind farm. In numerical models used to calculate farm power, turbulence intensity is treated as a constant input, though it has variability in the physical atmosphere. Wind conditions, such as turbulence intensity, can be modeled with numerical weather prediction (NWP), or measured with in situ instruments that may not be available offshore in the exact location of interest. For the Vineyard Wind 1 offshore farm off the coast of Massachusetts, this uncertainty between data sources led to an overprediction of 4.4% by the NWP data compared to that of the in situ data. We found that assuming a median turbulence intensity, instead of the full turbulence intensity distribution, resulted in an AEP prediction difference of less than a third of a percent. While the quantitative results presented in this thesis are site-specific to the Vineyard Wind 1 farm, the results suggest that wind condition uncertainty has a significant impact on AEP uncertainty. The results motivate further in situ measurement campaigns to assess the wind conditions that offshore wind farms will encounter.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Attention-Based Learning for Combinatorial Optimization</title>
<link href="https://hdl.handle.net/1721.1/144893" rel="alternate"/>
<author>
<name>Smith, Carson</name>
</author>
<id>https://hdl.handle.net/1721.1/144893</id>
<updated>2022-08-30T03:17:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Attention-Based Learning for Combinatorial Optimization
Smith, Carson
Combinatorial optimization problems, such as the Traveling Salesman Problem (TSP), have been studied for decades. However, with the rise of reinforcement learning in recent years, many of these problems are being revisited as a way to gauge these new models in different environments. In this thesis, we explore the use of a new type of model, the Decision Transformer, a self-attention Transformer architecture that was recently developed for training on reinforcement learning problems. To analyze the model, we structure the Traveling Salesman Problem as a reinforcement learning problem and, by continuously varying parameters of the environment, measure the model's generalizability and success in this environment. This thesis aims to conduct an initial study of applying Decision Transformers to combinatorial optimization problems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What Makes Your Business A Winner: Empirical Analysis Using the Department of Defense Contracts with Small Manufacturing Firms</title>
<link href="https://hdl.handle.net/1721.1/144891" rel="alternate"/>
<author>
<name>Ingabire, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/144891</id>
<updated>2022-08-30T03:01:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">What Makes Your Business A Winner: Empirical Analysis Using the Department of Defense Contracts with Small Manufacturing Firms
Ingabire, Jessica
Strengthening small businesses in any economy remains a key pillar of economic growth, technological breakthrough, and national security. The Federal Government has always sought to support its large base of small businesses through various socio-economic policies and targeted initiatives to increase opportunities for small businesses. However, the latter still face numerous challenges when it comes to securing government contracts and building manufacturing capabilities. Focusing on small manufacturing firms contracting with the Department of Defense (DoD), this study sought to empirically evaluate the effects of certain attributes on the probability of winning contracts. My findings suggest that small manufacturing enterprises are mainly found in manufacturing hubs, belong to an R&amp;D ecosystem, meet certain quality standards, and are domestically focused. The latter finding may be of concern for the DoD, as the literature has shown that export-focused firms tend to be more competitive and innovative than non-exporting firms. As the U.S. aims to regain its place as a manufacturing power, its small business strategy may need a closer look to ensure that it attracts and retains small innovative firms as key parts of its supply chain.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pathways for Investor Climate Action: Trade-offs and Synergies under the Banner of Net Zero</title>
<link href="https://hdl.handle.net/1721.1/144890" rel="alternate"/>
<author>
<name>de Vasconcellos Oporto, Pedro</name>
</author>
<id>https://hdl.handle.net/1721.1/144890</id>
<updated>2022-08-30T03:20:34Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Pathways for Investor Climate Action: Trade-offs and Synergies under the Banner of Net Zero
de Vasconcellos Oporto, Pedro
Climate change is a critical issue for financial markets because of physical risk to assets from extreme weather events, and the risks and opportunities arising from the world’s transition to a low-carbon economy. This transition can be understood as a wave: a metaphor in which investors use different logics in response, resulting in them making the wave, riding the wave, or being hit by the wave. The latter means investors are at risk of shocks from technologies, policies, and regulations affecting their portfolios. Riding the wave represents mitigating portfolio risk and tapping into opportunities for improved financial performance, while making the wave is about finding opportunities to drive impact and mitigate systemic climate risk. We dive into how asset managers and asset owners make sense of the transition wave through qualitative means, including interviews and case studies. We show how investors are using Net Zero as an overarching goal and explore how they justify their strategies under that banner and what the resulting actions are. Using a system dynamics approach, we explore interactions arising from combining certain investor mechanisms for action, such as shareholder engagement, flexible capital provision, and divestment. We interpret these emerging effects as synergies or trade-offs between making the wave and riding the wave and chart the course for future research to understand the interactive effects of investor climate actions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Comic Artist’s Tools Suite: Centralized and Intuitive Non-Photorealistic Computer Graphics Renderings</title>
<link href="https://hdl.handle.net/1721.1/144885" rel="alternate"/>
<author>
<name>Gerr, Joanna</name>
</author>
<id>https://hdl.handle.net/1721.1/144885</id>
<updated>2022-08-30T04:01:42Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Comic Artist’s Tools Suite: Centralized and Intuitive Non-Photorealistic Computer Graphics Renderings
Gerr, Joanna
As advancements are made in Computer Generated Imagery (CGI) software and Non-Photorealistic Rendering (NPR) techniques, modern comic artists have reaped the benefits of technology-aided workflows by creating 3D CGI environments for their comic backgrounds. However, while several NPR techniques exist, there remains a high barrier to entry for many non-technical artists when learning how to use these techniques to their full ability. This paper introduces the Comic Artist’s Tools Suite (CATS), a system for the open-source 3D modeling software Blender, which enables artists to procedurally generate and synthesize a selection of curated Non-Photorealistic Rendering (NPR) effects through an accessible and user-friendly interface. Users can quickly create and adjust shaders that customize the style of their 3D environment backgrounds, bypassing the need to craft shaders by hand and speeding up the traditional workflow for rendering pipelines.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Action in Action: Process and Tools that Empower Students to Make a Real-world Impact Using Technology</title>
<link href="https://hdl.handle.net/1721.1/144881" rel="alternate"/>
<author>
<name>Pang, Hannah H. (Nicole)</name>
</author>
<id>https://hdl.handle.net/1721.1/144881</id>
<updated>2022-08-30T03:56:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Computational Action in Action: Process and Tools that Empower Students to Make a Real-world Impact Using Technology
Pang, Hannah H. (Nicole)
How can we help K-12 students who are learning computer science and artificial intelligence (A.I.) feel motivated, competent, and empowered? The computational action framework, proposed by Tissenbaum, Sheldon, and Abelson, suggests that the preferable way is to ensure that young people are creating technology projects that address issues in their community. I add to this framework by creating the computational action process, which is composed of a curriculum, a toolkit, and a website that teach five key concepts: defining a real-world problem; understanding users and communities; designing responsibly with and for users and communities; teamwork, project management, and implementation; and planning and making a long-lasting impact. From a research study conducted with 101 international young people in middle school and high school, results show that after learning the computational action process, students showed a significant increase in computational skill, digital empowerment, and self-efficacy. Students also demonstrated an improved understanding of the impact of technology on people and society and an improved ability to work towards solutions to ambiguous problems. This thesis describes the computational action process, presents the research, and analyzes the results, concluding with key findings, recommendations, and how this work contributes to the field of K-12 computer science education and A.I. literacy.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Text-Driven Movie Manipulation</title>
<link href="https://hdl.handle.net/1721.1/144880" rel="alternate"/>
<author>
<name>Reyes Espinoza, Victor M.</name>
</author>
<id>https://hdl.handle.net/1721.1/144880</id>
<updated>2022-08-30T04:06:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Text-Driven Movie Manipulation
Reyes Espinoza, Victor M.
In this thesis, we design a method for manipulating a character’s appearance through a textual description of their desired appearance. Our method consists of training a video-specific neural net model with an existing architecture for extracting keypoints, manipulating a representative frame to fit the user’s textual description through latent optimization, and producing a new video in a single forward pass. Our method requires an artist to follow three simple steps to manipulate a video and has only two inputs: a representative frame of the character the user wants to edit and a textual description of the desired appearance manipulation. Compared to work-intensive special effects, our method enables quick and flexible experimentation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Processing and Recommendation Engine for Stack Overflow Data</title>
<link href="https://hdl.handle.net/1721.1/144879" rel="alternate"/>
<author>
<name>Wang, Julia J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144879</id>
<updated>2022-08-30T03:29:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Natural Language Processing and Recommendation Engine for Stack Overflow Data
Wang, Julia J.
Query intent classification is important for information retrieval and problem solving. We use natural language processing and collaborative filtering algorithms to build a recommendation engine for Stack Overflow tag predictions. Our pipeline consists of document retrieval (TF-IDF and HOTT), text embedding (Sentence BERT), and classification (multi-label and multi-class). We experiment with neural networks and other classifier strategies to identify the most relevant Stack Overflow tags. We then use these tags to implement collaborative filtering and recommend solutions based on similar existing posts in the database. The results displayed in this paper use Stack Overflow’s public dataset (https://www.kaggle.com/stackoverflow/stackoverflow).
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heterogeneous Hardware Support for Apiary</title>
<link href="https://hdl.handle.net/1721.1/144878" rel="alternate"/>
<author>
<name>Weckwerth, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/144878</id>
<updated>2022-08-30T03:01:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Heterogeneous Hardware Support for Apiary
Weckwerth, Nathan
Function-as-a-Service (FaaS) platforms are an appealing option for developers because they save time and money by eliminating the work spent managing application servers. Apiary is a novel FaaS platform which performs extremely well on data-centric tasks by tightly integrating the computation and storage layers, eliminating the time spent transferring data between the two. Moreover, Apiary’s robust provenance system and straightforward programming model provide compelling reasons for developers to use it for both data-centric and compute-intensive tasks. In this paper, we detail a general architecture for using Apiary’s asynchronous programming model to implement compute-intensive tasks as external services. These external services are free to make use of specialized hardware such as GPUs, which provide extremely good performance for many typical compute-intensive tasks such as machine learning inference.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Efficacy of the 15-Minute City Using Large-Scale Mobility Data from the Perspective of Accessibility and User Choice: A Case Study on the Urban Food Environment</title>
<link href="https://hdl.handle.net/1721.1/144877" rel="alternate"/>
<author>
<name>Li, Tingyu</name>
</author>
<id>https://hdl.handle.net/1721.1/144877</id>
<updated>2022-08-30T04:03:54Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Modeling the Efficacy of the 15-Minute City Using Large-Scale Mobility Data from the Perspective of Accessibility and User Choice: A Case Study on the Urban Food Environment
Li, Tingyu
The growing popularity and adoption of the 15-minute city, a concept aimed at improving physical accessibility to services and amenities, indicates a global effort towards making cities more equitable and sustainable. However, at its core, the 15-minute city implies that accessibility can be quantified by proximity, and that people are more likely to visit amenities that are physically closer to them. In this study, we investigate the relationship between choice and spatial proximity in the context of healthy food accessibility by modeling an individual’s choice to visit their closest grocery store, and the extent to which certain sociodemographic variables contribute to their choice.&#13;
&#13;
Using logistic regression models on ∼7M grocery store visits from ∼72,000 people in the Greater Boston area, we show that proximity is not a good proxy for accessibility, and that people’s behaviors differ widely by sociodemographic traits, time, and type of amenity. These results indicate that distance cannot be used as the primary basis of a holistic urban design or accessibility policy. Instead, effective policies will need to be tailored to specific communities and categories of amenities in order to promote sustainable and equitable cities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallelizing Tree Traversals for Binomial Option Pricing</title>
<link href="https://hdl.handle.net/1721.1/144876" rel="alternate"/>
<author>
<name>Brunelle, Terryn</name>
</author>
<id>https://hdl.handle.net/1721.1/144876</id>
<updated>2022-08-30T03:35:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Parallelizing Tree Traversals for Binomial Option Pricing
Brunelle, Terryn
Quantitative finance analysts and software developers often need to develop efficient software implementations of pricing models, hedging tools, and other financial algorithms in order to support their research. Some of the most commonly used quantitative analysis tools include binomial trees, which are useful to represent possible future model states and estimate the current value of an asset.&#13;
&#13;
Open-source libraries such as QuantLib provide tools to help analysts implement such algorithms without needing to reinvent widely studied logic. Though such libraries are widely used by quantitative finance developers in industry and academia, the algorithms presented for binomial tree traversals do not take advantage of parallelism or optimizations for cache locality, such as those proposed by Zubair and Mukkamala in 2008. The optimizations presented by Zubair and Mukkamala have not been tested on modern machines, nor have they been implemented in a way accessible to quantitative analysts.&#13;
&#13;
We provide a performance analysis of the cache optimizations presented by Zubair and Mukkamala on modern machines. Beyond this, we contribute an open-source framework that enables developers to price a variety of binomial option types using an easy-to-use programming interface while taking advantage of the parallel, cache-optimized algorithm developed by Zubair and Mukkamala. We find that our optimizations achieve up to a seven-times speedup over a vanilla serial implementation, and a two-times speedup over a vanilla parallel implementation. Our performance results support continuing to explore cache-optimized algorithms for tree traversals and creating generalized frameworks for more widespread use.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Linear Programs with Polynomial Coefficients and Applications to 1D Cellular Automata</title>
<link href="https://hdl.handle.net/1721.1/144875" rel="alternate"/>
<author>
<name>Guo, Chenghao</name>
</author>
<id>https://hdl.handle.net/1721.1/144875</id>
<updated>2022-08-30T03:44:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Linear Programs with Polynomial Coefficients and Applications to 1D Cellular Automata
Guo, Chenghao
Given a matrix A and vector b with polynomial entries in d real variables δ = (δ₁, …, δ_d), we consider the following notion of feasibility: the pair (A, b) is locally feasible if there exists an open neighborhood U of 0 such that for every δ∈U there exists x satisfying A(δ)x≥b(δ) entry-wise. For d=1 we construct a polynomial time algorithm for deciding local feasibility. For d≥2 we show local feasibility is NP-hard.&#13;
&#13;
As an application (which was the primary motivation for this work) we give a computer-assisted proof of ergodicity of the following elementary 1D cellular automaton: given the current state ηₜ∈{0,1}^ℤ, the next state ηₜ₊₁(n) at each vertex n∈ℤ is obtained by ηₜ₊₁(n)=NAND(BSCδ(ηₜ(n−1)),BSCδ(ηₜ(n))). Here the binary symmetric channel BSCδ takes a bit as input and flips it with probability δ (and leaves it unchanged with probability 1−δ). It is shown that there exists δ₀ &gt; 0 such that for all 0 &lt; δ &lt; δ₀ the distribution of ηₜ converges to a unique stationary measure irrespective of the initial condition η₀.&#13;
&#13;
We also consider the problem of broadcasting information on the 2D grid of noisy binary-symmetric channels BSCδ, where each node may apply an arbitrary processing function to its input bits. We prove that there exists δ′₀&gt;0 such that for all noise levels 0&lt;δ&lt;δ′₀ it is impossible to broadcast information for any processing function, as conjectured in Makur, Mossel, Polyanskiy (ISIT 2021).
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Random Sequential Encoders for Private Data Release in NLP</title>
<link href="https://hdl.handle.net/1721.1/144874" rel="alternate"/>
<author>
<name>Jaba, Andrea</name>
</author>
<id>https://hdl.handle.net/1721.1/144874</id>
<updated>2022-08-30T03:36:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Random Sequential Encoders for Private Data Release in NLP
Jaba, Andrea
There are many scenarios that motivate data owners to outsource the training of machine learning models on their data to external model developers. While doing so, it is in data owners’ best interest to keep their data private, meaning that no third party, including the model developer, can learn anything more about their data than the labels associated with the machine learning task; this is difficult to guarantee while maintaining the model utility of said task. In computer vision, lightweight random convolutional networks have shown potential as encoders that balance privacy and utility. This thesis presents a novel exploration of random sequential encoders, (1) random recurrent neural networks and (2) random long short-term memory networks, as encoding schemes for private data release in natural language processing. Experiments were conducted to evaluate the utility and privacy of these encoders against known baseline encoding schemes with less privacy: (1) not using an encoder and (2) a random linear encoder. For the private release of a spam classification dataset, random long short-term memory network encoders maintained the most utility among all random encoders while remaining relatively robust to the privacy attacks this thesis considers, signaling a promising direction for future experiments.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Text-Free Audio Captions of Short Videos from Latent Space Representation</title>
<link href="https://hdl.handle.net/1721.1/144873" rel="alternate"/>
<author>
<name>Agarwal, Anisha</name>
</author>
<id>https://hdl.handle.net/1721.1/144873</id>
<updated>2022-08-30T03:51:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Text-Free Audio Captions of Short Videos from Latent Space Representation
Agarwal, Anisha
In this thesis, we re-implement previous work exploring image to speech captioning. We expand upon the work to implement video to speech captioning. Specifically, we implement a text-free image to speech captioning pipeline that integrates four distinct machine learning models. We alter the models to process video data rather than image data and analyze the resulting speech captions. We conduct experiments on the Wav2Vec2 and HuBERT Automatic Speech Recognition models, and identify which works best with synthesized speech.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Early Stage Design Sketches and Reflections on Prototyping</title>
<link href="https://hdl.handle.net/1721.1/144872" rel="alternate"/>
<author>
<name>Das, Madhurima</name>
</author>
<id>https://hdl.handle.net/1721.1/144872</id>
<updated>2022-08-30T04:08:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Assessing Early Stage Design Sketches and Reflections on Prototyping
Das, Madhurima
Designers routinely create informal “thinking” sketches to explore a design space, “talking” sketches to communicate design ideas during the early phases of the design process, and “learning” prototypes to test potential concepts. This study presents two new tools to assess novice designers’ sketch attributes and prototyping reflections in the context of an introductory design course. First, it proposes a rubric for assessing the quality of early stage design sketches, including line smoothness, proportion, and understandability. Of particular note is the contribution of assessing understandability as a metric for sketches as communication tools. This study also presents a tool to capture designer reflections after each iteration of a prototype. This tool records not only what is learned about a design but also designers’ personal and emotional reactions to the process. Sketching-related results show a positive correlation between sketch quality and understandability, indicating the importance of sketch quality especially when designers use sketches to communicate. Results also indicate that early stage sketch quantity, but not quality, is linked with design outcomes. The study also finds a link between frequency of sketching and higher maximum sketch quality scores (i.e. at least one highly rated sketch) as well as a correlation between individuals’ maximum sketch quality scores and overall design outcomes. Preliminary results around prototyping indicate that reflection on both the technical and emotional aspects of prototyping may be a worthwhile area of further study. Finally, several results point to novice designers’ lack of consistent focus on users in their prototyping reflections and presentations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fluid Shear Stress Effects on Cancer Metastasis</title>
<link href="https://hdl.handle.net/1721.1/144871" rel="alternate"/>
<author>
<name>Floryan, Marie A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144871</id>
<updated>2022-08-30T03:48:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Fluid Shear Stress Effects on Cancer Metastasis
Floryan, Marie A.
Metastasis is responsible for at least 66% of cancer-related deaths, yet it remains a poorly understood phenomenon. Fluid shear stress (FSS) is common to all metastatic events, but its effects on circulating tumor cells (CTCs) remain a mystery. It is known that shear stress can act either directly, causing membrane rupture and cell death, or by altering cell phenotype via mechanotransduction pathways, but the extent to which these and other unknown effects shape the metastatic cascade remains unresolved. In vivo models have been limited in their ability to effectively study this problem, largely because of the challenges in tracking tumor cells (TCs) in the circulation and poor control over the flow environment. In vitro platforms with fluid flow are not physiologically relevant due to their 2D nature, their requirement for large bulk volume, and their inability to recapitulate the entire metastatic cascade. Here, an in vitro flow system is presented that allows circulation of cells through physiologically relevant 3D microvascular networks (MVNs) in the presence or absence of tumor spheroids and organoids, addressing the limitations of current models. The flow system was first used solely with MVNs. Applying flow at a magnitude comparable to physiological levels of FSS resulted in fully perfusable MVNs and flow magnitude-dependent vessel remodeling. Flow also increases the lifespan of MVNs three-fold compared to static culture. Furthermore, higher flow resulted in fewer arrested TCs and a larger fraction of extravasated and proliferated TCs. Finally, we are able to support metastatic outgrowths up to 1000 μm in diameter. The system can be further expanded to incorporate more steps of the metastatic cascade, study the differences in survivability of single TCs or TC clusters, and use surgical samples to probe patient variability.
This platform allows for a controlled method of investigating the effects of fluid shear stress on the metastatic cascade and can reveal potential anti-cancer therapeutic targets.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resolution Tricks and Disaggregation Tools for Smart Power Metering</title>
<link href="https://hdl.handle.net/1721.1/144866" rel="alternate"/>
<author>
<name>Langham, Aaron William</name>
</author>
<id>https://hdl.handle.net/1721.1/144866</id>
<updated>2022-08-30T03:23:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Resolution Tricks and Disaggregation Tools for Smart Power Metering
Langham, Aaron William
A nonintrusive load monitor (NILM) aims to solve the energy disaggregation problem by incorporating power system analysis, signal processing, and machine learning. This thesis addresses two problems present in state-of-the-art nonintrusive load monitoring research. First, the ability of existing nonintrusive load monitoring techniques and data to generalize is very low, so any data collected for model training needs to be domain-specific. For this reason, this work explores the limits of power signal processing used by deployable NILMs. Second, load electrical behavior is almost always assumed to be stationary. Thus, this work presents Adaptive NILM, a set of feature space selection and classification tools useful for nonintrusive load monitoring with limited training data when load operation drifts over time. These techniques are synthesized into a new NILM software package that allows for high-level automation of resolution tracking, feature space evaluation, and adaptive classification. A new NILM hardware implementation, capable of wirelessly integrating data from distributed sensors, is described and demonstrated with case studies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Spatio-Temporal Graph Convolutional Networks</title>
<link href="https://hdl.handle.net/1721.1/144864" rel="alternate"/>
<author>
<name>Tell, Max R.</name>
</author>
<id>https://hdl.handle.net/1721.1/144864</id>
<updated>2022-08-30T03:31:40Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Dynamic Spatio-Temporal Graph Convolutional Networks
Tell, Max R.
Spatio-temporal modeling is an essential lens to understand many real-world phenomena from traffic [20] [10] to epidemiology [12]. Although forecasting time series is an exceptionally well-studied problem, recent years have seen impressive gains in the performance of graph learning as a paradigm for spatial learning problems. Some recent work has explored the intersection of these two fields but often assumes that the underlying graph structure is static. We introduce the Dynamic Spatio-Temporal Graph Convolutional Network (DST-GCN) as a novel architecture for spatio-temporal modeling with changing graph structure. DST-GCN employs a convolutional architecture to learn spatio-temporal relationships that provide strong generalization and attractive computational efficiency. We provide empirical results for several datasets from different domains that demonstrate the gains provided by DST-GCN.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RF-Based Indoor Localization Around Corners</title>
<link href="https://hdl.handle.net/1721.1/144863" rel="alternate"/>
<author>
<name>Cao, Peng</name>
</author>
<id>https://hdl.handle.net/1721.1/144863</id>
<updated>2022-08-30T03:15:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">RF-Based Indoor Localization Around Corners
Cao, Peng
Unmanned robots are increasingly used around humans in factories, malls, and hotels. As they navigate our space, it is important to ensure that such robots do not collide with people who suddenly appear as they turn a corner. Today, however, there is no practical solution for localizing people around corners. Optical solutions try to track hidden people through their visible shadows on the floor or a sidewall, but they can easily fail depending on the ambient light and the environment. More recent work has considered the use of radio frequency (RF) signals to track people and vehicles around street corners. However, past RF-based proposals rely on a simplistic ray-tracing model that fails in practical indoor scenarios. This thesis introduces CornerRadar, an RF-based method that provides accurate around-corner indoor localization. CornerRadar addresses the limitations of the ray-tracing model used in past work. It does so through a novel encoding of how RF signals bounce off walls and occlusions. The encoding, which we call the hint map, is then fed to a neural network along with the radio signals to localize people around corners. Empirical evaluation with people moving around corners in 56 indoor environments shows that CornerRadar achieves a median error that is 3x to 12x smaller than past RF-based solutions for localizing people around corners.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Marine carbohydrate-active enzymes illuminate microbial ecology, evolution, and carbonate precipitation</title>
<link href="https://hdl.handle.net/1721.1/144851" rel="alternate"/>
<author>
<name>Cutts, Elise Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/144851</id>
<updated>2022-08-30T04:06:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Marine carbohydrate-active enzymes illuminate microbial ecology, evolution, and carbonate precipitation
Cutts, Elise Margaret
Marine carbohydrate-active enzymes (CAZymes) in microbial mats build, modify, and degrade the matrix of exopolymeric substances (EPS) that structure these ancient ecosystems and control mineral formation via the alkalinity engine and organic surfaces involved in mineral nucleation and growth. Marine microbialites—mineralized microbial mats—preserve the earliest unequivocal evidence of life on Earth and billions of years of the evolutionary history of microbial life. And the gene sequences of marine CAZymes preserve a yet-unexplored record of microbial evolution that can be constrained by the fossil record of major algal lineages. This thesis explores marine carbohydrate-degrading enzymes as molecular-biological windows on microbial ecology, evolution, and mineral precipitation. Section one reviews the application of molecular biological tools to the study of carbonate mineralization in microbial mats and identifies CAZymes as a promising target for future molecular biological studies of microbe-mineral interactions. Section two provides a brief overview of unique algal polysaccharides that could be used as phylogenomic “standard candles” for dating microbial phylogenies. Section three is an original metagenomic and laboratory study of carbohydrate degradation in a Shark Bay pustular mat community. The combined metagenomic and experimental analyses reveal a widespread potential for EPS degradation among MAGs from Shark Bay pustular mats and suggest distinct roles for many phyla that are reported at high abundances in mats. Taken together, the literature reviews and new results presented in this thesis demonstrate that marine CAZymes are an exciting open frontier in the molecular biology of marine ecosystems. New studies of these enzymes promise discoveries with the potential to illuminate the evolutionary history of microbes—both as preserved in the rock record and in the genomes of modern organisms.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond the Brick: Collaborations with a Sensing Microbial System in the Built Environment</title>
<link href="https://hdl.handle.net/1721.1/144850" rel="alternate"/>
<author>
<name>Gonzalez, Laura Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/144850</id>
<updated>2022-08-30T03:57:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Beyond the Brick: Collaborations with a Sensing Microbial System in the Built Environment
Gonzalez, Laura Maria
The environmental damage caused by buildings has become clear over the past two decades. Their construction and operation have significantly worsened the climate crisis through enormous annual CO2 emissions. Rectifying this damage will require an ideological shift, one that involves working with invisible microscopic living systems: the very same living organisms that have helped shape the Earth’s ecosystems over billions of years.&#13;
&#13;
At present, designers have made efforts to reduce our dependency on carbon-intensive resources by integrating living organisms into the built environment using biomaterials. However, difficulties keeping organisms alive have reduced their implementation to mere fabrication tools. Emerging synthetic biology techniques present an opportunity to integrate organisms into the built environment through engineered living materials. These materials can self-assemble and maintain the embedded properties of microbes, such as self-healing and adaptive response capabilities. The design process focuses on shaping the conditions for their livelihood through the simultaneous design of form, matter, and microbe, exemplifying an organism-centric design process that spans across scales.&#13;
&#13;
In this thesis, I propose that living materials offer a path to address the environmental repercussions of the built environment while also transforming how we inhabit and interact with buildings over their lifespan, achieved through a collaboration with microscopic living organisms. To this end, I explore the design and fabrication of a biocemented engineered living material through in silico, in vitro, and in vivo methods. I propose a design methodology driven by wet-lab experimentation and define design constraints for macro-scale applications. I then fabricate biocemented brick modules and demonstrate their ability to bind into larger assemblies. Lastly, I evaluate the microbial viability of the designed living material and demonstrate sensing and reporting capabilities on the biomineralized surface.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Text Analytics to Inform Deviation Root Cause Analysis in Biomanufacturing</title>
<link href="https://hdl.handle.net/1721.1/144849" rel="alternate"/>
<author>
<name>Nersesian, Lois E.</name>
</author>
<id>https://hdl.handle.net/1721.1/144849</id>
<updated>2022-08-30T03:42:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Text Analytics to Inform Deviation Root Cause Analysis in Biomanufacturing
Nersesian, Lois E.
In biomanufacturing, product quality and safety are critical and there are many controls in place to ensure that processes are followed within the prescribed operating limits. However, deviations from these processes inevitably occur, sometimes requiring in-depth investigations to determine the cause and prevent recurrence. Understanding quality trends on the manufacturing line is also critical in preventing quality issues. At Amgen, a leading biotechnology company, results of such investigations are stored long-term but only in a partially structured manner, making it hard to leverage this historical data to enhance deviation investigation efficiency and study long-term quality trends. The goal of this project is to use these historical records to draw insights into the investigation process and help increase the efficiency and accuracy of future deviation investigations and overall quality assurance. To achieve this, we use natural language processing tools to derive information from text describing deviations and causal factors. Several methods are explored, namely, unsupervised clustering using machine learning and natural language processing to identify and cluster similar causal factors, explicit text extraction which identifies known key terms such as equipment mentioned in the text, and process-dependent step classification which leverages reference documents describing the manufacturing process to assign records to process steps. The outputs of these methods are presented in a proof-of-concept tool which can be used to assist investigators. Our results indicate that all these methods have benefits and drawbacks but can be used together for maximal insights. Based on the status of each method, we suggest that Amgen work to create a tool to present potential causal factors to investigators immediately, incorporating clustering and text extraction methods after minor refinement, and continue to explore the potential of process-driven methodologies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wall-Walking and Other Bannable Offenses: Discipline and Deviant Play in World of Warcraft</title>
<link href="https://hdl.handle.net/1721.1/144848" rel="alternate"/>
<author>
<name>Carney, Laurel</name>
</author>
<id>https://hdl.handle.net/1721.1/144848</id>
<updated>2022-08-30T03:30:34Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Wall-Walking and Other Bannable Offenses: Discipline and Deviant Play in World of Warcraft
Carney, Laurel
As in most games, World of Warcraft’s player characters’ virtual bodies are designed and built to comply with, and be acted upon by, the governing systems of the gameworld they inhabit. The technical equations that determine how a body walks, the rules that define what walking is and what bodies are, are co-developed with a specific definition of a world in mind, and vice versa—both the player character’s body and the terrain on which it stands are constructed in order to more effectively reinforce the functions and norms of the other. What can we discover by looking at the way these creations interact with and influence each other? What room is there in the space between what a virtual world and body can do, and what they shouldn’t do, and how can players make use of it?&#13;
&#13;
This thesis closely reads World of Warcraft’s formal elements, its mechanics and its aesthetic grammar, in order to argue that the game’s virtual bodies and environments are embedded with ideologies and norms designed to reinforce its developer’s financial and political governance over their virtual world. By better understanding the methods by which these norms shape the world in World of Warcraft, players can experiment and co-create new forms of play that complicate, break, and perhaps even overturn the rules that seek to mark their play as deviant.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Environmental Impact of Space Launches and Societal Response</title>
<link href="https://hdl.handle.net/1721.1/144846" rel="alternate"/>
<author>
<name>Sirieys, Elwyn</name>
</author>
<id>https://hdl.handle.net/1721.1/144846</id>
<updated>2022-08-30T04:01:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Environmental Impact of Space Launches and Societal Response
Sirieys, Elwyn
Space-based technologies can be valuable assets for environmental protection and for supporting Sustainable Development Goals (SDGs) on Earth. However, these activities have their own share of environmental impacts. Throughout their life cycle, launch vehicles affect their local and global environments both on Earth and in space. In particular, they generate direct emissions of combustion products into every layer of the atmosphere, inducing ozone depletion and radiative forcing. Recent literature indicates that these consequences of space launches are understudied, especially considering the space industry's projected growth. This thesis aims at assessing the situation in terms of current and future environmental impact, as well as society’s response to the issue.&#13;
&#13;
A historical analysis of space launch vehicle designs is conducted, based on a comprehensive record of 6,502 orbital launches for the period 1957-2021, to inform on technological evolution and implications in terms of emissions. This study suggests that, as part of today's unprecedented diversity in rocket designs, key decisions regarding engines and propellants are being made which will decide the future atmospheric impact of the industry. Trends in the space sector are analyzed and scenarios are generated to assess the future situation.&#13;
&#13;
For the first time, societal response to this issue is analyzed quantitatively and compared with three case studies in the automotive industry, the satellite industry, and aviation. A total of 463,630,586 news articles, 771,604 legal documents, and 10,836,398 academic publications spanning 30 years were examined.&#13;
&#13;
Alternative paths forward are proposed to foster a more sustainable future for the space launch industry, in terms of actionable design choices, impact assessment methodologies, regulatory options, and market-based incentivization mechanisms based on a sustainability index for launch vehicles.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Evaluation of a Lithium-Ion Pouch Battery Cell in Simulated Space Environment for a Pico-Satellite Concept (PicoSat)</title>
<link href="https://hdl.handle.net/1721.1/144845" rel="alternate"/>
<author>
<name>Dubey, Rakesh</name>
</author>
<id>https://hdl.handle.net/1721.1/144845</id>
<updated>2022-08-30T03:51:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Performance Evaluation of a Lithium-Ion Pouch Battery Cell in Simulated Space Environment for a Pico-Satellite Concept (PicoSat)
Dubey, Rakesh
PicoSat is a notional miniature (pico-) satellite concept that includes a low-voltage custom-manufactured Lithium-ion pouch (secondary) battery cell sandwiched between two Silicon wafers. It is imperative to evaluate the performance of the battery cell in a simulated space environment to determine its suitability for the PicoSat concept and identify any potential issues (such as those due to swelling of the battery cell pouch) during the operation of the satellite with the battery cell in space.&#13;
&#13;
In this thesis, an approach to non-destructively evaluate the performance of a Lithium-ion pouch battery cell for space applications is demonstrated. First, the fundamental capabilities and the setup required to test a low-voltage Li-ion battery cell are developed. Such fundamental capabilities include the ability to charge and discharge a low-voltage battery cell at a controlled rate and the ability to control the temperature of the battery cell-under-test. These capabilities are fundamental as they may be used on a variety of low-voltage secondary battery cell chemistries intended for pico- or nano-class satellites regardless of the form factor of the cells. Next, capabilities specific to testing the custom Li-ion pouch battery cell for the PicoSat concept are developed. Such capabilities include estimating the pressure variation inside the battery cell pouch and charging of the battery cell using simulated photovoltaic cells in a simulated space environment. Finally, the results of the performance evaluation experiments are presented along with the analyses. &#13;
&#13;
The performance of the battery cell was evaluated during charge-discharge cycling at fixed operating temperatures in the range [-40 degrees C, 50 degrees C], during three-week storage at an elevated temperature (with no charge-discharge cycling), and in simulated low-earth orbits. It is concluded that the battery cell is suitable for use on a PicoSat-type satellite as the cell attained a steady capacity-positive (in mAh) operating state in simulated orbits. Although an increase in the cell pouch pressure was observed in all experiments, the extrapolated pressures over 365 days were estimated to be orders of magnitude below the fracture strength of Silicon wafers and the ideal tensile strength of Silicon as found in the literature.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Dynamics Between Original Equipment Manufacturers (OEMs) and Tier-1 Suppliers in the Automotive Industry</title>
<link href="https://hdl.handle.net/1721.1/144844" rel="alternate"/>
<author>
<name>Zhang, Yuru</name>
</author>
<id>https://hdl.handle.net/1721.1/144844</id>
<updated>2022-08-30T03:44:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Innovation Dynamics Between Original Equipment Manufacturers (OEMs) and Tier-1 Suppliers in the Automotive Industry
Zhang, Yuru
Over the past seven decades, the automotive supply chain has been restructured into a tiered system. OEMs and tier-1 suppliers innovate together through joint product development programs: each OEM has multiple suppliers working on different subsystems, and one supplier may offer similar subsystems to multiple OEMs. This work examines the relationship between an OEM and its tier-1 suppliers, attending to both parties’ interests and to the balanced coexistence of competition and collaboration, using objective data sources. Treating the OEM, its tier-1 supplier, and the competitors in the whole product market as a system, a system-level quantitative study of the buyer-supplier relationship is conducted. A system dynamics (SD) model is proposed to describe the dynamics of an OEM-supplier relationship. To validate the model, the author collects non-subjective data and performs empirical studies on two subsystems – passive keyless entry (PKE) and high-speed transmission (HST) – between the model years 2004 and 2021. The empirical studies validate the hypothesis that the effects of competitive and collaborative behaviors on whole-product competitiveness depend on market competition, a result the model reproduces: when the market is stable, the more competitive party in a relationship has a better financial outcome; when the market is highly competitive, collaborative behaviors boost the long-term performance of the OEM-supplier ecosystem. The study also shows that the proposed model delivers accurate predictions with non-subjective inputs when heavy dependence is present in an OEM-supplier relationship.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study on the impact of collaboration between power systems and electric vehicles on the costs and CO2 emissions of energy system</title>
<link href="https://hdl.handle.net/1721.1/144843" rel="alternate"/>
<author>
<name>Yasuhara, Kiyohide</name>
</author>
<id>https://hdl.handle.net/1721.1/144843</id>
<updated>2022-08-30T03:25:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A study on the impact of collaboration between power systems and electric vehicles on the costs and CO2 emissions of energy system
Yasuhara, Kiyohide
The Japanese government has set a goal of achieving a decarbonized society by 2050, and decarbonizing the power system is one of the important issues. However, as the introduction of Variable Renewable Energies (VRE) increases, systems to absorb fluctuations in power generation will be required, which could significantly increase the cost of the overall energy system. On the other hand, Electric Vehicles (EVs) are expected to be widely used in the future, and using otherwise-unutilized CO2-free power to charge EVs, or supplying power from EVs to the grid (Vehicle to Grid (V2G)), may help to mitigate these fluctuations. This thesis therefore examines the extent to which costs and CO2 emissions in Japan's overall energy system can be reduced by mitigating power system fluctuations through collaboration between power systems and EVs.&#13;
&#13;
First, we hypothesized that the impact of the shift from combustion engine vehicles to EVs (the “EV shift”) on costs and CO2 emissions may vary depending on how the electricity is generated, but we found that the EV shift may reduce costs and CO2 emissions whether LNG or hydrogen is used as fuel for thermal power. In addition, we hypothesized that the impact of EV introduction on the reduction of costs and CO2 emissions may vary by region, and we found that the impact is particularly large in regions with low power demand and high VRE potential. Furthermore, we hypothesized that a cooperative V2G charger, which can control both charging and discharging periods, may be the best way to reduce costs and CO2 emissions, but the results showed that this charger type may not be the most cost-effective in many cases. However, this trend may change if charger unit costs decrease and the penetration ratio of renewable energy increases, so it is important to determine the type of charger to install based on these factors.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-Based Verification of The Large Lenslet Array Magellan Spectrograph</title>
<link href="https://hdl.handle.net/1721.1/144841" rel="alternate"/>
<author>
<name>Stenzel, June</name>
</author>
<id>https://hdl.handle.net/1721.1/144841</id>
<updated>2022-08-30T03:53:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Model-Based Verification of The Large Lenslet Array Magellan Spectrograph
Stenzel, June
For large, complex engineering systems, testing and verification are essential for ensuring operational success. Proposed and developing ground-based astronomy instruments continue to increase in complexity. It is therefore necessary for instrumentation engineering programs to leverage the advancements in systems engineering, and especially the model-based systems engineering (MBSE) methods, seen in space systems. Model-based verification (MBV) is a flexible approach to MBSE that supports continuous verification and testing of a system through integrated system modeling as the system progresses from design through assembly, integration, and operation.&#13;
&#13;
This thesis presents an MBV methodology and evaluates it by applying it to verification of the Large Lenslet Array Magellan Spectrograph (LLAMAS), a spectroscopy instrument currently being developed at the MIT Kavli Institute for Astrophysics and Space Research. LLAMAS is a facility-class instrument that will be installed on the Magellan Telescope at the Las Campanas Observatory in 2022 and will be available to the scientific community for making large field-of-view spectroscopy observations. Three model-based verification activities are implemented and evaluated: thermal budget analysis using a comprehensive system model, component and subsystem verification for optical throughput, and science requirement verification with simulated observation scenarios.&#13;
Recommendations are made for implementation and future study of MBSE.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety in US Air Force Tandem Seat Pilot Training Applying STAMP Processes</title>
<link href="https://hdl.handle.net/1721.1/144840" rel="alternate"/>
<author>
<name>Munekata, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/144840</id>
<updated>2022-08-30T03:28:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Safety in US Air Force Tandem Seat Pilot Training Applying STAMP Processes
Munekata, Adam
The complexity and sophistication of modern systems pose many challenges for conventional accident and hazard analysis techniques. Although these methods have proven reliable in the past, they often lack context, are subject to partiality, and fail to approach systems holistically. The System-Theoretic Accident Model and Process (STAMP) addresses these challenges and helps to identify how unsafe control actions can lead to hazards.&#13;
&#13;
This thesis applies STAMP principles to an accident assessment and hazard analysis of tandem seat pilot training in the United States Air Force (USAF). The USAF Safety Investigation Board (SIB) process has gained valuable insight from past accidents; however, it suffers from the shortcomings mentioned above.&#13;
&#13;
Northrop Grumman’s T-38 Talon is the advanced jet trainer used by the USAF since the 1960s, soon to be replaced by Boeing’s T-7 Red Hawk as early as 2023. Causal Analysis based on Systems Theory (CAST) is an accident analysis technique based on STAMP. CAST is applied to the USAF SIB report of the Vance T-38 crash of 21 November 2019. The findings of the SIB report are compared with the findings of CAST, and recommendations are provided to improve the SIB process by applying STAMP principles.&#13;
&#13;
System-Theoretic Process Analysis (STPA) is a hazard analysis technique based on STAMP principles. STPA is applied to a hypothetical Next Generation Trainer, since the specifications of the T-7 are not known. Recommendations are generated to improve safety, focusing primarily on the conflict of control authority between the instructor and student pilots. Design recommendations and suggestions are provided for the next generation trainer to address concerns that may have been overlooked in preliminary hazard analysis.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Structures and Methods for Safe On Orbit Robotic Assembly of Small Satellites</title>
<link href="https://hdl.handle.net/1721.1/144839" rel="alternate"/>
<author>
<name>Dahl, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/144839</id>
<updated>2022-08-30T03:50:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Development of Structures and Methods for Safe On Orbit Robotic Assembly of Small Satellites
Dahl, Mary
While the advent of small satellites such as CubeSats has made space quicker and easier to access, the turn-around time is still insufficient for rapid deployment. Example situations are replacing nodes in large constellations, time-sensitive science experiments, or disaster relief imaging. A solution can be found in on-orbit assembly. By flat-packing a large quantity of snap-fit compatible boards for a plurality of CubeSats and assembling them on orbit, time from conception to operation can be significantly lowered. Crucial to on-orbit robotic assembly is the design of the satellite. Traditional CubeSats, with rails, precise pin connectors, dense headers, and small wires, are difficult to assemble for all but the most advanced robots. Instead, this thesis discusses the design and testing of custom-made structures for assembly by a Cartesian robot with an electromagnetic end-effector. These structural designs need to ensure consistent, repeatable, and safe assembly of satellites, both on the ground and on orbit. The requirements for such a system are examined with a Systems-Theoretic Process Analysis (STPA). Additionally, different types of compliant design features, such as sliding latches and chamfer overhangs, have their performance analyzed through repeated insertion tests. It is found that, with compliant designs, a Cartesian robot can assemble the designed structure of eight boards and four rails in approximately four minutes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making A Neighborhood Illegal Zoning, Nimbyism, and Housing Justice in Bensonhurst, Brooklyn</title>
<link href="https://hdl.handle.net/1721.1/144836" rel="alternate"/>
<author>
<name>Prigov, Andrey</name>
</author>
<id>https://hdl.handle.net/1721.1/144836</id>
<updated>2022-08-30T03:04:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Making A Neighborhood Illegal Zoning, Nimbyism, and Housing Justice in Bensonhurst, Brooklyn
Prigov, Andrey
As the national housing crisis pushes low-income tenants out of urban inner cores, the local land use politics of the peripheral neighborhoods that they move to have taken on great importance. Part historical narrative and part zoning analysis, this thesis follows the zoning history of one such place—Bensonhurst, a large residential neighborhood in New York City’s outer ring that finds itself caught between the land use demands of longtime homeowners and the housing needs of lower-income immigrant tenants. While other scholars have explored neighborhood dynamics like these, few have followed them through to the zoning code. By exploring how Bensonhurst residents shaped the zoning code and how, in turn, the zoning code shaped Bensonhurst residents, the thesis provides context for the opportunities and constraints that inform the neighborhood’s trajectory today. As the thesis identifies, at the heart of the neighborhood’s debates around zoning are the legacies of two tensions: the first between homevoters and newcomers, and the second between homevoters and planners. The first reflects racial, ethnic, economic, and spatial anxieties about newcomers and what they may bring. The second expresses the means of residents to approve or deny city-led projects and set their own agenda. As this thesis posits, the making of Bensonhurst is the story of how these factors reverberated against each other. Understanding how policymakers can navigate these tensions to build affordable housing in what is already one of the densest, most transit-oriented places in the country is a crucial first step in addressing the impact of the housing shortage on the nation’s most marginalized renters.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Invisible Hand or the Handgun: Ride Hailing, Violence, and Political Settlements in the South African Urban Mobility Market</title>
<link href="https://hdl.handle.net/1721.1/144835" rel="alternate"/>
<author>
<name>Ebeid, Ehab A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144835</id>
<updated>2022-08-30T03:10:42Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Invisible Hand or the Handgun: Ride Hailing, Violence, and Political Settlements in the South African Urban Mobility Market
Ebeid, Ehab A.
Markets are thought to constitute a force that fosters peace and brings both material and non-material improvements to society. Technology and innovation are similarly believed to offer developmental promise. The spread of ride hailing platforms such as Uber to developing country contexts is conceived of as a market formalization tool that produces desirable social outcomes, reducing unemployment while creating modern and efficient transportation systems. Why, then, do we see violence as a central, persistent feature of ride hailing markets in the developing world?&#13;
&#13;
To understand violent conflict in urban mobility markets, I conducted semi-structured interviews with drivers, policymakers, platform executives, activists, and other industry actors in Gauteng, South Africa, the province that encompasses the cities of Johannesburg and Pretoria, and one of the earliest and largest ride hailing markets in the developing world. Relying on theoretical frameworks from new institutional economics and critical legal studies, I show that violence plays an important role in market governance. Violence on the part of taxi associations is embedded in the market’s informal institutions and constitutes an enforcement mechanism that underpins the territorial norms that market actors understand as ‘laws’.&#13;
&#13;
I explain the emergence and persistence of market violence as the result of a mismatch between the distribution of power and the distribution of benefits among market actors, engendered by ride hailing’s entry. To better explain how the market is governed and contested, I propose a more precise typology of power and legitimacy, and clarify the sources of power belligerents rely upon to survive and prevail in conflicts. Finally, I use the contrasting fates of two ride hailing services, Uber Bus and Uber Go, to illustrate how groups deploy power to contest the market, and how regulatory decisions go beyond traditional market considerations.&#13;
&#13;
By studying a market characterized by both old and new forms of violent conflict, this thesis inserts violence into literatures on markets, which largely ignore conflict; and applies the macro-institutional political settlements framework to the meso level of a specific market. As policymakers contend with the spread of ride hailing firms, a broad and empirically based view of how they are organized, governed, and how they function in different contexts is needed, to better understand how to regulate them.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Genetic Algorithm Framework using Variable Length Chromosomes for Vehicle Maneuver Planning</title>
<link href="https://hdl.handle.net/1721.1/144833" rel="alternate"/>
<author>
<name>Yu, Benjamin James</name>
</author>
<id>https://hdl.handle.net/1721.1/144833</id>
<updated>2022-08-30T03:41:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Genetic Algorithm Framework using Variable Length Chromosomes for Vehicle Maneuver Planning
Yu, Benjamin James
Incorporating reconfigurability demonstrates great potential for increasing the performance and/or lowering the cost of complex systems. Reconfigurability enables a system to adapt and dynamically respond to the specific objectives it encounters, rather than simply being optimized toward a general case. One such class of reconfigurable systems is fleets of maneuvering vehicles. Considering this class naturally leads to the question of how to generate the optimal set of maneuvers over an operational campaign. This thesis presents a genetic algorithm framework with Variable Length Chromosomes (VLC) to find this optimal set of maneuvers. The framework generates Pareto-optimal sets of maneuvers using the non-dominated sorting genetic algorithm II (NSGA-II). The use of VLC removes the need for a human designer to impose a priori assumptions on the number and/or timing of vehicle maneuvers; instead, the optimizer is free to grow or shrink the number of maneuvers as needed. In addition, the genetic algorithm approach enables the framework to handle problem domains and constraints that include non-linear behavior, discontinuities, and non-smoothness. A small, simplified 1D abstract problem is formulated and solved with the framework to familiarize the reader, before two case studies are explored in depth: (1) a reconfigurable satellite constellation observing Earth targets, and (2) an ocean-going maneuvering platform completing a cross-Atlantic voyage while simultaneously offering itself as a calibration target to overhead Low Earth Orbit (LEO) satellites. The analysis shows that maneuver plans generated by the framework can increase the imaging performance of reconfigurable satellites by 25 to 35 percent, and the calibration metric for the ocean-going platform by up to 40 percent. Throughout this thesis, the key design decisions of the framework are discussed.
The framework itself is available as Julia code, which has been written to take full advantage of any distributed computing cluster, particularly those managed by SLURM.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesizing Object Models from Natural Language Specifications</title>
<link href="https://hdl.handle.net/1721.1/144829" rel="alternate"/>
<author>
<name>Gu, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/144829</id>
<updated>2022-08-30T04:10:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Synthesizing Object Models from Natural Language Specifications
Gu, Alex
Program synthesis has traditionally excelled in tasks with precise specifications, such as input-output examples and formal constraints, by using structured and algorithmic approaches based on enumerative search and type inference. However, traditional synthesis techniques have no mechanism for incorporating real-world knowledge, which is commonplace in software engineering. Motivated by this, we introduce a new synthesis task known as specification reification: synthesizing concrete realizations of vague, high-level application specifications. We focus on a specific instance of this task: generating object models from natural language application descriptions. Toward this goal, we present three approaches for object model synthesis that leverage domain knowledge from the GPT-3 language model. In addition, we design a scoring metric to evaluate the success of synthesized object models on seven sample tasks, such as classroom management and pet store applications. We demonstrate that our language-model-based synthesizers generate object models that are comparable in quality to human-generated ones.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>"No One Washes a Rental Car”: Parsing Contested Narratives of Worker Ownership in the Massachusetts Cooperative Economy</title>
<link href="https://hdl.handle.net/1721.1/144827" rel="alternate"/>
<author>
<name>Rivera, Tyler Luis</name>
</author>
<id>https://hdl.handle.net/1721.1/144827</id>
<updated>2022-08-30T03:54:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">"No One Washes a Rental Car”: Parsing Contested Narratives of Worker Ownership in the Massachusetts Cooperative Economy
Rivera, Tyler Luis
The COVID-19 pandemic has laid bare the structural deficiencies of a capitalist system in which short-term profits and shareholder value are prioritized over human well-being and economic stability. With the search for a more humane and resilient economic model more urgent than ever, a groundswell of interest in worker cooperatives — firms that are collectively owned and democratically managed by their employees — has recently emerged. For many, worker cooperatives (co-ops) represent a means to raise wages, improve working conditions, mitigate precarity, and build resilience for workers and communities. But worker co-ops have also been envisaged as vehicles for more radical economic change. Indeed, prominent scholars of worker co-ops have framed the burgeoning cooperative movement as a transformative political project striving to build alternative economic institutions to challenge and replace capitalism altogether. Compelling though this vision may be, this thesis explores what is largely missed by such top-down characterizations of the cooperative model’s transformative potential: the perspectives of actual worker-owners. Animated by this gap in the discourse on worker ownership, this thesis addresses a critical question raised by the absence of workers’ voices: to what extent do the actors ostensibly charged with leading such a transformative movement (i.e., worker-owners) think of their businesses as viable alternatives to capitalism and of themselves as harbingers of a new economic paradigm?&#13;
&#13;
Drawing from semi-structured interviews with ten worker-owners in worker co-ops based in Massachusetts, this research reveals how worker-owners hold complex, multifaceted understandings of worker ownership and its potential to transform our economy. I find that worker-owners embrace narratives emphasizing how worker ownership can improve the lives and livelihoods of working people within capitalism, while also positioning worker co-ops as stepping stones toward a new economy built around a fundamentally different set of productive arrangements and economic relations. Ultimately, I argue that these multivalent dispositions reflect a hybrid politics of worker ownership rooted in the real-life experiences of worker-owners caught between the intellectual vanguard of the cooperative movement and the working-class polity of which they are a part, with implications for the future of the cooperative movement.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creation Through Destruction: Artifacts of Worldbuilding in Experiential Legacy Games</title>
<link href="https://hdl.handle.net/1721.1/144826" rel="alternate"/>
<author>
<name>Otto-Hawke, Jay Jaeger</name>
</author>
<id>https://hdl.handle.net/1721.1/144826</id>
<updated>2022-08-30T03:59:28Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Creation Through Destruction: Artifacts of Worldbuilding in Experiential Legacy Games
Otto-Hawke, Jay Jaeger
This work draws connections between physically, emotionally, and spiritually powerful media: storytelling, rituals, and games. All three utilize worldbuilding to have a profound impact on our lives and our games. By tracing their entangled evolution over time, it becomes clear that legacy games are one of their more recent forms. Legacy games employ many of the mechanisms of liberation and transformation rituals, setting them apart from similar genres. Legacy games began with a forward-looking goal to subvert the assumptions of traditional games, but many of the recent games labeled “legacy” have strayed from this original ethos. This work returns to the vanguard “legacy game” definition and employs iterative design research to push the boundaries of the game design space. To create meaningful, playful social interactions, the game iterations explore the power of various practices in their mechanics: fire, funeral rites, ancestral connections, generational knowledge, community-building, and more. The unique mechanism of “creation through destruction” emerged as the central tenet of memorable, meaningful legacy games.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Wireless Sensor Network to Detect Lameness in Dairy Cows</title>
<link href="https://hdl.handle.net/1721.1/144823" rel="alternate"/>
<author>
<name>Nguyen, Thanh Nha</name>
</author>
<id>https://hdl.handle.net/1721.1/144823</id>
<updated>2022-08-30T03:58:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Development of Wireless Sensor Network to Detect Lameness in Dairy Cows
Nguyen, Thanh Nha
Bovine mastitis, lameness, and calving are three major problems in the dairy farming industry. They lead to economic losses and decreased animal welfare. With the industrialization of dairy farms, these problems are magnified by the lack of skilled labor. This work introduces a design for a wireless sensor network to automate the health monitoring of every cow on a farm. Through constant monitoring of health statistics, we can predict the early onset of mastitis, lameness, and calving, thereby reducing the burden on farmers. The network architecture is designed to accommodate a large number of cows (1000s) and the size of dairy farms (area &gt;1500000 m²) by combining a short-range and a long-range communication protocol (Bluetooth Low Energy (BLE) and LoRa). This work also explores the use of IMUs for lameness detection in walking gaits. Taking advantage of the holonomic constraints of animal limbs, Principal Component Analysis is able to compress the highly correlated locomotion data into a smaller dimension in which abnormal gaits can be differentiated.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exocompilation for Productive Programming of Hardware Accelerators</title>
<link href="https://hdl.handle.net/1721.1/144822" rel="alternate"/>
<author>
<name>Ikarashi, Yuka</name>
</author>
<id>https://hdl.handle.net/1721.1/144822</id>
<updated>2022-08-30T03:01:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Exocompilation for Productive Programming of Hardware Accelerators
Ikarashi, Yuka
High-performance kernel libraries are critical to exploiting accelerators and specialized instructions in many applications. Because compilers are difficult to extend to support diverse and rapidly-evolving hardware targets, and automatic optimization is often insufficient to guarantee state-of-the-art performance, these libraries are commonly still coded and optimized by hand, at great expense, in low-level C and assembly. To better support development of high-performance libraries for specialized hardware, we propose a new programming language, Exo, based on the principle of exocompilation: externalizing target-specific code generation support and optimization policies to user-level code. Exo allows custom hardware instructions, specialized memories, and accelerator configuration state to be defined in user libraries. It builds on the idea of user scheduling to externalize hardware mapping and optimization decisions. Schedules are defined as composable rewrites within the language, and we develop a set of effect analyses which guarantee program equivalence and memory safety through these transformations. We show that Exo enables rapid development of state-of-the-art matrix-matrix multiply and convolutional neural network kernels, for both an embedded neural accelerator and x86 with AVX-512 extensions, in a few dozen lines of code each.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Engagement Toolkit to Center Unhoused Stakeholders in the Design and Programming of Open Space</title>
<link href="https://hdl.handle.net/1721.1/144820" rel="alternate"/>
<author>
<name>Dávila Uzcátegui, Miguel</name>
</author>
<id>https://hdl.handle.net/1721.1/144820</id>
<updated>2022-08-30T03:54:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Engagement Toolkit to Center Unhoused Stakeholders in the Design and Programming of Open Space
Dávila Uzcátegui, Miguel
As the number of unhoused individuals grows throughout the United States, authorities from over 100 different cities have responded to some of the heartbreaking challenges of extreme poverty with criminalization. The most recent wave of criminalization has focused on limiting the use of public facilities and rights of way by unhoused individuals in response to concerns raised by housed community members and business owners. This is problematic given that the public input that city officials receive tends to overrepresent white property owners and underrepresent all other stakeholders of the built environment. This toolkit seeks to assist the City of Las Vegas and other local jurisdictions in expanding their engagement efforts regarding the design and programming of open space to include unhoused individuals and in elevating their roles as stakeholders with untransferable rights to public facilities. Using the case study of the 2018 closure of Huntridge Circle Park in Las Vegas, and in collaboration with advocates of the Nevada Homeless Alliance, this toolkit compiles history and existing survey data to help planners and other city leaders create meaningful engagement and co-develop solutions that effectively respond to the needs of all users of public space.&#13;
&#13;
Over half of unhoused individuals counted every year in Southern Nevada are experiencing houselessness for the first time that year, suggesting that their entry into the regional homeless system and the growth of the count itself should not be attributed to substance use or individual physical and mental health problems. Existing research has attributed the rising number of unhoused individuals in American cities to rising rents instead. This toolkit discusses houselessness within the broad context of housing insecurity in Las Vegas and the multiple systemic barriers that limit housing opportunity and choice for individuals who do not have the social and financial networks to overcome housing crises.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Product, Processes and Gamification to Motivate users for Positive Habit Formation</title>
<link href="https://hdl.handle.net/1721.1/144819" rel="alternate"/>
<author>
<name>Gupta, Harsh</name>
</author>
<id>https://hdl.handle.net/1721.1/144819</id>
<updated>2022-08-30T03:49:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Using Product, Processes and Gamification to Motivate users for Positive Habit Formation
Gupta, Harsh
Mental health issues have been increasing at an alarming rate, with 35 percent of the world dealing with stress daily. Stress is one of the many mental health issues that we face, and practicing mindfulness has proven to be a strong contender for dealing with these issues. Still, it has not been easy for millennials to form a habit of carrying out this preventative measure. To investigate why, we first explored drive and expectancy theories, concluding with the understanding that users take actions to solve their needs. User behavior is driven by internal and external incentives, which leads us to understand the critical role that products and processes play in guiding user motivation. A product forms the basis of the initial trigger to start a habit formation journey, leading to action, which is enhanced by rewards to foster adoption. A new habit formation model has been developed that considers user motivation, product, processes, and gamification. We propose a unique combination of a hand-held device (hardware) and an app to assist users in forming a habit of practicing mindfulness.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Extreme Event Statistics for Ship Motions and Loads Using Low-Fidelity Models and Recurrent Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/144818" rel="alternate"/>
<author>
<name>Howard, Dayne M.</name>
</author>
<id>https://hdl.handle.net/1721.1/144818</id>
<updated>2022-08-30T03:31:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Quantifying Extreme Event Statistics for Ship Motions and Loads Using Low-Fidelity Models and Recurrent Neural Networks
Howard, Dayne M.
Ship operators and designers alike use ship motion simulation software to predict ship responses in irregular ocean waves, along with the statistics of extreme events. Ship operators rely on precalculated polar plots during heavy seas to select speeds and headings that will protect the ship and crew from dangerously extreme pitch and roll motion. Ship designers use simulations over thousands of operational hours to predict the effects of vertical bending moment on the structural integrity of the ship. This thesis considers two simulation methods that fulfill these needs: the Large Amplitude Motion Program (LAMP) and SimpleCode. LAMP is higher-fidelity but computationally expensive, while SimpleCode uses a reduced-order model but is orders of magnitude faster. This thesis investigates the use of machine learning, specifically a Long Short-Term Memory (LSTM) artificial neural network, to augment SimpleCode such that the combined results are high fidelity, akin to LAMP. The LSTM proves effective in creating a map directly from the output of SimpleCode to the output of LAMP, without significant computational overhead. The LSTM’s performance over large sea state domains, including unimodal and bimodal seas, is studied. The distribution of motion peaks predicted by the LSTM over thousands of operational hours in a given sea state is shown to closely resemble that of LAMP. The time savings of the LSTM approach are quantified and found to provide a significant advantage in multiple applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Vehicle Speed with Consumer Grade Mobile LiDAR</title>
<link href="https://hdl.handle.net/1721.1/144817" rel="alternate"/>
<author>
<name>Wang, Ming</name>
</author>
<id>https://hdl.handle.net/1721.1/144817</id>
<updated>2022-08-30T03:19:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Estimating Vehicle Speed with Consumer Grade Mobile LiDAR
Wang, Ming
LiDAR (Light Detection and Ranging) is an emerging sensor technology that uses the time of flight of an emitted laser to measure the depth of surrounding objects. While historically LiDAR has been relegated to industrial and research spaces due to its prohibitive pricing and large form factor, recent developments have made it possible to include short-range LiDAR on mobile devices. It is reasonable to postulate that technological developments will enable further adoption and performance enhancements. The high accuracy and resilience of LiDAR prove critical in providing autonomous vehicles robust information on their surroundings. But what if this capability could also be used to enhance the safety of the estimated 50 million commuters using bicycles, e-bikes, and scooters - micromobility riders - sharing the road, often dangerously, with cars? We explore the feasibility of reliably and accurately determining vehicle speed using a LiDAR-enabled mobile device mounted to a bicycle. We implemented an iOS application to gather real-world driving data, created a vehicle track matching algorithm to ascertain ground truth speed, and evaluated both a heuristic and a learned approach to estimating speed from LiDAR data.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding Architectures of Sharing: Public-Housing Authority-Supported Middle-Income Limited-Equity Cooperatives</title>
<link href="https://hdl.handle.net/1721.1/144816" rel="alternate"/>
<author>
<name>Moyer, Christopher Masahiko</name>
</author>
<id>https://hdl.handle.net/1721.1/144816</id>
<updated>2022-08-30T03:27:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Expanding Architectures of Sharing: Public-Housing Authority-Supported Middle-Income Limited-Equity Cooperatives
Moyer, Christopher Masahiko
Amid soaring home prices caused by rampant speculation in high-cost cities like Cambridge, middle-income households are being squeezed harder than ever. Faced with a housing market structured around binaries between renting/owning and market-rate/affordable housing (income- and price-restricted), middle-income households are left with increasingly few options. Neither private developers nor public-sector entities currently serve their needs.&#13;
&#13;
Limited-equity cooperatives (LECs) move beyond these binaries. LECs provide a form of self-governed housing, incorporating elements of renting and owning, designed for permanent affordability with limited wealth building via economic sharing. But LECs also facilitate social and spatial sharing through practices (collective decision making, shared meals, and childcare help) enabled by architecture (open space, common kitchen, and play facilities). LECs can thus endow residents with the benefits of collective control, affordability, and social support through the combination of decommodification and architectural design. LECs have existed in the United States for over a century. Apart from a few local exceptions, however, the model has never been scaled for a middle-income clientele, due to a lack of financial and institutional support.&#13;
&#13;
Building on a literature review of US housing history and interviews with residents, policy makers, and developers, “Expanding Architectures of Sharing” argues that LECs can again serve middle-income households if institutions with the financial means and development expertise collaborate. Specifically, this thesis focuses on the healthcare industry and public housing authorities and imagines the following scenario: members of the Massachusetts Nurses Association union petition the Cambridge Health Alliance, which has been hard pressed to hire staff due to exorbitant housing prices, to offer its surface-level parking lot on Line Street for development; the Cambridge Housing Authority, focused on low-income rental housing but driven by an entrepreneurial spirit to broaden its impact, joins in; a joint venture between the two CHAs provides the financial and institutional support necessary to build a new mixed-use LEC, developed and managed by the Housing Authority. &#13;
&#13;
By embracing the interrelated tenets of economic, social and spatial sharing, the LEC provides a living environment not possible in either market-rate or traditional affordable developments. Organized around four distinct open spaces, the project combines a rental building for hospital interns and a cooperative building for an array of household sizes and incomes. A daycare, retail spaces, and below-grade parking offer public uses. The proposal reveals the untapped opportunity of institutions and housing authorities to expand architectures of sharing through middle-income LECs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>China's Community Riders: Digital Labor, Delivery Logistics and Spaces</title>
<link href="https://hdl.handle.net/1721.1/144814" rel="alternate"/>
<author>
<name>Lan, Xuan</name>
</author>
<id>https://hdl.handle.net/1721.1/144814</id>
<updated>2022-08-30T03:56:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">China's Community Riders: Digital Labor, Delivery Logistics and Spaces
Lan, Xuan
With the rise of digital platforms, delivery workers using motor scooters (hereafter referred to as “riders”) have gained new prominence by providing essential mobility in urban geographies. Throughout the COVID-19 pandemic, food delivery riders have become a fixture of China’s streets by forming an instant, all-weather, community-based logistics network. In early 2021, more than 7 million riders delivered 65% of daily essentials and served 430 million customers, about 30% of China's population. As both capital and labor flowed into this booming e-commerce industry, tech giants fiercely competed to offer the most cost-effective food delivery platforms by implementing algorithmic control of the riders. The majority of riders are young migrant workers struggling to make a living by taking delivery as their entry job in cities. Even though China is moving to protect riders from digital exploitation, conflicts are arising in both social and spatial aspects.  &#13;
&#13;
This thesis undertakes a critical investigation into China's delivery riders by analyzing their mobility services in community-based logistics and proposing spatial tactics to improve their work and well-being. By examining the delivery network and the digital exploitation behind it, the thesis reveals the mechanism of the system's gamified control over labor conditions. It reviews the plight of China’s food delivery riders, laboring under digital exploitation conditions mirroring those of workers worldwide in the new gig economy. This thesis also demonstrates that urban spaces in cities are incompatible with this booming rider group, resulting in a slew of unresolved issues such as street crowding, disorderly parking, traffic congestion, and vehicle-pedestrian collisions. By uncovering these challenges, this thesis calls for an intersectional perspective to address the riders' digital and physical conditions. It seeks to develop potential interventions in cities to reduce spatial and social inequalities. The prototype proposals shown create better-suited spaces for riders by improving their working conditions: spaces for moving, pickup/drop-off, and rest.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Optimizations for Action Recognition Applications</title>
<link href="https://hdl.handle.net/1721.1/144813" rel="alternate"/>
<author>
<name>Perez, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/144813</id>
<updated>2022-08-30T03:41:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design Optimizations for Action Recognition Applications
Perez, Brandon
Many problems within the relatively new field of action recognition make it difficult to immediately apply existing models to specific applications. My work at the MIT-IBM Watson AI Lab revolved around utilizing existing assets and optimizing performance to achieve action detection in construction-centric videos. There were several pretrained general action recognition models at our disposal, each with its own limitations. In addition to fine-tuning, other computer vision methods and processing techniques were explored for performance optimization, including background subtraction, optical flow, and frame selection algorithms. Though raw accuracy gains from adopting these modalities were marginal, other improvements such as faster training time and the potential for faster prediction time were observed. The process of building this experimental pipeline and the results obtained offered insight into what is feasible and effective with current technology in this unique problem space. This includes a proof of concept for a real-time action detection tool as well as potential modifications to optimize the tool's performance in this context.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multifidelity Covariance Estimation Three Ways</title>
<link href="https://hdl.handle.net/1721.1/144812" rel="alternate"/>
<author>
<name>Maurais, Aimee Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/144812</id>
<updated>2022-08-30T04:03:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Multifidelity Covariance Estimation Three Ways
Maurais, Aimee Elizabeth
In this thesis we develop a suite of three methods for multifidelity covariance estimation. We begin with a straightforward extension of scalar multifidelity Monte Carlo to matrices, obtaining what we refer to as the Euclidean or linear control variate multifidelity covariance estimator. The mean squared error of this estimator is available in closed form, which enables analytic optimization of sample allocations and weights to minimize expected squared Frobenius error subject to computational budget constraints. Despite its nice analytical properties and familiar closed-form construction, however, the Euclidean estimator can be subject to loss of positive-definiteness. Given this liability, we subsequently develop two multifidelity covariance estimators which preserve positive definiteness by construction, utilizing, to varying degrees, the geometry of the manifold of symmetric positive definite (SPD) matrices.&#13;
&#13;
Our first positive-definiteness-preserving estimator, referred to as the tangent space or log-linear control variate estimator, constructs a multifidelity covariance estimate by applying linear control variates to sample covariance matrix logarithms, which are symmetric matrices residing in tangent spaces to the SPD manifold. Though the tangent space estimator preserves positive-definiteness and is straightforward to construct, obtaining its expected squared error, and thus choosing optimal sample allocations and control variate weights, are not tractable. When first-order approximations of the matrix logarithms involved are made, however, the optimal sample allocations and control variate weights for the tangent space estimator are the same as those of the Euclidean estimator, and in practice the tangent space estimator has been shown to yield variance reduction in example problems.&#13;
&#13;
In a departure from the control variate formulations of the Euclidean and tangent-space estimators, our third multifidelity covariance estimator is defined as the solution to a regression problem on tangent spaces to product manifolds of SPD matrices. Given a set of high- and low-fidelity sample covariance matrices, which we view as a sample of a product-manifold-valued random variable, we estimate the underlying true covariance matrices by minimizing an intrinsic notion of squared Mahalanobis distance between the data and a model for its variation about its mean. The resulting estimates are guaranteed to be positive definite, and the Mahalanobis distance which they minimize has desirable properties, including tangent-space agnosticism and affine-invariance. Mahalanobis distance minimization can be carried out using unconstrained gradient-descent methods when a reparametrization in terms of SPD matrix square roots is employed, and we introduce a new Julia package, CovarianceRegression.jl, providing a convenient API for solving these multifidelity covariance regression problems. Using its machinery, we demonstrate that our estimator can provide significant reductions in MSE over single-fidelity covariance estimators in forward uncertainty quantification problems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decision Making for Populations</title>
<link href="https://hdl.handle.net/1721.1/144810" rel="alternate"/>
<author>
<name>Chopra, Ayush</name>
</author>
<id>https://hdl.handle.net/1721.1/144810</id>
<updated>2022-08-30T03:27:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Decision Making for Populations
Chopra, Ayush
Critical decisions for large populations have conventionally been made top-down, with million-dollar centralized satellites and surveillance tools used to sense events and generate instructions that are enforced on people. The last two years of the pandemic have shown that these methods are ineffective and invade user privacy. With smartphone devices becoming increasingly powerful and users increasingly privacy-aware, we need to rethink the future of decision making for populations - to be bottom-up. The overarching theme of this thesis is to build tools that can enable decision making in this new decentralized world of data-driven but privacy-sensitive citizens. We imagine a future where we can preempt and efficiently tackle crises by crowdsourcing the sensing of significant events with smartphones in real time and engaging people in making population decisions - turning the world into a living lab! To realize this, our technical contributions aim to realistically simulate large heterogeneous populations by privately learning from decentralized data sources. First, we introduce DeepABM, a framework for scalable, fast, and differentiable agent-based modeling. DeepABM can simulate million-size populations in a few seconds on personal computers (up to 300x faster than similar prior art), enable end-to-end gradient-based optimization, and learn from heterogeneous data sources. We validate the design through our epidemiological simulator (DeepABM-Epi), which is used to make policy recommendations for resource-constrained vaccine prioritization and enable in-silico forecasting of infection dynamics by merging simulators with deep neural networks. Second, we introduce Adaptive Split Learning (AdaSplit), a mechanism for distributed &amp; privacy-aware machine learning in low-resource setups. AdaSplit achieves state-of-the-art performance on several distributed machine learning benchmarks and can jointly learn from devices with variable resource budgets.
Finally, we evaluate our research through peer-reviewed publications and interdisciplinary collaborations that take a step toward translating this work into real-world impact.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Opportunities to Improve Service Member Access to Non-Clinical Mental Health Counseling</title>
<link href="https://hdl.handle.net/1721.1/144808" rel="alternate"/>
<author>
<name>Lueders, Jacob T.</name>
</author>
<id>https://hdl.handle.net/1721.1/144808</id>
<updated>2022-08-30T03:21:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigating Opportunities to Improve Service Member Access to Non-Clinical Mental Health Counseling
Lueders, Jacob T.
Service Members in the United States military have several options to choose from for non-clinical mental health counseling.  This research investigates the Military and Family Life Counselor (MFLC) program from the perspective of accessibility, with the aim of determining what works well and where there may be opportunities for improvement through architectural changes in a future architecture.  The research scope limits the investigation to the Continental United States (CONUS) and East of the Mississippi (EOM) area of operations, where the MFLC program is provided by a single contractor. &#13;
&#13;
This research develops its foundation through a literature review and introduces the three methodologies used throughout the thesis: the Architecting Innovative Enterprise Strategy (ARIES) Framework, System Dynamics, and Systems Architecture with a focus on stakeholder analysis.  The research approach in this thesis focuses on understanding the existing MFLC enterprise, beginning with the internal and external influences that can promote or hinder change.  The stakeholders key to the enterprise are then analyzed to understand their needs, and these needs are subsequently used to determine how well the current architecture supports meeting them. &#13;
&#13;
The research then shifts to envisioning aspects of potential future architectures and how those choices could utilize feedback loops to further improve accessibility.  The thesis concludes with key findings on the existing MFLC enterprise, highlighting its current strengths of flexibility and adaptability. Recommendations are made for pilot programs and for future work to continue the application of the ARIES process.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated design of third generation concentrated solar power plants under uncertainty</title>
<link href="https://hdl.handle.net/1721.1/144807" rel="alternate"/>
<author>
<name>Rajasekaran, Karthik</name>
</author>
<id>https://hdl.handle.net/1721.1/144807</id>
<updated>2022-08-30T03:41:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Integrated design of third generation concentrated solar power plants under uncertainty
Rajasekaran, Karthik
This research focuses on the Gen3 (3rd generation) solar tower CSP (concentrated solar power) variant. A methodology is introduced to evaluate two approaches to deploying this technology: the conventional "build large" approach and a "build modular" approach. Performance and cost models of the two approaches are built and validated against industry data, and the approaches are then compared across three locations (Daggett CA, New Orleans LA, and Boston MA) and three capacity factors (20%, 30%, and 40%). For these nine cases, the comparison between the two approaches is first done with deterministic inputs and then with stochastic inputs for selected variables.&#13;
&#13;
The results show that when the "build large" approach is compared against the "build modular" approach using deterministic inputs, the "build large" approach is favored, with an NPV that is 5%-15% higher than that of the "build modular" approach for most of the nine cases; this aligns with the current industry belief that the "build large" approach is better due to economies of scale. However, when the same approaches are compared using stochastic inputs, the "build modular" approach is preferred over the "build large" approach. The ENPVs for the "build modular" approach are 20% higher than those of the "build large" approach while requiring 50% less initial capital. This reversal is driven primarily by the flexibility and the learning rate inherent to the "build modular" approach. By employing a "build modular" approach for this technology, a firm entering the CSP market could gain a competitive advantage over other firms in the CSP and renewable energy markets.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing an Educational Mindfulness Experience for Future Leaders</title>
<link href="https://hdl.handle.net/1721.1/144806" rel="alternate"/>
<author>
<name>Harris, Allison M.</name>
</author>
<id>https://hdl.handle.net/1721.1/144806</id>
<updated>2022-08-30T03:06:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Designing an Educational Mindfulness Experience for Future Leaders
Harris, Allison M.
Living in a world filled with mindful leaders is the vision of the Mindfulness &amp; Leadership Club at MIT Sloan School of Management. Yet the path to developing mindful leaders within an academic setting is unclear and ill-defined. This study uses human-centered design research to develop principles for designing a graduate student educational mindfulness experience. The goal of the experience is to enable graduate students to build a sustainable mindfulness practice and learn how mindfulness applies to leadership. The primary research question explored is: how might we use design research to create a mindfulness experience for students that builds a lasting habit and develops future mindful leaders? Secondary research provides insight into both the benefits of mindfulness and mindful leadership and existing methods for teaching mindfulness. Findings illustrate the value of mindfulness and mindful leadership, but existing programs neither connect mindfulness to leadership nor focus on graduate students in an academic setting. Primary research, consisting of one survey (n = 52) and 34 interviews, validates the need for mindfulness education within the MIT Sloan graduate student community and informs a refined definition of mindfulness and mindful leadership. Barriers hindering students from developing a habit of mindfulness are identified, with recommendations for addressing those barriers. The research findings result in eight design principles that serve to guide the development of a mindful leadership program for graduate students at MIT Sloan, which can be adapted to meet the needs of different graduate school programs. Future efforts can build on this work by co-creating an educational experience for prototyping and testing. To supplement this work, additional research should be conducted into existing graduate student mindfulness programs, habit building, and adult learning processes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating a Cross-Disciplinary Understanding of Legacy Stories – What Does It Mean to Share a Legacy and What Do Storytellers Need?</title>
<link href="https://hdl.handle.net/1721.1/144805" rel="alternate"/>
<author>
<name>Shafer, Jennifer Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/144805</id>
<updated>2022-08-30T03:43:42Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Creating a Cross-Disciplinary Understanding of Legacy Stories – What Does It Mean to Share a Legacy and What Do Storytellers Need?
Shafer, Jennifer Elizabeth
As people age, there is an increasing need to make meaning, reminisce, and pass on a legacy. While there are countless resources available to aid in the consolidation and transmission of the tangible, measurable legacy (e.g., through wills and estate planning), there are fewer resources to support the development of a cohesive emotional, personal legacy. Studies across healthcare, psychology, and sociology have demonstrated how reminiscence and life review can be effective tools in enhancing wellbeing as people age or approach end of life, but results are generally categorized by interventions and outputs rather than by the specific needs of the individual.&#13;
&#13;
In this study, a combination of survey, workshop, and interview techniques was used to better understand legacy storytelling. First, we focused on the perspective of older adults as they shared stories, advice, uncertainties, and formative experiences, resulting in 35 fundamental needs for storytellers. Next, we considered how experts in aging and intergenerational storytelling have developed programs that foster meaningful conversation. These interviews yielded 8 key design principles to empower loved ones and broader communities to kindle empathetic, inclusive, and productive dialogue. These principles seek to inform both the process of the storytelling experience and the emotional journey of the participants. The foundation of legacy storytelling is the acknowledgement that our time is limited, and this work seeks to make this challenging topic more accessible.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Afterlife of Wells, from oil to soil in the Amazonia</title>
<link href="https://hdl.handle.net/1721.1/144803" rel="alternate"/>
<author>
<name>Degetau Zanders, Gabriela</name>
</author>
<id>https://hdl.handle.net/1721.1/144803</id>
<updated>2022-08-30T04:07:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Afterlife of Wells, from oil to soil in the Amazonia
Degetau Zanders, Gabriela
Fifty years ago, the Ecuadorian State celebrated the beginning of a new economic era for the country as the first barrel of oil was extracted from the Amazon. Nevertheless, resource extraction has not resulted in sustainable economic prosperity, nor has it reduced the inequality affecting local and indigenous communities. &#13;
&#13;
Abandoned oil wells are a significant source of air, soil, and groundwater pollution. They continue to leak substances such as arsenic and methane even after they are no longer operational. It is essential to take action and responsibility for the affected people, nature, and territory. Eventually, all oil wells will go dry, and their economic value will be gone. Nevertheless, the territory, the people, and the effects remain. After fifty years of extraction, the end of the hydrocarbon era has begun.&#13;
&#13;
The thesis investigates the afterlife of abandoned wells in the Ecuadorian Amazon by understanding the territory and the socio-environmental effects oil has brought to the country. It focuses on the Sucumbíos province, which condenses many of the pressures and disputes that the Ecuadorian Amazon is facing today. Through case studies, policy, and urban design as a methodology, the thesis aims to produce a pilot project that creates a range of spatial scenarios and conceptualizes alternative futures. It provides tools for remediation strategies working on a transitional timeline and at different degrees of intervention. Together they demonstrate the spatial possibilities of the after-oil condition. &#13;
&#13;
The scope of the research is not to find a solution, but to offer an initial step toward thinking about a territorial approach to a post-oil scenario and the afterlife of the well. Only through substantial dialogue will it be possible to transform the afterlife of the well into feasible strategies that advocate for territorial reparation, coexistence, and ecological conservation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Optimal Ultrasound Transducer Simulator</title>
<link href="https://hdl.handle.net/1721.1/144795" rel="alternate"/>
<author>
<name>Wojcik, Jan</name>
</author>
<id>https://hdl.handle.net/1721.1/144795</id>
<updated>2022-08-30T04:01:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automated Optimal Ultrasound Transducer Simulator
Wojcik, Jan
Over the last couple of years, the demand for highly accurate, wearable data-capturing devices has been increasing. However, the prototyping process for ultrasound devices can be incredibly costly. To save time, many researchers run simulations to analyze the performance of their prototypes and guide their manufacturing decisions. Unfortunately, many of these simulations need to be judged by a human, and finding the best possible set of transducer specifications becomes increasingly difficult as the number of simulations grows. Currently there is no automated system for evaluating the performance of a simulated ultrasound array. This thesis proposes an automated system for generating ultrasound simulation images and extracting features, as well as a method for determining the best possible transducer configuration for the parameter being optimized. The results of an object-oriented ultrasound simulation system and feature functions are demonstrated. The end goal is to use the metric to more efficiently guide the fabrication decisions of the laboratory on current and future ultrasound projects.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calibration of Interaction Potentials for Molecular Dynamics inspired Simulations of Structures: the Role of Dihedral Interactions.</title>
<link href="https://hdl.handle.net/1721.1/144793" rel="alternate"/>
<author>
<name>Vartziotis, Tina Nepheli</name>
</author>
<id>https://hdl.handle.net/1721.1/144793</id>
<updated>2022-08-30T03:49:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Calibration of Interaction Potentials for Molecular Dynamics inspired Simulations of Structures: the Role of Dihedral Interactions.
Vartziotis, Tina Nepheli
The modeling of the resilience of structures has proven crucial for their design, serving as a safeguard against natural hazards such as earthquakes and wind damage. Traditional approaches to resilience modeling, such as the finite element method, have become dominant for numerically modeling damage caused by natural hazards. However, numerical instabilities and implementation inefficiencies make it difficult to model destructive phenomena accurately using traditional approaches. To provide a more efficient and accurate framework for modeling buildings’ resilience, a method based on molecular dynamics is proposed and put into the context of structural engineering.&#13;
&#13;
Based on two meshing types, the fcc and the dihedral elements, the proposed molecular dynamics models are calibrated to provide reliable in-plane and out-of-plane stiffness results. In the present thesis, both an analytical and a numerical calibration are provided that produce eigenspectra similar to those of the finite element method. The calibrated molecular dynamics models are then tested in simulations of real-world hazard examples. Through an automated process of acquiring geospatial information from open-source data and applying the modeling approaches, stiffness matrices of the buildings are produced.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Renovation of East Campus: Control and Culture</title>
<link href="https://hdl.handle.net/1721.1/144791" rel="alternate"/>
<author>
<name>Ebdy, Hugh T.</name>
</author>
<id>https://hdl.handle.net/1721.1/144791</id>
<updated>2022-08-30T03:20:33Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Renovation of East Campus: Control and Culture
Ebdy, Hugh T.
This thesis looks at the tension between university administrators’ goals for their capital projects and the goals of end users, their students. These goals often diverge, given that universities must make decades-long financial decisions, while students’ experiences can be seen as more fleeting. This thesis investigates this tension and what it means for planning processes and architectural design.&#13;
&#13;
The research and analysis center on East Campus, the second-oldest dormitory at MIT, which opened in 1924. East Campus houses an active student culture based on self-governance, individualism, and privacy, and as the birthplace of hacking it is strongly tied to the wider public identity of MIT and how MIT is promoted to new students. As part of the MIT 2030 capital projects plan, East Campus was marked for renovation to bring it in line with contemporary living and accessibility standards, with construction originally slated to begin in the summer of 2022. However, given the differences in approach to undergraduate life between MIT administrators and users, the author believes the renovation may spell the end of East Campus’ unique student culture.&#13;
&#13;
The author graphically and textually documents the early strategic and design stages of renovation, drawing on his experience as a resident advisor, discussions with students, staff, and consultants, and a seat on the renovation’s student/staff committee. The analysis of MIT’s functioning at the institutional level, its user engagement, as well as its conception of residential buildings reveals how certain processes may have negatively impacted the renovation’s potential.&#13;
&#13;
The author argues that a more ambitious design-led tone should be set before strategic options are agreed upon. He tests a set of interactive design games with users at the room, hall, and dormitory scales to gain a deeper understanding of how the East Campus community navigates space. The author translates these findings into an architectural proposal that emphasizes robustness as both a driver of sustainability and enabler of cultural communication. The thesis intends to re-center design in future MIT-led residential projects, which must balance user input, culture, budgetary demands, and donors.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Coding Exercises for Language Concepts by Searching, Simplifying, and Annotating Existing Code</title>
<link href="https://hdl.handle.net/1721.1/144790" rel="alternate"/>
<author>
<name>Johanna, Stacia Edina</name>
</author>
<id>https://hdl.handle.net/1721.1/144790</id>
<updated>2022-08-30T03:16:46Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Generating Coding Exercises for Language Concepts by Searching, Simplifying, and Annotating Existing Code
Johanna, Stacia Edina
Learning a new programming language is best done through coding exercises. However, manually creating coding exercises is time-consuming because there are many language syntax elements and concepts to cover. PraxisGen aims to alleviate the burden of problem creation by providing an interface to search, simplify, and annotate publicly available code for use as exercises. Using the system, users can more efficiently create code examples for language concepts that represent real-life programs, because the system automates parts of the process. The technical evaluation demonstrates PraxisGen’s ability to find a wide variety of language concepts, while the user study demonstrates that the system simplifies the process of creating high-quality exercise problems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Visualization and Anomaly Detection in International Timber Trade Flows</title>
<link href="https://hdl.handle.net/1721.1/144789" rel="alternate"/>
<author>
<name>Gopal, Charvi</name>
</author>
<id>https://hdl.handle.net/1721.1/144789</id>
<updated>2022-08-30T03:47:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Network Visualization and Anomaly Detection in International Timber Trade Flows
Gopal, Charvi
Our planet’s forests, which are vital for sustainable development, are threatened by deforestation around the world. The problem of missing timber flows poses a significant barrier to preserving our remaining forest reserves, maintaining biodiversity in these ecosystems, and promoting sustainable and fair trade of timber. In this thesis, we utilize (hitherto underutilized) international trade data and historical rates of tree cover loss in major tropical countries to compare trends across datasets in the global timber supply chain and identify anomalies in trade patterns that can be attributed to technical and non-technical errors (that are often indistinguishable).&#13;
&#13;
We first focus on the primary trade partners of our case study countries, Brazil and Indonesia, across three timber trade datasets. We then analyze discrepancies between reported exports and imports for trade partners and compare them with illegality estimates for tropical timber trade. We find a high correlation for the early 2000s between the discrepancies in the FAOSTAT Forestry Trade Flows timber trade dataset and illegality estimates from Chatham House. We further evaluate the similarities over time between timber trade discrepancies and illegality estimates using a map-based network visualization prototype we designed and built. Our work suggests the need to synthesize accurate trade data on global timber flows for regulating illegal activities at source sites and in the supply chain of timber products.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Molecular Changes in Human Neuropsychiatric Disorders to Zebrafish Behavioral Profiles</title>
<link href="https://hdl.handle.net/1721.1/144788" rel="alternate"/>
<author>
<name>Stein, Daniel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144788</id>
<updated>2022-08-30T03:33:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Mapping Molecular Changes in Human Neuropsychiatric Disorders to Zebrafish Behavioral Profiles
Stein, Daniel J.
Despite decades of work, understanding the etiology of human psychiatric disorders has remained elusive. Recent advances in molecular profiling have allowed researchers to probe genetic and transcriptomic differences in post-mortem samples from individuals with healthy and diseased brains, which has provided some hints into the underlying biological dysregulation in these disorders. However, linking these molecular changes to alterations in behavior and developing therapies to ameliorate the effects of disease will require animal models, including zebrafish, which are unique as vertebrate models with complex behavioral traits that also allow for high-throughput perturbations. How to bridge the gap between molecular profiling data from omics experiments in human post-mortem samples and behavioral data from high-throughput drug screens in zebrafish remains an outstanding challenge. Here, we develop a computational method for cross-species translation of these disparate kinds of data, in which we construct a shared latent space of the human and zebrafish data for downstream multivariate analysis. Applying this method to a microarray dataset profiling transcriptional changes in the prefrontal cortex in human schizophrenia, we identify gene modules with coordinated effects on zebrafish behavior that are discriminative of schizophrenia in humans. In particular, we identify zebrafish gene modules involved in amino acid, neurotransmitter, and cation transport that are also dysregulated in human schizophrenia. These results suggest that such computational models for cross-species translation are promising tools for integrating molecular data from human post-mortem samples and behavioral drug screens in zebrafish.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Arterial System to Improve Ultrasound Measurements of Hemodynamic Parameters</title>
<link href="https://hdl.handle.net/1721.1/144787" rel="alternate"/>
<author>
<name>Harabedian, Jeanne</name>
</author>
<id>https://hdl.handle.net/1721.1/144787</id>
<updated>2022-08-30T03:58:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Modeling the Arterial System to Improve Ultrasound Measurements of Hemodynamic Parameters
Harabedian, Jeanne
One of the most crucial parameters for monitoring cardiovascular disease (CVD) risk is one’s arterial blood pressure (ABP). Clinicians use a radial arterial catheter to measure ABP in an intensive care unit (ICU). Although this method is considered the gold standard, its invasive nature makes it undesirable and inaccessible outside an ICU. One solution to this problem is to take advantage of ultrasonic measurements, which are noninvasive and highly accessible. However, developing an algorithm to convert ultrasound data into a legitimate ABP waveform requires an extensive amount of patient data. The limitation is that this data is difficult to obtain and impossible to fully control.&#13;
&#13;
The solution presented here is to use a flow phantom: a physical, hydraulic system that mimics arterial blood flow. The phantom provides pressure waveforms, which come directly from a catheterized tube, and volumetric flow waveforms, from an ultrasonic flow meter, that closely match the morphology of patient data. Developing a physical model of the arterial system allows for control over an expanded range of parameter relationships for experimentation.&#13;
&#13;
To help understand the behavior and results of the flow phantom, a hydraulic fluids simulation and a circuit simulation were also developed. The combination of data from all models enables an increased understanding of parameter relationships and an intuitive understanding of the behaviors of the flow phantom. This data is used to inform the development of the ABP estimation algorithm from blood flow velocity and arterial distension, as well as to validate the algorithm’s outputs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Data Coops for Women’s Health</title>
<link href="https://hdl.handle.net/1721.1/144786" rel="alternate"/>
<author>
<name>Tyshchenko, Ekaterina</name>
</author>
<id>https://hdl.handle.net/1721.1/144786</id>
<updated>2022-08-30T03:00:46Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Designing Data Coops for Women’s Health
Tyshchenko, Ekaterina
The existing model of data ownership (so-called Web 2.0) does not benefit individuals and communities, instead providing value to a handful of very large and powerful businesses. Data coops represent a means to shift this paradigm and empower individuals to take greater control of their data and extract value from it. However, approaching coops from a purely financial perspective does not seem satisfactory, and monetary rewards alone do not seem sufficient to encourage users to join these institutions. In this paper, we argue that instead of being driven solely by commercial ambitions and monetary rewards, those who attempt to create coops should place the community and its needs at the heart of the coop. They must consider what values and benefits coops can bring to their members, what knowledge and insights can be derived from their data, and how these will spur the growth and development of that particular community. In our case, we chose women as a community whose needs can be addressed by designing a data coop. We started by consulting potential members and conducting interviews to better understand this community and its challenges. We identified multiple needs, such as libraries of symptoms and side effects. We tried to address these needs by designing a data coop that would incorporate the right data requirements and user experience, and provide analytics and insights back to the community members as well as to data buyers.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Needles in the Quantum Haystack: CMS Anomaly Detection with Normalizing Flows</title>
<link href="https://hdl.handle.net/1721.1/144785" rel="alternate"/>
<author>
<name>Yunus, Mikaeel</name>
</author>
<id>https://hdl.handle.net/1721.1/144785</id>
<updated>2022-08-30T03:21:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Needles in the Quantum Haystack: CMS Anomaly Detection with Normalizing Flows
Yunus, Mikaeel
Recent experimental searches for particles beyond the Standard Model (BSM) have yielded little in the realm of new physics discoveries. A number of research efforts have adopted new anomaly detection strategies which utilize density estimation algorithms based on unsupervised and semi-supervised machine learning. However, these efforts rely exclusively on QCD background priors, and thus drastically limit their own anomaly detection capabilities.&#13;
&#13;
In this thesis, we integrate an unsupervised density estimation algorithm, neural spline normalizing flows, into an anomaly detection strategy called Quasi-Anomalous Knowledge (QUAK), which allows us to take advantage of signal priors in addition to QCD background priors. The introduction of a signal prior allows us to learn the features of a particular type of BSM dijet event, giving us insight into the underlying variable distributions of hidden signals in CMS data. Through several studies on both Monte Carlo samples and 13 TeV data from CMS, we demonstrate that QUAK with normalizing flows (QUAK-NF) can be a powerful tool for conducting searches for BSM physics.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nexion: Enabling Concurrency on Architectures for Ordered Parallelism</title>
<link href="https://hdl.handle.net/1721.1/144784" rel="alternate"/>
<author>
<name>Durfee, Robert Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/144784</id>
<updated>2022-08-30T03:28:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Nexion: Enabling Concurrency on Architectures for Ordered Parallelism
Durfee, Robert Benjamin
Achieving high performance on modern systems with many cores requires highly parallel applications. Conventional parallel systems require structuring applications into activities that are concurrent, i.e., that may interleave arbitrarily. Concurrency makes it easy for hardware to run these tasks in parallel. However, for most applications, concurrency is challenging to reason about and incurs costly synchronization overheads. To address this problem, recent work has proposed architectures that exploit ordered parallelism. These systems enforce a fixed, programmer-specified order among tasks, and execute tasks speculatively to extract parallelism. Ordered semantics enable parallelism without concurrency, avoiding its complexity, and are a natural fit for many applications. However, concurrency is also a good fit for many other applications, for which establishing an order is unnatural and unnecessarily limits parallelism.&#13;
&#13;
We present Nexion, an execution model that supports concurrency alongside ordered parallelism. Programmers split applications into short tasks that can be given timestamps to specify order constraints. Groups of tasks can be marked as concurrent and will execute independently if no data is shared among them. If data is shared, Nexion ensures that tasks remain atomic and respect applicable timestamp orders. We extend Swarm, an architecture for ordered parallelism, with minimal additional state to implement Nexion. The implementation is distributed and only involves communication between tasks that share data. On evaluated benchmarks, Nexion improves overall scalability by up to 32× over software-only solutions and up to 2.4× over the Swarm baseline architecture.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Entwine VR: A Toolkit for Creating Behavioral Experiments that Utilize Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/144783" rel="alternate"/>
<author>
<name>Alemu, Yodahe</name>
</author>
<id>https://hdl.handle.net/1721.1/144783</id>
<updated>2022-08-30T03:25:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Entwine VR: A Toolkit for Creating Behavioral Experiments that Utilize Virtual Reality
Alemu, Yodahe
As VR becomes more and more mainstream in both the entertainment and enterprise industries, interest in VR-focused behavioral research is continuously growing. VR offers a powerful platform that allows behavioral researchers to experiment with new environments in a way that is safe, accessible, and replicable. Although there are many tools that help researchers design and build their experiments, no existing toolkit or space allows researchers to simultaneously collaborate on and replicate behavioral VR experiments. We introduce the Entwine project, our proposed solution, which provides both tools and fully featured experiments to researchers, allowing for quick development and collaboration in the behavioral VR research space.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Modeling of the Navy Integrated Power and Energy Corridor Cooling System</title>
<link href="https://hdl.handle.net/1721.1/144777" rel="alternate"/>
<author>
<name>Reyes, Ivan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144777</id>
<updated>2022-08-30T03:02:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design and Modeling of the Navy Integrated Power and Energy Corridor Cooling System
Reyes, Ivan A.
As part of an ongoing U.S. Navy research consortium for next-generation warships, the Design Laboratory of the MIT Sea Grant Program is developing the Navy Integrated Power and Energy Corridor (NiPEC) to underpin the vessel’s power distribution system. The corridor comprises several modular compartments capable of operating independently or as part of a network to execute energy storage, conversion, protection, control, isolation, and transfer functions. The power conversion process is carried out by the corridor’s integrated Power Electronics Building Block (iPEBB) based architecture. The iPEBB is a comprehensive and self-contained converter configured to provide power-dense solutions to the ship’s stochastic and dynamic loads. A key challenge with the iPEBB’s advanced semiconductor technology is its thermal management, constrained by the provision of indirect liquid cooling methods and the objective of a sailor-centric design.&#13;
&#13;
This thesis used numerical analysis and modeling to design an indirect liquid-cooling system aboard U.S. Navy Surface Vessels. Guided by Department of Defense and industry requirements, a new cooling paradigm was developed, promoting human- and intra-system operations, a comprehensive component design, and a robust cooling system architecture within the NiPEC compartment footprint. Documented are the initial investigation, equipment analysis, concept selection, and proof-of-concept testing that set the foundation for future prototyping and NiPEC cooling system development.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation into the Design of High-Power Plug-In Shipboard Electrical Connectors</title>
<link href="https://hdl.handle.net/1721.1/144776" rel="alternate"/>
<author>
<name>Oberst, Scott D.</name>
</author>
<id>https://hdl.handle.net/1721.1/144776</id>
<updated>2022-08-30T03:35:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigation into the Design of High-Power Plug-In Shipboard Electrical Connectors
Oberst, Scott D.
High-power electrical connections are an essential component of all electric power systems. Such connections are important to the Navy as it increases the use of electric energy in ships. High power involves both high current and high voltage simultaneously, and hence connections require careful design with respect to both properties. While classical connections are typically bolted or welded, a plug-in type connection would greatly reduce installation time and enable more rapid reconfigurations or adjustments as loads are added or changed. This thesis presents the constraints surrounding electrical contacts and insulation requirements toward the development of a high-power plug-in type connector for Navy applications. State-of-the-art plug-in contact technology and mechanisms are identified. A comparison and selection process for dissimilarly rated electrical contacts is proposed through the development of figures of merit. Insulation requirements, especially those surrounding creepage distance, are presented for high-power contacts across a range of voltages. Additional Navy-specific insulation requirements are identified and related to their impact on a high-power connector. Constraints on both electrical contact and insulation requirements are considered and then applied to a 0.4 MW (1 kV, 400 A) connector concept design, which illustrates the feasibility of developing a new Navy high-power connector. The concept design was fabricated using 3D printing to verify that mechanical insertion force constraints were satisfied.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Data-Driven Digital Twin Model for Lubricant Oil Transport and Oil Consumption Study in Internal Combustion Engines</title>
<link href="https://hdl.handle.net/1721.1/144775" rel="alternate"/>
<author>
<name>Zhong, Xinlin</name>
</author>
<id>https://hdl.handle.net/1721.1/144775</id>
<updated>2022-08-30T03:03:08Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Developing a Data-Driven Digital Twin Model for Lubricant Oil Transport and Oil Consumption Study in Internal Combustion Engines
Zhong, Xinlin
Nowadays, internal combustion engines face significant challenges stemming from stricter emission regulations and from the introduction of propulsion systems that use alternative energies. Given their wide adoption and apparent irreplaceability in the short term, especially in the heavy-duty sector, further efforts are required to reduce engine emissions. An important step toward this ultimate goal is to deepen the understanding of the lubricant oil transport process in the piston ring pack system, given that oil consumption is one of the major sources of engine emissions.&#13;
&#13;
This thesis aims to develop a baseline Digital Twin model for the piston ring pack system to study oil transport and oil consumption behaviors in an internal combustion engine, operating under the "healthy system" assumption to establish a lower-bound value for the properties of interest. The model uniquely combines a machine-learning-based gas flow model, which resolves the complex gas flow patterns within the piston ring pack, with a numerical solver that models the long-term oil-gas interaction and subsequently determines the oil consumption rates. An extensive parametric study is performed, the results of which demonstrate the effectiveness of the model across a wide range of engine boundary conditions. The model can be used as a tool to study how different engine designs and operating conditions influence oil transport and oil consumption, providing a convenient and cost-effective alternative to expensive engine tests. The output of the Digital Twin can be considered the ultimate goal that future engine designs should strive to achieve.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing the Usability of Natural Language Processing for&#13;
Detecting Disinformation Tactics, Techniques, and Procedures</title>
<link href="https://hdl.handle.net/1721.1/144774" rel="alternate"/>
<author>
<name>Landwehr, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/144774</id>
<updated>2022-08-30T03:49:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analyzing the Usability of Natural Language Processing for&#13;
Detecting Disinformation Tactics, Techniques, and Procedures
Landwehr, Helen
The purposeful manipulation of information for political gain by powerful state actors is a threat to security and democracy that is challenging to address without infringing upon freedom of expression. By qualitatively analyzing the evolution of Russian media manipulation in the early 21st century, we see that the threat is not a product of, but is exacerbated by, technology such as social media, which increases the speed and reach of malicious information. State strategies for information manipulation co-evolve with internet and communication technology to take advantage of the new platform affordances of social media. We analyze the history of international disinformation policy in the European Union and find that policies fail because they attempt to regulate based on the effect of information manipulation rather than developing tractable definitions and characterizations of illicit information manipulation. As such, this thesis proposes that the persistence of this threat to information security is not primarily a result of technological advancements but rather a failure of policy to adequately define information manipulation. We also build a prototype machine-learning-enabled pipeline to investigate the capabilities and limits of using software techniques to characterize disinformation in a standardized manner. This pipeline offers the speed and consistency needed to process large volumes of disinformation texts. Results indicate that even a prototype pipeline can detect important characteristics of disinformation. Standardized characterizations of disinformation generated by pipelines such as this prototype could then potentially be used to build legal precedents, supporting a quilt-work policy approach. A technology-enabled policy solution is thus a potentially feasible and effective path forward to prevent and combat state-sponsored information manipulation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Manufacturing of Electrokinetic Preconcentration Systems</title>
<link href="https://hdl.handle.net/1721.1/144773" rel="alternate"/>
<author>
<name>Wynne, Eric Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/144773</id>
<updated>2026-01-14T14:50:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Low-Cost Manufacturing of Electrokinetic Preconcentration Systems
Wynne, Eric Michael
Detection of low-abundance biomolecules is a critical challenge in improving the safety and efficacy of pharmaceuticals. The methods and technologies for detecting DNA, RNA, and proteins have increased in sensitivity such that even single copies can be detected under certain conditions. However, due to diffusion-limited transport of the biomolecules to sensors, these emerging biosensing technologies are only able to process a very small fraction of the available sample volume. Therefore, there is a need for technology that can reduce the sampling error by concentrating the relevant molecules into a volume compatible with the sensing technology.&#13;
&#13;
Here we present a low-cost, manufacturable implementation of a sample preconcentration device that uses the ion concentration polarization phenomenon to filter charged biomolecules from a solution. Compared to previous electrokinetic concentrators, our device prioritizes manufacturability and sample throughput so that there is a simple path to deploying the device in biological labs and, eventually, in field-based detection. We demonstrate the stabilizing effect of microscale fluidic features in electrokinetic concentrators by characterizing device performance with various feature dimensions. We also characterize the preconcentration performance as a function of the applied voltage and the pressure-driven flow velocity. Finally, we explore alternate device designs made possible by our low-cost manufacturing methods and materials, to provide a guide for future improvements. Ultimately, our work demonstrates a novel fabrication method that could be generalized to other devices and bring electrokinetic concentrators closer to broader adoption outside of the microfluidics research community.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects</title>
<link href="https://hdl.handle.net/1721.1/144772" rel="alternate"/>
<author>
<name>Simeonov, Anthony</name>
</author>
<id>https://hdl.handle.net/1721.1/144772</id>
<updated>2022-08-30T04:04:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects
Simeonov, Anthony
We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation, i.e., without prior object models. Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics by relying on a set of generalizable manipulation primitives. We show that for rigid bodies, this abstraction can be realized using low-level manipulation skills that maintain sticking contact with the object, representing subgoals as 3D transformations. To enable generalization to unseen objects and improve planning performance, we propose a novel way of representing subgoals for rigid-body manipulation and a graph-attention-based neural network architecture for processing point-cloud inputs. We experimentally validate these choices using simulated and real-world experiments on the YuMi robot. Results demonstrate that our method can successfully manipulate new objects into target configurations requiring long-term planning. Overall, our framework combines the best of both worlds: task-and-motion planning (TAMP) and learning-based approaches.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Systems and Algorithms for Deep Learning on Point Clouds</title>
<link href="https://hdl.handle.net/1721.1/144771" rel="alternate"/>
<author>
<name>Tang, Haotian</name>
</author>
<id>https://hdl.handle.net/1721.1/144771</id>
<updated>2022-08-30T03:11:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Efficient Systems and Algorithms for Deep Learning on Point Clouds
Tang, Haotian
Deep learning on point clouds has received increased attention thanks to its wide applications in AR/VR and autonomous driving. These applications require low latency and high accuracy to provide a real-time user experience and ensure user safety. Unlike conventional dense workloads, the sparse and irregular nature of point clouds poses severe challenges to running sparse CNNs efficiently on general-purpose hardware. Furthermore, existing sparse acceleration techniques for 2D images do not translate to 3D point clouds due to poor system support. Therefore, in this thesis, we tackle the challenging problem of accelerating deep learning on point clouds via system-algorithm co-design.&#13;
&#13;
We first introduce TorchSparse, a high-performance point cloud inference engine that accelerates the sparse convolution computation on GPUs. TorchSparse directly optimizes the two bottlenecks of sparse convolution: irregular computation and data movement. It applies adaptive matrix multiplication grouping to trade computation for better regularity, achieving 1.4-1.5× speedup for matrix multiplication. It also optimizes the data movement by adopting vectorized, quantized and fused locality-aware memory access, reducing the memory movement cost by 2.7×. Evaluated on seven representative models across three benchmark datasets, TorchSparse achieves 1.6× and 1.5× measured end-to-end speedup over the state-of-the-art MinkowskiEngine and SpConv, respectively.&#13;
&#13;
We further notice that the dominant module in state-of-the-art point cloud networks, Sparse Convolution, falls short in accurately modeling small objects in large-scale outdoor scenes. As such, we further propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips the vanilla Sparse Convolution with a high-resolution point-based branch. With negligible overhead, this point-based branch is able to preserve fine details even in large outdoor scenes. To explore the spectrum of efficient 3D models, we first define a flexible architecture design space based on SPVConv, and we then present 3D Neural Architecture Search (3D-NAS) to search for the optimal network architecture over this diverse design space efficiently and effectively. Experimental results validate that the resulting SPVNAS model is fast and accurate: it outperforms the state-of-the-art MinkowskiNet by 3.3%, ranking 1st on the competitive SemanticKITTI leaderboard upon publication. It also achieves 8× computation reduction and 3× measured speedup over MinkowskiNet while still attaining higher accuracy. SPVNAS is also the 1st-place winner at the semantic segmentation challenge of the 6th AI Driving Olympics and the 2nd-place holder at the nuScenes panoptic segmentation challenge in 2021.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Keyframe Learning (AKL): Learning Interaction and Constraint Keyframes from a Single Demonstration of a Task</title>
<link href="https://hdl.handle.net/1721.1/144770" rel="alternate"/>
<author>
<name>Illandara, Thavishi</name>
</author>
<id>https://hdl.handle.net/1721.1/144770</id>
<updated>2022-08-30T03:37:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Active Keyframe Learning (AKL): Learning Interaction and Constraint Keyframes from a Single Demonstration of a Task
Illandara, Thavishi
Although recent advances in robotics enable the automation of manual tasks in manufacturing, integrating robots into a factory remains time- and resource-intensive, as it requires conventional robot programming and robot experts. In order to increase the feasibility of robot integration into industrial processes, the programming of robots must be easily accessible to domain experts with little to no experience in robotics. In this thesis, we present Active Keyframe Learning (AKL) for learning the task specification as an ordered sequence of keyframes to capture the physical interactions and geometric constraints from a single demonstration of a task given by a nonexpert. We learn the least restrictive task specification that maximizes the flexibility given to a motion planner by learning the human intent for demonstrated constrained motion online and performing interaction-based and constraint-based segmentation offline. We conduct a user study to evaluate the keyframe, pose, and constraint accuracies, workload, and teaching efficiency of AKL against two state-of-the-art techniques in keyframe and constraint learning and demonstrate the significant benefits of utilizing AKL to teach tasks to robots.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact</title>
<link href="https://hdl.handle.net/1721.1/144768" rel="alternate"/>
<author>
<name>Li, Yifei</name>
</author>
<id>https://hdl.handle.net/1721.1/144768</id>
<updated>2022-08-30T03:28:43Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact
Li, Yifei
Cloth simulation has wide applications in computer animation, garment design, and robot-assisted dressing. This work presents a differentiable cloth simulator whose additional gradient information facilitates cloth-related applications. Our differentiable simulator extends a state-of-the-art cloth simulator based on Projective Dynamics (PD) with dry frictional contact. We draw inspiration from previous work to propose a fast and novel method for deriving gradients in PD-based cloth simulation with dry frictional contact. Furthermore, we conduct a comprehensive analysis and evaluation of the usefulness of gradients in contact-rich cloth simulation. Finally, we demonstrate the efficacy of our simulator in a number of downstream applications, including system identification, trajectory optimization for assisted dressing, closed-loop control, inverse design, and real-to-sim transfer. We observe a substantial speedup from using our gradient information in solving most of these applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Representative Benchmarks by Automatically Synthesizing Datasets</title>
<link href="https://hdl.handle.net/1721.1/144767" rel="alternate"/>
<author>
<name>Lee, Hyun Ryong</name>
</author>
<id>https://hdl.handle.net/1721.1/144767</id>
<updated>2022-08-30T03:37:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Generating Representative Benchmarks by Automatically Synthesizing Datasets
Lee, Hyun Ryong
Benchmarks that closely match the behavior of production workloads are crucial to design and provision computer systems. However, current approaches fall short: First, open-source benchmarks use public datasets that cause different behavior from production workloads. Second, black-box workload cloning techniques generate synthetic code that imitates the target workload, but the resulting program fails to capture most workload characteristics, such as microarchitectural bottlenecks or time-varying behavior.&#13;
&#13;
Generating code that mimics a complex application is an extremely hard problem. Instead, this thesis proposes a different and easier approach to benchmark synthesis. The key insight is that for many production workloads the program is publicly available, or there is a reasonably similar open-source program. In this case, generating the right dataset is sufficient to produce an accurate benchmark.&#13;
&#13;
Based on this observation, this thesis presents Datamime, a profile-guided approach to generate representative benchmarks for production workloads. Datamime uses the performance profiles of a target workload to generate a dataset that, when used by a benchmark program, behaves very similarly to the target workload in terms of its microarchitectural characteristics.&#13;
&#13;
We evaluate Datamime on several datacenter workloads. Datamime generates synthetic benchmarks that closely match the microarchitectural features of these workloads, with a mean absolute percentage error of 4% on IPC. Microarchitectural behavior stays close across processor types. Finally, time-varying behaviors are also replicated, making these benchmarks useful to e.g. characterize and optimize tail latency.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vision-based Proprioceptive and Force Sensing for Soft Robotic Actuator</title>
<link href="https://hdl.handle.net/1721.1/144766" rel="alternate"/>
<author>
<name>Zhang, Annan</name>
</author>
<id>https://hdl.handle.net/1721.1/144766</id>
<updated>2022-08-30T03:49:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Vision-based Proprioceptive and Force Sensing for Soft Robotic Actuator
Zhang, Annan
Developing reliable control strategies for soft robots requires advances in soft robot perception. Due to their near-infinite degrees of freedom, obtaining useful sensory feedback from soft robots remains a long-standing challenge. Moreover, sensorization methods must be co-developed with more robust approaches to soft robotic actuation. However, current soft robotic sensors pose many performance limitations, and available materials and manufacturing techniques complicate the design of sensorized soft robots. To address these needs, we introduce a vision-based method to sensorize robust, electrically-driven soft robotic actuators constructed from a new class of architected materials. Specifically, we position cameras within the hollow interiors of actuators based on handed shearing auxetics (HSA) to record their deformation. Using external motion capture data as ground truth, we train a convolutional neural network (CNN) that maps the visual feedback to the pose of the actuator’s tip. Our model provides predictions of tip pose with sub-millimeter accuracy from only six minutes of training data, while remaining lightweight with 300,000 parameters and an inference time of 18 milliseconds per frame on a single-board computer. We also develop a model that additionally predicts the horizontal tip force acting on the actuator and demonstrate its ability to generalize to previously unseen forces. Overall, our methods present a reliable vision-based approach for designing sensorized soft robots built from electrically-actuated, architected materials.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Site Synthesis of Halide Perovskite Nanocrystals with Sub-50 nm Positional Accuracy</title>
<link href="https://hdl.handle.net/1721.1/144765" rel="alternate"/>
<author>
<name>Jastrzebska-Perfect, Patricia</name>
</author>
<id>https://hdl.handle.net/1721.1/144765</id>
<updated>2022-08-30T03:07:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">On-Site Synthesis of Halide Perovskite Nanocrystals with Sub-50 nm Positional Accuracy
Jastrzebska-Perfect, Patricia
Metal halide perovskites comprise a promising class of semiconductors with broad applications ranging from optoelectronics and photovoltaics to photocatalysis. To extend these applications to on-chip devices, on-site growth of perovskite nanocrystals with high positional accuracy must be achieved. In this thesis, we demonstrate a method for growing sub-50 nm halide perovskite nanocrystals with sub-50 nm positional accuracy. We first explore how local forces can be engineered to control particle positioning at the nanoscale. We then show how our control parameters, namely template geometry and solvent contact angle, can be utilized to position nanocrystals within lithographic templates. This thesis reveals a unique strategy that overcomes the limitations of conventional lithographic processes for high-resolution growth of halide perovskites.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rethinking Update-in-Place Key-Value Stores for Modern Storage</title>
<link href="https://hdl.handle.net/1721.1/144764" rel="alternate"/>
<author>
<name>Markakis, Markos</name>
</author>
<id>https://hdl.handle.net/1721.1/144764</id>
<updated>2022-08-30T03:02:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Rethinking Update-in-Place Key-Value Stores for Modern Storage
Markakis, Markos
Several widely-used key-value stores, like RocksDB, are designed around log-structured merge trees (LSMs). Optimizing for the performance characteristics of HDDs, LSMs provide good write performance by emphasizing sequential access to storage. However, this approach negatively impacts read performance: LSMs must employ expensive compaction jobs and memory-consuming Bloom filters in order to achieve reasonably fast reads. In the era of NVMe SSDs, we argue that this trade-off between read performance and write performance is sub-optimal. With enough parallelism, modern storage media have comparable random and sequential access performance, making update-in-place designs, which traditionally provide high read performance, a viable alternative to LSMs.&#13;
&#13;
In this thesis, based on a research paper currently under submission, we close the gap between log-structured and update-in-place designs on modern SSDs by taking advantage of data and workload patterns. Specifically, we explore three key ideas: (A) record caching for efficient point operations, (B) page grouping for high-performance range scans, and (C) insert forecasting to reduce the reorganization costs of accommodating new records. We evaluate these ideas by implementing them in a prototype update-in-place key-value store called TreeLine. On YCSB, we find that TreeLine outperforms RocksDB and LeanStore by 2.18× and 2.05× respectively on average across the point workloads, and by up to 10.87× and 7.78× overall.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Properties of Scintillating Integrated Fibers as Conformal Radiation Detectors</title>
<link href="https://hdl.handle.net/1721.1/144756" rel="alternate"/>
<author>
<name>Sesler, Jefferson B.</name>
</author>
<id>https://hdl.handle.net/1721.1/144756</id>
<updated>2022-08-30T03:29:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Simulating Properties of Scintillating Integrated Fibers as Conformal Radiation Detectors
Sesler, Jefferson B.
Advancements in fabricating multifunctional fibers with embedded integrated circuits have the potential to create fibers with novel capabilities, including radiation detection when combined with elements of scintillating fiber detectors. By sensing and processing scintillation light within the fiber itself, integrated fibers could be woven into a fabric to make a rugged, conformal radiation detector. Such a fabric could be wearable or easily deployable in the field, and useful in searches for radiation sources or in performing neutron coincidence counting for plutonium. Before prototyping these fibers, it is necessary to show that a fabric detector will be able to detect a radiation source at a safe distance in a short time, and it is helpful to optimize the fiber design to distinguish gamma rays from neutrons. This work presents two series of Monte Carlo simulations evaluating these capabilities. Calculating a fabric’s detection of a radiation source over background radiation entailed measuring the room background in our lab and developing a method to model it in the Monte Carlo simulation. We used this to predict the limits of a fabric’s sensitivity, showing that a wearable conformal detector performs comparably to or better than existing portable commercial detectors. The second series of simulations examined whether the triggering of multiple fibers by gamma rays, but not by neutron radiation, could be used to distinguish the particles. We present the simulated readout of a variety of fabric and fiber geometries exposed to gamma rays and neutrons, and show that fabrics composed of thin fibers, ideally around 0.15 mm to 0.2 mm wide, can distinguish a large proportion of gamma ray events, but will need further characterization work to separate the remaining gammas from neutron events.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Root Storage and Exudation in the Brachypodium Genus</title>
<link href="https://hdl.handle.net/1721.1/144754" rel="alternate"/>
<author>
<name>Headrick, Kevin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144754</id>
<updated>2022-08-30T03:52:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigating Root Storage and Exudation in the Brachypodium Genus
Headrick, Kevin C.
In plants, sources are the tissues where a specific nutrient is absorbed or fixed into a plant, and sinks are the tissues in the plant where the nutrients are dispersed after they are absorbed or fixed. In the case of carbon, leaves tend to be the source, as most photosynthesis occurs in the leaves, and one of the sinks for carbon is root tissue. Because carbon-based compounds are the main source of cellular energy in plants, understanding the source-sink dynamics of carbon in plants gives us insight into which functions or processes the plant is investing in. Specifically, the two aspects of source-sink dynamics we investigate in this study are root exudation and root storage. We look at the difference in exudation between annual and perennial species and compare the levels of carbon storage in roots that arise from shoot apical meristems or from root apical meristems. These two functions occur in response to surplus carbon content in a plant, which we artificially prompted by withholding nitrogen from our plants. We use two species from the Brachypodium genus for this study: an annual, B. distachyon, and a perennial, B. sylvaticum. We chose this genus because it has evolved annuality and perenniality multiple times, and it is a good model for several economically valuable species. Most conclusively, we found that there are significantly higher levels of carbon storage in shoot-borne roots compared to seminal root tissues.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accuracy of embodied carbon estimation during early-stage structural design</title>
<link href="https://hdl.handle.net/1721.1/144753" rel="alternate"/>
<author>
<name>Bastian, Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/144753</id>
<updated>2022-08-30T03:31:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Accuracy of embodied carbon estimation during early-stage structural design
Bastian, Luke
The embodied carbon of the built environment accounts for approximately 11% of the planet’s annual carbon emissions and is projected to increase significantly as an additional 2.5 billion people are expected to be living in cities by 2050. In recent years, a number of tools have been developed to predict carbon emissions associated with buildings. However, because these tools are either overly simplistic or contingent on exact material quantities being known, very few can be effectively used to inform structural design decisions. It is therefore essential that strategies be developed to better estimate and reduce embodied carbon early in the design process.&#13;
&#13;
This work explores the ability of an improved building model generator to predict the embodied carbon stemming from structural members, specifically floors, beams, columns, foundations, and structural walls. The generated models can take the shape of any building outline, no matter how complex, and allow designers to adjust important design parameters including column spacing, beam spacing, loading, materiality, and more. Regarding materials, designers can explore tradeoffs between timber, steel, and concrete framed structures (or combinations of any two). Building information models (BIMs) of four existing structures are then utilized as ground truths for validation of these models, which are found to be relatively predictive of actual embodied carbon with a maximum error of 10%. While the speed of the model needs some improvement, this work demonstrates that it can be accurately used in early design stages to predict embodied carbon and adjust designs accordingly.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning-Based Methods for Spacecraft Dynamics Modeling, Filtering, and Predictive Control</title>
<link href="https://hdl.handle.net/1721.1/144750" rel="alternate"/>
<author>
<name>Parker, William E.</name>
</author>
<id>https://hdl.handle.net/1721.1/144750</id>
<updated>2022-08-30T03:28:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning-Based Methods for Spacecraft Dynamics Modeling, Filtering, and Predictive Control
Parker, William E.
Spacecraft motion under model uncertainty arises in many on-orbit assembly, servicing, and assistance scenarios, including tasks requiring manipulation of unknown grappled objects. Traditionally, adaptive model-based control approaches have relied on an analytical dynamics model with a set of parameters that are estimated from observations of effective spacecraft dynamics. Without extensive a priori knowledge of the system under study, however, it can be difficult to identify a parametric model structure that accurately captures the dynamics of the system. In this work, the author proposes a new approach for learning unknown "hard-to-model" spacecraft dynamics in a non-parametric way using techniques including Gaussian process regression, deep evidential regression, and a novel particle filter regression scheme. These non-parametric and uncertainty-aware methods allow previously unmodeled dynamics to be learned onboard a spacecraft in real-time with very little a priori knowledge of the system required, but come with increased computational cost.&#13;
&#13;
State estimation and control tasks typically rely on accurate process and observation models, but these process models have historically been analytical and parametric, requiring advance knowledge of the system. In this work, uncertainty-aware learned non-parametric dynamics models are used for state estimation filtering, model predictive control, and Fault Detection, Isolation, and Recovery (FDIR) scenarios. The Uncertainty-Aware Regression Unscented Kalman Filter (UAR-UKF) is developed and applied to perform state estimation for a nonlinear dynamical system using a learned process model. The Uncertainty-Aware Regression Bayesian Filter (UAR-BF) is also designed to capitalize on the learned process model's ability to perform state and covariance transitions, and uses Gaussian conflations instead of the Kalman gain to compute a posterior state and covariance at each timestep. A non-parametric model predictive control framework is also discussed, where optimal control trajectories are computed over a finite receding time horizon using a learned process model.  An example scenario is presented to highlight the utility of learning-based methods for fault detection, isolation, and recovery after a sudden actuator failure. Learning-based methods are shown to be favorable compared to parametric modeling methods for both the filtering and control applications in simulation and on real robotic systems operating in microgravity on the International Space Station. The tools developed in this work are generally applicable and potentially useful for non-parametric learning and control of any complex, uncertain system in which little or no a priori knowledge is available.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Orbit Characterization of a Microelectromechanical Systems (MEMS) Deformable Mirror (DM): Mission Results from the Deformable Mirror Demonstration Mission (DeMi) CubeSat</title>
<link href="https://hdl.handle.net/1721.1/144748" rel="alternate"/>
<author>
<name>Vlahakis, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/144748</id>
<updated>2022-08-30T03:31:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">On-Orbit Characterization of a Microelectromechanical Systems (MEMS) Deformable Mirror (DM): Mission Results from the Deformable Mirror Demonstration Mission (DeMi) CubeSat
Vlahakis, Sophia
The Deformable Mirror Demonstration Mission (DeMi) CubeSat operated from July 2020 to March 2022 and demonstrated the successful operation of a Microelectromechanical Systems (MEMS) Deformable Mirror (DM) on orbit for the first time. As part of space-borne adaptive optics systems, DMs correct optical aberrations and speckles due to mechanical, thermal, and optical effects. MEMS DMs are particularly well suited for space systems because they are compact, low power devices and have a high density of actuators to provide high precision wavefront control. This makes MEMS DMs a key technology for future space-based telescopes with adaptive optics for applications such as exoplanet direct imaging with coronagraphs.&#13;
&#13;
The DeMi payload design is a miniature space telescope with an adaptive optics system including a 140-actuator MEMS DM from the Boston Micromachines Corporation (BMC) and a Shack-Hartmann wavefront sensor (SHWFS). The mission objectives were on-orbit characterization of the MEMS DM, on-orbit demonstration of closed-loop mirror control, and improvement of the point spread function (PSF) of an astronomical source.&#13;
&#13;
This thesis discusses experiments characterizing the on-board MEMS DM by using the SHWFS to measure deflections of individual actuators on the DM in response to input voltages up to 150 V. Results show the DeMi mission successfully measured on-orbit actuator deflection to a precision of 12 nm for input voltages up to approximately 100 V. Repeatability is characterized by the difference in measured deflection between actuators commanded to the same voltage over time, which is shown to have a median of 2-13 nm in space. Data from space operations show that on-orbit DM performance and measurement uncertainty are similar to ground testing performance. Together, ground testing and on-orbit data indicate a correlation between actuator deflection and temperature, with actuators deflecting 22% less when the payload temperature is 6 degrees C than when it is 28-29 degrees C.&#13;
&#13;
The DM performed consistently and accurately throughout its lifetime with no evidence of actuators becoming stuck or unresponsive. The DeMi mission results have raised the Technology Readiness Level (TRL) of MEMS DM technology from a 5 to a 9.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Site-selective C-H Bond Diversification of Glycosides</title>
<link href="https://hdl.handle.net/1721.1/144747" rel="alternate"/>
<author>
<name>Liu, Aaron</name>
</author>
<id>https://hdl.handle.net/1721.1/144747</id>
<updated>2022-08-30T03:29:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Site-selective C-H Bond Diversification of Glycosides
Liu, Aaron
Synthetic carbohydrates have recently emerged as an important motif in the development of modern therapeutics. Despite the biological significance of carbohydrate mimetics, synthetic challenges continue to limit the widespread implementation of these compounds. The thesis presented herein includes three specific sections with the ultimate goal of developing a general, selective and efficient catalytic system for monosaccharide functionalization: 1) site-selective halogenation &amp; derivatization of sugars, 2) expedient synthesis of L-glucose and 3) diastereoselective C-H alkylation of sugars. These radical-based transformations harness polarity compatibility between the hydrogen atom abstractor and the substrate to afford site-selective functionalization of sugars that have broad potential biomedical value.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relationship of Mechanical Deformations and Electrochemical Properties of Lithium Ion Batteries-An Experimental Study</title>
<link href="https://hdl.handle.net/1721.1/144746" rel="alternate"/>
<author>
<name>Reynolds, Christopher M.A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144746</id>
<updated>2022-08-30T03:40:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Relationship of Mechanical Deformations and Electrochemical Properties of Lithium Ion Batteries-An Experimental Study
Reynolds, Christopher M.A.
MIT’s Impact &amp; Crashworthiness Lab (ICL) has been conducting research into lithium-ion batteries in an effort to produce a computational model that can predict the impact of mechanical deformations on the electrochemical properties of lithium-ion batteries. Experiments were conducted on two different types of lithium-ion battery cells in order to continue gathering data to refine and validate the ICL model. First, prismatic cells were cycled through varying numbers of charges and discharges, with one prismatic cell placed under a compressive load to measure how much force it would exert on its carriage throughout its cycling. Upon completion of the cycling, the prismatic cells were subjected to indentation to the point of mechanical failure. Second, pouch cells were subjected to three different four-point bending conditions and cycled through 10 charges and discharges. Upon completion of the cycling, the pouch cells were removed from the four-point bending system to measure the deflection of the changed shape. Various voltage, current, and force measurements were taken throughout the experiments to help refine the ICL computational model and to allow additional observations regarding the relationship between mechanical deformations and electrochemical properties.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Collaborative Channel Finding Approaches for Autonomous Marine Vehicles</title>
<link href="https://hdl.handle.net/1721.1/144744" rel="alternate"/>
<author>
<name>Gershfeld, Nikolai</name>
</author>
<id>https://hdl.handle.net/1721.1/144744</id>
<updated>2022-08-30T03:51:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Adaptive Collaborative Channel Finding Approaches for Autonomous Marine Vehicles
Gershfeld, Nikolai
This thesis presents an investigation into the problem of rapid identification of a channel that crosses a body of water, using one or more Autonomous Marine Vehicles (AMVs). A new algorithm called Proposal Based Adaptive Channel Search (PBACS) is presented as a potential solution that improves upon current methods. The empirical performance of PBACS is compared to lawnmower surveying and to Markov decision process (MDP) planning with two state-of-the-art reward functions: Upper Confidence Bound (UCB) and Maximum Value Information (MVI). The performance of each method is evaluated through comparison of the time it takes to identify a continuous channel through an area, using one, two, three, or four Autonomous Surface Vehicles (ASVs). The performance of each method is compared across ten simulated bathymetry scenarios and one field area, each with different channel layouts.&#13;
&#13;
The results from simulations and field trials presented in this thesis indicate that, on average, multi-vehicle PBACS outperforms lawnmower-, UCB-, and MVI-based methods, with two main exceptions. One case is when lawnmower start locations are aligned with a straight channel, which can happen for any number of vehicles. The lawnmower approach outperforms the others in this case. However, this alignment on an unknown bathymetry would happen purely by chance, while PBACS identifies the channel regardless of any alignment. The second case is when the shape of the channel is curved and no straight path exists. In this case, PBACS outperforms other approaches only when more than two vehicles are used.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Health Co-Benefits: Air Quality-Related Equity Implications of US Decarbonization Policy</title>
<link href="https://hdl.handle.net/1721.1/144743" rel="alternate"/>
<author>
<name>Picciano, Paul</name>
</author>
<id>https://hdl.handle.net/1721.1/144743</id>
<updated>2022-08-30T03:12:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Beyond Health Co-Benefits: Air Quality-Related Equity Implications of US Decarbonization Policy
Picciano, Paul
Emissions of greenhouse gases (GHG) that contribute to climate change are often associated with emissions of air pollutants that react to form fine particulate matter (PM2.5), which is a significant cause of premature mortality and disproportionately harms people of color and low-income populations in the U.S. Ambitious climate policy to decarbonize the economy may be an appealing pathway to concurrently reduce air pollution and improve health, and a growing body of literature has established the significant health benefits from policies aimed at reducing GHG emissions. However, uncertainty remains about how different U.S. decarbonization strategies might affect air pollution-related health disparities.&#13;
&#13;
This thesis explores the extent to which near-term federal carbon pricing can reduce racial/ethnic disparities in air pollution exposure, as well as pathways to reduce these disparities more generally. The main policy instrument evaluated here is an economy-wide cap-and-trade program that reduces carbon dioxide (CO2) emissions by 50% in 2030. The analysis leverages modeled energy-economic scenarios to estimate emissions reductions under the policy and applies an air quality model to evaluate PM2.5-related equity outcomes. In 2030, we estimate that the policy drives national emission reductions of sulfur dioxide (49%) and nitrogen oxides (16%), with smaller changes in other PM2.5-related pollutants, relative to a baseline with no federal carbon policy. The policy reduces average PM2.5 exposure for all racial/ethnic groups that we evaluate, with the greatest benefit for Black and non-Hispanic white populations primarily due to changes in the electricity sector. However, despite reductions in average PM2.5 exposures, disparities remain under the policy, and the relative gap in exposure between non-Hispanic white people and people of color slightly widens on average. Sensitivity analyses evaluating alternative distributions of emissions that are consistent with total CO2 reductions under the policy have limited impact on the results. We conclude that near-term federal carbon pricing can reduce air pollution exposure overall but has minimal impact on disparities, emphasizing the need for complementary policy to fulfill goals of mitigating environmental injustices.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonprehensile Manipulation of Multi-Link Hinges</title>
<link href="https://hdl.handle.net/1721.1/144742" rel="alternate"/>
<author>
<name>White, Danielle Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/144742</id>
<updated>2022-08-30T03:42:34Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Nonprehensile Manipulation of Multi-Link Hinges
White, Danielle Marie
Folding a multi-link hinge using nonprehensile manipulation provides insight into the problem classes of nonprehensile manipulation and nonrigid object manipulation. Because the dynamics of nonrigid objects are generally governed by more parameters than those of rigid bodies, robustness to parameter uncertainty is particularly important for these types of tasks. In this work, we propose several Cartesian impedance controllers which utilize vision feedback, force feedback, or both to fold a multi-link hinge. We characterize the robustness of these controllers to various system parameters, which provides insight into the effect of different types of feedback on controller performance.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithm-Agnostic System for Measuring Susceptibility of Cryptographic Accelerators to Power Side Channel Attacks</title>
<link href="https://hdl.handle.net/1721.1/144741" rel="alternate"/>
<author>
<name>John, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/144741</id>
<updated>2022-08-30T03:37:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Algorithm-Agnostic System for Measuring Susceptibility of Cryptographic Accelerators to Power Side Channel Attacks
John, Brandon
Many digital devices, from secure enclaves to generic processors, often handle encryption of sensitive data. Protecting this sensitive data is a significant challenge, with potential vulnerabilities extending from bugs in both software and hardware. One major class of vulnerabilities under active research is the use of Power Side Channels (PSCs), which involve precisely measuring the power consumption of a device over time. However, current research is fairly disjoint, without a standardized set of tools for quantifying protection techniques. This leads to the motivation of this project: to create a standardized baseline system for evaluating power side channels and their defenses.&#13;
&#13;
This project makes several contributions to the power side channel community. First, it enables calibration of Signal to Noise Ratio (SNR) measurements to a common baseline, and thus easier comparison between various defense techniques. Second, it proposes a method of measuring SNR that requires a constant number of samples, as compared to some techniques that keep sampling until some reference amount of information is leaked. Third, it includes a case study of AES cores which yields a better understanding of how a PSC amplification technique (specifically using many identical cores in parallel) affects the PSC’s signal “strength” and thus time to successfully extract the secret data. Finally, it makes public an ecosystem for quickly starting power side channel research without the significant effort of implementing everything from scratch before any research can begin.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Frame Fields for Polycube Generation and Quad Simplification</title>
<link href="https://hdl.handle.net/1721.1/144740" rel="alternate"/>
<author>
<name>Cheng, Katherine Yi-Lin</name>
</author>
<id>https://hdl.handle.net/1721.1/144740</id>
<updated>2022-08-30T04:00:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Adaptive Frame Fields for Polycube Generation and Quad Simplification
Cheng, Katherine Yi-Lin
Hexahedral (hex) meshing is the problem of approximating a 3-dimensional volume with cube-like structures, and is used to discretize arbitrary volumes to solve PDEs. The structure of a hex mesh is largely guided by its singular graph. For a hex mesh to have good quality, the hexes must have relatively low distortion, and thus the mesh must have a relatively simple singular graph, as singularities lead to distorted hexes. While frame field-based methods are not guaranteed to produce a hex mesh, they are excellent at simplifying singular graphs. In this thesis, we explore possible solutions to the hexahedral meshing problem based on frame fields.&#13;
&#13;
We gain insight into singularities by adaptively increasing resolution of their frame fields. We show that singularities of octahedral frame fields tend to avoid areas of high resolution, but singularities of odeco frame fields do not. We then use this discovery to help build polycube frame fields by artificially imposing a distorted metric to force singularities towards the boundary. Our approach is able to produce polycube frame fields, but does not solve meshing of certain challenging structures, such as the ramp.&#13;
&#13;
We then apply frame-based ideas towards the 2-dimensional analog of hex mesh simplification: quad mesh simplification. Existing quad mesh simplification methods are not able to output a mesh with a fully simplified singular graph. We combine the ability of frame field optimization to remove complex singularities with the ability of quad mesh simplification to maintain the existence of a quad mesh. More specifically, we use frame field-inspired gradients to produce a candidate ranking algorithm for quadrilateral mesh simplification. We find that we can greatly simplify quad meshes with complicated singularities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Software and Hardware Infrastructure for Visual-Inertial SLAM</title>
<link href="https://hdl.handle.net/1721.1/144739" rel="alternate"/>
<author>
<name>Mohamoud, Mubarik M.</name>
</author>
<id>https://hdl.handle.net/1721.1/144739</id>
<updated>2022-08-30T03:38:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Software and Hardware Infrastructure for Visual-Inertial SLAM
Mohamoud, Mubarik M.
One of the challenges faced by researchers in the field of robot localization and mapping is finding a reliable infrastructure to test their ideas. That infrastructure could be a simulation platform, suitable hardware, or a sensor interface. A useful simulation platform needs to capture the dynamics and the sensor modalities that meet the researchers’ needs. Suitable hardware needs to have the capability to navigate, sense the environment, and use onboard computers to run the software it was designed for. A sensor interface allows adapting and testing algorithms on novel sensors. In this research, we develop essential hardware and software infrastructure for aiding the development and testing of visual-inertial Simultaneous Localization and Mapping (SLAM) systems. SLAM is a fundamental problem in robot navigation and enables constructing or updating a representation (map) of an environment utilizing sensors on board a robot while concurrently using that representation to localize the robot itself. In visual-inertial SLAM the onboard sensors are cameras (monocular or stereo) and an inertial measurement unit (IMU). The contribution of this thesis is threefold. First, we develop a hardware platform consisting of a real drone capable of running state-of-the-art metric-semantic SLAM; this infrastructure allows us to test advanced SLAM algorithms using real sensors and real robot dynamics. Second, we develop a multi-robot simulation platform that includes dynamically accurate, photo-realistic drones; this platform allows extending our tests to multi-robot SLAM systems. Finally, we develop a new sensor interface; in particular, we integrate and test an omnidirectional stereo frontend in Kimera, an open-source visual-inertial SLAM pipeline. The thesis presents the design, implementation, and testing of each contribution.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combining Functional and Automata Synthesis to Learn Causal Reactive Programs</title>
<link href="https://hdl.handle.net/1721.1/144736" rel="alternate"/>
<author>
<name>Das, Ria A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144736</id>
<updated>2022-08-30T03:51:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Combining Functional and Automata Synthesis to Learn Causal Reactive Programs
Das, Ria A.
Despite impressive advances that have made it the mainstream route towards building human-like AI, deep learning suffers from key limitations that make it unlikely to replicate human intelligence on its own. Specifically, it is very data-hungry, often generalizes poorly to new scenarios, and is not very interpretable, lacking features like compositionality that characterize human knowledge. Given these shortcomings, we explore a different approach to engineering human-like AI called program synthesis, in which learned knowledge is represented in the form of a symbolic program. Programs can be learned from limited data and can interpretably capture a wide variety of structured knowledge. However, existing synthesis methods do not scale to long programs that model very complex datasets. In this thesis, we expand the horizon of programs that can be realistically synthesized by bridging methods from two orthogonal communities within programming languages: the functional synthesis and automata synthesis communities. We focus on the particular domain of causal mechanism discovery in Atari-style grid worlds, and develop a synthesis algorithm that infers a program describing the causal rules of the world from a sequence of observations. We evaluate our algorithm on two benchmark datasets, including one that we constructed using a new programming language called Autumn. Our ongoing results signal the promise of our method, both for modeling efficient, human-like causal discovery and in synthesis and learning contexts more broadly.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Polymer Electrolyte Discovery with Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/144735" rel="alternate"/>
<author>
<name>Bradford, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/144735</id>
<updated>2022-08-30T03:02:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Accelerating Polymer Electrolyte Discovery with Machine Learning
Bradford, Gabriel
Solid polymer electrolytes (SPEs) have the potential to improve energy storage devices by enhancing safety and enabling higher energy densities. However, SPEs suffer from significantly lower ionic conductivity than liquid electrolytes, limiting their adoption in functional devices. To facilitate more rapid discovery of high ionic conductivity SPEs, we developed a chemistry-informed machine learning model that accurately predicts ionic conductivity of SPEs. To train the model, we compiled training data of SPE ionic conductivity from hundreds of experimental publications. Our chemistry-informed model incorporates Arrhenius behavior into a state-of-the-art message passing neural network and has significantly improved accuracy over models with no explicit chemistry encoded. This method of tailoring a model to a specific prediction task by incorporating known chemical physics would be applicable to other materials discovery tasks and would be especially helpful where limited training data are available. Using our fully trained model, ionic conductivity values were predicted for several thousand candidate SPE formulations, allowing us to identify promising candidate SPEs. We also generated predictions for several different anions in two commonly used polymers, allowing us to examine the role of the anion in ionic conductivity.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Performance of a Highly Mobile, Climbing, Wheeled, Soft-bodied Robot</title>
<link href="https://hdl.handle.net/1721.1/144734" rel="alternate"/>
<author>
<name>LaRocca, Ava</name>
</author>
<id>https://hdl.handle.net/1721.1/144734</id>
<updated>2022-08-30T03:09:21Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design and Performance of a Highly Mobile, Climbing, Wheeled, Soft-bodied Robot
LaRocca, Ava
Search-and-rescue presents high-risk environments and scenarios to human operators, making it well-recognized as an area for potential robotic contributions. Existing search-and-rescue robotic platforms are often too bulky to infiltrate the dense, complex terrain of a collapsed building, while small robotic platforms are lacking in functionality and practicality. There is a need for a robotic platform that is fast and agile on surfaces at all angles, while being compact enough to navigate rubble and gather information uninhibited. There is also an unexplored area in robotics at the intersection of wheeled and soft robotics. This thesis aims to address the need for a highly mobile small robot while initiating the exploration of this promising merger of fields.&#13;
&#13;
This work presents a design and proof-of-concept testing for a palm-sized vehicle that can travel quickly on and transition between planar surfaces at most angles relative to each other. The primary innovation of the design is the integration of wheeled and soft robotics. The tricycle-style vehicle uses magnetic wheels to adhere to surfaces and a soft, silicone body to introduce continuous, three degree-of-freedom mobility into the vehicle body. Individual components were optimized using theoretical and experimental analyses. The optimization results informed the design parameters of an integrated vehicle. Eight design parameters were further refined via iterative testing of the integrated vehicle variants in a controlled environment.&#13;
&#13;
The final vehicle was able to drive quickly on planar surfaces at any angle relative to gravity. It could transition between surfaces intersecting at angles as small as 70° and as large as 285° at any angle relative to gravity. This presents an advancement over existing vehicles, which are more limited in transition angle ranges and/or rely upon the positioning of the gravitational vector to perform the transition successfully. Additional capabilities of this soft-bodied vehicle include axial twisting of the silicone body to accommodate surface variations, and side-to-side bending for skid-free steering. Altogether, the presented vehicle demonstrates that the fundamental idea of the design concept – the merger of wheeled and soft robotics to achieve greater mobility – is sound and merits further consideration.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bodies, Land, and Instagram: Networked Foraging and Infrastructural Media in the United States</title>
<link href="https://hdl.handle.net/1721.1/144732" rel="alternate"/>
<author>
<name>Grandjean, Emily E.</name>
</author>
<id>https://hdl.handle.net/1721.1/144732</id>
<updated>2022-08-30T03:19:36Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Bodies, Land, and Instagram: Networked Foraging and Infrastructural Media in the United States
Grandjean, Emily E.
This thesis examines the ways in which people in the United States use social media to learn about, practice, and share their experiences of foraging. Through an exploration of the histories of the U.S. high-tech industry and federal land ownership and private property systems, I discuss how colonial, capitalist, patriarchal, and white supremacist logics converge as an ecological regime that exploits bodies and land, accruing power to wealthy, white people and corporations. Acting within this ecological regime, networked foragers use a variety of technologies and techniques to orient themselves within their local environment and develop group-based “skilled vision.” Some networked foragers use their bodies and foraged foods as biotechnologies with which to intimately connect with the land, develop new relationships, and maintain local ecosystems. At the same time, the learning process for some networked foragers may be limited by Western, colonial scientific perspectives and “expertise.” I observe that online interactions may, in some cases, foreclose difficult but generative conversations about foraging ethics and the needs of more-than-human forms of life. Finally, I find that social media platforms may encourage a networked foraging culture of spectacle, entertainment, and consumerism, and discourage the interlinking of foraging with politics and ethics.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vivid: An Operating System Kernel for Radiation-Tolerant Flight Control Software</title>
<link href="https://hdl.handle.net/1721.1/144731" rel="alternate"/>
<author>
<name>Skeggs, Cel Andromeda</name>
</author>
<id>https://hdl.handle.net/1721.1/144731</id>
<updated>2022-08-30T03:03:32Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Vivid: An Operating System Kernel for Radiation-Tolerant Flight Control Software
Skeggs, Cel Andromeda
This thesis considers the challenge of defending flight software from radiation errors without a radiation-hardened processor. A new real-time operating system, Vivid, explores the use of redundant multithreading to protect critical software components from radiation errors, and offers new abstractions to reduce the number of single points of vulnerability in the system. It introduces a static component initialization system for C, which eliminates most runtime initialization steps from the operating system and flight software. It introduces a partition scheduler based on execution clips, which ensures that software components always start from a safe state, and it protects the system’s safe state using a pair of memory scrubbers. Vivid introduces voting ducts, an inter-process communication primitive for redundant multithreading that eliminates single points of vulnerability from the voting process. Finally, it defines a sequence of repair that ultimately grounds the correct operation of all components in the system’s software in a hardware watchdog.&#13;
&#13;
To demonstrate the applicability and effectiveness of Vivid, this thesis introduces Swivel, a testbench spacecraft, and describes SwivelFSW, which is the implementation of flight software that meets Swivel’s behavioral requirements, and SwivelSim, which is the simulation of Swivel’s avionics. Next, this thesis introduces Hailburst, a system for efficient processor emulation and radiation fault injection, and uses it to evaluate Vivid’s radiation tolerance through a series of accelerated radiation injection trials. In the tested configuration, Vivid tolerates approximately 149 out of every 150 injected radiation faults without any observed requirement failures, and recovers from the remaining 1 out of 150 radiation faults within at most 2.05 seconds of recovery time in the worst observed case. Because some of Vivid’s defenses appear to be more effective than others, and some may be counterproductive, this thesis discusses future work that would be required before Vivid’s abstractions could be applied to real-world flight software.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Two-Stage Piezoelectric Resonator and Switched Capacitor DC-DC Converter</title>
<link href="https://hdl.handle.net/1721.1/144729" rel="alternate"/>
<author>
<name>Wanyeki, Babuabel</name>
</author>
<id>https://hdl.handle.net/1721.1/144729</id>
<updated>2022-08-30T03:24:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Two-Stage Piezoelectric Resonator and Switched Capacitor DC-DC Converter
Wanyeki, Babuabel
Power converters are used in virtually every area of our lives from electric vehicle charging stations to television screens. Presently, magnetics pose a challenge for miniaturization as they fundamentally decrease in achievable power density at small scales. Our solution to this problem is to remove magnetic components altogether and instead design power converters based on piezoelectric resonators (PRs) and capacitors as the main passive elements. In previous work, we have demonstrated that PRs have high efficiencies and power density capabilities operating as dc-dc voltage regulators, but that these advantages wane for high step-down ratios. Alternatively, utilizing capacitors in a switched capacitor (SC) network can provide high step-down ratios with high power densities and efficiencies, but only for specific conversion ratios. By connecting the PR and SC converters together, there is an opportunity for each stage to address the drawbacks of the other in order to create a high power density and high efficiency power converter that can provide good voltage regulation and a high step-down ratio. The purpose of this thesis is to investigate, simulate, and build a two-stage converter using a piezoelectric resonator and switched capacitor converter.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Method for Organized Institutional Learning in the Navy Shipbuilding Community</title>
<link href="https://hdl.handle.net/1721.1/144728" rel="alternate"/>
<author>
<name>Collins, Elliot James</name>
</author>
<id>https://hdl.handle.net/1721.1/144728</id>
<updated>2022-08-30T03:56:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Method for Organized Institutional Learning in the Navy Shipbuilding Community
Collins, Elliot James
The implementation of a Set-Based Design (SBD) process by the U.S. Navy’s DDG(X) program was observed by the author beginning in June of 2020. Capturing knowledge and lessons learned is an important part of the SBD process. The adoption of SBD by the naval ship design community presents the Navy with an opportunity to implement a new system for capturing, storing, and providing access to institutional knowledge.&#13;
&#13;
Literature reviews covering recent surface combatant programs and the progression of SBD were completed as background. The challenges faced by naval ship acquisition programs have been well documented, but the transition between early-stage design, executed by a Navy-led team, and detailed design and construction, executed by civilian shipbuilding teams, has repeatedly proven difficult to manage. Involving shipbuilders in the early stages of ship design is a common recommendation from past programs, but the government-shipbuilder relationship during early-stage design is hindered by the prospect of future contract competition. Implementing a process for institutional learning will reduce the Navy’s reliance on civilian contractors and shipbuilders by capturing the impacts that design choices have on future outcomes, including producibility.&#13;
&#13;
Based on the system implemented by the DDG(X) team and inspired by the engineering checklists used by Toyota, a Navy Design Notebook System (NDNS) is proposed. The NDNS incorporates standard design documentation, lessons learned, and a networked storage system like the existing Integrated Design Environments (IDE) used by Navy programs. Guidance to support the implementation of an NDNS-compatible design documentation system on a fictional ship design program, called Program(X), is presented. The role of NDNS system maintainers is defined, and a format for capturing lessons learned throughout a ship’s lifecycle is presented.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Low-rank Simplicity Bias in Deep Networks</title>
<link href="https://hdl.handle.net/1721.1/144726" rel="alternate"/>
<author>
<name>Huh, Minyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/144726</id>
<updated>2022-08-30T03:01:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Low-rank Simplicity Bias in Deep Networks
Huh, Minyoung
Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? In this work, we make a series of empirical observations that investigate and extend the hypothesis that deeper networks are inductively biased to find solutions with lower effective rank embeddings. We conjecture that this bias exists because the volume of functions that map to low effective rank embeddings increases with depth. We show empirically that our claim holds true on finite-width linear and non-linear models on practical learning paradigms and show that on natural data, these are often the solutions that generalize well. We then show that the simplicity bias exists at both initialization and after training and is resilient to hyper-parameters and learning methods. We further demonstrate how linear over-parameterization of deep non-linear models can be used to induce low-rank bias, improving generalization performance on CIFAR and ImageNet without changing the modeling capacity.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital noise reconstruction with a quantum sensor</title>
<link href="https://hdl.handle.net/1721.1/144725" rel="alternate"/>
<author>
<name>Zhu, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/144725</id>
<updated>2022-08-30T03:38:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Digital noise reconstruction with a quantum sensor
Zhu, Yuan
Interactions between a quantum system and its environment are usually inevitable and can lead to decoherence that limits the performance of quantum devices. On the one hand, building robust quantum devices requires an in-depth characterization of such decoherence mechanisms. On the other hand, the extracted environmental information provides new approaches for investigating novel phases in quantum materials. Thus, probing and characterizing environmental noise is an essential task for both fundamental physics and quantum applications. Existing noise reconstruction methods in quantum systems rely on using approximated delta-like frequency filtering to sample the noise spectrum in the frequency domain using dynamical decoupling sequences.&#13;
&#13;
In this thesis, we propose a novel digital noise reconstruction method to reconstruct the environmental noise both in frequency and time domains, which avoids the delta function approximation for frequency filtering. By measuring the decoherence of a qubit sensor under a set of Walsh modulation sequences, the (arithmetic) auto-correlation of a stationary Gaussian noise that couples to the quantum sensor is directly reconstructed and the corresponding noise spectrum is then reconstructed through linear transformations (discrete Fourier transform). &#13;
&#13;
We systematically compare the typical dynamical decoupling-based noise reconstruction method (the Carr-Purcell-Meiboom-Gill reconstruction method) and the Walsh reconstruction method by evaluating the reconstruction errors of both methods under an Ornstein-Uhlenbeck noise model, which is commonly adopted to describe the magnetic noise generated by a dipolarly coupled spin bath. Combining theoretical and simulation results, we conclude that the error of our Walsh reconstruction method is limited only by the time-space sampling and can be easily suppressed by increasing the reconstruction order. &#13;
&#13;
We then perform a proof-of-principle demonstration using a single nitrogen-vacancy center in diamond to characterize its environmental noise dominated by the carbon-13 nuclear spin bath, and discuss the practical limitations of the reconstruction accuracy and avenues for its improvement. Finally, we introduce several directions of interest for future research.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effector Shape and Motion Optimization</title>
<link href="https://hdl.handle.net/1721.1/144723" rel="alternate"/>
<author>
<name>Jiang, Rebecca H.</name>
</author>
<id>https://hdl.handle.net/1721.1/144723</id>
<updated>2022-08-30T03:18:08Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Effector Shape and Motion Optimization
Jiang, Rebecca H.
In this thesis, methods are proposed for co-optimizing the shape and motion of robotic effectors for planar tasks. An effector is a device, typically at the end of a robotic arm, used to interact with the environment.  While planning object and robot-object contact trajectories is extensively studied, designing an effector that can execute the planned trajectories receives less attention.  As such, this thesis includes a framework that synthesizes an object trajectory and object-effector contact trajectory into an effector trajectory and shape that (a) does not penetrate the object, (b) makes contact with the object as specified, and (c) optimizes a user-specified objective.  This simplifies manipulator control by encoding task-specific contact information in the effector's geometry.  The key insight is posing these requirements as constraints in the effector's reference frame, preventing the need for explicit parameterization of the effector shape.  This prevents artificial restrictions on the shape design space.  Importantly, it also facilitates posing the shape and motion design problem as a tractable nonlinear program.  This method is particularly useful for problems where the shape of the effector surface must be precisely chosen to achieve a task.  This work is then extended to parallel-jaw grasping problems, in which grasp stability is considered while optimizing over contact locations, effector shape, and grasp configuration.  This provides a path forward for future work in which effectors with multiple internal degrees of freedom are co-optimized with motion.  Methods are demonstrated on example problems, including jar-opening, picking up objects in constrained spaces, and stably grasping sets of nonconvex objects.  The algorithms' results and computational cost are evaluated. 
A physical experiment demonstrates a robotic arm picking up a screwdriver from a table using a tool that was designed using the proposed framework and manufactured to the derived shape.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verified Scheduling Via High-Level Scheduling Rewrites</title>
<link href="https://hdl.handle.net/1721.1/144722" rel="alternate"/>
<author>
<name>Liu, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/144722</id>
<updated>2022-08-30T03:43:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Verified Scheduling Via High-Level Scheduling Rewrites
Liu, Amanda
I propose a lightweight Coq framework for optimizing tensor kernels written in a pure, functional array language. Optimizations rely on user scheduling using a series of verified, semantics-preserving rewrites. Unusually for compilation targeting imperative code with arrays and nested loops, all rewrites are source-to-source within a purely functional language. This language comprises a set of core constructs for expressing high-level computation detail and a set of what we call reshape operators, which can be derived from core constructs but trigger low-level decisions about storage patterns and ordering. We demonstrate that not only is this system capable of deriving the optimizations of existing state-of-the-art languages like Halide and generating comparably performant code, it is also able to schedule a family of useful program transformations beyond what is reachable in Halide.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Retrofit Solutions to Electric Power Sector Decarbonization in the American Midwest</title>
<link href="https://hdl.handle.net/1721.1/144721" rel="alternate"/>
<author>
<name>Morris, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/144721</id>
<updated>2022-08-30T03:14:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Retrofit Solutions to Electric Power Sector Decarbonization in the American Midwest
Morris, Jack
Electric power system decarbonization requires phasing out existing, carbon-emitting power plants and replacing them with new, clean generation and transmission capacity. This transition presents simultaneous challenges in investment costs, operational costs, and system reliability. Hoping to save costs, ensure reliability, and preserve the power plant workforce, states and utilities have shown rising interest in the potential of power plant retrofits. By reusing existing equipment and infrastructure in aging coal and natural gas power plants, utilities can avoid the costs of new greenfield developments. Several developing technologies well-equipped to reuse all or part of the facilities at these thermal power plants include firm, low-carbon power plants and long-duration storage facilities. These technologies help balance load in a high-renewables grid while employing much of the same power plant workforce. A study of retrofit options is particularly important for the American Midwest, where coal makes up a large portion of the resource mix and the potential for intermittent wind deployment is high. This thesis enables retrofit modeling in a multi-stage capacity expansion framework and uses it to evaluate the potential for retrofits to lower system costs and cumulative emissions over three modeled carbon reduction pathways from 2020 to 2040 in the Midwest and surrounding areas. Although the resulting reductions in cost and emissions are modest, we observe notable system-level reductions in curtailment of renewable generation, transmission expansion, and new natural gas deployment, as well as distributional impacts relating to the costs of transitioning to a low-carbon electric power system.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating COVID-19 Personal Protective Equipment Use in Acute Care Hospitals</title>
<link href="https://hdl.handle.net/1721.1/144720" rel="alternate"/>
<author>
<name>McGuigan, Molly K.</name>
</author>
<id>https://hdl.handle.net/1721.1/144720</id>
<updated>2022-08-30T03:43:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Simulating COVID-19 Personal Protective Equipment Use in Acute Care Hospitals
McGuigan, Molly K.
America faced crippling shortages of Personal Protective Equipment (PPE) during the COVID-19 pandemic from 2020-2021. In response to these recent shortages, policy makers, emergency responders, public health agencies, and private healthcare facilities are investing significant time and money to ensure America is better equipped to meet the need for PPE in the next pandemic. As America pours money into larger stockpiles and increased domestic manufacturing, it is crucial that decision makers understand PPE demand during COVID-19-type pandemics so they can allocate resources appropriately. This thesis aims to answer two central questions: 1) How can planners forecast PPE use in acute care hospitals for future COVID-19-type pandemics? 2) How can the model used to develop these forecasts contribute to a robust PPE preparedness plan?&#13;
&#13;
This thesis presents a simulation that can be used by planners to forecast PPE use in acute care hospitals. The simulation is then applied in a case study to demonstrate potential applications and identify opportunities to shape PPE demand through hospital policy. By implementing conservation policies, policy makers can decrease N95 facepiece respirator use by 47%, and gown and glove use by over 50% in acute care hospitals during a COVID-19-type pandemic. In an environment where significant attention is being paid to increasing supply capacity, a focus on shaping demand at the source is an often neglected, but critical, aspect of enabling supply capacity to meet pandemic demand.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulate Time-integrated Coarse-grained Molecular Dynamics with Geometric Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/144719" rel="alternate"/>
<author>
<name>Fu, Xiang</name>
</author>
<id>https://hdl.handle.net/1721.1/144719</id>
<updated>2022-08-30T04:01:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Simulate Time-integrated Coarse-grained Molecular Dynamics with Geometric Machine Learning
Fu, Xiang
Molecular dynamics (MD) simulation is the workhorse of various scientific domains but is limited by high computational cost. Learning-based force fields have made major progress in accelerating ab-initio MD simulation but are still not fast enough for many real-world applications that require long-time MD simulation. In this work, we adopt a different machine learning approach: we coarse-grain a physical system using graph clustering and model the system's evolution with a very large time-integration step using graph neural networks. Despite being trained only on short MD trajectory data, our learned simulator can generalize to unseen novel systems and simulate for much longer than the training trajectories. Properties requiring 10-100 ns of long-time dynamics can be accurately recovered at several orders of magnitude higher speed than classical force fields. We demonstrate the effectiveness of our method on two realistic complex systems: (1) single-chain coarse-grained polymers in implicit solvent; (2) multi-component Li-ion polymer electrolyte systems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmenting Shared 3D Model Repositories with Slicing Results for 3D Printing</title>
<link href="https://hdl.handle.net/1721.1/144717" rel="alternate"/>
<author>
<name>Faruqi, Faraz</name>
</author>
<id>https://hdl.handle.net/1721.1/144717</id>
<updated>2022-08-30T03:32:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Augmenting Shared 3D Model Repositories with Slicing Results for 3D Printing
Faruqi, Faraz
In this thesis, we propose a method to augment shared 3D model repositories, such as Thingiverse, with slicing results that are readily available to all users. By having print time and material consumption for different print resolution profiles and model scales available in real-time, users are able to explore different slicing configurations efficiently to find the one that best fits their time and material constraints. To prototype this idea, we build a system called SliceHub, which consists of three components: (1) a repository with an evolving database of 3D models, for which we store the print time and material consumption for various print resolution profiles and model scales, (2) a user interface integrated into an existing slicer that allows users to explore the slicing information from the 3D models, and (3) a computational infrastructure to quickly generate new slicing results, either through parallel slicing of multiple print resolution profiles and model scales or through interpolation. We motivate our work with a formative study of the challenges faced by users of existing slicers and provide a technical evaluation of the SliceHub system.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining Urban Highway Overpass Infrastructure in the US: Designing for Spatial Quality and Material Quantity</title>
<link href="https://hdl.handle.net/1721.1/144716" rel="alternate"/>
<author>
<name>Ladhani, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/144716</id>
<updated>2022-08-30T03:34:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Reimagining Urban Highway Overpass Infrastructure in the US: Designing for Spatial Quality and Material Quantity
Ladhani, Sarah
Over a third of the bridges in the United States’ aging transportation infrastructure require repair or replacement. This provides an opportunity to reconsider urban highway overpasses by designing for the communities through which they run and reducing the global carbon emissions for which the construction industry is responsible. This thesis explores the undercroft spaces of highway overpasses in urban areas and proposes quantitative metrics to describe these qualitative spaces. It also reimagines reinforced concrete hammerhead pier design, using topology optimization to generate more efficient pier caps that would require less material and contain less embodied carbon. The study finds that the additional complexity of an optimized result does not correlate with significant material savings; however, even simpler optimized results are more efficient than traditional designs. The study emphasizes underutilized undercroft spaces in urban environments and explores pier typologies based on topology optimization.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Server Private Information Retrieval with Sublinear Amortized Time</title>
<link href="https://hdl.handle.net/1721.1/144714" rel="alternate"/>
<author>
<name>Henzinger, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/144714</id>
<updated>2022-08-30T03:32:08Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Single-Server Private Information Retrieval with Sublinear Amortized Time
Henzinger, Alexandra
We construct new private-information-retrieval protocols in the single-server setting. Our schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small “hint” about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time—yielding sublinear amortized cost. Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Imbalanced Regression: Challenges, Methods, and Applications</title>
<link href="https://hdl.handle.net/1721.1/144713" rel="alternate"/>
<author>
<name>Zha, Kaiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/144713</id>
<updated>2022-08-30T03:54:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Deep Imbalanced Regression: Challenges, Methods, and Applications
Zha, Kaiwen
Real-world data often exhibit imbalanced distributions, where certain target values have significantly fewer observations. Existing techniques for dealing with imbalanced data focus on targets with categorical indices, i.e., different classes. However, many tasks involve continuous targets, where hard boundaries between classes do not exist. We define Deep Imbalanced Regression (DIR) as learning from such imbalanced data with continuous targets, dealing with potential missing data for certain target values, and generalizing to the entire target range. Motivated by the intrinsic difference between categorical and continuous label space, we propose distribution smoothing for both labels and features, which explicitly acknowledges the effects of nearby targets, and calibrates both label and learned feature distributions. We curate and benchmark large-scale DIR datasets from common real-world tasks in computer vision, natural language processing, and healthcare domains. Extensive experiments verify the superior performance of our strategies. Our work fills the gap in benchmarks and techniques for practical imbalanced regression problems.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Fuel Split Ratio on NOₓ Emissions of a Lean-burn Staged Combustor</title>
<link href="https://hdl.handle.net/1721.1/144712" rel="alternate"/>
<author>
<name>Chen, Yang</name>
</author>
<id>https://hdl.handle.net/1721.1/144712</id>
<updated>2022-08-30T03:21:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Effects of Fuel Split Ratio on NOₓ Emissions of a Lean-burn Staged Combustor
Chen, Yang
Aviation NOₓ emissions are a significant factor in air quality deterioration, leading to potentially 16,000 annual premature deaths globally. To cope with the expected increase in air traffic demand in the near future, aircraft-based lean-burn staged combustion is a promising solution for reducing NOₓ emissions. This thesis investigates the effects of a lean-burn staged combustor’s fuel split ratio and staging threshold on NOₓ emissions for both a sea-level static scenario and a representative flight mission. NOₓ reduction benefits from optimizing the fuel split ratio are studied, and the EINOₓ performance of an RQL and a lean-burn staged combustor are compared. Chemical reactor networks, NPSS engine cycle models, and a TASOPT flight mission model are utilized. In comparison to previous studies, a wider range of pilot fuel fractions, from 16% to 100%, is tested over more refined thrust cases, from 0% to 100% rated thrust. A wider range of flight phases, including cruise conditions in addition to the LTO cycle, is employed in this thesis. This thesis illustrates, through the calibration of the combustor model, how a pilot fuel fraction below 30% is infeasible. It is found that staging should occur as early as allowed by combustion stability to minimize NOₓ emissions, and that the optimal fuel split ratio is roughly constant across different throttle conditions. Moreover, reducing the air distributed to the pilot zone decreases the overall EINOₓ level, and the lean-burn staged combustor is observed to outperform an RQL combustor in terms of NOₓ emissions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bend-Forming: A Deformation Process for In-Space Manufacturing of Truss Structures</title>
<link href="https://hdl.handle.net/1721.1/144711" rel="alternate"/>
<author>
<name>Bhundiya, Harsh Girishbhai</name>
</author>
<id>https://hdl.handle.net/1721.1/144711</id>
<updated>2022-08-30T03:40:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Bend-Forming: A Deformation Process for In-Space Manufacturing of Truss Structures
Bhundiya, Harsh Girishbhai
In-space manufacturing (ISM) is a candidate approach for constructing next-generation space structures with larger dimensions than modern deployable systems. While many ISM approaches have been proposed, analysis of their performance for building precision structures on orbit, such as large-diameter reflectors, is scarce. In this thesis, we present a quantitative comparison of materials and processes for ISM, using performance metrics to identify suitable feedstock materials and a fast and accurate manufacturing method. Our analysis finds that deformation processes are a promising ISM approach due to their low specific energy consumption, almost an order of magnitude lower than melt-based and extrusion processes, which rely on heating of the feedstock. This low specific energy consumption potentially enables deformation processes to fabricate 100-meter diameter structures on orbit in less than a day, whereas melt-based processes may take more than a month and be limited to inferior feedstock materials.&#13;
&#13;
Motivated by this comparison of ISM processes, we present an exemplar deformation process, termed Bend-Forming, for fabricating truss structures in space. The method relies on the combination of CNC wire bending with mechanical joints to form trusses from raw feedstock via plastic deformation. We demonstrate the method with exemplar structures on the order of 1 meter and provide a framework for fabricating arbitrary geometries with Bend-Forming, including reticulated columns, shells, and trusses. To guide the design of Bend-Formed structures for space applications, we next investigate the compressive behavior of Bend-Formed isogrid columns through experiments, finding that the structures undergo a smooth formation of buckling deformations. Finite element analyses accurately predict the maximum loads observed experimentally, highlighting the imperfection-insensitive nature of the Bend-Formed columns. Finally, we present a potential space application of Bend-Forming, namely the fabrication of support structure for an electrostatically-actuated reflector antenna. To demonstrate the concept, we design and fabricate a 1-meter diameter antenna prototype with Bend-Forming.&#13;
&#13;
Overall, this research adds to the growing field of ISM by 1) providing a framework for assessing materials and processes suitable for ISM; and 2) introducing a novel approach for constructing truss structures, called Bend-Forming, with potential application to ISM.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Dynamic Surface Instabilities in Soft Hydrogel Cylinders Subject to Laser-Driven Shock-Loading</title>
<link href="https://hdl.handle.net/1721.1/144710" rel="alternate"/>
<author>
<name>Pickard, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/144710</id>
<updated>2022-08-30T04:00:35Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analysis of Dynamic Surface Instabilities in Soft Hydrogel Cylinders Subject to Laser-Driven Shock-Loading
Pickard, Daniel
Soft materials subject to both static and dynamic loading are known to exhibit a variety of mechanical instabilities which may lead to intricate surface deformation patterns. In particular, creases and wrinkles have been found to play an important role in the morphogenesis of soft tissues and tumor growth. Soft matter instabilities are also relevant to a number of manufacturing and engineering applications such as the fabrication of microlenses, and the development of soft robots, actuators and flexible electronics. Static instabilities in soft matter have been well studied theoretically, and they are known to result from bifurcations of equilibrium due to loss of convexity of the nearly-incompressible elastic strain energy function in the large deformation range. Under dynamic loading, soft solids exhibit many instabilities that are well known in fluids, including Rayleigh-Taylor, Faraday and Richtmyer-Meshkov instabilities.&#13;
&#13;
This thesis is concerned with the analysis and mechanistic explanation of a new elastodynamic instability that was recently discovered at MIT. Laser-driven experiments performed at the MIT Institute for Soldier Nanotechnologies have demonstrated undulations along the surface of pressurized cylindrical specimens of soft hydrogels which develop on an intermediate timescale in between what is expected from classic static and dynamic instability mechanisms. In contrast to prior work, the novel instabilities have been observed only along external, as opposed to internal, soft solid boundaries. The new instabilities have not been observed in experiments using pure water and appear to be a unique and novel phenomenon. Motivated by these intriguing differences between the new observations and instabilities considered in the past, we aim to develop a theoretical and numerical framework geared towards understanding the fundamental dynamics leading to the complex mechanical deformations discovered at the Institute for Soldier Nanotechnologies. Among the insights obtained, it is found that the ability of a soft material to sustain large tensile hydrostatic stresses plays a pivotal role in generating the new surface undulations. The observation of tension-driven, shock-induced surface instabilities in hydrogels is indicative of hydrogel’s enhanced resistance to high strain rate cavitation when compared to pure water and may be of technological interest to a number of soft matter applications such as the design of protective equipment or the development of impulse resistant sealants, insulators and adhesives.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy Compilation for Stochastic Constraint Programs</title>
<link href="https://hdl.handle.net/1721.1/144709" rel="alternate"/>
<author>
<name>Stephens, Delia Stokes</name>
</author>
<id>https://hdl.handle.net/1721.1/144709</id>
<updated>2022-08-30T03:46:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Policy Compilation for Stochastic Constraint Programs
Stephens, Delia Stokes
Real-world risk-bounded planning and decision-making problems are fluid, uncertain, and highly dynamic, demanding an architecture which can encode and solve a rich set of problems involving decision-making under uncertainty. While many solution architectures exist for solving deterministic CSPs, very few are able to generate decisions that are robust to uncontrolled, stochastic events, and even fewer are able to construct conditional policies that are able to adapt online to these uncertain outcomes. In this thesis, I present a variant of the Optimal Satisfiability Problem Solver (OpSat) that solves dynamic, chance-constrained satisfiability problems. The proposed variant solves these real-world problems efficiently and encodes policies compactly through a hybrid architecture that (a) encodes probabilistic information explicitly as logical constraints, (b) performs temporal reasoning to extract logical temporal conflicts, and (c) compiles out the constraints of a Weighted, Conditional, Stochastic CSP into a compact policy representation which may be efficiently queried. Such an architecture facilitates the design of robust, risk-aware systems by providing a user with the ability to solve a rich set of problems involving mixed logical and temporal constraints.¹&#13;
&#13;
¹This research was generously supported by Airbus SE.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Twist: Sound Reasoning for Purity and Entanglement in Quantum Programs</title>
<link href="https://hdl.handle.net/1721.1/144705" rel="alternate"/>
<author>
<name>Yuan, Chenhui</name>
</author>
<id>https://hdl.handle.net/1721.1/144705</id>
<updated>2022-08-30T03:35:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Twist: Sound Reasoning for Purity and Entanglement in Quantum Programs
Yuan, Chenhui
Quantum programming languages enable developers to implement algorithms for quantum computers that promise computational breakthroughs in classically intractable tasks.&#13;
Programming quantum computers requires awareness of entanglement, the phenomenon in which measurement outcomes of qubits are correlated. Entanglement can determine the correctness of algorithms and suitability of programming patterns.&#13;
&#13;
In this work, I formalize purity as a central tool for automating reasoning about entanglement in quantum programs. A pure expression is one whose evaluation is unaffected by the measurement outcomes of qubits that it does not own, implying freedom from entanglement with any other expression in the computation.&#13;
&#13;
I present Twist, the first language that features a type system for sound reasoning about purity. The type system enables the developer to identify pure expressions using type annotations. Twist also features purity assertion operators that state the absence of entanglement in the output of quantum gates. To soundly check these assertions, Twist uses a combination of static analysis and runtime verification.&#13;
&#13;
I evaluate Twist's type system and analyses on a benchmark suite of quantum programs in simulation, demonstrating that Twist can express quantum algorithms, catch programming errors in them, and support programs that several languages disallow, while incurring runtime verification overhead of less than 3.5%.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Undead Bed: Mattress Recycling in Boston</title>
<link href="https://hdl.handle.net/1721.1/144702" rel="alternate"/>
<author>
<name>Hoffman, Meital</name>
</author>
<id>https://hdl.handle.net/1721.1/144702</id>
<updated>2022-08-30T03:45:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Undead Bed: Mattress Recycling in Boston
Hoffman, Meital
The Massachusetts Department of Environmental Protection (MassDEP) published the 2030 Solid Waste Master Plan in October 2021. As part of this plan, MassDEP issued a waste regulation that bans the disposal of mattresses and box springs at solid waste facilities. The waste ban will take effect while the Massachusetts legislature is working on passing an EPR (Extended Producer Responsibility) bill for mattress waste. This bill would require the mattress industry to administer and manage mattress end-of-life programs across the state. In the meantime, the City of Boston is preparing to implement a municipal residential mattress recycling program to comply with the MassDEP regulation before it takes effect on November 1st, 2022. I prepared this thesis to inform and advise the City of Boston’s Zero Waste Team and other City stakeholders regarding the creation of a residential mattress recycling program. I interviewed many of the local mattress recycling vendors and public stakeholders and conducted research into several dimensions of mattress recycling. I describe the phases of a residential mattress recycling program and the logistic and cost considerations for each phase. I estimate the annual costs of the program and its environmental savings using two vendors that have different pricing and business models. I find that a mattress recycling program may cost the City upwards of $1.2 million and could save about 1000 metric tons of carbon dioxide equivalent annually. I make several recommendations for the City based on cost, environmental savings, and social equity considerations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Canonical Experiment on System Complexity Metric and Its Impact on Engineering Management</title>
<link href="https://hdl.handle.net/1721.1/144701" rel="alternate"/>
<author>
<name>Bortot Hopker, Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/144701</id>
<updated>2022-08-30T03:14:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Canonical Experiment on System Complexity Metric and Its Impact on Engineering Management
Bortot Hopker, Ricardo
Systems are constantly increasing in complexity. Being able to quantify system complexity and how it relates to human effort and cognition can bring numerous benefits for product development and project management. In this thesis, 25 people took part in an experiment based on the traveling salesperson problem; each completed 13 problems of varying complexity. The results were summarized, and through a series of statistical analyses it was found that human effort scales super-linearly with complexity in the form e = AC^1.47 + d, where A and d are constants. Additionally, based on the results of this and previous studies, an objective function is proposed for optimizing system architecture decomposition that uses the heuristics learned to reduce the human effort required to understand the system.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Adversarial Networks for Inverse Design Problems in Engineering: Methods to handle performance, constraints, and creativity requirements</title>
<link href="https://hdl.handle.net/1721.1/144700" rel="alternate"/>
<author>
<name>Heyrani Nobari, Amin</name>
</author>
<id>https://hdl.handle.net/1721.1/144700</id>
<updated>2022-08-30T03:45:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Generative Adversarial Networks for Inverse Design Problems in Engineering: Methods to handle performance, constraints, and creativity requirements
Heyrani Nobari, Amin
Engineering design tasks often require synthesizing new designs that meet desired performance requirements. The conventional design process, which requires iterative optimization and performance evaluation, is slow and dependent on initial designs. Automating this process in its entirety, therefore can speed up the design process significantly and mitigate the need for the conventional iterative and time-consuming design process.  If successful in automating the engineering design process using data-driven approaches such as the ones proposed in this body of work, the impacts on humanity as a whole would be extraordinary. The engineering design process is an omnipresent aspect of our modern life, affecting our day-to-day lives in a major way. Accelerating this process through enabling inverse design and automation can reduce costs and increase productivity. In doing this, we develop data-driven approaches based on the generative adversarial networks~(GANs) to address some of the main challenges in data-driven inverse design. First, we propose a new model, named Performance Conditioned Diverse Generative Adversarial Network (PcDGAN), which introduces a singular vicinal loss combined with a Determinantal Point Processes (DPP) based loss function to enhance diversity. PcDGAN uses a new self-reinforcing score called the Lambert Log Exponential Transition Score (LLETS) for improved GAN performance on inverse design based on performance requirements. Experiments on synthetic problems and a real-world airfoil design problem demonstrate that PcDGAN outperforms state-of-the-art GAN models and improves the likelihood of meeting performance requirements by 69\% in an airfoil generation task and up to 78\% in synthetic conditional generation tasks and achieves greater design space coverage. 
The proposed method enables efficient design synthesis and design space exploration, however, the problem of handling constraints remains as only taking performance into account for inverse design leaves design constraints aside and therefore, we must also address constraints. To do this, we propose a conditional deep generative model, Range-GAN, to achieve automatic design synthesis subject to range inequality constraints. The proposed model also addresses the sparse conditioning issue in data-driven inverse design problems by introducing a label-aware self-augmentation approach. We also propose a new uniformity loss to ensure the generated designs evenly cover the given requirement range. This work is the first of its kind and outperforms conventional optimization-based approaches such as genetic algorithms, specifically we compare our method to NSGA-II. Through a real-world example of constrained 3D shape generation, we show that Range-GAN outperforms state of the art methods and furthermore the label-aware self-augmentation leads to an average improvement of 14\% on the constraint satisfaction for generated 3D shapes, and the uniformity loss leads to a 125\% average increase on the uniformity of generated shapes' attributes. This work laid the foundation for data-driven inverse design problems where we consider range constraints. Finally, we must turn our attention to another aspect of the design process that is crucial, and that is creativity and novelty. The previous models demonstrate the efficacy of GAN-based models in performing inverse design tasks. GAN models, however, are not capable of generating unique designs, a key to innovation and a major gap in AI-based design automation applications. to alleviate this we propose an automated method, CreativeGAN, for generating novel designs using GANs. 
It does so by identifying components that make a design unique and modifying a GAN model such that it becomes more likely to generate designs with the identified unique components. The method combines state-of-the-art novelty detection, segmentation, novelty localization, rewriting, and generative models for creative design synthesis. Using a dataset of bicycle designs, we demonstrate that the method can create new bicycle designs with unique frames and handles, and generalize rare novelties to a broad set of designs. Our automated method requires no human intervention and demonstrates a way to rethink creative design synthesis and exploration. By addressing these important challenges in data-driven inverse design, we hope to enable a complete model that combines all three approaches in the future to establish a generalizable model for automating inverse design based on data.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tornado in Credit Desert: Role of Consumer Credit Access in Disaster Recovery</title>
<link href="https://hdl.handle.net/1721.1/144699" rel="alternate"/>
<author>
<name>Tiurina, Mariia</name>
</author>
<id>https://hdl.handle.net/1721.1/144699</id>
<updated>2022-08-30T03:27:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Tornado in Credit Desert: Role of Consumer Credit Access in Disaster Recovery
Tiurina, Mariia
I study the effect of credit access restrictions on the post-disaster financial outcomes of subprime consumers in Arkansas, the state with the lowest usury cap (17 percent). Due to the restrictive cap, neither payday lenders nor consumer finance companies operate in Arkansas, while they do in all six neighboring states. Using a difference-in-differences approach, I find that borrowers in border zip codes are less likely to be delinquent on mortgage debt and experience a smaller drop in credit score in the post-disaster period than borrowers in center zip codes. The result is consistent with the adverse effects of the credit rationing that follows from consumer protection law.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graphical User Interface for Anomaly Detection in DBOS</title>
<link href="https://hdl.handle.net/1721.1/144698" rel="alternate"/>
<author>
<name>Redmond, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/144698</id>
<updated>2022-08-30T03:04:19Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Graphical User Interface for Anomaly Detection in DBOS
Redmond, Robert
This thesis describes a graphical user interface that aids in managing and visualizing detected anomalies within the Database Operating System (DBOS). Because web applications can be built atop DBOS, it needs a security system to counteract incoming online attacks. One of the cornerstones of a full security pipeline is a strong interface that lets system experts monitor and react to incoming threats. While command line interfaces and graphical user interfaces are both means of monitoring incoming and potential threats, the latter offer stronger advantages in the context of anomaly detection in DBOS. User studies were used to evaluate the interface’s features throughout the development process.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delving into “Self-Construction” in the Era of Social Media</title>
<link href="https://hdl.handle.net/1721.1/144695" rel="alternate"/>
<author>
<name>Wu, Mengke</name>
</author>
<id>https://hdl.handle.net/1721.1/144695</id>
<updated>2022-08-30T03:59:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Delving into “Self-Construction” in the Era of Social Media
Wu, Mengke
With the prevalence of social media, more and more people are getting involved in this new way of communicating and sharing. Despite the convenience of connection and information exchange facilitated by social media, one accompanying concern is the difference and comparison between online and real-life content. Several studies have shown that exposure to idealized and attractive posts can negatively impact people’s emotions, self-disclosure behaviors, and self-perception. This study aimed to experimentally investigate the effects of individuals’ social media feeds, especially daily-life content, on their posting behaviors and attitudes toward life. A total of 512 participants were randomly assigned to view one of three sets of simulated feeds: idealized life posts expressing positive emotions, realistic life posts expressing negative emotions, and mixed life posts (selected from the idealized and realistic sets). Data analysis across these three groups indicated that realistic posts exerted negative influences on respondents’ mood, level of life satisfaction, and self-evaluation, whereas idealized posts had no impact on them. Meanwhile, neither idealized nor realistic posts were found to affect respondents’ posting behavior.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Material Properties with Machine Learned Interatomic Potentials</title>
<link href="https://hdl.handle.net/1721.1/144694" rel="alternate"/>
<author>
<name>Sema, Dionysios</name>
</author>
<id>https://hdl.handle.net/1721.1/144694</id>
<updated>2022-08-30T03:42:27Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Predicting Material Properties with Machine Learned Interatomic Potentials
Sema, Dionysios
Machine learning interatomic potentials (ML-IPs) have emerged as a promising approach for bridging the gap between quantum electronic structure calculations (QM) and large-scale classical molecular modeling simulations, and they have shifted the development of these many-body force fields to become predominantly data-driven. Although these machine learning methods have been successfully used as specialized models for specific tasks, their performance and accuracy are often assessed by simple metrics during the training phase, with little attention devoted to how data-efficient and transferable these methods are under unexplored conditions. Here, we investigate how well state-of-the-art machine learning interatomic potentials generalize beyond their intended systems and tasks. We focus on the Spectral Neighbor Analysis Potential (SNAP) and Message-Passing Graph Neural Networks (GNNs), compare their accuracy and data efficiency, and examine their stability during long-time integration of classical molecular dynamics. We extract thermodynamic properties connected to chemical reaction dynamics, kinetics, and transition barriers of physical processes that lie outside the learned phase space. We find that GNNs outperform SNAP as more robust ML-IPs and connect this to the importance of including out-of-domain applications in an extensive set of benchmarks for assessing the effective performance of machine learning architectures. Finally, we discuss the necessity of incorporating an active learning framework as a method to generate robust machine learning reactive potentials.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility study of compact Neutron Resonance Transmission Analysis using a linac, a fusion-based neutron generator, and an isotopic source</title>
<link href="https://hdl.handle.net/1721.1/144692" rel="alternate"/>
<author>
<name>Levine, Peninah</name>
</author>
<id>https://hdl.handle.net/1721.1/144692</id>
<updated>2022-08-30T03:58:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Feasibility study of compact Neutron Resonance Transmission Analysis using a linac, a fusion-based neutron generator, and an isotopic source
Levine, Peninah
Various nuclear security applications, such as fuel enrichment analysis and warhead verification, seek to identify nuclear materials in a black-box target. Neutron Resonance Transmission Analysis (NRTA) is a spectroscopic technique that uses resonant neutron absorption to identify isotopic compositions. Previous NRTA experiments have used expensive beam-line facilities with kilometer-long accelerators. This work explores the feasibility of compact NRTA configurations using a linear accelerator (linac), a fusion-based neutron generator, and an isotopic source. Monte Carlo simulations show that these configurations trade off complexity against flux, which determines measurement time. A 5.5 MeV linac may yield the highest epithermal (1-10 eV) neutron flux (10⁷ neutrons s⁻¹), but the conversion of electrons to neutrons adds complexity, bulk, and expense. A deuterium-tritium (DT) fusion-based neutron generator produces moderate neutron flux (10⁶ neutrons s⁻¹) and complexity relative to the linac and isotopic configurations. Isotopic NRTA may provide the simplest solution but limits flux to 10⁴ neutrons s⁻¹. Preliminary isotopic experiments indicate that limited source activity poses a challenge for overcoming gamma background. This thesis discusses the feasibility of each proposed NRTA setup in various security applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controversial Science Argumentation Skills for Teachers in the Digital Clinical Simulation Discussion Leader</title>
<link href="https://hdl.handle.net/1721.1/144689" rel="alternate"/>
<author>
<name>Marvez, G. R.</name>
</author>
<id>https://hdl.handle.net/1721.1/144689</id>
<updated>2022-08-30T03:57:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Controversial Science Argumentation Skills for Teachers in the Digital Clinical Simulation Discussion Leader
Marvez, G. R.
Teaching controversial issues is a critical skill for a continuing democracy and for ensuring that the next generation of researchers and designers is well versed in critical analysis. Despite this, teachers report that they have received little instruction on how to facilitate controversial discussions with students and are concerned about possible challenges inside and outside the classroom. To address this need, I have designed a digital clinical simulation of a high school science teacher leading a discussion on the ethics of gene therapy with a class of twenty students, using a branching structure on the platform Teacher Moments. In a study with 42 participants, I show that this simulation can be useful in raising teachers' comfort with leading controversial discussions, and that the dialogue choices experienced teachers make differ from those of teachers with less experience. This research shows the usefulness of simulations in preparing teachers to lead controversial discussions with students across a number of discussion skills, such as asking open-ended questions and deciding where a teacher's opinion belongs in a discussion. Furthermore, I suggest future design work using machine learning methods to improve the generation of student dialogue and the authenticity of discussion simulations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Federated Learning for Resource Constrained Devices</title>
<link href="https://hdl.handle.net/1721.1/144688" rel="alternate"/>
<author>
<name>Jain, Kriti</name>
</author>
<id>https://hdl.handle.net/1721.1/144688</id>
<updated>2022-08-30T03:28:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Federated Learning for Resource Constrained Devices
Jain, Kriti
As resource-constrained edge devices become increasingly powerful, they are able to provide a larger quantity of higher-quality data. However, because these devices are decentralized, it is difficult to gain insights from multiple devices at the same time. Federated learning allows us to learn from multiple devices in a decentralized manner without requiring data to be shared. Each client trains its own model and communicates relevant model information to a central server. The server aggregates this information according to a specified algorithm and sends the clients a global model; the clients then update their own private models with this global model, without ever sharing their local data or accessing any other client’s local data. On edge devices, however, federated learning becomes difficult because of computation, battery, and storage constraints. This thesis makes two main contributions. The first is a modular, single-machine simulator for federated learning on edge devices. The second is a real-world, scalable federated learning system for Android devices that automatically allocates resources by leveraging PyTorch Lightning. To the best of my knowledge, this is the first work that uses PyTorch Lightning for training, and not just inference, on edge devices.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing String Displacement as a Control Modality: Sensor Design and Implementation</title>
<link href="https://hdl.handle.net/1721.1/144687" rel="alternate"/>
<author>
<name>Knappe, Silvia Elena</name>
</author>
<id>https://hdl.handle.net/1721.1/144687</id>
<updated>2022-08-30T03:05:46Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Sensing String Displacement as a Control Modality: Sensor Design and Implementation
Knappe, Silvia Elena
While stringed instruments are a major part of musical practice across cultures, digital musical instruments (DMIs) in which strings are the primary control interface remain rare. In an effort to support the creation of new stringed DMIs, we describe an approach to instrument design in which the static two-dimensional displacement of strings is a primary control modality. We review a variety of sensing techniques for this application, including optoelectronic, electromagnetic-field, resistive, and force sensing. After establishing a set of design criteria, we describe our design process as well as the implementation and analysis of a transmission-mode optoelectronic sensor.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for Tinkerability for Accessibility</title>
<link href="https://hdl.handle.net/1721.1/144686" rel="alternate"/>
<author>
<name>Bulovic, Katarina</name>
</author>
<id>https://hdl.handle.net/1721.1/144686</id>
<updated>2022-08-30T03:30:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Designing for Tinkerability for Accessibility
Bulovic, Katarina
As programming has increasingly become a part of the STEM curriculum in elementary and middle school classrooms, block-based visual coding languages have emerged as a tool to teach young kids and other beginners basic computer science concepts. Platforms such as Scratch have been specifically designed to support learning through tinkering and play, making them ideal educational tools for a younger audience. While block-based coding may be more accessible to young programmers compared to more complex text-based languages, many of these visual coding environments are currently designed in a way that is inaccessible to users who are blind or who have visual impairments. This thesis explores general design considerations for accessibility in tinkerable, block-based coding environments, and explains how to apply these principles to improve visual accessibility in a new programming app called OctoPlay.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and evaluating student dropout through understanding student’s journey in a MicroMasters program</title>
<link href="https://hdl.handle.net/1721.1/144685" rel="alternate"/>
<author>
<name>Park, David Sejin</name>
</author>
<id>https://hdl.handle.net/1721.1/144685</id>
<updated>2022-08-30T03:35:45Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Characterizing and evaluating student dropout through understanding student’s journey in a MicroMasters program
Park, David Sejin
The latest evolution of the Massive Open Online Course (MOOC) includes MOOC-based programs in which students across the world can complete a series of MOOC courses to earn microcredentials. However, the high dropout rate in MOOCs also persists in MOOC-based programs, with the additional complexity of open enrollment.&#13;
 &#13;
This study provides a framework to characterize dropout in a MOOC-based program using the following: understanding students’ course-taking behavior through a course journey model, understanding students’ return behavior between courses using time-to-event analysis, and proposing a metric, time to dropout, that defines what a dropout is in the context of a MOOC-based program.&#13;
 &#13;
We demonstrate that students’ course journey representations, in conjunction with the t_dropout metric, can define dropout in a MOOC-based program and allow dropout analysis based on students’ course registration behaviors. We also demonstrate that course journey visualization can be used to understand a student’s course journey for manual intervention, such as in an educational dashboard for identifying at-risk students.&#13;
 &#13;
Results show that students who are further along in their course sequence are more likely to return to take the next course, and to return faster. Results also suggest that students’ choice of course order may affect return behavior: taking courses in order is associated with a higher return rate at most stages of the program.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Congestion Pricing: Moving from Equity Analysis to Transportation Justice</title>
<link href="https://hdl.handle.net/1721.1/144684" rel="alternate"/>
<author>
<name>Craik, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/144684</id>
<updated>2022-08-30T03:47:43Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Congestion Pricing: Moving from Equity Analysis to Transportation Justice
Craik, Lauren
Urban traffic congestion poses challenges to American cities in the form of lost time, economic costs, increased accidents, air pollution, and barriers to mobility. Congestion pricing has the potential to be part of the solution, yet it also raises concerns about whether pricing a public good can be done equitably. Past studies suggest the policy is not inherently unfair; however, these studies are retrospective and focus on economic notions of welfare that are at odds with how we understand the role of equity in planning. This thesis seeks to move beyond retrospective fairness evaluations and investigate how one could plan for an equitable congestion pricing scheme by proposing a new framework, inspired by the method of scenario planning, to evaluate congestion pricing. This framework is used to examine the case of a potential congestion pricing scheme in the Boston Metropolitan Region. The study combines best practices from the field of scenario planning with spatial and statistical analysis to methodically evaluate a scheme definition and understand how subpopulations are impacted differently by charging. The thesis also analyzes the distributional impacts of the London Central Charging Scheme, making use of travel diary data and the synthetic control method, to illustrate how scheme design and policy levers can contribute to differential behavioral responses.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Analysis of an Armenian Hymn Through Digital Signal Processing and Music Information Retrieval</title>
<link href="https://hdl.handle.net/1721.1/144683" rel="alternate"/>
<author>
<name>Bouhanna, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/144683</id>
<updated>2022-08-30T03:58:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Comparative Analysis of an Armenian Hymn Through Digital Signal Processing and Music Information Retrieval
Bouhanna, Jack
Armenian music has existed for centuries, with origins dating back several millennia BC. The music has undoubtedly evolved over time, whether passed down through tradition or through reimaginations of the original pieces. Despite straying from the original versions, the music nonetheless keeps its spirit and tradition intact.&#13;
&#13;
This thesis will compare and analyze the harmonic differences in a famous Armenian hymn, Տէր Ողորմեա (“Der Voghormia”, meaning “Lord Have Mercy”). The baseline version will be the one found in the 20th-century manuscript written by Gomidas Vartabed, and it will be compared against later renditions. The comparison will be performed using several techniques and algorithms from the fields of Digital Signal Processing (DSP) and Music Information Retrieval (MIR). The final products will be implemented in Python, along with related helper packages and toolkits.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Omics Investigation on the Effect of Replication Stress on Leukemia Cells</title>
<link href="https://hdl.handle.net/1721.1/144682" rel="alternate"/>
<author>
<name>Vermeulen, Sidney</name>
</author>
<id>https://hdl.handle.net/1721.1/144682</id>
<updated>2022-08-30T04:05:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Multi-Omics Investigation on the Effect of Replication Stress on Leukemia Cells
Vermeulen, Sidney
Differentiation therapy has the potential to treat cancers such as acute myeloid leukemia (AML). While DHODH inhibition has been shown to induce differentiation in an AML model, the mechanism by which this happens is still unknown. Here we characterize a similar differentiation phenotype in the CML cell line K562. As in previously studied AML lines, replication stress leads cells to adopt a cell state similar to that induced by oncogene knockdown. Replication stress is likely not acting directly through chromatin state, given that upregulated genes did not vary in their H3K27 modification or polymerase pausing, and many chromatin modifier genes that suppressed THP1 differentiation did not also suppress differentiation in K562s. Thus, the mechanism by which replication stress leads to differentiation remains unknown.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactions Between Syntax and Semantics in Language Models</title>
<link href="https://hdl.handle.net/1721.1/144680" rel="alternate"/>
<author>
<name>Bau, Anthony</name>
</author>
<id>https://hdl.handle.net/1721.1/144680</id>
<updated>2022-08-30T03:39:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Interactions Between Syntax and Semantics in Language Models
Bau, Anthony
How do syntax and semantics interact in neural language models? Is the intuitive dichotomy between semantics and syntax useful as a mental model of their behavior? In this thesis, I systematically investigate how models handle the interactions between sentence corruptions of each kind. I develop a setup for generating “syntactic” and “semantic” corruptions and combining them. Using this setup, I develop a general framework for understanding how classes of corruptions interact with each other, arguing that this can be understood in terms of a single “entanglement” parameter. This allows me to quantify entanglement between syntax and semantics in a variety of situations. The result is that semantics is more entangled with syntax than it is with itself, and that syntax is highly self-entangled.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overlooking the Little Guy: An Analysis of Cyber Incidents and Individual Harms</title>
<link href="https://hdl.handle.net/1721.1/144678" rel="alternate"/>
<author>
<name>Spiewak, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/144678</id>
<updated>2022-08-30T03:12:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Overlooking the Little Guy: An Analysis of Cyber Incidents and Individual Harms
Spiewak, Rebecca
Over the last decade, cybersecurity threats have drastically increased in scale, impact, and frequency across the United States. As a result, companies and governments require active monitoring of their cyber risk. Cyber risk management frameworks such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework are helpful, but in practice they are actualized through formalized approaches to cyber risk measurement. While the emphasis on entity-level loss is valuable in the continued fight against cybercrime and acts of cyberterrorism, the individual-level impact is often neglected, to the detriment of everyday users of vulnerable technologies. Negative impacts to individuals resulting from organizations being hacked are often not captured today, thereby artificially excluding costs to individuals from loss calculations. &#13;
&#13;
Through this body of research, we propose a novel approach to sizing the negative externalities of cybersecurity incidents. In contrast to prior research, this approach emphasizes the harm experienced by individuals rather than financial losses to enterprises. We present a new Taxonomy of Individual Cyber Harms, a formalized harm assessment methodology, and a cyber risk forecasting model that produces probable estimates of individual harms through a series of Monte Carlo simulations. Through this analysis, we show not only that harms to individuals exist as a result of cyber incidents, but that the extent of this harm is sizeable and can exceed the harm to the entity for specific types of cyber incidents. Our results demonstrate that harms to individuals make up 42% of total losses experienced due to cyber attacks on US municipalities, or an additional 72% of the harms currently captured. From a policy perspective, a discussion follows providing recommendations for avenues of remedy and redress for individuals who have experienced harm from cyber attacks.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ship Power Prediction Using Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/144672" rel="alternate"/>
<author>
<name>Kriezis, Anthony</name>
</author>
<id>https://hdl.handle.net/1721.1/144672</id>
<updated>2022-08-30T03:43:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Ship Power Prediction Using Machine Learning
Kriezis, Anthony
One of the biggest challenges facing the shipping industry in the coming decades is the reduction of carbon emissions. A promising approach to this end is using the growing amount of data collected by vessels to optimize a voyage so as to minimize power consumption. This paper focuses on building and testing machine learning models that can accurately predict the shaft power of a vessel under different conditions. The models examined include purely theoretical models, pure neural network models, and combinations of the two. Using eight years of data from two car-carrying vessels, it was found that neural networks incorporating some physical intuition can achieve a mean absolute percentage error below 5% and an R-squared above 95%. This performance can be further improved by adding wave information, but it deteriorates when data collection becomes less frequent.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time Series Anomaly Detection using Prediction-Reconstruction Mixture Errors</title>
<link href="https://hdl.handle.net/1721.1/144671" rel="alternate"/>
<author>
<name>Wong, Lawrence C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144671</id>
<updated>2022-08-30T03:01:54Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Time Series Anomaly Detection using Prediction-Reconstruction Mixture Errors
Wong, Lawrence C.
Anomaly detection on time series data is increasingly common across various industrial domains that require monitoring metrics to prevent potential accidents and economic losses. The complications of anomaly detection revolve around a scarcity of labeled data and the need to learn temporal correlations between multiple variables. Most successful unsupervised methods either use single-timestamp prediction or reconstruct entire time series. However, these methods are not mutually exclusive and can each offer complementary perspectives. This work first explores the successes and limitations of prediction-based and reconstruction-based methods. Next, it compares the effect of attention-based architectures with LSTM-based architectures on existing models. Finally, this research proposes a novel autoencoder architecture capable of producing bi-directional predictions while simultaneously reconstructing the original time series by optimizing a joint objective function. An ablation study using a mixture of prediction and reconstruction errors demonstrates that this simple architecture outperforms other state-of-the-art models for anomaly detection on both univariate and multivariate time series.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Jünger Can't Borrow: Demographic Imbalances and Currency Risk Premia</title>
<link href="https://hdl.handle.net/1721.1/144667" rel="alternate"/>
<author>
<name>Adams, Patrick Augustine</name>
</author>
<id>https://hdl.handle.net/1721.1/144667</id>
<updated>2022-10-25T04:48:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Jünger Can't Borrow: Demographic Imbalances and Currency Risk Premia
Adams, Patrick Augustine
Empirically, countries with relatively old populations have significantly lower interest rates and currency returns. As a first step towards explaining this fact, I develop a two-country overlapping generations model to study the relationship between the global wealth distribution and currency risk premia. Relatively wealthy countries in the model have low currency risk premia because their bonds insure wealthy households against increases in the price of their own consumption basket. I discuss how the model can be extended to incorporate demographic heterogeneity across countries. Given observed household savings patterns over the life cycle, differences in population age across countries can potentially generate large differences in financial wealth and currency risk premia.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Institutionalization of the American Dream</title>
<link href="https://hdl.handle.net/1721.1/144663" rel="alternate"/>
<author>
<name>Geoghegan, James G.</name>
</author>
<id>https://hdl.handle.net/1721.1/144663</id>
<updated>2022-08-30T03:33:05Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Institutionalization of the American Dream
Geoghegan, James G.
Currently, there is a gap between what the single-family homeownership model in the United States offers and what home consumers desire.&#13;
&#13;
Home consumers make one of the biggest investment decisions of their lives primarily based on utilization needs. The state of the housing market, available inventory, borrowing capability and interest rates, and where in life consumers find themselves are material factors that sometimes play second fiddle to the occupancy objectives consumers pursue when deciding between renting and owning shelter. Factors influencing occupancy-related decisions range from long-term growth or downsizing requirements to job stability, geographical labor mobility, and cost of living, and they sometimes conflict with traditional societal pressures to pursue the American dream of home ownership. &#13;
&#13;
Even though the purchase of a home is one of the biggest investments the average American will make, it is rarely evaluated solely as an investment, because the timing and market conditions are normally dictated by one’s occupancy needs.&#13;
&#13;
This thesis introduces a more flexible housing offering for consumers looking to enter more established single-family communities, particularly in the Northeast region of the United States where the recent Single-Family Rent (“SFR”) and Build to Rent (“BTR”) models have yet to become broadly institutionalized. &#13;
&#13;
Due to the binary own-vs-rent decision tree that consumers in these established markets face, there are limited options to align investment with occupancy decisions. As a result, there is a mismatch between what single-family home ownership models offer and what consumers are looking for, depending upon their particular criteria.&#13;
&#13;
Via institutional partnership, additional housing transaction offerings focused on optionality can be introduced into the marketplace, resulting in a more expansive menu of renting and buying opportunities for consumers to engineer more tailored investment vehicles for their financial and personal needs. &#13;
&#13;
This unique housing offering proposal will be presented in the form of an investment pitchbook under the fictitious name – The Housing Menu. The attributes of the investment opportunity, strategy, and structure will be applied and evaluated in a case study approach to determine viability from an institutional investment perspective, as well as to present the benefits to prospective homeowners, in the hope of expanding the current homeownership menu offering.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speaker Anonymization using End-to-End Zero-Shot Voice Conversion</title>
<link href="https://hdl.handle.net/1721.1/144662" rel="alternate"/>
<author>
<name>Kang, Wonjune</name>
</author>
<id>https://hdl.handle.net/1721.1/144662</id>
<updated>2022-08-30T03:37:34Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Speaker Anonymization using End-to-End Zero-Shot Voice Conversion
Kang, Wonjune
Spoken language is a rich medium of communication that combines words with various information about emotions, feelings, and excitation through modulations in tone and pitch. In discourse, this allows for maintaining a human element that is lacking in many other channels, such as writing or social media. However, a person's voice is a distinct biomarker, and there exist many settings in which it may need to be anonymized in order to protect the speaker's identity.&#13;
&#13;
This thesis presents a framework for performing speaker anonymization using voice conversion (VC) methods. We first introduce a model for performing end-to-end zero-shot voice conversion by modifying the architecture of a neural vocoder. To the best of our knowledge, this is one of the first end-to-end approaches for zero-shot VC that has ever been proposed. Our model is able to maintain the clarity and intelligibility of transformed speech very well while also achieving good voice style transfer performance, an improvement over current state-of-the-art VC models, which exhibit a trade-off between audio quality and accurate voice style transfer.&#13;
&#13;
Next, we present a method for extending targeted voice conversion to un-targeted voice anonymization. This is done by fitting a Gaussian mixture model (GMM) to the latent space of speaker embeddings that are fed into the VC model, and then sampling from the GMM to select the target voice for anonymization. This obviates the need for explicitly specifying a target speaker when performing VC-based anonymization.&#13;
&#13;
We evaluate both our voice conversion and anonymization methods on publicly available data as well as real-world audio from conversations on the Local Voices Network (LVN) platform, demonstrating their applicability to "in-the-wild" settings. Finally, we provide a discussion of this work's potential applications and the ethical considerations of using voice conversion technologies in society.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Cable-stayed Bridges at the Conceptual Design Stage</title>
<link href="https://hdl.handle.net/1721.1/144661" rel="alternate"/>
<author>
<name>Oey, Olivia</name>
</author>
<id>https://hdl.handle.net/1721.1/144661</id>
<updated>2022-08-30T03:02:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optimization of Cable-stayed Bridges at the Conceptual Design Stage
Oey, Olivia
Conceptual design of cable-stayed bridges, which defines the structure’s geometry and typology in the early stages of design, has significant influence on the structure’s efficiency, cost, aesthetics, and constructability. However, conventional bridge design approaches place greater emphasis on detailed structural analysis than on conceptualization, leaving designers stuck in an iterative design loop with a structurally inefficient system.&#13;
&#13;
This thesis develops a user-friendly conceptual design tool in the form of efficiency curves, which relate the geometrical aspect ratio &#119871;/&#119867; of different cable-stayed typologies to their structural performance in terms of volume. By developing a parametric model in the Grasshopper environment, numerous design variables, such as the number of stay cables, span lengths, materiality, loading conditions, boundary conditions, and flexural rigidity in the towers and decks, can be investigated and incorporated to obtain more realistic behavior of the structurally indeterminate cable-stayed bridge.&#13;
&#13;
A series of design curves are proposed for the harp, fan, web, and semi-fan cable configurations. The performance of the forms improves from the web, fan, and semi-fan to the harp configuration under symmetric loads, while under asymmetric loads, the fan configuration performs better than the harp configuration. Furthermore, since the design curves converge with an increasing number of cables, a truss analysis is sufficient for conceptual design, provided that the number of cables is adequate; this design approach, however, does not apply to the web configuration. In addition, a region of ’flatness’, equivalent to a range of &#119871;/&#119867; ratios that lies within a 10% variation of the optimum design solution, is proposed for different typologies, materials, and boundary conditions. Overall, the web configuration has the most restrictive design curve of all the typologies, with a very tight range of optimum &#119871;/&#119867; ratios.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Small Apartment Assets Outperform their Larger Counterparts: An Analysis of Investment Risk and Returns, Price Dynamics, and Leverage Points by Property Size</title>
<link href="https://hdl.handle.net/1721.1/144660" rel="alternate"/>
<author>
<name>Nicolais, Teo P.</name>
</author>
<id>https://hdl.handle.net/1721.1/144660</id>
<updated>2022-08-30T03:24:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">How Small Apartment Assets Outperform their Larger Counterparts: An Analysis of Investment Risk and Returns, Price Dynamics, and Leverage Points by Property Size
Nicolais, Teo P.
The 26 million‐unit, 558,000‐property U.S. apartment property market exhibits remarkable heterogeneity when it comes to property size. And despite all the efficiencies one would expect to see in a $5.1 trillion asset market, two surprising patterns emerge: small apartment properties persistently outperform large apartment properties on a risk‐adjusted basis, and institutional investors inexplicably avoid the market for small properties. &#13;
&#13;
Based on academic literature, industry research, and original empirical analysis, this paper identifies six apartment property size categories (by unit count). It explores key leverage points that arise from the fundamental nature of properties of different sizes, investors of different types, and the interaction thereof. It delineates the range of property size over which the comparative advantages of certain investors achieve their maximum effect and over what ranges their influence wanes. Using tools of analysis pioneered at the MIT Center for Real Estate and MIT Sloan, this thesis undertakes a rigorous exploration of each property size tier, providing new insights into the understudied market for small apartment properties.&#13;
&#13;
Using property‐level data from over 77,000 transactions between 2000 and 2018, this paper (1) analyzes the ownership segmentation of the market and measures the strength of investor affinity for certain sized properties, (2) uses repeat‐sales regression analysis to develop commercial property price indices that capture price dynamics and investment risk/return profiles, (3) analyzes property‐level holding period returns (i.e., the IRRs) for nearly 23,000 round‐trip investments, and (4) builds a Monte Carlo simulation model to test and evaluate property‐size based investment strategies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Memory-Corruption Security Defenses for Real-Time Systems</title>
<link href="https://hdl.handle.net/1721.1/144658" rel="alternate"/>
<author>
<name>Horne, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/144658</id>
<updated>2022-08-30T03:31:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Optimizing Memory-Corruption Security Defenses for Real-Time Systems
Horne, Amanda
Real-Time Systems (RTSs) frequently suffer from memory-corruption attacks. Compared to general-purpose systems, RTSs differ because of their scheduling requirements. For this reason, many modern-day security defenses are not compatible with RTSs or impose too much performance overhead to be schedulable. This thesis presents a new Mixed Integer Linear Programming optimization algorithm — Defense Optimization Algorithm for Real-time system Memory-Corruption Security (DOARMS) — that determines the optimal, yet schedulable, set of defenses to protect RTSs against memory-corruption attacks. &#13;
&#13;
Experiments using DOARMS showed that 71% or less utilization is needed for ideal security coverage with the defenses considered, and that the algorithm produced better results than simply selecting the defenses with the best security coverage. A case study using a smaller subset of defenses also showed that using worst-case instead of average-case performance overheads for defenses leads to lower security coverage, and that more work is needed to quantify the worst-case performance overheads. DOARMS also supports optional weights representing the importance of security for each task and prioritizes the security of tasks according to those weights. The runtime performance of the algorithm is reasonable, with a single optimization taking an average of ∼14 s and a maximum of ∼114 s to run, making it a useful tool to help RTS designers secure their RTSs from memory-corruption attacks.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Flight Navigation with Liquid Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/144657" rel="alternate"/>
<author>
<name>Kao, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/144657</id>
<updated>2022-08-30T03:13:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Robust Flight Navigation with Liquid Neural Networks
Kao, Patrick
Autonomous robots can learn to perform visual navigation tasks from offline human demonstrations, and generalize well to online and unseen scenarios within the same environment they have been trained on. It is fundamentally challenging for these intelligent agents to take a step further and robustly generalize to new environments with drastic scenery changes they have never encountered before. Here, we present a method to create robust flight navigation agents that successfully perform vision-based fly-to-target tasks beyond their training environment under drastic distribution shifts. To this end, we design an imitation learning framework utilizing liquid neural networks, a brain-inspired class of continuous-time neural models that are causal and adapt to changing conditions. We observe that liquid agents learn to distill the task they are given from visual inputs, and drop irrelevant features. This way, they transfer their learned navigation skills to new environments. When compared to other advanced deep agents, we confirm this level of robustness in decision-making is exclusive to liquid networks, both in their differential equation and closed-form representation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alarm for Autonomous UAV Radiation Mapping Algorithm</title>
<link href="https://hdl.handle.net/1721.1/144656" rel="alternate"/>
<author>
<name>Knoll, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/144656</id>
<updated>2022-08-30T03:44:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Alarm for Autonomous UAV Radiation Mapping Algorithm
Knoll, Justin
The purpose of this research is to create an alarm threshold algorithm for use in an autonomous unmanned aerial vehicle (UAV) for radiation mapping purposes in the event of a nuclear disaster or use of nuclear weapons. Information from this alarm could feed into real-time decision making on the UAV system, so the alarm is based on data from 1-second collection windows. All data collection was done with a Cs₂LiLa(Br,Cl)₆ detector. The testing included both laboratory experiments and full-scale flight testing with the UAV system. Several alarm methods were devised and tested, attempting to take advantage of assumptions about the isotopes that would be present in a nuclear disaster and the known properties and gamma ray energies of these isotopes. Most data collection used either ¹³⁷Cs or ⁶⁰Co sources. Receiver operating characteristic (ROC) curve analysis demonstrated that alarm methods setting a threshold on narrower energy bins were less sensitive than methods using the full spectrum count rate, while the most sensitive method was to use bins containing all spectrum data up to the full peak energy of the isotopes of interest while ignoring higher energies. Similar ROC curve results were found with a simulated ¹³¹I source, indicating this method would also work with other isotopes. This method was then tested with lab and flight test data using a three-standard-deviation threshold. The median false alarm rate in the background flights was 0.19%, and sources could be successfully detected at high rates from relatively long distances. For example, in one flight a 90% detection probability was attained for an 8 mCi cesium source from a total distance of 22±2 m. The probabilities of finding anomalous sources of radiation and of false alarms appear to be sufficient based on the observed data, and this method had the best-performing ROC curve of those tested.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Evaluation of Household Energy Systems in the Himalayan Region</title>
<link href="https://hdl.handle.net/1721.1/144653" rel="alternate"/>
<author>
<name>Tang, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/144653</id>
<updated>2022-08-30T04:02:52Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Evaluation of Household Energy Systems in the Himalayan Region
Tang, Lisa
Biomass cookstoves are prevalent in rural areas in developing countries around the world. The smoke from biomass burning results in high levels of household air pollution, which cause respiratory and pulmonary diseases. While many improved stoves have been developed to increase cooking efficiency and reduce household air pollution levels, the Himalayan region presents unique implementation challenges in its remoteness and the high space heating needs faced by households. In order to characterize the performance of existing Himalayan stoves, a handmade two-pot clay stove based on traditional north Indian stoves was constructed and tested in D-Lab with the Water Boiling Test. The stove's efficiency was found to be between 13 and 16%. The addition of a grate did not significantly change stove performance, and the addition of pot stands concentrated more cooking power on the front cooking vessel but did not improve overall thermal efficiency. The results from field visits to north India and Nepal in January 2022 are also presented. The cooking efficiencies of stoves in the field ranged from 4 to 13%, and stove thermal efficiencies were found to negatively correlate with fuel use and test duration. Households' ambient indoor temperatures and pollution levels increased when their biomass cookstoves were fired. Ambient particulate matter levels in the field were found to be two to three orders of magnitude above World Health Organization standards, and average indoor temperatures were below the accepted standard. Chimneys approximately halved pollution levels, but did not remove all pollution from households. Pollution levels within a household were found to increase with colder weather, due to space heating needs being met with increased operation of biomass stoves. Due to the strong connections between cooking and heating in the Himalayan region, a new methodology is proposed for the holistic evaluation of household energy for communities which use biomass stoves.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanical and Biologic Impact of Dynamic Loading on Bovine and Human Models of Osteoarthritis</title>
<link href="https://hdl.handle.net/1721.1/144649" rel="alternate"/>
<author>
<name>Szapary, Hannah Jacqueline</name>
</author>
<id>https://hdl.handle.net/1721.1/144649</id>
<updated>2022-08-30T03:42:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Mechanical and Biologic Impact of Dynamic Loading on Bovine and Human Models of Osteoarthritis
Szapary, Hannah Jacqueline
Both dexamethasone (Dex) and Insulin-like growth factor 1 (IGF-1) have shown promise as disease-modifying therapeutics for osteoarthritis (OA), a disease characterized by cartilage degradation. Additionally, dynamic loading has been demonstrated to affect chondrocyte metabolic activity, with mechanical stimuli acting as a therapeutic at specific amplitudes and frequencies. The goal of this thesis was to investigate the potential of these therapeutics and loading to be synergistic in preventing post-traumatic osteoarthritis (PTOA) disease initiation (e.g., after a traumatic joint injury) and in ameliorating progression of OA. We hypothesized that Dex and IGF-1 could help to maintain the mechanical properties and biochemical composition of cartilage in human osteochondral tissues when treated with exogenous inflammatory cytokines.&#13;
&#13;
Bovine knee cartilage and human ankle cartilage-bone explants were used in vitro with a cytokine challenge to mimic an OA disease state. A load-controlled dynamic loading protocol was modeled after a physiologically relevant rehabilitation program and was tested in healthy tissue to optimize the applied stress magnitude and loading duration. The optimized protocol was then used for 7 days (at 0.33 Hz, 40% duty cycle, for 1 hour per day) +/- Dex and IGF-1 to evaluate biochemical changes at the protein and gene expression levels. Stress-relaxation testing was utilized to calculate the mechanical properties of human cartilage, both before and after one 1-hour loading session, at baseline and after treatment with cytokines and Dex.&#13;
&#13;
It was found that a load-controlled protocol could mimic exercise with total strain in a physiologically relevant range. Loading after 7 days also increased the effects of Dex and IGF-1 on cell viability, and further increased GAG biosynthesis and decreased GAG loss seen with Dex alone. Notably, in human tissue there was donor-to-donor variability in this response to both loading on its own, and to Dex/IGF-1 with and without loading. While Dex preserved cell viability and decreased GAG loss and NO release from cartilage explants treated with cytokines, Dex did not alter mechanical properties (before or after loading) after 10 days. However, there was variation in these values between donor age groups at baseline. Taken together, these results suggest that a rehabilitative loading therapy in combination with Dex/IGF-1 could enhance certain disease-modifying effects of each of the two drugs, and that inherent tissue variability (between patients) could contribute to individual variation in responses and thus emphasize the need for a personalized medicine approach.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Securing Mechanism for Power Converter in Navy Integrated Power and Energy Corridor</title>
<link href="https://hdl.handle.net/1721.1/144646" rel="alternate"/>
<author>
<name>Tomlinson, Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/144646</id>
<updated>2022-08-30T04:04:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design of Securing Mechanism for Power Converter in Navy Integrated Power and Energy Corridor
Tomlinson, Chris
Future US Navy ships will need an updated electrical distribution system to solve two impending challenges. The first challenge is the increase in electrical generation and demand. The second challenge is that the loads will be more dynamic with more complex load profiles (e.g., pulses for energy weapons). A next-generation electrical system, the Power Electronic Power Distribution System (PEPDS), is being developed to solve these challenges. It is a power/energy management and distribution system operating in the Medium Voltage AC/DC range that can convert power from AC and DC sources as required by the load using a power conversion module. The power conversion module for this system is known as the integrated Power Electronic Building Block (iPEBB). However, with this new electrical distribution system designed to be put on a ship, the components must be adequately secured. Currently, there is no established way to anchor the novel iPEBB. This thesis modeled a securing mechanism using a hinge design to provide the securing force. It was evaluated based on structural integrity, bending, and shear stresses. Additionally, the material encompassing the iPEBB is investigated to determine the properties integral to its design. The model produced shows a practical path to secure the iPEBB without additional involvement from other support systems. While this design is functional, it may not be optimal. This thesis lays the foundation for additional study of more advantageous securing mechanism designs for the iPEBB.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>More Choices or Help Choosing?: Experimental Evidence on Helping Firms Hire</title>
<link href="https://hdl.handle.net/1721.1/144645" rel="alternate"/>
<author>
<name>van Inwegen, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/144645</id>
<updated>2022-08-30T03:25:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">More Choices or Help Choosing?: Experimental Evidence on Helping Firms Hire
van Inwegen, Emma
Broadly, there are two main ways to help employers hire: (a) expand their choice set by attracting more applicants or (b) help them choose among that choice set. I report the results of an experiment where employers in a large online labor market were given hiring assistance that could take either form, based on the determination of the helper. In general, job openings with few applicants were given recruiting help, while those with many applicants were given selection help. All were given general advice on the hiring process. I find that employers of treated job posts were over 10% more likely to make a hire than those in the control group. While increased recruiting can potentially crowd out other matches, I find that little if any of the experimental increase came at the expense of the control group. In decomposing the reasons for the increased hiring, I find evidence that both (a) and (b) were important, with recruiting help being about three times more important than selection help. Despite assistance having a marginal cost, the hiring assistance was remarkably cost-effective, and a central planner that could tax the wage bill at even just 2% could fund the intervention.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Airline Revenue Management with Segmented Continuous Pricing: Methods and Competitive Effects</title>
<link href="https://hdl.handle.net/1721.1/144643" rel="alternate"/>
<author>
<name>Long, Yanbin</name>
</author>
<id>https://hdl.handle.net/1721.1/144643</id>
<updated>2022-08-30T03:45:12Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Airline Revenue Management with Segmented Continuous Pricing: Methods and Competitive Effects
Long, Yanbin
With the introduction of IATA’s New Distribution Capability (NDC), airlines will no longer be limited to discrete fare classes for their fare distribution: they could be enabled to show fare quotes in continuous ranges to passengers. NDC will also allow airlines to present different fare quotes to passengers from different demand segments as identified by the airline. In theory, airlines can better extract passenger willingness-to-pay, and thus see gains in revenue, by offering segmented continuous fare quotes to different passengers requesting to book.&#13;
&#13;
This thesis explains the revenue management (RM) methods for segmented continuous pricing and examines their potential effects on airlines’ revenue through simulations in the Passenger Origin-Destination Simulator (PODS). We describe two types of continuous pricing methods: class-based and classless. The class-based method is a straightforward extension of the traditional methods that can be used with existing RM systems, while the classless method requires more changes to current RM algorithms. Our initial simulation results indicate that airlines can see unrealistically large revenue gains assuming perfect passenger segment identification accuracy in a hypothetical competitive scenario where segmented continuous pricing is adopted simultaneously by all airlines. In the most realistic scenarios, in which only one airline adopts segmented continuous pricing and has an 80% accuracy in identifying business versus leisure passenger booking requests, the first-mover airline sees a 3% to 6% revenue gain using constant segmented willingness-to-pay estimates. The revenue gains come primarily from the leisure passenger segment by offering lower fares than competitors closer to departure, although the first-mover airline loses bookings and revenues from the business passenger segment. We examine ways in which the first-mover airline can recover its revenue from business passengers and achieve a larger revenue gain by modifying its estimates of passenger willingness-to-pay.&#13;
&#13;
This thesis also explores potential response strategies by the competing airlines. We discover that competitors can reverse the first-mover’s revenue gain by modifying their fare structures while still using traditional RM methods. We conclude that although adopting segmented continuous pricing is promising in theory, its actual gains depend heavily on the competitive situation and the responses made by other airlines.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inundation Flooding in Urban Environments using on-lattice Density Functional Theory</title>
<link href="https://hdl.handle.net/1721.1/144641" rel="alternate"/>
<author>
<name>Vartziotis, Elli Danae</name>
</author>
<id>https://hdl.handle.net/1721.1/144641</id>
<updated>2022-08-30T03:41:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Inundation Flooding in Urban Environments using on-lattice Density Functional Theory
Vartziotis, Elli Danae
We propose a statistical physics-based computational approach called on-lattice density functional theory (DFT) for evaluating the risk of inundation flooding in urban environments. Originally developed in Materials Science for porous materials, this DFT model is herein upscaled from the nanoscale to the city scale. We show that the strength of such an equilibrium-based approach, which discards the time-dependence of flooding, stems from a combination of three aspects. First, the model requires minimal input quantities and is computationally efficient. Second, the model makes it easy to represent a variety of city elements that are critical for inundation flooding (e.g., buildings, pavements, permeable soils, and drainage systems). Finally, the model has physically meaningful output parameters, such as the adsorption-desorption isotherms, which can be linked to a city’s drainage capacity and steady-state gauge heights. Moreover, the resulting isotherms exhibit a pronounced hysteresis, indicating the irreversibility of the flooding and draining properties at city scale. This hysteresis loop is of great importance since it provides a powerful means of qualitatively identifying the risk of inundation flooding.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-economic Analysis and Strategic Decarbonization of the Indian Cement Industry</title>
<link href="https://hdl.handle.net/1721.1/144640" rel="alternate"/>
<author>
<name>Sakhamuru, Devaki Rani</name>
</author>
<id>https://hdl.handle.net/1721.1/144640</id>
<updated>2022-08-30T03:52:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Techno-economic Analysis and Strategic Decarbonization of the Indian Cement Industry
Sakhamuru, Devaki Rani
India, being a developing nation, will witness tremendous growth in the industrial sector, particularly cement production and consumption for constructing its buildings, roads, and other infrastructure. As India's GDP strengthens due to its industrial growth, living standards will rise too, increasing consumption patterns and CO₂ emissions. Climate change is border agnostic: its effects are everywhere and transcend national boundaries. The world's ability to limit the global rise in temperatures to under 2°C or 1.5°C to meet climate goals depends heavily on India's ability to decarbonize its fast-growing industrial sector, primarily cement and iron &amp; steel. &#13;
&#13;
For India to decarbonize its cement sector, several challenges exist, calling for strategic decision making in the Indian Cement Industry to mitigate climate impacts and future carbon taxes. First, most CO₂ emissions arise not from energy consumption but from process emissions resulting from the chemical decomposition of limestone into lime during clinker making. This makes cement a hard-to-abate sector as long as the process involves calcination of limestone, unless extreme measures are undertaken, such as Carbon Capture and Storage (CCS) or CCUS technologies. Second, these emerging technologies are still not feasible due to high implementation costs, financial barriers, and technology readiness level (TRL). Unlike in China, where the majority of plants are state-owned and have access to capital, the cement industry in India has to raise capital on its own or be subject to being bought out by international firms that have access to larger, lower-cost capital. Moreover, India has an abundance of low-grade limestone but limited high-grade limestone, which is used in OPC clinker suitable for construction. India has already started importing high-grade limestone for OPC clinker making, creating a dependence on imports and posing raw material risks. This research investigates possible ways to decarbonize on the supply side, demand side, and operational side, and explores pathways to reach the target emission intensity of 0.35 tonnes of CO₂ per tonne of cement to meet the climate goals while considering these constraints. &#13;
&#13;
The findings suggest that improving grid emission intensity or using alternative fuels alone will not be sufficient to achieve the target intensity, as the majority of emissions are process emissions from the calcination of limestone. Additionally, the IEA-prescribed roadmap to achieve the target emission intensity of 0.35 t of CO₂ per t of cement is not achievable without major capital expenditure (CAPEX) and new technologies like CCS. Continuing to utilize emission-intensive, high-grade limestone-based clinker while setting up costly CCS infrastructure for limestone-based cement alone may not be a feasible strategy, as dwindling resources may render these systems obsolete in the future.&#13;
&#13;
Considering these factors, alternative binders with varying emission intensities and raw material reserves, such as geopolymers and other green cement types, were included in an optimization to solve for the ideal mix that achieves the emission intensity target of 0.35 t CO₂/t cement. The closest optimization result to the target was 0.37 t CO₂/t cement, using the following industry production mix: OPC 21%, PPC 20%, PSC 10%, and geopolymers 39%. For this scenario to be practical, geopolymers require additional research and development to bring them from lab to market. There is thus no single solution, but bringing such cements to market is likely a better and more cost-effective alternative when coupled with improved grid intensity and alternative fuels and raw materials in a circular economy. While improving grid intensity and using alternative fuels are alone insufficient for long-term decarbonization goals, when coupled with the green cement mix they have the potential to limit temperature rise to 2°C. &#13;
&#13;
While CCUS can abate the industrial sector to meet climate goals, and becomes relevant when a carbon tax is imposed, limitations exist in technology, financial barriers, and raw material risks (such as the limited availability of high-grade limestone reserves). An excessive carbon tax might kill the growing industry. Therefore, the industry needs to explore new cement types such as geopolymers, and further extend the system boundary to decarbonize the energy system and include the transportation system to benefit from logistics optimization. &#13;
&#13;
Finally, the research recommends that policy makers consider diverting R&amp;D funds toward non-limestone-bearing raw materials and binders, especially for developing geopolymers and novel cements that rely on chemical bonding instead of lime to reduce CO₂ emissions from the cement sector. Existing research has already demonstrated that a circular economy has high potential to lower emissions at the lowest cost. Facilitating an efficient circular economy will require expanding the system boundary and diverting funds into R&amp;D for integrated systems and capabilities, but it is the most effective route for the Indian cement industry, which has limited access to capital and faces limestone-related raw material risks.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Defining the molecular basis for the β-catenin and CDC73 interaction</title>
<link href="https://hdl.handle.net/1721.1/144639" rel="alternate"/>
<author>
<name>Lima, Bruna R.</name>
</author>
<id>https://hdl.handle.net/1721.1/144639</id>
<updated>2022-08-30T03:45:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Defining the molecular basis for the β-catenin and CDC73 interaction
Lima, Bruna R.
RNA polymerase II (Pol II) is required for expression of protein-coding genes. To initiate gene transcription, Pol II is recruited to gene promoters by transcription factors. Gene-specific transcription factors, including β-catenin, a component of the Wnt signaling pathway, are used to regulate gene expression in response to developmental or environmental cues. Previous studies reported that β-catenin binds to the central region of CDC73, a subunit of the Polymerase Associated Factor 1 complex (PAF). PAF associates with Pol II during transcription elongation to stimulate processive elongation and co-transcriptional histone modification. Genetic studies have shown that CDC73 is required for expression of Wnt signaling pathway genes. In addition, such studies have hypothesized that the interaction between β-catenin and CDC73 is used to recruit Pol II to genes through PAF. In this work I explored how the CDC73•β-catenin interaction is used to regulate gene expression. My data indicate that CDC73 uses its middle region to associate with either PAF or β-catenin but cannot associate with both proteins simultaneously. Both CDC73 and β-catenin interact only weakly with nucleic acids, meaning that this complex might need another protein factor to mediate its interaction with DNA. To solve the structure of the β-catenin•CDC73 complex, the purification strategies require further optimization to isolate a stable complex. Solving this structure will define the exact nature of their interaction. Moreover, it will allow for cellular studies of β-catenin•CDC73 and PAF to understand whether the complexes are differentially used to regulate gene expression. Together, the work presented in this thesis provides new insights into the β-catenin•CDC73 interaction in comparison with the interaction of CDC73 with PAF.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Models for On-Orbit Detection of Temperature and Chlorophyll Ocean Fronts</title>
<link href="https://hdl.handle.net/1721.1/144634" rel="alternate"/>
<author>
<name>Felt, Violet C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144634</id>
<updated>2022-08-30T03:18:22Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Models for On-Orbit Detection of Temperature and Chlorophyll Ocean Fronts
Felt, Violet C.
Small-scale ocean fronts play a significant role in absorbing the excess heat and CO2 generated by climate change, yet their dynamics are not well understood. Existing in-situ and remote sensing measurements of the ocean are of inadequate spatial and temporal coverage to globally map small-scale ocean fronts, and existing algorithms to generate ocean front maps are computationally intensive. We propose machine learning (ML) models to detect temperature and chlorophyll ocean fronts from unprocessed satellite imagery, significantly reducing the standard resources and computational times needed for detecting ocean fronts. These models are developed with resource-constrained satellite imaging platforms like CubeSats in mind, as such platforms are able to address the spatial and temporal coverage challenges. The highest performing models achieve accuracies of 96% and make predictions in milliseconds using less than 100 MB of storage; these capabilities are well-suited for CubeSat deployment.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An fMRI dataset of 1,102 natural videos for visual event understanding</title>
<link href="https://hdl.handle.net/1721.1/144631" rel="alternate"/>
<author>
<name>Lahner, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/144631</id>
<updated>2022-08-30T03:27:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An fMRI dataset of 1,102 natural videos for visual event understanding
Lahner, Benjamin
A visual event, such as a dog running in a park, communicates complex relationships between objects and their environment. The human visual system is tasked with transforming these spatiotemporal events into meaningful outputs so we can effectively interact with our environment. To form a useful representation of the event, the visual system utilizes many visual processes, from object recognition to motion perception. Thus, studying the neural correlates of visual event understanding requires brain responses that capture the entire transformation from video-based stimuli to high-level conceptual understanding. However, despite its ecological importance and computational richness, there does not yet exist a dataset to sufficiently study visual event understanding. Here we release the Algonauts Action Videos (AAV) dataset composed of high quality functional magnetic resonance imaging brain responses to 1,102 richly annotated naturalistic video stimuli. We detail AAV’s experimental design and highlight its high quality and reliable activation throughout the visual and parietal cortices. Initial analyses show the signal contained in AAV reflects numerous visual processes representing different aspects of visual event understanding, from scene recognition to action recognition to memorability processing. Since AAV captures an ecologically-relevant and complex visual process, this dataset can be used to study how various aspects of visual perception integrate to form a meaningful understanding of a video. Additionally, we demonstrate its utility as a model evaluation benchmark to bridge the gap between visual neuroscience and video-based computer vision research.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Novel Trajectory Vector Approach for Characterizing Dynamic Changes in the Performance-Load Representation of Cardiac State</title>
<link href="https://hdl.handle.net/1721.1/144630" rel="alternate"/>
<author>
<name>Shen, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/144630</id>
<updated>2022-08-30T04:03:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Novel Trajectory Vector Approach for Characterizing Dynamic Changes in the Performance-Load Representation of Cardiac State
Shen, Julie
In this work, we present a novel trajectory vector approach to both qualitatively and quantitatively characterize dynamic changes in the performance-load cardiac relationship for application in the context of cardiogenic shock. The performance-load relationship is an expansion on the Frank-Starling mechanism that allows traditional metrics of preload to be correlated with afterload, combining both types of load into a more comprehensive general cardiac load. Through a series of controlled animal studies, we collect hemodynamic data during baseline and various pharmacological interventions while the animal is supported by a percutaneous ventricular assist device to test the feasibility of this approach. Utilizing a 2D Frank-Starling representation of the hemodynamic animal data, we employ Gaussian mixture model clustering to identify distinct patterns in an animal’s drug response. These patterns guide the formulation of trajectory vectors, parameterized by angle and magnitude, for each drug effect. Feasibility of the trajectory vector approach is validated through confirming the independence of angle and magnitude from baseline state. The ability of the approach to detect changes across a spectrum of distinct cardiac states and distinguish intervention dose levels, with minimal influences from the level of Impella support, is realized. With the ability to monitor shifts in performance-load relationship, a reflection of the heart’s current inotropic state, our technique can be applied in the clinic to inform appropriate pharmacological treatment and device management for patients with cardiogenic shock.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microchannel Thermal Management Analysis and Simulation Tool for Integration into Electronic Component Design</title>
<link href="https://hdl.handle.net/1721.1/144628" rel="alternate"/>
<author>
<name>Waterman, Kelli M.</name>
</author>
<id>https://hdl.handle.net/1721.1/144628</id>
<updated>2022-08-30T03:38:41Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Microchannel Thermal Management Analysis and Simulation Tool for Integration into Electronic Component Design
Waterman, Kelli M.
This study focuses on the use of microchannel cold plates as a thermal management solution. The goal of the study was to build the Microchannel Simulation Library Tool (MSLT), which allows cold plate designers to analyze possible solutions to electronic component cooling. The MSLT relies on a library of geometries built in the Star-CCM+ software suite, but pulls the user interaction level to a Matlab interface for ease of access to said library and implementation by designers.&#13;
&#13;
Four basic geometries are included that have been shown to provide cooling enhancement to cold plates: straight microchannels, zig-zagged microchannels, straight microchannels with cavities, and oblique pin fins. Geometric parameters including channel width, length and depth as well as cold plate and working fluid material properties are variable from the Matlab user interface level. Preprocessing to ensure viable geometries and the generation of a Java script to dictate run parameters of the simulation sweeps are completed within Matlab prior to Star-CCM+ running simulations through its Design Manager tool. The data is then retrieved and post-processed from Matlab to generate performance metrics for use.&#13;
&#13;
The straight microchannel simulation set-up was validated against published experimental data with temperature trends and pressure drop analyzed.&#13;
&#13;
Trend analysis was conducted for the geometric features showing the relationship between performance in terms of maximum cold plate temperature and channel width, as well as the impact that it has on pressure drop across the cold plate. Generally, results show that smaller channels provide better performance in the form of lower maximum cold plate temperatures but at the cost of a higher pressure drop. Additionally, features that enhance flow redirection and mixing also cause lower maximum cold plate temperatures at the cost of higher pressure drops.&#13;
&#13;
The MSLT can be used by cold plate designers to simply and cheaply model a wide range of microchannel cold plate solutions. This ease of analysis allows thermal management to be integrated into component design rather than treated as a problem to be solved later. This is especially useful for heat-generating components in the developmental stages of the design process with potentially changing conditions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Bicycle Design using Performance-Aware Deep Generative Models</title>
<link href="https://hdl.handle.net/1721.1/144624" rel="alternate"/>
<author>
<name>Regenwetter, Lyle</name>
</author>
<id>https://hdl.handle.net/1721.1/144624</id>
<updated>2022-08-30T03:29:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Bicycle Design using Performance-Aware Deep Generative Models
Regenwetter, Lyle
This treatise explores the application of Deep Generative Machine Learning Models to bicycle design and optimization. Deep Generative Models have been growing in popularity across the design community thanks to their ability to learn and mimic complex data distributions. This work addresses several key bottlenecks in the developing field, such as performance-aware generation, inverse design, and design validity. To support development of deep generative models, this treatise develops a foundation for data-driven design of bicycles, introducing three datasets: BIKED, BIKER, and FRAMED, considering holistic bicycle design, aerodynamic optimization, and structural optimization of bicycles respectively. It further proposes a set of tractable bicycle design tools, such as surrogate models to rapidly estimate performance of design candidates, analysis tools to guide the design process, and targeted design refinement tools using counterfactual explanations. This treatise finally proposes the first Deep Generative Model that actively optimizes for realism, performance, diversity, feasibility, and target satisfaction simultaneously. The proposed model achieves sweeping improvements over numerous evaluation criteria when compared to existing methods and establishes state-of-the-art performance on the bicycle design problem.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structuring Optimal Control of Legged Locomotion with Learning-based Methods</title>
<link href="https://hdl.handle.net/1721.1/144623" rel="alternate"/>
<author>
<name>Jeon, Se Hwan</name>
</author>
<id>https://hdl.handle.net/1721.1/144623</id>
<updated>2022-08-30T03:30:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Structuring Optimal Control of Legged Locomotion with Learning-based Methods
Jeon, Se Hwan
Both optimal control methods and learning-based methods have been widely used for the control of legged locomotion. While optimal control formulations allow the designer to guarantee constraints on the solutions found, learning-based methods can leverage data and past experiences to globally search for solutions robust to noise and errors in the model parameters. This work explores how optimal control methods can be guided and structured by using data-driven techniques such as supervised learning and Bayesian optimization. Two case studies are presented. The first presents a model predictive controller for quadrupedal landing that reasons about body states, reaction forces, and contact timings in an online fashion. This highly nonlinear problem is made tractable by collecting thousands of feasible trajectories offline from trajectory optimizations and learning to generate them from the initial falling conditions. By initializing the search for a solution with the approximation from a deep neural network, the MIT Mini Cheetah is shown to be able to recover from significant falls in simulation and hardware in real time. The second studies the effect of learning command-dependent weights for a convex model predictive controller. The weights for the running costs are adjusted dynamically as a function of the command input. Using black-box optimization techniques and a defined higher-level reward, a function mapping the command input to the weights can be determined by sampling the trajectories from sweeps of command inputs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility and Design of Solar-Powered Electrodialysis Systems for Agriculture Applications</title>
<link href="https://hdl.handle.net/1721.1/144619" rel="alternate"/>
<author>
<name>Easley, Jacob N.</name>
</author>
<id>https://hdl.handle.net/1721.1/144619</id>
<updated>2022-08-30T03:18:46Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Feasibility and Design of Solar-Powered Electrodialysis Systems for Agriculture Applications
Easley, Jacob N.
This paper presents photovoltaic-powered electrodialysis reversal (PV-EDR) as a promising desalination technology for agricultural applications in the Middle East and North Africa (MENA). Water scarcity in MENA has led to reliance on brackish water for irrigation of crops. Irrigating crops with high salinity water causes a host of problems including decreased yield and soil degradation. Current solutions are water and energy intensive, leading to overextraction of renewable water resources as well as overreliance on fossil fuels for electricity, which is expensive. Market research in MENA and interviews conducted with farmers in Jordan led to the conclusion that energy cost is the most significant issue facing small-scale desalination systems for agriculture in MENA. PV-EDR is chosen as an ideal desalination architecture to meet the needs of farmers by reducing energy costs compared to on-grid reverse osmosis (RO) systems that are currently employed in MENA. Time-variant (TV) operational theory for PV-EDR is presented, which allows for desalination production to match the available solar irradiance throughout a day, leading to decreased power system sizing and further cost savings. TV-PV-EDR can be integrated with water- and energy-efficient drip irrigation systems in order to tailor desalination production to crop water demand throughout a season. Given a case study in Jordan, a TV-PV-EDR system is designed and compared to current benchmark RO systems in relation to capital cost, energy cost, and total lifetime cost. TV-PV-EDR was found to be less expensive and more energy efficient than RO systems over its lifetime despite having a larger capital cost. TV-PV-EDR has the potential to provide a mechanism through which more energy-efficient, higher recovery desalination for agriculture can be achieved.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-based Control for Robot Manipulation Tasks with High-dimensional State Spaces</title>
<link href="https://hdl.handle.net/1721.1/144618" rel="alternate"/>
<author>
<name>Githinji, Bilha-Catherine "Bilkit" W.</name>
</author>
<id>https://hdl.handle.net/1721.1/144618</id>
<updated>2022-08-30T03:09:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Model-based Control for Robot Manipulation Tasks with High-dimensional State Spaces
Githinji, Bilha-Catherine "Bilkit" W.
Long horizon manipulation tasks are typically composed of sub-tasks with varying complexity. One phase of the task, for example, may require a continuous action space and another may be more efficiently solved using a discrete action space. Similarly, complexity in the state space may require analogous abstractions in order to apply classical planning and control methods; e.g., viewing a symbolic representation versus pixel-based representation. A common approach to addressing long horizon tasks is to develop a hierarchical system with a fixed state representation and a set of discrete and continuous action spaces to solve components of the task. However, tasks with high-dimensional state spaces present a problem for this approach where the fixed representation is ill-fit for solving certain phases of the task. This work motivates an alternative where learnt abstractions of the state space allow a hierarchical system to do coarse-to-fine reasoning of representation information to solve a task more effectively. We demonstrate a prototype of such an adaptive system and compare its performance with a system that has fixed representations. The prototype was tested in simulated table-top experiments as well as physical experiments with the Franka Emika Panda arm. The prototype outperformed the baselines in all long horizon cloth manipulation tasks by a margin of up to 20% and matched baseline performance in the rope domain.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clean heat at what cost? Economic optimization of residential space heating in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/144617" rel="alternate"/>
<author>
<name>McBride, Jameson R.</name>
</author>
<id>https://hdl.handle.net/1721.1/144617</id>
<updated>2022-08-30T03:28:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Clean heat at what cost? Economic optimization of residential space heating in Massachusetts
McBride, Jameson R.
Electrifying space heating is a key strategy to reduce greenhouse gas emissions from buildings. The Commonwealth of Massachusetts has adopted a statutory target of net-zero emissions by the year 2050, and the state has predicted that 100,000 homes must be converted each year to electric space heating to reach a state-wide net-zero trajectory. Current progress is inadequate, as fewer than 1,000 homes per year have electrified space heating in Massachusetts in recent years.&#13;
&#13;
This thesis analyzes the prospects and preconditions for the electrification of space heating in the Massachusetts single-family housing stock using an agent-based building optimization model built upon high-resolution building characteristics and a novel heat pump cost estimation methodology.&#13;
&#13;
We argue that widespread heating electrification of existing homes in Massachusetts is unlikely to be economically optimal for households without significant policy change. In particular, the results of our modeling suggest that air-source heat pumps are significantly costlier to operate for primary space heating relative to high-efficiency natural gas furnaces and boilers under current electricity and natural gas rates. Making heating electrification economic for households would require substantially expanding operational subsidies or reforming rate design to reduce the price ratio of electricity to natural gas. Finally, we find that the electrification of space heating could reduce building CO2e emissions in Massachusetts by 25-35% under current grid carbon intensities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for Modular, Extensible, Equivalence-Preserving Compilation</title>
<link href="https://hdl.handle.net/1721.1/144616" rel="alternate"/>
<author>
<name>Jamner, Dustin</name>
</author>
<id>https://hdl.handle.net/1721.1/144616</id>
<updated>2022-08-30T03:47:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Framework for Modular, Extensible, Equivalence-Preserving Compilation
Jamner, Dustin
I present Pyrosome, a generic framework for the verification of extensible, compositional compilers in Coq. Current techniques for proving compiler correctness are generally tied to the specific structures of the languages and compilers that they support. This limits the extent to which these systems can be extended and composed. In Pyrosome, verified compilers are fully extensible, meaning that adding a new feature to a language simply requires defining and verifying the compilation of that single feature, reusing the old correctness theorem to cover all other cases. This is made possible by an inductive formulation of equivalence preservation that supports the addition of new rules to the source language, target language, and compiler.&#13;
&#13;
Pyrosome defines a formal, deeply embedded notion of programming languages with semantics given by sorted equational theories, so all compiler-correctness proofs boil down to type-checking and equational reasoning. My work supports vertical composition of any compilers expressed in Pyrosome in addition to feature extension. Since my design requires compilers to support open programs, my correctness guarantees support linking with any target code of the appropriate type. As a case study, I present a multipass compiler from STLC through CPS translation and closure conversion, and show that natural numbers, the unit type, recursive functions, and a global heap can be added to this compiler while reusing the original proofs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finding Sparse Subnetworks in Self-Supervised Speech Recognition and Speech Synthesis</title>
<link href="https://hdl.handle.net/1721.1/144615" rel="alternate"/>
<author>
<name>Lai, Cheng-I Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/144615</id>
<updated>2022-08-30T03:29:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Finding Sparse Subnetworks in Self-Supervised Speech Recognition and Speech Synthesis
Lai, Cheng-I Jeff
The modern paradigm in speech processing has demonstrated the importance of scale and compute for end-to-end speech recognition and synthesis. For instance, state-of-the-art self-supervised speech representation learning models typically consist of more than 300M model parameters and are trained on 24 GPUs. While such a paradigm has proven effective in certain offline settings, the extent to which it can be extended to online and small-device scenarios remains unclear.&#13;
&#13;
This thesis is a step toward making advanced speech processing models more parameter-efficient. We aim to answer the following: do sparse subnetworks exist in modern speech processing models, and if so, how can we discover them efficiently? The key contribution is a new pruning algorithm termed Prune-Adjust-Re-Prune (PARP), which discovers sparse subnetworks efficiently. PARP is inspired by our observation that subnetworks pruned for pre-training tasks need merely a slight adjustment to achieve a sizeable performance boost in downstream ASR tasks. We first demonstrate its effectiveness for self-supervised ASR in various low-resource settings. In particular, extensive experiments verify (1) sparse subnetworks exist in monolingual/multilingual pre-trained self-supervised learning representations, and (2) the computational advantage and performance gain of PARP over baseline pruning methods.&#13;
&#13;
In the second study, we extend PARP to end-to-end TTS, including both spectrogram prediction networks and vocoders. We thoroughly investigate the tradeoffs between sparsity and its subsequent effects on synthetic speech. The findings suggest that not only are end-to-end TTS models highly prunable, but also, perhaps surprisingly, pruned TTS models can produce synthetic speech with equal or higher naturalness and intelligibility, with similar prosody.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regulating Orthogonality Of Feature Functions For Highly&#13;
Compressed Deep Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/144614" rel="alternate"/>
<author>
<name>Wei-Chen, Wang</name>
</author>
<id>https://hdl.handle.net/1721.1/144614</id>
<updated>2022-08-30T03:45:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Regulating Orthogonality Of Feature Functions For Highly&#13;
Compressed Deep Neural Networks
Wei-Chen, Wang
When designing deep neural networks (DNN), the number of nodes in hidden layers can have a profound impact on the performance of the model. The information carried by the nodes in each layer creates a subspace, whose dimensionality is determined by the number of nodes and their linear dependency. This paper focuses on highly compressed DNNs – networks with significantly fewer nodes in the last hidden layer than in the output layer. Each node in the last hidden layer is considered a feature function, and we study how the orthogonality of feature functions changes throughout the training process. We first develop how information is learned, stored, and updated in the DNN throughout training, and propose an algorithm which regulates the orthogonality before and during training. Our experiment on a high-dimensional Gaussian mixture dataset reveals that the algorithm achieves higher orthogonality in feature functions and accelerates network convergence. Orthogonalizing feature functions enables us to approximate Newton's method via the gradient descent algorithm. We can take advantage of the superior convergence properties of second-order optimization without directly computing the Hessian matrix.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Ask Like a Physician</title>
<link href="https://hdl.handle.net/1721.1/144613" rel="alternate"/>
<author>
<name>Lehman, Eric (Computer scientist)</name>
</author>
<id>https://hdl.handle.net/1721.1/144613</id>
<updated>2024-11-14T17:23:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning to Ask Like a Physician
Lehman, Eric (Computer scientist)
Existing question answering (QA) datasets derived from electronic health records (EHR) are artificially generated and, as a result, fail to capture realistic physician information needs. We present Discharge Summary Clinical Questions (DiSCQ), a newly curated question dataset composed of 2,000+ questions paired with the snippets of text (triggers) that prompted each question. The questions are generated by medical experts from 100+ MIMIC-III discharge summaries. &#13;
&#13;
We analyze this dataset to characterize the types of information sought by medical experts. We also train baseline models for trigger detection and question generation (QG), paired with unsupervised answer retrieval over EHRs. Our baseline model is able to generate high-quality questions in over 62% of cases when prompted with human-selected triggers. We will release this dataset (and all code to reproduce baseline model results) to facilitate further research into realistic clinical QA and QG.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dimensional Control in Ceramics Printed by Projection&#13;
Microstereolithography</title>
<link href="https://hdl.handle.net/1721.1/144612" rel="alternate"/>
<author>
<name>Han, Gina</name>
</author>
<id>https://hdl.handle.net/1721.1/144612</id>
<updated>2022-08-30T03:44:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Dimensional Control in Ceramics Printed by Projection&#13;
Microstereolithography
Han, Gina
Ceramics, due to their high melting temperatures, are an attractive material choice for high-temperature applications, especially when combined with a metal to create a ceramic-metal composite, or cermet. This has created a need to shape ceramics with greater flexibility than traditional ceramic processing methods allow. Additive manufacturing is a promising potential solution due to the highly flexible nature of the process compared to traditional methods. Within additive manufacturing, projection microstereolithography, a process in which a photosensitive resin is exposed to pixelated images to cure an entire layer at once, is an attractive method due to its increased resolution compared to other additive manufacturing methods such as binder jetting. To better control the projection microstereolithography process and allow the production of dimensionally accurate parts, the curing of photosensitive resins containing ceramic particles was studied. This study had two main parts: a resin composition study and a geometric parameter study. In the resin composition study, the resin components were carefully varied to study their effect on curing. In the geometric parameter study, different projection areas, as well as differences in curing behavior between "negative" and "positive" features, were studied. These studies found that changing the resin components changes the curing behavior, as supported in the literature. For the geometric variables, curing behavior changed in certain cases, but not all. This analysis of geometric variables is more detailed than any published in the literature thus far. Overall, the results of these curing experiments will better inform the printing of ceramics using projection microstereolithography.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Mechanical Validation of Commercially Viable, Personalized Passive Prosthetic Feet</title>
<link href="https://hdl.handle.net/1721.1/144611" rel="alternate"/>
<author>
<name>Folinus, Charlotte Méry</name>
</author>
<id>https://hdl.handle.net/1721.1/144611</id>
<updated>2022-08-30T03:30:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design and Mechanical Validation of Commercially Viable, Personalized Passive Prosthetic Feet
Folinus, Charlotte Méry
Existing energy storage and return (ESR) prosthetic feet are available only in a discrete, low-resolution set of size and stiffness options. While these devices are reimbursed for thousands of dollars through insurance, the current low-resolution sizing systems may limit the walking performance of many amputees. The design, manufacturing, and provision processes used to create existing prosthetic feet are inherently low resolution; providing amputee-specific personalization with these methods is either not possible or not commercially viable. The Lower Leg Trajectory Error (LLTE) design framework provides an opportunity for designing high-performance, amputee-specific prosthetic feet; however, previous foot prototypes were designed as experimental prototypes to demonstrate the LLTE theory, not to satisfy the economic, mechanical, and aesthetic requirements necessary for commercial adoption.&#13;
&#13;
This thesis aims to understand how a personalized, affordable prosthetic foot can be realized in a clinical-commercial setting. First, we evaluate stakeholder needs and identify the flows of products, capital, and services between prosthetics suppliers, distributors, prosthetists, amputees, and insurance providers. We elucidate the design requirements for a personalized prosthetic foot that can be manufactured, distributed, and clinically provided by Hanger, a current leader in both product distribution and patient care in orthotics and prosthetics.&#13;
&#13;
Based on material properties and manufacturing process capabilities, we demonstrate why CNC machining of Nylon 6/6 is an appropriate process for satisfying these requirements. Although additive manufacturing is often seen as a compelling method for creating customized products, additively manufactured ESR prosthetic feet would likely have inferior walking performance, take longer to produce, cost more, and experience greater manufacturing variability than CNC machined feet. Next, this thesis presents a novel parametric foot architecture that can be CNC machined, fits within a commercial foot shell, and can be designed for individual users’ body characteristics and activity levels. Lastly, we demonstrate that prototypes made using the upgraded foot design mechanically behave as anticipated and satisfy industry-standard strength and mechanical performance requirements.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning for strength prediction and optimal design of sustainable concrete formulas</title>
<link href="https://hdl.handle.net/1721.1/144609" rel="alternate"/>
<author>
<name>Pfeiffer, Olivia</name>
</author>
<id>https://hdl.handle.net/1721.1/144609</id>
<updated>2022-08-30T03:41:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Machine learning for strength prediction and optimal design of sustainable concrete formulas
Pfeiffer, Olivia
Given the large environmental impact of the concrete industry, which represents 8-9% of global CO₂ emissions, the design of concrete mixes with low carbon footprints that still meet structural performance requirements will be an essential part of global decarbonization efforts. In this work, we build a concrete performance model, which maps from concrete constituents to compressive strength, a key structural property. Specifically, we leverage the quantity and quality of information provided by our industrial concrete partners (whereas most existing related studies use small, narrow datasets derived from laboratory experiments) to establish an improved concrete performance model that captures the role of several concrete ingredients and a wide variety of formulas. We find that the features which are predicted to be important to concrete strength are compatible with industry knowledge, and that predictions can be improved in the case of small datasets by leveraging information from other larger datasets. Additionally, we integrate our machine learning model into an optimization procedure, and identify mixtures which have minimal cost and minimal climate impact. Lastly, we discuss the trade-offs between these two design parameters, and how these considerations differ by the required strength of the concrete.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Thesis, Allegedly</title>
<link href="https://hdl.handle.net/1721.1/144608" rel="alternate"/>
<author>
<name>Hong, Jisoo</name>
</author>
<id>https://hdl.handle.net/1721.1/144608</id>
<updated>2022-08-30T03:39:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Thesis, Allegedly
Hong, Jisoo
Current approaches for studying the spread of misinformation on social media tend to focus on the factual integrity of shared content and the reach or circulation of false claims. However, a focus on the truth value of content can obscure the embeddedness of information in social, communicative practices. One way of apprehending the sociocultural dimensions is through an analysis of the stances people take toward the information they circulate online.&#13;
&#13;
In this thesis, we investigate how language mediates perceptions of truth and reality through a close examination of how data is animated as evidence. This process, we argue, is fundamentally interactional and dialogic. Using sociolinguistic and natural language processing (NLP) techniques, we demonstrate how specific features of evidential talk, such as the use of epistemic adverbs like allegedly or supposedly, can dramatically alter how evidence is taken up in discussions of scientific controversy. We present the hearsay effect, a numerical measure mapping the entextualization of data as hearsay to its engagement and circulation on social media, to characterize how subtle inflections in epistemic modulation shape the social life of evidence. We find that the hearsay effect is variably salient in different discursive communities, and is particularly prominent in our case study of evidential discourse amongst ufologists on Twitter. We suggest that this analysis of the strength of weak evidence within contested sites of knowledge production offers new ways of conceptualizing how information and misinformation are animated in the online public sphere.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable for All? How Satellite Remote Sensing Contributes to Sustainable Development in Africa and International Climate Policy</title>
<link href="https://hdl.handle.net/1721.1/144607" rel="alternate"/>
<author>
<name>Ali, Ilham</name>
</author>
<id>https://hdl.handle.net/1721.1/144607</id>
<updated>2022-08-30T03:34:16Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Sustainable for All? How Satellite Remote Sensing Contributes to Sustainable Development in Africa and International Climate Policy
Ali, Ilham
Satellite remote sensing applications are among the most important technological and data tools available to sustainable development practitioners today, but are still underutilized. Sustainable development is multidimensional, characterized by factors such as environmental sustainability, economic livelihoods, and social dynamics – most of which satellites have a role in supporting. In this work, satellite contributions to the United Nations Sustainable Development Goals are robustly quantified using a novel impact classification method, and recommendations are given for key expansions of satellite remote sensing use in Africa, from reducing diseases such as malaria and cholera, to implementing social protection mechanisms for the poor at scale, to expanding urban greenspace connectivity, and more. The domestic African satellite industry is also evaluated in the context of its capacity development and data policies. African satellite launches have grown significantly in recent years, making up 62% of all ‘rest of world’ satellite launches in 2019, with approximately 20 nations having established national space agencies through 2021. Small satellites, and nanosatellites in particular, have featured as many new entrant nations’ debut satellites due to their modularity and utility. Room for improvement exists in increasing data usage and availability from successful missions, particularly with the new coordination capacities of the regional African Space Agency. Finally, blended data processing methods combining satellite environmental data and geospatial population data are used to illustrate water futures in East Africa. Predictive modeling and feature selection on the dataset were also able to detect El Niño effects in 2015 and connect rising heat levels and groundwater availability to climate-influenced micro-mobility patterns.
This work highlights the importance of satellite data for serving small to medium-sized urban agglomerations in particular (&lt;100,000), where the majority of Africa’s urban population lives but is difficult to reach with in-situ methods. Policy recommendations are given in the context of the Paris Climate Agreement to integrate satellite data-based methods for realization of the Warsaw loss and damage mechanism and to spur implementation action to protect vulnerable nations and populations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Surface Ion Milling on Ion Trap Heating</title>
<link href="https://hdl.handle.net/1721.1/144606" rel="alternate"/>
<author>
<name>Rich, Philip H.</name>
</author>
<id>https://hdl.handle.net/1721.1/144606</id>
<updated>2022-08-30T03:29:42Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Effects of Surface Ion Milling on Ion Trap Heating
Rich, Philip H.
Ions trapped above a surface electrode chip are a viable candidate for a scalable qubit architecture. The internal electronic states of the ions serve as qubits, while the shared motional modes of ions in a lattice serve as a quantum bus by which information can be shared. Gate operations are performed by applying lasers to drive transitions between internal states in individual qubits and motional states in qubit chains. The coherence time of the coupled motional states is limited by an ion’s heating rate from one motional state to the next. An ion’s motional heating rate is proportional to the electric field noise experienced by the ion at the trap frequency. Measured ion heating rates are orders of magnitude larger than those predicted by theoretical models, and the temperature dependence of the field noise is an underexplored domain because few systems can vary the surface chip temperature across a wide range. By exposing a surface electrode trap chip to a beam of ions, surface contaminants can be removed from the chip surface, potentially changing the temperature dependence of the electric field noise experienced by the ion.&#13;
&#13;
This thesis presents the first measurements of the temperature dependence of the heating rate of a single ion trapped above an aluminum trap chip before and after milling, as well as above a platinum chip after milling. We measure heating rates at trap chip temperatures ranging from 8.5 to 295 K using sideband ratio spectroscopy of the axial motional state of a strontium-88 ion trapped 50 micrometers above the chip surface at an axial frequency of 1.3 MHz. The temperature dependence of the heating rate of an ion above an aluminum chip is shown not to change after ion milling, in sharp contrast with prior results for trap chip surfaces made of other materials. Before and after ion milling, the heating rate follows an activated power law dependence. The unchanged heating rate after ion milling is potentially attributable to the rapid formation of surface oxide layers on aluminum. For ions trapped above a platinum chip, the heating rate is consistently measured to be an order of magnitude higher than comparable measurements for aluminum, gold, and niobium chips. The reason for these elevated heating rates is unknown at this time, but potentially attributable to fabrication issues.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atomistic Simulations of Antiferromagnetic Solitons and their High-Speed Dynamics</title>
<link href="https://hdl.handle.net/1721.1/144605" rel="alternate"/>
<author>
<name>Tremsina, Elizaveta</name>
</author>
<id>https://hdl.handle.net/1721.1/144605</id>
<updated>2022-08-30T03:21:52Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Atomistic Simulations of Antiferromagnetic Solitons and their High-Speed Dynamics
Tremsina, Elizaveta
In antiferromagnetic materials, localized self-sustaining states called solitons, namely domain walls and skyrmions, can be efficiently driven by currents and achieve velocities of several kilometers per second. These solitons are massive particles and therefore cannot travel faster than a limiting velocity akin to the speed of light for the material. The specifics of these high-velocity dynamics, in which solitons begin to display relativistic effects, have been well understood for the case of one-dimensional particles – domain walls. Here, we perform an extensive and systematic atomistic study of both 1D and 2D soliton dynamics in chiral magnetic materials, both at and away from angular momentum compensation. We present a novel outlook on the role of skyrmion compactness in their deformation patterns and velocity limits, as well as the absence of behaviors similar to those observed in relativistic DWs. We claim that limits on skyrmion compactness also impede their ability to reach the velocity regime where relativistic effects begin to occur in rapidly moving DWs, due to the critical skyrmion breakdown behavior. These results could prove significant to the field of spintronics, as well as to potential applications of skyrmions in novel logic and memory devices.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Computation on Encrypted Data Practical through Hardware Acceleration of Fully Homomorphic Encryption</title>
<link href="https://hdl.handle.net/1721.1/144604" rel="alternate"/>
<author>
<name>Samardzic, Nikola</name>
</author>
<id>https://hdl.handle.net/1721.1/144604</id>
<updated>2022-08-30T03:33:40Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Making Computation on Encrypted Data Practical through Hardware Acceleration of Fully Homomorphic Encryption
Samardzic, Nikola
Fully Homomorphic Encryption (FHE) enables offloading computation to untrusted servers with cryptographic privacy. Despite its attractive security, FHE is not yet widely adopted due to its prohibitive overheads, about 10,000× over unencrypted computation.&#13;
&#13;
Hardware acceleration is an attractive approach to bridge this performance gap, but it brings new challenges. These include operations on large vectors with complex dependencies that current vector processor architectures cannot handle, as well as extreme memory bandwidth demands. This thesis presents two FHE accelerators that address these challenges: F1 and CraterLake.&#13;
&#13;
F1 is the first programmable FHE accelerator, i.e., capable of executing full FHE programs. F1 is a wide-vector processor with novel functional units deeply specialized to FHE primitives. This organization provides so much compute throughput that data movement becomes the key bottleneck. Thus, F1 is primarily designed to minimize data movement. It speeds up shallow FHE computations (i.e., those of limited multiplicative depth) by gmean 5,400× over a 4-core CPU. Unfortunately, F1 becomes memory bandwidth bound on deeper computations (e.g., deep neural networks). This is because deep FHE programs require very large ciphertexts (tens of MBs each) and different algorithms that F1 does not support well.&#13;
&#13;
CraterLake addresses these shortcomings and is the first accelerator to effectively speed up arbitrarily large FHE programs. CraterLake introduces a new hardware architecture that efficiently scales to very large ciphertexts, novel functional units to accelerate key kernels, and new algorithms and compiler techniques to reduce data movement. These advances help CraterLake outperform a 32-core CPU by gmean 4,600× and deliver 11.2× the performance of F1 on deep benchmarks, even when we scale F1’s architecture to the size of CraterLake. These speedups enable new applications for FHE, such as real-time inference using deep neural networks.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A complete resource allocation framework for flexible high throughput satellite constellations</title>
<link href="https://hdl.handle.net/1721.1/144601" rel="alternate"/>
<author>
<name>Pachler de la Osa, Nils</name>
</author>
<id>https://hdl.handle.net/1721.1/144601</id>
<updated>2022-08-30T03:55:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A complete resource allocation framework for flexible high throughput satellite constellations
Pachler de la Osa, Nils
The past few years have witnessed the rise of a new era of satellite communications. Interest in providing broadband access from space has reached levels reminiscent of those in the late 90s with Iridium and Globalstar, among others. The novel mega-constellation designs will rely on thousands of highly capable satellites to provide service to the ever-growing communications market. Nevertheless, the new payload flexibilities and the larger space segment involve a level of complexity and dimensionality that this sector has not seen before. While manual allocation of resources was feasible and efficient in the early stages of this industry, it becomes infeasible under the new conditions. To exploit the capabilities of the new spacecraft to their full potential, new automatic and optimized tools for the Resource Allocation (RA) problem in the context of satellite communications need to be developed.&#13;
&#13;
While individual tools and methods for specific sub-problems have been proposed in recent years, most approaches fail to address the interactions between different sub-problems, and those that do rely on simplified assumptions that do not capture the reality of modern operations. To close this gap, this thesis proposes an adaptive framework to solve the long-horizon RA problem under high dimensionality conditions for highly flexible satellite constellations. The proposed framework uses a divide-and-conquer approach in which the RA problem is decomposed into different sub-problems, each solved via state-of-the-art optimization techniques, and then integrated back to obtain a valid, feasible, and efficient solution for the long-horizon RA problem. The performance of this framework is then analyzed using different user distributions, model parameters, and solution algorithms to understand the capabilities and robustness of the obtained solutions, as well as the sensitivity to the different variables.&#13;
&#13;
The executed analyses prove the validity and effectiveness of the framework for the problem at hand. Specifically, the proposed method and algorithms prove robust against a variety of user distributions and model parameters, always obtaining a feasible plan. In addition, the tests performed in this work demonstrate that state-of-the-art algorithms significantly outperform simple techniques, multiplying the capacity of the constellation by 4 with the same payload characteristics while reducing power consumption to a third. Furthermore, the sensitivity tests show that optimized solutions achieve improved coverage even with limited hardware compared to heuristic techniques.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Integrated Design Pipeline for Tactile Sensing Robotic Manipulators</title>
<link href="https://hdl.handle.net/1721.1/144599" rel="alternate"/>
<author>
<name>Zlokapa, Lara</name>
</author>
<id>https://hdl.handle.net/1721.1/144599</id>
<updated>2022-08-30T03:42:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">An Integrated Design Pipeline for Tactile Sensing Robotic Manipulators
Zlokapa, Lara
Traditional robotic manipulator design methods require extensive, time-consuming, and manual trial and error to produce a viable design. During this process, engineers often spend their time redesigning or reshaping components as they discover better topologies for the robotic manipulator. Tactile sensors, while useful, often complicate the design due to their bulky form factor. We propose an integrated design pipeline to streamline the design and manufacturing of robotic manipulators with knitted, glovelike tactile sensors. The proposed pipeline allows a designer to assemble a collection of modular, open-source components by applying predefined graph grammar rules. The end result is an intuitive design paradigm that allows the creation of new virtual designs of manipulators in a matter of minutes. Our framework allows the designer to fine-tune the manipulator’s shape through cage-based geometry deformation. Finally, the designer can select surfaces for adding tactile sensing. Once the manipulator design is finished, the program will automatically generate 3D printing and knitting files for manufacturing. We demonstrate the utility of this pipeline by creating four custom manipulators tested on real-world tasks: screwing in a wing screw, sorting water bottles, picking up an egg, and cutting paper with scissors.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Taxes and Product Market Outcomes: Asymmetric Effects of Tax Cuts on Winners v. Losers</title>
<link href="https://hdl.handle.net/1721.1/144598" rel="alternate"/>
<author>
<name>Yoon, Rachel (Rachel Seou)</name>
</author>
<id>https://hdl.handle.net/1721.1/144598</id>
<updated>2023-09-25T14:59:17Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Taxes and Product Market Outcomes: Asymmetric Effects of Tax Cuts on Winners v. Losers
Yoon, Rachel (Rachel Seou)
This paper examines whether the differential effect of taxes on profitable versus loss-making firms affects their product prices and market share. Using data that allow for direct pricing and product tests - airline route and pricing data - I find evidence consistent with differential consequences of tax rate cuts for profitable versus loss firms. Specifically, after a tax rate cut, profitable airlines lower prices and enter markets where their dominant competitors include a financially constrained tax-loss airline. In addition, the data reveal that tax-loss airlines lose market share and exit routes after the tax rate cut. The results are economically meaningful - I find that airlines in a tax-loss position lose 3.3 percentage points in market share following a significant cut in corporate tax rates in routes where loss-making airlines collectively have higher market share. The evidence is consistent with tax rule changes affecting product markets and product market competition, and the effects vary based on tax status of the competitors.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bioresorbable Osmotic Pump for Long–term Contraception</title>
<link href="https://hdl.handle.net/1721.1/144597" rel="alternate"/>
<author>
<name>Park, Sanghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/144597</id>
<updated>2022-08-30T03:13:35Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Bioresorbable Osmotic Pump for Long–term Contraception
Park, Sanghyun
Family planning has been a key social issue in most developing countries, especially in Sub-Saharan Africa, yet there are still significant unmet needs for contraception. Among the various commercial contraceptive products, contraceptive implants are regarded as the most effective reversible option due to their high drug adherence. However, the implants currently on the market are mostly limited to non-bioresorbable matrix-diffusion systems. These Fickian diffusion-based systems raise problems such as inconstant release rates that compromise their long-term efficacy, the requirement of surgical removal, and a long pharmacokinetic tail that delays the user’s return to fertility. Here, we investigated a fully bioresorbable osmotic pump system that provides over 1.5 years of contraception. Osmotic pump systems are recognized for their capacity to provide near zero-order drug release kinetics coupled with immediate drug release cessation upon completion of osmotic swelling/displacement. In the wet tissue environment, the high osmolality of the osmotic engine causes water flux into the pump through a semi-permeable membrane, and the increased internal pressure from the expansion pushes the drug formulation out through the orifice on the other side. To the best of our knowledge, bioresorbable osmotic pump implants have never been introduced. Microcrystals of the contraceptive drug levonorgestrel were suspended in a viscous polymer solution to achieve long-term stability. To prevent osmotic engine leakage while allowing sufficient water permeation for expansion, we developed bioresorbable semi-permeable membranes by a non-solvent induced phase separation method with polycaprolactone (PCL) and achieved acceptable molecular weight cut-off values of around 50-100 kDa. Hyaluronic acid was selected for the osmotic engine, and our early work confirmed its potential for linear expansion and for overcoming the high hydraulic pressure of the viscous formulation. 
This fully bioresorbable, zero-order release osmotic pump system has the potential to be an impactful long-term drug delivery platform device for contraception, as well as other diseases that affect the global population where drug adherence is necessary for efficacy.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extracting Electromechanical Signals for Icebreaker Insights</title>
<link href="https://hdl.handle.net/1721.1/144596" rel="alternate"/>
<author>
<name>Moeller, Andrew William</name>
</author>
<id>https://hdl.handle.net/1721.1/144596</id>
<updated>2022-08-30T03:16:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Extracting Electromechanical Signals for Icebreaker Insights
Moeller, Andrew William
Nonintrusive load monitoring has a proven track record of providing benefits for equipment operation logging, fault detection and diagnostics, condition-based maintenance, and energy scorekeeping. A nonintrusive load monitor (NILM) can measure the aggregate electrical power at a central utility point and extract individual loads from this power stream. Segregating and identifying the unique electrical signatures of various shipboard machinery components allows a NILM to assess the health of equipment and predict potential failures before they are evident through traditional monitoring methods. NILMs have been installed on multiple US Coast Guard and US Navy vessels over the past several years, collecting vital data that has rapidly accelerated the monitoring capabilities of this technology. This work expands upon these previous successes and applies the same concepts to a 140 ft icebreaking tug, USCGC THUNDER BAY. The NILMs installed on THUNDER BAY are capable of directly monitoring the electric propulsion drive, which, coupled with the ship’s unique icebreaking mission, allows the NILM to gain crucial insights into ship operation that were not previously available. Additional improvements were developed for the NILM’s software and hardware components to incorporate wireless capability, allowing the NILM to act as a central processor for a physically securable network of wireless sensing nodes. Testing was conducted in four separate shipboard environments to confirm the feasibility of this network architecture. Specific methods for implementing this sensor network are discussed, and techniques for combining power and vibration measurements are presented to identify faults that could not previously be detected through power monitoring alone.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Virtual Muscle Model of the Arm for EMG-Driven Control&#13;
of Prostheses</title>
<link href="https://hdl.handle.net/1721.1/144594" rel="alternate"/>
<author>
<name>Fernandez, Michael F.</name>
</author>
<id>https://hdl.handle.net/1721.1/144594</id>
<updated>2022-08-30T03:33:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Virtual Muscle Model of the Arm for EMG-Driven Control&#13;
of Prostheses
Fernandez, Michael F.
Existing upper extremity prosthesis controllers are unable to provide effective control for persons with amputation, contributing to device abandonment rates as high as 50%. Neuromuscular modeling provides a powerful tool for the development of custom controllers to improve the clinical efficacy of prostheses.&#13;
&#13;
This thesis develops techniques for the creation of models of the arm using a reduced number of virtual muscles. These models take electromyography collected from the residual limb to estimate intended movements for prosthesis control. Neuromuscular model optimization and forward dynamics simulation are used to find parameters that fully define antagonist pairs of virtual muscles that actuate each model degree of freedom. Patient-specific models can be tailored to the morphology of patients with agonist-antagonist myoneural interface (AMI) constructs with minimal hand tuning.&#13;
&#13;
As a case study, a four degree-of-freedom arm model (allowing elbow flexion, index flexion, 3rd to 5th digit flexion, and thumb abduction) was optimized for a subject with unilateral transhumeral amputation who possesses two AMI constructs. The resulting model outputs kinematics that closely match measured kinematics. Joint angle predictions for the elbow, digits, and index finger were very highly correlated with reference trajectories (r = 0.92, r = 0.91, and r = 0.87, respectively), while predictions for the thumb were only moderately correlated (r = 0.55). The optimized model also shows the speed-accuracy tradeoff as quantified by Fitts' Law and achieves some degree of graded control.&#13;
&#13;
These results demonstrate how patient-specific neuromuscular models can replicate characteristics of volitional motor control. The highly intuitive control and restoration of native biomechanics granted by such biophysical controllers can allow persons with amputation more independence, raising their quality of life.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Approach to System Dynamics Modeling and Control Design</title>
<link href="https://hdl.handle.net/1721.1/144593" rel="alternate"/>
<author>
<name>Chen, George C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144593</id>
<updated>2022-08-30T03:38:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Approach to System Dynamics Modeling and Control Design
Chen, George C.
This thesis presents an in-depth investigation of the modeling and simulation of an optical fiber extrusion system and its controllers in production. Using production data measured during the fiber drawing process, a long short-term memory (LSTM) neural network is architected, implemented, and trained to model the process dynamics of the fiber drawing plant. Training experiments were conducted to investigate the effect of several parameters on the model’s performance. Furthermore, statistical analysis models with assumed structures are employed as part of the black-box system identification process to model controllers in the production system, subject to noise and disturbances. With the aforementioned components, a closed-loop simulation of the fiber extrusion system is then developed in MATLAB, demonstrating the feasibility of simulating mechanical systems in production using learned models. The approach developed in this study is suitable for data-driven deployment of many kinds of manufacturing plants in production, which may have limited operational domains due to mechanical constraints. The simulation, once implemented in hardware, could potentially replace the laborious, iterative tuning process of the controllers, and serve as a design tool to optimize these controllers using a digital twin.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Multi-Phase-CFD Frameworks for High Void Fraction Flow in Large Diameter Systems</title>
<link href="https://hdl.handle.net/1721.1/144587" rel="alternate"/>
<author>
<name>Aranda, Brandon A.</name>
</author>
<id>https://hdl.handle.net/1721.1/144587</id>
<updated>2022-08-30T03:58:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Assessment of Multi-Phase-CFD Frameworks for High Void Fraction Flow in Large Diameter Systems
Aranda, Brandon A.
Multi-phase CFD is widely applied to low void fraction bubbly flows in small geometry applications. At high void fractions, complex interactions occur that make modeling challenging. Hybrid models have been proposed for application to such conditions, as they resolve large structures using interface capturing methods while modeling dispersed structures with mixture approaches adding slip correlations or the Eulerian framework. Beyond simple bubbly flows, these hybrid formulations and multiphase frameworks remain largely untested, particularly for large nuclear reactor components. In this work, the Volume of Fluid framework (VOF), a hybrid Mixture-Multiphase with Large-Scale Interface capturing (MMP-LSI) framework, and the Eulerian-Eulerian framework are validated against experiments performed at the TOPFLOW and HUSTLE facilities. These facilities resemble the two-phase conditions of recent Small Modular light water reactor designs. This work’s objective is to assess the model performance consistently at different mesh resolutions and support future hybrid adoption. The results show that, on sufficiently resolved meshes, void fraction profiles are well predicted by the VOF method for the conditions of the TOPFLOW experiment, while also showing good resolution of the shape and distribution of the large gas structures. However, when applied to the mist flow conditions of the HUSTLE facility, the void fraction profiles deviate from the experimental results and do not show sufficient grid convergence, especially in the near-wall regions. The MMP-LSI method is still challenged in these applications since its inability to resolve smaller structures leads to larger errors than the VOF method. The Eulerian-Eulerian framework, although powerful for such large-scale industrial applications, was found to be limited due to the lack of validated interfacial closure models for the flow conditions of interest.
Further work would be required to advance the capabilities of the Eulerian framework for high void fraction flows. As a result, the VOF framework was found to be the most applicable for BWR simulations as it was able to model key characteristics of the flow that are relevant for validation and understanding of the flow conditions.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Framework for Zero Waste Structural Design</title>
<link href="https://hdl.handle.net/1721.1/144586" rel="alternate"/>
<author>
<name>Steelman, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/144586</id>
<updated>2022-08-30T03:45:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Computational Framework for Zero Waste Structural Design
Steelman, Alexandra
Construction accounts for one third of the world’s waste, much of which can be attributed to inefficiencies in the design process – necessitating innovative approaches to structural design. In the typical design process, a geometry is first determined, and then members are sized to the project's particular codes and requirements. Oftentimes, to satisfy these requirements, members are iteratively up-sized and then altered to fit within the geometry. By treating the geometry as fixed, this design approach assumes an infinite inventory of material and poorly utilizes total material volume. Instead of choosing the geometry of a project and then identifying members to fit within it, the design process can be improved by first determining the material quantity and then creating a geometry that uses the members most effectively. This material constrained approach has been applied in research such as circular design and beam shaping, but these applications do not address how to use all of the given inventory and thus still result in excess waste. In the furniture and fashion industries, a zero waste constraint has been applied to drive designs that are both functional and produce zero manufacturing waste. The goal of zero waste design is to devise a geometry that effectively uses all of the given material inventory and thus produces no waste in the process. That said, zero waste design has not been as widely applied for structures due to the added challenge of satisfying structural performance goals. This thesis accounts for both of these objectives by proposing a computational framework for zero waste structural design, specifically exploring a 2D domain of material and a target geometry of a pedestrian bridge. Four different computational methods have been applied to transform the material inventory, noted as the sheet domain, and the geometry, noted as the bridge domain.
This thesis works with a 4’ x 8’ sheet domain and a bridge span of eight feet, but the methods are generalizable beyond these specific dimensions. The project investigates the structural performance of each possible result through an automated structural modeling and analysis workflow for assemblies of plate elements. The results shown in this thesis prove zero waste design is a viable approach to structural design, demonstrating there are both practical and effective applications for the developed framework.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wake characteristics associated with logjams to inform river restoration</title>
<link href="https://hdl.handle.net/1721.1/144585" rel="alternate"/>
<author>
<name>Porter, Rovi C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144585</id>
<updated>2022-08-30T03:40:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Wake characteristics associated with logjams to inform river restoration
Porter, Rovi C.
In the past, logjams have been removed due to concerns about flooding, erosion, and destruction of property. However, logjams have been found to have many ecosystem benefits, including generating pools for salmonid spawning, creating propagule retention sites, and increasing biodiversity. Due to these benefits, engineered logjams are being placed back into rivers, especially in the Pacific Northwest of the U.S., to promote salmonid populations. Riverine habitats are created by variations in substrate size, depth, velocity, turbulence, temperature, and cover. All species have preferences for certain conditions, and even within a species, size and age can impact preference. For salmon specifically, juveniles prefer shallow, low-velocity regions while adults prefer deep regions with coarse substrate. The presence of logjams alters the bed morphology of the river and the availability of habitats. To better understand how design choices for a logjam can impact habitats, two important parameters, solid volume fraction (SVF) and channel width spanned, were varied in 8 constructed logjams. The wake generated by each logjam was analyzed in a recirculating flume for velocity, turbulence, and fine sediment deposition. All logjams displayed increased velocity on the unobstructed side and decreased velocity downstream of the logjam side. To first order, varying the solid volume fraction produced little variation in the stream-wise velocity or in its root mean square (a proxy for turbulence). Differences between the quarter-width and half-width spanning logjams were observed: the half-width logjam had increased velocity on the unobstructed side, a recirculation region, and lower turbulence in the center of the logjam and on the unobstructed side. Deposition of fine sediment for the quarter-spanning logjams was not greater than that of the open bed case under the same conditions.
Thus, the fine sediment deposition observed downstream of logjams is not correlated with the solid volume fraction, indicating that deposition may instead be correlated with bed depth or may occur further downstream, which could not be studied. The logjam width had a greater impact on velocity and turbulence than variations in solid volume fraction. All the logjams provided variation in available habitats for riverine fish. Considering both fish preferences and the wake effects of logjam characteristics will result in more effective river restoration projects.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-time Social Media Content Recommendation for Live Sports Events</title>
<link href="https://hdl.handle.net/1721.1/144583" rel="alternate"/>
<author>
<name>Liu, Renbin</name>
</author>
<id>https://hdl.handle.net/1721.1/144583</id>
<updated>2022-08-30T03:23:31Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Real-time Social Media Content Recommendation for Live Sports Events
Liu, Renbin
Social media has an ever-greater presence in the sports arena. Many people who watch live sports games also follow social media platforms for live coverage and commentary. Although this additional content can enrich the viewing experience, it can also distract the audience from key events in a live sports game. In this thesis, we propose a system that automatically presents relevant and engaging social media content for a live game. We employ techniques in Natural Language Processing to filter social media posts and select the best ones for users to follow while watching the game. With an engagement prediction model augmented with metadata about each post and its author, the audience can enjoy the game without missing out on important game coverage and reactions on social media.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Backtracking on Delivery Routes through Community Detection</title>
<link href="https://hdl.handle.net/1721.1/144575" rel="alternate"/>
<author>
<name>Noszek, Joseph Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/144575</id>
<updated>2022-08-30T03:10:51Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Measuring Backtracking on Delivery Routes through Community Detection
Noszek, Joseph Robert
In logistics, backtracking is the act of a route returning to an area that it has already visited. Despite backtracking’s clear potential to be a source of inefficiency and driver frustration, there is little existing research on backtracking in transportation. We set out to devise a method to measure backtracking that is consistently scalable and transferable to different route settings. Our measurement method utilizes community detection, a family of machine learning algorithms for clustering nodes within graphs based on edge structure and weight. We then investigate the ability of backtracking, as measured by our community detection-based method, to predict suboptimality of Asymmetric Traveling Salesman Problem (ATSP) solutions. We find that backtracking does demonstrate viability as a predictor of suboptimality, particularly when it utilizes the Louvain algorithm or the Leiden algorithm for community detection. We also investigate the relationship between backtracking and suboptimality when adjusting our measurement process and when considering the additional variables of the number of customers and the number of backtracking instances. After these adjustments, we observe continued and increased viability as a predictor of suboptimality.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Water Injection on Emissions of Nitrogen Oxides from Aircraft Engines</title>
<link href="https://hdl.handle.net/1721.1/144573" rel="alternate"/>
<author>
<name>Zahid, Syed Shayan</name>
</author>
<id>https://hdl.handle.net/1721.1/144573</id>
<updated>2022-08-30T03:04:53Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Impact of Water Injection on Emissions of Nitrogen Oxides from Aircraft Engines
Zahid, Syed Shayan
Aircraft NOₓ emissions form as a result of high-temperature combustion in the engine, and they lead to negative climate and air quality impacts. These emissions increase with temperature, so a NOₓ reduction technique has been proposed where water is injected in different regions of a gas turbine engine to cool down the peak flame temperature and decrease NOₓ production. In previous studies, water injection has been simulated on existing engine models instead of co-optimizing the engine cycle and design characteristics for this technology. This thesis uses pyCycle, a gas turbine cycle analysis tool, to optimize an engine cycle based on this water injection strategy and assess the performance impact of water injection on aircraft missions of varying ranges. This impact is then compared with the emissions impact of water injection estimated by PyCaSo, a combustor modeling tool, to evaluate the potential benefits of this technology. It is found that for a turbofan engine, water injection has the potential to reduce NOₓ emissions by up to 80% without compromising the fuel efficiency of the aircraft. Water injection upstream of the compressors yields performance benefits due to the reduction in required compression work, while injection downstream of the compressors incurs a penalty to fuel efficiency. The NOₓ-reduction benefit of water injection is greater for engines operating with rich front-end combustors than for those running lean-burn configurations. For shorter range missions under 3000 km, the weight penalty due to carrying the additional water on the airplane is offset by the performance benefit of water injection upstream of the compressors, depending on the amount of water injected and the engine model, whereas for longer range flights beyond 3000 km, the water weight penalty outweighs the performance benefit in most cases.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A 2-D Scalable Third harmonic radiator at 291.3 GHz with -2 dBm of Radiated Power in 22 nm FinFET Technology</title>
<link href="https://hdl.handle.net/1721.1/144572" rel="alternate"/>
<author>
<name>Elsheikh, Mohamed A. G.</name>
</author>
<id>https://hdl.handle.net/1721.1/144572</id>
<updated>2022-08-30T03:26:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A 2-D Scalable Third harmonic radiator at 291.3 GHz with -2 dBm of Radiated Power in 22 nm FinFET Technology
Elsheikh, Mohamed A. G.
The Terahertz (THz) band exhibits potential for novel applications and improvements to existing systems such as spectroscopy, imaging, and high-speed communications. However, the lack of high-power sources at this frequency range hinders the wide-scale commercial adoption of these technologies. This THz gap exists due to the inability of electronic and optical techniques to generate high power levels at this frequency range. Novel generation techniques have been and continue to be investigated to improve the performance of these sources. In this work, we present a Terahertz (THz) radiator at 291.3 GHz. The footprint of the unit radiator is half-wavelength at the radiation frequency, which enables 2-D scalability. Passive coupling is implemented between the radiator elements for de-centralized phase locking and free-space power combining. The proposed radiator is implemented on Intel 22 nm FinFET (FFL) technology, which enables high-performance high-speed circuits. The presented radiator adds to the state of the art and combines several advantages of previous designs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe Exploration for Dynamic Computer Systems Optimization</title>
<link href="https://hdl.handle.net/1721.1/144571" rel="alternate"/>
<author>
<name>Kim, Hyunji</name>
</author>
<id>https://hdl.handle.net/1721.1/144571</id>
<updated>2022-08-30T03:47:25Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Safe Exploration for Dynamic Computer Systems Optimization
Kim, Hyunji
Modern computer systems need to execute under strict safety constraints (e.g., power limit), but doing so often conflicts with their ability to deliver high performance (i.e., minimal latency). To meet these conflicting goals, prior work uses machine learning to automatically tune hardware resources such that system executions meet safety constraints optimally. Such solutions monitor past system executions to learn the system’s behavior under different hardware resource allocations before dynamically tuning resources to optimize the application execution. However, system behavior can change significantly between different applications and even different inputs of the same applications. Hence, models learned using data collected a priori are often suboptimal and violate safety constraints when used with new applications and/or inputs.&#13;
&#13;
To address this limitation, I introduce the concept of an execution space, which is the cross product of hardware resources, input features, and applications. Thus, a configuration is defined as a tuple of hardware resources, input features, and application. To dynamically and safely allocate hardware resources from the execution space, I present SCOPE, a resource manager that leverages a novel safe exploration framework. SCOPE operates iteratively, with each iteration (i.e., reallocation) having three phases: monitoring, safe space construction, and objective optimization. To construct a safe set with high coverage (i.e., a high number of safe configurations in the predicted safe set), SCOPE introduces a locality preserving operator so that SCOPE’s exploration will rarely violate the safety constraint and will incur only small-magnitude violations when it does. I evaluate SCOPE’s ability to deliver improved latency while minimizing power constraint violations by dynamically configuring hardware while running a variety of Apache Spark applications. Compared to prior approaches that minimize power constraint violations, SCOPE consumes comparable power while improving latency by up to 9.5×. Compared to prior approaches that minimize latency, SCOPE achieves similar latency but reduces power constraint violation rates by up to 45.88×, achieving almost zero safety constraint violations across all applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Engineering of Directional Message-Passing Algorithms Through a Stencil-Based Approach for Applications in Molecular Dynamics</title>
<link href="https://hdl.handle.net/1721.1/144570" rel="alternate"/>
<author>
<name>Rosa, Isabel</name>
</author>
<id>https://hdl.handle.net/1721.1/144570</id>
<updated>2022-08-30T03:02:09Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Performance Engineering of Directional Message-Passing Algorithms Through a Stencil-Based Approach for Applications in Molecular Dynamics
Rosa, Isabel
The molecular dynamics method, used by scientists across the fields of physics, materials science, and biology, is an increasingly popular way to simulate particle interactions. Current implementations of molecular dynamics simulators can verify macromolecular structures, examine atomic-level phenomena that cannot be observed directly, and predict the behavior of unstudied proteins. The existing implementations, however, rely on inefficient directional message-passing algorithms on graph neural networks. This thesis presents a novel approach for the optimization of these algorithms using a stencil-like technique. The stencil-based algorithm, called StencilMD, provides both the benefits of parallelization and improved cache locality. The results show that StencilMD successfully reduces the amount of time required to run a molecular dynamics simulation by as much as 28.57% with a corresponding 26.92% decrease in cache misses.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Effective and Human-like Policies for Strategic, Multi-Agent Games</title>
<link href="https://hdl.handle.net/1721.1/144569" rel="alternate"/>
<author>
<name>Jacob, Athul Paul</name>
</author>
<id>https://hdl.handle.net/1721.1/144569</id>
<updated>2022-08-30T03:01:39Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning Effective and Human-like Policies for Strategic, Multi-Agent Games
Jacob, Athul Paul
We consider the task of building effective but human-like policies in multi-agent decision-making problems. Imitation learning (IL) is effective at predicting human actions but may not match the strength of expert humans, while reinforcement learning (RL) and search algorithms lead to strong performance but may produce policies that are difficult for humans to understand and coordinate with.&#13;
&#13;
We first study the problem of producing human-like communication in latent language policies (LLPs), in which high-level instructor and low-level executor agents communicate using natural language. While LLPs can solve long-horizon RL problems, past work has found that LLP training produces agents that use messages in ways inconsistent with their natural language meanings. We introduce a sample-efficient multitask training scheme that yields human-like communication in a complex realtime strategy game.&#13;
&#13;
We then turn to the problem of producing human-like decision-making in a more general class of policies. We develop a regret-minimization algorithm for imperfect information games that can leverage human demonstrations. We show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human-likeness of IL while achieving much higher reward.&#13;
&#13;
This thesis is based on the papers Multitasking Inhibits Semantic Drift, published at NAACL 2021, and Modeling Strong and Human-Like Gameplay with KL-Regularized Search, which is currently under review for publication at ICML 2022. The contents of these papers are used with the permission of co-authors David J. Wu, Gabriele Farina, Adam Lerer, Hengyuan Hu, Anton Bakhtin, Mike Lewis, Noam Brown, and Jacob Andreas.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Deformable Templates for Brain MRI</title>
<link href="https://hdl.handle.net/1721.1/144566" rel="alternate"/>
<author>
<name>Rakic, Marianne</name>
</author>
<id>https://hdl.handle.net/1721.1/144566</id>
<updated>2022-08-30T03:37:26Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning Deformable Templates for Brain MRI
Rakic, Marianne
Deformable templates, or atlases, are images, often labelled, that represent a typical anatomy for a population. They are commonly used in medical image analysis for population studies and computational anatomy tasks. Practitioners use image alignment techniques to compare a subject scan with the template. Unfortunately, developing a template is a computationally expensive process with existing methods. Usually, at most one template is available per population of images or anatomy. As a result, analysis is often conducted with sub-optimal templates. In this thesis, we propose a machine learning framework that uses convolutional alignment neural networks to efficiently create both unconditional and conditional templates and the corresponding label maps. We demonstrate our method on a large 3D brain MRI dataset. This is particularly relevant in medical image analysis, where templates are difficult to build. We show that this framework can learn sharp templates that are representative of the population and can leverage label maps when available. Our method enables rapid registration of any brain image to our template. Moreover, it has the option of producing representative conditional templates given subject-specific attributes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytics for Sustainable Logistics: Fuel Efficiency&#13;
and Hydrogen Planning</title>
<link href="https://hdl.handle.net/1721.1/144565" rel="alternate"/>
<author>
<name>Humphries, Samuel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/144565</id>
<updated>2022-08-30T03:42:00Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analytics for Sustainable Logistics: Fuel Efficiency&#13;
and Hydrogen Planning
Humphries, Samuel S.
As the global community moves to decarbonize the modern economy, freight-based emissions—comprising about 8% of global emissions—pose a major challenge to meeting the Paris Agreement’s goal of limiting global warming to 1.5 ∘C to 2 ∘C by 2100. One category of solutions seeks to improve current operational systems and processes, often in data-rich environments. A second category involves leveraging emerging technologies toward zero-emissions transportation, often in the absence of detailed operating data and established procedures. In this thesis, we address problems in both categories, employing data-driven machine learning approaches to enhance fuel efficiency in existing operations as well as stochastic optimization to plan the deployment of hydrogen production facilities in future operations.&#13;
&#13;
In Chapter 2, we employ data-driven techniques to explore the drivers of fuel efficiency in the logistics sector in an effort to improve the current operating practices of Rivigo—a large logistics provider in India. Historically, driver training and vehicle maintenance have been the two main areas of focus in the literature to enhance the fuel efficiency of road freight. The advent of telematics, however, provides more granular visibility into fuel consumption, giving rise to opportunities to assess a broader array of fuel-saving interventions. Using this data, we find that driver training and vehicle maintenance have only a limited impact on fuel efficiency. In contrast, we find a more significant effect of driving behaviors, measured through speed and acceleration variables, and of the road infrastructure, captured by the road section of the vehicle. We bring all these fuel efficiency interventions into a unified comparative framework based on machine learning and optimization. Results suggest that driver training and vehicle maintenance have a smaller overall impact than other interventions such as speed-limiting devices and route optimization. These findings suggest that current practices and policies need to capture an array of interventions to improve fuel efficiency in logistics.&#13;
&#13;
Next, Chapter 3 optimizes the deployment of solar-hydrogen systems. Recent advances in vehicle technologies offer the promise to replace conventional gasoline-powered engines with zero-emission technologies based on hydrogen fuel. To be successful, however, hydrogen-powered vehicles need supporting fueling infrastructure to enable their large-scale deployment in logistics fleets. Moreover, to truly make a dent toward decarbonization, hydrogen needs to be produced from carbon-free electricity sources. In response to these two interrelated challenges, we propose an optimization approach to support the deployment of joint solar-hydrogen systems. Specifically, we formulate a two-stage stochastic mixed-integer model to optimize the location of solar power plants, hydrogen production facilities and a supporting distribution infrastructure, along with subsequent operations pertaining to production and distribution decisions in response to hydrogen demand. We implement the model using real-world data from Dordogne, France. The model’s solution mirrors the current solution planned in practice, but differs by inducing a higher electrolyzer capacity and by distributing this capacity across two locations. Results suggest that the proposed optimization approach can provide significant cost reductions along with reductions in demand shortages, as compared to existing solutions implemented in practice, alternative benchmarks, and deterministic optimization approaches.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Thin Film Supercurrent and Photodetection in Wide Niobium Nitride Wires</title>
<link href="https://hdl.handle.net/1721.1/144564" rel="alternate"/>
<author>
<name>Medeiros, Owen</name>
</author>
<id>https://hdl.handle.net/1721.1/144564</id>
<updated>2022-08-30T03:02:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigation of Thin Film Supercurrent and Photodetection in Wide Niobium Nitride Wires
Medeiros, Owen
Over the past two decades, superconducting nanowire single photon detectors have become the dominant platform for detection at telecommunication wavelengths. Despite their practical success, the theoretical framework that describes the detection mechanism within the nanowire is continually evolving. Early phenomenological models suggested that a hot region forms across the superconducting strip after the arrival of a photon, producing a measurable voltage only if the diameter of the hot region extends across the width of the strip. However, predictions based on the kinetic-equation approach showed that within a certain operating regime detection no longer depends on the strip’s width. This prediction was later supported by the experimental demonstration of single photon detection in strips 1-3 &#120583;m wide. The ability to fabricate detectors with larger widths would allow for higher signal-to-noise ratios as well as higher fabrication yield compared to narrow wires. These advantages could potentially unlock some long-sought applications of single photon detectors, such as large-area detectors or &gt;kilopixel arrays of detectors. To produce wide-wire detectors, the design and material properties must be well optimized. This thesis covers the development of wide single photon detectors using nitrogen-rich niobium nitride.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Universally Applicable Differential Privacy System: Redefining Utility in Database Privacy to Prioritize User Experience</title>
<link href="https://hdl.handle.net/1721.1/144558" rel="alternate"/>
<author>
<name>Xu, Helen J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144558</id>
<updated>2022-08-30T03:08:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Universally Applicable Differential Privacy System: Redefining Utility in Database Privacy to Prioritize User Experience
Xu, Helen J.
Data privacy is a fundamental ethical goal: we must aim to innovate without exploiting. To provide formal privacy guarantees, differential privacy has become the central method of implementing database privacy. However, many barriers to widespread adoption remain: general methods lack accuracy, and more innovative methods lack applicability beyond a specific kind of data or query. This project aims to create an effective differentially private system that provides a user experience identical to working with raw data, redefining utility in database privacy to focus on the user experience.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Potential Demand of On-Demand Urban Air Mobility Via Agent-Based Simulation</title>
<link href="https://hdl.handle.net/1721.1/144557" rel="alternate"/>
<author>
<name>Chen, Kexin</name>
</author>
<id>https://hdl.handle.net/1721.1/144557</id>
<updated>2022-08-30T03:43:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analysis of Potential Demand of On-Demand Urban Air Mobility Via Agent-Based Simulation
Chen, Kexin
This thesis analyzes the potential demand for Urban Air Mobility (UAM) through agent-based simulation. The comprehensive UAM model proposed by this thesis combines demand, supply, and their interactions at fine spatial and temporal levels. It has been implemented in the state-of-the-art mobility simulation platform SimMobility and includes the following considerations: (i) demand-centric vertiport placements and realistic vertiport capacity generation; (ii) explicit service operations that include rebalancing, charging, and transition activities at vertiports; and (iii) a behaviorally sound decision-making process capturing mode-switching behaviors. Simulations of at-launch, near-term, and long-term scenarios, varying in capacity, accessibility, and pricing constraints, are performed for two real U.S. cities, along with analyses of the associated uncertainties. The results show that UAM presents a niche market, with a penetration rate of only 1.45% to 1.81% even in the long-term scenario for the two cities studied. Furthermore, the potential UAM users are primarily high-income and car-oriented, indicating equity issues. Work and drive-alone trips have the highest penetration rate, and short-range trips constitute the majority of the potential UAM demand. Lastly, capacity, accessibility, and pricing show significant impacts on demand, and these effects are city-specific. This thesis contributes to the literature by analyzing the impacts of UAM on mobility patterns, specifically focusing on the potential market size and demand characteristics under various supply configurations, allowing policymakers and the industry to make informed decisions regarding UAM market diffusion.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptual Structural Design of Core Components for a Horizontal, Compact HTGR</title>
<link href="https://hdl.handle.net/1721.1/144556" rel="alternate"/>
<author>
<name>Borman, Brian William</name>
</author>
<id>https://hdl.handle.net/1721.1/144556</id>
<updated>2022-08-30T03:13:48Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Conceptual Structural Design of Core Components for a Horizontal, Compact HTGR
Borman, Brian William
This thesis investigates structural, thermomechanical, and fabrication-based challenges associated with a horizontally oriented High Temperature Gas Reactor (HTGR) concept and its core components. HTGRs hold significant potential for decarbonizing industrial process heat markets. Collaborators at MIT recently introduced the Horizontal, Compact HTGR (HC-HTGR), which is expected to achieve a 20% reduction in overall cost relative to vertical HTGRs. The horizontal re-orientation and modularization of the HC-HTGR and its accompanying structural challenges form the basis of this work. &#13;
&#13;
First, a detailed thermal/irradiation expansion analysis is conducted to assess graphite interaction with reactor core spatial gaps. This also includes a study of the relative displacements that occur at horizontal control rod and/or control drum locations at various points along the core’s length, and addresses the resulting implications for potential support conditions of the control sleeve spans. Next, graphite element features and control element sleeves are analyzed for strength with consideration of varying material properties and reactor environment conditions. Finally, an assessment of perimeter graphite reflector elements for structural integrity and a design of metallic support components are performed. Through design and analysis, this thesis increases the feasibility of the HC-HTGR concept, provides cross-discipline implications from a structural perspective for reactor core components, and ultimately advances a potential carbon-free technology for high temperature process heat markets.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Knowledge Base Using Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/144555" rel="alternate"/>
<author>
<name>Gammack, Jack George Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/144555</id>
<updated>2022-08-30T03:36:04Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design Knowledge Base Using Natural Language Processing
Gammack, Jack George Alexander
The rise of big data and machine learning in engineering has brought high hopes that designers can learn from past successes and failures. However, this has generally only been possible when the available data is in numerical or graphical form, where time-series data analytics or convolutional neural networks (CNNs) can be used. Very little work has been done toward utilizing the massive amount of textual data that is collected during the early-stage design process, such as solution concept generation and systematic design decisions. Design documentation at this stage contains extremely valuable information and expert knowledge that has been iterated upon for hundreds of years, but much of that knowledge can only be stored in textual form and is therefore generally unused in big data and machine learning methods for engineering design. This thesis aims to improve the availability and accessibility of knowledge stored within engineering design documentation and to facilitate the learning process of junior designers by unlocking big data from textual design knowledge. State-of-the-art models in Natural Language Processing (NLP) are trained on and applied to large corpora of heterogeneous forms of technical design documentation to enable accurate information retrieval via semantic search capabilities. Text analysis algorithms are applied to connect relevant textual and non-textual design data present within documentation to enable searchable representations of non-textual design data. The goal of this thesis is to provide systems and direction for improving human design and engineering practice by learning from past design successes and failures. These systems are applied to case studies of massive corpora of design documentation in the fields of climate change research and micro/nano research to develop semantic knowledge management systems for each domain.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing a Computer’s Ability to Monitor Data Provenance Events</title>
<link href="https://hdl.handle.net/1721.1/144550" rel="alternate"/>
<author>
<name>Becker, Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/144550</id>
<updated>2022-08-30T03:34:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Analyzing a Computer’s Ability to Monitor Data Provenance Events
Becker, Scott
As the amount of data in the world continues to increase, the ability to track data provenance becomes more and more important. To tackle this issue, a tool, the Knowledge Network Programming System (KNPS), was built to track data history. To do this, KNPS monitors a computer’s processes and file system to determine the events that modified the data on the computer. This paper examines how the amount of CPU usage the KNPS tool consumes affects how well the tool is able to capture these events. Additionally, a prediction mechanism was built into the tool to try to predict events that were not captured. The effectiveness of the prediction mechanism is explored in this paper as well.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Impact of COVID19 on US State GDPs</title>
<link href="https://hdl.handle.net/1721.1/144549" rel="alternate"/>
<author>
<name>Feldman, Evan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144549</id>
<updated>2022-08-30T03:02:54Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Quantifying the Impact of COVID19 on US State GDPs
Feldman, Evan J.
We have developed a system dynamics model of the economic impact of COVID-19 in the USA. Our model calculates the net impact of COVID-19 on a given state's GDP and subtracts it from the non-pandemic counterfactual (continued growth at the same rate as in the immediate pre-pandemic years). The model’s results show a $1.7 trillion loss in GDP attributable to COVID-19, spread out over the two years since the initial shock in Q1 of 2020.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from Censored and Truncated Data in Practice</title>
<link href="https://hdl.handle.net/1721.1/144548" rel="alternate"/>
<author>
<name>Stefanou, Patroklos N.</name>
</author>
<id>https://hdl.handle.net/1721.1/144548</id>
<updated>2022-08-30T04:05:42Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Learning from Censored and Truncated Data in Practice
Stefanou, Patroklos N.
This thesis presents an experimental study of the methods and algorithms developed to learn from truncated data. In my work, I provide a theoretical framework for learning from missing data, and then show results from the package that I have developed to alleviate such biases.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theory and Applications of Matrix Completion in Genomics Datasets</title>
<link href="https://hdl.handle.net/1721.1/144547" rel="alternate"/>
<author>
<name>Stefanakis, George</name>
</author>
<id>https://hdl.handle.net/1721.1/144547</id>
<updated>2022-08-30T03:09:13Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Theory and Applications of Matrix Completion in Genomics Datasets
Stefanakis, George
The advent of rapid and efficient biological screening and sequencing technologies has enabled high-throughput data collection, opening the door to improvements in drug discovery, disease identification, and personalized medicine, among others. The size and scope of such datasets is unprecedented, and their increased availability over the past decade, in conjunction with rapid advancements in statistical inference and machine learning, has paved the way for an explosion in research. Still, many problems in this space are yet-unexplored or still in their infancy, either due to data availability or lack of computationally efficient or high-accuracy methods for modeling and prediction. In this work, we develop theory and demonstrate empirical results for use of the novel Neural Tangent Kernel (NTK) in matrix completion. We derive the functional form of the NTK for a single-hidden-layer, infinite-width neural network with ReLU activation, and develop a framework applying the NTK to matrix completion. We explore a specific application of this framework, using the Connectivity Map dataset of gene expression data for various cells and perturbations, demonstrating competitive results as compared to other methods. Additionally, we analyze our contributions through the auxiliary lens of performance engineering and develop concrete algorithms for accurate, performant, and intuitive biological imputation.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A recursive matched-filter to systematically explore a volcanic long-period earthquake swarm</title>
<link href="https://hdl.handle.net/1721.1/144546" rel="alternate"/>
<author>
<name>Wimez, Mathilde</name>
</author>
<id>https://hdl.handle.net/1721.1/144546</id>
<updated>2022-08-30T03:57:02Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A recursive matched-filter to systematically explore a volcanic long-period earthquake swarm
Wimez, Mathilde
The matched-filter technique is an effective way to detect repeats, or near-repeats, of a seismic source, but prior identification of an event from that source to use as a template is required. We propose a recursive matched-filter approach to systematically explore earthquake swarms, here applied to a swarm of volcanic long-period seismicity beneath Mount Sidley in Antarctica. We start with a single visually chosen template event with a high signal-to-noise ratio. We then extend our template database by selecting new templates to use in a subsequent matched-filter search from the newly detected set of events, allowing us to recursively expand the number of templates. We demonstrate that each iteration of the matched-filter search progressively extends the spatial coverage of our set of templates away from the original template event. In such a way, our proposed method overcomes the matched-filter search's strictest constraint: that an event must already be identified to detect other similar events. Our recursive matched-filtering approach is well suited for the systematic exploration of earthquake swarms in both volcanic and tectonic contexts.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards the Advancement of Rotating Synthetic Aperture Space Telescope Technology</title>
<link href="https://hdl.handle.net/1721.1/144543" rel="alternate"/>
<author>
<name>Kramer, Evan Laith</name>
</author>
<id>https://hdl.handle.net/1721.1/144543</id>
<updated>2022-08-30T04:06:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Towards the Advancement of Rotating Synthetic Aperture Space Telescope Technology
Kramer, Evan Laith
Optical imagery of the Earth from space-based telescopes is one of the most valuable tools for understanding various large and small-scale processes. Value is derived from the unique observing perspective on orbit, which has enabled an expansive list of new science from the discovery of the ancient city of Tanis in Egypt to the creation of maps that locate invasive plant species on the Galapagos islands. Regardless of the observing target, it is desirable to obtain the highest resolution imagery possible. The traditional approach to improving image resolution is to increase the size of the optical payload’s aperture. However, larger aperture payloads become prohibitively difficult to manufacture and excessively expensive to design, build, launch, and operate. Application of rotating synthetic aperture (RSA) technology, which is a kind of dilute aperture imaging system, to space telescope designs has the potential to unlock the design space of low mass and cost, high resolution space telescopes. RSAs employ a high aspect ratio rectangular aperture that spins about the principal optical axis. A complete image is formed after a 180 degree rotation during which multiple frames are acquired to fully sample the imaging system’s optical transfer function.&#13;
&#13;
While the advanced observing capabilities of RSAs hold much promise, the ability to accurately control an RSA's spinning and target tracking dynamics remains an unproven engineering challenge. To this end, the purpose of this work is twofold. First, spin rate ranges and torque requirements for RSA imaging maneuvers in various Earth orbital regimes were determined based on signal to noise ratio (SNR), look angle, ground sampling distance (GSD), field of view (FOV), and image smear. These results were used to define aspects of RSA imaging maneuver concept of operations (CONOPs) to serve as reference trajectories for future control algorithm testing. Second, characterization of the attitude determination subsystem of a larger RSA testbed, the dynamics and controls testbed (DCT), was accomplished by conducting laboratory experiments to determine noise figures and biases present in the subsystem's attitude sensors. This characterization enabled development of software-based sensor emulators to be integrated into RSA control algorithm simulations and accurate determination of estimation parameters to be implemented during RSA imaging maneuver tests on the DCT. The work presented in this thesis contributed to advancing the technology readiness level (TRL) of RSA space telescope attitude control and ultimately seeks to enable further development of this potential next generation observing platform.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shipboard Fault Detection Methods for Condition-based Maintenance</title>
<link href="https://hdl.handle.net/1721.1/144542" rel="alternate"/>
<author>
<name>Quinn, Devin Wayne</name>
</author>
<id>https://hdl.handle.net/1721.1/144542</id>
<updated>2022-08-30T03:02:14Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Shipboard Fault Detection Methods for Condition-based Maintenance
Quinn, Devin Wayne
Vibration analysis can measure and track machine health. Computational advances in signal processing that leverage spectral coherence to identify subtle shifts in cyclostationary behavior provide new opportunities in vibration-based monitoring. The acquisition of vibration measurements must overcome significant practical challenges for successful vibration analysis. This work demonstrates vibration analysis for shipboard fault detection. Custom instrumentation and measurement techniques are applied to compressors, pumps, fans, and other shipboard electric machines.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete-to-Complete: The Fundamentals of Design Directed Robotics</title>
<link href="https://hdl.handle.net/1721.1/144541" rel="alternate"/>
<author>
<name>Sampson III, Myles B.</name>
</author>
<id>https://hdl.handle.net/1721.1/144541</id>
<updated>2022-08-30T03:53:15Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Discrete-to-Complete: The Fundamentals of Design Directed Robotics
Sampson III, Myles B.
Since building construction is an inherently complex process, the architecture, engineering, and construction (AEC) industry has not fully adopted automation into its practices. At the same time, designers have not embraced or considered robotic tools in their creative design processes. This thesis argues that the AEC industry must use automation methods that originate from established manufacturing procedures to expand the creative output of the disciplines. Leveraging the AEC disciplines’ reliance on digital design, Discrete-to-Complete presents an accessible, adaptive, equitable framework for robotic fabrication. Discrete-to-Complete outlines a new method of robotic construction that combines architectural design, discrete assembly, and Shape Grammars into a design-driven method for robotic construction. In a series of robotic fabrication experiments, this research creates a design-directed approach to robotic fabrication, demonstrates the advantages of rule-based assembly processes, and introduces a workflow to fabricate architectural structures using a position-based six-axis industrial robotic arm. The first experiment outlines how a rule-based approach can strengthen design production using robotic fabrication. In the second experiment, students in an architectural workshop test the application of rule-based robotic fabrication. In the third experiment, I use attachment features on customized building elements to build an arch. The fourth experiment evaluates self-correcting geometry for architectural building elements. The final experiment applies self-correcting building elements to decomposed architectural structures. By validating Discrete-to-Complete, a shape-grammatical approach to robotic fabrication, I introduce the fundamentals of design-directed robotics and generate a comprehensive method of automated construction.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Disruptive Innovations in High Growth Organizations when Architecting an Enterprise</title>
<link href="https://hdl.handle.net/1721.1/144540" rel="alternate"/>
<author>
<name>Niday, Tyler C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144540</id>
<updated>2022-08-30T03:43:44Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Enabling Disruptive Innovations in High Growth Organizations when Architecting an Enterprise
Niday, Tyler C.
Enterprises continue to push their organizations to enable disruptive innovation discovery, both to be at the forefront of new markets and to ensure their existing products are not disrupted. Being at the forefront of disruption can allow an enterprise to persist through many market and product cycles. Growing enterprises have been architected in many ways, including mimicking startups, creating skunkworks, and using many other architecting techniques, to find technologies and market fits that could create a new market or compete in the current one. Yet very few companies have succeeded in architecting enterprises that enable disruptions.&#13;
 &#13;
This work proposes a set of guiding principles that should be followed when architecting an innovation enterprise to enable disruptive technologies in high-growth organizations. These guiding principles have been vetted through research on ways to increase the likelihood of success and on generating the appropriate paradoxical tensions that are needed when a corporate startup culture is embraced.&#13;
&#13;
Additionally, architecting frameworks are explored to determine whether any are more suitable for architecting an enterprise specifically focused on enabling disruptive innovations. The Architecting, Innovation, and Enterprise Strategy (ARIES) framework is selected as a high-potential framework because it uses multiple viewpoints from which to observe the as-is architecture and is intended to be used pre-transformation, allowing the architect to capture the ecosystem’s external needs and envision a holistic future. This framework is then used as a blueprint, in addition to the guiding principles, to complete a post-transformation analysis of Blue River Technology, a high-growth organization focused on delivering disruptive innovations. The ARIES framework serves as an evaluation tool to assess the effectiveness of a transformation undertaken to enable disruptive innovations. A set of recommendations is provided based on the case study.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examining the Post-Pandemic Role of Shared Micromobility: A Study of Travel Behavior, Policy, and Equity in Motion</title>
<link href="https://hdl.handle.net/1721.1/144539" rel="alternate"/>
<author>
<name>Lee, Jacqueline</name>
</author>
<id>https://hdl.handle.net/1721.1/144539</id>
<updated>2022-08-30T03:57:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Examining the Post-Pandemic Role of Shared Micromobility: A Study of Travel Behavior, Policy, and Equity in Motion
Lee, Jacqueline
Since the start of the coronavirus disease 2019 (COVID-19) pandemic, travel behavior has changed in unprecedented ways. Additionally, with the rise of new variants of the virus and with heightened priorities on social distancing, it is important that new transportation alternatives be introduced to mitigate fears of travel and to prevent further reliance on private vehicles, both of which could result in significant environmental, economic, and societal consequences. In light of this, this thesis studies shared micromobility systems, a mode of transportation that is especially compelling for a post-pandemic society.&#13;
&#13;
Through a series of three experiments, this work: 1) demonstrates how shared micromobility ridership demand and behavior have changed throughout the pandemic; 2) identifies factors linked to ridership changes and measures the impact of micromobility providers’ promotions implemented during the pandemic; and 3) proposes a set of novel fairness metrics to assess the equality of outcomes in micromobility trips while illustrating the tradeoffs between location privacy and fairness metric accuracy.&#13;
&#13;
Notable results from these experiments include a transition away from pre-pandemic peak ridership patterns around 7-9 am and 5-7 pm and towards a steadier increase in ridership throughout the morning to afternoon periods. Moreover, travel time was frequently found to be comparable between public transit and shared micromobility modes for specific central urban neighborhoods, with the latter leading to shorter trips in many areas.&#13;
&#13;
Further, spatial lag and ordinary least squares regression analyses demonstrated both persistent and temporary changes in factors correlated with micromobility ridership. Two meaningful examples include the steady decrease in correlation - as measured by the magnitude of regression coefficients - for the female population and the initial spike and subsequent decline in the coefficients for communities of color. Changes for other significant variables were also explored and interpreted.&#13;
&#13;
The last experiment applied concepts of equality from various fields, including economics, computer science, and transportation, to define a set of fairness metrics with which micromobility trips can be evaluated. The effect of processing such trip data through a geo-indistinguishability privacy mechanism was then analyzed by its impact on these fairness measures. The results of this third study demonstrated that a reasonable degree of privacy can be secured without entirely compromising the utility of micromobility data in assessing how fair such systems may be.&#13;
&#13;
Motivated by the COVID-19 pandemic, this thesis explores its effects on shared micromobility systems in terms of demand, policies to encourage ridership, and factors affecting use. From this, it theorizes how these systems can be considered and evaluated by policy-makers and city planners looking to adapt to how the pandemic has radically altered travel behavior and urban mobility. For this purpose, this work highlights important consequences that should be addressed and regulated in order not only to meet the new or exacerbated needs of communities impacted by the coronavirus but to do so in an equitable way for all riders.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electromagnetic Printhead Core for Programming Magnetic Pixels</title>
<link href="https://hdl.handle.net/1721.1/144537" rel="alternate"/>
<author>
<name>Bah, Amadou Yaye</name>
</author>
<id>https://hdl.handle.net/1721.1/144537</id>
<updated>2022-08-30T03:54:57Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Electromagnetic Printhead Core for Programming Magnetic Pixels
Bah, Amadou Yaye
Leveraging magnetism for fabrication offers a level of adaptability not found in conventional fabrication methods, and this thesis explores an approach for polarizing discrete points on the surfaces of high-retentivity magnetic sheets. It discusses the design process and implementation of an electromagnetic printhead core used to generate varied flux patterns on said surfaces. The thesis also assesses how the flux pixels generated using an electromagnet compare to those generated by permanent magnets, before offering practical guidance for programmable magnetic sheets.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Taming Torridity: New Housing Forms for Heat Resilience</title>
<link href="https://hdl.handle.net/1721.1/144536" rel="alternate"/>
<author>
<name>Brearley, Jonathon</name>
</author>
<id>https://hdl.handle.net/1721.1/144536</id>
<updated>2022-08-30T03:42:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Taming Torridity: New Housing Forms for Heat Resilience
Brearley, Jonathon
Intervening on the contemporary US housing typology of the single-family home, this work imagines new building forms that foster resilience to extreme heat through social proximity, housing heterogeneity, and novel space cooling strategies. As we move into a new climate paradigm of increased weather variation and higher temperatures, extreme heat events will become more frequent and more intense. Increased cooling demand during such heat events contributes to electrical grid instability and, in some cases, causes blackouts or rolling brownouts. An established pathway to addressing this problem is more efficient envelopes and building systems, a strategy captured by the Passive House standard. Passive House is not without its constraints, primarily in upfront capital costs, skilled labor availability, and more complex building details. Using two metrics, cooling energy demand and heat index, to model resilience in grid-on (active) and grid-off (passive) scenarios, a heat vulnerability study of single-family houses in four US cities, modeled to both Passive House standards and the International Energy Conservation Code (IECC), is conducted over a representative hot week in each location. Models are simulated under four climate scenarios (historic TMY3 and morphed 2020-80 HadCM2 A2) and show that Passive House models improved active resilience by decreasing both peak and total cooling energy over the week in all weather scenarios, by an average of 30% and 33%, respectively. The Passive House standard increases passive resilience when houses are ground-coupled through the slab or basement, but otherwise produces worse interior conditions than the IECC model.
While it is demonstrated that the Passive House standard is a viable strategy for increasing heat resilience with small deviations from the conventional Passive House logic, this thesis pursues an alternative pathway to heat resilience in US homes by emphasizing building form and architecture designed for flexible, resilient functions, exploring three fundamental strategies. Earlier findings on the significant impact of ground coupling on heat resilience are translated into an architectural and operational strategy that reduces cooling energy and improves passive survivability by leveraging the ground as a heat sink. The second strategy uses zone nesting and thermal buffers conceptualized as layered thermal spaces. Finally, recognizing that social resilience is integral to improving outcomes in extreme events, party walls and unit adjacency reduce exposure and cooling loads while embedding community proximity. The sum of these approaches is presented in a housing proposal that recognizes the forces at play in the desire for low-density, low-rise housing while attempting to subtly undermine kernels of the low-rise, single-family typology such as its resource intensity and homogeneity.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emission Capabilities of Nafion-Based Emitting Geometries</title>
<link href="https://hdl.handle.net/1721.1/144534" rel="alternate"/>
<author>
<name>Wangari, Charity</name>
</author>
<id>https://hdl.handle.net/1721.1/144534</id>
<updated>2022-08-30T03:05:29Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Emission Capabilities of Nafion-Based Emitting Geometries
Wangari, Charity
The propulsion field is home to numerous technological advancements, ranging from the mechanisms of engine operation to the propellants used. These have enabled numerous space missions, whether for technological demonstration or for commercial and government use. Fortunately, with the dawning age of miniaturized electronics, these engines can be downsized to lower system mass requirements while still ensuring the execution of mission specifications. Electrospray propulsion is one propulsion area in which such miniaturization is possible.&#13;
&#13;
Numerous technological advancements have been realized in the field of electrospray propulsion, ranging from the use of exotic plasma propellants, otherwise referred to as ionic liquids (IL, ILs), to the exploitation of various materials to fabricate surfaces for ion emission. All of these have reported attractive results ranging from competitive emission currents that range between nano- and micro-Amperes for single and array emitters, respectively, to characteristic velocity outputs on the order of kilometers per second.&#13;
&#13;
Although the thrust levels are small, usually on the order of micro-newtons, specific impulse values reach as high as 3000 seconds. Unfortunately, with the materials currently used to fabricate emitter geometries, whether of porous or non-porous bulk properties, emission characteristics are normally compromised either by off-axis emission or by high hydraulic impedance resulting from little to no IL transport to the emission sites. This warrants materials that ensure smooth geometries for predictable, axially symmetric emission, thereby fostering increased thrust densities.&#13;
&#13;
Fortunately, ionomers, a group of polymers capable of conducting ions, pose attractive options for this situation. Nafion, a polymer composed of a fluorocarbon main chain with sulfonic acid end groups, is the material used in the research presented in this work. Due to its extensive use in fuel cells, Nafion presents a viable choice, since extensive research has been performed to understand the material's morphological and physical behavior in situ. This body of prior work was beneficial to the research presented here.&#13;
&#13;
From past work using this material, it was clearly proven that ion emission, initiated via an electrified meniscus, was possible. Unfortunately, due to some unreliable manufacturing results, including bubble-filled tip structures, it was theorized that the emission results were, to some capacity, compromised. The presence of these air pockets inside the tip bulk, along with broken apexes, interfered with emission characteristics, either by limiting effective liquid transport through the bulk or by necessitating high start-up voltages for ion emission.&#13;
&#13;
Therefore, the main contribution of the research presented herein is to develop a manufacturing process that eliminates bubble structures in the tip bulk and to test said geometries under electrostatic forces high enough for ion emission to occur. Thus, this research presents the new manufacturing process and communicates some results obtained for fabricated single and array geometries.&#13;
&#13;
Fortunately, it was proven that for tip geometries impregnated with IL, the absorbed solution is available for transportation to the emission site. It was also shown that the Nafion geometries always mimicked the structural properties of the parent counterparts.&#13;
&#13;
For emission results, it was shown that for single emitters, the emission characteristics were competitive, with emission currents that were higher than those obtained from other single emitters reported elsewhere.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating the Impact of Automated Umpiring in Baseball via Monte Carlo Simulation</title>
<link href="https://hdl.handle.net/1721.1/144528" rel="alternate"/>
<author>
<name>Shepard, Keithen</name>
</author>
<id>https://hdl.handle.net/1721.1/144528</id>
<updated>2022-08-30T03:49:30Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Estimating the Impact of Automated Umpiring in Baseball via Monte Carlo Simulation
Shepard, Keithen
Major League Baseball (MLB) has recently made multiple changes to the game of baseball to enhance the viewing experience for fans. One viable idea that has been discussed for multiple years is the implementation of an automated umpiring system. The MLB has the technology to deploy such a system using Trackman tracking; however, most MLB teams have expressed opposition to the idea. An automated system would eliminate the mistakes that human umpires make due to the high speeds of MLB pitches and other challenges.&#13;
&#13;
We present a method to estimate the impact of automated umpiring given MLB pitch data. We define a novel pipeline for simulating the statistical changes in MLB games following the correction of umpire mistakes. This pipeline uses historical game data to guide our estimations and then compares our findings to the baseline real game statistics. We finally use this pipeline to analyze the changes that an automated umpiring system would bring on average to the MLB game.
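The correction-and-replay step at the heart of such a pipeline can be illustrated with a simplified sketch. This is illustrative only, not the thesis's implementation: the pitch keys true_strike and called_strike are assumed names, and foul balls, in-play events, and batter behavior (which the full Monte Carlo resampling would draw from historical data) are omitted.

```python
def resimulate_at_bat(pitches, corrected):
    """Replay one at-bat pitch by pitch, optionally replacing the umpire's
    calls with the tracking-system ground truth."""
    balls = strikes = 0
    for p in pitches:
        # Hypothetical keys: 'true_strike' from tracking, 'called_strike' from the umpire.
        is_strike = p["true_strike"] if corrected else p["called_strike"]
        if is_strike:
            strikes += 1
        else:
            balls += 1
        if strikes == 3:
            return "K"   # strikeout
        if balls == 4:
            return "BB"  # walk
    return "IN_PLAY"

def strikeout_rate_delta(at_bats):
    """Change in strikeout rate after correcting umpire mistakes."""
    def k_rate(corrected):
        ks = sum(1 for ab in at_bats if resimulate_at_bat(ab, corrected) == "K")
        return ks / len(at_bats)
    return k_rate(True) - k_rate(False)
```

Aggregating the corrected replays against the as-called baseline over many simulated games yields the kind of average impact estimate described above.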
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Crypto Clustering with NLP</title>
<link href="https://hdl.handle.net/1721.1/144527" rel="alternate"/>
<author>
<name>Zhang, Sammy</name>
</author>
<id>https://hdl.handle.net/1721.1/144527</id>
<updated>2022-08-30T03:03:03Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Unsupervised Crypto Clustering with NLP
Zhang, Sammy
A cryptocurrency is a digital form of currency secured using an online ledger with cryptography. The first successful cryptocurrency was Bitcoin, launched in 2009 by an anonymous person under the pseudonym Satoshi Nakamoto. Like stocks, cryptocurrencies can be bought and sold, and their prices vary over time. Thus, being able to classify cryptocurrencies is key for a crypto investor determining which crypto assets are worth investing in. This thesis will apply unsupervised learning to crypto whitepapers to cluster various cryptocurrencies.
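As a rough sketch of what such whitepaper clustering can look like (assumptions: naive whitespace tokenisation, plain TF-IDF weighting, and seed documents standing in for learned centroids; a real pipeline would use proper NLP preprocessing and a full clustering algorithm):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Naive bag-of-words TF-IDF vectors for a list of whitepaper texts."""
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenised:
        df.update(set(toks))            # document frequency per term
    n = len(docs)
    vocab = sorted(df)
    vecs = []
    for toks in tokenised:
        tf = Counter(toks)
        vecs.append([tf[w] * math.log((n + 1) / (df[w] + 1)) for w in vocab])
    return vecs

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na * nb > 0 else 0.0

def assign_clusters(docs, seed_indices):
    """Assign each document to the most similar seed document: one
    k-means-like assignment step over TF-IDF vectors."""
    vecs = tfidf_vectors(docs)
    labels = []
    for v in vecs:
        sims = [cosine(v, vecs[s]) for s in seed_indices]
        labels.append(max(range(len(sims)), key=sims.__getitem__))
    return labels
```

Documents sharing distinctive vocabulary (e.g. consensus mechanisms) land in the same cluster, which is the intuition behind clustering whitepapers by their text.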
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hope-Hopping</title>
<link href="https://hdl.handle.net/1721.1/144525" rel="alternate"/>
<author>
<name>Li, Kwan Queenie</name>
</author>
<id>https://hdl.handle.net/1721.1/144525</id>
<updated>2022-08-30T03:36:01Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Hope-Hopping
Li, Kwan Queenie
This collection of texts, accompanied by a series of artworks, speculates upon a new artistic ecology of displacement that emerges at the intersection between post-coloniality, cosmopolitanism, and technopolitics.&#13;
&#13;
Exilic art practice takes myriad forms in this post-1989’s new “New World”. There are contemporary artists who are far from home, negotiating the distance between their displaced homeland and a foreign settlement. Some of them manage to envelop their political ruins within a broader, cosmopolitical frame, tackling interconnected systematic challenges that interweave ecological, economic, and cultural knots. Other exilic artists, despite their geographic immobility, are actively practising in a politically-challenged and/or suppressed regime, mostly with limited freedom of speech. Their art goes on exile, speaking in an elusive tongue, acting without proclaiming, enacting an everyday practice that is beyond immediate descry but thrives in hindsight. This thesis asks: against the threat of instrumentalisation, what is the hope of exilic art-making that leverages the international, almost-borderless art world in contemplating and resolving one’s political trauma? Does an increasing awareness of cosmopolitical responsibility, and the prevalence of virtual artist-activist communities constitute a new scope of hope for exilic art practice?&#13;
&#13;
More often than not, hope is understood against the singular, progressive, productivity-driven type of optimism that is conventionally bestowed by promises of modernity and technology. Yet, the urgency of dismantling this positivist illusion of hope is paramount in our age of new materiality, when scientific discovery and technological advancement, ideological conflicts and anthropocentric consequences have imparted to us an insurmountable sense of displacement. Where is hope in this prevailing sense of hopelessness? These questions of hope do not anticipate a simple, positivist response, because the essence of hope does not concern truthfulness but offers plausibility; its efficacy lies exactly at ambiguity. Whether it is true or fake, the (in-)sincerity of hope can only be revealed after it is no longer needed, when the speculation becomes a reality. This entwinement of hope and hopelessness serves as a precarious reminder of the thesis’s desire to locate cosmopolitical neighbourhoods, revealing that the cosmopolitical reconciliation between political struggles always has one foot in an open-ended fiction. It is exactly this incompleteness which propels and motivates incessant attempts for initiating connections, practising intimacy, approaching trust.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Rendered Body: Queer Utopian Thinking in Digital Embodiment</title>
<link href="https://hdl.handle.net/1721.1/144522" rel="alternate"/>
<author>
<name>Lanier, Alison</name>
</author>
<id>https://hdl.handle.net/1721.1/144522</id>
<updated>2022-08-30T03:35:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Rendered Body: Queer Utopian Thinking in Digital Embodiment
Lanier, Alison
The rendered body is pure possibility, but it has been treated with an imaginatively limited lens that belies its potential for radical reimagining. I want to challenge those imaginative limits, especially in regard to gender and how gender is read on digital bodies. In order to do this, I will draw on video game studies' rich field of avatar and body theory, queer theory's concepts of gender instability and failure, animation's tools of abstraction and imagination, and sf studies' figuring of radical possibility.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autoencoding variational inference for the visualization of velocity-enriched scRNA-seq data</title>
<link href="https://hdl.handle.net/1721.1/144521" rel="alternate"/>
<author>
<name>Aina, Tiwalayo Terrence-Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/144521</id>
<updated>2022-08-30T03:50:38Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Autoencoding variational inference for the visualization of velocity-enriched scRNA-seq data
Aina, Tiwalayo Terrence-Luke
Dimensionality reduction is often used to visualize complex expression profiling data. The embedding of expression data is typically based solely on expression levels, which can yield inaccuracies in the representation of the lower-dimensional data. By augmenting scRNA-seq data with velocities for each cell, we can develop better visualization methodologies that use the richer information we may have describing cellular expression dynamics. Current techniques for dimensionality reduction, such as t-SNE and UMAP, are agnostic to the concept of velocity and therefore will embed data agnostic to any such additional information. In this work, we leverage variational inference to design deep learning models that use expression data and velocity data in tandem to produce effective low-dimensional representations. We also provide a methodology for RNA-seq data imputation using the learned models, taking inspiration from ideas in portfolio theory.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Providing Subsidies for Vehicles and Infrastructures to shift toward a low carbon passenger car mix</title>
<link href="https://hdl.handle.net/1721.1/144520" rel="alternate"/>
<author>
<name>Kobayashi, Naoki</name>
</author>
<id>https://hdl.handle.net/1721.1/144520</id>
<updated>2022-08-30T03:09:28Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">The Effect of Providing Subsidies for Vehicles and Infrastructures to shift toward a low carbon passenger car mix
Kobayashi, Naoki
The transportation sector in Japan accounted for 18.6% of the country's total CO2 emissions in FY 2019; passenger cars alone accounted for 8.5%. The adoption of clean energy vehicles such as battery electric vehicles (EVs) and fuel cell vehicles (FCVs) is proposed as one effective means to reduce CO2 emissions. However, in Japan, the shares of EVs (0.43%) and FCVs (0.04%) still remain below 0.5% of annual sales. &#13;
&#13;
To address this issue, the Government of Japan (GOJ) provides subsidies for both vehicles and infrastructure. However, no specific goal for the share of cars by type (e.g., EV, FCV) exists, nor any discussion based on a systematic approach. The lack of whole-system analysis and long-term strategy can lead to undesirable results or extra costs in realizing a low-carbon society by 2050. &#13;
&#13;
In the past, some research has explored the relationship between EV diffusion and subsidies for EVs and infrastructure, and some has carried out scenario-based mobility portfolio analyses. However, no research has explored decision-making analysis that adequately considers uncertainty. Therefore, the aim of this thesis is to build a model that predicts future mobility shares depending on the amount and duration of subsidies, and then to analyze the associated uncertainties and risks in order to design policies that more effectively reduce CO2 while meeting residential transportation demand. &#13;
&#13;
This research finds that, considering both infrastructure and vehicle subsidies, prioritizing subsidies for EVs is a more effective policy strategy for the Japanese domestic residential transportation market than a balanced policy or one prioritizing FCVs. In addition, subsidies for infrastructure are more cost-effective than vehicle subsidies. &#13;
&#13;
To deal with uncertainties, hydrogen station and FCV subsidies mitigate the risk of non-carbon-neutral power generation. If the share of fossil fuel in power generation exceeds 31.5%, CO2 emissions in the FCV main case (03_FCV full commit) are lower than those in the EV main case (02_EV full commit).&#13;
&#13;
It is also revealed that policies to improve fuel efficiency, such as fuel efficiency regulations for gasoline vehicles and support for technological development, can reduce the downside risk of increased CO2 emissions, but they can also hinder future CO2 emissions reduction by delaying the replacement of ICEVs or HVs with EVs and FCVs.&#13;
&#13;
Finally, considering budget limits, this research finds that Japanese subsidy policy for residential transportation vehicles should focus first on infrastructure to reduce CO2 emissions, then shift to EVs while avoiding FCV subsidies. Under a subsidy limit of 50T¥, the EV-and-infrastructure subsidy case (PF2) decreases CO2 emissions by 123 Mton (10.2%) compared with the current policy case (01_current policy case). Then, delaying the initiation of FCV subsidies is an effective measure, decreasing CO2 emissions by about 0.1~0.3% once ICEVs have been replaced by EVs.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Class 3D Segmentation of Progressive Damage in Advanced Composites using Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/144519" rel="alternate"/>
<author>
<name>Jain, Sudhir</name>
</author>
<id>https://hdl.handle.net/1721.1/144519</id>
<updated>2022-08-30T03:09:23Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Multi-Class 3D Segmentation of Progressive Damage in Advanced Composites using Deep Learning
Jain, Sudhir
Carbon fiber reinforced polymer (CFRP) composites are playing an increasingly important role in modern aerospace primary structures. Underpinning their growing utilization is their relatively high mass-specific stiffness and strength, together with tailorable bulk mechanical properties. However, while being key drivers of such performance benefits, CFRP microstructural heterogeneity and mechanical property anisotropy are also associated with complex damage mechanisms that make failure prediction highly challenging, limiting CFRP performance knowledge and more efficient utilization. Typical damage mechanisms contributing to progressive failure fall into three categories, each of them manifesting as a crack of some sort: (i) delamination (interlaminar cracking), (ii) matrix cracking, and (iii) fiber breakage (crack through fiber). At present, X-ray computed tomography (CT) performed to observe these complex damage mechanisms in 3D and 4D (3D spatial and 1D temporal) is normally analyzed by manual or semi-automated segmentation techniques, where a user identifies and segments the different damage modes (e.g., internal cracks) in the 2D tomograms that comprise the 3D tomographic scan. However, these scans contain a large amount of data (≈10 GB/mm³), making it traditionally challenging to extract mechanistic insights in a reasonable time frame. Thus, as a promising approach to introduce efficiency, repeatability, and objectivity to the CT segmentation process for microscale damage in advanced composites, deep learning (DL)-based algorithms are developed here for automated (i) 2D binary segmentation of fiber break damage mechanisms, (ii) multi-damage (fiber break and matrix crack) segmentation, and (iii) 3D multi-class segmentation of a benchmark sandstone dataset. 
Multi-class segmentation of microscale damage is proposed to automate CT segmentation in advanced composites, leveraging 120,000 human-segmented tomograms containing labeled fiber breaks and 65,000 human-segmented tomograms containing labeled matrix cracks for training, validating, and testing DL models. Different methodologies for the selection of high-performance DL models are investigated. A training and validation study is used to select a high-performance 2D fiber break DL model, whose performance is found to be close to human level for the segmentation of fiber breaks (≈93%), enabling consistent, reproducible, and scalable segmentation deployment without the need for human experts. Additionally, preliminary studies indicate a feasible pathway for 3D feature recognition (with ≈39% improvement in performance over 2D segmentation of the sandstone dataset) and for 2D multi-class segmentation containing composite fiber break and matrix crack data, termed multi-damage multi-class segmentation. Future work involves combining multi-damage and 3D segmentation for full complex damage segmentation in advanced composites.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Cryptography: Deniable Privacy for Secure Data Aggregation</title>
<link href="https://hdl.handle.net/1721.1/144517" rel="alternate"/>
<author>
<name>Pence, Eric J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144517</id>
<updated>2022-08-30T03:02:59Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Beyond Cryptography: Deniable Privacy for Secure Data Aggregation
Pence, Eric J.
We assess the privacy properties of the count function, an essential data aggregation primitive, in the context of a real-world secure data aggregation platform called SCRAM (Secure Cyber Risk Aggregation and Measurement). Subject to the constraints of few data contributors and a limited tolerance for noise in the output of the count function, we seek an alternative to differential privacy, and we develop a new privacy-preserving mechanism called deniable privacy. We show that deniable privacy provides the proper balance between accuracy and privacy in the case of SCRAM, and we demonstrate that the utility of deniable privacy extends broadly to other data aggregation applications.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Gesture Recognizing Tool for Virtual Presentations</title>
<link href="https://hdl.handle.net/1721.1/144516" rel="alternate"/>
<author>
<name>Wang, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/144516</id>
<updated>2022-08-30T03:26:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">A Gesture Recognizing Tool for Virtual Presentations
Wang, Jennifer
Due to the format of many common video-conferencing platforms used to deliver virtual presentations, the speaker is often placed far from the slides they are presenting in the platform's user interface. As a result, there is a marked disconnect between the speaker and their visual content during virtual presentations. However, a virtual presentation also provides a format in which recent technology can elevate what is normally a point-and-click talk with slides. This work presents a new system for virtual presentations that is more engaging and more interactive for both the audience and the speaker. By utilizing a LeapMotion sensor for gesture tracking, we implement a web app that can be combined with slides and video input to decrease the distance between a speaker and their visual content.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Training and Calibration for Learning with Limited Data</title>
<link href="https://hdl.handle.net/1721.1/144511" rel="alternate"/>
<author>
<name>Liu, Emma J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144511</id>
<updated>2022-08-30T03:46:11Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Self-Training and Calibration for Learning with Limited Data
Liu, Emma J.
Semi-supervised learning methods such as self-training are able to leverage unlabeled data, which is widely available, rather than relying solely on labeled data as many successful supervised learning methods do. One step of self-training is to use a trained model to create pseudo-labels for unlabeled data and then select some of those samples to add to the labeled dataset. One way to do this is to pick samples for which the model has high confidence. However, many models are not well-calibrated, meaning that their confidence scores do not necessarily reflect the true probability of being correct. Thus, using confidence scores in this manner may add more incorrectly labeled samples to the training dataset than expected. This thesis explores how adding a recalibration step during self-training, adjusting the confidence scores before they are used to select samples, can improve the results of self-training. Experiments on natural language processing data revealed that combining self-training with calibration improves accuracy when the initial self-training accuracy is not too high and the amount of labeled data initially used is not too small.
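A minimal sketch of this recalibrate-then-select loop, assuming temperature scaling as the recalibration method (the thesis's exact calibration procedure, models, and thresholds are not specified here; all names are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher temperature flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def fit_temperature(val_logits, val_labels, grid=(0.5, 1.0, 1.5, 2.0, 3.0, 4.0)):
    """Pick the temperature minimising negative log-likelihood on a held-out
    validation set: a simple grid-search form of temperature scaling."""
    best_t, best_nll = 1.0, float("inf")
    for t in grid:
        nll = -sum(math.log(softmax(z, t)[y]) for z, y in zip(val_logits, val_labels))
        if best_nll > nll:
            best_t, best_nll = t, nll
    return best_t

def select_pseudo_labels(unlabeled_logits, temperature, threshold):
    """Keep only samples whose recalibrated top-class probability clears the
    confidence threshold; returns (index, pseudo_label) pairs."""
    selected = []
    for i, logits in enumerate(unlabeled_logits):
        probs = softmax(logits, temperature)
        conf = max(probs)
        if conf >= threshold:
            selected.append((i, probs.index(conf)))
    return selected
```

In this toy setup the fitted temperature softens overconfident logits, so borderline samples that would have cleared the threshold under the raw confidences are held back.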
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Assurance for Automated Vehicles Beyond Collision Avoidance</title>
<link href="https://hdl.handle.net/1721.1/144508" rel="alternate"/>
<author>
<name>Vorbach, Charles J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144508</id>
<updated>2022-08-30T03:48:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Safety Assurance for Automated Vehicles Beyond Collision Avoidance
Vorbach, Charles J.
Each year, automotive crashes cause thousands of deaths and injuries. Autonomous safety systems have the potential to greatly reduce this tragic loss of life and improve safety, but such systems must meet existing requirements for automotive certification. In particular, active safety systems must be designed to comply with the Automotive Safety Integrity Level risk classification scheme described in the ISO 26262 standard.&#13;
&#13;
In this thesis, I design a system using redundant components to independently enforce safety requirements across parallel software supervisors within an autonomous vehicle planning pipeline. I use Hamilton-Jacobi-Bellman reachability analysis to provide new guarantees for safe navigation on public roadways. I create new safety modules and extend existing ones to independently verify collision avoidance, obedience to traffic rules, and vehicle lane discipline. This project provides a theoretical proof of safety and implements the control methods within Nvidia’s DriveWorks autonomous vehicle framework.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FlexC: Flexible Compartmentalization Through Automatic Policy Generation</title>
<link href="https://hdl.handle.net/1721.1/144506" rel="alternate"/>
<author>
<name>Ortega, Carolina Perez</name>
</author>
<id>https://hdl.handle.net/1721.1/144506</id>
<updated>2022-08-30T03:07:14Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">FlexC: Flexible Compartmentalization Through Automatic Policy Generation
Ortega, Carolina Perez
The single address space in monolithic kernels enables vulnerabilities to compromise the entire kernel and system. An effective approach to prevent and mitigate these vulnerabilities is compartmentalization. Previous work has mostly focused on the enforcement of compartmentalization policies; to date little research has addressed the creation of such policies. Users are assumed to manually create and supply policies via annotation. Automating this would allow policies to be optimized for different systems. Therefore, our goal is to build a system for creation and enforcement of policies that is automatic, easy to use, and allows exploration of multiple policies, tailored to the needs of the systems.&#13;
&#13;
We introduce a mechanism for Flexible Compartmentalization through automatic policy generation, FlexC, which both creates and enforces arbitrary compartmentalization policies. FlexC automatically creates a code and data flow graph to represent the system being compartmentalized, based on static and dynamic analyses. It allows the user to select how to prioritize the static or dynamic information in the edges of the graph. Then, it merges vertices using a greedy algorithm into a number of compartments specified by the user, creating a compartmentalization policy that is then enforced using an LLVM pass. For systems with higher security sensitivity, FlexC can create hundreds of compartments, while users that need to prioritize performance can create as few as desired. Additionally, users can easily explore the impact of different policies on their systems and select whichever is most appropriate. We evaluated FlexC on Linux kernel 5.10 and measured the impact on a FAT file system. Results showed an overhead with a geometric mean between 10% and 13.5% for policies with different numbers of compartments. Fine-grained policies can reduce the number of compartments that have permission to access FAT file system compartments by 60%.
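The greedy vertex-merging step can be illustrated with a small union-find sketch (hypothetical simplification: edge weights are assumed to already blend the static and dynamic flow counts, and the heaviest remaining edge is always merged first; FlexC's actual algorithm and data structures may differ):

```python
def greedy_compartments(n, edges, k):
    """Greedily merge the two groups joined by the heaviest remaining edge
    until only k compartments remain.
    edges: (u, v, weight) triples over n code/data vertices."""
    parent = list(range(n))          # union-find forest over vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    count = n
    for u, v, _ in sorted(edges, key=lambda e: -e[2]):  # heaviest edge first
        if count == k:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            count -= 1
    return [find(i) for i in range(n)]   # compartment label per vertex
```

Running the same graph with different k values is what lets a user trade off security granularity against performance, as described above.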
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Force-Velocity Profiling of National Football League Athletes</title>
<link href="https://hdl.handle.net/1721.1/144505" rel="alternate"/>
<author>
<name>Wright, Mark Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/144505</id>
<updated>2022-08-30T03:02:43Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Automated Force-Velocity Profiling of National Football League Athletes
Wright, Mark Joseph
Force-velocity profiles are a well-established approach to generating key parameters of an athlete’s overall fitness profile, and they are currently utilized by NFL teams for their players. However, athletes run the risk of injury while testing to create these profiles, since they must sprint at maximum speed with a weight attached to them. As such, teams are not utilizing these profiles to their full potential, as they prefer not to jeopardize their athletes.&#13;
&#13;
In this paper, we present a novel approach, inspired by former work in the MIT Sports Lab, to generating force-velocity profiles directly from tracking data produced by wearable technology sensors. The techniques presented in this paper allow NFL teams to create force-velocity profiles over any time frame of tracking data they have available, enabling them to better assess, train, and rehabilitate their players.
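One common way to derive such a profile from tracking data is to fit the linear force-velocity relation F = F0 * (1 - v / v0). The sketch below is an illustration under simplifying assumptions (finite-difference acceleration, air resistance ignored), not the Sports Lab method itself:

```python
def fv_profile(times, velocities, mass):
    """Fit F = F0 * (1 - v / v0) to sprint tracking data by ordinary least
    squares on (mid-interval velocity, mass * finite-difference acceleration)."""
    vs, fs = [], []
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        a = (velocities[i] - velocities[i - 1]) / dt   # finite-difference acceleration
        v = 0.5 * (velocities[i] + velocities[i - 1])  # mid-interval velocity
        vs.append(v)
        fs.append(mass * a)                            # horizontal force, drag ignored
    n = len(vs)
    mean_v = sum(vs) / n
    mean_f = sum(fs) / n
    slope = sum((v - mean_v) * (f - mean_f) for v, f in zip(vs, fs))
    slope = slope / sum((v - mean_v) ** 2 for v in vs)
    intercept = mean_f - slope * mean_v
    f0 = intercept            # theoretical maximum horizontal force (N)
    v0 = -intercept / slope   # theoretical maximum running velocity (m/s)
    return f0, v0
```

F0 and v0 are the two key parameters of the force-velocity profile; with wearable tracking data, they can be re-estimated over any time window of interest.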
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Differentially Private Synthetic Text</title>
<link href="https://hdl.handle.net/1721.1/144503" rel="alternate"/>
<author>
<name>Park, YeonHwan</name>
</author>
<id>https://hdl.handle.net/1721.1/144503</id>
<updated>2022-08-30T03:10:07Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Generating Differentially Private Synthetic Text
Park, YeonHwan
The advent of more powerful cloud compute over the past decade has made it possible to train the deep neural networks used today for applications in almost everything we do. However, the amount of existing data in private datasets, such as hospital records, remains scarce and will probably remain so for the foreseeable future. Without high-quality data, neural networks will not be able to perform high-quality inference.&#13;
&#13;
To aid in training models when existing information is limited, we aim to train existing deep neural network architectures to generate synthetic text that is similar to the text they were trained on without memorizing one-to-one mappings or leaking any sensitive data. To achieve this goal, we fine-tune our models to adhere to a strong notion of differential privacy – a mathematical model bounding the extent to which an adversary can reconstruct the original dataset.&#13;
&#13;
In the desire to use the differentially private models to generate mixed-type tabular datasets with unstructured text, we also perform a survey to gain a better understanding of how our algorithm might be used to supplement existing neural networks.
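A standard mechanism for such differentially private fine-tuning is DP-SGD: clip each per-example gradient, then add calibrated Gaussian noise before the update. The sketch below illustrates a single update in plain Python (the function name and signature are illustrative, not the thesis's implementation; real training would use a library-level DP-SGD and track the privacy budget):

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update: clip each per-example gradient to clip_norm, sum,
    add Gaussian noise with std noise_multiplier * clip_norm, average, step."""
    d = len(params)
    summed = [0.0] * d
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0  # clip to clip_norm
        for j in range(d):
            summed[j] += g[j] * scale
    n = len(per_example_grads)
    sigma = noise_multiplier * clip_norm
    noisy = [(summed[j] + rng.gauss(0.0, sigma)) / n for j in range(d)]
    return [p - lr * noisy[j] for j, p in enumerate(params)]
```

Clipping bounds any single record's influence on the update, and the noise masks what remains, which is what yields the formal differential-privacy guarantee.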
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Clinical Evaluation of a Digital Transtibial Prosthetic Interface</title>
<link href="https://hdl.handle.net/1721.1/144501" rel="alternate"/>
<author>
<name>Lee, Duncan R.C.</name>
</author>
<id>https://hdl.handle.net/1721.1/144501</id>
<updated>2022-08-30T03:13:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Design and Clinical Evaluation of a Digital Transtibial Prosthetic Interface
Lee, Duncan R.C.
For those living with lower-limb loss, the prosthetic interface, comprising a socket and liner, is the component of the prosthesis that most limits its wearability and use. An improperly designed prosthetic interface results in areas of excessive pressure that cause wear and chafing, with skin breakdown a common occurrence. Traditionally designed interfaces require extensive time from the patient and an experienced prosthetist, factors that compound to make the entire process inaccessible to the majority of persons with amputation. To address these problems, this thesis outlines a prosthetic interface design and manufacturing pipeline that uses a novel computational algorithm to create subject-specific transtibial liner and socket components that can be additively manufactured at low cost. The residual limb is imaged using a magnetic resonance imaging (MRI) device, and the image set is segmented into a three-dimensional model. This approach is superior to other 3D-modeling prosthetic interface techniques in that it captures the bone geometries and soft tissue depths of the residuum. A more accurate topology of the skin is captured using digital image correlation (DIC), and this mesh is used in place of the MRI-derived skin surface. The socket is divided into four distinct pressure regions, and the nominal pressure applied at each region can be adjusted to be patient-specific. Finite element analysis is run to simulate liner donning and bodyweight loading upon the interface to generate the final pressure map and liner-socket geometries. Novel prosthetic interfaces made using this algorithm were evaluated against conventionally made interfaces for 5 limbs from 4 patients through a combination of kinematic gait data, standing pressure data, thermal skin measurement, and qualitative patient response. The kinematic analysis in this study uses the Mahalanobis distance to evaluate differences in gait asymmetry between conventional and novel prosthetic interfaces.
The distance is calculated using asymmetries for step time, swing time, and peak impact ground reaction force. No subject exhibits a significant difference in gait asymmetry between conventional and novel prosthetic interfaces (no Mahalanobis distance exceeded the 5% significance threshold for 3 degrees of freedom). Thermal results show no statistically significant difference in percent temperature change from reference between conventional and novel interfaces. This holds for overall temperature change as well as for change at the distal and fibular head regions specifically. Further, standing pressure data do not show a significant difference between conventional and novel prosthetic interfaces when pressure variance at locations excluding the patellar bar is compared. Qualitative feedback from the three unilateral subjects participating in the study is generally neutral, with novel interfaces rated as close in fit to conventional interfaces during sitting and standing. One bilateral patient rates the novel interface as better than the conventional interface on both legs. The three unilateral patients give the novel interfaces slightly worse ratings while walking; however, comfort was often reduced by unfamiliarity with the socket suspension system or socket material, neither of which is directly attributable to our design. Overall, study results show that the performance of the novel interface is comparable to that of the conventional interface, with the potential to provide benefits in overall design time, repeatability, and cost.
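The gait-asymmetry test above amounts to a Mahalanobis distance over a 3-vector of asymmetries. A minimal sketch, with made-up reference statistics (the real means and covariance come from the study data):

```python
import numpy as np

def mahalanobis(x, mu, cov):
    # Distance of an asymmetry vector x from the reference mean mu under covariance cov
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Hypothetical asymmetry vector: [step time, swing time, peak impact GRF]
ref_mean = np.array([0.02, 0.03, 0.05])
ref_cov = np.diag([0.01, 0.01, 0.02])
d = mahalanobis([0.05, 0.01, 0.04], ref_mean, ref_cov)
```

For a chi-squared statistic with 3 degrees of freedom, the 5% significance cut-off corresponds to a squared distance of about 7.81; subjects whose squared distance stays below that bound show no significant asymmetry difference.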
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring Structured World Models from Videos</title>
<link href="https://hdl.handle.net/1721.1/144497" rel="alternate"/>
<author>
<name>Kapur, Shreyas</name>
</author>
<id>https://hdl.handle.net/1721.1/144497</id>
<updated>2022-08-30T03:49:24Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Inferring Structured World Models from Videos
Kapur, Shreyas
Advances in reinforcement learning have allowed agents to learn a variety of board games and video games at superhuman levels. Unlike humans, who can generalize to a wide range of tasks with very little experience, these algorithms typically need a vast number of experience replays to perform at the same level. In this thesis, we propose a model-based reinforcement learning approach that represents the environment using an explicit symbolic model in the form of a domain-specific language (DSL), describing the world as a set of discrete objects with underlying latent properties that govern their dynamical interactions. We present a novel, neurally guided, online inference technique to recover the structured world representation from raw video observations, intended for use in downstream model-based planning. We qualitatively evaluate our inference performance on classic Atari games, as well as on physics-based mobile games.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of Spatial Transcriptomics with Chromatin Images Using Graph-Based Autoencoder Identifies Joint Biomarkers for Alzheimer’s Disease</title>
<link href="https://hdl.handle.net/1721.1/144496" rel="alternate"/>
<author>
<name>Zhang, Xinyi</name>
</author>
<id>https://hdl.handle.net/1721.1/144496</id>
<updated>2022-08-30T03:14:58Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Integration of Spatial Transcriptomics with Chromatin Images Using Graph-Based Autoencoder Identifies Joint Biomarkers for Alzheimer’s Disease
Zhang, Xinyi
Tissue development and disease lead to changes in cellular organization, nuclear morphology, as well as gene expression. Spatial transcriptomic technologies, such as STARmap and 10x Visium, allow the joint measurement of these different modalities in whole tissue sections.1,2 However, methods for jointly analyzing the different spatial data modalities in 3D are still lacking. We present a computational framework to integrate Spatial Transcriptomic data using over-parameterized graph-based Autoencoders with Chromatin Imaging data (STACI) to identify molecular and functional alterations in tissues. STACI represents multiple data modalities with a single joint representation, which allows for the simultaneous incorporation of the different modalities in downstream tasks, such as the clustering of cells and the identification of disease-associated gene expression and nuclear image features. The joint representation also enables the prediction of spatial transcriptomic data from nuclear imaging data in unseen tissue sections. STACI uses over-parameterization as a technique to integrate different samples and provide built-in batch correction with regard to gene expression and tissue morphology. We apply STACI to analyze the spatio-temporal progression of Alzheimer’s disease in a mouse model. Importantly, we identify nuclear morphometric features such as chromatin condensation as well as coupled gene expression features that are differentially associated with disease progression in different regions of the brain cortex. Collectively, we demonstrate the importance of characterizing cell states and disease progression by integrating multiple data modalities and its potential for the discovery of novel disease biomarkers.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Chemical-Electric Propulsion Systems for CubeSats</title>
<link href="https://hdl.handle.net/1721.1/144493" rel="alternate"/>
<author>
<name>Gentgen, Chloé</name>
</author>
<id>https://hdl.handle.net/1721.1/144493</id>
<updated>2022-08-30T03:44:18Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Hybrid Chemical-Electric Propulsion Systems for CubeSats
Gentgen, Chloé
As CubeSats have proved their value for missions ranging from Earth observation and communication to navigation and science, miniaturized propulsion systems have been actively developed and demonstrated in flight to support these applications. Propulsion systems fall into two main categories. Chemical propulsion systems offer high thrust for impulsive maneuvers but have a low specific impulse. Electric propulsion systems, on the other hand, have much lower thrust but a high specific impulse, enabling large delta-v budgets. While it is common for larger spacecraft to carry both types of propulsion on board, the stringent size, weight, and power constraints of CubeSats have largely limited them to a single type of propulsion.&#13;
&#13;
However, the research and development in propulsion systems miniaturization over the last couple of years provides an opportunity to reconsider and evaluate the current and future feasibility of hybrid chemical-electric systems. Hybrid propulsion systems combine two or more propulsion technologies into a spacecraft without any shared hardware and could unlock ambitious missions. One such example is ReCon (Reconfigurable Constellations), a concept developed to enable remote sensing constellations to image specific areas of interest with an increased spatial and temporal resolution, on-demand, and without increasing constellation sizes. These constellations require significant maneuvering capabilities, including responsive impulsive transfers when time-sensitive observation needs arise -- for instance, during extreme weather events or conflicts.&#13;
&#13;
This thesis evaluates the performance and feasibility of hybrid chemical-electric propulsion systems on CubeSats as an alternative to chemical-only systems for missions requiring both high-thrust capability and large delta-v budgets. Feasible architectures relying on commercial off-the-shelf (COTS) systems are identified, and their performance is compared to single-mode systems under different constraints and performance requirements. A 2U volume was identified as the minimum required for COTS hybrid chemical-electric architectures to be advantageous over single-mode systems. In a 2U volume, more than 20 hybrid chemical-electric architectures can provide a delta-v for impulsive maneuvers above 70 m/s together with a delta-v for low-thrust maneuvers above 220 m/s while satisfying power constraints, whereas the optimal chemical-only system can provide only up to 245 m/s of delta-v. &#13;
&#13;
Improved hybrid architectures can be generated by concurrent design optimization of the chemical and electric systems. The design space of hybrid architectures is explored through the parametric modeling of a cold gas thruster, a green monopropellant, and an ion thruster. The optimality gap with previously generated designs is then quantified. Hybrid designs with a volume as small as 1.5U can then exceed the performance of single-mode systems. This approach demonstrates that custom-designed hybrid payloads can meet and exceed mission requirements better than COTS hybrid payloads; however, it comes with an increased cost due to additional research and development needs, resulting in necessary tradeoffs.
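The chemical-versus-electric trade underlying these budgets follows directly from the Tsiolkovsky rocket equation: for the same propellant mass fraction, delta-v scales linearly with specific impulse. A minimal sketch with illustrative numbers (the masses and Isp values below are assumptions, not figures from the thesis):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0_kg, mf_kg):
    # Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Illustrative numbers only: a 4 kg CubeSat spending 0.2 kg of propellant
chem = delta_v(220.0, 4.0, 3.8)    # green-monopropellant-class Isp
elec = delta_v(1400.0, 4.0, 3.8)   # ion-thruster-class Isp
```

Holding the mass ratio fixed, the electric case yields about 6.4 times the delta-v of the chemical case (exactly the ratio of specific impulses), which is why hybrid architectures pair an impulsive chemical stage with a high-Isp electric one.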
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Connectivity Maintenance For Distributed Robotic Systems</title>
<link href="https://hdl.handle.net/1721.1/144488" rel="alternate"/>
<author>
<name>Singhal, Nikhil M.</name>
</author>
<id>https://hdl.handle.net/1721.1/144488</id>
<updated>2022-08-30T03:03:49Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Efficient Connectivity Maintenance For Distributed Robotic Systems
Singhal, Nikhil M.
Teams of multiple autonomous robots have the potential to improve upon many robotic tasks performed by individual robots. As these robots move about, they must maintain connectivity through direct links or multi-hop paths in order to exchange useful information for the completion of collaborative tasks and avoid duplicating work. In this thesis, we survey the field to identify an optimally versatile connectivity maintenance algorithm supporting arbitrary tasks. To this end, we implement, modify, and optimize several connectivity maintenance algorithms from the literature and evaluate their performance on tasks such as trajectory following and multi-agent search. Finally, we leverage these results to discuss the trade-offs between these algorithms for use cases with different priorities.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Primary Market Dynamic Pricing for Sports Tickets: Theory and Application</title>
<link href="https://hdl.handle.net/1721.1/144487" rel="alternate"/>
<author>
<name>Hylen, Spencer David</name>
</author>
<id>https://hdl.handle.net/1721.1/144487</id>
<updated>2022-08-30T03:44:21Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Primary Market Dynamic Pricing for Sports Tickets: Theory and Application
Hylen, Spencer David
In professional sports, fan attendance at games is one of the largest drivers of revenue and a significant focus for all sports teams. While sports tickets can benefit from dynamic pricing techniques developed for airlines or hotels, they have constraints related to limited inventory, season tickets, and the attendance-revenue trade-off that make them unique and difficult to price with existing dynamic pricing systems. In this work, we present a novel dynamic pricing strategy for sports tickets, developed with an engineering approach alongside primary market ticket data provided by the San Antonio Spurs. Using dynamic programming and matrix completion for modeling customer demand, we provide optimal price recommendations for all time periods before a game. We use these recommendations, along with information on current prices, to provide a suite of ticket price optimization and analysis tools. These tools are led by our Manual Price Adjustment Guide, which combines our optimal price recommendations with ticketing analyst expertise for a hybrid solution designed to improve ticket pricing and increase team revenue.
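The dynamic-programming core described above can be sketched as a finite-horizon Bellman recursion over remaining inventory and time periods. Everything below (the toy demand curve, the price grid, the function names) is an illustrative assumption, not the thesis model:

```python
def demand_prob(price):
    # Toy stand-in for the demand model: sale probability falls linearly with price
    return max(0.0, 1.0 - price / 100.0)

def optimal_prices(seats, periods, price_grid):
    # V[t][s]: max expected revenue with s seats left and t periods remaining
    V = [[0.0] * (seats + 1) for _ in range(periods + 1)]
    policy = [[0.0] * (seats + 1) for _ in range(periods + 1)]
    for t in range(1, periods + 1):
        for s in range(1, seats + 1):
            best_val, best_price = -1.0, price_grid[0]
            for p in price_grid:
                q = demand_prob(p)
                # Sell one seat at p with probability q, otherwise carry inventory forward
                val = q * (p + V[t - 1][s - 1]) + (1.0 - q) * V[t - 1][s]
                if val > best_val:
                    best_val, best_price = val, p
            V[t][s] = best_val
            policy[t][s] = best_price
    return V, policy
```

At each state the recursion weighs selling a seat now against preserving inventory for later periods; in the thesis, matrix completion would supply a far richer demand estimate than the toy linear curve used here.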
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Dimensional Evaluation Metrics for Chest X-Ray Reports</title>
<link href="https://hdl.handle.net/1721.1/144486" rel="alternate"/>
<author>
<name>Rawat, Saumya</name>
</author>
<id>https://hdl.handle.net/1721.1/144486</id>
<updated>2022-08-30T04:08:54Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Multi-Dimensional Evaluation Metrics for Chest X-Ray Reports
Rawat, Saumya
In the past few years, there has been abundant research in using machine learning to generate high quality radiology reports using the large MIMIC-CXR chest x-ray dataset. However, there has been little work focused on evaluating the quality of generated reports from a clinical perspective, where accuracy is the most important factor. Current evaluation metrics assess reports along a single dimension. This work proposes the use of multiple dimensions (factual correctness, comprehensiveness, style, and overall quality) to better capture the evaluation preferences for a clinical text-generation model, where preferences can differ based on the use case. This work also presents a dataset of radiologist rating annotations for generated and reference chest x-ray radiology reports. Lastly, it creates an improved metric for the readability dimension by adding context-awareness of frequent and acceptable medical terminology.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motivated Fictionality: Worldbuilding and The Thousand and One Nights</title>
<link href="https://hdl.handle.net/1721.1/144484" rel="alternate"/>
<author>
<name>Soltan, Meriam</name>
</author>
<id>https://hdl.handle.net/1721.1/144484</id>
<updated>2022-08-30T03:41:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Motivated Fictionality: Worldbuilding and The Thousand and One Nights
Soltan, Meriam
Told and retold in countless iterations, and with roots in Persian, South Asian, and even Chinese folklore, The Thousand and One Nights has, for centuries, been amended and appended by the imaginaries of all those who have deigned to share and shape its stories. Communicated first orally, and then through a parallel textual tradition, it was only upon the arrival of the stories to Europe in the 18th century that these imaginaries would be visualized. British Orientalist Edward Lane’s richly illustrated and annotated translation (1839-41) is among the first and most prolific examples of that evolution in expression, with the 634 woodblock prints created for the edition by William Harvey positioning it as a major milestone in both the history of the Nights and popular Victorian-era illustration. Created with reference to contemporaneous architectural surveys, records, and travel writing, this was an edition that sampled from real-world observations to transform the fictions of the Nights into “a faithful and, as it were, living picture of the East.”1&#13;
&#13;
This thesis foregrounds this liberal sampling of technical drawing and ethnographic research to position Lane’s Nights as both a product and producer of its time. It emphasizes this reciprocity—one that recognizes instances wherein our world has been shaped with and through the Nights in tandem— to affirm the stories not as a finite collection, but as a living fiction consistently animated by exchanges between the real and the imaginary. To do so, it looks first to 19th century trends in knowledge production and advancements in printmaking technology to trace the various scholarly and creative motivations driving Lane’s engagement with the Nights. It then offers close readings of key illustrations to demonstrate how those motivations built colonial-era travel writing, surveying, and representation into the edition. The disparate material and cultural histories embedded into those images and their associated notes are then traced to assert the joint telling and picturing of the Nights as a worldbuilding practice. A way of being in—and of—the world, worlding and worldbuilding is understood here as an active, aggregative process, wherein the work of writing and visualizing fiction is indivisible from the construction and perception of reality, of life as we experience it in real-time. Interspersed with short narrative interludes of my own designed to ground the research in an application of this very process, this thesis argues that the speculative potential promised by the Nights is enacted through a process of worldbuilding, which mediates between conceptions of the real and the imaginary.&#13;
&#13;
&#13;
1 “Review: The Arabian Nights Entertainments: with Copious Notes by E.W. Lane,” The Athenaeum no. 572 (October 1838) 739.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of various methods of solving for flutter speeds</title>
<link href="https://hdl.handle.net/1721.1/144357" rel="alternate"/>
<author>
<name>Fotieo, George.</name>
</author>
<author>
<name>Cunningham, Herbert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/144357</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">An investigation of various methods of solving for flutter speeds
Fotieo, George.; Cunningham, Herbert J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1947; Bibliography: leaves 83-84.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soil amplification of SV and P waves.</title>
<link href="https://hdl.handle.net/1721.1/144355" rel="alternate"/>
<author>
<name>Jones, Thomas Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/144355</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">Soil amplification of SV and P waves.
Jones, Thomas Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1970; Bibliography: leaf 107.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer simulation of general systems of interlinked multistaged separators</title>
<link href="https://hdl.handle.net/1721.1/144353" rel="alternate"/>
<author>
<name>Chan, Willie K.</name>
</author>
<id>https://hdl.handle.net/1721.1/144353</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Computer simulation of general systems of interlinked multistaged separators
Chan, Willie K.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1982; Bibliography: leaves 59-60.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of large FDMA satellite systems for telephone services</title>
<link href="https://hdl.handle.net/1721.1/144352" rel="alternate"/>
<author>
<name>Omiya, Yoshitaka.</name>
</author>
<id>https://hdl.handle.net/1721.1/144352</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Analysis of large FDMA satellite systems for telephone services
Omiya, Yoshitaka.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1980; Bibliography: leaves 165-166.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information systems project approval : transaction processing systems vs management support systems</title>
<link href="https://hdl.handle.net/1721.1/144351" rel="alternate"/>
<author>
<name>Ong, Hong Kien.</name>
</author>
<id>https://hdl.handle.net/1721.1/144351</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Information systems project approval : transaction processing systems vs management support systems
Ong, Hong Kien.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980; Bibliography: leaf 85.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human resource planning and development in the modern hospital</title>
<link href="https://hdl.handle.net/1721.1/144350" rel="alternate"/>
<author>
<name>Overskei, Katherine Ann.</name>
</author>
<id>https://hdl.handle.net/1721.1/144350</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Human resource planning and development in the modern hospital
Overskei, Katherine Ann.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980; Bibliography: leaves 110-111.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Cryogenic Cooling of Toroidal Field Magnets for Nuclear Fusion Reactors</title>
<link href="https://hdl.handle.net/1721.1/144277" rel="alternate"/>
<author>
<name>Hamilton, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/144277</id>
<updated>2022-08-10T03:20:00Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">Analysis of Cryogenic Cooling of Toroidal Field Magnets for Nuclear Fusion Reactors
Hamilton, Benjamin
New developments in REBCO superconducting tape technology have enabled a new class of high-field tokamak fusion reactors. Higher critical temperatures on the order of 20 K allow the magnets to operate under significant thermal loads during the fusion process. As a case study, we look at the proposed SPARC toroidal field (TF) magnet design. We investigate the heat transfer and fluid dynamics inside the cooling channels. System-level issues are also investigated, including the impact of an insulated versus non-insulated design on cooling performance and cryodistribution architectures to provide coolant during fusion. These investigations guide the design of future high-field HTS magnets for use in tokamak reactors.
Thesis: S.M. in Mechanical Engineering, Massachusetts Institute of Technology, Department of Mechanical Engineering, February, 2021
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Institutional aspects of spot price coordinated electric power systems</title>
<link href="https://hdl.handle.net/1721.1/144192" rel="alternate"/>
<author>
<name>Mabey, Nicholas.</name>
</author>
<id>https://hdl.handle.net/1721.1/144192</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Institutional aspects of spot price coordinated electric power systems
Mabey, Nicholas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1993; Includes bibliographical references (p. 152-154).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Amplification of generalized surface waves.</title>
<link href="https://hdl.handle.net/1721.1/144186" rel="alternate"/>
<author>
<name>Michalopoulos, Evangelos.</name>
</author>
<id>https://hdl.handle.net/1721.1/144186</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Amplification of generalized surface waves.
Michalopoulos, Evangelos.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaf 139.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reverse Engineering the Intel Cascade Lake Mesh Interconnect</title>
<link href="https://hdl.handle.net/1721.1/143928" rel="alternate"/>
<author>
<name>Dai, Miles</name>
</author>
<id>https://hdl.handle.net/1721.1/143928</id>
<updated>2022-09-13T13:20:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Reverse Engineering the Intel Cascade Lake Mesh Interconnect
Dai, Miles
The rising core counts of multicore processors have driven the move to more scalable on-chip networks to allow for efficient communication between the cores, memory subsystem, and other processor peripherals. Intel’s latest line of Scalable processors introduced the mesh interconnect in the form of a two-dimensional network of rings that reduces the bandwidth and latency bottlenecks of prior ring interconnects. Much is still unknown about the mesh interconnect, and details from the official documentation are sparse. In this thesis, we perform the first in-depth reverse-engineering of the mesh interconnect on an Intel Cascade Lake server. We use performance counters to determine the layouts of the cores on the die and reverse-engineer the traffic scheduling policy for packet routing on the interconnect. In addition, we develop tools to generate and monitor cross-core cache coherence traffic. We then apply these tools to determine the precise conditions required for traffic contention on the network. This information is combined with publicly available documentation and prior work to provide an unprecedented understanding of the new Intel mesh interconnect. Further, this work paves the way for future investigation into the use of the network-on-chip as a potential hardware side channel.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Machine Learning Approach for Understanding and Discovering Topological Materials</title>
<link href="https://hdl.handle.net/1721.1/143926" rel="alternate"/>
<author>
<name>Ma, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/143926</id>
<updated>2022-07-22T03:25:04Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Machine Learning Approach for Understanding and Discovering Topological Materials
Ma, Andrew
Topological materials are of significant interest for both basic science and next-generation technological applications due to their unconventional electronic properties. The majority of currently-known topological materials have been discovered using methods that involve symmetry-based analysis of the quantum mechanical wavefunction. Here we use machine learning to develop a heuristic chemical rule, which diagnoses whether a material is topological using only its chemical formula. It is based on a notion that we term topogivity, which is a learned numerical value for each element that loosely captures the tendency of an element to form topological materials. Topogivities provide chemical insights for understanding topological materials. We implement a high-throughput procedure for discovering topological materials that are not diagnosable by symmetry indicators. The procedure is based on heuristic rule prediction followed by ab initio validation. The concept of topogivity represents a fundamentally new approach to the study of topological materials, and opens up new directions of research at the intersection of chemistry, machine learning, and band topology.
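The topogivity rule reduces to a sign test on a fraction-weighted sum of per-element learned values. The sketch below uses made-up topogivity numbers purely to illustrate the form of the rule; the actual learned values come from the machine learning model in the thesis:

```python
# Hypothetical topogivity values chosen for illustration only
TOPOGIVITY = {"Bi": 1.2, "Se": 0.3, "O": -1.0, "Si": -0.4}

def predicted_topological(composition):
    # Fraction-weighted sum of elemental topogivities; a positive score
    # diagnoses the material as topological from its chemical formula alone
    total = sum(composition.values())
    score = sum((n / total) * TOPOGIVITY[el] for el, n in composition.items())
    return score > 0.0
```

A positive score predicts a topological material; in the workflow described above, such heuristic predictions are then checked by ab initio calculation.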
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molten Alkali Metal Borate/Carbonate Salts for High Temperature CO₂ Capture and Electrochemical Conversion</title>
<link href="https://hdl.handle.net/1721.1/143922" rel="alternate"/>
<author>
<name>Nitzsche, Michael Philip</name>
</author>
<id>https://hdl.handle.net/1721.1/143922</id>
<updated>2022-07-22T03:21:06Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Molten Alkali Metal Borate/Carbonate Salts for High Temperature CO₂ Capture and Electrochemical Conversion
Nitzsche, Michael Philip
In recent years, alkali carbonate molten salts have been developed as a medium for electrochemical conversion of CO2 into value-added carbonaceous materials, including carbon nanotubes (CNTs). While electricity requirements are significant, the high economic value of CNTs makes these processes potentially appealing both as a means of carbon sequestration and as an alternative to current greenhouse gas-intensive CNT synthesis pathways. Prior work in this field has primarily focused on the effects of parameters such as alternate chemistries, electrolyte additives, and electrode composition on the achievable products and energetic demands. This research has worked towards commercial operation of electrochemical CNT synthesis. &#13;
&#13;
In this thesis, we present research advancing integration of electrochemical conversion of CO2 in molten salts into real chemical processes at moderate temperatures (500-650°C). First, we examine molten alkali borates as a novel hybrid sorbent for CO2 conversion. Alkali borates have been demonstrated as a promising high-temperature molten salt sorbent for acid gas separations, but prior studies have focused on regeneration through steam sweeping or thermal cycling. Here, we demonstrate that NaxB1-xO1.5-x with x=0.75 can be regenerated electrochemically, achieving CNT synthesis in the process. We determine an optimal mixture of borate/carbonate salts to maximize CO2 uptake and coulombic efficiency. We then examine novel materials for containment of borates and demonstrate the effects of varying cathode materials on electrolysis. We also investigate potential synergies between carbonate electrolysis and the alkaline thermal treatment (ATT) process for conversion of oceanic biomass and plastic wastes into hydrogen. We perform preliminary investigations into the possibility of an all-in-one gasification/electrolysis reactor, determining that the presence of seaweed ash inhibits CNT synthesis, but LDPE can be gasified without affecting the electrochemistry. Finally, we present a technoeconomic analysis of the ATT process, evaluating the relative merits of both the originally proposed slag-regenerated ATT process and an electrochemically mediated alternative. We determine that variable operating expenses are prohibitive in most cases for a slag-regenerated system, making electrochemical regeneration attractive if practical concerns can be addressed.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quadratic forms : harmonic transformations and gradient curves</title>
<link href="https://hdl.handle.net/1721.1/143665" rel="alternate"/>
<author>
<name>Oum, Jai Yong.</name>
</author>
<id>https://hdl.handle.net/1721.1/143665</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Quadratic forms : harmonic transformations and gradient curves
Oum, Jai Yong.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980; Bibliography: leaf 53.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Survival strategies for the architecture firm : meeting the business cycle challenge</title>
<link href="https://hdl.handle.net/1721.1/143664" rel="alternate"/>
<author>
<name>Oppenheimer, Stephen Robert.</name>
</author>
<id>https://hdl.handle.net/1721.1/143664</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Survival strategies for the architecture firm : meeting the business cycle challenge
Oppenheimer, Stephen Robert.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980; Bibliography: leaves 119-120.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pressure distribution on the human hip joint in vivo and selection of hemiarthroplasty</title>
<link href="https://hdl.handle.net/1721.1/143663" rel="alternate"/>
<author>
<name>Halcomb, Francis Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/143663</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Pressure distribution on the human hip joint in vivo and selection of hemiarthroplasty
Halcomb, Francis Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Bibliography: leaves 216-232.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A constitutive equation for carbon black filled elastomers</title>
<link href="https://hdl.handle.net/1721.1/143662" rel="alternate"/>
<author>
<name>Oswal, Ravinder Kumar.</name>
</author>
<id>https://hdl.handle.net/1721.1/143662</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">A constitutive equation for carbon black filled elastomers
Oswal, Ravinder Kumar.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biodegradable adhesives for orthopedic surgery</title>
<link href="https://hdl.handle.net/1721.1/143661" rel="alternate"/>
<author>
<name>Orgill, Dennis P.</name>
</author>
<id>https://hdl.handle.net/1721.1/143661</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Biodegradable adhesives for orthopedic surgery
Orgill, Dennis P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of a sequential decision process</title>
<link href="https://hdl.handle.net/1721.1/143624" rel="alternate"/>
<author>
<name>Colina Marie, Miguel L.</name>
</author>
<id>https://hdl.handle.net/1721.1/143624</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Optimization of a sequential decision process
Colina Marie, Miguel L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaf [54]).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A traffic model with service rate quadratic in occupancy.</title>
<link href="https://hdl.handle.net/1721.1/143623" rel="alternate"/>
<author>
<name>Seth, Asha.</name>
</author>
<id>https://hdl.handle.net/1721.1/143623</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">A traffic model with service rate quadratic in occupancy.
Seth, Asha.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vision and force controlled robot weld bead grinding system</title>
<link href="https://hdl.handle.net/1721.1/143621" rel="alternate"/>
<author>
<name>Todtenkopf, Alan Benjamin.</name>
</author>
<id>https://hdl.handle.net/1721.1/143621</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">Vision and force controlled robot weld bead grinding system
Todtenkopf, Alan Benjamin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1988; Bibliography: leaves 117-118.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecture and unit design of a capital cost optimized, household electrodialysis desalination device with continuous flow</title>
<link href="https://hdl.handle.net/1721.1/143619" rel="alternate"/>
<author>
<name>Varner, Hannah M. (Hannah Martin)</name>
</author>
<id>https://hdl.handle.net/1721.1/143619</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Architecture and unit design of a capital cost optimized, household electrodialysis desalination device with continuous flow
Varner, Hannah M. (Hannah Martin)
Improved solutions for water desalination are necessary in the urban Indian setting. Currently, a majority of point-of-use (POU) purifiers use reverse osmosis (RO) to desalinate household water. However, RO purifiers waste up to 70% of the feed water when used in the domestic context. Electrodialysis (ED) is a water-efficient alternative means of desalination that preserves &gt;80% of the feed as product water. Though it has been proposed previously and is used in industrial processes, ED has not been successfully implemented for domestic POU desalination in India or globally. This work aims to understand how ED systems can be modified to the POU scale and, critically, how they can be made cost-competitive with RO systems. We do this by proposing and then validating a new, direct-flow continuous ED architecture with differential flow rates (and pressures) between the diluate and concentrate channels. This architecture is made possible by a small ED stack, which can withstand a flow channel pressure imbalance. Using numerical system models, a system design was optimized for minimum capital cost, informed by design requirements for a characteristic Indian usage context. A prototype of this system was capable of a 37±6% reduction in feed water salinity (from 1500±20 to 940±140 mg/L) at ~90% water recovery and incorporated electrodialysis reversal and acid dosing as mechanisms to enhance reliability and prevent mineral scaling. If realized as a commercial POU product, ED has the potential within the Indian market to conserve &gt;200 million liters of water per day if adopted in place of low-recovery RO purifiers among even a small fraction of high-income Indian households.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from student-submitted PDF version of thesis.; Includes bibliographical references (pages 89-91).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process modelling of coated abrasive disk grinding as part of a robotic solution</title>
<link href="https://hdl.handle.net/1721.1/143516" rel="alternate"/>
<author>
<name>Ivers, Douglas Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/143516</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1985-01-01T00:00:00Z</published>
<summary type="text">Process modelling of coated abrasive disk grinding as part of a robotic solution
Ivers, Douglas Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1985; Bibliography: leaf 74.
</summary>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process modeling of weld bead grinding as part of a robotic solution</title>
<link href="https://hdl.handle.net/1721.1/143515" rel="alternate"/>
<author>
<name>Kenwood, Gontran Jean.</name>
</author>
<id>https://hdl.handle.net/1721.1/143515</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Process modeling of weld bead grinding as part of a robotic solution
Kenwood, Gontran Jean.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Includes bibliographical references.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information on cost control for the Japanese construction industry</title>
<link href="https://hdl.handle.net/1721.1/143514" rel="alternate"/>
<author>
<name>Hamano, Yoshiyuki.</name>
</author>
<id>https://hdl.handle.net/1721.1/143514</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Information on cost control for the Japanese construction industry
Hamano, Yoshiyuki.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980; Includes bibliographical references (leaves 80-81).
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating convergence of a capacity planning model using Generalized Benders's Decomposition</title>
<link href="https://hdl.handle.net/1721.1/143510" rel="alternate"/>
<author>
<name>Habib, Frances Annette.</name>
</author>
<id>https://hdl.handle.net/1721.1/143510</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Investigating convergence of a capacity planning model using Generalized Benders's Decomposition
Habib, Frances Annette.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1980; Bibliography: leaves 85-87.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Facilities decision support</title>
<link href="https://hdl.handle.net/1721.1/143508" rel="alternate"/>
<author>
<name>Hales, H. Lee, 1948-</name>
</author>
<author>
<name>Jones, Harvey Cooper.</name>
</author>
<id>https://hdl.handle.net/1721.1/143508</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Facilities decision support
Hales, H. Lee, 1948-; Jones, Harvey Cooper.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooling of U.S.C.G. Reliance-B Class Cutter engine rooms utilizing recovered heat from propulsion machinery</title>
<link href="https://hdl.handle.net/1721.1/143507" rel="alternate"/>
<author>
<name>Halsch, Joseph A.</name>
</author>
<id>https://hdl.handle.net/1721.1/143507</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Cooling of U.S.C.G. Reliance-B Class Cutter engine rooms utilizing recovered heat from propulsion machinery
Halsch, Joseph A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of the Fermi surfaces of graphite intercalation compounds using Shubnikov de Haas effect</title>
<link href="https://hdl.handle.net/1721.1/143506" rel="alternate"/>
<author>
<name>Hakimi, Farhad.</name>
</author>
<id>https://hdl.handle.net/1721.1/143506</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Study of the Fermi surfaces of graphite intercalation compounds using Shubnikov de Haas effect
Hakimi, Farhad.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electromagnetic fields of a dipole submerged in a two-layer conducting medium in the ELF regime</title>
<link href="https://hdl.handle.net/1721.1/143505" rel="alternate"/>
<author>
<name>Habashy, Tarek Mohamed.</name>
</author>
<id>https://hdl.handle.net/1721.1/143505</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Electromagnetic fields of a dipole submerged in a two-layer conducting medium in the ELF regime
Habashy, Tarek Mohamed.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A selective automatic phonograph</title>
<link href="https://hdl.handle.net/1721.1/143502" rel="alternate"/>
<author>
<name>Dow, Irving M.</name>
</author>
<id>https://hdl.handle.net/1721.1/143502</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A selective automatic phonograph
Dow, Irving M.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Post to Policy: Using Social Media Data to Inform Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/143420" rel="alternate"/>
<author>
<name>Guetta-Jeanrenaud, Nicolas</name>
</author>
<id>https://hdl.handle.net/1721.1/143420</id>
<updated>2022-06-16T03:42:24Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">From Post to Policy: Using Social Media Data to Inform Decision-Making
Guetta-Jeanrenaud, Nicolas
While researchers and policy-makers traditionally rely on survey methods as they seek to understand preferences, user-generated data on social media—coupled with advanced methods of Natural Language Processing—can, in certain cases, serve as a valid alternative. In this thesis, I introduce a novel data set of global social media content and present a multilingual algorithmic method of text analysis which provides valuable insights into population well-being and public opinion at a global scale. I conduct three validation tests to assess the extent to which metrics computed from social media data are consistent with more traditional methods of measurement such as census population counts, well-being surveys, and political polls. I go on to present two case studies which rely on social media-based metrics. In the first, we evaluate the effect of temperatures on subjective well-being worldwide. We find a non-linear, inverse U-shaped relationship and estimate high-temperature damages in a large selection of countries. In the second, we connect subjective perception of climate events with real estate market outcomes. We find that while objective temperature stress is consistently associated with lower location value, regions where sentiment is most sensitive to climate discomfort are also the ones where these shocks are the strongest. Both empirical studies confirm the strong potential of social media data for policy-makers and researchers alike.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Labor Market Design to Reduce Labor Abuse in the Global Supply Chain in Southeast Asia</title>
<link href="https://hdl.handle.net/1721.1/143419" rel="alternate"/>
<author>
<name>Liu, Boyu</name>
</author>
<id>https://hdl.handle.net/1721.1/143419</id>
<updated>2022-06-16T03:28:29Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Improving Labor Market Design to Reduce Labor Abuse in the Global Supply Chain in Southeast Asia
Liu, Boyu
Forced labor and labor abuse have become a growing concern in the global supply chain. Evidence suggests that more than 25 million people are victims of forced labor, and that much of this problem stems from the recruitment process. In collaboration with the Issara Institute (Issara), a non-profit organization based in the US and Thailand, this work addresses the issue in two parts. First, we evaluate the causal relationship between inefficiency in labor recruitment and labor abuse outcomes to provide evidence-based policy suggestions. Second, we design a joint matching and learning algorithm for the recruitment platform built by Issara, named "Golden Dreams", that aims to make it easier for workers and recruiters to find suitable matches, while using data generated by this process to estimate fair labor practices by employers. Our goal is to create employer ratings that are truth-revealing to help workers make more informed choices, and help employers meet their labor demands faster and mitigate labor risk by monitoring their labor practices on the frontline.&#13;
&#13;
Leveraging 2018-2020 datasets on Myanmar-Thailand labor recruitment and worker-reported abuses, we find that an inability to efficiently alleviate labor shortages significantly worsens worker-reported abuses; an increase of one standard deviation in low-skilled labor shortages leads to a 34.5% or higher increase in worker-reported abuse in the following 2-4 weeks. Labor markets that are stressed are also simultaneously more prone to unexpected shortages and abuse. Reducing frictions in recruitment, and strengthening worker reporting mechanisms that provide near-real-time information about workplace labor abuse, are important avenues to eliminating forced labor. &#13;
&#13;
As such, we collaborate with Issara on the design of a labor market to address this friction. The matching-while-learning component builds upon existing literature at the intersection of computer science and economics. The traditional market design literature assumes known preferences and perfect information, and the classical multi-armed bandits literature does not deal with market settings with collision of preferences and resource constraints. Developing a joint algorithm that satisfies standard axioms in a market setting, yet is able to learn from historical data and leverage that learning under uncertainty to inform future actions, requires an interdisciplinary approach. We propose such a combined approach. We then discuss practical considerations for putting it into practice, as well as policy and social concerns.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transitioning Transit : Modeling the Electrification of an Intracity Bus System</title>
<link href="https://hdl.handle.net/1721.1/143413" rel="alternate"/>
<author>
<name>Sreenath, Ragini</name>
</author>
<id>https://hdl.handle.net/1721.1/143413</id>
<updated>2022-06-16T03:34:52Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Transitioning Transit : Modeling the Electrification of an Intracity Bus System
Sreenath, Ragini
In the past few years, there has been a significant push towards the electrification of transportation as an important climate change mitigation strategy, especially given that transportation contributes over 15% of greenhouse gas emissions. While much of the present research focuses on the electrification of the private vehicle fleet, another segment of transportation that merits attention is public transit. In many developing countries, public transit buses, while a popular mode of commuting, are also major contributors to air pollution. This includes particulate matter pollution that poses very significant health risks. However, there are challenges that limit the adoption of electric buses, including limited driving range, high battery costs and, most importantly, developing charging infrastructure best suited to meet travel needs. This thesis seeks to begin addressing these challenges by developing a transit bus electrification model that can calculate the energy needs of a city bus system with minimal operational data and uses the network properties of the system to identify an optimal cost solution for operating an electric bus fleet. It also seeks to understand the factors that drive this transition. The model is applied to the city of Delhi’s transportation system, which further highlights the importance of making route-specific decisions when transitioning to electric buses. The model developed in this thesis may enable policymakers and transit authorities to make informed, data-driven decisions, as they proceed to electrify their public transportation systems.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solving Machine Learning Problems</title>
<link href="https://hdl.handle.net/1721.1/143412" rel="alternate"/>
<author>
<name>Tran, Sunny</name>
</author>
<id>https://hdl.handle.net/1721.1/143412</id>
<updated>2022-06-16T03:45:42Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Solving Machine Learning Problems
Tran, Sunny
Can a machine learn Machine Learning? This work trains a machine learning model to solve machine learning problems from a University undergraduate level course. We generate a new training set of questions and answers consisting of course exercises, homework, and quiz questions from MIT’s 6.036 Introduction to Machine Learning course and train a machine learning model to answer these questions. Our system demonstrates an overall accuracy of 96% for open-response questions and 97% for multiple-choice questions, compared with MIT students’ average of 93%, achieving grade A performance in the course, all in real-time. Questions cover all 12 topics taught in the course, excluding coding questions or questions with images. Topics include: (i) basic machine learning principles; (ii) perceptrons; (iii) feature extraction and selection; (iv) logistic regression; (v) regression; (vi) neural networks; (vii) advanced neural networks; (viii) convolutional neural networks; (ix) recurrent neural networks; (x) state machines and MDPs; (xi) reinforcement learning; and (xii) decision trees. Our system uses Transformer models within an encoder-decoder architecture with graph and tree representations. An important aspect of our approach is a data-augmentation scheme for generating new example problems. We also train a machine learning model to generate problem hints. Thus, our system automatically generates new questions across topics, answers both open-response questions and multiple-choice questions, classifies problems, and generates problem hints, pushing the envelope of AI for STEM education.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Under (De)Construction: Speculations on Building and Unbuilding</title>
<link href="https://hdl.handle.net/1721.1/143411" rel="alternate"/>
<author>
<name>Wood, Ellen</name>
</author>
<id>https://hdl.handle.net/1721.1/143411</id>
<updated>2022-06-16T03:15:19Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Under (De)Construction: Speculations on Building and Unbuilding
Wood, Ellen
For each pallet of Spanish glass or Pennsylvania steel that arrives at a Manhattan block under construction, a truckload of rubbled concrete and mangled steel debris containing the remnants of a pre-existing structure is hauled away. The building industry in New York City is a machine for material exchange: constantly importing materials in for the construction of new structures and exporting materials out in the form of waste, often to meet their ends in out-of-state landfills or to be recycled down as low-grade aggregates. So despite its seemingly reliable solidity, New York City’s built environment can be characterized as much by its willful impermanence as it can by its staggering monumentality. Buildings rise and then fall over a matter of decades, often reaching their premature obsolescence in the face of shifting ownership, real-estate speculation, and amendments in planning policy. Blocks are continuously transformed to make way for new developments, often soaring higher than their individual predecessors. And in a city whose grid reached maximum capacity 70 years ago, nearly every new act of construction is preceded by acts of demolition.&#13;
&#13;
As key stakeholders in the processes of building, architects do not often take part in the processes of unbuilding. This thesis speculates on a scenario in which architects take agency over the other end of a building’s life: its demolition. In doing so, salvaged and rubbled materials are seen as resources for building, rather than as waste. Working within a city whose built environment has historically embraced radicality and innovation, this thesis imagines new relationships between these materials and the processes of architecture within the urban environment.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unified Documentation and Information Retrieval for Electronic Health Records</title>
<link href="https://hdl.handle.net/1721.1/143410" rel="alternate"/>
<author>
<name>Murray, Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/143410</id>
<updated>2022-06-16T03:59:51Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Unified Documentation and Information Retrieval for Electronic Health Records
Murray, Luke
Clinicians in the Emergency Department want to efficiently provide and document high-quality care but cannot, mainly due to challenges exacerbated by Electronic Health Records. For each patient, clinicians have to review the patient history, perform a physical exam, synthesize findings into a differential diagnosis and care plan; coordinate care with other specialists; order and document tests, labs, procedures, and medications; and finally discharge the patient. Existing EHRs have poor usability, time-consuming data entry, and fragmented information exploration and documentation interfaces. As a result, clinicians struggle to synthesize the patient’s history and care plan into a concise and clear data-driven narrative. Additionally, in an Emergency Department environment, clinicians often see 35 patients in a single shift and generally have no prior knowledge of any patient’s medical record. With limited time, clinicians often have to satisfice their information needs and synthesis, potentially leading to errors, harm, or non-optimal care.&#13;
&#13;
Clinical tools must enable rapid contextual access to the patient’s medical record with techniques that do not disrupt existing workflows to better support information exploration and documentation. This thesis outlines the development of such a tool, MedKnowts. MedKnowts is an integrated note-taking editor and information retrieval system which unifies the documentation and search process and provides concise synthesized concept-oriented slices of the patient’s medical record. MedKnowts automatically captures structured data while still allowing users the flexibility of natural language. MedKnowts leverages this structure to enable easier parsing of long notes, auto-populated text, and proactive information retrieval, easing the documentation burden.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>People-Centered Planning: A Case Study in Virtual Participatory Design with Chicago Residents</title>
<link href="https://hdl.handle.net/1721.1/143398" rel="alternate"/>
<author>
<name>Turner, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/143398</id>
<updated>2022-06-16T03:06:00Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">People-Centered Planning: A Case Study in Virtual Participatory Design with Chicago Residents
Turner, Christian
This thesis demonstrates how neighborhood planning decisions can move from the purview of developers, engineers, planners and politicians to community members themselves through participatory design. Amid the stay-at-home orders of the COVID-19 pandemic, I explore how digital tools expand the role of citizen-designers in planning’s engagement processes. Partnering with a community development center in West Humboldt Park, Chicago, we conducted a series of online design charrettes and developed an urban design proposal with staff and residents in the summer of 2021. I use learnings from this virtual co-design process to discuss ways site design can allow community participants to feel heard and represented in planning spaces in their community, revealing how the equity of our spaces relates to the size and diversity of the group designing them. I postulate how participation processes themselves can be designed to facilitate and increase the redistribution of citizen power, comparing this participatory process with others from the field of planning. I find justice-oriented planning practices must incorporate democratic design, equity, transparency and replicability in order to improve neighborhood-level planning in Chicago.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Optimization of Regeneratively Cooled Rotating Detonation Rocket Engines</title>
<link href="https://hdl.handle.net/1721.1/143397" rel="alternate"/>
<author>
<name>Jorgensen, Eric D.</name>
</author>
<id>https://hdl.handle.net/1721.1/143397</id>
<updated>2022-06-16T03:19:09Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Structural Optimization of Regeneratively Cooled Rotating Detonation Rocket Engines
Jorgensen, Eric D.
Combustors in rotating detonation rocket engines (RDREs) must withstand prolonged exposure to high heat fluxes (&gt;10 MW/m²) and ultrasonic-frequency detonative loading. Regenerative cooling is being considered for thermal management in RDREs, but there is concern that the cooling channels may fail by fatigue due to the detonative loads. In the present work, a structural optimization protocol for regeneratively cooled RDREs is developed and then used to determine optimal cooling channel geometries which minimize thermomechanical stresses that might drive such failures. The analysis considers thermal stresses from temperature gradients through the combustor wall, bending stresses due to cooling channel pressurization, and dynamic stresses from detonative loading. To calculate the dynamic stresses, the combustor hot wall is approximated as a beam on an elastic foundation, where the stiffness of the elastic foundation is a function of the cooling channel geometry and the properties of the combustor material. The structural optimization framework is applied to an exemplary RP2/GOX RDRE combustor, and optimal designs are determined as a function of propellant flow rate for several candidate combustor materials – GRCop-84, IN718, W-25Re, Nb-C103. The deviatoric stress in the hot wall increases monotonically with propellant flow rate. In all cases onset of yielding limits the maximum achievable flow rate. W-25Re can achieve the highest propellant flow rate of the materials considered here, owing to its combination of high thermal conductivity and high strength at elevated temperatures. While thermal stresses dominate most of the design space for each material, dynamic stresses become significant when the detonation wave speed approaches the elastic wave speed of the hot wall. This effect is important in Nb-C103, GRCop-84, and W-25Re combustors, since the detonation wave speed matches the elastic wave speed for cooling channel designs that minimize static stresses. 
These results highlight the importance of dynamic stresses in regeneratively cooled RDREs, as combustor designs which minimize static stresses can sometimes amplify dynamic stresses, mitigating creep but promoting fatigue.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reinforcement Learning in Time-Varying Systems: an Empirical Study</title>
<link href="https://hdl.handle.net/1721.1/143390" rel="alternate"/>
<author>
<name>Hamadanian, Pouya</name>
</author>
<id>https://hdl.handle.net/1721.1/143390</id>
<updated>2022-06-16T03:38:27Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Reinforcement Learning in Time-Varying Systems: an Empirical Study
Hamadanian, Pouya
Recent research has turned to Reinforcement Learning (RL) to solve challenging decision problems, as an alternative to hand-tuned heuristics. RL can learn good policies without the need to model the environment's dynamics. Despite this promise, RL remains an impractical solution for many real-world systems problems. A particularly challenging case occurs when the environment changes over time, i.e., it exhibits non-stationarity. In this work, we characterize the challenges introduced by non-stationarity and develop a framework for addressing them to train RL agents in live systems. Such agents must explore and learn new environments without hurting the system's performance, and remember them over time. To this end, our framework (1) identifies different environments encountered by the live system, (2) explores and trains a separate expert policy for each environment, and (3) employs safeguards to protect the system's performance. We apply our framework to two systems problems, straggler mitigation and adaptive video streaming, and evaluate it against a variety of alternative approaches using real-world and synthetic data. We show that each component of our framework is necessary to cope with non-stationarity.
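The three framework components can be sketched in miniature. The following toy agent (all class names, the environment classifier, and the thresholds are invented for illustration, not taken from the thesis) keeps one expert policy per detected environment and falls back to a safe default when recent reward degrades:

```python
# Toy sketch of the framework's three components: (1) identify the current
# environment, (2) keep a separate expert per environment, and
# (3) safeguard by falling back to a default policy when recent reward
# degrades. All names and thresholds are illustrative, not from the thesis.

class NonStationaryAgent:
    def __init__(self, safe_policy, reward_floor):
        self.experts = {}              # environment id -> expert policy
        self.safe_policy = safe_policy
        self.reward_floor = reward_floor
        self.recent_rewards = []

    def identify_env(self, observation):
        # Stand-in for a learned environment classifier: bucket a summary
        # statistic of the observation.
        return round(sum(observation) / len(observation))

    def act(self, observation):
        env = self.identify_env(observation)
        expert = self.experts.setdefault(env, self.safe_policy)
        window = self.recent_rewards[-5:]
        # Safeguard: if recent reward fell below the floor, play it safe.
        if window and self.reward_floor > sum(window) / len(window):
            return self.safe_policy(observation)
        return expert(observation)

    def record(self, reward):
        self.recent_rewards.append(reward)

agent = NonStationaryAgent(safe_policy=lambda obs: 0, reward_floor=0.0)
action = agent.act([1.0, 2.0, 3.0])   # expert for env 2 is created lazily
```

In a real deployment the classifier and experts would be learned models; the sketch only shows the dispatch-and-safeguard structure.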
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Economic Development Practitioner’s Guide to Childcare</title>
<link href="https://hdl.handle.net/1721.1/143387" rel="alternate"/>
<author>
<name>Doshi, Neha Jayesh</name>
</author>
<id>https://hdl.handle.net/1721.1/143387</id>
<updated>2022-06-16T03:57:52Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">An Economic Development Practitioner’s Guide to Childcare
Doshi, Neha Jayesh
Childcare is critical infrastructure that supports working families, businesses, and the economy, in addition to providing important care and development opportunities for infants and young children. In the United States, infant and toddler childcare is largely delivered through the private market and is subject to heavy regulation at the federal, state, and local levels to ensure the safety and well-being of young children. Unlike other critical infrastructure such as public schools and roads, considerations for childcare provision are not permanently embedded into long-term city plans in most American municipalities. Support systems to reduce the numerous supply- and demand-side challenges are insufficient. Funds available to support families and providers are limited, and resources to help them navigate the complex processes are sparse. With limited external support, families and childcare business owners both bear the consequences. Families face limited access to and supply of care as well as high fees. Childcare business owners struggle with high business costs, complex processes, and regulatory burden. Economic development organizations can play a central role in redressing the challenges faced by the childcare industry by leveraging their physical assets and extending their industry and workforce development initiatives to include providers. This thesis focuses on the case of New York City, where the newly elected mayor has listed childcare reform as a key city priority. It considers the pathways through which the city's major economic development organization, the NYC Economic Development Corporation (NYCEDC), can work to permanently integrate childcare considerations into its internal planning and programming, along with ways in which it can collaborate with other agencies to advocate for its integration into the city’s broader, longer-term strategic plan.
It offers several opportunities that the EDC can consider and case studies it can reference to understand how cities and counties across the United States have worked to support and strengthen local childcare systems.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Development and Deployment of Sensors and Algorithms for the Mobile Monitoring of Urban Surface Water Quality</title>
<link href="https://hdl.handle.net/1721.1/143381" rel="alternate"/>
<author>
<name>Meyers, Drew</name>
</author>
<id>https://hdl.handle.net/1721.1/143381</id>
<updated>2022-06-16T03:12:10Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">The Development and Deployment of Sensors and Algorithms for the Mobile Monitoring of Urban Surface Water Quality
Meyers, Drew
Although water quality has improved extensively over the last decade, recreational uses of the canal network in Amsterdam are limited by variations in water quality associated with stormwater runoff and episodic harmful algal blooms. The current systems for monitoring water quality are based principally on a stationary network of sampling points and offline testing. There have also been a number of programs making online measurements from a mobile platform/vessel, which have focused principally on non-specific indicators of water quality (pH, conductivity, dissolved oxygen, etc.). This thesis describes the development and deployment of sensors and algorithms for monitoring algal concentrations and identifying algal classes, based on their fluorescing pigments, in the urban surface water of Amsterdam.&#13;
&#13;
In the first chapter, we demonstrate that by using only a single patrol vessel, we are able to observe spatiotemporal heterogeneity of algal and chemical water quality within the Amsterdam canal network. The data provide encouraging evidence that opportunistic measurements from a small number of mobile platforms can enable high-resolution mapping and can be used to improve modeling and control of water quality across the city.&#13;
&#13;
In the second and third chapters we present the development of standardized and reproducible data pipelines, open-source software tools, and algorithms for analyzing fluorescence excitation emission matrices. The data pipeline and algorithms are employed to investigate the identification and quantification of algae in complex mixtures.&#13;
&#13;
The final chapter describes the development and characterization of a field-deployable high-resolution spectrofluorometer for the real-time mobile monitoring of algae. The chapter provides encouraging evidence that such an instrument can begin moving from the lab to the field in order to investigate real-time phenomena in situ. This move will provide limnologists and oceanographers with a new tool for investigating questions of aquatic ecology and assist water utilities in managing water quality.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Models for Visuomotor Feedback Control in Object Pile Manipulation</title>
<link href="https://hdl.handle.net/1721.1/143378" rel="alternate"/>
<author>
<name>Suh, Hyung Ju Terry</name>
</author>
<id>https://hdl.handle.net/1721.1/143378</id>
<updated>2022-06-16T03:08:08Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Predictive Models for Visuomotor Feedback Control in Object Pile Manipulation
Suh, Hyung Ju Terry
What is the “state” of a pile of objects? Collections of standard Lagrangian states ideally describe a pile, but it is impractical to estimate Lagrangian states directly from images to use for visuomotor feedback control. Given this burden of state estimation, is there a practical alternative representation that lies closer to our observations? In addition, how can we build predictive models over such representations that can be useful for their task-free generality? In the first chapter of this thesis, we investigate using the image observation directly as state, and compare different models that can be useful over this space of representations. Surprisingly, we find that completely linear models that describe the evolution of images outperform naive deep models, and perform on par with models that work over particle-space representations. In the next chapter, we analyze and explain this inductive bias of linear models by describing pixel space as a space of measures, and show limitations of this approach outside of object pile manipulation. In the final chapter of this thesis, we present a more general solution to image-based control based on performing model-based Reinforcement Learning on the sufficient statistics of a task, which we call Approximate Information States (AIS). We demonstrate that when the model does not have sufficient inductive bias, model-based reinforcement learning is prone to two important pitfalls: distribution shift and optimization exploiting model error. These problems are tackled through online learning and risk-aware control that penalizes the variance of the model ensemble.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Socioeconomic Implication of the Circular Economy: A Preliminary Study of The Impact on Employment and Local Economy in the United States</title>
<link href="https://hdl.handle.net/1721.1/143369" rel="alternate"/>
<author>
<name>Lin, Wei-Ching</name>
</author>
<id>https://hdl.handle.net/1721.1/143369</id>
<updated>2022-06-16T03:51:55Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Socioeconomic Implication of the Circular Economy: A Preliminary Study of The Impact on Employment and Local Economy in the United States
Lin, Wei-Ching
A circular economy is an economic system with the goal of minimizing resource input, waste, emissions, and energy leakage. It has gained momentum in the last decade as an increasing number of businesses have adopted circular strategies in their sustainable development roadmaps. Studies of the circular economy's benefits have been dominated by economic and resource efficiency, while studies of its impact on jobs and employment are scarce and often limited to ambiguous perspectives on job creation. Employment, however, has a direct impact on social sustainability and therefore needs to be reviewed through factors such as employment opportunities, skills, wages, earning quality, job security, workplace risk, work schedule, and social dialogue. This study examines the two sectors connected to the circular economy with the highest potential for employment growth: 1) the waste sector, and 2) the reuse sector. The impacts of the circular economy on these areas were examined through semi-structured interviews with experts and stakeholders spanning businesses, foundations, municipalities, academia, and labor unions in the United States. In addition, this study examines how COVID-19 has changed the workforce in the last two years. The findings reveal the issues and vulnerabilities of work that are most relevant to the circular economy. In the waste sector, workers put in long hours in high-risk environments. In the reuse sector, the work is often manual and involves informal workers who are less protected by labor law. The study also shows that COVID-19 has changed the workforce drastically.
The vulnerability of the workforce underscores the need for institutional actors to approach the circular economy systemically, with the goals of achieving strong sustainability and establishing policymaking and governance frameworks that keep the transition to zero waste and the circular economy resilient and equitable for society.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Profits from Real Estate Leasing: Flexible Strategies based on Market Conditions</title>
<link href="https://hdl.handle.net/1721.1/143368" rel="alternate"/>
<author>
<name>Raazi, Cassie Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/143368</id>
<updated>2022-06-16T03:57:34Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Increasing Profits from Real Estate Leasing: Flexible Strategies based on Market Conditions
Raazi, Cassie Ann
The increasing use of modern data analytics is changing decision-making processes in the commercial real estate industry. Advances in data analytics present opportunities for commercial real estate owners and managers to increase profits by integrating market cycles into leasing strategy. This research presents a model that exploits readily available data to simulate market volatility and uncertainty, inform leasing strategy, and support better decisions about the lease durations offered. We compare the results of applying three different leasing strategies: consistent 5-year, consistent 10-year, and variable based on an understanding of relative positioning within the market cycle. For comparative analysis of these strategies, Monte Carlo simulation via Julia is used to run 10,000 trials for each strategy, calculating the range of outcomes that could occur with each leasing strategy over the life of an asset. Of the three strategies examined, leasing with market knowledge yields the highest profits. The results suggest that incorporating knowledge of relative position within the market cycle to determine optimal lease length creates opportunities for increased profits from leasing. Given the increasing availability of real estate data, future research is directed at exploring different lease duration strategies and at feeding real data into the simulation to build better models.
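The comparison described above can be sketched with a small Monte Carlo loop. The thesis implements its 10,000-trial simulation in Julia; the following is a hedged Python analogue with invented rent dynamics and lease economics, intended only to show the simulation structure:

```python
# Hedged Monte Carlo sketch of comparing lease-duration strategies.
# Rent follows a toy mean-reverting cycle; every parameter is invented
# for illustration (the thesis runs 10,000 Julia trials per strategy).
import random

def simulate(strategy, years=30, trials=2000, seed=0):
    rng = random.Random(seed)
    profits = []
    for _ in range(trials):
        rent, mean_rent, profit, t = 100.0, 100.0, 0.0, 0
        while years > t:
            term = strategy(rent, mean_rent)       # lease length chosen now
            locked = min(term, years - t)
            profit += rent * locked                # rent locked for the term
            for _ in range(locked):                # market evolves meanwhile
                rent += 0.3 * (mean_rent - rent) + rng.gauss(0, 5)
            t += locked
        profits.append(profit)
    return sum(profits) / len(profits)

five_year = lambda rent, mean: 5
ten_year  = lambda rent, mean: 10
# "Market knowledge": lock in a long lease when rent sits above its mean.
adaptive  = lambda rent, mean: 10 if rent > mean else 5

results = {name: simulate(s) for name, s in
           [("5yr", five_year), ("10yr", ten_year), ("adaptive", adaptive)]}
```

The adaptive rule here is a deliberately crude proxy for "relative positioning within the market cycle"; the point is only the structure of trial, strategy, and aggregate comparison.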
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Cardinality Estimates are More Equal than Others</title>
<link href="https://hdl.handle.net/1721.1/143367" rel="alternate"/>
<author>
<name>Negi, Parimarjan</name>
</author>
<id>https://hdl.handle.net/1721.1/143367</id>
<updated>2022-06-16T03:00:47Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Some Cardinality Estimates are More Equal than Others
Negi, Parimarjan
Recently there has been significant interest in using machine learning to improve the accuracy of cardinality estimation. This work has focused on improving average estimation error, but not all estimates matter equally for downstream tasks like query optimization. Since learned models inevitably make mistakes, the goal should be to improve the estimates that make the biggest difference to an optimizer. We introduce a new loss function, Flow-Loss, for learning cardinality estimation models. Flow-Loss approximates the optimizer’s cost model and search algorithm with analytical functions, which it uses to optimize explicitly for better query plans. At the heart of Flow-Loss is a reduction of query optimization to a flow routing problem on a certain “plan graph”, in which different paths correspond to different query plans. To evaluate our approach, we introduce the Cardinality Estimation Benchmark (CEB), which contains the ground truth cardinalities for sub-plans of over 16K queries from 21 templates with up to 15 joins. We show that across different architectures and databases, a model trained with Flow-Loss improves plan costs and query runtimes despite having worse estimation accuracy than a model trained with Q-Error. When the test set queries closely match the training queries, models trained with both loss functions perform well. However, the Q-Error-trained model degrades significantly when evaluated on slightly different queries (e.g., similar but unseen query templates), while the Flow-Loss-trained model generalizes better to such situations, achieving 4–8× better 99th percentile runtimes on unseen templates with the same model architecture and training data.
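The contrast with Q-Error can be made concrete. Q-Error is the standard accuracy metric max(est/true, true/est); a plan-aware loss instead weights each estimate by its effect on plan cost. A minimal sketch of the two objectives (the cost weighting below is an invented stand-in, not the Flow-Loss formulation itself):

```python
# Q-Error treats all sub-plan estimates equally; a cost-aware loss weights
# each estimate by its impact on the optimizer's plan cost. The weighting
# below is an invented stand-in, not the actual Flow-Loss definition.

def q_error(est, true):
    return max(est / true, true / est)

def cost_weighted_error(est, true, plan_cost_weight):
    # Errors on sub-plans that dominate plan cost matter more.
    return plan_cost_weight * abs(est - true) / true

# Two sub-plans with identical Q-Error but very different importance:
assert q_error(10, 100) == q_error(1_000_000, 100_000) == 10.0
cheap  = cost_weighted_error(10, 100, plan_cost_weight=0.01)
costly = cost_weighted_error(1_000_000, 100_000, plan_cost_weight=10.0)
assert costly > cheap   # the cost-aware loss distinguishes them
```

This is the intuition behind "not all estimates matter equally": two estimates with the same Q-Error can have wildly different consequences for the chosen plan.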
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Holistic Framework for Designing Mental Health Technology</title>
<link href="https://hdl.handle.net/1721.1/143358" rel="alternate"/>
<author>
<name>Liu, Yuanbo</name>
</author>
<id>https://hdl.handle.net/1721.1/143358</id>
<updated>2022-06-16T03:48:24Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A Holistic Framework for Designing Mental Health Technology
Liu, Yuanbo
The global burden of mental disorders accounts for an estimated 32.4% of years of healthy life lost due to disability. By 2030, mental disorders are forecast to become the leading cause of morbidity and mortality in the world. Mental health technologies (MHT) based on the internet, mobile apps, wearable sensors and artificial intelligence have great potential to expand the capacity of mental health resources, improve treatment efficacies and even revolutionize mental health care. However, the real-world adoption and engagement of MHT have been underwhelming.&#13;
&#13;
In order to create MHT that can meet real patient needs, keep users engaged in the long run, while meeting the requirements of external stakeholders such as regulatory bodies, clinical systems and healthcare payers, I propose a holistic framework for designing MHT that integrates patient-centered design and technology life cycle design.&#13;
&#13;
This thesis also uses designing for depression patients in China as a case study to illustrate, on a high level, how the framework can be used to guide the design process in a specific real-world setting.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and Predicting Tasks at Risk in Team Task Management</title>
<link href="https://hdl.handle.net/1721.1/143357" rel="alternate"/>
<author>
<name>Soliman, Nouran</name>
</author>
<id>https://hdl.handle.net/1721.1/143357</id>
<updated>2022-06-16T03:15:50Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Characterizing and Predicting Tasks at Risk in Team Task Management
Soliman, Nouran
Collaborative project management involves interacting with various tasks in a shared planning space where members add, assign, complete, and edit project-related tasks to maintain a shared view of the project’s status. This process directly impacts how individual team members select, prioritize, and organize the tasks to focus on each day. However, such coordination and task prioritization can become increasingly challenging for individuals working on multiple projects with big teams. Accordingly, tasks can become at risk of not being completed on time, leading to personal or team losses in many situations. To support task-doers in completing their tasks, we conducted a mixed-methods study focusing on Microsoft Planner, a collaborative project management tool, to understand how users manage their tasks in a team setting, what challenges they encounter, and their preferred solutions. Based on the findings from a qualitative survey with 151 participants and our Planner log data analysis, we developed a model that predicts tasks at risk using various task characteristics and user actions. Our experimental results suggest that tasks at risk can be classified with high effectiveness (accuracy of 89%). Our work provides novel insights into how users manage their tasks in team task management tools, what challenges they face, how they perceive a task at risk, and how tasks at risk can be modeled. Such an application can significantly improve the user experience in these tools by providing a personal assistant that helps users prioritize their tasks and pay attention to critical situations.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CausalSim: Toward A Causal Data-Driven Simulator For&#13;
Network Protocols</title>
<link href="https://hdl.handle.net/1721.1/143352" rel="alternate"/>
<author>
<name>Nasr-Esfahany, Arash</name>
</author>
<id>https://hdl.handle.net/1721.1/143352</id>
<updated>2022-06-16T03:44:27Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">CausalSim: Toward A Causal Data-Driven Simulator For&#13;
Network Protocols
Nasr-Esfahany, Arash
Evaluating the real-world performance of network protocols is challenging. Randomized control trials (RCT) are expensive and inaccessible to most researchers, while expert-designed simulators fail to capture complex behaviors in real networks. We present CausalSim, a data-driven simulator for network protocols that addresses this challenge. Learning network behavior from observational data is complicated due to the bias introduced by the protocols used during data collection. CausalSim uses traces from an initial RCT under a set of protocols to learn a causal network model, effectively removing the biases present in the data. Using this model, CausalSim can then simulate any protocol over the same traces (i.e., for counterfactual predictions). Key to CausalSim is the novel use of adversarial neural network training that exploits distributional invariances that are present due to the training data coming from an RCT. Our extensive evaluation of CausalSim on both real and synthetic datasets and two use cases, including more than nine months of real data from the Puffer video streaming system, shows that it provides accurate counterfactual predictions, reducing prediction error by 44% and 53% on average compared to expert-designed and standard supervised learning baselines.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-learning and Enforcing Useful Conservation Laws in Sequential Prediction Problems</title>
<link href="https://hdl.handle.net/1721.1/143349" rel="alternate"/>
<author>
<name>Doblar, Dylan D.</name>
</author>
<id>https://hdl.handle.net/1721.1/143349</id>
<updated>2022-06-16T03:46:01Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Meta-learning and Enforcing Useful Conservation Laws in Sequential Prediction Problems
Doblar, Dylan D.
In recent years, deep learning techniques have enjoyed storied success in a wide array of problem domains, including computer vision, natural language processing, and robotics. While much of this success can be attributed to the increasing availability of both data and computing resources, the inductive biases induced by various training methods, model components, and architectures have helped enable efficient generalization as well. Useful biases often exploit symmetries in the prediction problem, such as convolutional neural networks relying on translation equivariance. Automatically discovering such useful symmetries is a promising path to greatly improving the performance of ML systems, but it remains a challenge. In this work, we focus on sequential prediction problems in real and simulated physical domains and take inspiration from Noether’s theorem to reduce the problem of finding inductive biases to that of meta-learning useful conserved quantities. We propose Noether Networks: a class of models where an unsupervised, meta-learned conservation loss is optimized inside the prediction function. This adapts the model weights to the particular input and imposes the approximate meta-learned conservation law in the predictions. We show, theoretically and experimentally, that Noether Networks improve prediction quality, providing a general framework for discovering inductive biases in sequential prediction problems.
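The idea of optimizing a conservation loss inside the prediction function can be sketched on a toy system. Below, a crude next-state predictor for a frictionless unit mass-spring system is tailored at test time by gradient steps that push its prediction toward conserving energy. Note the conserved quantity is hand-specified here, whereas Noether Networks meta-learn it; every name and number is illustrative:

```python
# Toy sketch of a Noether-Network-style inner loop: adjust the prediction
# so that a conserved quantity (here, hand-specified energy of a unit
# mass-spring system, g(x, v) = x^2 + v^2) is preserved. Noether Networks
# meta-learn g; this sketch fixes g to show only the inner adaptation.

def energy(x, v):
    return x * x + v * v

def crude_predictor(x, v, dt=0.1):
    # Explicit Euler step: known to inject energy (trajectories spiral out).
    return x + dt * v, v - dt * x

def conserve_inner_loop(x, v, steps=50, lr=0.05):
    px, pv = crude_predictor(x, v)
    target = energy(x, v)
    for _ in range(steps):
        # Gradient descent on (energy(px, pv) - target)^2 w.r.t. (px, pv).
        err = energy(px, pv) - target
        px -= lr * err * 2 * px
        pv -= lr * err * 2 * pv
    return px, pv

x0, v0 = 1.0, 0.0
raw = crude_predictor(x0, v0)
adapted = conserve_inner_loop(x0, v0)
raw_violation = abs(energy(*raw) - energy(x0, v0))
adapted_violation = abs(energy(*adapted) - energy(x0, v0))
```

The inner loop shrinks the conservation violation of the raw Euler prediction by orders of magnitude; in the actual method the analogous adaptation happens on model weights, with the conservation loss itself meta-learned.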
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Sports Videos to Showcase Exciting Content to Viewers</title>
<link href="https://hdl.handle.net/1721.1/143347" rel="alternate"/>
<author>
<name>Pailet, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/143347</id>
<updated>2022-06-16T03:34:59Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Using Sports Videos to Showcase Exciting Content to Viewers
Pailet, Gregory
In this thesis, we explore the task of generating highlight videos from sports games: we assess the excitement level of video segments to extract interesting moments from a game, and we use NLP techniques to generate captions for those segments. We create pipelines that extract highlight clips using an audio heuristic, obtain transcriptions for them, and, using a defined schema for exciting captions, fine-tune pre-trained transformer models to select the best sentence from each clip to use as its caption. Our results show improvements over baselines that rely solely on emotion-prediction categories of input sentences, suggesting our models learn additional features for determining the excitement of captions.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrepancy Values and their Applications</title>
<link href="https://hdl.handle.net/1721.1/143345" rel="alternate"/>
<author>
<name>Habibzadeh, Poorya</name>
</author>
<id>https://hdl.handle.net/1721.1/143345</id>
<updated>2022-06-16T03:10:52Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Discrepancy Values and their Applications
Habibzadeh, Poorya
In this thesis, we introduce the concept of discrepancy values for square matrices. We prove several properties of these objects, together with their relationship to other matrix quantities, such as singular values. After building a toolbox of results about discrepancy values, we introduce ways to compute them efficiently. We then employ these objects to obtain tight bounds on the norm of the commutator of two matrices and on several other quantities. We finish the thesis by putting forward some conjectures regarding discrepancy values and discussing their implications.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-parametric threshold for smoothed empirical Wasserstein distance</title>
<link href="https://hdl.handle.net/1721.1/143344" rel="alternate"/>
<author>
<name>Jia, Zeyu</name>
</author>
<id>https://hdl.handle.net/1721.1/143344</id>
<updated>2022-06-16T03:10:33Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Non-parametric threshold for smoothed empirical Wasserstein distance
Jia, Zeyu
Consider an empirical measure Pₙ induced by n iid samples from a d-dimensional K-subgaussian distribution P. We show that when K &lt; σ, the Wasserstein distance W₂²(Pₙ * N(0, σ²I_d), P * N(0, σ²I_d)) converges at the parametric rate O(1/n), and when K &gt; σ, there exists a K-subgaussian distribution P such that W₂²(Pₙ * N(0, σ²I_d), P * N(0, σ²I_d)) = ω(1/n). This resolves the open problems in [7] and closes the gap between the regime where the parametric rate holds and the regime where it does not. Our result provides a complete characterization of the range of parametric rates for subgaussian P.&#13;
&#13;
In addition, when σ &lt; K, we establish more delicate results about the convergence rate of the squared W₂ distance. Assuming the distribution is one-dimensional, we provide both lower and upper bounds, demonstrating that the rate changes gradually from Θ(1/√n) to Θ(1/n) as σ/K goes from 0 to 1. Moreover, we establish that D_KL(Pₙ * N(0, σ²I_d) ‖ P * N(0, σ²I_d)) = Õ(1/n). These results indicate a dichotomy between the convergence rates of the squared W₂ distance and the KL divergence, resulting in the failure of the T₂-transportation inequality when σ &lt; K, and hence also resolving the open problem in [17] about whether K &lt; σ is necessary for the log-Sobolev inequality to hold for P * N(0, σ²).
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of a Novel Wave Energy Converter With a Tension Leg Platform and Oscillating Proof Masses</title>
<link href="https://hdl.handle.net/1721.1/143342" rel="alternate"/>
<author>
<name>Zhang, Franklin</name>
</author>
<id>https://hdl.handle.net/1721.1/143342</id>
<updated>2022-06-16T04:00:55Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Design and Analysis of a Novel Wave Energy Converter With a Tension Leg Platform and Oscillating Proof Masses
Zhang, Franklin
A design of a novel wave energy converter with an oscillating proof mass and an electromagnetic power takeoff mechanism is considered. The wave energy converter has two parts: a tension leg platform connected by tether lines to the sea floor, and, inside it, proof-mass oscillators whose motions are coupled to those of the platform. To simplify the analysis, the system is constrained to oscillate only in surge. Complex hydrodynamic forces from ocean waves excite the system, and the surge motion of the proof mass relative to the tension leg platform generates power via the electromagnetic power takeoff mechanism. First, a model of the system with a linear restoring force exerted on the proof mass is analyzed using linear theory. Following the development of the linear theory, a more complex model with a nonlinear restoring force is considered. Using both a frequency-domain approach and a time-domain simulation, the average power of these systems is calculated. To further maximize power, a control circuit and control law are introduced that increase the average power severalfold. By introducing the nonlinear restoring force and the control law, the performance of the system is shown to be further improved.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fukushima Exclusion Zone Survival Handbook</title>
<link href="https://hdl.handle.net/1721.1/143337" rel="alternate"/>
<author>
<name>Zhao, Mengqiao</name>
</author>
<id>https://hdl.handle.net/1721.1/143337</id>
<updated>2022-06-16T03:42:41Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Fukushima Exclusion Zone Survival Handbook
Zhao, Mengqiao
Ten years have passed since the 2011 Fukushima nuclear accident in Japan, but its devastating impact has not subsided and may never end, given the long decay periods of the radioactive material. As one of the most serious nuclear accidents in human history, this is a disaster not only for Japan but for the whole world. Radioactivity has no boundaries, whether for countries or species: both humans and nonhumans are impacted.&#13;
&#13;
Challenging traditional problem-solving methods that focus on overcoming, solving, remediating, and isolating, this thesis proposes an unconventional method of coexisting with radiation in the form of a survival handbook. “The Survival Handbook of the Fukushima Exclusion Zone” is a work of fiction and an imaginative guide that describes how to coexist in a future radioactive world. It is designed for humans (residents and visitors returning to the site) and nonhumans (flora, fauna, and nuclear waste itself). Based on an investigation of both traditional and promising new materials, this book offers schemes for imagining the future in this radioactive world. Organized around future daily life in the Fukushima Exclusion Zone, this project investigates whether all forms of life can live in a radioactive world and how we might do so. At the scale of atoms, bodies, buildings, and landscapes, the timeframe of the design proposed in this thesis ranges from an almost negligible nuclear reaction time to the whole nuclear waste decay period, lasting more than 10,000 years.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen Powered Cars and Trucks: Is there a role for them in the electrified U.S. future?</title>
<link href="https://hdl.handle.net/1721.1/143335" rel="alternate"/>
<author>
<name>Kumar, Hemant</name>
</author>
<id>https://hdl.handle.net/1721.1/143335</id>
<updated>2022-06-16T03:25:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Hydrogen Powered Cars and Trucks: Is there a role for them in the electrified U.S. future?
Kumar, Hemant
Climate change is a systemic risk to the world’s economy. Significant and rapid cuts in carbon emissions are needed to limit global warming. Fuel Cell Electric Vehicles (FCEV) offer an attractive alternative for decarbonizing the transportation sector for both Light Duty and Heavy Duty categories. The cost of hydrogen fuel cell-related technologies is decreasing rapidly, and FCEVs may provide an alternative to electric vehicles in decarbonization.&#13;
&#13;
This thesis provides a fresh look at the economics of FCEVs and competing alternatives for decarbonizing transportation and at their long-term trends in the US. Based on recent data, total cost of ownership (TCO) models are developed for three types of drivetrain, Internal Combustion Engine Vehicles (ICEV), Battery Electric Vehicles (BEV), and FCEVs, for both the Light Duty Vehicle (LDV) and Heavy Duty Vehicle (HDV) categories. A hydrogen retail cost model is developed to provide a detailed understanding of the cost components. The fleet dynamics of Light Duty Vehicles (LDV), including ICEVs, BEVs, and FCEVs, are modeled using the MIT Economic Projection and Policy Analysis (EPPA) model to understand the characteristics of long-term trajectories for LDV fleet growth in the US.&#13;
&#13;
The TCOs for BEVs and FCEVs are higher than for ICEVs in the LDV sector in the absence of carbon abatement credits or other government support, and FCEVs are about 10% more expensive than BEVs on a cost-per-mile basis. However, there are cost reduction pathways that might make FCEVs competitive in the next 10 years and in scenarios of accelerated action. The percentage of FCEVs in the total vehicle stock in the US might grow to more than 14% by 2050; this growth is contingent upon the TCO reduction pathways. The TCOs of BEV and FCEV Class 8 trucks are 24% and 40% higher than those of ICEV trucks, respectively. The fuel cost for an FCEV is 2.4 times that of a BEV, and the retail price of an FCEV Class 8 truck is 1.5 times that of a BEV truck. A 40% reduction in hydrogen retail price or a 70% reduction in FCEV truck retail price would make FCEV trucks cheaper than BEV trucks. In all scenarios, substantial government support is needed in the form of R&amp;D, infrastructure development and financial incentives to realize the potential of hydrogen-based transportation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Continuous Measured Improvement: A New Approach to Meeting the Municipal Cybersecurity Challenge</title>
<link href="https://hdl.handle.net/1721.1/143334" rel="alternate"/>
<author>
<name>Baral, Avital</name>
</author>
<id>https://hdl.handle.net/1721.1/143334</id>
<updated>2022-06-16T03:51:33Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Continuous Measured Improvement: A New Approach to Meeting the Municipal Cybersecurity Challenge
Baral, Avital
This thesis examines the cybersecurity challenges facing municipal governments and proposes a new policy approach. Through a review of existing public-sector cybersecurity concerns and an interview-based case study of Massachusetts municipalities in partnership with the Massachusetts Cybersecurity Center, this thesis identifies the main problem as the lack of a proper incentive structure for municipalities to prioritize cybersecurity improvements. I propose a new approach to state and local government efforts to improve cybersecurity. I establish the goal of continuous, measured improvement in cybersecurity posture for municipalities, and propose a state-sponsored, eligibility-restricted insurance mechanism for municipalities to systematically lower their cyber risk to meet that goal. In exchange for commitments to implementing regularly updated cybersecurity best practices, municipalities would receive high-quality, affordable insurance against catastrophic cyber-related losses, and a commitment from the state to aggregate loss and resource-use data to provide best-in-class cybersecurity infrastructure help. I lay out a roadmap for the implementation of such a Massachusetts Cyber Disaster Insurance Program (MCDIP) along with proposals for data-driven refinement of state cybersecurity resource offerings through the use of the new MIT SCRAM platform. This public-sector cybersecurity goal and implementation strategy has implications far beyond Massachusetts and the potential to change the course of cybersecurity policymaking.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating and Interpreting a Cultural Landscape on Twitter to Understand People and Audiences</title>
<link href="https://hdl.handle.net/1721.1/143333" rel="alternate"/>
<author>
<name>Fulay, Suyash Pradeep</name>
</author>
<id>https://hdl.handle.net/1721.1/143333</id>
<updated>2022-06-16T03:34:15Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Creating and Interpreting a Cultural Landscape on Twitter to Understand People and Audiences
Fulay, Suyash Pradeep
Socio-cultural institutions play a large part in shaping our identities as well as signaling to others important parts of our identity. However, the space of such institutions, or "cultural touchstones", is large and the definition is broad; sports teams, places of worship, companies, and celebrities all could fall under the umbrella of socio-cultural touchstones. The New England Patriots, a Hindu temple, Nike, or John Lennon are just a few examples of these touchstones. Making sense of this space and understanding where people fall on it is essential to understand their preferences and identities.&#13;
&#13;
This work presents a few main contributions to this understanding. First, methods to create a "cultural landscape" based on influential Twitter users are presented, using the network structure as well as external data about each account. We find evidence that these influential users self-sort, and that audience members sort them, along dimensions of identity. Race, political orientation, and religion are the most salient dimensions for separation, while state geography, gender, and nationality are less important. We also compare audience sorting and self-sorting of these influential users, and find that audiences separate influential users relatively more on political orientation, religion, and gender, while the influential users self-sort more on race, occupation and LGBTQ status. This finding is a first step in quantifying the difference in how people view themselves versus how others may perceive them.&#13;
&#13;
Users and audiences are also placed in the cultural landscape to understand their most salient interests. Finally, audiences are broken down according to location in the cultural landscape to understand them at a more granular level. Both of these methods have significant potential commercial application as they provide granular insight into audience preferences in a common space.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation Toolkit for Adaptable Automatic Gaze Estimation</title>
<link href="https://hdl.handle.net/1721.1/143331" rel="alternate"/>
<author>
<name>Hart, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/143331</id>
<updated>2022-06-16T03:43:00Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Evaluation Toolkit for Adaptable Automatic Gaze Estimation
Hart, Peter
Cognitive development researchers have long been interested in understanding how infants learn to perceive and understand the world [11, 9, 7]. One technique for investigating infant cognition involves presenting stimuli and observing the direction and duration of their gaze [5]. Experiments of this type currently require human annotation, or special hardware like infrared eye tracking to annotate video data of the infants’ faces.&#13;
&#13;
The MIT Early Childhood Cognition Lab developed the project Lookit, which allows volunteers to participate in preferential looking studies from home [10]. In these studies, the stimuli are presented on a laptop screen and the infants’ reactions are recorded with a web camera. Although this platform removes some bottlenecks from the data collection process, data generated from Lookit still require human annotators to determine the infant’s gaze direction and duration. Other researchers, such as Virginia A. Marchman and her associates at the Stanford Language Learning Lab, have recorded videos with notable differences such as the position of the participants, video color, and video resolution.&#13;
&#13;
Recent developments in the field of computer vision have allowed for advancements in automatic gaze tracking from videos. Preliminary results suggest that the convolutional neural network (CNN) based gaze estimation model iCatcher+ can be trained to infer gaze direction with near-human accuracy [4, 2]. Cognitive development researchers care about several different metrics in addition to accuracy. I created a suite of tools for analyzing video data sets and evaluating the performance of gaze tracking models. These tools include key performance metric calculation and visualization and video data analysis. These tools can be used to aid the development of a general purpose gaze detection model that can be adapted to perform well over diverse video attributes.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Audience Tweet Engagement</title>
<link href="https://hdl.handle.net/1721.1/143330" rel="alternate"/>
<author>
<name>Wu, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/143330</id>
<updated>2022-06-16T04:00:57Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Predicting Audience Tweet Engagement
Wu, Julia
Social media has become the ubiquitous infrastructure through which the world is connected. It allows people to interact not only with family members and friends but also with prominent figures like movie stars, presidential candidates, and even royalty. These celebrities have immense presences on social media, and each post they share has the potential to reach millions of people. As the sphere of social media influence grows increasingly large, it also becomes increasingly important to be able to understand how influencers on social media affect their audiences. However, it is difficult for individuals with large social media platforms to gain insight into how their posts influence their followers. While social media platforms do provide influencers with some audience breakdowns and statistics, they are often not granular enough to be useful. In this thesis, we present methods to analyze an influencer’s tweets and audience. We then use these results to predict which segments of an influencer’s audience will interact with different types of posts. These insights can help determine the areas in which an influencer has the greatest potential to make an impact and thus guide the direction and content of influencer campaigns.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Control for Dynamic Efficiency Optimization in Switching Regulators</title>
<link href="https://hdl.handle.net/1721.1/143329" rel="alternate"/>
<author>
<name>Cheng, Lok Hin</name>
</author>
<id>https://hdl.handle.net/1721.1/143329</id>
<updated>2022-06-16T03:37:12Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Digital Control for Dynamic Efficiency Optimization in Switching Regulators
Cheng, Lok Hin
Dynamic efficiency optimization leverages opportunities to minimize power loss and adaptively optimize switching regulator efficiency as the operating conditions change. This thesis details the development of an optimization system that dynamically optimizes the efficiency of a targeted converter. The optimization system is designed to be compatible and easily integrated with an existing monolithic converter to serve as a proof of concept for dynamic efficiency optimization. The monolithic converter targeted in this thesis is a next generation Analog Devices monolithic buck converter with telemetry accessible via the PMBus interface. A loss model of the targeted converter is derived from its specific topology and implementation. The loss model and prior works are used to identify possible means or opportunities for the optimization system to dynamically trade off losses to minimize the converter's total loss and maximize its efficiency as its operating conditions change. The tradeoff opportunities, the prior works and the desire for compatibility with an already existing converter guide the development of the optimization system in this thesis. The proposed dynamic efficiency optimization system includes the modification of existing circuitry as well as the introduction of additional circuitry in the converter to enable the loss tradeoff, a digital algorithm to optimize the loss tradeoff and a method of estimating converter efficiency, which is used to inform the algorithm's decision making. The potentially adverse thermal and control implications of the optimization on the targeted converter are also addressed with circuit modifications verified on the block level with transistor level simulations. The functionality of the entire system is verified with digital simulation that employs real number modeling (RNM) of the analog blocks, which enables rapid top-level or functional simulation of mixed signal designs with both analog and digital blocks.
The optimization system's efficacy will be experimentally determined by assembling the targeted monolithic converter, modified for dynamic efficiency optimization, with an external FPGA that implements the optimization algorithm on an evaluation board, and measuring the dynamically optimized efficiency of the targeted converter. The process described in this thesis to implement dynamic efficiency optimization can be generally applied to other targeted converters.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transmit Precoder Design for Dual-Function Radar-Communication Systems</title>
<link href="https://hdl.handle.net/1721.1/143326" rel="alternate"/>
<author>
<name>Pritzker, Jacob W.</name>
</author>
<id>https://hdl.handle.net/1721.1/143326</id>
<updated>2022-06-16T03:55:21Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Transmit Precoder Design for Dual-Function Radar-Communication Systems
Pritzker, Jacob W.
As radio-frequency (RF) antenna, component and processing capabilities grow, the ability to conduct multiple RF system functions from a common aperture is being realized. Conducting both radar and communications functions from the same system is potentially useful for vehicular, health monitoring, and surveillance applications. This paper considers multiple-input-multiple-output (MIMO) dual-function radar-communication (DFRC) systems in which the radar and communication modes use distinct baseband waveforms. A transmit precoder provides spatial multiplexing and power allocation among the radar and communication modes. Multiple optimization approaches for precoder design are developed based upon combinations of radar detection and communication receiver performance metrics. The methods guarantee a level of radar surveillance performance while maximizing communication system performance, or vice-versa. The methods are shown via simulation to enable high performance in both modes with significant design flexibility, yielding improved detection performance and better approximation of desired ambiguity functions while satisfying communications objectives. Insights into precoder operation based upon system goals are also provided.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combining Task Parallelism and Multithreaded Concurrency</title>
<link href="https://hdl.handle.net/1721.1/143325" rel="alternate"/>
<author>
<name>Pusapaty, Sai Sameer</name>
</author>
<id>https://hdl.handle.net/1721.1/143325</id>
<updated>2022-06-16T03:56:57Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Combining Task Parallelism and Multithreaded Concurrency
Pusapaty, Sai Sameer
In this thesis, I present Multicilk, a threads library based on C11 threads and the OpenCilk runtime for enabling task parallelism within multiple concurrent threads. With Multicilk, a programmer can parallelize threads in a multithreaded application simply by using Cilk independently within each thread. Without Multicilk, doing so violates the semantics that we expect in concurrent thread programming, leading to catastrophic failure of the application. No other existing combination of task-parallel system (including OpenMP, TBB, TPL, and the various other implementations of Cilk) and threads library (including C11, C++11, Pthreads, and WinAPI threads) can parallelize multithreaded applications transparently and modularly.&#13;
&#13;
The key insight behind Multicilk recognizes that integrating task-parallel systems with multithreaded concurrency requires two layers of thread abstraction that are conflated in previous systems. Service threads implement the workers in the Cilk runtime system. But the Cilk computation itself, called a cilk, though implemented by many service threads, itself provides the abstraction of a single application thread to other threads within the multithreaded application, regardless of whether they are cilks. Multicilk employs a technique called impersonation that allows individual workers to act on behalf of the entire cilk, providing the same interface to the outside world as it would if it were an ordinary thread. This powerful “two-layer-cake” abstraction enables ordinary multithreaded applications to be ported to a Cilk environment and parallelized in a straightforward and modular fashion.&#13;
&#13;
My Multicilk implementation for OpenCilk provides two thread libraries corresponding to the layers in the two-layer cake, as well as some modifications to the OpenCilk runtime system. The service-thread library is the standard glibc. The application-thread library was created by modifying eight glibc functions using 63 lines of source code and writing wrappers for four glibc functions using 115 lines of code, amounting to about half of the 25 functions in the C11 sublibrary of glibc. I wrote 180 lines of utility functions for threading and synchronization. The changes to OpenCilk amounted to 62 lines, also for threading and synchronization. In sum, I added or modified just over 400 lines of source code to implement Multicilk.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Latent Debiasing of Time-Series Models</title>
<link href="https://hdl.handle.net/1721.1/143324" rel="alternate"/>
<author>
<name>Phillips, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/143324</id>
<updated>2022-06-16T03:37:53Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Unsupervised Latent Debiasing of Time-Series Models
Phillips, Jacob
Traditional training regimens for time-series models have been shown to encode the biases of their training corpora into the models themselves. We aim to train unbiased time-series models using existing biased datasets. However, most debiasing techniques rely on explicit labels that encapsulate the bias, such as pairs of words along a concerning axis of bias, like race or gender, for language models. We propose an unsupervised latent debiasing training regimen based on [2] that simultaneously learns the latent distribution of the dataset and a separate language task; datapoints are selected for training batches with sampling weights inverse to their commonality as determined by their placement in the latent space. We adapt [2] to time-series datasets and show algorithmic improvements to bias identification and bias reduction for models trained on toy and real datasets.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Medium Resolution</title>
<link href="https://hdl.handle.net/1721.1/143322" rel="alternate"/>
<author>
<name>Sunshine, Gil</name>
</author>
<id>https://hdl.handle.net/1721.1/143322</id>
<updated>2022-06-16T03:19:01Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Medium Resolution
Sunshine, Gil
Extrude, saw, turn, repeat. &#13;
&#13;
During the industrial revolution, machines and techniques were invented and advanced for forming materials into standardized shapes. This led to the development of the industrially mass-produced standardized building materials used today and transformed architecture into a practice of blind trust in superficially dimensioned materials specified from afar. &#13;
&#13;
Extrude, trim, revolve, array. &#13;
&#13;
The 3D modeling software used by the architect contains analogs to the machine processes and abundances of industrial production. However, as we increasingly face the effects of the excesses of the Anthropocene and related disruptions to the building material supply chain, architecture must overcome the cognitive grasp of standardization to accommodate the found, the unwanted, the offcut and the wasted. This produces a new relevance for an architecture of underprocessed and irregular materials.&#13;
&#13;
In order to adapt to material irregularities architects have adopted various 3D scanning techniques to produce digital representations of materials. By the nature of their discrete sampling, however, these representations vary in their precision. What the architect encounters in the 3D modeling software is not the material itself in its infinite specificities, with its weight, moisture content and smell, but rather a surface representation composed of a large but finite set of points. This surface might be called medium resolution. This thesis operates within the medium resolution surface condition, accepting it as a geometric paradigm necessary to respond to emerging material realities.&#13;
&#13;
If there exists an entanglement between 3D modeling software used by the architect and processes of industrial mass production, then in order to realize medium resolution architecture, the 3D modeling software itself must be reconsidered. Inventory, a physics-based 3D modeling software, replaces analogs to the generic surface precision of the standardized material palette ubiquitous in CAD software today with the specificity of pieces of material and precision of actions made possible by the medium resolution paradigm.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Reducer Hyperobjects</title>
<link href="https://hdl.handle.net/1721.1/143321" rel="alternate"/>
<author>
<name>Kilgore, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/143321</id>
<updated>2022-06-16T03:42:21Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Fast Reducer Hyperobjects
Kilgore, Matthew
This thesis investigates the performance of reducer hyperobjects, a feature of the Cilk task-parallel runtime system that enables concurrent associative updates to nonlocal variables.  Reducers are more performant than more traditional methods of enabling concurrent updates, such as locking and atomic updates.  Unfortunately, existing reducer implementations can suffer a cost of up to 10 times that of a serial update, depending on the benchmark.  &#13;
&#13;
This overhead incurred by reducers can be decreased by three approaches: runtime data structures, compiler-runtime integration, and compiler optimization.  When these approaches are used to performance-engineer the OpenCilk runtime system's reducers, the overall performance of a benchmark suite designed to stress-test reducers sees a geometric average improvement of 25.74% and a maximum improvement of 88.32%.&#13;
&#13;
This thesis also investigates "commutative" reducers, a proposed reducer optimization premised on restricted semantics, but finds that they yield a performance degradation while being linguistically unwieldy.  The examination of commutative reducers leads to an empirical investigation of the scalability of traditional reducers, finding a quadratic upper bound on their performance to be loose.&#13;
&#13;
Parts of this thesis represent joint work with Charles E. Leiserson, Tim Kaler, William S. Moses, Qi Qi, and Tao B. Schardl of MIT and I-Ting Angelina Lee of Washington University in St. Louis.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-collective Revolution: A Reclamation of Black Self-Subsistent Economic Tradition</title>
<link href="https://hdl.handle.net/1721.1/143315" rel="alternate"/>
<author>
<name>Josiah-Faeduwor, Aiyah</name>
</author>
<id>https://hdl.handle.net/1721.1/143315</id>
<updated>2022-06-16T03:36:45Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Re-collective Revolution: A Reclamation of Black Self-Subsistent Economic Tradition
Josiah-Faeduwor, Aiyah
The envisioning and deliverance of a collectively liberated future for all marginalized peoples is rooted in a rectified understanding of the redacted history of marginalized peoples. This paper uncovers a history of collective, collaborative, and communal economic traditions of pre-colonial societies in West Africa, with an anti-revisionist lens, in active repudiation of a capitalist, imperialist, and westernized erasure of Black self-subsistent, or self-reliant, economic tradition.&#13;
&#13;
Through the excavation and utilization of anthropological, archaeological, and historical research as well as case studies, this paper highlights the Yorubaland (pre-colonial Nigeria) Guild System of trade and labor organization, the Ghanaian Nnoboa cooperative farming system, and the Rotating Credit and Savings Associations (roscas) of pre-colonial Nigeria. The paper proceeds with an examination of the Boston Ujima Project, the Boston Food Solidarity Economy, and the Center for Cooperative Development and Solidarity, Boston-based organizations and movements today that are actively intentioned on utilizing tools and approaches akin to, and/or in intentional alignment with, pre-colonial Black and indigenous collective principles and practice. Inspired and instructed by afrofuturism, emergent strategy and pleasure activism, this paper crescendos by engaging with these Black feminist-provided frameworks through a communally-minded, collectively self-reflexive dialogue addressing the plausibility of, reticence towards, and uncertainty accompanying this particular path towards collective liberation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Rural Ground to Rural Grocery: Designing a local food value chain</title>
<link href="https://hdl.handle.net/1721.1/143314" rel="alternate"/>
<author>
<name>Lee, Allison H.</name>
</author>
<id>https://hdl.handle.net/1721.1/143314</id>
<updated>2022-06-16T04:00:03Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">From Rural Ground to Rural Grocery: Designing a local food value chain
Lee, Allison H.
Present-day food systems in the U.S. are fraught with challenges that have spillover effects including economic hardship in agricultural communities, inequitable access to nutritious foods, asymmetrical distribution of subsidies, and harsh environmental strains. Further contributing to a problematic system is the growing division between urban and rural settings, with the former receiving the majority of attention, planning, resources, and capital investment.&#13;
&#13;
This thesis highlights the need to rethink the relationship between food and spatial planning. In response to more prevalent urban-focused queries that ask, “can food be produced where it is consumed,” the author of this work asks, “can food be consumed where it is produced?” to acknowledge issues around food access, nutritional health, and living wages of farmers and food producers.&#13;
&#13;
Through a proposed design-planning approach that integrates lived experience and data analysis, the author offers methodological strategies for food system planning in a rural context. She discusses the role of design at multiple scales, and its importance in participatory food system planning. Lastly, a case study of a Food Hub project in North Central Massachusetts is used to enact the design-planning approach and propose schematic designs.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Scaling Rule for Hot Gas Ingestion in Representative Turbine Rim Seal System for Large Industrial Gas Turbines</title>
<link href="https://hdl.handle.net/1721.1/143313" rel="alternate"/>
<author>
<name>Hubschman, Thomas Guy</name>
</author>
<id>https://hdl.handle.net/1721.1/143313</id>
<updated>2022-06-16T03:29:53Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Assessment of Scaling Rule for Hot Gas Ingestion in Representative Turbine Rim Seal System for Large Industrial Gas Turbines
Hubschman, Thomas Guy
An assessment has been carried out on the applicability of scaling the hot gas ingestion depth in terms of a non-dimensional sealing parameter for a rim seal cavity wheelspace system representative of that in an industrial gas turbine for power generation. The non-dimensional sealing parameter consists of the ratio of purge mass flow rate to a characteristic mass flow rate based on the rim seal geometry and rim speed, rotational Reynolds number, purge flow Reynolds number, rim-seal Reynolds number, and Rossby number. The geometry and operating conditions of the rim seal cavity wheelspace system were varied to yield a range of variations in the non-dimensional sealing parameter and the corresponding hot gas ingestion depth for each. The results have been obtained through a set of three-dimensional computations of flow in the first stage nozzle-rotor, between which is a rim seal cavity wheelspace. Post-processing of the computed results demonstrated the scaling of hot gas ingestion in terms of the non-dimensional sealing parameter. Specifically, the scaling provides three distinct regions of variation in the ingestion depth. With increasing non-dimensional sealing parameter there is an almost vertical drop in ingestion extent followed by a short transition to a region of marginal changes in ingestion. This provides a guideline for selecting rim seal cavity wheelspace system operation and design. Targeting the short transition region, with required operating margin, results in minimal purge flow with minimal risks from hot gas ingestion. An experimentally tested rim seal cavity wheelspace configuration, which had been arrived at iteratively for its optimality, was shown to lie in this short transition region. It was also demonstrated that there is a threshold value of the ratio of chordal leakage flow to purge flow that demarcates estimated hot gas ingestion. 
Above the threshold value, rim seal cavity operation is limited to the transition region or the region with a near-vertical increase in ingestion depth. Below the threshold value, its operation cannot be defined in the operating space of hot gas ingestion depth versus sealing parameter. Therefore, while the threshold ratio is useful, the non-dimensional sealing parameter is still needed to demonstrate the full optimality of the rim seal cavity operation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved Management Practice for Freight Savings</title>
<link href="https://hdl.handle.net/1721.1/143312" rel="alternate"/>
<author>
<name>Zhao, Jiayue</name>
</author>
<id>https://hdl.handle.net/1721.1/143312</id>
<updated>2022-06-16T03:26:21Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Improved Management Practice for Freight Savings
Zhao, Jiayue
Waters Corporation makes hundreds of shipments daily. Optimizing distribution and reducing shipping expenditures can result in significant cost savings. This thesis deals with two main freight problems at Waters: (1) customers who should pay for shipping but are not charged, and (2) product shortages and packaging damage that delay the shipping schedule. According to the customer service department, Waters does not charge for shipping on approximately 40% of orders in the US, and a portion of this is done by mistake. After initial analysis, it was found that the mistakes are due to (1) misalignments between Waters’ internal databases and (2) delayed shipping schedules that can result in unnecessary but expensive expedited shipments. First, Waters uses three different databases for contract management (Lotus Notes), sales quotes (Salesforce), and shipments/billing (SAP). Customer master data is not completely synchronized among the three platforms. Correcting misalignments among these databases would help Waters collect more freight charges from customers who should pay for shipping. Second, Waters pays for very expensive expedited shipping due to time constraints, stock-outs, damaged inbound products, and human mistakes. We suggest strategies to reduce these problems and thus reduce the use of expedited shipping. Finally, this thesis concludes with a cost-saving analysis that focuses on misalignments between Lotus Notes and SAP for European customers and on unnecessary expedited shipments from the Global Distribution Center located in Franklin, MA.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpretable Machine Learning Methods for Landslide Analysis</title>
<link href="https://hdl.handle.net/1721.1/143311" rel="alternate"/>
<author>
<name>Gupta, Deepankar</name>
</author>
<id>https://hdl.handle.net/1721.1/143311</id>
<updated>2022-06-16T03:10:48Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Interpretable Machine Learning Methods for Landslide Analysis
Gupta, Deepankar
Landslides have the power to alter terrain, reshape ecosystems, damage anthropogenic structures, and destroy entire communities. On March 31, 2017, Mocoa fell victim to a devastating landslide that claimed 254 lives and left hundreds more missing. Given the region’s high frequency of landslides, conducting a formal investigation of its landslides and the factors that make the region susceptible to them is urgent. In this project, we use machine learning to computationally study landslide detection, the likelihood of past landslide occurrence, and landslide susceptibility, or risk, in the Mocoa region. The region’s geographical and climate features make it elusive to remote-sensing technologies and difficult to survey. We combat the resulting data sparsity by carefully designing learning tasks and data pipelines for detection and susceptibility. We extract 20 features, each with a scientific or computational basis. We then provide a comprehensive evaluation of four different machine learning models: logistic regression (LR), decision tree (DT), random forest (RF), and convolutional neural network (CNN) on both the landslide detection and the landslide susceptibility tasks. All four models displayed performance that was significantly better than random on both tasks. CNN models achieved the highest classification accuracy in the area of interest (AOI) for both tasks, earning 87.3% for detection and 92.5% for susceptibility. We also probed all four types of models using multiple techniques to determine the features important to their decision making. Across the landslide detection models, slope, aspect, and the presence of claystones were the most consistent important features for inferring past landslide likelihood. For landslide susceptibility, we found slope, aspect, distance from fault lines, and the presence of claystones to be the most consistent important features for inferring landslide risk. 
The reliance of the models on slope and aspect is not surprising, since landslides involve mass movement from high to low points of elevation. More surprisingly, the presence of claystones also appears to be important to both landslide tasks. The connection between claystones and Mocoan landslides merits further investigation. The importance of distance from fault lines to susceptibility models suggests that seismic activity at fault lines is a key trigger for Mocoan landslides.
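As a rough illustration of this style of model comparison and feature probing, the sketch below trains three of the four model families on synthetic data standing in for the 20 Mocoa features (the CNN is omitted, and all names and data here are illustrative, not the thesis's pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a 20-feature landslide/no-landslide dataset.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Fit each model and record held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}

# Impurity-based importances from the random forest, analogous to asking
# which predictors (slope, aspect, claystones, ...) drive the decisions.
importances = models["RF"].feature_importances_
```

In practice permutation importance or model-agnostic probes, as the thesis uses multiple techniques, give a more robust picture than impurity-based importances alone.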
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Parametric Analyses of the Regulatory Roles of LINE-1 Retrotransposons during Motor Neuron Differentiation</title>
<link href="https://hdl.handle.net/1721.1/143310" rel="alternate"/>
<author>
<name>Park, Hyunjin</name>
</author>
<id>https://hdl.handle.net/1721.1/143310</id>
<updated>2022-06-16T03:40:47Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Non-Parametric Analyses of the Regulatory Roles of LINE-1 Retrotransposons during Motor Neuron Differentiation
Park, Hyunjin
Background: Repetitive elements make up a large portion of eukaryotic genomes, constituting two-thirds of the human genome. Although their functional importance was recognized as early as the 1950s, the study of their functions has stagnated, despite advancements in next-generation sequencing, owing to the difficulty of establishing the identities of the specific elements involved in a biological process of interest, e.g., transcription factor (TF) binding, from functional data modalities such as ChIP-seq and ChIA-PET.&#13;
&#13;
Results: First, I present a non-parametric, k-mer based method that overcomes the analysis ambiguities introduced by short-read multimapping and the incompleteness of reference genomes in low-complexity regions. I use this method to derive inferential evidence for the cell type-specific binding of transcription factors to specific L1 subfamilies from ChIP-seq datasets. Second, I apply a method named Mates of Chimera (MoC) to identify L1-derived extrachromosomal circular DNAs (eccDNAs) from Circulome-seq datasets. I characterize differential eccDNA compositions in ESC and MN cell types and find differential enrichment of transcription factor binding motifs in cell type-specific eccDNAs. Third, I present inferential evidence consistent with the hypothesis that some low-complexity regions may participate in chromatin interactions with cis-regulatory sequences in a cell type-specific manner, analogous to enhancer-promoter interactions.&#13;
&#13;
Conclusion: The thesis elucidates a set of functional hypotheses concerning putative regulatory roles of repetitive elements, L1 elements in particular, that may be extrachromosomal. I base my hypotheses on a wide range of available data modalities including whole-genome sequencing, ChIP-seq, Circulome-seq, and ChIA-PET through non-parametric, k-mer based methods that do not rely on exact read alignment coordinates.
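The alignment-free, k-mer based strategy can be illustrated with a minimal sketch: count k-mers directly from reads without mapping them to coordinates, then compare a k-mer's frequency between a foreground and a background set. The function names and the simple pseudocount ratio are illustrative, not the thesis's actual statistics:

```python
from collections import Counter

def kmer_counts(seqs, k=8):
    """Count all k-mers across a set of sequences, with no alignment step,
    so multimapping reads and unassembled regions pose no ambiguity."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

def enrichment(fg, bg, kmer, pseudo=1.0):
    """Frequency ratio of a k-mer in foreground vs background counts,
    with a pseudocount to avoid division by zero."""
    fg_total = sum(fg.values()) or 1
    bg_total = sum(bg.values()) or 1
    return ((fg[kmer] + pseudo) / fg_total) / ((bg[kmer] + pseudo) / bg_total)
```

A k-mer strongly enriched in, say, ChIP-seq reads relative to input would then be traced back to the repeat subfamilies whose consensus sequences contain it.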
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grounded SCAN Human: A Benchmark for Zero-Shot Generalizations</title>
<link href="https://hdl.handle.net/1721.1/143309" rel="alternate"/>
<author>
<name>Sleeper, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/143309</id>
<updated>2022-06-16T03:23:45Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Grounded SCAN Human: A Benchmark for Zero-Shot Generalizations
Sleeper, Dylan
In this work, we collect a new human annotated dataset called Grounded SCAN Human (gSCAN Human) as an extension of the original Grounded SCAN (gSCAN) dataset. The original gSCAN dataset was created to test various compositional generalizations by holding out certain examples during train time. During test time, models must zero-shot execute commands that require the agent to move in new directions, commands that contain novel combinations of objects and adjectives, and other such generalizations in different test sets called splits. However, gSCAN does not contain splits that test zero-shot generalizations to new sentence structures and a whole new vocabulary. The gSCAN Human dataset was created to test these generalizations: can a model trained using a simple grammar generalize to human annotations? We collected and verified a total of 1,391 human annotations across all of the gSCAN splits (excluding the test and dev splits) and evaluated various models on each of the splits. We test the original gSCAN baseline with several modifications, including the baseline with a transformer replacing the encoder, and one with early multimodal fusion of the sentence encoding with the visual embedding. We also test a multimodal transformer similar to ViLBERT, which is the state of the art on the original gSCAN splits. We find that the models are somewhat robust to varying sentence structure and new vocabulary; however, the models are far less successful given a combination of the two, as evaluated by the human data.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Precision, Very Low 1/f Noise, Low Power, Rail-Rail I/O, Integrated Bi-CMOS Operational Amplifier</title>
<link href="https://hdl.handle.net/1721.1/143308" rel="alternate"/>
<author>
<name>Chavez, Rhian Austin</name>
</author>
<id>https://hdl.handle.net/1721.1/143308</id>
<updated>2022-06-16T03:00:52Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Design of a Precision, Very Low 1/f Noise, Low Power, Rail-Rail I/O, Integrated Bi-CMOS Operational Amplifier
Chavez, Rhian Austin
The detailed design of a precision, very low 1/&#119891; noise, 100&#120583;A, 30V, rail-to-rail input and output, integrated Bi-CMOS operational amplifier is presented. The necessity for such an amplifier in the current technological space is examined. Specific attention is given to the novel design of a stable current source requiring no more than approximately 50mV of overhead, for use with a very low noise native NMOS differential input pair. An improved technique for analyzing MOSFET 1/&#119891; noise in modern simulation environments is explored. Special consideration is given to the usage of native NMOS devices as the primary input pair, which are required in order to meet the low noise and zero input bias current requirements simultaneously. Detailed descriptions of key amplifier stages are given: rail-to-rail input, folded cascode, Monticelli rail-to-rail output, and Miller compensation. Finally, amplifier transient, spectral, and noise results are presented and discussed.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Linguistic Exposure Modulates the Acceptability of Long-Distance Dependencies</title>
<link href="https://hdl.handle.net/1721.1/143305" rel="alternate"/>
<author>
<name>Bertics, Abigail C.</name>
</author>
<id>https://hdl.handle.net/1721.1/143305</id>
<updated>2022-06-16T03:01:54Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">How Linguistic Exposure Modulates the Acceptability of Long-Distance Dependencies
Bertics, Abigail C.
A central and still contested question in linguistics is “What makes a sentence good?” This thesis looks into one possible answer: the more you hear it, the better it sounds. More specifically, we investigate what influences the acceptability of a certain type of long-distance, so-called ‘filler-gap’ dependency: object-extracted wh-question islands. We take a two-pronged approach. First, we look into how long-term, lifetime exposure to various components of a sentence (estimated from corpora) impacts its acceptability. We find support for the verb-frame frequency hypothesis (VFF; Liu et al. 2021a) and find that the frequency of the matrix-verb frame and the construction type in particular have statistically significant effects on acceptability ratings. It remains an open question how the individual components of a sentence combine to result in a single sentence acceptability judgement. The second prong takes partial inspiration from the mere-exposure effect in psychology (Zajonc, 1968). We investigate how short-term, within-experiment exposure to matrix-verb frame and construction type (the two components that influence overall sentence acceptability) affects the acceptability rating. Although no experimentally robust effect of short-term exposure, or priming, was found, we found a small but statistically significant effect of order: acceptability ratings increase over the course of the experiment. Amazon Mechanical Turk (mTurk), used judiciously and with proper exclusion protocols, is a robust and powerful tool that is not worse than in-person experiments (Crump et al., 2013; Thomas and Clifford, 2017). 
Whether this small effect is happenstance or holds more generally has yet to be determined, but it should serve as a cautionary tale about running experiments where the incentive of the participants (e.g., to finish the task as quickly as possible to still get paid) and what the researcher is trying to study (e.g., how the human mind processes language) do not necessarily align.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speed Through Flexibility: Shortening the Acquisition Timeline of U.S. Defense Capabilities Using Flexible Systems</title>
<link href="https://hdl.handle.net/1721.1/143304" rel="alternate"/>
<author>
<name>Tuinstra, Jared D.</name>
</author>
<id>https://hdl.handle.net/1721.1/143304</id>
<updated>2022-06-16T03:36:04Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Speed Through Flexibility: Shortening the Acquisition Timeline of U.S. Defense Capabilities Using Flexible Systems
Tuinstra, Jared D.
Military procurement projects face a fundamental dilemma: systems designed and built today must serve the needs of the future, but future needs cannot be reliably forecasted. This reality drives the U.S. defense acquisition system, which creates and maintains defense capabilities in a constantly changing world. There is consensus that in today’s dynamic environment the cumbersome acquisition process operates too slowly to field needed capabilities on a relevant timeline. Despite many proposals to speed the process through reform, significant changes have remained elusive due to the complex stakeholder landscape. Therefore, acquisition techniques that can increase the speed of fielding required capabilities within the acquisition process that exists today are especially valuable.&#13;
&#13;
This thesis advocates for faster capability delivery by deliberately designing systems to be flexible so that new capabilities can be created through system modification rather than new system creation. Although evidence suggests that flexible systems often provide more value than inflexible ones due to their ability to adapt to unforeseen conditions, flexibility remains an uncommon attribute of defense systems due to the lack of established procedure within the acquisition process on how to create flexible systems.&#13;
&#13;
A case study of the U.S. Air Force Ground Based Strategic Deterrent (GBSD) weapon system is used to demonstrate an approach for creating flexible defense systems. Interviews with key personnel and program documentation are used to explain how the GBSD program worked within the constraints of the DoD acquisition system to implement a holistic flexibility strategy based on open system architecture, design margin in key areas, and architectural diversity, thereby creating the conditions for system flexibility. Flexibility in the GBSD program has contributed to schedule certainty during development and will provide a way to respond more quickly to new threats and take advantage of new technology when the system becomes operational. This thesis finds that it is possible to acquire flexible defense systems today and provides recommendations to foster the acquisition of flexible defense systems.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning for the KamLAND-Zen Search for 0&#120584;&#120573;&#120573;</title>
<link href="https://hdl.handle.net/1721.1/143302" rel="alternate"/>
<author>
<name>Fraker, Suzannah</name>
</author>
<id>https://hdl.handle.net/1721.1/143302</id>
<updated>2022-06-16T03:10:12Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Deep Learning for the KamLAND-Zen Search for 0&#120584;&#120573;&#120573;
Fraker, Suzannah
Neutrinoless double beta decay (0&#120584;&#120573;&#120573;) is a major interest in neutrino physics. Discovery of 0&#120584;&#120573;&#120573; would demonstrate that neutrinos are Majorana fermions and that lepton number is not a symmetry of nature, thus providing a possible explanation for the observed matter-antimatter asymmetry of the universe. KamLAND-Zen is a leading search for 0&#120584;&#120573;&#120573;, having placed the most stringent limit on its half-life at [formula] at 90% C.L. in ¹³⁶Xe. The next phase of KamLAND-Zen is currently running and will place even more stringent limits on the half-life. The sensitivity of KamLAND-Zen is primarily limited by backgrounds, including the muon spallation background ¹⁰C. We present a machine learning algorithm based on a convolutional neural network (CNN) that is able to separate ¹⁰C events from ¹³⁶Xe events in Monte Carlo simulated data. With a typical kiloton-scale detector configuration like the KamLAND-Zen detector, we find that the algorithm is capable of identifying 61.6% of the ¹⁰C at 90% signal acceptance. The algorithm is independent of vertex and energy reconstruction, so it is complementary to current methods and can be expanded to other background sources.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Open-Source Computational Framework for the Scalable Application of Electrification Planning Models</title>
<link href="https://hdl.handle.net/1721.1/143298" rel="alternate"/>
<author>
<name>Aoudi, Lama Sara</name>
</author>
<id>https://hdl.handle.net/1721.1/143298</id>
<updated>2022-06-16T03:08:56Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">An Open-Source Computational Framework for the Scalable Application of Electrification Planning Models
Aoudi, Lama Sara
Global efforts to achieve affordable, universal electrification are inextricably linked to the uptake and application of spatially explicit computational electrification planning tools. These tools automate the delineation of potential modes of electrification (grid extension, mini-grids, and stand-alone solar systems) across all customers, at the lowest cost. My thesis aims to expedite universal electrification by designing a scalable and modular framework for leveraging open-source information to produce granular estimates of the data needed by these planning tools. The design of the framework is configured to the input requirements of the most data-demanding tool in the field, the Reference Electrification Model (REM). The REM inputs modelled in my framework are the following: 1) the geolocation and demand characteristics of residential customers, 2) the geolocation and demand characteristics of social and productive customers, 3) the electrification status of each customer, and 4) the layout of the medium-voltage distribution network.&#13;
&#13;
The framework prescribes a method, or Python model, to process and analyze existing open data into reasonable estimates of each REM input. The source code of the Python-based framework is available through GitHub, with accompanying documentation on how to run each module for any region. Built-area population datasets are used to estimate the exact geolocation of all residential and smaller-load customers (e.g., village community centers). The geolocations of non-residential loads, or community and productive customers (e.g., markets or large industrial plants), are extracted from the Google Maps API. To do so, the model iterates through an area of interest and searches the API for the locations and operational status of each customer type. Non-residential loads are ascribed demand patterns from the supporting literature on archetypal behaviors of larger loads. The Falchetta et al. (2019) dataset is used to classify each customer’s electrification status and assign them a “tier” or level of electricity consumption. Finally, I propose leveraging the grid-design capabilities of REM to estimate a region’s existing medium-voltage distribution network when it is unavailable through other means. To apply REM for the task, I initialize its model parameters to force an ‘all-grid-extension’ output and supply it with available a priori information on the existing medium-voltage network. Overall, my thesis lays the foundation for a complete transition of the energy access sector to granular, open-source modelling.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ParChain: A Framework for Parallel Hierarchical Agglomerative Clustering using Nearest-Neighbor Chain</title>
<link href="https://hdl.handle.net/1721.1/143295" rel="alternate"/>
<author>
<name>Yu, Shangdi</name>
</author>
<id>https://hdl.handle.net/1721.1/143295</id>
<updated>2022-06-16T03:03:31Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">ParChain: A Framework for Parallel Hierarchical Agglomerative Clustering using Nearest-Neighbor Chain
Yu, Shangdi
This thesis studies the hierarchical clustering problem, where the goal is to produce a dendrogram that represents clusters at varying scales of a data set. We propose the ParChain framework for designing parallel hierarchical agglomerative clustering (HAC) algorithms, and using the framework we obtain novel parallel algorithms for the complete linkage, average linkage, and Ward’s linkage criteria. Compared to most previous parallel HAC algorithms, which require quadratic memory, our new algorithms require only linear memory, and are scalable to large data sets. ParChain is based on our parallelization of the nearest-neighbor chain algorithm, and enables multiple clusters to be merged on every round. We introduce two key optimizations that are critical for efficiency: a range query optimization that reduces the number of distance computations required when finding nearest neighbors of clusters, and a caching optimization that stores a subset of previously computed distances, which are likely to be reused. &#13;
&#13;
Experimentally, we show that our highly-optimized implementations using 48 cores with two-way hyper-threading achieve 5.8–110.1x speedup over state-of-the-art parallel HAC algorithms and achieve 13.75–54.23x self-relative speedup. Compared to state-of-the-art algorithms, our algorithms require up to 237.3x less space. Our algorithms are able to scale to data set sizes with tens of millions of points, which existing algorithms are not able to handle.
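The nearest-neighbor chain idea at the heart of ParChain can be illustrated with a minimal sequential sketch. This is the textbook NN-chain procedure with naive complete-linkage distance recomputation, not the parallel, range-query-optimized, linear-memory algorithm of the thesis:

```python
import numpy as np

def nn_chain_hac(points):
    """Sequential NN-chain agglomerative clustering with complete linkage.
    Returns a list of merges as (cluster_a, cluster_b, linkage_distance)."""
    clusters = {i: [i] for i in range(len(points))}  # id -> member point indices
    merges = []
    next_id = len(points)

    def dist(a, b):
        # Complete linkage: maximum pairwise distance between the clusters.
        pa, pb = points[clusters[a]], points[clusters[b]]
        return max(np.linalg.norm(x - y) for x in pa for y in pb)

    chain = []
    while len(clusters) > 1:
        if not chain:
            chain.append(next(iter(clusters)))  # start a new chain anywhere
        top = chain[-1]
        # Nearest active cluster to the top of the chain.
        nn = min((c for c in clusters if c != top), key=lambda c: dist(top, c))
        if len(chain) > 1 and nn == chain[-2]:
            # top and nn are reciprocal nearest neighbors: merge them.
            d = dist(top, nn)
            chain.pop(); chain.pop()
            clusters[next_id] = clusters.pop(top) + clusters.pop(nn)
            next_id += 1
            merges.append((top, nn, d))
        else:
            chain.append(nn)
    return merges
```

Because complete linkage is reducible, merging reciprocal nearest neighbors never invalidates the rest of the chain; ParChain exploits the same property to merge many reciprocal pairs per round in parallel.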
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-assisted machine learning for X-ray imaging</title>
<link href="https://hdl.handle.net/1721.1/143294" rel="alternate"/>
<author>
<name>Guo, Zhen</name>
</author>
<id>https://hdl.handle.net/1721.1/143294</id>
<updated>2022-06-16T03:29:54Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Physics-assisted machine learning for X-ray imaging
Guo, Zhen
X-ray imaging is capable of imaging the interior of objects in two and three dimensions non-invasively, with applications in biomedical imaging, materials study, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory reconstructions. Recently, deep learning has been adopted for 2D and 3D reconstruction. Unlike iterative algorithms, which require a prior distribution that is known in advance, deep reconstruction networks can learn a prior distribution by sampling the statistical properties of the training distributions. In this thesis, we develop a physics-assisted machine learning approach: a two-step algorithm for 2D and 3D reconstruction. The 2D case is studied in the context of randomized probe imaging to retrieve the quantitative phase distribution using the deep k-learning framework, and the 3D case in the context of X-ray tomography to retrieve the structure of an integrated circuit via a physics-assisted generative network. In contrast to previous efforts, our physics-assisted machine learning algorithm utilizes iterative approximants derived from the physical measurements to regularize the reconstruction with both the known physical priors and the learned priors. The advantages of using learned priors from machine learning in X-ray imaging may further enable low-photon nanoscale imaging. Note that part of this thesis has been previously reported [1, 2].
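The general shape of prior-regularized reconstruction can be caricatured with a toy linear inverse problem, where a learned prior estimate x_prior (standing in for a network output) pulls the least-squares solution. This sketch shows only the generic form of such regularization; the thesis's actual method uses iterative approximants and deep generative priors, and all names here are illustrative:

```python
import numpy as np

def physics_assisted_recon(A, b, x_prior, lam=0.1, lr=0.01, steps=500):
    """Gradient descent on the regularized objective
        ||A x - b||^2 + lam * ||x - x_prior||^2,
    i.e. a physics (data-fit) term plus a penalty pulling the solution
    toward a prior estimate x_prior (e.g. a learned reconstruction)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - b) + 2 * lam * (x - x_prior)
        x -= lr * grad
    return x
```

When the prior agrees with the data, the regularizer costs nothing; when the measurements are photon-starved and the data term is weak, the prior dominates, which is the regime where learned priors help most.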
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering the Language of Actions</title>
<link href="https://hdl.handle.net/1721.1/143293" rel="alternate"/>
<author>
<name>Sharma, Pratyusha</name>
</author>
<id>https://hdl.handle.net/1721.1/143293</id>
<updated>2022-06-16T03:32:06Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Discovering the Language of Actions
Sharma, Pratyusha
This thesis takes a look at discovering language-like discrete infinities for actions. How can a stream of continuous data be parsed into skills/concepts and can we tie the decision of what may be the right set of skills with the problem of generating plans over a continuous action space as in the original stream of data? Can we utilize supervision from aligning parallel language instructions to scaffold the discovery of these named primitives of actions from interactions? Here, we present a framework for learning hierarchical policies from demonstrations, using sparse natural language annotations to guide the discovery of reusable skills for autonomous decision-making. It is formulated as a generative model of action sequences in which goals generate sequences of high-level subtask descriptions, and these descriptions generate sequences of low-level actions. The thesis describes how to train this model using primarily unannotated demonstrations by parsing demonstrations into sequences of named high-level subtasks, using only a small number of seed annotations to ground language in action. In trained models, the space of natural language commands indexes a combinatorial library of skills; agents can use these skills to plan by generating high-level instruction sequences tailored to novel goals. The approach is evaluated in the ALFRED household simulation environment, providing natural language annotations for only 10% of demonstrations. It completes more than twice as many tasks as a standard approach to learning from demonstrations, matching the performance of instruction following models with access to ground-truth plans during both training and evaluation. 1&#13;
&#13;
1 Code, data, and additional visualizations are available at https://sites.google.com/view/skill-induction-latent-lang/.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Unsupervised Anomaly Detection Applied to Motor-Driven Blowers</title>
<link href="https://hdl.handle.net/1721.1/143291" rel="alternate"/>
<author>
<name>Saqr, Tareq E.</name>
</author>
<id>https://hdl.handle.net/1721.1/143291</id>
<updated>2022-06-16T03:28:08Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Deep Unsupervised Anomaly Detection Applied to Motor-Driven Blowers
Saqr, Tareq E.
In the rapidly evolving Industry 4.0 space, predictive maintenance is shifting towards data-driven techniques. This shift is driven by advanced computing, reduced costs of sensing, abundantly available data, and maturing machine learning algorithms. In particular, deep learning, a subset of machine learning, has been growing rapidly compared to traditional machine learning approaches. This is mainly due to its ability to automatically extract features and its high performance in tackling complex problems. Furthermore, predictive maintenance data are often unlabeled because labeling relies heavily on expensive domain expertise. As such, unsupervised techniques are gaining popularity compared to their supervised counterparts. Therefore, the work at hand focuses on deep unsupervised anomaly detection.&#13;
&#13;
Our work capitalizes on previous work conducted at MIT to develop an automated fault detection algorithm that was shown to work with high detection accuracy across diverse applications such as rolling element bearings, plasma etching machines, and milling machines.&#13;
&#13;
To further explore the ability of the algorithm to generalize to new applications, we consider the problem of anomaly detection in belt-driven blower-motor units due to variable belt tension. To this end, we instrument a belt-driven motor-blower testbed and generate a dataset featuring electrical and vibration time series data. The dataset contains nominal and anomalous instances at different belt tension and motor speed values.&#13;
&#13;
Applying the automated fault detection model to the dataset initially shows that belt problems can be detected but only with a limited accuracy of 12.5%. Upon further experimentation, an accuracy of 64.22% is achieved using a tuned set of hyperparameters on a subset of the data that contains nominal and no-belt conditions only. We conclude that additional hyperparameter tuning may be required in order for the existing algorithm to generalize well to our application. Finally, the dataset and testbed presented here will contribute to exploration of future anomaly detection techniques for time series data.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sonic Hypermirror: Attuning to Hyperobjects</title>
<link href="https://hdl.handle.net/1721.1/143290" rel="alternate"/>
<author>
<name>Kang, Wonki</name>
</author>
<id>https://hdl.handle.net/1721.1/143290</id>
<updated>2022-06-16T03:59:44Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Sonic Hypermirror: Attuning to Hyperobjects
Kang, Wonki
A long-running pandemic without an end in sight, a climate crisis encroaching on our everyday lives: global crises are collective events, but they take on multiple forms and scales, leading to radically different experiences for people. Inter-scalar, inter-temporal representations have gained dire urgency as these crises surface simultaneously at a global scale. A hyperobject, as defined by ecological philosopher Timothy Morton, is an in-experienceable object, so vastly distributed in time and space that it easily exceeds human perceptive capability. I start with a hypothesis: hyperobjects are better heard than seen. This thesis takes a critical approach to data representation by bringing forward listening as a primary modality of interaction. I present Sonic Hypermirror, a custom tool that allows probing of large-scale audio data through vocal interaction, accompanied by a visual interface that uses computational tools to assemble a soft, continuous semantic space of multiple audio streams. It is an experiment in building a data sensorium that listeners can enter, inhabit, and learn from. Through this thesis, I propose a system of data representation that is continuous, non-referential, and exploratory, and revisit the affordances of architectural space as data storage and an interactive datascape.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Warm-Starting Networks for Sample-Efficient Continuous Adaptation to Parameter Perturbations in Multi-Agent Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/143288" rel="alternate"/>
<author>
<name>Huang, Vivian</name>
</author>
<id>https://hdl.handle.net/1721.1/143288</id>
<updated>2022-06-16T03:01:20Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Warm-Starting Networks for Sample-Efficient Continuous Adaptation to Parameter Perturbations in Multi-Agent Reinforcement Learning
Huang, Vivian
Deep reinforcement learning (RL) methods have made significant advancements over recent years toward mastering challenging problems. Because many real-world systems involve multiple agents interacting with each other in a shared environment, one particularly active subfield of RL is multi-agent reinforcement learning (MARL). Learning robust multi-agent policies in real-time strategy games, such as StarCraft II, is an important objective. In particular, being able to quickly adapt game playing agents to perturbations in rules and successfully displaying the ability to take advantage of such changes can yield insights about properties, such as game balance. However, progress in MARL research faces a major challenge associated with the high cost of sample complexity, which makes learning a complicated task from scratch computationally intensive. Therefore, this thesis work details the design and implementation of a MARL framework that facilitates the training of robust agents which are adaptive to perturbations in a multi-agent, StarCraft II-based real-time strategy game such that the features that most affect game balance can be determined. The framework also includes an incremental warm-start approach to improve the computational complexity of agent adaptation. The results show that our approach achieves up to 97% improvement in computational time compared to the standard approach of training the policy with a random initialization.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Neural Networks for Learning Protein Vibrational Behaviors to Characterize Structure and Function</title>
<link href="https://hdl.handle.net/1721.1/143287" rel="alternate"/>
<author>
<name>Granberry Jr., Darnell Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/143287</id>
<updated>2022-06-16T03:26:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Deep Neural Networks for Learning Protein Vibrational Behaviors to Characterize Structure and Function
Granberry Jr., Darnell Scott
Proteins’ structures and motions are essential for nearly all biological functions and malfunctions, making them prime targets for uncovering and controlling processes associated with metabolism and disease. Normal mode analysis is a powerful method that allows us to understand the mechanisms of these functions in high detail, but not without significant cost. Replacing this method with inference by a machine learning model could potentially eliminate this restriction while still providing useful accuracy. Prior work has demonstrated success in a simplified version of this problem that used features computed from each protein’s structure and predicted parameters for a geometric function-of-best-fit relating the modes, not the explicit modes themselves. In this work, we seek to develop a fully end-to-end model that will allow researchers to predict a protein’s normal mode spectrum directly from its peptide sequence, allowing us to bypass the costs associated with both normal mode analysis and protein structure determination. We additionally explore the parallels between protein science and music theory, and provide analysis of a deep neural network trained to understand Bach’s highly structured Goldberg Variations.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesizing Tabular Time Series Data using Transformers</title>
<link href="https://hdl.handle.net/1721.1/143284" rel="alternate"/>
<author>
<name>Huang, Ivy</name>
</author>
<id>https://hdl.handle.net/1721.1/143284</id>
<updated>2022-06-16T03:53:53Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Synthesizing Tabular Time Series Data using Transformers
Huang, Ivy
Using synthetic data in place of real data can come with numerous benefits, such as the protection of privacy. However, synthesizing tabular data is difficult, since it is heterogeneous and may contain relationships both between its columns and between its rows. While much work has been dedicated to generating synthetic tabular data with independent rows based on real data, less has been done on generating time series tabular data, which contains an extra temporal component. One such work uses a transformer model, and we use it as a baseline for our own. We created a service to deploy the transformer model on the cloud and increase its accessibility, and we looked into addressing the limitations of how the previous work utilized the model. In addition, we performed a case study on the architecture of our service, in which we investigated a limitation of the architecture, explored metrics for evaluating the synthetic data it outputs, and looked into using it to generate forecasts, an application the original transformer model was not designed for.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying the design structure matrix to streamline the development process: lessons from marine renewable development</title>
<link href="https://hdl.handle.net/1721.1/143280" rel="alternate"/>
<author>
<name>Pickett, Stephen Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/143280</id>
<updated>2022-06-16T03:01:41Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Applying the design structure matrix to streamline the development process: lessons from marine renewable development
Pickett, Stephen Jeffrey
Tailoring a project development process that balances internal and external requirements is difficult for project managers. One specific difficulty is satisfying stakeholders with significant influence and misaligned process requirements. In this thesis, a federally funded marine renewable energy project is analyzed using Design Structure Matrix (DSM) methods. Two opposing processes appear to drive the conflict in the project environment: an environmental impact assessment imposed by a gatekeeper stakeholder, in direct tension with the iterative nature of technology development.&#13;
&#13;
This thesis uses DSM to analyze and re-design the process architecture to create a workable project development process. The DSM methodology relies on identifying and sequencing the archetypal dependencies which cause tension, and on reorganizing task modules to align outcomes at the task and activity level of the project. Two alternatives are generated using this approach and are analyzed for potential impacts on project execution. Particular attention is paid to the tradeoffs that arise under an indirect stakeholder exchange model, providing insights for project managers balancing internal requirements while managing influential stakeholders.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Agile Development and Innovative Technology in the Structural Engineering</title>
<link href="https://hdl.handle.net/1721.1/143278" rel="alternate"/>
<author>
<name>Luo, Cheryl</name>
</author>
<id>https://hdl.handle.net/1721.1/143278</id>
<updated>2022-06-16T03:35:01Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Application of Agile Development and Innovative Technology in the Structural Engineering
Luo, Cheryl
Structural engineering and its broader ecosystem, the Architecture, Engineering, and Construction (AEC) industry, are now, more than ever, facing a new level of challenges: project complexity, compressed schedules, cost pressures, and sustainability requirements. However, project delivery approaches and processes have changed very little over the past few decades. Chronic problems such as a lack of systems thinking, prioritizing short-term cost outcomes, and poor communication between stakeholders are magnified in such an environment. In addition, unforeseen design changes under rigid planning and coordination strategies create costly ripple effects. To combat these common challenges, there is an urgent need for a transformation of project delivery, from methodologies to processes and tools.&#13;
&#13;
This work directs the search for solutions toward the technology landscape and identifies the potential of deploying agile methods and advanced computational tools in structural engineering. It begins by reviewing the current state of structural engineering and discussing the potential benefits of adopting other project management methods and bringing new technologies to the field. Major differences between traditional and agile project delivery are analyzed using system dynamics methods. A case study was performed to demonstrate that agile development points the industry in a promising direction. To maximize the benefits of implementing agility in real-world projects, this thesis also investigates the use of advanced computational tools in combination. Lastly, this thesis concludes by summarizing practical frameworks for project delivery in structural engineering and outlining key areas for future work.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallel Batch-Dynamic &#119896;d-trees</title>
<link href="https://hdl.handle.net/1721.1/143277" rel="alternate"/>
<author>
<name>Yesantharao, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/143277</id>
<updated>2022-06-16T03:01:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Parallel Batch-Dynamic &#119896;d-trees
Yesantharao, Rahul
&#119896;d-trees are widely used in parallel databases to support efficient neighborhood and similarity queries. Supporting parallel updates to &#119896;d-trees is therefore an important operation. In this paper, we present BDL-tree, a parallel, batch-dynamic implementation of a &#119896;d-tree that allows for efficient parallel &#119896;-NN queries over dynamically changing point sets. BDL-trees consist of a log-structured set of &#119896;d-trees which can be used to efficiently insert or delete batches of points in parallel with polylogarithmic depth. Specifically, given a BDL-tree with &#119899; points, each batch of &#119861; updates takes &#119874;(&#119861; log²(&#119899; + &#119861;)) amortized work and &#119874;(log (&#119899; + &#119861;) log log (&#119899; + &#119861;)) depth (parallel time). We provide an optimized multicore implementation of BDL-trees. Our optimizations include parallel cache-oblivious &#119896;d-tree construction and parallel Bloom filter construction.&#13;
&#13;
Our experiments on a 36-core machine with two-way hyper-threading using a variety of synthetic and real-world datasets show that our implementation of BDL-tree achieves a self-relative speedup of up to 34.8× (28.4× on average) for batch insertions, up to 35.5× (27.2× on average) for batch deletions, and up to 46.1× (40.0× on average) for &#119896;-nearest neighbor queries. In addition, it achieves throughputs of up to 14.5 million updates/second for batch-parallel updates and 6.7 million queries/second for &#119896;-NN queries. We compare to two baseline &#119896;d-tree implementations and demonstrate that BDL-trees achieve a good tradeoff between the two baseline options for implementing batch updates.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing the Performance of a Switched-Mode Radio Frequency Power Generation Architecture</title>
<link href="https://hdl.handle.net/1721.1/143276" rel="alternate"/>
<author>
<name>Cassidy, Grace C.</name>
</author>
<id>https://hdl.handle.net/1721.1/143276</id>
<updated>2022-06-16T03:03:29Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Advancing the Performance of a Switched-Mode Radio Frequency Power Generation Architecture
Cassidy, Grace C.
This thesis proposal explores performance improvements of a recently proposed switched-mode power generation architecture - the Multi-Inverter Discrete Backoff (MIDB) system - that is currently in development. The RF power system is intended to be used in applications such as radio frequency plasma generation for semiconductor fabrication. Improvements in efficiency are desired due to the high frequency and power levels seen during this process.&#13;
&#13;
There are two main aspects of the proposed thesis. The first aspect of the thesis is to improve the efficiency of the power combining of the multiple power amplifiers used to construct the MIDB system. The architecture utilizes transmission line transformer-based power combiners. A first step is to create a more accurate combiner model at the required operating frequency which better addresses the loss and performance of this transformer structure, including the interaction of transmission line capacitance and the magnetizing inductance of the core. This characteristic was not previously modeled in the traditional combiner model. After making a more accurate model, a physical capacitor is added to the combiner design to achieve resonance at the operating frequency in order to limit the impedance distortion seen in power combining owing to these parasitics. It is then explored how to best design the combiner to achieve high efficiency at low impedance distortion.&#13;
&#13;
The second aspect of the thesis is to develop an interface for the FPGA-based control circuitry and the MIDB power system. There are two proposed ideas for the implementation of the controller, and a feedforward approach is implemented.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coping With Neighbors &amp; other entanglements</title>
<link href="https://hdl.handle.net/1721.1/143275" rel="alternate"/>
<author>
<name>Xu, Zhicheng</name>
</author>
<id>https://hdl.handle.net/1721.1/143275</id>
<updated>2022-06-16T03:43:20Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Coping With Neighbors &amp; other entanglements
Xu, Zhicheng
This thesis explores how two ways of "seeing" landscape might be integrated through a set of design interventions for the Kooyooe fish and the people of the Paiute nation. Situated in the state currently known as Nevada, where the Paiute people have lived for over 9,000 years, Pyramid Lake is the Kooyooe's home. The 1905 Truckee River damming and the ensuing history of water rights struggles have endangered the Kooyooe, a cultural icon in the Paiute community. This struggle represents the violence of colonialism in the landscape and the inequitable distribution of its resources.&#13;
&#13;
Contemporary land management that supports damming and extraction and indigenous land stewardship are situated at two moments along a spectrum of knowledge production and land perspectives. The former approaches the land from a top-down perspective that foregrounds the land's static, immediate utility. The latter is derived from traditional ecological knowledge (TEK) embedded in the land, which does not differentiate between people, species, and the land on which they are located.&#13;
&#13;
The purpose of this thesis is twofold. In addition to designing the storytelling place, the hatchery, and the series of fish hiders as part of an experiment that attempts to integrate knowledge forms, this thesis is also a journey of self-discovery and a reflection on the institutional architectural pedagogy students experience in American higher education.&#13;
&#13;
The thesis allows the TEK of Paiute people to enter the project and become an invitation to voyage into a different worldview, a native mode of knowledge production previously shunned by architecture higher education. However, expeditions are often accompanied by mishaps and accidents. This project has fractures and failures that the author must fully embrace, as they are critical reminders of what is currently missing in the modes of learning and teaching architecture.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CommunAir: Building Low-cost Community Data Infrastructure with Sensors, Spreadsheets, and Open Datasets</title>
<link href="https://hdl.handle.net/1721.1/143274" rel="alternate"/>
<author>
<name>Woo, Wesley</name>
</author>
<id>https://hdl.handle.net/1721.1/143274</id>
<updated>2022-06-16T03:32:03Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">CommunAir: Building Low-cost Community Data Infrastructure with Sensors, Spreadsheets, and Open Datasets
Woo, Wesley
In the past few years, poor air quality in the Global South has become recognized as a significant cause of health problems and premature death. However, without appropriate data collection, reporting, and storage systems, it is difficult to build community awareness and drive policy with air quality measurements alone. Furthermore, while investigations into poor air quality and its adverse effects have been conducted in resource-poor areas, the results of such studies rarely engage local communities or encourage collective action. Living Data Hubs is an ongoing project at MIT’s Civic Data Design Lab looking to address these issues. The project aims to reinvent data ownership through the merging of community-owned wireless networks and wireless sensor networks. In this thesis, I discuss the challenges of building digital infrastructure for low-resource communities through the design and implementation of an end-to-end PM2.5 measurement and data storage service for the Living Data Hubs project. The process of designing such a uniquely specified system reveals that minimal technical complexity and cost are essential in designing digital infrastructure for equitable community ownership. I also find that it is possible to bring together free cloud computing and storage tools to create an appropriately extensible and robust data storage system at a competitively low cost, although such systems run the risk of becoming dependent on foreign service providers.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven extraction of the substructure of quark and gluon jets in proton-proton and heavy-ion collisions</title>
<link href="https://hdl.handle.net/1721.1/143273" rel="alternate"/>
<author>
<name>Ying, Yueyang</name>
</author>
<id>https://hdl.handle.net/1721.1/143273</id>
<updated>2022-06-16T03:19:06Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Data-driven extraction of the substructure of quark and gluon jets in proton-proton and heavy-ion collisions
Ying, Yueyang
The modification of quark- and gluon-initiated jets in the quark-gluon plasma produced in heavy-ion collisions is a long-standing question that has not yet received a definitive answer from experiments. In particular, the size of the modifications differs between theoretical models. Therefore a fully data-driven technique is crucial for an unbiased extraction of the quark and gluon jet spectra and substructure. We demonstrate a fully data-driven method for separating quark and gluon contributions to jet observables using a statistical technique called topic modeling. We will also demonstrate that jet substructures, such as jet shapes and jet fragmentation function, could be extracted using this data-driven method. This proof-of-concept study is based on proton-proton and heavy-ion collision events from the PYQUEN generator with statistics accessible in Run 4 of the Large Hadron Collider. These results suggest the potential for an experimental determination of quark- and gluon-jet spectra and their substructures.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Supervised Annotation and Active Learning with Uncertainty for Cloud Mask Dataset Generation</title>
<link href="https://hdl.handle.net/1721.1/143272" rel="alternate"/>
<author>
<name>Williams, Christien Spencer</name>
</author>
<id>https://hdl.handle.net/1721.1/143272</id>
<updated>2022-06-16T03:51:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Fast Supervised Annotation and Active Learning with Uncertainty for Cloud Mask Dataset Generation
Williams, Christien Spencer
Satellite imagery analysis has made great strides, but it still faces inertia due to the difficulty of generating robust, labeled datasets for complex learners. Variation in the data and the diversity of tasks make it difficult both to crowdsource such datasets and to offload this responsibility onto the small number of available expert annotators. Currently, no general machine learning method can automatically generate data labels in all regimes. A chief data-labeling concern for remote sensing projects is cloud mask dataset creation. Using optical satellite images requires accurately detecting all clouds in any image, and for many applications automatic cloud detection methods are not accurate enough. This thesis reformulates the problem away from finding a single automatic annotation algorithm. We amplify an expert annotator’s efforts with an algorithm that learns from their annotations to annotate datasets more efficiently, and with an active learning loop that multiplies this labeling effort. The thesis first contributes a fast, machine learning based annotation system and demonstrates on Sentinel-2 images that it reaches more than 95% accuracy in four clicks or fewer. To obtain these statistics, we constructed an eclectic database of partially cloudy images with ground truth, which we evaluated to be more than 98% accurate. We then show that our fast, supervised annotation is far more accurate than recent sophisticated cloud detectors. Next, we develop an active learning system that employs uncertainty sampling for query selection and uses a modified Efficient Neural Network (ENet) model as its backbone. We evaluate this system by comparing different scoring functions for the uncertainty metric that powers query selection, and show that with this uncertainty measurement the active learning system performs better using fewer data points. Ultimately, with a minimal number of clicks and annotations, the annotator can build a robust, large, labeled dataset.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing and Fabricating Polarized Light Mosaics with User-Defined Color-Changing Behaviors</title>
<link href="https://hdl.handle.net/1721.1/143270" rel="alternate"/>
<author>
<name>Sethapakdi, Ticha</name>
</author>
<id>https://hdl.handle.net/1721.1/143270</id>
<updated>2022-06-16T03:33:10Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Designing and Fabricating Polarized Light Mosaics with User-Defined Color-Changing Behaviors
Sethapakdi, Ticha
Polarized light mosaics are color-changing structures that alter their appearance based on the orientation of incident polarized light. While artists have developed techniques for crafting polarized light mosaics by hand, there exists no design tool and fabrication process that allows makers to create them using standard fabrication hardware.&#13;
&#13;
In this thesis, we introduce the first end-to-end system for designing and fabricating Polagons: machine-made polarized light mosaics with user-defined color changing behaviors. Our system includes an interface for designing and visualizing Polagons as well as a fabrication process based on laser cutting and welding that requires minimal assembly by the user. We define the design space for Polagons, evaluate the achievable color spectrum, and demonstrate how they can enable new applications in fields such as architectural design, data visualization, and fashion.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fourth Dimension</title>
<link href="https://hdl.handle.net/1721.1/143267" rel="alternate"/>
<author>
<name>Door, Angie</name>
</author>
<id>https://hdl.handle.net/1721.1/143267</id>
<updated>2022-06-16T03:52:50Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Fourth Dimension
Door, Angie
In the basement preservation lab, figures were lifted onto cloth by the German head of cultural policy with gloves while the Nigerian delegation joined to discuss their return. The empty gallery rooms above them, reserved for these Benin Bronzes, still had white pedestals wrapped in plastic, pumps of conditioned air preserving the space for artifacts never to arrive. Of course all of the Bronzes were soon repatriated home with thousands of other objects, leaving behind an empty Ethnological Museum in the middle of Berlin.&#13;
&#13;
Once a People’s Palace, now a conglomerate of private-turned-public Wunderkammers, the recently completed Humboldt Forum building was the result of hundreds of millions of federal, city, and non-profit Euros. The shoddily rebuilt Prussian Palace, encrusted in re-cast princes and cream paint splashes, demands innovative reuse from its expensive vacancy.&#13;
&#13;
To (mis)use Fukuyama’s words, perhaps the end of history means that we are now allowed to leave linearity and enter a multiplicity of post-history. In this state of emerging time, used to describe the postcolony by Mbembe as an “interlocking of presents, pasts, and futures”, one can only contend with this thick density of data through fragmentary encounters.&#13;
&#13;
As the German Federal Ministry assesses lost revenue to the mass repatriation, Preservation Agency LLC has been asked to develop options for the building’s future. The Acts that Agency LLC presents to the board are Boym’s Architectures of the Off-Modern, or structures that exist in the murky space parallel to the past.&#13;
&#13;
Each Act uses tactics of care to interrupt the projective sequence of demolition; Act I with slab cuts and scaffolding that slow the Humboldt Forum’s construction; Act II rehabilitating the Palast Der Republik with layers of fireproofing and insulation; and Act III gathering the bombed rubble of the Stadtschloss in netting, rebar, and glass. These tactics riff on Gissen’s concept of subnature by perpetuating architecture that exists on the margins of finality, cleanliness, and enclosure.&#13;
&#13;
Agency LLC’s final proposal is not another building, but an epilogue of tactics developed from these Off-Modern Acts. The question of emptiness is answered with actions on the building itself, nonlinear acts that set up hospitable encounters through time as a proposal for its future.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ML for Loop Gain Identification of DC/DC Converters</title>
<link href="https://hdl.handle.net/1721.1/143266" rel="alternate"/>
<author>
<name>Chu, Cecelia</name>
</author>
<id>https://hdl.handle.net/1721.1/143266</id>
<updated>2022-06-16T03:10:46Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">ML for Loop Gain Identification of DC/DC Converters
Chu, Cecelia
Control loop identification is necessary for evaluating the stability of switched power supplies and is therefore an important step during design and verification. Analytical models of power supplies often yield inaccurate predictions of the loop gain; therefore, power engineers traditionally must conduct slow, invasive loop gain measurements on physical hardware. This thesis presents an alternate approach to loop gain identification in which a machine learning model infers the frequency-domain loop response from the quick and convenient time-domain measurement of a load step transient. We show that we can train a neural network to accurately infer the loop gain of a current-mode buck converter over a generalized set of configurations and illustrate the disruptive potential of such a model with example applications such as live Bode plot monitoring and automatic loop compensation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forest Framing</title>
<link href="https://hdl.handle.net/1721.1/143264" rel="alternate"/>
<author>
<name>Swagemakers, Jitske</name>
</author>
<id>https://hdl.handle.net/1721.1/143264</id>
<updated>2022-06-16T03:06:17Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Forest Framing
Swagemakers, Jitske
This thesis integrates wild wood research with local knowledge and tools to restore forest ecology and boost rural economies. The proposal is situated in the context of Sweet Home, Oregon, a former logging community. Like many small towns across the United States, Sweet Home is in a state of social and ecological crisis, combating high poverty rates and dwindling resources. Here, industrial mills are abandoned, homes are patched in varying states of decay, and surrounding forests are reduced to arrays of Douglas Fir.&#13;
&#13;
Through a close reading of reciprocal relationships between forestry and wood construction, this thesis outlines a series of wood construction techniques to repair buildings and restore forest habitat. These techniques provide accessible and economical systems of framing, placing special emphasis on the diverse materialities of wood in its natural form, wild wood. Harvesting wild wood helps prevent the spread of wildfires, does not contribute to deforestation, and, when applied locally, reduces transportation emissions.&#13;
&#13;
This thesis imagines a restorative future for Sweet Home through a staged transformation of the abandoned mill into a collective wild wood shop, in which wild wood construction techniques are implemented and executed by a cast of community members.&#13;
&#13;
Plates detailing the technical elements of wild wood application, along with construction manuals, build on the community’s receptivity and proficiency in working with wood, and are informed by ethnographic research consisting of site visits and interviews with community members. The collective wood shop offers a site for wild wood prototyping and experimental forestry that enhances the material and ecological qualities of wood and feeds back into the forest. This thesis contributes to a reappropriation of wood in architectural construction and offers an economical solution that promotes symbiosis between forestry and architecture.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Challenging Aspects of Optimal Power Management of Unmanned Aerial Vehicle Modular Hybrid Propulsion Systems</title>
<link href="https://hdl.handle.net/1721.1/143263" rel="alternate"/>
<author>
<name>Kunycky, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/143263</id>
<updated>2022-06-16T03:42:16Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Some Challenging Aspects of Optimal Power Management of Unmanned Aerial Vehicle Modular Hybrid Propulsion Systems
Kunycky, Alexander
An optimal control framework has been developed to assess the operation of an unmanned aerial vehicle (UAV) powered by a modular hybrid propulsion system (MHPS). The framework is used to assess the effects of MHPS power hybridization level and energy hybridization mass ratio on the UAV performance for two specified mission types. The results are then used to suggest selection guidelines for MHPS configurations and operational guidelines for an MHPS-powered UAV to execute various mission types with different mission requirements. Two specific mission types are selected for illustration: one, a Survey mission, consisting of climb, dash, survey, dash, and descent phases; the other, a Loiter mission, consisting of climb, dash, loiter, dash, and descent phases. Power hybridization level &#119867;ₚ refers to the percentage of propulsive power sourced from the electric motor, rather than the internal combustion engine. Energy hybridization mass ratio &#119877;ₘ refers to the percentage of energy storage system mass taken up by batteries, rather than carbon fuel. From the results of Survey mission execution optimized for minimum completion time, there are three distinct regimes defined by energy storage system (ESS) mass ratio values &#119877;ₘ₁ and &#119877;ₘ₂. MHPS configurations with &#119877;ₘ₁ ≤ &#119877;ₘ ≤ &#119877;ₘ₂ can achieve the minimum attainable mission completion time; mission completion time is longer, and total energy consumption is lower, for configurations with &#119877;ₘ &lt; &#119877;ₘ₁ and &#119877;ₘ &gt; &#119877;ₘ₂. From the results of Loiter mission execution optimized for minimum energy consumption, there is a distinct demarcation boundary at a threshold value of ESS mass ratio &#119877;ₘ₃ for each specific &#119867;ₚ . 
All UAV MHPS configurations with &#119877;ₘ ≥ &#119877;ₘ₃ will utilize the same amount of energy to complete the mission; configurations with &#119877;ₘ &lt; &#119877;ₘ₃ require higher energy consumption to complete the same mission.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large-Scale Characterization of Quantum Emitters in High-Purity Diamond</title>
<link href="https://hdl.handle.net/1721.1/143261" rel="alternate"/>
<author>
<name>Sutula, Madison M.</name>
</author>
<id>https://hdl.handle.net/1721.1/143261</id>
<updated>2022-06-16T03:27:42Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Large-Scale Characterization of Quantum Emitters in High-Purity Diamond
Sutula, Madison M.
Solid state quantum memories, such as color centers in diamond, are a leading platform for the distribution of quantum information. Quantum repeaters will require many qubit registers at every quantum network node, each with long-lived spin states and high-quality single photon emissions. Here, we present techniques for large-scale characterization of color centers in diamond. We first demonstrate automated confocal microscopy and apply it to characterize silicon vacancies in diamond overgrown via chemical vapor deposition and tin vacancies in overgrown and high pressure high temperature treated diamond, yielding narrow inhomogeneous distributions of both emitters. We then demonstrate widefield photoluminescence excitation microscopy as a tool to multiplex the characterization of color center optical properties, and apply it to measure the optical properties of silicon vacancies in a sample implanted with a focused ion beam. These techniques pave the way for future large-scale characterization efforts necessary to construct quantum memory nodes.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Information Content of Discretionary Disaggregation</title>
<link href="https://hdl.handle.net/1721.1/143260" rel="alternate"/>
<author>
<name>Anderson, Samuel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/143260</id>
<updated>2022-06-16T03:03:22Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">The Information Content of Discretionary Disaggregation
Anderson, Samuel S.
I examine the information content embedded in firms' decision to disaggregate financial statement line items. I find that the level of disaggregation predicts both current and future performance. I also find that significant changes in discretionary disaggregation are indicative of weak fundamentals. In particular, I document a hump-shaped pattern such that both increases and decreases in discretionary disaggregation are negatively associated with measures of performance. Investors do not unravel this information at the time of filing, resulting in predictable return patterns over time. Together, these findings are consistent with discretionary disaggregation providing an informative, but low-saliency, signal regarding firms' fundamental performance.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Closed Loop Control for a Piezoelectric-Resonator-Based DC-DC Power Converter</title>
<link href="https://hdl.handle.net/1721.1/143258" rel="alternate"/>
<author>
<name>Piel, Joshua J.</name>
</author>
<id>https://hdl.handle.net/1721.1/143258</id>
<updated>2022-06-16T03:46:24Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Closed Loop Control for a Piezoelectric-Resonator-Based DC-DC Power Converter
Piel, Joshua J.
Miniaturization of power electronics reduces their cost and increases their scope of potential applications. Power electronics traditionally rely on magnetics for energy storage, but magnetics are fundamentally less efficient and power dense when scaled to small sizes. Piezoelectric resonators (PRs), which store energy in mechanical inertia and compliance, are promising alternatives to magnetic energy storage for miniaturized power electronics because of their high quality factors and favorable scaling properties. Dc-dc converters relying only on a PR for energy storage have been demonstrated to achieve high efficiency through specific behaviors including PR soft charging, zero-voltage switching (ZVS) of all active switches, and all-positive instantaneous power transfer. However, closed-loop control of PR-based dc-dc converters is necessary for them to be practically viable. Implementation of this closed-loop control is challenging because achieving all desired high-efficiency behaviors requires simultaneous control of duty cycle, dead time, and frequency.&#13;
&#13;
This thesis presents a closed-loop control scheme for PR-based dc-dc power converters that are implemented with six-stage switching sequences and two-half-bridge topologies. The voltage regulation range of a PR-based converter can be derived from its operating modes, referred to as switching sequences. The regulation range is then used to conceptualize each half-bridge in the converter topology as regulating or nonregulating. Control methods for the regulating and nonregulating half-bridges capable of achieving all desired high-efficiency behaviors are proposed.&#13;
&#13;
This thesis also presents several methods for modeling the operation of PR-based dc-dc converters, both in periodic steady state (PSS) and in dynamic operation. PSS solutions are obtained using conservation equations associated with the switching sequence, including strategies for both ideal solutions and solutions considering the mechanical loss of the PR. Several methods for modeling converter dynamics are proposed, including a linearizable state space model.&#13;
&#13;
Finally, this thesis designs and implements an example PR-based dc-dc converter and a microcontroller-based closed-loop controller. The converter is operated from 30 V to 10 V with 0.5 W of output power. The controller was verified to achieve all of the desired high-efficiency behaviors, and its transient response characteristics are evaluated.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Machine Learning for Description and Inference of Cyber Threats, Vulnerabilities, and Mitigations</title>
<link href="https://hdl.handle.net/1721.1/143257" rel="alternate"/>
<author>
<name>Srinivasan, Ashwin</name>
</author>
<id>https://hdl.handle.net/1721.1/143257</id>
<updated>2022-06-16T03:58:59Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Using Machine Learning for Description and Inference of Cyber Threats, Vulnerabilities, and Mitigations
Srinivasan, Ashwin
Machine learning and natural language processing (NLP) can help describe and make inferences on the vast amount of text data in cybersecurity. We use a graph database named BRON, which contains data from publicly available threat and vulnerability sources, for machine learning inference. Applying machine learning to BRON can provide us with more robust relationships, which can improve defenses against cyber threats. We experiment with different feature representations and subsets of the data, and show that machine learning and NLP can effectively classify edges between entries from different data sources as well as predict possible edge candidates. Experts agree that several of our predicted candidates are plausible edges. We also analyze defensive mitigation similarities using NLP techniques and find that there are identical mitigation descriptions for some entries that have internal relationships.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FlightMARL: A Multi-Agent Reinforcement Learning Framework for Vision-Based Control of Autonomous Quadrotors</title>
<link href="https://hdl.handle.net/1721.1/143256" rel="alternate"/>
<author>
<name>Shubert, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/143256</id>
<updated>2022-06-16T03:46:04Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">FlightMARL: A Multi-Agent Reinforcement Learning Framework for Vision-Based Control of Autonomous Quadrotors
Shubert, Ryan
In this paper, we present FlightMARL, an extension of the Flightmare simulator that implements a modular multi-agent reinforcement learning engine capable of supporting an array of models and algorithms. We explore representation learning across several models in this environment. We implement recurrent agents as both discrete and continuous networks. We evaluate the learned representations of the models in the multi-agent environment. We demonstrate that agents constructed from continuous-time neural networks achieve interpretable causality in their representations.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benefits of branches in sparsely connected networks</title>
<link href="https://hdl.handle.net/1721.1/143254" rel="alternate"/>
<author>
<name>Landry, Madison</name>
</author>
<id>https://hdl.handle.net/1721.1/143254</id>
<updated>2022-06-16T03:49:48Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Benefits of branches in sparsely connected networks
Landry, Madison
Artificial neural networks are most commonly implemented in computer software; however, real-time processing and energy efficiency demands require faster and lower-power alternatives. Neuromorphic engineering promises speed and energy efficiency, yet these devices can have unique constraints making them difficult to train. Motivated by optoelectronic devices, a unique class of optics-based neuromorphic hardware such as the COIN coprocessor, this thesis explores branched connection networks (BCNs), a kind of neural network in which directed connections may make additional branching connections. It focuses on effective approaches to train sparse BCNs from the bottom up and investigates the efficacy of weight perturbation for recovering sparse BCNs from faults. On image classification tasks (MNIST &amp; FashionMNIST), branching was found to benefit sparse BCNs in both performance and the ability to recover from faults. An “output connectedness” notion, useful for analyzing sparse networks, is defined. To conclude, this work contributes some rules of thumb advising the future development of these optoelectronic devices.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GoTxn: Verifying a Crash-Safe, Concurrent Transaction System</title>
<link href="https://hdl.handle.net/1721.1/143253" rel="alternate"/>
<author>
<name>Theng, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/143253</id>
<updated>2022-06-16T03:50:58Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">GoTxn: Verifying a Crash-Safe, Concurrent Transaction System
Theng, Mark
Bugs related to concurrency and crash safety are infamous for being subtle and hard to reproduce. Formal verification provides a way to combat such bugs through the use of machine-checked proofs about program behavior. However, reasoning about concurrency and crashes can be tricky, especially when scaling up to larger systems that must also have good performance.&#13;
&#13;
This thesis discusses the verification of GoTxn, the concurrent, crash-safe transaction system underlying the verified Network File System (NFS) server DaisyNFS. It focuses on the specification and proof of the write-ahead log and the automatic two-phase locking interface used to enforce crash and concurrent atomicity in transactions, detailing how the verification framework Perennial can be used to manage assertions about crash behavior across multiple threads. By effectively harnessing concurrency to hide disk access latency, GoTxn enables performance in DaisyNFS similar to the unverified Linux NFS server.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation- and Experiment-Based Setpoint Control for Heating, Ventilation, and Air-Conditioning Systems: A Single- and Multi-Objective Optimization Problem</title>
<link href="https://hdl.handle.net/1721.1/143252" rel="alternate"/>
<author>
<name>Cai, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/143252</id>
<updated>2022-06-16T03:15:38Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Simulation- and Experiment-Based Setpoint Control for Heating, Ventilation, and Air-Conditioning Systems: A Single- and Multi-Objective Optimization Problem
Cai, Yuan
The buildings and building construction sectors are together responsible for 40% of global energy consumption, 70% of electricity consumption, and 40% of carbon emissions. Heating, ventilation, and air-conditioning (HVAC) systems in residential and commercial units account for 40% to 60% of energy usage. To increase energy efficiency and reduce energy usage, buildings are now better insulated, installed with energy-efficient appliances, and controlled by advanced technologies to provide user comfort while minimizing their environmental impact.&#13;
&#13;
This thesis focuses on utilizing setpoint control methods to design algorithms for operating thermostatically controlled appliances such as HVAC to achieve the goal of minimizing energy consumption, cost, and greenhouse gas emissions while maintaining thermal comfort and indoor air quality. The problem is formulated as a constrained convex optimization problem. Specifically, the thesis proposes three optimization-based control frameworks that are verified in simulation testbeds (with state-of-the-art simulation software and numerical models in MATLAB and Python). The three methods apply setpoint control to room- and aggregate (building)-level devices and have achieved a 20% to 50% reduction in peak load demand and greenhouse gas emissions in simulation testbeds.&#13;
&#13;
In addition to simulation, onsite experiments are conducted. One of the three simulation based setpoint control frameworks is implemented in two MIT classrooms. Throughout the eight experiment sessions, a significant amount of commissioning of HVAC, software, and hardware is completed. This experimental verification has demonstrated a nearly 50% savings on greenhouse gas emissions and showcased the power of data-driven control methods in real-life settings.&#13;
&#13;
Although we have witnessed successes in both simulations and experiments, the results presented in the thesis are preliminary and serve only as a proof of concept. There are still plenty of areas worth further investigation to fully materialize and implement these methods.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Descriptions of Deep Visual Features</title>
<link href="https://hdl.handle.net/1721.1/143251" rel="alternate"/>
<author>
<name>Hernandez, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/143251</id>
<updated>2022-06-16T03:56:34Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Natural Language Descriptions of Deep Visual Features
Hernandez, Evan
Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual-information guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. These descriptions obtain high agreement with human-generated feature descriptions across a diverse set of model architectures and tasks, and can aid in understanding and controlling learned models. We highlight three applications of natural language neuron descriptions. First, we use MILAN for analysis, characterizing the distribution and importance of neurons selective for attribute, category, and relational information in vision models. Second, we use MILAN for auditing, surfacing neurons sensitive to protected categories like race and gender in models trained on datasets intended to obscure these features. Finally, we use MILAN for editing, improving robustness in an image classifier by deleting neurons sensitive to text features spuriously correlated with class labels.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plantable Maps</title>
<link href="https://hdl.handle.net/1721.1/143245" rel="alternate"/>
<author>
<name>Hua, Xi</name>
</author>
<id>https://hdl.handle.net/1721.1/143245</id>
<updated>2022-06-16T03:27:13Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Plantable Maps
Hua, Xi
This thesis explores the creation of Plantable Maps, which seek to present geographic information in a way that is conscious of its ties to its local environment. I created biodegradable and compostable maps using plant-based materials that visualize the spatial history of "blight" in San Francisco. Blight, originally an ecological term for a fungal disease in plants, was used by mid-century urban planners to justify redevelopment policies with racially unjust consequences. These map sculptures explore the intertwined history of urban land use and natural ecology, and the implications of relating data to its sites of origin. The result is a project that — through material, process and site-specificity — aims to return information to the natural landscape.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inhabiting Wetness</title>
<link href="https://hdl.handle.net/1721.1/143242" rel="alternate"/>
<author>
<name>McIntosh, Ana A.</name>
</author>
<id>https://hdl.handle.net/1721.1/143242</id>
<updated>2022-06-16T03:02:58Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Inhabiting Wetness
McIntosh, Ana A.
This thesis explores the condition where water meets urban edge in Asunción, Paraguay, proposing an architectural response that considers water not as a challenging force, but rather as a powerful asset for the maintenance of ecosystems and the health of the city. It makes a case to consider the humedales (wetlands) as an important constituent in imagining the future of Asunción because of their overlooked benefits, including carbon sequestration, cultivation of biodiversity, protection against erosion, and the cleaning of water. &#13;
&#13;
Recent development at the water’s edge in Asunción has caused the destruction of wetlands for the construction of a highway and other public and private developments. This is the approach of the binary: the separation of the wet and the dry. What might it mean instead to inhabit a gradient of wetness by exploring other possibilities for resilient living at the edge? &#13;
&#13;
Sited in the Bañado Sur, this project considers these questions in a zone of informal housing that experiences hazardous flooding from heavy rain and river surges. These inundations often lead to the disruption of life, loss of work, and the evacuation of inhabitants. How could designing with buoyancy provide housing, working, common-use, and storage spaces for use in wet, dry, and in-between conditions?&#13;
&#13;
The proposed vivienda complex explores how an amphibious architecture might expand, contract and adapt to changing water levels while still supporting basic functions. At the same time, the house itself becomes a vessel to capture, hold and distribute rainwater. &#13;
&#13;
The representation becomes an opportunity to reconsider how to describe and engage with water differently. Through a series of watercolor and video experiments, water itself becomes the medium for drawing and making. Its fading, bleeding, seeping, blurring, pooling, flowing and drying enables a new imagination for understanding water in relationship to time, architecture and the city.&#13;
&#13;
Palimpsest, Symbiosis and Layering become guiding terms that inform not only how to think about the spatial conditions at the water’s edge, but also the diverse narratives and histories that exist in a place like the Bañado Sur. It is a place of great complexity, resilience, culture and tradition. Although adapted for this specific context, the thesis is an invitation to question the line of the wet and dry where city meets water around the world.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bōsai, Tsunami</title>
<link href="https://hdl.handle.net/1721.1/143237" rel="alternate"/>
<author>
<name>Tan, Evellyn</name>
</author>
<id>https://hdl.handle.net/1721.1/143237</id>
<updated>2022-06-16T03:33:26Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Bōsai, Tsunami
Tan, Evellyn
Bōsai is a Japanese term, “bō” meaning prevention and “sai” meaning disaster, commonly associated with disaster preparedness and the necessary actions against such catastrophic events. In Japan, bōsai is deeply embedded in the culture, as locals may face disasters at any time. Therefore, embedding bōsai values in urban coastal landscapes, with greater emphasis on the community’s needs, is vital to building the nation’s social-ecological resilience.&#13;
&#13;
Located in the circum-Pacific “Ring of Fire” and surrounded by sea, Japan, with its extreme range of topographic and geomorphological landscapes, is highly prone to disasters. The southern coastal belt of the islands is especially vulnerable to mega-tsunamis, as the country is situated at a collision of tectonic plates forming active troughs capable of generating destructive tsunami waves. The Great East Japan Earthquake that hit the Tohoku region in 2011 exposed disaster planning challenges and a looming demographic crisis in Japanese coastal towns.&#13;
&#13;
Despite the elaborate network of tsunami barriers constructed by the government to protect the coasts, many coastal settlements will still be significantly affected by L1 tsunamis of 10 meters or higher in the future. Thus, further building the disaster-resilient capacity of social and ecological systems against tsunamis is vital to the survival of lives and livelihoods along Japan’s coast. This thesis highlights the importance of socio-ecological design strategies for coastal towns by re-evaluating and re-imagining current tsunami evacuation spaces. The design project focuses on the southern tip of the Izu Peninsula region, which has been identified as highly vulnerable to tsunamis propagated by the Nankai Trough. The project critically interrogates current typologies of existing evacuation towers and public spaces operating solely for emergency use. Instead, it proposes an evacuation space that actively engages with the ecological environment and local communities to support coastal livelihood and economy on a daily basis, in addition to providing safer high ground and routes for evacuation during a tsunami.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Utilizing Enterprise Architecture Frameworks to Enable Desired Emergent Behaviors of an Enterprise Transformation</title>
<link href="https://hdl.handle.net/1721.1/143231" rel="alternate"/>
<author>
<name>Le Vély, Rachel H.</name>
</author>
<id>https://hdl.handle.net/1721.1/143231</id>
<updated>2022-06-16T03:02:40Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Utilizing Enterprise Architecture Frameworks to Enable Desired Emergent Behaviors of an Enterprise Transformation
Le Vély, Rachel H.
The field of enterprise architecture builds on principles of organizational theory, managerial science, systems engineering, and systems architecture. An enterprise system is a complex sociotechnical system that must contend with a rapidly changing external context, evolving goals for success, and emergent behavior that is highly complex to predict. Enterprises today face unprecedented technological change, increased competition, changing regulatory factors, and workers' desire for increased autonomy. Many leaders view building the organization of the future as the most crucial challenge they face because the survivability of an enterprise may well depend on its ability to transform effectively.&#13;
&#13;
This thesis aims to analyze the drivers for enterprise transformation along with understanding what makes an enterprise transformation successful. The thesis explores the history of enterprise architecture and then examines how to utilize enterprise architecture frameworks to achieve a successful enterprise transformation. Two enterprise architecture frameworks, the Architecting Innovative Enterprise Strategy (ARIES) framework and the STAR Model™, are applied to a real-world enterprise system post-enterprise transformation to assess the effectiveness of the transformation in achieving the desired enterprise capabilities and goals. Both case studies illustrate the benefits of utilizing enterprise architecture frameworks to realize the enterprise transformation goals. This thesis concludes that the application of enterprise architecture frameworks enhances enterprise transformation efforts. Because there is no recipe that all companies can use to transform effectively, applying enterprise architecture frameworks to enable successful enterprise transformation will increasingly become a strategic advantage for enterprises to ensure their success and longevity.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System Architecture for the Digital Thread in Commercial Airplane Design</title>
<link href="https://hdl.handle.net/1721.1/143229" rel="alternate"/>
<author>
<name>Herrero, Javier</name>
</author>
<id>https://hdl.handle.net/1721.1/143229</id>
<updated>2022-06-16T03:04:07Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A System Architecture for the Digital Thread in Commercial Airplane Design
Herrero, Javier
A Digital Thread for Airplane Design (DTAD) is a high-impact opportunity to deliver competitive value by shortening product development times and allowing further refinements in product performance. This thesis first presents a study, based on historical records, of the economic benefits of a DTAD, which were found to be a significant fraction of the forecasted total program development cost. After a comparative analysis of previously implemented architectures, a qualitative selection method was used to rate them against a baseline composed of the most advanced features present in each implementation.&#13;
&#13;
An innovative concept for the DTAD was constructed, based on the “microservices” distributed-deployment pattern, an evolved version of the SOA (service-oriented architecture) concept. Key features of the proposed architecture are high cloud compatibility, robust deployability of MBE (model-based engineering) tools, loose coupling of services, and data modeling based on knowledge graphs. Next, an innovative design for the API (application programmable interface), the backbone of the DTAD, is proposed based on the key paradigm of the Task Based Organization of Objects (TaBOO), for a configuration control capability that emulates contemporary version-control practices in software development. Finally, a prototype DTAD was implemented, with positive results, to demonstrate the validity of the concept by using it in a simulation of a simplified trade study typical of the conceptual design stage in airplane programs.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Novel Statistical Procedure Towards the Discovery of the Higgs Boson</title>
<link href="https://hdl.handle.net/1721.1/143228" rel="alternate"/>
<author>
<name>Atieh, Fadi</name>
</author>
<id>https://hdl.handle.net/1721.1/143228</id>
<updated>2022-06-16T03:58:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A Novel Statistical Procedure Towards the Discovery of the Higgs Boson
Atieh, Fadi
For years, physicists hypothesized the existence of the Higgs Boson, a fundamental particle in the standard model of physics playing a crucial role in the understanding of the electroweak force. However, it took almost 50 years of technological advancements until its discovery was empirically announced in 2012 [6]. The discovery was statistical in nature and relied on analyzing huge amounts of data provided by the LHC (Large Hadron Collider) at CERN. In this thesis, we propose a novel hypothesis testing approach leading to the rejection of the null hypothesis that the Higgs Boson doesn’t exist. We use real data recorded at the LHC, provide the theoretical groundwork, and back it with implementation and experimentation. Finally, we contrast our approach with the one used in the Higgs ML Challenge [3].
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Basic Isolated Half-Bridge Silicon Carbide Gate Driver for Electric and Hybrid Electric Vehicles</title>
<link href="https://hdl.handle.net/1721.1/143225" rel="alternate"/>
<author>
<name>Hidalgo, Nancy Yahel</name>
</author>
<id>https://hdl.handle.net/1721.1/143225</id>
<updated>2022-06-16T03:15:55Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A Basic Isolated Half-Bridge Silicon Carbide Gate Driver for Electric and Hybrid Electric Vehicles
Hidalgo, Nancy Yahel
A Basic, Isolated, Half-Bridge Silicon Carbide Gate Driver was designed and validated using Cadence and SPICE. The architectures of similar gate drivers were studied and simplified to reduce the total area of the gate driver. The fabrication process was also carefully selected to minimize the total area. The gate driver architecture consisted of various analog and mixed signal subcircuits including floating voltage rail generators, inverter chains, and an on-off key receiver among others. Extensive simulations were performed in SPICE and Cadence to analyze the gate driver behavior for various temperature conditions, operating voltages, load conditions, and process corners. The final product was able to drive 6 amps of peak output current, with 10 nanoseconds of propagation delay, and with a 2 milliamp quiescent current.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic, Careful Online Packing of Groceries Using a Soft Robotic Manipulator and Multimodal Sensing</title>
<link href="https://hdl.handle.net/1721.1/143223" rel="alternate"/>
<author>
<name>Choi, Jeana</name>
</author>
<id>https://hdl.handle.net/1721.1/143223</id>
<updated>2022-06-16T03:33:06Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Automatic, Careful Online Packing of Groceries Using a Soft Robotic Manipulator and Multimodal Sensing
Choi, Jeana
This thesis describes the use of soft robotic manipulators with multimodal sensing for estimating the physical properties of unknown objects to enable sorting and packing. Although bin packing has been a key benchmark task for robotic manipulation, the community has mainly focused on the placement of rigid rectilinear objects within the container. We address this by presenting a soft robotic hand that uses a combination of vision, motor-based proprioception and soft tactile sensors to identify and pack a stream of unknown objects. We translate the ill-defined human conception of a “well-packed container” into metrics that match combinations of our different sensor modalities and demonstrate how this works in a grocery packing scenario, where objects of arbitrary shape, size and stiffness come down a conveyor belt. The proposed multimodal approach is supported by physical experiments demonstrating how the integration of multiple sensing modalities can address complex manipulation applications.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory Experiments of High-Energy-Density Shocks in Magnetized Supersonic Plasma Flows</title>
<link href="https://hdl.handle.net/1721.1/143220" rel="alternate"/>
<author>
<name>Datta, Rishabh</name>
</author>
<id>https://hdl.handle.net/1721.1/143220</id>
<updated>2022-06-16T03:29:14Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Laboratory Experiments of High-Energy-Density Shocks in Magnetized Supersonic Plasma Flows
Datta, Rishabh
Magnetized shocks are of interest in many astrophysical environments, in which high Mach number flows interact with ambient media, planetary obstacles, and/or spacecraft to generate strongly radiating shocks. Some examples include extrastellar jets from radio galaxies, relativistic jets from quasars and blazars, Herbig-Haro jets from Young Stellar Objects, and shocks in core-collapse supernovae and supernova remnants. In this study, we mimic these extreme astrophysical environments using pulsed-power driven high-energy-density-plasma laboratory experiments, by generating hypersonic, magnetized, large Reynolds number plasma flows, using exploding z-pinch wire arrays on the MAGPIE facility (1.4 MA peak current, 250 ns rise time). Plasma flows from adjacent wire cores expand and generate oblique shock structures, resulting in modulation of the plasma flow. These flows collide with inductive probes placed in the flow, which serve both as the obstacles that generate the magnetized bow shocks, and as diagnostics of the advected magnetic field. The oblique shocks, which are represented by oblique discontinuities in electron density, are observed to exhibit hollow density profiles. A detached bow shock forms ahead of the probe and exhibits a fully 3D structure, with a larger opening angle in the plane parallel to the magnetic field than in the plane normal to it. We use the shock Mach angle to determine the upstream Mach number (5 − 8) of the flow. We also introduce a novel technique to estimate the flow velocity and temperature of pulsed-power driven plasmas, via simultaneous imaging of inductive probes and measurement of the inductive probe signal. The velocity and temperature estimated using this method are consistent with values reported in the literature. Experimental results are compared with full 3D simulations performed using the resistive MHD code GORGON, and synthetic Thomson scattering spectra are generated, which form the basis of future experiments.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiated Noise Assessment of Shipboard Systems Using Vibration Analysis</title>
<link href="https://hdl.handle.net/1721.1/143219" rel="alternate"/>
<author>
<name>Elatov, David</name>
</author>
<id>https://hdl.handle.net/1721.1/143219</id>
<updated>2022-06-16T03:54:36Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Radiated Noise Assessment of Shipboard Systems Using Vibration Analysis
Elatov, David
Since the emergence of acoustic warfare, especially in modern times, noise regulation of ships has become a concern, leading to ships being built and tested with rigorous and periodic measures to ensure their noise signatures are minimal. Simultaneously, predictive maintenance and load monitoring systems are installed on ships for better resource management. This work merges both worlds, using the predictive maintenance system to predict the radiated noise due to vibrations from the ship’s systems.&#13;
&#13;
First, a survey of the field yielded two plausible theoretical models that could help predict vibro-acoustic transmissions in complex systems: Finite Element Analysis and Statistical Energy Analysis. Those methods were implemented on a simple metal cabinet with limited success; however, a frequency-gain model constructed from a set of planned experiments performed with reasonable accuracy.&#13;
&#13;
Later on, the experiment-based model construction method was applied to a test ship to predict its frequency-gain model for different shipboard systems. This method did not yield good accuracy; however, different data analysis tools such as Recurrent Neural Networks helped improve prediction accuracy. Finally, this work suggests future directions to follow, based on the experience gathered from the research.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New Technology Architecture and Strategy for Early Crop Disease Detection</title>
<link href="https://hdl.handle.net/1721.1/143218" rel="alternate"/>
<author>
<name>Robinson, Maxwell T.</name>
</author>
<id>https://hdl.handle.net/1721.1/143218</id>
<updated>2022-06-16T03:30:54Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">New Technology Architecture and Strategy for Early Crop Disease Detection
Robinson, Maxwell T.
New approaches are required to meet the challenge of devastating disease outbreaks in global agriculture.  In particular, systemic, whole-plant methods of early crop disease detection are needed to effectively manage crop diseases that have long asymptomatic periods and localized infection, such as the devastating citrus disease, Huanglongbing (HLB).  In this thesis, we use systems thinking to develop new crop disease detection technology that diagnoses disease by sensing plant-released volatile organic compounds (VOCs). We then recommend a strategy for deployment of this detection technology.  First, we demonstrate a new VOC sensor architecture—humidity-initiated gas (HIG) sensors.  HIG sensors employ water drawn from a humid environment to sense VOCs at low concentrations.  We construct and evaluate two HIG sensor variants—Type I and Type II—and find that HIG sensors, particularly Type II sensors, address key requirements for field detection of crop disease.  Type II sensors achieve &lt;2 min response time and a 5 ppb limit of detection for geranyl acetone (GA), a VOC downregulated during asymptomatic HLB. Second, we formulate and model early detection technology architectures that incorporate VOC sensors.  We then select and prototype a preferred detection technology architecture, and find that this prototype successfully detects and distinguishes between different plant VOC profiles. The preferred architecture incorporates VOC sensors, gas chromatography, and a pre-concentrator, and is estimated to provide a &lt;1 ppt limit of detection for GA.  Finally, we create a decision model for flexible deployment of crop disease detection technology under uncertainty in crop disease characteristics, including time of outbreak, initial outbreak magnitude, and contact rate.  We use the model to select preferred decision rules for detection technology deployment that balance value of detection, cost of deployment, and feasibility for the particular case of HLB in California.  Under severe HLB conditions, we estimate our preferred decision rules paired with existing mitigation methods will yield $83M in value to the California citrus industry.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying tradespace exploration methods to remote sensing system of systems for wildfire detection and management</title>
<link href="https://hdl.handle.net/1721.1/143217" rel="alternate"/>
<author>
<name>Madhivanan, Gautam</name>
</author>
<id>https://hdl.handle.net/1721.1/143217</id>
<updated>2022-06-16T03:52:20Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Applying tradespace exploration methods to remote sensing system of systems for wildfire detection and management
Madhivanan, Gautam
In recent years the world has seen wildfires cause increasing damage and take more human lives. Studies have shown that this is a result of climate change, and the trend is expected to worsen over time. Currently, firefighting teams have limited access to technologies that could help them reduce the damage of wildfires. Both the firefighting and fire science communities have shown interest in using drones and satellites to help with fire detection and fire management efforts. Drones and satellites each involve various tradeoffs when used for remote sensing applications. To determine which combinations of sensors would best suit the needs of the firefighting and fire science communities, this project conducts a trade study in which the relative utilities and costs of the combined systems can be compared. Four phases of candidate system operation were identified: fire prediction, fire detection, fire monitoring, and fire damage assessment. Each of these operational phases was used to determine the overall utility of the candidate system. Additionally, the Camp Fire (California, 2018) was used as a reference fire to evaluate utility during each of these phases. It was found that a system of systems composed of multiple geosynchronous satellites and aircraft would be the optimal system, at a cost of 518 million USD. This system would use the geosynchronous satellites primarily for prediction, detection, and damage assessment, while the aircraft would primarily be used for monitoring active fires. While this study focused on supporting one large fire in the western U.S., additional satellites or aircraft could be added as needed to support additional areas.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Symbolic Communication</title>
<link href="https://hdl.handle.net/1721.1/143215" rel="alternate"/>
<author>
<name>Cheng, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/143215</id>
<updated>2022-06-16T03:24:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Understanding Symbolic Communication
Cheng, Emily
We quantitatively study the emergence of symbolic communication in humans with a communication game that attempts to recapitulate an essential step in the development of human language: the emergence of shared abstract symbols in order to accomplish complex tasks. In our experimental setup, a teacher must communicate an abstract notion, a formula in first order logic rendered to them in natural language, to a student. Subjects do so through a narrow channel that deprives them of common shared symbols: they cannot see or speak to one another and must only communicate via the motions of cars in a computer game. We observe that subjects spontaneously develop a shared vocabulary of car motions for task-specific concepts, such as “square” and “forall”, as well as for task-agnostic concepts such as “your turn”. We find that symbols are harder to establish than icons and indices, and that systematically, indices develop before icons, which develop before symbols. We characterize the conditions under which indices, icons, and symbols arise, and identify communicating in ambiguous game environments as the primary pressure for icon and symbol development.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Data-Centric to Citizen-Centric Architecture: Architecting a Future State for Open Data in the Government of Puerto Rico</title>
<link href="https://hdl.handle.net/1721.1/143214" rel="alternate"/>
<author>
<name>Figueroa-Rodriguez, Nestor Victor Leonardo</name>
</author>
<id>https://hdl.handle.net/1721.1/143214</id>
<updated>2022-06-16T03:03:17Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">From Data-Centric to Citizen-Centric Architecture: Architecting a Future State for Open Data in the Government of Puerto Rico
Figueroa-Rodriguez, Nestor Victor Leonardo
Government institutions throughout the world have been working on transparency initiatives to build accountability with their constituents.  One of these critical initiatives is Open Data.  The primary objective of Open Data is to create transparency in government by liberating data sets stored in systems and databases under the custody of public agencies.  Historically, this data has been challenging to find and often does not make it to the public domain.  When it does, it is not easy for ordinary citizens to make it useful. &#13;
&#13;
Governments have made strides to liberate these data sets.  Architecturally, governments have been developing centralized systems that extract the data from many agencies to transform, store, and publish on a website for public access.  However, these enterprise data architectures focus on data liberation and not on citizen value through relevant and contextual insights creation.  In other words, instead of being citizen-centric, these architectures are data-centric.  Since the objective is to liberate data, the full intended benefit is not realized.  It creates an unintended effect: data inequality.  Only people with data skills and corporations with resources extract the value and benefit of these data sets, while ordinary citizens cannot.  &#13;
&#13;
Puerto Rico, a US territory with a population of 3.1 million, is no exception.  In 2019, the Government of Puerto Rico passed Law 121, or Ley de Datos Abiertos.  Its objective is to publish data sets generated through the interaction of citizens with government agencies.  Today, there are 98 data sets published through the Puerto Rico Statistics Institute (PRSI).  However, the “as-is” architecture, which complies with the current law, publishes data sets in a format usable by only a few skilled citizens. It requires data exploration, analysis, visualization, and interpretation before key insights can be extracted and follow-up actions motivated.  It is not realistic to expect this type of work from ordinary citizens.    &#13;
&#13;
This thesis proposes a “to-be” Open Data architecture for the Government of Puerto Rico that extends the value and benefit of any published data set.  The “to-be” architecture transforms the current data formats into meaningful and actionable insights while minimizing a possible data inequality issue.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety in Hospital Medication Administration Applying STAMP Processes</title>
<link href="https://hdl.handle.net/1721.1/143213" rel="alternate"/>
<author>
<name>Baker, Elizabeth White</name>
</author>
<id>https://hdl.handle.net/1721.1/143213</id>
<updated>2022-06-16T03:41:08Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Safety in Hospital Medication Administration Applying STAMP Processes
Baker, Elizabeth White
Repeated application of root cause analysis techniques has not led to significant hospital medication administration safety improvements. The healthcare industry has begun to draw on scientific approaches to safety from outside traditional medical fields, including human factors engineering and systems design. This thesis lays the foundation to advance quality hospital healthcare for patients and providers by reducing hospital medication errors and enhancing hospital safety practices using STAMP techniques.&#13;
&#13;
A CAST analysis is performed for a frequently occurring hospital medication administration error to demonstrate the power of avoiding future losses through causal analysis based on systems theory compared to root cause analysis techniques. An STPA hazard analysis for hospital medication administration is also performed. The current hospital safety management system is analyzed, highlighting gaps where applying STAMP analysis to the hospital organization structure would enhance the safety within the hospital organization at large. Potential future directions in healthcare safety engineering are discussed.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Third Teacher: Architecture as enabler of Active Learning</title>
<link href="https://hdl.handle.net/1721.1/143211" rel="alternate"/>
<author>
<name>Tam, Carolyn</name>
</author>
<id>https://hdl.handle.net/1721.1/143211</id>
<updated>2022-06-16T03:29:01Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">The Third Teacher: Architecture as enabler of Active Learning
Tam, Carolyn
In the industrial age, schools were designed as tightly controlled environments to instill the discipline and conformity needed to thrive in a machine era. Today, as architectural education evolves its mission away from manufacturing architects and towards producing creative contributors, the buildings that house education’s mission have remained stagnant - our learning environments are still the passive, utilitarian designs of the factory model, reinforcing the unhelpful boundaries between space and active learning.&#13;
&#13;
This thesis challenges the manner in which architectural education works in pedagogy through the built form. Rather than fixing the same batch of learners in a rigid container, this thesis proposes a series of deployable systems that can adapt to various urban conditions to form dispersed learning environments. Learning is not separated from daily life - it could occur in a park, a vacant lot, or in the most unexpected of spaces – fostering diverse modes of learning and greater creative possibilities.&#13;
&#13;
A key concept in active learning, which can extend to architecture, is wilderness education, where students are taken outside the classroom and use full-scale tools to create, play and test boundaries within their environments. This thesis asks: what if the learning opportunities found in these instruments could be extended to architecture? Architecture can be structural and systematic - but at the same time playful and engaging - and cross many disciplines, from geometry and surveying to physics and structure. Instructors and books are no longer the only teachers: the hands, the ears, the eyes, in fact, the whole body and the architectural space itself, become sources of information. Viewing students as active constructors of knowledge, the proposed architecture encourages students to use full-scale instruments and their context to engage with haptic, real-world learning experiences.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring The Impact of Simulated Transfer of Sensory Experience on Social Behavior and Empathy</title>
<link href="https://hdl.handle.net/1721.1/143208" rel="alternate"/>
<author>
<name>Morris, Caitlin</name>
</author>
<id>https://hdl.handle.net/1721.1/143208</id>
<updated>2022-06-16T03:25:50Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Exploring The Impact of Simulated Transfer of Sensory Experience on Social Behavior and Empathy
Morris, Caitlin
Our social behavior and interpersonal actions, often seen as rational and controlled processes, are influenced significantly by many subconscious factors including low-level sensory perception. Many of our emotions, biases, and impulses are driven by the information we perceive from the world around us and within our own bodies. In this thesis, I propose methods for simulating the transfer of elements of first-person sensory experience between individuals, particularly interoceptive or internal-sensing experiences which are typically felt within an individual’s body and not communicated externally. Because interoceptive signals are not typically communicated between people, they may be able to avoid the uncanny valley of attempted mimicry of existing nonverbal communication elements, while offering the benefits of an innately close link to emotional perception and affect. The goal of this research is to begin exploring the impact of novel tools for perception-sharing on interpersonal understanding, relationships, and the general field of social perception.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image Classification with Consistent Supporting Evidence</title>
<link href="https://hdl.handle.net/1721.1/143207" rel="alternate"/>
<author>
<name>Wang, Peiqi (Electrical engineer and computer scientist)</name>
</author>
<id>https://hdl.handle.net/1721.1/143207</id>
<updated>2026-01-16T14:50:26Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Image Classification with Consistent Supporting Evidence
Wang, Peiqi (Electrical engineer and computer scientist)
Adoption of machine learning models in healthcare requires end users’ trust in the system. Models that provide additional supportive evidence for their predictions promise to facilitate adoption. We define consistent evidence to be both compatible and sufficient with respect to model predictions. We propose measures of model inconsistency and regularizers that promote more consistent evidence. We demonstrate our ideas in the context of edema severity grading from chest radiographs. We demonstrate empirically that consistent models provide competitive performance while supporting interpretation.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rebuilding the Edge: The Case of the Sulmona–Carpinone Railway and the Town of Pettorano sul Gizio</title>
<link href="https://hdl.handle.net/1721.1/143204" rel="alternate"/>
<author>
<name>D'Agostino, Ginevra</name>
</author>
<id>https://hdl.handle.net/1721.1/143204</id>
<updated>2022-06-16T03:55:30Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Rebuilding the Edge: The Case of the Sulmona–Carpinone Railway and the Town of Pettorano sul Gizio
D'Agostino, Ginevra
Rebuilding the Edge takes as its point of departure a social reality that directly impacts the built environment: the depopulation of small centers in Italy over the last century and its consequences for citizens, and the country at large. The thesis examines how to look at the depopulation of inner and southern areas of Italy, by exploring the interrelations between three distinct components of architecture: its methodologies of research, its social responsibility and its design process. Rebuilding the Edge investigates how architecture can make a contribution to issues usually tackled by politicians, policymakers, economists and engineers.&#13;
&#13;
This project applies GIS mapping and photogrammetric tools to register the rural realities along an abandoned rail line in Central Italy. It interprets available data at the territorial scale, and generates original data more granularly through the use of contemporary technologies. Combined with stakeholder interviews, and policy framework analyses, this sets the stage to generate considerations regarding architecture’s role in this context.&#13;
&#13;
From a disciplinary perspective, the thesis proposes that architecture has a relevant role in the articulation and resolution of larger initiatives that seek to address the challenges faced by towns across Italy. It does not cast architecture as a ‘savior’, but rather concludes that architecture must operate in the company of other fields with unique forms of expertise.&#13;
&#13;
Rebuilding the Edge employs this research methodology and disciplinary reflections to test the impact that they may have on the design process. The outcome is a proposal for a building and piece of infrastructure that connects with efforts at the regional scale. By offering a carefully considered vision for a train station in one single town in the Italian Apennines, this thesis uses architecture as the last-mile solution that tends to make, or break, the success of nation-wide infrastructural investments.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formation of RAAN-Spread CubeSat Constellations Utilizing Onboard Low-Thrust Propulsion</title>
<link href="https://hdl.handle.net/1721.1/143203" rel="alternate"/>
<author>
<name>Gagnon, Amelia T.</name>
</author>
<id>https://hdl.handle.net/1721.1/143203</id>
<updated>2022-06-16T03:00:55Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Formation of RAAN-Spread CubeSat Constellations Utilizing Onboard Low-Thrust Propulsion
Gagnon, Amelia T.
Internal gravity waves are atmospheric waves caused by the interaction of air masses with differing buoyancy. This instability can cause weather phenomena as the wave travels vertically and horizontally over large distances. To characterize these internal gravity waves, radio occultation sampling must occur within a timeframe and distance that allow for adequate capture of the horizontal wave vector.&#13;
&#13;
Receivers used in radio occultation have been successfully flown on CubeSats. A CubeSat constellation of satellites in the same orbital plane with varying right ascension of the ascending node (RAAN) will satisfy the internal gravity wave sampling requirements. Called a RAAN-spread constellation, the separation in argument of latitude of the clusters and the separation in RAAN will allow samples to be captured with adequate resolution in the horizontal directions. This work focuses on 2-2-200 and 3-3-300 RAAN-spread constellations, composed of 2 clusters of 2 satellites and 3 clusters of 3 satellites, where clusters are spread by 200 s and 300 s, respectively. &#13;
&#13;
Two maneuvers are required to form this constellation from an assumed deployment orbit. To conserve propellant during constellation formation, differences in RAAN nodal precession and in argument of latitude angular rates are used to spread the CubeSat orbits. Maneuver 1 is an orbit-raising maneuver, in which argument of latitude is spread by having satellites drift at different altitudes to obtain the needed spacing before rejoining at a common altitude to lock in the final argument of latitude spacing and prevent further drift. Maneuver 2 is an inclination-changing maneuver in which RAAN separation is induced by differences in nodal precession for satellites at different inclinations. When the appropriate RAAN spacing is achieved, the satellites rejoin at the same inclination to stop further RAAN drift. &#13;
&#13;
The thrust durations and drift durations are calculated analytically for these maneuvers for three thrust cases: first, to estimate the fastest formation time and associated change in velocity (deltaV); second, to estimate a case using half of the maximum deltaV; and third, a thrust case with a longer drift duration. Analytical thrust durations and drift times are subsequently simulated with high-fidelity orbital propagation software called Systems Tool Kit (STK). This orbital propagator takes into account orbital perturbations including atmospheric drag, solar radiation pressure, and Earth-oblateness terms including J2.&#13;
&#13;
Simulation results generally agree with the analytical time and deltaV estimates: formation can occur within 45 days for all constellation configurations examined. Additionally, simulation data show that a RAAN-spread CubeSat constellation with 2 satellites per cluster at a high starting inclination makes the most efficient use of deltaV among the maneuvers explored. Results for a 2-2-200 constellation in a sun-synchronous orbit show that the constellation can be formed in 17 days with 50.471 m/s of deltaV. Propellant left over from the constellation formation will be used to maintain the constellation over time and increase orbital lifetime. The work in this thesis is also applicable to maneuver planning for other similar constellations.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Better Internet</title>
<link href="https://hdl.handle.net/1721.1/143202" rel="alternate"/>
<author>
<name>Humayun, Zain</name>
</author>
<id>https://hdl.handle.net/1721.1/143202</id>
<updated>2022-08-09T20:19:40Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Building a Better Internet
Humayun, Zain
In the summer of 2016, dozens of computer scientists gathered at a former church in San Francisco. For a long time, they had all been worrying about the same thing: the future of the Internet. &#13;
&#13;
Some of those present were dissatisfied with an online world where websites could go down and the information on them be lost forever; others were alarmed by an Internet dominated by a few powerful tech companies, running opaque algorithms and surveilling millions of people. Some believed that the Internet’s nuts and bolts presented easy targets to hackers and enemy states, while others were more concerned about countries censoring the Internet within their own borders.&#13;
&#13;
Those issues seemed unrelated, but each reflected a part of the Internet where control was becoming concentrated in the hands of a few companies, governments, or infrastructural services: an overarching trend that computer scientists refer to as centralization.&#13;
&#13;
The meeting in California was a call to arms for those trying to organize a countermovement: a mission to save the Internet, to build a better one, and to realize many of its early, unfulfilled promises. In the five years since, has the movement to decentralize the Internet come any closer to success?
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accessible AI That’s Out of This World: Globalizing AI Literacy through Problem-Based Learning and Deep Learning Models in a Low Code Environment</title>
<link href="https://hdl.handle.net/1721.1/143201" rel="alternate"/>
<author>
<name>Harkavy, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/143201</id>
<updated>2022-06-16T03:47:07Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Accessible AI That’s Out of This World: Globalizing AI Literacy through Problem-Based Learning and Deep Learning Models in a Low Code Environment
Harkavy, Elizabeth
From phones, to advertisements, to search engines, AI is a constant presence in daily life. As AI continues to permeate our everyday behaviors, products, and relationships with the world, an urgent need for equitable, effective AI education emerges. While existing research has shown the importance of democratizing access to AI, this effort has been predominantly geared toward WEIRD (Western, Educated, Industrialized, Rich, Democratic) populations, which leads to biased results and false generalizations.&#13;
&#13;
With the goal of helping students engage meaningfully and safely with AI, in this thesis I implement tooling to allow for interaction with complex Natural Language Processing models in a low-code environment, design a curriculum for a problem-based approach to teaching AI, run a series of workshops with students from WEIRD (United States) and non-WEIRD (India) countries, and analyze the results.&#13;
&#13;
The research showed students’ confidence and demonstrated ability grew significantly after the workshops; students were able to demonstrate key AI literacy skills, build complex technology projects, and leverage AI to come up with original and creative solutions to specific, real-world problems. Students’ perceptions of Conversational AI agents became more positive after the workshops and notably, their trust in AI increased. When comparing students from WEIRD and Non-WEIRD countries, Non-WEIRD students were less critical of technology than their WEIRD counterparts and believed AI would be used to complete tasks for humans rather than with them.&#13;
&#13;
Overall, this work showcases the possibility for gaining AI understanding and literacy skills through a problem-based approach to AI education in conjunction with hands-on tooling, as well as highlights the importance of expanding research to include populations beyond WEIRD demographics.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-Autonomy Teaming for Improved Diver Navigation</title>
<link href="https://hdl.handle.net/1721.1/143198" rel="alternate"/>
<author>
<name>Pelletier, Jesse</name>
</author>
<id>https://hdl.handle.net/1721.1/143198</id>
<updated>2022-06-16T03:10:36Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Human-Autonomy Teaming for Improved Diver Navigation
Pelletier, Jesse
Diving operations are inherently complex due to navigation and communication limitations. Until recently, fixed-beacon acoustic localization techniques have served as the primary means of improving diver navigation. However, modern artificial intelligence and acoustic modem technologies have enabled accurate relative navigation methods between a diver and an autonomous vehicle.&#13;
&#13;
Human-robot collaboration takes advantage of each member’s strengths to create the most effective team. This concept proves especially advantageous within the ocean domain, where humans are naturally deficient navigators. Yet humans serve as the team’s creative spirit, offering the critical thinking and flexibility needed to succeed in an unpredictable and dynamic environment. Recent underwater human-robot cooperative navigation systems typically rely on autonomous surface vehicles (ASVs), specially designed underwater vehicles, or stereo cameras. This thesis proposes a diver navigation method exhibiting significantly improved accuracy over dead reckoning without relying on a surface presence, cameras, or fixed acoustic beacons. Specifically, we develop and evaluate the communication architecture and autonomous behaviors required to guide a diver to a target location using subsurface human-autonomous underwater vehicle (AUV) teaming with no requirement for ocean current data or exact diver speeds. By depending on acoustic communication and commercial AUV navigation capabilities, our method has increased accessibility, applicability, and robustness over former techniques.&#13;
&#13;
We utilize the Woods Hole Oceanographic Institution (WHOI) Micromodem 2’s two-way-travel-time (TWTT) capability to enable range-only single-beacon navigation between two kayaks serving as proxies for the diver and Remote Environmental Monitoring Units (REMUS) 100 AUV. During processing, a nonlinear least-squares (NLS) method, called incremental smoothing and mapping 2 (iSAM2), utilizes odometry and range measurements to provide real-time diver position estimates given unknown ocean currents. Field experiments demonstrate an average online endpoint error of 4.53 meters after transits four hundred meters long. Additionally, simulations test our method’s performance in more challenging situations than those experienced in the field. Overall, this research progresses the interoperability of divers and AUVs.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Potential Field Method for Cooperative Range-Only Localization in Multi-Robot Networks</title>
<link href="https://hdl.handle.net/1721.1/143197" rel="alternate"/>
<author>
<name>Thumma, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/143197</id>
<updated>2022-06-16T03:24:06Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Distributed Potential Field Method for Cooperative Range-Only Localization in Multi-Robot Networks
Thumma, Nicole
The quality of localization for AUVs using inter-agent ranging measurements is heavily dependent on the configuration of the agents. This work introduces a potential field planner with an E-optimality force to minimize the impact of network configuration on localization when navigating between waypoints in pre-determined trajectories. For this work, the trajectories are generated by Localization-Constrained Graph Planning (LCGP) as the basis for path planning. This system takes waypoints and improves the localizability of the timesteps between the waypoints, relative to going directly from each waypoint to the next. A total of 39 tests were conducted with four weight configurations and two noise scenarios across three environments. These tests indicated that the inclusion of an E-optimality force lowered the average localization error, as compared to the baseline, by up to 56% in the simpler environments, but failed to make a significant difference in a more complex environment.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reductions of ReLU neural networks to linear neural networks and their applications</title>
<link href="https://hdl.handle.net/1721.1/143196" rel="alternate"/>
<author>
<name>Le, Thien</name>
</author>
<id>https://hdl.handle.net/1721.1/143196</id>
<updated>2022-06-16T03:07:46Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Reductions of ReLU neural networks to linear neural networks and their applications
Le, Thien
Deep neural networks are the main subject of interest in the study of theoretical deep learning, which aims to rigorously explain the incredible performance of these function classes in practice. Although much is understood about deep linear networks (neural networks with all linear activations), nonlinearities in the activation make it very challenging to extend existing techniques to more realistic types of neural networks. In this thesis, we describe reductions of ReLU neural networks to linear neural networks under various general conditions on network architectures, loss functions, and datasets. When such conditions are met, one can adapt techniques used in the theory of linear neural networks to study ReLU neural networks. To this end, we provide two applications in which we put the reduction to use: the first characterizes an implicit regularization behavior of ReLU neural networks trained with gradient descent, and the second characterizes their convergence under gradient-based algorithms.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interactive Approach to Generating SQL Queries from Natural Language</title>
<link href="https://hdl.handle.net/1721.1/143192" rel="alternate"/>
<author>
<name>Durvasula, Ramya</name>
</author>
<id>https://hdl.handle.net/1721.1/143192</id>
<updated>2022-06-16T03:25:12Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">An Interactive Approach to Generating SQL Queries from Natural Language
Durvasula, Ramya
In this thesis, we contribute nalini, a natural-language-based interactive interface for SQL query generation. Motivated by the lack of usability of existing systems, nalini was built with the intention of being used for complex query generation. The interface allows users to combine natural language and mathematical operations with minimal structure. We evaluated nalini in a first-use study with five participants, who were asked to generate queries from the TPC-H decision support benchmark. Our study showed that users were able to use nalini to generate complex queries, and it points to promising areas of future work.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Network Lateral Movements through the CyberBattleSim Web Platform</title>
<link href="https://hdl.handle.net/1721.1/143191" rel="alternate"/>
<author>
<name>Esteban, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/143191</id>
<updated>2022-06-16T03:01:39Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Simulating Network Lateral Movements through the CyberBattleSim Web Platform
Esteban, Jonathan
Modern cyber attacks demand immediate action plans based on an overwhelming amount of information and options. Microsoft has made available a highly parameterizable model of enterprise networks with the capability of simulating automated cyber attacks. We provide an extension of this project by means of a web platform. The platform allows a user to model an enterprise network topology, interact with the topology manually, and simulate an automated adversarial agent. Leveraging the CyberBattleSim toolkit, we enable the swift prototyping of different network configurations that can then be analyzed by a defensive security team member, either manually or automatically through the automated agent. We demonstrate that the platform can simulate any network topology supported by CyberBattleSim as well as evaluate different Q-Learning strategies. This in turn can provide us with valuable insight regarding the progression of cyber attacks, aiding us in generating appropriate cyber-attack response plans.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Urban Air Mobility Supply</title>
<link href="https://hdl.handle.net/1721.1/143190" rel="alternate"/>
<author>
<name>Yoo, Lisa Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/143190</id>
<updated>2022-06-16T03:03:24Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Simulating Urban Air Mobility Supply
Yoo, Lisa Y.
Urban air mobility (UAM) is a relatively new concept in the transportation industry. As on-demand services like Uber and Lyft have transformed our daily lives, our objective is to explore how on-demand UAM impacts mobility patterns by modeling the supply side of such a service within a realistic, high-fidelity simulation. We present a design and implementation of UAM within SimMobility, a multi-scale, multi-modal activity- and agent-based simulation software developed in the MIT Intelligent Transportation Systems (ITS) Lab. This includes a network of vertiports, a fleet of UAM aircraft, and controller logic to accommodate passenger requests and control the fleet. We also implement novel service features including priority landing, stand designation, and matching algorithm customization through parameterized buffer times. Explicit models simulating key characteristics of UAM services, supported by a comprehensive review of the underlying literature, have enabled us to develop a uniquely realistic simulation consistent with state-of-the-art technological developments as well as the current urban landscape. The contribution of this thesis is twofold: first, the realistic simulation of UAM supply as described, and second, a replicable architecture that can be emulated for future SimMobility mobility service controllers.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of cybersecure observers and control in microgrids; energy dynamics based approach</title>
<link href="https://hdl.handle.net/1721.1/143189" rel="alternate"/>
<author>
<name>Rowles, Premila A.</name>
</author>
<id>https://hdl.handle.net/1721.1/143189</id>
<updated>2022-06-16T03:50:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Implementation of cybersecure observers and control in microgrids; energy dynamics based approach
Rowles, Premila A.
Today's microgrids are modeled as dynamical systems with multiple interacting physical components. Microgrids are essentially small blocks that make up the larger energy grid, and they are independent, meaning they can function separately and autonomously from the larger grid. These microgrids need to stabilize and produce the appropriate amount of power for their loads. In this thesis we adopt energy modeling for control and review the rationale for claims that such an approach can achieve these goals. There are numerous types of control designs that can be used in these systems; a few examples include feedback linearizing control, conventional PID control, and energy control. This thesis discusses these types of control and shows examples of each one in simulation. The examples are modeled and simulated using two main software tools: CAMPS (a MATLAB-based Centralized Automated Modeling of Power Systems tool) and Simulink (an existing MATLAB tool). This thesis particularly emphasizes the implementation of these different control designs and their tradeoffs. Each control design is used in an example in either CAMPS or Simulink, and the microgrids are probed at multiple points to compare results. Additionally, there are multiple ways to implement each control design; the tradeoffs of the different methods are discussed. Energy control is a novel technique used in microgrids; this thesis focuses on new implementation techniques for energy control using derivations from prior work in the field. Finally, an observer is introduced to support energy control so that control does not malfunction even when measurements are tampered with. A first proof-of-concept simulation is provided to show this cyber-secure combination of energy control and energy observer for microgrids.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding the Capabilities of Dynamic Robotic Systems</title>
<link href="https://hdl.handle.net/1721.1/143188" rel="alternate"/>
<author>
<name>Stanger-Jones, Elijah B.</name>
</author>
<id>https://hdl.handle.net/1721.1/143188</id>
<updated>2022-06-16T03:13:54Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Expanding the Capabilities of Dynamic Robotic Systems
Stanger-Jones, Elijah B.
For robotics research to reach its full potential, the hardware platforms we use will have to be pushed to their physical limits. Building systems that can reach these limits and that are robust and reliable requires careful engineering optimization across the entire design process. This thesis documents the process of designing modular systems to achieve these goals in a variety of robotic applications: in particular, the integration and testing of new actuators, the design of new compute and power systems, and a GaN-based three-phase inverter. New platforms including manipulators, humanoids, and quadrupeds developed with these systems are presented, and initial results with new control architectures that push the systems to the limit are shown.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Processing Methods for the Detection of Landmark Acoustic Cues</title>
<link href="https://hdl.handle.net/1721.1/143187" rel="alternate"/>
<author>
<name>Shi, Belinda</name>
</author>
<id>https://hdl.handle.net/1721.1/143187</id>
<updated>2022-06-16T03:36:43Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Processing Methods for the Detection of Landmark Acoustic Cues
Shi, Belinda
This paper presents work on an aspect of a new speech analysis system for lexical access based on the concept of individual acoustic cues in the speech signal, such as landmarks: abrupt changes in the spectrum due to articulatory events associated with vowels and consonants. It provides an organized process that can easily be repeated and modified to create an accurate and efficient detection module for landmark cues in speech files. The paper begins by examining patterns in the speech signal that may indicate the presence of vowel landmark cues, before proposing an algorithm that can predict the locations of vowel landmarks based on these observations. It then maps out a generalized system of steps for constructing modules that detect landmark acoustic cues: extracting speech-related measurements, processing them to accentuate certain characteristics, and then using both speech production knowledge and mathematical analysis to determine which measurements are good indicators of certain acoustic cues. Finally, Gaussian Mixture Models using the selected raw and processed measurements are trained to efficiently and accurately distinguish landmark cues. These steps are applied to Vowel and Glide landmarks to develop a module that can distinguish them from other landmark cues in a speech signal. Development of this module provides a critical step toward a cue-based speech recognition system that can model human speech perception.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agent-Based Approach to Simulating Mobility as a Service</title>
<link href="https://hdl.handle.net/1721.1/143186" rel="alternate"/>
<author>
<name>Li, David D.</name>
</author>
<id>https://hdl.handle.net/1721.1/143186</id>
<updated>2022-06-16T03:28:18Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Agent-Based Approach to Simulating Mobility as a Service
Li, David D.
As a result of the changing transportation landscape, Mobility-as-a-Service (MaaS) was developed to be a streamlined operator of emerging on-demand transportation services and traditional modes of transport. However, much of MaaS’s impact on end-users’ activities and travel patterns remains unknown and requires further investigation. Due to its complex nature, a tool is necessary to help us reliably quantify and evaluate the broader impacts of MaaS. To this end, we introduce MaaS into the activity-based, agent-based travel simulation platform SimMobility. Prioritizing flexibility and compatibility with different cities, we provide a generic implementation on which users can define configurations according to desired scenarios.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Audio Segmenting and Natural Language Processing in Oral History Archiving</title>
<link href="https://hdl.handle.net/1721.1/143185" rel="alternate"/>
<author>
<name>Rieping, Holly Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/143185</id>
<updated>2022-06-16T03:38:48Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Audio Segmenting and Natural Language Processing in Oral History Archiving
Rieping, Holly Anne
Traditional archives preserve physical historical records, documents, artifacts, and other materials, and tell a story of some historical significance. As the digital age progresses, digital archives have become more commonplace and have given the general public wider access to archival resources and knowledge. With wider access, historically marginalized groups now have the means to share stories that have typically been excluded from the dominant discourse. As a result, we are faced with both the challenge and the opportunity to tell and preserve stories from these groups and foreground diverse voices in these digital archives. We are also faced with an abundance of materials, both digitized and born digital, to use in an archive; various computational methods can assist in the curatorial process by organizing materials or finding connections between them that would otherwise take an archivist hundreds of hours.&#13;
&#13;
Using materials from the MIT Black Oral History Project, this thesis first explores ways to process digitized audio interviews through audio segmentation, using techniques including silence detection and speaker diarization, with the goal of creating a more flexible way to explore interviews in a digital oral history archive. Second, this thesis uses named entity recognition to experiment with metadata extraction for an archive. Next, this thesis explores ways to discover connections between segments of interviews by using topic modeling with LDA and LSI and topic classification using machine learning methods to identify topics, similarities, and dissimilarities across interviews. Finally, this thesis discusses how these computational methods may enhance the telling of diverse stories in digital oral history archives.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stubborn: A Strong Baseline for the Indoor Object Navigation Task</title>
<link href="https://hdl.handle.net/1721.1/143182" rel="alternate"/>
<author>
<name>Luo, Haokuan</name>
</author>
<id>https://hdl.handle.net/1721.1/143182</id>
<updated>2022-06-16T03:34:27Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Stubborn: A Strong Baseline for the Indoor Object Navigation Task
Luo, Haokuan
This work studies the task of indoor object goal navigation, a widely studied task that requires the agent to navigate to an instance of a given object category in unseen indoor environments. Previous state-of-the-art methods for this task include map-free end-to-end learning-based methods and methods that maintain and plan with spatial maps, but both struggle to perform well. Experiments show that the primary reasons for failure are poor exploration, the agent getting trapped, and inaccurate object identification. For exploration strategy, we show that previous map-based methods fail to use semantic clues effectively and present a semantic-agnostic exploration strategy that proves to perform much better. For object identification, we show that using cumulative information across multiple frames leads to higher accuracy. We additionally present methods for decreasing the agent’s chance of getting stuck. The combination of this work leads to the winning entry on the leaderboard of the CVPR Habitat ObjectNav challenge.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PDDL.jl: An Extensible Interpreter and Compiler Interface for Fast and Flexible AI Planning</title>
<link href="https://hdl.handle.net/1721.1/143179" rel="alternate"/>
<author>
<name>Zhi-Xuan, Tan</name>
</author>
<id>https://hdl.handle.net/1721.1/143179</id>
<updated>2022-06-16T03:05:16Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">PDDL.jl: An Extensible Interpreter and Compiler Interface for Fast and Flexible AI Planning
Zhi-Xuan, Tan
The Planning Domain Definition Language (PDDL) is a formal specification language for symbolic planning problems and domains that is widely used by the AI planning community. However, most implementations of PDDL are closely tied to particular planning systems and algorithms, and are not designed for interoperability or modular use within larger AI systems. This limitation also makes it difficult to support extensions to PDDL without implementing a dedicated planner for that extension, inhibiting the generality and reach of automated planning.&#13;
&#13;
To address these limitations, we present PDDL.jl, an extensible interpreter and compiler interface for fast and flexible AI planning. PDDL.jl exposes the semantics of planning domains through a common interface for executing actions, querying state variables, and other basic operations used within AI planning applications. PDDL.jl also supports the extension of PDDL semantics (e.g. to stochastic and continuous domains), domain abstraction for generalized heuristic search (via abstract interpretation), and domain compilation for efficient planning, enabling speed and flexibility for PDDL and its many descendants. Collectively, these features allow PDDL.jl to serve as a general high-performance platform for AI applications and research programs that leverage the integration of symbolic planning with other AI technologies, such as neuro-symbolic reinforcement learning, probabilistic programming, and Bayesian inverse planning for value learning and goal inference.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Transformation in the Oil and Gas Industry: Challenges and Potential Solutions</title>
<link href="https://hdl.handle.net/1721.1/143178" rel="alternate"/>
<author>
<name>Prestidge, Kelsey L.</name>
</author>
<id>https://hdl.handle.net/1721.1/143178</id>
<updated>2022-06-16T03:02:02Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Digital Transformation in the Oil and Gas Industry: Challenges and Potential Solutions
Prestidge, Kelsey L.
The digital transformation is proving to be a significant force for change globally, and this is especially true in the oil and gas industry. Many in the industry struggle to understand what a digitally transformed future will look like. This thesis aims to explore the oil and gas digital transformation, its challenges, and potential solutions. Using this approach, organizations will be able to establish their current (or baseline) state, allowing them the opportunity to benchmark against what their future state will look like in the digital transformation environment. As oil and gas operators begin to contemplate the implications of their digital transformation and the coming changes to the upstream sector, all key stakeholders must engage closely with the ecosystem to adapt to the disruptive trends. In the digital age, oil and gas operators will also need to focus more on establishing partnerships that promote technology innovation not only within their own organizations but also within their business partners' organizations. Digital transformation will push the oil and gas industry out of its comfort zone, and oil and gas operators must prepare to embrace the discomfort to compete in the future.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pre-trained Language Models for Clinical Systematic Literature Reviews</title>
<link href="https://hdl.handle.net/1721.1/143177" rel="alternate"/>
<author>
<name>Ortiz, Juan M. Ochoa</name>
</author>
<id>https://hdl.handle.net/1721.1/143177</id>
<updated>2022-06-16T03:39:20Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Pre-trained Language Models for Clinical Systematic Literature Reviews
Ortiz, Juan M. Ochoa
Although systematic literature reviews play a critical role in clinical decision making, manual methods for information extraction can take prohibitively long. In this work, we first describe the construction of datasets in two distinct clinical domains containing randomized trials and observational studies. We then use these two datasets to benchmark the performance of Pre-trained Language Model (PLM)-based entity and relation extraction models, as well as the effect of domain-specific pre-training prior to fine-tuning. Our results show evidence of the effectiveness of pre-training with masked language modeling (MLM), a sentence-level proxy task, in boosting the performance of fine-tuned models on both inter- and intra-sentence information extraction tasks.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Active Structure Learning for Gaussian Process Probabilistic Programs</title>
<link href="https://hdl.handle.net/1721.1/143176" rel="alternate"/>
<author>
<name>Lin, Gloria Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/143176</id>
<updated>2022-06-16T03:27:43Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Bayesian Active Structure Learning for Gaussian Process Probabilistic Programs
Lin, Gloria Z.
What data should we gather to learn about the underlying structure of the world as quickly as possible, especially in cases where data is sparse or expensive to acquire? Structure learning techniques for Gaussian process (GP) probabilistic programs provide a rich framework for inferring qualitative structure in data. In this thesis, we improve the data-efficiency of probabilistic GP structure learning by extending it to the active learning setting. We present a sequential Monte Carlo algorithm for Bayesian active learning for GPs with a novel objective function, Kernel Information Gain (IG-K), to reduce uncertainty over model structure and parameters. As a baseline for comparison, we also formulate a second objective function, Predictive Information Gain (IG-P), that reduces uncertainty over the posterior predictive distribution.&#13;
&#13;
We empirically validate that active learning with our novel IG-K objective can infer the structure of synthetic datasets more accurately, using fewer datapoints, than active learning with IG-P. We also validate the underlying active learning inference algorithm using simulation-based calibration. Finally, we test our active learning algorithm on a real-world dataset with complex structure. Collectively, the results provide a deeper understanding of the benefits and limitations of active structure learning using Gaussian processes, revealing that an active selection strategy suited for inferring model structure and parameters may not be favorable for providing accurate predictions. These findings suggest directions for future active learning approaches that combine the IG-K and IG-P objectives, leveraging the advantages of each to efficiently discover structure in data and provide accurate predictions.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relaxation Dynamics of Photoexcited Carriers in Graphene</title>
<link href="https://hdl.handle.net/1721.1/143175" rel="alternate"/>
<author>
<name>Yeung, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/143175</id>
<updated>2022-06-16T03:51:42Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Relaxation Dynamics of Photoexcited Carriers in Graphene
Yeung, Matthew
Electronics are ubiquitous and essential for our daily lives. To improve upon existing technologies, new materials and their electronic properties need to be explored. The proposed mid-infrared spectroscopy experiments in this thesis place an emphasis on understanding the physical phenomena in graphene. Graphene on its own has many peculiar layer-dependent properties which make it an interesting material to study. The first chapter is dedicated to looking at some basic theories of graphene and its corresponding multilayers, showing some of its peculiar properties.&#13;
&#13;
In the second chapter, graphene device fabrication techniques and tools that were used throughout this thesis are described. The device fabrication plays a crucial role in understanding properties of graphene and without high-quality devices, some properties could not be observed as shown in the actual experiments done in chapter three.&#13;
&#13;
To date, there have not been many ultrafast studies using a mid-infrared probe for studying solid-state materials, let alone studies specifically examining the dynamics of photo-excited carriers in graphene. The third chapter therefore describes a home-built near-infrared pump and mid-infrared probe spectroscopy system designed with high temporal resolution to study photo-excited carriers in graphene and other novel materials. It is also shown that for low-mobility materials, the mid-infrared probe can be used to estimate mobility without the need for electrical contacts.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Program Synthesis with Symbolic Properties</title>
<link href="https://hdl.handle.net/1721.1/143172" rel="alternate"/>
<author>
<name>Sechopoulos, Theodoros</name>
</author>
<id>https://hdl.handle.net/1721.1/143172</id>
<updated>2022-06-16T03:49:51Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Program Synthesis with Symbolic Properties
Sechopoulos, Theodoros
Program synthesis is the task of automatically writing computer programs given a specification of their behavior. It is challenging due to the combinatorial nature of the search space. In the short term, improving program synthesis could make people vastly more productive by transforming how they communicate with computers. In the long term, it could bring us a step closer to understanding human intelligence and to building machines with human-like intelligence. In this work we discuss how symbolic properties (which are themselves programs) can improve program synthesis performance. Specifically, building on the formulation of properties in Odena and Sutton (2020), we present PropsimFit, a novel online synthesis algorithm that uses properties for program search, and show that it outperforms naive non-property baselines on the Rule (2020) list function dataset. Finally, we discuss future ways to use properties for synthesis based on the insights gained from PropsimFit and its limitations.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accurate-ish</title>
<link href="https://hdl.handle.net/1721.1/143170" rel="alternate"/>
<author>
<name>Moyers, Ruth Blair</name>
</author>
<id>https://hdl.handle.net/1721.1/143170</id>
<updated>2022-06-16T03:35:44Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Accurate-ish
Moyers, Ruth Blair
In the highly constructed landscapes of memory in the United States, existing practices of preservation and maintenance of the past are centered on the addition and expansion of what has value within the collective public memory, but a hesitation lingers around the notion of deconstruction, or the devaluation of historical places. In practice, acts of removal or reconstruction come from moments of rupture, rather than from continuous processes of reevaluation and change. In writing an operational epilogue, this thesis proposes a design of collapse, of reveal, that allows suppressed narratives to exist in ways that begin to leak into and overwhelm spaces of colonial history.&#13;
&#13;
The site, Colonial Williamsburg®, is a 301 acre open air “living-history” museum in Virginia. It is a destination for heritage tourism, located within a two-hour driving radius of Washington, D.C., Richmond and Charlottesville — all central sites of American historical mythology, and all of which have recently become sites of rupture (protest, rallies). The reconstruction of the colonial town in Williamsburg began as the passion project of a local clergyman, and was realized with the support of J.D. Rockefeller Jr. and others invested in its narratives. The goal of restoration was to bring history to life, but it also conveniently served in repairing the self-image of a place that was experiencing economic and cultural instability following the Great Depression and the end of Reconstruction in the American South. These plays for settler-colonial nostalgia led to a highly constructed and deeply amnesic experience designed by and for a singular audience to be easily dramatized and repeated.&#13;
&#13;
This constructed imaging of history has been retained in much of Colonial Williamsburg's® programming as a tourist destination with contemporary retail, hospitality and entertainment venues. And while time seems to be frozen in this place, there are already subtle insistences of difference in moments of landscaping, planning and construction. By proposing a series of canny and uncanny alterations to Colonial Williamsburg®, the seams of a place that holds “accuracy” at the center of its operations will start to come undone, not through a restorative nostalgic criticism, but through the emergence of a series of reflections, refractions and destabilizing realities. What might happen if we hold a mirror up to a place, and ask it to see itself?
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of Disruption from Digital Transformation through the ARIES Framework Enterprise Element Model</title>
<link href="https://hdl.handle.net/1721.1/143168" rel="alternate"/>
<author>
<name>Lucioli, Alessandro</name>
</author>
<id>https://hdl.handle.net/1721.1/143168</id>
<updated>2022-06-16T03:26:00Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Exploration of Disruption from Digital Transformation through the ARIES Framework Enterprise Element Model
Lucioli, Alessandro
Never before has disruption arising from digital transformation been more starkly obvious and relevant than amid the ongoing COVID-19 global pandemic, with its increasing focus on digitalisation to address the many challenges presented by public health orders that limit human-human interaction.&#13;
&#13;
Whilst a global pandemic is inherently a disruptive event, catalysing and bringing to the fore other disruption, change as a result of digital transformation has been present in business for at least the last decade, manifesting in such things as change in business ecosystems and stakeholder landscapes (amongst others). Consequently, such fundamental, transformative change has invited a deeper understanding of emergent trends by many researchers from various domains. Arguably, however, a piecemeal rather than holistic approach to exploring different enterprise elements has dominated.&#13;
&#13;
Using a semi-systematic literature review methodology, this thesis purposefully takes a holistic approach to contribute a meta-analytical synthesis of findings and observations to the existing body of knowledge. By anchoring and structuring the research around the ARIES Framework Enterprise Element Model, and leveraging object-process methodology and diagrams from the systems thinking discipline, this thesis explores a cross-section of research domains using the Scopus® database of curated academic literature in addition to other select, reputable sources. Distilling findings across the ten ARIES Framework enterprise elements, this thesis finds that digital transformation is profoundly transformative for enterprises because it is fundamentally about organisational change rather than simply technological adoption. Consequently, enterprises often cited as exemplary and characterised as digital natives: (a) embrace necessary change around organisational elements such as culture, leadership, creativity and knowledge management in support of their digital aspirations; (b) challenge established paradigms of technology integration and digitalise processes at all levels of the enterprise; (c) readily pivot to new business models which capitalise on coopetition, leverage reduction in information asymmetry between the enterprise and its customers, and support monetisation opportunities for information assets; (d) make no distinction between enterprise and digital strategy; (e) anticipate the cybersecurity policy landscape; and (f) continuously evaluate the enterprise in light of emerging decentralised and democratised solutions to societal needs.&#13;
&#13;
The culmination of the observations and findings is a single, unified object-process diagram or ‘blueprint’. The blueprint characterises an enterprise-wide response to the disruptive, emergent trends arising from digital transformation synthesised from the research, and provides a holistic birds-eye view and orientation for addressing digital transformation across an enterprise.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magnetic Cleanliness, Sensing, and Calibration for CubeSats</title>
<link href="https://hdl.handle.net/1721.1/143167" rel="alternate"/>
<author>
<name>Belsten, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/143167</id>
<updated>2022-06-16T03:31:20Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Magnetic Cleanliness, Sensing, and Calibration for CubeSats
Belsten, Nicholas
Magnetometers are widely used on satellites for both attitude sensing and scientific observations. Spaceborne magnetometers have enabled the creation of accurate maps of Earth’s magnetic fields. However, these models have limited spatial and temporal resolution, and therefore are much less accurate in locations with fast or localized magnetic perturbations. Such perturbations can be particularly problematic near Earth’s poles, where field-aligned currents come close to the surface of the Earth and are concentrated near satellites in LEO. Science missions that need to know the local magnetic field in the polar regions must bring their own high-fidelity magnetic sensors.&#13;
&#13;
The AERO-VISTA mission comprises a pair of 6U CubeSats which will determine the propagation modes and directions of high frequency (400 kHz–5 MHz) waves in Earth’s ionosphere in the presence of Earth’s aurorae. This mission science requires accurate in-situ magnetic sensing of auroral currents for RF measurement context. This thesis details the design, integration, and testing of the magnetic sensors in the AERO-VISTA Auxiliary Sensor Package (ASP). We discuss the estimation of spacecraft self-interference and implement an informal magnetic interference control process. We present some simple ground testing strategies for magnetic screening of components and measurement of spacecraft self-interference. We evaluate the performance and non-ideal effects of our selected anisotropic magnetoresistive (AMR) 3-axis magnetometer. We create a measurement equation which, together with regression techniques, allows for calibration to better than 100 nT repeatability despite non-ideal effects, meeting AERO-VISTA’s requirements. This calibration strategy is extended to include current path and material interference effects. We describe the detailed design of the magnetic sensing system, including the electronics, mechanical design, and software of the ASP. Without self-interference effects, this design has a noise floor better than 10 nTrms.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Covid-19 Pandemic on Student Participation in an Intro CS MOOC</title>
<link href="https://hdl.handle.net/1721.1/143166" rel="alternate"/>
<author>
<name>Mauck, Christopher Glendon Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/143166</id>
<updated>2022-06-16T03:41:03Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Impact of Covid-19 Pandemic on Student Participation in an Intro CS MOOC
Mauck, Christopher Glendon Matthew
The impact of the COVID pandemic spreads far and wide, encompassing nearly all aspects of society. One important community that has been forced to enter uncharted territory is academia. Although many students and instructors were subjected to new tools such as virtual lectures, one platform that remained unchanged throughout is the MOOC (Massive Open Online Course) platform edX: a platform that enables students around the world to engage in academia through an online, virtual environment. In an effort to analyze the impacts of the pandemic, this thesis provides a data-driven survey of the landscape of the introductory computer science course, titled 6.00.1x Introduction to Computer Science and Programming, offered on the edX platform. With enrollment ranging from thirty thousand to one hundred thousand students per run, this edX class provides many individuals with their first taste of computer programming. This large enrollment count provides ample granular data with which to survey pre-COVID, beginning-of-COVID, and steady-state COVID class runs in 2019, 2020, and 2021, respectively. We first take a high-level overview of the differences and similarities created by the onset of the pandemic. Then, using various tools and techniques, we take a deeper dive into specific aspects of student involvement and interaction to gain useful insights. Finally, we use these findings to promote and support future iterations of the edX class.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Hardness in Random Optimization Problems from the Overlap Gap Property</title>
<link href="https://hdl.handle.net/1721.1/143164" rel="alternate"/>
<author>
<name>Huang, Brice</name>
</author>
<id>https://hdl.handle.net/1721.1/143164</id>
<updated>2022-06-16T03:03:38Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Computational Hardness in Random Optimization Problems from the Overlap Gap Property
Huang, Brice
We study the limits of efficient algorithms in random optimization problems. In these problems, we are given a random objective function and our goal is to find an input achieving a large output. These problems often exhibit information-computation gaps, where the maximum objective that exists is larger than the maximum objective that known efficient algorithms can find. Our goal is to find rigorous evidence of computational hardness in the hard regime. &#13;
&#13;
We focus on the problems of random k-SAT and mean-field spin glasses. Our results are: &#13;
• It is known that random k-SAT has a satisfying assignment with high probability up to clause density [formula], while the best known algorithm (Fix) finds a satisfying assignment up to clause density [formula]. We prove that low degree polynomial algorithms cannot find a satisfying assignment above clause density [formula], for a universal constant κ∗ ≈ 4.911. Low degree polynomial algorithms encompass Fix, message passing algorithms including Belief and Survey Propagation guided decimation, and local algorithms on the factor graph. This is the first hardness result against any class of algorithms within a constant factor of the clause density achieved by Fix.&#13;
• The maximum asymptotic value OPT of the Hamiltonian [formula] of a spherical or Ising mixed p-spin glass is given by the celebrated Parisi formula. Recently developed approximate message passing algorithms efficiently optimize [formula] up to a value ALG given by an extended Parisi formula, which minimizes over a larger space of non-monotone functional order parameters. These two objectives coincide for spin glasses exhibiting a no overlap gap property, but are generically not equal. We prove that for mixed even p-spin models, no algorithm satisfying an overlap concentration property can produce an objective larger than ALG. This property holds for all algorithms with suitably Lipschitz dependence on the disorder coefficients of H_N, including natural formulations of gradient descent, approximate message passing, and Langevin dynamics run for bounded time. In particular, this includes the algorithms achieving ALG.&#13;
&#13;
We prove these results by extending the overlap gap property (OGP) framework of Gamarnik and Sudan to multi-OGPs, which consider forbidden constellations containing several solutions. Our results for random k-SAT are proved by a multi-OGP that generalizes the ladder constellation introduced by Wein. Our results for spin glasses are proved by a new multi-OGP, the branching OGP, that uses an arbitrarily complex ultrametric constellation of solutions.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multidisciplinary Architectural Study of On-Orbit Space Vehicle Refueling</title>
<link href="https://hdl.handle.net/1721.1/143162" rel="alternate"/>
<author>
<name>Ehn, Eric J.</name>
</author>
<id>https://hdl.handle.net/1721.1/143162</id>
<updated>2022-06-16T03:28:49Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Multidisciplinary Architectural Study of On-Orbit Space Vehicle Refueling
Ehn, Eric J.
The next generation of space exploration is filled with bigger and more enterprising missions, like human exploration of Mars and in-space harvesting of energy. In-space servicing, specifically on-orbit refueling, offers a chance to achieve these ambitious space mission goals. To understand how on-orbit architectures can produce value for space systems, a Technology Roadmapping method is used to analyze the foundations of on-orbit refueling, understand the on-orbit refueling landscape, and consider efficient paths forward to developing on-orbit refueling systems. Using a multidisciplinary architectural analysis of on-orbit refueling systems, this thesis aims to clarify which investments and efforts are crucial to establishing the road to on-orbit refueling.&#13;
&#13;
Existing on-orbit refueling systems have steadily increased the mass of propellant they can refuel, from Orbital Express’ 22 kilograms in 2007 to Tianzhou 2’s delivery of 2000 kilograms in 2021, yet refueling rates like the industry-leading Robotic Refueling Mission’s 0.155 kilograms per second, demonstrated in 2012, show that some technology areas could still hold back legitimate refueling operations. Refueling system designers should consider a few insights from this research to help define architectures that close existing gaps in refueling system performance.&#13;
&#13;
Missions that have higher delta-v budgets (i.e., military maneuvering, MEO, and GEO missions) have a larger potential for improvement due to refueling. Using a technical model that analyzes refuel time, propellant mass delivered, and space vehicle pmf efficiency, the best MEO and GEO refueling systems produced higher utility scores (0.79) than the best LEO systems (0.73). The best LEO systems show an estimated total mission cost savings of around $100 million, while MEO and GEO refueling systems offer much higher total mission cost savings of $300 million to $1 billion.&#13;
&#13;
Tank exchange and propellant augmentation refueling methods shine over robotic arm pumping or direct docking-and-pumping methods because they bypass the typically slow micro-gravity pumping rates, require lower instrument mass, and involve less complexity. For comparable high-value LEO systems, tank exchange refueling systems can be built for 12% lower unit costs and produce 27% lower total mission costs than refuelers with pumping mechanisms. Refueling a higher number of customers always adds more value in this research’s modeling, but surprisingly, refueling the same system multiple times appears to have a diminishing value return around 4-6 refuels. Partnerships will be essential to correctly define space vehicle standard interfaces for refueling and to grow a sufficient customer base to maximize value.&#13;
&#13;
A high-value 2030 roadmap includes targeting MEO and GEO tank exchange refueling systems. To achieve that goal, start by defining and building 200 kilogram delivery size LEO refueling prototypes using tank exchange or propellant augmentation methods to prove assumed high-TRL sub-systems, accurately gauge demand, and push forward standardized docking interfaces. Financial modeling of a LEO tank exchange prototype shows the system could be built for roughly $200 million, improving space mission systems mass efficiency by 50% on day one, providing a potential IRR of 25%, and potentially returning value after 4 years of refueling service, conservatively saving $103 million for mission system owners. Once the prototypes prove TRL advancements, assert standard refueling interfaces, and industry partnerships support refueling demand, the prototypes can be scaled or positioned in high-demand refueling locations.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracer: A Machine Learning Based Data Lineage Solver with Visualized Metadata Management</title>
<link href="https://hdl.handle.net/1721.1/143161" rel="alternate"/>
<author>
<name>Xie, Zhuofan</name>
</author>
<id>https://hdl.handle.net/1721.1/143161</id>
<updated>2022-06-16T03:06:48Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Tracer: A Machine Learning Based Data Lineage Solver with Visualized Metadata Management
Xie, Zhuofan
In databases, much data is not created from scratch; it is derived from other data, and the record of these derivations is called data lineage. Knowing the data lineage could help us with data validation, error detection, data debugging, and privacy and access control. Unfortunately, many databases do not have well-documented data lineage information, and most existing works in this area rely heavily on extra input such as metadata, source code, or annotations. In this paper, we build upon Tracer, a previously proposed machine learning approach to this problem, and make it more accurate, more general, and more intuitive.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>(Re)Turn to Stone</title>
<link href="https://hdl.handle.net/1721.1/143158" rel="alternate"/>
<author>
<name>Filiposyan, Nare</name>
</author>
<id>https://hdl.handle.net/1721.1/143158</id>
<updated>2022-06-16T03:05:40Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">(Re)Turn to Stone
Filiposyan, Nare
In contemporary Armenia, stone is ubiquitous: from street furniture to the home, from thousands of public water fountains to thousands of medieval churches, from municipal buildings to Soviet housing blocks disguised under stone tiles. Stone is a vital part of the cultural fabric, holding both physical as well as intangible cultural heritage. During a period described as the Dark Age for the Byzantine Empire, Armenian masons developed advanced stone building techniques, producing a rich heritage of religious architecture, much of which still stands today. &#13;
&#13;
However, driven by standardization and efficiency, concrete has largely replaced stone as a structural material, reducing it to veneer surfaces while still tasked with carrying an enormous cultural load. Though the appearance of stone is pervasive, certain stonework techniques are dying out. &#13;
&#13;
The thesis attempts to perpetuate a culture of stone by producing architecture that necessitates those techniques of stonework and re-prioritizes the knowledge of the masons that has been rendered obsolete as a byproduct of standardization. &#13;
&#13;
Situated in my hometown of Sisian, in southern Armenia, the thesis spans the ambiguous seam between the civic and the domestic spheres of my grandmother’s house, street, and neighborhood. The outcome is a new cultural fabric of stone that runs all the way from the civic to the domestic, continuous from the curb to the hearth.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Top-Down, Safety-Driven Approach to Architecture Development for Complex Systems</title>
<link href="https://hdl.handle.net/1721.1/143156" rel="alternate"/>
<author>
<name>Poh, Justin Wei Siang</name>
</author>
<id>https://hdl.handle.net/1721.1/143156</id>
<updated>2022-06-16T03:39:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A Top-Down, Safety-Driven Approach to Architecture Development for Complex Systems
Poh, Justin Wei Siang
Architecture development is an important part of the systems engineering process because the system architecture forms the foundation on which the rest of the system design is based. In addition, the system architecture plays a key role in determining the behavior of the system and represents a set of design decisions made to solve a design problem. Because modern systems are increasingly complex and software-intensive, they require architectures that fully consider system-level interactions and unsafe behaviors and ensure that the responsibilities necessary for safety are carried out effectively. Furthermore, the architecture development process should organize design information in a way that assists system designers and reviewers with managing system complexity and developing an understanding of the system design and its underlying rationale. &#13;
&#13;
This thesis proposes a new top-down, safety-driven approach to architecture development that is based on systems theory and incorporates a hazard analysis at the beginning of the design process to drive the identification of system-level requirements. This approach ensures that the system and its environment are analyzed as a whole and emergent properties such as safety are considered as early as possible. Using a structured process and appropriate types of abstraction, this new approach to architecture development facilitates obtaining more information about how the system needs to behave before creating a series of candidate architecture options and assessing the tradeoffs between them. &#13;
&#13;
The proposed approach is applied to create a conceptual architecture for a human pilot and automated flight controller performing medevac flights in Degraded Visual Environments (DVEs). This example illustrates how the new approach can be used to develop architectures in a top-down, safety-driven manner and shows how the design information obtained using this new approach can be used to make more informed architectural decisions.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System for General In-Hand Object Re-Orientation</title>
<link href="https://hdl.handle.net/1721.1/143155" rel="alternate"/>
<author>
<name>Chen, Tao</name>
</author>
<id>https://hdl.handle.net/1721.1/143155</id>
<updated>2022-06-16T03:02:24Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A System for General In-Hand Object Re-Orientation
Chen, Tao
In-hand object reorientation has been a challenging problem in robotics due to the high-dimensional actuation space and the frequent change in contact state between the fingers and the objects. We present a simple model-free framework that can learn to reorient objects with the hand facing both upwards and downwards. We demonstrate the capability of reorienting over 2000 geometrically different objects in both cases. The learned policies show strong zero-shot transfer performance on new objects. We provide evidence that these policies are amenable to real-world operation by distilling them to use observations easily available in the real world.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>News Feeds and User Engagement: Evidence from the Reddit News Tab</title>
<link href="https://hdl.handle.net/1721.1/143154" rel="alternate"/>
<author>
<name>Moehring, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/143154</id>
<updated>2022-06-16T03:17:47Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">News Feeds and User Engagement: Evidence from the Reddit News Tab
Moehring, Alex
We study how the introduction of a new non-personalized news feed impacts user engagement quantity, quality, and diversity on Reddit. In June 2018, Reddit introduced the News tab on iOS devices that surfaces popular content from a curated list of news-related communities. We leverage this natural experiment to identify the causal effects of the News tab on iOS user engagement in a difference-in-differences design. We find that the News tab increases the share of iOS devices that engage with news-related content and there is a relatively larger increase in low-quality engagement, measured through voting on the platform. We also find that the diversity of engagement within news categories and within articles from publishers across the political spectrum increases as a result of the News tab. These results suggest that non-personalized feeds can be an important tool to mitigate algorithmic filter bubbles, and need not come at the expense of reduced user engagement.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming Obsolescence: A Roadmap for Redeveloping Massachusetts Gas Station Real Estate in a post-Gasoline World</title>
<link href="https://hdl.handle.net/1721.1/143152" rel="alternate"/>
<author>
<name>Hansen, Derek J.</name>
</author>
<id>https://hdl.handle.net/1721.1/143152</id>
<updated>2022-06-16T03:47:57Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Overcoming Obsolescence: A Roadmap for Redeveloping Massachusetts Gas Station Real Estate in a post-Gasoline World
Hansen, Derek J.
There is an inherent tension between the ubiquity of gas station properties and the necessity of reducing greenhouse gas emissions to avoid the worst effects of climate change.&#13;
&#13;
On the one hand, a significant amount of real estate value, with all its appurtenant downstream outputs including local tax revenues, employee wages, and owner equity, is tied to the continued burning of fossil fuels to power light-duty personal vehicles. On the other hand, science and anecdotal evidence of extreme weather events continue to demonstrate the high cost of not reducing global emissions by adopting zero-emission vehicles.&#13;
&#13;
This thesis examines how that tension between real estate and greenhouse gas emissions is likely to play out, with a focus on properties located in Middlesex and Suffolk Counties, Massachusetts.&#13;
&#13;
The research comprises nine sections that follow the broad arc of establishing whether zero emission vehicle adoption is coming, the speed at which it is likely to occur, the effect such an occurrence would have on gas station real estate values, and the methodology for targeting groups of properties, and indeed selecting individual properties, for redevelopment into new uses.&#13;
&#13;
Analysis in this paper relies heavily on a comprehensive dataset of gas stations located in the target counties and augmented with local tax assessor data, as well as reporting from government agencies and interviews with industry experts. The result is a holistic assessment of when, and to what degree, gas stations will become viable targets for redevelopment due to the adoption of zero emission vehicles.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal Reinforcement Learning with Black Holes</title>
<link href="https://hdl.handle.net/1721.1/143151" rel="alternate"/>
<author>
<name>Micali, Enrico</name>
</author>
<id>https://hdl.handle.net/1721.1/143151</id>
<updated>2022-06-16T03:12:14Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Optimal Reinforcement Learning with Black Holes
Micali, Enrico
We introduce the Black Hole Reinforcement Learning problem, a previously unexplored variant of reinforcement learning in which we lose all turn information and all reward from trajectories that visit a particular subset of states. We assume awareness of the trajectory loss events, making this a censored data problem but not a truncated data problem. We have elected to work in a fixed-horizon setting for easier readability of our arguments in what we hope will be a seminal paper in a new branch of “biased” data science.&#13;
&#13;
We begin by proposing a series of three MDP-inspired models for the Black Hole RL setting, each striking a unique compromise between the compactness of the state space and the optimality of a memory-less policy over the states. For the two models that are memory-less in both the state transition and turn reward functions, we describe a stochastic policy gradient algorithm that guarantees non-asymptotic convergence to a policy with finitely suboptimal expected trajectory reward, focusing on the more compact model which benefits from better convergence rates.&#13;
&#13;
This algorithm’s guarantee hinges on the assumption that any location in the problem environment has a non-zero chance of being an agent’s start state. We dedicate the remainder of the paper to the feasibility of finding optimal policies for Black Hole RL problems in which we don’t have this guarantee. We prove that converging to an optimal policy is feasible under the new assumption that the distribution determining agents’ starting states is known. Problems in the Black Hole RL setting where the starting state distribution is “sparse” and unknown remain open problems. We provide a potential first avenue of research into their feasibility, which we believe to be the next natural frontier in Black Hole Reinforcement Learning Theory.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Market Exchange for Climate risk</title>
<link href="https://hdl.handle.net/1721.1/143149" rel="alternate"/>
<author>
<name>Jansen van Rensburg, Nicholas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/143149</id>
<updated>2022-06-16T04:00:52Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Design of a Market Exchange for Climate risk
Jansen van Rensburg, Nicholas A.
Significant scientific research has shown that human activities are the primary contributor to a warming climate, but disagreement exists about the likelihood and impact of this change. For example, projections for sea level rise (SLR) have been developed by certain cities, and the mitigating costs related to managing this risk can be estimated. Still, disagreement remains about the true probability of change, rise, and impact. This paper proposes a real-time pari-mutuel market based on a blockchain to capture this disagreement and manage SLR risk more cost-effectively for the City of Boston.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Requirements for Distributed Machine Learning Training in the Cloud</title>
<link href="https://hdl.handle.net/1721.1/143146" rel="alternate"/>
<author>
<name>Salamy, James</name>
</author>
<id>https://hdl.handle.net/1721.1/143146</id>
<updated>2022-06-16T03:31:02Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Network Requirements for Distributed Machine Learning Training in the Cloud
Salamy, James
In this thesis, I characterize the impact of network bandwidth on distributed machine learning training. I test four popular machine learning models (ResNet, DenseNet, VGG, and BERT) on an Nvidia A100 cluster to determine the impact of bursty and non-bursty cross traffic (such as web-search traffic and long-lived flows) on the iteration time and throughput of distributed training. By varying the cross-traffic load, I measure the impact of network congestion on training iteration times. I observe that with heavy web-search cross traffic (80% of link capacity), average training iteration time increases by up to 4× and 8× for ResNet and BERT models, respectively. Further, I establish that the ring all-reduce communication collective is negatively impacted by network congestion even if the congestion affects only part of the ring. I also develop empirical models for the behavior of machine learning training in the presence of each type of cross traffic deployed. These results provide the motivation for developing novel congestion control protocols tailored for distributed training environments.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Symbols and Spatiality of Social Media: Re-constructing the Digital Public Realm</title>
<link href="https://hdl.handle.net/1721.1/143140" rel="alternate"/>
<author>
<name>Chen, Feiyue</name>
</author>
<id>https://hdl.handle.net/1721.1/143140</id>
<updated>2022-06-16T03:45:52Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Symbols and Spatiality of Social Media: Re-constructing the Digital Public Realm
Chen, Feiyue
This project centers around the making of the digital public realm and its social impact. In the first half, I aim to offer a diagnosis of today's problematic online environment from an architectural and urbanistic point of view. In the second half, I will present two new symbols that have the potential to better the online conversational structure.&#13;
&#13;
The research part includes the analysis of the origins, evolutions (or corruptions), and relational structures of four selected social media symbols: 1) the hyperlink, 2) the at sign, 3) following, 4) the hashtag. From an urban standpoint, this project examines the digital public realm within the context of global urbanization and, by re-visiting and re-interpreting terms like “echo chambers” and “privacy crisis,” seeks to understand why its current structure, one that rests upon the logic of economics, has turned the internet into an insatiable and formless network incapable of conditioning effective political communication. Overall, this project takes architecture and urbanism as the design and research methodology, political philosophy as the conceptual framework, and media studies as the state of the field.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Speech and Media Interaction Model for Individuals with Vision and Speech Impairments</title>
<link href="https://hdl.handle.net/1721.1/143139" rel="alternate"/>
<author>
<name>Trivedi, Mihir</name>
</author>
<id>https://hdl.handle.net/1721.1/143139</id>
<updated>2022-06-16T03:59:53Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">A Speech and Media Interaction Model for Individuals with Vision and Speech Impairments
Trivedi, Mihir
Advancements in low-cost computing and electronics have created major opportunities in Accessible Technology for individuals with disabilities. Assistive Technology is especially important to introduce to children with disabilities at a young age, as it can have a significant impact on their learning ability.&#13;
&#13;
This thesis presents a device and interaction model for children with vision and speech impairments. The device is a Speech Generating Device that allows children to distinguish between inputs that speak a configurable set of words, as well as to control media on connected Bluetooth devices. The device is designed to facilitate easy interaction for use in early childhood education settings, including special needs classrooms and home environments. The device also expands on existing technology by facilitating easy configuration with a mobile and web application.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Who Cares? Assemblies of Care-and-Repair</title>
<link href="https://hdl.handle.net/1721.1/143137" rel="alternate"/>
<author>
<name>Jurczynski, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/143137</id>
<updated>2022-06-16T03:03:09Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Who Cares? Assemblies of Care-and-Repair
Jurczynski, Emma
We source, process and build with standardized wood and, just as easily, discard it. Rubbish, scraps, the wasted, the junked are assumed to be undesirable due to a lack of knowledge about the structural capacity of second-hand timber. Deemed unusable, approximately 36 million tons of dimensional lumber and plywood end up in U.S. landfills because few people know what to do with it and there is nowhere to store it. &#13;
&#13;
This thesis presents a methodology that reuses these construction materials through a system of assemblage that maximizes the amount used, and catalogues these materials in a prototypical storage warehouse that not only stores discarded standard timber but is itself built from it. The more material used, the less sequestered carbon dioxide is released into the atmosphere. I am taking an excessive approach to use as much leftover wood as possible to be inefficiently efficient.&#13;
&#13;
This methodology is a care-and-repair system that re-assigns value and extends the lifespan of used materials. In this instance, repair means returning a material to its original function, not form. Repair is an act of assembling layers of material from multiple sites and environments with inherent histories and memories from various moments and places.&#13;
&#13;
With the planet in need of “critical care,” this method is proposed as a necessary near-future reality to minimize our footprint and carbon emissions. The proposed act of caring for our materials creates a culture of inefficient construction systems and celebrates the social and environmental efficiencies that grow from it.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Network Analysis of Job Transitions to Inform Career&#13;
Advice</title>
<link href="https://hdl.handle.net/1721.1/143136" rel="alternate"/>
<author>
<name>Clochard, Axelle</name>
</author>
<id>https://hdl.handle.net/1721.1/143136</id>
<updated>2022-06-16T03:23:45Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Using Network Analysis of Job Transitions to Inform Career&#13;
Advice
Clochard, Axelle
The importance of good career advice has become especially salient as the COVID-19 pandemic forces millions of displaced workers to look for stable employment. This research hopes to add to the career advice literature by using network analysis of U.S. job transitions data to model the universe of career paths available from a first job. By linking together the occupations that are connected by significant flows of workers and focusing on the paths that lead from precarious occupations, we can identify areas of the labor market that offer dependable channels to upward mobility and areas that do not, where workers could benefit from additional guidance. Overall, we find that, although opportunities exist for workers of various levels of educational attainment, upward mobility prospects are generally curtailed for workers without a Bachelor’s degree. What’s more, low-wage or shrinking occupations appear to offer limited access to stable, high-wage employment. Still, there are a number of bright spots: occupations that can provide low-wage workers with dependable access to sustainable employment down the line. We hope to use this knowledge to inform the nature of advice given to workers by suggesting careers that are associated with living wages and stability in the long term.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Priors for Active Learning on Robots</title>
<link href="https://hdl.handle.net/1721.1/143133" rel="alternate"/>
<author>
<name>Brand, Isaiah</name>
</author>
<id>https://hdl.handle.net/1721.1/143133</id>
<updated>2022-06-16T03:50:54Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Structural Priors for Active Learning on Robots
Brand, Isaiah
A primary hindrance to neural networks in robotic applications is data efficiency; collecting data on a real robot is slow and expensive. Active learning, in which the learner chooses the data that will best accelerate learning, has been shown to reduce data requirements in machine learning and statistics applications, but has seen limited application to real robots. This thesis leverages Bayesian active learning in robotic domains, and proposes two novel extensions that further improve data efficiency by exploiting the structure inherent to most robotics applications. First, we introduce active learning of Abstract Plan Feasibility (APF) --- the likelihood that a plan proceeds as expected when executed on the robot. By incorporating the learned APF model into the active learning loop, we significantly improve data efficiency. This approach enables a real 7DOF Panda robot arm to learn a neural network estimator of APF in a block stacking domain after only 400 experiments. For comparison, state-of-the-art model learning methods require thousands or millions of interactions in similar domains. We show that the learned APF estimator significantly improves planning for downstream tasks. Second, we incorporate the notion of objects --- a structure present in many robotic domains --- into the active learning framework. We develop an object-factored dynamics model, which allows the robot to separate uncertainty about individual objects from uncertainty about the global dynamics. When paired with Bayesian active learning, the object-factored dynamics model allows the robot to actively learn about novel objects, without adjusting the global dynamics. We evaluate this approach in simulated block stacking and ball throwing environments.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meeting A Community’s Emissions Reduction Targets Using Urban Building Energy Modeling</title>
<link href="https://hdl.handle.net/1721.1/142856" rel="alternate"/>
<author>
<name>Berzolla, Zachary M.</name>
</author>
<id>https://hdl.handle.net/1721.1/142856</id>
<updated>2022-06-02T03:27:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Meeting A Community’s Emissions Reduction Targets Using Urban Building Energy Modeling
Berzolla, Zachary M.
Communities around the world are striving to meet aggressive emissions reduction targets in a short time frame. This paper lays out a six-step process using urban building energy modeling to identify a combination of building energy efficiency upgrades and renewable energy deployment strategies that meet emissions goals. The process involves key decision makers in each municipality working with an energy modeling consultant to build up a model of their building stock and simulate various scenarios to meet the desired emissions reduction goals. Through a case study of Oshkosh, Wisconsin, the six-step process is tested, and a concrete action plan to meet the city's 80% emissions reduction goal by 2050 is presented. The final recommended solution involves upgrading all residences in Oshkosh to ENERGY STAR certified home standards, installing cold climate heat pumps to displace fossil-fuel based heating, and deploying photovoltaics over an area equivalent to 50% of all rooftops. To aid in the final step of the process, implementation, the city-wide strategies were broken down into actions individual homeowners could take, along with the cost and payback periods for these actions. In order to meet global emissions reduction goals, the six-step process presented in this paper will need to be carried out in communities around the world. The approach has been shown to be flexible and applicable anywhere with emissions goals and access to building footprint and characteristic data.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feeling the Climate Crisis</title>
<link href="https://hdl.handle.net/1721.1/142845" rel="alternate"/>
<author>
<name>Patekar, Gaurav</name>
</author>
<id>https://hdl.handle.net/1721.1/142845</id>
<updated>2022-06-01T03:34:11Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Feeling the Climate Crisis
Patekar, Gaurav
This project visualizes the ongoing climate crisis through the means of kinetic sculptures that are made with found natural objects juxtaposed with electromechanical elements. Together they create movement patterns that express climate data. The project attempts to create an experience that allows viewers to feel climate data.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LunarWSN: A Wireless Sensor Network for In-Situ Lunar Water Ice Detection</title>
<link href="https://hdl.handle.net/1721.1/142844" rel="alternate"/>
<author>
<name>Liu, Fangzheng</name>
</author>
<id>https://hdl.handle.net/1721.1/142844</id>
<updated>2022-06-01T03:22:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">LunarWSN: A Wireless Sensor Network for In-Situ Lunar Water Ice Detection
Liu, Fangzheng
Future sustainable lunar habitation will be resource-intensive. Taking advantage of local resources on the lunar surface is the most effective way to reduce the cost and risk of future lunar missions. Water is one of the most important resources: it can provide not only drinking water for crews, but also fuel for rockets and spacecraft. To date, most of our knowledge of lunar water distribution comes from remote sensing, which is coarse (kilometer-scale resolution). More in situ measurements are indispensable to acquire meter-scale knowledge of lunar water distribution. The current mainstay of in situ planetary exploration is either a single high-cost rover that can provide merely a series of single-point measurements, or a lander without mobility that can only measure its surrounding area. Neither rovers nor landers can work in the dangerous areas where data of interest often exist. &#13;
&#13;
Wireless Sensor Networks (WSNs) are a technology typically dedicated to collecting in situ sensing data from regions of interest. A WSN is composed of multiple sensor nodes that are relatively small, light, and easy to deploy. The sensor nodes are designed for a variety of missions and distinctly different environments. In this thesis, we present a WSN sensor node designed for measuring the water content in lunar soil simulant. The sensor node is designed to be ballistically deployed from a rover or lander to regions of interest that might be unsafe for rovers or landers. The sensor nodes can create an expandable WSN that we term LunarWSN. The LunarWSN sensor nodes can make simultaneous observations from multiple positions. Each node is a miniaturized, modular design whose sensor payload can be customized to different scientific missions. After anchoring on the lunar surface, the sensor nodes can localize themselves, set up a wireless communication network, and start the sensing operation — measuring the permittivity of the lunar soil, from which the water content is inferred.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How does my robot know who I am?: Understanding the Impact of Education on Child-Robot Relationships</title>
<link href="https://hdl.handle.net/1721.1/142843" rel="alternate"/>
<author>
<name>DiPaola, Daniella</name>
</author>
<id>https://hdl.handle.net/1721.1/142843</id>
<updated>2022-06-01T03:28:04Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">How does my robot know who I am?: Understanding the Impact of Education on Child-Robot Relationships
DiPaola, Daniella
Technologies designed with personalities and social interfaces are entering our homes in the form of social robots such as Jibo and Cozmo. By emulating interpersonal interactions, social robots have great potential to help us learn, be more creative, and reduce stress. Social robots also introduce potential for harm, such as emotional manipulation for money or power. More information about the nature of long-term social relationships between social robots and children could help avoid such harms.&#13;
&#13;
In artificial intelligence, transparency is one of the key tenets of ethical and responsible design. It is hypothesized that by knowing how a system works, users may be able to better use and trust robotic systems. Educators are beginning to create materials that give students both a conceptual understanding of the system (i.e. how do sensors work?) and an applied understanding of the system (i.e. program a sensor to detect when the lights are off). Children, as early as Pre-K, are capable of learning about and building features for social robots. However, we still do not know how social relationships between children and robots change when the inner workings of these systems become more transparent.&#13;
&#13;
First, I discuss the design of two curricula that take different approaches to educating youth (grades 4 and 5) about social robots. The Knowledge and Societal Impact curriculum teaches students about the technical and ethical topics surrounding social robots. The Programming curriculum allows students to program their own conversational skills on Jibo. These curricula represent two pedagogical approaches in the field of AI education, one focused on embedding ethics, and the other focused on students as self-driven makers.&#13;
&#13;
Next, I evaluated the impact of these curricula on fourth and fifth-grade students who simultaneously lived with a social robot in their home for two months. Students were assigned to one of four conditions: no education, only Knowledge and Societal Impact, only Programming, and both Knowledge and Societal Impact and Programming. I found that students were able to understand and engage with the curricula, and that the curricula helped them form a clearer model of what their robot was capable of. However, I found no difference in perceived emotional relationship or usage among groups. Students in all groups found the robot equally likeable, anthropomorphic, intelligent, safe, and animated. However, students who engaged in the Knowledge and Societal Impact curriculum found their robot to be significantly less trustworthy than did students in the other groups.&#13;
&#13;
Overall, results from this study indicate that children will continue to treat robots as social partners regardless of the information that they hold about them. However, it seems that teaching students about the societal impact of robots does make them less trusting of their own robot. These results are timely and relevant given public discourse. Many have advocated for education to prevent deception or misuse of social robots, and findings from the study suggest that new approaches to education are needed.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ChaperoNet: Distillation of Language Model Semantics to Folded Three-Dimensional Protein Structures</title>
<link href="https://hdl.handle.net/1721.1/142842" rel="alternate"/>
<author>
<name>dos Santos Costa, Allan</name>
</author>
<id>https://hdl.handle.net/1721.1/142842</id>
<updated>2022-06-01T03:23:17Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">ChaperoNet: Distillation of Language Model Semantics to Folded Three-Dimensional Protein Structures
dos Santos Costa, Allan
Determining the structure of proteins has been a long-standing goal in biology. Language models have been recently deployed to capture the evolutionary semantics of protein sequences, and as an emergent property, were found to be structural learners. Enriched with multiple sequence alignments (MSA), these transformer models were able to capture significant information about a protein’s tertiary structure. In this work, we show how such structural information can be recovered by processing language model embeddings, and introduce a two-stage folding pipeline to directly estimate three-dimensional folded structures from protein sequences. We envision that this pipeline will provide a basis for efficient, end-to-end protein structure prediction through protein language modeling.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Membrane I/O: Designing Bits and Atoms for Tangible Telepresence</title>
<link href="https://hdl.handle.net/1721.1/142840" rel="alternate"/>
<author>
<name>Zhipeng, Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/142840</id>
<updated>2022-06-01T03:33:55Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Membrane I/O: Designing Bits and Atoms for Tangible Telepresence
Zhipeng, Liang
This thesis proposes a strategy of designing Computer Mediated Tangible Communication (CMTC) over space and time with the consideration of philosophical concepts posed in Phenomenology. Membrane I/O (M.I/O) is created as a conceptual framework for CMTC, which focuses on creating a thin sensor/motor layer between the human body and the world of being. To solidify and interpret the philosophical studies into design concepts, four prototypes of progressive malleability are made, from chain-like mechanisms to blanket-like mechanisms, from line-shape sensing to surface-shape sensing and actuation. They will be presented with their design spaces and possible applications. The ideal model and future work of M.I/O will be discussed in the end.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sub-Picomolar Detection of SARS-CoV-2 RBD via Computationally-Optimized Peptide Beacons</title>
<link href="https://hdl.handle.net/1721.1/142839" rel="alternate"/>
<author>
<name>Tripathy, Soumya Pratap</name>
</author>
<id>https://hdl.handle.net/1721.1/142839</id>
<updated>2022-06-01T03:02:17Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Sub-Picomolar Detection of SARS-CoV-2 RBD via Computationally-Optimized Peptide Beacons
Tripathy, Soumya Pratap
The novel coronavirus SARS-CoV-2 continues to pose a significant global health threat. Along with vaccines and targeted therapeutics, there is a critical need for rapid diagnostic solutions. In this work, we employ deep learning-based protein design to engineer molecular beacons that function as conformational switches for high-sensitivity detection of the SARS-CoV-2 spike protein receptor-binding domain (S-RBD). The beacons contain two peptides, together forming a heterodimer, and a binding ligand between them to detect the presence of S-RBD. In the absence of S-RBD (OFF), the peptide beacons adopt a closed conformation that opens when bound to the S-RBD and produces a fluorescence signal (ON), utilizing a fluorophore-quencher pair at the two ends of the heterodimer stems. Two candidate beacons, C17LC21 and C21LC21, can detect the S-RBD with limits of detection (LoD) in the sub-picomolar range. We envision that these beacons can be easily integrated with on-chip optical sensors to construct a point-of-care diagnostic platform for SARS-CoV-2.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tiny Trainable Instruments</title>
<link href="https://hdl.handle.net/1721.1/142838" rel="alternate"/>
<author>
<name>Montoya-Moraga, Aarón</name>
</author>
<id>https://hdl.handle.net/1721.1/142838</id>
<updated>2022-06-01T03:36:49Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Tiny Trainable Instruments
Montoya-Moraga, Aarón
Can we build flexible and reusable multimedia instruments that are trained instead of programmed? How can we build and publish our own personal databases for artistic purposes? What are the new choreographies and techniques that machine learning running on microcontrollers offers for artists and activists?&#13;
&#13;
Tiny Trainable Instruments is a collection of multimedia devices, running machine learning algorithms on microcontrollers, for artistic purposes. It includes techniques for capturing data, building databases, training machine learning models, and deploying on microcontrollers. The software library created for this project allows for the creation of instruments that react to different inputs, including color, gesture, and speech, to control different multimedia outputs, including sound, light, and movement, using machine learning and embedded sensors.&#13;
&#13;
This thesis emphasizes open source software and artificial intelligence ethics, and includes all the steps for creating these bridges between machine learning and media arts, that are respectful of privacy and consent because of their offline and off-the-grid nature.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DNA Canvas: Towards affordable and scalable enzymatic fabrication of DNA nanoarrays</title>
<link href="https://hdl.handle.net/1721.1/142837" rel="alternate"/>
<author>
<name>Perry, Eyal</name>
</author>
<id>https://hdl.handle.net/1721.1/142837</id>
<updated>2022-06-01T03:08:35Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">DNA Canvas: Towards affordable and scalable enzymatic fabrication of DNA nanoarrays
Perry, Eyal
In biological systems, DNA serves as a carrier of hereditary information, facilitated by predictable and programmable base-pairing rules. The field of DNA nanotechnology takes the DNA molecule out of its original context and uses the same set of rules to construct complex structures and molecular machines at the nanoscale regime. At the nanoscale, precise organization of biological and non-biological materials in 2D or 3D space holds great promise for a vast range of applications in areas such as biophysics, point-of-care diagnostics, biomolecule structure determination, drug delivery, and more. Nucleic acid scaffolds, especially DNA origami, have emerged as a promising approach, by enabling &lt;10 nm assembly of nanomaterials such as gold particles, carbon nanotubes, and quantum dots. Two-dimensional DNA nanostructures with a plurality of uniquely-addressable linkage sites ("nanopixels") are known as DNA nanoarrays. &#13;
&#13;
Expanding the size of DNA nanoarrays is desired for a variety of applications, from whole genomic sequencing at a fraction of the cost to sustainable digital information storage. Yet, due to the stochastic nature of self-assembly, DNA origami-based approaches suffer from an inherent scale limit. Top-down fabrication techniques enable nanometrically precise patterning, yet single-molecule placement remains a daunting challenge. Currently, no method enables independent nanoscale manipulation of more than 10K diverse single molecules.&#13;
&#13;
In this thesis, we propose a new kind of DNA nanotechnology: DNA Canvas. We lay a theoretical and experimental foundation for the development of a low-cost DNA nanoarray. First, we present a computational model as feasibility proof, as well as statistical analysis of the experimental design space, aiming to minimize the cost per nanopixel. We demonstrate a novel fabrication process that fuses microfabrication by photolithography with enzymatic reactions to surpass the scale limit in previous approaches. Last, we chart a path towards full implementation of &gt;1M DNA nanoarrays and enumerate potential applications to be disrupted by the introduction of such technology.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Efficacy of a Variable Thickness Transtibial Prosthetic Liner</title>
<link href="https://hdl.handle.net/1721.1/142835" rel="alternate"/>
<author>
<name>Meyer, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/142835</id>
<updated>2022-06-01T03:06:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Design and Efficacy of a Variable Thickness Transtibial Prosthetic Liner
Meyer, Christina
Significant advancements in the socket design process have improved fit and comfort; however, the prosthetic liner remains a largely unaltered component with the potential to mitigate socket pain, discomfort, and irritation. During periods of prolonged knee flexion, a conventional liner can cause skin irritation over the top of the patella by applying uncomfortable shear forces to the skin, as well as pinching behind the knee. To mitigate these liner irritations, this thesis proposes the use of a novel, subject-specific, variable-thickness liner. To target the knee region, the liner thickness is defined as inversely proportional to the absolute value of the maximum skin strains measured during knee flexion. Areas of high strain correspond to a minimum liner thickness of 2 mm and areas of low strain correspond to a maximum liner thickness of 7 mm. Static, 30-minute sit tests were done with conventional, variable, and uniform thickness liners. A FLIR thermal camera captured images of the residuum without a liner before and after each test, with a 10-minute rest between tests. I hypothesize that the variable-thickness liner will improve patient comfort for the sit test compared to the uniform and conventional liners evaluated. The results of this pilot study show, in the posterior knee region, a 0.2% increase in temperature from the variable thickness liner and a 2.8% increase in temperature from the uniform thickness liner. In the anterior patella region there was a 5.5% decrease in temperature from the variable thickness liner and an average of 0% change in temperature from the uniform thickness liner. These results suggest that the variable thickness liner succeeds at reducing skin irritation and thermal output in regions of high skin strain. A qualitative questionnaire indicated better fit of the novel variable thickness liner compared to the conventional liner. 
The results of this initial pilot study support the hypothesis, and provide motivation for further testing on a larger study cohort.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agency and Community: Supporting Creative Learning in a Global &#13;
Online Course</title>
<link href="https://hdl.handle.net/1721.1/142834" rel="alternate"/>
<author>
<name>Gabaree, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/142834</id>
<updated>2022-06-01T03:38:06Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Agency and Community: Supporting Creative Learning in a Global &#13;
Online Course
Gabaree, Lily
Learning Creative Learning (LCL) is an online course and global educator community which has engaged thousands of people interested in exploring creative learning approaches through projects, passion, peers, and play. LCL has been designed in pursuit of learner agency and global community — that is, enabling the pursuit of meaningful, personal learning journeys while also engaging with peers from around the world. This thesis investigates participants’ experiences in the Learning Creative Learning course and community, with an emphasis on understanding participants’ experiences of agency and community, as well as considerations of their own professional contexts and development, and explores design recommendations based on these findings.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Displays for Discrete Integrated Circuit Electronics</title>
<link href="https://hdl.handle.net/1721.1/142833" rel="alternate"/>
<author>
<name>Christensen, Justin Browning</name>
</author>
<id>https://hdl.handle.net/1721.1/142833</id>
<updated>2022-06-01T03:26:01Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Distributed Displays for Discrete Integrated Circuit Electronics
Christensen, Justin Browning
I present a distributed display architecture that integrates with a set of asynchronous, mechanically and electrically re-configurable computing nodes, otherwise known as Discrete Integrated Circuit Electronics (DICE). Each display is physically and electrically connected to a DICE node, which transmits the data to be displayed. Integrating these displays with the DICE nodes enables a multitude of applications, starting with real-time data visualization and debugging and scaling up to more complex applications, such as locally computed ray tracing and graphics rendering, as well as structural and volumetric displays. The advantages and implementation of the displays in the DICE architecture, as well as various examples of their applications, are demonstrated and discussed. While the DICE nodes themselves address issues with locality in computing, these integrated distributed displays help them overcome some of their limitations and enhance their capabilities. Together, these integrated devices and their scalability can lead to iterative improvements in graphical processing, form spatial 2D grid (structural) and 3D mesh (volumetric) displays, and reduce the overall cost and complexity of distributed display and computing systems.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magneto-thermal Transport and Machine Learning-assisted Investigation of Magnetic Materials</title>
<link href="https://hdl.handle.net/1721.1/142832" rel="alternate"/>
<author>
<name>Tatsumi, Yuki</name>
</author>
<id>https://hdl.handle.net/1721.1/142832</id>
<updated>2022-06-01T03:46:27Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Magneto-thermal Transport and Machine Learning-assisted Investigation of Magnetic Materials
Tatsumi, Yuki
Heat is carried by different types of quasiparticles in crystals, including phonons, charge carriers, and magnetic excitations. In most materials, thermal transport can be understood as the flow of phonons and charge carriers; magnetic heat flow is less well studied and less well understood.&#13;
&#13;
Recently, the concept of the flat band, with vanishing dispersion, has gained importance. Especially in electronic systems, many theories and experiments have shown that structures such as kagome or honeycomb lattices host flat bands with non-trivial topology. Even though a number of theories suggest that such dispersionless modes exist in magnonic bands within the framework of the Heisenberg spin model, few experiments indicate their existence. Beyond flat-band effects, magnetic insulators can assume a variety of nontrivial topologies, such as magnetic skyrmions. In this thesis, I investigate the highly frustrated magnetic system Y0.5Ca0.5BaCo4O7, where the kagome lattice could potentially lead to nontrivial thermal transport originating from its flat band. While we do not observe signatures of the flat band in thermal conductivity, the observed anomalous Hall effect in electrical transport and spin glass-like behavior suggest a complex magnetization-transport mechanism.&#13;
&#13;
Motivated by the rapid advancement of artificial intelligence, the application of machine learning to materials exploration has recently been investigated. Using a graphical representation of crystals originally suggested in the Crystal Graph Convolutional Neural Network (CGCNN), we developed an ML-assisted method to explore magnetic compounds. Our machine learning model can, so far, distinguish ferromagnetic from antiferromagnetic systems with over 70% accuracy based only on structural/elemental information. Prospects for studying more complex magnets are described.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Attempt at Democratizing Resource Allocation for Social Movements Using Decentralized Autonomous Organizations</title>
<link href="https://hdl.handle.net/1721.1/142828" rel="alternate"/>
<author>
<name>Marquez, Daniel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/142828</id>
<updated>2022-06-01T03:42:27Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Attempt at Democratizing Resource Allocation for Social Movements Using Decentralized Autonomous Organizations
Marquez, Daniel A.
Often, a distributed and loosely coordinated set of grassroots organizations springs up to deal with local issues such as, but not limited to, classism, sexism, and racism that drive social movements like Occupy Wall Street, Me Too, and Black Lives Matter. When no pre-existing umbrella organization ties them together, it is difficult to coordinate and allocate resources. At the same time, through the Internet, a local issue can spread to become a national or global movement, requiring the speed of coordination to match the speed of the sentiment surrounding the issue.&#13;
&#13;
I propose the creation of a network to garner and manage donations with widespread support for these grassroots, decentralized social movements by facilitating the democratization of resources towards the small and local organizations that are enabling action on behalf of the movement. I hypothesize that a decentralized structure, particularly a Decentralized Autonomous Organization (DAO), can act as this network and ensure that each organization can scale at its own rate and that the donors who participate in the DAO can better and more effectively coordinate their support. My goal is for small and local organizations with a common goal to come together to create a DAO that absorbs the support of people who pool funds through the DAO and decide where those funds go. This decision-making is executed via a smart contract-based voting mechanism. In essence, I argue that through DAOs, social movements can both more effectively raise funds and allocate them.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Squishy Music Toys: Creating a less stressful, more pliable way to enter the music world</title>
<link href="https://hdl.handle.net/1721.1/142827" rel="alternate"/>
<author>
<name>Lienhard, Hannah Rhiannon</name>
</author>
<id>https://hdl.handle.net/1721.1/142827</id>
<updated>2022-06-01T03:40:16Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Squishy Music Toys: Creating a less stressful, more pliable way to enter the music world
Lienhard, Hannah Rhiannon
Electronic music education today does not have a clear path to entry: the existing methods are expensive or overly complicated, making them inaccessible to many people. In this thesis I explore a possible solution to this issue through the creation of a new kind of music interface designed to let anyone, at any age, have the experience of playing an instrument. This interface, the Squishy, is soft and pliable, allowing a comforting yet exciting new way to create and control music. This design eliminates much of the detailed technique that is often required when learning an instrument. The Squishy guides users through the basics of electronic music with a simple software interface. Each instrument is embedded with sensors that respond to bending and pressure forces on the exterior shell. These sensors are highly responsive, so even small changes to the shells can affect the sounds being created. In this way, the Squishies can also act as a tool for meditation: by being so sensitive, the interfaces encourage users to concentrate on the sounds they are creating, and can guide them to be more intentional with their movements. The Squishies can also act as MIDI controllers for more experienced users who want to use them with their preferred music software. Throughout this thesis, I explore the design space and potential use cases for these instruments: as a toy and learning experience, as a high-level music controller, as a tool for meditation, and as all three.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Defextiles: 3D Printing Quasi Woven Textiles via Underextrusion</title>
<link href="https://hdl.handle.net/1721.1/142826" rel="alternate"/>
<author>
<name>Forman, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/142826</id>
<updated>2022-06-01T03:38:03Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Defextiles: 3D Printing Quasi Woven Textiles via Underextrusion
Forman, Jack
I present DefeXtiles, a rapid and low-cost technique to produce tulle-like fabrics on unmodified fused deposition modeling (FDM) printers. The under-extrusion of filament is a common cause of print failure, resulting in objects with periodic gap defects. In this paper, we demonstrate that these defects can be finely controlled to quickly print thinner, more flexible textiles than previous approaches allow. Our approach allows hierarchical control from micrometer structure to decameter form.&#13;
&#13;
In this thesis, I introduce the mechanism of DefeXtiles, establish the design space through a set of primitives with detailed workflows, and characterize the mechanical properties of DefeXtiles printed with multiple materials and parameters. Additionally, I demonstrate the interactive features and new use cases of our approach through a variety of applications, such as fashion design prototyping, interactive objects, aesthetic lace patterning, and single-print actuators. Finally, I discuss the numerous external reproductions and extensions of the technique, and reflect on methodological strategies to support such phenomena.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superconducting Asynchronous Logic for Ultra-low Power High Performance Computing</title>
<link href="https://hdl.handle.net/1721.1/142825" rel="alternate"/>
<author>
<name>Blackburn, L. Camron</name>
</author>
<id>https://hdl.handle.net/1721.1/142825</id>
<updated>2022-06-01T03:19:43Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Superconducting Asynchronous Logic for Ultra-low Power High Performance Computing
Blackburn, L. Camron
High performance computing is bottlenecked by increasing power demands and memory bandwidth, while superconducting electronics are bounded in circuit complexity by a limit on the number of switching devices on a single chip. This thesis proposes a modular, asynchronous superconducting computing framework that aims to solve both of these problems. A discrete set of logic gates is proposed and implemented using Adiabatic Quantum Flux Parametron (AQFP) logic. AQFP logic devices can achieve picosecond gate delays with zeptojoule (10⁻²¹ J) switching energy, bordering on the theoretical Landauer limit for computing energy demands, by adiabatically switching the location of a single flux quantum in a double-well potential. The heart of the project lies in the modular architecture design, which realigns hardware layout with software dataflow to allow scalable, distributed computing systems to be built from basic circuit building blocks. Projecting the simple circuit design performance to large-scale high performance computing systems, Super-DICE aims to achieve a three-order-of-magnitude (10³) improvement in power consumption, even accounting for the cryogenic cooling overhead of the superconducting electronics. Beyond the dramatic improvement in power performance, this logic technology and architecture also allow designers to rapidly prototype hardware computing optimizations without going through the expensive and time-consuming process of fully custom ASIC design.&#13;
&#13;
In this thesis, I review the device physics of the Quantum Flux Parametron and present a set of basic AQFP combinatorial logic gates. I then propose a circuit design for asynchronous token buffering between these modular gates and describe how they can be assembled as digital materials to create scalable, complex 3D computing structures. I simulate the proposed circuit designs in SPICE and project performance of a potential superconducting supercomputer using this framework. Motivated by the energy efficiency of superconducting electronics, the heart of this thesis radically proposes to redefine traditional processor architecture by discretizing large-scale system integration into a heterogeneous set of building blocks which blur the line between hardware and software with a reconfigurable, asynchronous spatial computing system.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-standing sub-cellular sized PhotoVoltaic devices for minimally-invasive and precise Neuronal Stimulation</title>
<link href="https://hdl.handle.net/1721.1/142820" rel="alternate"/>
<author>
<name>Yadav, Shubham</name>
</author>
<id>https://hdl.handle.net/1721.1/142820</id>
<updated>2022-06-01T03:29:02Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Self-standing sub-cellular sized PhotoVoltaic devices for minimally-invasive and precise Neuronal Stimulation
Yadav, Shubham
Neural stimulation is an important research tool for diagnosis, monitoring, and therapeutics. It helps in deciphering neural connections and improving our understanding of how different parts of the brain work. In this thesis work, I discuss the design, for the first time, of sub-cellular-sized photovoltaic devices that can perform spatio-temporally precise neuron stimulation. These devices are based on thin-film organic photovoltaic technology. The stimulating devices are roughly 250 nm in thickness and a few micrometers in size (about 5 µm in diameter). This thesis provides, for the first time, a way of targeting individual neurons for stimulation without tissue displacement, and helps in developing novel technologies for therapeutics. Beyond brain stimulation, this technology can be used to target other cell types, such as HEK and HeLa cells, that can be electrically stimulated.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>KnitheWorld: lines_of_code_as_loops_of_yarn</title>
<link href="https://hdl.handle.net/1721.1/142817" rel="alternate"/>
<author>
<name>Hong, Alice</name>
</author>
<id>https://hdl.handle.net/1721.1/142817</id>
<updated>2022-06-01T03:18:08Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">KnitheWorld: lines_of_code_as_loops_of_yarn
Hong, Alice
The rapid rise of digital fabrication provides new opportunities for young people to express themselves creatively and develop computational-thinking skills. By enabling the fabrication of soft materials, knitting machines have the potential to expand the expressive range of digital fabrication tools available — and to expand the range of young people who become interested in digital fabrication. But until now, the process of programming knitting machines has been very complex and not accessible to beginners. I present KnitheWorld, an educational software system to democratize knitting for young people ages 8-12 as a means of creative expression. By dragging and dropping visual programming blocks, learners can generate patterns for Jacquard knits, a type of knit featuring patterns in two or more colors. KnitBlocks incorporates knitting terminology, allowing learners to code line by line, mirroring the process by which knitting creates new forms from loops of yarn. In our KnitheWorld workshops with children, we found that they not only create complex knit patterns with an understanding of computational thinking, but also engage in storytelling about materials and patterns around them. KnitheWorld invites children to tinker, turning daily objects into patterns of colors and helping them express their creativity through the world of knits.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Symbiotic Shift: Transcultural Explorations of Community-Guided CRISPR Biotechnology Development</title>
<link href="https://hdl.handle.net/1721.1/142816" rel="alternate"/>
<author>
<name>Ullah, Anika Nawar</name>
</author>
<id>https://hdl.handle.net/1721.1/142816</id>
<updated>2022-09-22T10:46:20Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Symbiotic Shift: Transcultural Explorations of Community-Guided CRISPR Biotechnology Development
Ullah, Anika Nawar
This master’s thesis focuses on sharing the experience of working collaboratively across the Sculpting Evolution Group at the MIT Media Lab and Indigenous researchers, elders, and community members in Aotearoa (New Zealand) to spearhead community-guided CRISPR biotechnology development: a new way of creating the next generation of CRISPR gene-editing biotechnologies that values cultural knowledge and intentionally seeks guidance from the communities that these biotechnologies may impact in the far future. Although this specific conversation focuses on ecological editing biotechnologies, it is a broader meditation on the expansion of knowledge systems used to chart the course of present and future technologies. Throughout this thesis, I weave in narratives shared by our collaborators in order to illuminate our collective learnings, challenges, sources of inspiration, and outcomes.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Integrated System for Interaction in Virtual Environments</title>
<link href="https://hdl.handle.net/1721.1/142815" rel="alternate"/>
<author>
<name>Simonson, Aubrey</name>
</author>
<id>https://hdl.handle.net/1721.1/142815</id>
<updated>2022-06-01T03:24:20Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Integrated System for Interaction in Virtual Environments
Simonson, Aubrey
Many of the systems for interaction currently being used in virtual environments are borrowed from 2D interfaces, such as mobile and desktop computing. These styles of interaction fail to take full advantage of the possibilities offered by immersive environments, and conceptually don’t make sense in 3D space. This thesis proposes and evaluates a pair of tools for interacting with virtual environments which are conceptually 3D, draw metaphors from physical reality, and fit together into an integrated system. The first of these tools, Bird, is a proposed solution for selection and manipulation tasks, and the second, Pockets, is a proposed solution for interacting with menus. Additionally, I propose a series of other tools and interaction techniques which follow the goal of designing from a fundamentally 3D rather than 2D position, and which integrate with Pockets and the Bird, but which were not implemented or tested during the course of this thesis.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear, A Climate Opportunity</title>
<link href="https://hdl.handle.net/1721.1/142813" rel="alternate"/>
<author>
<name>Babio Fernandez, Guadalupe</name>
</author>
<id>https://hdl.handle.net/1721.1/142813</id>
<updated>2022-06-01T03:29:51Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Nuclear, A Climate Opportunity
Babio Fernandez, Guadalupe
Humanity is paying for the consequences of the technical and technological progress of the past decades. As societal challenges such as climate change and urban migration become more obvious, the situation calls for urgent action. The only path out requires immediate efforts to bring net carbon emissions to negative numbers.&#13;
&#13;
We must provide reliable and clean energy for everyone, including lower-income countries that still lack proper access to it, while removing the carbon dioxide emitted over the last years. To do so, carbon sequestration will require as much energy as was used to create those emissions. We need power sources that provide close to unlimited energy without harming the environment, and renewables alone will not solve this problem. Ultimately, nuclear power can meet the world’s need for clean energy security. The reality is that, until fusion becomes available, nuclear power is the safest form of energy humanity has ever used, the one that requires the least land and produces the least toxic waste. A world where we would have infinite power, at zero cost and zero climate impact. Imagine.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for CRISPR Cas12a Multiplexing in Mammalian Systems</title>
<link href="https://hdl.handle.net/1721.1/142812" rel="alternate"/>
<author>
<name>Avila, Mariah J.</name>
</author>
<id>https://hdl.handle.net/1721.1/142812</id>
<updated>2022-06-01T03:17:47Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Methods for CRISPR Cas12a Multiplexing in Mammalian Systems
Avila, Mariah J.
Clustered regularly interspaced short palindromic repeat (CRISPR) proteins have been found in many bacterial species, serving as a defense mechanism to defend the cell from bacteriophage invasion. These proteins are guided to cleave a specific nucleic acid sequence through the binding of a crRNA hairpin. In recent years, these proteins have been adapted for a number of gene editing applications. The fusion of additional proteins of interest to dead nuclease (non-cutting) mutants of CRISPR proteins such as Cas9 and Cas12a have allowed for a wide range of gene editing activities, including gene activation (CRISPRa). However, only a small number of genes can currently be targeted at one time, due to a limit on the number of gRNAs that can be successfully placed into a guide array. This is primarily due to the difficulty of synthesizing arrays with a large number of repetitive sequences such as are required in the crRNA. In addition, the repetitive regions of crRNA pose the danger of homologous recombination in vivo. Through the use of mutant library screening and novel mammalian tissue culture assays designed to interrogate the RNA processing and DNA binding abilities of the CRISPR Type V Cas12a mutant crRNAs, we show methods to improve the multiplexing ability of Cas12a. Using these methods, we constructed and utilized 32-guide arrays for successful activation of endogenous genes using CRISPRa, demonstrating the functionality of a larger LbCas12a array than has been previously published. These guides and methods enable simultaneous activation of endogenous genes for basic science applications, as well as large arrays for use in viral defense systems.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete Continuum Robotic Structures</title>
<link href="https://hdl.handle.net/1721.1/142808" rel="alternate"/>
<author>
<name>Rubio, Alfonso Parra</name>
</author>
<id>https://hdl.handle.net/1721.1/142808</id>
<updated>2022-06-01T03:10:43Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Discrete Continuum Robotic Structures
Rubio, Alfonso Parra
When overcoming environmental constraints, nature shows the capacity to generate hybrid hard-soft morphing continuum structures at very low cost and at almost any scale. Human attempts to replicate nature-like systems that outperform modern engineered solutions based on classical rigid mechanics commonly lead to hyper-redundant and complicated designs. Novel trends like soft robotics and continuum robotics are showing promising new directions, but mostly at small sizes. It remains a challenge to achieve accessible, cost-efficient, and scalable nature-like solutions.&#13;
&#13;
The earliest research on digital materials focused on proving the reversibility of their assembly, their low relative densities versus ultra-high stiffness ratios, and their scalability properties. Now we can find architected metamaterials with many kinds of exotic physical properties. This thesis focuses on digital materials with custom mechanical properties. Recent work showed the capacity to generate controlled mechanical anisotropies such as embedded compliance, chirality, and auxeticity. That enables generating continuum macroscopic foams with controlled deformation that can preserve some properties and bring simplicity to tasks that, with classic rigid-joint mechanical systems, would require a very complex system.&#13;
&#13;
Equally important, many of the modern engineering solutions that would require digital materials depend strongly on their outer shape. The literature pays far less attention to providing an accurate shape to these digital materials. Some of the strategies proposed have been based on hierarchical approaches or on reducing the overall size of the building blocks, but these approaches conflict with many of the claimed premises. This thesis proposes a folded solution that integrates onto the continuum structure and provides a desired shape that is structurally efficient while respecting its intrinsic degrees of freedom.&#13;
&#13;
As a whole, this thesis explores whether heterogeneous digital materials can provide, in an integrated way, all the mechanical needs of a movable structure. It attempts to mimic nature’s engineering strategies by joining the kinematic and shape-form needs into a single material system composed of a discrete building-block core and a folded outer-mold-line layer. As examples, this thesis recreates a water snake and a morphing wing inspired by birds’ camber morphing.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactivity and authenticity in AI-augmented videos</title>
<link href="https://hdl.handle.net/1721.1/142807" rel="alternate"/>
<author>
<name>Sankaranarayanan, Aruna</name>
</author>
<id>https://hdl.handle.net/1721.1/142807</id>
<updated>2022-06-01T03:01:51Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Interactivity and authenticity in AI-augmented videos
Sankaranarayanan, Aruna
The arrival of AI-augmented devices, illusions, algorithms, and a new wave of users who are engaging with these digital artefacts creates the possibility of new and delightful digital experiences. Can AI-augmented digital media allow consumers to engage actively with the content they consume? What would interfaces look like that enable viewers not just to select the content they watch but to actively alter it? Could we build dials and buttons into a new AI-augmented video that viewers can manipulate to invoke a range of alterations to the media they see, now under their control? How would such alterations colour the underlying information in the videos? In such a world, would the viewer be able to tell apart the synthesized from the real? This thesis begins with a series of explorations that apply existing computer vision algorithms to modify the content in a video. I create artistic renditions of news and music videos using neural style transfer algorithms, show how segmentation models can change the way the underlying information is coloured by transplanting backgrounds and foregrounds between videos, use latent expression transformations to modify different intrinsic qualities of a person in an image, and create delightful virtual communication experiences by making certain objects disappear in a video. Inspired by the dramatic differences in visual affect between newscasters from 1969 and 2017, I then design and implement a new computer vision algorithm that allows the viewer to modify the facial affect of a person in the video they are watching using a generative adversarial network. Recognizing the dilemmas associated with commercializing such augmentations for mass consumption, I explore how individuals discern neural-network-driven manipulations, or deepfakes.
To do this, I create a new deepfakes dataset of Presidents Donald Trump and Joseph Biden and test how individuals employ visual, auditory and textual reasoning to differentiate between real and synthesized media objects. I also report findings that Biden voters show a shift towards motivated reasoning based discernment when the political content in a media object is visible in audio or text modalities.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of an Automated Fiber Placement Machine to Build Prosthetic Sockets</title>
<link href="https://hdl.handle.net/1721.1/142806" rel="alternate"/>
<author>
<name>Jaeger, Aaron</name>
</author>
<id>https://hdl.handle.net/1721.1/142806</id>
<updated>2022-06-01T03:03:48Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Design of an Automated Fiber Placement Machine to Build Prosthetic Sockets
Jaeger, Aaron
This thesis presents research towards an automated manufacturing method for producing prosthetic sockets and orthotic interfaces. The prosthetic socket is a noninvasive mechanical interface between the residuum and the prosthesis. There is no standardized method for socket fabrication; sockets are generally produced by hand-laminating carbon fiber or another composite around a hand-sculpted form. This process is expensive, lacks quality control, and limits future socket improvements. Automated manufacturing is a potential way to reduce labor costs and improve quality through standardization.&#13;
&#13;
Automated fiber placement (AFP) is a process commonly used in the aerospace industry to make large, complex composite parts, in which a robotic gantry lays down individual pre-impregnated strips of fiber tow. This thesis prototyped a proof-of-concept desktop AFP machine with four degrees of freedom, designed for building prosthetic sockets, for $10,000, at a scale feasible for small clinics, university research labs, and residential settings. The AFP prototype demonstrated the basic ability to automatically place and laminate strips of fiber. During testing, the prototype maintained a constant compaction force of 75 N with a standard deviation of 1.2 N over varying surfaces and produced the 10 N of fiber tension required for composite lamination.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assembling Integrated Electronics</title>
<link href="https://hdl.handle.net/1721.1/142805" rel="alternate"/>
<author>
<name>Fredin, Zach</name>
</author>
<id>https://hdl.handle.net/1721.1/142805</id>
<updated>2022-06-01T03:32:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Assembling Integrated Electronics
Fredin, Zach
Modern high-performance computing (HPC) systems consist of static architectures built from monolithic components. Miniaturization driven by lithographic technology has pushed Moore’s Law to its limit after more than half a century, to the point that new chips require multi-billion dollar investments and supercomputer systems are built on a decades-long planning horizon. At the same time, typical HPC workloads like physical simulation have inherent geometry which is not reflected in the compute architecture, leading to a broad range of issues from cache concurrency to programming difficulty. Beyond integrated circuits, adjacent problems exist in electronics generally; printed circuit board assemblies (PCBAs) are similarly static, and the production and recycling of these products is environmentally unsustainable and requires extensive infrastructure.&#13;
&#13;
The solution is to modularize electronics and autonomously assemble 3-dimensional computing structures from asynchronous, reusable elements. Of course, this concept brings with it a host of new questions: how are the devices programmed, how is communication bandwidth conserved, how do the elements physically interact, and how are the structures fabricated and assembled?&#13;
&#13;
This thesis provides insight on module design and assembly automation for 3-dimensional electronics through two distinct prototype iterations. Evaluation of these systems revealed the mechanical limitations of commercial connectors, so an alternative method called digital materials is described which merges electrical interconnect and physical substrate. This method discretizes substrates into the fundamental elements that make up interconnect systems: conductive and insulating parts which are properly arranged to route signals to asynchronous processing nodes. Along the way, a novel method for constraining motion in these discrete assembly systems using modular superelastic flexures is introduced, characterized, and used to rapidly fabricate several machines.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A recovery-renewal model for auditory neuron firings</title>
<link href="https://hdl.handle.net/1721.1/142745" rel="alternate"/>
<author>
<name>Wilson, Timothy Alan.</name>
</author>
<id>https://hdl.handle.net/1721.1/142745</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">A recovery-renewal model for auditory neuron firings
Wilson, Timothy Alan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Bibliography: leaves 81-83.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of geothermal energy use as heat in industrial processes</title>
<link href="https://hdl.handle.net/1721.1/142743" rel="alternate"/>
<author>
<name>Gupta, Akhil,
            1959-</name>
</author>
<id>https://hdl.handle.net/1721.1/142743</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">An analysis of geothermal energy use as heat in industrial processes
Gupta, Akhil,
            1959-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo study of fatigue crack growth under random loading</title>
<link href="https://hdl.handle.net/1721.1/142740" rel="alternate"/>
<author>
<name>Harris, Richard Francis.</name>
</author>
<id>https://hdl.handle.net/1721.1/142740</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Monte Carlo study of fatigue crack growth under random loading
Harris, Richard Francis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1975; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Still Standing: Cooperative strategies for the renovation of Soviet mass housing</title>
<link href="https://hdl.handle.net/1721.1/142715" rel="alternate"/>
<author>
<name>Hoyle, Benjamin</name>
</author>
<author>
<name>Levi, Eytan</name>
</author>
<id>https://hdl.handle.net/1721.1/142715</id>
<updated>2022-05-25T03:28:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Still Standing: Cooperative strategies for the renovation of Soviet mass housing
Hoyle, Benjamin; Levi, Eytan
Mass housing across the former Soviet Union is in varying states of disrepair, having lasted much longer than it was expected to when built in the 1960s. Treatment of the buildings varies greatly depending on context, as some are replaced, others are renovated, and many are neglected. But in most places, residents own their apartment units, having obtained them at a minimal cost following the collapse of the USSR. While this leaves many apartment owners responsible for common amenities that they don’t have the means or incentives to maintain, it also puts them in a position to leverage the latent value of the Soviet structures they live in.; Current trends do not take full advantage of these circumstances, and it is often external developers who manage to profit from the land value of Soviet housing, leaving residents with inadequate compensation. No matter what happens to the buildings, the legacy of mass housing is deeply entrenched and will continue to shape the built environment for generations to come. We argue that it is essential to keep the original structures — with modifications and updates — to create agency for residents in how this legacy is carried into the future.; This thesis demonstrates three scenarios in which residents of the same type of prefabricated modernist housing — in sites spread across the former Soviet territory — collectively leverage their apartments to create renovations that serve their common interests. Using contemporary mass timber construction technology and taking full advantage of local real estate markets, residents can self-organize to improve their living spaces.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Group Heterogeneity and Affective Polarization Within The Democratic Party</title>
<link href="https://hdl.handle.net/1721.1/142705" rel="alternate"/>
<author>
<name>Kang, In Hee</name>
</author>
<id>https://hdl.handle.net/1721.1/142705</id>
<updated>2022-05-25T03:37:01Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Group Heterogeneity and Affective Polarization Within The Democratic Party
Kang, In Hee
Historically, the Democratic Party has been known for representing a more diverse group of people and interests. However, in recent election years the Democratic primary campaigns have exhibited a growing hostility between different ideological factions within the party. Drawing on traditional studies of cross-party polarization, this study takes a novel multi-faceted approach to measuring the extent of polarization within the Democratic Party. This includes measures that aim to capture feelings, attitudes, and behaviors towards different ideological groups and their representative political figures. Using data from an original survey, I find compelling evidence of both substantive and affective polarization between liberal and moderate Democrats. That said, in a historical analysis of polarization over multiple election years using ANES data, I also present evidence suggesting a fluidity in both within-party ideological identities and partisan identities depending on which group is more salient at the time. Lastly, I find that cross-party polarization still operates at a higher intensity than within-party polarization. It is my hope that this study opens up future avenues for research on the phenomenon of within-party affective polarization and its effects on vote choice in general elections.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quasi-potential analysis of multi-variate stochastic differential equations</title>
<link href="https://hdl.handle.net/1721.1/142704" rel="alternate"/>
<author>
<name>Malek, Bola</name>
</author>
<id>https://hdl.handle.net/1721.1/142704</id>
<updated>2022-05-25T03:32:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Quasi-potential analysis of multi-variate stochastic differential equations
Malek, Bola
Genetic circuit motifs based on two transcription factors can model cell-fate decisions critical for embryonic development and adult homeostasis. One important such motif is the self-activating toggle switch, which allows for a tri-stable configuration and is believed to be responsible for stem-cell differentiation in multi-cellular organisms. To aid observations and experiments, a theoretical framework for studying these motifs using potential theory from classical physics is sometimes utilized.&#13;
&#13;
This thesis aims to be an expository and pedagogical introduction to this topic. Starting from first principles, I derive the deterministic equations describing these systems. Then, I derive the sources of noise and stochasticity from basic probability theory, arriving at stochastic differential equations for these systems. Finally, I introduce and implement vector field decomposition methods from the literature used to arrive at quasi-potentials for polynomial systems, with demonstrations on example systems. The application of these methods to genetic switches fails; this failure is discussed in Chapter 4.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microfluidics for Calcium Imaging of C. elegans Neurons During Temporally Precise Odor Stimulation</title>
<link href="https://hdl.handle.net/1721.1/142691" rel="alternate"/>
<author>
<name>Sánchez-Jáuregui, Paloma</name>
</author>
<id>https://hdl.handle.net/1721.1/142691</id>
<updated>2022-05-25T03:18:02Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Microfluidics for Calcium Imaging of C. elegans Neurons During Temporally Precise Odor Stimulation
Sánchez-Jáuregui, Paloma
How neurons process sensory stimuli to give rise to different behaviors is one of the most pursued and complex questions in neuroscience. With recent technologies, scientists have been able to map how sensory cues, like odors, are represented in the brain. However, it has proven difficult to study how the processing of sensory information through neural circuits changes with experience. Moreover, due to the complexity of the brain, it is challenging to study how sensory information travels through layers of processing neurons to produce motor responses. But before attempting to solve this difficult task, we need a high-throughput technique that will enable us to localize sites of plasticity in the brain as information about the environment travels through the distinct layers. In this thesis, I provide a protocol for high-throughput calcium imaging in C. elegans. I used the nematode worm C. elegans as a model system for several reasons: (1) odor-evoked responses have been well characterized, both at the neural and behavioral level; (2) scientists have mapped out all 302 neurons and the connections between them; and, more importantly, (3) due to its small nervous system, scientists can do calcium imaging in freely behaving animals at single-cell resolution. Follow-up studies should aim to map out neural responses to different odors and study how these responses change with experience.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lattice vibrational spectra of several mixed crystals of CdS and CdSe.</title>
<link href="https://hdl.handle.net/1721.1/142289" rel="alternate"/>
<author>
<name>Parrish, John Frederic.</name>
</author>
<id>https://hdl.handle.net/1721.1/142289</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1967-01-01T00:00:00Z</published>
<summary type="text">Lattice vibrational spectra of several mixed crystals of CdS and CdSe.
Parrish, John Frederic.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1967; Bibliography: leaves 51-52.
</summary>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing organization structures : review of current theories, and applications to the construction industry</title>
<link href="https://hdl.handle.net/1721.1/142286" rel="alternate"/>
<author>
<name>Szwarcbard, Avraham Arie.</name>
</author>
<id>https://hdl.handle.net/1721.1/142286</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Designing organization structures : review of current theories, and applications to the construction industry
Szwarcbard, Avraham Arie.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1979; Bibliography: leaves 113-116.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsteady hydrodynamic interaction of ships in the proximity of fixed objects</title>
<link href="https://hdl.handle.net/1721.1/142285" rel="alternate"/>
<author>
<name>Tan, Wooi Tong.</name>
</author>
<id>https://hdl.handle.net/1721.1/142285</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Unsteady hydrodynamic interaction of ships in the proximity of fixed objects
Tan, Wooi Tong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1979; Bibliography: leaves 65-66.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The costing and financing of the 'proposed OTEC plants'</title>
<link href="https://hdl.handle.net/1721.1/142284" rel="alternate"/>
<author>
<name>Tandon, Jitendra N.</name>
</author>
<id>https://hdl.handle.net/1721.1/142284</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">The costing and financing of the 'proposed OTEC plants'
Tandon, Jitendra N.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1979; Bibliography: leaves 165-168.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The thermomechanical response of oil shale</title>
<link href="https://hdl.handle.net/1721.1/142283" rel="alternate"/>
<author>
<name>Switchenko, Peter Michael.</name>
</author>
<id>https://hdl.handle.net/1721.1/142283</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">The thermomechanical response of oil shale
Switchenko, Peter Michael.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1979; Bibliography: leaves 204-212.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of an experimental simulation for a human remote control of an undersea vehicle</title>
<link href="https://hdl.handle.net/1721.1/142282" rel="alternate"/>
<author>
<name>Takahashi, Michio.</name>
</author>
<id>https://hdl.handle.net/1721.1/142282</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Design of an experimental simulation for a human remote control of an undersea vehicle
Takahashi, Michio.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1979; Bibliography: leaves 38-39.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct coupled PV/CCD hybrid focal planes</title>
<link href="https://hdl.handle.net/1721.1/142281" rel="alternate"/>
<author>
<name>Szepesi, Leslie Louis.</name>
</author>
<id>https://hdl.handle.net/1721.1/142281</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Direct coupled PV/CCD hybrid focal planes
Szepesi, Leslie Louis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of power density in a superconducting generator</title>
<link href="https://hdl.handle.net/1721.1/142280" rel="alternate"/>
<author>
<name>Tanaka, Kohji.</name>
</author>
<id>https://hdl.handle.net/1721.1/142280</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">A study of power density in a superconducting generator
Tanaka, Kohji.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid optimization in traffic networks</title>
<link href="https://hdl.handle.net/1721.1/142279" rel="alternate"/>
<author>
<name>Tan, Han-Ngee.</name>
</author>
<id>https://hdl.handle.net/1721.1/142279</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Hybrid optimization in traffic networks
Tan, Han-Ngee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recovery of neptunium in the modified purex process</title>
<link href="https://hdl.handle.net/1721.1/142278" rel="alternate"/>
<author>
<name>Tajik, Saeed.</name>
</author>
<id>https://hdl.handle.net/1721.1/142278</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Recovery of neptunium in the modified purex process
Tajik, Saeed.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1979; Includes bibliographical references (leaves 207-212).
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alternative process to produce instant noodles with physical and mechanical characteristics of commercial pasta products</title>
<link href="https://hdl.handle.net/1721.1/142277" rel="alternate"/>
<author>
<name>Sze, Herman Hiu-Lam.</name>
</author>
<id>https://hdl.handle.net/1721.1/142277</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Alternative process to produce instant noodles with physical and mechanical characteristics of commercial pasta products
Sze, Herman Hiu-Lam.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1979; Bibliography: leaves 95-99.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of technological choices in international nuclear fuel assurance strategies.</title>
<link href="https://hdl.handle.net/1721.1/142276" rel="alternate"/>
<author>
<name>Suzuki, Tatsujiro.</name>
</author>
<id>https://hdl.handle.net/1721.1/142276</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">The role of technological choices in international nuclear fuel assurance strategies.
Suzuki, Tatsujiro.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1979; Bibliography: leaves 195-198.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A cost study of an American precast panel system.</title>
<link href="https://hdl.handle.net/1721.1/142275" rel="alternate"/>
<author>
<name>Moghadam, Hamid Reza.</name>
</author>
<id>https://hdl.handle.net/1721.1/142275</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A cost study of an American precast panel system.
Moghadam, Hamid Reza.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: p. 195-199.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A comparative study of American and Japanese construction management.</title>
<link href="https://hdl.handle.net/1721.1/142274" rel="alternate"/>
<author>
<name>Takasaki, Taro.</name>
</author>
<id>https://hdl.handle.net/1721.1/142274</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">A comparative study of American and Japanese construction management.
Takasaki, Taro.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1979; Bibliography: leaves 82-83.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inequality and fertility in developing nations.</title>
<link href="https://hdl.handle.net/1721.1/142270" rel="alternate"/>
<author>
<name>Martin, Robert Scott.</name>
</author>
<id>https://hdl.handle.net/1721.1/142270</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Inequality and fertility in developing nations.
Martin, Robert Scott.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1977; Bibliography: leaves 239-252.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative characterization of crystallographic textures in zirconium-based alloys.</title>
<link href="https://hdl.handle.net/1721.1/142269" rel="alternate"/>
<author>
<name>Knorr, David Bruce.</name>
</author>
<id>https://hdl.handle.net/1721.1/142269</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Quantitative characterization of crystallographic textures in zirconium-based alloys.
Knorr, David Bruce.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The hydrolysis of ammonium sulphide, and the dissociation of the first and second hydrogen of hydrogen sulphide</title>
<link href="https://hdl.handle.net/1721.1/142268" rel="alternate"/>
<author>
<name>Sammet, C. Frank
            (Charles Frank)</name>
</author>
<id>https://hdl.handle.net/1721.1/142268</id>
<updated>2022-05-04T03:16:59Z</updated>
<published>1903-01-01T00:00:00Z</published>
<summary type="text">The hydrolysis of ammonium sulphide, and the dissociation of the first and second hydrogen of hydrogen sulphide
Sammet, C. Frank
            (Charles Frank)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1903
</summary>
<dc:date>1903-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Location in Parts</title>
<link href="https://hdl.handle.net/1721.1/141960" rel="alternate"/>
<author>
<name>Sunder, Aarti</name>
</author>
<id>https://hdl.handle.net/1721.1/141960</id>
<updated>2022-04-20T03:41:03Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Location in Parts
Sunder, Aarti
This thesis frames decapitation and re-capitation as conceptual and creative springboards to help re-think and re-align fictions around online gig economies, their attendant labor practices, and notions of time and space in the digital workplace. Though many of the examples in this document are specific to the Indian context and the Global South, they are also acutely relevant to multiple Local Souths. This thesis updates the discourse on gig labor by examining the subject through the lenses of myth, the non-human, multi-player gaming, and storytelling. By applying an artistic methodology that both implicates and embraces Amazon’s Mechanical Turk platform as collaborator, this thesis explores notions of sovereignty, fictional edges of protest, scientific discoveries, digital-terrestrial play, and image making and reading. As such, this thesis asks what kinds of fictions and relationships can be born out of investigating these contemporary rifts in labor practices, and whether conceptual nodes of thinking the ‘outlandish’ can help us create new(er) pasts, presents and futures with greater empathy.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards mapping spatial transcriptome of an entire vertebrate brain</title>
<link href="https://hdl.handle.net/1721.1/141957" rel="alternate"/>
<author>
<name>Zhang, Ruihan</name>
</author>
<id>https://hdl.handle.net/1721.1/141957</id>
<updated>2022-04-20T03:02:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Towards mapping spatial transcriptome of an entire vertebrate brain
Zhang, Ruihan
The brain’s substantial complexity, together with the technical challenges of monitoring and manipulating it, makes this essential organ difficult to understand. Zebrafish, with their modest brain size and transparency at the larval stage, serve as a model organism for whole-brain in vivo imaging and modeling. While calcium imaging generates substantial amounts of neural activity data, the lack of molecular information for individual neurons in a purely activity-based readout approach limits further biological interpretation. Recent advancements in in situ sequencing allow RNA profiling in its spatial context, which provides rich information on cell types and cell states. In this thesis, we adapted the expansion in situ sequencing (ExSeq) protocol for larval zebrafish brain slices. In brief, performing two rounds of expansion on zebrafish brain slices enabled us to obtain spatially localized sequencing readouts. This lays the foundation for mapping the spatial transcriptome of an entire vertebrate brain.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning Image Augmentation using Inpainting with Partial Convolution and GANs</title>
<link href="https://hdl.handle.net/1721.1/141956" rel="alternate"/>
<author>
<name>Tan, Aik Jun</name>
</author>
<id>https://hdl.handle.net/1721.1/141956</id>
<updated>2022-04-20T03:01:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Deep Learning Image Augmentation using Inpainting with Partial Convolution and GANs
Tan, Aik Jun
The 21st century has seen the remarkable transformation of machine vision by deep learning, enabling intelligent systems like autonomous vehicles and facial recognition software. However, the success of deep learning is largely predicated on the availability of sufficient data; in many instances, data may be scarce and expensive to source. In this thesis, we implemented two deep learning techniques, (1) inpainting using partial convolution and (2) generative adversarial networks (GANs), to generate synthetic data to train deep learning image classifiers. We show that the addition of synthetic training images dramatically improved the accuracy of our defect classifiers. Using Gradient-weighted Class Activation Mapping (Grad-CAM), we also demonstrate that the decision rules learned by the classifiers are significantly enhanced, with the classifiers accurately activating at the specific defect locations upon the addition of synthetic training images. The study was performed at Amgen using real images of syringes and vials, indicating the practicality of the technique for industrial applications.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crossings : the promise of proximity</title>
<link href="https://hdl.handle.net/1721.1/141950" rel="alternate"/>
<author>
<name>Wrzeski, Stanley J.</name>
</author>
<id>https://hdl.handle.net/1721.1/141950</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Crossings : the promise of proximity
Wrzeski, Stanley J.
There are four key notions underlying this thesis: a. there are forces at play which cause many of the home's systems to interact with each other in ways that may be intended or unintended; b. due to these forces, our understanding of each of the home's individual systems may not allow us to understand how they work together; c. in order to understand how the home "works", we may need to redefine its systems based on these interactions, or crossings, and develop an ecology of building systems; d. good-looking homes aren't good enough ... we must move toward the concept of a high-performance home, properly deploying our knowledge of these interactions. The thesis is organized in four principal sections: 1. Crossings proposes a typology for interactions among roleplayers, systems and materials in the home; 2. some of the problems associated with their interactions are illustrated by considering the evolution of the residential chimney (Pipeline to the Sky) and the residential wall (The Great Barrier Rift); 3. Toward the High-Performance Home considers the implications of those interactions; 4. the opportunities in those interactions are examined metaphorically in Service Core: The Promise of Proximity and Information: The Currency of Crossings.
Thesis: S.M., Massachusetts Institute of Technology, Department of Architecture, 1993; Includes bibliographical references (leaves 76-80).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinematic and dynamic simulation of human prosthetic knee joints</title>
<link href="https://hdl.handle.net/1721.1/141943" rel="alternate"/>
<author>
<name>Manzi, Steven Frank.</name>
</author>
<id>https://hdl.handle.net/1721.1/141943</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Kinematic and dynamic simulation of human prosthetic knee joints
Manzi, Steven Frank.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of Vitamin A deficiency on lymphocyte transformation.</title>
<link href="https://hdl.handle.net/1721.1/141942" rel="alternate"/>
<author>
<name>Mark, David Alan.</name>
</author>
<id>https://hdl.handle.net/1721.1/141942</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">The effects of Vitamin A deficiency on lymphocyte transformation.
Mark, David Alan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1977; Vita.; Bibliography: leaves 64-67.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative analysis : Foreign Trade Arbitration Commission, All-Union Chamber of Commerce, Moscow, U.S.S.R.</title>
<link href="https://hdl.handle.net/1721.1/141935" rel="alternate"/>
<author>
<name>Der Marderosian, Armen.</name>
</author>
<id>https://hdl.handle.net/1721.1/141935</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Quantitative analysis : Foreign Trade Arbitration Commission, All-Union Chamber of Commerce, Moscow, U.S.S.R.
Der Marderosian, Armen.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaves 110-112.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study with the differential analyzer of the steady-state and transient response of simple vacuum-tube amplifier circuits under different load conditions</title>
<link href="https://hdl.handle.net/1721.1/141933" rel="alternate"/>
<author>
<name>Burnett, Carlos E.</name>
</author>
<id>https://hdl.handle.net/1721.1/141933</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1933-01-01T00:00:00Z</published>
<summary type="text">A study with the differential analyzer of the steady-state and transient response of simple vacuum-tube amplifier circuits under different load conditions
Burnett, Carlos E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1933; Includes bibliographical references (leaf 82).
</summary>
<dc:date>1933-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed energy platforms: Who will lead the next electricity revolution?</title>
<link href="https://hdl.handle.net/1721.1/141893" rel="alternate"/>
<author>
<name>Gentil-Cantin, Kevin M.</name>
</author>
<id>https://hdl.handle.net/1721.1/141893</id>
<updated>2022-04-14T03:06:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Distributed energy platforms: Who will lead the next electricity revolution?
Gentil-Cantin, Kevin M.
Climate change is forcing rapid change in the energy landscape. Two opposing paths represent plausible futures for the US power sector: centralization vs. distribution. Without trying to predict its likelihood, this thesis explores the implications of one of these paths, a distributed energy future. My objective is to investigate who could become the leader of this revolutionized industry. I specifically explore the potential role of three types of actors: privately-owned Utilities, energy-software Startups, and Large Tech companies, which many observers do not yet factor into their assessments despite what I consider to be their vast scope to shape outcomes. &#13;
&#13;
A literature review establishes the broad contours of a distributed energy future. I find that existing scenarios predict a more complex energy system in which intermittency would become a significant issue. I consider the role of distributed energy resources (DER) as a viable solution if built on digital platforms. To draw out the implications of these observations, I conducted nine in-depth interviews with executives. The analysis yielded four key insights: a distributed system would be a revolution for the industry; Utilities are under great pressure, putting their leading position at risk; Startups are in a position of dependency and cannot take the lead alone; and Large Tech is much closer to playing an active role than it appears at first glance. &#13;
&#13;
I sketch out comparisons with recent history to suggest that Large Tech companies should be considered as serious contenders for the leading role of a decentralized energy industry. I extend my analysis to the unique case of Tesla, also in a position of strength in this plausible future. I conclude this thesis by estimating that the answer depends on how utilities will act in this future, either by transforming themselves fast enough or letting Large Tech companies take over.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gaps Between Screens: Can telehealth bring mental healthcare to those who need it?</title>
<link href="https://hdl.handle.net/1721.1/141892" rel="alternate"/>
<author>
<name>Syed, Nafisa</name>
</author>
<id>https://hdl.handle.net/1721.1/141892</id>
<updated>2022-08-09T20:19:14Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Bridging the Gaps Between Screens: Can telehealth bring mental healthcare to those who need it?
Syed, Nafisa
Amid the United States' growing rates of anxiety and depression, an opioid epidemic, and most recently, the COVID-19 pandemic, the need for accessible mental healthcare continues to rise. While virtual care may seem like a simple solution to this access problem, resource, regulatory, and financial barriers can prevent those who most need care from connecting with mental health professionals. Via interviews with mental healthcare providers, public health experts, and patients, this project looks into the potential and limitations of telemental health when it comes to solving the United States' mental healthcare crisis.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Machine Learning and First-Principle Modeling to Evaluate Design Enhancements in Autoinjectors</title>
<link href="https://hdl.handle.net/1721.1/141891" rel="alternate"/>
<author>
<name>Singh, Ankita</name>
</author>
<id>https://hdl.handle.net/1721.1/141891</id>
<updated>2022-04-14T03:38:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Applications of Machine Learning and First-Principle Modeling to Evaluate Design Enhancements in Autoinjectors
Singh, Ankita
One of the key guiding principles in Amgen operations is to ensure reliability of the offered combination products to best serve patients and maintain a competitive advantage for the business. With a broad device portfolio and increasing sales volume, Amgen now has access to large data repositories and an opportunity to realize its value to “Be Science Based” and utilize this data in innovative ways to improve product designs and training programs. The goal of this project is to use machine learning to augment design decisions and provide products that truly resonate with Amgen’s mission - “To serve patients”.  &#13;
&#13;
This thesis presents a hybrid machine learning and first principle based model that can be used by Amgen to enhance the feedback loop between user experience and product design teams. By leveraging data on predicate autoinjector devices, we created models that can produce user experience insights and provide predictive capabilities for future product designs.&#13;
&#13;
The methodology to generate our models relies on theoretical first principle modeling and data science. We utilized domain knowledge to extract product attributes that contribute towards user experience. One such attribute was drug injection time for an autoinjector. The theoretical model used autoinjector design and drug product features to predict drug injection time. The machine learning model used drug injection time data along with other product design parameters to predict user experiences. &#13;
&#13;
The results of our model provided a direct link between design attributes and user feedback metrics. The accuracy of the hybrid model varied between 50% and 70% depending on the algorithm used. The first principle model results closely matched the empirical injection time data, with only 12% error. &#13;
&#13;
Furthermore, the thesis presents an in-depth analysis on the interpretability of results by utilizing techniques like partial dependence and permutation variable importance charts to enhance the understanding of results generated by a machine learning model.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Considerations for Defense Contractors Entering the Small Satellite Market</title>
<link href="https://hdl.handle.net/1721.1/141890" rel="alternate"/>
<author>
<name>Murray, Angela</name>
</author>
<id>https://hdl.handle.net/1721.1/141890</id>
<updated>2022-04-14T03:28:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Considerations for Defense Contractors Entering the Small Satellite Market
Murray, Angela
Federal defense agencies have historically relied on large unique satellites to accomplish their communication and earth-sensing missions. In recent years, there has been a rapid expansion of the commercial small satellite sector as technology has demonstrated that several small satellites can accomplish the same mission of some larger traditional satellites with a shorter lead time at a more affordable price point. As the advantages of small satellite applications become evident, there is a growing consensus that moving from large exquisite single satellite units to a proliferated Low Earth Orbit (LEO) model is necessary to maintain the United States’ (US) dominance in the space domain and is critical to robust national security.&#13;
&#13;
Company X, one of the largest aerospace and defense manufacturers in the world, has profound expertise in manufacturing large exquisite satellite systems with long lead times, high mission assurance, and high associated costs. Their experience with small satellites, however, has primarily been limited to prototype manufacturing in conjunction with academia, making it difficult to realize the high-volume, low-cost benefits that are seen in the commercial sector. &#13;
&#13;
This thesis presents a framework that can be used by Company X or any defense contractor to evaluate the attractiveness of a small spacecraft program relative to their own capabilities and expertise, as a function of the program’s Technology Readiness Level (TRL), Manufacturing Readiness Level (MRL), contract unit volume, satellite material costs, and technical sophistication. This framework can be used as a tool to determine if they should bid on a contract, and if so, what facilities and personnel should be involved in the manufacturing.&#13;
&#13;
The framework presented in this thesis can be summarized as six lenses for evaluating small satellite contracts. The first evaluates facility type, with Contractor Owned Contractor Operated (COCO) facilities, like Company X, being the optimal location for high TRL and high MRL programs that can utilize manufacturing expertise and justify investment in the program. The second evaluates the optimal manufacturing maturity as a function of the contract’s TRL and MRL requirements. As TRL and MRL increase, the optimal option moves from only using existing facilities with low manufacturing investments to a high-investment facility with assisted assembly, production management, and dedicated test capabilities. The third assesses a contract’s mission assurance requirements to ensure they will not render the program prohibitively expensive. If requirements are prohibitive, it suggests Company X should evaluate whether they can negotiate reduced mission assurance requirements with supplemental quality assurance. The fourth analyzes optimal investment in manufacturing reliability as a function of satellite material costs and contract unit volumes. When evaluating the cost per successful on-orbit unit, it is almost always worthwhile to invest in increased manufacturing reliability. From a strictly manufacturing cost perspective, however, the tradeoffs between reliability and increased per-unit costs have to be evaluated carefully as a function of satellite material costs and contract unit volume. The fifth evaluates different types of expertise within the company, such as space systems versus high-rate manufacturing, and the value of each. The final one assesses whether the program is aligned with corporate strategic priorities or has the potential to demonstrate spacecraft manufacturing capability and thereby generate demand. &#13;
&#13;
It is clear that small satellite constellations and responsive space programs will be a necessary part of the DoD’s approach to space security in the near future. This thesis presents tools for Company X to evaluate which programs are compatible with their capabilities and how to best leverage them.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Shadowspect as a Potential Measure of Spatial Reasoning</title>
<link href="https://hdl.handle.net/1721.1/141801" rel="alternate"/>
<author>
<name>Anteneh, Melat R.</name>
</author>
<id>https://hdl.handle.net/1721.1/141801</id>
<updated>2022-04-09T03:22:56Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Evaluating Shadowspect as a Potential Measure of Spatial Reasoning
Anteneh, Melat R.
Spatial reasoning allows individuals to conceive and manipulate mental representations of objects in space and is an essential process in countless daily activities (Clements &amp; Battista, 1992). The online geometric puzzle game Shadowspect was created as a tool to evaluate players' spatial reasoning skills. The goal of this project was to evaluate Shadowspect’s potential as a spatial reasoning assessment by comparing performance on the game to that on Ramful, Lowrie, and Logan’s (2016) validated Spatial Reasoning Instrument. Shadowspect performance was strongly correlated with performance on the Spatial Reasoning Instrument, particularly when measured as a function of average solve time, i.e., the average time spent solving a puzzle (r=-0.579, p&lt;.001), and total number of levels completed (r=0.705, p&lt;.001). The results of this study indicate that Shadowspect has the capability to serve as a measure of spatial reasoning.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Civic Hacking for the Right to Know and the Right to Privacy</title>
<link href="https://hdl.handle.net/1721.1/141798" rel="alternate"/>
<author>
<name>Lee, Geunhee</name>
</author>
<id>https://hdl.handle.net/1721.1/141798</id>
<updated>2022-04-09T03:42:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Civic Hacking for the Right to Know and the Right to Privacy
Lee, Geunhee
The involvement of civic hackers in national and international crisis mitigation efforts as digital first responders has been widely discussed (Palen et al., 2010). Crisis events often cause civic hackers to break ethical boundaries and governance structures as they attempt to communicate essential information to the public (Crawford and Finn, 2015), raising questions regarding ethical concerns, the sustainability of their projects, and power dynamics with other entities. Interestingly, this is often at odds with their ethical standards, which advocate for the right to privacy against government oversharing of data. Thus, in order to develop better standards of practice for civic hackers in crisis mitigation, it is important to understand the ethical dilemmas that they face. This research focuses on South Korean civic hacking activities during the COVID-19 crisis. Two groups of civic hackers were interviewed and surveyed: 1) those mapping government data and 2) those advocating for improvement in the delivery and dissemination of government data. Through interviews with these two groups, this research study found that although civic hackers struggled to determine ethical data practices, the limited efficacy of their projects due to technical difficulties and lack of community power discouraged them from further sustaining their civic hacking projects, which limited the social impact of their work. The study provides an insider’s view of civic hacking for crisis mitigation that can help inform policy for ethical practices in the delivery and use of data during a crisis event.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Black Public Works: The Political Economy of Race and New Deal Infrastructure</title>
<link href="https://hdl.handle.net/1721.1/141797" rel="alternate"/>
<author>
<name>Ulama, Darryle</name>
</author>
<id>https://hdl.handle.net/1721.1/141797</id>
<updated>2022-04-09T03:27:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Black Public Works: The Political Economy of Race and New Deal Infrastructure
Ulama, Darryle
This thesis examines how New Deal public works intersected with race during a critical juncture in American political development and spatial rationalization. The narrative of the New Deal has often underestimated the infrastructure building that became the nucleus of the Roosevelt Administration’s relief and reform policies as well as the ways in which race and racism structured all levels of New Deal operations. This research highlights the promises and limitations of the “public works revolution” that the New Deal set in motion by exploring the extent to which New Deal infrastructure programs were redistributive along racial lines. Using archival records and agency reports, I offer programmatic histories of seven major public works programs and highlight the types of projects that were built in Black communities. I show how New Deal infrastructure building was layered with contradictions and punctuated with moments of progress as well as lost opportunities for redress. I then analyze public works spending in counties that had sizeable African American populations in the 1930s, including the Black Belt, Gulf Coast, and the metro areas of New York City, Chicago, and Philadelphia to show how state and local politics and the urban-rural line shaped infrastructure outcomes. Lastly, I apply mapping and spatial statistics to identify geographic patterns of public works expenditures across the country, which reveal that low per capita spending tended to cluster in regions with significant Black populations.&#13;
&#13;
By focusing on the racialized dimensions of New Deal infrastructure building, this research challenges the logics that have been offered to explain public works in the fields of American political development, economic history, and fiscal federalism. This thesis also problematizes the redistributive impact of infrastructure on material and fiscal grounds by emphasizing how the policymaking and institutional legacies of New Deal public works are as consequential as their physical achievements. As the U.S. pursues ambitious infrastructure buildout in response to overlapping and unprecedented crises, new approaches to infrastructure policy are needed to fully realize their potential.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physical-Security for Wireless with Orbital Angular Momentum Wave</title>
<link href="https://hdl.handle.net/1721.1/141796" rel="alternate"/>
<author>
<name>Woo, Jongchan</name>
</author>
<id>https://hdl.handle.net/1721.1/141796</id>
<updated>2022-04-09T03:20:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Physical-Security for Wireless with Orbital Angular Momentum Wave
Woo, Jongchan
As technology progresses and an ever-increasing volume of digital data is transmitted day to day, securing data has emerged as a major field of research. Conventional cryptography in the higher layers of the protocol stack has been studied as a technique for protecting data from unauthorized parties by converting secret data into a non-readable binary form. In this work, we leverage OAM-wave-based transmission as an additional layer of physical security to be used with data encryption. A trustworthy key distribution mechanism for a symmetric cryptography protocol is proposed by exploiting random hopping among the orthogonal OAM-wave modes and phases. A Keccak block generates randomness for the OAM modes, and AES is employed for encryption. This work provides physical-layer security that is compatible with any higher-layer encryption technique. The hardware is implemented in 65 nm CMOS technology, and post place-and-route simulation results are presented.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of a data-base information system for building design.</title>
<link href="https://hdl.handle.net/1721.1/141468" rel="alternate"/>
<author>
<name>Folinus, Jeffrey Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/141468</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The design of a data-base information system for building design.
Folinus, Jeffrey Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Bibliography: leaves 171-179.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic planning in a medium-sized manufacturing company.</title>
<link href="https://hdl.handle.net/1721.1/141467" rel="alternate"/>
<author>
<name>Marston, Winslow Mount.</name>
</author>
<id>https://hdl.handle.net/1721.1/141467</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Strategic planning in a medium-sized manufacturing company.
Marston, Winslow Mount.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1977.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The interrelationship of career and family life : an attitudinal study of young management professionals.</title>
<link href="https://hdl.handle.net/1721.1/141466" rel="alternate"/>
<author>
<name>Mannheimer, Toby Shayne.</name>
</author>
<id>https://hdl.handle.net/1721.1/141466</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">The interrelationship of career and family life : an attitudinal study of young management professionals.
Mannheimer, Toby Shayne.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1977; Bibliography: leaves 148-153.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of trends in ambient air quality</title>
<link href="https://hdl.handle.net/1721.1/141465" rel="alternate"/>
<author>
<name>Martin, Michael Kelly.</name>
</author>
<id>https://hdl.handle.net/1721.1/141465</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Analysis of trends in ambient air quality
Martin, Michael Kelly.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speed-power characteristics of hydrofoil vehicles with fully submerged foils</title>
<link href="https://hdl.handle.net/1721.1/141464" rel="alternate"/>
<author>
<name>Mao, Chunn-Sheng.</name>
</author>
<id>https://hdl.handle.net/1721.1/141464</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Speed-power characteristics of hydrofoil vehicles with fully submerged foils
Mao, Chunn-Sheng.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>As the Starling Flies</title>
<link href="https://hdl.handle.net/1721.1/141353" rel="alternate"/>
<author>
<name>McBride, Alice D.</name>
</author>
<id>https://hdl.handle.net/1721.1/141353</id>
<updated>2022-08-09T20:18:49Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">As the Starling Flies
McBride, Alice D.
The concept of science evokes forward momentum, a steady march of progress as scientists accumulate knowledge, refine hypotheses, and make theoretical leaps. But science, it turns out, can also meander as much as a wandering bird. Illustrating this concept is research on the European starling, a migratory species whose annual travels have been helping shape the field of bird migration science for the past hundred and twenty years. Starling research, from early bird banding to current satellite tracking efforts, has been marked by moments of controversy and confusion alongside insight and discovery. Following starlings — and the researchers who study them — reveals the hidden twists and turns that characterize the path to scientific understanding.
Thesis: S.M. in Science Writing, Massachusetts Institute of Technology, Department of Comparative Media Studies/Writing, 2021.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of impedance at high frequencies</title>
<link href="https://hdl.handle.net/1721.1/141132" rel="alternate"/>
<author>
<name>Winter, Robert Henry.</name>
</author>
<id>https://hdl.handle.net/1721.1/141132</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1934-01-01T00:00:00Z</published>
<summary type="text">Measurement of impedance at high frequencies
Winter, Robert Henry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1934; Includes bibliographical references (leaf 73).
</summary>
<dc:date>1934-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Muscle Recruitment Mechanism under Optogenetic Neuromodulation</title>
<link href="https://hdl.handle.net/1721.1/140999" rel="alternate"/>
<author>
<name>Herrera-Arcos, Guillermo</name>
</author>
<id>https://hdl.handle.net/1721.1/140999</id>
<updated>2022-03-04T03:21:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Muscle Recruitment Mechanism under Optogenetic Neuromodulation
Herrera-Arcos, Guillermo
Neurological conditions in which the communication pathway between the central nervous system and the peripheral neuromuscular components is severed, such as spinal cord injury, affect motor control, limiting patients’ overall quality of life. Functional electrical stimulation (FES) is the most commonly used technology to restore motion. Although FES is used extensively in the clinic, several drawbacks limit its application for long-term use. FES recruits larger motor units before smaller ones, causing muscles to fatigue quickly, as it opposes the physiological recruitment mechanism. FES also has low specificity, making neighboring tissues susceptible to simultaneous activation. These drawbacks make force modulation difficult, limiting its controllability. &#13;
&#13;
Recently, functional optogenetic stimulation (FOS) has demonstrated cell-type specificity and millisecond timescale neural control in the peripheral nervous system, enabling reduced fatigue and greater controllability when compared to FES. Given the novelty of FOS, no study to date thoroughly describes how muscle fibers are recruited to generate force under optical stimulation. &#13;
&#13;
Using precise peripheral neural stimulation and sensing, this work shows the first muscle characterization and systematic production of recruitment curves under optical stimulation. These data show a significantly higher modulation range under FOS when compared to FES, indicating physiological graded force modulation. This informs how different optical stimulation parameters translate to functional force production and how modulation strategies can be optimized to orchestrate motor recruitment. A mathematical model that describes the biophysical dynamics observed experimentally is also presented. This work lays the foundation for the design of model-informed neural controllers for optically modulated prosthetics, with the potential to become the first viable alternative to FES for muscle re-animation applications.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computationally Designed Peptide Binder and Molecular Beacon for SARS-CoV-2</title>
<link href="https://hdl.handle.net/1721.1/140998" rel="alternate"/>
<author>
<name>Ponnapati, Raghava Manvitha Reddy</name>
</author>
<id>https://hdl.handle.net/1721.1/140998</id>
<updated>2022-03-04T03:19:17Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computationally Designed Peptide Binder and Molecular Beacon for SARS-CoV-2
Ponnapati, Raghava Manvitha Reddy
The COVID-19 pandemic has caused a catastrophic loss of human life. With only a few approved vaccine candidates and the slow rate of vaccine distribution, particularly in developing nations, there is a need for antiviral candidates and rapid diagnostic solutions. This thesis describes a hybrid pipeline that combines machine learning tools, energy-based simulations, and experimental validation to develop an ACE2-derived peptide that targets the viral spike protein receptor-binding domain (RBD). The peptide was derived by utilizing the existing crystal structure of the spike protein’s RBD and ACE2 to determine the linear peptide fragments that contributed the most to the binding energy of the complex. We tested these linear peptide fragments against the spike protein RBD using a degradation assay and identified a 23-amino-acid peptide fragment as a strong candidate for computational and experimental mutagenesis.&#13;
&#13;
We also present a molecular beacon that detects SARS-CoV-2 spike protein through a conformational switch. Our molecular beacons contain two peptides that can form a parallel heterodimer and a binding ligand between them to detect the SARS-CoV-2 spike protein. A fluorophore-quencher pair is attached to the two ends of the heterodimer stems. In the absence of SARS-CoV-2 spike protein (OFF state), the peptide beacon has a hairpin conformation that opens upon binding to the spike protein and produces a fluorescence signal (ON state).&#13;
&#13;
All of the pipelines developed as part of this thesis are applicable to other protein targets of interest.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed and Private Computation for Inference</title>
<link href="https://hdl.handle.net/1721.1/140997" rel="alternate"/>
<author>
<name>Singh, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/140997</id>
<updated>2022-03-04T03:45:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Distributed and Private Computation for Inference
Singh, Abhishek
Recent progress in mobile and cloud computing, coupled with the increase in data, has resulted in a data-driven ecosystem that is making an impact in several domains of science and engineering. However, this data-driven ecosystem lacks protective measures for privacy, resulting in regulations and behaviors that restrict data sharing. Augmenting the existing data-driven ecosystem with privacy-preserving solutions could unlock access to data silos, increasing the impact manifold. In this thesis, I discuss and identify gaps in some of the existing works and develop privacy-preserving mechanisms for data analysis and distributed computation. At an abstract level, existing work in this domain includes federated learning, differential privacy, and encrypted computations. I describe practical scenarios where all these approaches do not suffice due to their intrinsic computational infeasibility or suboptimal privacy-utility trade-off. This work augments such existing approaches by improving certain trade-offs and utilizing priors specific to the problem.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decoding of Facial Strains via Conformable Piezoelectric Interfaces and Three-Dimensional Digital Image Correlation</title>
<link href="https://hdl.handle.net/1721.1/140996" rel="alternate"/>
<author>
<name>Tasnim, Farita</name>
</author>
<id>https://hdl.handle.net/1721.1/140996</id>
<updated>2022-03-04T03:16:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Decoding of Facial Strains via Conformable Piezoelectric Interfaces and Three-Dimensional Digital Image Correlation
Tasnim, Farita
Devices that facilitate nonverbal communication typically require high computational loads or have rigid and bulky form factors that are unsuitable for use on the face or on other curvilinear body surfaces. This work reports the design and pilot testing of an integrated system for decoding facial strains and for predicting facial kinematics. The system consists of mass-manufacturable, conformable piezoelectric thin films for strain mapping; multiphysics modelling for analysing the nonlinear mechanical interactions between the conformable device and the epidermis; and three-dimensional digital image correlation (3D-DIC) for reconstructing soft-tissue surfaces under dynamic deformations as well as for informing device design and placement. Most biomedical sensor designs lack an in-depth study of the target soft tissue before the design and fabrication of the sensor meant to couple to that tissue. This work demonstrates the use of 3D-DIC as a method for in-depth biokinematic study of the target region upon which a sensor with mechanically active functional material, such as piezoelectrics, will be placed. Just as chemical assays of a body part would be conducted before designing medication for disorders of that body part, so too does 3D-DIC allow for the mechanical study of biological soft tissue before designing the mechanically active functional materials on mechanically adaptive substrates that are meant to intimately integrate with that soft tissue. Thus, 3D-DIC allowed for the design of a conformable piezoelectric device that mimics the properties of skin and that can interpret and distinguish facial strains in real time and with low computational load, i.e. with reduced data streaming bandwidth.
Finally, pilot studies on healthy individuals and on individuals with amyotrophic lateral sclerosis show that these conformable piezoelectric devices, coupled with algorithms for the real-time detection and classification of distinct skin-deformation signatures, enable the reliable decoding of facial movements. The integrated system could be adapted for use in clinical settings as a nonverbal communication technology or for use in the monitoring of neuromuscular conditions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enlightened: Can short-form news videos open minds?</title>
<link href="https://hdl.handle.net/1721.1/140989" rel="alternate"/>
<author>
<name>Jiang, Mike Hao</name>
</author>
<id>https://hdl.handle.net/1721.1/140989</id>
<updated>2022-03-04T03:34:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Enlightened: Can short-form news videos open minds?
Jiang, Mike Hao
The United States of America has become severely polarized over the last twenty years, coincident with the increase in niche and fringe media. This contributes to the fragmentation of the shared assumptions, beliefs, and trust in information that comprise one’s perception of reality. Recently, the short-form video format has gained massive popularity in the world of social media and mobile applications (e.g. TikTok). To investigate whether this media format can be used to restore the shared reality among U.S. liberals and conservatives, I built Enlightened, a mobile-first progressive web application that presents short, swipeable news videos in a manner similar to the popular dating app Tinder. The news clips were sourced from five major TV networks across the ideological spectrum (MSNBC, CNN, Bloomberg TV, ABC News and Fox News) and processed by SuperGlue, a real-time news recording and processing system to which I have also contributed. The processed news videos were summarized using a variation of the TextRank algorithm on the closed captions, and the news source was visually masked by removing the lower third of the video using FFmpeg. Although the current interface of Enlightened has limited features, the results of a user study consisting of two surveys and the daily usage of Enlightened suggest that masked short-form news videos show great promise in opening the minds of both conservative and liberal users. However, the biggest limitation of this thesis is the small size of the user study. Hence, a larger-scale test needs to be conducted to ascertain whether short-form news videos can open minds.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-Aware Ensembling in Multi-Modal AI and its Applications in Digital Health for Neurodegenerative Disorders</title>
<link href="https://hdl.handle.net/1721.1/140988" rel="alternate"/>
<author>
<name>Sarawgi, Utkarsh</name>
</author>
<id>https://hdl.handle.net/1721.1/140988</id>
<updated>2022-03-04T03:46:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Uncertainty-Aware Ensembling in Multi-Modal AI and its Applications in Digital Health for Neurodegenerative Disorders
Sarawgi, Utkarsh
Common neurodegenerative disorders such as Alzheimer's dementia and Parkinson's disease are increasingly recognised as leading causes of death and disability with debilitating symptoms such as progressive cognitive decline, communication breakdown, motor dysfunction and accompanying psychiatric disorders. However, factors such as unavailability of efficient and cost-effective assessments for conclusive diagnosis, time-consuming test protocols, poor prognostic capabilities, and inadequate treatment options with accompanying side effects are all barriers to progress in providing faster and more effective intervention to individuals living with these life-altering disorders. In this thesis, we take a step towards using digital health and machine learning to improve diagnostic and prognostic capabilities and to address remote care via telemedicine in Alzheimer's dementia and Parkinson's disease. Our goal is to provide more cost-effective, non-invasive, and scalable technologies for risk stratification of Alzheimer's dementia using speech. We also aim to monitor drug response and disease progression for Parkinson's disease via telemedicine, allowing real-time symptom tracking through wearables alongside a patient's treatment status, which will help facilitate remote care and dynamic and adaptive treatment plans. In addition to addressing the challenges in diagnosis and treatment of neurodegenerative disorders, we further propose a novel uncertainty-aware boosting technique for multi-modal ensembling and evaluate it on healthcare tasks related to Alzheimer's dementia and Parkinson's disease. This presents manifold benefits, such as reducing the overall entropy of the system, making it more robust to heteroscedasticity, and improving calibration of each of the modalities along with high-quality prediction intervals.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robotic Grasping of Fully-Occluded Objects using RF Perception</title>
<link href="https://hdl.handle.net/1721.1/140986" rel="alternate"/>
<author>
<name>Boroushaki, Tara</name>
</author>
<id>https://hdl.handle.net/1721.1/140986</id>
<updated>2022-03-04T03:03:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Robotic Grasping of Fully-Occluded Objects using RF Perception
Boroushaki, Tara
We present the design, implementation, and evaluation of RF-Grasp, a robotic system that can grasp fully-occluded objects in unknown and unstructured environments. Unlike prior systems that are constrained by the line-of-sight perception of vision and infrared sensors, RF-Grasp employs RF (Radio Frequency) perception to identify and locate target objects through occlusions, and perform efficient exploration and complex manipulation tasks in non-line-of-sight settings.&#13;
&#13;
RF-Grasp relies on an eye-in-hand camera and batteryless RFID tags attached to objects of interest. It introduces two main innovations: (1) an RF-visual servoing controller that uses the RFID’s location to selectively explore the environment and plan an efficient trajectory toward an occluded target, and (2) an RF-visual deep reinforcement learning network that can learn and execute efficient, complex policies for decluttering and grasping.&#13;
&#13;
We implemented and evaluated an end-to-end physical prototype of RF-Grasp and a state-of-the-art baseline. We demonstrate that it improves success rate and efficiency by up to 40-50% in cluttered settings. We also demonstrate RF-Grasp in novel tasks such as mechanical search of fully-occluded objects behind obstacles, opening up new possibilities for robotic manipulation. Qualitative results (videos) are available at rfgrasp.media.mit.edu
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning to promote transparent provenance of genetic engineering</title>
<link href="https://hdl.handle.net/1721.1/140985" rel="alternate"/>
<author>
<name>Ethan Chase Alley</name>
</author>
<id>https://hdl.handle.net/1721.1/140985</id>
<updated>2022-03-04T03:13:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Machine learning to promote transparent provenance of genetic engineering
Ethan Chase Alley
The promise of biotechnology is tempered by its potential for accidental or deliberate misuse. Reliably identifying provenance by examining telltale signatures characteristic to different genetic designers, termed genetic engineering attribution, would deter misuse, yet is still considered unsolved. In this work, we present analysis of the biosecurity implications of improved tools for attribution, arguing that the technology has robust co-benefits for deterring misuse and promoting responsible innovation. Then, we demonstrate that recurrent neural networks trained on DNA motifs and basic phenotype data can reach 70% attribution accuracy distinguishing between over 1,300 labs. To make these models usable in practice, we introduce a framework for weighing predictions against other investigative evidence using calibration, and bring our model to within 1.6% of perfect calibration. Additionally, we demonstrate that simple models can accurately predict both the nation-state-of-origin and ancestor labs, forming the foundation of an integrated attribution toolkit which should promote responsible innovation and international security alike. Finally, we discuss ongoing work to crowdsource improved attribution tools via an open data science challenge.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spending oil wealth : a study of Iran's strategies for allocating oil revenues to national development and foreign policy goals</title>
<link href="https://hdl.handle.net/1721.1/140445" rel="alternate"/>
<author>
<name>Brackeen, Richard Ennis.</name>
</author>
<id>https://hdl.handle.net/1721.1/140445</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Spending oil wealth : a study of Iran's strategies for allocating oil revenues to national development and foreign policy goals
Brackeen, Richard Ennis.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaves 145-156.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Causal Impact of Information Crowd-sourcing Platform on Agricultural Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/140419" rel="alternate"/>
<author>
<name>Kari, Teuku Mahfuzh Aufar</name>
</author>
<id>https://hdl.handle.net/1721.1/140419</id>
<updated>2022-02-17T03:41:33Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Causal Impact of Information Crowd-sourcing Platform on Agricultural Supply Chain
Kari, Teuku Mahfuzh Aufar
This thesis aims to study the impact of increased access to market information on the welfare of micro-firms in informal supply chains, as measured by revenue and price realization. In most informal supply chains, micro-firms receive access to only limited market information through their informal networks (e.g. SMS or WhatsApp). Previous literature finds that the introduction of mobile phones significantly improves market performance for perishable commodities but not so for non-perishable goods. We study whether tools that increase access to market information beyond informal information channels, e.g. through distributed price promotions or through crowdsourcing, can further improve market performance. By leveraging transactions data of an information-sharing platform used by micro-firms in informal supply chains in a developing country and data from a pre-intervention survey, we seek to investigate the causal impact of the platform on the revenue of the users.&#13;
&#13;
We found that the app leads to a significant increase in the price (0.43%), post grading price (0.4%), and revenue (0.5%) realization of the sellers. We observed significant heterogeneity in post grading price treatment effect and new buyer rate. The heterogeneity in the former is driven by the seller’s learning, while the heterogeneity in the latter is driven by local information availability.&#13;
&#13;
Difference in means analysis revealed that the difference in post grading price treatment effect between users with high and users with low experience is around 80% of the average treatment effect. Linear regression indicates that as the user builds familiarity in using the platform, the conditional average treatment effect can grow by up to 7 times (from 0.2% to 1.6%). We found no evidence that information availability or prior commitment to sell to certain buyers affect the post grading price treatment effect.&#13;
&#13;
Difference in means analysis indicated that heterogeneity in new buyer rate is driven by two factors, local information availability and prior commitment of the seller to sell to certain buyers. We found that the probability of transacting with a new buyer increases with local information availability but decreases with the presence of prior commitment. Regression analysis found statistically non-zero positive effect for local information availability, but not for prior commitment.&#13;
&#13;
The observation that local information availability increases new buyer rate but not post grading price treatment effect can be explained by the fact that users may opt to switch to a new buyer for reasons other than better pricing, e.g. a shorter travelling distance. Due to difficulty in estimating transportation cost and travelling distance, our analysis in this work is limited to top-line impact only. Our analysis does not account for users who switched to a new buyer for travelling reasons and benefit from the savings in transportation cost. Consequently, the results reported here are an underestimation of the actual impact of the intervention.&#13;
&#13;
Overall, we presented a proof of concept for an information crowd-sourcing platform as a low-cost option to improve access to market information.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Keep the Lights On: Ensuring Bulk-Power System Reliability in a Decarbonized Future</title>
<link href="https://hdl.handle.net/1721.1/140418" rel="alternate"/>
<author>
<name>Wang, Cathy</name>
</author>
<id>https://hdl.handle.net/1721.1/140418</id>
<updated>2022-02-17T03:32:28Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Keep the Lights On: Ensuring Bulk-Power System Reliability in a Decarbonized Future
Wang, Cathy
Policy and market forces are ushering in a new power system, one dominated by variable renewable energy (VRE) resources (wind and solar) and energy-limited resources like energy storage. Because these resources have different characteristics than the conventional thermal generators that make up the bulk of our capacity mix currently, this capacity turnover necessitates new ways of thinking about and planning for bulk-power system reliability.&#13;
&#13;
This Thesis evaluates the modeling approaches currently used in capacity planning and resource adequacy frameworks, and proposes a new iterative approach to incorporating cost-efficiency, decarbonization, and reliability goals into capacity and resource adequacy planning. As recent large-scale blackout events in California and Texas illustrate, both demand and supply can be heavily impacted by extreme weather events, contributing to more conditions of system stress in the years to come.&#13;
&#13;
By carefully taking into account periods of high risks of incurring reliability shortfalls, we show that actual reliability can be greatly improved in a systems analysis, compared to separate planning and resource adequacy analyses. Going forward, we need to find better ways of capturing the variations and correlations between time-coincident VRE output, load realizations, and unplanned thermal generator outages, to appropriately characterize and communicate the risks of power supply shortfalls (i.e., duration, frequency, magnitude). This has key implications on how end-use customers think about losing power for some period of time, how much they are willing to pay for customer-side reliability, and how their preferences are reflected at the system level.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Supply Chains and Markets to Support Humanitarian Response Analysis</title>
<link href="https://hdl.handle.net/1721.1/140417" rel="alternate"/>
<author>
<name>Downing, Tristan</name>
</author>
<id>https://hdl.handle.net/1721.1/140417</id>
<updated>2022-02-17T03:11:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Modeling Supply Chains and Markets to Support Humanitarian Response Analysis
Downing, Tristan
In a crisis, information about supply chains and markets for essential commodities can be sparse, yet an understanding of them is critical to delivering effective humanitarian assistance. Humanitarian organizations are looking to improve their processes to assess and analyze supply chains and markets to inform response option analysis. Existing methods remain limited in their consideration of supply chains and markets as inherently dynamic and complex systems; this thesis develops and applies two complementary methods to capture this dynamism and complexity to produce outputs useful for decision-making. The first method, multi-mode information aggregation, involves continuously synthesizing new information from a range of sources to form an understanding of the situation. It was developed and applied to guide United States Agency for International Development (USAID) food security programming to support the maize market system in Uganda in response to the COVID-19 pandemic. A key insight from this application was that female rural traders in border areas may be more significantly affected than other traders. The second method, a system dynamics model, models the behavior of a supply chain for essential commodities in a crisis. It was developed and applied to study the effects of the displacement crisis in Northeast Nigeria on the supply chain for rice in Borno State, and to inform International Committee of the Red Cross (ICRC) processes and response. The model was used to project outcomes for target populations under different scenarios and humanitarian response options, incorporating in-kind assistance, cash assistance, and credit for supply chain actors. A key finding was that when cash assistance is being provided to a broad target population, further humanitarian spending may be significantly more effective as credit to supply chain actors instead of as more cash assistance to the target population. 
Results from the model also highlighted other potential areas for humanitarian intervention, such as improving access to market information. Broadly, both these methods highlight the need to consider supply chains and markets as complex and dynamic systems that can be disrupted by a crisis and the resulting humanitarian programming, but can also be harnessed to deliver assistance more effectively to people in need.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role of Natural Gas in Future Low-carbon Energy Systems</title>
<link href="https://hdl.handle.net/1721.1/140416" rel="alternate"/>
<author>
<name>Schwartz, Aaron Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/140416</id>
<updated>2022-02-17T03:05:08Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Role of Natural Gas in Future Low-carbon Energy Systems
Schwartz, Aaron Matthew
Concerns over climate change along with rapidly falling costs of clean energy technologies have led to increased scrutiny over the role of fossil-fuels in a low-carbon energy future. This thesis evaluates the role of natural gas-fired power plants (NG) in future electrical grids using an advanced, multi-period capacity expansion modeling framework with perfect foresight. We model cost-optimal grid operations, investments, and retirements through 2050 using a detailed representation of the American Southeast’s electrical grid which includes inter-region transmission, variable renewable energy resource characteristics, brownfield capacity, and lifetime and economic retirements. We examine several pathways to a highly decarbonized grid, assuming rapid growth in energy demand through mid-century. Sensitivities include CO2 emissions limits, technology costs, nuclear plant lifetime extensions, and NG deployment and financing schemes which aim to minimize stranded costs.&#13;
&#13;
We find that investments in NG are made across all scenarios evaluated, as well as unprecedented deployments of variable renewable energy resources and battery storage. Results highlight the substantial emissions contributions of the existing coal fleet, and the potential for emissions reductions if lower-carbon generation resources, including new NG with and without carbon capture and storage, can replace this capacity. Furthermore, emissions limits which require the lowest mid-century CO2 emissions do not necessarily lead to the greatest cumulative emissions reductions over the planning horizon. These results support a nuanced approach to resource planning for future low-carbon grids which considers both short-term and long-term emissions reductions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Role of Hydrogen in Industrial Decarbonization: A Case for Ammonia Industry in the United States</title>
<link href="https://hdl.handle.net/1721.1/140415" rel="alternate"/>
<author>
<name>Bose, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/140415</id>
<updated>2022-02-17T03:28:29Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Role of Hydrogen in Industrial Decarbonization: A Case for Ammonia Industry in the United States
Bose, Abhishek
Ammonia production contributes more than 1% of global greenhouse gas (GHG) emissions while being used to serve a majority of the demand for nitrogen-containing fertilizer for agricultural use. While the predominant route for ammonia production today relies on natural gas as a source of energy and hydrogen for thermochemical Haber-Bosch (HB) synthesis, there is growing interest in electrically-driven routes that can reduce the carbon footprint of ammonia production by relying on low-carbon electricity supply from variable renewable energy (VRE) sources. This electrically-driven ammonia route could not only serve existing uses for fertilizer production, but also be deployed to service energy needs for other end-use sectors where ammonia use is being contemplated (e.g. marine transport). Here, we evaluate the spatial variations in cost of the above electrically-driven ammonia process, predominantly across the U.S., for different scenarios of electricity supply as well as technology cost scenarios for 2030. Our approach goes beyond prior techno-economic assessments of electricity-driven ammonia production by explicitly accounting for variability in electricity supply and its implications for plant design, cost, and emissions. This is achieved by using a least-cost integrated design and operations modeling framework that treats as variables the relative sizing of various units (e.g. electrolyzer, Air Separation Unit, renewables capacity), including deployment of alternative forms of on-site storage (battery energy storage, gaseous &#119867;2 and liquid &#119873;2). The overall mixed-integer linear programming (MILP) model is able to optimize for the minimum annualized cost of providing round-the-clock ammonia under the required system emission and flexibility constraints. 
We also evaluate dedicated grid connected VRE-based ammonia production for locations in close proximity to existing &#119873;&#119867;3 production facilities and agricultural hubs in the US, to identify the cost-optimal VRE mix and storage requirements for future projections of grid scenarios in the US. Based on this framework, we are able to develop optimal sizing requirements for the facility in terms of VRE and capital investments in equipment to be able to sustain round-the-clock production. Our analysis shows that a standalone renewable ammonia production facility makes use of storage of intermediate products (&#119873;2, &#119867;2) in the production process so as to be able to dispatch them during non-availability of renewable electricity. To meet the minimum power input necessary to operate the thermochemical HB process, electrochemical storage (e.g. Li-ion) is also needed. However, if the thermochemical HB process can be operated at less than nameplate feed flow rates, the need for Li-ion battery storage is minimized, allowing for more cost-effective production options.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer-aided design tools for superminds: Understanding user needs and evaluating design options</title>
<link href="https://hdl.handle.net/1721.1/140374" rel="alternate"/>
<author>
<name>Liew, Katherine Mei Fong</name>
</author>
<id>https://hdl.handle.net/1721.1/140374</id>
<updated>2022-02-16T03:38:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computer-aided design tools for superminds: Understanding user needs and evaluating design options
Liew, Katherine Mei Fong
The rise of machine learning and progress on the path towards advanced artificial intelligence requires us to think about the future of work, design and how both will be conducted when humans and machines work together. The supermind design methodology was developed to aid the formation of groups of humans and machines to solve problems together.&#13;
&#13;
This thesis addresses the future of collective human-computer problem solving at two levels. The first is to identify the best initial user segment and user needs for “computer-aided design tools” to help users apply the supermind design methodology. The second is to design the feature set, user interface and ongoing user analytics for a software tool powered by the GPT-3 deep learning model that can support practitioners of the supermind design methodology.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Benefits of Offline Merchandise in Brand Building</title>
<link href="https://hdl.handle.net/1721.1/140373" rel="alternate"/>
<author>
<name>Kim, Saemi</name>
</author>
<id>https://hdl.handle.net/1721.1/140373</id>
<updated>2022-02-16T03:18:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Benefits of Offline Merchandise in Brand Building
Kim, Saemi
The global merchandise market is worth $300 billion and is expected to continue growing at a steady rate. Seven of the ten top players in this industry relate to character licensing, and five directly associate with developing and utilizing fictional characters. From Pooh and Peter Rabbit to BT21 and League of Legends, characters have proven to be lasting presences, both in profitability and in customer behavior. Much research has covered the commercial implications of such branding, but more research is needed on how corporations can create lasting characters and fully utilize them for increased brand strength. &#13;
&#13;
The paper analyzes the ways in which corporations have historically worked with characters along with developments in the modern market that enable this market to flourish. Such factors include the proliferation of the Internet and changing consumer behaviors; social media is a new force shaping purchase decisions, and related industries have witnessed increased acceptance of adults enjoying content traditionally associated with children. It then discusses the unique benefits and strengths that fictional characters possess in contributing to corporate branding and presence, followed by a discussion of Line Corporation and its unexpected breakthrough in global markets through the success of its original characters. Corporations can now build their own superstars, but still for time-old purposes: building empathetic, humane relationships with their customers.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multivariate Singular Spectrum Analysis:  A Principled, Practical, and Performant Solution for Time Series Imputation and Forecasting</title>
<link href="https://hdl.handle.net/1721.1/140365" rel="alternate"/>
<author>
<name>Alomar, Abdullah</name>
</author>
<id>https://hdl.handle.net/1721.1/140365</id>
<updated>2022-02-16T03:12:39Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Multivariate Singular Spectrum Analysis:  A Principled, Practical, and Performant Solution for Time Series Imputation and Forecasting
Alomar, Abdullah
The analysis of multivariate time series data is of great interest across many domains, including cyber-physical systems, finance, retail, and healthcare, to name a few. A common goal across all of these domains is accurate imputation and forecasting of multivariate time series in the presence of noisy and/or missing data. Given the growing need to embed predictive functionality in high-performance systems, especially in applications with time series data (e.g., financial systems, control systems), it is increasingly vital that we build principled prediction algorithms that are statistically and computationally performant, and more broadly accessible. To that end, we introduce a novel variant of multivariate Singular Spectrum Analysis (mSSA) that allows for accurate imputation and forecasting of both time-varying mean and variance of multivariate time series. We further justify this algorithm by introducing a natural spatio-temporal factor model, under which the algorithm is theoretically analyzed. Specifically, we establish the in-sample prediction error of our mSSA variant for both imputation and forecasting. &#13;
Further, we propose an incremental variant of the algorithm, upon which a real-time prediction system for time series data, tspDB, is instantiated and evaluated. tspDB aims to increase accessibility to predictive functionalities for time series data through direct integration with existing relational time series databases. Finally, through rigorous experiments, we show that tspDB provides state-of-the-art statistical accuracy while maintaining superior computational performance with an incremental model update, low model training time, and low latency for prediction queries.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovations in Game-based Learning: How Lead Users Created Minecraft: Education Edition</title>
<link href="https://hdl.handle.net/1721.1/140362" rel="alternate"/>
<author>
<name>Crespo, Amelia</name>
</author>
<id>https://hdl.handle.net/1721.1/140362</id>
<updated>2022-02-16T03:31:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Innovations in Game-based Learning: How Lead Users Created Minecraft: Education Edition
Crespo, Amelia
Over the past four decades, myriad studies have shown that lead users are a significant source of major innovations in various industries. Further, studies have shown that innovations by lead users have resulted in economic benefits to firms while also satisfying the needs and improving the lives of users. With a long-standing and well-established body of evidence, it would be easy to assume that industry leaders would have adopted lead user methods widely. However, Bradonjic et al. (2019) found that, in a survey of 1500 key decision-makers, a substantial number still underestimate the frequency and value of lead user innovations. &#13;
&#13;
In order to better understand how firms work with lead users, I apply lead user research methods to the game-based learning (GBL) market to determine if lead users play a major role in developing functionally significant innovations in a specific GBL product, Minecraft: Education Edition. I find that lead users (teachers) are in fact the originators of Minecraft: Education Edition itself, as well as originators of 90% of significant, functionally novel innovations added to this game over time. In contrast, and in line with existing research findings, producers are found to be the developers of 100% of the dimension-of-merit innovations – innovations that allow product users to perform user-pioneered functions “better.” &#13;
&#13;
The fact that lead users are an important source of innovations in the GBL field suggests that it would be valuable for producers to learn to manage and support this valuable source of innovations as effectively as possible. In a concluding section, I suggest how game producers can align their innovation processes to both support and learn from lead user innovation more effectively than is often the case today.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowledge Management in Multinational Companies: Informative Case Studies and Their Applications to the Future</title>
<link href="https://hdl.handle.net/1721.1/140360" rel="alternate"/>
<author>
<name>Yu, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/140360</id>
<updated>2022-02-16T03:10:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Knowledge Management in Multinational Companies: Informative Case Studies and Their Applications to the Future
Yu, Catherine
Knowledge Management is the process of documenting, storing, and communicating data and information so that they can be applied to a company’s knowledge. It is often used as a way to help educate employees, give employees access to information, and store knowledge in an organized format. The primary goal of Knowledge Management is to efficiently get the right information and knowledge to the right person in a timely manner. &#13;
&#13;
Knowledge Management has become an institutional tool to help businesses retain information and easily pass on information to an organization’s employees. It has become a fundamental way for employees to train and learn in a tactful manner. This thesis will highlight the best practices for a multinational company to follow: from creating the right Knowledge Management System to house information, to incorporating Enterprise Social Networks to promote communication, to establishing initiatives focusing on networking and innovation. To properly understand each of these different aspects of Knowledge Management, we will highlight different cases from various multinational companies on successful (and unsuccessful) tactics used. These case studies are meant to show how different industries highlight the do’s and don’ts of how to successfully implement Knowledge Management into a multinational company, and how to use Knowledge Management to build a successful culture.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Houseful(l)ness of Public Space</title>
<link href="https://hdl.handle.net/1721.1/140358" rel="alternate"/>
<author>
<name>Alvarez, Paige Xiomara</name>
</author>
<id>https://hdl.handle.net/1721.1/140358</id>
<updated>2022-02-16T03:18:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Houseful(l)ness of Public Space
Alvarez, Paige Xiomara
Domestic life - the programs and functions most closely associated with housing and home - has been largely programmed out of the public spaces of cities in order to make them inhospitable to unhoused residents and the urban poor more broadly. Most visible in the form of so-called hostile architecture, these anti-domestic practices result in public spaces that discourage lingering or gathering, in which it is difficult to spend time. This thesis takes up the role of architects and urban designers in houselessness, not through our positions on affordable housing, but by considering the ways that we play a part in perpetuating the privatization of domesticity. In doing so, we center a long-lasting definition of the socio-political project of “the public,” which defines membership through one’s proximity and access to property. Public spaces of the city are then further policed to discourage uses and occupations that are discordant with the recreational, ordered dominant use case.&#13;
&#13;
Despite this, the public spaces of cities are made house-full by unhoused residents. Lacking access to the programs packaged in housing, unhoused residents piece together different rooms and play out different routines of necessity and joy throughout the city. Attempts to design unhoused people out of public space have resulted in a universally hostile public realm whose impacts are unevenly felt. Unhoused people are closest to this problem and bear the brunt of its violence. In this thesis I consider what public space begins to look like when those of us who design it critically change our agenda. What could the public realm look like if we expanded its domestic possibilities rather than restricting them, and how can we center the domesticities of unhoused residents in that expansion? What happens when public space is seen as housefull?
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of digital marketing strategy in the era of social media in China</title>
<link href="https://hdl.handle.net/1721.1/140357" rel="alternate"/>
<author>
<name>Liu, Xinya</name>
</author>
<id>https://hdl.handle.net/1721.1/140357</id>
<updated>2022-02-16T03:20:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An analysis of digital marketing strategy in the era of social media in China
Liu, Xinya
This thesis analyzes digital marketing strategies in the era of social media in China. With the development of information technology, the gradual stabilization of China's economic growth, and the catalytic effect of the pandemic, advertisers need to achieve breakthroughs through more effective marketing strategies. The rise of social media undoubtedly provides such opportunities. &#13;
&#13;
This thesis focuses on three digital marketing strategies: content marketing, live streaming e-commerce, and trust fission. A marketing method that has existed since the print media era, content marketing has been given new meaning in the social media era. Chapter 2 reviews the history of content marketing and proposes how to design content marketing around its characteristics in the social media era, supported by case analysis. With the development of 4G technology and the gradual commercialization of 5G, live streaming has become increasingly important; Chapter 3 analyzes e-commerce live streaming. Live streaming e-commerce recombines the three elements of traditional B2C retail: people, goods, and venues. Meanwhile, live streaming can use various methods to encourage consumers to make purchases during the stream, thereby generating a large number of instant transactions. Chapter 4 points out that as centralized public-domain traffic becomes increasingly expensive, advertisers urgently need to consider building and operating private-domain traffic. Social media's social and media attributes make it the natural platform for building private-domain traffic pools, and trust fission is the key to building and operating those pools. Chapter 5 argues that the widespread influence of social media means that social media marketing need not prioritize a target population; instead, it achieves broad influence by evaluating the demands of businesses and increasing the topicality of the advertising. Social media are characterized by the integration of sales and marketing, especially in e-commerce live streaming. The marketing strategy at this stage is decentralized.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Centralization versus decentralization of information systems : a case study investigation of a framework for decision making.</title>
<link href="https://hdl.handle.net/1721.1/140339" rel="alternate"/>
<author>
<name>Bullen, Christine Valerie.</name>
</author>
<id>https://hdl.handle.net/1721.1/140339</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Centralization versus decentralization of information systems : a case study investigation of a framework for decision making.
Bullen, Christine Valerie.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1976; Bibliography: leaf 115.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The wearing properties of nitrided nitralloy against various alloys</title>
<link href="https://hdl.handle.net/1721.1/140276" rel="alternate"/>
<author>
<name>Kindell, Nolan M.</name>
</author>
<author>
<name>Kiefer, Dixie, 1896-1945.</name>
</author>
<author>
<name>Crist, Marion E.</name>
</author>
<id>https://hdl.handle.net/1721.1/140276</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1929-01-01T00:00:00Z</published>
<summary type="text">The wearing properties of nitrided nitralloy against various alloys
Kindell, Nolan M.; Kiefer, Dixie, 1896-1945.; Crist, Marion E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1929; Includes bibliographical references (leaf 72).
</summary>
<dc:date>1929-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The use of the arts by businessmen for profit and other purposes</title>
<link href="https://hdl.handle.net/1721.1/140264" rel="alternate"/>
<author>
<name>Boyer, Neil James.</name>
</author>
<id>https://hdl.handle.net/1721.1/140264</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">The use of the arts by businessmen for profit and other purposes
Boyer, Neil James.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaves 120-127.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Certificate of need legislation in health care delivery.</title>
<link href="https://hdl.handle.net/1721.1/140263" rel="alternate"/>
<author>
<name>Britton, Charles Ray.</name>
</author>
<id>https://hdl.handle.net/1721.1/140263</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Certificate of need legislation in health care delivery.
Britton, Charles Ray.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaves 95-97.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An improved resolver-to-digital converter.</title>
<link href="https://hdl.handle.net/1721.1/140262" rel="alternate"/>
<author>
<name>Braun, Thomas Robert.</name>
</author>
<id>https://hdl.handle.net/1721.1/140262</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">An improved resolver-to-digital converter.
Braun, Thomas Robert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1975; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A high frequency bridge</title>
<link href="https://hdl.handle.net/1721.1/140261" rel="alternate"/>
<author>
<name>Stratton, Julius Adams, 1901-1994.</name>
</author>
<id>https://hdl.handle.net/1721.1/140261</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1925-01-01T00:00:00Z</published>
<summary type="text">A high frequency bridge
Stratton, Julius Adams, 1901-1994.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1925; Includes bibliographical references (leaves 44-47).
</summary>
<dc:date>1925-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Method for Kalman Filtering Pose Estimates from Lidar Scans During the Landing Phase</title>
<link href="https://hdl.handle.net/1721.1/140202" rel="alternate"/>
<author>
<name>Wenberg, Dakota</name>
</author>
<id>https://hdl.handle.net/1721.1/140202</id>
<updated>2022-02-08T03:05:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Method for Kalman Filtering Pose Estimates from Lidar Scans During the Landing Phase
Wenberg, Dakota
The Massachusetts Institute of Technology’s Lincoln Laboratory is developing a Lidar scanner to be used on a notional lander mission to Europa, a moon of Jupiter. The goal of this mission is to land safely on the unexplored rough terrain of the moon and analyze samples to detect the possibility of life in the subsurface oceans. A critical component of a safe landing is the ability to accurately estimate the state of the lander as it descends. This paper proposes a sensor fusion system that combines Inertial Measurement Unit measurements with relative pose data extracted from Lidar scans using an Extended Kalman Filter. The results show that the proposed system can accurately estimate the state in the Z and X axes, while further improvements are required to increase the accuracy of the Y axis and the orientation.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MADE IN RURAL CHINA--The analysis and redesign of the urbanization trajectory for e-commerce villages in rural China</title>
<link href="https://hdl.handle.net/1721.1/140199" rel="alternate"/>
<author>
<name>Sheng, Siyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/140199</id>
<updated>2022-02-08T03:01:01Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">MADE IN RURAL CHINA--The analysis and redesign of the urbanization trajectory for e-commerce villages in rural China
Sheng, Siyuan
While informal economies are normally treated as a “sickness” of cities to be cured, they have qualities worth retaining; to some extent, they fueled the boom of the Chinese economy. In an era when cities are constructed and rebuilt as the mode of urbanization, developing countries may treat informal economic sites as an opportunity for profit, since they have ample justification to rebuild those areas. These sites are often framed as a problem to be solved, which in most cases means reconstruction. Vested interests are happy to integrate these resources and announce that the threat to the cities has finally been eliminated. Such reconstructions standardize these communities while reaping the benefits the communities created. They profit not through the plunder of capital, but through the elimination of the social subjectivity of those who created the value.&#13;
&#13;
E-commerce villages are one such community. They seized the opportunity of e-commerce development to make a profit and supported the rapid, vigorous growth of the online economy. The informality and “unintegration” of these areas have fed the trend toward reconstruction. These reconstructions, however, tend to take the existing urban pattern of residential areas as their template. This is the easiest and fastest approach and is unproblematic in most cases, but it causes the e-commerce villages to lose their locality, since the templates do not actually fit how these villages operate. This thesis studies the economic patterns and social relations in e-commerce villages and, based on that research, proposes an alternative trajectory for these villages, aiming to make the areas more economically efficient and more livable at the same time.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring spatial and social interdependencies between public schools and the community: City of Cambridge</title>
<link href="https://hdl.handle.net/1721.1/140197" rel="alternate"/>
<author>
<name>Martinez Cuba, Maria de los Angeles</name>
</author>
<id>https://hdl.handle.net/1721.1/140197</id>
<updated>2022-02-08T03:40:25Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Measuring spatial and social interdependencies between public schools and the community: City of Cambridge
Martinez Cuba, Maria de los Angeles
Schools are public institutions, but they are also social infrastructure. They create social worlds that shape and preserve the surrounding communities. While schools and other public institutions, such as public libraries, have been found to be important structures of social infrastructure, the spatial conditions under which they assume that role have been understudied. This thesis investigates how the relationship between public schools and the surrounding neighborhood may vary depending on the spatial interdependence among their amenities. To identify spatial interdependencies, I conduct the analysis from two perspectives: the school and the neighborhood. I use georeferenced data and square footage of public schools’ amenities and recreational amenities in the neighborhood, combined with community demographic information and student enrollment data at the building level for the City of Cambridge, Massachusetts. Using spatial accessibility foundations, I analyze what recreation amenities are accessible around schools beyond their own, and then I evaluate what is accessible around homes. I then determine how schools' amenities could contribute to recreational accessibility for residents and vice versa. Moreover, I construct measures of spatial dependency to evaluate the degree to which schools depend on neighborhood recreational amenities and vice versa. To examine social relationships beyond spatial interdependency, I conducted semi-structured interviews to understand non-spatial factors that enable or prevent school-community interaction. The results show that spatial interdependencies between schools and the neighborhood could satisfy the unmet demand via potentially shared amenities for recreational and community activities, and that spatial interaction occurs when a need for space emerges from one side or the other. The stronger the interdependency, the higher the likelihood of social interaction. 
The weaker the interdependency, the lower the probability of social interaction. And where there is no interdependency, a school-community relationship is less likely to be identified. The findings from the qualitative analysis affirmed the importance of bilateral relationships between the community and the schools. Beyond the spatial interdependency, I found physical, social, and administrative factors that enable or prevent school-community interaction. This research offers a methodological contribution that incorporates space into the study of school-neighborhood relationships.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Early Design Stage Building Lifecycle Analysis (LCA) of Cost &amp; Carbon Impact</title>
<link href="https://hdl.handle.net/1721.1/140196" rel="alternate"/>
<author>
<name>Liu, Jingyi</name>
</author>
<id>https://hdl.handle.net/1721.1/140196</id>
<updated>2022-02-08T03:11:34Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Early Design Stage Building Lifecycle Analysis (LCA) of Cost &amp; Carbon Impact
Liu, Jingyi
In my research, I have developed a building lifecycle analysis (LCA) workflow that recommends sustainable solutions based on the optimization of building lifecycle cost ($) and carbon impact (kgCO2eq). The workflow can analyze conceptual geometries in the early design stage, when there is limited information. The first part of the workflow recommends sustainable features of building attributes, and the second part recommends detailed construction schemes. By following the recommended design solutions, the workflow helps save on average around 15% in cost and around 25% in carbon impact in the U.S. For a medium office building, for example, a 15% cost saving corresponds to around $9 million. The workflow also completes its analysis in around 30 minutes, whereas analyzing a detailed model using conventional LCA tools takes hours.&#13;
&#13;
This new LCA workflow helps with data-driven design decision-making. It is unique because it ensures both performance and flexibility during the early design stage. As for attribute features, it recommends ranges for numerical attributes and rankings for categorical ones, which allows users to choose their preferred values or options. As for construction schemes, design diversity is quantified to produce various design solutions. The workflow also allows users to customize the minimum and maximum boundaries for numerical attributes and select their favorite categorical options, enabling users to tailor their design needs. The whole workflow is developed in Grasshopper, a code-friendly platform in the conceptual design software Rhino. Cutting-edge technologies are applied, including machine learning, optimization, data analysis, and visualization.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Bonus-Based Exploration in Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/140193" rel="alternate"/>
<author>
<name>Chen, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/140193</id>
<updated>2022-02-08T03:56:10Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Understanding Bonus-Based Exploration in Reinforcement Learning
Chen, Eric
Intrinsic reward-based exploration methods have successfully solved challenging sparse-reward tasks such as Montezuma’s Revenge. However, these methods have not been widely adopted in reinforcement learning due to inconsistent performance gains across tasks. To better understand the underlying cause of this variability, we evaluate the performance of three major families of exploration methods (prediction error, state visitation, and model uncertainty) on a suite of custom environments and video games. Our custom environments allow us to study the effect of different environmental features in isolation. Our results reveal that exploration methods can be biased by spurious features such as color, and prioritize different dynamics in specific environments. In particular, we find that prediction-based methods are superior at solving tasks involving controllable dynamics. Furthermore, we find that partial observability can hinder exploration by setting up "curiosity traps" that agents can fall into. Finally, we investigate how various implementation details such as reward design and generation affect an agent’s overall performance.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Achieving the Energy Transition Through Corporate-University Partnerships</title>
<link href="https://hdl.handle.net/1721.1/140192" rel="alternate"/>
<author>
<name>Polly, Allison M.</name>
</author>
<id>https://hdl.handle.net/1721.1/140192</id>
<updated>2022-02-08T03:49:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Toward Achieving the Energy Transition Through Corporate-University Partnerships
Polly, Allison M.
The energy industry is increasingly moving toward a global energy system that is lower carbon, transitioning from predominantly hydrocarbons to including more renewable and sustainable energy sources. The multidimensional change – spanning culture, technology, education, infrastructure, and policy – required to achieve this transition demands more than any one source of innovation. Two common historical sources of innovation have been corporations and universities. Both corporations and universities typically recognize the value of partnering with one another, but there are almost as many partnership approaches as there are partnerships, with most created as bespoke agreements. What if these partnerships had a common structural framework from which to systematically select the right approach to enable the energy transition?&#13;
&#13;
This thesis seeks to address corporate-university partnerships as a sociotechnical system and evaluate partnership architecture decisions in enabling the energy transition. The research analyzes corporate-university partnership architectural patterns broadly across all industries as well as specific to the energy industry, investigates case studies of corporate-university partnerships focused on the energy transition, seeks to understand potential architectural changes post-pandemic, and proposes an architectural framework, evaluation, and portfolio approach for corporate-university partnerships enabling the energy transition.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonreciprocal and Exotic Radiative Transfer in Type-I Magnetic Weyl Semimetals</title>
<link href="https://hdl.handle.net/1721.1/140191" rel="alternate"/>
<author>
<name>Pajovic, Simo</name>
</author>
<id>https://hdl.handle.net/1721.1/140191</id>
<updated>2022-02-08T03:47:07Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Nonreciprocal and Exotic Radiative Transfer in Type-I Magnetic Weyl Semimetals
Pajovic, Simo
The classical theory of radiative heat transfer has proven extraordinarily useful, both in advancing fundamental physics through the discovery of quantum mechanics and in developing practical applications. However, the classical theory can break down, opening up new opportunities for energy production and management. Kirchhoff’s law of radiation breaks down in systems with broken Lorentz reciprocity, which are characterized by asymmetric dielectric tensors and broken time reversal symmetry. This is typically achieved using an external magnetic field, but a recently discovered family of quantum materials—Weyl semimetals—can possess intrinsic nonreciprocity due to the Berry curvature, an internal, pseudo-magnetic field.&#13;
&#13;
In this dissertation, we explore thermal radiation from type-I magnetic Weyl semimetals in planar configurations in the far- and near-fields. First, we demonstrate that a planar interface between air and a Weyl semimetal can intrinsically violate Kirchhoff’s law of radiation, and that nonreciprocal surface plasmon polaritons (SPPs) excited on this interface drive the nonreciprocity in the radiative heat transfer. In addition, we propose a physical mechanism for the nonreciprocity in terms of the forces felt by electrons as a result of the Berry curvature. Further leveraging the nonreciprocal SPPs, we explore the radiative heat transfer between two planar Weyl semimetal surfaces in the near-field (where Planck’s law also breaks down). To accurately compute the near-field radiative heat transfer, we include the effects of Fermi arc surface states, a hallmark of Weyl semimetals. We show that—in contrast to far-field radiative heat transfer—Fermi arc surface states play a significant role in the near-field and that the heat flux between the two Weyl semimetals can be tuned via twisting their surfaces in the lateral direction. This twist-controlled heat flux raises questions about the role of the configurational symmetry of the two-body system, and we argue that using materials with asymmetric dielectric tensors is necessary but not sufficient to realize nonreciprocal radiative heat transfer. The system must also possess some broken configurational symmetry, namely inversion symmetry. Finally, we identify opportunities for new discoveries in thermal radiation from materials with broken inversion and time reversal symmetry.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Searching for Efficient Multi-Stage Vision Transformers</title>
<link href="https://hdl.handle.net/1721.1/140187" rel="alternate"/>
<author>
<name>Liao, Yi-Lun</name>
</author>
<id>https://hdl.handle.net/1721.1/140187</id>
<updated>2022-02-08T03:10:17Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Searching for Efficient Multi-Stage Vision Transformers
Liao, Yi-Lun
Vision Transformer (ViT) demonstrates that the Transformer from natural language processing can be applied to image classification tasks with performance comparable to convolutional neural networks (CNNs), which have been studied in computer vision for years. This naturally raises the question of how the performance of ViT can be advanced with design techniques from CNNs. To this end, we propose to incorporate two techniques and present ViT-ResNAS, an efficient multi-stage ViT architecture designed with neural architecture search (NAS). &#13;
&#13;
First, we propose residual spatial reduction to decrease sequence lengths for deeper layers and utilize a multi-stage architecture. When reducing lengths, we add skip connections to improve performance and stabilize the training of deeper networks. Second, we propose weight-sharing NAS with multi-architectural sampling. We enlarge a network and utilize its sub-networks to define a search space. A super-network covering all sub-networks is then trained for fast evaluation of their performance. To efficiently train the super-network, we propose to sample and train multiple sub-networks with one forward-backward pass given a batch of examples. After training the super-network, evolutionary search is performed to discover high-performance network architectures. &#13;
&#13;
Experiments on ImageNet demonstrate the effectiveness of ViT-ResNAS. Compared to the original DeiT, ViT-ResNAS-Tiny achieves 8.6% higher accuracy than DeiT-Ti with slightly higher multiply-accumulate operations (MACs), and ViT-ResNAS-Small achieves similar accuracy to DeiT-B while having 6.3× fewer MACs and 3.7× higher throughput. Additionally, ViT-ResNAS achieves better accuracy-MACs and accuracy-throughput trade-offs than other strong ViT baselines such as PVT and PiT and high-performance CNNs like RegNet and ResNet-RS.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Impact of Underwater Glider Observations on the Navy Coastal Ocean Model (NCOM) in the Gulf Stream Region</title>
<link href="https://hdl.handle.net/1721.1/140186" rel="alternate"/>
<author>
<name>Kausch, Kyle Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/140186</id>
<updated>2022-02-08T03:40:22Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Characterizing the Impact of Underwater Glider Observations on the Navy Coastal Ocean Model (NCOM) in the Gulf Stream Region
Kausch, Kyle Robert
As the western boundary current of the North Atlantic, the Gulf Stream is a well-established area of interest for the United States Navy, predominantly due to its proximity to the continental shelf and the associated challenges of acoustic propagation across large property gradients. Autonomous underwater gliders conduct routine, high-resolution surveys along the U.S. East Coast, including within the Gulf Stream. These observations are assimilated into the operational Navy Coastal Ocean Model (NCOM). An investigation of the forecast-to-nowcast changes in the model for 2017 demonstrates the impact of the observations on the model. The magnitude of model change as a function of distance from the nearest new observation reveals a relatively large impact of glider observations within a radius of O(100) km. Glider observations are associated with a larger local impact than Argo data, likely because glider sampling focuses on large spatial gradients. Due to the advective nature of the Gulf Stream system, the impact of glider observations in the model is anisotropic, with larger impacts extending downstream from observation locations. Forecast-to-nowcast changes in modeled temperature, salinity, and density result in improved agreement between observed and modeled ocean structure within the upper 200 m over the 24 hours between successive model runs.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Visual Inspection of Lyophilized Products via Deep Learning and Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/140185" rel="alternate"/>
<author>
<name>Tran, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/140185</id>
<updated>2022-02-08T03:30:21Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Automated Visual Inspection of Lyophilized Products via Deep Learning and Autoencoders
Tran, Peter
Manual visual inspection of every sterile parenteral product for defects is costly, cumbersome, and inconsistent. The industry standard currently relies on a semi-automated method, but the percentage of vials that require additional human inspection is high, at 30%. Using deep learning can help reduce this percentage, but there are a number of challenges. In particular, the dataset for defective lyophilized products is not only small, but also suffers from class imbalance because of the small number of defects.&#13;
&#13;
In this thesis, we test the performance of well-known deep learning neural network architectures, including VGG16 and ResNet50. We compare results from training these architectures from scratch to results from fine-tuning pretrained variants of the models, and find that the pretrained variants not only improve accuracy, but also help the model learn the correct reasoning for the defect classification decision, in terms of identifying defect location in an image. Furthermore, we show that autoencoders can be used to create classifiers that perform just as well as the pretrained VGG16 and ResNet50 models for our vial image datasets. Lastly, we demonstrate that simple data augmentation techniques do not improve the training of our vial defect classification models.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battle for the dinner table: Can vegan analogues curb America’s reliance on meat?</title>
<link href="https://hdl.handle.net/1721.1/140183" rel="alternate"/>
<author>
<name>Gold, Alison</name>
</author>
<id>https://hdl.handle.net/1721.1/140183</id>
<updated>2022-02-08T03:31:20Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Battle for the dinner table: Can vegan analogues curb America’s reliance on meat?
Gold, Alison
As demand for meat continues to rise globally, the livestock industry produces an estimated 14.5% of man-made greenhouse gas emissions –– more than two-thirds of which come from cattle. Meat production requires roughly three-fourths of the world’s agricultural land and heavy crop and water usage, which according to scientists is a threat to biodiversity and an accelerator of deforestation and food insecurity. Plant-based diets represent an opportunity for a more sustainable food system, according to large international scientific reports published by organizations including the United Nations Intergovernmental Panel on Climate Change and the EAT-Lancet Commission.&#13;
&#13;
A practical solution seemingly exists: a new generation of plant-based meats designed to look, cook, taste, and smell just like meat with a much smaller environmental footprint. In 2016, Beyond Meat and Impossible Foods began selling their meatless burgers and quickly became leaders in a growing field of companies offering realistic plant-based meat analogues. In the United States, where more meat is consumed per person than anywhere else in the world, consumer acceptance of the novel vegan meats has so far been mixed. Researchers know that taste is highly psychological, and that many people prefer foods they are familiar with. The plant-based meat industry faces the challenge of scaling up to offer a product as appealing, accessible, and inexpensive as meat, which is a cornerstone of many people’s diets –– and often a source of great comfort and pleasure.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting a Corporate Venture Capital Firm for a Commodity Enterprise</title>
<link href="https://hdl.handle.net/1721.1/140180" rel="alternate"/>
<author>
<name>Kieke, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/140180</id>
<updated>2022-02-08T04:02:38Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Architecting a Corporate Venture Capital Firm for a Commodity Enterprise
Kieke, Matthew
Since commodity prices can be impacted by external factors, commodity enterprises have historically competed on their cost per unit of production. However, changes within the ecosystem for commodity enterprises are impacting the competitive landscape, leading to a need for enterprise transformation. These ecosystem changes include a growing focus on an enterprise’s environmental impact and a reduction in barriers to entry due to the growing ease of access to technology. Enhanced innovation can assist the enterprise in adapting to this changing ecosystem. This research explores and evaluates both the changing ecosystem and alternative models for enhancing an enterprise’s innovation.&#13;
&#13;
Two innovation models exist: closed and open innovation. Commodity enterprises have historically leaned on closed innovation, yet this reliance has resulted in an enterprise not well equipped to manage a changing ecosystem. Corporate Venture Capital has been a form of open innovation leveraged by enterprises since the early 1950s. These Corporate Venture Capital (CVC) firms can be architected to deliver strategic value, financial value, or a combination of both strategic and financial value. For a commodity enterprise looking to enhance innovation, the CVC firm would pursue a strategic objective. However, there exists limited consensus on how best to architect a CVC firm to pursue a strategic objective, and challenges exist that may naturally lead to a CVC firm focusing on a financial objective. &#13;
&#13;
In this research, the ARIES framework, a systems architecture and engineering framework used for the design and implementation of a transformed enterprise, is used to evaluate the optimal architecture for a CVC firm that is aiming to enhance a commodity enterprise’s innovation. This research analyzes not only the stakeholders of the CVC firm, but also evaluates an envisioned future for the CVC firm, identifies the critical architectural decisions, develops alternative concept architectures, and ultimately evaluates each alternative. Through this evaluation, a hybrid CVC firm containing two separate investment funds is proposed as the preferred concept architecture. This research also proposes a roadmap for the implementation of this new CVC firm architecture, ensuring that the resources and momentum required for a successful launch are in place.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>‘Firefighting’ within the U.S. Coast Guard’s Shore Infrastructure Capital Investment Program</title>
<link href="https://hdl.handle.net/1721.1/140179" rel="alternate"/>
<author>
<name>Fant, Joshua W.</name>
</author>
<id>https://hdl.handle.net/1721.1/140179</id>
<updated>2022-02-08T03:32:59Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">‘Firefighting’ within the U.S. Coast Guard’s Shore Infrastructure Capital Investment Program
Fant, Joshua W.
This thesis proposes that the Coast Guard’s shore infrastructure capital investment program operates in crisis mode, also referred to as firefighting, and cannot develop projects as effectively and efficiently as it should. This proposal is supported by evidence derived from recent U.S. Coast Guard (CG) and U.S. Navy (USN) project studies and from subject-matter and organizational expertise. With an abundance of needs, the absence of a long-term, sustainable shore infrastructure capital investment strategy leads the organization to make investment decisions that foster and reinforce harmful firefighting activities.&#13;
&#13;
The CG has many aging and outdated facilities, as evidenced by its more than $2.4B shore infrastructure backlog. The multitude of projects needed to rebuild from natural disasters and support the service’s most extensive fleet recapitalization efforts exacerbates the effects of these known facility needs. Because of the high volume, lack of consistent funding, and dynamic priorities, the CG’s shore infrastructure capital planning and cost estimating program frequently deviates from its processes and business rules and cannot deliver facilities in a timely manner, with acceptable risk, or without rework that frustrates a motivated workforce.&#13;
&#13;
This thesis analyzes ten waterfront capital investment projects by focusing on the projects’ scopes and associated costs. It also summarizes current research into project planning uncertainty and the causes, symptoms, and proliferation of firefighting. The work corroborates the findings with interviews of subject matter experts throughout the program’s network. A systems view revealed excessive planning uncertainty passed into project execution due to scope changes, incomplete planning, and flawed estimating.  A deeper look unveiled more pressing fundamental issues such as inconsistent organizational direction, funding volatility and rampant process deviations resulting in projects built to budget, not requirements, inefficient planning, and a discouraged workforce. &#13;
&#13;
The development of a sustainable, long-term infrastructure capital investment strategy and a consistent funding stream will enable organizational adherence to planning processes. It will also make project priority decisions easier, ensuring proper planning that minimizes planning risk and makes better use of scarce funding.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Protoheme IX Farnesyltransferase as an Antimalarial Drug Target</title>
<link href="https://hdl.handle.net/1721.1/140177" rel="alternate"/>
<author>
<name>Edelen, Samantha Leigh</name>
</author>
<id>https://hdl.handle.net/1721.1/140177</id>
<updated>2022-02-08T03:19:08Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Exploring Protoheme IX Farnesyltransferase as an Antimalarial Drug Target
Edelen, Samantha Leigh
The search for novel antimalarials continues due to the need to combat the rapidly increasing prevalence of multidrug resistant Plasmodium species parasites that cause malaria. Recent advances in technology to genetically modify malarial parasites and methods for high throughput screening of compounds allow for increased understanding of Plasmodium biology and evaluation of potential inhibitors. The Plasmodium falciparum genome continues to be annotated and functionally characterized, including the gene encoding a protoheme IX farnesyltransferase (also known as PfCOX10). This enzyme is a critical component of heme O synthesis. Since heme O is a necessary precursor to heme A, a critical cofactor of cytochromes within the mitochondrial electron transport chain (ETC), we infer that the function of PfCOX10 is related to ETC-dependent essential processes such as pyrimidine biosynthesis. In this work, we take advantage of modern techniques to explore PfCOX10 as a potential new target of antimalarial compounds. Parasites with conditional knockdown of PfCOX10 are screened for growth against compounds known to interact with heme, as well as the 400-compound Pathogen Box collection from the Medicines for Malaria Venture (MMV). We identify a modest interaction of PfCOX10 with antimalarial DSM1, as well as several compounds from the MMV Pathogen Box. The data ultimately point to utility in further exploration of PfCOX10 as a potential antimalarial drug target and for deepening biological understanding of these disease-causing parasites.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Narrating the Politics of Urban Development in “New Era” Boston</title>
<link href="https://hdl.handle.net/1721.1/140176" rel="alternate"/>
<author>
<name>Diby, Somala</name>
</author>
<id>https://hdl.handle.net/1721.1/140176</id>
<updated>2022-02-08T03:38:28Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Narrating the Politics of Urban Development in “New Era” Boston
Diby, Somala
Since 1950, urban governance in the city of Boston has been predicated on the close collaboration between the city’s economic and political elite, anchored in the Boston Planning and Development Agency (BPDA). A common narrative in Boston is that this “growth coalition” has historically achieved much by way of downtown redevelopment, and much less by way of an equitable housing market. However, with the COVID-19 pandemic, mass mobilizations for racial justice, a progressive turn within the City Council, and the inauguration of the city’s first Black and first woman mayor, recent public discourse reflects a new optimism that the city’s most powerful institutions can be transformed to support a more equitable housing market. This media thesis uses podcasting as a tool to investigate how key actors in Boston’s urban development landscape—from city councillors and administrators, to private developers, and housing justice organizers—believe this unique political moment will shape the city’s land development practices and influence urban governance in the future. I ground this exploration in the recent passage of the Affirmatively Furthering Fair Housing (AFFH) Zoning Amendment, a unique zoning tool designed to assess and address the risk that new development projects will displace nearby residents and reinforce patterns of segregation. Through five twenty-to-thirty-minute episodes, this podcast curates five threads of collective narrative that emerged across 18 semi-structured interviews. Stories are grounded in the theoretical literature of “New Urban Politics” scholars, Boston’s extensive history of urban governance dating back to 1950, and a contemporary framing of uneven urban development in Boston as a cultural challenge as much as a political and economic one. In prioritizing narrative and storytelling over traditional research methods, I advocate for the use of podcasting as a tool for planning research and practice.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Walking to transit – using big data to analyze bus and train ridership in Los Angeles</title>
<link href="https://hdl.handle.net/1721.1/140174" rel="alternate"/>
<author>
<name>Klo’e, Ng Yim Chew</name>
</author>
<id>https://hdl.handle.net/1721.1/140174</id>
<updated>2022-02-08T04:01:37Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Walking to transit – using big data to analyze bus and train ridership in Los Angeles
Klo’e, Ng Yim Chew
Los Angeles passed one of the largest sales taxes in the country in 2016, which will give the county unprecedented financing for improving public transportation. Public transit ridership has been declining despite hefty investments, and it is important to understand why transit has not picked up. Studying current pedestrian-induced ridership is crucial, as walkability is a key determinant of ridership. Many prior studies assume linear relationships with established variables or explore transformed variables under constrained assumptions. Machine learning models have the potential to discover nonlinear relationships, such as step-function and curvilinear relationships, which will help planners and policy makers make effective development decisions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hyperparameter Optimization of Opaque Models for Autonomous Vehicle Algorithms</title>
<link href="https://hdl.handle.net/1721.1/140170" rel="alternate"/>
<author>
<name>Ahmadi, Elaheh</name>
</author>
<id>https://hdl.handle.net/1721.1/140170</id>
<updated>2022-02-08T03:51:20Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Hyperparameter Optimization of Opaque Models for Autonomous Vehicle Algorithms
Ahmadi, Elaheh
Algorithms usually consist of many hyperparameters that need to be tuned to perform efficiently. It may be possible to tune a handful of parameters manually for simple algorithms; however, as the algorithm becomes more complex, the number of hyperparameters also increases, which makes finding the optimal hyperparameters more difficult. As a result, automating parameter tuning would be of great interest in many different applications, reducing manual labor while increasing the performance of the algorithm. In this research, we focused on automating the process of hyperparameter selection for any opaque model to enable fully automated learning. We surveyed different hyperparameter optimization algorithms, selected the most efficient ones in different scenarios, and developed a framework that can be easily utilized by different users. We tested our algorithm and framework on NVIDIA’s localization algorithm developed for Autonomous Vehicles. Additionally, we performed hyperparameter optimization on different regression algorithms on the abalone [7] dataset to provide another thorough comparison of the different optimization algorithms.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost Optimization of US Sustainable Aviation Fuel Supply Chain Under Different Policy Constraints</title>
<link href="https://hdl.handle.net/1721.1/140169" rel="alternate"/>
<author>
<name>Kelso III, Walter T.</name>
</author>
<id>https://hdl.handle.net/1721.1/140169</id>
<updated>2022-02-08T03:38:34Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Cost Optimization of US Sustainable Aviation Fuel Supply Chain Under Different Policy Constraints
Kelso III, Walter T.
This thesis quantifies the costs and emissions of a potential sustainable aviation fuel supply chain in the US in 2035 while incorporating regional uncertainty analysis. Feedstock availability is quantified using projected arable land availability, agricultural yields, and projected waste and residue availability. A mixed-integer linear programming model was developed to minimize supply chain costs, subject to uncertain variables which were analyzed using Monte Carlo simulations. Under a baseline set of assumptions, an average of 78% of 2035 US jet fuel demand can be met with sustainable aviation fuels. The optimization model is applied using inputs from four socioeconomic scenarios to meet 25% and 50% of projected 2035 demand. The sensitivity of the results to a carbon emissions cost of 100 $/tonne CO₂e is also evaluated. Under a baseline set of assumptions, when 50% of 2035 US demand is offset, sustainable aviation fuel is produced with 50% higher costs and 39% lower emissions than conventional jet fuel. In all scenarios, the introduction of a 100 $/tonne CO₂e carbon emissions cost resulted in optimized supply chains using feedstocks and pathways with lower life cycle emissions but higher capital costs.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming Data Scarcity in Deep Learning of Scientific Problems</title>
<link href="https://hdl.handle.net/1721.1/140165" rel="alternate"/>
<author>
<name>Loh, Charlotte Chang Le</name>
</author>
<id>https://hdl.handle.net/1721.1/140165</id>
<updated>2022-02-08T03:55:07Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Overcoming Data Scarcity in Deep Learning of Scientific Problems
Loh, Charlotte Chang Le
Data-driven approaches such as machine learning have been increasingly applied to the natural sciences, e.g. for property prediction and optimization or material discovery. An essential criterion for the success of such methods is the availability of extensive amounts of labeled data, making them infeasible for data-scarce problems where labeled data generation is computationally expensive, or labour and time intensive. Here, I introduce surrogate and invariance-boosted contrastive learning (SIB-CL), a deep learning framework which overcomes data scarcity by incorporating three “inexpensive” and easily obtainable sources of auxiliary information. Specifically, these are: 1) abundant unlabeled data, 2) prior knowledge of known symmetries or invariances of the problem, and 3) a surrogate dataset obtained at near-zero cost either from simplification or approximation. I demonstrate the effectiveness and generality of SIB-CL on various scientific problems, for example, the prediction of the density-of-states of 2D photonic crystals and solving the time-independent Schrödinger equation of 3D random potentials. SIB-CL is shown to provide orders-of-magnitude savings in the amount of labeled data needed compared to conventional deep learning techniques, offering opportunities to apply data-driven methods even to data-scarce problems.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Transformational Leader Behaviors on Diverse Team Performance and Persistence</title>
<link href="https://hdl.handle.net/1721.1/140164" rel="alternate"/>
<author>
<name>Goodwin, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/140164</id>
<updated>2022-02-08T03:56:32Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Impact of Transformational Leader Behaviors on Diverse Team Performance and Persistence
Goodwin, Jeremy
Scholars have studied diversity and diverse teams for more than 60 years.  While some studies show that diversity is positively linked to team performance outcomes, many others show the opposite.  Indeed, some scholars now refer to diversity as a "double-edged sword" because although it can have its benefits, it can also impose costs on the team.  Reagans et al. (2004) suggest that one reason for these conflicting results is a pair of network structural variables in the team:  external network range and internal network density.  Diverse teams are known for being strong in the former, but weak in the latter.  With both variables known to be positively related to team performance, weak internal network density represents an area that leaders can focus on to improve overall performance.&#13;
&#13;
Additionally, an organization's approach to diversity has also been shown to drive diversity-related concerns according to a member's social group representation within a larger group.  Specifically, Apfelbaum et al. (2016) studied two approaches that are commonly used across different organizations, and although both are meant to value diversity, they each increase concerns in different social groups and ultimately drive different members away from the organization.&#13;
&#13;
This thesis proposes an experiment that triggers a leader to increase team member support in order to increase internal network density and stave off further costs typically associated with diverse teams.  The research presents an agent-based model to simulate multiple team scenarios using both Monte Carlo and single simulations.  Though in its early form and needing further empirical validation, the simulations suggest that addressing diversity-related concerns with supportive behaviors can increase internal network density, which in turn can reduce absences and team member turnover.  Perhaps more importantly, the model represents a foundation for further study and associated incorporation of additional diversity-related phenomena.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediating Chana: Seeding Synergies between Doves and Development</title>
<link href="https://hdl.handle.net/1721.1/140162" rel="alternate"/>
<author>
<name>Huangthanapan, Eakapob</name>
</author>
<id>https://hdl.handle.net/1721.1/140162</id>
<updated>2022-02-08T03:34:56Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Mediating Chana: Seeding Synergies between Doves and Development
Huangthanapan, Eakapob
For almost a century, the domestication of zebra doves for birdsong has given Chana its reputation as the emerging Southeast Asian capital of zebra doves. In this rural district on the southern coast of Thailand, the doves are not only worth more than gold but also hold higher values in local society and in the community’s stewardship of the environment. In 2019, the national government of Thailand put forward a 6,000-acre plan to build an industrial metropolis and deep seaports in the area. If realized, this project will transform the pristine beaches and agricultural landscapes of Chana into special economic zones and the largest industrial complex in the south of Thailand. This process would inevitably disrupt the dove ecologies of the area.&#13;
&#13;
The forces driving this development are twofold: first, the centralized government has framed the project as a way to promote national growth through opportunistic global trade. Second, the plan is also driven by a national-security agenda aimed at quelling the on-going ‘separatist insurgencies’ along the southern borders with Malaysia. The plan is not new; some locals see it as another reproduction of the large-scale projects deployed under the highly centralized government. These plans often deepen regional impasses by prioritizing economic development and oversimplifying complex socio-cultural and environmental dimensions.&#13;
&#13;
This thesis examines the tensions among the forces of globalization, national development, and local culture. Drawing on my investigation of the unique relationships between humans and non-humans in Chana, the thesis focuses on the potential of the doves and other local assets to negotiate the direction of development. The thesis proposes a series of design scenarios to preserve the local culture, regenerate the local assets, and project future industries. Countering the top-down plan, the study’s goal is to move beyond the impasse by orchestrating the synergies between the singing doves and the impending development.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrolyte Structure with Explicit Solvent in Nanoslit Capacitors using Classical Density Functional Theory</title>
<link href="https://hdl.handle.net/1721.1/140160" rel="alternate"/>
<author>
<name>Zhang, James H.</name>
</author>
<id>https://hdl.handle.net/1721.1/140160</id>
<updated>2022-02-08T04:00:34Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Electrolyte Structure with Explicit Solvent in Nanoslit Capacitors using Classical Density Functional Theory
Zhang, James H.
Understanding the effects of double layer formation on charged interfaces is integral to many disciplines such as electrochemistry, polymer science, and solution theory. Classical models of double layer thermodynamics are typically based on the Poisson-Boltzmann equation, in which solvent is treated implicitly as a dielectric background and ions are treated as point charges. Although this theory works well for macroscopic charge distributions, it is known to lead to problems in nanopores, where finite size and interfacial effects play important roles. The advent of nanoporous electrodes for increased surface interactions motivates us to build a more accurate model that can characterize electrolyte structure in nanoconfined regions.&#13;
&#13;
This thesis aims to understand the electrolyte structure and the effects of explicit solvent in nanopores through computation. We begin with a brief description of classical models of the behavior of solvent and ions under external electric fields. We then elucidate the problems with these classical models when considering electrolyte structure in nanoconfined regions. Afterwards, we discuss classical density functional theory and a method to model a three-component electrolyte with steric interactions and mean-field electrostatics. This method allows us to construct a local relative permittivity that is a function of the molecular interactions. Using this model, we study the effects of solvent properties, temperature, surface charge, and slit geometry on the adsorption and structure of each component. This model can be used to help guide the design of capacitor systems and to understand the underlying thermodynamics of confined double layer formation.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Intracochlear Hydrophone and Amplifier</title>
<link href="https://hdl.handle.net/1721.1/140159" rel="alternate"/>
<author>
<name>Zhang, John Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/140159</id>
<updated>2022-02-08T03:59:08Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Intracochlear Hydrophone and Amplifier
Zhang, John Z.
Assistive hearing systems combining a fully-implantable microphone and electronics with a cochlear implant would enhance directional and focused hearing by taking advantage of ear mechanics. They would be usable in almost all environmental conditions throughout the day and night. Current implantable microphones suffer from unstable mechanics, poor signal-to-noise ratio (SNR), and low bandwidth. Here, we use analytical modeling, a finite-element model, and experiments to design a polyvinylidene fluoride-trifluoroethylene (PVDF-TrFE) intracochlear hydrophone and amplifier system for high-bandwidth sensitivity, surgical viability, and improved SNR by electrical shielding and circuit design. Our analysis shows that the copolymer PVDF-TrFE should be used due to its higher hydrostatic sensitivity, that the sensor area should be maximized to maximize gain, and that the length should not exceed a maximal value determined by the bandwidth requirement. A short-circuit-topology charge amplifier maximizes the SNR of the sensor by minimizing noise and attenuating electromagnetic interference by shielding. To calibrate and verify our fabricated hydrophones, we employ a vibrating water column method to generate a known pressure distribution. We find that the measured response of our sensor and amplifier is in alignment with our analytical models. In particular, we achieve a sensitivity of 215 aC/Pa, a bandwidth of 470 Hz to 16 kHz, and an SNR of 38.6 dB at 1 kHz for a 40 µm thick sensor of size 15 mm × 0.5 mm. We believe our approach to be a promising candidate to bring fully-implantable assistive hearing systems closer to reality.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reconciling Social Housing and access to urbanity in Rio de Janeiro</title>
<link href="https://hdl.handle.net/1721.1/140158" rel="alternate"/>
<author>
<name>Serra, Olivia Paraiso de Campos</name>
</author>
<id>https://hdl.handle.net/1721.1/140158</id>
<updated>2022-02-08T03:44:43Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Reconciling Social Housing and access to urbanity in Rio de Janeiro
Serra, Olivia Paraiso de Campos
This work discusses the potential of "Parcelamento, Edificação ou Utilização Compulsórios", in English "Compulsory land parceling, building and use" (or CPBU), in the city of Rio de Janeiro. The law exists under the umbrella of the City Statute, a set of laws implemented in Brazil in 2001. It enables municipalities to regulate land use and management towards a more socially inclusive and just environment, acknowledging that the city's social function and the right to the city should be contemplated within the scope of civic rights. CPBU is a way to promote the use of unused and underused land by pressuring the owners of these properties through increased property taxation and the eventual seizure of the areas.&#13;
&#13;
Some of the measures proposed by the City Statute were widely accepted and are employed in several cities. The main controversy regarding its application, however, revolves around the fact that in many places it has been used to benefit public-private partnerships, favoring the market logic and opposing the City Statute's primary goal: to promote land tenure regularization and fight land speculation. This work will deal with the current use of this set of laws juxtaposed with the broader theme of social housing. In this thesis, I argue for overcoming this dynamic, which limits the supply and quality of public services and infrastructure. Drawing on the CPBU law, the urban redevelopment of socially unproductive lands could render these unused or underused properties profitable for the resident population, for instance by focusing on a more accessible and compact city model.&#13;
&#13;
This work will develop new guidelines for applying CPBU in Rio de Janeiro’s Master Plan at two scales. The first scale will be the development of a framework based on the concept of urban analytics. This framework will hierarchize criteria to output a new delineation for CPBU; this will be a static representation of the criteria according to the data inputted. The second scale will be a collection of volumes located in parcels and blocks within the new CPBU delineation. This interscalar effort will attempt to quantify the potential of CPBU applied to social housing in Rio.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>THE RIGHT TO NAVIGATE RISK IN MEXICO CITY: Possibilities for creating safer spaces for women experiencing fear of sexual harassment in their daily use of the city</title>
<link href="https://hdl.handle.net/1721.1/140156" rel="alternate"/>
<author>
<name>Morelli, Maria Lucia</name>
</author>
<id>https://hdl.handle.net/1721.1/140156</id>
<updated>2022-02-08T03:33:20Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">THE RIGHT TO NAVIGATE RISK IN MEXICO CITY: Possibilities for creating safer spaces for women experiencing fear of sexual harassment in their daily use of the city
Morelli, Maria Lucia
This thesis explores the effects of fear of harassment on women’s mobility choices in Mexico City by analyzing it through a Right to the City framework. Women’s fear of harassment constitutes a constant state of alert to its smaller and more subtle forms, which exist in a spatial continuity as a woman travels throughout the city. It is the gestures, the catcalls, the pursuing, and the groping of women in public space that make women feel vulnerable and uncomfortable. The recurrence of these experiences is a constant reminder of the risk that is out there while moving in the city. This thesis explains how women negotiate with this risk either by constraining or modifying their mobility or by directly defying it; regardless of the negotiation method, fear plays a crucial role in their choices. To make such choices, women create mental risk maps of the spatiality of fear, on which they overlay the social and physical conditions that raise the opportunity for or probability of harassment.&#13;
&#13;
I position my research with the goal of granting women the Right to Navigate Risk. This concept, coined by urban theorist Carolyn Whitzman, criticizes the negative outcomes that can result from reducing the Right to the City to a Right to Safety or a Right to Mobility; examples include segregationist programs like women-only transport systems, and forced mobility evident in long, complex, and unwanted trips. Analyzing results from an online survey designed to capture women’s everyday activities, their experiences of harassment, and their travel choices, this thesis presents the extent to which fear of harassment hinders women’s Right to the City. I examine the strategies women employ to negotiate with risk, and then propose the analytical axes needed to understand the dynamism of women’s experiences through the city in order to think about creating safer spaces so they can navigate risk.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Low-Cost, Scalable Platform for Sub-Centimeter UHF RFID Positioning</title>
<link href="https://hdl.handle.net/1721.1/140154" rel="alternate"/>
<author>
<name>Perper, Isaac</name>
</author>
<id>https://hdl.handle.net/1721.1/140154</id>
<updated>2022-02-08T03:43:13Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Low-Cost, Scalable Platform for Sub-Centimeter UHF RFID Positioning
Perper, Isaac
High-precision localization is a technology that has started to make areas from autonomous vehicles to packing robots more efficient and accurate. While there are many approaches to localization, RFID micro-location is a growing technology that has been shown to be fast and robust, and can leverage the existing infrastructure of billions of RFID tags. However, many prior RFID positioning systems lack portability, scalability, and cost-effectiveness. In this thesis, I explore how low-cost software-defined radios can be leveraged to overcome those three key issues with RFID localization. I contribute a low-cost, scalable, and portable RFID micro-location platform that can overcome real-world deployment issues such as RFID orientation. Finally, I conclude with a characterization of the platform and a novel application of the system for robotic grasping.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Regularized Solution of the Lippmann-Schwinger Equation</title>
<link href="https://hdl.handle.net/1721.1/140153" rel="alternate"/>
<author>
<name>Pang, Subeen</name>
</author>
<id>https://hdl.handle.net/1721.1/140153</id>
<updated>2022-02-08T03:30:13Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Machine Learning Regularized Solution of the Lippmann-Schwinger Equation
Pang, Subeen
The Lippmann-Schwinger equation has been applied in various branches of physics, especially optical and quantum scattering. Solving the equation requires the inversion of a linear operator specified by the scattering potential, which is ill-conditioned. To resolve the numerical difficulties originating from this ill-conditioning, we propose a machine learning approach to find an appropriate regularization. Inspired by the proximal algorithm, we solve the equation with a hybridization of the physical operator and a regularizing network: a recurrent neural network with long short-term memory (LSTM).&#13;
&#13;
We train the LSTM using typical scattering potentials and their corresponding scattered fields. For the evaluation of the LSTM, two scattering cases are considered: electromagnetic scattering by dielectric objects, and electron scattering by multiple screened Coulomb potentials. We observe that the network can estimate scattered fields comparable to those from linear solvers with fewer iterations. We also observe a surprising generalization ability. Specifically, in the electromagnetic case, the LSTM trained with objects consisting of dielectric spheres can estimate reasonable solutions for general topologically similar objects, such as polygons. This suggests that the scattering physics is properly fused into the network through the training process.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hardware Implementation of a Complete Vision-Based&#13;
Navigation Pipeline</title>
<link href="https://hdl.handle.net/1721.1/140151" rel="alternate"/>
<author>
<name>Ni, Susan</name>
</author>
<id>https://hdl.handle.net/1721.1/140151</id>
<updated>2022-02-08T03:45:04Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Hardware Implementation of a Complete Vision-Based&#13;
Navigation Pipeline
Ni, Susan
Autonomous navigation technology has made great advances, but many of the successful hardware systems rely on LiDAR. LiDAR is known to be expensive and to have high computation cost, while vision, which is typically used in combination with LiDAR, offers additional benefits without the same cost concerns. It would be ideal if vision could replace LiDAR entirely; there has been extensive work on vision-based alternatives for each module of the autonomy pipeline, but there are no well-established complete vision-based navigation pipelines. This project integrates vision-based object tracking, state estimation, and collision avoidance planning modules via the Robot Operating System and implements the system on hardware. Both the state estimation module, OpenVINS, and the object tracking module, CenterTrack 2D with depth images, are benchmarked on our hardware setup and found to have displacement errors within 0.2 meters. Experimental trials in a real environment demonstrate the complete pipeline’s ability to navigate to a goal about 8 meters away in the presence of up to 6 naturally moving pedestrians.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultra-Wideband Error Modeling for Improved Localization</title>
<link href="https://hdl.handle.net/1721.1/140150" rel="alternate"/>
<author>
<name>Pedlow, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/140150</id>
<updated>2022-02-08T03:58:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Ultra-Wideband Error Modeling for Improved Localization
Pedlow, Elizabeth
Ultra-wideband (UWB) is a modern range measurement technology that can provide high-speed, low-cost ranging; however, UWB measurements can be difficult to model. In an effort to increase the accuracy of localization using UWB, this thesis develops models to better understand the complex error patterns of UWB range measurements, specifically how separation distance and relative angle between modules affect error. These models are used to develop three error prediction and correction methods to improve localization: (1) range-based error correction, (2) angle-based error correction, and (3) fused range-angle error correction. While it was found that decreasing mean measurement error does not always decrease localization error, the lowest measurement error and lowest localization error both resulted from the fused error correction method. The fused error model combines the separation distance and relative angle models to predict and correct for range error, decreasing the mean measurement error by over 80%, the mean localization error by approximately 35% when using least squares estimation, and by approximately 56% when smoothing the trajectory with a Kalman filter.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Urbanism toward Thermal Synergy: Sustainable urban design for district heating and cooling</title>
<link href="https://hdl.handle.net/1721.1/140149" rel="alternate"/>
<author>
<name>Wan, Qianqian</name>
</author>
<id>https://hdl.handle.net/1721.1/140149</id>
<updated>2022-02-08T03:34:43Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Generative Urbanism toward Thermal Synergy: Sustainable urban design for district heating and cooling
Wan, Qianqian
Thermal performance has a long legacy in urban morphology, where the climate is interpreted explicitly in the forms of settlements. Temperature regulation has accounted for a large share of energy demand since modern times. Beyond passive techniques, a closer observation of the space conditioning mechanism and the nature of thermal energy may produce systematic improvements in energy efficiency while maintaining comfort levels. Specifically, the synergy between heating and cooling needs in the urban environment provides the chance to circulate heat with low primary energy consumption.&#13;
&#13;
This thesis suggests heat recovery as a driving force in generative urban design, based on the echo between the mixed-use development ideology in the design world and the clustering of diverse user profiles suggested by thermal engineers. Contrary to the interdisciplinary collaboration convention, where urban design schemes are largely settled prior to the evaluation of infrastructural performance, this research explores the possibility to integrate engineering considerations into early phase design evolution and the impact of spatial features on energy performance.&#13;
&#13;
The thesis proposes the synergy score metric as a preliminary method to evaluate urban contexts and design schemes through the lens of thermal overlap between heating and cooling loads. Being naturally compatible with computational design optimization workflows, the metric bridges the iterations of design generation, performance assessment, and multi-objective optimization, and steers design variants toward greater heat sharing potential in the land use allocation scenario. The thesis then investigates the district heating and cooling network as the infrastructural system that embeds the synergy idea in essence. It examines the energy flow in a heat sharing network through simulation and analyzes the relationship between spatially sensitive features and energy performance metrics.&#13;
&#13;
The research explores state-of-the-art performance-driven design optimization techniques and suggests a reproducible framework that couples design and engineering visions for integrated spatial and energy planning. Communicating at the methodology level in design-engineering collaboration is the crux of contemporary smart planning toward a sustainable urban metabolism.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>User Based Design of Medical Devices for Translation from Prototype to Clinical Device</title>
<link href="https://hdl.handle.net/1721.1/140147" rel="alternate"/>
<author>
<name>Montague-Alamin, Healey</name>
</author>
<id>https://hdl.handle.net/1721.1/140147</id>
<updated>2022-02-08T04:04:54Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">User Based Design of Medical Devices for Translation from Prototype to Clinical Device
Montague-Alamin, Healey
In this work, two devices currently used in a research capacity were updated based on user and clinical requirements. Consideration was given for final designs with high reliability, broad applicability, and minimal amount of required training or technical skill to operate.&#13;
&#13;
An implantable microdevice allows for accelerated in vivo testing of anticancer agents on human-derived tumors. The current method of filling the reservoirs of the microdevice with anticancer agents consists of manually stuffing the reservoirs until the hole drilled in the device appears full. This process is labor intensive to a degree that prohibits large-scale application and produces variable volumes depending on the skill of the technician and the tolerances of the manufacturing process for the microdevice. We designed and tested a new method of creating standardized drug volumes that are easy to load into the microdevice. We machined a master mold of stainless steel that was used to cast a flexible mold of PDMS. Pouring in heated anticancer agents produces small drug pellets of a reproducible volume. We validated the method with fluorescent release studies, comparing the variability in pellet volumes between wells to the variability found between and within users in the previous method of manual filling. The new method reduced the coefficient of variation of the amount of drug in each well from 49.92% to 15.32%. Additionally, we compared the time to fill a device with the two different methods to evaluate the technician skill and time required in a scale-up of the two methods.&#13;
&#13;
A linear peristaltic nanofluidic pump actuated by NITINOL wire is currently used to sample rodent brains with minimal scarring [49]. This pump is at the prototype stage: components regularly break, it is oversized at a volume of 584 cm³, and its batteries need regular replacement. Investigation of the breaking of the NITINOL wire pinpointed friction as the likely primary cause. A newly designed tubing holder improves the average lifetime of the NITINOL wire from 6.1E3 cycles to 6.6E4 cycles and the minimum breaking time at the tubing holder from 119 minutes to 455 minutes. The breaking of the tubing is investigated, and recommendations for future designs include reducing the heat transferred to the tubing or using a tubing material with a higher tear strength, lower compression set, and/or higher yield strength. The battery size required is minimized by reducing the NITINOL power drain with lower-resistance electrical connections, a modified voltage activation profile that prevents overheating of the wire, and an investigation of the design changes required to reduce the wire length. These modifications reduce the required battery capacity by a third and shrink the pump to a wearable size.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private Similarity Search with Sublinear Communication</title>
<link href="https://hdl.handle.net/1721.1/140146" rel="alternate"/>
<author>
<name>Servan-Schreiber, Sacha</name>
</author>
<id>https://hdl.handle.net/1721.1/140146</id>
<updated>2022-02-08T03:47:34Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Private Similarity Search with Sublinear Communication
Servan-Schreiber, Sacha
Nearest neighbor search is a fundamental building-block for a wide range of applications. A privacy-preserving protocol for nearest neighbor search involves a set of clients who send queries to a remote database. Each client retrieves the nearest neighbor(s) to its query in the database without revealing any information about the query. For database privacy, the client must not learn anything beyond the query answer.&#13;
&#13;
Existing protocols for private nearest neighbor search require heavy cryptographic tools, resulting in poor practical performance or large client overheads. In this thesis, we present the first lightweight protocol for private nearest neighbor search. Our protocol is instantiated using two non-colluding servers, each holding a replica of the database. The protocol supports an arbitrary number of clients simultaneously querying the database via these servers. Each query is only a single round of communication for the client and does not require any communication between servers.&#13;
&#13;
If at least one of the servers is non-colluding, we ensure that (1) no information is revealed on the client’s query, (2) the total communication between the client and the servers is sublinear in the database size, and (3) each query answer only leaks a small and precisely quantified amount of information about the database to the client, even when the client is acting maliciously.&#13;
&#13;
We implement our protocol and report its performance on real-world data. Our construction requires between 10 and 30 seconds of server processing per query over large databases of 10M feature vectors. Client overhead remains under 10 µs of processing time per query and typically less than 4 MB of communication, depending on parameters.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding and characterizing thermal transport in 2D van der Waals nanoelectronics</title>
<link href="https://hdl.handle.net/1721.1/140144" rel="alternate"/>
<author>
<name>Zhong, Yang</name>
</author>
<id>https://hdl.handle.net/1721.1/140144</id>
<updated>2022-02-08T03:59:38Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Understanding and characterizing thermal transport in 2D van der Waals nanoelectronics
Zhong, Yang
With novel electronic and optical properties, two-dimensional (2D) materials and their heterogeneous integration have enabled promising electronic and photonic applications. However, significant thermal challenges arise because numerous van der Waals (vdW) interfaces limit dissipation of the heat generated in the device, induce significant temperature rises, and create large thermal mismatches, resulting in degraded device performance and even device failure. The highly localized heat generation during device operation thus becomes a major bottleneck for 2D nanodevice performance. Nevertheless, classical descriptions of heat transfer, i.e., Fourier’s law, become invalid at the microscopic scale. Furthermore, it remains challenging to measure heat transport precisely. Advances in the characterization and understanding of heat transfer at the nanoscale are thus needed for practical thermal management of nanoelectronics.&#13;
&#13;
Recent theoretical and experimental progress promises more effective nanoelectronics thermal management. On the one hand, atomistic simulation provides great opportunities to investigate fundamental thermal transport processes under ideal conditions by tracking the motion of all atoms. Raman spectroscopy, on the other hand, has been widely applied to detect lattice or molecule vibration on small scales owing to its superior spatial resolution. In this thesis, we leverage the power of atomistic simulation and Raman spectroscopy to understand and characterize thermophysical and thermal transport properties for engineering thermal transport in 2D vdW nanoelectronics. The thesis presents a method of characterizing thermal expansion coefficients for 2D transitional metal dichalcogenide monolayers experimentally and theoretically, and an atomistic simulation framework to predict thermal transport properties, which is used to study vdW binding effects on anisotropic heat transfer and phonon transport through an MoS2-amorphous silica heterostructure toward optimal 2D device heat dissipation. With combined efforts of experiments and simulation, this thesis opens up new avenues to understand, characterize, and engineer thermal transport in 2D vdW nanoelectronics.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representation learning with random images</title>
<link href="https://hdl.handle.net/1721.1/140140" rel="alternate"/>
<author>
<name>Baradad, Manel</name>
</author>
<id>https://hdl.handle.net/1721.1/140140</id>
<updated>2022-02-08T03:27:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Representation learning with random images
Baradad, Manel
Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images.&#13;
&#13;
In this thesis, we investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. We study two types of noise processes, statistical image models and deep generative models under different random initializations. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property to learn good representations.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Long Run: Inside the race to keep young female runners healthy and performing at the top of their game</title>
<link href="https://hdl.handle.net/1721.1/140139" rel="alternate"/>
<author>
<name>Blaustein, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/140139</id>
<updated>2022-02-08T04:06:40Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Long Run: Inside the race to keep young female runners healthy and performing at the top of their game
Blaustein, Anna
Athletic participation is overwhelmingly positive for girls and women, but it is not without risk. Many female runners — and other female athletes — don’t eat enough given how much they exercise. The motivations driving this underfueling are complex and range from short-term improvements in performance to societal pressures on women to be thin. In the long run, underfueling causes a host of health complications that may end seasons or athletic careers. Many girls and women will do lasting damage to their bodies.&#13;
&#13;
The issue has gotten increasing attention over the last few years as professional and collegiate runners have shared their experiences with the condition, known as Relative Energy Deficiency in Sport (RED-S). The conversation has largely overlooked the middle- and high-school aged girls who are also affected, however. Doctors and researchers are now in a race of their own to understand RED-S and keep young female runners healthy and performing at the top of their game.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System to Enhance Communication for Minimally Verbal Individual with Autism</title>
<link href="https://hdl.handle.net/1721.1/140137" rel="alternate"/>
<author>
<name>Shin, Hye Young</name>
</author>
<id>https://hdl.handle.net/1721.1/140137</id>
<updated>2022-02-08T03:00:55Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">System to Enhance Communication for Minimally Verbal Individual with Autism
Shin, Hye Young
This paper aims to create tools to enhance communication for individuals with nvASD (non-verbal autism spectrum disorder) and mvASD (minimally-verbal autism spectrum disorder). Using machine learning and an Android application interface, I aim to classify and better convey the intentions of individuals with nv/mvASD and the meanings of what they want to convey to others. This paper demonstrates new possibilities for better understanding nv/mvASD individuals’ vocalization intent using machine learning, and provides an interface for enhanced communication between caretakers and Autistic individuals who do not use traditional spoken communication.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Puppetmaster: a certified hardware architecture for task parallelism</title>
<link href="https://hdl.handle.net/1721.1/140136" rel="alternate"/>
<author>
<name>Perez-Lopez, Áron Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/140136</id>
<updated>2022-02-08T04:04:31Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Puppetmaster: a certified hardware architecture for task parallelism
Perez-Lopez, Áron Ricardo
This thesis presents Puppetmaster, a hardware accelerator for transactional workloads. Existing software and hardware frameworks for transactional memory and online transaction processing are not able to scale to hundreds or thousands of cores unless the rate of conflicts between transactions is very low. Puppetmaster aims to improve the scalability of concurrency control by requiring transactions to declare their read and write sets in advance and using this information to run transactions concurrently only when they are known not to conflict. In this thesis, I present and evaluate the design of Puppetmaster in a high-level model, in cycle-accurate simulations, and on real reconfigurable hardware.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking the Performance of Bayesian Optimization across Multiple Experimental Materials Science Domains</title>
<link href="https://hdl.handle.net/1721.1/140132" rel="alternate"/>
<author>
<name>Liang, Qiaohao</name>
</author>
<id>https://hdl.handle.net/1721.1/140132</id>
<updated>2022-02-08T03:50:52Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Benchmarking the Performance of Bayesian Optimization across Multiple Experimental Materials Science Domains
Liang, Qiaohao
Traditionally, experimental materials optimization has used design of experiments or intuition, combined with in-depth characterization. While these methods have obtained success over the years, they are facing increasing challenges today in the face of complex aggregated systems with larger design spaces. The materials objectives for these systems, e.g. environmental stability of solar cells or toughness of 3D printed mechanical structures, are typically costly to simulate and slow to experimentally evaluate. The need to shorten lab-to-market time of functional materials has inspired the use of machine learning and automation in materials optimization. Active learning algorithms, such as Bayesian Optimization (BO), have been leveraged for guiding autonomous high-throughput experimentation (HTE) systems. There have been individual studies successfully applying BO in experimental materials optimization, yet very few evaluated the performance of BO as a general optimization algorithm across a broad range of materials science domains.&#13;
&#13;
In this work, we benchmark the performance of BO algorithms with a collection of surrogate model and acquisition function pairs across five diverse experimental materials systems, including carbon nanotube polymer blends, silver nanoparticles, and lead-halide perovskites, as well as additively manufactured polymer structures and shapes. By defining acceleration and enhancement performance metrics as general materials optimization objectives, we find that for surrogate model selection, Gaussian Process (GP) with anisotropic kernels (automatic relevance determination, ARD) and Random Forests (RF) have comparable performance and both outperform the commonly used GP without ARD. We discuss the implicit distributional assumptions of RF and GP, and the benefits of using GP with anisotropic kernels, in detail. We provide practical insights for experimentalists on surrogate model selection for BO during materials optimization campaigns.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Recurrent Network Approach to G-Computation for Sepsis Outcome Prediction Under Dynamic Treatment Regimes</title>
<link href="https://hdl.handle.net/1721.1/140128" rel="alternate"/>
<author>
<name>Hu, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/140128</id>
<updated>2022-02-08T03:01:55Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Recurrent Network Approach to G-Computation for Sepsis Outcome Prediction Under Dynamic Treatment Regimes
Hu, Stephanie
Sepsis is a life-threatening condition that occurs when the body’s normal response to an infection is out of balance. A key part of managing sepsis involves the administration of intravenous fluids and vasopressors, but prescribing the correct balance of interventions is challenging since both under- and over-resuscitation can lead to adverse outcomes. While many retrospective studies have attempted to understand the relationship between sepsis treatment, fluid overload, mortality, and other outcomes, most are correlation-based and cannot actually estimate the causal effects of intervention. Prospective randomized clinical trials allow researchers to test the effects of alternative therapies more directly, but these types of studies tend to span multiple years and recent results regarding optimal regimes have been conflicting.&#13;
&#13;
In this thesis, we use methods from causal inference to predict outcomes in sepsis patients under different fluid and vasopressor strategies. Specifically, we explore a recurrent neural network approach to g-computation, a technique that allows us to estimate effects under treatments that are dynamic and time-varying. Our work builds on a previous sequential deep learning implementation known as G-Net. We evaluate G-Net using synthetic physiological data and show that it outperforms traditional linear regression models in predicting patient trajectories under alternative interventions. We then adapt and apply the improved architecture for analyzing outcomes under counterfactual treatment strategies in a real-world cohort of sepsis patients, using observational data collected from the intensive care unit. Our results demonstrate that G-Net is able to generate reasonable counterfactual estimates under alternative regimes.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Longest Common Subsequence Over Constant-Sized Alphabets: Beating the Naive Approximation Ratio</title>
<link href="https://hdl.handle.net/1721.1/140127" rel="alternate"/>
<author>
<name>Akmal, Shyan</name>
</author>
<id>https://hdl.handle.net/1721.1/140127</id>
<updated>2022-02-08T03:42:30Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Longest Common Subsequence Over Constant-Sized Alphabets: Beating the Naive Approximation Ratio
Akmal, Shyan
This thesis investigates the approximability of the Longest Common Subsequence (LCS) problem. The fastest known algorithm for solving the LCS problem runs in essentially quadratic time in the length of the input, and it is known that under the Strong Exponential Time Hypothesis there can be no polynomial improvement over this quadratic running time. No similar limitation holds however, for approximate computation of the LCS, except in certain restricted scenarios. When the two input strings come from an alphabet of size k, returning the subsequence formed by the most frequent symbol occurring in both strings achieves a 1/k approximation for the LCS. &#13;
It is an open problem whether a better than 1/k approximation can be achieved in truly subquadratic time (O(n^{2-δ}) time for constant δ &gt; 0). &#13;
&#13;
A recent result [Rubinstein and Song SODA'2020] shows that a 1/2+ε approximation for the LCS over a binary alphabet is possible in truly subquadratic time, provided the input strings have the same length. In this paper we show that if for some ε &gt; 0 a 1/2+ε approximation is achievable for binary LCS in truly subquadratic time when the input strings can have differing lengths, then for every constant k there exists some δ_k &gt; 0 such that there is a truly subquadratic time algorithm that achieves a 1/k+δ_k approximation for k-ary alphabet LCS. Thus, we show that for constant-factor LCS approximation, the case of binary strings is in some sense the hardest case. We also show that for every constant k, if one is given two strings of equal length over a k-ary alphabet, one can obtain a 1/k+ε approximation for some constant ε &gt; 0 in truly subquadratic time. This extends the Rubinstein and Song result to all alphabets of constant size, and gives the first nontrivial improvement over the naive 1/k approximation for the LCS of strings over alphabets of size k for all k ≥ 3.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emergent Capabilities of Generative Models: “Software 3.0” and Beyond</title>
<link href="https://hdl.handle.net/1721.1/140126" rel="alternate"/>
<author>
<name>Andonian, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/140126</id>
<updated>2022-02-08T03:51:09Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Emergent Capabilities of Generative Models: “Software 3.0” and Beyond
Andonian, Alexander
Modern AI algorithms are rapidly becoming ubiquitous in everyday life and have even been touted as the new “Software 2.0” stack by prominent researchers in the field. Indeed, these algorithms are fundamentally changing the way we interact with computers, and potentially even how we will program them to achieve desired outcomes. In this thesis, we advocate that wielding control over these increasingly powerful models is important for progression in the field and, more importantly, for ensuring that models deployed in the real world behave in the ways we would like, preventing cases where they may do unintended harm. First, we present an empirical study in which we train a large-scale Generative Adversarial Network (GAN) on the MIT Places365 dataset, achieving state-of-the-art Inception scores and Fréchet Inception distance, metrics that are used to evaluate image synthesis quality. We then introduce a GAN framework, GANalyze, that allows one to make targeted manipulations to various cognitive attributes of GAN-generated imagery, such as memorability and emotional valence, and use this framework to surface “visual definitions” of these properties. Through behavioral experiments, we verify that our method discovers image manipulations that causally affect human memory performance. Finally, we build on this framework by incorporating a powerful new pretrained text-image semantic similarity model to create a novel image editing application that allows users to “paint by word.” Altogether, this progression of work underscores the advantages of the emerging “Software 3.0” stack, whereby programmers are tasked with orchestrating and fine-tuning the interactions between large-scale foundation models to carry out higher-order tasks.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Current Shuttling Cell Voltage Balancers: Design, Evaluation, and Simulation</title>
<link href="https://hdl.handle.net/1721.1/140121" rel="alternate"/>
<author>
<name>Negm, Mostafa H.</name>
</author>
<id>https://hdl.handle.net/1721.1/140121</id>
<updated>2022-02-08T03:14:40Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Current Shuttling Cell Voltage Balancers: Design, Evaluation, and Simulation
Negm, Mostafa H.
Batteries are becoming increasingly important in a variety of applications, including electric vehicles and ships as well as load matching in electric grids. Cell voltage balancers are critical to extracting maximal performance out of batteries and to extending their lifespan. Charge pump balancers can quickly and efficiently shuttle charge across battery cells to equalize voltages. Component selection of MOSFETs and capacitors is vital in optimizing for performance, cost, and volume. This thesis presents experimental and PSpice simulation data from several capacitor-based charge pump configurations designed for cell voltage balancing. At 0.4 V cell differential, the peak balance current of the 2S balancer was over 9.9 A. At 0.8 V cell differential, the peak balance current of the 4S balancer was over 14.6 A. Ultimately, these charge pumps can be combined to construct a high-current and multilevel cell voltage balancer efficient across a wide range of voltages.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Potential for Using Transportation Network Companies as an Alternative to Transit Station Parking</title>
<link href="https://hdl.handle.net/1721.1/140117" rel="alternate"/>
<author>
<name>Salz, Alexander Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/140117</id>
<updated>2022-02-08T03:31:37Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Potential for Using Transportation Network Companies as an Alternative to Transit Station Parking
Salz, Alexander Michael
Historically, transit agencies have prioritized sustaining existing Park and Rides (P+R) over further Transit Oriented Development (TOD), incurring high financial opportunity costs in the form of an “implicit” parking subsidy and foregoing potential ridership gains in the process. However, the widespread adoption of Transportation Network Companies (TNCs) like Uber and Lyft over the past several years presents a potential opportunity to change the tradeoff transit agencies face between P+R or TOD. This thesis explores the potential for transit agencies to utilize TNCs to reduce demand for transit station parking while decreasing transit agency subsidies and increasing ridership via a mechanism introduced as TNC and Ride or “TNC+R”. &#13;
&#13;
Through a case study of the North Quincy MBTA Station in the Boston Metropolitan Area, this study conducts a financial analysis to quantify the implied P+R subsidy a transit agency incurs by requiring 1:1 parking replacement in an effort to retain all existing P+R users, in lieu of additional TOD revenue and ridership. The analysis estimates that the MBTA is incurring an implicit subsidy of $20 per currently parked car in lieu of an additional 236,700 square feet of TOD. &#13;
&#13;
Taking the calculated parking subsidy amount, I use ridership data for North Quincy Station to model the potential financial savings of subsidizing TNC rides instead of retaining parking spaces in certain situations. The modeling considers short-term rider financial indifference between P+R and TNC+R. The financial analysis estimates that the MBTA could eliminate all of the 852 existing spaces at the North Quincy TOD site and still retain existing ridership through a lower-cost TNC+R subsidy. The subsidy would convert 469 daily P+R users who travel up to 13 minutes to the station to a TNC+R alternative. The switch to TNC+R would allow the transit agency to net another $665,000 annually without any incumbent ridership losses. The average subsidy amount decreases by over 25% to $14.50 per round trip. Finally, the thesis concludes by discussing several ways and situations in which to best use a TNC subsidy. Because of their significantly different cost structures, using transit station parking and TNCs in tandem is generally the best approach, and the best-suited stations are those with high land values and/or with a large number of park-and-riders who live a short distance from the station.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Scalable Server Platform and API Design for Real-Time Health Monitoring and Diagnostics</title>
<link href="https://hdl.handle.net/1721.1/140116" rel="alternate"/>
<author>
<name>Husnoo, Saadiyah B.</name>
</author>
<id>https://hdl.handle.net/1721.1/140116</id>
<updated>2022-02-08T03:05:47Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Scalable Server Platform and API Design for Real-Time Health Monitoring and Diagnostics
Husnoo, Saadiyah B.
Driven by the needs of the COVID-19 pandemic, remote health monitoring services have become mainstream. Machine learning and artificial intelligence present new opportunities for health monitoring and diagnostic support for outpatient care as well as for global health. However, currently available server platforms, particularly for research, are very primitive and fragmented, and offer little support for the development of machine learning models. To address this need, Rich Fletcher’s group at MIT (a.k.a. the Mobile Technology Lab) has developed a server architecture, known as PyMedServer, in conjunction with a host of mobile applications to collect and analyze patient data, with integrated support for machine learning algorithm development. While this platform was successfully used for several clinical studies, the analysis algorithms were tightly coupled with PyMedServer’s Electronic Medical Record (EMR) system, which limited who could use them and how they could be used. In addition, this initial version of PyMedServer did not support complex multi-stage data processing pipelines and did not integrate with third-party applications.&#13;
&#13;
In this thesis, I present specific server API concepts and UI designs for PyMedServer and describe how I extended PyMedServer to support new workflows both for academic research and for third-party integration. I developed new and more robust API endpoints and workflows, while adding a two-stage data processing model in order to separate the step of signal processing (or feature extraction) from the application of the machine learning model. I created a new "anonymous" API to decouple the EMR system from the analysis pipeline. In addition to creating these new APIs, I improved existing APIs with support for data privacy concerns and error codes, with the goal of providing a more useful and user-friendly platform. I also implemented an improved user interface, extending the front-end functionalities to provide additional feedback and information regarding collected data. Several examples are given of different servers and different use cases, for the purpose of illustration.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to Never Walk in a Straight Line Again A Methodology to Stop Making Sense</title>
<link href="https://hdl.handle.net/1721.1/140114" rel="alternate"/>
<author>
<name>Ocampo Aguilar, Jesús</name>
</author>
<id>https://hdl.handle.net/1721.1/140114</id>
<updated>2022-02-08T03:05:19Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">How to Never Walk in a Straight Line Again A Methodology to Stop Making Sense
Ocampo Aguilar, Jesús
Since the neoliberal global Western project declared the lack of alternative economic and social systems in the second half of the twentieth century, the construction and sedimentation of common sense has brought an expanding layer of universality that tends to form a monocultural, governable, hegemonic global society. Backed by rationalism and “scientific” decisions, the effects of this monocultural project have increased bigotry, blind nationalism, and the loss of a sense of scale that threaten every connection not just between humans but within our entangled relations with the environment. “Common sense” plays a key role in the formation of such monocultural structures supported and legitimized by regimes of governmentality.&#13;
&#13;
The potential for divergent possibilities lies in the simple fact that full objectivity is impossible to reach. In the capacity of contemporary art practices and methodologies, we can find a possible subversive agent against the impasse created by increasing bigotry and polarization in political, social, and economic discourses. Artistic practices, in this sense, have the capacity to regain agency, question the construction of these monocultures and linear narratives of thought, and constantly agitate the sedimentation of “common sense” in an agonistic, anthropophagic way.&#13;
&#13;
This thesis begins the agitation by addressing two main issues. First is the inability to comprehend and relate to natural planetary phenomena and their various interconnected scales through scientific means. Second is the excessive and increasing control coming from city planning and its military legacy towards citizens and the environment. Both issues obliterate public life through rigid systems of control using merely profit-based closed models backed by scientific planning methodologies and deterministic computational models. This project approaches both issues in an agonistic, pluralist, heterogeneous, and subjective way, proposing a new methodology for misusing, subverting, and building with other ways of sensing, seeing, and relating among humans and nonhumans. This project “drifts” with Situationist methodologies, digesting different aesthetics with anthropophagic subjectivity and inventing new relational interfaces in a heuretic way. In this way, by understanding bureaucracy, mistranslating, misusing, rescaling, and redoing (occupying) by creating new interfaces--that act as performative and relational tools--we may begin to navigate the potential of divergent possibilities.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neighborhood Mutual Aid Groups and Spaces of Deviant Care</title>
<link href="https://hdl.handle.net/1721.1/140113" rel="alternate"/>
<author>
<name>Martin, Jasmine M.</name>
</author>
<id>https://hdl.handle.net/1721.1/140113</id>
<updated>2022-02-08T04:02:19Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Neighborhood Mutual Aid Groups and Spaces of Deviant Care
Martin, Jasmine M.
In mid-March 2020, US residents witnessed mass mobilization to ensure that vulnerable community members (including food-, housing-, and income-insecure individuals, as well as disabled, elderly, and immunocompromised individuals) had their immediate needs met in the wake of the COVID-19 pandemic and the roll-out of stay-at-home orders. Many of these COVID-19 neighborhood mutual aid groups were located in metropolitan areas experiencing simultaneous processes of gentrification, racial banishment, and displacement. The presence of these groups, especially those composed mainly of white, young gentrifiers, challenged previously held notions of ‘care’. For some, this phenomenon raised the question of whether all care ensures survival. And for whom does care, as offered by mutual aid groups, ensure survival?&#13;
&#13;
The purpose of this thesis is to think through the role COVID-19 neighborhood mutual aid groups play in foreclosing or furthering Black survival. Building off the work of scholars of care and deviance (Joan Tronto, Dorothy Roberts, Ren Yo Hwang, and Cathy Cohen), this thesis contributes to both fields by inquiring about the places where deviant care is practiced. In this thesis, I propose a theory of deviant care geographies, which I define as “productions of space/place which resist Black subjection and Black death by emotionally, materially, and ontologically attending to the survival of ‘deviant’ persons.” Deviant care geographies require the centering of Black people, move away from care as a tenuous institution or as something allocated according to merit, refuse liberal, individualist notions of which society members are worthy of care, and create new socio-spatial relations. Using this theory of spatialized care, I analyze six interviews from a Central Brooklyn-based COVID-19 neighborhood mutual aid group to explore how practices of mutual aid groups and their members further or foreclose the creation of spaces for deviant care practice.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate and Air Quality Impacts of Electric Vehicles and Comparison to U.S. Tax Credits</title>
<link href="https://hdl.handle.net/1721.1/140112" rel="alternate"/>
<author>
<name>Park, Tae Joong (TJ)</name>
</author>
<id>https://hdl.handle.net/1721.1/140112</id>
<updated>2022-02-08T03:33:54Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Climate and Air Quality Impacts of Electric Vehicles and Comparison to U.S. Tax Credits
Park, Tae Joong (TJ)
Road transportation is the largest contributor to CO2 and second largest contributor to early deaths from air pollution in the U.S. To decarbonize, meet its contribution to the Paris Agreement goal of global average temperature rise &lt;2°C, and curb human health impacts from air quality, the U.S. must reduce its road sector emissions. Electrification of the light duty vehicle fleet is a potential solution; the federal government offers a $7,500 tax credit on battery electric vehicle (BEV) purchases, and 15 states offer rebates of up to $5,000. This study compares the monetized climate and air quality impacts of driving a BEV over a gasoline internal combustion engine vehicle (ICEV) for its full useful life to the federal and state subsidies in the 48 contiguous states. This comparison is an indicator of how well matched the subsidies are to the potential benefits. I use driving patterns across urban, suburban, and rural regions in addition to ICEV and BEV emission factors to compile an emissions inventory across vehicle sizes, trims, and model years. I convert the air pollution emissions to early deaths using new mortality scaling factors calculated from previous work. I determine the monetized climate impacts using the social cost of carbon, and the air quality impacts using the mean value of a statistical life. I find that for a new base trim non-luxury compact SUV, the BEV is on average a $1,212 benefit for climate in 46 states, a $1,555 benefit for air quality in 32 states, and a combined average $2,391 benefit in 42 states. The climate and air quality benefit is smaller than the federal and state subsidy in all states by an average of $6,320, except in New Jersey, where the benefit is larger by $30. In states where there are BEV damages over an ICEV, no state subsidy is offered.
I find that the average combined BEV benefit is positive in all states and 3.8 times larger for a top trim large luxury sedan than for a base trim non-luxury compact SUV, due to minimal efficiency penalties for higher performance when upgrading to top trims for BEVs compared to ICEVs. I also find that ammonia (NH3) dominates the total damages, contributing 56% and 37% for ICEVs and BEVs, respectively, in Massachusetts, for example. Three-way catalytic converters (TWC) in gasoline ICEVs produce NH3, while selective catalytic reduction (SCR) used in power plants exhibits ammonia slip; both reduce nitrogen oxides (NOx). The northeast and west coast states have higher benefits, while midwestern states have smaller benefits or damages from BEVs. Careful evaluation is needed to avoid climate and air quality damages from BEVs when considering expansion and/or extension of the federal subsidy, due to regional disparities in the emissions of the electric grid. This highlights the importance of emissions reduction from the electric grid along with vehicle fleet electrification.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Artificial Intelligence and Mobile Technologies to Enable Practical Screening for Diabetic Retinopathy</title>
<link href="https://hdl.handle.net/1721.1/140110" rel="alternate"/>
<author>
<name>Sitienei, Christabel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/140110</id>
<updated>2022-02-08T03:33:57Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Applying Artificial Intelligence and Mobile Technologies to Enable Practical Screening for Diabetic Retinopathy
Sitienei, Christabel J.
With burgeoning middle class populations and changing lifestyles, diabetes is rapidly emerging as a major health concern worldwide. A related complication, Diabetic Retinopathy (DR), affects approximately 1 in 3 people with diabetes and is the leading cause of adult blindness worldwide. Since DR often goes undiagnosed, there has been great interest in using artificial intelligence in the form of deep learning algorithms to automatically predict DR using retina (fundus) images. However, the practical application of these algorithms is impeded by the different levels of DR severity, limited algorithm reproducibility, and a large variability in fundus imaging devices. To address these concerns, I present the development of an image processing pipeline and a neural net algorithm that automatically tests image quality (based on brightness, color, and amount of blur), rejects poor quality images, and re-formats each image into a standard image resolution. In order to create a generalized model, I used a public Kaggle database of approximately 35,000 retina images and applied a transfer learning approach using the Inception v3 architecture to build a convolutional neural net (CNN) model that predicts referable DR. As expected, the performance of the resulting model depended on the severity of DR, with AUCs ranging from 0.96 for severe DR to 0.74 for mild DR. I further customized the model using a smaller dataset of 1156 smartphone images from our clinical partner in India. On the Kaggle dataset, the best model achieved a sensitivity of 0.85 and specificity of 0.87. On the smaller dataset, the model attained a sensitivity of 0.82 and specificity of 0.80. Finally, as a possible alternative to fundus imaging, I explore the use of iris images to investigate possible correlations with diabetic retinopathy and diabetes.
Using cluster analysis and generating plots with PCA and t-SNE, we observed possible clusters when using GLCM features; however, there was insufficient data from healthy individuals to draw any significant conclusions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering Tweets via Tweet Embeddings</title>
<link href="https://hdl.handle.net/1721.1/140109" rel="alternate"/>
<author>
<name>Sun, Daniel X.</name>
</author>
<id>https://hdl.handle.net/1721.1/140109</id>
<updated>2022-02-08T03:12:40Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Clustering Tweets via Tweet Embeddings
Sun, Daniel X.
Twitter is a popular social media platform where users interact through follows and tweets. This work explores computational methods of analyzing tweets with regards to understanding users and their interests. We consider various embedding models to produce tweet embeddings, which we then use to cluster the tweets, forming groups of semantically similar tweets. We then compare these tweet clusters to users clustered by interest based on accounts they follow. This work introduces techniques on how to effectively cluster tweets by semantic meaning despite the colloquial structure of tweet language. We also discuss how the topics of these tweet clusters align with the interests derived from the follow-based clustering approach, and provide insights into where they do and don’t intersect.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exogenous drivers of public transit and ride-hailing ridership: a study of policy intervention, COVID-19, and the relationship between ride-hailing and public transit in Chicago</title>
<link href="https://hdl.handle.net/1721.1/140108" rel="alternate"/>
<author>
<name>Meredith-Karam, Patrick S.</name>
</author>
<id>https://hdl.handle.net/1721.1/140108</id>
<updated>2022-02-08T03:41:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Exogenous drivers of public transit and ride-hailing ridership: a study of policy intervention, COVID-19, and the relationship between ride-hailing and public transit in Chicago
Meredith-Karam, Patrick S.
Public transit is a crucial component of the urban mobility system for many cities, but several recent shocks have threatened its continued function. Additionally, Transportation Network Companies (TNCs) have grown rapidly in recent years, expanding travel choices for some but posing a challenge to public transportation, prompting the City of Chicago to price and regulate TNC services. The backdrop of the COVID-19 pandemic has posed further shocks to both travel modes and their riders. In response to these changes, this thesis asks the question of “How have public transit and TNC riders responded to various external factors, including a direct policy intervention, a public health emergency, and emerging mobility services, and what lessons can be extracted for policymakers and transit system operators?”&#13;
&#13;
Through Chicago-based case studies of the questions above, this thesis examines the impacts of these shocks to urban mobility and extracts relevant takeaways for policymakers and transit agencies. The studies find that policy interventions may not cause anticipated changes to travel behavior, and that policy impacts may differ substantially across space. These case studies provide examples that policymakers can use to evaluate program impacts and inform future policy adjustments. Regression analysis and survey findings highlight the importance of public transit in moving essential workers during the COVID-19 pandemic and identify core ridership among bus riders and minority populations. This thesis also demonstrates that TNC services act significantly in competition with public transit, but finds that the relationship became less competitive during COVID-19.&#13;
&#13;
Chicago’s mobility landscape has undergone transformative change in recent years, and the future of the urban transportation system is uncertain as we recover from COVID-19. In the establishment of a post-pandemic normal, transit agencies and policymakers will need to continually evaluate the intended and unintended consequences of policy interventions, understand the behaviors and intentions of their riders, and assess their relationship with other modes of transportation. This thesis identifies analysis processes and provides practical examples for performing all these functions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Oil and Gas Production Forecasting using Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/140107" rel="alternate"/>
<author>
<name>Andrais, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/140107</id>
<updated>2022-02-08T03:44:41Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Oil and Gas Production Forecasting using Machine Learning
Andrais, Robert
This thesis improves oil- and gas-well profitability by quantifying the uncertainty of the production-forecasting process, using probabilistic machine learning (ML) techniques. A Bayesian Neural Network successfully modelled a complex shale gas reservoir system (Eagle Ford), generating a production forecast with 5% mean absolute percent error. This result is 10%–35% more accurate than traditional decline curve analysis. These forecasts also quantified the epistemic and aleatory uncertainties, providing plausible probabilistic P10 and P90 values. This range provides analysts with the capability of making informed strategic decisions that consider risk. Next, the model was applied to predict reserves (estimated ultimate recovery) and the underlying reservoir quality. These predictions were combined with unsupervised learning techniques (Gaussian Mixture Modelling), creating gas and oil sweet-spot maps. Finally, this workflow’s robustness was demonstrated by artificially reducing data by 93%; indeed, the algorithm could reproduce the full-dataset results with a 71%–91% Pearson correlation, despite this reduction. Supporting this workflow creation is an evaluation of relevant research, data processing, feature engineering, documentation of the probabilistic ML structure, and discussion of model performance using systems analysis.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Six-Axis Levitated Stage with a Novel Flux-Steering Magnetic Hub Actuator</title>
<link href="https://hdl.handle.net/1721.1/140106" rel="alternate"/>
<author>
<name>Anthis III, Austin Forrest</name>
</author>
<id>https://hdl.handle.net/1721.1/140106</id>
<updated>2022-02-08T03:20:04Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Six-Axis Levitated Stage with a Novel Flux-Steering Magnetic Hub Actuator
Anthis III, Austin Forrest
This thesis presents a two-axis, permanent-magnet-biased, flux-steering actuator and its integration into a levitated stage. The hub actuator consists of a steel stator ring (165 mm OD) and a steel mover cylinder (52 mm OD). The stator has four teeth evenly spaced and pointing radially inward, each with a coil of 454 turns of AWG 20 copper wire. The teeth each have a nominal air gap of 0.5 mm with the mover, though only half of the air gap is used for motion (250 µm radius circle of travel). To increase the force density, permanent magnets are arranged to create north poles on the mover and south poles on each stator tooth to introduce a bias flux in the air gap. The coils then modulate the flux in each air gap and this difference produces a force on the mover. The hub actuator produces 200 N of force in both the X and Y directions using less than 10 W of power. In addition to the 0.5 mm stroke in the horizontal plane, we required relatively large strokes in Z and θz – 3 mm and 6° respectively – and designed the hub to produce approximately constant force through the range.&#13;
&#13;
The six-degree-of-freedom (6 DoF) experimental stage consists of six Lorentz actuators acting on an aluminum disc stage in addition to the hub actuator. Six individual single-input, single-output (SISO) controllers stabilize and control each of the six degrees of freedom. Each coil is powered by a custom, linear, transconductance amplifier with 5 kHz bandwidth. Using the stage, we demonstrated control loop bandwidths of greater than 100 Hz for the hub degrees of freedom and RMS position noise below 0.4 µm in translation and 7 µrad in rotation. During levitated operation, we produced up to 90 N of steady state force with less than 2 W of power as well as 2.8 m/s2 acceleration in a sinusoidal position trajectory with under 2 W power dissipation. The stage was designed as an efficient fine-motion stage for photolithography or other precision positioning applications.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>First-passage time analysis of particle transport in the cytoplasm</title>
<link href="https://hdl.handle.net/1721.1/140105" rel="alternate"/>
<author>
<name>Dhaliwal, Vira</name>
</author>
<id>https://hdl.handle.net/1721.1/140105</id>
<updated>2022-02-08T03:02:24Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">First-passage time analysis of particle transport in the cytoplasm
Dhaliwal, Vira
Cell mechanics are often probed by tracking fluorescent tracer particles embedded in the cytoplasm. The analysis of such experiments typically involves computation of the mean-square displacement of the particles, and thus ignores the variation in how individual particles are transported by activity within the cell. Here, first-passage time (FPT) analysis is presented as an alternate measure that can better represent the diversity of particle behavior. FPT analysis reveals that the diffusive-like motion of tracer particles cannot be accurately modeled as random-walk diffusion due to inhomogeneity of particle transport rates. The technique is then used to investigate the effect of vimentin intermediate filaments (VIFs) on cytoplasmic transport. We find that VIFs significantly inhibit the displacement of objects in the cytoplasm.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Novel Mechatronic System to Test Prosthetic Feet Under Specific Walking Activity Loads and Evaluate their Lower Leg Trajectory Error</title>
<link href="https://hdl.handle.net/1721.1/140104" rel="alternate"/>
<author>
<name>Peterson, Heidi V.</name>
</author>
<id>https://hdl.handle.net/1721.1/140104</id>
<updated>2023-01-08T15:56:16Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Design of a Novel Mechatronic System to Test Prosthetic Feet Under Specific Walking Activity Loads and Evaluate their Lower Leg Trajectory Error
Peterson, Heidi V.
Lower limb amputees, numbered at more than 40 million globally, are challenged with limited mobility due to prosthetic devices that do not fully restore the functionalities of their biological limbs. While commercially available energy storage and return feet do restore some of the functionalities of a missing limb, the development and use of these prosthetic devices are limited by the current design, evaluation, and prescription processes. This is because the connection between the combined mechanical characteristics of a foot and user outcomes, such as mobility, comfort, and walking effort, is not fully understood.&#13;
&#13;
The lower leg trajectory error (LLTE) is a novel prosthetic foot performance metric that provides a quantitative connection between the mechanical characteristics of a foot and the expected gait of an amputee. Thus far, the LLTE value of a foot has only been calculated via simulation, which limits the practical use of the metric in prosthetic foot design, evaluation, and prescription. One way to systematically measure the LLTE value of a physical prosthetic foot would be through a mechanical bench test, but the capabilities of existing bench testing devices are insufficient due to limited degrees of actuation and reported accuracy.&#13;
&#13;
The purpose of this work was to design the Prosthetic Foot Testing Device (PFTD), a mechatronic testing device that could apply specific and uncoupled ground reaction forces (GRFs) to any center of pressure (CoP) on a foot and measure its deflection, through which it could measure the LLTE value and thus predict the walking performance of any passive prosthetic foot. First, we determined high-level functional requirements of the PFTD, including the ranges of reference loads and prosthetic foot deflections as well as the LLTE measurement accuracy, such that the PFTD could meaningfully measure the full range of commercially available prosthetic feet. Second, we derived the relationships between the variables used to calculate the LLTE metric and those controlled or measured by the PFTD. Third, we used these relationships to design the PFTD and perform sensitivity analysis to ensure it could meaningfully and accurately measure the LLTE value of any passive prosthetic foot. In future work, the PFTD will be built, validated, and used to measure and compare the LLTE values of various prosthetic feet. The PFTD and theory presented herein may become a new tool in the prosthetics industry to systematically and amputee-independently measure and compare the performance of prosthetic devices using the LLTE value as a universal metric, which could ultimately improve the development and prescription processes of prostheses.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel View Synthesis from Casually Recorded Videos</title>
<link href="https://hdl.handle.net/1721.1/140103" rel="alternate"/>
<author>
<name>Qian, Eric Ding</name>
</author>
<id>https://hdl.handle.net/1721.1/140103</id>
<updated>2022-02-08T04:07:08Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Novel View Synthesis from Casually Recorded Videos
Qian, Eric Ding
Generating new, photorealistic views of a scene given only a single video is a difficult task that computer vision researchers have worked on for decades. This problem has recently seen a resurgence in interest due to its potential application in areas such as virtual reality. However, current novel view synthesis techniques are not suitable for the short, casual videos that people typically record. Such videos deviate from the setups these approaches typically assume, where dense, high-resolution images of the scene are available. In this paper, we propose a method for refining an initial, coarse scene geometry, which we then use for novel view synthesis on short video sequences. The core of our method is a geometry refinement step in which we project the geometry to source views to remove inconsistent points. This refined geometry provides important shape and appearance information in data-poor regions that would otherwise be difficult to render accurately. We evaluate our approach on the RealEstate10K dataset and demonstrate that, compared to prior work, we synthesize views that are more temporally consistent.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring the Impact of Elections on Judge Behavior Using Machine Learning and Economics Tools</title>
<link href="https://hdl.handle.net/1721.1/140100" rel="alternate"/>
<author>
<name>Chin, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/140100</id>
<updated>2022-02-08T03:45:19Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Measuring the Impact of Elections on Judge Behavior Using Machine Learning and Economics Tools
Chin, Caroline
Judges play a critical role in maintaining a fair and independent criminal justice system. Using a combination of empirical tools from Computer Science and Economics, this paper examines the effects of judicial elections on decisions by magistrate court judges in Pennsylvania. I find that judges who are running in contested primary races dismiss fewer cases in the months leading up to their election. This effect is driven mostly by changes in the treatment of misdemeanor cases. Judges running in competitive primary races dismiss 16.2% fewer misdemeanor cases in the three months preceding their election date. This effect is consistent across estimates derived from linear regression methods as well as machine learning methods including lasso, decision tree and random forest models. In the context of prior research, these findings suggest that electoral pressure induces harsher treatment by judges across all stages of the judicial system.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Stratification Maxima of the Seasonally Varying Surface Layer in the Arctic Ocean’s Beaufort Gyre</title>
<link href="https://hdl.handle.net/1721.1/140099" rel="alternate"/>
<author>
<name>Roemer, Peter Albert</name>
</author>
<id>https://hdl.handle.net/1721.1/140099</id>
<updated>2022-02-08T03:42:29Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Stratification Maxima of the Seasonally Varying Surface Layer in the Arctic Ocean’s Beaufort Gyre
Roemer, Peter Albert
The Beaufort Gyre region of the Arctic Ocean is strongly stratified at the base of the wintertime mixed layer, which impedes the vertical transport of heat, energy, and other tracers. Ice-Tethered Profiler observations during 2004-2018 were used to characterize and investigate the seasonal and interannual variability of the strength, depth, density, and thickness of this highly stratified layer at the base of the mixed layer. This includes investigating the remnant stratification maximum, which formed when the summer mixed layer shoaled. Seasonally, the stratification maximum was never in a steady state. It was largest in October (4.8 × 10⁻³ rad²/sec²) and decreased during all winter months (to 2.3 × 10⁻³ rad²/sec² in June), indicating that surface forcing and interior vertical mixing were never in equilibrium during the year. Interannually, the period from 2011-2018 had a higher stratification maximum than the period from 2005-2010, regardless of the season. The remnant stratification maximum was consistently weaker than the winter stratification maximum from which it formed. The initial evolution of the remnant stratification maximum is used to estimate an effective vertical diffusivity of order 10⁻⁶ m²/s. No significant geographic variability was found, in part due to the high temporal and small-scale variability of the stratification maximum layer. Implications for heat transport to the sea ice cover are discussed.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CMOS THz-ID: A 1.6mm² Package-Less Identification Tag Using 260-GHz Far-Field Backscatter Communication</title>
<link href="https://hdl.handle.net/1721.1/140094" rel="alternate"/>
<author>
<name>Khan, Muhammad Ibrahim Wasiq</name>
</author>
<id>https://hdl.handle.net/1721.1/140094</id>
<updated>2022-02-08T03:12:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">CMOS THz-ID: A 1.6mm² Package-Less Identification Tag Using 260-GHz Far-Field Backscatter Communication
Khan, Muhammad Ibrahim Wasiq
Radio Frequency Identification (RFID) tags have been widely used for counterfeit mitigation, authentication, and supply chain management. Small form factor, power efficiency, and cost are important requirements for these tags, which are often limited by the off-chip antenna and packaging. Operating at Terahertz (THz) frequencies removes these limitations by enabling an mm-size on-chip antenna array with sufficient gain. In this thesis, I present an ultra-small identification tag that is entirely built in a CMOS chip without external components. The use of backscatter communications at 260 GHz enables full integration of a 2×2 patch antenna array. For chip compactness and minimal interference from direct wave reflection, the backscatter signal is frequency-shifted by 2 MHz and radiated with cross-polarization from the same antenna array. Such a configuration also, for the first time for RF tags, enables beam-steering for an enhanced link budget. The presented tag has a peak power consumption of 21 μW and can be powered by a chip-wide array of photodiodes. Using a low-cost 65-nm bulk CMOS technology, the THz-ID chip has an area of only 1.6 mm² and demonstrates a measured downlink speed of 100 kbps and an uplink speed of 2 kbps across a 5 cm distance from the reader. The tag-reader authentication/communication protocol is fully demonstrated using external tag power and partially demonstrated using the tag-integrated photovoltaic powering. The tag size is the smallest among all prior RFIDs using far-field communications.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shaping of Strategic Staffing System</title>
<link href="https://hdl.handle.net/1721.1/140091" rel="alternate"/>
<author>
<name>Catalan, Louis C.</name>
</author>
<id>https://hdl.handle.net/1721.1/140091</id>
<updated>2022-02-08T03:39:10Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Shaping of Strategic Staffing System
Catalan, Louis C.
The workforce is the greatest asset of every business, and the success of an organization lies in its ability to attract and retain top talent.  It is the workforce that executes plans and creates and performs the processes and strategies that deliver business results.  To execute business strategies effectively, an organization needs the right quality and quantity of workers: the right number of people with the right skills at the right time, location, and cost.  Intertwining all these variables is not easy.&#13;
&#13;
Workforce planning is an effective methodology for addressing talent-management ambiguity.  The traditional method of pen and paper, or the widely used spreadsheets, are some ways to perform workforce planning; however, the dynamic elements of Strategic Workforce Planning make these tools somewhat irrelevant.  Fast-evolving technology and constantly changing market needs dynamically shift business priorities, requiring a more effective tool that continuously aligns people strategy with business strategy.&#13;
&#13;
This research explores a Strategic Staffing System's functionalities that enable a business to meet workforce demand.  Several leading workforce management systems were studied. A key finding is that no single technology addressed 100% of the stakeholders' functional requirements, but components drawn from multiple systems did.  The Systems Engineering Concept Selection Method was used to surface the best components and generate new concepts to meet users' needs.&#13;
&#13;
This study suggests that there is no ready-to-use system that fully caters to stakeholders' needs.  The components of a “world-class” Strategic Staffing System are available, yet they need to be brought together into a single integrated system.  Until the proposed concept is operationalized, businesses can buy what is currently offered in the market or redesign existing technologies to address the greater need.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factored State Abstraction for Option Learning</title>
<link href="https://hdl.handle.net/1721.1/140090" rel="alternate"/>
<author>
<name>Abdulhai, Marwa</name>
</author>
<id>https://hdl.handle.net/1721.1/140090</id>
<updated>2022-02-08T03:35:38Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Factored State Abstraction for Option Learning
Abdulhai, Marwa
Hierarchical reinforcement learning has focused on discovering temporally extended actions (options) to provide efficient solutions for long-horizon decision-making problems with sparse rewards. One promising approach that learns these options end-to-end in this setting is the option-critic (OC) framework. However, there are several practical limitations of this method, including the lack of diversity between the learned sub-policies and sample inefficiency. This thesis shows that the OC framework does not decompose problems into smaller and largely independent components, but instead increases the problem complexity with each option by considering the entire state space during learning. To address this issue, we introduce state abstracted option-critic (SOC), a new framework that considers both temporal and state abstraction to effectively reduce the problem complexity in sparse reward settings. Our contribution includes learning a factored state space to enable each option to map to a sub-section of the state space. We test our method against hierarchical, non-hierarchical, and state abstraction baselines to demonstrate better sample efficiency and higher overall performance in both image and large vector-state representations under sparse reward settings.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Clinical Pain Management and Patient Experiences of Pain from Electronic Health Records</title>
<link href="https://hdl.handle.net/1721.1/140086" rel="alternate"/>
<author>
<name>Vaughn, Julie R.</name>
</author>
<id>https://hdl.handle.net/1721.1/140086</id>
<updated>2022-02-08T03:34:22Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Understanding Clinical Pain Management and Patient Experiences of Pain from Electronic Health Records
Vaughn, Julie R.
Opioid prescription practices in clinical settings are frequently variable and subjective. Improper usage of prescription opioids is in turn a massive public health issue in the United States. However, lack of pain medication can also leave patients unable to perform daily activities due to unmitigated pain. In this thesis, we find that data on opioid prescriptions and self-reported pain reveal differences in how patients of different demographics report pain and in how providers choose to prescribe opioids. We analyze data from two distinct populations, the MIMIC III ICU dataset and records from general medical services at Brigham and Women’s Hospital (BWH). This work is undertaken in collaboration with providers at BWH in Boston. To help quantify and standardize patients’ experiences of pain, we may consider the concept of functional pain — i.e., whether the patient is in too much pain to perform basic activities such as turning or walking. This gives rise to the clinical Functional Pain Scale (FPS), which we endeavor to apply retrospectively to clinical notes. We identify and isolate relevant notes, annotate them for relevant spans, and assign an overall functional pain score to each note based on BWH guidelines. Natural Language Processing (NLP) models are then trained to identify these spans and to predict the assigned functional pain score. Through this work, we hope to improve pain management practices and, more broadly, the patient experience.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Humans Among the Clouds</title>
<link href="https://hdl.handle.net/1721.1/140084" rel="alternate"/>
<author>
<name>Sidik, Saima</name>
</author>
<id>https://hdl.handle.net/1721.1/140084</id>
<updated>2022-02-08T04:01:21Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Humans Among the Clouds
Sidik, Saima
For over 100 years, scientists have trekked to an observatory on top of Blue Hill in Milton, Massachusetts to observe and record the weather. When the Observatory was founded in 1885, scientists’ understanding of Earth’s atmosphere was still too limited for them to reliably predict the next day’s weather. Today, a global network of satellites, weather balloons, airplanes, and ocean buoys makes Blue Hill’s contribution to day-to-day weather prediction minimal, and yet the Observatory has found itself at the center of a looming crisis — climate change. Throughout the decades, Blue Hill scientists have stayed deeply committed to consistency. They continue to use thermometers, barometers, anemometers, and other equipment from as far back as the 1800s to avoid disrupting their record by changing their instrumentation. As a result, records from Blue Hill and similar sites are some of the most reliable indicators that the Earth’s climate has shifted in a small, but meaningful, way. Old equipment is difficult to maintain, however, and justifying their seemingly arcane methods to the lawmakers who control their budget can be challenging. As technology evolves, and automation sweeps through disciplines ranging from meteorology to medicine, will Blue Hill weather observers be able to maintain their way of life? And what will science lose if they can’t?
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Analytics Applications for Oil and Gas Processing Facilities</title>
<link href="https://hdl.handle.net/1721.1/140083" rel="alternate"/>
<author>
<name>Machado Roberty, Elias A.</name>
</author>
<id>https://hdl.handle.net/1721.1/140083</id>
<updated>2022-02-08T03:25:26Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Predictive Analytics Applications for Oil and Gas Processing Facilities
Machado Roberty, Elias A.
The oil and gas industry faces profitability and sustainability challenges that demand companies have more efficient, reliable, safe, and environmentally friendly operations. Furthermore, as oil and gas companies embark on the Industry 4.0 journey, the pillar of big data becomes increasingly important in an industry that generates massive amounts of data with little to no value extracted from it. Data are generated across all value chain sectors—upstream, midstream, and downstream—starting at reservoirs and extending to the finished products delivered by the refining and petrochemical sectors. Processing facilities across the value chain, where physical and chemical unit operations convert raw products into intermediate and finished products, generate a wealth of data through their heavily instrumented automatic control systems, operational routines, and quality control systems. Analyzing process data can help companies develop models that predict key process-related parameters so that potential process upsets can be corrected in a timely manner. In addition, predictive models can be incorporated into digital twins to emulate diverse operating scenarios for production optimization or facility design purposes. This thesis investigates and reviews the application of predictive analytics to process data, its potential untapped value, analytics as an enabler of digital twins, and big data analytics frameworks tailored for an oil and gas context. Use cases across all segments of the value chain are reviewed with their respective predictive methods. The value of predictive analytics in oil and gas is assessed by reviewing various sources, including a major oil company success case, followed by the architectural integration of predictive analytics into the development of a digital twin employing a systems-oriented approach. The last chapter discusses the predictive component of a novel approach tailored for process data analytics: Smart Process Analytics (SPA).
The advantages such a framework offers over standard automated predictive model development processes are discussed. Lastly, big data architectures for SPA implementation at process plants are developed.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-path Penalty Metric in Underwater Acoustic Communication for Autonomy and Human Decision-making</title>
<link href="https://hdl.handle.net/1721.1/140080" rel="alternate"/>
<author>
<name>Howard, Bradli Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/140080</id>
<updated>2022-02-08T03:37:54Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Multi-path Penalty Metric in Underwater Acoustic Communication for Autonomy and Human Decision-making
Howard, Bradli Anne
A novel performance metric to improve underwater digital acoustic communication, called Multipath Penalty (MPP), is proposed as an alternative to traditional signal-to-noise ratio (SNR) methods in the context of the Arctic Beaufort Sea. MPP and SNR are compared alongside a third performance metric, Minimum Achievable Error (MAE), which replicates the operation of a channel estimate-based decision feedback equalizer in an acoustic modem. The three metrics are then tested in a hardware-in-the-loop Virtual Ocean simulator for an autonomous undersea vehicle (AUV) communicating with a collaborator. Using field data of modem statistics obtained during ICEX20 and expanded data supplied by the simulator, the calibration of the three metrics to modem packet success is evaluated, resulting in a proposed recalibration of MAE. The AUV's ability to communicate when adaptively choosing its depth is analyzed above and below the Beaufort Lens, and settings for MPP's engineering variables are obtained. The results show that MPP generally improves reception and demodulation of acoustic transmissions over SNR by approximately 5% within an operational range of 8 km, while achieving results similar to those of the more robust MAE metric. MPP is thus an improved utility for underwater digital acoustic communication, both for marine autonomy and as a tactical decision aid.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing the feasibility of a circular economy for 3D&#13;
printed tactile educational aids for visually impaired&#13;
(VI) students in India</title>
<link href="https://hdl.handle.net/1721.1/140078" rel="alternate"/>
<author>
<name>Mandal, Indrayud Biswas</name>
</author>
<id>https://hdl.handle.net/1721.1/140078</id>
<updated>2022-02-08T04:05:52Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Assessing the feasibility of a circular economy for 3D&#13;
printed tactile educational aids for visually impaired&#13;
(VI) students in India
Mandal, Indrayud Biswas
Access to higher STEM education is limited for students with disabilities globally. The issue is particularly prevalent in countries like India, where students are frequently advised against pursuing higher STEM education due to insufficient devices and tools in schools. The unavailability of educational devices in the sciences, along with insufficient pay and attention for instructors in public schools, results in massive under-representation in higher education. Even for those who are brave enough to pursue their dreams, the statistics are incredibly bleak: six out of ten students in the visually impaired (VI) community fail graduate classes because their fundamentals have not been built up sufficiently [6]. The present system results in few VI students pursuing science and technology as their academic and professional future. The resulting skewed participation in higher education and technology professions has the direct consequence that accessibility is treated as an afterthought in product design globally.&#13;
&#13;
The extent of this problem is not apparent to sighted individuals, as they do not face the same challenges as their differently-abled counterparts. Furthermore, these students are segregated from standard schools, so interaction between the two groups of students is minimal. Given such a large problem space, this study focuses on a single disability (VI) and a single subject (geometry). The solution involves tactile devices that can be easily integrated into the curriculum, such as tactile tangram puzzles and the Hexa-compass. &#13;
&#13;
Focus areas of this study have been analyzing the current curriculum and determining how the devices can be integrated into it. Furthermore, stakeholder analysis was conducted based on the REAP framework, along with the formulation of a technology strategy statement for the next five years from an ecosystem and technology perspective. This is a Blue Ocean market analyzed using the Business Model Canvas, and a circular economy model has been piloted in Bengaluru and Chennai, India, over the past nine months. Finally, there is a recount of the challenges encountered, suggestions for future work, and a commentary on the broader socio-technical system.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Engineering Approach to Carbon Accounting Using&#13;
System Theoretic Process Analysis (STPA)</title>
<link href="https://hdl.handle.net/1721.1/140077" rel="alternate"/>
<author>
<name>Ward, John K.</name>
</author>
<id>https://hdl.handle.net/1721.1/140077</id>
<updated>2022-02-08T03:56:41Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Systems Engineering Approach to Carbon Accounting Using&#13;
System Theoretic Process Analysis (STPA)
Ward, John K.
As the threat of climate change escalates unabated, qualitative efforts to address climate change are impeded by a fundamental challenge of quantification, oversimplified by the adage, “If you can’t measure it, you can’t manage it.” Standards and methods of greenhouse gas emissions quantification, colloquially referred to as carbon accounting, remain unsettled.&#13;
&#13;
Though numerous considerations have been given to carbon accounting in the literature, little has been offered on the topic from the discipline of Systems Engineering (SE) and its practice of “systems-thinking”. Whole-system evaluation of carbon accounting has the potential to offer new perspective and insight at the macro (global, country, industry) scale, the meso (organization, project) scale, and the micro (product) scale. Review of SE tools and techniques may serve to inform, accelerate, and ultimately resolve issues of measure, attribution, and effect valuation.&#13;
&#13;
This study evaluates what SE systems-thinking can offer to resolve the complexity of carbon accounting through the application of System Theoretic Process Analysis (STPA). We believe this study to be the first explicit SE consideration of carbon accounting; it makes only an initial entry into resolving accounting complexity by identifying system requirements, observing feedback and process model dependencies, and exposing business organization coupling. In considering future work, this study also identifies which aspects of carbon accounting are relevant to Systems Engineering, specifically in the product development process.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Environmental Effects of the Beaufort Lens on Underwater&#13;
Acoustic Communications during Arctic Operations</title>
<link href="https://hdl.handle.net/1721.1/140076" rel="alternate"/>
<author>
<name>Goodwin, Daniel Wilson</name>
</author>
<id>https://hdl.handle.net/1721.1/140076</id>
<updated>2022-02-08T03:09:00Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Environmental Effects of the Beaufort Lens on Underwater&#13;
Acoustic Communications during Arctic Operations
Goodwin, Daniel Wilson
Operations in the Arctic Ocean are increasingly important due to the changing environment and the resulting global implications. These changes include the availability of new global trade routes, the accessibility of newly exploitable resources in the area, and the national security interests of the United States in the region. It is necessary to build a greater understanding of the undersea environment and how it is changing, since these environmental changes have a direct impact on future operations in the region and on looming global changes as less Arctic ice is present. The recent presence of the Beaufort Lens is changing acoustic propagation paths throughout the Arctic region. Here a network of buoys was employed to communicate with an Autonomous Undersea Vehicle (AUV) while it operated under the ice within the Beaufort Lens, with the goal of achieving near-GPS-quality navigation. The acoustic communication paths were compared using a vertical array throughout the Beaufort Lens, and the resulting beamforming was compared to predictions from BELLHOP. In addition, since acoustic communications are affected by multipath, attenuation, and interference from other sources, it was notable that bottom bounce was sometimes a reliable acoustic path.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Discounted Cash Flow, Decision Analysis, and Flexibility in Design for handling uncertainty in Oil and Gas Capital Projects</title>
<link href="https://hdl.handle.net/1721.1/140075" rel="alternate"/>
<author>
<name>Yemets, Serhiy Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/140075</id>
<updated>2022-02-08T03:46:45Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Comparison of Discounted Cash Flow, Decision Analysis, and Flexibility in Design for handling uncertainty in Oil and Gas Capital Projects
Yemets, Serhiy Y.
Oil and gas companies face multiple uncertainties when designing and implementing capital projects. Capital-heavy projects with long time horizons face uncertainties that are difficult to predict and mitigate. Numerous ways of dealing with these uncertainties exist: decision analysis, real options, flexibility in engineering design, and economic discounted cash flow models. The question remains: which is the most effective way of dealing with uncertainty? &#13;
&#13;
Decision analysis is utilized to understand possible risks and evaluate the decision quality of a project. It addresses uncertainties in the later stages of the project once the design has been completed. Flexibility in engineering design also deals with uncertainty, principally during the design phase of the project. It enables the architecture of the projects to respond to changing circumstances easily and cost-effectively. &#13;
&#13;
This thesis compares and contrasts flexibility in engineering design and its usefulness with decision analysis and discounted cash flow analyses. It focuses on how these methods address uncertainty and compares the project results that received uncertainty treatment during the decision analysis phase with project results where uncertainty is addressed during the project design phase. We examine a hypothetical project case applying methodologies and compare deterministic and probabilistic project models.&#13;
&#13;
The recommendation is to utilize flexibility in engineering design for projects that face high uncertainty. Addressing uncertainty early in the project design phase can deliver the most value compared with treating uncertainty in a later project phase. Flexibility in engineering design is not a substitute for a complete project model, and it can be difficult to select among various flexible designs; nevertheless, flexible designs often perform better. The approach is also easy to apply, supports better development of the complex economic model, and simplifies decision analysis, since uncertainty valuations have been completed early in the project design stage.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for Downstream Oil &amp; Gas Refineries: Applications for Solvent Deasphalting</title>
<link href="https://hdl.handle.net/1721.1/140074" rel="alternate"/>
<author>
<name>Dowell, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/140074</id>
<updated>2022-02-08T03:00:57Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Machine Learning for Downstream Oil &amp; Gas Refineries: Applications for Solvent Deasphalting
Dowell, Christian
This thesis seeks to provide continuous deasphalted oil (DAO) yield estimations for a solvent deasphalting (SDA) unit by constructing modern machine learning models using data sets from a commercial downstream oil and gas refinery in the United States. These data sets include plant operating parameters and laboratory measurements of feed properties. The best machine learning model, determined via an extensive cross-validation procedure, exhibits a high out-of-sample R² value of 0.76. Furthermore, this predictive machine learning model is incorporated into a linear optimization framework to enhance crude oil purchasing decisions for a downstream refinery. Results suggest that the proposed approach, combining predictive and prescriptive analytics, can result in significant profitability gains estimated at $730,000 annually. The results of this model can be utilized for more accurate plant monitoring within oil &amp; gas downstream refineries, as well as improved decision-making by oil and gas planning professionals.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing for Whom: Does Adherence to Massachusetts' 40B Provide Adequate Stock of Housing Types Needed at the Local Level?</title>
<link href="https://hdl.handle.net/1721.1/140071" rel="alternate"/>
<author>
<name>Fay, John T.</name>
</author>
<id>https://hdl.handle.net/1721.1/140071</id>
<updated>2022-02-08T03:51:54Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Housing for Whom: Does Adherence to Massachusetts' 40B Provide Adequate Stock of Housing Types Needed at the Local Level?
Fay, John T.
Massachusetts General Law 40B has been in effect for over 50 years, yet the housing affordability crisis in the state persists. It is well established that onerous bureaucratic permitting requirements, along with restrictive zoning ordinances, constrain the housing market and serve to drive up housing prices. This study aims to determine whether the subsidized housing units built in accordance with the demands of 40B can meet the demographic needs of their respective locales.&#13;
&#13;
To test the hypothesis that municipalities favor subsidized unit types that are less of a draw on municipal finances, I compared the catalog of subsidized units in the study communities to each locale's demographic profile. This analysis showed a weak connection between the types of units built under 40B and the proportional populations they would serve.&#13;
&#13;
These results suggest the need for stricter documentation and reporting standards in the administration of 40B, and point to examples from other states of how best to address the housing affordability crisis in Massachusetts.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Gaussian Factor Graph Inference for Robotic Navigation</title>
<link href="https://hdl.handle.net/1721.1/140070" rel="alternate"/>
<author>
<name>Pu, Can</name>
</author>
<id>https://hdl.handle.net/1721.1/140070</id>
<updated>2022-02-08T04:05:25Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Non-Gaussian Factor Graph Inference for Robotic Navigation
Pu, Can
This thesis addresses non-Gaussian factor graph inference problems that arise in simultaneous localization and mapping (SLAM). We present a general framework to draw samples from the joint posterior distributions of a SLAM problem via ancestral sampling on the Bayes tree. This conditional sampling framework works by traversing all cliques of the Bayes tree from the leaves to the root to learn the local conditional distributions, then sampling from the conditional distributions from the root to the leaves. By leveraging the Bayes tree, the conditional sampling framework is able to exploit the sparsity structure of the factor graph, thus enabling efficient incremental updates similar to iSAM2, albeit in the more challenging non-Gaussian setting. With this conditional sampling framework, we use normalizing flows to learn local conditional distributions on cliques of the Bayes tree. The normalizing flows exploit the expressive power of neural networks, training a coupling function that connects a low-dimensional non-Gaussian distribution to a standard Gaussian distribution. Together with our conditional sampling framework, normalizing flows yield a novel non-Gaussian inference algorithm, Normalizing Flow iSAM (NF-iSAM), for solving high-dimensional SLAM problems with non-Gaussian factors. We demonstrate the performance of NF-iSAM and compare it against state-of-the-art algorithms such as iSAM2 (Gaussian) and mm-iSAM (non-Gaussian) on synthetic and real range-only SLAM datasets. NF-iSAM shows better accuracy and efficiency than mm-iSAM, and is able to capture the non-Gaussian posterior distributions that iSAM2 cannot tackle.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning-based Methods for Occluder-aided Non-Line-of-Sight Imaging</title>
<link href="https://hdl.handle.net/1721.1/140069" rel="alternate"/>
<author>
<name>Medin, Safa C.</name>
</author>
<id>https://hdl.handle.net/1721.1/140069</id>
<updated>2022-02-08T03:47:29Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Learning-based Methods for Occluder-aided Non-Line-of-Sight Imaging
Medin, Safa C.
Imaging scenes that are not in our direct line-of-sight, referred to as non-line-of-sight (NLOS) imaging, has recently gained considerable attention from the computational imaging community. With a diverse set of potential applications in several domains, NLOS imaging is an emerging topic with many unanswered questions despite the progress made in the last decade. In this thesis, we aim to find answers to some of these questions by focusing on a popular NLOS imaging setting, namely occluder-aided imaging, which exploits occluding structures in the scene to extract information about the hidden scene. We first focus on the scene classification problem, studying the identification of individuals by exploiting shadows cast by occluding objects on a diffuse surface. In particular, we develop a learning-based method that discovers hidden cues in the shadows and relies on building synthetic scenes composed of 3D face models obtained from a single photograph of each identity. We transfer what we learn from the synthetic data to the real data using domain adaptation in a completely unsupervised way and report classification accuracies over 75% for a binary classification task that takes place in a scene with unknown geometry and occluding objects. Next, we focus on the problem of scene estimation, which aims to recover an image of the hidden scene from NLOS measurements. We present a learning-based framework that exploits deep generative models and demonstrate the promise of this framework via simulations.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practical Modern Quantum Programming</title>
<link href="https://hdl.handle.net/1721.1/140068" rel="alternate"/>
<author>
<name>McNally, Christopher Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/140068</id>
<updated>2022-02-08T03:49:24Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Practical Modern Quantum Programming
McNally, Christopher Michael
In this thesis we present a compiler for Cavy, an imperative quantum programming language. The main contribution of the Cavy system is the application of region inference to the problem of safe and efficient ancilla qubit allocation, use, and deallocation in a programming language with a reversible subset. This approach enables the compilation of optimized quantum circuits from programs with arbitrary ancilla operations. In contrast with other recent work on ancilla deallocation, the safety analysis is a variant of the borrow checker introduced in the Rust programming language. It features “move references,” a unique reference type that can safely transfer ownership of its referent.&#13;
&#13;
To frame the problem and motivate these features, we describe a quantum algorithm whose recent experimental implementation strains the expressiveness of traditional linearly-typed quantum programming languages, and give a Cavy implementation of this algorithm.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Reliability Analysis under the Multivariate Normal Model</title>
<link href="https://hdl.handle.net/1721.1/140063" rel="alternate"/>
<author>
<name>Wigmore, Jerrod (Jerrod Alexander)</name>
</author>
<id>https://hdl.handle.net/1721.1/140063</id>
<updated>2026-03-19T14:51:35Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Network Reliability Analysis under the Multivariate Normal Model
Wigmore, Jerrod (Jerrod Alexander)
High Frequency (HF) radios have been used since the early 20th century for long-distance communication. HF communication systems primarily utilize skywave propagation, in which the ionosphere is used to reflect radio waves back to Earth. The performance of HF communication links is directly tied to the ionospheric propagation medium. The ionosphere is a highly variable and irregular environment that creates many challenges in the design of robust HF communication networks. Ionospheric characteristics vary temporally and are spatially correlated. As a result, link failures within an HF network may also be correlated. In this thesis we develop a novel model for probabilistic link failures that captures the correlation expected in HF communication networks. We focus on two problems related to the design of robust HF networks: (1) the Network Reliability Problem, which seeks to compute the probability that a network is operational in the presence of random link failures, and (2) the Most Reliable Path Problem, which seeks to identify the most reliable path between two nodes in a network.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delegation with Updatable Unambiguous Proofs and PPAD-Hardness</title>
<link href="https://hdl.handle.net/1721.1/140059" rel="alternate"/>
<author>
<name>Yang, Lisa L.</name>
</author>
<id>https://hdl.handle.net/1721.1/140059</id>
<updated>2022-09-22T10:17:36Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Delegation with Updatable Unambiguous Proofs and PPAD-Hardness
Yang, Lisa L.
In this work, we construct an updatable and unambiguous delegation scheme based on the decisional assumption on bilinear groups introduced by Kalai, Paneth and Yang [STOC 2019]. Using this delegation scheme, we show PPAD-hardness (and hence the hardness of computing Nash equilibria) based on the quasi-polynomial hardness of this bilinear group assumption and any hard language that is decidable in quasi-polynomial time and polynomial space.&#13;
&#13;
The delegation scheme is for super-polynomial time deterministic computations and is publicly verifiable and non-interactive in the common reference string (CRS) model. It is updatable meaning that given a proof for the statement that a Turing machine reaches some configuration C in T steps, it is efficient to update it into a proof for the statement that the machine reaches the next configuration C' in T+1 steps. It is unambiguous meaning that it is hard to produce two different proofs for the same statement.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Analysis of Japanese and Western Corporate Venture Capital</title>
<link href="https://hdl.handle.net/1721.1/140057" rel="alternate"/>
<author>
<name>Okamoto, Tomohisa</name>
</author>
<id>https://hdl.handle.net/1721.1/140057</id>
<updated>2022-02-08T03:35:51Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Comparative Analysis of Japanese and Western Corporate Venture Capital
Okamoto, Tomohisa
Corporate Venture Capital (CVC) is one method by which companies facilitate open innovation, and companies around the world have long invested in startups through CVC. In Japan, more and more large companies have established CVC arms and have invested in startups over the last five years, following the lead of American and European companies. However, while American companies have long invested in startups and have succeeded in obtaining returns from their CVC investments, the effects of such investments by Japanese companies are still unclear, and many failures have been reported in Japan.&#13;
&#13;
This thesis focuses on a comparative analysis of Japanese and Western CVCs. I examine the purpose, investment policy, prioritized returns, and operations management of each CVC, and conduct a comparative analysis of Japanese and Western CVCs by leveraging public resources and personal insights obtained from interviews. I first research pioneers of both Japanese and American CVC in the high-tech field, then compare Japanese, American, and European CVCs in heavy industry. To deepen the analysis of the latest Japanese CVCs, I also research Japanese CVCs that focus on digital transformation.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enterprise Architecting for Tacit Knowledge Transfer: Sustaining Competitive Advantage:</title>
<link href="https://hdl.handle.net/1721.1/140056" rel="alternate"/>
<author>
<name>Tan, Chun Hern</name>
</author>
<id>https://hdl.handle.net/1721.1/140056</id>
<updated>2022-02-08T03:18:58Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Enterprise Architecting for Tacit Knowledge Transfer: Sustaining Competitive Advantage:
Tan, Chun Hern
Tacit knowledge forms the bulk of a firm’s unique knowledge assets, and it is the main source of a firm’s competitive advantage. Up to 90% of organizational knowledge exists in tacit form, embedded deep within employees’ minds; it is intuitive, unarticulated, and cannot be codified into explicit form. The tacit nature of such knowledge prevents it from being extracted, formalized, and transferred easily. This presents a concern for enterprises, as this knowledge must be captured, stored, and made easily accessible within the organization to sustain competitive advantage. An architecting framework is developed to help users design end-to-end tacit knowledge transfer solutions in enterprises. Solutions generated with the framework consider the amount and type of tacit knowledge to be transferred, transfer pathways, transfer process appraisal, modes of transfer, and overall transfer performance evaluation. Finally, the framework is qualitatively evaluated through an R&amp;D lab case study. The architecting framework is applied to the R&amp;D lab transformation project for tacit knowledge transfer, guided by the broader Architecting Innovative Enterprise Strategy (ARIES) framework.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Autonomous Casualty Status Communication Tool</title>
<link href="https://hdl.handle.net/1721.1/140055" rel="alternate"/>
<author>
<name>Shah, Rishi</name>
</author>
<id>https://hdl.handle.net/1721.1/140055</id>
<updated>2022-02-08T03:44:02Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Autonomous Casualty Status Communication Tool
Shah, Rishi
Currently, military combat medics are tasked with stabilizing casualties that arise during an operation, and preparing them to be evacuated to higher echelons of care. While doing so, medics have minimal time to record what care is being administered and communicate the status of the patient to other stakeholders, particularly as severity of injury increases. This often results in medical evacuation and surgical assets receiving information which is inaccurate, missing vital elements, and/or communicated so closely to the reception of the casualty that it does not yield any benefit in preparation.&#13;
&#13;
This project assesses the feasibility of, and proposes a design for, a novel machine learning-enabled tool to autonomously detect and communicate casualty status information as a redundancy mechanism, ensuring that accurate, comprehensive, and timely medical information reaches higher echelons of care.&#13;
&#13;
This thesis specifically details a system design, establishes data collection protocols, and begins prototyping the machine-learning based components of the tool. Design interviews were conducted with a variety of end-users and key stakeholders of the medic-enabling tool in order to inform and construct a system design. The design proposed here is specifically tailored for US Special Operations Command combat medics, to whom this tool would be most applicable. As a proof of concept for this design, a data collection protocol for egocentric perspective video footage from combat medical training exercises was created. Accuracy baselines for state of the art computer vision and speech detection algorithms were established to assess tool feasibility and as guidance for future development. Preliminary proof of concept results are also used to inform design considerations for future work.&#13;
&#13;
While further algorithm development, design refinement, integration planning, and system testing will be required to field this tool, this thesis lays the groundwork for a novel and potentially life-saving capability.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Development and Deployment of Mobile Apps and Server Platform for Real-World Screening of Pulmonary and Cardiovascular Disease in Low-Resource Areas</title>
<link href="https://hdl.handle.net/1721.1/140054" rel="alternate"/>
<author>
<name>Kukadia, Vedaant</name>
</author>
<id>https://hdl.handle.net/1721.1/140054</id>
<updated>2022-02-08T03:50:33Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Development and Deployment of Mobile Apps and Server Platform for Real-World Screening of Pulmonary and Cardiovascular Disease in Low-Resource Areas
Kukadia, Vedaant
In order to address limited access to health care in low-resource parts of the world, our group at MIT has, over the past 7 years, developed a variety of health screening tools that rely on a smartphone with access to a remote server. This mobile health platform, known as "PyMed," consists of a Django server, a Postgres database, and a variety of Bayesian network machine learning and data processing algorithms implemented in Python. While several different server platforms have been demonstrated by our group over the past few years, a great deal of additional development was required to deploy these technologies in a real-world scenario. In addition to the mobile application software and machine learning algorithms, actual deployment of these technologies required the development of transaction sequences and workflows that enable a health worker, or doctor, in the field to collect data from a patient, process the result in real time on the server, and then receive a complete and usable result on the mobile phone. In this thesis, I discuss the detailed workflow and underlying technology required to perform diagnostic health measurements in a real-world setting. I present the various software modules and server API work that needed to be developed. In addition, I describe how the health results were designed and ultimately presented in a simple and usable format that both the patient and health worker could use and understand. This work addresses two disease categories: cardiovascular disease and pulmonary disease. In total, our group has developed two separate servers, 11 mobile apps, and multiple algorithms for signal processing and diagnostic prediction for these two disease categories.
The work in this thesis was completed in preparation for several field studies in Bangladesh: (1) a study with coronavirus patients in the NIDCH Hospital in Dhaka, Bangladesh; and (2) an efficacy study with private community health workers in two low-resource areas of Chittagong District and Jamalpur, Bangladesh.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process improvement and policy analysis in oil and gas well development and construction through applications of system engineering and system dynamics concepts</title>
<link href="https://hdl.handle.net/1721.1/140053" rel="alternate"/>
<author>
<name>Toleubay, Bagdat</name>
</author>
<id>https://hdl.handle.net/1721.1/140053</id>
<updated>2022-02-08T03:31:11Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Process improvement and policy analysis in oil and gas well development and construction through applications of system engineering and system dynamics concepts
Toleubay, Bagdat
Ever since the boom in unconventional oil and gas reservoir development, an increasing number of companies have employed a manufacturing philosophy and used factory-like models in asset development. This thesis seeks to provide an overview of the main development processes and, with the help of system engineering and system dynamics concepts, model the dynamic behavior of the well factory. Additionally, the main process improvements in the development of unconventional plays were identified and implemented in the system dynamics model. It was shown that increasing organizational capability positively affects the selected performance metrics of a well factory. The thesis describes three main aspects of the well factory: 1) the well manufacturing assembly line; 2) hydrocarbon production; 3) cash generation. Each aspect receives a detailed analysis of supporting subsystems and of the effects of subsystem-to-subsystem interface variables on the well factory. Organizational forgetting is introduced as a significant factor affecting the well factory performance metrics. It was shown that even though there are short-term gains, there is potential for a significant long-run impact due to descoping of resources and teams. The simulations also show how an accumulation of certain inventories reduces well factory performance and how improvement in certain aspects of the well factory would not necessarily lead to improvement in value generation.&#13;
&#13;
There are six main objectives for this study to help support the model building and analysis: 1) Investigate the applicability of lean manufacturing concepts to the well delivery process; 2) Identify the main value generation drivers for unconventional play asset development and their effects on the well factory; 3) Reveal research gaps with respect to the applicability of system dynamics to modeling a well factory; 4) Tie the well delivery model to cash flow and value generation; 5) Identify existing process improvement initiatives in the industry and tie them to the dynamic model of the well factory; 6) Analyze how learning and increasing organizational capability affect the well factory dynamics.&#13;
&#13;
It was found that there are benefits in applying system engineering and system dynamics concepts to analyze policies on process improvement and decision making. Actionable recommendations were provided at the end of the study with the assistance of the system dynamics model of the well factory.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>I, Dentist: Is artificial intelligence the future of oral healthcare?</title>
<link href="https://hdl.handle.net/1721.1/140052" rel="alternate"/>
<author>
<name>Davis, Robert M.</name>
</author>
<id>https://hdl.handle.net/1721.1/140052</id>
<updated>2022-02-08T03:04:12Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">I, Dentist: Is artificial intelligence the future of oral healthcare?
Davis, Robert M.
The field of dentistry has earned a reputation for being more prone to misdiagnosis and overtreatment than other medical subspecialties. This is driven, at least in part, by a professional culture that has traditionally been less scientific and evidence-based than that of general medicine. It is also partially driven by an array of economic pressures that have predisposed dentists towards more aggressive and expensive treatment options, as well as by the legitimate ambiguities of clinical decision making in oral healthcare.&#13;
&#13;
In the past decade, at least half a dozen dental AI companies have begun selling software that they claim can help mitigate the problem of misdiagnosis and overtreatment in dentistry. Some of their systems attempt to do this by monitoring insurance claims and flagging suspicious patterns in patient records. Other systems focus on automating the diagnosis process itself: scanning patient X-rays to identify simple issues such as a cavity, for example, or a tooth abscess.&#13;
&#13;
On paper, dental AI companies pitch their products as helpful tools that can assist dentists by automating busywork and providing a backstop against innocent human error. They suggest that computer vision technology can help ensure that every dentist has access to the latest best practices and evidence-based care recommendations. However, AI technology can do much more than merely play the role of digital assistant. In many ways, these systems also serve as a kind of hall monitor, providing tacit enforcement of the clinical best practices that inform their programming.&#13;
&#13;
This project explores a simple question: What do we gain, and what do we lose, by bringing artificial intelligence to oral healthcare?
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analytical Approach to Automate Stratigraphic Correlation using Well Logging Information</title>
<link href="https://hdl.handle.net/1721.1/140049" rel="alternate"/>
<author>
<name>Parimontonsakul, Monthep</name>
</author>
<id>https://hdl.handle.net/1721.1/140049</id>
<updated>2022-02-08T03:59:44Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Analytical Approach to Automate Stratigraphic Correlation using Well Logging Information
Parimontonsakul, Monthep
Stratigraphic correlation is an essential process and a prerequisite to several other processes in the industry. It is a tedious and time-consuming work process that prevents geoscientists from performing other relevant tasks and becomes a bottleneck to other workflows. With the rapid growth of digital technology, digital transformation has become a key enabler for automating the correlation, applying a data-analytics framework to formulate and solve the problem.&#13;
&#13;
This thesis aims to address and emphasize an opportunity to create computer-assisted, and potentially automated, processes to overcome the cumbersome stratigraphic correlation problem. It integrates systems thinking and analytical thinking into the problem formulation, which comprises involving the right stakeholders and beneficiaries, identifying needs and use cases, and understanding the problem at hand and its consequences. Three analytical problems are formulated and investigated, corresponding to various use cases.&#13;
&#13;
The machine learning pipeline is developed as a foundation of the data-analytic model, intending to automate the model implementation and extract insights. As a result, this thesis emphasizes how different analytical formulations shape the implications and interpretation of the model. The analytical formulation employing the marker-horizon identification concept performs significantly better than the other formulations. The model reveals insights, especially on the information geoscientists need to focus on during correlation analysis, and unveils a path to automating the stratigraphic correlation task.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Fast Method to Analyze Patterns in Airport Noise</title>
<link href="https://hdl.handle.net/1721.1/140044" rel="alternate"/>
<author>
<name>Jansson, Madeleine</name>
</author>
<id>https://hdl.handle.net/1721.1/140044</id>
<updated>2022-02-08T03:59:31Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Development of a Fast Method to Analyze Patterns in Airport Noise
Jansson, Madeleine
Traditional airport noise modeling is limited in its ability to analyze large quantities of flight tracks due to high computation time. As a result, yearly noise reports are often limited to modeling flights from a single “representative day,” which lacks detail arising from the natural dispersion of flight tracks and variety in airport operations occurring throughout an entire year of operations.&#13;
&#13;
A framework for processing actual flight data and applying an existing, fast noise approximation is presented. Tens of thousands of flights can be analyzed in a matter of hours, allowing for a data-comprehensive approach to calculating noise metrics. Method results are cross-validated against the Aviation Environmental Design Tool (AEDT) on a single-event basis and an existing aggregate result on a multi-event basis.&#13;
&#13;
Results for a variety of metrics are presented based on data sourced from Boston Logan International Airport in 2016. Day-Night-Level (DNL) is calculated on a yearly, daily, and hourly basis, highlighting the variability in noise patterns depending on evolving airport runway configuration. N60 is calculated as a supplemental metric on a daily basis.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coordinated Planning and Visualization for an Electromagnetically Actuated Reconfigurable Robot</title>
<link href="https://hdl.handle.net/1721.1/140042" rel="alternate"/>
<author>
<name>Cheng, Leon</name>
</author>
<id>https://hdl.handle.net/1721.1/140042</id>
<updated>2022-02-08T04:04:16Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Coordinated Planning and Visualization for an Electromagnetically Actuated Reconfigurable Robot
Cheng, Leon
Self-reconfigurable modular robots present the promise of a fully functional system that can reassemble itself into arbitrary shapes and sizes. One such robot that could be used for this is a cubic block equipped with electromagnets for propulsion and a microcontroller for communication. &#13;
&#13;
This thesis presents an interface that can be used to facilitate multi-cube movement planning. An interactive 3D web simulation is built to visualize an arbitrary number of cubes and consecutive rotations. The simulation tracks how each cube’s orientation and electromagnet positions change over time. This proves useful when programming movements in physical cube prototypes and allows more complex assemblies to be coordinated. The functionality of the simulation has been validated through successful experiments on an air table and in microgravity during a parabolic flight campaign.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VeGen: A Vectorizer Generator for SIMD and Beyond</title>
<link href="https://hdl.handle.net/1721.1/140040" rel="alternate"/>
<author>
<name>Chen, Yishen</name>
</author>
<id>https://hdl.handle.net/1721.1/140040</id>
<updated>2022-02-08T03:33:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">VeGen: A Vectorizer Generator for SIMD and Beyond
Chen, Yishen
Vector instructions are ubiquitous in modern processors. Traditional compiler auto-vectorization techniques have focused on targeting single instruction multiple data (SIMD) instructions. However, these auto-vectorization techniques are not sufficiently powerful to model non-SIMD vector instructions, which can accelerate applications in domains such as image processing, digital signal processing, and machine learning. To target non-SIMD instructions, compiler developers have resorted to complicated, ad hoc peephole optimizations, expending significant development time while still coming up short. As vector instruction sets continue to rapidly evolve, compilers cannot keep up with these new hardware capabilities.&#13;
&#13;
To facilitate the adoption of complex non-SIMD vector instructions, I propose a new model of vector parallelism that captures the semantics of these instructions, and a new framework that automatically extracts this model of vector parallelism from the formal semantics of the non-SIMD instructions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image registration and bias evaluation for a COVID-19 pulmonary X-ray severity (PXS) score prediction algorithm</title>
<link href="https://hdl.handle.net/1721.1/140039" rel="alternate"/>
<author>
<name>Agarwal, Vibha</name>
</author>
<id>https://hdl.handle.net/1721.1/140039</id>
<updated>2022-02-08T03:02:26Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Image registration and bias evaluation for a COVID-19 pulmonary X-ray severity (PXS) score prediction algorithm
Agarwal, Vibha
As COVID-19 spreads, it is increasingly important to track disease trajectory in order to provide better care for patients. Analyzing chest x-rays (CXRs) is one method used by radiologists to assess disease severity, but manual interpretation is time-consuming and subject to inter- and intra-rater variability. One study has used a Siamese neural network to predict numerical COVID-19 pulmonary disease severity scores [19], but because CXRs from the same patient tend to have differences in positioning and acquisition unrelated to disease progression, image registration can be used to standardize the CXRs to improve longitudinal comparison. In this study, we show that affine image registration using Voxelmorph [3] has the potential to improve the prediction of longitudinal change in COVID-19 severity. Additionally, external generalization is a challenging problem for medical AI, and a model used in healthcare settings must be free of bias in order to be clinically valid. To this end, we analyze the performance of the Siamese prediction model on an external dataset and show that its predictions correlate with expert disease severity labels, and that it performs similarly for different demographic groups (age, sex, BMI, and international location). These preliminary results suggest that the model may be a reliable and equitable way, among the subgroups evaluated, to quantify disease severity.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Makerspaces more accessible for people with visual impairment: Understanding user needs to reimagine solutions.</title>
<link href="https://hdl.handle.net/1721.1/140038" rel="alternate"/>
<author>
<name>Jain, Kritisha Kantilal</name>
</author>
<id>https://hdl.handle.net/1721.1/140038</id>
<updated>2022-02-08T04:08:14Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Making Makerspaces more accessible for people with visual impairment: Understanding user needs to reimagine solutions.
Jain, Kritisha Kantilal
The maker revolution came about as a relief to passionate tinkerers and makers. It made making approachable. This attracted millions of collaborators and empowered them to express their imagination through tangible results. But one subset of tinkerers and makers that this revolution hasn’t successfully included is people with disabilities, who face significant barriers to Making; the irony is that this group stands to benefit the most from this revolution.&#13;
&#13;
As a response, I am asking a fundamental question: How Might We Make Makerspaces More Accessible to People with Visual Impairment? To answer it, I draw on examples from a series of interviews and workshops in which I introduced hardware machines to blind hobbyists and guided assembly of a corn sheller pioneered at the MIT D-Lab workshop. This allowed me to understand the real needs, desires, and frustrations of this group at a more intricate level, and to collaborate with them, through a guided workshop at the MIT D-Lab, on ways to make makerspaces within the MIT ecosystem more accessible as a first step. This exploration has culminated in a bedrock of foundational knowledge that can be used to further work in this area, and a set of robust suggestions applicable to all makerspaces, not just those within MIT. The study has kept the group of potential beneficiaries at its fulcrum through every step of the process, from exploration to testing of solutions, in an agile, iterative, human-centered manner.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Multidisciplinary Analysis of a Stratospheric Airborne Climate Observatory System for Key Climate Risk Areas</title>
<link href="https://hdl.handle.net/1721.1/140037" rel="alternate"/>
<author>
<name>Dewald, Annick</name>
</author>
<id>https://hdl.handle.net/1721.1/140037</id>
<updated>2022-02-08T03:11:07Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Multidisciplinary Analysis of a Stratospheric Airborne Climate Observatory System for Key Climate Risk Areas
Dewald, Annick
A Stratospheric Airborne Climate Observatory System (SACOS) is proposed to leverage recent advancements in key enabling technologies for solar electric flight (batteries and solar cells) and enabling technologies for Earth observation (lidar, radar, laser systems, etc.). Advantages of this observation system include the ability to make in-situ measurements of the stratosphere, to measure at high spatial and temporal resolution, and directability (the trajectory can be adjusted in real time for persistent monitoring or tracking).&#13;
&#13;
Although historical examples of similar solar-electric long-endurance aircraft have faced considerable technical and programmatic challenges, this effort employs several risk mitigation strategies to avoid common pitfalls such as wing structural divergence. The vehicle, mission, and operational strategy are designed in tandem, customizing each aspect of the design to best serve the mission requirements while minimizing risk (modelled by wingspan as a proxy for aero-structural risk). An integrated optimization framework is presented as a tool for aircraft sizing and the key driving parameters are explored, including technology specifications, payload mass and power, and the cruise altitude of the vehicle.  &#13;
&#13;
Several potential climate science missions are then proposed, in each of which the attributes of the SACOS vehicle fill a persistent void in current observational techniques. The sizing tool is used to show the size, capability, and seasonality of a SACOS vehicle designed for each application. This analysis illustrates a rich feasible space and minimal technical risk should the SACOS vehicle operate seasonally (only in summer months, when solar conditions are favorable).
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Cascade Laser Frequency Combs</title>
<link href="https://hdl.handle.net/1721.1/140033" rel="alternate"/>
<author>
<name>Letsou, Theodore Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/140033</id>
<updated>2022-02-08T03:31:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Quantum Cascade Laser Frequency Combs
Letsou, Theodore Peter
Quantum cascade lasers (QCLs) have been the dominant source of high-power infrared radiation ever since their invention in 1994. The ability to engineer their emission wavelengths from 3 &#120583;m to 300 &#120583;m has allowed scientists to use QCLs in a plethora of applications, ranging from spectroscopy to tomography. In addition, QCLs are highly non-linear devices and possess the ability to emit many frequencies of light simultaneously. This has made them excellent candidates for frequency combs, which are broadband light sources that emit equally-spaced frequencies with a well-defined phase relation. By manipulating the optical non-linearities through dispersion engineering, QCLs can be made to enter frequency comb states on demand. By mixing two different frequency combs, absorption features at optical frequencies can be encoded into the radio-frequency domain, eliminating the need for expensive, high-frequency detectors. This "dual-comb spectrometer" offers a chip-scale alternative to bulky spectrometers, making it one of the most attractive applications of QCLs.&#13;
&#13;
This thesis outlines the development, characterization, and theory of QCL frequency combs operating in the atmospheric transmission window (8 &#120583;m – 12 &#120583;m)—a spectral region where many chemical species have their fundamental vibrational absorption bands. By borrowing techniques commonly used in ultra-fast optics, the dispersion of QCLs—which is the primary catalyst for comb formation—can be tuned without the use of mechanically-moving parts. In addition, this thesis utilizes optical coherence techniques to reconstruct the electric field profile of QCL combs, which provides valuable insight into the physics of their formation.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steps Towards a Closed-Loop System for Blood Pressure Control</title>
<link href="https://hdl.handle.net/1721.1/140029" rel="alternate"/>
<author>
<name>Baum, Taylor Elise</name>
</author>
<id>https://hdl.handle.net/1721.1/140029</id>
<updated>2022-02-08T03:52:19Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Steps Towards a Closed-Loop System for Blood Pressure Control
Baum, Taylor Elise
Management of blood pressure in the operating room is currently performed manually by anesthesiologists. Poor blood pressure management has been linked to poor postoperative outcomes. As such, development of a closed-loop system for blood pressure control is warranted, provided it produces improved patient care. In this work, we present a breakdown of the interdisciplinary problem of blood pressure control and initial steps towards solving this problem. Previous attempts at solving this problem fall short in their lack of incorporation of mechanistic cardiovascular models. Our novel contribution is a closed-loop system for blood pressure control built with explicit incorporation of cardiovascular system mechanisms. We first build a pharmacokinetic-pharmacodynamic model of the cardiovascular system in response to cardiovascular system actuators: vasoactive drugs. We use two actuators in our framework: phenylephrine, to raise blood pressure, and nicardipine, to lower blood pressure. We emphasize our use of the two-element Windkessel model in our pharmacodynamic component. We then build a model predictive control framework given this pharmacokinetic-pharmacodynamic model, and present preliminary control simulation results. Our simulation results indicate feasibility of our model-based control design in upcoming experimental studies. In the future, we seek to validate this control framework in vivo, and ultimately improve patient care in operating rooms through optimized blood pressure management.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Learning by Proximity Using An Agent-based Model and Simulation</title>
<link href="https://hdl.handle.net/1721.1/140027" rel="alternate"/>
<author>
<name>Hernandez, Matthew John</name>
</author>
<id>https://hdl.handle.net/1721.1/140027</id>
<updated>2022-02-08T03:49:56Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Investigation of Learning by Proximity Using An Agent-based Model and Simulation
Hernandez, Matthew John
This thesis investigates individual and organizational learning, focusing on the impacts of knowledge acquisition and transfer due to cognitive, social, and organizational proximity. A literature review on individual, team, and organizational learning identified how knowledge is acquired and transferred. Knowledge can be broken down into two main categories: explicit and tacit. Tacit knowledge is difficult to articulate and transmit but is frequently transferred through collaboration. Simulation analyses using an agent-based model were conducted to explore collaboration as a mechanism for knowledge transfer. Large cognitive distances showed significant increases in collaboration times and a decrease in overall organizational performance. Agents with no prior experience acquire more knowledge when placed on mixed-skill teams than on similarly skilled teams, but at the cost of more senior agents’ ability to complete their work demands. With more data readily available, organizations should be more intentional about talent management so that newly developed skills can spread throughout the organization.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AeroSandbox: A Differentiable Framework for Aircraft Design Optimization</title>
<link href="https://hdl.handle.net/1721.1/140023" rel="alternate"/>
<author>
<name>Sharpe, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/140023</id>
<updated>2022-02-08T03:35:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">AeroSandbox: A Differentiable Framework for Aircraft Design Optimization
Sharpe, Peter
This work presents a new computational framework for conceptual aircraft design called AeroSandbox. This framework leverages modern techniques for automatic differentiation developed in the optimal control and machine learning communities. By combining these efficient gradient calculations with robust optimizers such as IPOPT, multidisciplinary aircraft design problems of practical interest can be solved in seconds. We demonstrate this speed with several canonical aircraft design problems in this work, showing that performance and flexibility equal or exceed those of state-of-the-art tools in many cases.&#13;
&#13;
This framework's modular approach to engineering analysis allows sophisticated aerospace problems to be constructed by connecting plug-and-play building blocks in code. This decreases the time required to go from a qualitative vehicle and mission concept to a quantitative, optimized performance estimate. The framework's emphasis on rapid development time and run time enables an engineer to interactively pose design questions, enabling human insight to be more readily applied to the computational design process.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relationships between Class Engagement, Community, and Engineering Design Self-Efficacy in Remote, Kit-based Classes</title>
<link href="https://hdl.handle.net/1721.1/140020" rel="alternate"/>
<author>
<name>Anlage, April</name>
</author>
<id>https://hdl.handle.net/1721.1/140020</id>
<updated>2022-02-08T03:01:10Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Relationships between Class Engagement, Community, and Engineering Design Self-Efficacy in Remote, Kit-based Classes
Anlage, April
Online education has grown explosively in the past 20 years, but experienced an unprecedented bloom during the COVID-19 pandemic, when even largely “hands-on” engineering design classes were forced to operate remotely. As many students and educators bemoaned the shift, urgent questions of how to produce educational value in online environments emerged. Self-efficacy is one measure of student experience that can be used to understand what students gain from a class. In this research, a survey of 81 students in two remote, kit-based engineering design classes in Spring 2021 was conducted to probe the relationships between class engagement, sense of class community, and engineering design self-efficacy. Students with a stronger sense of community were found to have higher engineering design self-efficacy. On a 100-point scale, students with the highest ratings of class community (compared to those with the lowest ratings) had higher confidence by 10.65 points, higher motivation by 14.86 points, and higher expectations of success by 13.38 points. Furthermore, this relationship between community and self-efficacy was statistically significant in confidence, motivation, expectations of success, and anxiety for female-identifying students, and not significant in any lens for male students. &#13;
&#13;
Demographics were also considered for significant effects. Higher levels of engagement were found in female students, and higher levels of motivation in students who worked primarily remotely, as opposed to those who worked on the class project in an on-campus lab. No significantly different community or self-efficacy levels were found among students in different years (sophomore/junior/senior). Additionally, students who identified as an underrepresented minority (URM) race/ethnicity category were found to have statistically significant lower levels of motivation (by 12.39 points) and higher anxiety (by 12.34 points) for engineering design compared to non-URM students. These results inform the future of remote engineering classes and the importance of a sense of community for students in these courses.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Metastudy of Algorithm Lower Bounds</title>
<link href="https://hdl.handle.net/1721.1/140013" rel="alternate"/>
<author>
<name>Liu, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/140013</id>
<updated>2022-02-08T03:43:40Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Metastudy of Algorithm Lower Bounds
Liu, Emily
Algorithms are essential to the field of computer science, and algorithm designers are always searching for the mathematically optimal algorithms. Sherry and Thompson found that improvements to algorithm upper bounds have been steadily decreasing since the 1970s. In this work we aim to discover whether this could be because researchers have already found the optimal versions of many algorithms. In order to get a better sense of the picture, we compiled lower bounds on the algorithm families studied by Sherry and Thompson. We find that, while a few problems still have large gaps between upper and lower bounds where improvement is possible, over three-quarters of these problems are already very close to being optimal! The “slowing progress” may in fact prove to be a triumph in disguise, as it is an indicator that many problems have achieved optimal solutions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Ground Multi-Agent Communication with Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/140012" rel="alternate"/>
<author>
<name>Lin, Toru</name>
</author>
<id>https://hdl.handle.net/1721.1/140012</id>
<updated>2022-02-08T03:26:07Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Learning to Ground Multi-Agent Communication with Autoencoders
Lin, Toru
Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process between agents, but this may require many generations of trial and error. Alternatively, the lingua franca can be given by the environment, where agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a standard representation learning algorithm – autoencoding – is sufficient for arriving at a grounded common language. When agents broadcast these representations, they learn to understand and respond to each other’s utterances, and achieve surprisingly strong task performance across a variety of multi-agent communication environments.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Envisioning Lower Allston’s future: Contested spaces at the margins of Harvard University’s expansion</title>
<link href="https://hdl.handle.net/1721.1/140009" rel="alternate"/>
<author>
<name>Grimaldi, Andrea</name>
</author>
<id>https://hdl.handle.net/1721.1/140009</id>
<updated>2022-02-08T03:36:18Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Envisioning Lower Allston’s future: Contested spaces at the margins of Harvard University’s expansion
Grimaldi, Andrea
Anyone who walks, bikes, rides a bus or a car around Lower Allston today cannot avoid noticing the extent to which the neighborhood is changing and being rebuilt. Changes in global education markets have motivated Harvard University’s expansion in the neighborhood, an expansion that caters to the research and professional interests of the global creative class. With compliance from the leadership of the City of Boston, Harvard is moving ahead with its ambitious campus expansion. Through a close analysis of master plans, maps, and publicly available documentation, this thesis explores the relationship between Lower Allston’s main urban development actors—the neighbors, Harvard University, and the government of Boston—and analyzes each actor’s visions for the neighborhood’s future. When every single acre of land in Boston has been covered by modern classroom spaces, innovative entrepreneurship labs, and pristine landscaping… where will the people live?
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High performance MoS₂ transistors based on wafer-scale low-temperature MOCVD synthesis</title>
<link href="https://hdl.handle.net/1721.1/140004" rel="alternate"/>
<author>
<name>Zhu, Jiadi</name>
</author>
<id>https://hdl.handle.net/1721.1/140004</id>
<updated>2022-02-08T03:23:41Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">High performance MoS₂ transistors based on wafer-scale low-temperature MOCVD synthesis
Zhu, Jiadi
Among all the possible back-end-of-line (BEOL) solutions to improve the integration density and functionality of conventional silicon circuits, two-dimensional (2D) material devices are believed to be very promising, due to their high mobility, relatively large band gaps, atomic-level thickness, performance comparable to that of silicon devices, and great potential for realizing 3D integration. However, wafer-scale growth of high-quality, continuous 2D material thin films at BEOL-compatible temperatures (&lt;400°C) with good uniformity has always been difficult to realize. Achieving low contact resistance to these materials is also very challenging and hinders the development of 2D material devices and circuits. &#13;
&#13;
In this thesis, we will demonstrate a novel 8-inch, BEOL-compatible metal-organic chemical vapor deposition (MOCVD) method for the synthesis of 2D transition metal dichalcogenide materials at growth temperatures below 400°C. Highly-scaled, high-performance MoS₂ transistors will also be investigated with different contact engineering methods. These findings represent crucial steps toward high-performance power electronic circuits as well as ultra-large-scale BEOL integration with silicon circuits.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Public Transit from the Margins: How rethinking public transit in Boston to support the travel patterns of transit-reliant women could transform public transit for the better</title>
<link href="https://hdl.handle.net/1721.1/139999" rel="alternate"/>
<author>
<name>Jacobsen, Adriana</name>
</author>
<id>https://hdl.handle.net/1721.1/139999</id>
<updated>2022-02-08T04:08:56Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Designing Public Transit from the Margins: How rethinking public transit in Boston to support the travel patterns of transit-reliant women could transform public transit for the better
Jacobsen, Adriana
Boston’s public transportation network, the MBTA, is a “hub-and-spokes” system: rail lines radiate out to the suburbs from a few central downtown stations, and traveling between the “spokes” often requires taking multiple buses or traveling all the way inbound to transfer. Particularly on the bus and Commuter Rail systems, off-peak service is limited. For those who live in the suburbs and commute to the city during rush hour, this setup works relatively well. However, many women who depend on public transportation face unique difficulties. Women are more likely to make care-related and household-sustaining trips such as grocery runs and dropping off and picking up children from school, to make multiple trips in a row (trip-chaining), and to feel unsafe on public transit. Understanding the limitations that transit-reliant women face can help to build a more comprehensive public transit system that supports all types of trips and improves public transportation for everyone. This practice aligns with the theory of “designing from the margins”.&#13;
&#13;
Using data from a survey I conducted of almost 200 women in the Boston area, I examine some of the issues and obstacles that these women face when using public transit and visualize many of these roadblocks using QGIS. I then use MBTA performance data and census demographic data to determine where the greatest gaps between transit usage/reliance and transit service occur in Greater Boston, with a particular focus on the effects of the COVID-19 pandemic and the inequities which it exacerbated. To conclude, I suggest some design guidelines for new transit infrastructure that the MBTA could adapt to accommodate the travel patterns of the women surveyed and highlight the Boston neighborhoods and nearby cities that should be prioritized.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systematic Approach for Cybersecurity Risk Management</title>
<link href="https://hdl.handle.net/1721.1/139995" rel="alternate"/>
<author>
<name>Chen, Kristin YiJie</name>
</author>
<id>https://hdl.handle.net/1721.1/139995</id>
<updated>2022-02-08T03:06:47Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Systematic Approach for Cybersecurity Risk Management
Chen, Kristin YiJie
In the last few years, the concern over cybersecurity has grown dramatically. With all the existing, and sometimes competing, guidelines and frameworks intended to inform cyber risk strategies, organizations face the problem of deciding which is right for them. To resolve the confusion, this research proposes a practical and effective model that can be used by organizations of any size or in any industry for cyber risk management. We propose a Cyber Risk Cube (CRC) tool designed to be practical for all parts of an organization, which examines three fundamental pairings for looking at cyber risk: Internal/External, Measurement/Management, and Qualitative/Quantitative. The CRC tool can be used as a common language for sharing ideas and solutions to cyber risk management. Ultimately, the CRC provides details for implementing solutions to managing cyber risks in a concise and standardized manner.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging the US Army Corps of Engineers Public-Private Partnerships (P3) Pilot Program to Promote Equitable Outcomes from Local Climate Mitigation and Adaptation Projects</title>
<link href="https://hdl.handle.net/1721.1/139993" rel="alternate"/>
<author>
<name>Gant, Alexander Paine</name>
</author>
<id>https://hdl.handle.net/1721.1/139993</id>
<updated>2022-02-08T03:39:34Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Leveraging the US Army Corps of Engineers Public-Private Partnerships (P3) Pilot Program to Promote Equitable Outcomes from Local Climate Mitigation and Adaptation Projects
Gant, Alexander Paine
As the negative impacts of a rapidly changing climate continue to exacerbate structural inequities across all sectors and scales, U.S. communities and citizens are increasingly at risk of physical, economic, and environmental harms. Directed federal investment in climate mitigation can reduce disproportionate burdens on at-risk populations, while also providing substantial economic benefits to those individuals and communities. The US Army Corps of Engineers has been the nation's premier flood management agency since the mid-19th century and is uniquely equipped to provide technical and financial support to such communities.&#13;
&#13;
In this client-based thesis, I worked with Aaron Snyder, Lead of the Corps' Water Infrastructure Financing Program and Director of the Corps' Public-Private Partnership (P3) program, to evaluate the Corps' role in developing and stewarding resilient civil works and public infrastructure. We focused on the Corps' role of providing flood protection infrastructure in response to the stressors of increasingly frequent and intense natural disasters. Our goal was to assess how the Corps' recently introduced P3 program can be improved to alleviate disproportionate cost burdens on at-risk communities. Drawing on case studies of weather and water disasters in Nashville, TN; New Bern, NC; Richwood, WV; and Fargo, ND, we find that the theoretical foundations of the cost-benefit analyses currently employed at the onset of the Corps' water resource management projects substantially limit the availability of and access to federal aid for the communities who need it most. We conclude that the new P3 program, if directed to promote equitable outcomes from local climate mitigation and adaptation projects, would allow the Corps to more accurately assess project feasibility, prioritize projects sponsored by non-federal partners, leverage progressive local funding mechanisms, and ultimately reduce climate risks in vulnerable communities while meeting USACE’s mission of protecting U.S. citizens, reducing disaster risk, and providing vital infrastructure needs and solutions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mokumitsu Districts in Tokyo</title>
<link href="https://hdl.handle.net/1721.1/139992" rel="alternate"/>
<author>
<name>Ichikura, Ryuhei</name>
</author>
<id>https://hdl.handle.net/1721.1/139992</id>
<updated>2022-02-08T04:08:32Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Mokumitsu Districts in Tokyo
Ichikura, Ryuhei
Mokumitsu is a feature of urban districts in Japan and means “densely built-up with wooden-structured buildings”. Many of these buildings are categorized as substandard housing because they were built legally in the last century but no longer fulfill the latest building codes. Mokumitsu districts are considered severely unsafe in case of an earthquake because the houses are structurally weak, combustible, and built very close to each other. Especially since the disastrous Kobe Earthquake in 1995, Tokyo’s Mokumitsu districts in particular have become one of the most serious issues for the nation, due to the districts’ large area and the high probability of an earthquake.&#13;
&#13;
Urban renewal—or the demolition and reconstruction of so-called substandard housing—is one of the fundamental measures for disaster mitigation in Mokumitsu districts. It is also important, in terms of disaster preparedness, for the existing residents to keep living in the same community after the renewal. However, the current policies for Tokyo’s Mokumitsu districts are not sufficient to facilitate this renewal. The subsidy for developers hardly incentivizes design for the residents. Even with the direct subsidy for the residents, they face difficulties rebuilding the houses they would prefer on small individual lots that would become even smaller and narrower after the road widening in the renewal.&#13;
&#13;
To accelerate renewal in Tokyo’s Mokumitsu districts through housing cooperatives, this research aims to understand what design preferences the residents have, how renewal by housing cooperative can accommodate those preferences, and whether it is financially feasible and scalable. The author conducted on-site interviews with the residents to ask about their design preferences, tested the design of the renewal, and analyzed the financial feasibility from the perspective of real estate.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Minimax Approach for Learning Gaussian Mixtures</title>
<link href="https://hdl.handle.net/1721.1/139991" rel="alternate"/>
<author>
<name>Wang, William Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/139991</id>
<updated>2022-02-08T04:07:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Minimax Approach for Learning Gaussian Mixtures
Wang, William Wei
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distribution of image, sound, and text data, they perform suboptimally on multi-modal distribution-learning benchmarks including Gaussian mixture models (GMMs). In this thesis, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a non-convex concave minimax optimization problem. We show that a Gradient Descent Ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show this stationary point recovers the true parameters of the underlying GMM. We discuss the application of the proposed GAT-GMM framework for learning GMMs in the distributed federated learning setting, where the widely-used expectation-maximization (EM) algorithm can incur great computational and communication costs. On the other hand, we show that GAT-GMM provides a scalable learning approach and a distributed GDA algorithm can still solve the GAT-GMM minimax problem without incurring extra computation costs. We numerically support our theoretical results by performing experiments which show that our minimax framework is successful in centralized learning tasks and can outperform standard EM-type algorithms in the federated setting.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Subway Vibrancy in Live-Work-Play: A Case Study from and for Santiago, Chile</title>
<link href="https://hdl.handle.net/1721.1/139988" rel="alternate"/>
<author>
<name>Ramos Yáñez, Maria Camila</name>
</author>
<id>https://hdl.handle.net/1721.1/139988</id>
<updated>2022-02-08T03:45:31Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Understanding Subway Vibrancy in Live-Work-Play: A Case Study from and for Santiago, Chile
Ramos Yáñez, Maria Camila
I characterize the subway neighborhood’s activity in three essential categories: live, work, and play. My research examines how Metro de Santiago’s latest expansion – Line 6 – has affected the configuration of neighborhoods across the city. Using multiyear information on new subway developments, data on amenities and jobs distributed across Santiago, real estate transactions in the city, and transit data in GTFS format, the results from the Difference-in-Differences model show that new subway infrastructure contributes positively to the number of openings of new amenities – be it in a subway neighborhood or in another neighborhood that benefits from network effects. The results show that the opening of Line 6 in Santiago has led to an annual increase of 14.85 amenities in treated cells when considering improvements in accessibility to population. The analysis also shows that replacing accessibility to population with accessibility to purchasing power better captures the market effect on the increase in vibrancy. In models that incorporate this variable, the results suggest that the opening of Line 6 has led to an average annual increase of 31.31 amenities in treated cells.&#13;
&#13;
I also show that both improved accessibility and the endogenous growth of consumer amenities capitalize into home values following the opening of new stations. The empirical results demonstrate that for every 1% increase in accessibility to population, housing prices increase by about 0.005%, which suggests that most of the changes caused by increased accessibility are absorbed by the initial increase in the number of amenities (indirect effect) and less by the consequent home value appreciation (direct effect).&#13;
&#13;
My research’s second contribution is to provide evidence on the relevance of land use regulations in enhancing the effects of vibrancy stimulated by transit infrastructure. The results show that amenities in new and existing subway neighborhoods in Santiago increased by 403% after the opening of Line 6 in cells that allow commercial building or allocate land specifically for commercial purposes. This increase in the number of amenities contributes indirectly to the housing price premium by improving the attractiveness of a neighborhood; however, I also find evidence that housing prices are directly affected by changes in accessibility. The results suggest that in cells where commercial building is allowed, housing prices increase by 9.86% after the opening of new subway stations, and that commercial land use also contributes indirectly to housing appreciation by 40%.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Brooklyn of Korea: Place branding as a process in production of space</title>
<link href="https://hdl.handle.net/1721.1/139987" rel="alternate"/>
<author>
<name>Kim, Poun Laura</name>
</author>
<id>https://hdl.handle.net/1721.1/139987</id>
<updated>2022-02-08T03:01:34Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Brooklyn of Korea: Place branding as a process in production of space
Kim, Poun Laura
In planning and development practices, branding is often used as a promotional tool to attract investments and tourists, and thought of as a mechanism to portray a selected image of a place. In this thesis, I argue that the branding process can be one of the driving forces of neighborhood change and that place brands play an active role in producing sense of place along with physical and social changes. As cities increasingly choose images to communicate outwards and reposition themselves after the decline of industry, it is important to understand the role place brands play in the production and transformation of space.&#13;
&#13;
This thesis examines a neighborhood in transition: Seongsu-dong, Seoul, South Korea. From being one of Seoul’s few semi-industrial zones to a “hot place” for cultural and commercial activities, Seongsu has seen large shifts in the past decade, widely branded with the label “Brooklyn of Korea.” With diverse parties using the Brooklyn brand in different ways while leveraging similar qualities, Seongsu provides a rich case study on how branding as a process not only shapes images of a place but can also impact the built environment. Through qualitative and quantitative analysis, this thesis tries to bridge the gap between the portrayal of neighborhood change and tangible changes, and answers three questions: How are place brands created? What are the brands, and how do they relate to neighborhood change? And what can place brands tell urban planners about neighborhood change?
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Latent Clustered Causal Models</title>
<link href="https://hdl.handle.net/1721.1/139983" rel="alternate"/>
<author>
<name>Yun, Annie</name>
</author>
<id>https://hdl.handle.net/1721.1/139983</id>
<updated>2022-02-08T04:07:37Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Latent Clustered Causal Models
Yun, Annie
We consider the problem of learning directed graphical models in the presence of latent variables. We define latent clustered causal models as a particular restriction on directed graphical models with latent variables and corresponding clusters of observed nodes, characterized by edges between only observed and latent variables. We discuss this model’s particular applicability towards genomics applications and examine its relationship to prior causal structure recovery work. We show identifiability results on this model and design a consistent three-stage algorithm that discovers clusters of observed nodes, a partial ordering over clusters, and finally, the entire structure over both observed and latent nodes. We also evaluate our method on synthetic datasets and demonstrate its performance in low sample-size regimes.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relating Racial Disparities to Financial Concerns and Shared Decision Making in Opioid Prescriptions</title>
<link href="https://hdl.handle.net/1721.1/139982" rel="alternate"/>
<author>
<name>Chandra, Rishabh</name>
</author>
<id>https://hdl.handle.net/1721.1/139982</id>
<updated>2022-02-08T03:30:17Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Relating Racial Disparities to Financial Concerns and Shared Decision Making in Opioid Prescriptions
Chandra, Rishabh
In this thesis, the author uses clinical notes and works with three widely accessible healthcare databases to examine the relationship between race and opioid prescriptions in hospitals. While other researchers have previously provided evidence that Black patients receive, ceteris paribus, fewer opioid prescriptions than White patients in American hospitals, this work adds three components of analysis. First, this work adds opioid analysis for Asian and Hispanic patients, which has not been previously attempted. Second, the author creates two derived metrics from clinical notes that serve as proxies for financial hardship in covering medical costs and willingness to participate in shared decision making. These metrics are then shown to correlate strongly with the probability of opioid prescription: greater financial hardship implies fewer opioid prescriptions, and greater participation implies more opioid prescriptions. Finally, this work shows that despite the classification power of the derived metrics, those factors do not account for the racial disparity between Black patients and other races with respect to opioid prescriptions. That is, even when controlling for financial hardship and willingness to participate in shared decision making, Black patients are still shown to have a statistically significantly lower chance of receiving opioid prescriptions than all other racial groups.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmenting Transformers for Open Domain Procedural Text Comprehension</title>
<link href="https://hdl.handle.net/1721.1/139980" rel="alternate"/>
<author>
<name>Pei, Yixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/139980</id>
<updated>2022-02-08T03:49:52Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Augmenting Transformers for Open Domain Procedural Text Comprehension
Pei, Yixuan
Recent advances in deep learning model architectures have enabled state-of-the-art results in various fields such as NLP and CV. Although these systems have matched and, in some cases, surpassed human performance, many of them are still treated as black boxes, with sometimes unpredictable results. To shed light on the behavior of natural language generation models, we examine the task of procedural text comprehension using neuro-symbolic techniques. We use this task as a testbed for exploring the limitations of state-of-the-art systems such as GPT on the task of predicting the state changes that result from the text description of a procedure. We also experiment with whether and how symbolic augmentations may help these systems understand language. We see some promising results from ConceptNet knowledge injection, and note that other augmentations yield more natural generations.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“That could have killed me.” How anti-fat bias can be dangerous, even deadly, for heavier patients</title>
<link href="https://hdl.handle.net/1721.1/139976" rel="alternate"/>
<author>
<name>Harper, Kelso</name>
</author>
<id>https://hdl.handle.net/1721.1/139976</id>
<updated>2022-02-08T03:40:58Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">“That could have killed me.” How anti-fat bias can be dangerous, even deadly, for heavier patients
Harper, Kelso
We live in a society that values and treats people differently based on their body size. Such weight stigma can affect a person’s relationships, career opportunities, and daily life. And when this bias infiltrates a doctor’s office or hospital, it puts heavier patients at risk. Discrimination of any kind is bad for a person’s mental and physical health, but weight discrimination in medicine can also discourage patients from seeking care, exclude them from certain treatments, and lead to dangerous misdiagnoses. Drawing from the knowledge of a dozen experts and the experiences of a dozen patients, this thesis explores the myriad ways that medical weight bias can gravely impact the health and well-being of larger-bodied people. It also asks: where do we go from here?
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System of Systems Composition and Course of Action Pathfinding Tool (CNCPT)</title>
<link href="https://hdl.handle.net/1721.1/139974" rel="alternate"/>
<author>
<name>Goolsby, T.C. Fleming</name>
</author>
<id>https://hdl.handle.net/1721.1/139974</id>
<updated>2022-02-08T04:01:58Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">System of Systems Composition and Course of Action Pathfinding Tool (CNCPT)
Goolsby, T.C. Fleming
The CnCPT framework enables systematic exploration of the optimal configurations for large-scale, complex system of systems architectures. CnCPT provides a straightforward and systematic approach to rapidly developing viable architectural concepts by having users focus on the fundamental constraints of architectural design. The composition, CONOP, and heuristic constraints can be adjusted to fit the use cases of Commanders, Architects, and Analysts and enable rapid exploration of an architecture’s design space. The CnCPT framework allows users to define the key metrics of concern and develop optimal architectures that maximize performance. Built upon the proven approaches inherent to military campaign analysis, CnCPT embraces the uncertainty inherent to large-scale system of systems architectures and enables users to prioritize architecture exploration based on their risk tolerance. For Commanders focused on the tactics and strategies of their fighting force, CnCPT enables a fixed composition to explore the optimal system-level CONOPs resulting in military victory. For Architects who aim to determine the best composition of forces using current CONOPs, CnCPT enables fixed courses of action to be exercised on top of varying designs to determine the optimal architectural composition. Finally, for Analysts, CnCPT allows every aspect of the framework to be modified: constraints, architecture generation, architecture breeding, population selection, and scoring approaches are all fully customizable based on an Analyst’s goals.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring the COVID-19 Shock from Outer Space: Local Economic Vibrancy in 15 Global Cities</title>
<link href="https://hdl.handle.net/1721.1/139973" rel="alternate"/>
<author>
<name>Williams, Matías</name>
</author>
<id>https://hdl.handle.net/1721.1/139973</id>
<updated>2022-02-08T03:29:26Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Measuring the COVID-19 Shock from Outer Space: Local Economic Vibrancy in 15 Global Cities
Williams, Matías
The objectives of this thesis project are (1) to use nightlight data to track changing patterns of economic activity within cities worldwide, and (2) to examine the intra-city spatial consequences of the COVID-19 pandemic and whether these patterns differ across cities. Informed by existing literature, I propose a cluster analysis using two groups, residential activities and work and play activities, to further understand the local consequences of the COVID-19 pandemic. Using Geographic Information Systems (GIS) and graph theory, I create metrics to compare the impact across several cities worldwide. The results of this thesis indicate that work and play activities were more affected than residential activities; however, this impact was not evenly distributed spatially.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formal Verification of an Implementation of the Roughtime Server</title>
<link href="https://hdl.handle.net/1721.1/139971" rel="alternate"/>
<author>
<name>Altamirano, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/139971</id>
<updated>2022-02-08T04:01:55Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Formal Verification of an Implementation of the Roughtime Server
Altamirano, Christian
Formal verification has been used in the past few decades to prove correctness of programs. This thesis provides a verification of a simpler implementation of Roughtime [1], a protocol that consists of securely querying the current time via a client-server interaction. The tool that was used is Bedrock2 [3], a work-in-progress Coq framework suitable for reasoning about low-level code, developed in the Programming Languages and Verification group at MIT CSAIL.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On End-to-end Automatic Fact-checking Systems</title>
<link href="https://hdl.handle.net/1721.1/139967" rel="alternate"/>
<author>
<name>Fang, Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/139967</id>
<updated>2022-02-08T03:39:56Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">On End-to-end Automatic Fact-checking Systems
Fang, Wei
The emergence of social media has aided the spread of nonfactual information across the internet, and organizations are combating disinformation by performing manual fact-checking. Due to the massive amount of online information, automating this process has recently gained great interest. Previous works have formulated several automatic fact-checking tasks and explored machine learning and natural language processing approaches to these problems. In this thesis, we follow this line of work, aiming to build a fully working automatic fact-checking system, and study methods for improving its fact-checking abilities. First, we introduce an end-to-end automatic fact-checking framework that integrates multiple previously studied subtasks to predict the factuality of given claims while providing supporting evidence. Next, we explore the use of multi-task learning for improving factuality predictions. Finally, we devise methods for extracting temporal structure from news documents to aid the fact-checking process.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Caught in the Crosswinds: Rural America could be renewable energy’s nemesis—or its savior</title>
<link href="https://hdl.handle.net/1721.1/139965" rel="alternate"/>
<author>
<name>Gribkoff, Elizabeth A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139965</id>
<updated>2022-02-08T03:28:51Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Caught in the Crosswinds: Rural America could be renewable energy’s nemesis—or its savior
Gribkoff, Elizabeth A.
Fighting climate change will require a fundamental shift away from the fossil fuels that still provide most of America’s electricity. In most states, county and local boards have to approve renewable energy projects. But despite the local economic benefits that renewable energy projects can bring, communities around the country have started saying no to wind and solar farms. Political leanings alone do not explain opposition to renewable energy projects, as most wind farms have been built in rural, red areas.&#13;
&#13;
My mom’s family is from Logan County, Illinois—a conservative area with one of the highest concentrations of wind turbines in the state. A few miles down the road, officials in another Republican farming area, Christian County, have effectively banned any wind farms from being built. Looking at why residents and officials in these central Illinois counties took drastically different stances toward wind can shed light on the locally driven economic, social, and regulatory factors that will determine the future of U.S. renewable energy.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiuser Detection for Pulse Amplitude Modulated Signals</title>
<link href="https://hdl.handle.net/1721.1/139961" rel="alternate"/>
<author>
<name>Weaver, Jessica K.</name>
</author>
<id>https://hdl.handle.net/1721.1/139961</id>
<updated>2022-02-08T03:12:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Multiuser Detection for Pulse Amplitude Modulated Signals
Weaver, Jessica K.
Multiuser Detection (MUD) receiver techniques leverage the signal structure of overlapping transmissions to reduce inter-user interference and improve system performance. One main application of MUD research has been reducing inter-user interference in Code-Division Multiple Access (CDMA) channels with performance degradation caused by the near-far effect. In this thesis, we explore MUD techniques for the case where two co-channel, synchronous users employ digital modulations and CDMA is not used. We present three different MUD methods for demodulation and examine the tradespace between performance and complexity. We first examine the Optimal MUD algorithm. The modeling technique of virtual constellations is used to apply the single-user maximum likelihood (ML) detector to two synchronous users sharing a common pulse shape and to assess and understand performance. We show how, in some cases, Optimal MUD can be used to enable near single-user performance in the presence of a strong interferer. We also identify cases where the interferer’s signal causes a significant increase in error rates, due to the interactions of the two users’ constellations, and show how the phase difference between the two users can greatly vary the shape of the performance curves. The complexity of Optimal MUD motivates our investigation of Successive Interference Cancellation (SIC), a reduced-complexity MUD algorithm. For some cases, the performance of SIC is equivalent to that of Optimal MUD. However, in other cases, an increase in phase difference between the two signals causes the performance of the two algorithms to diverge. To handle the cases where the performance of SIC is unacceptable, we develop the Hybrid Method (HM) algorithm. The HM algorithm is a two-stage, reduced-complexity algorithm that evaluates a subset of virtual constellation points. We show how, for a small increase in complexity, the HM algorithm achieves performance comparable to Optimal MUD in the cases where SIC performs poorly.
This performance and complexity analysis provides insights into the tradeoffs between these three algorithm approaches for MUD system design. These results can be applied to the design of future multiuser communications systems where the best MUD algorithm, one that best balances complexity with performance, is chosen based on channel estimates.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drawn Polymer Fiber Recuperative Heat Exchangers</title>
<link href="https://hdl.handle.net/1721.1/139959" rel="alternate"/>
<author>
<name>Adams, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/139959</id>
<updated>2022-02-08T03:30:36Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Drawn Polymer Fiber Recuperative Heat Exchangers
Adams, Jacob
Polymer microchannel heat exchangers have been developed for use in cryocooler applications. These heat exchangers are manufactured using a thermal drawing process in which a bulk polymer preform is heated and stretched. The process results in channels with a characteristic dimension of 50-100 µm and an overall length of many meters. The drawn heat exchangers are lightweight and flexible and have a large surface-area-to-volume ratio. Initial tests were performed on a Joule-Thomson cryocooler using a heat exchanger with overall dimensions of 2.5 mm x 2.5 mm x 400 mm and nitrogen as the working fluid. Nitrogen was successfully liquefied with a mass flow rate of 34 mg/s and a cooling power of 200 mW at 80 K.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Method for Continuous Inspection of Product Weight During Lyophilization</title>
<link href="https://hdl.handle.net/1721.1/139958" rel="alternate"/>
<author>
<name>O'Connell, Ellen Bridget</name>
</author>
<id>https://hdl.handle.net/1721.1/139958</id>
<updated>2022-02-08T03:57:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Method for Continuous Inspection of Product Weight During Lyophilization
O'Connell, Ellen Bridget
During lyophilization, a freeze-drying process used to stabilize pharmaceuticals, vials filled with product go through stages to sublimate out the water contained in the initially frozen product. The final product is a solid with a low water content, which is more stable for shipping. This work focused on a method for continuous inspection of product weight in vitro, specifically in the lyophilization context. Traditional batch lyophilizers cannot obtain continuous product weight data as a function of time during the lyophilization process on a vial-specific basis (Laurens De Meyer, 2019). The weight-sensing approach developed presents an option for continuous or periodic-interval collection of product weight data for the product in each vial. This method for continuous inspection of the weight change of a product on a per-vial basis allows for a much better understanding of the weight change over time and also opens up the possibility of altering process conditions to obtain a desired weight-change profile over time for all vials.&#13;
&#13;
In the larger lyophilization system for which this weight-measuring subsystem was designed, the final product was required to be evaluated down to a 0.1% water content. There were additional system considerations, such as viewing deflections (which could be related to weight changes) from the top of the system, and strict limits on materials due to the need for sterility in the larger system. Largely as a result of these requirements, a Keyence Laser Displacement Sensor was selected as the primary sensing method for deflection, with an imaging/moiré approach as a secondary sensing method.&#13;
&#13;
This thesis focuses on the options explored to measure product weight change through the deflection of elastic structures used to support the vials during lyophilization. Ultimately, a Bent Spring wire approach was selected, but this concept was heavily informed and inspired by Triangle Flexure and Diaphragm designs, which were also explored but ultimately did not perform sufficiently given our strict deflection change requirement of at least 1 mm. This 1 mm deflection change threshold allows us to evaluate the final product down to a 0.1% water content. The performance of the deflection approaches and designs was experimentally tested by incrementally altering the weight supported by the designs. The experimental results were also compared with calculated and simulation results. The Bent Spring design met the deflection change requirement as well as the other functional requirements and design parameters for the larger system. The Bent Spring approach is a method for continuously measuring the weight change of the product in each vial while also being simple to prototype and manufacture.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regenerative Coordination: Keeping a Live Algorithmic Service Growing by Perpetuating Disruptions</title>
<link href="https://hdl.handle.net/1721.1/139954" rel="alternate"/>
<author>
<name>Zhang, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/139954</id>
<updated>2022-02-08T03:02:41Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Regenerative Coordination: Keeping a Live Algorithmic Service Growing by Perpetuating Disruptions
Zhang, Alan
Many organizations today, in their move to online platforms, seek to provide a ‘live’ service—a digital service capable of updating automatically, offering users continual improvements in content and functionality. I examined the work required to keep such a service going, and found developers struggling to coordinate their work in the face of heterogeneous, concurrent, and indefinite updates. My 15-month field study of an agricultural technology company explores how its members, along with its algorithms, were able to sustain a live imagery-analytics service, despite frequent, unexpected, and disruptive updates. Existing literature shows that sense-making and provisional settlements can be critical for coordinating distributed and dynamic work, but takes a perspective which centers on human actors. Taking a broader perspective, I suggest that the algorithms working in a live service may do neither. I found that algorithms frequently updated operations with changes unanticipated by members, and in so doing, disrupted operations that members had previously settled and, until then, considered usable. Members responded to these updates with further updates required to regenerate the service, but with each update new disruptions emerged. Drawing on these findings, I develop the notion of regenerative coordination that identifies the specific coordination practices that regenerate a live service through updates, thus keeping the service viable and valuable to users. Doing so however, perpetuates disruptions. This paradoxical outcome is the inadvertent result of a process that keeps a service live and growing. I end with contributions to coordination research.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Constructing Low Resource Approaches to Improve Speech-to-text Translation from Modern Standard Arabic to English</title>
<link href="https://hdl.handle.net/1721.1/139953" rel="alternate"/>
<author>
<name>Manna, Rami</name>
</author>
<id>https://hdl.handle.net/1721.1/139953</id>
<updated>2022-02-08T03:06:09Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Constructing Low Resource Approaches to Improve Speech-to-text Translation from Modern Standard Arabic to English
Manna, Rami
This thesis explores novel approaches to the Arabic-English speech-to-text translation task. First, we construct a novel Modern Standard Arabic speech and English text parallel dataset. Second, we propose a novel framework for leveraging unsupervised machine translation to improve speech-to-text translation, and apply this framework to the task of Arabic-English speech-to-text translation. In particular, we propose a 3-step cascade approach to speech-to-text translation. In step 1, we use a speech recognition model to transcribe the Arabic speech into Arabic text. In step 2, we leverage unsupervised machine translation to learn a mapping between the output of the speech recognition model (transcribed Arabic) and Modern Standard Arabic (formal written Arabic). In step 3, we use an Arabic-English machine translation model to translate the output of the unsupervised model to English. Our third contribution is an exploration of approaches to low-resource end-to-end speech-to-text translation. We present and compare two approaches for synthesizing parallel training data. Finally, we compare the end-to-end approach with the cascaded approach. We found that the 3-step cascaded speech-to-text approach did not perform as well as the 2-step cascaded speech-to-text baseline. We show that with the end-to-end approach trained with synthetic English text, we are able to achieve performance similar to the 2-step cascaded speech-to-text baseline.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Fine Motion Stages for Six Degree-of-Freedom Submicron Positioning</title>
<link href="https://hdl.handle.net/1721.1/139949" rel="alternate"/>
<author>
<name>Frejowski, Tom</name>
</author>
<id>https://hdl.handle.net/1721.1/139949</id>
<updated>2022-02-08T03:04:07Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Development of Fine Motion Stages for Six Degree-of-Freedom Submicron Positioning
Frejowski, Tom
Submicron positioning stages are an invaluable tool in a variety of high-precision applications, including microscopy, optics, micromachining, and photolithography. This thesis covers the development and testing of a mechanical fine motion stage that uses a novel configuration of precision ballscrews and flexures to produce controlled motion in 6 degrees of freedom. The stage is designed to be capable of submicron repeatability within a range of motion that spans ±0.1 mm in x and y, ±1.5 mm in z, ±1 mrad in θx and θy, and ±52 mrad in θz.&#13;
&#13;
A full system architecture for controlling the stage and evaluating its performance is developed in this work. This includes the design of a metrology system using low cost position sensors for monitoring the position and orientation of the stage.&#13;
&#13;
Tests show the repeatability of the stage to be on the order of 0.4 µm in x and y, 25 nm in z, and on the microradian level for the rotational degrees of freedom, with room for improvement in all degrees of freedom through the use of endpoint feedback and higher-resolution sensors. Frequency response measurements show that the dynamics of the stage with a 3 kg payload are well behaved to at least 170 Hz, which indicates that closed-loop bandwidths up to this frequency are readily achievable.&#13;
&#13;
In addition, this thesis presents practical considerations for the structural design and actuator integration of an electromagnetically levitated fine motion stage.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private equity, disclosure quality, and audit quality</title>
<link href="https://hdl.handle.net/1721.1/139947" rel="alternate"/>
<author>
<name>Baik, Brian (Brian Kunho)</name>
</author>
<id>https://hdl.handle.net/1721.1/139947</id>
<updated>2022-12-07T16:10:49Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Private equity, disclosure quality, and audit quality
Baik, Brian (Brian Kunho)
I study the influence of disclosure/audit quality on private equity funds’ investment decisions, and the relationship between private equity ownership and disclosure/audit quality. Using Preqin and FAME data, I find that PE funds are more likely to invest in firms with superior financial statement transparency (disclosure quality) and in firms that employ Big 4 auditors (audit quality). Conversely, I find that PE ownership is associated with audit quality, but not with disclosure quality.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deformable Object Manipulation with a Tactile Reactive Gripper</title>
<link href="https://hdl.handle.net/1721.1/139946" rel="alternate"/>
<author>
<name>Sunil, Neha</name>
</author>
<id>https://hdl.handle.net/1721.1/139946</id>
<updated>2022-02-08T04:01:49Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Deformable Object Manipulation with a Tactile Reactive Gripper
Sunil, Neha
Deformable objects like cloth and cables are challenging for robots to manipulate due to their high dimensionality and unpredictable dynamics. In previous work, Yu et al. (2019) [45] used a tactile sensor to estimate the pose of a cable within the grip while sliding along it. The authors used linear regression to model the cable sliding dynamics and used a linear quadratic regulator (LQR) controller to keep the cable centered within the grip. However, the underlying dynamics are not linear, so in this work, we explore controllers that take advantage of a non-linear underlying dynamics model. We use Gaussian process (GP) regression for the non-linear model, which is used in three controllers in hardware experiments: (1) LQR with the GP model linearized about the target position, (2) time-varying LQR with the GP model linearized about the current state, and (3) model predictive control with the full dynamics model and constraints on the state and input of our system over a finite horizon. We extend our framework to the more challenging task of cloth edge following by adjusting our hardware setup and developing a new perception system. We found that the time-varying LQR controller using the GP model performs similarly to the LQR controller with the linear regression model for following both cables and fabric edges.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmenting data for Urban Metabolism of cities Tool using Machine learning and Satellite Image Analysis of city</title>
<link href="https://hdl.handle.net/1721.1/139944" rel="alternate"/>
<author>
<name>Havugimana, Emmanuel</name>
</author>
<id>https://hdl.handle.net/1721.1/139944</id>
<updated>2022-02-08T03:43:53Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Augmenting data for Urban Metabolism of cities Tool using Machine learning and Satellite Image Analysis of city
Havugimana, Emmanuel
We use image analysis to augment data about a city’s material flow or material stock. We take existing data about cities such as energy consumption, biomass, water consumption, energy production, and construction material, either at the city level or the national level, and add data from satellite-based remote sensing. From remote sensing we can get data like built area, population distribution across the region, and night light intensities.&#13;
&#13;
We do this by coupling in the insights from images, which provide a proxy for where resources are concentrated. We increase the data available for the Urban Metabolism tool database in resources correlated with satellite data. We show how the data can be collected and how they may be integrated.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced Order Modeling for Stochastic Prediction and Data Assimilation Onboard Autonomous Platforms At Sea</title>
<link href="https://hdl.handle.net/1721.1/139943" rel="alternate"/>
<author>
<name>Heuss, Jacob Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/139943</id>
<updated>2022-02-08T03:44:48Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Reduced Order Modeling for Stochastic Prediction and Data Assimilation Onboard Autonomous Platforms At Sea
Heuss, Jacob Peter
There are many significant challenges for unmanned autonomous platforms at sea, including predicting the likely scenarios for the ocean environment, quantifying regional uncertainties, and updating forecasts of the evolving dynamics using their observations. Due to operational constraints such as onboard power, memory, bandwidth, and space limitations, efficient adaptive reduced order models (ROMs) are needed for onboard predictions. In the first part, several reduced order modeling schemes for regional ocean forecasting onboard autonomous platforms at sea are described, investigated, and evaluated. We find that Dynamic Mode Decomposition (DMD), a data-driven dimensionality reduction algorithm, can be used for accurate predictions for short periods in ocean environments. We evaluate DMD methods for ocean PE simulations by comparing and testing several schemes, including domain splitting, adjusting training size, and utilizing 3D inputs. Three new approaches that combine uncertainty with DMD are also investigated and found to produce practical and accurate results, especially if we employ either an ensemble of DMD forecasts or the DMD of an ensemble of forecasts. We also demonstrate results from projecting/compressing high-fidelity forecasts using schemes such as POD projection and K-SVD for sparse representation, as these show promise for distributing forecasts efficiently to remote vehicles. In the second part, we combine DMD methods with the GMM-DO filter to produce DMD forecasts with Bayesian data assimilation that can quickly and efficiently be computed onboard an autonomous platform. We compare the accuracy of our results to traditional DMD forecasts and to DMD with Ensemble Kalman Filter (EnKF) forecasts, and show that, in both a Root Mean Square Error (RMSE) sense and an error field sense, the DMD with GMM-DO errors are smaller and grow more slowly in time than those of the other schemes.
We also showcase the DMD of the ensemble method with GMM-DO. We conclude that, due to its accurate and computationally efficient results, it could be readily applied onboard autonomous platforms. Overall, our contributions developed and integrated stochastic DMD forecasts and efficient Bayesian GMM-DO updates of the DMD state and parameters, learning from the limited gappy observation data sets.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refinement of the Computational Vaccine Optimization Framework (OptiVax) through the development and analysis of a better algorithm for vaccine design choice</title>
<link href="https://hdl.handle.net/1721.1/139942" rel="alternate"/>
<author>
<name>Dimitrakakis, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/139942</id>
<updated>2022-02-08T04:04:09Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Refinement of the Computational Vaccine Optimization Framework (OptiVax) through the development and analysis of a better algorithm for vaccine design choice
Dimitrakakis, Alexander
We present the maximum &#119899;-times coverage objective function from a mathematical perspective. Its goal is to select a set number of overlays to maximize a population coverage metric. We formulate two novel algorithms to solve the problem, NTimesILP and WeightSum, and compare them to each other and to the MarginalGreedy algorithm [30]. Finally, we link the mathematical formulation of the maximum &#119899;-times coverage problem to epitope vaccine design (OptiVax) and compare various vaccine designs both found in the literature and produced by the three aforementioned algorithms.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experimental Evaluation of Learning-Based Methods for Loop Closure Detection in Simultaneous Localization and Mapping</title>
<link href="https://hdl.handle.net/1721.1/139935" rel="alternate"/>
<author>
<name>Herrera Arias, Luis Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/139935</id>
<updated>2022-02-08T03:54:29Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Experimental Evaluation of Learning-Based Methods for Loop Closure Detection in Simultaneous Localization and Mapping
Herrera Arias, Luis Fernando
Simultaneous Localization and Mapping (SLAM) is the capability to estimate a robot’s trajectory in an initially unknown environment while reconstructing the geometry of the environment. In order to bound the accumulation of localization error in SLAM, it is crucial to recognize previously seen locations, a process called "loop closure." This allows the robot to make corrections to its localization and map estimates. This project evaluates ORB feature extraction and matching, a state-of-the-art technique to detect loop closures, against recently developed learning-based approaches. In particular, our first contribution is to benchmark established techniques based on hand-crafted descriptor matching against novel learning-based approaches based on neural networks (i.e., SuperPoint and SuperGlue). As a second contribution, we integrate a learning-based loop closure detection method as part of Kimera, a SLAM system, and demonstrate its performance in both simulated and real benchmarking datasets. Finally, we collect data on long trajectories using a Jackal robot to compare the different approaches in real-world situations beyond available datasets. Our evaluation shows that, while learning-based approaches detect many more loop closures across wider baselines, when integrated in a SLAM system, they do not lead to substantial performance improvements compared to standard ORB feature matching.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CHuff: Conditional Huffman String Compression</title>
<link href="https://hdl.handle.net/1721.1/139926" rel="alternate"/>
<author>
<name>Nagda, Bhavik</name>
</author>
<id>https://hdl.handle.net/1721.1/139926</id>
<updated>2022-02-08T04:05:20Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">CHuff: Conditional Huffman String Compression
Nagda, Bhavik
Columnar databases have become ubiquitous in recent years due to their performance for analytical processing applications. Data storage in columnar form benefits from opportunities for improved compression performance as compared to row-oriented systems. For common string data, dictionary encoding is a light-weight compression scheme that replaces string tokens with fixed-size integers. In performing dictionary compression on a given column, database systems initially build a table of distinct values and then compress tokens into their corresponding table indices. This work focuses on optimizing compression for strings in columnar database stores. We introduce Conditional Huffman (CHuff) compression, a novel approach leveraging longstanding Huffman encoding and recent advances in hashing and storage paradigms. CHuff relies on low-entropy conditional relationships between consecutive characters in textual data to construct and apply Huffman-based compression models. The system additionally auto-tunes parameters for various corpus workloads, optimizing the compression rate while avoiding over-fitting. We demonstrate that on real-world data, CHuff performs favorably compared to similar string compressors, achieving an average 24% improvement in compression rate on our diverse experimental corpora.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experimental Investigation of Blown-Flap Airfoils</title>
<link href="https://hdl.handle.net/1721.1/139918" rel="alternate"/>
<author>
<name>Long, Trevor</name>
</author>
<id>https://hdl.handle.net/1721.1/139918</id>
<updated>2022-02-08T03:30:02Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Experimental Investigation of Blown-Flap Airfoils
Long, Trevor
At present, there is significant interest in electrically-powered Urban Air Mobility (UAM) aircraft that can operate in constrained take-off and landing areas (TOLAs) for a variety of missions. While many present designs use vertical take-off and landing (VTOL) capabilities to fulfill this requirement, fixed-wing aircraft utilizing distributed electric propulsion (DEP) to enable blowing across their wings may be able to provide competitive performance while decreasing energy requirements for take-off and landing. In this thesis, the performance of blown-flap wings is investigated to provide both performance estimates and validation data for future work on blowing-enabled aircraft.&#13;
&#13;
A quasi 2-dimensional wind tunnel model is used to conduct surveys on both the performance and flow characteristics of propeller blown-flap airfoils. These surveys produced accurate estimates for the cₗ, cₓ, cₘ performance of these wings as a function of the test parameters of angle of attack, &#120572;, flap deflection, &#120575;_f, and blowing power, Δc_j. In proper operating regimes, maximum cₗ values of over 9.5 were observed, and cₗ values over 5 were shown to be easily achievable given proper design. From the surveys of the wake and boundary layer development, the individual propeller slipstreams were found to spread very evenly across the span of the wing, suggesting that 2-dimensional estimates of blown wings may be used for design and analysis of blown wing sections. Stall on the flap upper surface was found to be the primary cause of the decreases observed in attainable cₗ as &#120575;_f was increased, and this stall region was found to be stable and unchanged by increased blowing. Changing the flap geometry was found to delay the onset of this stall and increase performance. Areas of interest moving forward are also identified.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Miniaturized Magnetostrictive Antennas for Wireless Sensing Applications</title>
<link href="https://hdl.handle.net/1721.1/139916" rel="alternate"/>
<author>
<name>Chiyezhath Joy, Baju</name>
</author>
<id>https://hdl.handle.net/1721.1/139916</id>
<updated>2022-02-08T03:51:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Miniaturized Magnetostrictive Antennas for Wireless Sensing Applications
Chiyezhath Joy, Baju
Magnetostrictive (MES) antennas have been widely used for wireless sensing applications. Miniaturization of MES antennas can open new applications in in-vivo wireless sensing with higher spatial resolution and solve challenges involved in the miniaturization of conventional electromagnetic antennas. In this thesis, two different methods have been explored for easy and fast fabrication of miniaturized MES antennas down to sub-mm sizes from the amorphous magnetostrictive film Metglas 2826 MB, and their advantages and disadvantages have been studied. Laser micromachining is shown to be a more versatile method than die-saw fabrication for easily producing antennas of different shapes. The frequency response of the fabricated antennas is first studied in air and characterized using Finite Element Analysis and analytical modelling to determine the quality factor and magnetomechanical coupling efficiency. The antennas are then tested in water to understand the effects of viscous damping on antenna response and to check the feasibility of sensing in liquid or wet environments. After characterization, a highly sensitive pH-sensitive MES antenna of size 6 mm x 1 mm x 28 µm with a sensitivity of 15 kHz/pH is fabricated using a pH-sensitive copolymer of Acrylic Acid and Isooctyl Acrylate and characterized using solutions of different pH. The challenges involved in further miniaturization of the sensors and future work are also discussed.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration and Production Risk Mitigation for Geothermal Adoption in the Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/139915" rel="alternate"/>
<author>
<name>Holmes, Robert Chadwick</name>
</author>
<id>https://hdl.handle.net/1721.1/139915</id>
<updated>2022-02-08T03:54:00Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Exploration and Production Risk Mitigation for Geothermal Adoption in the Energy Transition
Holmes, Robert Chadwick
Geothermal provides a continuous, low-emissions source of energy with enormous potential in the United States, both on its own and as part of a broader energy mix. Although a small contributor to the current national energy grid, geothermal electricity generation dates back nearly a century for natural hydrothermal systems. More recently, enhanced geothermal systems (EGS) promise a broader reach with engineered solutions for extracting subsurface heat from a wider variety of locations. The potential synergy between the oil &amp; gas and geothermal industries offers an opportunity for building a lower-carbon energy portfolio that requires compatible skills and expertise. Nevertheless, the risks involved at multiple stages of the field lifecycle remain a hurdle to the adoption of geothermal.&#13;
&#13;
In this thesis, risk-mitigation strategies for geothermal target two phases of the lifecycle: exploration and production. The first strategy uses a diverse set of measurements spanning multiple interrelated earth systems to collectively determine geothermal potential at the play scale. Analytical workflows integrate geologic, geochemical, and geophysical data to estimate the subsurface geothermal gradient, with quantitative uncertainty estimates associated with the measurements, the models, and the solution space. These uncertainty estimates provide a measure of risk, as well as decision tools for investments in additional data-gathering activities before the first well is drilled. The second strategy applies flexibility in engineering design to a hypothetical EGS expansion of an existing power facility. Specifically, key uncertainties are integrated into a cost model with operational decision rules to create an ensemble of possible outcomes. Tailoring the model and decision rules to a particular facility concept allows for rapid feasibility testing and optimization of project actions that limit downside risk while capturing upside potential. Both of these strategies use uncertainty characterization to reduce the threat of high-consequence geothermal risks. By including them in a broader risk management approach, oil &amp; gas companies can make data-driven decisions on investing in geothermal during the energy transition.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology Roadmapping for Energy Storage using&#13;
ZEBRA Batteries</title>
<link href="https://hdl.handle.net/1721.1/139913" rel="alternate"/>
<author>
<name>Bahl Chambi, Gloria J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139913</id>
<updated>2022-02-08T03:45:43Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Technology Roadmapping for Energy Storage using&#13;
ZEBRA Batteries
Bahl Chambi, Gloria J.
Energy Storage Systems are expected to be the key enablers that will allow Variable Renewable Energy to increase its penetration in the electricity market. The objective of this thesis is to explore the application of ZEBRA battery technology for Energy Storage Systems. The ZEBRA battery is of particular interest because it is a rechargeable battery built with Earth-abundant materials, primarily nickel and conventional table salt. It also has many advantages compared to Lithium-Ion Batteries, such as lower degradation rates, higher safety performance, a wider temperature range of operation, and less maintenance. To understand the role of batteries in hybrid energy systems, successful examples of electro-chemical Energy Storage Systems are discussed, and an analysis of the stakeholders is performed. Additionally, three different locations were studied: Maine, Texas, and Guinea-Bissau. A Design of Experiments approach was implemented to explore different solutions to supply the electricity demand with Variable Renewable Energy in these locations. A model was built to calculate the energy supply and its cost for each solution. Cases on the Pareto frontier were selected and analyzed to understand the performance of the batteries. Finally, a Life Cycle Analysis of the system, a comparison with Lithium-Ion Batteries, and a sensitivity analysis were performed. The main outcome of this work is a technology roadmap for ZEBRA battery technology that will enable its adoption for Energy Storage System applications by reducing its high capital cost. Currently, ZEBRA batteries exhibit a cost of about 600 USD/kWh. By applying the proposed projects, the cost of the battery is projected to be about 360 USD/kWh by the end of 2035.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Goal-Directed Systems Testing: Automated Execution of Intelligently Generated Cyber Attack Plans</title>
<link href="https://hdl.handle.net/1721.1/139911" rel="alternate"/>
<author>
<name>Dorchuck, Samuel Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/139911</id>
<updated>2022-02-08T03:48:08Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Goal-Directed Systems Testing: Automated Execution of Intelligently Generated Cyber Attack Plans
Dorchuck, Samuel Joseph
Red teaming, in which a team of professional hackers emulate an adversary in order to attempt to penetrate a network, has emerged as a vital tool in the cybersecurity industry to identify deficiencies in network defenses. Yet, hiring or maintaining a red team requires a substantial investment of time and money, and frequently such penetration testing proves non-comprehensive [1]. The major contribution of this project is to develop the foundations of an end-to-end process to automate adversarial emulation of systematically generated attack plans. Dr. Howard Shrobe has developed an intelligent attack generation tool, AttackPlanner, that exhaustively enumerates possible attack paths by which an adversary could attempt to achieve a high-level goal [2]. Built around observed adversarial tactics, techniques, and procedures identified in the ATT&amp;CK Matrix [3], MITRE’s CALDERA is a robust automated, post-compromise, adversary emulation framework which allows users to autonomously execute cyber attacks [4]. By coupling AttackPlanner with CALDERA, we have demonstrated the ability to autonomously execute intelligently generated cyber attack plans. With further work on this project, the ultimate product would provide an automated, goal-directed systems testing capability.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technical and Economic Feasibility of Crushed Rock with Synthetic Oil Heat Storage Coupled to Light Water Reactors in the United Arab Emirates</title>
<link href="https://hdl.handle.net/1721.1/139910" rel="alternate"/>
<author>
<name>Aljefri, Ali Saleh</name>
</author>
<id>https://hdl.handle.net/1721.1/139910</id>
<updated>2022-02-08T04:04:05Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Technical and Economic Feasibility of Crushed Rock with Synthetic Oil Heat Storage Coupled to Light Water Reactors in the United Arab Emirates
Aljefri, Ali Saleh
The United Arab Emirates (UAE) has the goal of reducing greenhouse gas emissions from the electricity sector. The UAE is building four pressurized-water reactors and expanding solar generation. Large-scale addition of solar without storage results in excess capacity and inefficient dispatch during some periods of the year. Adding heat storage to the nuclear power plants was investigated as a way to reduce electricity generation at times of excess solar generation and provide added electricity at times of no solar output while the reactors are operated at baseload. This results in full utilization of nuclear and higher utilization of solar while reducing carbon dioxide emissions from fossil plants. A new low-cost heat storage system is proposed to address the storage needs: the Crushed Rock Ultra-large Stored Heat (CRUSH) system. CRUSH enables 100 GWh of low-cost heat storage to address daily to weekday/weekend heat storage needs. The performance of the grid and the performance of CRUSH were analyzed to understand total system performance and to explore the option space for the design.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synchronizing Glitches as Internetworked Entities</title>
<link href="https://hdl.handle.net/1721.1/139907" rel="alternate"/>
<author>
<name>Chi, Po-Hao</name>
</author>
<id>https://hdl.handle.net/1721.1/139907</id>
<updated>2022-02-08T03:55:21Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Synchronizing Glitches as Internetworked Entities
Chi, Po-Hao
This thesis investigates connectivity as a generative and aesthetic object through arts-based research. It engages contemporary digital media perspectives while reviewing the impacts of technological advances on connectivity and Internet infrastructure. Along with comparisons of cyberculture and new media artworks, the thesis extends the discussion by introducing a series of personal works in each chapter. The models, developed from the author's artistic practice, emphasize how participatory and performative pieces create a new generative system with personal mobile devices and Web applications. Discourses prompted by these models respond to what happens on the Internet and use the Internet as the fundamental element to set forms and rules for crowd participation and systematic iteration. This project aims to raise awareness of "internetworked" systems by turning daily uses of technology into performative gestures and exploring how artistic expression enhances the way we coexist with digitality.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Detection and Characterization of Severe Features in Colonoscopy Videos Using Combined Segmentation and Classification Models</title>
<link href="https://hdl.handle.net/1721.1/139906" rel="alternate"/>
<author>
<name>Wang, Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/139906</id>
<updated>2022-02-08T03:55:57Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Detection and Characterization of Severe Features in Colonoscopy Videos Using Combined Segmentation and Classification Models
Wang, Yi
Inflammatory Bowel Disease (IBD) is a chronic disease that requires regular monitoring procedures, such as colonoscopy. Assessing disease severity through endoscopy is critical to determining therapeutic responses in IBD, but its use in clinical practice is limited by the requirement for experienced human reviewers. In recent developments, artificial intelligence has been used to evaluate endoscopic disease severity based on colonoscopy images. However, due to the variability in disease phenotypes, the nonrigid nature of objects, and clinical artifacts, very few groups have studied image or video segmentation for colonoscopy videos, and the existing deep learning systems have shown low transparency and traceability. We propose a feature-based model that breaks down the problem into smaller components and combines segmentation and classification models to characterize IBD features and predict disease severity for frame-level and clip-level data. Our combined segmentation and classification models had an average accuracy of 90% for the detection of severe IBD features such as ulcerations and erosions. This thesis was completed at Iterative Scopes, a Boston startup working on bringing precision medicine and technology to gastroenterology.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning, Reasoning, and Planning with Relational and Temporal Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/139905" rel="alternate"/>
<author>
<name>Mao, Jiayuan</name>
</author>
<id>https://hdl.handle.net/1721.1/139905</id>
<updated>2022-02-08T03:35:00Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Learning, Reasoning, and Planning with Relational and Temporal Neural Networks
Mao, Jiayuan
Every day, people interpret events and actions in terms of concepts, defined over evolving relations among agents and objects and their goals. We learn these concepts from a limited amount of data, generalizing directly over different numbers and arrangements of agents and objects, and detailed timings of trajectories. We also effectively recompose these concepts to describe unseen behaviors from other agents, and leverage the causal relationships among actions to make plans for ourselves. &#13;
&#13;
This thesis gives an overview of a neuro-symbolic framework for learning, reasoning, and planning with relational and temporal neural networks. The key idea is to exploit a structural bias in neural network learning that enables us to describe complex relational-temporal events and actions. These structures form a minimal amount of prior knowledge but are generic and crucial: scenes are composed of objects; events are temporally related; actions have preconditions and goals. Our systems learn from trajectories with rich temporal and relational patterns and labels for events and actions. We demonstrate that they can generalize from small amounts of data to scenarios containing more objects than were present during training and to temporal warpings of input sequences, and that they exploit the goal-centric representation of actions to make plans for novel goals.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observing and Quantifying Kinematic Properties and Lagrangian Coherent Structures of Ocean Flows using Drifter Experiments</title>
<link href="https://hdl.handle.net/1721.1/139904" rel="alternate"/>
<author>
<name>Getscher, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/139904</id>
<updated>2022-02-08T03:50:49Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Observing and Quantifying Kinematic Properties and Lagrangian Coherent Structures of Ocean Flows using Drifter Experiments
Getscher, Timothy
This thesis analyzes data from two types of unique drifter experiments in order to characterize two aspects of ocean flows that are often difficult to study. First, vertical velocities and their associated transport processes are often challenging to observe in the real ocean since vertical velocities are typically orders of magnitude smaller than horizontal velocities in mesoscale and submesoscale flows. Second, Lagrangian coherent structures (LCS) are features which categorize ocean flows into regimes of distinct behavior. These structures are also difficult to quantify in the real ocean, since sets of gridded trajectories from real ocean data (rather than model fields) are rarely available.&#13;
&#13;
The first experiment uses drifters drogued at multiple depths in the Alboran Sea to observe and characterize the ocean’s vertical structure, particularly near a strong front where vertical velocities are expected to be much stronger than in other regions of the ocean. The second experiment uses a roughly gridded pattern of surface drifters in the Gulf of Mexico to study LCSs as quantified by methods from dynamical systems such as finite-time Lyapunov exponents (FTLEs), trajectory arc-length, correlation dimension, dilation, Lagrangian-averaged vorticity deviation (LAVD), and spectral clustering. This thesis includes the first attempt to apply these dynamical systems techniques to real drifters for LCS detection.&#13;
&#13;
Overall, these experiments and the methods used in this paper are shown to be promising new techniques for quantifying both the vertical structure of ocean flows and Lagrangian Coherent Structures of flows using real drifter data. Future work may involve modified versions of the experiments, with denser sets of ocean drifters in the horizontal and/or vertical directions.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Securing Operating Systems using Hardware-Enforced Compartmentalization</title>
<link href="https://hdl.handle.net/1721.1/139903" rel="alternate"/>
<author>
<name>Giannaris, Yianni</name>
</author>
<id>https://hdl.handle.net/1721.1/139903</id>
<updated>2022-02-08T03:11:48Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Securing Operating Systems using Hardware-Enforced Compartmentalization
Giannaris, Yianni
Monolithic kernels have been the traditional design choice of many modern operating systems for practical and historical reasons. Though monolithic systems excel in performance, they suffer from exposure to security vulnerabilities. The past six years of published Linux CVE data have revealed hundreds of security vulnerabilities that can potentially be exploited by an attacker to escalate privileges and leak sensitive user data. Though some of these vulnerabilities can be mitigated with proper memory safety enforcement, others require privilege separation to ensure code only accesses data that is explicitly granted by a developer. We present Hardware-Assisted Kernel Compartments (HAKC), a solution that mitigates exposure to security vulnerabilities by leveraging modern commodity Arm hardware and automatic LLVM instrumentation to enforce compartmentalization effectively without requiring significant developer effort. Using Arm Pointer Authentication Codes (PAC) and Arm Memory Tagging Extensions (MTE), HAKC enforces a two-tier compartmentalization scheme that is performant and provides flexibility for up to 4 * 10¹⁵ compartments, orders of magnitude more than prior works afford developers. To test HAKC, we implemented a compartmentalization policy for nf_tables, a commonly used packet-filtering LKM. LKMs are prime targets for compartmentalization because CVE analysis has shown that most kernel vulnerabilities reside in LKMs, and the HAKC two-tiered compartmentalization scheme easily adapts to LKM logical groupings of kernel subsystem functionality. Evaluations show that we are able to achieve strong security enforcement without adding significant overhead.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practica: A Music Education Application for Learning Jazz Improvisation</title>
<link href="https://hdl.handle.net/1721.1/139902" rel="alternate"/>
<author>
<name>Fiksinski, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/139902</id>
<updated>2022-02-08T03:53:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Practica: A Music Education Application for Learning Jazz Improvisation
Fiksinski, Julia
We present Practica, a music education application which combines the user experience of a play-along practice application, the functionality of a melody transcription program, and the soloing tips and harmonic analysis that a music student might receive from an instructor. The user can read soloing tips and introductions to harmonic analysis, record themselves soloing over a backing track, and view transcriptions of their solos with different analysis modes applied in the form of color-coded notes. We develop an audio processing and transcription pipeline to generate sheet music for solo recordings. We examine how to present the subjective teaching and evaluation of improvisation in a programmatic manner. Two user studies suggest that Practica successfully presents students with an educational platform that empowers them to improve their musical abilities and explore jazz improvisation in an interactive and beginner-friendly format.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Viewpoint-Aware Task Planning and Model Predictive Control for Applications in Videography and Multi-Target Tracking</title>
<link href="https://hdl.handle.net/1721.1/139901" rel="alternate"/>
<author>
<name>Ray, Aaron Castagna</name>
</author>
<id>https://hdl.handle.net/1721.1/139901</id>
<updated>2022-02-08T03:55:35Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Viewpoint-Aware Task Planning and Model Predictive Control for Applications in Videography and Multi-Target Tracking
Ray, Aaron Castagna
We seek to combine high-level planning with low-level reactive control to solve a variety of viewpoint-constrained target-following tasks. In the scenarios we consider, a team of tracking agents is tasked with gaining visual information about one or more target agents. A high-level planning algorithm accounts for coarse, global decisions, such as “Which targets should each tracker be responsible for?” or “When should a tracker visit each target?” This level of planning is combinatorial in nature and requires coordination between the tracking agents. We combine this process with lower-level reactive control that accounts for stochastic target motion. By making this controller aware of a viewpoint cost function, the behavior of the tracking agents can be both more performant and easier to deploy on real robots.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Selling Information in Competitive Environments</title>
<link href="https://hdl.handle.net/1721.1/139898" rel="alternate"/>
<author>
<name>Nouripour, Amir</name>
</author>
<id>https://hdl.handle.net/1721.1/139898</id>
<updated>2022-02-08T03:31:33Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Selling Information in Competitive Environments
Nouripour, Amir
We consider a setting where data buyers compete in a game of incomplete information, about which a data seller owns some payoff-relevant information. We formulate the problem facing the seller as a joint information and mechanism design problem: deciding which information to sell, while at the same time eliciting the private value types of the buyers and collecting payments. We derive the welfare- and revenue-optimal mechanisms for a class of binary games. Our results reveal some important features of selling information in competitive environments: (i) the negative externalities arising from competition among buyers increase the profitability of selling exclusive information to one of the buyers; (ii) in order for the buyers to follow the seller’s action recommendations, the extent of exclusive sales must be limited; (iii) these same equilibrium constraints also limit the distortions in the allocation of information that can be introduced by a monopolist data seller; (iv) the fiercer the competition across buyers, the stronger the previous two limitations, and the weaker the impact of market power on the equilibrium allocation of information.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Vertically Loaded Diamond Microdisk Resonator (VLDMoRt) towards a Scalable Quantum Networks</title>
<link href="https://hdl.handle.net/1721.1/139893" rel="alternate"/>
<author>
<name>Duan, Yuqin</name>
</author>
<id>https://hdl.handle.net/1721.1/139893</id>
<updated>2022-02-08T04:05:44Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">A Vertically Loaded Diamond Microdisk Resonator (VLDMoRt) towards a Scalable Quantum Networks
Duan, Yuqin
This thesis reports a cavity design, the vertically loaded diamond microdisk resonator (VLDMoRt), which couples to a nitrogen-vacancy (NV) center in diamond for efficient collection of fluorescence emission into low numerical aperture (NA) free-space modes. The VLDMoRt achieves a Purcell enhancement of 172 with 39% of the emitted light collected within an NA of 0.6, leading to a total external spin-photon collection efficiency of 0.33. Furthermore, the design has been demonstrated with established nanofabrication techniques, achieving a quality factor of &gt; 2000. As the design is compatible with existing fabrication processes and couples to low-NA modes accessible by cryogenic free-space optical systems, it is a promising avenue for increasing the efficiency and scalability of quantum devices based on diamond quantum emitters.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Aided Aerial Radiation Mapping</title>
<link href="https://hdl.handle.net/1721.1/139890" rel="alternate"/>
<author>
<name>Xue, Shangjie</name>
</author>
<id>https://hdl.handle.net/1721.1/139890</id>
<updated>2022-02-08T03:49:33Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Machine Learning Aided Aerial Radiation Mapping
Xue, Shangjie
In recent years, radiation mapping has attracted widespread research interest along with increasing public concern about environmental monitoring. However, due to the complex mechanisms of gaseous radionuclide dispersion, radiation-matter interaction, and the current limitations of dose rate data collection, radiation mapping is considered to be a challenging task. In this study, a general framework for radiation mapping in static and dynamic scenarios is proposed using machine learning techniques. The proposed method enables rapid radiation mapping, as well as trajectory planning for measurements.&#13;
&#13;
Firstly, a novel directional radiation detection algorithm is presented. Single-pad radiation detector arrays and attenuation materials are proposed for radiation detection. This thesis presents a deep neural network model to estimate the angular distribution of the incident radiation. The Wasserstein distance is applied as a loss function to train the neural network for accurate prediction. Furthermore, radiation mapping can be enabled by performing directional measurements at different positions. In particular, optimization-based approaches are presented to fuse the directional measurement results for source localization and radiation mapping in static scenarios.&#13;
&#13;
Secondly, this thesis presents a model for tracking dynamic radionuclide atmospheric dispersion using probabilistic graphical models. A Kalman filter is applied for simultaneous incremental estimation of atmospheric concentration and ground release, as well as for prediction of the concentration evolution. Moreover, a path planning algorithm that maximizes the information gathered from the measurements is also presented. The presented method enables joint estimation of the concentration and release-source distributions in dynamic scenarios. The method can also plan future measurements to obtain more accurate estimates of the environment, given previous measurement results. Simulation results for an environment similar to the MIT research reactor showed that, for radiation release during an accident, the proposed aerial radiation mapping algorithm is able to achieve relative error &lt;10% within a few minutes using three or more agents, showing potential advantages over conventional approaches, which require manual surveys at selected locations and take about an hour under the existing emergency procedure.&#13;
&#13;
This work provides an algorithmic basis for the radiation mapping problem, and shows potential to enable autonomous radiation surveys.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kiosks for Non-Contact Vital Sign Detection</title>
<link href="https://hdl.handle.net/1721.1/139889" rel="alternate"/>
<author>
<name>Goryachev, Ivan</name>
</author>
<id>https://hdl.handle.net/1721.1/139889</id>
<updated>2022-02-08T03:23:36Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Kiosks for Non-Contact Vital Sign Detection
Goryachev, Ivan
Motivated by the COVID-19 pandemic and its effect on individuals’ baseline vital signs, this work presents a modular hardware and software platform for contactless vital sign detection. The kiosk is intended to collect users’ vital sign data to track changes over time and to identify periods of illness. It is modular and transportable, able to be deployed and moved quickly, and reconfigured with different sensors if needed. In this implementation, it is instrumented with an infrared thermal imaging camera, FMCW radar sensors, motion and height detection, and ambient climate sensors. The kiosk collects body temperature data and radar-based chest displacement data for offline processing. The algorithms described here show good performance in measuring respiration rate when used with a mechanical chest simulator, but require additional work to properly measure heart rate, due to the low signal-to-noise ratio.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Audition Curriculum and Real-Time Music Accompaniment</title>
<link href="https://hdl.handle.net/1721.1/139888" rel="alternate"/>
<author>
<name>Hussein, Nada</name>
</author>
<id>https://hdl.handle.net/1721.1/139888</id>
<updated>2022-02-08T03:49:49Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Machine Audition Curriculum and Real-Time Music Accompaniment
Hussein, Nada
A machine audition curriculum was created as part of the MIT Media Lab’s Artificial Intelligence Education initiative. This curriculum was geared towards middle school students to help them understand how humans and machines perceive sound, and allow them to apply this knowledge to create and analyze their own music. This thesis presents the tools created to aid in the teaching of this curriculum: a new music audition Scratch extension. This extension introduces the ability to create and analyze music, as well as the integration of Google Magenta, a machine learning library that allows students to generate new music or accompany music that they have created. Through the use of this Scratch extension, it was possible to pilot the machine audition curriculum with middle school students and show that they were able to better understand signal properties, create and analyze their own music, and understand the similarities and differences between human and machine audition.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalist 3D Cell Phenotyping for All-Type Tissues</title>
<link href="https://hdl.handle.net/1721.1/139887" rel="alternate"/>
<author>
<name>Gu, Xinyi</name>
</author>
<id>https://hdl.handle.net/1721.1/139887</id>
<updated>2022-02-08T03:35:56Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Generalist 3D Cell Phenotyping for All-Type Tissues
Gu, Xinyi
Tissue-clearing methods, light-sheet microscopy, and antibody labeling enable extracting cellular and subcellular information, producing large amounts of image data that need to be analyzed. Hundreds of heterogeneous cell types have been detected in the data obtained across species and tissue types. We developed a novel approach that is generally applicable to a wide range of cell types in large-scale 3D brain datasets, using a pipeline that performs accurate detection of cells regardless of image resolution, labeling pattern, and tissue processing techniques used. The pipeline is compatible with various labeling techniques including immunohistochemistry (IHC), fluorescence in situ hybridization (FISH), and genetic labeling, and can be used for cellular-level quantification in all types of tissues.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Localized Visual Aberration Detection and Suppression Dataset for UAV Perception Systems</title>
<link href="https://hdl.handle.net/1721.1/139883" rel="alternate"/>
<author>
<name>Keszler, John Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/139883</id>
<updated>2022-02-08T03:29:23Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Localized Visual Aberration Detection and Suppression Dataset for UAV Perception Systems
Keszler, John Alexander
Depth perception is an essential component of autonomous mobile robotics platforms. Due to size and weight limitations, a monocular camera system is typically best suited to collect this data. However, using these setups for depth perception can result in poor depth mapping in areas of the frame where the camera encounters visual aberrations (fog, glare, dust). This thesis presents a large-scale dataset with a variety of scenes in both simulated and real-world indoor environments, containing both visual and dynamical sensor data measured in the presence of varying levels of localized smoke sources. This dataset aims to facilitate the development of more robust autonomous UAV navigation systems. The size and variety of data presented make it a valuable tool for evaluating and testing visual-inertial estimation algorithms and haze/fog suppression methods that identify, locate, and suppress these noise sources in an attempt to improve the accuracy and stability of monocular localization and mapping algorithms.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pushcarts to Platforms: Measuring Food Delivery Apps’ Effect on Street Vendors’ Location Preferences in the Global South. Case Study: Surakarta, Indonesia</title>
<link href="https://hdl.handle.net/1721.1/139879" rel="alternate"/>
<author>
<name>Wisambodhi, Prathito Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/139879</id>
<updated>2022-02-08T03:57:25Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Pushcarts to Platforms: Measuring Food Delivery Apps’ Effect on Street Vendors’ Location Preferences in the Global South. Case Study: Surakarta, Indonesia
Wisambodhi, Prathito Andy
How does online commerce affect the offline presence of retail, food, and beverage (F&amp;B) establishments in cities? While extensive literature exists on e-commerce’s effect on the retail industry, its impact on retailers’ location preference and in particular street vendors in the Global South has been less explored. E-commerce and food delivery apps (FDA) change search costs for customers and could therefore change the desirability of locations for retailers. Yet, most existing retail economic studies are specific to brick-and-mortar establishments in the Western urban context, despite street vendors’ rapid adoption of online commerce and the Asia Pacific region’s lead in the global e-commerce growth rate even before the COVID-19 pandemic.&#13;
&#13;
This thesis focuses on the effect of FDA on the growth trend and location preferences of F&amp;B street vendors in Indonesia, using the city of Surakarta as a case study. Using spatial analysis and interviews, the thesis analyzes four hypotheses about the changes in street vendors’ presence, clustering, and location preferences based on street vendor location data collected in 2014 and 2019 on the same set of streets. The results show a negligible change in location preferences for street vendors of all kinds and a more pronounced change for F&amp;B vendors after controlling for street vendor growth. Without growth control, FDA has a minimal effect on the change in F&amp;B street vendors’ clustering and location preference, a finding also validated by the interviews. Finally, the thesis discusses data limitations and future opportunities that could inform policies on street vending and online delivery services.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surveilling Sin: Locating Sodomy in the Early Modern Florentine Bathhouse</title>
<link href="https://hdl.handle.net/1721.1/139877" rel="alternate"/>
<author>
<name>Flynn, Aidan</name>
</author>
<id>https://hdl.handle.net/1721.1/139877</id>
<updated>2022-02-08T03:09:39Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Surveilling Sin: Locating Sodomy in the Early Modern Florentine Bathhouse
Flynn, Aidan
This thesis examines the carnal sin of sodomy in early modern Florence, Italy (1432–1600). More specifically, this project investigates one particular sodomitical locale: the San Niccolò bathhouse. Domenico Cresti’s (called ‘Il Passignano’) Bathers at San Niccolò (1600) depicts a contemporaneous scene of all-male bathing, imbued with homosexually suggestive acts within a locatable urban space. What can this particular image tell us about the lived realities of sodomy in early modern Florence? When examined alongside topographical, legal, health, and religiopolitical archives, Bathers illuminates the intricacies of same-sex pleasure and punishment. In identifying this specific site along the Arno River, and combining Bathers with various written documents, one can better construct a history of sexual persecution, its surveillance, and institutional efforts to control illicit sex across the urban landscape.&#13;
&#13;
The bathhouse, a simultaneously public and private space, was a center for relaxation, sociability and health but also functioned as an arena for homosexual encounters. Sodomy was blasphemous, generating anxiety throughout early modern Italian city-states. Citizens feared for their safety: a sodomite in their midst could provoke divine wrath, as it had in the biblical narratives of Sodom and Gomorrah—sexual sins could lead to urban destruction. Police forces were created to surveil and punish such abominable acts in order to maintain the sacrality of the urban interior.&#13;
&#13;
While these magistracies policed every parish, the Florentine bathhouse was more challenging: it permitted nakedness and, as such, often resulted in unsavory interactions between men. How might topographical and painterly representations of water, wantonness, and punishment allow the historian to check written accounts (legal, religious, literary) of sexual encounters within specific architectures—and vice versa? Looking at and beyond the figures in Bathers, this project investigates the represented backdrop in which sodomitical activities are depicted. In so doing, this project engages with larger historiographical issues, namely the ways in which studies on premodern sex and gender have and have not been mobilized through postmodern theories. This thesis combines Passignano’s artwork with other visual and written materials to challenge and expand on the ways in which sex, space, art, and society functioned in Renaissance Florence.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Map Inference from Satellite Segmentation Data through Reinforcement Learning: A Novel Approach</title>
<link href="https://hdl.handle.net/1721.1/139876" rel="alternate"/>
<author>
<name>Jagwani, Satvat</name>
</author>
<id>https://hdl.handle.net/1721.1/139876</id>
<updated>2022-02-08T03:35:44Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Map Inference from Satellite Segmentation Data through Reinforcement Learning: A Novel Approach
Jagwani, Satvat
Online road maps need to be kept up to date for a variety of purposes, and the task of updating them can be automated. There are many algorithms to infer road map structure from data, including satellite imagery and crowdsourced GPS trajectories. However, most of these algorithms use supervised learning and require hyperparameter tuning on a given location to be able to infer maps with high accuracy. In addition, these algorithms are trained on metrics like per-pixel loss rather than on end-to-end objectives. In this project, we experiment with a reinforcement learning (RL) based algorithm that may counter the limitations of current algorithms.&#13;
&#13;
We use a map extraction algorithm with heuristics as a baseline and demonstrate that our RL algorithm achieves precision and recall that are comparable to the baseline algorithm. The RL algorithm is able to do this without much hyperparameter tuning, whereas the baseline requires aggressive hyperparameter tuning to give comparable results. In addition, the RL agent can be trained end-to-end to directly maximize the relevant metrics, including the topology of the extracted road network, whereas the baseline requires heuristic post-processing to produce such outputs.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Management Strategies for Reduced Freight Costs</title>
<link href="https://hdl.handle.net/1721.1/139875" rel="alternate"/>
<author>
<name>Feron, Amelie</name>
</author>
<id>https://hdl.handle.net/1721.1/139875</id>
<updated>2022-02-08T03:14:44Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Improving Management Strategies for Reduced Freight Costs
Feron, Amelie
This thesis deals with two freight problems currently encountered by Waters Corporation, an analytical laboratory instrument company. According to the customer service department, Waters does not charge for shipping on approximately 40% of orders in the US, and a portion of these uncharged shipments occurs by mistake, due to misalignments between databases or unnecessary expedited shipments. The company uses several databases for contract management (Lotus Notes) and shipments (SAP). Correcting these misalignments would ensure that Waters does not absorb the freight charges for customers who are supposed to pay for shipping. Moreover, Waters pays for expedited shipping of some orders due to time constraints, stockouts, damaged inbound products, or human errors. Therefore, there is a real opportunity for freight savings. This work offers a cost analysis of potential savings and provides recommendations to reduce freight costs. In particular, this thesis focuses on misalignments between Lotus Notes and SAP for European customers and on unnecessary expedited shipments from the Global Distribution Center located in Franklin, MA.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commoning the Public: Federal Land as a Site of Housing Struggle in Rio de Janeiro</title>
<link href="https://hdl.handle.net/1721.1/139873" rel="alternate"/>
<author>
<name>Hoffman, Ava R.</name>
</author>
<id>https://hdl.handle.net/1721.1/139873</id>
<updated>2022-02-08T03:43:42Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Commoning the Public: Federal Land as a Site of Housing Struggle in Rio de Janeiro
Hoffman, Ava R.
The state of Rio de Janeiro concentrates the largest number of public lands and buildings under federal ownership in Brazil, a legacy of the former colonial, imperial, and federal capital status of the eponymous city. Many sit vacant, failing to fulfill their constitutionally required social function. In this context, federal-owned public property emerges as a critical site of housing struggle. Contesting the dispossessory logics of ownership mobilized to exclude poor and working class residents from these spaces — bounded and policed against uses and users deemed improper and unproductive — housing occupations inscribe a new logic of collective use through everyday practices of “commoning the public.” In re-imagining public property and the ways that people might relate to it beyond claims to ownership, I suggest that these practices work toward shifting the governance of public property in a more deeply democratic direction. These imaginings are not relegated to the realm of abstraction. Rather, they provide a roadmap for building out robust public policies that see to the transformation of disused public properties in central urban areas into social interest housing.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Power of Social Information in Distributed Consensus in Ant-Colonies: Model and Analysis</title>
<link href="https://hdl.handle.net/1721.1/139872" rel="alternate"/>
<author>
<name>Zhao, Jiajia</name>
</author>
<id>https://hdl.handle.net/1721.1/139872</id>
<updated>2022-02-08T04:02:35Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Power of Social Information in Distributed Consensus in Ant-Colonies: Model and Analysis
Zhao, Jiajia
The decentralized cognition of animal groups is both a challenging biological problem and a potential basis for bio-inspired engineering design. The understanding of these systems and their application can benefit from modeling and analysis of the underlying algorithms. In Chapter 2, we define a modeling framework that can be used to formally represent all components of such algorithms. As an example application of the framework, we adapt to it the much-studied house-hunting algorithm used by emigrating colonies of Temnothorax ants to reach consensus on a new nest. We provide a Python simulator that encodes accurate individual behavior rules and produces simulated behaviors consistent with empirical observations, on both the individual and group levels. We use the simulator to make predictions about several aspects of collective emigration behavior, some with empirical support and some entirely new. Critically, our results highlight the value of individual sensitivity to site population in ensuring consensus, and suggest its empirical measurement.&#13;
&#13;
Though the above model captures a wide range of observed phenomena and makes new predictions, our work and previous work have mostly focused on experimental or modeling efforts, and lack rigorous mathematical justification. Building a theoretical understanding of the key mechanisms in the house-hunting process is necessary for the design of novel distributed consensus algorithms. In order to do so, in this chapter we further simplify the model introduced in Chapter 2 and investigate the marginal benefits of the quorum sensing mechanism. We show theoretical confirmation of the hypothesized evolutionary advantage of quorum sensing in helping consensus. In addition, the desirable values of the quorum size from our theoretical results have been observed empirically.&#13;
&#13;
It is our hope that the scientific insights and the modeling and mathematical tools can inspire further research from both the biology and computer science community.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Optimization-Based Qualitative/Algorithmic Approach to Transit Service Planning: Addressing the MBTA Green Line Extension</title>
<link href="https://hdl.handle.net/1721.1/139870" rel="alternate"/>
<author>
<name>Moody, John Takuma</name>
</author>
<id>https://hdl.handle.net/1721.1/139870</id>
<updated>2022-02-08T03:30:26Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">An Optimization-Based Qualitative/Algorithmic Approach to Transit Service Planning: Addressing the MBTA Green Line Extension
Moody, John Takuma
When changes to transit operations are necessary to accommodate changes in the network, demand levels, or agency resources, there is a risk that more obvious solutions (e.g., adjusting headways without changing service patterns) may be unnecessarily detrimental to the quality of the service provided. Complex trunk-with-branches transit networks present both opportunities and challenges for service planning in this context. There may be a large number of potentially feasible operating schemes that could address the problem, with some presenting worthwhile trade-offs that result in much better outcomes for passengers. However, identifying the most promising alternatives from such a large set is a difficult task. While human judgment is a critical part of the process, particularly in the analysis of the most promising solutions, subjectivity introduced too early in the alternative identification process can lead to a suboptimal selection of alternatives.&#13;
&#13;
This research proposes and demonstrates the benefits of a combined qualitative/algorithmic approach to service planning. The proposed approach combines scenario planning, optimization, and qualitative analysis to generate solutions that are robust against uncertainty while providing consistently high passenger level-of-service. An integer optimization program is used to model complex trunk-with-branches transit networks, which outputs a set of service patterns that satisfy various constraints (e.g., passenger capacity, agency resources, fleet composition, infrastructure limitations) while minimizing detriments to passenger level-of-service, namely wait time and transfers. The value of the subsequent qualitative assessment is increased by the use of optimization, as comparisons are being made between high-performance operating schemes.&#13;
&#13;
This approach is applied to the MBTA Green Line to propose service plans after the construction of the Green Line Extension (GLX), which adds two branches to the current four. This extension is occurring during the COVID-19 pandemic, which has resulted in a significant reduction in demand and tightening of agency resources. Both events warrant and facilitate a shift in service patterns. Four phases of post-GLX evolution of demand and resources were considered to illustrate short- and long-term operating conditions. In most cases, plans generated by the qualitative/algorithmic approach included single-car train operations during the peak period to reduce expected wait time relative to the current plans. The alternatives identified may allow post-GLX operations to achieve a pre-pandemic level of service even before agency resources have fully recovered. The research suggests that the qualitative/algorithmic approach can allow service planners to maximize the potential benefits of paradigm shifts such as the GLX.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Residual Model Learning for Microrobot Control</title>
<link href="https://hdl.handle.net/1721.1/139867" rel="alternate"/>
<author>
<name>Gruenstein, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/139867</id>
<updated>2022-02-08T03:53:41Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Residual Model Learning for Microrobot Control
Gruenstein, Joshua
A majority of microrobots are constructed using compliant materials that are difficult to model analytically, limiting the utility of traditional model-based controllers. Challenges in data collection on microrobots and large errors between simulated models and real robots make current model-based learning and sim-to-real transfer methods difficult to apply. We propose a novel framework, residual model learning (RML), that leverages approximate models to substantially reduce the sample complexity associated with learning an accurate robot model. We show that using RML, we can learn a model of the Harvard Ambulatory MicroRobot (HAMR) using just 12 seconds of passively collected interaction data. The learned model is accurate enough to be leveraged as a “proxy-simulator” for learning walking and turning behaviors using model-free reinforcement learning algorithms. RML provides a general framework for learning from extremely small amounts of interaction data, and our experiments with HAMR clearly demonstrate that RML substantially outperforms existing techniques.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing for Electromechanical Systems</title>
<link href="https://hdl.handle.net/1721.1/139866" rel="alternate"/>
<author>
<name>Krause, Thomas Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/139866</id>
<updated>2022-02-08T03:50:30Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Sensing for Electromechanical Systems
Krause, Thomas Charles
Electromechanical devices and systems, for example, pumps, compressors, and electric machines, are the foundation of modern society. Condition monitoring of these systems is crucial for efficient operation and operator safety. Modern electronics enable new developments in electromechanical system condition monitoring. In this work, custom instrumentation and measurement schemes are introduced and applied to non-contact voltage measurement, the integrated electronic piezo-electric (IEPE) standard, and cutting tool condition monitoring.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from Experience: An Interactive and Ethical Curriculum for Teaching Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/139863" rel="alternate"/>
<author>
<name>Hoekstra, Chessa Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/139863</id>
<updated>2022-02-08T03:01:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Learning from Experience: An Interactive and Ethical Curriculum for Teaching Reinforcement Learning
Hoekstra, Chessa Nicole
As AI becomes prevalent in society, AI education and AI literacy are also increasingly important fields. Reinforcement learning (RL) plays, and will continue to play, a significant role in many systems, including online advertising, self-driving cars, and personalized tutoring. There is a corresponding need for RL education. In my thesis work, I developed activities and materials that teach RL to middle and high school students who don’t necessarily have backgrounds in computer science or AI. Then, I evaluated these materials by teaching a series of pilot workshops. The project had two main goals: effectively educating the students in the workshops, and contributing insights to the ongoing research on AI education. In this thesis, I present the curriculum design, as well as the analysis of each pilot.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Decarbonization Factors on Deeply Decarbonized Electrical Systems: Texas Case Study</title>
<link href="https://hdl.handle.net/1721.1/139861" rel="alternate"/>
<author>
<name>Junge Bascur, Cristian</name>
</author>
<id>https://hdl.handle.net/1721.1/139861</id>
<updated>2022-02-08T03:30:48Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">The Effect of Decarbonization Factors on Deeply Decarbonized Electrical Systems: Texas Case Study
Junge Bascur, Cristian
The expectation of continued CO2 emissions reduction in the power sector has prompted interest among policymakers, regulators and utilities in expanding electrification of other end-use sectors as a way to meet long-term economy-wide decarbonization goals. Expanded use of electrification in these sectors to displace fossil-fuel use, such as for heating or transportation, is appealing not only because it eliminates distributed sources of CO2 emissions and has associated efficiency benefits, but also because it leverages existing end-use technologies and infrastructure. However, the full CO2 emissions benefit of electrification is contingent on deep decarbonization of electricity systems. This work is centered on the impact of factors that contribute to the deep decarbonization of power systems, under a high-electrification assumption and taking Texas as the case study.&#13;
&#13;
The factors studied are the availability and cost of generation and storage technologies; electrification level; demand flexibility; demand response; and the coupling of the power system with industry to supply electricity-driven hydrogen for process heat. By means of a Capacity Expansion Model, GenX, and a design of experiments (DOE) framework, each factor is studied in depth at different CO2 emission intensity targets, starting with the unconstrained system, and then ranging from 85% up to 100% decarbonization (total CO2 mass yearly offset with respect to 2018). The impact of each factor is quantified in terms of its effect on average system cost (SCOE); installed power capacity; storage needs; wholesale price distribution; and system operation.&#13;
&#13;
Results show that: (1) Under no CO2 constraints (a "No Policy" scenario), the power system tends to decarbonize itself to a level of 72%, driven by assumed cost projections for 2050 and the high availability of variable renewable energy (VRE) in Texas. (2) Achieving 98% decarbonization implies reaching a system average cost of $41/MWh, an increase in system average cost of 17% over the No Policy case. (3) The various factors evaluated here impact power system outcomes (system costs, system total power capacity, wholesale electricity price distribution, reliability) differently depending on the emission constraint. A combination of factors is generally found to lead to favorable outcomes on multiple dimensions. (4) The most impactful factor is the cost of VRE, followed by hydrogen use in industry and the availability and cost of long-duration energy storage (LDES) technologies. (5) An increasing share of VRE generation increases the number of hours with zero wholesale electricity prices, implying that technologies have to rely on only a few hours to recover investments in energy-only markets. Deployment of dispatchable generation sources such as the Allam cycle, LDES, and activation of the coupling with industry to supply electricity-driven hydrogen reduces instances of zero wholesale electricity prices. (6) Demand-side management factors (demand flexibility and demand response) mainly contribute to reducing the system footprint and price volatility and, to a lesser extent, system costs. (7) Higher electrification of energy demand is found to be beneficial not only because it increases the cost-effectiveness of decarbonization via VRE generation, owing to the overlap between peak demand and VRE resource availability, but also because it contributes to reducing system SCOE and VRE curtailment levels.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Environmental and economic characteristics of electrofuel production pathways</title>
<link href="https://hdl.handle.net/1721.1/139711" rel="alternate"/>
<author>
<name>Isaacs, Stewart Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/139711</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Environmental and economic characteristics of electrofuel production pathways
Isaacs, Stewart Anthony.
Electrofuels are liquid fuels derived from CO₂ and electricity, which have the potential to store intermittent renewable power and reduce transportation's climate impact. In this work, I assess the economic and environmental characteristics of four technology pathways for electrofuel production, using the methods of life cycle analysis and techno-economic assessment. In addition, the analysis includes a number of scenarios in which the technologies are powered directly from dedicated renewable electricity generation. The results indicate that the hybrid power- and biomass-to-liquids (PBtL) pathway may represent a promising option for electrofuel production in terms of lifecycle emissions reductions and minimum selling price. I further characterize the PBtL pathway by combining spatially-resolved data on biomass cultivation, electricity generation, and cost-optimized solar-hydrogen production in the United States (US). I find that the resulting fuel would have a minimum selling price between $2.10 and $3.81 per liter and lifecycle emissions of 15-27 gCO₂e/MJ depending on the production location.
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 63-66).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Environmental Cost Basis for Regulating Aviation NOx Emissions</title>
<link href="https://hdl.handle.net/1721.1/139710" rel="alternate"/>
<author>
<name>Miller, Cassandra Joy.</name>
</author>
<id>https://hdl.handle.net/1721.1/139710</id>
<updated>2025-11-06T18:01:16Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">An Environmental Cost Basis for Regulating Aviation NOx Emissions
Miller, Cassandra Joy.
Nitrogen oxides and carbon dioxide are two by-products of combustion in aircraft engines, and have different impacts on the environment. Nitrogen oxides (NOₓ) are both an air quality concern and an indirect contributor to radiative forcing, while carbon dioxide (CO₂) is a long-lived greenhouse gas. The International Civil Aviation Organization has been responsible for evaluating and setting commercial aircraft NOₓ emissions standards since 1981. Each of the historical standards has been more stringent than the previous and, when implemented, requires newly certified engines to produce less NOₓ per unit rated thrust. Each iteration has been defined as a function of engine overall pressure ratio, which then links the engine cycle, and implicitly fuel burn and CO₂ emissions, to allowable NOₓ levels. These regulations have historically been evaluated and implemented with a focus on reducing adverse air quality impacts around airports, but the thermodynamic tradeoff with CO₂ requires additional analysis to quantify net climate impacts. This paper introduces a social cost basis for evaluating aviation NOₓ emissions regulations, and quantifies air quality damage, climate damage, and fuel costs associated with allowable emission levels. The result is monetized environmental and fuel costs associated with certain emission standards. Results show higher overall pressure ratio engines operating at the current NOₓ regulatory limit are allowed more environmental damage per unit rated thrust than lower overall pressure ratio engines, therefore allowing uneven social costs per unit thrust (i.e., fuel and environmental costs combined) across the engine design space. This is a consequence of the definition of the regulation today, where higher pressure ratio engines are allowed higher NOₓ emissions. Alternative regulation definitions are evaluated which consider the engine cycle and combustor together.
Achieving constant social costs requires the regulation to decrease in slope at higher pressure ratios, corresponding to the diminishing marginal efficiency improvements, instead of increasing slope in that region.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.; Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June, 2019; "x" for NOx in title is "subscript x." Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 43-44).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Treet list processing language</title>
<link href="https://hdl.handle.net/1721.1/139707" rel="alternate"/>
<author>
<name>Haines, Edward Cadmus.</name>
</author>
<id>https://hdl.handle.net/1721.1/139707</id>
<updated>2022-02-10T03:46:44Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The Treet list processing language
Haines, Edward Cadmus.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Includes bibliographical references (leaf 59).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quality assessment of glass reinforced plastic ship hulls in naval applications</title>
<link href="https://hdl.handle.net/1721.1/139698" rel="alternate"/>
<author>
<name>Thomas, Ronald David.</name>
</author>
<author>
<name>Cable, Christopher Wheeler.</name>
</author>
<id>https://hdl.handle.net/1721.1/139698</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1985-01-01T00:00:00Z</published>
<summary type="text">Quality assessment of glass reinforced plastic ship hulls in naval applications
Thomas, Ronald David.; Cable, Christopher Wheeler.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1985; Includes bibliographical references.
</summary>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approaches to the synthesis of structures related to shikimic and chorismic acids.</title>
<link href="https://hdl.handle.net/1721.1/139697" rel="alternate"/>
<author>
<name>Mane, Jean Maurice.</name>
</author>
<id>https://hdl.handle.net/1721.1/139697</id>
<updated>2022-02-10T04:07:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Approaches to the synthesis of structures related to shikimic and chorismic acids.
Mane, Jean Maurice.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Supply Regulation to Improve Farmers’ Income in Agricultural Markets</title>
<link href="https://hdl.handle.net/1721.1/139612" rel="alternate"/>
<author>
<name>McCombs, Morgan Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/139612</id>
<updated>2022-01-15T03:30:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Data-Driven Supply Regulation to Improve Farmers’ Income in Agricultural Markets
McCombs, Morgan Jane
Millions of smallholder farmers live in persistent poverty. To improve farmers’ livelihood and connect physically distant markets, the Karnataka state government in India launched the Unified Market Platform (UMP) in 2014. The UMP is an online agri-platform used to facilitate commodity auction markets. In this research we assume the role of the state government and propose to regulate the daily auction supply for sale under the objectives of increasing farmers’ revenue and reducing day-to-day price fluctuation. Using data collected via the UMP, we estimate a model of the relationship between market supply, demand, and realized daily price, then employ Dynamic Programming (DP) to determine the government’s optimal inventory control policy. We characterize the optimal solution and revenue, and how the supply and price model parameters affect the solution. We first consider a benchmark, deterministic formulation, then model extensions that incorporate inventory holding cost and stochastic demand. In the deterministic setting, supply regulation yields an average annual revenue improvement of 0.1%-4.3%; price variance is reduced by an average of 34%-99%. Comparable results are achieved with the model extensions across most commodities and markets. We conclude there is significant benefit to implementing an inventory regulation scheme.&#13;
&#13;
With the prospect of further improving farmers’ revenue, we also consider the combinatorial problem of merging two or more markets. To gain structural insights, we first analyze the special case of potentially merging two identical markets. The theoretical results indicate that merging is optimal if the difference between daily supply quantities is relatively small and the supply quantities are initially small. Using UMP data, our counterfactual empirical analysis shows that merging markets yields additional revenue gains of 0.3%-3.8%. Finally, we numerically analyze potential revenue gains when merging actual markets from the UMP data. Most analyses yield comparable results; however, certain settings produce mixed results for different parameters.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncovering Perovskite Degradation Equations Using Scientific Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/139611" rel="alternate"/>
<author>
<name>Naik, Richa Ramesh</name>
</author>
<id>https://hdl.handle.net/1721.1/139611</id>
<updated>2022-01-15T03:50:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Uncovering Perovskite Degradation Equations Using Scientific Machine Learning
Naik, Richa Ramesh
Many important materials are metastable or unstable under certain operating regimes. The degradation mechanisms can be varied and complex, making the discovery of underlying differential equations (DEs) through a first-principles approach challenging. This invites the application of data-science methods to infer root causes. Traditionally, machine learning (ML) applied to materials research has focused on optimization and regression over a limited training set. Inferring physical laws directly from data may allow the extraction of more generalizable scientific information that enables one to understand underlying mechanisms. In this study, we apply scientific ML — a blend of traditional scientific mechanistic modeling (differential equations) with machine learning methodologies — to identify differential equations governing the degradation of methylammonium lead iodide perovskite (MAPI), a material with known instability under environmental stress. We explore scientific ML applied to simulated and experimental datasets, obtaining equations that describe the temperature and time dependence of MAPI degradation. Our method of choice is the sparse regression method PDE-FIND (Rudy, Samuel H., et al. "Data-driven discovery of partial differential equations." Science Advances 3.4 (2017): e1602614). We find that the underlying DE governing MAPI degradation corresponds to the Verhulst logistic function, often used to describe autocatalytic or self-propagating kinetics. This thesis demonstrates the application of scientific ML in practical materials science systems, highlighting the promise and challenges associated with ML-aided scientific discovery.
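As a concrete reference, the Verhulst logistic function named above can be sketched in a few lines (an illustrative closed form of the logistic ODE dx/dt = r*x*(1 - x/K); the rate r, carrying capacity K, and initial value x0 here are hypothetical, not fitted MAPI parameters):

```python
import math

def verhulst(t, r, K, x0):
    # Closed-form solution of the logistic ODE dx/dt = r*x*(1 - x/K):
    # S-shaped growth typical of autocatalytic (self-propagating) kinetics.
    return K / (1.0 + (K - x0) / x0 * math.exp(-r * t))

# The degradation fraction rises from x0 toward the plateau K as t grows.
```

In a PDE-FIND-style workflow, one would instead regress time derivatives of measured degradation data onto a library of candidate terms and recover the logistic coefficients from the sparse fit.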
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Change Point Detection in Time Series via Multivariate Singular Spectrum Analysis</title>
<link href="https://hdl.handle.net/1721.1/139610" rel="alternate"/>
<author>
<name>AlAnqary, Arwa</name>
</author>
<id>https://hdl.handle.net/1721.1/139610</id>
<updated>2022-01-15T03:08:52Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Change Point Detection in Time Series via Multivariate Singular Spectrum Analysis
AlAnqary, Arwa
The objective of change-point detection (CPD) is to estimate the time of significant and abrupt changes in the dynamics of a system through multivariate time series observations. The setup of CPD covers a wide range of real-world problems such as quality control, medical diagnosis, speech recognition, and fraud detection, to name a few. In this thesis, we develop and analyze a principled method for CPD that combines a variant of the multivariate singular spectrum analysis (mSSA) approach with the cumulative sum (CUSUM) procedure for sequential hypothesis testing. In particular, we model the underlying dynamics of multivariate time series observations through the spatio-temporal model introduced recently in the mSSA literature. The change points in such a setting correspond to a change in the underlying spatio-temporal model. As the primary contributions of this work, we develop a CUSUM-based algorithm to detect such change points in an online fashion. Further, we extend the analysis of CUSUM statistics, traditionally done for the setting of independent observations, to the dependent setting of (multivariate) time series under the spatio-temporal factor model. Specifically, we analyze the performance of our algorithm in terms of the average run length (ARL) – a common metric used traditionally in sequential hypothesis testing to measure the trade-off between the delay in a true detection and the running time until a false detection. We formally establish that for any given detection parameter h &gt; 0, on average, the algorithm detects a change point with a delay of O(h) time steps, while in the case of no change it takes at least Ω(exp(h)) time steps until it makes a false detection. Finally, we empirically show that the proposed CPD method provides state-of-the-art performance across synthetic and benchmark datasets.
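The CUSUM recursion underlying the procedure can be sketched in scalar form (a minimal one-sided illustration under an assumed known pre-change mean, not the mSSA-based statistic developed in this thesis; the sample stream, reference mean mu0, drift allowance, and threshold h are hypothetical):

```python
def cusum_alarm(samples, mu0, drift, h):
    # One-sided CUSUM: accumulate evidence of an upward mean shift,
    # clipping the statistic at zero; alarm once it exceeds threshold h.
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mu0 - drift))
        if s > h:
            return i  # index of the first alarm
    return None  # no change detected
```

Raising h delays true detections roughly linearly while pushing out false alarms exponentially, which is the trade-off the delay and average-run-length guarantees above quantify.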
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing surrogate models of engineering structures with graph-based and physics-informed learning</title>
<link href="https://hdl.handle.net/1721.1/139609" rel="alternate"/>
<author>
<name>Whalen, Eamon Jasper</name>
</author>
<id>https://hdl.handle.net/1721.1/139609</id>
<updated>2022-01-15T03:43:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Enhancing surrogate models of engineering structures with graph-based and physics-informed learning
Whalen, Eamon Jasper
This thesis addresses several opportunities in the development of surrogate models used for structural design. Though surrogate models have become an indispensable tool in the design and analysis of structural systems, their scope is often limited by the parametric design spaces on which they were built. In response, this work leverages recent advancements in geometric deep learning to propose a graph-based surrogate model (GSM). The GSM learns directly on the geometry of a structure and thus can learn on designs from multiple sources without the typical restrictions of a parametric design space. &#13;
&#13;
Engineering surrogate models are often limited by data availability, since designs and performance data can be expensive to produce. This work shows that transfer learning, through which training data of varying topology, complexity, loads and applications are repurposed for new predictive tasks, can be used to improve the data efficiency of surrogates, often reducing the required amount of training data by one or two orders of magnitude. This work also explores new potential sources for training data, namely engineering design competitions, and presents SimJEB, a new public dataset of simulated engineering components designed specifically for benchmarking surrogate models. Finally, this work explores the emerging technology of physics-informed neural networks (PINNs) for structural surrogate modeling, proposing two new heuristics for improving the convergence and accuracy of PINNs in practice. Combined, these contributions advance the generalizability and data efficiency of surrogate models used in structural design.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discontinuous Galerkin solutions of the Boltzmann equation: spectral collocation and moment methods</title>
<link href="https://hdl.handle.net/1721.1/139608" rel="alternate"/>
<author>
<name>Van Heyningen, R. Loek</name>
</author>
<id>https://hdl.handle.net/1721.1/139608</id>
<updated>2022-01-15T03:32:17Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Discontinuous Galerkin solutions of the Boltzmann equation: spectral collocation and moment methods
Van Heyningen, R. Loek
This thesis explores the ability of the discontinuous Galerkin (DG) method to numerically solve the Boltzmann equation. Constructing numerical methods for this equation is a challenge, due in part to the kinetic theory description of moving particles, which relies on space, time, and velocity variables. Two novel approaches are presented and compared. The first uses a spectral collocation basis in velocity space. The resulting system is solved in time using Diagonally Implicit Runge-Kutta methods, chosen in order to mitigate stiffness concerns. A Jacobian-free Newton-Krylov method is presented, accelerated with a sweeping preconditioner. The method is tested on 1D and 2D problems in order to validate its convergence behavior and investigate its efficiency. The second method uses DG for moment equations, which can be derived as spectral methods in velocity space with spatial and temporal adaptivity. These methods were first proposed in 1949 by Grad, but their applicability has been limited. The equations are not guaranteed to be hyperbolic, leading to stability issues. The elegance and potential for cost reduction of Grad’s moment method have led to the development of different moment closures that preserve hyperbolicity and model accuracy. The approaches studied in this thesis, the globally hyperbolic moment methods, restore hyperbolicity by introducing a term that cannot be written in conservative form. The equations are typically solved with operator splitting and low-order methods. We examine the promise and challenges of applying a high-order DG method with explicit Runge-Kutta time-stepping to these equations on common 1D test cases. The thesis ends with a discussion of the prospects of both methods and suggestions for future work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-Based Learning and Planning for Intelligent Manipulation Using Probabilistic Hybrid Models</title>
<link href="https://hdl.handle.net/1721.1/139602" rel="alternate"/>
<author>
<name>Feng, Meng</name>
</author>
<id>https://hdl.handle.net/1721.1/139602</id>
<updated>2022-01-15T03:11:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Model-Based Learning and Planning for Intelligent Manipulation Using Probabilistic Hybrid Models
Feng, Meng
While the rapid advancement of deep learning and grasp-affordance methods has allowed the fast planning of grasping poses directly from visual inputs, such methods still commonly adopt an open-loop architecture that is slow to react and prone to failure, limiting their use in more complicated manipulation problems that require online adaptation and dexterous interactions. Although designing behavior trees may work as a short-term fix for simple manipulation tasks, such an approach is not scalable as the complexity of the desired skill increases. Modern deep reinforcement learning techniques have shown good performance in learning closed-loop policies for selected sets of atomic manipulation skills, but they often require a prohibitive amount of training data and lack interpretability in the learned policy models. To reduce the cost of learning closed-loop manipulation controllers and facilitate more transparency, we propose a model-based reinforcement learning algorithm. Our algorithm learns deep probabilistic hybrid automata (DPHA), a novel graphical model that learns to predict both low-level state evolution as well as high-level transitions among distinct modalities. We also show that the discrete modes that naturally arise when learning DPHA models can provide promising insights that reveal semantically meaningful intentions and discover potentially generalizable skills. We present a sampling-based model predictive control algorithm that leverages the DPHA model to plan actions over spatially and temporally extended horizons. Our benchmark shows that these algorithms are capable of achieving comparable asymptotic performance with up to 10 times less training data compared to standard benchmark algorithms on pushing and grasping problems.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hydrodynamic Analysis and Conceptual Design Study for an External Storage Enclosure System for Unmanned Underwater Vehicles.</title>
<link href="https://hdl.handle.net/1721.1/139600" rel="alternate"/>
<author>
<name>Hait, Matthew Warren</name>
</author>
<id>https://hdl.handle.net/1721.1/139600</id>
<updated>2022-01-15T03:48:21Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Hydrodynamic Analysis and Conceptual Design Study for an External Storage Enclosure System for Unmanned Underwater Vehicles.
Hait, Matthew Warren
Medium-sized Unmanned Underwater Vehicles (UUVs) are limited in their scope of operations, range, and endurance by their relatively small energy storage capacity. The majority of commercially available medium-sized UUVs are incapable of mission operations with durations longer than 30 hours, and many are unable to achieve 24 hours. The complex integration of control and instrumentation equipment internal to the UUV has a detrimental impact on the location, type, and number of sensors installed within UUVs of this size, and often comes at the cost of additional energy storage capability. This research investigates the hydrodynamic resistance and powering requirements needed to support a conceptually designed rigid multi-bodied UUV built around DARPA’s SUBOFF hull form, the goal being to develop new and innovative low-cost methods of modifying commercially available UUVs to enhance range, payload capabilities, and sensory performance through the use of novel external enclosure systems. This thesis investigates the impact on the UUV’s drag of optimizing the location and size of the spheroid-shaped, externally mounted equipment bays such that resistance is minimized. Enclosures are capable of extending the sensor and payload capacity or increasing the onboard energy storage via detachable store pods. Energy storage methods, total energy capacity, and the impact of the overall system on range are investigated utilizing the constrained weight and volume of a 3000-meter-depth-capable pressure hull. Performance is predicted via Computational Fluid Dynamics using OpenFOAM and is validated using Experimental Fluid Dynamics via model towing resistance. Structural strength was determined by Finite Element Analysis.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>heirlooms in search of the fifth ecology</title>
<link href="https://hdl.handle.net/1721.1/139599" rel="alternate"/>
<author>
<name>Wong, Erin</name>
</author>
<id>https://hdl.handle.net/1721.1/139599</id>
<updated>2022-01-15T03:51:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">heirlooms in search of the fifth ecology
Wong, Erin
From the removal and displacement of indigenous peoples to gain land to grow cotton, to the creation of high-yielding modern seed varieties, agricultural practices over the past 200 years have created a broken and unsustainable system. Technological advances in plant genetics have fueled large-scale crop production, and a race to control one of our most important resources: seeds.&#13;
&#13;
In the shadows, agro-chemical companies have amassed considerable control over the seed industry. Their efforts have resulted in a consolidation of the seeds available for commercial use. So, while crop yields have increased, agricultural biodiversity has decreased. Since the 1900s, the United States has lost over 90% of its fruit and vegetable varieties, spurring seed-saving efforts around the world. The most well-known of these collective efforts is the Svalbard Global Seed Vault in Norway. Opened in 2008, the Global Seed Vault holds the world’s largest collection of agricultural biodiversity. The seeds lying in the deep freeze of the vault include wild and old varieties, many of which are no longer in general use. But while the vault protects and preserves, the seeds, hidden away and frozen in time, wait to be woken.&#13;
&#13;
We are now in a period known as the Awakening. A time that requires a new type of heirloom seed institution, one that is decentralized and accessible, one that designs for the entire life-cycle of the seed. Where once in nature, heirloom seeds found ways to move by themselves, by wind, by ocean current, in the bellies of animals, or by ballistic dispersal, they must now be supported by new heirloom seed practices. Therefore, urban centers, once removed from the life-cycle of the seed, are reintroduced as the site of an urban food culture. Told through a four-course meal, this is the story of the seed keepers of Los Angeles.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scripting Inclusion</title>
<link href="https://hdl.handle.net/1721.1/139598" rel="alternate"/>
<author>
<name>Merzaban, Amanda Sayed</name>
</author>
<id>https://hdl.handle.net/1721.1/139598</id>
<updated>2022-01-15T04:03:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Scripting Inclusion
Merzaban, Amanda Sayed
Efforts to bring underrepresented modernist women artists of Arabic-speaking countries into the scope of Western art exhibitions have been on the rise, particularly since the early 2000s. Who decides which artists get shown and how their stories are told? What are the power structures guiding their inclusion? I inspect the consequences of the prevailing power dynamic through a feminist lens. This thesis offers a way of reviewing these systems of power so they can be more explicitly analyzed and discussed in tandem with how art is inscribed into Western discourse.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fractured and Dissolved, Architecture Ablaze: Towards an Understanding of Ayeneh-Kari in the Palaces of Iran</title>
<link href="https://hdl.handle.net/1721.1/139596" rel="alternate"/>
<author>
<name>Daftarian, Reza</name>
</author>
<id>https://hdl.handle.net/1721.1/139596</id>
<updated>2022-01-15T03:33:36Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Fractured and Dissolved, Architecture Ablaze: Towards an Understanding of Ayeneh-Kari in the Palaces of Iran
Daftarian, Reza
Ayeneh-kari (lit. ‘mirror-work’), a term that refers to a surface encrusted with fragmented mirrored glass, is a form of architectural ornament found in myriad historic monuments across Greater Iran. This decorative medium first emerged in the celebrated palatial structures of the Safavids (r. 1501–1722) and culminated in the intricate geometric mosaics that became the preferred decorative schema for palaces under the Qajars (r. 1789–1925). Despite this preeminence, at its zenith ayeneh-kari was largely denigrated by foreign visitors to Iran as garish and a feeble emulation of European ‘culture,’ an attitude which has unfortunately permeated the scant scholarly literature on the subject in both English and Persian. In response to the cursory inquiries dealing with the ornamental form, the present work examines the emergence of ayeneh-kari in early modern Iran and traces its evolution as both an ornamental form and an ideological mechanism. How did this medium evolve from an obscure ornamental program of Safavid palaces to a conspicuous decorative schema that became ubiquitous in Qajar monuments? What was the sociopolitical climate in which this peculiar surface ornament flourished, and how was it reflected by the self-conscious use of ayeneh-kari in palatial architecture? Herein lies the crux of the present study, which will treat ayeneh-kari as a multisensory art form in its own right and as a dialogic instrument wielded to simultaneously forge and enunciate the mystique, splendor, and authority of a sovereign figure. By tracing the transformations in composition, location, and scale of ayeneh-kari and contextualizing such shifts within their respective socio-historical moments, we can recognize how nearly four centuries of Iranian rulers have employed this ornamental form as an ideological contrivance.
Ultimately, I contend that on account of its material, sensory and symbolic qualities, ayeneh-kari was methodically used in the architectural programs of the Safavids and especially the Qajars, whose imperial enterprise was contingent upon a symbolic linkage to their predecessors, to convey a distinctively Perso-Shi’i configuration of kingship.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Lean Manufacturing Concepts to a High-Mix Low-Volume Make to Order Environment</title>
<link href="https://hdl.handle.net/1721.1/139593" rel="alternate"/>
<author>
<name>Rodriguez, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/139593</id>
<updated>2022-01-15T03:07:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Applying Lean Manufacturing Concepts to a High-Mix Low-Volume Make to Order Environment
Rodriguez, Andrew
Despite research documenting the operational benefits achieved by reducing the number of product offerings, manufacturing businesses frequently serve markets in which customers demand a large variety of products with options across multiple features at irregular intervals. To meet customer requirements across a range of possible demand, businesses manage the high mix and low volume by choosing to make-to-order instead of make-to-stock. While principles of Lean Manufacturing have been recognized as enablers of operational excellence in high volume production operations, questions remain about the applicability of the concepts in high-mix, low-volume, make-to-order environments.&#13;
&#13;
This project explores the applicability of Lean concepts within this manufacturing environment at PSI Control Solutions (PSI), a mid-sized business assembling electrical distribution and control products for industrial consumers. To stay competitive, the business must provide high-quality, cost-competitive products that meet customer specifications with minimal lead times. Within this organization, the primary research areas for applying Lean concepts were component replenishment policy and production process improvement. After a production cell applied the methods of eliminating waste, mistake proofing, and pull, it increased the percentage of sales delivered on time to the customer-desired date from 85.7% to 100% over subsequent three-month periods and improved product first-pass yield from 87.9% to 93.1% over six-month periods.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo Method for Calorimetric NRF Cargo Screening</title>
<link href="https://hdl.handle.net/1721.1/139592" rel="alternate"/>
<author>
<name>Bickus, Jacob E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139592</id>
<updated>2022-01-15T03:44:48Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Monte Carlo Method for Calorimetric NRF Cargo Screening
Bickus, Jacob E.
A number of fields in nuclear security require isotopic analysis and identification. Nuclear resonance fluorescence (NRF) has provided a non-intrusive isotope-sensitive measurement technique to detect special nuclear material in cargo [8], and has been proposed as a verification technique in arms control treaty verification [41]. Standard methods of performing NRF involve the use of expensive HPGe detectors to detect a scattered signal to discriminate between isotopes of special nuclear materials. Furthermore, these require a continuous wave (CW) beam, which currently can be delivered only by large and static accelerators [40]. We propose a system using an energy-modulating chopper wheel and a simpler, pulsed electron accelerator beam as the radiation source. This work builds upon a concept presented by Kemp et al. [24], with the difference of a measurement of NRF in a scattering mode. In this approach the chopper wheel serves as a switch, effectively modulating the beam to include or exclude photons of NRF energies for interrogating the test object. Comparison between the chopper "On" and "Off" states will provide a differential signal which, upon integration, can allow inference of special nuclear materials based on their NRF signals. The approach places integrating calorimetric Cherenkov detectors at a back-scattered angle, which will eliminate much of the background typically found in a transmitted spectrum. Cherenkov detectors will replace the HPGe detectors in an effort to decrease the low-energy background. We present a thoroughly tested Monte Carlo model to compare with experimental testing using Cherenkov detectors and nuclear resonance fluorescence to discriminate between isotopes of special nuclear material. Preliminary simulation results show that a uranium interrogation object could not be identified within a 5-minute interrogation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Trailer Capable, Open Ocean Sailing Yacht</title>
<link href="https://hdl.handle.net/1721.1/139591" rel="alternate"/>
<author>
<name>Maxwell, Nathan E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139591</id>
<updated>2025-10-30T18:06:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of a Trailer Capable, Open Ocean Sailing Yacht
Maxwell, Nathan E.
A design is developed for a small sailing yacht capable of being towed, launched, and recovered with a standard-sized truck or sport utility vehicle, while retaining capability for extended, open ocean transits. A review of factors affecting small yacht seaworthiness is presented, and relevant design parameters are proposed. Design requirements pertaining to trailer capability, seaworthiness, and vessel intended use are developed, and a multi-criteria decision-making method is employed to down-select to preferred options in key functional areas of the design. From there, an iterative point-based design approach is employed to converge on a design that satisfies requirements. Major design work encompassed developing a suitable hull form; keel and rudder design; selection and validation of appropriate scantlings; designing a composite mast and spars; determining a sail plan and rigging schema; engine selection, propeller design, and off-design propulsion analysis; arrangements layout; detailed weights and stability assessments; and sailing performance predictions. The design meets or exceeds all developed requirements, including exceeding International Standards Organization (ISO) stability and buoyancy requirements on Stability Index (S.I.) and Righting Energy for the highest design category classification, which pertains to vessels expected to experience significant wave heights up to 7 m and up to Force 10 winds. A 1:7 scale model of the hull was constructed with a fused deposition modeling 3D printer and used to measure upright resistance of the yacht in towing tank experiments, for comparison to resistance predictions generated from the Delft Systematic Yacht Hull Series.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Deficiencies from Missing Data in Electronic Health Records</title>
<link href="https://hdl.handle.net/1721.1/139587" rel="alternate"/>
<author>
<name>Zhou, Tianqi</name>
</author>
<id>https://hdl.handle.net/1721.1/139587</id>
<updated>2022-01-15T03:47:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Addressing Deficiencies from Missing Data in Electronic Health Records
Zhou, Tianqi
Electronic health records (EHRs) contain a wealth of data that can be used to improve patient-centered outcomes. In particular, EHRs have been used for disease prediction, data-driven clinical decision support, patient trajectory modeling, etc. However, it is common that EHR data contain substantial missing information that can make clinical prediction tasks more challenging if left unaddressed.&#13;
&#13;
This thesis focuses on the problem of multivariate data imputation in patients’ ICU EHRs, where the source of missing information comes mostly from the time-series data. The key challenge is to design flexible models that model the time-series and non-time-series data jointly such that accurate time-series imputation can be achieved. To this end, a multi-modal neural network for sequential regression is proposed to accurately estimate the missing values in the time series from EHRs. Specifically, the model is trained using a self-supervised regression objective, namely the masked value regression task, which mimics missing-data situations at test time and optimizes a supervised regression loss in each learning episode. Additionally, an adversarial training procedure, similar to that of conditional generative adversarial networks, is employed to further improve the proposed system. Empirical results on the MIMIC-III dataset show that the proposed system achieves superior performance against several strong baseline methods. The proposed imputation system can also address the deficiencies from missing data in EHRs, as it enables robust clinical prediction over a variety of missing rates on two large-scale clinical prediction tasks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for Accessible Governance Innovation in Sierra Leone</title>
<link href="https://hdl.handle.net/1721.1/139586" rel="alternate"/>
<author>
<name>Ventres-Pake, Cory</name>
</author>
<id>https://hdl.handle.net/1721.1/139586</id>
<updated>2022-01-15T03:50:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing for Accessible Governance Innovation in Sierra Leone
Ventres-Pake, Cory
In 2002, pioneering government units in Australia and Denmark began experimenting with design approaches in the public sector. This launched a revolution in design-driven government innovation, developing a model that has now been replicated in dozens of governments across the globe.&#13;
&#13;
That same year, Sierra Leone emerged from a decade-long civil war with the lowest Human Development Index on the planet. Today, building on two decades of peaceful democratic governance, the Government of Sierra Leone’s Directorate of Science, Technology and Innovation has codified its commitment to human-centered solutions. However, many of the resources available for embedding design in government still belong to, and consider the realities of, the Northern model.&#13;
&#13;
Despite design’s ability to envision contextually-relevant solutions, the government design revolution has given surprisingly little consideration to the gap between the model developed by governments in the Global North and the needs and realities of governance in the Global South. This thesis addresses one aspect of that mismatch: it considers public sector innovation formats, tools, and methodologies created in the Global North and redesigns them for the Government of Sierra Leone.&#13;
&#13;
This thesis uses literature review, qualitative design research, and material benchmarking to uncover themes important to developing contextually-relevant governance innovation tools and programming for the Government of Sierra Leone. Insights derived from interviews and observations drive design decisions. The designs delivered depart from other public sector design resources in the accessibility of language used, the illustrative value of imagery, and the reflectiveness of governance realities in Sierra Leone. This work acknowledges inconsistencies between existing government design models and diverse governance realities and provides an initial step in building out a body of more tailored solutions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Super Apps: Opportunities and Challenges</title>
<link href="https://hdl.handle.net/1721.1/139585" rel="alternate"/>
<author>
<name>Diaz Baquero, Andrea Patricia</name>
</author>
<id>https://hdl.handle.net/1721.1/139585</id>
<updated>2022-01-15T03:25:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Super Apps: Opportunities and Challenges
Diaz Baquero, Andrea Patricia
Super apps are accelerating digital adoption in developing markets through marketplaces that offer a wide range of products and services, as well as mobile payments (e.g., QR codes). They bundle many single apps’ functionalities and bring them together in one app that works as the umbrella for many services.&#13;
&#13;
This thesis aims to help companies understand the opportunities and challenges created by super apps. It covers super apps as a concept and how they differ from aggregators. It dives deep into business models, payment systems, user experience, mini-programs, and open-API ecosystems while exploring the super app offering in Asia and Latin America. It also explores opportunities for super apps in two markets: the elderly and healthcare. These chapters investigate the existing digital offering for the two segments while examining what a super app for them would look like.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptualizing an online platform to facilitate purposeful serendipity, meaningful networking and hiring, through play and creative collaborations.</title>
<link href="https://hdl.handle.net/1721.1/139584" rel="alternate"/>
<author>
<name>Adhikarla, Saket Kashyap</name>
</author>
<id>https://hdl.handle.net/1721.1/139584</id>
<updated>2022-01-15T03:31:52Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Conceptualizing an online platform to facilitate purposeful serendipity, meaningful networking and hiring, through play and creative collaborations.
Adhikarla, Saket Kashyap
Can being at the right place at the right time with the right people over the most compelling causes lead to fountains of creativity, innovation and self-discovery?&#13;
This study explores how to build a space that fosters creative confidence, opens paths to discovering ikigai, and facilitates interactions among resonating people, all powered by collaboration and playfulness. In addition, this research attempts to understand the underlying motivation, behavior, and emotions of individuals when they encounter serendipitous interactions in professional settings, as employees of organizations and as entrepreneurs/freelancers.&#13;
 &#13;
‘How can we spark purposeful serendipity to catalyze creative collaborations &amp; innovation?’&#13;
‘How to find and network with professionally resonating individuals in a chaotic world?’&#13;
 &#13;
I pursued four research strategies to answer the above questions: 1) studying the existing professional networking platforms in both online and physical worlds for their functional prowess and gaps through primary and secondary research; 2) interviewing over 60 individuals, conducting qualitative analysis of the output, and defining the key behaviors and needs; 3) rapidly prototyping and testing concepts with prospective users consisting of innovators, employees, and founders of startups; and 4) building and testing a playful interaction model to test what had been learned thus far. Prototypes, user-testing outcomes, feedback, and learnings are shared for the six concepts created in this study. &#13;
 &#13;
The thesis aims to show that there is indeed a strong demand for platforms and programs that boost creativity, collaboration, and meaningful networking, and that the right modes of interaction facilitate these factors in circles of entrepreneurs, students, and employees of organizations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Corporate Strategy Using Data-Driven Business Growth Decisions</title>
<link href="https://hdl.handle.net/1721.1/139583" rel="alternate"/>
<author>
<name>Nepsky, Patrick A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139583</id>
<updated>2022-01-15T03:16:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Enhancing Corporate Strategy Using Data-Driven Business Growth Decisions
Nepsky, Patrick A.
Corporate strategy – the process of formulating and achieving company objectives – is a broad area involving many types of complex decisions. This thesis explores a data-driven approach to executing business growth decisions. Specifically, the thesis provides a machine-learning framework that can be used to predict outcomes of two major corporate growth decisions: investment into internal research and development (R&amp;D) projects, and acquisitions of external early-stage ventures. Historically, these two growth decisions have primarily been made by small teams of subject matter experts, and have not been thoroughly explored using modern machine learning methodologies. The prediction problems in this thesis are framed as binary classification schemes, and the outcomes are predicted using a custom feature set generated from structured databases. The initial results show significantly better-than-random performance for both growth prediction problems, with the Area Under the Curve (AUC) of model Receiver Operating Characteristics (ROC) reaching as high as 0.7–0.8. This performance is consistent across a range of different model architectures. A portfolio simulation suggests that the binary outcome prediction may provide sufficient information to generate positive financial returns. Future work should explore more complex system models, different and/or multi-class prediction targets, and real-time data fusion frameworks to incorporate the model recommendations into corporate strategy workflows.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrodynamic Interactions of an Unmanned Underwater Vehicle Operating in Close Proximity to a Moving Submarine</title>
<link href="https://hdl.handle.net/1721.1/139582" rel="alternate"/>
<author>
<name>Hammond, Brady M.(Brady Meikle)</name>
</author>
<id>https://hdl.handle.net/1721.1/139582</id>
<updated>2024-01-09T15:30:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Hydrodynamic Interactions of an Unmanned Underwater Vehicle Operating in Close Proximity to a Moving Submarine
Hammond, Brady M.(Brady Meikle)
While the United States Navy has developed a strong arsenal of tools to model the hydrodynamic forces and moments of different vehicles in different conditions, it does not have a model that enables it to understand the forces and moments that an Unmanned Underwater Vehicle (UUV) experiences when operating in close proximity to a moving submarine as a result of the interactions between their potential fields and wakes. The launch and recovery of UUVs from submarines is very challenging because these hydrodynamic interactions make UUVs hard to control near submarines and may even cause collisions between the two vehicles. The mapping of these forces and moments is vital to simulate the motion of the vehicles and enable developers to create UUV control and autonomy systems that are adaptive to these hydrodynamic interactions, further enabling UUV launch and recovery. Due to the complex nature of the hydrodynamic interactions, this study used computational fluid dynamics to expand the current understanding of the forces and moments between these two vehicles. A Gaussian process regression model was used to perform an optimal experimental design and map the resulting hydrodynamic interactions based on the UUV’s longitudinal position, lateral position, speed, heading angle, diameter, and length. The model was validated using an out-of-sample method and was shown to be capable of accurately predicting the hydrodynamic interactions between a submarine and a UUV.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Attributes and Research Facility Design for 1KW High-speed Small Gas Turbine Engine</title>
<link href="https://hdl.handle.net/1721.1/139581" rel="alternate"/>
<author>
<name>Shorter, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/139581</id>
<updated>2022-01-15T03:58:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Performance Attributes and Research Facility Design for 1KW High-speed Small Gas Turbine Engine
Shorter, Matthew
An assessment of the performance and operating attributes of a high-speed small gas turbine engine with power output of approximately one kilowatt has been carried out. The design of an experimental test facility for such an engine is subsequently proposed. Cycle analyses and computations on a state-of-the-art small gas turbine engine suggest that its cycle efficiency can be improved from 4% to 9% through an achievable increase in compressor pressure ratio and polytropic efficiency or by incorporating a high-performance recuperator. The compressor polytropic efficiency is improvable by 3 to 6 percentage points through effective clearance management. Previous experiments on the test engine demonstrated the risk of operability issues, particularly during the engine start-up transients. These operability challenges are tentatively attributed to shaft-bearing housing clearance variation differing from design intent due to differential radial thermal expansion of the shaft and bearing housing system. Parametric assessments of the clearance variation for varying conditions imposed by the primary flowpath are conducted using a reduced-order model. This model is derived from combined unsteady CFD and conjugate heat transfer computations and finite element analysis. The model shows that the size of the shaft-bearing housing clearance reduction can vary by an order of magnitude depending on the transient duration. The non-dimensional shaft-bearing housing clearance has a functional dependence on the Fourier number and is independent of the final normalized turbine inlet temperature. The engine characteristic timescales for the aero-thermal-structural interactions are identified. The rotor acceleration timescale is determined by the ratio of the rotational kinetic energy to the engine power output and is 0.15 seconds; it scales linearly with the rotor radius &#119903;.
The transient thermal timescale is determined by the ratio of the engine structure heat capacity to the estimated convective heat transfer rate and is 76 seconds. Results from cycle analyses, unsteady computations, the reduced-order model, and quantification of engine time scales are then used to formulate and design a small gas turbine engine research facility and the associated measurement system. In addition to experiments for characterizing the performance metrics, engine thermal condition and engine mechanical clearances, experiments are proposed to challenge the scaling for thermal-induced shaft-bearing housing clearance variation from engine start-up to steady state operation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring Shape and Material from Sound</title>
<link href="https://hdl.handle.net/1721.1/139579" rel="alternate"/>
<author>
<name>Zhang, Zhoutong</name>
</author>
<id>https://hdl.handle.net/1721.1/139579</id>
<updated>2022-01-15T03:49:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Inferring Shape and Material from Sound
Zhang, Zhoutong
Humans infer rich knowledge of objects from both auditory and visual cues. Building a machine of such competency, however, is very challenging. One possible solution is to rely on supervised learning, which requires a large-scale dataset containing sounds of various objects, with clean labels on their appearance, shape, and material. However, it is difficult and expensive to capture such a dataset. Another approach is to tackle the problem in an analysis-by-synthesis framework, where we iteratively update the current estimates given a generative model. This, however, requires sophisticated generative models, which are too computationally expensive to support iterative inference. Finally, despite the popularity of deep learning methods in auditory perception tasks, most of them are derived from visual recognition tasks and may not be suitable for processing audio.&#13;
&#13;
To address such difficulties, we first present a novel, open-source pipeline that generates audio-visual data purely from 3D object shapes and their physical properties. Using this generative model, we are able to construct a synthetic audio-visual dataset, namely Sound-20K, for object perception tasks. We further demonstrate that the representation learned on synthetic audio-visual data can transfer to real-world scenarios. In addition, the generative model can be made efficient enough to support iterative inference, where we construct an analysis-by-synthesis framework that infers an object’s shape and material by hearing it fall on the ground.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Pareto-Optimal Experiment Design via Efficient Bayesian Optimization</title>
<link href="https://hdl.handle.net/1721.1/139577" rel="alternate"/>
<author>
<name>Tian, Yunsheng</name>
</author>
<id>https://hdl.handle.net/1721.1/139577</id>
<updated>2022-01-15T03:26:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Automating Pareto-Optimal Experiment Design via Efficient Bayesian Optimization
Tian, Yunsheng
Many science, engineering, and design optimization problems require balancing the trade-offs between several conflicting objectives. The objectives are often blackbox functions whose evaluation requires time-consuming and costly experiments. Multi-objective Bayesian optimization can be used to automate the process of discovering the set of optimal solutions, called Pareto-optimal, while minimizing the number of performed evaluations. To further reduce the evaluation time in the optimization process, several samples can be tested in parallel. We propose DGEMO, a novel multi-objective Bayesian optimization algorithm that iteratively selects the best batch of samples to be evaluated in parallel. Our algorithm approximates and analyzes a piecewise-continuous Pareto set representation, which allows us to introduce a batch selection strategy that optimizes for both hypervolume improvement and diversity of selected samples in order to efficiently advance promising regions of the Pareto front. Experiments on both synthetic test functions and real-world benchmark problems show that our algorithm predominantly outperforms relevant state-of-the-art methods. The code is available at https://github.com/yunshengtian/DGEMO.&#13;
&#13;
In addition, we present AutoOED, an Optimal Experiment Design platform with an intuitive graphical user interface (GUI) that implements several multi-objective Bayesian optimization algorithms with state-of-the-art performance, including DGEMO. AutoOED is open-source and written in Python. The codebase is modular, facilitating extensions and tailoring of the code, and serves as a testbed for machine learning researchers to easily develop and evaluate their own multi-objective Bayesian optimization algorithms. Furthermore, an integrated distributed system enables parallelized experimental evaluations by independent workers in remote locations. The platform is available at https://autooed.org.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Macaulay Bases of Modules</title>
<link href="https://hdl.handle.net/1721.1/139576" rel="alternate"/>
<author>
<name>Rao, Sujit</name>
</author>
<id>https://hdl.handle.net/1721.1/139576</id>
<updated>2022-01-15T03:49:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Macaulay Bases of Modules
Rao, Sujit
We introduce fully general Macaulay bases of modules, which are a common generalization of Groebner bases and Macaulay &#119867;-bases to suitably graded modules over a commutative graded k-algebra, where the index sets of the two gradings may differ. The additional generality includes Groebner bases of modules as a special case, in contrast to previous work on Macaulay bases of modules. We show that the standard results on Groebner bases and Macaulay &#119867;-bases generalize in fields of arbitrary characteristic to Macaulay bases, including the reduction algorithm and Buchberger’s criterion and algorithm framework. A key result is that Macaulay bases, in contrast to Groebner bases, respect symmetries when there is a group &#119866; acting homogeneously on a graded module, in which case the reduction algorithm is &#119866;-equivariant and the k-span of a Macaulay basis is &#119866;-invariant. We also show that some of the standard applications of Groebner bases can be generalized to Macaulay bases, including elimination and computation of syzygy modules, which require the generalization to modules that was not present in previous work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Discovery of Microstructured Composites with Optimized Trade-Off between Strength and Toughness</title>
<link href="https://hdl.handle.net/1721.1/139575" rel="alternate"/>
<author>
<name>Li, Beichen</name>
</author>
<id>https://hdl.handle.net/1721.1/139575</id>
<updated>2022-01-15T03:13:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computational Discovery of Microstructured Composites with Optimized Trade-Off between Strength and Toughness
Li, Beichen
The conflict between strength and toughness is critical to practical engineering problems. To create structural composites with extraordinary strength and toughness, previous works mainly drew inspiration from nature and attempted to replicate nacre-like structures in synthetic composites using a brick-and-mortar architecture. However, these approaches offer no microscale control over constituent materials, and the designed composites often exhibit anisotropic properties. Recent advances in high-resolution multi-material additive manufacturing enable the creation of high-performance heterogeneous composites through voxel-level microstructure configurations. However, past efforts in exploring these designs suffered from a limited design space and a weak benchmark that comprised only the base materials. More importantly, they failed to address the clear discrepancies between simulation predictions and experimental measurements (the sim-to-real gap). To the best of our knowledge, no work has successfully tackled the conflict between strength and toughness while also taking on the sim-to-real challenge. In this work, we propose a computational pipeline in which microstructures with optimal strength-toughness trade-offs are automatically discovered and analyzed for intrinsic toughening mechanisms. Built on a fast physics-based simulator, the pipeline employs a competitive game approach to bridging the gap between simulation and experiment. It opens the door to reversing the traditional scientific discovery process through analysis-by-synthesis, and could potentially generalize to a wide range of applications in materials science, chemistry, pharmaceutics, robotics, etc.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Programmable Hardware Accelerator for Fully Homomorphic Encryption</title>
<link href="https://hdl.handle.net/1721.1/139574" rel="alternate"/>
<author>
<name>Feldmann, Axel (Axel Stephan)</name>
</author>
<id>https://hdl.handle.net/1721.1/139574</id>
<updated>2026-01-06T19:04:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing a Programmable Hardware Accelerator for Fully Homomorphic Encryption
Feldmann, Axel (Axel Stephan)
Fully Homomorphic Encryption (FHE) allows computing on encrypted data, enabling secure offloading of computation to untrusted servers. Though it provides ideal security, FHE is expensive when executed in software, 4 to 5 orders of magnitude slower than computing on unencrypted data. These overheads are a major barrier to FHE's widespread adoption.&#13;
&#13;
We present F1, the first FHE accelerator that is programmable, i.e., capable of executing full FHE programs. F1 builds on an in-depth architectural analysis of the characteristics of FHE computations that reveals acceleration opportunities. F1 is a wide-vector processor with novel functional units deeply specialized to FHE primitives, such as modular arithmetic, number-theoretic transforms, and structured permutations.&#13;
&#13;
Due to the static nature of FHE computations, F1 uses an exposed ISA, requiring novel compilation techniques to statically schedule all compute and data movement. We design a compiler that efficiently maps FHE programs onto F1 hardware and maximizes reuse of on-chip data, helping to reduce data movement bottlenecks. The compiler leverages F1's explicitly managed scratchpad to decouple computation from data movement, a necessary ingredient in achieving high performance given the large size of FHE operands.&#13;
&#13;
We evaluate F1 using cycle-accurate simulation and RTL synthesis. F1 is the first system to accelerate complete FHE programs, and outperforms state-of-the-art software implementations by gmean 6,500x and by up to 17,000x. These speedups counter most of FHE's overheads and enable new applications, like real-time private deep learning in the cloud.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Artificial Intelligence Based Approach to Automate Document Processing in Business Area</title>
<link href="https://hdl.handle.net/1721.1/139571" rel="alternate"/>
<author>
<name>Chen, Ta Hang</name>
</author>
<id>https://hdl.handle.net/1721.1/139571</id>
<updated>2022-01-15T03:37:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Artificial Intelligence Based Approach to Automate Document Processing in Business Area
Chen, Ta Hang
Automatic document processing has long been a strategy for business executives to improve operational efficiency. With Optical Character Recognition (OCR) and machine learning techniques, businesses are able to apply Artificial Intelligence (AI) to automate the process. However, introducing an AI application to a business is challenging; it can easily fail because of the complex interactions between technical and organizational components. This thesis considers document processing from a sociotechnical system perspective and leverages a four-step system analysis approach to identify the critical components. &#13;
&#13;
This research also proposes a machine learning model using a Support Vector Machine (SVM) as the classifier and Word2vec embeddings as document features to classify business documents. The proposed model reaches a 0.872 macro F1-score on scanned business documents from the RVL-CDIP dataset. It outperforms the commonly used rule-based algorithms RIPPER and PART, showing that the proposed model is potentially suitable for deployment in businesses to classify documents.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ising Machine Based on Electrically Coupled Spin Hall Nano-Oscillators</title>
<link href="https://hdl.handle.net/1721.1/139570" rel="alternate"/>
<author>
<name>McGoldrick, Brooke C.</name>
</author>
<id>https://hdl.handle.net/1721.1/139570</id>
<updated>2022-01-15T03:40:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Ising Machine Based on Electrically Coupled Spin Hall Nano-Oscillators
McGoldrick, Brooke C.
The Ising machine is an unconventional computing architecture that can solve NP-hard combinatorial optimization problems more efficiently than traditional von Neumann computing architectures. The spin Hall nano-oscillator has potential as a building block for a high-speed, low-power Ising machine based on its GHz operating frequency, sub-micron dimensions, and high degree of tunability. We develop an analytical framework describing how the dynamics of an electrically coupled array of spin Hall oscillators can be mapped to the Ising Hamiltonian based on the device characteristics. Our analytical model is integrated into a lightweight and versatile Verilog-A device model that captures the nonlinear phase dynamics of the spin Hall oscillator in SPICE-based circuit simulators. Finally, by integrating this device model with off-the-shelf electronic amplifier models, we analyze the Ising machine performance at the circuit level, considering phase noise and scalability of the coupled network. The physics-based analytical models and quantitative tools presented in this work will enable future experimental realization of an electrically coupled spin Hall oscillator-based Ising machine operating with a high degree of time, space, and energy efficiency.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scheduling in a Database-Based Distributed Operating System</title>
<link href="https://hdl.handle.net/1721.1/139569" rel="alternate"/>
<author>
<name>Mathew, Shana</name>
</author>
<id>https://hdl.handle.net/1721.1/139569</id>
<updated>2022-01-15T03:56:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Scheduling in a Database-Based Distributed Operating System
Mathew, Shana
Current operating systems date from over 40 years ago and were designed for very different computing requirements, making them ill-equipped to handle serverless workloads as well as modern challenges in scalability, heterogeneity, availability, and security. Hence, we propose a radically new data-centric OS design for serverless computing. This database OS (DBOS) centralizes all cluster state in a uniform data model: database tables stored in a high-performance, distributed, main-memory database management system. Operations on this state will be performed via serverless, stateless tasks.&#13;
&#13;
This thesis presents work done to build a preliminary scheduler and to implement and evaluate various global scheduling algorithms. We also demonstrate the performance of a modern DBMS in executing various scheduling operations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Totems: Verifying the Integrity of Visual Information using Neural Light Field</title>
<link href="https://hdl.handle.net/1721.1/139568" rel="alternate"/>
<author>
<name>Ma, Jingwei</name>
</author>
<id>https://hdl.handle.net/1721.1/139568</id>
<updated>2022-01-15T03:25:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Totems: Verifying the Integrity of Visual Information using Neural Light Field
Ma, Jingwei
In this work, we introduce a new approach to image forensics: physically placing a totem into the scene before taking a photo that needs to be protected from manipulation. A totem is any reflective or refractive object that, when placed in a scene, displays a distorted version of the scene, called the totem view. When an image contains a totem, an adversary must modify both the totem view and the rest of the image (the camera view) in a geometrically consistent manner to avoid detection. We assume that the adversary does not have access to the totem shape and index of refraction (IoR), so achieving this consistency would be extremely difficult. Our work focuses on designing algorithms that detect inconsistencies between the totem view and the camera view given the totem shape and IoR. In contrast to prior learning-based approaches that require large datasets of manipulated images, our methods are physics-based and work on a single image.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an Electrochemical Method to Investigate the Thermodynamic Behavior of Lanthanum and Sulfur in Liquid Steel</title>
<link href="https://hdl.handle.net/1721.1/139567" rel="alternate"/>
<author>
<name>Suzuki, Teppei</name>
</author>
<id>https://hdl.handle.net/1721.1/139567</id>
<updated>2022-01-15T03:56:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Development of an Electrochemical Method to Investigate the Thermodynamic Behavior of Lanthanum and Sulfur in Liquid Steel
Suzuki, Teppei
Since rare earth elements (REE) have great potential to improve the mechanical properties of steel by refining grains and controlling the distribution and shape of inclusions, REE may come to be widely used in the steel industry. Currently, the standard method for REE separation is solvent extraction. Since this method has economic and environmental problems, a more efficient process is strongly desired. In this study, I examine vacuum distillation of RE sulfides. The challenge is that thermodynamic data for gas-phase RE sulfides are insufficient. Therefore, I investigated the thermodynamic properties of gaseous rare earth compounds to evaluate the feasibility of vacuum distillation of RE sulfides during the steelmaking process. To study the thermodynamic behavior of REE, especially La, in liquid steel at high temperature, an electrochemical method (EMF) using a La β″-Al2O3 solid electrolyte has been developed, and its performance is discussed herein.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decoupling continuous manufacturing processes to increase new product valuation</title>
<link href="https://hdl.handle.net/1721.1/139566" rel="alternate"/>
<author>
<name>Morgan, Ellen Franklin</name>
</author>
<id>https://hdl.handle.net/1721.1/139566</id>
<updated>2022-01-15T03:08:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Decoupling continuous manufacturing processes to increase new product valuation
Morgan, Ellen Franklin
As competition floods the maturing contact lens industry, Johnson &amp; Johnson Vision (JJV) seeks to differentiate itself by consistently introducing innovative, specialized products. While the company’s highly automated, continuous manufacturing platforms efficiently produce large volumes of core business products, these systems lack the flexibility to quickly scale new products and respond to changes in customer demand.&#13;
&#13;
The goal of this thesis is to design a future-state manufacturing platform capable of supporting JJV’s shift toward a higher-mix, lower-volume product portfolio. Analysis focuses on redefining the manufacturing process for one new product, ACUVUE Theravision with Ketotifen (Theravision), a combination product that pairs vision correction with anti-allergy medication. This work explores how decoupling continuous manufacturing processes can both increase valuation for Theravision products and more effectively enable new products in the future. &#13;
&#13;
Decoupling the manufacturing system would allow JJV to utilize existing capacity for common process steps and introduce more variety into their products, labeling, and packaging. Our analysis considers decoupling the current manufacturing system at three distinct locations. Each location was reviewed individually for its technical feasibility, organizational impact, and opportunity for cost savings. The resulting business case demonstrates an expected 22% capital reduction per manufacturing line, and a standard gross profit improvement of 8% for the Theravision product.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating Good Jobs in Automotive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/139565" rel="alternate"/>
<author>
<name>Kilby, Matthew A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139565</id>
<updated>2022-01-15T03:00:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Creating Good Jobs in Automotive Manufacturing
Kilby, Matthew A.
A stable workforce is critical to successful manufacturing operations. A stable workforce results in repeatability of jobs, which in turn leads to higher-quality vehicles and decreased downtime on the assembly line. It also results in lower turnover costs and allows management to focus on performance improvement instead of putting out fires that arise throughout the workday. This research explores the current employment strategy of a major automotive manufacturer and identifies the key drivers behind its 11.8% annualized turnover rate, which reaches as high as 38% in some employee segments. Specifically, we dive into the dynamics of the Trim &amp; Chassis Department, which has the highest employee turnover in the manufacturing plant, to identify how effectively Nissan is meeting employee needs and where it can adapt to improve. The goal of this research is to identify the key drivers of turnover, quantify the effects of turnover in an automotive manufacturing setting, and provide recommendations for how to reduce it. &#13;
&#13;
Through a combination of data analysis, employee interviews, and Gemba walks, we find that workload, employee empowerment, and career development play a prominent role in this high-volume manufacturing environment. When indirect costs are accounted for, employee turnover is 57% more costly than previous models predicted. I also find that System Dynamics modeling can be an effective tool for capturing these interconnected variables, making it easier to understand the reinforcing feedback loops at hand and how to address them to effect positive change. Additionally, this research makes recommendations on how to use the Good Jobs Strategy framework to reduce employee turnover and improve performance. Specifically, we look at increased staffing levels, investments in people, and cultural wins, combined with a financial calculator that management can use to quantify their investments.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Vision-based Dynamics Models</title>
<link href="https://hdl.handle.net/1721.1/139564" rel="alternate"/>
<author>
<name>Liu, Cynthia</name>
</author>
<id>https://hdl.handle.net/1721.1/139564</id>
<updated>2022-01-15T03:09:36Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Understanding Vision-based Dynamics Models
Liu, Cynthia
Recent developments in vision-based dynamics models have helped researchers achieve state-of-the-art results in a number of fields. For instance, in model-based reinforcement learning, vision-based methods perform extremely well on a variety of games and control tasks while using orders of magnitude less data than model-free methods. One example is GameGAN, which learns to simulate the dynamics of observed games solely from visual and action inputs. However, there is very little understanding of these models and how they work. To address this lack of understanding, we apply the Network Dissection framework to analyze vision-based dynamics prediction models. We inspect individual trained neurons in the convolutional layers of these models and modify the outputs of neurons to understand their effect on the representation. We also theoretically extend the Network Dissection framework by generalizing it to fully connected layers instead of only convolutional layers. Overall, we provide insight into the node-level workings of dynamics models.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>"A Great Civilizing Agent": Architecture at MIT, Drawing Education, and Boston's Cultural Elite, 1865-1881</title>
<link href="https://hdl.handle.net/1721.1/139562" rel="alternate"/>
<author>
<name>Dubbs, Katherine Pearl</name>
</author>
<id>https://hdl.handle.net/1721.1/139562</id>
<updated>2022-01-15T03:12:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">"A Great Civilizing Agent": Architecture at MIT, Drawing Education, and Boston's Cultural Elite, 1865-1881
Dubbs, Katherine Pearl
This thesis examines the origin of architecture as an American discipline and its relationship to the concurrent promotion of public drawing education in the second half of the nineteenth century. In postbellum Massachusetts, textile manufacturers and their professional networks took control of local drawing education. Part of the perceived antidote to national disunity — as well as a justification for growing financial inequality — was the control of design knowledge through the creation of pedagogical programs and cultural institutions. Drawing simultaneously negotiated a multifarious identity as an industrial skill, a leisure activity, and a specialized profession. Bolstered by the rise in disposable wealth, Boston-based elites invested in drawing as a symbol of class status and industrial control in an increasingly stratified city.&#13;
&#13;
This development coincided with the mid-century emergence of architectural education in American universities. In 1865, architectural educator William Robert Ware was hired to create the architecture department at the Massachusetts Institute of Technology (MIT), the first architecture department in a university and the oldest architecture program in the country. For the duration of his tenure, Ware was part of a powerful network of arts patrons and professionals in Massachusetts who ascribed a civilizing purpose to art, an idealized category which included architecture. As part of this effort, he was not only the founder of MIT’s architecture department but also a founding instructor at two other cultural institutions in Boston. Underpinning these elite ambitions, in Ware’s case, were both economic and intellectual aspirations to elevate architecture as a profession and to cultivate the architect as a cultural connoisseur. This thesis argues that Ware capitalized on the evolving status of drawing (as manual labor, a contractual document, a cognitive act, and a cultural marker) to craft architectural education as an intellectual undertaking worthy of its university setting. This history is illustrated through Ware’s contemporaneous involvement in the promotion of local drawing education, his advocacy for professionalism in architectural education, and his design of new printed material.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Level Optimization of Urban Air Mobility</title>
<link href="https://hdl.handle.net/1721.1/139561" rel="alternate"/>
<author>
<name>Wijaya, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/139561</id>
<updated>2022-01-15T03:03:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">System Level Optimization of Urban Air Mobility
Wijaya, Grace
The rise of interest in urban mobility, along with trends in vehicle electrification and autonomy, has led to the development of electric vertical takeoff and landing (eVTOL) aircraft to offer Urban Air Mobility (UAM) services. We have quantified the market demand for UAM in 5 U.S. cities, considering demand from the substitution of ground-based transport with UAM services for commutes and airport access. By quantifying the number of travelers in each city who would be willing to pay for the UAM service, we find that the unconstrained potential market demand for a city reaches up to 585,000 daily round trips, but varies across cities and price points. We considered the market impacts of airspace, weather, and infrastructure constraints. We further couple this market model and a vehicle model with an operations model to optimize for daily profit via a Systems of Systems decomposition. In a case study of the Bay Area, either the tilt-rotor or the autogyro configuration maximizes the top-level system objective of daily profit, reaching over $20,000 a day with 6-10 vertiports.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen Fuel Cell Driven Origami-Inspired Large-Elongation Soft Robot Modules</title>
<link href="https://hdl.handle.net/1721.1/139558" rel="alternate"/>
<author>
<name>Hilby, Kristan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139558</id>
<updated>2022-01-15T03:31:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Hydrogen Fuel Cell Driven Origami-Inspired Large-Elongation Soft Robot Modules
Hilby, Kristan M.
Soft robots have gained popularity in recent years, especially in fields spanning medicine, space, and assistive robotics. Unlike traditionally rigid robots, soft robots offer inherent safety and a robust design. Furthermore, modular reconfigurable soft robots, which are composed of repeating reconfigurable units, provide the same benefits as traditional soft robots while also being able to perform a wide range of tasks based on their configuration. Such characteristics make them especially applicable for operations near humans. Though soft robots and reconfigurable soft robots have come a long way, they still cannot achieve motion over large ranges in a well-modeled, well-understood manner. Long-range movement is imperative for space applications and for any task where the ability to stow in small spaces is crucial to operation and transportation. Furthermore, to maximize their usage, qualitative and quantitative analyses are also required. &#13;
&#13;
This thesis presents the design of a Yoshimura origami-inspired reconfigurable soft robot capable of achieving elongations up to 1715%, over twice that of prior art in the field, using an adaptation of the layer-stacking method. By reconfiguring the modules between serial and parallel attachments, the robot collective can achieve linear motion, rotary motion, and locomotion. Furthermore, the soft Yoshimura module is easily integrated into traditionally rigid systems to achieve a truly versatile range of tasks. Computational finite element analyses provide structural and buckling behaviors. Experimental data, such as hysteresis, fatigue, and tensile testing profiles, validate the computational results and offer further insight into performance and operational lifetime. These analyses found no perceptible damage during unfolding of the Yoshimura structure, suggesting the modules are suitable for long-term use.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, Modeling and Characterization of a Multiscale Heat Exchanger for High-Temperature, High-Pressure Applications</title>
<link href="https://hdl.handle.net/1721.1/139557" rel="alternate"/>
<author>
<name>Wilson, Chad T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139557</id>
<updated>2022-01-15T03:15:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design, Modeling and Characterization of a Multiscale Heat Exchanger for High-Temperature, High-Pressure Applications
Wilson, Chad T.
Heat exchangers are devices that facilitate thermal energy transfer between two or more mediums and function as key components in many industrial processes, such as steam power plants, refrigeration, chemical plants, nuclear plants, refineries, and next-generation renewable energy storage processes. Recent advancements in power generation techniques and heat engine cycle structures offer potential improvements to the efficiency of each process but require components capable of operating in increasingly demanding environments. Specifically, the shift to high-temperature, high-pressure thermodynamic cycles for improved efficiency has opened a new opportunity for device-level innovations, including a heat exchanger that supersedes classical material and design operating limits. Previous work has attempted to enter this new market by developing heat exchangers from costly metal materials in mature architectures that achieve low power densities. Silicon carbide, while well known as a high-temperature material, has rarely been used under such loading conditions due to its low resistance to fracture and the high tensile stress concentrations featured in existing heat exchanger designs.&#13;
&#13;
In this thesis, we present the design, structural modelling, and initial characterization of a multiscale ceramic heat exchanger, capable of operating in extreme environments with high power densities and a safety factor against mechanical failure. The heat exchanger device is evaluated for a supercritical CO2 Brayton cycle, using air and sCO2 as working fluids at 80 bar, 1300 °C and 250 bar, 300 °C, respectively. A multiscale channel design, enabled by ceramic co-extrusion, results in a counterflow heat exchanger core with both high thermal performance and high mechanical strength during steady state operation. For coupling this core to typical power-cycle tubing we designed manufacturable ceramic headers, whose geometry was optimized to minimize both pressure loss and flow maldistribution of the working fluids. To evaluate prototypes of our design, we constructed a test setup and experimentally quantified the performance of initial heat exchanger core components. Our design offers a practical solution to address the material limitations imposed by high-temperature, high-pressure thermodynamic cycles while predicting efficiency and performance improvements compared to current state-of-the-art heat exchanger alternatives.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electroactive polymer actuators: theory and computations</title>
<link href="https://hdl.handle.net/1721.1/139556" rel="alternate"/>
<author>
<name>Stewart, Eric M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139556</id>
<updated>2022-01-15T03:40:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Electroactive polymer actuators: theory and computations
Stewart, Eric M.
Electroactive polymer actuators are devices composed of polymeric materials which display mechanical actuation under an applied electric field. This mechanical actuation coupled with the relative compliance of polymeric materials renders electroactive polymer actuators an attractive choice for soft robotics applications, where they are sometimes referred to as “artificial muscles”. We present continuum-mechanical theories and finite element computations for three common types of electroactive polymer actuators: (1) ionic polymer-metal composites (IPMCs), (2) piezoelectric polymers, and (3) dielectric elastomers.&#13;
&#13;
Highly-coupled physical phenomena drive the actuation mechanism for each of these electroactive polymers. We present theory which accounts for electro-chemo-mechanical coupling in the case of ionic polymer-metal composites and electro-mechanical coupling in the case of piezoelectric polymers and dielectric elastomers, all within a thermodynamically consistent, finite deformation framework. We report demonstration computations of electroactive polymer devices using our own finite element implementations of the IPMC theory and piezoelectric polymer theory and an existing implementation of the dielectric elastomer theory. The demonstration computations reported include bending actuators, a biomimetic fin, a soft robotic gripper, and a piezoelectric serpentine ribbon. The theory and simulation capabilities presented in this work lay a foundation of modeling tools which hold great practical utility for the fast-growing field of soft robotics.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Financing Fusion Energy</title>
<link href="https://hdl.handle.net/1721.1/139555" rel="alternate"/>
<author>
<name>Halem, Zachery M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139555</id>
<updated>2022-01-15T03:46:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Financing Fusion Energy
Halem, Zachery M.
The case for investing in fusion energy—increasing global energy demand, high annual carbon dioxide output, and technological limitations for wind and solar power—has never been greater, yet financing for fusion companies through traditional means has proven challenging. While fusion startups have an unparalleled upside, their high upfront costs, lengthy delay in payoff, and high risk of commercial failure have historically restricted funding interest to a niche set of investors. Drawing on insights from investor interviews and case studies of public-private partnerships, we propose the utilization of a megafund structure in which a large number of projects are securitized into a single holding company funded through various debt and equity tranches, with first loss capital guarantees from governments and philanthropic partners. The megafund exploits many of the core properties of the fusion industry: the diversity of approaches to engender fusion reactions, the ability to create revenue-generating divestitures in related fields, and the breadth of auxiliary technologies needed to support a functioning power plant. The model expands the pool of available capital by creating tranches with different risk-return tradeoffs and providing a diversified “fusion index” that can be viewed as a long hedge against fossil fuels.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tri-phase emulsions as tunable liquid lenses with aberration correction</title>
<link href="https://hdl.handle.net/1721.1/139554" rel="alternate"/>
<author>
<name>Feldstein, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/139554</id>
<updated>2022-01-15T03:46:21Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Tri-phase emulsions as tunable liquid lenses with aberration correction
Feldstein, Hannah
Multi-phase emulsions represent a versatile material platform for the design of in-situ tunable lenses. One challenge in fluid lens design is the minimization of aberrations that are related to refraction of light at spherical interfaces. These aberrations can in principle be compensated by systematic arrangement of multiple spherical interfaces in succession within a compound lens. Adapting this approach to fluid optical elements, we assess the potential of tri-phase emulsions to correct primary aberrations. We combine optical modeling, fabrication, and experimental characterization of tri-phase emulsions and compare their optical properties to bi-phase droplets. Ray-tracing, based on an experimentally realizable chemical composition of tri-phase emulsions, showed improvements in some, but not all, of the monochromatic Seidel aberrations. However, in the case of minimizing multiple Seidel aberrations, the tri-phase emulsions significantly outperformed the bi-phase emulsions, as the tri-phase system contains more degrees of freedom than the bi-phase system does. Initial experimental validations of the optical properties of tri-phase emulsions with not fully optimized morphologies, which are comparable in their performance to more easily fabricated bi-phase emulsions, confirm the potential of tri-phase emulsions to correct aberrations, addressing a significant challenge in liquid micro-lens design.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying the Prevalence and Effects of, and Motivations for Online Search Activities During Birth</title>
<link href="https://hdl.handle.net/1721.1/139553" rel="alternate"/>
<author>
<name>Kim, Nahun</name>
</author>
<id>https://hdl.handle.net/1721.1/139553</id>
<updated>2022-01-15T03:20:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Identifying the Prevalence and Effects of, and Motivations for Online Search Activities During Birth
Kim, Nahun
Online search activity* during births is an emerging trend among women today in developed countries [15, 22, 26]. In the US, however, there is a lack of research that outlines the relationships between online search activities during birth and birth experience satisfaction levels. This study was conducted to (1) determine the prevalence of online search activities during birth, (2) identify if online search activity engagement during pregnancy is a predictor for online search activity engagement during birth, (3) analyze whether there are significant relationships between online search activities and satisfaction levels of birth experience, patient and provider interaction, and control and autonomy, and (4) identify situations that motivate women to engage in online search activities.&#13;
&#13;
A self-administered, web-based survey was administered to women who gave birth in the US, and statistical analysis was conducted using logistic regression analysis and linear regression analysis. A total of 182 women who gave birth in the US participated and completed the web-based survey. Of the 182, 61 women (33.5%) engaged in online search activities for the purpose of finding information about birth during birth. There was a significant relationship between online search engine engagement during pregnancy and engagement in online search activities during birth. The more women engaged in online search activities during pregnancy, the more likely they were to engage in online search activities during birth.&#13;
&#13;
For online search activities’ relationship with birth experience satisfaction levels, despite the prevalence and engagement levels of online search activities during birth, there was no statistically significant relationship between online search activities and satisfaction levels of overall birth experience, control and autonomy, and patient and provider interactions. Situations that prompted women to engage in online search activities were when women (1) encountered a medical term that they did not know about, (2) experienced symptoms they were unsure of, (3) were unsure if they were allowed to do certain things, and (4) were not sure what effacement and dilation measurements meant.&#13;
&#13;
Due to the diverse range and quality of information women find from online search activities, it is difficult to conclude whether online search activity has positive or negative correlations with women’s birth experiences. Further research on what types and aspects of online search activities contribute to positive experiences, and which types and aspects contribute to negative experiences may help in understanding what kind of assistance is perceived to be helpful to women who seek information for empowerment and shared decision making. Healthcare professionals and care teams should recognize that online search activity is a prevalent trend amongst women in birth suites, and should embrace this behavior and use it as an opportunity for improving maternal healthcare. &#13;
&#13;
*Online search activity is the act of seeking information on online search engines such as, but not limited to, Google, Wikipedia, and YouTube using a phone, tablet, or other electronic device. In this thesis, it refers to women searching birth- and/or pregnancy-related information on their devices (phone, tablet, etc.) during their birth experiences.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tradespace Analysis of Workplace Health System Focusing on Diabetes</title>
<link href="https://hdl.handle.net/1721.1/139552" rel="alternate"/>
<author>
<name>Tangsathapornpanich, Nitchakorn</name>
</author>
<id>https://hdl.handle.net/1721.1/139552</id>
<updated>2022-01-15T03:49:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Tradespace Analysis of Workplace Health System Focusing on Diabetes
Tangsathapornpanich, Nitchakorn
The health system is transitioning financial and outcome risk-bearing from health insurers to employers themselves. In the US, about 165 million people receive health insurance through their employers. A surprising statistic shows that 96% of companies larger than 5,000 employees are self-insured (Mercer, 2018). This means that companies pay their employees’ healthcare costs out of their own pockets through insurance-claimed services, which gives employers an incentive to keep their employees healthier than ever before. For that reason, many digital health solutions have emerged to answer the demand for cost-cutting in both utilization and prevention.&#13;
&#13;
However, this is not a well-designed health system: each element in the system is separated from the others. Despite the growing number of innovations, employers cannot quantify or utilize the full potential of such a system because its elements are not connected. The literature shows that even in the traditional health system, prevention and treatment are not well connected, and researchers have designed multiple approaches to tackle this problem, e.g., the Chronic Care Model and the Patient-Centered Medical Home.&#13;
&#13;
This thesis reviews the whole employer health system with a focus on diabetes, comparing estimated health outcomes and system performance against investment cost and healthcare reimbursement cost. Performance of the system is calculated from health outcomes, employee engagement level, and productivity. The tradespace analyses found that the elements connecting the health system can play an essential role in the health outcomes and performance of the system. While the investment cost is higher than for other systems, the healthcare reimbursement cost can be much lower. Also, the participation of employees in the system development team can produce a significant shift toward better health outcomes while requiring a minimal budget. Therefore, systems thinking is crucial to increasing efficiency in the employer health system, especially for diabetes.&#13;
&#13;
The approach taken does not go into detail about which method or choice is better. The insights from this thesis can be used as a guideline to think through the problem, yet a more profound analysis would be required to give a definite answer. For this framework to work better, two more components are required: real-world data and a system dependency study.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Pharmacology – Machine Learning Approaches &#13;
in Profiling Oncology Drug Candidates</title>
<link href="https://hdl.handle.net/1721.1/139551" rel="alternate"/>
<author>
<name>Ujwal, ML</name>
</author>
<id>https://hdl.handle.net/1721.1/139551</id>
<updated>2022-01-15T03:32:25Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Systems Pharmacology – Machine Learning Approaches &#13;
in Profiling Oncology Drug Candidates
Ujwal, ML
While the thesis is framed from a systems thinking perspective, its main focus is on drug discovery and the application of machine learning approaches in profiling oncology drug candidates for a select subset of validated targets in the oncogenesis pathways. In this study, we built in-silico predictive models to predict prospective drug candidates from compound libraries. Robust predictive models help save enormous experimental and resource overheads and compress product cycle times. We used several machine learning algorithms in building models, including logistic regression (LR), support vector machines (SVMs), Naïve Bayes, artificial neural nets (ANNs), decision trees – classification and regression trees (CART) – and multi-tree majority-voting ensemble techniques, i.e., random forest and XGBoost. The feature sets for building these models were extracted by computing chemical fingerprints and quantum chemical descriptors. We generated both sparse and dense matrices for modeling. We cross-validated, hyperparameter-tuned, and evaluated model performance on different statistical performance metrics, including Receiver Operating Characteristic (ROC) curves.&#13;
&#13;
We investigated the full and reduced models through feature engineering for model stability with LR models. We evaluated model regularization techniques, namely LASSO, Ridge, Elastic Net, and dropout, to prevent model overfitting for both LR and ANN models. We evaluated SVM kernels and showed that the non-linear radial basis function (RBF) kernel performed better than the others. We also showed that adding additional hidden layers, beyond three, to the ANN model with the ADAM optimizer did not improve performance. In addition, multi-tree ensemble models were superior to single-tree models (CART). Finally, we benchmarked the performance metrics of each of these machine learning algorithms in a side-by-side comparison and conclude that the random forest ensemble produced the lowest mean misclassification error.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A decision model on optimising cybersecurity controls using organisation preferences</title>
<link href="https://hdl.handle.net/1721.1/139550" rel="alternate"/>
<author>
<name>Ansaria, Afra</name>
</author>
<id>https://hdl.handle.net/1721.1/139550</id>
<updated>2022-01-15T03:04:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A decision model on optimising cybersecurity controls using organisation preferences
Ansaria, Afra
Cybersecurity is an organisational issue that should be looked at through the lens of various stakeholders. However, it is often treated as a siloed issue in which more is always seen as better. CISOs, CIOs, and other key decision-makers struggle to understand how much security is enough. All cybersecurity solutions, often referred to as controls, leave a residual risk, since there is no such thing as perfect security. The acceptable level of risk should ultimately be a choice predicated on the business goals of the organisation. Cybersecurity controls are often presented in a manner that lacks sufficient business context, which is required to optimize the risks and balance them against the need to run other business operations. For uninterrupted business operations, there is a need to bridge the gap between technology and business decision-making. &#13;
Optimizing cybersecurity risk in a business context demands a model that considers the priorities of the organisation through the lens of the key stakeholders. By taking into consideration the overall priorities in the context of the business goals, we can better guide the decision process of choosing the optimal security controls. Such an approach would help answer questions such as ‘How can we manage cybersecurity risk in the company? What are the right cybersecurity controls for our business goals? How much should we spend on cybersecurity?’&#13;
There is no one perfect formula when it comes to picking security controls. Each organisation has a different set of priorities and thus the needs for its security controls will be different. An optimal solution requires a balanced approach towards the risk, cost and benefit of the solution. A thorough analysis of the overall costs and the benefit of implementing each control, and its potential risk, would enable the decision-maker to pick controls that are in line with the business goals.&#13;
&#13;
The work of this thesis will involve looking at the trade-offs of security controls, which are influenced by the organisation's priorities, with respect to the cost and value they bring to the organisation. We will represent the organisation's priorities as preferences. These preferences are then translated into a utility function that can be used to evaluate the available controls. Once the list of preferred controls is gathered, we will analyze the cost-benefit relationship for each of the controls. The cost and benefit are represented in terms of the value the organisation assigns to the processes and business units that are under threat. Finally, we will look for an optimal range of potential controls and their placement, which can provide the utmost security to the organisation while respecting its business preferences.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Modern Approach for Measuring Environmental, Social, and Governance Preferences</title>
<link href="https://hdl.handle.net/1721.1/139548" rel="alternate"/>
<author>
<name>Metzman, Zachary M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139548</id>
<updated>2022-01-15T04:08:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Modern Approach for Measuring Environmental, Social, and Governance Preferences
Metzman, Zachary M.
With the rapid growth of Environmental, Social, and Governance (ESG) investing, several concerns have been raised regarding the ability of ESG rating companies and investment managers to accurately and transparently reflect the ESG preferences of individual and institutional clients. To address this issue, we developed the ESG Machine, a website used to measure ESG preferences by applying methods from revealed preference theory. In a short time, this website gathered 17,248 decision observations from 800 individuals in 55 countries. A subset of this data is used to better understand the importance of measuring ESG preferences and how preferences vary by demographic. We first measure the rationality of individuals and the relationship to demographics and response time. Second, we examine donation amounts and the impact of prices as well as the equality and efficiency tendencies of individuals. Third, for each individual we estimate the parameters of a two-good Constant Elasticity of Substitution (CES) utility function and analyze the substitution parameters and the preferences towards the social and environmental causes. Fourth, for more than two goods we apply nested CES functions to estimate the aggregate preferences of all individuals and demographic clusters. We find that it is important to measure ESG preferences to improve the accuracy and transparency of ESG investing.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformer Pruning Relation and General Neural Network Augmentation</title>
<link href="https://hdl.handle.net/1721.1/139547" rel="alternate"/>
<author>
<name>Lim, Yong Hui</name>
</author>
<id>https://hdl.handle.net/1721.1/139547</id>
<updated>2022-01-15T03:23:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Transformer Pruning Relation and General Neural Network Augmentation
Lim, Yong Hui
In this thesis, a method of initializing neural networks with weights transferred from smaller trained neural network weights was investigated. We name this process augmentation and present a few versions of it, some of which involve pruning. Firstly, the pruning relation of testing loss against density was found for the GPT-2 transformer network on a causal language modeling task. An interesting double plateau of testing loss was found whenever the attention weights were pruned. Next, augmentation on low dimensional datasets and shallow networks was investigated. We found that performing a step of zeroing final layer initializations (ZFLI) results in better augmentation. With this insight, we proceeded to investigate a variety of datasets and networks. Two forms of augmentation were investigated: basic augmentation and pruned augmentation. However, both forms of augmentation were found to not produce any consistent improvement in testing accuracy/loss.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analytical Approach to Inventory Management for Telecommunications Network Equipment</title>
<link href="https://hdl.handle.net/1721.1/139546" rel="alternate"/>
<author>
<name>Cutlip, Margaret G.</name>
</author>
<id>https://hdl.handle.net/1721.1/139546</id>
<updated>2022-01-15T03:06:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Analytical Approach to Inventory Management for Telecommunications Network Equipment
Cutlip, Margaret G.
Verizon’s core network relies on routers that use expensive plug-in cards (PICs) to function. PIC failures can cause disruptions to FiOS or wireless service in the surrounding area, so maintaining an appropriate level of spare PICs is important to maintaining high service levels. Due to the high cost and volume of PICs used in Verizon's networks, it is expensive to hold excess inventory onsite without substantially increasing working capital.&#13;
&#13;
This project takes two approaches to aiding in inventory management: first, building predictive models for PIC failures in FiOS routers based on previously recorded failures; second, developing a holistic Central Office (CO, where telecommunications data is routed) prioritization system to determine where to place constrained inventory. While developing predictive models for PIC failures proved unsuccessful, the CO prioritization approach resulted in reductions in outage customer impact and traffic impact of more than 40% across all test cases.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Raw Material Optimization to Bend the Biopharmaceutical Cost Curve</title>
<link href="https://hdl.handle.net/1721.1/139545" rel="alternate"/>
<author>
<name>Chen, Julia Mengpei</name>
</author>
<id>https://hdl.handle.net/1721.1/139545</id>
<updated>2022-01-15T03:38:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Raw Material Optimization to Bend the Biopharmaceutical Cost Curve
Chen, Julia Mengpei
Raw materials sourced from sole third-party suppliers have been identified as one of the key risks to Amgen’s operation. This project aims to reduce raw material costs through sourcing alternative materials and optimizing material consumption. &#13;
&#13;
To systematically evaluate alternative material supply, a cross-functional assessment framework is established that consolidates qualitative and quantitative inputs with flexibility option analysis for uncertainties. Case studies are conducted on a chromatography resin and a cell culture media constituent, both of which are currently sourced from a single channel. To optimize raw material consumption, an integrated biomanufacturing process and material cost model is developed to provide recommendations on process parameters and the use of raw materials. The optimization model incorporates equipment capacity constraints as well as material consumption across all stages of drug substance manufacturing. The model is applied to the development of a next-generation process for a commercial molecule and identifies process operating conditions where raw material costs can be reduced while maximizing productivity. Sensitivity analyses are conducted to understand the impact of uncertainties on material costs and process yields. The material cost sensitivity analysis reveals the importance of material order planning and alternative material opportunities.&#13;
&#13;
Overall, the project adopts a systematic approach to reduce costs and mitigate raw material risks at an early stage of the biopharmaceutical product life cycle.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A market feasibility analysis of the carbon capture utilization and storage landscape in China for foreign firms</title>
<link href="https://hdl.handle.net/1721.1/139544" rel="alternate"/>
<author>
<name>Bay, Phebe</name>
</author>
<id>https://hdl.handle.net/1721.1/139544</id>
<updated>2022-01-15T03:19:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A market feasibility analysis of the carbon capture utilization and storage landscape in China for foreign firms
Bay, Phebe
China has made ambitious climate commitments in recent years, the latest being its goal to achieve carbon neutrality by 2060. This paper examines the carbon capture, utilization, and storage (CCUS) landscape in China and attempts to uncover market opportunities for foreign firms with expertise in this area. Referencing market entry strategies taken by foreign firms in China’s wind and solar industries at the start of the 21st century, this paper looks at their past successes and their current operating environment two decades later. A section of this paper discusses existing financing and legislative gaps in China’s CCUS landscape and potential policies that could drive large-scale commercialization. This paper also provides recommendations on possible areas where foreign players can add value should they decide to enter this industry.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the New Development Direction of Chinese&#13;
Overseas Fintech Payment Companies</title>
<link href="https://hdl.handle.net/1721.1/139543" rel="alternate"/>
<author>
<name>Wu, Shuaiyu</name>
</author>
<id>https://hdl.handle.net/1721.1/139543</id>
<updated>2022-01-15T03:01:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Analysis of the New Development Direction of Chinese&#13;
Overseas Fintech Payment Companies
Wu, Shuaiyu
This paper gives a broad description and analysis of the payment industry's development in recent years. Global settlement no longer belongs solely to giant companies and top multinational banks. Fintech payment companies have gained significant capabilities by partnering with banks and adding their own technical and user-friendly solutions. These Fintech payment companies can provide customers with global multi-currency accounts and international settlement. They have taken root worldwide and have developed rapidly in the past few years. &#13;
&#13;
This paper focuses on the new development direction of Chinese overseas Fintech payment companies. Their management teams are from China, and their target customers also come from China. They combine global developments with the particular characteristics of the Chinese market. This paper analyzes their payment scenarios, operating structure, business, and revenue model. &#13;
&#13;
After several years, the new market has gradually become clear. This paper analyzes the opportunities and challenges Fintech payment companies may face in the medium and long term from both macro and micro perspectives. &#13;
&#13;
In the end, this paper puts forward possible prospects for this track in the payment industry based on the above analysis.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Composing Parallel Runtime Systems: A Case Study in How to Compose the Julia and OpenCilk Runtimes</title>
<link href="https://hdl.handle.net/1721.1/139542" rel="alternate"/>
<author>
<name>Kralj, Tim</name>
</author>
<id>https://hdl.handle.net/1721.1/139542</id>
<updated>2022-01-15T04:10:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Composing Parallel Runtime Systems: A Case Study in How to Compose the Julia and OpenCilk Runtimes
Kralj, Tim
Julia [5] [15] is a high-level computing language used by many developers for its performance and ease of use. Julia operates on tasks that are run concurrently on threads. In its current state, however, Julia is not able to effectively employ fine-grained parallelism. OpenCilk [9] is an open-source implementation of the Cilk concurrency platform designed to utilize fine-grained parallelism. The Cilk runtime system, based on Cheetah [12], offers provably efficient parallel scheduling whose performance is borne out in theory and practice. I propose a combination of the Julia and OpenCilk runtimes through the integration of multiple components. One contribution of this thesis is a novel algorithm for combining C/C++ memory allocations with Julia’s precise garbage collector. Composing the parallelism of OpenCilk and Julia enables programmers to write efficient multithreaded code. Additionally, this work is a case study of combining the high levels of parallelism present in Cilk with a high-level language.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Network and the Classroom: A History of Hypermedia Learning Environments</title>
<link href="https://hdl.handle.net/1721.1/139541" rel="alternate"/>
<author>
<name>Freudenheim, William</name>
</author>
<id>https://hdl.handle.net/1721.1/139541</id>
<updated>2022-08-09T20:09:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Network and the Classroom: A History of Hypermedia Learning Environments
Freudenheim, William
Hypermedia tools for organizing knowledge have long been designed to benefit how people think, learn, collaborate, and generate ideas. Contemporary iterations of these types of “tools for thought” build upon both technology and pedagogy that have been developing since the early days of personal computers. However, despite a multi-decade history of the development of hypermedia knowledge organization tools both within and outside of educational contexts, we see little transformation of the classroom connected to these types of tools today. In this thesis, I argue that by examining the history of hypermedia knowledge organization tools, looking at both successful and failed experiments in bringing them into classrooms, one can more deeply understand the conceptual origins of the recent generation of networked knowledge tools and how to avoid the challenges that have plagued them in the past when considering where they might fit into today’s classrooms. &#13;
&#13;
Looking across three distinctly different time periods, I examine technical, cultural, and pedagogical shifts that contributed to the changing designs and classroom applications of these tools. I develop a case study describing one application of contemporary hypermedia knowledge organization tools in a middle-school classroom during the Fall of 2020. This case study, a project called “Learning Dens,” builds upon lessons from the previously examined eras, and draws inspiration from contemporary uses of hypermedia knowledge organization tools outside of the classroom for sharing in-progress collections of ideas. Set against the backdrop of the COVID-19 pandemic, this case explores using hypermedia knowledge organization tools in the classroom to support social-emotional learning and reflection.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating life cycle carbon emissions of the global oil supply chain at a high-resolution using optimization in a network model</title>
<link href="https://hdl.handle.net/1721.1/139540" rel="alternate"/>
<author>
<name>Dixit, Yash</name>
</author>
<id>https://hdl.handle.net/1721.1/139540</id>
<updated>2022-01-15T03:03:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Estimating life cycle carbon emissions of the global oil supply chain at a high-resolution using optimization in a network model
Dixit, Yash
Climate change, being the multi-faceted problem that it is, requires aggressive decarbonization across the entire life cycle. With the evolving energy mix, the oil industry is in a phase of adaptation. At present, petroleum fuels account for a third of the global primary energy supply. Future forecasts range across a spectrum from plateauing to decreasing supply, up to a 40 percent decrease from present levels [1]. Furthermore, certain applications such as aviation and petrochemicals have limited short-term, scalable alternatives. Against this backdrop, there is an increasing push for better emissions reporting throughout the supply chain and regulatory mandates for making climate-friendly choices. Notable examples include the Low Carbon Fuel Standard by the California Air Resources Board [2] and the Fuel Quality Directive by European regulators [3]. Existing literature is directionally aligned with these efforts, in that it points towards carbon accounting in the supply chain. However, studies are either limited to specific processes (e.g., crude oil extraction) and/or regions (e.g., North America). Furthermore, those with a wider scope including all phases of the supply chain have poor resolution, whereby the carbon accounting is done at the level of countries and is thus unable to capture the complexities associated with oil trade. These inadequacies stem from poor availability of data and methodological challenges, which fail to accurately portray the heterogeneity in life cycle emissions. The thesis quantifies this heterogeneity using a market-based approach that addresses the aforementioned limitations by estimating the life cycle carbon intensity of crude oil trades from sources (oil fields) to destinations (refineries). With a scope that includes crude extraction and transportation, the emission modeling is undertaken using high-fidelity commercial datasets, existing emission estimators, and computational techniques based on optimization. 
The thesis concludes that, globally, carbon footprint variability ranges from 1.80 to 32.92 gCO₂/MJ with a volume-weighted mean of 9.73 gCO₂/MJ. Coupled with supply forecasts to 2050 under low-carbon scenarios, this variability amounts to additional CO₂ savings of 2–5 Gt.
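As an illustration of the volume-weighted aggregation behind these figures, the sketch below computes a volume-weighted mean carbon intensity over a handful of hypothetical trades (the volumes and intensities are invented for illustration, not the thesis data):

```python
def volume_weighted_mean(trades):
    """Volume-weighted mean carbon intensity (gCO2/MJ) over a set of trades.

    Each trade is a (volume, carbon_intensity) pair; a trade's weight is
    its share of the total traded volume.
    """
    total_volume = sum(v for v, _ in trades)
    return sum(v * ci for v, ci in trades) / total_volume

# Hypothetical source-to-refinery trades: (volume, intensity in gCO2/MJ).
trades = [(10.0, 4.5), (25.0, 9.0), (5.0, 30.0)]
print(round(volume_weighted_mean(trades), 2))  # -> 10.5
```

The heavy-tailed intensities mean that a small volume share of high-intensity trades (the 30.0 entry) pulls the mean well above the median, which is why trade-level rather than country-level resolution matters.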
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disruptions and Robustness in Air Force Crew Scheduling</title>
<link href="https://hdl.handle.net/1721.1/139538" rel="alternate"/>
<author>
<name>Chin, Christopher Ho-Yen</name>
</author>
<id>https://hdl.handle.net/1721.1/139538</id>
<updated>2022-01-15T03:31:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Disruptions and Robustness in Air Force Crew Scheduling
Chin, Christopher Ho-Yen
Air Force crew scheduling involves assigning pilots to flights to fulfill mission duties and complete training requirements. Because of complex qualification requirements, as well as crew rest and availability constraints, Air Force crew scheduling is a challenging combinatorial optimization problem. Further, last-minute disruptions and uncertainties in factors like flight duration and pilot availability motivate the need for more robust schedules. Traditionally, this has been a manual, tedious, and time-consuming process. In this thesis, we leverage optimization techniques to improve the crew scheduling process. We start with a baseline integer program formulation. We develop objective functions based on two known scheduler priorities: maximizing training requirements completed, and minimizing overqualification (assigning the lowest qualified pilot feasible for each pilot seat). Then, we present a formulation to handle disruptions to an original schedule. We develop an intuitive schedule visualization tool that we use for user studies, and discuss user feedback on our scheduling algorithms. Finally, we identify key uncertainties in Air Force crew scheduling and contrast them with commercial aviation. We adapt two concepts from commercial aviation for robust crew scheduling: buffer times (slack time between two successive flights operated by the same pilots) and move-up crews (back-up crews for substitution when pilots become unavailable). This work will contribute to the core of the Puckboard scheduling software being developed by the Air Force for crew scheduling.
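The overqualification objective can be illustrated with a toy brute-force sketch (pilot names, qualification levels, and seat requirements below are hypothetical; the thesis formulates this as an integer program and also weighs training-requirement completion):

```python
from itertools import permutations

# Hypothetical pilots (qualification level) and seats (minimum level required).
pilots = {"Avery": 3, "Blake": 1, "Casey": 2, "Drew": 1}
seats = {"lead": 2, "wing": 1, "spare": 1}

def overqualification(assignment):
    """Total excess qualification across assigned seats; lower is better."""
    return sum(pilots[p] - seats[s] for s, p in assignment.items())

best = None
for perm in permutations(pilots, len(seats)):
    assignment = dict(zip(seats, perm))
    # Feasible only if every assigned pilot meets the seat's minimum.
    if all(pilots[p] >= seats[s] for s, p in assignment.items()):
        if best is None or overqualification(assignment) < overqualification(best):
            best = assignment

print(best)
```

Here the optimum assigns the level-2 pilot to the lead seat rather than the level-3 pilot, leaving total overqualification at zero; an integer program scales the same idea to realistic squadron sizes where enumeration is infeasible.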
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NIGHTRISE Through the Valley of Jabal ‘Amil’s Shadow</title>
<link href="https://hdl.handle.net/1721.1/139537" rel="alternate"/>
<author>
<name>Nahle, Mohamad</name>
</author>
<id>https://hdl.handle.net/1721.1/139537</id>
<updated>2022-01-15T03:31:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">NIGHTRISE Through the Valley of Jabal ‘Amil’s Shadow
Nahle, Mohamad
This thesis is about the night, and particularly about nightrise, which I propose here as the social and cultural construction of the nocturnal landscape. It sites itself along a twenty-six-night walk across the Lebanese hinterland that I made in August 2020, where moving shadows begin to awaken amorphous subcultures capable of weaponizing their formlessness in the name of self-preservation. Because the night resists the reign of any solitary subculture, these nocturnal cohabitations often rely on unspoken rules of civility all but invisible to strangers. And it was on the sixth night of this walk into the heart of Jabal ‘Amil – what is today known as South Lebanon – that my transgression of these rules was matched with an act of hostility that, strangely, culminated in the opportunity to imagine and implement an architecture of nightrise: a path on the southern border of Lebanon between a mountain and a river. If a path for the day seeks to impose the lone perspective of a single direction, then this path for nightrise revels in the unseen, in the ability to interrupt, and perhaps invert, the ubiquitous association between eyesight and insight. The erasure of the unidirectional line comes to propose a series of scattered stations that whisper, hint, and conjure countless variations of the same path in the minds of its visitors. 
These stations draw out the nocturnal qualities inspiring some of Jabal ‘Amil’s oral myths and legends, and the politics that are deeply rooted within them: from distressing celestial appearances to the imaginal world of the Jinn, and from tales that follow the spread of Shi’ism to the darkness surrounding the famous proverb, “Look under any stone in Jabal ‘Amil and you will find a poet.” Unfolding across the pages of this thesis is thus a peripatetic journey of two nocturnal voyages, one that begins in the past with the stories of my walk across Lebanon, and another in the future, on the Path of Nightrise, which will be implemented in the months following the submission of this work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Machine Learning Model for Understanding How Users Value Designs: Applications for Designers and Consumers</title>
<link href="https://hdl.handle.net/1721.1/139536" rel="alternate"/>
<author>
<name>Bilotti, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/139536</id>
<updated>2022-01-15T04:09:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Machine Learning Model for Understanding How Users Value Designs: Applications for Designers and Consumers
Bilotti, Jeremy
In this thesis, I demonstrate a number of advances toward developing a machine learning (ML) model of how designs are valued by their users. The model can be used to better understand the implications of furniture design decisions, as well as for commercial strategy.&#13;
&#13;
Existing ML systems have been trained on the physical and aesthetic features of completed furniture designs. We consider these methods to be “top-down” because designers and software engineers alone determine which features are considered important to the value of a design. To better capture the nuances of how users actually value the various functions of their furniture, I first develop a framework for ingesting and classifying user feedback. Next, I conduct a user survey to test this framework, generating from the feedback a “bottom-up” labeled dataset that requires no post-processing. Finally, I develop methods for the computational analysis of this data. The analysis is based on a probabilistic ML model trained on the real user data collected. The model is trained to quantify how users value various features of furniture designs, beyond only physical and aesthetic features. I show how the model can augment existing datasets and produce data visualizations to inform design practice and commerce.&#13;
&#13;
This framework represents a step toward a future in which data sets for furniture—and other design domains—are more accessible. By making user feedback available to designers at scale, and establishing methods for collecting this data, we can accelerate the development of designer intuition and deliver significantly greater value to more users.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perceiving Shape from Surface Contours via Artificial Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/139532" rel="alternate"/>
<author>
<name>Brandt, Laura E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139532</id>
<updated>2022-01-15T04:09:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Perceiving Shape from Surface Contours via Artificial Neural Networks
Brandt, Laura E.
This thesis explores the challenge of teaching a machine how to perceive shape from surface contour markings. Such markings are commonly used in clothing, data visualizations, and other man-made constructs, because humans have an apparently natural ability to interpret them. By glancing at a simple collection of curves drawn upon a 3D surface, we can quickly glean general shape and curvature information; and such contours drawn on a 2D surface can give the illusion of curvature where there is none. Machines have no such visual intuition, and therefore are not particularly well-equipped to interpret things designed to leverage this human ability. We approach this problem by synthesizing a new dataset of grid- and line-marked 3D surfaces (SurfaceGrid) and training a deep neural net to estimate their shape. Our algorithm successfully reconstructs shape from synthetic 3D surfaces rendered with a variety of grid- and line-contour markings with &lt; 0.5% mean-squared relative error, and extracts general shape and curvature information from 2D pictures of 3D mesh models and real-world wireframe objects.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soft Aerial Manipulation</title>
<link href="https://hdl.handle.net/1721.1/139522" rel="alternate"/>
<author>
<name>Fishman, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/139522</id>
<updated>2022-01-15T03:27:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Soft Aerial Manipulation
Fishman, Joshua
This thesis explores the theory and implementation of a soft drone, consisting of a quadrotor and a tendon-actuated soft gripper, which for the first time fully exploits the advantages of softness in aerial manipulation. Manipulation and grasping with unmanned aerial vehicles (UAVs) currently require accurate positioning and are often executed at reduced speed to ensure successful grasps. This is because modern aerial manipulation platforms employ rigid manipulators with few degrees of freedom, limiting their capability to compensate for disturbances caused by vehicle positioning errors and to maintain stability despite external contact forces. Biological systems, on the other hand, exploit softness to overcome similar limitations, and leverage compliance to enable aggressive grasping. To the best of our knowledge, ours is the first work at the intersection of soft manipulation and UAV control.&#13;
&#13;
We present a control and planning approach for the soft drone (quadrotor and soft gripper), decoupling the two subsystems and employing (i) a geometric controller and a minimum-snap trajectory optimization for the quadrotor (rigid) base, and (ii) a quasi-static finite element model and control-space interpolation for the soft gripper. We prove that the geometric controller asymptotically stabilizes the quadrotor velocity and attitude despite the addition of the soft load. Next, we describe our soft drone prototype, including electro-mechanical design, software infrastructure, and fabrication. Finally, we evaluate the proposed system in a realistic soft dynamics simulator (SOFA) and in real tests, and show that: (i) the geometric controller is fairly insensitive to the soft payload, (ii) in simulation, our soft drone outperforms more rigid alternatives, (iii) the platform can reliably grasp unknown objects despite inaccurate positioning and initial conditions, both in simulation and in real testing. Our soft drone can grasp at up to 2 m/s in simulation and consistently grasps at 0.2 m/s in real tests (91.7% success rate). &#13;
&#13;
Video attachments:&#13;
https://youtu.be/NNpQxP0SPFk&#13;
https://youtu.be/mqbj8mEyCdk
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Time-Variant Photovoltaic Electrodialysis Reversal: A Novel Design Optimization using Predictive Machine Learning and Control Theory</title>
<link href="https://hdl.handle.net/1721.1/139521" rel="alternate"/>
<author>
<name>Connors, Grace B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139521</id>
<updated>2023-01-08T15:45:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Predictive Time-Variant Photovoltaic Electrodialysis Reversal: A Novel Design Optimization using Predictive Machine Learning and Control Theory
Connors, Grace B.
This paper introduces a novel control theory and system design optimization to reduce the Levelized Cost of Water (LCOW) and maximize the reliability of Photovoltaic Electrodialysis Reversal (PV-EDR) groundwater desalination systems. This work aims to exploit the relationship between water production and Specific Energy Consumption (SEC) for time-variant PV-EDR systems and to introduce a control system that optimizes the energy management strategy to maximize water production while using energy efficiently. The novel control theory introduced in this paper consists of a machine learning algorithm used to predict future solar irradiance, a model predictive controller that uses this prediction to plan the best allocation of energy between the desalination system and energy storage, and a lower-level controller that determines the optimal flow rate and voltage of the EDR system based on the power available for desalination. Furthermore, this paper uses this control theory to build a design tool that can determine optimal PV-EDR system configurations based on geographical constraints. The system design optimization is then tested in a case study of a rural village in India. As compared to previous works, this control theory reduces the LCOW by 7%, or $0.15/m³, while meeting the target water production every day. Moreover, this study demonstrates the flexibility of this design tool as well as the impact of the design assumptions through a sensitivity analysis. This study determines how to design and control PV-EDR systems to minimize cost and maximize reliability, improving the commercial viability of PV-EDR systems as a primary water desalination solution.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for Student Well-Being</title>
<link href="https://hdl.handle.net/1721.1/139520" rel="alternate"/>
<author>
<name>Usta, Nazlı Ece</name>
</author>
<id>https://hdl.handle.net/1721.1/139520</id>
<updated>2022-01-15T03:16:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing for Student Well-Being
Usta, Nazlı Ece
High levels of student stress are prevalent at MIT, especially among the undergraduate community. Many undergraduate students report that the MIT environment is harmful to their mental health. However, the current research efforts at MIT are mainly quantitative and do not capture the deeper insights required to understand the underlying stress factors and students’ needs. This thesis aims to contribute to the knowledge about undergraduate student well-being at MIT through in-depth qualitative research. It also seeks to explore a well-being intervention informed by the research findings. This thesis is composed of two studies: a series of interviews and a design case study. Sixteen one-on-one interviews with MIT undergraduate students were conducted to gather in-depth qualitative insights. Internal, social, and academic pressures to manage a high workload were identified as the main stress factors. Findings indicate that students want to remain productive while maintaining peace and cheerfulness in their MIT experiences. Students’ related needs are also discussed. The case study explored using an SMS bot to practice self-reflection, increase self-awareness, and prioritize well-being. Thirty-four MIT students participated in the study to use the bot every morning and night for two weeks. The changes in participants’ self-awareness, mental well-being, and perceived stress levels were measured through pre-, mid-, and post-study surveys and compared to a control condition with an alternative intervention (open-ended journaling through texting). The results are discussed. Participants shared their appreciation of the bot’s friendly and caring tone, simplicity of use, and content.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Approach for Conducting Stakeholder Interviews for a Novel Technology, as applied to the Locomotive Industry</title>
<link href="https://hdl.handle.net/1721.1/139519" rel="alternate"/>
<author>
<name>Zhang, Allison T</name>
</author>
<id>https://hdl.handle.net/1721.1/139519</id>
<updated>2022-01-15T03:41:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Approach for Conducting Stakeholder Interviews for a Novel Technology, as applied to the Locomotive Industry
Zhang, Allison T
This thesis describes the development of a survey to interview US freight railroad engineers about the information requirements and possible benefits of a new type of “augmented reality head up display” (AR-HUD).  The goal is to use a windscreen-mounted transparent display to provide a wide field of view while maintaining AR conformality using image-based head tracking.  Stakeholders have different backgrounds and expectations, so solicitation and evaluation of opinions is inevitably challenging.  However, it is important to factor expert opinions into the AR-HUD design at an early stage.  Due to the Covid-19 pandemic, in-person, “hands-on” evaluations in a railroad simulator were not feasible.  Instead, a video briefing (“overview”) of the AR-HUD concept, a written reference, and an online opinion survey were developed using the Qualtrics survey platform.  This thesis describes the survey design rationale and methodology.  Prior simulator studies, hierarchical cognitive task analyses, and other analytic methods were used to identify the tasks, goals, and display information requirements for a reference freight rail operational driving scenario.  Questions on in-cab, external, and predictive information were provided, and overall opinions were acquired through free-text, open-ended responses.  Responses obtained from several early subjects were evaluated and compared.  One expressed concern about the potential for deskilling if engineers become reliant on AR-HUD predictive information.  A roadmap for further analysis of the survey results was developed to help form a better understanding of survey subjects, allowing for further refinement of the survey itself as well as of the AR-HUD prototype design before initiation of locomotive simulator experiments.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Studies of PbS Quantum Dots</title>
<link href="https://hdl.handle.net/1721.1/139518" rel="alternate"/>
<author>
<name>Zhang, Xiang</name>
</author>
<id>https://hdl.handle.net/1721.1/139518</id>
<updated>2022-01-15T03:47:56Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computational Studies of PbS Quantum Dots
Zhang, Xiang
A third-generation photovoltaic technology, PbS quantum dot solar cells boast tunable, infrared-compatible bandgaps, air stability, and scalable production. Their unfortunately limited photovoltaic efficiency, a result of low carrier lifetime, has been tentatively ascribed to polydispersity, fusing, or mid-bandgap trap states, the exact nanostructural origin of which remains a subject of discussion. We seek to use DFT simulations to understand the structure-property relationship of PbS QDs. To aid in the computational efforts, we first developed a Python library to automate DFT calculations and facilitate analysis, a framework based on nested directed graphs to manage and schedule concurrent and consecutive tasks, and a sigma.js- and Flask-based web frontend. To reduce the high computational cost of geometrically relaxing a PbS quantum dot, we explored the Behler-Parrinello approach, recurrent neural networks, and a discretized, regularized many-body cluster expansion formulation alongside multilayer perceptrons, both for training neural network potentials and for directly accelerating geometry relaxations of PbS quantum dots. Noticing some unexpected nontrivial behavior during the geometry relaxation runs, we sought to quantify and clarify this intuitively pathological behavior, both phenomenologically with outlier detection and physically by inspecting bonds on the quantum dots’ surfaces, touching on their electronic structure in the process.
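The dependency-ordered task scheduling described above can be sketched with Python's standard library (the workflow and task names below are hypothetical; the thesis framework additionally supports nested graphs and a web frontend):

```python
from graphlib import TopologicalSorter

# Hypothetical DFT workflow: each task maps to the set of tasks it depends on.
workflow = {
    "build_structure": set(),
    "relax_geometry": {"build_structure"},
    "scf": {"relax_geometry"},
    "band_structure": {"scf"},
    "dos": {"scf"},
}

# static_order() yields tasks so every dependency precedes its dependents;
# tasks that become ready together (band_structure, dos) could run concurrently.
order = list(TopologicalSorter(workflow).static_order())
print(order)
```

In a real scheduler the `prepare()`/`get_ready()`/`done()` protocol of `TopologicalSorter` lets independent branches of the graph dispatch as concurrent jobs while consecutive steps wait on their prerequisites.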
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adjusting for Autocorrelated Errors in Neural Networks for Time Series</title>
<link href="https://hdl.handle.net/1721.1/139516" rel="alternate"/>
<author>
<name>Sun, Fan-Keng</name>
</author>
<id>https://hdl.handle.net/1721.1/139516</id>
<updated>2022-01-15T03:52:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Adjusting for Autocorrelated Errors in Neural Networks for Time Series
Sun, Fan-Keng
Time series are everywhere and exist in a wide range of domains. Electrical activities of manufacturing equipment, electrocardiograms, traffic occupancy rates, currency exchange rates, speech signals, and atmospheric measurements can all be seen as examples of time series. Modeling time series across different domains is difficult. In many cases, it requires enormous effort and a significant amount of prior knowledge to generate highly accurate models tailored to a particular time series domain. In response, an increasing body of research focuses on training neural networks on time series, such that the neural networks learn to model the time series. A common assumption in training neural networks on time series is that the errors at different time steps are uncorrelated. However, due to the temporality of the data, errors are actually autocorrelated in many cases, making the assumption inaccurate.&#13;
&#13;
In this thesis, we propose to learn the autocorrelation coefficient jointly with the model parameters in order to adjust for autocorrelated errors and thus improve model performance on time series. We first develop our method for time series regression. Then, extensions are made to three other time series tasks: time series forecasting, time series classification, and anomaly detection. Large-scale experiments with various neural network architectures and datasets from the four time series tasks verify the effectiveness of our method. Results show that our method enhances performance across most of these time series modeling tasks.
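A minimal sketch of the joint-learning idea, using a linear model in place of a neural network and an AR(1) error process (everything below is illustrative, not the thesis implementation): instead of minimizing squared errors e_t directly, we minimize the squared adjusted residuals e_t - rho * e_{t-1}, learning rho alongside the model weight.

```python
import random

def adjusted_loss(w, rho, xs, ys):
    """Squared loss on AR(1)-adjusted residuals e_t - rho * e_{t-1}."""
    errs = [y - w * x for x, y in zip(xs, ys)]
    adj = [errs[t] - rho * errs[t - 1] for t in range(1, len(errs))]
    return sum(a * a for a in adj) / len(adj)

def fit(xs, ys, steps=2000, lr=0.1, h=1e-5):
    """Jointly learn the model weight w and the autocorrelation
    coefficient rho by finite-difference gradient descent."""
    w, rho = 0.0, 0.0
    for _ in range(steps):
        gw = (adjusted_loss(w + h, rho, xs, ys)
              - adjusted_loss(w - h, rho, xs, ys)) / (2 * h)
        gr = (adjusted_loss(w, rho + h, xs, ys)
              - adjusted_loss(w, rho - h, xs, ys)) / (2 * h)
        w, rho = w - lr * gw, rho - lr * gr
    return w, rho

# Synthetic series: y = 2x plus AR(1) noise with true coefficient 0.8.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
e, noise = 0.0, []
for _ in xs:
    e = 0.8 * e + random.gauss(0, 0.1)
    noise.append(e)
ys = [2.0 * x + n for x, n in zip(xs, noise)]

w, rho = fit(xs, ys)
print(w, rho)  # should land near the true values 2.0 and 0.8
```

In the thesis setting, w corresponds to the neural network's parameters and both are updated by backpropagation rather than finite differences; the point of the sketch is only that rho is an ordinary trainable parameter of the adjusted loss.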
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Numerical Characterization of Fragmentation in Ionic Liquid Clusters</title>
<link href="https://hdl.handle.net/1721.1/139513" rel="alternate"/>
<author>
<name>Schroeder, Madeleine</name>
</author>
<id>https://hdl.handle.net/1721.1/139513</id>
<updated>2022-01-15T03:25:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Numerical Characterization of Fragmentation in Ionic Liquid Clusters
Schroeder, Madeleine
Ionic liquid ion sources are a promising technology that can be used for many applications from space propulsion to focused ion beam microetching. The variety of ionic liquids that can be synthesized enables the selection of desired beam properties for optimizing propulsion and focused ion beam performance. Ionic liquid ion sources produce ion beams by extracting single ions and metastable solvated ion clusters from the surface of the ionic liquid and accelerating them using an electric field generated by applying a voltage between a sharp tip and a plate with an aperture. The solvated ion clusters often fragment in the electric field region, reducing the specific impulse and efficiency for propulsion applications and increasing the beam spot size for focused ion beam applications by broadening the energy distribution of the beam. &#13;
&#13;
Fragmentation behavior has previously been characterized in the region with no electric field. However, fragmentation under the effect of an electric field has not been investigated as experimental results are difficult to interpret for regions with electric fields. The goal of this work is to use various types of numerical methods to characterize fragmentation under the effect of an electric field. Molecular dynamics simulations are performed of various ionic liquid clusters under different conditions to determine the rate of fragmentation. These simulation results are also used to determine the different fragmentation pathways taken by each type of cluster, and the size of the different clusters as a result of energy content and electric field strength. Various physics-based models are compared to the molecular dynamics results with the goal of deriving a new model that accounts for the effect of the electric field on fragmentation. Approximate Bayesian computational methods are employed to infer the temperature of different ionic liquid cluster types and the percentage of the beam composed of each species by comparing simulated retarding potential analysis curves to experimental ones. Finally, the results of multi-scale N-body simulations are postprocessed and compared to experimental data. Results show remarkable agreement between N-body simulations using the fragmentation rates determined by molecular dynamics and experimental data.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies in Selective C-C Bond Formation via Borylation and Dehydrogenation</title>
<link href="https://hdl.handle.net/1721.1/139512" rel="alternate"/>
<author>
<name>Barbour, Johanna Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/139512</id>
<updated>2022-01-15T03:58:35Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Studies in Selective C-C Bond Formation via Borylation and Dehydrogenation
Barbour, Johanna Christine
Three stories of C-C bond formation are presented. First, we attempt to harness the Lewis acidity of boron to induce chemo- and regioselectivity in two fundamental organic reactions (Chapter 1). We explore 1,2-difunctionalization of alkenes via formation of a three-membered cyclic intermediate with the goal of adding various Lewis bases to control the intermediate’s direction of ring opening (Chapter 1.2). Unfortunately, the tunability of this system remains to be studied, as we were unable to confirm the formation of this intermediate.&#13;
&#13;
Secondly, we look to employ the hyperconjugative stability afforded by boron to adjacent alkyl radicals. We demonstrate room temperature regioselective alkylation of pyridazines by (iodomethyl)boronic acid pinacol ester in the presence of trifluoroacetic acid in up to 70% yield (Chapter 1.3). We see that this reaction proceeds upon variation of either the halide identity or the length of the alkyl fragment, indicating the α-boryl group is the only essential factor.&#13;
&#13;
Finally, we demonstrate photocatalytic acceptorless dehydrogenation of unactivated hydrocarbons (Chapter 2). Using a dual catalytic system with sodium decatungstate and a cobaloxime catalyst, we separate dehydrogenation into two discrete hydrogen atom abstraction events. The first C-H bond breaking event is achieved by photoexcited decatungstate. The C-H bond β to the resultant alkyl radical is significantly weakened, and is now susceptible to activation by cobalt to give the product alkene. We demonstrate this strategy is successful for aromatization of a number of unactivated alkenes in up to 96% yield.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wetting transition and fluid trapping in a microfluidic fracture</title>
<link href="https://hdl.handle.net/1721.1/139511" rel="alternate"/>
<author>
<name>Qiu, Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/139511</id>
<updated>2022-01-15T03:53:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Wetting transition and fluid trapping in a microfluidic fracture
Qiu, Yu
During immiscible fluid-fluid displacement in the partial wetting regime, the defending fluid is often trapped as a liquid film on solid surfaces through the mechanism of wetting transition. Here, we study the impact of roughness on wetting transition and fluid trapping in a microfluidic fracture. We demonstrate that roughness significantly reduces the capillary number threshold that onsets the wetting transition, even to a vanishing value. Above the reduced threshold, fluid is trapped in two configurations: (1) below the roughness amplitude as a thin film; (2) enveloping the rough surface as a thick film. We further show that the thin film may either remain stable or dewet as a film of uniform thickness, which is distinct from classic viscous dewetting on a smooth surface. We delineate three displacement regimes: complete displacement, thin film, and thick film, in a phase diagram with theoretical criteria that govern the crossovers among them. The different displacement regimes lead to distinct morphologies of residual fluid at late times, which ultimately determine hydrodynamics and geochemical reactions in the subsurface environment.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Bimodal Chemical-Electrospray Propulsion System using Ionic Liquid Monopropellants</title>
<link href="https://hdl.handle.net/1721.1/139510" rel="alternate"/>
<author>
<name>Bruno, Amelia R.</name>
</author>
<id>https://hdl.handle.net/1721.1/139510</id>
<updated>2022-01-15T03:03:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of a Bimodal Chemical-Electrospray Propulsion System using Ionic Liquid Monopropellants
Bruno, Amelia R.
Two primary propulsion modes currently exist for spacecraft: chemical (e.g. monopropellant, cold gas, solid propellant) and electric (e.g. Hall thruster, ion engine, electrospray). Chemical propulsion typically offers high thrust and low specific impulse, while electric propulsion provides the inverse: low thrust and high specific impulse. As such, having access to both of these modes on the same spacecraft is extremely useful for a wide range of applications. The conventional propellants used by chemical and electric thrusters are highly incompatible, making dual-mode propulsion particularly difficult for small spacecraft, which lack the mass, power, and volume to accommodate two separate propulsion systems. However, recent advancements in green monopropellants - developed as less-toxic alternatives to hydrazine in chemical monopropellant thrusters - have created a new family of propellants that are also compatible with electric thrusters. In particular, hydroxylammonium nitrate (HAN) based green monopropellants are also ionic liquids, the standard propellant for electrospray thrusters. This thesis outlines a design that takes advantage of this compatibility to create a bimodal propulsion system with access to both chemical monopropellant and electrospray propulsion. The proposed system builds upon existing technology: commercially available green monopropellant thrusters and the MIT iEPS electrospray thrusters, connected to a single, shared monopropellant tank. The design primarily focuses on propellant conditioning for the electrospray thrusters. Key technical objectives of this design include 1) addressing the need for pressure conditioning of the propellant and 2) ensuring electrical isolation between the thrusters and propellant line during firing. A prototype propellant line was fabricated to test the system and showed that the design sufficiently addresses the technical objectives.
This successfully validates the design and proves its feasibility for a bimodal spacecraft propulsion system.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Interactive Modeling for Control and Simulation of Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/139509" rel="alternate"/>
<author>
<name>Flanagan, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/139509</id>
<updated>2022-01-15T03:27:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Modular Interactive Modeling for Control and Simulation of Electric Power Systems
Flanagan, Sarah
Power systems must ensure reliable service during normal operation and unexpected disturbances. They should also enable decarbonization goals by supporting utilization of new renewable energy resources that are being added to the system. Conventional control used in power plants and generators is becoming insufficient because previously true assumptions no longer hold with the widespread implementation of renewable energy sources. Future electric power systems will comprise a more distributed grid of loads and Distributed Energy Resources (DERs), all contributing to electricity service goals. Novel modeling and control approaches with provable performance are actively being pursued. This thesis builds on the idea of modeling and controlling future electric power systems using a multi-level modular approach. Particular emphasis is on general simulation tools for assessing the dependence of these new architectures on control design. The MATLAB-based Centralized Automated Modeling of Power Systems (CAMPS) software models the primary dynamics of components in a modular way and develops a centralized model of the interconnected system. In this thesis, further extensions to CAMPS improve plotting of state variables and their expressions, enable conversion from the dq (direct quadrature) reference frame to the abc reference frame, and allow substitution of different controllers into an open-loop model. &#13;
&#13;
A recently introduced modeling approach, which maps voltage and current variables into the energy space and interactively exchanges energy space variables called interaction variables between components, is used as the starting model for new simulations. One energy space-based controller is simulated using Simulink to test the controller’s performance when using a switching model instead of an average model. A new software tool, Plug-And-Play Automated Modeling of Power Systems (PAMPS) based on this recent theoretical work implements distributed algorithms in MATLAB. One example applies PAMPS to a RL (resistive and inductive) circuit controlled by a voltage source and connected to a constant power load. Future work can use PAMPS to model additional electrical components including synchronous machines and solar inverters. Since PAMPS exchanges information within the energy space, it can also be applied in future work to model the interactions between multi-energy sources such as mechanical and thermal energy conversion components.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Iñupiatun Iñuguġlavut Miqłiqtuvut: Let Us Raise Our Children in Iñupiaq</title>
<link href="https://hdl.handle.net/1721.1/139508" rel="alternate"/>
<author>
<name>Olin, Annauk Denise</name>
</author>
<id>https://hdl.handle.net/1721.1/139508</id>
<updated>2022-01-15T03:01:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Iñupiatun Iñuguġlavut Miqłiqtuvut: Let Us Raise Our Children in Iñupiaq
Olin, Annauk Denise
Iñupiatun Iñuguġlavut Miqłiqtuvut is a language learning guide dedicated to reclaiming the Iñupiaq language in the home. Linguists usually create records primarily for scientific purposes and secondarily for language learning needs. Too often, the resulting descriptions are inaccessible to those who need them most. A decolonial approach to language pedagogy that intertwines peoplehood, language, and cultural context is critical for effective language revitalization. This curriculum will focus on teaching parents to speak Iñupiaq to their children by coupling Iñupiaq child raising practices and Minimal Course methodology. Minimal Course is a methodology specifically designed to help learners face the added challenges of becoming a proficient speaker of a language that is threatened by colonial systems. Minimal Course features a non-technical (yet linguistically informed) presentation of the language's everyday usage and conversation-building patterns in a series of short lessons. The lessons are also taught relationally, where each part reinforces at least one other related part. In the same way, the Minimal Course intends to rebuild whole speech communities rather than lone individuals. Diverging from Minimal Course, there is an optional Iñupiatun Uqautchim Irrusia (Iñupiaq Grammar) section for those who wish to understand better how parts of each unit in a word or sentence combine. Given that the curriculum is built around the development of infants and toddlers, songs and hands-on activities are central for families to learn the Iñupiaq language. The Iñupiaq language is our birthright. Uqautchiq Inupiatun kiŋuvaanaktaaksrautikput.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fare Approach to Attracting Transit Ridership After COVID-19</title>
<link href="https://hdl.handle.net/1721.1/139507" rel="alternate"/>
<author>
<name>Morgan-Roselló, Rubén Grayson</name>
</author>
<id>https://hdl.handle.net/1721.1/139507</id>
<updated>2022-01-15T03:37:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Fare Approach to Attracting Transit Ridership After COVID-19
Morgan-Roselló, Rubén Grayson
The COVID-19 global pandemic substantially depressed ridership at transit agencies across North America. While much is still unknown about the anticipated return of transit ridership after the pandemic, the exacerbation of previous work-from-home trends due to continued remote work policies can negatively affect transit ridership recovery and the use of traditional pass fare products. For example, an increase in work-from-home flexibility after employees return to the office is likely to affect the ongoing establishment of “pass multiples”, or the “break-even” point, for monthly passes. This thesis examines two case studies of potential new or modified fare products and one randomized control trial and suggests a strategy for transit agencies to attract ridership as employers reopen their downtown offices. The research analyzes the Massachusetts Bay Transportation Authority (MBTA), the regional transit agency for Greater Boston and one of the largest in the nation. A focus on commuter rail users and the Perq program (the corporate pass program at the MBTA) narrows the analysis to traditional peak commuters (AM and PM frequent peak riders). The first case study dissects a new pass option that was introduced early in the COVID-19 pandemic known as the Flex Pass. While an honorable attempt at providing a flexible pass option during a time of uncertainty, alternative pass structures and heavier discounts will likely be necessary to attract more users to this, or an alternative, fare product. Based on an analysis of individual commuter rail passenger usage from before and during COVID-19, an alternative, more heavily discounted 20/30 (20 trips within 30 days) fare product is recommended to replace the Flex Pass, along with increased discounts on the Monthly Pass. Additionally, a randomized control trial conducted just before the pandemic shows how an email marketing campaign can be used to increase pass product adoption among regular system users. 
Coupled with the new 20/30 fare product and an increased discount on the Monthly Pass from the first case study, the email marketing campaign can help quickly roll out a new product to meet ever-shifting travel behaviors. Finally, a new employer-based fare product, named the Mobility Pass (a pay-per-use product for employers that functions as an unlimited pass for employees and requires all benefits-eligible employees be covered and is heavily subsidized by the employer), is analyzed to show the ridership growth potential if rolled out to all employers in the Perq program (as well as those who use third party employee benefit administrators). These three tactics can be used to increase ridership as transit agencies seek to recover from a global pandemic and historically low ridership.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation of Multivariate Process Control for Biomanufacturing</title>
<link href="https://hdl.handle.net/1721.1/139505" rel="alternate"/>
<author>
<name>Lui, Christopher A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139505</id>
<updated>2022-01-15T03:31:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Investigation of Multivariate Process Control for Biomanufacturing
Lui, Christopher A.
Biologics manufacturing is primarily managed through single-loop univariate or cascaded controls - technology that has not fundamentally changed in decades. Outside of biomanufacturing, process control technologies have advanced to include multivariate and predictive control. This project examines the feasibility of developing a generalized multivariate control scheme proof of concept to link several individual loops, evaluating its impact on control quality. A multivariate simulation environment was created to model the effects of different control schemes on the critical quality attributes under investigation. This study reveals that model predictive control can be used for bioreactor control in this simulated environment; however, the results do not match the PID loops as closely as expected. While the multivariate model predictive control scheme shows a larger mean deviation from set point than the traditional control scheme in this purely simulation-based experiment, it may be sufficient if additional benefits, such as better insight into more significant critical quality attributes, can be ascertained. Several future uses of this technology are hypothesized and, with additional effort, can be virtually tested given the baseline simulations from this project. Future testing can be implemented using this framework environment to evaluate hypotheses that may lead to tighter control of quality attributes, increases in high-quality titer, or lower material waste.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application Choices to Gender-typed Jobs</title>
<link href="https://hdl.handle.net/1721.1/139504" rel="alternate"/>
<author>
<name>Labuzova, Tatiana</name>
</author>
<id>https://hdl.handle.net/1721.1/139504</id>
<updated>2022-01-15T03:29:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Application Choices to Gender-typed Jobs
Labuzova, Tatiana
While most research explaining the persistence of gender inequality has focused on evaluator biases, a growing body of work points to mechanisms that may arise before the hiring process starts. Job seekers decide to apply to particular areas and job levels based on their mental models. Anticipated discrimination and biased self-assessment can result in male and female applicants using different criteria for putting themselves forward to seek jobs. Using a survey vignette study, I find that women and men differ in their application decisions. Women aren't avoiding male-typed jobs, but men are avoiding female-typed jobs. In terms of anticipated discrimination, while there is no gender difference for male-typed jobs, for female-typed jobs, men compared to women anticipate a less favorable employer reaction to their application if they were to apply. However, the expectation of how appropriate the potential employer will find their application is more important for women in their decision to apply to female-typed jobs, whereas men are less likely to incorporate anticipated employer reactions into their application decisions. In terms of biased self-assessment, women report no less confidence than men in their ability to perform well at male-typed jobs, whereas men report less confidence than women in their ability to perform well at female-typed jobs. Further, women and men are similar in the extent to which they incorporate self-assessment considerations into their application decisions. I conclude with a discussion of the study's theoretical and practical implications regarding the design of application processes and further research contributing to the solution-oriented streams of the gender-sorting literature.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering-Based Methods for Clinical Risk Prediction of Rare Missense Variants</title>
<link href="https://hdl.handle.net/1721.1/139502" rel="alternate"/>
<author>
<name>Bernatchez, Jackson</name>
</author>
<id>https://hdl.handle.net/1721.1/139502</id>
<updated>2022-01-15T03:04:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Clustering-Based Methods for Clinical Risk Prediction of Rare Missense Variants
Bernatchez, Jackson
A long-standing goal in clinical genomics is to map individual genetic variants to clinical outcomes. Typically, variants which lead to loss of function (e.g. nonsense or stop-codon inducing variants, frameshifts, or deletions) are more easily classified as pathogenic in an established disease gene. However, there are many other missense variants identified in established disease genes which are more challenging to classify. Improving predictions of such variants has the potential to lead to clinically actionable solutions for individual patients. In this paper, we develop and evaluate several new clustering-based approaches for predicting the clinical risk of rare missense variants. We find that our results are comparable to existing methods, and offer several opportunities to significantly improve clinical risk predictions for missense variants.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct Manipulation Techniques for Creation of Multiple-View Visualizations</title>
<link href="https://hdl.handle.net/1721.1/139501" rel="alternate"/>
<author>
<name>Bacher, Katharine E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139501</id>
<updated>2022-01-15T03:05:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Direct Manipulation Techniques for Creation of Multiple-View Visualizations
Bacher, Katharine E.
I propose an extension to Lyra, a tool for authoring interactive data visualizations, that introduces novel processes for creation and laying out of multiple-view (MV) visualizations. The existing tools for designing data visualizations lack specific support for MVs, making the process of creating such visuals cumbersome for users. I draw on the concepts of direct manipulation, in which users interact with objects in the visual to enact desired changes, to develop new methods of creating MVs that are more intuitive to users and have less of a learning curve. With my proposed changes, users can iteratively add new groups in Lyra simply by dragging and dropping visual marks onto dropzones. To facilitate this new method for adding groups, I implement a layout concept within Lyra to allow for easier creation of gridded layouts for MVs. Additionally, this concept is extended to data-driven layouts that can be automatically generated for small multiples, or trellis charts. The system is evaluated by demonstrating the new mechanisms and showing an example gallery of MVs that can be created with these methods.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decreasing Size, Weight, and Power of Opto-Mechanical Assemblies Using Single-Crystal Silicon</title>
<link href="https://hdl.handle.net/1721.1/139500" rel="alternate"/>
<author>
<name>Roll, Christopher D.</name>
</author>
<id>https://hdl.handle.net/1721.1/139500</id>
<updated>2022-01-15T04:02:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Decreasing Size, Weight, and Power of Opto-Mechanical Assemblies Using Single-Crystal Silicon
Roll, Christopher D.
As small satellites and unmanned-aerial-vehicles (UAVs) continue to proliferate, there is a growing need to increase the capability of their payloads. These platforms are typically attractive because of their lower cost and shorter development timelines compared to traditional programs. For optical assemblies in particular, the cost and schedule constraints substantially limit an engineer's options with regard to high-performance opto-mechanical materials. This restricts the achievable performance of the overall system. In order to overcome this barrier, additional design options must be made available. While often used as an optical substrate, single-crystal silicon (SCSi) is not typically thought of as a structural material. However, it has excellent opto-mechanical properties such as a high stiffness-to-weight ratio, a low coefficient of thermal expansion, and high thermal conductivity. Because of this, ultra-stable, lightweight, and robust assemblies can be built by using SCSi as both the primary metering structure and as a substrate for optical elements. While the application space is very broad, this effort focuses on the design elements typically required for a notional laser communication terminal. The results demonstrate the successful design, analysis, fabrication, and assembly of a lasercom SCSi optical bench. Ultimately, this study establishes both the feasibility and utility of using SCSi in next-generation low Size, Weight and Power (SWaP) optical payloads.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Privacy with Split Learning: Benchmarking of Algorithmic Defenses against Reconstruction Attacks</title>
<link href="https://hdl.handle.net/1721.1/139497" rel="alternate"/>
<author>
<name>Zhang, Emily T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139497</id>
<updated>2022-01-15T03:09:15Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computational Privacy with Split Learning: Benchmarking of Algorithmic Defenses against Reconstruction Attacks
Zhang, Emily T.
Distributed deep learning has potential for significant impact in preserving data privacy and improving model accuracy by leveraging massive sets of training data. However, passing intermediate weights, gradients, or activations is inherent in current distributed learning techniques, all of which contain information related to input data. This thesis analyzes split learning, a current state-of-the-art distributed deep learning technique, in the context of the private collaborative inference scheme against reconstruction attacks. This is achieved by creating a benchmark and introducing new methods of improving privacy algorithmically. Benchmarking is done by comparing input data reconstruction quality and sensitive attribute prediction accuracy along two axes: the number of leaked activation/input data pairs, and whether or not model parameters and general data distribution information are known. The proposed privacy improvements involve changes in model training to leak less information that may be used for reconstruction while preserving accuracy on the originally intended model prediction task. These improvements are compared against current state-of-the-art privacy methods in their protection against various reconstruction attacks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Facemesh in MIT App Inventor to Empower Students to Apply Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/139496" rel="alternate"/>
<author>
<name>Yu, Joy</name>
</author>
<id>https://hdl.handle.net/1721.1/139496</id>
<updated>2022-01-15T03:34:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Using Facemesh in MIT App Inventor to Empower Students to Apply Artificial Intelligence
Yu, Joy
It’s possible that a teenager’s first direct contact with artificial intelligence (AI) comes in the form of facial filters that are readily accessible on Instagram and Snapchat. These filters allow users to seemingly transform themselves into animals, put on pieces of clothing, or try out new “looks”. Accompanying active usage, however, is often a lack of knowledge or awareness of the technology behind the fun applications. In my work, my goal was to allow youth to access, apply, and understand both AI and AI ethics. I built an open-source FaceExtension tool, an MIT App Inventor extension that uses Facemesh, through which users gain access to 486 different facial landmarks. I also designed a middle-school level curriculum to build a cat or lion facial filter camera, published a corresponding online sidebar tutorial on the official "AI with App Inventor" page, and ran workshops with 7-8th grade students. Students with no background in coding or AI could successfully complete the curriculum with enthusiasm, demonstrated by consistent attendance at workshops. Survey results show they gained not only a better understanding of and increased interest in AI, but also new realizations about the importance of AI ethics when applying AI tools. Students are also interested in making different AI facial filters as a means of self-expression and social impact. Importantly, after attending the workshops, students became empowered to create their own apps using AI. Furthermore, professional educators have also tested the curriculum; they not only demonstrated excitement to use FaceExtension in the classroom, but also published their own AI projects using the FaceExtension.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Mainland China REIT Return Index (2015-2020) through a Pure-Play Approach</title>
<link href="https://hdl.handle.net/1721.1/139495" rel="alternate"/>
<author>
<name>Zuo, Kan</name>
</author>
<id>https://hdl.handle.net/1721.1/139495</id>
<updated>2022-01-15T03:41:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Developing a Mainland China REIT Return Index (2015-2020) through a Pure-Play Approach
Zuo, Kan
As China takes steps to develop a REIT market, institutional investors are keen to seek a proxy for expected returns in this potential market for formulating investment decisions. Expected returns on an ex-ante basis are typically based on ex-post return data, and since China does not currently have a REIT market, there is no historical data available. One prism through which to view this issue is the historical returns of the REITs traded on the Singapore exchange with mainland China exposures. Some observers have cited the performance of the REITs on the Singapore exchange with pure allocations in mainland China as a proxy. However, the skewed market allocations of this portfolio of REITs render it an inaccurate representation of a hypothetical Chinese REIT market. By applying the pure-play methodology developed at MIT to REITs that are not by themselves pure in Chinese mainland allocation, this thesis creates an alternative and improved way of producing a shadow Chinese REIT return index for a hypothetical mainland REIT market, and provides crucial insights into the question: what would the performance have been like in the past several years, had China had a REIT market for commercial real estate?
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Bandgap-Less Temperature Sensor for Achieving High Untrimmed Accuracy</title>
<link href="https://hdl.handle.net/1721.1/139494" rel="alternate"/>
<author>
<name>Mittal, Vipasha</name>
</author>
<id>https://hdl.handle.net/1721.1/139494</id>
<updated>2022-01-15T03:39:32Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of a Bandgap-Less Temperature Sensor for Achieving High Untrimmed Accuracy
Mittal, Vipasha
Temperature sensors are extensively used in measurement, instrumentation and control systems. CMOS based temperature sensors offer benefits of low cost and direct digital outputs over conventional sensors. However, they are limited in their absolute&#13;
accuracy due to the non-ideal behaviour of the devices used to design them. Therefore, these sensors require either calibration or gain, offset and linearity adjustments to achieve desired accuracies. The latter process, also called trimming, requires additional expensive test equipment and valuable production time, and is a major contributor to the cost of the sensors. In order to enable high volume production of CMOS based temperature sensors at low cost, it is imperative to achieve high accuracies without trimming.&#13;
&#13;
This work describes the design of an untrimmed, bandgap-less CMOS temperature sensor based on fundamental physical quantities resilient to process variations, package stress and manufacturing tolerances. Compared to prior art, this sensor employs switched-capacitor amplifiers with high thermal noise, whose outputs are digitized by a bandgap-less successive approximation analog-to-digital converter. The flicker noise is attenuated through autozeroing, which also allows the thermal noise to be amplified further. This work demonstrates the design of the first untrimmed closed-loop thermal noise temperature sensor. The chip is fabricated in TSMC 65 nm low-power process and simulation results show that the sensor achieves an untrimmed 3&#120590; inaccuracy of&#13;
less than 1 deg C.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Compositional Models for Few Shot Sequence Learning</title>
<link href="https://hdl.handle.net/1721.1/139493" rel="alternate"/>
<author>
<name>Akyurek, Ekin</name>
</author>
<id>https://hdl.handle.net/1721.1/139493</id>
<updated>2022-01-15T03:28:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Compositional Models for Few Shot Sequence Learning
Akyurek, Ekin
Flexible neural sequence models outperform grammar- and automaton-based counterparts on a variety of tasks. However, neural models perform poorly in settings requiring compositional generalization beyond the training data—particularly to rare or unseen subsequences. Past work has found symbolic scaffolding (e.g. grammars or automata) essential in these settings. We describe two simpler and more general modeling approaches that enable a large category of compositional generalizations without appeal to latent symbolic structure. The first is a data augmentation scheme called R&amp;R, built from two components: recombination of original training examples via a prototype-based generative model and resampling of generated examples to encourage extrapolation. Training an ordinary neural sequence model on a dataset augmented with recombined and resampled examples significantly improves generalization in two language processing problems—instruction following (SCAN) and morphological analysis (SIGMORPHON 2018)—where R&amp;R enables learning of new constructions and tenses from as few as eight initial examples. The second is a lexical translation mechanism for neural sequence modeling. Previous work shows that many failures of systematic generalization arise from neural models' inability to disentangle lexical phenomena from syntactic ones. To address this, we augment neural decoders with a lexical translation mechanism that generalizes existing copy mechanisms to incorporate learned, decontextualized, token-level translation rules. We describe how to initialize this mechanism using a variety of lexicon learning algorithms, and show that it improves systematic generalization on a diverse set of sequence modeling tasks drawn from cognitive science, logical semantics, and machine translation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding principles for universal energy access: integrated distribution frameworks and their implementation</title>
<link href="https://hdl.handle.net/1721.1/139488" rel="alternate"/>
<author>
<name>Jacquot, Grégoire</name>
</author>
<id>https://hdl.handle.net/1721.1/139488</id>
<updated>2022-01-15T03:31:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Guiding principles for universal energy access: integrated distribution frameworks and their implementation
Jacquot, Grégoire
Holistic approaches to energy access have the potential to break the current status quo. They can accelerate energy access efforts and make universal energy access possible through tight coordination among the national grid, mini-grids, and standalone solar systems, and with optimal allocation of technical, human, and financial resources among all three electrification modes. Realizing these benefits requires careful identification of the key bottlenecks in energy access and the design of adequate frameworks based on an integrated approach that includes institutional, regulatory, economic, financial, technical, and operational analysis of the distribution sector.&#13;
&#13;
This dissertation demonstrates how integrated approaches can be applied in practice to advance the universal energy access agenda, through a detailed analysis of the Integrated Distribution Framework (IDF) and possible regulatory vehicles allowing for its implementation. From a study of past and ongoing African experiences in energy access, it shows that the IDF is a prime avenue to address key bottlenecks in distribution sector reforms for universal energy access. To explore the practical feasibility of integrated approaches, this thesis examines the role of concessions as a possible regulatory vehicle for the implementation of IDF. It examines the potential of utility concessions, briefly mentions the role of mini-grid concessions, and concludes with a much more thorough analysis of solar concessions as a promising mechanism to integrate off-grid solar into regulated approaches to energy access such as IDF.&#13;
&#13;
The primary contributions from this work include establishing the strong connection between the concept of integrated approaches to energy access and past and ongoing experiences in Sub-Saharan Africa; the identification of the key high-level institutional, regulatory, economic, and financial challenges facing the implementation of the Integrated Distribution Framework; a review of the potential of territorial utility concessions for the implementation of IDF and an outline of possible guidelines to design IDF-like utility concessions for universal energy access; a brief analysis of the limitations of current mini-grid concessions in energy access and the need for further reforms to bring mini-grid concessions into the realm of IDF; an assessment of the role of solar concessions in harnessing the full potential of solar in universal energy access, and the challenges facing planners in the integration of off-grid solar into IDF; finally, a framework for the design and implementation of solar concessions.&#13;
&#13;
This thesis is grounded in a detailed study of past and ongoing African experiences and, to a lesser extent, Latin American and Asian experiences in energy access to inform distribution sector reforms in Sub-Saharan Africa. Most of the insights and recommendations can be generalized and applied to similar contexts.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer vision-based post-disaster needs assessment from low altitude aerial imagery</title>
<link href="https://hdl.handle.net/1721.1/139487" rel="alternate"/>
<author>
<name>García Franceschini, René Andrés</name>
</author>
<id>https://hdl.handle.net/1721.1/139487</id>
<updated>2022-01-15T03:38:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computer vision-based post-disaster needs assessment from low altitude aerial imagery
García Franceschini, René Andrés
Over the past decades, climate change has driven an increase in the frequency and intensity of natural disasters. In an effort to increase the situational awareness and timely support for search and rescue missions in the aftermath of a disaster, the United States Civil Air Patrol (CAP) gathers aerial imagery of the impacted areas. However, these high resolution and timely images are seldom used for quantitative assessment of damage. This thesis focuses on the following question: How can we use modern computer vision techniques to utilize CAP imagery for post-disaster needs assessment, specifically for the purpose of damage estimation and localization? This question is important because the data gathered by CAP has significant potential to expedite response operations and help reduce significant societal costs. The key technical challenge arises from the fact that CAP-gathered aerial images are spatially sparse and oblique, and well-calibrated object detection datasets are not available for damage-prone situations. &#13;
&#13;
To address the aforementioned challenge, we develop an approach to simultaneously detect and localize damage within images using ideas from weakly-supervised object localization and structure from motion. Firstly, we refine a well-known technique called class activation mapping to detect the extent of damage within an image, relying solely on image-level labels. Secondly, we utilize structure from motion to georeference batches of CAP images from an area of interest. The main advantage of our approach is that the outputs of these two techniques can be easily combined to assign real-world coordinates to damage hotspots in the aftermath of a natural disaster. Finally, we evaluate its potential using data from the 2016 Louisiana floods and provide estimates of flood-related damage.&#13;
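Class activation mapping can be sketched in a few lines of NumPy: the map for a class is the classifier-weighted sum of the final convolutional feature maps. The shapes, the ReLU-and-normalize post-processing, and all names below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Minimal sketch of class activation mapping (CAM): the activation map for a
# class is the classifier-weighted sum of the final convolutional feature maps.
# Shapes and post-processing here are illustrative assumptions.
def class_activation_map(features, fc_weights, class_idx):
    """features: (C, H, W) final conv activations; fc_weights: (num_classes, C)."""
    w = fc_weights[class_idx]                 # (C,) weights for the chosen class
    cam = np.tensordot(w, features, axes=1)   # weighted sum over channels: (H, W)
    cam = np.maximum(cam, 0.0)                # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()                 # normalize to [0, 1] for display
    return cam
```

Upsampled to the input resolution, such a map highlights image regions the classifier associates with the "damage" label.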
&#13;
Our approach achieves a precision of 88% when compared against official flooding estimates. Practical deployment of this approach depends on how the current practices and technologies used by CAP are tailored to improve damage detection and localization. To this end, we propose the following technical and policy recommendations: 1) Implement best practices that allow for high-quality image sequences that can be labeled and georeferenced using modern computer vision techniques; 2) Incorporate other sensing modalities such as satellite imagery into CAP imagery analysis for quantitative damage assessment over large spatial regions; and 3) Invest in low altitude imaging technologies and benchmark dataset development.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolving Paradigms in State-Level Integrated Resource Planning</title>
<link href="https://hdl.handle.net/1721.1/139486" rel="alternate"/>
<author>
<name>Peluso, Nina</name>
</author>
<id>https://hdl.handle.net/1721.1/139486</id>
<updated>2022-01-15T03:42:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Evolving Paradigms in State-Level Integrated Resource Planning
Peluso, Nina
As global energy systems electrify, long-term planning processes are evolving to allow flexible economic analysis and acknowledge rapid financial and operational transformation. State-level integrated resource planning (IRP) processes allow oversight of long-term electric utility resource planning. Yet, outdated rules, procedures, and practices may impede utilities in planning for a new energy future. Is the IRP process constrained by technical modeling decisions, when it ought to serve as a platform for stakeholders to shape optimal and just electricity system outcomes?&#13;
&#13;
This paper assesses the state of integrated resource planning to inform utility planners, commissioners, and their staffs, along with the array of advocates that participate in such proceedings. I employ a case study methodology to assess docket filings and other relevant materials in recent IRP proceedings for four major utilities in Michigan, Georgia, New Mexico, and North Carolina. Section 3 details modeling software selection and use for those four cases. Section 4 uses capacity value assumptions to illuminate the iterative process around establishing model input assumptions. Section 5 takes a broader view of nascent efforts to include equity and justice into IRP processes.&#13;
&#13;
Consistent commission oversight and robust stakeholder processes are integral to ensure that utilities’ integrated resource plans reflect the pace of change in the U.S. energy sector. Policymakers can encourage advanced modeling methodologies (software, settings, and assumptions) through three channels: (1) written IRP rules, (2) commission procedure, and (3) intervention in utility processes. Furthermore, as equity and justice come to the forefront of utility planning, policymakers should consider intervenor compensation programs, energy justice assessments, and forms of public ownership to incorporate energy justice principles into the planning process.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying Risk Exposure in a Global Retail Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/139485" rel="alternate"/>
<author>
<name>Xu, Liza</name>
</author>
<id>https://hdl.handle.net/1721.1/139485</id>
<updated>2022-01-15T03:15:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Identifying Risk Exposure in a Global Retail Supply Chain
Xu, Liza
In recent decades, supply chains have become increasingly vulnerable as firms expand and globalize their operations. Consumer expectations, short product life cycles and the adoption of lean principles have also contributed to supply chains becoming less resilient. The COVID-19 pandemic highlighted that for many firms, more work is needed to understand where vulnerabilities exist and to mitigate and prepare for eventual disruptions. Supply chain risk management (SCRM) is one area of study focused on identifying, assessing, mitigating, and recovering from risks. In particular, we focus on the identification of risk within a global retail supply chain. Traditional risk models are ill-suited for addressing low-probability and high-impact events, such as a global pandemic. Therefore, we explore the development and application of a risk model based on the principles of time-to-survive (TTS) and time-to-recovery (TTR). We examine two definitions for TTS. In a supply disruption scenario, TTS represents the time a supply chain can continue to operate before facing performance impact. In a demand disruption, TTS_d measures the time a firm can operate before reaching storage capacity constraints. We also introduce an alternative TTR based on the weighted average lead time from a disrupted node to a distribution center. The key metric we examine is risk exposure time (RET), which enables us to quantify the relative vulnerabilities of nodes for use in prioritizing additional SCRM processes and resources.
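Under one plausible reading of these metrics (an assumed formulation, not necessarily the thesis's exact one), a node's risk exposure time is the amount by which its recovery time outlasts the chain's time-to-survive, and nodes can then be ranked by exposure:

```python
# Illustrative sketch (assumed formulation): a node's risk exposure time (RET)
# is the time its recovery outlasts the survival buffer, zero when recovery
# completes first; nodes are prioritized by descending exposure.
def risk_exposure_time(ttr, tts):
    """RET in the same time units as TTR and TTS."""
    return max(ttr - tts, 0.0)

def rank_nodes(nodes):
    """nodes: dict name -> (ttr, tts); returns names sorted by descending RET."""
    return sorted(nodes, key=lambda n: risk_exposure_time(*nodes[n]), reverse=True)
```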
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MASS BALANCE: Design strategies for lightweight, thermally massive construction systems</title>
<link href="https://hdl.handle.net/1721.1/139484" rel="alternate"/>
<author>
<name>Gascón Alvarez, Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/139484</id>
<updated>2022-01-15T03:21:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">MASS BALANCE: Design strategies for lightweight, thermally massive construction systems
Gascón Alvarez, Eduardo
The design of lightweight, thermally massive construction systems offers the opportunity to tackle two of the main challenges currently facing the built environment: the need to reduce the use of concrete, responsible for 5-8% of global carbon emissions, and the mitigation of the impacts of extreme heat events, which are becoming increasingly frequent worldwide. This work presents new strategies to design and evaluate integrated construction elements that simultaneously consider their structural and thermal performance. Specifically, this work focuses on concrete floor systems, which are critical from both perspectives given their outsized contribution to structural mass and their impact on thermal comfort.&#13;
&#13;
Methodologically, this thesis proposes the application of computational fluid dynamics (CFD) to study the dynamic thermal behavior of structurally optimized slabs. By simulating the ability of the thermal mass and the ceiling’s geometric shape to flatten daily temperature fluctuations, the impact on occupants’ thermal comfort is evaluated. At the same time, the activation of these floor systems, for example by embedding water pipes, is analyzed as an additional opportunity to integrate functions and further improve the performance of these systems. The results demonstrate the possibility of designing shaped slabs that, in addition to a 55% embodied energy reduction relative to conventional prismatic solutions, can still increase their passive thermal mass performance by 6.5% and their active cooling capacity by 14.5%. Moreover, the implementation of multi-objective optimization techniques allows for the exploration of Pareto-optimal designs that, at the expense of reducing the material savings to 38%, can further improve their thermal behavior by up to 9.5% (passive) and 28% (active).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monstrous Space: Architectural Production in an Age of Algorithms</title>
<link href="https://hdl.handle.net/1721.1/139483" rel="alternate"/>
<author>
<name>Waller, Alexandra L.</name>
</author>
<id>https://hdl.handle.net/1721.1/139483</id>
<updated>2022-01-15T04:06:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Monstrous Space: Architectural Production in an Age of Algorithms
Waller, Alexandra L.
There has been much attention in the last half-century on developing digital technologies for architectural and design production. These research trajectories are frequently concerned with interfaces between machines and designers and the fabrication of objects in innovative ways. As a result of these efforts, these technologies have enabled novel and powerful methods of representation, approaches to fabrication and construction, and the unprecedented exploration of architectural form.&#13;
&#13;
In this thesis, I offer an expanded view of the role digital technologies play in architecture and design by investigating the spatial consequences of ubiquitous computation (Weiser and Brown 1997). In particular, this research is concerned with domestic architecture both as a physical manifestation and—as represented through cloud-based peer-to-peer video communication platforms—a digital echo. This physical-digital conversion fragments architectural space as its digital representation is redacted and disjoined from its embodied counterpart. Critical parallels surface between the material qualities inherent to these fragments and those belonging to spolia, architectural fragments produced through the ruination of existing architecture and repurposed as material in new constructions.&#13;
&#13;
A conceptual framework is developed which situates a hybridized physical-digital domestic space in cross-disciplinary dialogue with other concepts and processes of hybridity and aligns the aesthetic qualities of domestic-digital space within the lineage of the grotesque in Western art and architecture. A methodology for architectural production is developed in response, including a structure for hybrid human-machine design collaboration, and approaches to material creation, organization, and assembly. A Fragment Catalogue is produced, documenting and organizing a collection of digital spolia, and a series of speculative domestic architectures are constructed using fragments from the Catalogue. Differing approaches towards assembly are tested with the goal of producing spatial qualities resonant with grotesque expression.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalizable Modelling of Vacuum-Powered Soft Actuators And Its Use in Design for Mechanical Assistive Applications</title>
<link href="https://hdl.handle.net/1721.1/139481" rel="alternate"/>
<author>
<name>Gollob, Samuel Dutra</name>
</author>
<id>https://hdl.handle.net/1721.1/139481</id>
<updated>2022-01-15T04:09:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Generalizable Modelling of Vacuum-Powered Soft Actuators And Its Use in Design for Mechanical Assistive Applications
Gollob, Samuel Dutra
In this thesis, we present a generalized modeling tool for predicting the output force profile of vacuum-powered soft actuators using a simplified geometrical approach and the principle of virtual work. Previous work has derived analytical formulas to model the force-contraction profile of specific actuators. To enhance the versatility and the efficiency of the modelling process we propose a generalized numerical algorithm based purely on geometrical inputs, which can be tailored to the desired actuator, to estimate its force-contraction profile quickly and for any combination of varying geometrical parameters. We identify a class of linearly contracting vacuum actuators that consists of a polymeric skin guided by a rigid skeleton and apply our model to two such actuators - vacuum bellows and Fluid-driven Origami-inspired Artificial Muscles (FOAMs) - to demonstrate the versatility of our model. We perform experiments to validate that our model can predict the force profile of the actuators using its geometric principles, modularly combined with design-specific external adjustment factors. Our framework can be used as a versatile design tool that allows users to perform parametric studies and rapidly and efficiently tune actuator dimensions to produce a force-contraction profile to meet their needs, and as a pre-screening tool to obviate the need for multiple rounds of time-intensive actuator fabrication and testing.&#13;
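The virtual-work idea behind the model can be illustrated numerically: the output force at a given contraction equals the vacuum pressure difference times the rate of change of enclosed volume. The piston-like bellows volume function, pressure value, and sign convention below are illustrative assumptions, not the thesis's actuator geometry.

```python
# Numerical sketch of the principle of virtual work for a vacuum actuator:
# F = dp * (-dV/dx), with dV/dx estimated by a central finite difference.
# The toy volume function and all parameter values are assumptions.
def bellows_volume(x, area=1e-3, stroke=0.05):
    """Toy enclosed volume (m^3) of a piston-like bellows contracted by x meters."""
    return area * (stroke - x)

def virtual_work_force(volume_fn, x, dp=80e3, h=1e-6):
    """Output force (N) at contraction x for pressure difference dp (Pa)."""
    dVdx = (volume_fn(x + h) - volume_fn(x - h)) / (2.0 * h)
    return -dp * dVdx  # vacuum pulls the actuator toward smaller enclosed volume
```

Swapping in a different geometric volume function yields the force-contraction profile of a different actuator, which is the sense in which such a framework is modular.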
&#13;
The work presented here was published in Frontiers in Robotics and AI on 03 March 2021, “A Modular Geometrical Framework for Modelling the Force-Contraction Profile of Vacuum-Powered Soft Actuators,” by S. Gollob et al. Figures reproduced from this work are referenced following the journal's open-access Creative Commons practices.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parameterized Shape Adaptive Material: A New Design Method for Inclusive Sportswear</title>
<link href="https://hdl.handle.net/1721.1/139480" rel="alternate"/>
<author>
<name>Beem, Jennifer L.</name>
</author>
<id>https://hdl.handle.net/1721.1/139480</id>
<updated>2022-01-15T03:37:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Parameterized Shape Adaptive Material: A New Design Method for Inclusive Sportswear
Beem, Jennifer L.
Conventional sportswear design does not take into account the body size changes that many individuals experience (e.g., through pregnancy or menstruation). This study focuses on transforming multi-stable mechanisms into composite form to create shape-adaptive wearable materials for periods of body size and shape change. A corresponding predictive mathematical model is created to explore the geometric parameter â, the ratio of unit cell amplitude to width. This predictive tool feeds into an optimization tool that allows designers to create these shape-adaptive composites based on desired force-extension curve parameters. Experimental testing validates the predictive model portion of the optimization tool and shows good agreement for mid-range designs (â = 0.3 and 0.4), with some noted inconsistencies at lower values (â = 0.1 and 0.2). To illustrate how the optimization design tool works, two design examples are shown: one for expected shape change during pregnancy and one for targeted compression in swimwear. In addition to uniaxial testing samples, realistic apparel pieces with integrated shape-adaptive panels are created and pressure tested to understand how user perception may be affected. Initial pressure testing shows an improvement in pressure regulation in apparel pieces with standalone multi-stable panels, verifying that multi-stable structures can help achieve shape-adaptive properties in apparel.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Strategy for Consumer Products</title>
<link href="https://hdl.handle.net/1721.1/139478" rel="alternate"/>
<author>
<name>Marcus, Jonathan Bailey</name>
</author>
<id>https://hdl.handle.net/1721.1/139478</id>
<updated>2022-01-15T03:24:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Digital Strategy for Consumer Products
Marcus, Jonathan Bailey
There has been a bifurcation in the consumer goods sector. Many of the most successful players of the past century have used a traditional retail strategy to develop and sell manufactured products. However, in the past decade, the success of digital upstarts has instigated a shift in marketplace dynamics, and the incumbents will need to exercise a digital pivot in order to maintain relevancy and market share. &#13;
&#13;
In this thesis, I analyze the key industry trends driving the digitization of the consumer products industry. Based on this analysis, I then discuss and present strategy recommendations for legacy brands, digital-native brands, and new ventures. I find that many of the fastest-growing players in consumer goods can credit their success to internet-enabled models of management. By building their businesses on a foundation of adaptable, digital capabilities and technologies, these firms have positioned themselves to best serve the growing number of consumers who engage with and purchase products and brands via digital channels.&#13;
 &#13;
I conclude by positing that future-oriented and consumer-centric manufacturers and retailers will recognize that consumer culture is increasingly online and in order to capitalize on this new era of value creation, consumer product teams must consider their digital strategy. I speculate that successful firms will employ agile methodologies and a human-centered, data-driven approach within their organization to respond quickly to changing requirements, identify emerging trends, and create value for stakeholders within the changing retail environment.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Alternatives to AI Proctoring Software</title>
<link href="https://hdl.handle.net/1721.1/139477" rel="alternate"/>
<author>
<name>Kumar, Aditi</name>
</author>
<id>https://hdl.handle.net/1721.1/139477</id>
<updated>2022-01-15T03:29:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design Alternatives to AI Proctoring Software
Kumar, Aditi
With the increasing popularity of and avenues for online learning, a considered approach to academic integrity has emerged as a priority. Artificial intelligence (AI) enabled proctoring software has been touted as a solution. However, it relies on imperfect and biased technology. For this dissertation, the author partnered with an online education company to provide alternatives to AI proctoring that include assessment design and administration, honour code, and in-house authentication. In addition to recommending the alternatives, a phased implementation plan was also created.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operationalizing Psychophysiological Correlates of Mobile App User Experience</title>
<link href="https://hdl.handle.net/1721.1/139476" rel="alternate"/>
<author>
<name>Carlson, Ethan L.</name>
</author>
<id>https://hdl.handle.net/1721.1/139476</id>
<updated>2022-01-15T03:12:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Operationalizing Psychophysiological Correlates of Mobile App User Experience
Carlson, Ethan L.
As our mobile, digital lives have continued to grow, the usability of mobile applications has become a central design consideration for the organizations that deliver those applications and the users who consume them.  Existing methods for evaluating the usability of new designs and design changes have drawbacks.  Digital methods which track user actions and time spent are scalable, but it is difficult to know how a user was feeling when an action was taken, and therefore to know how to improve the experience.  Alternatively, qualitative research methods allow designers to interview and observe users in real time to obtain high quality data on app usability.  However, these methods are expensive and do not scale well to every product release, demographic, and geography.&#13;
 &#13;
‘Real Time UX’ is a phone case that measures several biometric signals available at the user’s hands and uses these data to infer stress, frustration, and engagement in the context of app behavior.  This insight can then be operationalized to provide automated usability feedback to mobile app designers (slow-loop feedback), or it can be integrated directly into the mobile app itself in order to adapt the app functionality to the user (fast-loop feedback).  This paper presents the motivation for and design of Real Time UX, potential applications of the platform, feedback from users, and opportunities for future work in the space.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing State-of-the-art Multiphase CFD Modeling for PWR Applications</title>
<link href="https://hdl.handle.net/1721.1/139473" rel="alternate"/>
<author>
<name>Pham, Monica V.</name>
</author>
<id>https://hdl.handle.net/1721.1/139473</id>
<updated>2022-01-15T03:32:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Advancing State-of-the-art Multiphase CFD Modeling for PWR Applications
Pham, Monica V.
Multiphase Computational Fluid Dynamics (M-CFD) has the potential to provide high fidelity simulation of complex boiling phenomena in Light Water Reactors (LWRs), thereby accelerating the development cycle and reducing the need for expensive large-scale experiments. M-CFD relies on two-phase closure models to consistently represent the relevant physical phenomena in flow boiling. However, the still incomplete understanding of the ability of these closures to accurately capture the underlying physics limits the adoption of M-CFD in reactor development and design optimization. Due to the interaction of complex physical phenomena present in subcooled flow boiling, local measurements are necessary to assess the performance of existing closures. Additionally, because previous validation was performed using low pressure data, measurements at high pressure are needed to understand the performance of multiphase closures at PWR conditions.   &#13;
&#13;
In this work, benchmarking was conducted using local measurements from subcooled flow boiling experiments that reproduced density ratios and scaled flow conditions corresponding to PWR operating conditions. Measured radial profiles of void fraction and bubble diameters from the DEBORA experiments were used to assess the performance of two-phase closures. The DEBORA experiments consist of vertical flow boiling of R12 in a circular pipe, in which experimental conditions were scaled to replicate PWR conditions. Eleven test cases at various flow conditions, heat flux, inlet subcooling, and pressure have been used in this systematic validation. This work leverages advancements in momentum closures to perform a systematic assessment of wall boiling representation by evaluating heat flux partitioning formulations and the related closure relations. The influence and sensitivity of bulk boiling and condensation models were also evaluated. Using separate effect assessments and recent advancements in experimental understanding, this work presents an optimal closure representation that demonstrates consistent predictions and is applicable to prototypical PWR conditions while identifying areas for future improvement.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Weapon Precision</title>
<link href="https://hdl.handle.net/1721.1/139472" rel="alternate"/>
<author>
<name>Kendall, Thomas P.</name>
</author>
<id>https://hdl.handle.net/1721.1/139472</id>
<updated>2022-01-15T03:05:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Optimizing Weapon Precision
Kendall, Thomas P.
An accurate soldier is a lethal soldier; lethal soldiers make an effective army, and an effective army is a deterrent to war. In this paper we aim to apply optimization methods to nonlinear trajectory models to make every soldier more lethal with their assigned weapon system (and hunters more humane with their rifles). We first develop a way to optimize the trajectory of the average soldier's bullet over a specific range that can be immediately implemented with negligible impact on current infrastructure, equipment, practices, and procedures. Next, we develop the mathematics behind an aiming device which considers the angle at which a weapon is aimed and the angle at which the weapon is tilted, and automatically adjusts to produce the optimal bullet trajectory over a specified range. We design this device while minimizing power consumption, minimizing weight, and avoiding the use of additional sensors such as laser range finders. Finally, we model a similar device intended for civilian hunters and military snipers that minimizes deviation at an estimated range.
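The kind of trajectory optimization described can be illustrated with a deliberately simplified model: grid-search the sight-in ("zero") distance that minimizes the worst-case deviation from the line of sight over an engagement range. This sketch uses a drag-free, flat-fire, small-angle model (far simpler than the thesis's nonlinear trajectory models), and all names and parameter values are illustrative assumptions.

```python
import math

# Toy zero-range optimization under a vacuum flat-fire model; G and all
# parameters are illustrative, not the thesis's ballistic model.
G = 9.81  # gravitational acceleration, m/s^2

def path_offset(x, zero, v):
    """Bullet height relative to the line of sight at range x, zeroed at `zero`."""
    drop = G * x * x / (2.0 * v * v)     # gravity drop at range x
    angle = G * zero / (2.0 * v * v)     # small launch angle that re-zeroes at `zero`
    return angle * x - drop

def best_zero(v, max_range, step=1.0):
    """Zero range minimizing the maximum absolute deviation out to max_range."""
    xs = [i * step for i in range(int(max_range / step) + 1)]
    def worst(zero):
        return max(abs(path_offset(x, zero, v)) for x in xs)
    return min(xs[1:], key=worst)
```

In this toy model the optimal zero lands near 0.83 times the maximum engagement range, balancing mid-range rise against far-range drop.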
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Urban Residence Options to Meet Zero Energy Requirements: Simulation-Based Tradespace Exploration of Yokohama Considering Energy Production, Consumption, and Life-Cycle Cost</title>
<link href="https://hdl.handle.net/1721.1/139471" rel="alternate"/>
<author>
<name>Kawano, Masato</name>
</author>
<id>https://hdl.handle.net/1721.1/139471</id>
<updated>2022-01-15T03:18:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Evaluating Urban Residence Options to Meet Zero Energy Requirements: Simulation-Based Tradespace Exploration of Yokohama Considering Energy Production, Consumption, and Life-Cycle Cost
Kawano, Masato
Most countries have announced ambitious decarbonization goals for 2050. The Japanese government has declared a target of reducing CO2 emissions by 46% from 2013 levels and has taken on the challenge of reaching 50%. Following this declaration, local governments are also developing specific decarbonization plans. Among them, the City of Yokohama plays a leading role as a chair among local governments; it has announced a renewable energy utilization strategy and is calling for dialogue on future issues. The purpose of this research is to clarify the future issues facing the energy system of the City of Yokohama. The target is the household sector, and a house is regarded as a complex system in this paper. A systems approach was used to explore the optimal system architecture. Stakeholder analysis confirms that prosumers will be essential players in future energy systems: the owner of a decarbonized home system will be a prosumer. To support architectural decisions, 360 concepts were defined and the performance of each concept, energy consumption and production, was simulated. As a measure of economic efficiency, the payback period was calculated from the life-cycle cost of each concept, and tradespaces based on performance and payback period were explored. The results show that when the PV capacity is large, the payback period shortens (to roughly ten years) and system performance improves. This research therefore recommends installing as much PV as possible, while a cogeneration system is not recommended because of its high gas consumption. To further shorten the payback period, a third-party-ownership scheme is preferable. Although the simulations show that the energy efficiency of all-electric and non-electric houses is comparable, non-electric houses use gas, so unless carbon-neutral gas becomes available, new non-electric houses are not recommended for achieving the decarbonization goal.
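In its most basic form, the payback comparison used here reduces to upfront cost divided by annual savings. The sketch below shows that simple-payback idea only; the names and yen framing are illustrative, and the thesis's calculation from full life-cycle cost is more involved than this toy version.

```python
# Simple-payback sketch (illustrative; the thesis uses life-cycle cost).
def payback_years(capex_yen, annual_savings_yen):
    """Years until cumulative annual savings cover the upfront investment."""
    if annual_savings_yen > 0:
        return capex_yen / annual_savings_yen
    return float("inf")  # a concept that never saves money never pays back
```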
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Human Planning in Maze Orienteering Problems</title>
<link href="https://hdl.handle.net/1721.1/139470" rel="alternate"/>
<author>
<name>Yang, Zhutian</name>
</author>
<id>https://hdl.handle.net/1721.1/139470</id>
<updated>2022-01-15T04:01:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Modeling Human Planning in Maze Orienteering Problems
Yang, Zhutian
To create socially intelligent artificial assistants for humans in complex, naturalistic search environments, we need to develop algorithms that build models of human planning given their past decisions. In this thesis project, I focused on modeling human planning in Maze Orienteering Problems (MOP), an optimization problem with the objective to maximize collected rewards within a time limit in a partially known maze.&#13;
&#13;
The project has two main components: developing planning algorithms to find approximate solutions to the MOP and using those algorithms to model human behavior with Bayesian inference.&#13;
&#13;
For the planning part, I designed a hierarchical planning framework that solves the MOP as a room-level orienteering problem with a Partially Observable Markov Decision Process (POMDP) inside each room. My evaluation of algorithms shows that a Closest-Room heuristic model for room-level planning performs comparably to Branch-and-Bound exhaustive search at a much smaller computational cost.&#13;
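A closest-room heuristic of this kind can be sketched as a greedy tour under a travel budget: repeatedly visit the nearest unvisited room while the budget allows, collecting its reward. The coordinates, rewards, and Euclidean travel costs below are illustrative assumptions, not the thesis's maze model.

```python
import math

# Greedy nearest-room sketch of a Closest-Room heuristic for an
# orienteering-style problem; all inputs are illustrative assumptions.
def closest_room_plan(start, rooms, budget):
    """rooms: dict name -> (x, y, reward); returns (visit order, total reward)."""
    pos, remaining = start, budget
    unvisited = dict(rooms)
    plan, total = [], 0.0
    while unvisited:
        # pick the unvisited room nearest to the current position
        name = min(unvisited, key=lambda r: math.dist(pos, unvisited[r][:2]))
        x, y, reward = unvisited.pop(name)
        cost = math.dist(pos, (x, y))
        if cost > remaining:  # next room is out of reach within the time limit
            break
        remaining -= cost
        pos = (x, y)
        plan.append(name)
        total += reward
    return plan, total
```

Unlike Branch-and-Bound, which searches over room orderings, this heuristic commits to one greedy choice per step, which is why its computational cost is so much smaller.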
&#13;
For the inference part, I implemented an online Bayesian inverse planning framework to fit candidate hierarchical planners to individual human traces. My human-modeling experiments show that the Closest-Room heuristic model also outperforms Branch-and-Bound in fitting humans’ room-level decisions and predicting which rooms they will visit next.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of sparse signaling schemes in fading wideband channels</title>
<link href="https://hdl.handle.net/1721.1/139469" rel="alternate"/>
<author>
<name>Yang, Kathleen</name>
</author>
<id>https://hdl.handle.net/1721.1/139469</id>
<updated>2022-01-15T03:08:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of sparse signaling schemes in fading wideband channels
Yang, Kathleen
The crowding of the frequency spectrum has pushed communication systems into the wideband regime, which requires a modulation scheme different from those previously used. Impulsive frequency shift keying, which is frequency shift keying with a duty cycle, is one such signaling scheme: it has been shown to perform better than code division multiple access and orthogonal frequency division multiplexing in the wideband regime without channel state information, but it lacks practicality because its maximum likelihood noncoherent receiver is a bank of frequency-selective filters. In this work, we first explore using a chipping sequence-based compressed sensing receiver, which is simpler and more practical than a bank of frequency-selective filters, to recover impulsive frequency shift keying signals. We show that when the number of chipping sequences equals the number of frequency-selective filters, the performance of the two receivers is similar. In addition to exploring a sequence-based receiver for impulsive frequency shift keying, we also make alterations to the modulation scheme. We develop the wideband time frequency coding scheme, which combines frequency shift keying with pulse position modulation, and show that the capacity of wideband time frequency coding is greater than that of impulsive frequency shift keying when the duty cycles are equivalent. As wideband time frequency coding is very similar to impulsive frequency shift keying, we also investigate the performance of a chipping sequence-based compressed sensing receiver for recovering its signals.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Reinforcement Learning in Factored Markov Decision Processes and Unknown Markov Games</title>
<link href="https://hdl.handle.net/1721.1/139468" rel="alternate"/>
<author>
<name>Tian, Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/139468</id>
<updated>2022-01-15T03:05:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Online Reinforcement Learning in Factored Markov Decision Processes and Unknown Markov Games
Tian, Yi
Reinforcement learning (RL) has gained increasing interest in recent years, being expected to deliver autonomous agents that can learn to interact with an environment. So far, the empirical successes rely heavily on enormous amounts of data collected during interaction, and are hence mostly limited to domains with exact simulators, such as video games and board games. Even with simulators, the data demands of RL are sometimes too much of a burden, not to mention that for domains in the physical world, like robotics and self-driving, collecting data through interaction is usually costly in terms of both money and time. Consequently, provably efficient methods are highly valued.&#13;
&#13;
The central problem to achieve data efficiency is to balance the trade-off between exploration and exploitation. The idea of optimism in the face of uncertainty from the bandit literature has been shown to achieve minimax optimality for Markov decision processes (MDPs) in the tabular case. However, such a model may be too general for some problems where certain structures allow for much more efficient learning. In the first part of this thesis, we consider the factored MDP model, where the transitions and rewards are factored. Assuming the factorization is known, we propose two optimism-based algorithms under the same model-based framework. One achieves minimax optimal regret guarantees for a rich class of factored structures, while the other enjoys better computational complexity with a slightly worse regret. We complement our algorithms by presenting structure-dependent lower bounds on regret for factored MDPs that reveal the difficulty of fully characterizing the lower bounds due to the intricacy of the structures.&#13;
&#13;
Another limitation of the MDP model is that it does not capture learning in a multi-agent environment. In the second part of this thesis, we study online learning in unknown Markov games, a problem that arises in multi-agent RL when the actions of the opponents are unobservable. We show that in this challenging setting, achieving sublinear regret against the best response in hindsight is statistically hard. We then consider a weaker notion of regret, competing with the minimax value of the game, and present an algorithm that achieves Õ(K^(2/3)) regret after K episodes. This is, to our knowledge, the first sublinear regret bound for online learning in unknown Markov games. Importantly, the regret bound is independent of the size of the opponents’ action spaces. Even when the opponents’ actions are fully observable, our regret bound improves upon existing analyses by an exponential factor in the number of opponents.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structurally Motivated Deep Learning for Genome Scale Protein Interaction Prediction</title>
<link href="https://hdl.handle.net/1721.1/139467" rel="alternate"/>
<author>
<name>Sledzieski, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/139467</id>
<updated>2022-01-15T04:00:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Structurally Motivated Deep Learning for Genome Scale Protein Interaction Prediction
Sledzieski, Samuel
Protein-protein interaction (PPI) networks have proven to be a valuable tool in systems biology to facilitate the discovery and understanding of protein function. However, experimental PPI data remains sparse in most model organisms and even more so in other species. Existing methods for computational prediction of PPIs seek to address this limitation, and while they perform well when sufficient within-species training data is available, they generalize poorly when specific types and sizes of training data are not available in the species of interest. Here, we predict physical interactions between two proteins using only their primary sequence, and maintain high accuracy with limited training data and across species. We combine advances in neural language modeling and structurally-motivated design to develop D-SCRIPT, a deep learning model which is interpretable and generalizable to species with limited training data. We show that a D-SCRIPT model trained on 38,345 human PPIs enables significantly improved functional characterization of fly proteins compared to the state-of-the-art approach. Evaluating the same D-SCRIPT model on protein complexes with known 3-D structure, we find that the inter-protein contact map output by D-SCRIPT has significant overlap with the ground truth. We apply this work for functional discovery in several non-model species and explore the viability of the D-SCRIPT framework for protein binding pocket classification. Our work suggests that recent advances in deep learning language modeling of protein structure can be leveraged for protein interaction prediction from sequence, even in species where little data is available.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Sudan Nile Water Withdrawals During the 20th Century Using a Water Balance Approach</title>
<link href="https://hdl.handle.net/1721.1/139466" rel="alternate"/>
<author>
<name>Woods, Natalie Elaina</name>
</author>
<id>https://hdl.handle.net/1721.1/139466</id>
<updated>2022-01-15T03:32:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Estimating Sudan Nile Water Withdrawals During the 20th Century Using a Water Balance Approach
Woods, Natalie Elaina
The Nile Basin boundaries define the catchment area for the longest river in the world, which is shared by 11 sovereign states, each with a varying amount of power in basin water negotiations. For several countries in the region experiencing rapid population growth, claiming a portion of shared surface water resources will be key to reducing poverty, increasing agricultural production, improving resilience to climate variability, and powering industrial growth. Egypt and Sudan currently claim the largest share of Nile waters, but exact estimates of withdrawals by Sudan over several years are difficult to find. The goal of this thesis is to estimate Nile water withdrawals by Sudan from the 20th century until the present using water balance concepts.&#13;
&#13;
The analysis of Sudan’s water balance relies on estimates of evaporation from the Nile River and constructed reservoirs, and on estimates of water consumed for agricultural production. This study derives estimates for both, using gridded climate data and reported national blue water evapotranspiration of agricultural commodities. Evaporative losses from the Nile River and major reservoirs within Sudan are estimated at 4.62±0.92 km³ annually, with a substantial contribution of 2.04±0.41 km³ yr⁻¹ evaporated from the Jebel Aulia and Roseires Reservoirs. Surface water withdrawals for primary crop and livestock production average 13.83±0.69 km³ yr⁻¹ over the most recent decade of available data. National agricultural production follows the same trend as total Nile water withdrawals because the agricultural sector dominates Sudan’s economy. Uncertainty around the water footprint of the entire agricultural sector and other non-evaporative losses such as aquifer recharge remains significant. For example, we estimate that about 2 km³ yr⁻¹ (anywhere between 1 and 4 km³ yr⁻¹) may be lost to groundwater storage, specifically the Nubian sandstone aquifer. We find that, as of 2005, Sudan’s total water withdrawals, estimated to be within the range of 13.5 to 17.5 km³ yr⁻¹, are close to the threshold of consuming its entire share of Nile waters under its agreement with Egypt. This result, along with planned and underway construction of hydroelectric dams and irrigation schemes elsewhere in the basin, emphasizes the need for reasonable and equitable water sharing and transparent, cooperative water resource management among Nile River riparian countries in the coming decades.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Evaluation of Regulatory Frameworks for the Development of Interstate Hydrogen Infrastructure in the United States</title>
<link href="https://hdl.handle.net/1721.1/139462" rel="alternate"/>
<author>
<name>Hernandez, Drake Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/139462</id>
<updated>2022-01-15T03:17:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Evaluation of Regulatory Frameworks for the Development of Interstate Hydrogen Infrastructure in the United States
Hernandez, Drake Daniel
Markets for natural gas, electric power, and oil, and the associated regulatory frameworks for developing infrastructure to move those commodities in the United States, are mature, having developed over the last century and a half. In this thesis, I frame hydrogen as a fundamentally different energy commodity from those currently under the purview of federal regulators and assess potential regulatory frameworks for the development of interstate hydrogen transmission infrastructure. This thesis combines qualitative and quantitative methods to assess the use of regulatory frameworks to enable the development of such an interstate hydrogen transmission network. I conduct a historical analysis of commodity market and infrastructure development in the United States for the oil, natural gas, and electric power sectors. I then conduct a cross-sectional analysis of other countries’ stated hydrogen strategies to assess why the United States might consider using hydrogen in its energy sector. To justify an investigation into regulatory frameworks for an interstate hydrogen network, I develop a linear program that evaluates the hydrogen transmission network, minimizing total expenditures on hydrogen based on power price and hydrogen demand assumptions. I find there are many cases in which the construction of a substantial hydrogen transmission network minimizes total expenditure on hydrogen within the United States. The thesis concludes with an evaluation of regulatory frameworks for the development of hydrogen transmission infrastructure. Across all frameworks assessed, I find an act of Congress is likely necessary if hydrogen is to play a substantive role in the United States’ future energy sector.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Virtual-Reality-Based Digital Twins in Automotive Manufacturing Process Validation</title>
<link href="https://hdl.handle.net/1721.1/139461" rel="alternate"/>
<author>
<name>Reilly, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/139461</id>
<updated>2022-01-15T03:02:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Assessment of Virtual-Reality-Based Digital Twins in Automotive Manufacturing Process Validation
Reilly, Daniel
This research analyzes the usage of the digital twin technology, IC.IDO, in the automotive new model launch cycle by assessing the tool's capability to identify issues in the digital design period, prior to testing in the physical phase. The automotive industry is constantly looking to reduce the costs associated with new vehicle launches and this tool is capable of creating digital twins that can be used to address design issues early on in development. At Nissan there was an opportunity to define specific use cases for IC.IDO and the potential savings.&#13;
&#13;
On a recent new vehicle launch, Nissan paid vendors an additional $60M-$70M due to design changes made after the design was initially released. It was determined that IC.IDO is capable of addressing design concerns that account for at least $3M-$10M of the total. On average, a technician's operation could be simulated and evaluated in 3 hours. With 281 operations, it would take at least 21 weeks to create and study process digital twins using IC.IDO. Additionally, the tool was found capable of simulating 9 of the 13 types of processes on the assembly line. With limited time during the design validation phases, it is necessary to prioritize which operations to focus on. Optimal tasks to use the tool on are manual, require hand tools, or use non-automated equipment. Tasks to avoid are operations with long wire harnesses or flexible parts, automated equipment, and certain aspects of assist equipment. It is evident that these types of tools are advancing rapidly, and their value will increase as they become able to handle more complex tasks.&#13;
 &#13;
Therefore, it is recommended that Nissan Smyrna spend up to $200k on a license of IC.IDO with a dedicated user starting in 2021. Adopting the tool with a dedicated user will move them out of the piloting phase and coincide with the upcoming new model design phase for a vehicle. It should also be considered that the tool has features not yet evaluated and is rapidly improving, which will unlock further benefits beyond what has been established. However, based on this tool's usage at companies like Ford and Boeing, it is known that it takes time to develop expertise, and only through dedicated resources being embedded into the development process will the real value be unlocked.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study of Chinese Mutual Insurance</title>
<link href="https://hdl.handle.net/1721.1/139460" rel="alternate"/>
<author>
<name>Nie, Gege</name>
</author>
<id>https://hdl.handle.net/1721.1/139460</id>
<updated>2022-01-15T03:14:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Study of Chinese Mutual Insurance
Nie, Gege
Mutual insurance, the first form of insurance, has been developing for several hundred years. The form has matured in developed areas such as Europe, North America, and Japan, while in China mutual insurance has only recently come to the market. With the rapid development of the Internet, mutual insurance—mostly sold on Internet platforms—is becoming more and more popular among Chinese customers. However, since this form of insurance is still at a very early stage of development, many problems arise. This paper investigates these problems and proposes plausible solutions. It also reviews the history of mutual insurance in China and the world in order to form a solid conclusion and predict future trends from a historical perspective.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Are changing margins factored into stock prices?</title>
<link href="https://hdl.handle.net/1721.1/139459" rel="alternate"/>
<author>
<name>Lin Kaishuo, Alfred</name>
</author>
<id>https://hdl.handle.net/1721.1/139459</id>
<updated>2022-01-15T04:01:48Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Are changing margins factored into stock prices?
Lin Kaishuo, Alfred
Stock returns are often associated with the value of the company. Research has found many viable variables for predicting future stock returns, including autocorrelation of stock prices, the P/E ratio, and more recently the GP/A ratio. Given clear evidence that the GP/A ratio is useful for predicting future stock returns, this paper asks whether changes in margin, defined as GP/net sales, are factored into stock prices. Qualitatively, a common hypothesis is that when a company improves its profit margin, the company becomes more profitable and increases its returns. In this paper, I verify the hypothesis quantitatively. &#13;
&#13;
In this paper, I explore three types of studies to verify the hypothesis: 1) a correlational study, 2) a portfolio study, and 3) a regression study. From these studies, I find that there is a significant linear relationship between margin changes and stock returns, that the margin change is directly proportional to both the current price difference and the forward price difference, and finally that the margin change variable is significant irrespective of the industry and the specific year the company is in.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leadership Development Through Mindfulness Based Techniques</title>
<link href="https://hdl.handle.net/1721.1/139458" rel="alternate"/>
<author>
<name>Harari, Tom</name>
</author>
<id>https://hdl.handle.net/1721.1/139458</id>
<updated>2022-01-15T04:04:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Leadership Development Through Mindfulness Based Techniques
Harari, Tom
Mindfulness, the act of present-moment non-judgmental awareness, has seen a resurgence in the business community. A centuries-old tradition with roots in Eastern philosophy, mindfulness is now being incorporated by leaders in the for-profit and non-profit worlds into their practice. To understand the phenomenological and neurophysiological implications of mindfulness, we examine the latest research and analyze how mindfulness might map to current leadership frameworks, namely the Four Capabilities model.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network and Workflow Design and Standardization in a Large Distribution Center</title>
<link href="https://hdl.handle.net/1721.1/139457" rel="alternate"/>
<author>
<name>Frigo, Clare</name>
</author>
<id>https://hdl.handle.net/1721.1/139457</id>
<updated>2022-01-15T03:09:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Network and Workflow Design and Standardization in a Large Distribution Center
Frigo, Clare
The retail industry is experiencing explosive growth in the digital segment and a shift in consumer delivery expectations. While distribution networks and their individual facilities were constructed to fulfill demand for the wholesale and retail channels, this growth requires companies to evaluate their network and facility operations to meet the delivery speed that digital consumers have come to expect. This thesis evaluates both the overall network design and the variability within daily operations that impacts overall speed to customer.&#13;
&#13;
Optimization was used to evaluate the overall distribution network. The demand distribution was used to determine where to place facilities, which channels to ship from each facility, and where to ship product from, in order to minimize transportation costs and transportation time to customer and to improve sustainability. The optimization reviews these key metrics for different potential network scenarios that could be implemented to improve the network.&#13;
&#13;
Within the distribution centers, variability in incoming orders causes variability in the time required to complete batches of orders (waves) before shipping them out. A model was developed based on standard times and wave data to capture real-time variability. This model can be used in conjunction with cross training to smooth the work across the major work areas and improve overall predictability.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Sourcing of Serial Production Processes in Jet Engine Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/139456" rel="alternate"/>
<author>
<name>Forehand, Brandy</name>
</author>
<id>https://hdl.handle.net/1721.1/139456</id>
<updated>2022-01-15T04:04:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Strategic Sourcing of Serial Production Processes in Jet Engine Manufacturing
Forehand, Brandy
Fabrication of an individual jet engine component at Pratt &amp; Whitney can go through upwards of 40 manufacturing processes, from an airfoil’s single crystal casting to thermal plasma spray coatings and wire EDM-drilled cooling holes. Many of these processes are extremely specialized, requiring special equipment, environmental controls, and expertise to produce at the precise tolerances required in aerospace.&#13;
&#13;
Utilizing contract manufacturers is an attractive and cost-effective option when lower unit process costs outweigh the associated transaction costs. However, it is not clear that Pratt &amp; Whitney is utilizing an integrated and cost-efficient strategy when making sourcing decisions. Decisions are made locally, by individual production areas with little visibility into overall company impact. Furthermore, outsourcing arrangements established to temporarily supplement capacity end up persisting and becoming longer term or permanent arrangements.&#13;
&#13;
Based on research with Pratt &amp; Whitney, this project arrives at a methodology to operationalize continuous analysis of transaction costs in order to arrive at an efficient sourcing decision.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation Roadmap and Real Options Analysis for Biopharmaceutical Technology Introduction</title>
<link href="https://hdl.handle.net/1721.1/139455" rel="alternate"/>
<author>
<name>Pan, Long Bin</name>
</author>
<id>https://hdl.handle.net/1721.1/139455</id>
<updated>2022-01-15T04:08:21Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Implementation Roadmap and Real Options Analysis for Biopharmaceutical Technology Introduction
Pan, Long Bin
Biologics manufacturing is complex, involving several unit operations for drug production. As the industry moves forward, new technologies are introduced to decrease cycle time, reduce costs, and improve quality for these processes. Amgen aims to innovate, improve, and transform through a culture of continuous improvement, delivering platforms and year-over-year productivity gains. The organization strives to improve the reliability, efficiency, agility, and differentiation of its drug manufacturing capabilities through the introduction of next-generation technologies. However, technology implementation at Amgen is highly complicated due to the diversity of products at different life-cycle stages, differences in manufacturing sites, developing departments, and technology usage, and stringent regulatory requirements for process changes in GMP facilities. These internal and external factors inadvertently lead to inconsistent technology implementation plans and inefficient execution strategies.&#13;
&#13;
The objective of this research is to identify best practices for biopharmaceutical project management in order to improve the current technology implementation roadmap at Amgen by targeting specific areas of opportunity. The roadmap aims to reduce waste caused by delays and rework, improve efficiency, and drive towards right-first-time implementation. The second objective of the internship project involves the evaluation of real options analysis as an alternative financial analysis approach to the discounted cash flow method currently used at Amgen. This approach adapts well to technology development, as decision makers are able to learn new information after each project phase and act upon the new data. Overall, the two tools work in conjunction to assist in investment decisions for new technologies and streamline technology implementation plans.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power influence in horizontal collaboration relationships</title>
<link href="https://hdl.handle.net/1721.1/139454" rel="alternate"/>
<author>
<name>Suarez Moreno, Juan David</name>
</author>
<id>https://hdl.handle.net/1721.1/139454</id>
<updated>2022-01-15T03:26:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Power influence in horizontal collaboration relationships
Suarez Moreno, Juan David
Supply chain horizontal collaboration has captured the attention of many researchers and practitioners. Horizontal collaboration offers multiple benefits, creating competitive advantages for companies and supporting their long-term sustainability. Although collaboration creates value for the supply chain, there is no evidence of what makes companies adopt these schemes, since many of these initiatives fail to deliver the expected outcomes. At the core of the collaboration process lies power as an enabler, since collaboration relationships arise from the interdependency between companies. &#13;
&#13;
This research explores the influence of power in the performance of horizontal collaboration. Using data from the Colombian Ministry of Transportation, a set of 3,276 dyads and 1,095 single companies were identified as performing consolidation during the year 2020. Three different power asymmetries were built to characterize power among these dyads: income, cargo, and network asymmetries.&#13;
&#13;
The effect of power asymmetries was evaluated on two outcome variables: the number of consolidated shipments and the shipping cost per kg. To do this, the augmented inverse propensity weight estimator method (AIPW) is used to analyze the average treatment effects empirically. A set of 16 experiments were conducted to understand the influence of the different asymmetries in the horizontal collaboration performance. &#13;
&#13;
The statistically significant results show that power asymmetries have a negative effect on the number of consolidated shipments, reducing them. However, the effects on the shipping cost per kg differ. Income and network asymmetries have a positive effect, reducing the shipping cost. Cargo asymmetry has the opposite effect on shipping cost, which increases as the asymmetry increases. &#13;
&#13;
Significant results are found for network and cargo asymmetry in reducing the number of consolidated shipments. No significant effect on the shipping cost is observable when looking at the asymmetries.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Playing It By Ear: Improvised Music Livestreaming During COVID-19</title>
<link href="https://hdl.handle.net/1721.1/139453" rel="alternate"/>
<author>
<name>Sugarman, Michael Philip</name>
</author>
<id>https://hdl.handle.net/1721.1/139453</id>
<updated>2022-08-09T20:08:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Playing It By Ear: Improvised Music Livestreaming During COVID-19
Sugarman, Michael Philip
The beginning of the COVID-19 pandemic in March 2020 was marked by widespread improvisatory practice, be it teachers and students improvising to accommodate online learning or individuals improvising safety tactics to avoid catching the virus. Within mere weeks of the pandemic rendering in-person concerts untenable, music communities adopted livestreaming on Twitch as an alternate mode of throwing events. This thesis studies a time of mass improvisation by examining how communities built around improvised music — which themselves are often supported by improvisatory DIY event-organizing practices — adapted from in-person to livestreamed events. Focusing on a period ranging roughly from the beginning of the COVID-19 lockdowns to the advent of the racial justice uprisings following George Floyd’s murder at the hands of police, this study shows how musicians, organizers, and audiences congregating in jazz, experimental music, and DJ scenes created a widespread, dispersed livestreaming infrastructure that became, at once, an artistic outlet, a community gathering place, and a formidable fundraising mechanism. Such infrastructure was synthesized from three unique components: extant technology and livestreaming practices, social formations that spring from improvised music and improvisatory DIY organizing, and community bonds that were unique to these music scenes.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cashing in on Student Data: Standardized Testing and Predatory&#13;
College Marketing in the United States</title>
<link href="https://hdl.handle.net/1721.1/139452" rel="alternate"/>
<author>
<name>Moussapour, Roya Madoff</name>
</author>
<id>https://hdl.handle.net/1721.1/139452</id>
<updated>2022-08-09T20:08:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Cashing in on Student Data: Standardized Testing and Predatory&#13;
College Marketing in the United States
Moussapour, Roya Madoff
In this thesis, I explore the ethics of educational data collection associated with standardized testing in K-12 schools in the United States. While the public has become aware of issues surrounding data collection, distribution, and analysis in online spaces, this discourse has not fully extended into education. I extend the discourse surrounding consumer data privacy to educational spaces in order to investigate how standardized testing organizations such as the College Board violate norms of privacy in an effort to profit off of the sale of student data. I argue that the College Board’s operation of the Student Search Service, a service that not only provides students with marketing outreach from universities but also provides universities and other organizations with large quantities of student data, is an example of surveillance capitalism that enables predatory marketing practices surrounding the college admissions process. I rely upon historical research, policy analysis, primary source research, and interviews in order to analyze the actions of the College Board and connect those actions to predatory practices within higher education, delving into a discussion of enrollment management, predatory lending, and for-profit colleges. Ultimately, I outline a need for greater transparency around organizational data practices, greater enforcement of existing regulations, and enactment of new privacy laws in order to minimize the potential for harm on K-12 students in the United States.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imaging Based Models to Improve Lung Cancer Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/139451" rel="alternate"/>
<author>
<name>Xiang, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/139451</id>
<updated>2022-01-15T03:50:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Imaging Based Models to Improve Lung Cancer Diagnosis
Xiang, Justin
Per the American Cancer Society, lung cancer is the second most common cancer in both men and women, and the leading cause of cancer death, making up almost 25% of all cancer deaths. As such, it is pivotal to better detect and locate lung cancer from chest radiographs (x-rays) and computed tomography (CT) scans, as well as accurately estimate the risk of future lung cancer, all while factoring in the importance of model explainability in clinical settings.&#13;
&#13;
Recent advances in deep learning have led to increased applications of machine learning to medical imaging. In this work, we seek to better understand lung cancer through the computer vision tasks of risk prediction, localization, and incorporating image priors across the clinical imaging modalities of chest radiographs and computed tomography scans. The task of lung cancer tumor risk prediction allows the model to identify current cancers and act as an objective second reader, or help radiologists flag risky exams. The task of localization of both present and future cancers can help radiologists ascertain current and future regions of interest that should be further examined. The task of incorporating image priors allows the risk prediction model to mimic the radiologist screening workflow of using multiple screening images when available.&#13;
&#13;
We develop our methods on chest radiograph data from the National Institutes of Health (NIH) dataset and low-dose computed tomography (LDCT) data from the National Lung Screening Trial (NLST) dataset. We study the aforementioned three tasks across these two imaging modalities and explain how they can improve patient care in clinical settings. Our results show that models based on LDCT can not only accurately detect current cancers but also provide longer-term risk assessment beyond what can be achieved using risk factors alone. Through this work we aim to improve clinical care, offering new tools to improve patient outcomes.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Interfaces for Data Analytics</title>
<link href="https://hdl.handle.net/1721.1/139449" rel="alternate"/>
<author>
<name>Wellens, Quentin</name>
</author>
<id>https://hdl.handle.net/1721.1/139449</id>
<updated>2022-01-15T04:01:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Natural Language Interfaces for Data Analytics
Wellens, Quentin
As more processes become data-driven, anyone should be able to gather insights from databases without needing to develop the complex computer skills typically required by data analytics software. We propose to design new paradigms in which users rely on their own natural language to analyze and visualize data. To that end, we develop three different approaches (unsupervised, rule-based, and supervised) to infer formal specifications from natural language utterances. Contrary to most other work, we developed these approaches in a low-resource environment using synthetically generated training sets, rather than expensive and labor-intensive expert annotations or crowd-sourced examples. Finally, we conducted a study to compare our proposed paradigm to drag-and-drop mechanisms. Not only does our best-performing model, Alcurve, achieve 86.3% test accuracy on real user input, it also enables users to be 30% more productive when solving analytical tasks, which further highlights the important usability improvements that language-based interfaces can provide.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aerodynamics and Impact Simulation of an Air-Dropped Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/139447" rel="alternate"/>
<author>
<name>Poe, Daniel Pekka</name>
</author>
<id>https://hdl.handle.net/1721.1/139447</id>
<updated>2022-01-15T03:58:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Aerodynamics and Impact Simulation of an Air-Dropped Ice Penetrator
Poe, Daniel Pekka
In order to investigate movement of the Ross Ice Shelf in Antarctica, an air-dropped ice penetrator will be employed. Dropping a seismic probe from a helicopter offers several advantages over sending out a conventional crewed mission, such as reduced transit time and access to hard-to-reach locations. However, this approach introduces a new set of problems to be solved. The penetrator must fall fast enough to guarantee rigid coupling to the ice shelf, but slow enough to avoid damaging internal components. Aerodynamic analysis is used to select a penetrator geometry, and to suggest a drop altitude of at least 5000 ft (1524 m). Detailed simulations of the impact reveal shock loads up to 566 G from a drop velocity of 42.5 m/s. Finally, the effects of steady wind are analyzed, and point to a maximum recommended wind speed of 7.5 m/s for drop operations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Venture Studios: Analyzing a New Asset in the Venture Ecosystem</title>
<link href="https://hdl.handle.net/1721.1/139445" rel="alternate"/>
<author>
<name>Muñoz Abreu, Nelson Dario</name>
</author>
<id>https://hdl.handle.net/1721.1/139445</id>
<updated>2022-01-15T03:33:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Venture Studios: Analyzing a New Asset in the Venture Ecosystem
Muñoz Abreu, Nelson Dario
One of the latest evolutions in the entrepreneurial ecosystem is the rise of the “venture studio” model. The venture studio model aims to systematically build new ventures by trying to combine the financial resources of a venture capital firm, the expertise of accelerators and incubators, and the drive of startup founders. Acting as “factories that build businesses,” venture studios are companies that apply venture building methodologies to systematically identify market opportunities, develop ideas for businesses that can pursue those opportunities, assemble founding teams to operate those businesses independently, and support the businesses’ growth from inception to spin-off.&#13;
&#13;
This research aims to provide an understanding of the venture studio model and how it differs from other methods of creating and supporting new ventures. The research explores the characteristics of venture studios and their structures, the details of the venture building models employed, and the model’s advantages and challenges from the point of view of venture investors. Finally, this research attempts to envision what lies ahead for the venture studio model and what impact it could have on the larger entrepreneurial ecosystem.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Application of Double Machine Learning Onto Genomics Data Associated with Amyotrophic Lateral Sclerosis</title>
<link href="https://hdl.handle.net/1721.1/139444" rel="alternate"/>
<author>
<name>Wang, Crystal</name>
</author>
<id>https://hdl.handle.net/1721.1/139444</id>
<updated>2022-01-15T04:06:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Application of Double Machine Learning Onto Genomics Data Associated with Amyotrophic Lateral Sclerosis
Wang, Crystal
Finding causal relationships between a dataset and an observed outcome is especially important when there is potential for meaningful interventions. One such area of focus is the biological setting, where there are many opportunities for diagnosis, prevention, and treatment research. Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disease for which there is no cure and relatively little is known about what causes the disease. Previous work has shown certain genes to be associated with ALS and has used machine learning to try to determine the causal features of ALS. In this thesis we experiment with Double Machine Learning [8] to find causal features of ALS. We apply this method to both synthetic and real datasets that are associated with ALS and explain the advantages and shortcomings of this methodology on genetics data where correlation is present.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rotational Transformation Methods for Radio Occultation and Passive Microwave Radiometry Colocation Analysis</title>
<link href="https://hdl.handle.net/1721.1/139442" rel="alternate"/>
<author>
<name>Halperin, Lucy</name>
</author>
<id>https://hdl.handle.net/1721.1/139442</id>
<updated>2022-01-15T04:02:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Rotational Transformation Methods for Radio Occultation and Passive Microwave Radiometry Colocation Analysis
Halperin, Lucy
Global Navigation Satellite System Radio Occultation (GNSS-RO) and passive microwave radiometry (MWR) provide useful atmospheric profiles for inputs into Numerical Weather Prediction models. However, both remote sensing techniques face unique challenges that require auxiliary atmospheric data to mitigate. GNSS-RO provides extremely high vertical resolution retrievals in the Marine Boundary Layer but by itself is unable to distinguish between the contributions of water vapor and the “dry” atmosphere. MWR instruments have inherent biases in antenna temperature. GNSS-RO and MWR measurements taken within the same atmospheric volume at approximately the same time are mutually beneficial: each sensing technique provides the constraints needed by the other to solve its aforementioned profiling issue. This work introduces a fast, approximate method for analyzing the presence of colocated GNSS-RO/MWR measurements that requires only Two-Line Element (TLE) MWR data. The method applies a rotational transformation to map GNSS-RO soundings into the coordinate system natural to a cross-track scanning MWR satellite. The rotational transformation method is compared to the typical “brute force” colocation determination method and found to compute colocations 20x faster, with an average accuracy within 1.5% of “brute force” colocated occultations. Two initial applications of the rotational transformation colocation determination method are explored: a comprehensive study of the colocations occurring among active GNSS-RO and MWR missions, and colocation analysis of a proposed MWR constellation aimed at maximizing colocations with the COSMIC-2 constellation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation and Characterization Testing of Liquid Fuel Cell Chemistry for Applications in Unmanned Underwater Vehicles</title>
<link href="https://hdl.handle.net/1721.1/139440" rel="alternate"/>
<author>
<name>Roley, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/139440</id>
<updated>2022-01-15T03:21:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Evaluation and Characterization Testing of Liquid Fuel Cell Chemistry for Applications in Unmanned Underwater Vehicles
Roley, Andrew
Previous work on Unmanned Undersea Vehicle (UUV) power systems has focused primarily on improved efficiency and energy density. However, these gains are often offset by the need for additional buoyant volume and by the drag penalties associated with this larger volume. While fuel cells have been proposed and implemented for both manned and unmanned undersea vehicles, they often rely on compressed and/or cooled liquid &#119867;₂ and &#119874;₂, with bulky containment structures either within or outside a pressure hull. &#13;
&#13;
Massachusetts Institute of Technology (MIT) Lincoln Laboratory (LL) has identified Liquid Fuel Cells (LFCs) (specifically Liquid-to-Liquid) as an especially beneficial energy source for UUVs. Literature examples exist which demonstrate LFC viability, although little has been published regarding applications to UUVs. Additionally, LFCs often make use of Gas Diffusion Layers (GDLs), despite the absence of gaseous species on the oxidation side, the reduction side, or both. This thesis seeks to investigate potential fuel cell improvements by eliminating GDLs from a Membrane Electrode Assembly (MEA), and to identify the best candidates for a near-neutrally-buoyant fuel cell.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Primitives in Human Manipulation of Complex Objects</title>
<link href="https://hdl.handle.net/1721.1/139439" rel="alternate"/>
<author>
<name>Stansfield, Stephan T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139439</id>
<updated>2022-01-15T03:38:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Dynamic Primitives in Human Manipulation of Complex Objects
Stansfield, Stephan T.
Humans are remarkably adept at interacting with complex objects and display an impressive level of dexterity in manipulating underactuated and nonlinear objects. Inspired by the task of quickly transporting a cup of coffee while inducing minimum internal oscillations, this study investigated the nature of human interactions with a complex object. An open-loop control strategy consisting of a composition of submovement and impedance dynamic primitives was proposed and simulated. The effect of shaping command inputs by implementing internal models with different levels of structure was analyzed.&#13;
&#13;
Previous work has proposed maximum smoothness optimization-based criteria to describe observed human interactions with flexible objects: minimum crackle of object (MCO) and dynamically constrained minimum jerk of hand (DCMJH). These models fail to reproduce human subject data in two ways: experimental hand velocities take on asymmetric bimodal profiles with shorter move durations, while the optimization-based models predict purely symmetric velocity trajectories. Additionally, the local minimum velocity between peaks was observed to increase with shorter move durations, while the optimization-based models predict an increasingly negative velocity minimum. Finally, by their nature these models serve a descriptive function but do not account for how motions may be planned and executed by the human central nervous system.&#13;
&#13;
Movement generation using an open-loop strategy of dynamic primitive composition based on an internal model was shown to be a competent descriptor of observed behavior and superior to the optimization-based models. The proposed models generated asymmetric bimodal velocity trajectories, and the choice of internal model influenced the relationship between minimum inter-peak velocity and movement duration, with some models reproducing the negative correlation observed in human subject data. Simulations that used internal models of a lower order than the simulated physical plant and employed feedforward force input fit observed motions better and with more biologically-feasible impedance values than those that employed more precise internal models and neglected feedforward force. These results suggest that internal models may play a key role in human interaction with complex objects, and that humans may rely on less detailed internal models to simplify interaction tasks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of power-law entrainment on bubble fragmentation cascades</title>
<link href="https://hdl.handle.net/1721.1/139438" rel="alternate"/>
<author>
<name>Gaylo, Declan B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139438</id>
<updated>2022-01-15T03:14:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Effects of power-law entrainment on bubble fragmentation cascades
Gaylo, Declan B.
This thesis considers the evolution of the bulk bubble-size distribution &#119873;(&#119886;,&#119905;) of large bubbles (Weber number &#119882;&#119890; &gt;&gt; 1) under free-surface entrainment described generally by an entrainment size distribution &#119868;(&#119886;) with power-law slope &#120574; and large-radius cutoff &#119886;ₘₐₓ. The focus is the interaction between turbulence-driven fragmentation and free-surface entrainment, and, for simplicity, other mechanisms such as degassing, coalescence, and dissolution are ignored. Of special interest is the equilibrium bulk bubble-size distribution [chemical formula], with local power-law slope [chemical formula], and the time scale &#120591;&#119888; to reach this equilibrium after initiation of entrainment. For bubbles with radii &#119886; &lt;&lt; aₘₐₓ, there are two regimes of [chemical formula] depending on &#120574;: a weak (&#120574; &gt; −4) and a strong (&#120574; ≤ −4) injection regime where [chemical formula] and [chemical formula], respectively. The weak regime provides a general explanation for the commonly observed −10/3 power law originally proposed by Garrett et al. (J. Phys. Oceanogr., vol. 30 (9), 2000, pp. 2163–2171), and suggests that different weak entrainment mechanisms can all lead to this result. For [chemical formula] exhibits a steepening deviation from a power law due to fragmentation and entrainment, similar to what has been previously observed, but here absent other mechanisms such as degassing. The evolution of &#119873;(&#119886;,&#119905;) to [chemical formula] is characterized by the critical time [chemical formula], where &#120576; is the turbulence dissipation rate and [chemical formula] is a new constant that quantifies the dependence on the size distribution of daughter bubbles created during fragmentation. 
For typical breaking waves, [chemical formula] can be quite small, limiting the time [chemical formula] when direct measurement of &#119873;(&#119886;,&#119905;) might provide information about the underlying entrainment size distribution.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive and Prescriptive Analytics for Airport Slot Allocation</title>
<link href="https://hdl.handle.net/1721.1/139437" rel="alternate"/>
<author>
<name>Schmedeman, Phillip D.</name>
</author>
<id>https://hdl.handle.net/1721.1/139437</id>
<updated>2022-01-15T03:51:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Predictive and Prescriptive Analytics for Airport Slot Allocation
Schmedeman, Phillip D.
Slot allocation is the primary form of strategic demand management practiced at airports globally to address congestion and reduce delay. To perform slot allocation, airport schedulers must account for detailed requests from hundreds of airlines for thousands of flights over a six-month season while adhering to variable airport capacities and the Worldwide Airport Slot Guidelines (WASG). This represents a highly complex combinatorial scheduling problem that has vast implications for airlines and passengers. While previous research has proposed a range of optimization models to support slot allocation, these models commonly assume a flight-centric approach, which may extend or eliminate passenger connections without accounting for the costs.&#13;
&#13;
This thesis develops an original approach to airport slot allocation that incorporates passenger considerations. The proposed multi-objective optimization model allocates slots according to the WASG and airport capacity constraints while minimizing one flight-centric metric---schedule displacement---and two passenger-centric metrics---infeasible connections and connection time. Since this approach requires passenger forecasts to account for costs, we use historical itinerary data and machine learning methods to predict passenger flows across a network of flights. We apply this predict-then-optimize framework using real-world data from Singapore Changi Airport to create slot assignments that achieve Pareto optimality in acceptable computation times. The results indicate that schedule-coordinated airports can reduce passenger costs from slot allocation, with relatively small adjustments to schedule displacement. Ultimately, the proposed multi-objective formulation provides a new paradigm that can create more attractive flight schedules at major airports worldwide, based on airport-level considerations, airline-level considerations, and, for the first time, passenger-level considerations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>COVID-19 Therapeutics – A landscape analysis using systematic reviews and clinical data</title>
<link href="https://hdl.handle.net/1721.1/139436" rel="alternate"/>
<author>
<name>Shehu, Elvis</name>
</author>
<id>https://hdl.handle.net/1721.1/139436</id>
<updated>2022-01-15T03:04:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">COVID-19 Therapeutics – A landscape analysis using systematic reviews and clinical data
Shehu, Elvis
2020 was a very unusual year due to the COVID-19 pandemic, which has caused many fatalities and is disrupting practically every aspect of our lives. It is unprecedented to see PubMed literature&#13;
entries on a subject go from 0 to ~ 90,000 in a year. This effect is a direct result of the necessity of the scientific community to share data and insights generated worldwide. One potential unintended consequence of producing such a sheer volume of literature in so short a time is that much of it is not carefully peer-reviewed and vetted, making it difficult to sift through the information and understand it in order to allow informed decision making.&#13;
&#13;
In this thesis, we conduct a critical evaluation of the scientific evidence and present the current landscape for COVID-19 therapeutics. We first discuss efforts to repurpose old drugs and to discover novel drugs against COVID-19. We then evaluate the clinical evidence of the most promising drug candidates that are approved or recommended for emergency use by relying on high quality systematic reviews as guided by the AMSTAR-2 tool and/or latest clinical evidence if no systematic reviews are available. Lastly, we discuss pressing challenges of the COVID-19 pandemic and provide conclusions and recommendations for future work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impact of Technology Progress on Bridging the Technological Valley of Death for Future Fusion Energy</title>
<link href="https://hdl.handle.net/1721.1/139435" rel="alternate"/>
<author>
<name>Kuribayashi, Shunsuke</name>
</author>
<id>https://hdl.handle.net/1721.1/139435</id>
<updated>2022-01-15T04:11:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Investigating the Impact of Technology Progress on Bridging the Technological Valley of Death for Future Fusion Energy
Kuribayashi, Shunsuke
Ensuring energy stability is an essential issue in achieving a sustainable society. Scientists therefore continue research in pursuit of the ideal energy source, which has not yet been achieved. Fusion energy has long been regarded as an ideal energy source because of its many merits. Many developed countries have been researching fusion energy for a long time, but it has not yet been commercialized because many technical challenges to commercialization remain. To overcome these difficulties, the ITER ("The Way" in Latin) Organization was established as an international project in 2007, but the ITER Organization still faces technical and organizational challenges, which are barriers to commercialization. This investigation aims to identify the factors that hinder the commercialization of fusion energy. The thesis discusses the possible strategies that the ITER Organization may take to achieve sustainable growth. The research applies the ARIES framework to perform the investigation.&#13;
&#13;
First, the current situation surrounding the fusion energy project is summarized through a literature review, an enterprise landscape analysis, and a stakeholder analysis. These analyses reveal some of the reasons why the ITER project faces the challenges of schedule delays and increasing construction costs. Following these analyses, an analysis of the current architecture and a holistic vision of the future are developed as a basis for proposing several alternative architecture concepts. These architecture concepts are then evaluated against the selected criteria set. The evaluation suggests the preferred concept is a Cost Data Analysis-based Organization. Finally, this study performs future-proofing for the selected architecture concept to confirm its robustness and formulates a high-level implementation plan for adopting the selected architecture concept.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grasping Static and Moving Targets with a Soft Drone: Control and Prediction</title>
<link href="https://hdl.handle.net/1721.1/139432" rel="alternate"/>
<author>
<name>Ubellacker, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/139432</id>
<updated>2022-01-15T03:27:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Grasping Static and Moving Targets with a Soft Drone: Control and Prediction
Ubellacker, Samuel
This thesis studies the problem of grasping static and moving targets with a Soft Drone, which is a quadrotor platform where the landing gear is replaced with a soft manipulator. Whereas traditional rigid aerial manipulators are intolerant to positioning errors and susceptible to large contact forces, the Soft Drone leverages a soft, tendon-actuated gripper, whose inherent compliance provides robustness against imprecision and mitigates disturbances. However, the Soft Drone still requires the design of control algorithms for grasping as well as prediction methods to track a moving target.&#13;
&#13;
The first part of the thesis focuses on control algorithms for grasping a static target with specific consideration towards real-world conditions. Unmodeled external disturbances, such as the added mass of the target post-grasp, are estimated and compensated through an adaptive controller. When a motion capture system is not available, the target is localized using purely onboard sensors through a perception-based approach. Experimental results are presented which show that the adaptive control scheme is capable of driving the tracking error of a static grasp trajectory asymptotically to zero, despite external disturbances. Initial results of the perception-based grasp are also shown using a photo-realistic simulator.&#13;
&#13;
The second part of the thesis focuses on grasping a moving target. Estimation and prediction of the target’s state become key considerations when the target is moving. We employ an Extended Kalman filter for state estimation and use regularized polynomial fitting to predict the target’s future trajectory. This information is used in a linear model predictive controller, which enables the quadrotor to track the target’s state with minimal error. Simulation results show that our estimation and prediction approaches are robust against varying levels of noise and target predictability and that the control design enables successful grasping of a moving target.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autoclaved Aerated Concrete Tile Vaults for Lightweight Floor Systems</title>
<link href="https://hdl.handle.net/1721.1/139431" rel="alternate"/>
<author>
<name>Jagoe, Grace Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/139431</id>
<updated>2022-01-15T04:10:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Autoclaved Aerated Concrete Tile Vaults for Lightweight Floor Systems
Jagoe, Grace Anne
In the construction of multistory buildings, the reinforced concrete flat slab is a preferred floor system in part due to ease of formwork construction, which does not require skilled labor. However, the flat slab carries load primarily in bending, forcing much of the concrete section to crack because of the low tensile capacity of concrete. Thus, up to 61% of the material in the floor system may contribute to the structural weight without carrying substantial load. Material inefficiencies such as this are expensive not only from a cost perspective, but also from an environmental perspective. The embodied carbon of a structure, the emissions of CO2e associated with material consumption, is directly proportional to the weight of material. With the demand for new construction on the rise, the urgency for more sustainable buildings necessitates more efficient use of material to reduce the embodied carbon associated with structural floor systems. &#13;
&#13;
This thesis explores the use of tile vaults as permanent formwork for concrete floor systems. The proposed system is lighter weight than traditional flat reinforced concrete slabs due to the structural efficiency of material: concrete carries load to the supports in compression and steel resists the outward thrust in tension. Drawing upon traditional masonry construction techniques, a square groin vault can be built using lightweight autoclaved aerated concrete (AAC) tiles and fast-setting mortar without the need for complex falsework. The vaulted floor is designed using equilibrium calculations for various load cases to ensure stability throughout the lifespan of the structure. Alternative geometries and concrete mixes are studied to optimize the system, resulting in a vaulted floor with 67% reduction in structural weight and 61% reduction in embodied carbon compared to a conventional concrete flat slab. Finally, experimental testing validates three possible mortars for the use with AAC tiles for the construction of efficient vaulted floor systems.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High Throughput, Multiplex Quantification via Nucleic Acid Chemical Reaction Network Perturbation</title>
<link href="https://hdl.handle.net/1721.1/139430" rel="alternate"/>
<author>
<name>Wu, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/139430</id>
<updated>2022-01-15T03:29:25Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">High Throughput, Multiplex Quantification via Nucleic Acid Chemical Reaction Network Perturbation
Wu, Emily
High throughput, multiplexed quantification has the potential to transform molecular diagnostics. For example, proteomic analysis could be a powerful medical tool to assess a patient's current state of health. However, most of the widely used molecular quantification techniques have yet to achieve both high throughput and multiplexing. This work proposes perturbative quantification, a nucleic acid chemical reaction network-based approach as a potential solution, and builds the foundation for this approach. The fundamental idea of perturbative quantification is to translate the molecular composition of a sample into a nucleic acid signature via perturbation of a nucleic acid chemical reaction network. This signature consists of signal nucleic acid strands with a set of concentrations unique to the sample, and can be efficiently read out by sequencing. A trained machine learning network can then be used to determine the molecular composition of the sample that produced the nucleic acid signature, thus quantifying the sample. &#13;
&#13;
This thesis provides a proof of concept for perturbative quantification by simulating DNA chemical reaction networks perturbed with DNA strands as the target molecules, and uses the simulated data to train multilayer perceptron (MLP) networks to quantify DNA samples. On the experimental side, this work proposes a potential implementation of perturbative quantification, and develops some of the necessary methods. Lastly, a data analysis technique to extract signal DNA sequence counts from noisy sequencing data was also developed. Together, these steps lay the groundwork for realizing perturbative quantification as a high throughput, multiplex approach to quantification.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System for High-Throughput Materials Exploration Driven by Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/139429" rel="alternate"/>
<author>
<name>Siemenn, Alexander E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139429</id>
<updated>2022-01-15T03:39:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A System for High-Throughput Materials Exploration Driven by Machine Learning
Siemenn, Alexander E.
Functional materials have vast, high-dimensional composition spaces, which make discovering optimized compositions intractable with conventional synthesis tools. Conventional experimental methods of exploring material composition spaces are slow and resource-intensive because they are manual processes requiring trial-and-error experimentation. Thus, the question is posed: How can we design an optimized functional material from this vast, high-dimensional composition space such that it has high performance for a given application?&#13;
&#13;
In this thesis, machine learning algorithms are integrated into novel, high-throughput synthesis hardware to accelerate the rate of material composition exploration by 10000x relative to these conventional methods. First, a novel inkjet droplet deposition hardware system is constructed to generate arrays of unique functional material compositions in the form of droplets using fluid mechanics and motor control theory. Second, computer vision and Bayesian optimization machine learning algorithms are integrated into the droplet synthesis loop to autonomously discover synthesis conditions that generate optimized droplets without intervention from a domain expert. Third, multiphysics models are developed to simulate the performance of functional material devices across the gamut of environmental conditions without having to run expensive laboratory experiments. The culmination of these three processes developed in this master's thesis provides validated methods for driving high-throughput, low-cost materials exploration and optimization to be further explored in my doctoral thesis.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanomechanical Analysis of Coronavirus Spike proteins and Correlation with Infectivity and Lethality</title>
<link href="https://hdl.handle.net/1721.1/139428" rel="alternate"/>
<author>
<name>Hu, Yiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/139428</id>
<updated>2022-01-15T03:24:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Nanomechanical Analysis of Coronavirus Spike proteins and Correlation with Infectivity and Lethality
Hu, Yiwen
The novel coronavirus disease, COVID-19, has spread rapidly around the world. Its causative virus, SARS-CoV-2, enters human cells through the physical interaction between the receptor-binding domain (RBD) of its spike protein and the human cell receptor ACE2. As an increasing number of SARS-CoV-2 variants circulate globally, estimates of the infectiousness and lethality of newly emerging strains are important. Here, we provide a novel way to develop a deeper understanding of coronavirus spike proteins, connecting their nanomechanical features – specifically the vibrational spectrum and quantitative measures of mobility – with virus lethality and infection rate. The key result of our work is that both the overall flexibility of the upward RBD and the mobility ratio of RBDs in different conformations are significant factors that show a positive scaling with virus lethality and an inverse correlation with the infection rate. A quantitative model is presented to make predictions on the infectivity and lethality of SARS-CoV-2 variants based on molecular motions and vibrational patterns of the virus spike protein. Our analysis shows that epidemiological virus properties can be linked directly to purely nanomechanical, vibrational aspects, offering an alternative way of screening new viruses and mutations against high threat levels, and potentially exploring novel ways to prevent infections by interfering with these nanoscale motions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods to Reduce Backlogged Maintenance of Los Angeles Class Submarines</title>
<link href="https://hdl.handle.net/1721.1/139425" rel="alternate"/>
<author>
<name>Musselwhite, Steven Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/139425</id>
<updated>2022-01-15T03:28:17Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Methods to Reduce Backlogged Maintenance of Los Angeles Class Submarines
Musselwhite, Steven Andrew
The United States Navy’s submarine fleet operates independently in high-risk situations around the globe. These missions are vital to national security, requiring the vessels to maintain very high standards of material condition and readiness. However, increased operational needs, personnel shortages in the civilian workforce, and other factors have resulted in a significant backlog in submarine maintenance. Submarines are governed by stricter standards than other naval assets, preventing them from deploying until required preventive maintenance items and inspections have been completed. This thesis investigates historical performance data to build predictive models for component failures that could be used to shift periodicities for preventive items and reduce the existing backlog.&#13;
&#13;
Test components from the Los Angeles class of attack submarines were chosen for this investigation. Non-parametric and parametric models are fitted to these components, providing quantitative methods to manage the risks associated with periodicity shifts. This process can identify components that consistently fail within the existing periodicity as well as those that have successfully operated beyond that point due to previous deferrals. This presents an opportunity to improve the efficiency of submarine maintenance, although the quality of the Navy’s records was identified as a potential limiting factor.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Sanitation Systems for Densely Populated Regions: Design, Prototyping, and Systems Value Analysis</title>
<link href="https://hdl.handle.net/1721.1/139424" rel="alternate"/>
<author>
<name>Tsang, Andrew Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/139424</id>
<updated>2022-01-15T03:06:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Decentralized Sanitation Systems for Densely Populated Regions: Design, Prototyping, and Systems Value Analysis
Tsang, Andrew Lee
Approximately 7.5 billion people presently live on Earth, and 2.3 billion of them lack access to basic sanitation facilities such as toilets or latrines. The International Water Association estimates that 80% of all wastewater is discharged into waterways. Untreated wastewater affects a community as easily as water flows. Toilets with septic tanks and latrines are the primary repositories for human waste today. However, the essential subsequent task of disposing of that fecal sludge or septage is rarely done in a safe manner. A lack of safe, official dumping sites means this sludge and septage is discreetly disposed of in waterways, pits, or drains, which affects local health and aesthetics.&#13;
&#13;
The main question posed in this thesis is “What are cost-effective ways to build sanitation infrastructure in developing countries?” This thesis presents the design of a decentralized system conceptualized, prototyped, and analyzed using tools of systems engineering and systems analysis. The development of a lab-scale processor is presented; the lab-scale system processes 3.5 kg of 20% sludge per hour. Using a trade space analysis, the system is compared to other methods of fecal sludge processing; a decentralized method can achieve similar health outcomes for 15-25% of the cost per person served. A systems complexity analysis was done to compare options, and the economic implementation was then analyzed using Monte Carlo simulation. The findings suggest a decentralized model is very cost-effective, but not cost-effective enough to be a standalone business outside of government purchase.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Naval Surface Ship Maintenance: An Unconventional Approach to Improve Performance</title>
<link href="https://hdl.handle.net/1721.1/139423" rel="alternate"/>
<author>
<name>Sears, Darien A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139423</id>
<updated>2022-01-15T03:13:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Naval Surface Ship Maintenance: An Unconventional Approach to Improve Performance
Sears, Darien A.
This thesis presents an alternative approach towards meeting the challenge of delays within Private Sector repair of Naval Surface Ships. The quest to create greater efficiency, effectiveness, and excellence at the workplace has been a source of discussion and debate in the Navy for decades, particularly within the complex Private Sector Surface Ship maintenance enterprise. Recently, the Chief of Naval Operations (CNO) emphasized the priority to improve depot-level maintenance of Navy ships, which directly impacts our readiness to project power against our most lethal adversaries. The CNO presented the delay and overall under-performance of depot-level maintenance as a "challenge [that] is not new."[25] I submit that there is too much focus on overcoming this ship repair issue through the use of money and policy and not enough attention directed toward improving the underlying human relationships involved in executing these complex jobs.&#13;
&#13;
To explore this concept, this thesis describes the main stakeholders involved in the Navy non-nuclear surface ship maintenance enterprise; briefly outlines the current maintenance process from contract formation to ship delivery; and discusses the known factors contributing to private sector surface ship maintenance delays. I make use of direct reports from the Navy, formal analytical reports, other relevant literature, and interviews conducted with 20 respondents including Navy Commanding Officers, a Private Shipyard General Manager, and a Regional Maintenance Center Waterfront Operations Director, among others. Four themes emerged as areas of suggested improvement: a refocused purpose and vision, updated motivation techniques, more systems thinking, and effective communication and coordination. I also present a case study of two private shipyards at one company which have practiced an alternative approach to maintenance challenges in relation to findings within the four themes. An analysis of this case in the context of the broader literature and in connection to the four themes led to further insights, recommendations, and areas for future research.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Effective AIOps Implementation</title>
<link href="https://hdl.handle.net/1721.1/139422" rel="alternate"/>
<author>
<name>Hua, Yunke</name>
</author>
<id>https://hdl.handle.net/1721.1/139422</id>
<updated>2022-01-15T04:08:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Effective AIOps Implementation
Hua, Yunke
Artificial Intelligence in IT Operations, or AIOps, has gained considerable attention and raised expectations over the past few years. However, implementing AIOps in organizations is challenging. This research aims to guide effective enterprise-level AIOps implementation by building a general framework using systems thinking methodologies. The proposed framework is structured around the social aspect, the technical aspect, the socio-technical intersection, system dynamics, and the environmental factors of AIOps implementation. Each aspect draws on a corresponding methodology from systems thinking theory.&#13;
&#13;
This research is valuable to organizations that want to implement AIOps or are in the process of doing so. First, it helps to outline the whole problem space, including both social and technical aspects. Second, it proposes a comprehensive framework that can be used as a reference for guiding AIOps implementation in real-world scenarios. Based on its actual situation, each organization can build its own AIOps reference model using this framework. The framework bridges gaps between various teams, enabling effective cross-disciplinary collaboration. It also gives all AIOps-related stakeholders a big picture and a way to think holistically, keeping their expectations aligned. Moreover, with the systems thinking methodologies embedded in the framework, organizations can plan, communicate, and manage risk effectively throughout the AIOps implementation process.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ship design through Axiomatic Design approach, sustainable engineering principles and artificial intelligence methods</title>
<link href="https://hdl.handle.net/1721.1/139421" rel="alternate"/>
<author>
<name>Fardelas, Georgios</name>
</author>
<id>https://hdl.handle.net/1721.1/139421</id>
<updated>2022-01-15T03:20:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Ship design through Axiomatic Design approach, sustainable engineering principles and artificial intelligence methods
Fardelas, Georgios
Environmental sustainability, as well as social and economic well-being, must be considered in every stage of a product lifecycle, from conceptual design to retirement. Even though this sustainability-centric approach is a critical driver for innovation, it also increases design complexity. Maritime transport now accounts for a large share of transport demand, and the importance of sustainable ship design is growing, not only for ethical and legislative but also for competitive reasons. Designing a sustainable ship that accounts for all those aspects is therefore a complex problem. One way to manage the complexity is to identify and address the functional couplings of the system at an early stage of the ship design. The Axiomatic Design methodology has been used to address such challenges in engineering systems design, and therefore this thesis investigates the conceptual design of a merchant ship's conventional propulsion system through the lens of the Axiomatic Design framework and known sustainable engineering principles. A Bayesian machine learning technique is proposed as a data-driven method for calculating the probability of achieving specific sustainability-related functional requirements, selecting the best design parameters among the proposed alternatives, and identifying hidden design couplings that the designers could not identify in the conceptual design stage. The case presented in this thesis can provide a scalable basis for total ship design following sustainable engineering principles in two respects: 1) Axiomatic Design as a methodology to control the complexity of sustainable ship design, and 2) Bayesian machine learning as a supportive tool for improving the system's architecture and assessing its sustainability impact.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Robot Systems and Learning</title>
<link href="https://hdl.handle.net/1721.1/139419" rel="alternate"/>
<author>
<name>Kosowsky-Sachs, Alon</name>
</author>
<id>https://hdl.handle.net/1721.1/139419</id>
<updated>2022-01-15T04:06:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multimodal Robot Systems and Learning
Kosowsky-Sachs, Alon
In this work we broadly explore the engineering design and system analysis of a multimodal robotic environment. We first give background on why this type of system is unique, describing the different approach we take to sensing, dynamics, and control. We then delve into the robot itself and review our development of a Python control library enabling a high-level abstraction of low-cost hardware. Next we explain the multimodal sensing and physical environment we created for the robot, including some of the initial challenges that forced critical design decisions. Following that, we explain different methods for multimodal representation learning that we tried, and reveal the difficulties we discovered in this task. Finally, we explore some critical takeaways and advocate for a specific path of future work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hillbilly Talkback: Co-Creation and Counter-Narrative in Appalachia</title>
<link href="https://hdl.handle.net/1721.1/139417" rel="alternate"/>
<author>
<name>Justice, Elon B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139417</id>
<updated>2022-08-09T20:07:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Hillbilly Talkback: Co-Creation and Counter-Narrative in Appalachia
Justice, Elon B.
The Appalachian region has been systematically stereotyped in popular media representations for over a century, contributing to many of the structural, economic, and psychological challenges faced by those who live there. In order to solve this issue, it is necessary to produce compelling counter-representations which undermine the dominant regime of representation around Appalachia.&#13;
&#13;
In this thesis, I explore some of the most common image types used to represent Appalachia in popular media and assess the potential of co-creative documentary practices to create representations which challenge these harmful images. I begin with an explanation of the importance of representation, drawing from the work of Stuart Hall in cultural studies, and an introduction to co-creative methodologies in media production. Next, I recount the history of four tropes commonly used to represent Appalachia in popular media. Finally, I examine two co-creative documentaries set in the Appalachian region – Elaine McMillion Sheldon’s Hollow and my own The Appalachian Retelling Project – to assess these projects’ approaches to co-creation and the counter-narratives that emerge from them. Ultimately, I argue that co-creation is an effective methodology for producing compelling counter-representations of Appalachia and of other groups that have been similarly, systematically misrepresented.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imitation Learning for Sequential Manipulation Tasks: Leveraging Language and Perception</title>
<link href="https://hdl.handle.net/1721.1/139416" rel="alternate"/>
<author>
<name>Kim, Dain</name>
</author>
<id>https://hdl.handle.net/1721.1/139416</id>
<updated>2022-01-15T03:57:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Imitation Learning for Sequential Manipulation Tasks: Leveraging Language and Perception
Kim, Dain
As robots are increasingly being utilized to perform automated tasks, effective methods for transferring task specifications to robots have become imperative. However, existing techniques for training robots to perform tasks often depend on rote mimicry of human demonstrations and do not generalize well to new tasks or contexts. In addition, learning an end-to-end policy for performing a sequence of operations for a high-level goal remains a challenge. Transferring sequential task specifications is a difficult objective, as it requires extensive human intervention to establish the structure of the task including the constraints, objects of interest, and control parameters.&#13;
&#13;
In this thesis, we present an imitation learning framework for sequential manipulation tasks that enables humans to easily communicate abstract high-level task goals to the robot without explicit programming or robotics expertise. We introduce natural language input to the system to facilitate the learning of task specifications. During training, a human teacher provides demonstrations and a verbal description of the task being performed. The training process then learns a mapping from the multi-modal inputs to the low-level control policies. During execution, the high-level task instruction input is parsed into a list of sub-tasks that the robot has learned to perform.&#13;
&#13;
The presented framework is evaluated in a simulated table-top scenario of a robotic arm performing sorting and kitting tasks from natural language commands. The approach developed in this thesis achieved an overall task completion rate of 91.16% on 600 novel task scenes, with a sub-task execution success rate of 96.44% on 1,712 individual “pick” and “place” tasks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advanced Laboratory Exercises for MIT’s Electronics First Curriculum</title>
<link href="https://hdl.handle.net/1721.1/139415" rel="alternate"/>
<author>
<name>Kent, Sean Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/139415</id>
<updated>2022-01-15T03:02:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Advanced Laboratory Exercises for MIT’s Electronics First Curriculum
Kent, Sean Jay
Electronics First is a new introductory course in electrical engineering and electronics which has been in development at MIT over the past several years. Its goal is to provide students with practical, hands-on experience working with electronics. The course is structured around a series of laboratory exercises designed to familiarize students with important, fundamental concepts within the field. Each exercise is accompanied by a physical circuit which reinforces the topics presented in the lab. At the end of each lab, students build the circuit and have the opportunity to observe the theory in action.&#13;
&#13;
The purpose of this project is to expand upon the existing course material by providing additional coverage of a number of more advanced topics. These topics include the application and design of buck, boost, and resonant converters. This project also introduces a new laboratory exercise for MIT’s Power Electronics Laboratory course (6.131) based on the boost converter circuit developed for Electronics First.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Information-centric Algorithm for Feature Extraction in High-dimensional Data</title>
<link href="https://hdl.handle.net/1721.1/139414" rel="alternate"/>
<author>
<name>Jin, Jiejun</name>
</author>
<id>https://hdl.handle.net/1721.1/139414</id>
<updated>2022-01-15T03:40:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Information-centric Algorithm for Feature Extraction in High-dimensional Data
Jin, Jiejun
This thesis develops a novel technique for extracting features in high-dimensional data. The proposed method is based on the concepts of maximal correlation and local information theory, which demonstrate the importance of the information vector space in feature extraction. More specifically, a hidden Markov model is used to capture the relation between high-dimensional data and their low-dimensional features. Feature extraction is framed as an optimization problem of determining the corresponding information vector space. Several approaches are proposed to solve this problem, and mathematical proofs are provided to validate their effectiveness.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physician Entrepreneurship: Evidence from Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/139413" rel="alternate"/>
<author>
<name>Greenblatt, Wesley H.</name>
</author>
<id>https://hdl.handle.net/1721.1/139413</id>
<updated>2022-01-15T03:29:48Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Physician Entrepreneurship: Evidence from Massachusetts
Greenblatt, Wesley H.
Although there have recently been both signs of growing interest in entrepreneurship among physicians and claims of a paucity of entrepreneurial activity in healthcare more generally, there has been little systematic evidence on the extent, type, and drivers of entrepreneurship by physicians. Physician involvement in entrepreneurship is thought to result in more innovative and financially successful healthcare companies. I matched the universe of physicians holding a Massachusetts medical license in 2017 with the Massachusetts new business registration records for 1960-2017 to identify the companies founded by physicians. While 19.2% of the 33,770 physicians holding a Massachusetts license in 2017 had founded at least one new business, 33.9% of physicians who graduated from medical school in 1974-1978 had founded a business. A total of 9,501 companies were founded, of which 66.0% are clinical practice, real estate, or practice management companies; 7.4% are companies in the public interest, including advocacy, public health, and philanthropy; 5.6% are biotechnology, healthcare information technology, or medical device companies; and 18.5% are other business pursuits. For physician entrepreneurs, the mean time from medical school graduation to company founding is 20.2 years. Regression analysis demonstrates that gender, medical school attended, and specialty are related to both the rate and type of entrepreneurship. Taken together, these findings suggest physicians are robustly involved in entrepreneurship, although there is evidence of substantial disparities by gender.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning an Inclusive Indigenous Energy Transition: Lessons from Tribal Federal Policy and Energy Development to Date</title>
<link href="https://hdl.handle.net/1721.1/139410" rel="alternate"/>
<author>
<name>Nabahe, Sade Kailani</name>
</author>
<id>https://hdl.handle.net/1721.1/139410</id>
<updated>2022-01-15T03:07:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Planning an Inclusive Indigenous Energy Transition: Lessons from Tribal Federal Policy and Energy Development to Date
Nabahe, Sade Kailani
In 2019, Governor Michelle Lujan Grisham announced New Mexico’s dedication to combating climate change and signed the Energy Transition Act (ETA). The ETA calls for 50% of New Mexico’s electricity to be generated from renewable energy resources by 2030, 80% by 2040, and 100% carbon-free generation by 2045, dramatically affecting how New Mexico gets its energy. These effects will impact some regions and populations more than others. And these issues are not unique to New Mexico.&#13;
&#13;
Indigenous people will be particularly affected due to a long-term reliance on fossil fuels. Since 2003, fossil fuels have provided tribes with over $11.4 billion in royalties, which are used to maintain public infrastructure, run schools, and provide community services. For some tribes, royalties support most tribal operations. For instance, coal royalties supplied 50 percent of the Crow Indian reservation’s funds and oil royalties provided 90 percent of the Three Affiliated Tribes’ revenue. To mitigate impacts, tribes can tap into renewable energy resources on their land. However, current federal policies, processes, and services prevent tribes from doing so.&#13;
&#13;
The goal of this body of work is to inform federal and state leaders of how current policies will negatively impact indigenous peoples and perpetuate energy injustice. The paper also looks at how these issues play out in real time in New Mexico, a state that has tremendous renewable energy potential and a large indigenous presence but must grapple with a long history with coal, oil, and natural gas. Lessons learned are drawn from tribal federal policies and tribal energy development experience. In the end, the paper develops policy recommendations in the hope of creating a more inclusive, equitable indigenous energy future.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Tale of Two Sovereignties: Public Health and Fundamental Rights in COVID-Era Judicial Reasoning</title>
<link href="https://hdl.handle.net/1721.1/139409" rel="alternate"/>
<author>
<name>Cheng, Chung Hon M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139409</id>
<updated>2022-01-15T03:25:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Tale of Two Sovereignties: Public Health and Fundamental Rights in COVID-Era Judicial Reasoning
Cheng, Chung Hon M.
The COVID-19 pandemic has brought to the forefront of law important questions about what to do when public health and constitutionally-guaranteed rights—or public health and political sovereignties—come into conflict. In liberal democracies, courts are usually the authority tasked with resolving clashes of this type. This thesis offers an account of how judges in the United States and other countries have balanced the need for public health protection with the constitutional rights that citizens have been promised. By highlighting the tensions left unresolved by a foundational U.S. case, Jacobson v. Massachusetts (1905), I find new angles to analyze the different ways in which U.S. courts have negotiated this balance, which is mainly by using various forms of purely legal reasoning to justify the wholesale embrace of one type of sovereignty over the other. In France, the Conseil d’État exerts continuous effort to balance the two sovereignties, holding public health authorities to high standards of reasoning; in Austria, the Constitutional Court nominally upholds public health sovereignty but nonetheless often strikes down measures on grounds rooted in political sovereignty; and in Taiwan, courts have leaned heavily toward ratifying public health sovereignty. These different approaches to balancing the tension between the two sovereignties further point toward underlying divergences in the different social compacts implicated in each jurisdiction, as well as competing visions of the individual as a political and as a biological subject.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Inventory Replenishment Strategy</title>
<link href="https://hdl.handle.net/1721.1/139408" rel="alternate"/>
<author>
<name>Thurman, Lydia S.</name>
</author>
<id>https://hdl.handle.net/1721.1/139408</id>
<updated>2022-01-15T03:13:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Assessing Inventory Replenishment Strategy
Thurman, Lydia S.
Target’s supply chain strategy centers on delivering the in-store experience for customers, with stores serving both as brick-and-mortar retail locations and as online fulfillment hubs for store pick-up orders or home delivery. Traditionally, Target stores served only walk-in customers, and fulfilling in-person and online demand from stores is a new strategy. This omnichannel approach has a significant operational impact: Target must operate a robust replenishment system that can accurately get the right volume of the right product to the right place at the right time to fulfill demand. Simultaneously, stores must greet and help in-store customers, maintain stocked shelves, and ensure there is physical space as well as adequate labor to fulfill a variety of types of online orders from store stock. Today, when more products are delivered than can fit on the shelves, the excess spills over into the backroom. This creates re-work for store employees and drives up store labor costs, while also reducing upstream inventory pooling benefits and amplifying the risk of shrinkage.&#13;
&#13;
In order to design a replenishment strategy that optimally reduces backroom inventory while maintaining service levels, we examine in this paper three levers that are used to inform the current ordering process: configured leadtime (the time that replenishment logic assumes it will take shipments to arrive in stores after an order is placed), safety stock levels, and shipped unit-of-measure.&#13;
&#13;
Examining historical data and testing optimization strategies against it reveals a set of policies that could substantially improve Target's inventory position.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distribution Network Optimization to Reduce Process Variability&#13;
and Improve Throughput for an Online Retailer</title>
<link href="https://hdl.handle.net/1721.1/139406" rel="alternate"/>
<author>
<name>Schoder, Michael T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139406</id>
<updated>2022-01-15T03:04:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Distribution Network Optimization to Reduce Process Variability&#13;
and Improve Throughput for an Online Retailer
Schoder, Michael T.
In moving from standard two-day to single-day shipping, Amazon Fulfillment Centers (FC) must stock an increasing variety of product in inventory. Amazon uses a hub and spoke model where Inbound Cross-Dock (IXD) facilities split and ship large quantities of vendor product efficiently to numerous FCs, a process known as transshipment. Depending on the volume of product required for optimal inventory placement at the receiving FC, product may depart an IXD either in its original vendor corrugate case packaging or in an Amazon standard plastic yellow tote if the original case was split apart at the IXD.&#13;
&#13;
Furthermore, case and tote containers may be transshipped either in traditional palletized trailers or in floor loads (trailers loaded directly with stacked cases or totes). In an effort to reduce transportation costs, Amazon is transitioning to using floor loads for an increasing proportion of its North American transshipments. Floor loaded trailers enable higher volume utilization by eliminating the gaps between pallets and the pallet material itself, and also reduce the indirect labor associated with palletizing cases and totes at the IXD, and with breaking down pallets at the receiving FC. A hybrid floor load is a trailer that contains both totes and cases mixed together, which provides further flexibility needed to improve product placement and optimize trailer fullness. As Amazon increases IXD throughput and requires each IXD to support an increasing number of destination FCs, hybrid loads have become the norm and continue to increase as a proportion of total transshipments, growing substantially in number between 2019 and 2020.&#13;
&#13;
However, while hybrid floor loads bring several advantages, they also increase the complexity and variability within downstream processes, particularly in inbound freight processing at receiving FCs. In particular, newer Amazon Robotics FCs (2019 generation buildings and beyond) use a process known as decant, where product arriving in cases is immediately removed from its corrugate packaging and placed into standard plastic totes to enable uniform processing further down the line. Hybrid loads cause large and unpredictable variability in the inflows of different container types, which results in the decant line often being either over-saturated or starved for work, resulting in inefficient use of labor and lower overall throughput, as well as further compounding effects on dependent downstream processes. Lost labor costs from the decant process alone totaled more than $20M in 2020, and without intervention these costs will continue to increase as Amazon expands its number of decant-enabled sites.&#13;
&#13;
The aim of this project was to investigate the root causes of variability in case flow to the decant process, and to assess different means of reducing this variability in a cost-effective and sustainable manner. The research presented in this thesis consisted of four stages: analyzing the current processes to understand the problem, developing hypotheses for potential solutions, testing those hypotheses through a combination of simulation modeling and onsite testing, and finally analyzing the results to present scalable process change recommendations. This thesis presents results of analysis in three distinct but related areas: FC inbound dock processing, trailer receive scheduling, and IXD trailer loading. Results include a set of process change recommendations to improve FC inbound dock operations, as well as an optimization-based trailer scheduling program to minimize variability across multiple dimensions of inbound product flow. Analysis of the IXD loading process leads to the conclusion that changes to this set of operations would not be cost-effective at this time and should not be undertaken given current constraints. The conclusions in this thesis are specific to Amazon’s network, but the models and frameworks presented may be generalized to a wide variety of networked logistics operations, including applications in warehousing, container shipping, and supply distribution.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reliability Analysis of Boeing's Dreamlifter Large Cargo Freighter</title>
<link href="https://hdl.handle.net/1721.1/139405" rel="alternate"/>
<author>
<name>Park, So Young (Michelle)</name>
</author>
<id>https://hdl.handle.net/1721.1/139405</id>
<updated>2022-01-15T03:13:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Reliability Analysis of Boeing's Dreamlifter Large Cargo Freighter
Park, So Young (Michelle)
Proper maintenance is critical to keep aircraft flying through their designated service life. Once an aircraft reaches the end of its operational life, or if maintenance and repair costs exceed the cost of flying a new aircraft, it is typically replaced, retired, and dismantled. The typical operational lifespan of a commercial aircraft is around 30 years. Boeing’s Dreamlifter fleet, the primary air transportation method for several 787/767 major production articles and the topic of this thesis, is an anomaly in that the 30-year-old fleet is far from facing retirement. Its unique custom design makes the Dreamlifters an irreplaceable asset, so it is critical that the fleet remain operational throughout the lifetime of 787 production, or the limit of validity of the Dreamlifters.&#13;
&#13;
This thesis analytically breaks down the Dreamlifters’ highly complex systems through exploration of various data elements relevant to reliability. Employing reliability-centered maintenance (RCM) concepts, Monte Carlo simulations, and historical failure data, we propose an obsolescence management framework that provides a probabilistic mitigation timeline for a component with limited supply. This simulation approach can be extended to other aircraft components even with relatively small data sets, provides insight into optimal replacement intervals, and helps prioritize risk management targets. We also share recommendations for successful project continuity.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Modeling and Optimization of Autoinjector Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/139404" rel="alternate"/>
<author>
<name>DeLuke, Levi</name>
</author>
<id>https://hdl.handle.net/1721.1/139404</id>
<updated>2022-01-15T03:34:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Predictive Modeling and Optimization of Autoinjector Manufacturing
DeLuke, Levi
The manufacture of drug delivery devices requires the assembly of multiple device components with the corresponding drug products, often with an abundance of data being collected at each assembly stage. This data represents an underutilized resource for process improvement, because disparate data sources reside in separate organizations and in different source databases for each individual component or process stage. This research aims to improve the manufacture of drug delivery devices by using upstream component and process data to enhance patient experience and the performance of assembled devices. An interpretable predictive modeling framework is developed to identify sources of subcomponent and process variability that are predictive of final lot performance. Differences in predictive accuracy across products, user demographics, geography, and sources of component data are determined and used to inform ongoing data management and process improvement recommendations. The predictive modeling framework is then applied to the creation of a production planning tool to predict and optimize the pairing of subcomponent batches to best meet final product specifications and improve performance. A simulation is used to estimate the impact of the proposed pairing strategies, resulting in an estimated 35-45 percent reduction in variability of a final product parameter over existing methods. The initial case study on a single product family is generalized to a modeling framework with broader applicability.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Denial of Service Attacks in MANETs</title>
<link href="https://hdl.handle.net/1721.1/139403" rel="alternate"/>
<author>
<name>Lee, Lucy R.</name>
</author>
<id>https://hdl.handle.net/1721.1/139403</id>
<updated>2022-01-15T03:11:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Denial of Service Attacks in MANETs
Lee, Lucy R.
Denial of service (DoS) attacks are one problem threatening networks in the Internet of Battle Things (IoBT) world. Because devices in this environment move frequently, the type of network most commonly used in the IoBT world is the Mobile Ad hoc Network (MANET), which dynamically re-configures itself to update the stored paths from one device to another. Routing protocols are used to update these paths. This paper describes two routing protocols designed specifically for use in MANETs: AODV and OLSR. We compare the performance of these two protocols in simulations of an IoBT scenario, and also analytically compare how they respond to two specific DoS attacks: black hole and flooding.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nota Bene V2 - Understanding and Implementing Methods for Synchronous and Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/139402" rel="alternate"/>
<author>
<name>Li, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/139402</id>
<updated>2022-01-15T03:46:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Nota Bene V2 - Understanding and Implementing Methods for Synchronous and Collaborative Learning
Li, Helen
Students have long been encouraged to learn collaboratively, working with their peers to ask questions, build relationships, and receive feedback. Nota Bene (NB) is an online learning and annotation tool that allows students in a course to annotate web documents to foster online discussions. Because the current version of NB relies only on asynchronous annotations, we decided to add synchronous features to the tool. After initial user research, we implemented various features, such as notifications and chat-like features, to help keep students engaged when learning online. Afterward, we ran a user study experiment to understand the user engagement and usability aspects of our project, and how those metrics compare between the original NB and the NB with synchronous features.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From the Earth to the Moon: Economic Viability of Commercial Spaceports &amp; Science and Technology Planning for MIT Lunar Exploration</title>
<link href="https://hdl.handle.net/1721.1/139401" rel="alternate"/>
<author>
<name>Browder, Rebecca Leigh</name>
</author>
<id>https://hdl.handle.net/1721.1/139401</id>
<updated>2022-01-15T04:05:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">From the Earth to the Moon: Economic Viability of Commercial Spaceports &amp; Science and Technology Planning for MIT Lunar Exploration
Browder, Rebecca Leigh
Despite an overcapacity of launch sites relative to demand, there are 11 existing commercial spaceports in the United States and at least another six under consideration. While a spaceport can bring economic growth and STEM development to a region, it requires significant and sustained investments of public funding in an uncertain and volatile market. This thesis conducts a two-case study of the Mid-Atlantic Regional Spaceport (MARS) in Virginia and Spaceport America (SA) in New Mexico, incorporating four analysis methods: financial, business case, economic impact, and profitability. A cross-case analysis reveals lessons learned and recommendations for other commercial spaceports. This research employs a multidisciplinary approach, incorporating policy, economic, and business analysis to help policymakers, regulators, and the general public understand the operations and impact of commercial spaceports, enhancing stakeholders’ decision-making about proposed spaceports. Ultimately, an improved understanding of commercial spaceports will allow this network of infrastructure to support continued innovation and growth in the commercial space sector.&#13;
&#13;
As the commercial space sector continues to expand through efforts like the first civilian trip to the International Space Station, commercial spaceports will become critical infrastructure to future commercial missions to the Moon. With the renewed global interest in exploring the lunar surface, there is a shift from the Apollo program in that NASA aims to establish a significant number of commercial partnerships.&#13;
&#13;
As countries and companies around the world aim to return to the Moon, including the U.S. through NASA’s Artemis Program, MIT has an opportunity to leverage its knowledge and resources to be part of the next phase of Moon missions. MIT has significant experience in lunar science and exploration, from the early days of the Apollo Program to more recent missions like GRAIL (2011) and collaborations with Israel’s Beresheet mission (2019). MIT is well poised to leverage both its lunar experience and its science and technology expertise to assist in returning humans to the Moon. This thesis presents an analysis of MIT’s unique areas of expertise and its alignment with prominent science and technology goals in order to develop a strategic plan to bring together the entire MIT community to achieve them. Through the use of MIT’s Lunar Open Architecture and extensive data collection, the author has developed a science traceability matrix and a technology multi-domain matrix that are the first step toward charting the future of MIT lunar exploration. This strategic planning exercise revealed many areas of mutual interest among research groups at MIT as well as a broad interest in creating a cohesive, organized strategy for MIT’s next steps on the lunar surface. This work will help the MIT community optimize its efforts toward lunar exploration, maximize investments into lunar research, and develop a cohesive plan for MIT’s role in future lunar exploration. This work also serves as a case study for how a large, complex organization can develop a strategic plan for deep space exploration that leverages its resources while meeting high-level, external science goals. By following the plan laid out in this paper, MIT can add to its expertise in lunar exploration, gather new scientific knowledge, and be part of the team that lands the first woman and the next man on the Moon.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Data Analytics to Evaluate Proactive Interventions to Prevent Inventory Defects</title>
<link href="https://hdl.handle.net/1721.1/139400" rel="alternate"/>
<author>
<name>Wu, Jieyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/139400</id>
<updated>2022-01-15T04:04:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Leveraging Data Analytics to Evaluate Proactive Interventions to Prevent Inventory Defects
Wu, Jieyuan
At an automated fulfillment center typically used in the retail industry, products that fall from a robot-driven shelving pod can cause inventory quality issues and obstructions on the floor, reducing throughput. Leading indicators of fallen products are limited, resulting in a lack of targeted and proactive action. This project aims to evaluate potential interventions to reduce fallen products based on computer vision signals, accounting for the cost, complexity, and effectiveness of the interventions. &#13;
&#13;
This project developed a framework to perform cost-benefit analyses for potential interventions that could prevent inventory defects. Characteristics of multiple potential proactive interventions combined with multiple potential vision-based predictive signals form a complex solution space. We start by formulating a common basis of comparison for the options, focusing on how to measure, validate, and quantify the effectiveness of the interventions. Experimental data will be derived from a hypothetical pilot that can be used to test hypotheses and evaluate intervention cost and benefit in the context of input signal characteristics and operational complexity. Quantifying the trade-offs and break-even points between use cases ultimately determines the project NPV or ROI, helping to guide optimal decision making. &#13;
&#13;
This thesis provides insights into how to leverage analytical tools to evaluate options through the case of preventing inventory defects. This framework could be generalized and applied to any system, be it in logistics or manufacturing, where there are potentially multiple predictive signals and multiple proactive interventions to improve operations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Air Source Heat Pump Adoption Propensity and Simulating the Distribution Level Effects of Large-Scale Adoption</title>
<link href="https://hdl.handle.net/1721.1/139399" rel="alternate"/>
<author>
<name>Thompson, Trevor J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139399</id>
<updated>2022-01-15T03:48:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Modeling Air Source Heat Pump Adoption Propensity and Simulating the Distribution Level Effects of Large-Scale Adoption
Thompson, Trevor J.
National Grid, like most utilities and companies in the energy sector, finds itself at a critical juncture for decarbonization. To maintain alignment with regional carbon reduction goals, it must find innovative ways to reduce greenhouse gas emissions in its service territories. For the heating sector in particular, air source heat pump (ASHP) technology presents a promising avenue for decarbonization – especially for residential customers. ASHPs present the lowest-carbon-emissions heating option for customers in New England today, and are expected only to become "greener" as the electrical grid continues transitioning to cleaner sources of electricity generation. From a cost perspective, ASHPs are on average the most cost-effective space conditioning solution available for new construction. However, for the majority of customers in the Northeast who are retrofitting equipment into an existing home, ASHPs lag behind natural gas as the most cost-effective solution – a trend expected to continue through 2050. Nevertheless, ASHPs present an attractive financial savings opportunity for delivered-fuel customers without access to natural gas. To meet its stated Northeast 80x50 Pathway goals, National Grid must increase the rate of ASHP adoption to nearly ten times its current pace. Using Rhode Island and Massachusetts as examples, we demonstrate how machine learning can enable utilities to effectively model the ASHP adoption propensity of each household in their jurisdiction using readily available data. The resulting household-level propensity scores can be employed to guide targeted marketing efforts or aggregated to help guide program design. Additionally, we demonstrate the use of ASHP propensity scores to inform distribution feeder load growth simulations – allowing utilities to more efficiently plan infrastructure upgrades in response to load growth caused by ASHP adoption. The same methodology can be applied to better understand the adoption trajectory for any technology relevant to the modern utility.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailored Base Surge Policy for Middle Echelon in Biologics Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/139398" rel="alternate"/>
<author>
<name>Pedroni, David</name>
</author>
<id>https://hdl.handle.net/1721.1/139398</id>
<updated>2022-01-15T03:44:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Tailored Base Surge Policy for Middle Echelon in Biologics Supply Chain
Pedroni, David
The Biologics division of AstraZeneca is a growing part of the business, introducing new and existing products to more markets and therapy areas.  Given uncertainty in both supply and demand, the current strategy includes holding inventory at strategic locations throughout the supply chain to ensure customer access to medicines.  While using inventory to manage supply and demand variation is common practice, the impacts on working capital can be significant, especially for products with strong growth trajectories, long lead times, and numerous stages of manufacturing.  &#13;
&#13;
For strategically selected products, AstraZeneca generally elects to qualify multiple manufacturing sites to mitigate some of the supply risks associated with unplanned events ceasing production for extended periods of time.  The AstraZeneca standard strategy for products with multiple sourcing could be described as a form of quota arrangement, where each upstream manufacturing site supplies a portion of customer or next-stage manufacturing site orders.  From a medical-supply perspective, the overarching objective is to provide high quality product to meet all customer demand.  The secondary objective is minimizing supply cost, but not at the expense of risking quality or service.  The primary focus of this thesis work is to evaluate the opportunity of implementing a tailored base-surge (TBS) policy for multiple-sourced global products.  A TBS policy would generally leverage differences in manufacturing site lead time characteristics to deliver a high level of service with less inventory.  In a global market, a quota arrangement policy is generally less complex, while TBS takes advantage of pooling demand variability at the most responsive site.  &#13;
The approach will begin by reviewing two products manufactured at a responsive, internal site as well as at an external supplier.  The evaluation will benchmark a quota arrangement strategy and compare the analytically recommended inventory levels.  In addition to this primary inventory driver, regulatory approvals, cost of goods manufactured, and supply constraints will all be considered.  After performing this detailed analysis for a single stage of manufacture, the objective is to generalize the process and identify key supply chain characteristics that would suggest TBS would be effective in reducing inventory.  This general approach will assist in future decisions between a quota arrangement strategy and a tailored base-surge policy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Business Model Strategy Development for Incumbents in B2B Markets</title>
<link href="https://hdl.handle.net/1721.1/139397" rel="alternate"/>
<author>
<name>Toeldte, Tatjana</name>
</author>
<id>https://hdl.handle.net/1721.1/139397</id>
<updated>2022-01-15T04:07:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Data-Driven Business Model Strategy Development for Incumbents in B2B Markets
Toeldte, Tatjana
Maschinenfabrik Reinhausen (MR) is an incumbent manufacturer in the electrical power equipment industry, engaging in business-to-business (B2B) sales with a strong market position and successful products, and it is interested in pursuing the development of new, data-driven business models to generate new sources of revenue. This effort requires both the development of hardware products that can provide relevant data and a business model to effectively generate, deliver, and capture value for the firm.&#13;
&#13;
Based on case studies, interviews, and assessment of the status quo at MR, this project posits a framework for a data-driven business model development strategy. This project concludes that while extending connectivity to a wider range of sensors through an Ethernet interface would be beneficial, there is no "recipe" or outwardly clear optimal business model structure. However, there is evidence that the internal environment in which incumbents develop new business models has a strong relationship with their success, and there are consistent characteristics across successful examples. Focusing on culture, process, governance, and the use of the right metrics leads to more successful outcomes.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustaining Digital Transformation in the Post-COVID Era:&#13;
Nike Case Study</title>
<link href="https://hdl.handle.net/1721.1/139396" rel="alternate"/>
<author>
<name>Dhesi, Amar Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/139396</id>
<updated>2022-01-15T03:47:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Sustaining Digital Transformation in the Post-COVID Era:&#13;
Nike Case Study
Dhesi, Amar Singh
Digital transformation is a term with a broad set of definitions, but consensus on its meaning has taken on increasing importance in an increasingly digital world. It is especially important for businesses to adopt a narrow definition so that they can focus on a particular digital transformation strategy that is most effective in this economic environment. The COVID-19 pandemic will become an inflexion point in history for many issues, and one major topic will be how companies achieve innovative services and products while also adapting to the future needs of the workforce. My research focuses on analyzing the latest academic frameworks that can help guide digital transformation strategies in the post-COVID era. By adopting a digital transformation strategy over the short term, business executives can accurately measure the impact their employees, products, services, and customers will ultimately have on their longevity and long-term profitability. &#13;
&#13;
To demonstrate successful implementations of these digital transformation frameworks, this research focuses on Nike Inc. (Nike) as a case study. Nike, one of the largest and most well-known sports brands in the world, is also a company that puts digital transformation at the forefront of its business strategy. The firm’s goal of accelerating its digital transformation is aimed at better understanding and improving the customer experience. The study aims to demonstrate how Nike’s customer-focused digital transformation over the past decade has led to a competitive advantage, and asks whether, moving forward into the post-COVID era, the company will be prepared for the rapidly changing needs of its consumers.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning Models of Scanner/Vision Tunnel Performance in Sortation Subsystems</title>
<link href="https://hdl.handle.net/1721.1/139395" rel="alternate"/>
<author>
<name>Dumont, Felix</name>
</author>
<id>https://hdl.handle.net/1721.1/139395</id>
<updated>2022-01-15T03:59:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Deep Learning Models of Scanner/Vision Tunnel Performance in Sortation Subsystems
Dumont, Felix
We propose an end-to-end process and tool to deep-dive scanner issues at Amazon’s sorter sites, allowing us to categorize no-reads as operational issues or actual equipment issues.&#13;
&#13;
Our tool sends no-read scanner images to a separate Amazon Web Services (AWS) server and post-processes them through ResNet deep learning models tuned through Bayesian optimization to appropriately assign potential fault reasons. This program will grow the team’s understanding of material handling equipment and best practices for triggering and handling exception-case scenarios. A conservative entitlement is approximately $2.2MM in annual savings for the pilot sites, excluding customer impact. &#13;
&#13;
Scanner/Vision tunnel performance at Amazon’s large crossbelt sorter sites tends to average around an 80-90% read-rate success, contributing to a large amount of manual rework and recirculation that impacts sorter utilization. Amazon is far from its target of 98% scanner performance for these sites. Furthermore, the current mechanism for deep-diving scanner issues makes it extremely difficult to categorize them as operational issues or actual equipment issues; as a result, we have very little visibility into no-read causes across sites and cannot properly put together a plan to improve the situation.&#13;
&#13;
A user-friendly interface allows site and operations managers to see which sites are lagging behind, perform a deep-dive into the root cause of the issues and test potential operational or equipment fixes.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of a Video-sharing Platform: The Global Rise of TikTok</title>
<link href="https://hdl.handle.net/1721.1/139394" rel="alternate"/>
<author>
<name>Wu, Jingyi</name>
</author>
<id>https://hdl.handle.net/1721.1/139394</id>
<updated>2022-01-15T03:23:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Study of a Video-sharing Platform: The Global Rise of TikTok
Wu, Jingyi
TikTok, also called Douyin in China, has been the fastest-growing social media platform in the world. To understand why TikTok became so successful, this thesis studies its strategic choices. First, the thesis analyzes how TikTok was born and then expanded, mainly in the Chinese and U.S. markets, its two largest. Then it introduces TikTok’s platform strategies and competitive strategies and summarizes its existing business models. This thesis also examines the threats and opportunities that TikTok is facing and how TikTok responds to them. Finally, this thesis suggests that the case of TikTok, a Chinese company that has successfully dominated overseas markets, can be instructive to other companies seeking to expand into foreign contexts.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delivering Locally Sourced Nutritious Food to Indian Households</title>
<link href="https://hdl.handle.net/1721.1/139392" rel="alternate"/>
<author>
<name>Das, Sanchita</name>
</author>
<id>https://hdl.handle.net/1721.1/139392</id>
<updated>2022-01-15T04:06:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Delivering Locally Sourced Nutritious Food to Indian Households
Das, Sanchita
WHO reports that in the South Asian region, the number of undernourished people has hardly decreased in the last decade. This situation calls for a concerted effort to combat malnutrition the world over. The effort must be grounded in nutrition and executed through a robust distribution mechanism to reach all segments of society. In this thesis, we take a step in that direction by combining expertise from the supply chain and nutrition areas to address protein-energy malnutrition among poor households in India. While the country is the largest producer of pulses, milk and other dairy products, and many food grains, Indian diets are traditionally low in protein, especially among the poor. Within our scope of the problem, we target the poorest households in India, which currently hold the Antyodaya Anna Yojana ration cards from the Government of India. We develop a framework to improve the nutritional diversity of their diets. We propose matching the demand for food (as recommended by the Indian Council of Medical Research for a balanced diet) with locally available, culturally preferred supply by designing ‘customized food baskets’ for different consumer clusters. We suggest distributing the proposed food baskets at scale to all target households via the government Public Distribution System mechanism operational in India. We use PCA and K-means clustering to segment the customers, create a food basket model inspired by the knapsack problem, and use a mixed-integer linear program to solve the distribution problem. The key contribution of this thesis is a framework of basket assortment and distribution. The approach is generalizable and can be used with many different customer types and (public or private) distribution channels to match demand with supply of nutritious assortments and enable delivery at scale. We can serve 65 to 75% of the recommended daily quantity of cereals and pulses to our target households via the proposed framework.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real bond return parity</title>
<link href="https://hdl.handle.net/1721.1/139391" rel="alternate"/>
<author>
<name>Im, Joanne</name>
</author>
<id>https://hdl.handle.net/1721.1/139391</id>
<updated>2022-01-15T03:18:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Real bond return parity
Im, Joanne
We test a set of assumptions that imply the return parity of long-run, real bonds denominated in different currency numeraires. The joint hypothesis is rejected in our post-2009 sample of developing- and developed-market currencies; however, we document a strong relationship between changes in the log of the bilateral real exchange rate and real holding-period bond returns in the direction of parity, contributing to the Meese-Rogoff puzzle on exchange rate determination.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Competitive Analysis of New Energy Vehicle Market in China</title>
<link href="https://hdl.handle.net/1721.1/139390" rel="alternate"/>
<author>
<name>Li, Jingqiao</name>
</author>
<id>https://hdl.handle.net/1721.1/139390</id>
<updated>2022-01-15T03:00:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Competitive Analysis of New Energy Vehicle Market in China
Li, Jingqiao
After a decade of continuous policy support and market development, China has the largest number of new energy vehicles (NEVs) in the world and annual NEV sales of over one million. Last year, the Chinese government set an ambitious goal that 20% of all passenger cars sold in 2025 would be NEVs. Who are the leading players in this vast market? What vehicles are they selling? What are their competitive strategies? What are the trends? What can automakers do? The purpose of this paper is to study the NEV market in China and answer these questions. &#13;
&#13;
The study analyzes the policies, the infrastructure, the sales data, and the companies. Through analysis of five key companies and other major manufacturers, the thesis categorizes the market segments, identifies some trends, and brings forward recommendations for automakers.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ORCHESTRATING FRIENDSHIP WITHIN A FIRM:&#13;
SOFTENING THE EDGES OF ALGORITHMIC EVALUATION</title>
<link href="https://hdl.handle.net/1721.1/139385" rel="alternate"/>
<author>
<name>Kessinger, Raquel</name>
</author>
<id>https://hdl.handle.net/1721.1/139385</id>
<updated>2022-01-15T03:28:32Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">ORCHESTRATING FRIENDSHIP WITHIN A FIRM:&#13;
SOFTENING THE EDGES OF ALGORITHMIC EVALUATION
Kessinger, Raquel
How are employers using big data and algorithms to measure employee performance, and with what consequences for employees? Current literature suggests that organizations are increasingly engaging in algorithmic evaluation by using fine-grained, real-time, interactive, and visible data to measure employee performance. In our field study of a digital marketing organization, we find that managers may mitigate some of the negative employee outcomes that scholars have found to be associated with algorithmic evaluation—stress, perceived pressure to constantly improve individual outcomes at the expense of collaborating or learning, and fear of disclosing bad news to managers. Just as the firm used algorithmic evaluation, many managers simultaneously engaged in a type of relational work with employees we call “orchestrating friendship.” Managers provided employees with socioemotional support, used hyper-personalization to create a communal atmosphere, and engaged in voluntary and informal self-disclosure to purposefully soften some of the negative employee experiences associated with the algorithmic evaluation. Yet, these managerial practices carried a set of unintended negative consequences for the middle managers themselves and, in turn, for the senior managers who employed them.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Joker: A Unified Interaction Model For Web Customization</title>
<link href="https://hdl.handle.net/1721.1/139384" rel="alternate"/>
<author>
<name>Katongo, Kapaya</name>
</author>
<id>https://hdl.handle.net/1721.1/139384</id>
<updated>2022-01-15T04:10:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Joker: A Unified Interaction Model For Web Customization
Katongo, Kapaya
Tools that enable end-users to customize websites typically use a two-stage workflow: first, users extract data into a structured form; second, they use that extracted data to augment the original website in some way. This two-stage workflow poses a usability barrier because it requires users to make upfront decisions about what data to extract, rather than allowing them to incrementally extract data as they augment it.&#13;
&#13;
In this thesis, we present a new, unified interaction model for web customization that encompasses both extraction and augmentation. The key idea is to provide users with a spreadsheet-like formula language that can be used for both data extraction and augmentation. We also provide a programming-by-demonstration (PBD) interface that allows users to create data extraction formulas by clicking on elements in the website. This interaction model allows users to naturally and iteratively move between extraction and augmentation during the customization process.&#13;
&#13;
To illustrate our unified interaction model, we have implemented a tool called Joker which is an extension of Wildcard, a prior web customization system. Through case studies, we show that Joker can be used to customize many real-world websites. We also present a formative user study with five participants, which showed that people with a wide range of technical backgrounds can use Joker to customize websites, and also revealed some interesting limitations of our approach. Finally, we present a heuristic evaluation of our design using the Cognitive Dimensions framework.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Parent-Child-Robot Triadic Storybook Reading Interaction</title>
<link href="https://hdl.handle.net/1721.1/139380" rel="alternate"/>
<author>
<name>Jang, Soo Jung</name>
</author>
<id>https://hdl.handle.net/1721.1/139380</id>
<updated>2022-01-15T04:08:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing Parent-Child-Robot Triadic Storybook Reading Interaction
Jang, Soo Jung
With increasing availability, social robots’ domains and applications have been expanding, yet research on human-robot interaction (HRI) still mostly focuses on single-person, single-robot interactions. Contributing to the field of multi-party HRI in the educational domain, this thesis presents a novel parent-child-robot interaction paradigm in the context of shared reading. Constructive parent-child shared reading is crucial to children’s early literacy learning, and as a result, we strive to aid children’s learning with productive and engaging parent-child-robot triadic reading interactions.&#13;
&#13;
The thesis work designs and develops an interactive reading system consisting of a robot facilitator, a storybook tablet app, and a teleoperation controller. Using the implemented reading system, we conduct a pilot Wizard of Oz (WoZ) triadic interaction study with four families, observing and analyzing their triadic interactions in a shared reading setting.&#13;
&#13;
The pilot study investigates the effects of triadic reading on dyadic reading and compares the effects of different robot interaction strategies. The study’s results suggest that the triadic reading experience generally has a positive influence on families’ reading behaviors and their perceptions of social robots, and that each robot strategy has a unique set of effects on the interaction. The thesis work’s results, along with its discussions, provide critical insights into parent-child-robot shared reading design considerations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Iterative Improvement of Practice Exercises By Students and Staff</title>
<link href="https://hdl.handle.net/1721.1/139379" rel="alternate"/>
<author>
<name>Himawan, Jenna</name>
</author>
<id>https://hdl.handle.net/1721.1/139379</id>
<updated>2022-01-15T03:23:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Iterative Improvement of Practice Exercises By Students and Staff
Himawan, Jenna
Practice is an important part of mastering any discipline. As a result, many courses provide students with example problems. However, the processes by which these exercises are created and presented to students can make it difficult to create, review, and edit them. This thesis describes an exercise bank framework that facilitates the authorship of new practice questions and the iterative development of existing ones. The exercise bank uses conceptual models that treat exercises not as seldom-changing pieces of course material but as collections of data that are continually in need of review and revision. Operations on these exercises cause them to transition between different states over the course of their development. This thesis discusses the design and implementation of two exercise bank systems for the course 6.031: Elements of Software Construction. These exercise banks proved useful to both staff and students in generating and understanding course material.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Runtime Monitoring of PLCs In Critical Real-Time Systems</title>
<link href="https://hdl.handle.net/1721.1/139377" rel="alternate"/>
<author>
<name>Hilke, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/139377</id>
<updated>2022-01-15T03:50:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Runtime Monitoring of PLCs In Critical Real-Time Systems
Hilke, Joshua
Critical real-time systems have become a popular target for cyber attacks. Attack vectors exist all the way from the application level down to the hardware level. This paper explores a variety of attack vectors for critical real-time systems, with special focus on programmable logic controllers. We also discuss runtime monitoring as a solution for securing critical real-time systems and explore the details and challenges of runtime monitoring by implementing our own runtime monitors based on existing design principles.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Correlated Threats to Department of&#13;
Defense Energy Systems</title>
<link href="https://hdl.handle.net/1721.1/139375" rel="alternate"/>
<author>
<name>Adams Goffinet, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/139375</id>
<updated>2025-11-21T16:42:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Understanding Correlated Threats to Department of&#13;
Defense Energy Systems
Adams Goffinet, Katherine
Climate change poses an existential threat to the United States military’s energy systems. We researched current trends in energy, economics, and weather, translating those trends into quantifiable threats to the military’s secondary power systems. We also assembled a data set about secondary power systems on domestic U.S. military bases. Because that data set was missing critical information, we formulated and then evaluated an imputation method to complete the data set. This imputation method successfully predicted expected cost for the missing installation data. We ran simulations using our quantified trends and data set on existing software to predict the effects of those trends on certain U.S. military bases. Ultimately, we identified threats that could potentially cost 150 million dollars and cause more than a week of additional electrical downtime for those select bases.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Longitudinal Stability Criteria for Surfaced Submarines Through Use of Near Real Time Modeling</title>
<link href="https://hdl.handle.net/1721.1/139374" rel="alternate"/>
<author>
<name>Scott, Alexander Lorne</name>
</author>
<id>https://hdl.handle.net/1721.1/139374</id>
<updated>2022-02-09T14:09:35Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Development of Longitudinal Stability Criteria for Surfaced Submarines Through Use of Near Real Time Modeling
Scott, Alexander Lorne
Traditional submarine stability analysis has focused heavily on submerged operations.  When surfaced conditions are considered, the analysis has focused on transverse stability.  However, submarine accidents over the past two decades have drawn attention to the need to better understand damaged stability of surfaced submarines, especially longitudinal stability.  This thesis develops a methodology and proposes a design standard to ensure a surfaced submarine is able to maintain adequate longitudinal stability.  It bounds the conditions under which a submarine will be able to achieve a satisfactory static equilibrium, drawing inspiration from the submerged equilibrium polygon.  It also uses tested U.S. Navy surface warship design criteria to identify potentially limiting damage scenarios.  The proposed longitudinal stability standard requires the submarine to be able to avoid excessive trim angles under five scenarios: intact, routine maintenance, head on collision damage, glancing collision damage to the bow, and glancing collision damage to the stern.  An Excel VBA program, Submarine Longitudinal Stability Analysis Program (SuLSA), was also developed as part of this thesis to specifically analyze submarine designs using the proposed methodology and standard.  It reduces the time required to evaluate surfaced longitudinal stability of a submarine from days to hours.  These proposals offer the possibility of making future submarines significantly safer.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NETWORKING KNOWLEDGE AND EXPERIENCE: An Instrumental System for the Personal Development of Individual Designers</title>
<link href="https://hdl.handle.net/1721.1/139370" rel="alternate"/>
<author>
<name>Lu, Bowen</name>
</author>
<id>https://hdl.handle.net/1721.1/139370</id>
<updated>2022-01-15T04:01:52Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">NETWORKING KNOWLEDGE AND EXPERIENCE: An Instrumental System for the Personal Development of Individual Designers
Lu, Bowen
This research describes a new design thinking technology, which draws knowledge and experience from the learning and ideation of individual designers, and makes this accumulation accessible for future use and inspiration.&#13;
&#13;
Pursuing novelty and diversity, designers are trained across a wide spectrum of disciplines, requiring a tremendous amount of explicit knowledge and implicit experience. As a result, designers must go beyond the apprentice-based practice long promoted by design education to embrace a more personal exploration of design ideas. However, while the technology surge of computer-aided design (CAD) increases productivity, it limits our imagination to a predefined structure and framework. A technology that facilitates knowledge accumulation and open-ended design ideation is required, especially in the long term for individual designers.&#13;
&#13;
In this thesis, I propose a new theory of design ideation representation that integrates combinatory systems of knowledge engineering, which extract and simulate symbolic knowledge for decision-making, with constructive systems of visual calculating, which prioritize human visual perception for ambiguous and unrestricted imagination. Based on the integrated theory, I develop a software prototype that helps designers acquire and take control of their knowledge and experience to generate new, diverse, and creative ideas. I also demonstrate and analyze a knowledge network constructed by the software system and show how the system is used in the design process.&#13;
&#13;
This research contributes to a new direction of design technology for ideation as a counterpart of computer-aided design (CAD) technology for productivity. The software prototype inspires new design tools for creative design thinking. It takes one more step towards a promising future of augmented intelligence where more powerful human-computer integration can be actualized.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Velvet Garage Narratives of an Education in Architecture</title>
<link href="https://hdl.handle.net/1721.1/139369" rel="alternate"/>
<author>
<name>González-Cervantes, Marianna</name>
</author>
<id>https://hdl.handle.net/1721.1/139369</id>
<updated>2022-01-15T03:19:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Velvet Garage Narratives of an Education in Architecture
González-Cervantes, Marianna
If you had to share the work you’ve produced in architecture school with your family, what would you say about it? Could you speak about it with the same conviction you do in front of your jury at a final review? Would the things you value in architecture translate to someone who doesn’t study or practice it?&#13;
&#13;
Velvet Garage: Narratives of an Education in Architecture is an exploration into many things, most obviously garages and architecture education, but perhaps most importantly the unexpected repercussions of studying architecture: spatially, technically, but also emotionally. This thesis admittedly looks backward as it reflects on old work and past experiences, some dating back up to 10 years, but it does so by re-representing them in a new way that attempts to talk about architecture differently, in a more accessible manner.&#13;
&#13;
The thesis, then, is two-fold: As a response to the current remote conditions we find ourselves in due to COVID-19, this thesis transforms the domestic garage of my childhood home in El Paso, TX into a center of architectural production, where the “Velvet Garage” then allows for the reframing of architecture pedagogy as visual narratives in the form of a short film that incorporates both found and designed objects.&#13;
&#13;
This thesis believes that we unconsciously embed ourselves in our work, but that our work is also embedded in us. When we share our work, we also share a part of ourselves, and when we can’t, we fail to communicate a large part of what makes us who we are. Velvet Garage: Narratives of an Education in Architecture attempts to share stories that have never been shared before with the very new audience of my own family.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a multipurpose near-field imaging platform</title>
<link href="https://hdl.handle.net/1721.1/139367" rel="alternate"/>
<author>
<name>Liu, Lige</name>
</author>
<id>https://hdl.handle.net/1721.1/139367</id>
<updated>2022-01-15T03:26:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Development of a multipurpose near-field imaging platform
Liu, Lige
The idea of combining the high spatial resolution of scanning probe microscopy (SPM) with traditional optical imaging techniques has been revolutionary for multiple fields: Condensed Matter Physics, Material Science, Chemistry, and Biology. This high potential comes with the special requirements of sub-nanometer SPM, and the resulting instrument is usually sophisticated, expensive, and in need of continuous maintenance. Therefore, we propose expanding and augmenting a traditional Raman microscope into a multipurpose near-field imaging platform by integrating onto the existing setup a home-built, open-source, ultra-compact SPM system that is designed to be minimally invasive. Our goal is to develop a complete suite comprising scanning probe microscopy, tip-enhanced Raman spectroscopy (TERS), and scattering-type scanning near-field optical microscopy (s-SNOM) functionalities, all compatible with cryogenic operation. As a result of its simplicity and modular design, the system can be readily reconfigured to fulfill the requirements of different samples or to accommodate other material analysis techniques such as scanning microwave impedance microscopy (sMIM).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operational Innovations to Improve Malawi’s HIV Sample Transportation Network</title>
<link href="https://hdl.handle.net/1721.1/139366" rel="alternate"/>
<author>
<name>Killian, Daniel T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139366</id>
<updated>2022-01-15T03:27:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Operational Innovations to Improve Malawi’s HIV Sample Transportation Network
Killian, Daniel T.
The African nation of Malawi, like other sub-Saharan countries, provides diagnostic testing to its citizens through a centralized laboratory network. Diagnostic samples are collected from patients at remote, point-of-care health facilities, and diagnostic tests are performed at centralized laboratories. Sample transportation (ST) systems within these networks are crucial for timely disease diagnosis and treatment. In this thesis, I present my research regarding two operational innovations to improve the ST system and, consequently, the diagnostic network in Malawi. I first present a report documenting the development, implementation, and testing of a novel, mobile phone-based data collection system which vastly improves the accuracy and visibility of patient sample volumes and locations across the diagnostic network. By making this logistics information available and accessible to ST system administrators, ST systems can become more flexible, limit wasted capacity, and improve the quality of care. Second, I document my work towards understanding how different strategies for deploying point-of-care (POC) diagnostic testing devices influence the performance of the network as a whole. I find that allocating POC devices to facilities with the highest sample volumes can cut the average time required for a patient to receive test results after providing a sample by 60%, from over a month to less than two weeks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Deformable Mirror Demonstration Mission (DeMi) On-Orbit Analysis</title>
<link href="https://hdl.handle.net/1721.1/139364" rel="alternate"/>
<author>
<name>Gubner, Jennifer N.</name>
</author>
<id>https://hdl.handle.net/1721.1/139364</id>
<updated>2022-01-15T03:27:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Deformable Mirror Demonstration Mission (DeMi) On-Orbit Analysis
Gubner, Jennifer N.
The Deformable Mirror Demonstration Mission (DeMi) is a 6U CubeSat mission to demonstrate the use of a 140-actuator microelectromechanical system (MEMS) deformable mirror (DM) and a closed-loop adaptive optics (AO) system in space. DeMi launched to the International Space Station (ISS) on the NG-13 Cygnus resupply mission on February 15, 2020 and was deployed from the ISS into a 51° inclination, 423 km average altitude low-Earth orbit on July 13, 2020. The expected mission lifetime of DeMi was 6 months; however, DeMi continues to be operational 9 months post-deployment. During its lifetime, DeMi has completed several internal observations with the DM and both imagers using the internal laser source. The team is now working toward external observations of stars and demonstrations of closed-loop wavefront control. The biggest driver of mission success is spacecraft and component health. Looking at spacecraft data over time helps to characterize spacecraft performance and inform adjustments to the lifetime estimate. Additionally, telemetry analysis can alert the operations team to anomalies and provide useful information for resolving those anomalies. This thesis analyzes the spacecraft telemetry received between July 13, 2020 and April 4, 2021 and discusses trends and anomalies in the data. This work provides an overview of spacecraft and payload on-orbit health to date and provides recommendations on paths forward for anomaly resolution.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shipboard Fault Detection, Marine Micro-Grid Power Diagnostics and Vessel Ventilation Monitoring</title>
<link href="https://hdl.handle.net/1721.1/139363" rel="alternate"/>
<author>
<name>O'Connell, Joseph William</name>
</author>
<id>https://hdl.handle.net/1721.1/139363</id>
<updated>2023-01-09T19:15:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Shipboard Fault Detection, Marine Micro-Grid Power Diagnostics and Vessel Ventilation Monitoring
O'Connell, Joseph William
Non-intrusive load monitoring has proven its utility for equipment history logging, activity tracking, condition-based maintenance, fault detection &amp; diagnostics, and energy scorekeeping through numerous installations on various Navy and Coast Guard vessels over the last several decades. Using equipment power transients to identify particular equipment operation, and tracking these transients, enables a non-intrusive load monitor (NILM) to ‘learn’ healthy load behavior. Changes in transient behavior are indicative of soft faults and potentially progressive failure, which a NILM can alert watch-standers to. Technological upgrades to the NILM prototype have enabled timely installations and calibrations, which provide further evidence for their utility as either alternatives or redundant systems to traditional machinery control and monitoring software (MCMS). Over the past two years, prototype installations have accelerated, with installations occurring on Coast Guard Cutters MARLIN (WPB-87304) and THUNDER BAY (WTGB-108) and the Navy’s USS INDIANAPOLIS (LCS-17). These installations have provided valuable data and proven that a NILM can successfully disaggregate, analyze, characterize, and identify various engineering equipment. Particular research focused on marine heating, ventilation, and air-conditioning (HVAC) systems, with the development of a framework for ventilation diagnostics. A NILM is able to effectively identify fan operation and historical power draw, using aggregate power consumption as a proxy for ventilation cleanliness. Additionally, current data can be analyzed and processed to yield slot harmonics, enabling the NILM to track fan speeds in real time, as well as other induction motor driven loads.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Disposable Microfluidic Tissue Chips</title>
<link href="https://hdl.handle.net/1721.1/139362" rel="alternate"/>
<author>
<name>O'Boyle, Duncan Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/139362</id>
<updated>2022-01-15T03:42:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Integrated Disposable Microfluidic Tissue Chips
O'Boyle, Duncan Allison
This thesis presents the design, implementation, and testing of a new platform for mimicking a wide range of physiological conditions using multi-layer thermoplastic microfluidic chips with integrated elastomeric membranes. The new platform is designed for high-throughput experiments, using disposable thermoplastic chips, and enabling precise control of channel pressures and flowrates. The platform can distribute up to 7 pneumatic signals to 4 microfluidic chips for control of high-throughput experiments in the incubator or on a microscope. The disposable chips are made entirely of Cyclic Olefin Copolymers (COC) and utilize on-chip pumps, pressure regulators, microfluidic accumulators, a novel hydrogel tissue compartment, and a standardized pneumatic interconnect. Channel flowrates are adjustable between 0-3 μL/s and pressures can be controlled up to 2 psi. The 5-layer chips are bonded together using a thin film COC elastomer membrane. The top and bottom layers are laminated using a co-extruded COC film with an easy-to-bond interface. Novel methods for reliable fabrication of these devices are explored, including laser machining of frozen membranes, and infrared bonding in a vacuum chamber. The chips are optically clear, exhibit strong thermal bonds, and display significantly lower levels of hormone absorption than earlier polydimethylsiloxane (PDMS) based devices. The design and analysis of the platform is described in detail, and the biological performance of the system is validated in studies promoting vasculogenesis in a co-culture of endothelial and stromal cells.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solving Time-Alignment Challenges in Shipboard Non-Intrusive Load Monitoring</title>
<link href="https://hdl.handle.net/1721.1/139361" rel="alternate"/>
<author>
<name>Mills, Brian Taylor</name>
</author>
<id>https://hdl.handle.net/1721.1/139361</id>
<updated>2022-01-15T04:11:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Solving Time-Alignment Challenges in Shipboard Non-Intrusive Load Monitoring
Mills, Brian Taylor
Non-intrusive load monitoring is the practice of using sensors placed on power cables to monitor downstream loads. A few sensors can provide monitoring of many different loads, making it very cost-effective. Ship requirements for redundancy require multiple generators, and some ships go beyond a typical radial distribution system to a ring or zonal distribution system. It is not possible for one sensor to monitor the entire ship; multiple sensors are necessary. To preserve the transients necessary for load identification, all sensors must be time-aligned. This thesis presents several options for time-alignment inherent to the power stream itself, forgoing use of external systems: tracking voltage zero-crossings, matching voltage frequency and amplitude patterns, individual power transient alignment, and closed-system reconstruction artifact minimization. Recent non-intrusive load monitor (NILM) installations onboard USCGC MARLIN (WPB 87304) and USS INDIANAPOLIS (LCS 17) are documented, identifying and analyzing various loads. USCGC MARLIN marks the first and so far only time a NILM has monitored an entire ship vice specific power panels. NILM-assisted Electric Power Load Analysis (EPLA) is demonstrated on USS INDIANAPOLIS.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Biomimetics to Improve the Maneuvering Performance of the Expendable Mobile Antisubmarine Warfare Training Target (EMATT)</title>
<link href="https://hdl.handle.net/1721.1/139360" rel="alternate"/>
<author>
<name>Mellin, Emily M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139360</id>
<updated>2022-01-15T03:07:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Using Biomimetics to Improve the Maneuvering Performance of the Expendable Mobile Antisubmarine Warfare Training Target (EMATT)
Mellin, Emily M.
Using biomimetics to improve the maneuverability of torpedo-shaped Unmanned Underwater Vehicles (UUVs) is a continuing topic of study that is rapidly evolving to increase the agility of these vehicles. MIT Sea Grant continues to study the hydrodynamic effects of UUVs and implement biomimetic methods to improve their maneuvering characteristics and efficiency. Triantafyllou et al. [1] implemented dorsal fins on the REMUS UUV to quantify this increase in turning rate and maneuverability. This thesis takes the results from the work on the REMUS and applies them to Lockheed Martin’s EMATT UUV, now termed Morpheus, to improve the maneuverability of the vehicle. The mission of Morpheus includes driving on a steady course for periods of time as well as turning quickly and sharply on short notice, requiring both stability and maneuverability. The biomimetic improvement applied to Morpheus allows the dorsal fins to morph in and out of the body as well as deflect in the opposite direction of the rudder.&#13;
&#13;
This thesis derives the equations of motion and hydrodynamic coefficients of the Morpheus vehicle and uses these to investigate the results of adding morphing dorsal fins. Dynamic tow tank experiments were conducted to validate the estimated hydrodynamic coefficients of a 75% Morpheus model as well as to verify that the addition of dorsal fins increases the turning rate of the vehicle. This thesis investigates the appropriate size of these dorsal fins as well as their optimal location along the body of the vehicle using both field tests and simulations. The results showed an overall increase in turning rate, compared to the original EMATT vehicle, with the addition of a new tail design as well as morphing dorsal fins that are able to deflect in the opposite direction of the rudder.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Ocean Forecasting with the Dynamically Orthogonal Primitive Equations</title>
<link href="https://hdl.handle.net/1721.1/139359" rel="alternate"/>
<author>
<name>Gkirgkis, Kyprianos Agioub</name>
</author>
<id>https://hdl.handle.net/1721.1/139359</id>
<updated>2022-01-15T03:39:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Stochastic Ocean Forecasting with the Dynamically Orthogonal Primitive Equations
Gkirgkis, Kyprianos Agioub
The present work focuses on applying the Dynamically Orthogonal Primitive Equations (DO-PE) for realistic high-resolution stochastic ocean forecasting in regions with complex ocean dynamics. &#13;
In the first part, we identify and test a streamlined process to create multi-region initial conditions for the DO-PE framework, starting from temporally and spatially sparse historical data. The process presented allows us to start from a relatively small but relevant set of measured temperature and salinity historical vertical profiles (on the order of hundreds) and to generate a massive set of initial conditions (on the order of millions) in a stochastic subspace, while still ensuring that the initial statistics respect the physical processes, modeled complex dynamics, and uncertain initial conditions of the examined domain. To illustrate the methodology, two practical examples, one in the Gulf of Mexico and another in the Alboran Sea, are provided, along with a review of the ocean dynamics for each region. In the second part, we present a case study of three massive stochastic DO-PE forecasts, corresponding to ensembles of one million members, in the Gulf of Mexico region. We examine the effect of adding more dynamic DO modes (i.e., stochastic dimensions) and show that this tends toward statistical convergence along with an enhancement of the uncertainty captured by the DO forecast realizations, both by increasing the variance of already existing features and by adding new uncertain features. We also use this case study to validate the DO-PE methodology for realistic high-resolution probabilistic ocean forecasting. We show good accuracy against equivalent deterministic simulations, starting from the same initial conditions and simulated with the same assumptions, setup, and original ocean model equations. Importantly, by comparing the reduced-order realizations against their deterministic counterparts, we show that the errors due to the DO subspace truncation are much smaller and grow more slowly than the fields themselves evolve in time, both in the Root Mean Square Error (RMSE) sense and in the 3D multivariate ocean field sense.
Based on these observations, we conclude that the DO-PE realizations closely match their full-order equivalents, thus enabling massive forecast ensembles with practically low numerical errors at a tractable computational cost.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Resilient Supply Chain using Interactive Visualization</title>
<link href="https://hdl.handle.net/1721.1/139357" rel="alternate"/>
<author>
<name>Tripathi, Prabhakar</name>
</author>
<id>https://hdl.handle.net/1721.1/139357</id>
<updated>2022-01-15T04:11:56Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Building Resilient Supply Chain using Interactive Visualization
Tripathi, Prabhakar
As supply chains expand globally and companies pursue speed, efficiency, and cost reduction, the probability of disruptions propagating through the network grows. There are many documented threats to global supply chains: political instability, natural disasters, dock strikes, poor product quality, communications failures, currency risks, cyber-attacks, and recently a pandemic. These disruptions often incur additional costs and require time to respond and recover. Companies realize the importance of resilience in the supply chain network. However, due to the complex nature of the network, traditional processes, and outdated technology, leadership teams often cannot make well-informed decisions in response to such disruptions. On the other hand, we found evidence of the importance of interactive visualization in decision-making. This research project introduces the application of interactive visualization in supply chain resilience decision-making. The application can be broken down into three parts. First, a backend mixed-integer linear programming model solves for the minimum total cost based on the inputs. Second, a front-end UI allows users to create disruption scenarios using parameters (geography, time period, product) and to visualize disruptions such as demand variation, a shutdown of a transshipment location, or a change in transportation mode. Lastly, a JSON file connects the front end and back end seamlessly. We use the application to create scenarios relevant to a multinational company. For the first use case, we explore the consequences of a shutdown of airports near distribution centers. For the second use case, we explore the availability of more than one transportation mode per lane. We analyze the results from the use cases to plan mitigation strategies for any such disruptions in the future.
In conclusion, by creating scenarios and visualizing the network in a single and easy-to-understand application, we facilitate decision-making to test the network's resilience.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Influence of Physicality and Remote Collaboration in Moments of Design Convergence</title>
<link href="https://hdl.handle.net/1721.1/139356" rel="alternate"/>
<author>
<name>Gowen, Jordan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/139356</id>
<updated>2022-01-15T03:29:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Influence of Physicality and Remote Collaboration in Moments of Design Convergence
Gowen, Jordan H.
The COVID-19 pandemic forced the world into a natural experiment, one that brings into sharp relief the potential and the limitations of our economy’s capacity to support remote collaboration. While the implications of remote work ripple out across an infinite array of economic and social contexts, this study aims to better understand a basic question: How does remote collaboration impact the process of designing and developing physical products?&#13;
&#13;
To explore this question, we combined a bibliographic review with qualitative field research that included conducting interviews with working design professionals—all of whom specialized in physical products of some kind—followed by a survey to expand on and validate interview findings. The driving goal of this investigation was not to determine whether remote collaboration can be applied in the context of physical product design and development, but rather to understand what aspects of the design process are hindered most when collaborators are not co-located. &#13;
&#13;
The results of this study supported the idea that the current state of remote collaboration hinders the speed of the design process, namely due to technical constraints of commonly available remote collaboration tools. Across the board, interview participants and survey respondents reported that these tools are limited in their ability to support aspects of the design process that are both rooted in the physical world and require some act of collaborative decision making. Beyond these more tactical elements of the design process, however, this study also identified how remote collaboration impacts organizational dynamics of design teams—i.e. remote collaboration demands a greater level of administrative work on the part of team leaders and creates a sense of isolation and lack of access to management for more junior members of design teams. &#13;
&#13;
As more companies consider the role of remote collaboration in the future of work, this study serves as a resource for understanding key pain-points design teams face while working on physical products in a remote setting, and articulates why moments of design convergence are of particular consequence. It then provides a set of key takeaways to inform how design teams might adapt their collaborative practices to better address these pain-points and support remote design teams in the short and long term.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Options Analysis of Dual Hydrogen - Natural Gas Fueling:  A Texas Power Plant under Carbon Price</title>
<link href="https://hdl.handle.net/1721.1/139355" rel="alternate"/>
<author>
<name>Etcheverry, Maria Paz</name>
</author>
<id>https://hdl.handle.net/1721.1/139355</id>
<updated>2022-01-15T03:01:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Engineering Options Analysis of Dual Hydrogen - Natural Gas Fueling:  A Texas Power Plant under Carbon Price
Etcheverry, Maria Paz
Natural gas power plants are a crucial player in the energy transition to decarbonize power systems. They can complement renewable generation by providing uninterrupted energy and reducing emissions by replacing coal until affordable energy storage is available. However, future climate change regulations, together with uncertainties in fuel and electricity prices, may affect their profitability. Due to this uncertain future, the need for flexibility in design and operation in natural gas power plants is expected to increase.&#13;
&#13;
This thesis presents a model to value the investment of decarbonizing natural gas power plants in an uncertain future by identifying and quantifying the flexible design option of dual hydrogen fueling. The approach uses real options analysis, the Net Present Value (NPV) method, and Monte Carlo simulation together with the novel system-scale SESAME tool. A 438 MW natural gas power plant located in West Texas, USA, is taken as a case study to illustrate this approach. Different scenarios for electricity, fuel, and carbon prices are analyzed.&#13;
&#13;
Results from this study indicate that the value of the hydrogen option does not significantly impact the NPV of the analyzed case. Consequently, a trade space analysis is conducted to explore the model inputs that promise viable solutions. Within the identified space, the flexible design delivers more value to the stakeholders, but the underlying NPV is negative because of the assumed carbon pricing conditions of the model. This gap can be filled using technology incentives, increasing electricity prices, or redesigning the electrical market in which the plant operates.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Transformation, Ecosystem Design, and Platform Strategy: An IIoT Perspective.</title>
<link href="https://hdl.handle.net/1721.1/139354" rel="alternate"/>
<author>
<name>Joshi, Yashodhan Vinay</name>
</author>
<id>https://hdl.handle.net/1721.1/139354</id>
<updated>2022-01-15T03:31:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Digital Transformation, Ecosystem Design, and Platform Strategy: An IIoT Perspective.
Joshi, Yashodhan Vinay
The platform business model and digital transformation are two prominent trends that have seen many companies launch their own digital platforms. Many new companies, along with established incumbents, are adopting the platform model. They face common challenges in deciding the use cases, partners, governance, markets, positioning, and timing. I review the platform literature and survey the IIoT technology landscape. GE and Siemens are two established companies that adopted contrasting strategies for their IIoT platforms. Siemens’ MindSphere is considered a success, but GE Digital, even though it spent more than $4 billion and coined the term ‘Industrial Internet’, has struggled. I draw lessons for digital platforms by comparing their approaches and results. I extend the learnings further by developing a general approach, using a network graph and an input-output model, for bottleneck market selection and market ecosystem design by converging the industry and product platform approaches. I model the selection of additional markets as sides for launch as an optimization problem. Finally, I provide a decision framework for positioning the platform during market launch. This work aids platform owners in deciding their strategy and the resulting scope of the platform.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Broadband Squeezed Microwaves and Amplification with a Josephson Traveling-Wave Parametric Amplifier</title>
<link href="https://hdl.handle.net/1721.1/139353" rel="alternate"/>
<author>
<name>Qiu, Jack Yanjie</name>
</author>
<id>https://hdl.handle.net/1721.1/139353</id>
<updated>2022-01-15T03:18:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Broadband Squeezed Microwaves and Amplification with a Josephson Traveling-Wave Parametric Amplifier
Qiu, Jack Yanjie
Squeezed light enables precision measurement beyond the standard quantum limit imposed by the Heisenberg uncertainty principle. It has been widely adopted in research fields to achieve scientific breakthroughs from gravitational wave detection to biological microscopy enhancement. The generation of highly-squeezed states using superconducting amplifiers is a foundation for quantum optics and quantum metrology in the microwave domain. In superconducting circuits, researchers commonly use cavity-based Josephson amplifiers to generate squeezed microwaves. These narrow-band squeezers use a resonator and its Q-enhanced circulating field to increase the interaction between photons and a single or few nonlinear elements, but this can lead to higher-order nonlinearities that impact squeezing performance. In contrast, a Josephson traveling-wave parametric amplifier (JTWPA) consists of many Josephson junctions in series, effectively distributing the stress on nonlinear elements across the entire amplifier. By eliminating the resonant structure, the JTWPA allows a larger pump current before the junctions become saturated, leading to a higher dynamic range and circumventing the cavity bandwidth constraint. Therefore, the JTWPA can generate substantial squeezing with a high dynamic range and emit broadband entangled microwave photons. This thesis will demonstrate non-degenerate four-wave mixing using a dual-dispersion-engineered JTWPA and investigate its squeezing performance.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Force-Velocity Profiling of NFL Athletes via High-Frequency Tracking Data</title>
<link href="https://hdl.handle.net/1721.1/139345" rel="alternate"/>
<author>
<name>Lyons, Kevin Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/139345</id>
<updated>2022-01-15T03:32:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Automated Force-Velocity Profiling of NFL Athletes via High-Frequency Tracking Data
Lyons, Kevin Andrew
The ability to measure key physical parameters of athletes is becoming increasingly critical for today’s sports organizations. Force-velocity profiling is a well-understood and studied technique for measuring the relationship between speed and output force in sport-specific contexts. Accurate force-velocity profiling systems can enable a wide variety of applications for sports organizations to improve player performance, tailor better training programs, and potentially reduce injury rates in the long term. A current limitation of many of these systems is that they can require context-specific testing that impacts workflows for players, coaches, and trainers. Given the recent rise of wearable sensor technologies that track player movement in dynamic contexts, there is a clear opportunity to leverage new data streams to enhance this process.&#13;
&#13;
We present a novel system for automated force-velocity profiling using publicly available high-frequency tracking data of NFL players. We demonstrate that our derived force-velocity envelopes match observed position and player performance, and provide a proof of concept framework that would allow teams to leverage automated force-velocity profiling in their internal operations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capacity Management for Low Cost Storage</title>
<link href="https://hdl.handle.net/1721.1/139344" rel="alternate"/>
<author>
<name>Kahil, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/139344</id>
<updated>2022-01-15T03:01:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Capacity Management for Low Cost Storage
Kahil, Omar
The growth that Amazon is projecting has implications for the supply chain network, mainly increased capacity needs that can be addressed by adding Amazon Robotics (AR) Sortable Fulfillment Centers (FC), which require significant investment. High per-unit storage costs, driven by the technology used at the AR-Sortable FCs, along with stock keeping units (SKUs) with high dwell time, have created an opportunity to leverage low cost storage (LCS) nodes upstream of the FCs. These nodes reduce the number of required future AR-Sortable FCs, allowing for significant savings. &#13;
&#13;
Amazon conceived its first LCS node to address the challenge of high safety stock requirements and costly holding overheads. This solution proved that pooling excess inventory upstream improved turns at the FCs and reduced storage related fixed cost. The LCS node is now established for all excess inventory across imports and domestic retailing businesses which would provide opportunities for additional free cash flow savings. &#13;
&#13;
LCS receives inventory from three flows: an Asia Pacific consolidation node, a US consolidation node that processes overseas shipments, and domestic. LCS has been experiencing a high backlog, meaning trailers wait at LCS yards for prolonged periods before their freight is processed into the sites. A high backlog can cause added out-of-stock risk, carrier fees and disruptions, and units that dwell at the LCS for less than the period required to break even with the added processing cost at the sites. The backlog is driven by the fact that LCS nodes have instances where the amount of freight arriving exceeds what can be processed into the facilities. &#13;
&#13;
To support LCS in its capacity management efforts, this thesis explores the redirection of trailers from LCS nodes towards the fulfillment network during instances of high backlog. In addition, the effort includes balancing the backlog at LCS by setting processing capacity (i.e. mechanical and labor capability to transfer freight from trailers into facilities) constraints on incoming arcs into LCS nodes. This will contribute to achieving cost savings by prioritizing inventory that will spend the greater portion of its dwell time at the LCS nodes, and support the mitigation of out-of-stock risk by redirecting inventory with low excess coverage in the fulfillment network.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediating the Marginal: A Computational Analysis of Representational Hierarchies, Aesthetic Tourism, and Queer Imagination on Instagram</title>
<link href="https://hdl.handle.net/1721.1/139343" rel="alternate"/>
<author>
<name>Souza, Garrett</name>
</author>
<id>https://hdl.handle.net/1721.1/139343</id>
<updated>2022-01-15T03:52:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Mediating the Marginal: A Computational Analysis of Representational Hierarchies, Aesthetic Tourism, and Queer Imagination on Instagram
Souza, Garrett
Images are world-building technologies, engendering futurity through collective imagination. An ontological trace of visual culture positions media technologies as sites of both regulation of and resistance to racial, sexual, and gender norms. The rise of computational media and neoliberal sociopolitics has paradoxically both destabilized and bolstered visual hegemony, expanding Black and queer representation and visibility through a new vanguard of empowered visual creators, while also facilitating old traditions of oppression and co-option with an unprecedented precision, surveillance, and opacity. This project leverages a computational analysis of algorithmically curated imagery to situate Instagram within a lineage of technologies used to visually mediate marginality, particularly focusing on how race, gender, and sexuality are structured within hypersegregated queer spaces on Instagram. Analysis of skin tone presentations, emoji usage, and engagement metrics within the #gay search feed reveals a continued erasure of Blackness within mediated content, in tandem with widespread co-option of Black aesthetics. A coupled differential reading of dominant representational paradigms, hashtag usage, and normative generative modeling within the Explore feed of a gay-coded user further exposes the co-option of Black and queer aesthetics, as well as an overwhelming promotion of hypermasculine and homonormative content. These results suggest that, while contemporary visual power has certainly diffused to previously marginalized positionalities, this reallocation is contingent on market capital, assimilation to normative ideals, and continued marginality. Results are directed towards a discussion of how imagery, image-making, digital media technologies, and computation might be used in service of liberatory praxis.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future Flood Mitigation in Charlotte-Mecklenburg</title>
<link href="https://hdl.handle.net/1721.1/139342" rel="alternate"/>
<author>
<name>Sharma, Tanvi</name>
</author>
<id>https://hdl.handle.net/1721.1/139342</id>
<updated>2022-01-15T03:25:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Future Flood Mitigation in Charlotte-Mecklenburg
Sharma, Tanvi
This case study examines the successes of flood mitigation planning in Charlotte-Mecklenburg, beginning with their locally created future conditions flood risk maps, and followed by complementary risk reduction strategies informed by these maps. Charlotte-Mecklenburg’s future conditions maps, known locally as Community Maps, were created in 2000 after repeated flood losses in the region led residents and local officials to realize the need for better data to help “stop the bleeding.” This thesis takes a critical look at existing national level flood mitigation mapping and regulations, and compares them with Charlotte-Mecklenburg’s local strategies. It also looks at what ingredients have allowed Charlotte-Mecklenburg Storm Water Services to achieve this success and where there is still room for improvement. Finally, this paper offers lessons and recommendations for national policy as well as other local communities to help improve flood management across different levels of government.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A UHF Multimode Array Feed for the Westford Radio Telescope</title>
<link href="https://hdl.handle.net/1721.1/139341" rel="alternate"/>
<author>
<name>Sheen, Daniel B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139341</id>
<updated>2022-01-15T03:05:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A UHF Multimode Array Feed for the Westford Radio Telescope
Sheen, Daniel B.
A novel UHF Multimode Array Feed (MMAF) has been created for MIT’s Westford Radio Telescope. The MMAF is a dual-linear focal plane array installed concentrically with the existing QRFH feed on the telescope, enabling a combined system capable of transmitting or receiving on UHF radar and satcom frequencies simultaneously with reception from 2 GHz to 14 GHz. To demonstrate the capabilities of the MMAF, real-time calibration and beamforming techniques are developed, and the MMAF is used to adaptively steer the beam of the Westford Telescope to follow the downlink signal from MIT’s DeMi CubeSat while matching its received polarization and nulling interferers.&#13;
&#13;
To the author’s knowledge, the MMAF represents the first demonstrated use of a concentric eleven style feed topology to extend the capabilities of a preexisting antenna feed system. The design approach is not specific to Westford, and similar designs can be created to add low frequency coverage to any microwave feed. This brings the possibility of extending the operating frequency range of many other large antenna systems currently in operation worldwide.&#13;
&#13;
Further, the MMAF represents a first step towards a new class of ultra-wideband hybrid feed combining log-periodic elements with a multimodal waveguide feed such as a QRFH. To this end, a possible approach to design a UWB-MMAF using metamaterials is suggested.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing is the Cure: Renter Insecurity in Boston During the COVID-19 Pandemic</title>
<link href="https://hdl.handle.net/1721.1/139340" rel="alternate"/>
<author>
<name>Walker, Ben</name>
</author>
<id>https://hdl.handle.net/1721.1/139340</id>
<updated>2022-01-15T04:03:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Housing is the Cure: Renter Insecurity in Boston During the COVID-19 Pandemic
Walker, Ben
The COVID-19 pandemic triggered a crisis of housing insecurity for Black, Latinx and immigrant Boston renters. This crisis magnified existing dynamics of systemic racial exploitation. It was also tempered by unprecedented expansions of renter protections across local, state, and federal levels of government, secured by the tenacious organizing of the housing justice movement. Given extreme levels of need and new renter protections, this thesis asks: how closely did patterns of rental housing insecurity during the first year of the COVID-19 pandemic follow previous racial disparities experienced by Boston renters? Using eviction records from Eastern Housing Court from March 2020 to March 2021, records of housing quality issues reported to the City of Boston’s 311 call center, and renter testimony gathered through surveying, this research assesses the extent to which COVID-19’s cumulative effects transformed tenant relationships with landlords, neighbors, and government. It finds that Boston’s communities of color continued to disproportionately experience common indicators of housing insecurity, though less frequently and perhaps to a lesser degree than before COVID-19. These findings demonstrate the need to expand, enhance, and solidify vigorous renter protections. Doing so, I argue, will begin to abate the deep housing insecurity experienced by Black and Brown renters in Boston.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging spatial relationships and visualization to&#13;
improve public transit performance analysis</title>
<link href="https://hdl.handle.net/1721.1/139338" rel="alternate"/>
<author>
<name>Caros, Nicholas S.</name>
</author>
<id>https://hdl.handle.net/1721.1/139338</id>
<updated>2022-01-15T03:52:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Leveraging spatial relationships and visualization to&#13;
improve public transit performance analysis
Caros, Nicholas S.
Public transit agencies collect a tremendous amount of data in order to measure bus performance, including vehicle positions and passenger counts. These data are typically organized in the same way that the network is organized: split up by route, with each route further divided into stop-to-stop segments or timepoint-to-timepoint segments. This type of structure suffers from three drawbacks. First, it does not capture the spatial relationships between different routes. Second, route and stop identifiers are arbitrary and change over time, making it difficult to compare performance across time periods. Finally, even the stop-to-stop resolution is insufficient for certain applications, such as planning for transit priority infrastructure at the block or intersection level. &#13;
&#13;
This thesis addresses those three issues by developing new practical methods that incorporate the geography of a transit network into bus performance measurement and analysis. These tools can be used to automate transit planning tasks that have typically involved specialized knowledge and considerable manual effort. Furthermore, each of the methods is intentionally designed to support visualization of performance data as an alternative to tabular representation in order to facilitate the identification of spatial patterns. A total of eight case studies, developed in concert with transit agency staff, are included to demonstrate how spatial analysis and visualization can address real transit planning challenges. &#13;
&#13;
First, a map matching algorithm is described that facilitates the identification and classification of corridors served by multiple bus routes. A framework is then established for systematically aggregating different types of performance metrics across parallel routes, even if those performance metrics are only available at the stop-to-stop segment level. Case studies show how corridor identification and performance aggregation can be used to improve transit priority infrastructure planning, schedule coordination for parallel routes, and balancing service in local-express corridors. &#13;
&#13;
Next, a method for increasing the resolution of performance data to the block-to-block level is proposed. Stop-to-stop segments are split at intersections and bus stops to create a unit of analysis that experiences uniform transit service across its length. Performance measures are then assigned to the block-level segments, eliminating the dependence on arbitrary and fungible identifiers. This geography-based representation enables longitudinal comparison of performance that automatically captures changes in transit service as well as route and stop numbers. Two case studies demonstrate how these methods can be used to map the evolution of bus networks and ridership over many years. &#13;
&#13;
Finally, a process is developed for extending the previous methods to include origin-destination (OD) estimates, enabling the spatial analysis and visualization of passenger journeys throughout the transit network. It also allows OD-based performance metrics that are not available from traditional sources to be assigned to block-to-block segments for longitudinal comparison. Case studies illustrate the strength of these methods in mapping the journeys of passengers whose trips involve a transfer, and for exploring travel pattern changes before and after route modifications. &#13;
&#13;
Future research in this area includes the development of new bus performance metrics that incorporate spatial relationships between bus routes. Richer data sources, such as vehicle positions with a high sampling rate, could be leveraged to visualize travel speeds at the block-to-block level. Another research area is facilitated by the longitudinal analysis methods in this thesis: the ability to test theories related to bus network evolution over time. Finally, these methods lay the foundation for future research into new transit planning tools for strategic restoration of bus service after the COVID-19 pandemic.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Co-evolutionary Information to Improve Protein Language Modelling</title>
<link href="https://hdl.handle.net/1721.1/139337" rel="alternate"/>
<author>
<name>Ram, Soumya</name>
</author>
<id>https://hdl.handle.net/1721.1/139337</id>
<updated>2022-01-15T03:04:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Using Co-evolutionary Information to Improve Protein Language Modelling
Ram, Soumya
Protein engineering has the potential to solve complex global problems in medicine, clean energy, and manufacturing. However, current protein engineering efforts are hampered by a lack of supervised data. We help rectify this issue by developing supervised models that perform well in data-constrained settings by generalizing across protein engineering tasks and better incorporating coevolutionary and structural information. We also develop an unsupervised language model that conditions the target sequence on its multiple sequence alignment, allowing us to better model protein families.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Codon: A Framework for Pythonic Domain-Specific Languages</title>
<link href="https://hdl.handle.net/1721.1/139336" rel="alternate"/>
<author>
<name>Ramirez, Gabriel L.</name>
</author>
<id>https://hdl.handle.net/1721.1/139336</id>
<updated>2022-01-15T03:14:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Codon: A Framework for Pythonic Domain-Specific Languages
Ramirez, Gabriel L.
Static languages like C++ provide deep compiler support for optimization and analysis, enabling high performance at the cost of burdening users with a low-level interface. DSLs, or domain-specific languages, have traditionally complemented static languages by adding custom idioms and optimizations in their particular areas. Unfortunately, however, static languages are increasingly sidelined in many domains by dynamic languages such as Python and Ruby, which, despite their flexibility, suffer from lower performance. In this thesis, we propose Codon, a framework for implementing high-performance DSLs based on Python. Whereas achieving high performance in dynamic languages has previously been difficult, often requiring separate compiled libraries, we provide a robust foundation for analysis and optimization on a purely Pythonic base, bridging the gap between dynamic and static languages. By combining a purpose-built type checker and novel intermediate representation, Codon enables developers to create intrinsically performant, modular DSLs with minimal implementation effort. We validate this approach by showcasing several Codon DSLs, all of which achieve sizeable speed-ups over Python and sometimes even C++. We further show that Codon can be easily used to implement a variety of analyses and passes. Thus, Codon enables a new class of DSLs that maintain dynamism and expressiveness, without compromising performance.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards the Development of an Adaptive Rehabilitative Device</title>
<link href="https://hdl.handle.net/1721.1/139335" rel="alternate"/>
<author>
<name>Shiozawa, Kaymie S. (Kaymie Sato-Hayashi-Kagawa)</name>
</author>
<id>https://hdl.handle.net/1721.1/139335</id>
<updated>2025-11-17T15:19:52Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Towards the Development of an Adaptive Rehabilitative Device
Shiozawa, Kaymie S. (Kaymie Sato-Hayashi-Kagawa)
Balance impairments severely affect the health and well-being of patients across multiple population groups. The most common treatment compensates for impaired balance by prescribing canes, which increase the base of support, reduce paretic-limb load, and provide somatosensory feedback. However, to improve the quality of life of patients, there is a need to develop a walk-aid that can actively improve the user’s balance. In upper-limb rehabilitation, robot-aided therapy has been shown to accelerate the recovery of the hemiparetic arm in stroke patients. The device’s embedded performance-based impedance control algorithm adjusts the support it provides a patient according to their ability, weaning them off dependence.&#13;
&#13;
Deploying the promising potential of robot-aided therapy to address the challenge of improving balance in impaired subjects, this study proposes the development of a variable impedance cane that progressively reduces the level of assistance it provides as user performance improves, encouraging unaided balance. To inform the design of this device, this study explored an experimental procedure and a mathematical model that advance our understanding of human balance. Potential adaptive mechanisms and a control feedback loop structure for the device were also proposed.&#13;
&#13;
An instrumented cane that measured load, grip pressure, and cane motion was developed and shown to be capable of measuring user balance performance in a pilot human subject study. The mathematical model successfully quantified neural strategies that humans may employ under various balance conditions and distinguished their effects from biomechanical ones. Finally, prototypes of several adaptive impedance mechanisms, along with their design specifications, were proposed. These results serve as a foundation for the future development of an intelligent, adaptive walk-aid that will improve unaided balance in impaired subjects.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-resolution modeling of a discrete stochastic process identifies causes of cancer</title>
<link href="https://hdl.handle.net/1721.1/139334" rel="alternate"/>
<author>
<name>Yaari, Adam Uri</name>
</author>
<id>https://hdl.handle.net/1721.1/139334</id>
<updated>2022-01-15T04:08:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multi-resolution modeling of a discrete stochastic process identifies causes of cancer
Yaari, Adam Uri
Detection of cancer-causing mutations within the vast and mostly unexplored human genome is a major challenge. Doing so requires modeling the background mutation rate, a highly non-stationary stochastic process, across regions of interest varying in size from one to millions of positions. Here, we present the split-Poisson-Gamma (SPG) distribution, an extension of the classical Poisson-Gamma formulation, to model a discrete stochastic process at multiple resolutions. We demonstrate that the probability model has a closed-form posterior, enabling efficient and accurate linear-time prediction over any length scale after the parameters of the model have been inferred a single time. We apply our framework to model mutation rates in tumors and show that model parameters can be accurately inferred from high-dimensional epigenetic data using a convolutional neural network, Gaussian process, and maximum-likelihood estimation. Our method is both more accurate and more efficient than existing models over a large range of length scales. We demonstrate the usefulness of multi-resolution modeling by detecting genomic elements that drive tumor emergence and are of vastly differing sizes.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning-Augmented Algorithms</title>
<link href="https://hdl.handle.net/1721.1/139333" rel="alternate"/>
<author>
<name>Silwal, Sandeep</name>
</author>
<id>https://hdl.handle.net/1721.1/139333</id>
<updated>2022-01-15T03:08:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Learning-Augmented Algorithms
Silwal, Sandeep
Traditional worst case analysis of algorithms does not fully capture real world behavior in many instances. Inspired by the great success of machine learning algorithms for various practical tasks, there has been recent interest in moving beyond pessimistic analysis of algorithms through the use of additional learned information.&#13;
&#13;
In this thesis, we consider further applications of this “learning-augmented” framework for three classical algorithmic problems: &#119896;-means clustering, counting triangles in a graph stream, and estimating the support of a discrete distribution.&#13;
&#13;
The problems we study are fundamental in their own right; clustering is typically one of the first methods used to understand the structure of large datasets, and &#119896;-means is by far the most popular clustering formulation. In addition, counting triangles in a graph is a basic tool of network analytics and community detection in social networks. Lastly, the problem of estimating the number of distinct elements in a large data set (or, equivalently, the support size of the distribution induced by the data set) from a random sample of its elements occurs in many applications, including biology, genomics, computer systems, and linguistics.&#13;
&#13;
In each of these applications, we design algorithms that use predictors (that are based, e.g., on prior instances of the problem) which provide structural information about the inputs. Our theoretical analysis shows that such information can indeed be leveraged to overcome worst case barriers. In addition, we also show that such predictors can be implemented in practice and our algorithms are evaluated on real world datasets. Our experiments demonstrate substantial improvements in the performance compared to prior state-of-the-art algorithms that do not employ any learned information.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen-Induced Transformations in Metastable High Entropy Alloys</title>
<link href="https://hdl.handle.net/1721.1/139329" rel="alternate"/>
<author>
<name>Ronchi, Maria R.</name>
</author>
<id>https://hdl.handle.net/1721.1/139329</id>
<updated>2022-01-15T03:49:48Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Hydrogen-Induced Transformations in Metastable High Entropy Alloys
Ronchi, Maria R.
Hydrogen embrittlement (HE) presents a critical challenge to the application of structural alloys in hydrogen (H) environments. Recently, the development of high-entropy alloys (HEAs) has opened a new avenue for alloy design against HE: not only do some HEAs exhibit resistance to HE, but the immense composition spaces associated with these alloys provide endless prospects for tuning composition and the corresponding mechanical behavior. In particular, metastable alloys—those that exhibit a mechanically induced austenite-to-martensite phase transformation—present an interesting opportunity for HE resistance, where the toughening mechanisms associated with this transformation could counter HE effects under the right conditions. One alloy system, FeMnCoCr, has previously been shown to include metastable alloys of special interest due to the high tunability of deformation mechanisms with respect to composition. For example, tuning just the Mn content enables switching between dislocation slip, twinning, and martensite transformation mechanisms. Thus, in this work, we further explore alloys in the FeMnCoCr system to discover H effects and their interactions with metastability.&#13;
&#13;
In the first part of this work, we explore H-induced transformations in one metastable alloy, Fe₄₅Mn₃₅Co₁₀Cr₁₀. To this end, we electrochemically introduce H to the samples, quantify the hydrogen evolution by thermal desorption spectroscopy, and observe microstructural transformations by scanning electron microscopy techniques. Through these analyses, we find that the hydrogen induces ε-martensite that preferentially forms in &lt;101&gt; and &lt;111&gt; oriented grains and along Σ3 coincident site lattice boundaries. Further addition of hydrogen induces extension twinning within the martensite. We examine the microstructural factors influencing these transformations to better understand the hydrogen-microstructure interactions.&#13;
&#13;
In the second part of this work, we address the compositional complexity of the FeMnCoCr-H system by developing a method to efficiently screen this composition space for interactions between H and metastability. We apply this method to Fe₈₈₋ₓ₋ᵧMn₁₂CoₓCrᵧ alloys, with a focus on microstructure and H effects. To this end, we first select three alloys using predictions from Thermo-Calc, then produce these alloys by suction casting and apply three thermo-mechanical treatment routes to further vary microstructure. Indentation and scanning electron microscopy are employed to screen for deformation mechanisms and cracking. We identify two particular samples which exhibit extreme cases of indentation response and can provide a starting point for future iterations of this investigation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi Array, Conformable Ultrasound Patch for Soft Tissue Imaging</title>
<link href="https://hdl.handle.net/1721.1/139326" rel="alternate"/>
<author>
<name>Mejorado III, David</name>
</author>
<id>https://hdl.handle.net/1721.1/139326</id>
<updated>2022-01-15T03:14:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multi Array, Conformable Ultrasound Patch for Soft Tissue Imaging
Mejorado III, David
Medical ultrasound imaging is a rapidly growing field due to its safety and affordability compared to other imaging modalities such as MRI and X-ray. Ultrasound transducers are made in various configurations, from dense 2-D arrays for volumetric imaging to single-element, needle-like transducers for intravascular imaging. The performance of an ultrasound transducer is dictated by the properties of the piezoelectric material being used. One current drawback of ultrasound is that it is heavily operator dependent. This has motivated research into developing ultrasound systems that are conformable to the body and capable of obtaining relevant information without the need for an operator. This thesis explores the design and fabrication of a locally rigid, globally flexible ultrasound patch using novel piezoelectric ceramics. The ceramics are fabricated into 64-element linear array transducers, and their electrical impedance and acoustic properties are characterized. Individual transducers and groups of transducers are then used to image flat and curved ultrasound imaging phantoms via the Verasonics system.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Robust Terrain-Aware Locomotion</title>
<link href="https://hdl.handle.net/1721.1/139325" rel="alternate"/>
<author>
<name>Margolis, Gabriel B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139325</id>
<updated>2022-01-15T03:01:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Learning Robust Terrain-Aware Locomotion
Margolis, Gabriel B.
Today’s robotic quadruped systems can walk over a diverse set of natural and complex terrains. Approaches to locomotion based on model-based feedback control are robust to perturbations but cannot easily incorporate visual terrain information. Meanwhile, approaches to locomotion based on learning excel at associating visual sensory data with suitable control policies but often fail to generalize across the gap between simulation and deployment settings. This thesis proposes a trajectory-based abstraction for locomotion through which model-free and model-based control layers interface. This approach enables general visually guided locomotion while preserving robustness. We demonstrate that our proposed architecture allows the Mini Cheetah quadruped to match theoretical performance limits in a set of visual tasks. The robustness and practicality afforded by our approach are demonstrated through evaluation on hardware.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synergistic coordination of oxygen functional groups with catalyst surface promotes hydrogenolysis of lignin model compounds</title>
<link href="https://hdl.handle.net/1721.1/139323" rel="alternate"/>
<author>
<name>Phillips, Amber K.</name>
</author>
<id>https://hdl.handle.net/1721.1/139323</id>
<updated>2022-01-15T03:01:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Synergistic coordination of oxygen functional groups with catalyst surface promotes hydrogenolysis of lignin model compounds
Phillips, Amber K.
The development of alternatives to petroleum-derived fuels and chemicals will be essential to curb greenhouse gas emissions and ensure a sustainable future. Lignocellulosic biomass is a promising alternative source for producing drop-in substitutes for fuels and chemicals. It is nonedible, carbon neutral, and renewable by nature. While the carbohydrate fraction of lignocellulosic biomass is currently used industrially, the third fraction, lignin, is severely underutilized. Lignin is a complex, irregular, recalcitrant biopolymer, making upgrading challenging. Recent research led to the development of reductive catalytic fractionation (RCF), a lignin-first approach to lignin extraction and depolymerization. While this technology is promising, little is understood about the fundamental chemistry of the process. This work aims to provide insight into the kinetics and mechanisms of the catalytic hydrogenolysis step of the RCF process to enable optimization of the process as a whole. We used synthetic archetype lignin model compounds to investigate the interactions between lignin substrates and the catalyst surface. Our study suggests lignin substrates bind to the catalyst surface through coordination of multiple oxygen groups on the substrate and through the formation of Pd–O bonds. We found two parallel reaction pathways during lignin RCF. The desired pathway is the β–O–4 cleavage reaction, with an activation barrier of 75–85 kJ/mol. The second pathway is reduction of the α–OH group, with an activation barrier of 60–70 kJ/mol. Reduction of the α–OH group shuts down the potential for the β–O–4 cleavage reaction. However, alkoxylation at the α position, a common reaction during organosolv lignin extraction, allows the β–O–4 reaction to proceed, indicating that the presence of an α–O functional group in the substrate is essential to facilitate β–O–4 cleavage. 
While not essential, the phenolic group in the lignin substrate enhances the β–O–4 cleavage turnover frequency (TOF) by approximately 10x. By measuring the kinetics of a lignin trimer model compound, we found the β–O–4 cleavage in a lignin polymer to be a sequential reaction, starting at the polymer's phenolic end and moving inwards. This work highlights the critical functional groups of a lignin polymer and their interactions with the catalyst surface that enable hydrogenolysis reactions during lignin RCF.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>First Principles of Line Drawings</title>
<link href="https://hdl.handle.net/1721.1/139322" rel="alternate"/>
<author>
<name>Chan, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/139322</id>
<updated>2022-01-15T03:01:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">First Principles of Line Drawings
Chan, Caroline
This thesis presents an unsupervised method for creating line drawings from photographs or 3D models. Current methods often rely on high-quality paired datasets to automate the creation of line drawings. We observe that line drawings are encodings of scene information that convey 3D shape and semantic meaning. We bake these observations into a set of first-principles objectives and train an image translation network to map 3D objects into line drawings. We also explore the generation of new styles of line drawings through a novel style confusion loss, which averages and combines elements from different styles in a structured manner. User studies and quantitative experiments validate that our method encodes geometry and semantic information into line drawings and improves overall drawing quality.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real Estate Distress on College Campuses: Case Study on Liquidity through Public Private Partnerships and Portfolio Right-Sizing</title>
<link href="https://hdl.handle.net/1721.1/139319" rel="alternate"/>
<author>
<name>Maroti, David</name>
</author>
<id>https://hdl.handle.net/1721.1/139319</id>
<updated>2022-01-15T03:34:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Real Estate Distress on College Campuses: Case Study on Liquidity through Public Private Partnerships and Portfolio Right-Sizing
Maroti, David
The impact of COVID-19 will have a long-run effect on our institutions and academic facilities. How universities adjust their portfolios of real estate assets is a major determinant of their long-term success. This thesis seeks to establish an overview of the financial distress faced by colleges and universities in light of the pandemic, along with a review of the hardest-hit universities. In light of expected closures in the near future, this thesis uses the potential acquisition of a closed campus in the Philadelphia CBD market as a feasibility test of an investment thesis on forced-seller distress.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Deep Learning Based Real-time Pedestrian Recognition System</title>
<link href="https://hdl.handle.net/1721.1/139317" rel="alternate"/>
<author>
<name>Sun, Tao</name>
</author>
<id>https://hdl.handle.net/1721.1/139317</id>
<updated>2022-01-15T03:05:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Deep Learning Based Real-time Pedestrian Recognition System
Sun, Tao
This research project aims to develop an automated recognition system to understand the behavior of pedestrians in public videos. Such behavior prediction is helpful in a wide variety of security applications, such as loss prevention, missing people identification, etc. A deep learning-based detection framework and customized feature identification algorithms are combined to address this technical challenge. Our system is scalable to a large number of video feeds.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting the influence of stakeholders' mental models on emergent collective awareness in instrumented teamwork workshops</title>
<link href="https://hdl.handle.net/1721.1/139316" rel="alternate"/>
<author>
<name>McDonough, Kevin P.</name>
</author>
<id>https://hdl.handle.net/1721.1/139316</id>
<updated>2022-01-15T03:29:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Detecting the influence of stakeholders' mental models on emergent collective awareness in instrumented teamwork workshops
McDonough, Kevin P.
The use of models to represent, investigate, and explain the world extends across disciplines, professions, and walks of life. From supporting learning in the classroom, to aiding organizational decision making, to influencing people’s daily lives by informing them about the weather, patterns of disease spread, and climate change, models pervade our lives. In engineering, models provide a mechanism through which teams may organize, align, and share knowledge, communicate across roles and domains of expertise, and develop new insights. Models enable stakeholders to learn and make more informed decisions in the face of complexity and uncertainty. While the value of models for representing complex sociotechnical systems-of-systems has been demonstrated, what is less well known is how a user’s knowledge and perceptions about the systems being represented mediate the use and efficacy of the models. This work explores one aspect of this phenomenon ― how the diversity of a team’s mental models affects, and is affected by, their use of a system model. A series of instrumented team experiments was designed, and a teamwork research platform, using an agent-based modeling and simulation framework, was created. A series of three instrumented teamwork workshops was conducted. Twelve teams participated in the workshops, role-playing as expert stakeholders in the exploration of options for population, transportation, and function in site designs for a conceptual human settlement on Mars. A diversity in mental models distinguishable from postulated generative distributions was detected. This work demonstrates the use of instrumented methods to detect, quantify, and analyze mental models and tradespace exploration by users of a system model.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Engineering Systems Approach to Production Planning of Optical Systems</title>
<link href="https://hdl.handle.net/1721.1/139315" rel="alternate"/>
<author>
<name>Hui, Henry A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139315</id>
<updated>2022-01-15T03:23:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Engineering Systems Approach to Production Planning of Optical Systems
Hui, Henry A.
Production enterprises are continuously presented with investment opportunities to improve their production systems through the adoption of new technologies. When presented with these opportunities, enterprise leaders are faced with decisions that can have profound viability implications. A value-centric framework to holistically and systematically inform and support decisions is needed.&#13;
&#13;
This research introduces such a strategic framework based on Engineering Systems principles and methods. Within this framework, Enterprise System Architecting and Technology Roadmapping methods have been adapted for production systems to identify stakeholder value, create investment scenarios as project portfolios, assess performance using Discrete Event Simulation (DES) and technical modeling, and visualize options using tradespace plots.&#13;
&#13;
A case study, involving a representation of the National Ignition Facility’s (NIF) Optics Recycle Loop production system, is explored to demonstrate the framework process steps as a guide for its application. Each of the process steps is applied to the optics production enterprise resulting in a recommended project portfolio spanning 20 years. The baseline DES simulation predicts a throughput of 129 optics/month. After investments into debottlenecking, this was boosted to 261 optics/month. In addition, product performance forecasting predicts that after product improvement investments, optics damage threshold - a key product performance figure of merit - will improve by 67%. By using the new strategic framework, production enterprises can make decisions based on projected present and future value.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architectural Support for Effective Data Compression In Irregular Applications</title>
<link href="https://hdl.handle.net/1721.1/139314" rel="alternate"/>
<author>
<name>Yang, Yifan</name>
</author>
<id>https://hdl.handle.net/1721.1/139314</id>
<updated>2022-01-15T04:07:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Architectural Support for Effective Data Compression In Irregular Applications
Yang, Yifan
Irregular applications, such as graph analytics and sparse linear algebra, exhibit frequent indirect, data-dependent accesses to single or short sequences of elements that cause high main memory traffic and limit performance. Data compression is a promising way to accelerate irregular applications by reducing memory traffic. However, software compression adds substantial overheads, and prior hardware compression techniques work poorly on the complex access patterns of irregular applications.&#13;
&#13;
This thesis proposes SpZip, an architectural approach that makes data compression practical for irregular algorithms. SpZip accelerates the traversal, decompression, and compression of the data structures used by irregular applications. In addition, these activities run in a decoupled fashion, hiding both memory access and decompression latencies. To support the wide range of access patterns in these applications, SpZip is programmable and uses a novel Dataflow Configuration Language to specify programs that traverse and generate compressed data. Our SpZip implementation leverages dataflow execution and time-multiplexing to implement programmability cheaply. We evaluate SpZip on a simulated multicore system running a broad set of graph and linear algebra algorithms. SpZip outperforms prior state-of-the-art software-only (hardware-accelerated) systems by gmean 3.0x (1.5x) and reduces memory traffic by 1.7x (1.4x). These benefits stem both from reducing data movement through compression and from offloading expensive traversal and (de)compression operations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collecting Ideals: Re-envisioning ejidos as climate-action platforms</title>
<link href="https://hdl.handle.net/1721.1/139313" rel="alternate"/>
<author>
<name>Meouchi Vélez, Luis Alberto</name>
</author>
<id>https://hdl.handle.net/1721.1/139313</id>
<updated>2022-01-15T03:46:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Collecting Ideals: Re-envisioning ejidos as climate-action platforms
Meouchi Vélez, Luis Alberto
Established by constitutional decree in 1917, ejidos were considered one of the most successful outcomes of the Mexican Revolution's fight to redistribute land back to the indigenous populations in a collective land tenure system, or 'commons.' After decades of operation in which neoliberal critics claimed that ejidatarios were insufficiently productive, the Mexican authorities reformed the constitution to allow privatization of ejidal lands. The 1992 NAFTA agreement further incentivized the commodification of such lands, and many ejidos were dismantled or transformed into private property.&#13;
&#13;
While ejidos have been studied by many disciplines, from agrarian law and socio-economics to ethnography, urban scholars who have examined their impact on urbanization have focused primarily on ejidos on the periphery of large cities, arguing that ejidal transformation is a key determinant of urban sprawl and intensifying metropolitan inequality. In "Collecting Ideals: Re-envisioning ejidos as climate-action platforms," I argue that ejidos have played, and are still playing, a major role in the urbanization and development of more rural settings in Mexico, particularly in regions with small towns. I further argue that ejidal dynamics in such regions have their own peculiarities -- particularly in terms of the potential impacts of ejidal privatization on the natural and built environment -- and thus that urban designers and planners need special tools to manage and guide the impact of ejidal production on urbanization in such settings.&#13;
&#13;
More specifically, I hypothesize that ejidos -- which still comprise 52% of Mexico's land -- could play a major role in Mexico's fight to confront climate change in the twenty-first century, in a manner that is fair and equitable to their common owners, particularly if the equation of water supply is solved. To support this claim, my thesis uses mapping as a critical device to first spatialize and visualize the different outcomes of ejido privatization. Using the case of Apan, Hidalgo -- in the Pachuca sub-basin region -- I propose a series of measures to guide ejidal development in quasi-rural settings. After developing the Latourian concept of a critical zone to guide such processes, I propose the development of a "common platform" for stakeholder engagement that could help visualize different scenarios and accommodate common interests to ensure water sovereignty for all.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using High-Performance Computing to Scale Generative Adversarial Networks</title>
<link href="https://hdl.handle.net/1721.1/139311" rel="alternate"/>
<author>
<name>Flores, Diana J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139311</id>
<updated>2022-01-15T04:04:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Using High-Performance Computing to Scale Generative Adversarial Networks
Flores, Diana J.
Generative adversarial networks (GANs) are methods that can be used for data augmentation, which helps in creating better detection models for rare or imbalanced datasets. They can be difficult to train due to issues such as mode collapse. We aim to improve the performance and accuracy of the Lipizzaner GAN framework by taking advantage of its distributed nature and running it at very large scales. Lipizzaner was implemented for robustness but has not been tested at scale on high-performance computing (HPC) systems. We believe that by utilizing HPC technologies, we can scale up Lipizzaner and observe performance enhancements. This thesis achieves this scale-up using Oak Ridge National Laboratory’s Summit supercomputer. We observed improvements in the performance of Lipizzaner, especially when run with poorer network architectures, which implies Lipizzaner is able to overcome network limitations through scale.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Texture-Informed Approach for Hurricane Loss Estimation: How Discounting Neighborhood Texture Leads to Under-Valuing Wind Mitigation</title>
<link href="https://hdl.handle.net/1721.1/139310" rel="alternate"/>
<author>
<name>Manav, Ipek Bensu</name>
</author>
<id>https://hdl.handle.net/1721.1/139310</id>
<updated>2022-01-15T03:01:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Texture-Informed Approach for Hurricane Loss Estimation: How Discounting Neighborhood Texture Leads to Under-Valuing Wind Mitigation
Manav, Ipek Bensu
The focus of emergency management is shifting from response and recovery to pre-disaster mitigation. A grand challenge in championing this shift is effectively communicating natural hazard risks and the value of mitigating structures (to reduce those risks). Present tools for loss estimation overlook building-level variations in wind loading induced by the configuration of surrounding buildings, called neighborhood texture. By doing so, such tools underestimate expected wind-related losses and under-value wind mitigation – significantly so in densely built-up areas susceptible to adverse texture effects. In this thesis, those texture effects are incorporated into a widely recognized loss estimation framework. The impacts of local texture on the recurrence of wind loads on structures are approximated. In the case study, the benefits of mitigating are re-evaluated for the residential building stock of the hurricane-prone state of Florida – with a focus on five densely populated counties representing a range of exposure to wind-related hazards. Each home is individually assessed, with its prevailing local texture evaluated and its occupancy and building characteristics probabilistically assigned. Mitigation measures considered include shutters, straps, and tie-downs. For these mitigation measures, the model results yield annualized benefits of $8.1 billion statewide (80% higher than conventional estimates), ranging from $2.0 billion in Miami-Dade County to $56 million in Duval County (respectively, 90% and 100% higher than conventional estimates).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Visual Accessibility Options to Empower Grade School Students in Designing Inclusive Mobile Applications</title>
<link href="https://hdl.handle.net/1721.1/139309" rel="alternate"/>
<author>
<name>Dunand, Murielle</name>
</author>
<id>https://hdl.handle.net/1721.1/139309</id>
<updated>2022-01-15T03:20:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Developing Visual Accessibility Options to Empower Grade School Students in Designing Inclusive Mobile Applications
Dunand, Murielle
As technology becomes more available to the general public, it is important that it be as accessible as possible. Accessibility for mobile apps is crucial given how pervasive smartphone use has become. For students who use MIT App Inventor -- an online platform that enables users to make their own mobile apps -- part of being an effective app designer is to appreciate the importance and practice of inclusive design.&#13;
&#13;
I have empowered young students to make their apps more visually accessible by: (1) offering new options for larger text, higher contrast, and alternate text in App Inventor apps, (2) creating a curriculum for students aged 13-18 about the principles of visually accessible design, and (3) running the workshop three times and collecting student feedback on the curriculum. After the workshop, students reported a more accurate understanding of the nature of low vision as well as increased comfort with making visually accessible apps. Overall, this work has shown that it is both simple and effective to teach the principles of accessible design to students as young as middle-school age. The code changes have been added to MIT App Inventor and the curriculum is available on the App Inventor website.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lenticular Objects: 3D Printed Objects with Lenticular Lens Surfaces That Can Change their Appearance Depending on the Viewpoint</title>
<link href="https://hdl.handle.net/1721.1/139308" rel="alternate"/>
<author>
<name>Zhu, Yunyi</name>
</author>
<id>https://hdl.handle.net/1721.1/139308</id>
<updated>2022-01-15T03:04:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Lenticular Objects: 3D Printed Objects with Lenticular Lens Surfaces That Can Change their Appearance Depending on the Viewpoint
Zhu, Yunyi
This thesis describes a method that makes 3D objects appear differently under different viewpoints. We accomplished this by 3D printing lenticular lenses across the curved surface of objects. By calculating the lens distribution and the corresponding surface color patterns, we can determine which appearance is shown to the user at each viewpoint.&#13;
&#13;
We built a 3D editor that takes as input the 3D model and the visual appearances, i.e., the images to show at different viewpoints. Our editor then calculates the corresponding lens placement and the underlying color patterns. On export, the user can use ray tracing to live-preview the resulting appearance from multiple angles. The 3D model, color pattern, and lenses are then 3D printed in one pass on a multi-material 3D printer to create the final 3D object.&#13;
&#13;
To determine the best fabrication parameters for 3D printing lenses, we printed lenses of different sizes and tested various post-processing techniques. To support a large number of different appearances, we compute the lens geometry that supports a large number of viewpoints while protruding least from the object geometry. Finally, we demonstrate our system in practice with a range of use cases for which we show the simulated and physical results side by side.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Unlabeled Data in Supervised Learning to Objectively Assess Depression</title>
<link href="https://hdl.handle.net/1721.1/139306" rel="alternate"/>
<author>
<name>Bhathena, Darian</name>
</author>
<id>https://hdl.handle.net/1721.1/139306</id>
<updated>2022-01-15T03:15:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Leveraging Unlabeled Data in Supervised Learning to Objectively Assess Depression
Bhathena, Darian
Depression is the leading cause of disability, affecting over 250 million people worldwide [34]. Major Depressive Disorder, or MDD, is difficult to assess and diagnose due to lack of resources and the personal, private nature of the disease, which has symptoms that can vary greatly on a patient-by-patient basis [2]. Even so, the current standard methods of assessing depression are subjective and outdated, consisting of surveys and questionnaires first developed 60 years ago [21]. As part of a recent clinical study, data from wearable sensors, smartphones, and surveys were collected from a number of participants, and used to train classical machine learning models aimed at assessing depression. In this thesis, those methods are expanded upon with the intent of improving them, with varied success. Investigations conducted include training a small neural network on the same data, training a Multimodal Autoencoder on additional unlabeled data, and concatenating time features to utilize temporal information.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Neurosymbolic Approach to Abstraction and Reasoning</title>
<link href="https://hdl.handle.net/1721.1/139305" rel="alternate"/>
<author>
<name>Alford, Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/139305</id>
<updated>2022-01-15T03:59:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Neurosymbolic Approach to Abstraction and Reasoning
Alford, Simon
Current deep learning systems are highly specialized to whatever task they are designed to solve. Their application to more general domains is limited by their inability to form explicit, systematic knowledge and reason over it. Such an ability would be required for a machine to, for instance, rediscover the scientific method, and use this method to learn new things. This thesis attempts to make progress on this front by developing an approach for the Abstraction and Reasoning Corpus (ARC), an artificial intelligence benchmark consisting of a set of few-shot visual reasoning tasks, which measures the ability of an agent to solve tasks beyond those specified by the developer. We present two approaches that address the challenges posed by ARC. First, we give an approach for learning abstractions on ARC. We apply a program synthesis system called DreamCoder to create symbolic abstractions out of the solutions of tasks solved so far. These abstractions enable the solving of progressively more difficult ARC tasks. Second, we design a reasoning algorithm for ARC motivated by the way humans approach solving ARC tasks. Our algorithm combines execution-guided program synthesis with deductive reasoning based on inverse semantics, enabling a bidirectional, execution-guided program synthesis algorithm for solving ARC tasks. Despite difficulty in ultimately achieving high performance on ARC, we believe the approach is a firm basis for a learning-based search algorithm for ARC, especially compared to existing brute-force approaches. We additionally evaluate the bidirectional algorithm on a set of “24 Game” style math puzzles. We conclude by discussing how these two approaches can be combined, as well as future research directions relevant to ARC and AI in general.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and optimization of photoredox-mediated methine epimerization</title>
<link href="https://hdl.handle.net/1721.1/139303" rel="alternate"/>
<author>
<name>Wang, Kathleen J</name>
</author>
<id>https://hdl.handle.net/1721.1/139303</id>
<updated>2022-01-15T04:07:25Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Development and optimization of photoredox-mediated methine epimerization
Wang, Kathleen J
Reactions that edit the stereochemistry of individual atoms are enabling tools in synthetic chemistry. The Wendlandt lab has developed a sequential H-atom abstraction and donation strategy to synthesize rare sugars from biomass platform molecules such as D-glucose. Efforts to apply this strategy to the epimerization of steroids gave poor results and demonstrated limited selectivity, but insights gained from this work were used to develop a hypothesis for epimerization. We proposed that H-atom abstraction and donation occur to release diaxial strain and form the most conformationally stable isomer. This hypothesis guided the development of an epimerization method for methine stereocenters. This work describes the optimization and development of methine epimerization of isomenthyl acetate. The epimerization of tertiary alkyl stereocenters is also applied to access an otherwise synthetically challenging stereochemical outcome from a Diels-Alder reaction sequence. Data from primary literature and mechanistic experiments are used to provide a proposed mechanism through which this epimerization proceeds.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geosynchronous Satellite Maneuver Classification and Orbital Pattern Anomaly Detection via Supervised Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/139301" rel="alternate"/>
<author>
<name>Roberts, Thomas González</name>
</author>
<id>https://hdl.handle.net/1721.1/139301</id>
<updated>2022-01-15T03:51:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Geosynchronous Satellite Maneuver Classification and Orbital Pattern Anomaly Detection via Supervised Machine Learning
Roberts, Thomas González
Due to the nature of the geosynchronous (GEO) orbital regime, where space objects orbit the Earth once per sidereal day, GEO satellites can appear fixed to a position in the sky when observed from the Earth’s surface. This unique orbital characteristic makes GEO satellites ideal for telecommunications missions that require Earth-fixed antennas to send and receive signals, such as television broadcasts or military communications. To maintain their position relative to the Earth’s surface, GEO satellites must station-keep, or regularly expend onboard propellant to counteract the natural forces in the near-Earth space environment that perturb their orbital trajectories. Less frequently, GEO satellites perform maneuvers to alter their orbital characteristics more drastically. One such maneuver is a longitudinal shift: changing a GEO satellite’s sub-satellite point from one position on the Earth’s equator to another. Such a maneuver often requires both a series of impulsive thrusts and a period of natural drift. &#13;
&#13;
This work describes an approach for detecting the components of longitudinal shift maneuvers—including the patterns associated with initiating and ending eastward and westward drifts—using convolutional neural networks trained on publicly available two-line element (TLE) data from the U.S. Space Command’s (SPACECOM) space object catalog. A method for converting TLE data to geographic position histories—longitude, latitude, and altitude positions over time in the Earth-centered, Earth-fixed geographic reference frame—and labeling longitudinal shift maneuvers by inspection is described. A preliminary maneuver detection algorithm is designed, trained, and tested on all GEO satellites in orbit from January 1 to December 31, 2020. Performance metrics are presented for algorithms trained on two different training data sets corresponding to five and ten years’ worth of geographic position time-histories labeled with longitudinal shift maneuvers.&#13;
&#13;
When detected, longitudinal shift maneuvers can be used to identify anomalous behavior in GEO. In this work, a satellite’s behavior is considered nominal if it adheres to the satellite’s pattern of life (PoL)—its previous on-orbit behavior made up of sequences of both natural and non-natural behavioral modes, including routine station-keeping, other on-orbit maneuvers, and uncontrolled motion—and anomalous if it deviates from the satellite’s PoL. Identifying anomalous satellite behavior is of critical interest to space situational awareness (SSA) system operators, who may choose to task their sensors to obtain more observations of anomalous behavior, and satellite operators themselves, who may wish to diagnose its root cause. Applications of this work for international space policymaking, including the development of on-orbit norms of behavior and the distribution of spectral and physical space in GEO, are also discussed.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reducing Physician Burnout and Costs in Outpatient Healthcare Settings via Advanced Analytics</title>
<link href="https://hdl.handle.net/1721.1/139300" rel="alternate"/>
<author>
<name>Escribe, Célia</name>
</author>
<id>https://hdl.handle.net/1721.1/139300</id>
<updated>2022-01-15T03:47:15Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Reducing Physician Burnout and Costs in Outpatient Healthcare Settings via Advanced Analytics
Escribe, Célia
National studies show that US primary care physicians are at high risk of burnout. Burnout has severe consequences for physicians, patients, and the healthcare system itself, which makes it one of the top priorities to be addressed by healthcare leaders today. In parallel, electronic health record systems (EHRs) have become ubiquitous in most practices, profoundly changing the work of primary care physicians while at the same time offering new opportunities to quantitatively analyze various aspects of physicians’ work. This thesis offers actionable insights driven by data and analytics to address these key healthcare challenges.&#13;
&#13;
Chapter 2 leverages advanced text analytics to identify work themes of primary care physicians related to inbox message management. A scalable methodology relying on a Latent Dirichlet Allocation model is developed to analyze physicians’ inbox work themes in great detail. This methodology could be used to implement appropriate workflow redesign to ensure that physicians spend most of their time on issues for which they have significant added value.&#13;
&#13;
Chapter 3 employs a novel approach to examine team dynamics and their impact on physicians’ well-being. While many studies have tried to isolate factors related to physicians’ burnout, most of those studies are not scalable because they rely on self-evaluated surveys. This chapter addresses this question by providing a new quantitative methodology to analyze care team dynamics and structure through the integration of EHR data with social network modeling. Machine learning models are then developed to predict different dimensions of physicians’ well-being using predictors related to care team dynamics and structure and work composition.&#13;
&#13;
Chapter 4 finally develops a new modeling framework for a real-time appointment scheduling problem called the minimum peak appointment scheduling (MPAS) problem. While previous studies have applied existing algorithms from online bin packing to solve this problem, this modeling framework leverages unique aspects of appointment scheduling to further optimize scheduling decisions and reduce resource requirements. This chapter describes the first competitive online algorithm for the MPAS problem, called the harmonic rematching (HR) algorithm, and proves that the HR algorithm has an asymptotic competitive ratio of 1.5.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Seasonal Forecasting of Application Demand with ELF</title>
<link href="https://hdl.handle.net/1721.1/139299" rel="alternate"/>
<author>
<name>Wu, Priscilla</name>
</author>
<id>https://hdl.handle.net/1721.1/139299</id>
<updated>2022-01-15T03:46:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Efficient Seasonal Forecasting of Application Demand with ELF
Wu, Priscilla
Increased use of data insights to guide ventures has led to an explosion of needs in data services such as data accessibility, mobility, availability, and protection. Particularly in cloud enterprises, this expansion of data services has led to an increased need for AIOps, or intelligent systems that can offer consistent operation while dynamically adjusting their operation for the data services requested. In the field of storage systems, self-management features include proactive management of resources through knowledge of demand and its changing patterns. Previous research on classification, forecasting, trending, and pattern recognition in storage workloads has concluded that there is no universally best predictor for all workload patterns. In addition, these researched methods and their comparisons focus more heavily on accuracy without considering the limitations on overhead and computation power present in a system-oriented approach. This thesis analyzes design tradeoffs and presents ELF, a generic forecasting algorithm for storage workload data that optimizes computation costs in the context of a real-life production system. ELF takes advantage of the fact that the majority of storage workloads possess activity too simple to warrant complex forecasting models. Using a customized classification approach, ELF selects the appropriate predictive model based on the workload’s observed activity and produces accurate forecasts 92 times faster than a generic baseline algorithm while storing 97.5% less data.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Autism and Schizophrenia Using PRISM and Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/139298" rel="alternate"/>
<author>
<name>Wu, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/139298</id>
<updated>2022-01-15T03:35:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Characterizing Autism and Schizophrenia Using PRISM and Deep Learning
Wu, Julia
Schizophrenia and autism spectrum disorder (ASD) are two life-altering neurological diseases whose neurobiological bases are not yet well understood. This thesis explores the phenotypical expression of autism and schizophrenia at the synapse level by applying deep learning to multiplexed immunofluorescence data. Deep convolutional networks are developed and applied to analyze PRISM images of neurons treated with gene knockdown treatments corresponding to genes associated with autism and schizophrenia. Similarities and differences between normal-type and disease-type synapses are identified, and underlying synaptic phenotype groups are discovered and characterized. The results provide potential biologic insights into autism and schizophrenia that can serve as a starting point for further experimental analysis.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous Flight Arcade: Reinforcement Learning for End-to-End Control of Fixed-Wing Aircraft</title>
<link href="https://hdl.handle.net/1721.1/139297" rel="alternate"/>
<author>
<name>Wrafter, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/139297</id>
<updated>2022-01-15T03:27:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Autonomous Flight Arcade: Reinforcement Learning for End-to-End Control of Fixed-Wing Aircraft
Wrafter, Daniel
In this paper, we present the Autonomous Flight Arcade (AFA), a suite of robust environments for end-to-end control of fixed-wing aircraft and quadcopter drones. These environments are playable by both humans and artificial agents, making them useful for varied tasks including reinforcement learning, imitation learning, and human experiments. Additionally, we show that interpretable policies can be learned through the Neural Circuit Policy architecture on these environments. Finally, we present baselines of both human and AI performance on the Autonomous Flight Arcade environments.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Editing Conditional Radiance Fields</title>
<link href="https://hdl.handle.net/1721.1/139295" rel="alternate"/>
<author>
<name>Liu, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/139295</id>
<updated>2022-01-15T03:21:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Editing Conditional Radiance Fields
Liu, Steven
A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene. In this thesis, we explore enabling user editing of a category-level NeRF – also known as a conditional radiance field – trained on a shape category. Specifically, we introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region. First, we propose a conditional radiance field that incorporates new modular network components, including a shape branch that is shared across object instances. Observing multiple instances of the same category, our model learns underlying part semantics without any supervision, thereby allowing the propagation of coarse 2D user scribbles to the entire 3D region (e.g., chair seat). Next, we propose a hybrid network update strategy that targets specific network components, which balances efficiency and accuracy. During user interaction, we formulate an optimization problem that both satisfies the user’s constraints and preserves the original object structure. We demonstrate our approach on various editing tasks over three shape datasets and show that it outperforms prior neural editing approaches. Finally, we edit the appearance and shape of a real photograph and show that the edit propagates to extrapolated novel views.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>African Entrepreneurship Ecosystems: A Comparative Study of The Top Five</title>
<link href="https://hdl.handle.net/1721.1/139294" rel="alternate"/>
<author>
<name>Molamu, Keitumetse</name>
</author>
<id>https://hdl.handle.net/1721.1/139294</id>
<updated>2022-01-15T03:58:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">African Entrepreneurship Ecosystems: A Comparative Study of The Top Five
Molamu, Keitumetse
Over the last 20 years, entrepreneurship has become an important field for research and monitoring globally. We have seen the emergence of many success stories of people starting with an idea and then building billion-dollar organizations that make a global impact. These entrepreneurs and their activities do not happen in a vacuum. If a society would like to produce more of them, it has to understand how they came about. &#13;
&#13;
Researchers have identified key success factors for entrepreneurship ecosystems, and although there are slight differences from researcher to researcher, they do have some basic measures in common. Mainly, entrepreneurship ecosystems research points to the need for governmental policies that support entrepreneurship, the existence of institutions to support entrepreneurship activities, and a culture that encourages entrepreneurship.&#13;
&#13;
This thesis aims to look at the ecosystems emerging on the African continent and identify the different strategies or key success factors working for the top ecosystems through the application of MIT's Stakeholder Framework for Building &amp; Accelerating Innovation Ecosystems.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic modeling of performance-based annuities: increasing gene therapy accessibility by managing the uncertainty of costs and treatment value</title>
<link href="https://hdl.handle.net/1721.1/139293" rel="alternate"/>
<author>
<name>Burgunder, Mateusz</name>
</author>
<id>https://hdl.handle.net/1721.1/139293</id>
<updated>2022-01-15T04:04:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Stochastic modeling of performance-based annuities: increasing gene therapy accessibility by managing the uncertainty of costs and treatment value
Burgunder, Mateusz
Durable gene therapies are a new and emerging form of treatment. They have high prices and high treatment uncertainty. Payers may feel reluctant to pay for such treatments with traditional financing methods because they are concerned about treatment performance risk, actuarial risk, and payment timing. If payers are unwilling to pay for effective treatments, patients will be left untreated, and developers will be unable to recover their development and manufacturing costs. Performance-based annuities may provide a solution. Although some qualitative studies on performance-based annuities exist, quantitative ones are limited. This thesis provides a basis for quantitatively examining the implications of performance-based annuities and then uses Monte Carlo simulations to study the behavior of hypothetical performance-based agreements. The results show that performance-based annuities address payers’ concerns by aligning treatment value with costs. When payers are more willing to pay for treatments, developers and patients benefit because the treatments become more available and accessible. The findings support the idea that performance-based annuities can be used as a novel financing method for gene therapies.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reinforcement Learning for Energy Storage Arbitrage in the Day-Ahead and Real-Time Markets with Accurate Li-Ion Battery Dynamics Model</title>
<link href="https://hdl.handle.net/1721.1/139292" rel="alternate"/>
<author>
<name>Kumar, Dheekshita</name>
</author>
<id>https://hdl.handle.net/1721.1/139292</id>
<updated>2022-01-15T04:00:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Reinforcement Learning for Energy Storage Arbitrage in the Day-Ahead and Real-Time Markets with Accurate Li-Ion Battery Dynamics Model
Kumar, Dheekshita
Decarbonizing power systems will require introducing renewable sources to the energy supply mix. Intermittent sources in the supply mix, however, make balancing energy supply and demand more challenging. Energy storage systems can be used to balance supply and demand by storing energy when renewable sources generate more energy than needed, and providing energy when generation is insufficient. Failing to account for degradation when operating a battery, however, can dramatically reduce the battery’s life span and increase degradation-related costs. Existing optimization techniques that account for degradation when determining optimal battery operation policies are both computationally intensive and time-consuming. Machine learning techniques, such as reinforcement learning, can develop models that calculate action-policies in milliseconds and account for complicated system dynamics. In this thesis, we consider the problem of battery operation for energy arbitrage. We explore the use of reinforcement learning to determine arbitrage policies that account for degradation. We compare policies learned by reinforcement learning to the optimal policy, as determined by an advanced mixed-integer linear programming (MILP) model, on NYISO 2013 day-ahead electricity price data. We show that reinforcement learning results in learned policies whose behavior is comparable to that of the degradation-aware MILP-determined policies. We then present a case study that uses reinforcement learning to determine arbitrage policies on PJM 2019 real-time electricity price data, and we find that the use of reinforcement learning for real-time battery operations in the case of energy arbitrage has promise.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sex, Power, and Technology: A Relational Engineering Ethos as Feminist Utopia</title>
<link href="https://hdl.handle.net/1721.1/139290" rel="alternate"/>
<author>
<name>Wagman, Kelly B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139290</id>
<updated>2022-08-09T20:05:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Sex, Power, and Technology: A Relational Engineering Ethos as Feminist Utopia
Wagman, Kelly B.
This thesis proposes relational engineering as a framework for developing technology that stands in contrast to dominant notions in US tech culture that prioritize profit, scale, productivity, and solutionism. Relational engineering serves as a feminist utopia that envisions the design and development of technology as the crafting of social relations between humans and non-humans in a sociotechnical system. I investigate how relational engineering might be operationalized in the US tech sector by first reviewing the sector's current ideological landscape and then investigating two case studies. One case study looks at the norms and practices found in a feminist data science lab and how it created an inclusive engineering space outside of dominant tech culture. The second case study defines the term "social machines" and considers how these might be designed to promote equity and justice by crafting non-domineering human-machine relations. The case studies are just two examples of how technology can be developed from the perspective of creating caring relations among actors in a sociotechnical system. A relational engineering ethos is intended as an actionable mindset to help technology designers and developers grapple with the fact that they are building social relations as opposed to neutral artifacts.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disputing facts, disputing the economy: Media controversies at the decline of the Peruvian Miracle</title>
<link href="https://hdl.handle.net/1721.1/139289" rel="alternate"/>
<author>
<name>Cerna Aragon, Diego Alonso</name>
</author>
<id>https://hdl.handle.net/1721.1/139289</id>
<updated>2022-08-09T20:04:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Disputing facts, disputing the economy: Media controversies at the decline of the Peruvian Miracle
Cerna Aragon, Diego Alonso
During the first decades of the twenty-first century, Peru experienced major GDP growth catalogued by authoritative sources as an economic miracle. Many credited the miracle to two factors: the boom of mineral prices – Peru’s primary export – and the free market reforms implemented by the authoritarian government of Alberto Fujimori in the early nineties. This period of prosperity led some to suggest – either favorably or critically – the existence of a monolithic optimistic consensus in Peruvian society.&#13;
&#13;
In this thesis I put the monolithic quality of this consensus to the test by surveying media controversies in recent years (2016 – 2019), a period marked by a decline of GDP growth. These controversies confronted different actors that are normally considered part of this consensus: government technocrats, business journalists, corporate leaders, etc. The analysis employs key concepts from Actor-Network Theory and other Science and Technology Studies works to examine how these actors mobilized information about the national economy in their public interventions.&#13;
&#13;
The main argument advanced in this thesis is that economic information employed in these controversies operated as sociotechnical nonfictions that attained “enough realness” through their circulation in media and the affective states they evoked. The concept of sociotechnical nonfictions highlights the role that expertise and media assemblages played in the (re)production of facts. Furthermore, it also compels us to evaluate the force that media researchers assign to the effects of an economic regime.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Pipeline for Zoomed Fetal MRI</title>
<link href="https://hdl.handle.net/1721.1/139286" rel="alternate"/>
<author>
<name>Zhang, Molin</name>
</author>
<id>https://hdl.handle.net/1721.1/139286</id>
<updated>2022-01-15T04:09:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Pipeline for Zoomed Fetal MRI
Zhang, Molin
Fetal Magnetic Resonance Imaging (MRI) is heavily constrained by unpredictable and substantial fetal motion that causes image artifacts and limits the set of viable diagnostic image contrasts. Fast, single-shot MRI of the fetal brain, such as HASTE, is a commonly applied acquisition method to mitigate artifacts from fetal motion by imaging over a sub-second timeframe per slice. For this application, although the fetal brain of interest is only a small fraction of the total field of view across the gravid abdomen, conventional slice-selective excitation followed by Cartesian sampling carries significant overhead in encoding time, which could be reduced with restricted slice RF excitation.&#13;
&#13;
In this thesis, we propose a pipeline that exploits novel zoomed or restricted slice RF excitation for fetal MRI, with benefits that include the mitigation of motion artifacts, reduced RF power deposition, and shorter overall scan time. The three dominant contributions of this thesis are 1) a model for fetal pose estimation, which benefits the tracking of the ROI; 2) a selective excitation design, which aims at exciting only the ROI for zoomed imaging; and 3) a model for post-processing to improve the SNR. Taken together, these open new possibilities for improved MRI in pregnancy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Facilitating Giving and Receiving Support in Existing Social Groups with a Journaling Chatbot</title>
<link href="https://hdl.handle.net/1721.1/139285" rel="alternate"/>
<author>
<name>Wong, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/139285</id>
<updated>2022-01-15T03:55:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Facilitating Giving and Receiving Support in Existing Social Groups with a Journaling Chatbot
Wong, Andrew D.
A chatbot was designed to promote positive mental health by facilitating giving and receiving support within existing social groups. Surveys and interviews were conducted to evaluate the suitability of journaling prompts for use by the chatbot. A 2-week in-the-wild study was conducted with 4 groups of 4-6 friends (n=20). Twice a week, the chatbot asked a personal question to a group, collected and shared answers among that group, then directed each user to respond to another user’s answer.&#13;
&#13;
Exit interviews indicated that: (1) some chatbot interactions led to later interactions outside of the chatbot, (2) participants learned new things about their group members, even those they had other frequent contact with, and (3) the social aspect of the chatbot affected users’ responses to journaling prompts. Pre-study and post-study survey results suggest that, after using the chatbot for two weeks, individuals felt closer to their social group and enjoyed sharing updates with their friends and families more. Based on these results, the presented chatbot could be used to encourage meaningful social interactions that may not happen spontaneously and to strengthen existing social relationships.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Demonstration of Lindblad Tomography on a Superconducting Quantum Device</title>
<link href="https://hdl.handle.net/1721.1/139283" rel="alternate"/>
<author>
<name>Samach, Gabriel Orr</name>
</author>
<id>https://hdl.handle.net/1721.1/139283</id>
<updated>2023-07-31T12:29:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Experimental Demonstration of Lindblad Tomography on a Superconducting Quantum Device
Samach, Gabriel Orr
Information loss in experimental quantum devices is traditionally characterized using metrics such as T₁ and T₂, which are readily accessible from standard time-domain measurement.  While T₁ and T₂ times provide rough heuristics for interaction between single qubits and their lossy environments, these numbers stand in as mere proxies for the full multi-qubit loss channel of interest, which can be described more fully with a Lindbladian operator in the master equation formalism.  In this thesis, I outline and present the results of the first experimental demonstration of Lindblad Tomography, a novel technique for tomographically reconstructing the Hamiltonian and Lindbladian operators of an arbitrary quantum channel from an ensemble of time-domain measurements.  Starting from a theoretically minimal set of assumptions, I show that this method is resilient to state-preparation and measurement (SPAM) errors and places strong bounds on the degree of non-Markovianity in the channel of interest.  Comparing the results for single- and two-qubit tomography of a superconducting quantum processor, I demonstrate how Lindblad Tomography can be used to identify sources of crosstalk on large quantum processors, particularly in the presence of always-on qubit-qubit interactions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Snapdown: A Text-Based Snapshot Diagram Language for Programming Education</title>
<link href="https://hdl.handle.net/1721.1/139282" rel="alternate"/>
<author>
<name>Whatley, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/139282</id>
<updated>2022-01-15T03:15:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Snapdown: A Text-Based Snapshot Diagram Language for Programming Education
Whatley, Daniel
Snapshot diagrams, which visualize in-memory program state, are frequently used in programming education to demonstrate new concepts and help students develop a better understanding of program functionality. This thesis introduces Snapdown, a textual language for drawing snapshot diagrams, designed for use by both students and instructors of programming courses. Snapdown is designed with an emphasis on learnability and simplicity: both to be picked up by students in a classroom setting in a matter of minutes, and to enable creation and maintenance of diagrams in instructional content with minimal overhead. I introduce several use cases of Snapdown and describe the design and features of its textual language. I also describe a deployment of Snapdown during two semesters of emergency remote teaching in MIT software engineering course 6.031 Software Construction, in which students used it to complete pre-class reading exercises and in-class collaborative exercises. Finally, I demonstrate that Snapdown is generally applicable by using it to replicate over 100 diagrams from introductory- and intermediate-level courses at a variety of institutions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Actual Causality in Autumn</title>
<link href="https://hdl.handle.net/1721.1/139281" rel="alternate"/>
<author>
<name>Weeks, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/139281</id>
<updated>2022-01-15T03:27:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Actual Causality in Autumn
Weeks, Elizabeth
Actual causality is the task of determining causation for specific events. This includes questions such as: did the pasta on the stove cause the fire alarm to go off, or did the power outage cause the lights to turn off? Another example: if Suzie throws a rock at a bottle and Billy later throws a rock at the same bottle, a question of actual causality is, did Suzie throwing a rock cause the bottle to break? One common method to answer questions of actual causality is to evaluate what would have happened if the first event had not occurred. This is known as a counterfactual approach. If Suzie had not thrown the rock, Billy's rock would still have broken the bottle, so the counterfactual approach concludes that Suzie's throw was not a cause. This does not match the conclusion that most humans arrive at. In this paper, we define a new method for questions of actual causality, which we call the causal path method. To evaluate this method, we compare it to the counterfactual approach and to the expectations of humans from related work. To test this method, we use Autumn models. Autumn is a programming language for expressing causal probabilistic models that provides the infrastructure to express real-world mechanisms in code. In Autumn we are able to model and test the Billy and Suzie rock example. After evaluating the method on these Autumn models, we explore further applications for the new causal path method.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Text Translation Through the Application of Generative Adversarial Networks</title>
<link href="https://hdl.handle.net/1721.1/139280" rel="alternate"/>
<author>
<name>Wang, Xiaoyi</name>
</author>
<id>https://hdl.handle.net/1721.1/139280</id>
<updated>2022-01-15T03:04:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Unsupervised Text Translation Through the Application of Generative Adversarial Networks
Wang, Xiaoyi
Text translation is a very broad subfield of natural language processing that tries to generate output text with different characteristics conditioned on some input text. More specifically, we seek to find a translation of the input that retains the semantic contents of the original while changing the style. This encompasses many tasks including sentiment transfer, text summarization, and language translation. One defining characteristic of these problems is the lack of access to paired training data, which inhibits training via a straightforward maximum likelihood estimation approach. This requires us to focus on unsupervised techniques for text translation that depend only on access to large domains of unpaired data. &#13;
&#13;
For unsupervised translation, one approach involves the use of generative adversarial techniques for sequence generation. Unfortunately, prior work using these techniques suffers from poor alignment and training instability. This thesis proposes two alternative models for unsupervised text translation that attempt to alleviate these issues through the incorporation of additional information and the introduction of a different training regime. We demonstrate several translation applications that benefit from these approaches and evaluate performance using a framework that is independent of a ground truth paired dataset. Through the experiments, we find improvements over the baseline, particularly in the accuracy of the style transfer. We also demonstrate the efficacy of text translation as a data augmentation technique to generate new labeled data with different styles. This mechanism yields significant improvements in classifier robustness. Lastly, we evaluate performance under a semi-supervised training regime and compare against popular baselines. The results reveal significant alignment improvements from the incorporation of an extremely small amount of paired data, one order of magnitude smaller than that of prior studies.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovating by Behaving: How to Adopt the Startup Culture in Large Companies</title>
<link href="https://hdl.handle.net/1721.1/139279" rel="alternate"/>
<author>
<name>Ou, Shi Chao</name>
</author>
<id>https://hdl.handle.net/1721.1/139279</id>
<updated>2022-01-15T03:10:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Innovating by Behaving: How to Adopt the Startup Culture in Large Companies
Ou, Shi Chao
Increasingly, large company leaders want their organizations to act like startups. They want to take action against the threat of disruption by startups, having seen many industries disrupted, as evidenced by the rise of Airbnb and SpaceX. Rather than waiting to be disrupted, business leaders want their research &amp; development divisions to act like startups. Yet many companies that attempt to build an internal startup innovation environment report challenges. These challenges are the symptoms of innovation tensions, tensions that business leaders should manage to foster a startup culture within the corporate innovation culture. &#13;
&#13;
This work identifies a set of eleven innovation management rules for business leaders to manage these tensions experienced by individual intrapreneurs and innovation teams. In developing these rules, each startup and corporate innovation mentality is categorized using the Affective, Behavioral and Cognitive (ABC) model of attitude and the Galbraith Star Model. With these categorizations, the negative psychological effects of these mentalities are analyzed further to understand the brutal side of the innovation culture. Based on innovative behaviors collected in the literature research and a set of eleven interviews, startup and corporate innovation cultures are modeled as a system of interdependent behaviors in causal loop diagrams to expose unknown and undesirable tensions. These tensions expose a set of root causes of the challenges in fostering a startup culture in large companies. In addition to managing these tensions, business leaders are forced to make compromised strategic choices given the innovation paradoxes in intrapreneurs’ risk versus reward profile and the willingness to fire incompetent intrapreneurs.&#13;
&#13;
These tensions and paradoxes confirm that the corporate-startup innovation culture is paradoxical, and it is not sustainable from a psychological perspective. Yet, with these mental models of the innovation culture and these eleven innovation management rules, business leaders are better prepared to manage the brutal side of the innovation culture while leading the next disruption in their industries.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Co on the Deformation Response of Fe-Mn Alloys</title>
<link href="https://hdl.handle.net/1721.1/139278" rel="alternate"/>
<author>
<name>Fountain, Timothy S.</name>
</author>
<id>https://hdl.handle.net/1721.1/139278</id>
<updated>2022-01-15T03:37:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Effect of Co on the Deformation Response of Fe-Mn Alloys
Fountain, Timothy S.
Fe-Mn alloys are well studied and exhibit several interesting responses to deformation which lead to the possibility of desirable mechanical properties for many engineering applications. The addition of other elements to the system further improves properties and can lead to interesting effects like transformation induced plasticity, twinning induced plasticity, and the shape memory effect. Co is one such alloying agent which has seen use in several alloys which exhibit these behaviors. While Co is known to have an effect on the stacking fault energy of alloys, its precise effect on the Fe-Mn system is somewhat less explored. This study seeks to understand what effect Co has on the Fe-Mn system in terms of its effect on thermodynamic properties, phase composition, deformation induced phase transformation, and mechanical properties. Using a thermodynamic model, three alloys of varying Co concentration with a fixed Fe:Mn ratio of 4 were selected for study to systematically examine the effect that Co has on their response to deformation. An additional alloy of equiatomic composition was created as a basis of comparison. X-ray diffraction, scanning electron microscopy, and microhardness testing were used for evaluation. It is seen that Co has a somewhat complicated effect on the deformation behavior of Fe-Mn alloys. In all alloys, &#120574;→&#120576; martensitic transformation occurs. At concentrations below 8 at. % Co, increased &#120572;′ martensite formation within the &#120576;-phase is observed. Possible causes of &#120572;′ formation within the &#120576;-phase and the effect on microhardness are explored. At concentrations of 8 at. % Co, the &#120576;-phase seems to be stabilized and only &#120574;→&#120572;′ transformation is observed. The equiatomic alloy exhibits only &#120574;→&#120576; transformation. Several examples of deformation twinning are shown.
The thermodynamic model has good agreement with experimental results at low Co concentration, but seems to break down when used for the equiatomic alloy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LOW-THRUST CONTROLLER FOR SLOT-BASED SATELLITE CONSTELLATIONS</title>
<link href="https://hdl.handle.net/1721.1/139276" rel="alternate"/>
<author>
<name>Contreras, Mario Melendrez</name>
</author>
<id>https://hdl.handle.net/1721.1/139276</id>
<updated>2022-01-15T03:07:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">LOW-THRUST CONTROLLER FOR SLOT-BASED SATELLITE CONSTELLATIONS
Contreras, Mario Melendrez
Due to the increase in popularity of satellite constellations, some altitudes in LEO have experienced a large increase in the number of satellites. This trend is expected to continue in the future, which could potentially lead to an increased risk of collision between satellites. Collision avoidance is therefore paramount to maintain normal operations and to prevent runaway growth of space debris in LEO. To that end, this thesis develops state-space LQR and tube MPC controllers for LEO satellites operating in near-circular orbits with low-thrust engines. This is done using a linearized model of the dynamics under the Earth's gravitational potential and atmospheric drag.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gadgets and Gizmos: A Formal Model of Simulation in the Gadget Framework for Motion Planning</title>
<link href="https://hdl.handle.net/1721.1/139275" rel="alternate"/>
<author>
<name>Hendrickson, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/139275</id>
<updated>2022-01-15T03:49:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Gadgets and Gizmos: A Formal Model of Simulation in the Gadget Framework for Motion Planning
Hendrickson, Dylan
Recent work has developed a theory of motion-planning gadgets, which are a useful tool for proving hardness for a variety of problems that can be thought of in terms of an agent navigating a dynamic environment. We introduce formal objects representing motion-planning gadgets, which we call gizmos, and ask which gizmos simulate each other—simulations between gizmos yield reductions between natural decision problems, so this has consequences for complexity.&#13;
&#13;
We define several classes of gizmos, and prove that they are closed under simulation: gizmos with some property cannot simulate gizmos without it. For several of these classes, we also find a gizmo which can simulate every gizmo in the class; this is analogous to completeness for a complexity class. We consider gizmo simulation and prove unsimulability and universality in two restricted settings: planar simulation, where the simulation must embed in the plane without crossings, and input/output simulation, which is a model for fully deterministic settings. We mostly focus on simulations with finitely many gizmos and gizmos with finitely many states (called regular), but many of our results carry over to more exotic infinite gizmos.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What Determines the Allocation of Government Resources to Local Areas?</title>
<link href="https://hdl.handle.net/1721.1/139274" rel="alternate"/>
<author>
<name>Jensen, Jonathan E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139274</id>
<updated>2022-01-15T03:26:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">What Determines the Allocation of Government Resources to Local Areas?
Jensen, Jonathan E.
I study the allocation of resources by state governments to individual counties in the United States. I test competing hypotheses that government resources are allocated to advance political motives, equitable redistribution, or output efficiency. Contrary to previous studies on federal spending, I find that political motives play almost no role in the state allocation of resources. I also find suggestive evidence that efficiency outweighs equity.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Going by the Book: Valuation Ratios and Stock Returns</title>
<link href="https://hdl.handle.net/1721.1/139273" rel="alternate"/>
<author>
<name>Choi, Ki-Soon</name>
</author>
<id>https://hdl.handle.net/1721.1/139273</id>
<updated>2022-10-06T15:27:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Going by the Book: Valuation Ratios and Stock Returns
Choi, Ki-Soon
I explore how institutional frictions interact with the changing nature of the book value of equity to impact stock returns. I first find that book-to-market is relatively less informative of future returns when it significantly deviates from other valuation multiples, and that employing refined signals improves return predictability. Then, I find that a firm’s stock returns are still strongly correlated with its book-to-market portfolio returns even when book-to-market is less informative. Together, my findings suggest that institutional investors follow “brand indices” that overweight firms’ book-to-market to attract capital, which induces excess correlations along the book-to-market dimension, even when book-to-market is less informative of long-term future returns.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying Real Estate Development Opportunities: Web-Scraping, Regex Patterns &amp; String-Searching Algorithms</title>
<link href="https://hdl.handle.net/1721.1/139272" rel="alternate"/>
<author>
<name>Williams, Oscar</name>
</author>
<id>https://hdl.handle.net/1721.1/139272</id>
<updated>2022-01-15T04:00:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Identifying Real Estate Development Opportunities: Web-Scraping, Regex Patterns &amp; String-Searching Algorithms
Williams, Oscar
Web-scraping and data mining algorithms are used extensively by hedge funds, equities traders, and digital marketers, and in the technology sector more broadly. Contrastingly, the real estate development industry continues to use traditional, manual methods to identify and pursue new development opportunities, with the exception of mapping software, which has been widely adopted. The lack of adoption of these technologies is primarily due to the difficulty in identifying, retrieving and processing the required data rather than an inherent lack of data. On the contrary, there is a wealth of public and private information available to the real estate development industry that can provide value if collected and analyzed efficiently and at scale using algorithms. To test this hypothesis, the author has built a functioning web-scraping and data collection platform that demonstrates how large amounts of data can be retrieved and processed at scale. This thesis evaluates the effectiveness of using web-scraping algorithms to search for real estate development and land rezoning opportunities in publicly available local government data. The focus area of the thesis is Sydney, Australia, and the subject of the thesis is the Aiden1 platform, which is owned by the Principal Investigator and author. The platform uses automated web-scraping algorithms to parse publicly available local government data for keywords that indicate a prospective development opportunity or an instance of imminent land rezoning. The results of this research demonstrate the effectiveness of adopting web-scraping technologies and their usefulness to real estate development professionals.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Assessment for Amine-Based Shipboard Carbon Capture</title>
<link href="https://hdl.handle.net/1721.1/139271" rel="alternate"/>
<author>
<name>Pineda, Stefano</name>
</author>
<id>https://hdl.handle.net/1721.1/139271</id>
<updated>2022-01-15T03:15:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Feasibility Assessment for Amine-Based Shipboard Carbon Capture
Pineda, Stefano
The International Maritime Organization (IMO) set goals to reduce CO₂ emissions by 40% by 2030, with efforts toward 70% by 2050, when compared with CO₂ released per ton-mile in 2008. Maritime traffic relies on non-renewable energy-dense fossil fuels, and alternative energy sources have yet to prove feasible for a large sector of the industry. Shipboard carbon capture (SCC) systems offer a possible solution to maritime CO₂ emissions. Here, MEA-based carbon capture systems are designed and evaluated for a representative ultra large container vessel (RULCV) at various lean and rich amine loading pairs, and for 5 ships representative of various ship size categories with average shaft powers ranging from 36 MW to 256 kW. These test cases are evaluated using Aspen Plus and the Aspen Plus Economic Analyzer. To size components, the reboiler duty, reboiler diameter, absorber height, and absorber diameter are all designed for the system to operate at a 90% carbon capture rate with columns at an 80% approach to flooding. In addition to the absorber and stripper, heat exchangers, pumps, and a compressor are designed for these SCCs. The ship system components are evaluated independently and the overall cost of the system is determined from the sum of constituent costs. The carbon capture cost for these MEA-based systems is calculated at $100 to $200 per ton of CO₂.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pretending to be Quantum: A study of IQP-based tests of quantumness</title>
<link href="https://hdl.handle.net/1721.1/139268" rel="alternate"/>
<author>
<name>Joshi, Malvika</name>
</author>
<id>https://hdl.handle.net/1721.1/139268</id>
<updated>2022-01-15T03:37:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Pretending to be Quantum: A study of IQP-based tests of quantumness
Joshi, Malvika
We examine the IQP protocol for verifying quantumness presented by Shepherd and Bremner in 2009 [SB09]. In this protocol, the classical verifier sends a prover an IQP circuit and expects back samples from its output distribution as evidence that the prover has quantum capabilities. To test that the samples indeed came from the circuit, the verifier checks that they are consistent with the bias of the output distribution of the circuit in the direction of a secret string s. This bias is given by (1 + 2^(−&#119892;/2))/2, where &#119892; is a parameter associated with the Hamiltonian &#119867; and s [YC20]. We study this parameter and give a strategy for forging samples to fool the verifier into believing that a classical prover is quantum, with a constant probability dependent on &#119892;. We also give a natural method for constructing random circuits with a particular value of &#119892;, either &#119892; = 0 or &#119892; = 1. We use the classical forging strategy, along with the construction methods, to show that when &#119892; is small, an adversary can extract s from just &#119867; and forge samples to fool the verifier with high probability. We give heuristic arguments for the validity of these attacks and demonstrate their success numerically.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cross-Frame Association of Through-Wall Handheld-Radar-Based Detections</title>
<link href="https://hdl.handle.net/1721.1/139267" rel="alternate"/>
<author>
<name>Hiebert, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/139267</id>
<updated>2022-01-15T03:09:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Cross-Frame Association of Through-Wall Handheld-Radar-Based Detections
Hiebert, Michael
Recent work with radar-based detection systems has demonstrated their efficacy in identifying [10] and classifying [6][8][2] humans and animals, and even recognizing gestures [5], in low-light environments and through walls, cases where conventional vision-based systems fail. Most previous research has involved onsite (edge) gathering of data and offsite (non-edge) processing to produce detections, i.e. the experimental platforms have not been productionized nor tested live in applicable environments. Further, many of the proposed architectures rely on the specific motion paths of subjects to identify them. MIT Lincoln Laboratory’s (MITLL) Group 45 has designed a prototype portable radar system capable of producing radar data similar to that collected in the aforementioned research and then identifying individuals in-frame, solely based on vital signs and regardless of motion. I propose a computational architecture which incorporates some of the previous advances in tracking from computer vision to detect and identify individuals while operating on the edge under the compute and power constraints of the handheld radar system in which it will be embedded.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying and Assessing Aerospace Parts for Production in Additive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/139265" rel="alternate"/>
<author>
<name>Nickles, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/139265</id>
<updated>2022-01-15T03:48:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Identifying and Assessing Aerospace Parts for Production in Additive Manufacturing
Nickles, Alexander
Pratt &amp; Whitney is a major aerospace Original Equipment Manufacturer (OEM) of gas turbine engines for both the commercial and military sectors. Additive Manufacturing (AM) presents Pratt &amp; Whitney with the opportunity to improve their supply chain and increase performance. Yet, given the complexity of their product and the volume of parts required to make it, Pratt &amp; Whitney faces a significant challenge in identifying appropriate parts to be produced additively from engineering, supply chain, and business standpoints.&#13;
&#13;
The motivation of this project is to expand an existing process, which was limited to close review of a small number of parts, into a cohesive and sustainable methodology that will identify parts across a large catalog that are suitable for sustained production in AM. This thesis describes a two-part methodology for identifying parts based on their suitability to be manufactured additively. The first part of the process ranks a large set of parts using the Analytic Hierarchy Process (AHP). The second part is a prescriptive expert review of the top parts, at the end of which a recommendation on whether to produce a part through AM can be made. This thesis includes a field study of a data set of 25,000 parts in which the AHP process was applied and analyzed. The field study succeeded in producing a ranked list of parts, the best of which moved on to further review by Pratt &amp; Whitney for production consideration.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating Optimized Value Creation Conditions: An Additive Manufacturing Model</title>
<link href="https://hdl.handle.net/1721.1/139264" rel="alternate"/>
<author>
<name>Epperson, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/139264</id>
<updated>2022-01-15T03:21:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Creating Optimized Value Creation Conditions: An Additive Manufacturing Model
Epperson, Jeffrey
Stryker's additive manufacturing (AM) business unit has pioneered cost-competitive metal AM capabilities at scale. Recently the company has been exploring the possibility of expanding AM capabilities into other materials, processes, and business segments. The opportunities for growth have revealed system limiting factors that are slowing the speed at which the organization is able to create additional value with the technology. This research proposes a model, called the value creation model, as a framework for how AM organizations must think about their technological capabilities in the context of organizational maturity. At the center of the value creation model is the amount of value being created for an organization. The variables that determine the level of value that can be created are: business structures &amp; systems, intellectual property protections, competitive advantages, business strategy, and technology development &amp; innovation. In order to create maximum value through AM technology, the technology development and business transformation must happen in parallel. If any of the variables in the value creation model becomes a limiting factor, then the maximum value created for the organization is potentially capped. In the case of the Stryker AM business unit, it is recommended that the organization increase value-creating opportunities by migrating its business model to a wholly-owned subsidiary. This business transformation provides significant value-creating opportunities through supply chain efficiencies, simplification of business systems, tax &amp; financial freedom, and opportunities to create sustainable competitive advantages &amp; IP.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying Heterogeneity in Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/139263" rel="alternate"/>
<author>
<name>Lim, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/139263</id>
<updated>2022-01-15T03:05:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Identifying Heterogeneity in Decision-Making
Lim, Justin
Individuals often make different decisions when faced with the same problem. This variation arises naturally in decision-making processes due to differences in agent-specific factors, such as training, personal preference, and opinion. In many real-world scenarios, understanding where this variation occurs is critical to inform research and policy-making. For instance, in a clinical context, it is important to identify the types of patients for whom the treatment received depends greatly on which doctor they visit. Identifying such regions of disagreement can reveal gaps in knowledge or opportunities where best practices can be clarified. In the medical context, it can directly assist the development of more comprehensive treatment guidelines, or suggest hypotheses to be tested in medical trials. In this thesis, we present algorithmic methods to identify heterogeneity in decision-making by characterizing the regions of disagreement where variation can be attributed largely to the decision-maker. These methods range from approximate methods to exact solutions. We provide generalization bounds where possible and test each method’s performance and computational efficiency using a comprehensive set of synthetic experiments. To demonstrate how these algorithms can be used to obtain insights into clinical decision-making, we present an extensive case study on decision-making for first-line diabetes patients, using an observational dataset from a large insurance provider. We identify subpopulations of patients for whom this first-line decision varies by provider, and evaluate the effect of this variation on downstream outcomes. Our algorithms are implemented in an easy-to-use form and are available to the public.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiplexed Quantum Networks for High-Fidelity Entanglement Distribution</title>
<link href="https://hdl.handle.net/1721.1/139262" rel="alternate"/>
<author>
<name>Lee, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/139262</id>
<updated>2022-01-15T03:55:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multiplexed Quantum Networks for High-Fidelity Entanglement Distribution
Lee, Yuan
The past decade has seen tremendous progress in experimentally realizing the building blocks of quantum repeaters. Repeater architectures with multiplexed quantum memories have been proposed to increase entanglement distribution rates, but an open challenge is to maintain entanglement fidelity over long distances. In this thesis, I present a quantum router architecture comprising many quantum memories connected in a photonic switchboard to broker entanglement flows across quantum networks. The quantum router achieves channel-loss-invariant fidelity and automatically prioritizes entanglement flows across repeater chains without requiring global network information. I further propose algorithms for local entanglement routing across general networks of multiplexed repeaters to optimize entanglement rates and fidelities.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How can startup leaders strategically disclose vulnerabilities during periods of crisis?</title>
<link href="https://hdl.handle.net/1721.1/139261" rel="alternate"/>
<author>
<name>Lim, Denise</name>
</author>
<id>https://hdl.handle.net/1721.1/139261</id>
<updated>2022-01-15T03:56:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">How can startup leaders strategically disclose vulnerabilities during periods of crisis?
Lim, Denise
The contours of a crisis can vary with the situation. Still, regardless of its nature or magnitude, a crisis triggers uncertainties and ambiguities in team dynamics that can inhibit functional effectiveness and threaten emotional wellbeing. In a startup environment, leaders are especially exposed to the risks of failure and are themselves not immune from personal and professional vulnerability owing to the fallout of a crisis. &#13;
&#13;
The idea of marketing oneself as a strong, invulnerable leader who acts with complete certitude once had credence. Over time, however, both research and popular sentiment have disproven it as the appropriate and most effective choice in all circumstances. Instead, considerations of authenticity are now popular, with leaders’ self-disclosure of business and personal vulnerability during these bleak moments being vaunted in popular discourse. Still, disclosing recklessly is not only oversentimental but can be unstrategic in attaining functional outcomes for both team performance and team cohesion. Leaders need to maintain interpersonal credibility and technical credibility when communicating vulnerabilities to their team, especially given the unique context of startups, which are characterized by flat hierarchies, frothy circumstances, and developing governance and oversight. This paper provides recommendations, based on interviews, public practitioner accounts, and academic research, on how startup leaders can best strike the balance between communicating vulnerabilities and retaining professional effectiveness.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Few-Shot Semi-Supervised Robust Text Classification with MAML</title>
<link href="https://hdl.handle.net/1721.1/139260" rel="alternate"/>
<author>
<name>Kang, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/139260</id>
<updated>2022-01-15T03:50:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Few-Shot Semi-Supervised Robust Text Classification with MAML
Kang, Isabella
The need for few-shot semi-supervised text classification arises in a variety of applications, including, e.g., recommendation systems classifying textual content such as product descriptions or news articles based on limited amounts of user feedback. In such settings, existing supervised methods lack a way to leverage unlabeled data, which may be available in larger amounts. &#13;
&#13;
We develop a method for improving the accuracy and robustness of a supervised meta-learning algorithm (Model-Agnostic Meta-Learning, or MAML) applied to few-shot natural language text classification tasks. We also detail a way to incorporate semi-supervised learning into MAML by designing a procedure to create self-supervised tasks from unlabeled text examples. We present test accuracies from experiments on sentiment classification and topic classification. As a representative example, we achieved accuracy gains ranging from 1% to 3% on Amazon review and news headline datasets.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Virtual Worldmaking: A Phantasmal Media Approach to VRChat</title>
<link href="https://hdl.handle.net/1721.1/139259" rel="alternate"/>
<author>
<name>Kim, Andrea Shinyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/139259</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Virtual Worldmaking: A Phantasmal Media Approach to VRChat
Kim, Andrea Shinyoung
Social VR expresses human subjectivities on multiple scales, from within its computational structure to interpersonally between users. Theorizing bodies as situated, distributed, and imbued with affect, this thesis analyzes how the systemic proliferation of the “anime girl” avatar in the social VR platform VRChat reflects gendered biopolitics of power and control. Positioning contemporary social VR at a unique moment of media convergence and sociopolitical unrest, this thesis argues for more pluriversal negotiations of virtual realities through the lens of virtual worldmaking.&#13;
&#13;
Drawing from cultural theory and D. Fox Harrell’s phantasmal media framework, I unravel the boundaries between subjective experience and computational modes of being at the perceptual interface of social VR. First, in two auto-ethnographically inspired close readings of my experience in VRChat, I find that despite positivistic promises of heightened social presence, social VR reproduces gendered exclusions and discriminatory representational norms in sociotechnical ways. In particular, the technical form of the anime girl avatar reinscribes fantasy tropes about Asian women rooted in techno-orientalist cultural histories. Complicating notions of the “anime girl” avatar as a neutral, post-racial virtual citizen, I instead argue that practices of proliferating whiteness and appropriating bodies coded as female are well situated within the harrowing realities of globalization. Understanding that avatar bodies possess affective investments with operative power, a material history, and technical agency is essential to developing more co-creative approaches toward virtual embodiment. I propose cybershamanic world-making as a creative praxis for constructing new, embodied knowledges by centering cultural memory. To conclude, I then present my work A Place of Care, a VR performance that centers the contemporary realities of violence against Asian and migrant women to consider how a greater respect for issues of transnational identity could forefront engagements with virtual space.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monkey: Platform-Agnostic Hybrid-Cloud Cluster Compute Orchestration Designed for AI/ML</title>
<link href="https://hdl.handle.net/1721.1/139258" rel="alternate"/>
<author>
<name>Lamp, Avery</name>
</author>
<id>https://hdl.handle.net/1721.1/139258</id>
<updated>2022-01-15T03:38:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Monkey: Platform-Agnostic Hybrid-Cloud Cluster Compute Orchestration Designed for AI/ML
Lamp, Avery
As AI/ML research progresses, the amount of compute needed to train and evaluate state-of-the-art AI algorithms consistently increases. With increasing needs for compute, researchers spend time designing distributed systems to scalably train and hyperparameter-optimize their latest models rather than focusing on their core research. We aim to build a fault-tolerant distributed system capable of cheaply and flexibly scheduling reproducible research training jobs on heterogeneous hybrid-cloud compute clusters, including local machines and provider-agnostic cloud machines. Our system focuses on ML researchers with two main goals: minimizing costs (using preemptible/spot instances) and user-friendliness. The system aims to require minimal user setup and configuration, allowing researchers to quickly get started training models. The Monkey System includes a web console and visualization dashboard to track, evaluate, and compare multiple jobs’ progress and results.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model Compression and AutoML for Efficient Click-Through Rate Prediction</title>
<link href="https://hdl.handle.net/1721.1/139253" rel="alternate"/>
<author>
<name>Gschwind, Katharina</name>
</author>
<id>https://hdl.handle.net/1721.1/139253</id>
<updated>2022-01-15T03:25:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Model Compression and AutoML for Efficient Click-Through Rate Prediction
Gschwind, Katharina
Novel machine learning architectures can adeptly learn to predict user response for recommender systems. However, these model architectures are often effective at the cost of large computational and memory requirements. This limits their ability to run on edge devices with constrained hardware, such as smartphones, a popular use case for recommender systems. We address this issue in this thesis by studying how compression of recommender system models can significantly reduce model computation cost and edge-device runtime while preserving prediction accuracy. Furthermore, we present a new compression-based AutoML method for feature set generation in architectures that incorporate explicit feature interactions. This works as a tool to build efficient recommender system models, and is applicable to many state-of-the-art model designs. Applying this AutoML method shows initial gains in model performance.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Climate Change through Community Organizing and Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/139251" rel="alternate"/>
<author>
<name>Leshchinskiy, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/139251</id>
<updated>2022-01-15T03:25:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Addressing Climate Change through Community Organizing and Machine Learning
Leshchinskiy, Brandon
Climate change is the challenge of our time. It is global, slow-moving, and impersonal – yet it has already impacted everyone from California to Kuwait. Humans are ill-adapted to this type of problem, as the scale – both in time and space – is too large to spur action. But to ignore the difficult choices ahead poses a catastrophic threat to humanity. To address the climate crisis – to mobilize societal change – we must make it meaningful to both decision-makers and the public. &#13;
&#13;
In this thesis, I investigate two aims. Aim one is developing EarthDNA Ambassadors, a community that “ripens” the issue of climate change by connecting leaders, empowering students, and engaging the world on climate. Aim two is developing the Earth Intelligence Engine, which uses AI to generate satellite images of the future, bridging the gap between AI experts, climate models, and decision-makers – starting with floods, the most frequent disaster in the US.&#13;
&#13;
I developed and deployed training materials with dozens of young people to address the need for climate leaders. Surveys show the Ambassadors training program not only improves participants’ negotiation and communication skills, but also improves their mindset for learning leadership. Furthermore, EarthDNA’s Climate 101 workshop improves climate literacy and climate behaviors, and although long-lasting change is unlikely after one session, Climate 101 creates a privileged moment in which participants are more likely to increase their involvement in climate activism. &#13;
&#13;
Working with a team, we also developed an initial framework for the Earth Intelligence Engine (EIE) to  generate satellite imagery of future floods. The EIE outperforms both our handcrafted baseline and state-of-the-art AI models. We intend to deploy our flood visualization model with the National Oceanic and Atmospheric Administration (NOAA) – integrating flood forecasts with aerial imagery along the entire US East Coast – and then to expand to other areas and events. &#13;
&#13;
Ultimately, climate change requires a mobilization of society at all levels. It demands both technical and adaptive work: new technologies and policies, yes, but also new ways of looking at ourselves and our world. Mitigating our climate crisis hinges on progress in complementary areas: inequity, polarization, and institutional backsliding. Climate change demands that we address many of the world’s most pressing issues – and it will take all of us to succeed.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Insurance Design and Pharmaceutical Innovation</title>
<link href="https://hdl.handle.net/1721.1/139250" rel="alternate"/>
<author>
<name>Kim, Soomi</name>
</author>
<id>https://hdl.handle.net/1721.1/139250</id>
<updated>2023-11-08T21:37:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Insurance Design and Pharmaceutical Innovation
Kim, Soomi
This paper studies how insurance coverage policies impact pharmaceutical innovation. In the United States, most patients obtain prescription drugs through insurance plans administered by Pharmacy Benefit Managers (PBMs). Beginning in 2012, PBMs began refusing to provide coverage for many newly approved drugs when cheaper alternatives were available. We show that this policy reshaped upstream pharmaceutical R&amp;D, shifting investments away from therapeutic classes at greater risk of exclusion. This move translated into a relative decline in the development of drug candidates that appear more incremental: that is, those in drug classes with more pre-existing therapies and with less scientifically novel research.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tolerant Testing of Regular Languages in Sublinear Time</title>
<link href="https://hdl.handle.net/1721.1/139249" rel="alternate"/>
<author>
<name>Gong, Linda</name>
</author>
<id>https://hdl.handle.net/1721.1/139249</id>
<updated>2022-01-15T03:56:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Tolerant Testing of Regular Languages in Sublinear Time
Gong, Linda
A classic problem in property testing is to test whether a binary input word &#119908; is in a regular language &#119871;. Such testers distinguish the case that &#119908; is in &#119871; from the case where &#119908; is &#120598;-far from &#119871; (&#120598;-far means that at least an &#120598; fraction of the bits in &#119908; must be modified to change &#119908; into a word in &#119871;; otherwise, &#119908; is &#120598;-close). When it is known that &#119908; is noisy, it can be useful to provide tolerant testers: algorithms that accept when &#119908; is &#120575;-close and reject when &#119908; is &#120598;-far, for &#120575; &lt; &#120598;. We build on the work of Alon, Krivelevich, Newman and Szegedy [1] to provide a tolerant, constant-time property tester for regular languages. Our main result is that given a regular language &#119871; &#8838; {0, 1}* and an integer &#119899;, there exists a randomized algorithm which accepts a word &#119908; of length &#119899; if it is &#120575;-close (&#120575; &lt; &#120598;) to a word in &#119871; and rejects with high probability if &#119908; is &#120598;-far from a word in &#119871;. The algorithm queries a number of bits in &#119908; polynomial in 1/&#120598;.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining Optimal Supply Level for Intermittent and Low Demand Parts</title>
<link href="https://hdl.handle.net/1721.1/139248" rel="alternate"/>
<author>
<name>Lee, Jason (Jin Soo)</name>
</author>
<id>https://hdl.handle.net/1721.1/139248</id>
<updated>2022-01-15T03:32:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Determining Optimal Supply Level for Intermittent and Low Demand Parts
Lee, Jason (Jin Soo)
One of the challenges in the heavy equipment aftermarket business is determining what optimal level of supply looks like for intermittent and low demand parts. For these parts, it is difficult to forecast the needed demand and set the right amount of inventory level. A number of these parts are Made-as-Order (MAO) with long lead times, therefore when a part is needed, customers may need to wait up to several months to receive the part. Thus, there is potential for negative impact on a heavy equipment manufacturer’s brand image and customer relationships.&#13;
&#13;
The goal of this project will be to determine the optimal inventory strategy for these parts and evaluate the impact on Caterpillar’s value stream. This requires a balance between maintaining reasonable level of inventory and ensuring customers receive the parts in a reasonable amount of time. In determining the optimal level, the company would be able to achieve savings in inventory costs and improve brand reputation and customer relationships due to better availability.&#13;
&#13;
The project breaks the issue into four steps: 1) understand the demand profiles and segment the parts into granular categories, 2) calculate inventory levels based on various inventory strategies, 3) select the optimal inventory strategy, and 4) evaluate the performance of the selected strategy. From evaluating various demand profiles, low and intermittent demand parts will be segmented further into granular categories to ensure flexibility in their inventory strategy. For each category, inventory levels will be calculated using various inventory models. Then, based on the probability of demand, the profitability of the part, and holding/opportunity costs, an optimal inventory strategy will be built around maximizing profit and service levels. This optimal strategy will be evaluated on a number of metrics (financial and reputational) to determine its performance against the current inventory model within various scenarios.&#13;
&#13;
For low and intermittent demand loader and scraper family parts (~5,600 parts and ~6,800 parts respectively), the model showed close to 20% improvement in profitability and 40% improvement in service level over Caterpillar’s current inventory strategy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physically Constrained PCB Placement Using Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/139247" rel="alternate"/>
<author>
<name>Crocker, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/139247</id>
<updated>2022-01-15T03:23:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Physically Constrained PCB Placement Using Deep Reinforcement Learning
Crocker, Peter
This thesis provides an in-depth exploration of Reinforcement Learning (RL) based PCB component placement, with emphasis on physically verified placements. Unlike prior methods that rely on heuristic proxies for placement quality, this work focuses entirely on routing-based metrics that result in functioning placements without the need for fine-tuning. Additionally, this exploration considers true use cases of PCB auto-placement, where a human-in-the-loop pre-places a set list of components and the auto-placer places the remainder. This is achieved by first restricting the placement domain to ring placements, a domain where routing calculations become accessible. Within the ring placement domain, an RL agent is trained to place components on a simulated PCB canvas such that there are no component overlaps or wire crossings upon manufacture. Through the use of an unbounded reward system, the agent is trained progressively, with PCB complexity gradually increasing as training steps are run. The resulting placements are robust to varying numbers of components as well as component shape and size. Finally, this thesis concludes with a discussion of further work and the challenges facing the future of PCB auto-placement.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Digital-to-Print Fabrication Pipeline for Multi-Color Photochromic 3D Printing</title>
<link href="https://hdl.handle.net/1721.1/139246" rel="alternate"/>
<author>
<name>Chen, Sabina W.</name>
</author>
<id>https://hdl.handle.net/1721.1/139246</id>
<updated>2022-01-15T03:24:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Developing a Digital-to-Print Fabrication Pipeline for Multi-Color Photochromic 3D Printing
Chen, Sabina W.
Stereolithography (SLA) and digital light processing (DLP) 3D printing are two common resin-based 3D printing processes. These printing processes work by projecting a light source onto specific areas of resin, thereby forming thin layers of plastic that eventually stack up to create solid objects. However, one limitation of SLA and DLP resin printing is that they typically only produce single-color prints because only one resin type can be used at a time. Therefore, we present a novel approach that enables multi-color resin printing using photochromic dyes. By combining DLP with photochromic materials, our end-to-end 3D printing fabrication pipeline can create multi-colored objects using only one type of resin-based material. &#13;
&#13;
Instead of using a standard, single-color resin, our resin contains a mixture of photochromic inks that can change color when exposed to different wavelengths of light. By integrating photochromic materials into a UV curable resin, we can programmatically change the color of the resin depending on the type of RGB light projected. To build our 3D printing system, we modified an existing resin-based 3D printer to incorporate both a UV and visible light projection system. This enables us to control both the curing and coloring of an object separately. By saturating the dyes prior to printing, and then projecting combinations of RGB light onto each layer after it has been cured, we can color objects directly during the printing process. &#13;
&#13;
In this thesis, we provide the implementation details and design decisions that went into building this integrated 3D printing infrastructure. We discuss the user interface, printer hardware, software implementation, and photochromic resin formulation. We also provide operational instructions and explanations for key design decisions of our system. Finally, we evaluate the capabilities of our photochromic resin and printer system, and propose topics for future work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Validation, Calibration, and Uncertainty Quantification of the Wofost Crop Simulation Model</title>
<link href="https://hdl.handle.net/1721.1/139245" rel="alternate"/>
<author>
<name>Kahraman, Sule</name>
</author>
<id>https://hdl.handle.net/1721.1/139245</id>
<updated>2022-01-15T03:29:32Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Validation, Calibration, and Uncertainty Quantification of the Wofost Crop Simulation Model
Kahraman, Sule
The Digital Agriculture in Africa project [44] aims to unlock the agricultural potential in Africa and help farmers improve their crop productivity by enabling them to make more precise and timely decisions about crop management via better prediction tools. The absence of large, comprehensive, and structured agricultural data, however, limits the performance of these predictions. To address this problem, we propose a Data Platform that aggregates data from a variety of publicly available resources, processes them into reusable data formats, and augments the sparse data via synthetic data generation tools. &#13;
&#13;
In this thesis, we focus on the synthetic generation of agricultural yield data via the WOFOST (WOrld FOod STudies) [20] crop simulation model. Through our validation, calibration, and uncertainty quantification steps, we seek to answer the following question: How can we reliably generate yield data for different regions of the world using simulation models? &#13;
&#13;
Due to the unavailability of large agricultural datasets from Africa, we chose to make use of data from the United States for the validation and calibration steps. Our empirical findings from the validation step demonstrated that off-the-shelf usage of the WOFOST model, which was originally developed in Europe, may not be suitable for agricultural studies in the United States when the input parameters are not precise and accurate. This insight led us to perform the calibration step, where we discovered that the performance of the WOFOST model can be improved by estimating the correct crop parameters using evolutionary algorithms. Through our uncertainty quantification step, we shortlisted a number of input parameters to which the model seems most sensitive and developed a simpler but more tractable and effective alternative to WOFOST that has an analytical solution. Finally, we provided a quantitative analysis of how the uncertainty from the input parameters propagates through our proposed model to the generated data.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What are the local spillover effects of innovation?</title>
<link href="https://hdl.handle.net/1721.1/139244" rel="alternate"/>
<author>
<name>Bidanda, Maya</name>
</author>
<id>https://hdl.handle.net/1721.1/139244</id>
<updated>2022-01-15T03:24:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">What are the local spillover effects of innovation?
Bidanda, Maya
I investigate the spillover effects of labor displacement from technological innovation in local U.S. labor markets. Previous work on the direct effect uses textual matching of patents to occupations and finds that occupations more exposed to innovation face a decline in employment and wages. I document the indirect effect: the displacement lowers aggregate demand and therefore hurts industries that rely fully on local demand. In addition, I show how labor displacement from technology, as well as lower local aggregate demand, alters the local supply and demand of labor. The existence of local spillovers ties the discussion of innovation’s welfare consequences to location. Even if a worker is not directly displaced, they may still be affected by technological advancement. While there are many possible local spillovers, this paper focuses on the aggregate demand channel. I first document the direct effect of labor displacement at the commuting zone level and find that occupations in the 75th percentile of innovation exposure, relative to occupations in the 25th percentile, experience a decline in within-industry employment of 3.5% (though not statistically significant) and a decline in wages of 4%. I then create a shift-share instrument of local innovation exposure using local occupation shares and the national measure of occupation exposure. To isolate the effect of lower local aggregate demand, I focus on low-exposure occupations in non-tradeable industries, as they are not impacted by labor displacement but are impacted by lower local aggregate demand. I find that workers in the 75th percentile of local area exposure, relative to the 25th percentile, face a 6% decline in employment and no meaningful change in wages. Lastly, I find that the higher quantiles of wages are more impacted by both the direct and indirect effects. And though wage inequality decreases in more-impacted areas, the decrease is driven by falling wages at both tails and so is not welfare-improving.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CheckSync: Transparent Primary-Backup Replication for Go Applications Using Checkpoints</title>
<link href="https://hdl.handle.net/1721.1/139242" rel="alternate"/>
<author>
<name>Kaashoek, Nicolaas M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139242</id>
<updated>2022-01-15T03:02:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">CheckSync: Transparent Primary-Backup Replication for Go Applications Using Checkpoints
Kaashoek, Nicolaas M.
Many distributed systems have singular, mission-critical components; the MapReduce coordinator and lock servers are examples of such components. Due to their importance, they require high availability and fault tolerance. The most common way to achieve this is through the use of replicated state machines, an approach in which the application is replicated across multiple machines. There could be as few as two in a primary/backup arrangement, or more to reduce the risk of downtime. Each instance starts in the same state, and then advances to new states in the same order. This allows for easy failover to one of the replicas in case the primary machine fails. &#13;
&#13;
The use of replicated state machines, however, requires an application to expose the correct stream of operations to ensure that each machine ends up in the same final state. This abstraction is not well-suited to all applications, as it can’t support multithreading and can add extra complexity for application developers. This thesis proposes CheckSync, a protocol for achieving high availability and fault tolerance via the use of checkpoints. CheckSync is designed with transparency as a primary goal: applications require little to no modification to use it. It achieves this by checkpointing the memory of an application and replicating that state from the primary to a backup. Upon failure, the backup resumes from the checkpoint and continues running.&#13;
&#13;
CheckSync’s transparency sets it apart. Unlike the operation stream required for replicated state machines, CheckSync doesn’t place constraints on the design of the application. It can suspend and capture the memory of Go applications without knowledge of the specifics of the application, as well as restore them on the backup. This is accomplished through careful analysis and recreation of the application’s memory space, as well as efficient transmission of the checkpoint files to minimize performance overhead. CheckSync is evaluated with three different applications, and supports all three without any changes to their code.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the Position-Dependent Error in FTM RTT Indoor Navigation</title>
<link href="https://hdl.handle.net/1721.1/139240" rel="alternate"/>
<author>
<name>Houle, David E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139240</id>
<updated>2022-01-15T03:04:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Analysis of the Position-Dependent Error in FTM RTT Indoor Navigation
Houle, David E.
Fine time measurement (FTM) of the round-trip time (RTT) of a signal between an initiator (smartphone) and a responder (Wi-Fi access point) provides a promising method for indoor positioning. Accurate indoor positioning is a requirement for a wide range of applications, such as asset tracking, indoor navigation, and contact tracing. Unfortunately, the error of reported FTM RTT distance measurements has been shown to have a standard deviation that ranges from 1 to 2 meters even in ideal setups. A major FTM RTT error source was discovered and termed the “position-dependent error”. This error is heavily dependent on the position of an initiator relative to a responder, with the reported measurement fluctuating by meters for an initiator position change of millimeters. Using an Android app and a CNC machine for 2D and 3D positioning, these unusual error properties are explored in depth through experimentation. This experimentation includes evaluating the position-dependent error in both the spatial and frequency domains while varying the test setup, using different smartphones and Wi-Fi access points, and changing the bandwidth and central frequency of the Wi-Fi access points. Possible causes of the position-dependent error are analyzed, such as inaccurate time-of-arrival or super-resolution algorithms, a dependence on received signal strength, and clock instability. In the end, recommendations for error amelioration are made, and the future of FTM RTT is discussed.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Venture Growth in Nigeria Through 'Entrepreneur-Centered' Design: A Framework for Accelerating Entrepreneurship Development Applied to Consumer Brand Entrepreneurs</title>
<link href="https://hdl.handle.net/1721.1/139237" rel="alternate"/>
<author>
<name>Ukuku, Ogbogu Dike</name>
</author>
<id>https://hdl.handle.net/1721.1/139237</id>
<updated>2022-01-15T03:33:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Addressing Venture Growth in Nigeria Through 'Entrepreneur-Centered' Design: A Framework for Accelerating Entrepreneurship Development Applied to Consumer Brand Entrepreneurs
Ukuku, Ogbogu Dike
Entrepreneurship has long been seen by many as the key to socioeconomic growth and prosperity across Africa, and particularly in Nigeria, where rising unemployment, the need to diversify away from crude oil exports, and the high potential that exists across multiple sectors create the opportunity for local entrepreneurs and their enterprises to be the driving force behind realizing this desired growth and prosperity. Despite this, the results of various initiatives to drive entrepreneurship have been mixed at best, and much commentary has made it clear that current approaches to overcoming present barriers and accelerating entrepreneurship have not delivered as intended. &#13;
&#13;
In response, this work explores the application of human-centered design to entrepreneurship development in Nigeria by centering the entrepreneur and producing a framework that can be used by other stakeholders to more deeply understand the entrepreneur’s needs and lived experiences, which can then serve as a foundation for more genuine and impactful solutions. Through this research, the case is also made for framing this challenge as the responsibility of the entire ecosystem (as opposed to pinning it all on a single entity) and intentionally focusing on venture growth and scale (as opposed to venture creation alone), with the approaches to interviews, persona creation, and need discovery all reflecting this position. A demonstration of this framework is also included, with the needs, desires, and experiences of consumer brand entrepreneurs based in Nigeria being explored and surfaced. In this demonstration, additional guidance on how to apply this approach more generally is provided, as well as recommendations on what is truly needed to enable this class of entrepreneurs to take advantage of their unique position to contribute greatly to the nation’s economic development.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subcubic Min-Plus Product of Structured Matrices</title>
<link href="https://hdl.handle.net/1721.1/139234" rel="alternate"/>
<author>
<name>Xu, Yinzhan</name>
</author>
<id>https://hdl.handle.net/1721.1/139234</id>
<updated>2022-01-15T03:01:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Subcubic Min-Plus Product of Structured Matrices
Xu, Yinzhan
The All-Pairs Shortest Paths (APSP) problem is one of the most basic problems in computer science. The fastest known algorithms for APSP in n-node graphs run in n^(3-o(1)) time, and it is a major open problem whether a truly subcubic algorithm, running in O(n^(3-ε)) time for some ε &gt; 0, exists for APSP. The Min-Plus product of two n × n matrices is known to be equivalent to APSP, in that the optimal running times of the two problems differ by at most a constant factor. A natural way to approach understanding the complexity of APSP is thus understanding what structure (if any) is needed to solve Min-Plus Product in truly subcubic time. The goal of this thesis is to obtain truly subcubic algorithms for Min-Plus Products for less structured inputs than what was previously known, and to apply them to versions of APSP and other problems. This thesis gives subcubic algorithms for two interesting cases of structured Min-Plus Products: Min-Plus product between matrices with constant additive approximate rank and Min-Plus product between monotone matrices, whose definitions are deferred to the main text. These faster algorithms have a wide range of applications, including Geometric APSP, Maximum Subarray, Range Mode and Single Source Replacement Paths.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing and Testing a Mobile Creative Coding Application for Children</title>
<link href="https://hdl.handle.net/1721.1/139229" rel="alternate"/>
<author>
<name>Green, Rachel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139229</id>
<updated>2022-01-15T04:11:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing and Testing a Mobile Creative Coding Application for Children
Green, Rachel A.
Children are becoming increasingly engaged with applications on mobile phones. They use mobile apps to socialize, communicate, and play games. There is an opportunity to channel this familiarity and fascination with mobile phones in order to introduce children to computational thinking and creative expression. Towards that end, the Lifelong Kindergarten (LLK) Group is designing a free mobile application that will provide a motivating, creative, and accessible way for children to learn how to code through creating interactive animations that they can send to friends and family. This thesis identifies key design questions and challenges involved in the design of this new coding platform for children, reviews the strategies other mobile applications have employed in addressing related design challenges, and introduces a variety of solutions that our research team has developed, along with their affordances and limitations. This thesis also presents an analysis of data from conducting playtests and semi-structured interviews, and suggests lessons learned for other designers of mobile coding applications for children.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smooth Interpolation on Series of Measures</title>
<link href="https://hdl.handle.net/1721.1/139228" rel="alternate"/>
<author>
<name>Goul, Edward Masson</name>
</author>
<id>https://hdl.handle.net/1721.1/139228</id>
<updated>2022-01-15T03:13:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Smooth Interpolation on Series of Measures
Goul, Edward Masson
In this thesis, we explore concepts related to interpolation between series of measures with a focus on trajectory inference. In many application scenarios, we seek to model continuous phenomena with sequences of discrete data. This task is particularly important when working with time-series data, where we have access to snapshots of a process at discrete time points and wish to infer behavior at unmeasured time steps. Due to the realities of obtaining measurements, it is infeasible to measure the same samples multiple times. In the field of biology, for example, the development of single-cell sequencing methods has enabled the study of cell development at an unprecedented resolution, but cells are destroyed when measured. As a result, there is no direct correspondence between data at different time steps, rendering it difficult to learn about the evolution of a single cell over time. Previous methods have focused primarily on the case of two time steps, and also suffer from a number of issues, ranging from expressiveness to the quality of their predictions. We present a model based on continuous normalizing flows which simultaneously interpolates within and across time steps. Our model’s trajectories have a number of desirable geometric properties such as smoothness and continuity. We also provide an extension of our model, linking it to previous work on measure-valued splines, and suggest modifications to increase model expressiveness.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Role of Biological Constraints in Adversarial Robustness via Modeling and Representational Geometry</title>
<link href="https://hdl.handle.net/1721.1/139227" rel="alternate"/>
<author>
<name>Le Thi Nguyet, Hang</name>
</author>
<id>https://hdl.handle.net/1721.1/139227</id>
<updated>2022-01-15T03:13:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Investigating the Role of Biological Constraints in Adversarial Robustness via Modeling and Representational Geometry
Le Thi Nguyet, Hang
Although deep neural networks (DNNs) achieve excellent performance and even outperform humans on various computer vision tasks, the robustness of DNNs to small perturbations is still far from comparable to that of the human visual system. Indeed, adversarial attacks, which are very small worst-case perturbations, can reduce the accuracy of state-of-the-art models dramatically, to close to random chance, while remaining indistinguishable to humans. Since the human visual system has a high tolerance to small input perturbations, Dapello et al. developed VOneNet, a model with an architecture similar to the V1 brain area as the front-end and a standard DNN architecture as the back-end, and demonstrated that VOneNet has significantly better adversarial robustness than the standard ResNet.&#13;
&#13;
In this work, we analyze the internal representations of adversarial examples to dissect how adversarial perturbations alter the geometric structure and encoded information of the representations and to understand how brain-like components such as representational noise and neural normalization can help to improve adversarial robustness. Firstly, we show that internal representations from adversarial examples are linearly separated and still encode a significant amount of class information. Secondly, we demonstrate that representational noise can create an overlap between noise-injected clean and adversarial examples, therefore improving the robustness of the model. Finally, we show that neural normalization, which is based on divisive normalization and lateral inhibition, achieves better adversarial performance compared to traditional normalization methods such as batch normalization, which is based on standardization.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Machine Learning into Data Analysis and Plant Performance</title>
<link href="https://hdl.handle.net/1721.1/139225" rel="alternate"/>
<author>
<name>Morey, Zachariah Keith</name>
</author>
<id>https://hdl.handle.net/1721.1/139225</id>
<updated>2022-01-15T03:40:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Integrating Machine Learning into Data Analysis and Plant Performance
Morey, Zachariah Keith
In the current manufacturing environment, the push for high levels of plant performance has led to scrutinizing, optimizing, and improving every step of the manufacturing process. While improvements are being made in physical and software technology that enable advancements like automated robots or additive manufacturing, data management and analysis continues to be an area of opportunity. Challenges with data analysis are exacerbated by the ever-increasing influx of data from every point of the product manufacturing process, as well as by the integration of that data with legacy and novel equipment, software, and employee capabilities. Identifying improvements in processing and utilizing data can contribute to a better understanding of the data itself as well as insights to drive improved manufacturing and plant performance. This thesis shows, drawing from a recent project at Nissan's Canton, Mississippi manufacturing facility and utilizing data from a global group of Nissan manufacturing plants, that machine learning can be applied to plant performance data to identify and prioritize metrics and to better understand the impact of those metrics on overall plant performance.&#13;
&#13;
Nissan already benchmarks plant performance between its manufacturing facilities and uses that to drive improvement and investment opportunities. By examining the data set used for that benchmarking analysis, we gain an understanding of both how plants have performed in recent history and what successful plants are doing that contributes to better performance. We then run this data through a linear regression model and an XGBoost machine learning model to compare how the machine learning model performs relative to a standard linear regression. We show that while both models perform well, the machine learning model outperforms the linear regression model: it achieves an R-squared of 0.88, a 10% improvement over the linear regression's 0.80. In addition, the machine learning model better handles missing data and shows that the Design Standard Time Ratio and Delivery Scheduled Time Achievement Ratio are metrics that need to be prioritized for better plant performance. This thesis argues that while our project focused on a small benchmarking data set, machine learning and its benefits can be applied more broadly to data from the manufacturing facilities. We conclude by presenting some examples and opportunities for how a manufacturing company like Nissan can set up its data, utilize models, and train employees to take advantage of the growing knowledge base around data management, machine learning, and plant performance.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Overheating Preventative Measures in Residential Buildings and Passive Survivability</title>
<link href="https://hdl.handle.net/1721.1/139224" rel="alternate"/>
<author>
<name>Oladipo, Yesufu G.</name>
</author>
<id>https://hdl.handle.net/1721.1/139224</id>
<updated>2022-01-15T04:07:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Evaluating Overheating Preventative Measures in Residential Buildings and Passive Survivability
Oladipo, Yesufu G.
Buildings that are thoughtfully planned for future climate scenarios, designed well, and properly maintained have the potential to provide thermally comfortable environments. These same buildings can significantly reduce energy consumption and decrease CO2 emissions. This research evaluates the impact of the use of natural ventilation and modifications to the exterior wall to decrease the probability of heat-related illness and overheating. Assessments within this research are within a two-story residential building. The outdoor weather data selected for the assessments is from New York, NY during an extreme hot week. Assessments made within this research are intended to give guidance on the selection of the most appropriate combination of exterior wall properties and natural ventilation strategies within a well-insulated and tightly sealed building. &#13;
 &#13;
The daily operations of buildings and the number of occupants in buildings generate internal heat loads. Additionally, indoor air temperatures are impacted by solar heat gain through glazed openings and by heat transmitted via conduction through exterior wall surfaces. Natural ventilation strategies can reduce indoor air temperatures and increase air velocities close to the skin. Increasing air velocities close to the skin can supplement an individual’s thermoregulatory system: air flows near the skin allow the body to expel heat in a manner that reduces the necessity of skin wettedness. Skin wettedness aids in reducing the surface and core temperatures of an individual through the dissipation of heat, and both surface and core temperatures can help to indicate the level of heat stress encountered by an individual. The primary metric used in this research is the thermal sensation metric standard effective temperature (SET). Standard effective temperature incorporates heat loss to the environment, and its evaluation is currently recognized by LEED as a measure to promote passive survivability. The results from the simulations in this research show a drastic difference in the potential for heat stress between the models that use natural ventilation through open windows and open interior doors and the models that do not.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximating the Log-Partition Function</title>
<link href="https://hdl.handle.net/1721.1/139223" rel="alternate"/>
<author>
<name>Cosson, Romain</name>
</author>
<id>https://hdl.handle.net/1721.1/139223</id>
<updated>2022-01-15T03:10:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Approximating the Log-Partition Function
Cosson, Romain
Graphical Models are used to represent structural information about a high-dimensional joint probability distribution. Their expressiveness offers simple reductions from a large number of NP-hard problems to inference tasks such as computing the partition function (exact inference) or approximating the log-partition function (approximate inference). In this master's thesis, we motivate the need for a general constant-factor approximation of the log-partition function and prove that a variant of the well-studied tree-reweighted algorithm [1] achieves constant-factor guarantees. We express the corresponding approximation ratio &#120581;(&#119866;) solely as a function of the graph structure &#119866;.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robustness of Consistent Loss Functions for Multinomial Outcome Models</title>
<link href="https://hdl.handle.net/1721.1/139222" rel="alternate"/>
<author>
<name>Vivatsethachai, Suchan</name>
</author>
<id>https://hdl.handle.net/1721.1/139222</id>
<updated>2022-01-15T03:41:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Robustness of Consistent Loss Functions for Multinomial Outcome Models
Vivatsethachai, Suchan
Maximum likelihood estimation, which uses the logarithmic loss function, is the default method used to estimate latent parameters consistently in multinomial outcome models. However, it is sensitive to even a tiny fraction of corruption in the training data. Alternatively, other loss functions in the family of strictly consistent loss functions can be used to consistently estimate model parameters. In this thesis, we study the robustness properties of different loss functions in the family, mainly the logarithmic loss function, the quadratic loss function, and the spherical loss function. We introduce two notions of robustness for loss functions. A loss function is partially robust if its corresponding influence function, a proxy for the bias from corruption, has bounded 2-norm. On the other hand, a loss function is strongly robust if the 2-norm of the bias itself is bounded. When some mild assumptions are met, the quadratic loss function can be shown to be both partially robust and strongly robust, while the logarithmic loss function is not. We also demonstrate that the behaviors of each loss function agree with their theoretical properties when used to estimate parameters in two synthetic models: a price-purchase model and a multinomial logit with intercepts model for two products. This thesis thus not only advocates greater use of the quadratic loss function in parameter estimation of multinomial outcome models but also serves as a framework for future research at the intersection of the robustness of loss functions and the consistency of parameter estimation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating Novel Interactions with EIT-Based Devices through a Mobile Enabled API</title>
<link href="https://hdl.handle.net/1721.1/139221" rel="alternate"/>
<author>
<name>Verdejo, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/139221</id>
<updated>2022-01-15T03:05:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Creating Novel Interactions with EIT-Based Devices through a Mobile Enabled API
Verdejo, Joshua
Traditionally, electronic devices have some type of physical input. Whether those are the buttons, triggers, and joysticks of a video game controller, or the flat, multi-touch screens of mobile devices we have become used to, technology has pushed the limits of these physical and visible inputs. While developments continue to be made, such as pressure-sensitive displays and flexible touch surfaces, there is an entirely different aspect of technology that has not been as well utilized: the invisible input. Electrical Impedance Tomography (EIT) devices measure the electrical conductivity and impedance of a part of the body using noninvasive surface electrodes, e.g., a wristband, and form a tomographic image of that part, which can be used to measure internal muscular changes. In this way, users are able to imagine pressing a button, or simply make a gesture in open space, and have that gesture correspond to some command. Moreover, these gestures do not have to be sophisticated at all -- the simple act of flexing and relaxing a muscle would be enough to generate a signal to respond to. EIT technologies can create new interaction interfaces and make older interactions more accessible, because the movements required to interact with an EIT-based system would be much less intricate. This research looks to create a novel method of interacting with EIT-based devices, moving the interaction to a mobile medium through a mobile API. By using a mobile device and focusing on interactive applications, more specific features can be implemented and are explored in this paper.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Proximal Point Method to Accelerated Methods on Riemannian Manifolds</title>
<link href="https://hdl.handle.net/1721.1/139219" rel="alternate"/>
<author>
<name>Ahn, Kwangjun</name>
</author>
<id>https://hdl.handle.net/1721.1/139219</id>
<updated>2022-01-15T03:49:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">From Proximal Point Method to Accelerated Methods on Riemannian Manifolds
Ahn, Kwangjun
Recently, there has been significant effort to generalize successful ideas in Euclidean optimization to Riemannian optimization. However, one landmark result of Euclidean optimization has eluded the Riemannian setting: namely, a Riemannian analog of Nesterov's accelerated gradient method (AGM). In this thesis, we establish the first globally accelerated gradient method for Riemannian manifolds.&#13;
&#13;
Toward establishing our result, the first part of the thesis revisits Nesterov's AGM and develops a conceptually simple understanding of it based on the proximal point method (PPM). The main observation is that AGM is in fact an approximation of PPM, which results in simple derivations and analyses of different versions of AGM.&#13;
&#13;
The second part of the thesis then extends our simple approach to the Riemannian case. In our extension, we handle a technical hurdle inherent to the Riemannian case by introducing an appropriate notion of “metric distortion.” We control this distortion via a novel geometric inequality, which enables us to formulate and analyze global Riemannian acceleration.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language Models Predict Drug Resistance from Complex Sequence Variation</title>
<link href="https://hdl.handle.net/1721.1/139217" rel="alternate"/>
<author>
<name>Tso, Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/139217</id>
<updated>2022-01-15T03:12:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Language Models Predict Drug Resistance from Complex Sequence Variation
Tso, Andy
Mutation in viruses and bacteria presents a major barrier to the development of vaccines, antiviral drugs, and antibiotics. Recently, neural language models trained on viral protein sequence evolution have shown promise in their ability to predict viral escape mutations, potentially enabling more intelligent therapeutic design [6]. Hie et al.’s work puts forth the key conceptual advance that viral escape from human immunity occurs in the event of a mutation which simultaneously generates meaningful antigenic change while also preserving viral fitness. These ideas are analogous to the semantics and grammar of a language.&#13;
&#13;
Theoretically, mutations that confer high semantic change while preserving high grammaticality may also be predictive of resistance to other types of evolutionary pressure. In this thesis, we show that language modeling of protein evolution can also predict mutations that confer drug resistance. We validate our language model predictions using known drug resistance mutations in the HIV-1 protease and reverse transcriptase proteins and the Escherichia coli beta-lactamase protein. Our results suggest a way to identify and potentially anticipate drug resistance mutations that generalizes across viruses and bacteria.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computationally-Derived Design Principles for Water Oxidation Catalysts</title>
<link href="https://hdl.handle.net/1721.1/139216" rel="alternate"/>
<author>
<name>Harper, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/139216</id>
<updated>2022-01-15T03:35:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computationally-Derived Design Principles for Water Oxidation Catalysts
Harper, Daniel
The water oxidation reaction can be used to produce renewable solar fuels, but efficient catalysts need to be discovered to enable its use at industrial scale. The most active known water oxidation catalysts (WOCs) follow a common theme in chemical catalysis, relying on rare metals such as ruthenium and iridium. To discover alternatives that retain this level of activity while instead utilizing earth-abundant metals, tools need to be developed which leverage knowledge from computation and from existing systems to accelerate catalyst design. This thesis focuses on developing such tools for homogeneous transition metal complexes (TMCs), which are promising for catalyst development because their properties can be finely tuned through precise ligand modification. To understand the underlying properties which drive water oxidation, we begin by studying the TMCs with the highest activity known thus far: ruthenium WOCs. By leveraging results from density functional theory (DFT), we identify a computational descriptor which correlates well with experimentally observed activity among these catalysts. This descriptor provides a link between computation and experiment, enabling in silico screening for novel WOCs, but it alone is not sufficient. Machine learning (ML) can be used in combination with DFT to further accelerate virtual screening and to extract chemically meaningful design criteria. To enable ML for our application, we next propose a new featurization method which more readily encodes known chemical trends. Our new featurization method, eRAC-185, demonstrates improved performance on data sets which simultaneously incorporate 4d metals, which are common in catalysis, and 3d metals, which are significantly more abundant. Together, our descriptor and featurization method provide the foundation for the computationally accelerated discovery of more active WOCs with earth-abundant metals.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sancus: Cryptographic Audits for Virtual Currency Institutions</title>
<link href="https://hdl.handle.net/1721.1/139214" rel="alternate"/>
<author>
<name>Rahman, Ravi</name>
</author>
<id>https://hdl.handle.net/1721.1/139214</id>
<updated>2022-01-15T03:42:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Sancus: Cryptographic Audits for Virtual Currency Institutions
Rahman, Ravi
Sancus introduces fully accountable, privacy preserving, cryptographic audits for virtual currency institutions – entities that allow users to deposit, exchange, and withdraw blockchain-based funds. These audits, verifiable by the public, provide irrefutable proofs that institutions not only have accounted for all customer transactions but also own at least as much in blockchain assets as they owe to their users. Sancus addresses major limitations in previous works for blockchain auditing: it supports institutions that offer multiple currencies on multiple blockchains, including Bitcoin and Ethereum; it follows security best practices and uses offline wallets; it preserves privacy for the institutions and their customers by hiding transaction amounts and blockchain addresses; and it produces definitive proofs of solvency as individual customers take no part in the auditing process. Evaluation of our reference implementation of Sancus demonstrated that the audit generation time, audit validation time, and size of audits scale linearly with the number of users, number of transactions, and privacy parameters. With efficient runtimes for audit generation and validation in a multi-threaded environment and megabyte order-of-magnitude audit sizes, Sancus offers a promising, new approach for continuous auditing of virtual currency institutions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Relationship Between Spatial-Temporal Outdoor Thermal Comfort Simulations and Bike Ridership</title>
<link href="https://hdl.handle.net/1721.1/139213" rel="alternate"/>
<author>
<name>Young Li Wen, Elizabeth Lyn</name>
</author>
<id>https://hdl.handle.net/1721.1/139213</id>
<updated>2022-01-15T03:30:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">On the Relationship Between Spatial-Temporal Outdoor Thermal Comfort Simulations and Bike Ridership
Young Li Wen, Elizabeth Lyn
Predicting resident comfort throughout a city over time, and predicting the impact of these thermal sensations on mobility mode choice, are key inputs that urban planners and policy makers require to promote and implement thermal comfort concepts. The Universal Thermal Climate Index (UTCI) has been linked to outdoor activity patterns and used to evaluate the effectiveness of urban interventions to improve thermal comfort. However, calculating the UTCI at high resolutions in urban spaces is complex, as it requires inputs such as the ambient temperature, relative humidity, wind speed, and mean radiant temperature at the point of interest.&#13;
&#13;
This thesis investigates how simulating the urban environment at increasing levels of spatial refinement impacts UTCI values along three bike routes in Cambridge, MA. As a baseline, UTCI is estimated using data from a local weather file. Then, shading from buildings and trees along the routes are considered. Next, local wind speeds are incorporated from computational fluid dynamics simulations. Finally, surface temperatures of the surrounding environment are included. Subsequently, with the UTCI simulations and publicly available bike ridership data from Bluebikes, Boston’s bike-sharing program, the relationship between bike ridership patterns and UTCI values along each route is studied. Supervised machine learning models are applied to predict bike ridership based on UTCI and other predictors.&#13;
&#13;
UTCI simulation results show that incorporating the various increments of spatial resolution does influence hourly UTCI values and the comfort bands that they fall into, especially in urban areas. Incorporating local wind speeds has the largest impact on UTCI values, causing a 10% reduction in annual cold stress hours. While the increments in spatial refinement also impact UTCI in unshaded and exposed areas, the impact is smaller than in urban areas. The statistical models trained to predict hourly bike trip counts based on UTCI and other demand and weather predictors achieved a root mean squared error of 1.02 trips; 48% of predictions were correct, and an additional 40% of predictions were off by 1 trip.&#13;
&#13;
This thesis demonstrates the importance of spatial refinement in simulating UTCI, and motivates future research into efficient simulation methods or rules-of-thumb for deriving spatial-temporal UTCI values. Future work into building a robust predictive model would motivate the design of thermally comfortable environments for human-powered transportation in cities.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mandela, Massachusetts: Design Futures for a Proposed City</title>
<link href="https://hdl.handle.net/1721.1/139212" rel="alternate"/>
<author>
<name>Gulaid, Sofia Asli</name>
</author>
<id>https://hdl.handle.net/1721.1/139212</id>
<updated>2022-01-15T03:30:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Mandela, Massachusetts: Design Futures for a Proposed City
Gulaid, Sofia Asli
In 1986 and 1988, there were unsuccessful referendums for majority-Black neighborhoods across Boston to incorporate and form an independent city called Mandela. The referendums, motivated by widespread dissatisfaction with Boston’s treatment of Black neighborhoods and a community desire for land control and self-determination, have been all but forgotten at present, despite their motivations being as salient as ever.&#13;
&#13;
The following thesis presents my ongoing art project Mandela, Massachusetts: Design Futures for a Proposed City, which asks the question:&#13;
“How might we use design to spark conversation about the hidden history and potential design of Mandela?”.&#13;
During Spring 2021, I designed realistic posters, postcards and stickers about Mandela, and disseminated them in public spaces across Greater Roxbury. All pieces have a QR code linked to a visioning survey that gives participants the opportunity to imagine what Mandela could be.&#13;
&#13;
The thesis starts by exploring the background of Mandela, Massachusetts, including the proposal, its architects, various Black Power precedents, outcomes, and legacy, drawing upon the previous research of Zebulon Miletsky and Tomas Gonzalez. Then the public art project is explained, including the materials, the design process, project precedents from Monument Lab, Paper Monuments, and various Black and Indigenous women artists, and the use of the materials in situ. The final section then reflects on the implications of public art projects, evaluating the methodology of an ephemeral art installation like Mandela, and providing recommendations on how to ground this work in the built environment and continue these conversations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Houses on Hudson: Using Documentary Film to Explore Exclusionary Zoning and Affordable Housing Development in the New York Suburbs</title>
<link href="https://hdl.handle.net/1721.1/139211" rel="alternate"/>
<author>
<name>Gourevitch, Ruth</name>
</author>
<id>https://hdl.handle.net/1721.1/139211</id>
<updated>2022-01-15T04:01:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Houses on Hudson: Using Documentary Film to Explore Exclusionary Zoning and Affordable Housing Development in the New York Suburbs
Gourevitch, Ruth
In this thesis, I use the medium of documentary filmmaking to examine efforts to develop fair and affordable housing in Hastings-on-Hudson, New York. Hastings is an affluent, predominantly white suburban town located in Westchester County; increasingly high housing prices and exclusionary land use practices uphold local housing segregation, mirroring many other suburban communities across the country. Specifically, this thesis explores how stated values of inclusion and progressive ideology come in tension with underlying desires related to homeownership. The result is a short documentary, “Houses on Hudson,” and an accompanying narrative outlining the motivation, process, and takeaways from this artistic intervention.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating climate, economic, and racial justice through a Boston FutureCorps</title>
<link href="https://hdl.handle.net/1721.1/139210" rel="alternate"/>
<author>
<name>Costantini, Winn Elliott</name>
</author>
<id>https://hdl.handle.net/1721.1/139210</id>
<updated>2022-01-15T03:53:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Integrating climate, economic, and racial justice through a Boston FutureCorps
Costantini, Winn Elliott
Amidst a rapidly evolving political landscape, with the 2021 Boston Mayoral Election, recently passed Massachusetts State climate policy, and President Biden’s Executive Order to create a Civilian Climate Corps, the City of Boston has the opportunity to integrate its response to climate change, economic inequity, and racial injustice through the creation of what I have titled the Boston FutureCorps. Following Councilor Michelle Wu’s call for an Urban Climate Corps and Councilor Kenzie Bok’s proposal for a Boston Conservation Corps, the Boston City Council is now in the process of developing a new corps program that will join the city’s existing network of green workforce development infrastructure. In order to strengthen, rather than duplicate, this existing infrastructure, this thesis examines the complex cross-section of current public, private, and nonprofit efforts to prepare Boston residents for green jobs and address racial inequity in green sectors. This work contributes to the City of Boston’s collective response to the climate crisis through a city-level ecosystem analysis for the operationalization of the Green New Deal-based Boston FutureCorps.&#13;
&#13;
I participated in two Boston City Council meetings, convened a focus group, and conducted 46 semi-structured interviews with stakeholders — including Boston workforce development programs, as well as environmental, community, and labor organizations — and visualized the current organizational landscape in Boston through a series of ecosystem maps. The ecosystem maps relay the existing relationships among stakeholders, potential green career pathways, and external factors necessary for the consolidation of an equitable and just corps.&#13;
&#13;
Critically, this thesis also explores stakeholders’ perceptions of this current system, the concept of “green jobs”, and the potential design and impacts of the Boston FutureCorps. Stakeholders stressed the need for a participatory program design process and partnerships with community organizations, long-term and reliable funding sources, and the need for the corps to connect participants to meaningful jobs with living wages. In conclusion, I consider how such stakeholder perspectives can inform the institutionalization of this effort; I then recommend a series of values-based indicators that decision-makers can use to ensure that policy efforts to introduce a Boston FutureCorps are rooted in climate, economic, and racial justice, both in theory and practice.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Community Composting: Public-Nonprofit Partnerships and Equity in New York City Organic Waste Programs</title>
<link href="https://hdl.handle.net/1721.1/139208" rel="alternate"/>
<author>
<name>Chancey, Bahij V.</name>
</author>
<id>https://hdl.handle.net/1721.1/139208</id>
<updated>2022-01-15T03:08:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Community Composting: Public-Nonprofit Partnerships and Equity in New York City Organic Waste Programs
Chancey, Bahij V.
This thesis explores the ecosystem of public, nonprofit, and private organizations that contribute to household organic waste composting in New York City, and the public-nonprofit partnership (PNPP) efforts that the City’s Department of Sanitation (DSNY) has employed to realize its compost education, outreach, collection, and processing efforts. These PNPPs evolved in the context of an ecosystem of nonprofit and private sector actors, as well as purely municipal efforts to collect organic material and divert it from the city’s waste stream. The research investigates how DSNY’s PNPP strategies affected the geographic, demographic, and social equity of its compost initiatives through a series of 16 semi-structured interviews with people both involved in and working outside of the partnerships, and geographic and quantitative analysis of public data. The research reveals the unique bureaucratic circumstances under which the PNPPs formed, and how their configuration limited their ability to work with informal and commercial entities also relevant in the space and affected the equity of program services. The study finds that the PNPPs succeeded in serving a diverse and representative population, but that public funds may have directly benefited wealthier and Whiter communities while more marginalized Black, brown, and poor communities were left to be served by volunteers. In addition, the research finds that community composting as practiced by the PNPPs fostered numerous ancillary social benefits, such as the creation of community cohesion, the development of local green jobs, and the encouragement of deep volunteerism, that went beyond collecting and processing household organic waste and therefore escaped the comprehensive measurement, reporting, and support of the Sanitation agency.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In Situ Perturb-Seq of Transcriptomes and RNA Neural Recordings</title>
<link href="https://hdl.handle.net/1721.1/139207" rel="alternate"/>
<author>
<name>Romero, Cipriano William</name>
</author>
<id>https://hdl.handle.net/1721.1/139207</id>
<updated>2022-01-15T03:48:24Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">In Situ Perturb-Seq of Transcriptomes and RNA Neural Recordings
Romero, Cipriano William
In this work, we explore the intersection of in situ sequencing, neural recording, and CRISPR screens. An intracellular technology is outlined for encoding neural activity in the form of RNA, theoretically enabling single-cell resolution recording of whole-brain activity. This neural recording system can be coupled with perturb-seq in order to observe high-throughput genetic perturbations of neurons with both temporal and transcriptomic information. Untargeted expansion sequencing (ExSeq) can be used to generate a high-resolution spatiotemporal dataset that includes single guide RNAs (sgRNAs), neural activity, and transcriptomics. Targeted ExSeq, with the inclusion of no-gap padlock probes and SplintR ligase, can be applied to enhance the detection of sgRNA barcodes and targeted transcripts. In vitro and in vivo experimental pipelines for the fusion of these technologies are proposed in this theoretical thesis.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expected Possession Value: An Evaluation Framework for Decision-Making, Strategy, and Execution in Basketball</title>
<link href="https://hdl.handle.net/1721.1/139205" rel="alternate"/>
<author>
<name>Jutamulia, Ivan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/139205</id>
<updated>2022-01-15T04:04:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Expected Possession Value: An Evaluation Framework for Decision-Making, Strategy, and Execution in Basketball
Jutamulia, Ivan C.
Quantifying decision-making in professional basketball has been an extremely challenging area of research in the past decade, with potentially fruitful and powerful insights to be drawn as NBA organizations seek to understand the cognitive aspects of athlete performance. This work seeks to develop an objective framework for evaluating decision-making, while simultaneously making inferences about strategy and execution efficacy.&#13;
&#13;
I construct a metric called Expected Possession Value (EPV) computed through tracking data that is then leveraged to identify scoring opportunities throughout a game. I then analyze these opportunities as instances of decision-making, quantifying how often those opportunities are missed and how good those opportunities were. Looking at team opportunities as a whole and relying on the notion of expectation, I am then also able to make judgements on how much of a team’s performance can be attributed to their strategy versus their execution. Through this analysis, I show that using EPV is an effective framework for extracting quantitative measures to aid in decision-making evaluation through tracking data.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polyhedral Code Transformation for Julia</title>
<link href="https://hdl.handle.net/1721.1/139203" rel="alternate"/>
<author>
<name>Julian, Meredith</name>
</author>
<id>https://hdl.handle.net/1721.1/139203</id>
<updated>2022-01-15T03:34:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Polyhedral Code Transformation for Julia
Julian, Meredith
Central processing unit (CPU) code performance is highly dependent on instruction and data parallelism. Instruction parallelism is largely focused on CPU cores and threading, while data parallelism relies on specialized vectors. Single-instruction, multiple-data (SIMD) vectors allow a single instruction to be executed on multiple data elements in parallel; this form of parallelism is very fine-grained compared to instruction parallelism.&#13;
&#13;
This project focuses on data parallelism. One way to increase performance on CPU is to rearrange code to allow for more vectorization, such as considering striding of array accesses. The polyhedral model of computation describes a series of operations and abstractions for manipulating code without violating dependencies or correctness, but is more aggressive than typical compiler optimizations.&#13;
&#13;
This paper describes the process of applying the polyhedral model to the Julia language with the acceleration of code via increased vectorization and memory locality in mind. It succeeds as a tool for both experienced developers and those new to Julia, and allows for simpler analysis and generation of algorithms for new and existing code.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Robotic Process Automation in the Banking Industry</title>
<link href="https://hdl.handle.net/1721.1/139202" rel="alternate"/>
<author>
<name>Wang, Yucun</name>
</author>
<id>https://hdl.handle.net/1721.1/139202</id>
<updated>2022-01-15T03:28:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Applying Robotic Process Automation in the Banking Industry
Wang, Yucun
In recent years, Robotic Process Automation (RPA) has attracted much attention. With predetermined programs, it can execute tasks that are rule-based, high-information, and repetitive. Nowadays, RPA is used in many areas such as finance, manufacturing, accounting, retail, and supply chains to save time and improve efficiency. However, RPA is seldom used in banking. This thesis conducts a comprehensive analysis of RPA technology, proposing practical suggestions for applying RPA in banking scenarios. The study introduces the concepts, characteristics, and industry status of RPA and presents a case study of a bank integrating RPA; this case study quantifies the cost reduction and efficiency improvement for a particular bank. In addition to the potential benefits, the study also highlights the risks and challenges of adopting RPA technology and proposes efficient methods to mitigate them. Based on the analysis and extensive literature review, this study develops a 5-Step RPA Application Model and introduces three sourcing modes for RPA adoption in the banking industry. Finally, some directions for future research are presented.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unified Graph Framework: Optimizing Graph Applications across Novel Architectures</title>
<link href="https://hdl.handle.net/1721.1/139201" rel="alternate"/>
<author>
<name>Hsu, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/139201</id>
<updated>2022-01-15T03:44:44Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Unified Graph Framework: Optimizing Graph Applications across Novel Architectures
Hsu, Claire
High performance graph applications are crucial in a wide set of domains, but their performance depends heavily on input graph structure, algorithm, and target hardware. Programmers must develop a series of optimizations either on the compiler level, implementing different load balancing or edge traversal strategies, or on the architectural level, creating novel domain-specific accelerators optimized for graphs. In recent years, there has been rapid growth on the architectural end, with each novel architecture contributing new potential optimizations.&#13;
&#13;
To help compiler development scale with the growth of the architecture domain, we develop the Unified Graph Framework (UGF) to achieve portability for easy integration of novel backend architectures into a high-level hardware-independent compiler. UGF builds on the GraphIt domain-specific language, which divides algorithm specification from scheduling optimizations, and separates hardware-independent from hardware-specific scheduling parameters and compiler passes. As part of UGF, we introduce GraphIR, a graph-specific, hardware-independent intermediate representation; an extensible scheduling language API that enables hardware-independent optimizations and programmer-defined hardware-specific optimizations; and the GraphVM, the compiler backend implemented for each hardware architecture.&#13;
&#13;
Lastly, we evaluate UGF by implementing a GraphVM for Swarm, a recently developed multicore architecture. We integrate several scheduling optimizations built around Swarm’s ability to speculate on fine-grained tasks in future loop iterations. When evaluated on five applications and 10 input graphs, UGF successfully generates highly optimized Swarm code and achieves up to 8x speedup over baseline implementations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing a File Architecture for a Database Operating System</title>
<link href="https://hdl.handle.net/1721.1/139200" rel="alternate"/>
<author>
<name>Hong, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/139200</id>
<updated>2022-01-15T03:47:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Implementing a File Architecture for a Database Operating System
Hong, Daniel
Widely used operating systems such as Linux are becoming outdated. Because they were optimized for the limited processing power of several decades ago, scalability is a growing concern given the powerful computing environments available now. Instead of adding onto the current operating system design to address these problems, our team proposed a design for a system we call the Database Operating System (DBOS), which uses database tables to represent state and queries to represent operations on that state. In this study, I show that the performance of this new OS design is competitive with current operating systems. To obtain performance metrics, a few key components are in focus: the file system, scheduler, and IPC handler. This study focuses on the file system implementation. The file system is a simple file architecture using VoltDB as our in-memory database, with tables representing files and stored procedures representing IO tasks such as read and write. DBOS uses main memory as the primary storage, and mechanisms were implemented to spill data to disk when necessary. Benchmark tests were conducted against DBOS and other existing operating systems, which show that DBOS is not just competitive with, but can outperform, existing operating systems.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ProgGen: Automatic Dataset Generation for the Halide Domain Specific Language</title>
<link href="https://hdl.handle.net/1721.1/139199" rel="alternate"/>
<author>
<name>Holbrook, Zachary</name>
</author>
<id>https://hdl.handle.net/1721.1/139199</id>
<updated>2022-01-15T04:00:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">ProgGen: Automatic Dataset Generation for the Halide Domain Specific Language
Holbrook, Zachary
Compilers use cost models to choose between different optimization opportunities, and increasingly these cost models are developed using data-driven techniques. Compilers for general-purpose languages rely on large real-world program datasets to train their cost models. However, cost models for domain-specific languages often have to use program generators due to a lack of large datasets of real-world programs. Program dataset generators are typically manually constructed or handwritten to generate programs in a randomly guided way. However, writing a program generator is time-consuming and requires considerable tuning to produce programs with realistic computation patterns in the desired domain.&#13;
&#13;
This thesis presents ProgGen, a program generator inspired by genetic programming for automatically generating program datasets used in training compiler cost models. ProgGen automatically produces program datasets in different domains by starting with a small initial set of programs in the desired domain. I compare ProgGen with the random program generator used in the Halide Autoscheduler [1]. While the Halide random program generator performs better in the image processing and neural network domains it was designed for, ProgGen is competitive in the video processing and linear algebra domains. Due to its automatic nature, ProgGen can also generate programs in new domains with far less engineering time.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Just Transition: Lessons from Mexico</title>
<link href="https://hdl.handle.net/1721.1/139197" rel="alternate"/>
<author>
<name>Hodgkins, Chelsea</name>
</author>
<id>https://hdl.handle.net/1721.1/139197</id>
<updated>2022-01-15T04:10:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Just Transition: Lessons from Mexico
Hodgkins, Chelsea
In 2013, Mexico undertook a series of national energy reforms that promoted large-scale, privately funded renewable energy development. The stated goals of the reforms were to fill investment gaps in the public energy sector and to help meet CO2 reduction targets. Human rights and environmental organizations in Mexico, however, have criticized the model of development promoted by the reforms for its apparent contributions to increasing human rights abuses and generating new “socio-economic conflicts.”¹&#13;
&#13;
Using data collected between 2010 and 2020 at the Business &amp; Human Rights Resource Centre on abuses in renewable energy development across Latin America, and a review of the policy, regulatory, and legal regimes of the reforms, this thesis explores three primary questions: 1) Why are large-scale, private sector projects the preferred model of renewable energy development?; 2) What legal and regulatory structures created by the reforms enable the present violence and conflict?; and 3) What lessons can the global community learn from Mexico’s model and experience?&#13;
&#13;
My key finding is that the energy reforms in Mexico, and the model of renewable energy development they promote, need to be reconsidered. A just energy transition model that moves from fossil fuels to renewables would not encourage the current patterns of land use and dispossession. Further, the rights of indigenous peoples must be secured through full recognition, legally and in practice, of their customary land rights and community practices, regardless of the interests of private investors, companies, and governments in renewable energy.&#13;
&#13;
¹ Puentes, A., Peña Lizarazo, R. 8 July 2020. “Towards Energy Justice in Mexico: Challenges and Conditions.”&#13;
Business &amp; Human Rights Resource Centre. https://www.business-humanrights.org/en/blog/desaf%C3%ADosy-condiciones-para-avanzar-hacia-la-justicia-energ%C3%A9tica-en-m%C3%A9xico/
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Multifunctional Patch for Minimally Invasive Tissue Sealing: Design Strategies and Applications</title>
<link href="https://hdl.handle.net/1721.1/139196" rel="alternate"/>
<author>
<name>Wu, Sarah J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139196</id>
<updated>2022-01-15T03:08:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Multifunctional Patch for Minimally Invasive Tissue Sealing: Design Strategies and Applications
Wu, Sarah J.
Bioadhesive materials have garnered great attention due to their potential to replace sutures and staples during surgical procedures. Compared to traditional mechanical sealing modalities, bioadhesive materials are generally associated with short application times and reduced tissue damage. However, the complexities of delivering bioadhesive materials through narrow spaces and achieving strong adhesion in fluid-rich physiological environments continue to present substantial limitations to the broader surgical translation of existing glues and sealants, particularly in the domain of minimally invasive surgery. This thesis presents the design and testing of a surgically implantable tissue-sealing patch for versatile minimally invasive wound-closing applications. The design approach is guided by the clinical needs to resist contamination caused by pre-exposure to body fluids, achieve fast, strong, and fluid-tight tissue adhesion, and prevent postsurgical biofouling and inflammation. These criteria are realized through the synergistic integration of multiple distinct functional layers, including (1) a microtextured bioadhesive layer composed of an interpenetrating NHS-grafted PAAc and biopolymer network, (2) a blood-repellent hydrophobic fluid layer infused into the microtextured bioadhesive layer, and (3) an antifouling zwitterionic polymer-interpenetrated elastomer backing. Strategies guiding the design and characterization of each layer are discussed, and tailored form factors for specific minimally invasive clinical applications are further demonstrated. This platform provides a basis for the future design of multifunctional, antifouling, and bioadhesive materials.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Evaluative and Recommendatory tool for Policy Makers to make Sustainable Urban Development Decisions</title>
<link href="https://hdl.handle.net/1721.1/139194" rel="alternate"/>
<author>
<name>Hernandez, Anthony</name>
</author>
<id>https://hdl.handle.net/1721.1/139194</id>
<updated>2022-01-15T03:49:51Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Evaluative and Recommendatory tool for Policy Makers to make Sustainable Urban Development Decisions
Hernandez, Anthony
Frustrations with the decisions of policy makers often stem from a lack of information or from a poor understanding of dense, difficult-to-interpret information. These frustrations can include the general public lacking an understanding of the reasoning behind legislative decisions, policy makers not considering the aspects of urban metabolism that should influence these decisions, a lack of access to the relevant data, and a lack of insightful aids to that data for both parties. This tool is intended to provide public access to publicly available data about a region (country, city, etc.) as well as to generate insights via visual aids to this data. This document details the actualization and deployment of a tool to accomplish these goals for the public and relevant policy makers.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Learning curve: An exploration of the digital literacy dimension to ISPs</title>
<link href="https://hdl.handle.net/1721.1/139193" rel="alternate"/>
<author>
<name>Maina, David Kambo</name>
</author>
<id>https://hdl.handle.net/1721.1/139193</id>
<updated>2022-01-15T04:06:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Learning curve: An exploration of the digital literacy dimension to ISPs
Maina, David Kambo
As the world becomes increasingly digital, there is widespread recognition of the opportunities and potential benefits of expanding access to the Internet in developing countries. Reflecting this optimism, the current internet access landscape in Kenya is characterized by a diverse mix of dominant market-driven models, which, despite increased availability, remain restrictive for low-income earners, and a growing set of non-traditional service providers seeking to anchor themselves as sustainable service providers within low-income markets. As a result, these service providers are testing new business models and technologies, incorporating digital literacy programs to reach consumers in poor neighborhoods and sustain adoption. This study therefore seeks to understand how the diverse menu of internet providers, from recent entrants to the more significant players, use digital literacy programs to foster internet adoption in the low-income community of Kibera. The study looks at a cross-section of internet providers in Kibera to understand whether they provide ways to educate potential users about the possibilities of internet use and whether they learn from the successes and failures of their approach. A focused investigation of the digital literacy engagements used by these ISPs, analyzed together with their approaches to service provision for internet adoption, reveals that the current digital literacy environment favors profit-led internet service providers. At the same time, neighborhood and community-led service providers are disproportionately burdened, adding to the challenges faced in using data literacy to build local relevance in accessing the Internet.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Total Delivered Cost in the Automotive Industry</title>
<link href="https://hdl.handle.net/1721.1/139191" rel="alternate"/>
<author>
<name>Queiros, Pedro Vasconcelos Bettencourt Teixeira</name>
</author>
<id>https://hdl.handle.net/1721.1/139191</id>
<updated>2022-01-15T03:47:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Modeling Total Delivered Cost in the Automotive Industry
Queiros, Pedro Vasconcelos Bettencourt Teixeira
Automotive part sourcing is a large-scale, complex problem that involves the procurement of thousands of individual parts from hundreds of different suppliers. This project focuses on the development of a new Total Delivered Cost (TDC) methodology for automotive part sourcing at Nissan. Initially, a calculation methodology was developed using operational data from the Nissan Smyrna plant. This methodology, which aims at capturing the direct and indirect costs of sourcing, comprises 10 different cost drivers: part, tooling, packaging, transportation, last mile, storage, inventory, obsolescence, quality, and tariff. Subsequently, the methodology was integrated into a TDC tool for ease of use and applied in a preliminary TDC analysis. The results show that: (1) part cost is the main driver of TDC, with 90% of parts studied having non-part cost less than 15% of TDC; (2) non-part cost has an important compounding effect due to correlation between transportation, last mile, storage, inventory, obsolescence, and pipeline quality costs; (3) the relation between available information, TDC accuracy, and TDC value is highly asymmetric across time. Overall, the results highlight the importance of TDC methodologies in improving automotive part sourcing. Lastly, a review of typical implementation challenges associated with TDC methodologies was performed with the objective of identifying strategies that increase business impact. Two strategies were investigated: adopting a gradual implementation plan to mitigate the negative impact of initial low TDC accuracy, and deploying TDC methodologies as a filter in the sourcing process to reduce workload and allow for better resource allocation. The incorporation of cost of complexity and cost of supply chain risk in TDC is discussed as future work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Machine Learning Approach for Forecasting with Limited Data and for Distant Time Horizons</title>
<link href="https://hdl.handle.net/1721.1/139189" rel="alternate"/>
<author>
<name>Eiskowitz, Skylar</name>
</author>
<id>https://hdl.handle.net/1721.1/139189</id>
<updated>2022-01-15T03:37:52Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Machine Learning Approach for Forecasting with Limited Data and for Distant Time Horizons
Eiskowitz, Skylar
Time series forecasting has attracted the attention of the machine learning (ML) community to produce accurate forecasting models that address the limitations of classical methods. A large part of ML research focuses on innovative algorithms, but another important area is transitioning ML to industry settings. The objective of this Thesis is to apply ML in realistic scenarios by devising methods that make practical, usable forecasts and models.&#13;
&#13;
We focus on three areas that contribute to more practical forecasts. First, we improve the problem formulation of multi-step ahead forecasting by including a notion of an offset to create a more customizable forecasting window. A comparative analysis across three datasets shows that at further out horizons, two models that include the notion of an offset consistently outperform their original counterparts. Secondly, we simulate a scenario where an ML model does not have access to the immense amount of training data normally necessary to train deep neural networks. We use transfer learning with a weight sharing algorithm and observe that it improves all seven target model accuracies even after they have accumulated two weeks of their own data. However, the input data to transfer knowledge from must be chosen wisely to avoid negative transfer. Finally, we address the challenge of deploying a safe and robust ML model by outlining key features in a time series forecasting library and compare 13 notable, actively maintained libraries, finding a wide variability in the features included in the libraries. A surprisingly low number of libraries include a benchmarking system, and around half of the libraries provide a pre/post-processing engine that would allow for modular ML models in a pipelining manner for easy deployment.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Cost of CO2 Transport and Storage in Global Integrated Assessment Modeling</title>
<link href="https://hdl.handle.net/1721.1/139188" rel="alternate"/>
<author>
<name>Smith, Erin E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139188</id>
<updated>2022-01-15T03:04:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Cost of CO2 Transport and Storage in Global Integrated Assessment Modeling
Smith, Erin E.
Carbon capture and storage (CCS) is one of many critical tools to mitigate global climate change. Much analytic work has been dedicated to evaluating the cost and performance of various CO2 capture technologies, but less attention has been paid to evaluating the cost of CO2 transport and storage. This paper assesses the range of CO2 transport and storage costs and evaluates their impact on economy-wide modelling results of decarbonization pathways. Many integrated assessment modeling studies assume a combined cost for CO2 transport and storage that is uniform in all regions of the world, commonly estimated at $10/tCO2. Realistically, the cost of CO2 transport and storage is not fixed at $10/tCO2 and varies across geographic, geologic, and institutional settings. I surveyed the literature to identify key sources of variability in transport and storage costs and developed a method to quantify and incorporate these elements into a cost range. I find that onshore pipeline transport and storage costs vary from $4 to 45/tCO2 depending on key sources of variability including transport distance, scale (i.e. quantity of CO2 transported and stored), monitoring assumptions, reservoir geology, and transport cost variability such as pipeline capital costs. Using the MIT Economic Projection and Policy Analysis (EPPA) model, I examined the impact of variability in transport and storage costs by applying a range of uniform costs in all geographic regions in a future where global temperature rise is limited to 2°C. I then developed several modeling cases where transport and storage costs vary regionally. In these latter cases, global cumulative CO2 captured and stored through 2100 ranges from 290 to 377 Gt CO2, compared to 425 Gt CO2 when costs are assumed to be uniformly $10/t CO2 in all regions. I conclude that the widely used assumption of $10/tCO2 for the transport and storage of CO2 is reasonable in some regions, but not in others. 
Moreover, CCS deployment is more sensitive to transport and storage costs in some regions than others, particularly China. Several transport and storage options should be taken into account when modeling large-scale deployment of CCS in decarbonization pathways. However, cost data are scarce and there is still a significant amount of uncertainty and variability in available transport and storage costs.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robustness and Adaptation via a Generative Model of Policies in Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/139186" rel="alternate"/>
<author>
<name>Derek, Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/139186</id>
<updated>2022-01-15T03:31:56Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Robustness and Adaptation via a Generative Model of Policies in Reinforcement Learning
Derek, Kenneth
In the natural world, life has found an uncountable number of ways to survive and often thrive. Between and even within species, each individual has a slightly unique way of existing, and this diversity lends robustness to life in general. In this work, we aim to incentivize diversity of agent policies while optimizing for an external reward. To this end, we introduce a generative model of policies which maps a low-dimensional latent space to an agent policy space. In order to learn a broad range of solutions, our generative model uses a diversity regularizer that incentivizes different agent behaviors given the same state. Agents are assigned a specific latent vector persistent throughout their trajectory, and the generator learns to encode behavioral preferences in the latent space. Results show that our generator is able to find an array of policies that can express agent individuality through distinct and unique agent policies. Of particular interest, we find that having a diverse policy space allows us to rapidly adapt to unforeseen environmental ablations simply by optimizing generated policies in the low-dimensional latent space. We test this adaptability in an open-ended grid-world, as well as in a competitive, zero-sum, two-player soccer environment.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New Revenue Management and Distribution Technologies in the Airline Industry: Legal, Regulatory, and Commercial Implications</title>
<link href="https://hdl.handle.net/1721.1/139185" rel="alternate"/>
<author>
<name>Sanchez, Benjamin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/139185</id>
<updated>2022-01-15T03:30:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">New Revenue Management and Distribution Technologies in the Airline Industry: Legal, Regulatory, and Commercial Implications
Sanchez, Benjamin C.
IATA’s New Distribution Capability (NDC) is a communication standard for the distribution of airline tickets. This new standard will take advantage of modern technology and give airlines the ability to transmit new information to intermediaries and passengers during the flight search and booking process. NDC will also allow for more advanced revenue management and pricing techniques like dynamic offer generation and has inspired researchers to develop and test these new techniques. &#13;
&#13;
This thesis investigates the potential changes in the industry from the impending implementation of NDC. It considers potential legal, regulatory, and commercial barriers to NDC implementation and the implications of NDC adoption. The contributions of this thesis are three-fold. First, the history and development of airline revenue management and distribution are comprehensively documented. This documentation provides the context from which to understand how NDC may change the airline industry and is among the most thorough available in current literature. Second, NDC implementations are analyzed through a policy and legal lens. While this analysis is not comprehensive given the extent and variety of potential NDC implementations, the thesis offers one of the first attempts to understand the legal and regulatory issues that NDC implementation will raise. Finally, policies are explored that consider the current legal and regulatory landscape, the potential benefits of NDC to the industry, and the possible impacts on consumers from NDC implementation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learned Encodings in SageDB</title>
<link href="https://hdl.handle.net/1721.1/139184" rel="alternate"/>
<author>
<name>Cen, Lujing</name>
</author>
<id>https://hdl.handle.net/1721.1/139184</id>
<updated>2022-01-15T03:40:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Learned Encodings in SageDB
Cen, Lujing
As the demand for data outpaces diminishing improvements in the hardware used to store and query them, we must find intelligent ways to increase database performance on existing systems. This project is focused on integrating learned encodings into SageDB, a database capable of accelerating queries by analyzing and adapting to different workloads. Encodings improve query performance through lossless compression, thereby reducing I/O time during scans. Different encoding types exhibit different characteristics depending on properties of the underlying data and the hardware on which queries are executed. We implement a variety of common encodings in SageDB and propose a learning-based approach to select the optimal encoding for a given data block by combining block-level statistics with sampling. In addition, we demonstrate how to leverage properties of encoded data along with vectorized processing units in modern CPUs to more efficiently execute queries without the need to decode every value.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drug Substance and Drug Product Manufacturing Strategy Assessment for siRNAs</title>
<link href="https://hdl.handle.net/1721.1/139182" rel="alternate"/>
<author>
<name>Gabriela, Monica</name>
</author>
<id>https://hdl.handle.net/1721.1/139182</id>
<updated>2022-01-15T03:52:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Drug Substance and Drug Product Manufacturing Strategy Assessment for siRNAs
Gabriela, Monica
Amgen currently has its first siRNA program, Asset #1 in phase 2 clinical trials. Until recently, Amgen has been outsourcing the Drug Substance (DS) and Drug Product (DP) manufacturing to external manufacturers, but with a growing siRNA portfolio even beyond Asset #1, the building of a new facility is of great interest and value.&#13;
&#13;
As there are hundreds of potential manufacturing scenarios, this thesis will first shortlist the three most feasible ones to be analyzed with a supply chain model and eventually a business model. The supply chain model will include resilience and weak-link analysis, which will result in a risk-to-cost input for the overall business model, currently built only for Asset #1 due to limited information on other assets in earlier development phases. The business model, equipped with a mixed integer program, calculates the 20-year Present Value of Expense (PV of Expense) to identify the optimal capacity progression and scenario, even beyond the three predefined ones, with the least expense.&#13;
&#13;
It was eventually found that the best scenario is indeed beyond the three predefined ones, suggesting internalization very soon after Asset #1’s commercial launch. However, it was decided to delay this step until product demand becomes clearer, so that Amgen would invest only if the program showed a need and a profitable outcome. It is recommended that Amgen keep updating the model and continue monitoring the market to understand supply and demand dynamics for siRNA, as well as continue innovating on Amgen’s siRNA process.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Mold on Job Shops</title>
<link href="https://hdl.handle.net/1721.1/139180" rel="alternate"/>
<author>
<name>Braun, Caitlin M.</name>
</author>
<id>https://hdl.handle.net/1721.1/139180</id>
<updated>2022-01-15T04:09:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Breaking the Mold on Job Shops
Braun, Caitlin M.
Job shops lack disciplined manufacturing strategy and execution. High variation of work results in high levels of WIP, unbalanced loads, poor shop floor visibility, underutilized machinery, and ultimately poor on-time delivery, high costs, long lead times, and poor quality.  To break the mold on mediocrity, job shops must build a strong foundation from which the company can grow, synchronize operations, and build the factory of the future.  The research outlines the key initiatives and tools necessary for a job shop to execute this strategy and achieve operational excellence.  &#13;
&#13;
The strategy is explored through the lens of Company X, a CNC machining supplier serving the aerospace and defense industry.  Execution of this strategy introduced a strong culture of continuous improvement and bias for action.  Initial results include an improvement in on-time delivery performance at Company X from 50% to 96%.  The on-time delivery performance led booked orders to hit record levels during a global pandemic.  Company X is now better positioned to execute the strategic initiatives to sustain on-time delivery performance and achieve decreased costs, increased quality, and shortened lead times.  Company X will break the mold on job shops and become the industry-standard supplier for aerospace and defense.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>STPA Hazard Analysis of Human Supervisory Control of Multiple Unmanned Aerial Systems</title>
<link href="https://hdl.handle.net/1721.1/139179" rel="alternate"/>
<author>
<name>Johnson, Elias B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139179</id>
<updated>2022-01-15T03:33:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">STPA Hazard Analysis of Human Supervisory Control of Multiple Unmanned Aerial Systems
Johnson, Elias B.
Unmanned Aircraft Systems (UAS) operations are shifting from multiple operators controlling a single UAS to a single operator supervising multiple UAS engaged in complex mission sets. To enable this paradigm change, there is wide consensus in the literature that limitations in human cognitive capacity require shifting low-level control responsibilities to automation so that human operators can focus on supervisory control. However, hazard analyses to identify related safety concerns have largely used traditional hazard analysis techniques that cannot handle the complexity of these systems and cannot provide recommendations for the early stages of system development. To begin to address this shortfall, this thesis applies System-Theoretic Process Analysis (STPA) to a model of a multi-UAS system with human supervisory control. This hazard analysis approach handles complex software and human-machine control interactions together. This thesis details both how the hazard analysis was executed and the implications of the analysis results. Numerous traceable causal scenarios are systematically identified and used to generate design recommendations. These recommendations, if applied, will help ensure multi-UAS systems with human supervisory control are designed with safety in mind.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Encoding Schemes for String Indexing</title>
<link href="https://hdl.handle.net/1721.1/139178" rel="alternate"/>
<author>
<name>Yang, Adela</name>
</author>
<id>https://hdl.handle.net/1721.1/139178</id>
<updated>2022-01-15T03:08:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Analysis of Encoding Schemes for String Indexing
Yang, Adela
Lookup of strings into in-memory database indexes is a problem with different considerations from those using integer keys. With their variable sizes, efficiently inserting strings into indexes should account for properties specific to strings. We investigate learning alternate schemes for encoding and inserting strings into index structures such as the adaptive radix tree (ART) and their impact on memory and lookup performance. In this thesis, we examine three different properties of string datasets and perform three experiments aimed at taking advantage of these properties. While using a character frequency based encoding was successful in increasing throughput on a theoretical read-heavy workload, it did not preserve lexicographical order and is unlikely to be useful in most workloads. Meanwhile, the experiments that did preserve lexicographical order were unsuccessful in demonstrating space or throughput improvements. We suggest improvements on these approaches for further experimentation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Orbit Servicing System Architectures for Proliferated Low Earth Orbit Constellations</title>
<link href="https://hdl.handle.net/1721.1/139177" rel="alternate"/>
<author>
<name>Luu, Michael Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/139177</id>
<updated>2022-01-15T03:48:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">On-Orbit Servicing System Architectures for Proliferated Low Earth Orbit Constellations
Luu, Michael Adam
On-orbit servicing (OOS) presents new opportunities for refueling, inspection, repair, maintenance, and upgrade of spacecraft (s/c). OOS is a significant area of need for future space growth, enabled by the maturation of technology and its economic prospects. Growing orbital congestion is leading s/c operators to explore how they can leverage OOS. OOS missions for s/c in geostationary orbit (GEO) are currently underway, driven by the closure of the business case for refueling long-lived, monolithic, chemically propelled GEO assets. However, there are currently no plans for OOS of low-earth orbit (LEO) s/c, aside from technology demonstrations, because of their shorter design life and lower cost. It will become particularly important to enable the servicing of LEO s/c as the industry shifts its focus towards LEO. Designing OOS systems for LEO constellations differs from designing GEO-based systems; this difference is attributed to LEO’s proliferation of satellites, environmental effects (J2 nodal precession, drag), and different constellation patterns. Satellite constellations in LEO are becoming more distributed due to increased access, distributed risk, flexibility, and cost. OOS of s/c may enable the reduction of subsystem requirements, such as safety and redundancy requirements. These requirement reductions will enable lower risks, lower costs, and increased system resilience. This paper analyzes the benefits of OOS in proliferated LEO constellations. Several OOS system architectures are modeled in various scenarios; in each system architecture the model varies qualities such as mass, altitude, time, propulsion system, maneuver, and type of service. The objective of the model is to optimize for cost, time, and utility to generate a tradespace for an OOS system architecture. OOS provides higher utility than the comparative alternative of using spare satellites in some scenarios, and it provides even greater utility when considering satellite failure rates and when allowing for an increase in acceptable failure rates upon adopting an OOS system.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Road Traffic Flow Prediction Using Aerial Imagery</title>
<link href="https://hdl.handle.net/1721.1/139176" rel="alternate"/>
<author>
<name>Pabla, Simran K.</name>
</author>
<id>https://hdl.handle.net/1721.1/139176</id>
<updated>2022-01-15T04:07:32Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Road Traffic Flow Prediction Using Aerial Imagery
Pabla, Simran K.
Technological advancements have increased the potential and feasibility of widespread drone networks. Among other tasks, monitoring road traffic flow is a task well-suited for such networks. While real-time traffic flow estimation systems have been explored at length and exist as commercial services, these systems have limited spatial reasoning and suffer in accuracy when predicting future traffic conditions. To that end, graph neural networks can account for spatial patterns, and can more effectively capture the impact of a region’s current traffic conditions on neighboring regions in the future. Our work builds on prior graph neural network architectures for traffic flow prediction. While current traffic prediction models are trained on ground-based data with limited features, we propose leveraging aerial traffic data to train spatiotemporal models with richer feature spaces. Our research makes contributions towards assembling a dataset from aerial footage and predicting traffic across a road network given aerial images from a small set of drones.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Hybrid Micro Aerial Vehicle Concept with Multicopter and Vectored Thrust Modes of Flight</title>
<link href="https://hdl.handle.net/1721.1/139175" rel="alternate"/>
<author>
<name>Biberstein, Josef Xavier</name>
</author>
<id>https://hdl.handle.net/1721.1/139175</id>
<updated>2022-01-15T03:24:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of a Hybrid Micro Aerial Vehicle Concept with Multicopter and Vectored Thrust Modes of Flight
Biberstein, Josef Xavier
In recent years, the speed and agility of micro-aerial vehicles (MAVs) have been greatly improved by advances in embedded computing and control theory. Controllers utilizing techniques such as incremental nonlinear dynamic inversion have enabled tracking of aggressive trajectories and high rate cameras and inertial sensors --- combined with compact FPGA and embedded graphics technology --- allow the creation of estimators that can support this aggressive flight. This leap in performance suggests the possibility that autonomous aerial vehicles may be able to compete directly with the best human pilots. Indeed, considering quadrotors specifically, autonomous drone racing competitions have already been organized with the goal of pushing the autonomous quadrotor technology to a level beyond any human pilot. With this goal in mind, and considering the prevalence of the quadrotor as the lingua franca of the field of autonomous MAVs, the performance limits of the traditional brushless outrunner motor quadrotor dynamics may be considered a barrier to the continued development of control theory and embedded computing for fast and agile MAVs.&#13;
&#13;
This thesis seeks to address this limitation by designing a MAV platform prototype --- the rocket-enhanced aerial vehicle with extendable rotors (REAVER) --- which allows for significantly greater acceleration than a quadrotor while maintaining agility and the station-keeping abilities of a quadrotor. REAVER is a hybrid vehicle equipped with both a quadrotor-like mode of flight and a rocket-like mode of flight controlled via thrust vectoring. We discuss the mechanical design of the REAVER prototype, including an initial trade study on propulsion methods, the mechanical design of the vehicle, and its simulated aerodynamic performance. We also evaluate the performance of the design through a trajectory planning study using pseudospectral optimization. Time optimal trajectories which pass through a series of gates are found for the REAVER vehicle dynamics, simulating completing a racecourse. The results are compared to the performance of current autonomous drone airframes in similar tasks. Finally, the development of the jet vane thrust vector control (TVC) system used to steer REAVER during rocket-like flight is discussed. A novel small-scale jet vane design utilizing additive manufacturing is presented and conjugate heat transfer finite element simulation is performed to estimate the jet vane performance. A test stand for verifying the simulations is also presented.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Empirical and Theoretical Analysis of the Role of Depth in Convolutional Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/139174" rel="alternate"/>
<author>
<name>Nichani, Eshaan</name>
</author>
<id>https://hdl.handle.net/1721.1/139174</id>
<updated>2022-01-15T03:02:32Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Empirical and Theoretical Analysis of the Role of Depth in Convolutional Neural Networks
Nichani, Eshaan
While over-parameterized neural networks are capable of perfectly fitting (interpolating) training data, these networks often perform well on test data, thereby contradicting classical learning theory. Recent work provided an explanation for this phenomenon by introducing the double descent curve, showing that increasing model capacity past the interpolation threshold can lead to a decrease in test error. In line with this, it was recently shown empirically and theoretically that increasing neural network capacity through width leads to double descent. In this thesis, we analyze the effect of increasing depth on test performance. In contrast to what is observed for increasing width, we demonstrate through a variety of classification experiments on CIFAR10 and ImageNet32 using fully-convolutional nets, ResNets and the convolutional neural tangent kernel (CNTK) that test performance is U-shaped and in fact worsens beyond a critical depth. To better understand this phenomenon, we conduct a theoretical analysis on the impact of depth on generalization in linear convolutional networks of infinite width. In particular, we derive the feature map for the linear CNTK for arbitrary depths and identify the depth which minimizes the bias and variance terms of the excess risk. The findings of this thesis imply that increasing depth for interpolating convolutional networks can in fact lead to worse generalization.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Algorithms and Systems for Tiny Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/139171" rel="alternate"/>
<author>
<name>Lin, Ji</name>
</author>
<id>https://hdl.handle.net/1721.1/139171</id>
<updated>2022-01-15T03:58:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Efficient Algorithms and Systems for Tiny Deep Learning
Lin, Ji
Tiny machine learning on IoT devices based on microcontroller units (MCUs) enables various real-world applications (e.g., keyword spotting, anomaly detection). However, deploying deep learning models to MCUs is challenging due to the limited memory size: the memory of microcontrollers is 2-3 orders of magnitude smaller even than that of mobile phones. In this thesis, we study efficient algorithms and systems for tiny-scale deep learning. We propose MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers. TinyNAS adopts a two-stage neural architecture search approach that first optimizes the search space to fit the resource constraints, then specializes the network architecture in the optimized search space. TinyNAS can automatically handle diverse constraints (i.e. device, latency, energy, memory) under low search costs. TinyNAS is co-designed with TinyEngine, a memory-efficient inference library to expand the search space and fit a larger model. TinyEngine adapts the memory scheduling according to the overall network topology rather than layer-wise optimization, reducing the memory usage by 3.4×, and accelerating the inference by 1.7-3.3× compared to TF-Lite Micro and CMSIS-NN. For vision applications on MCUs, we found that existing convolutional neural network (CNN) designs have an imbalanced peak memory distribution: the first several layers have much higher peak memory usage than the rest of the network. Based on this observation, we further extend the framework to support patch-based inference to break the memory bottleneck of the initial stage. MCUNet is the first to achieve &gt;70% ImageNet top-1 accuracy on an off-the-shelf commercial microcontroller, using 3.5× less SRAM and 5.7× less Flash compared to quantized MobileNetV2 and ResNet-18. 
On visual and audio wake-word tasks, MCUNet achieves state-of-the-art accuracy and runs 2.4-3.4× faster than MobileNetV2 and ProxylessNAS-based solutions with 3.7-4.1× smaller peak SRAM. Our study suggests that the era of always-on tiny machine learning on IoT devices has arrived.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-resolved linear and non-linear rheology of thixotropic and aging complex fluids: Application to particulate and biopolymeric physical gels</title>
<link href="https://hdl.handle.net/1721.1/139169" rel="alternate"/>
<author>
<name>John Rathinaraj, Joshua David</name>
</author>
<id>https://hdl.handle.net/1721.1/139169</id>
<updated>2022-01-15T03:58:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Time-resolved linear and non-linear rheology of thixotropic and aging complex fluids: Application to particulate and biopolymeric physical gels
John Rathinaraj, Joshua David
Temporal changes in microstructure and relaxation dynamics are ubiquitously observed in materials such as hydrogels, food products and drilling fluids. These materials are in general known as mutating materials, and the build-up or breakdown of microstructure is commonly both time– and shear-rate (or shear-stress)–dependent, resulting in a range of complex phenomena collected under the term thixotropy. It is becoming increasingly important to develop time-resolved rheometric techniques to quantify the behavior of mutating materials accurately.&#13;
&#13;
In the present study we first discuss the introduction of better time-resolved techniques in superposition rheometry. Conventional superposition rheometry consists of combining Small Amplitude Oscillatory Shear (SAOS) with a steady unidirectional shear rate to gain insight into the shear-induced changes to the viscoelastic properties of a complex fluid. Orthogonal superposition (OSP), in which the two modes of deformation are perpendicular, has been preferred over parallel superposition to avoid non-linear cross-coupling of the steady shear and oscillatory deformation fields. This cross coupling can lead to unphysical sign changes in the measured material properties, and makes it difficult to interpret the flow-induced mechanical properties. Recently, orthogonal superposition has been used to investigate the shear-induced anisotropy taking place in colloidal gels by comparing the transient evolution of orthogonal moduli with the parallel moduli immediately after cessation of shear. However, probing transient evolution using the OSP technique can be challenging for rapidly mutating complex materials which evolve on time scales comparable to the time scale of the experiment. Using a weakly associated alginate gel, we demonstrate the potential of superimposing fast optimally windowed chirp (OWCh) deformations orthogonally to the shear deformation, which substantially reduces the measurement time. We evaluate the changes in the rate-dependent relaxation spectrum in the direction of applied unidirectional shear rate and in the orthogonal direction, deduced from the damping function and orthogonal moduli data respectively. We measure systematic changes between the two spectra measured in orthogonal directions, thus revealing and quantifying flow-induced anisotropy in the alginate gel.&#13;
&#13;
Secondly, we develop a signal processing technique to monitor accurate temporal evolution of the complex modulus for a specified deformation frequency. Oscillatory rheometric techniques such as Small Amplitude Oscillatory Shear (SAOS) and, more recently, Large Amplitude Oscillatory Shear (LAOS) are now quite widely used for rheological characterization of the viscoelastic properties of complex fluids. However, the conventional application of Fourier transforms for analyzing oscillatory data assumes the signals are time-translation invariant, which constrains the rate of mutation of the material to be extremely small. This constraint makes it difficult to accurately study shear-induced microstructural changes in thixotropic and gelling materials. We explore applications of the Gabor transform (a Short Time Fourier Transform (STFT) combined with a Gaussian window), for providing optimal joint time-frequency resolution of a mutating material’s viscoelastic properties. First, we show using simple analytic models that application of the STFT enables extraction of useful data from the initial transient response following the inception of oscillatory flow. Secondly, using measurements on a Bentonite clay we show that using a Gabor transform enables us to more accurately measure rapid changes in both the storage and loss modulus with time, and also extract a characteristic thixotropic/aging time scale for the material. Finally, we consider extension of the Gabor transform to non-linear oscillatory deformations using an amplitude-modulated input strain signal, in order to track the evolution of the Fourier-Chebyshev coefficients characterizing thixotropic fluids at a specified deformation frequency. We show that there is a trade-off between frequency and time resolution (effectively a rheological uncertainty principle). 
We refer to the resulting test protocol as Gaborheometry and construct an operability diagram in terms of the imposed ramp rate and the mutation time of the material. This unconventional, but easily implemented, rheometric approach facilitates both SAOS and LAOS studies of time-evolving materials, reducing the number of required experiments and the data post-processing time significantly.&#13;
&#13;
Finally, we use the time-resolved techniques developed in this thesis to understand the thixotropic aging behavior of bentonite dispersions. In soft glassy materials such as bentonite clays, the relaxation dynamics and the microstructure slowly but continuously evolve with time to progressively form more stable structures. We investigate and quantify this complex aging behavior of bentonite dispersions by measuring the evolution in the linear viscoelastic behavior at different age times and temperatures. We model the linear viscoelastic properties using a material time domain transformation and a fractional Maxwell gel model, which allows us to develop a rheological master curve to quantify and predict the aging behavior of this soft glass over a range of temperatures and time scales.&#13;
&#13;
The time-resolved rheometric techniques and procedures for quantifying the rheology of rapidly mutating complex fluids can be extended to a wide range of soft materials and allow us to obtain insight into how microstructural changes drive the evolution in the bulk rheological behavior of thixotropic and aging materials.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Untold Narratives: Realizing Personal Design Identities</title>
<link href="https://hdl.handle.net/1721.1/139168" rel="alternate"/>
<author>
<name>Kaadan, Rania</name>
</author>
<id>https://hdl.handle.net/1721.1/139168</id>
<updated>2022-01-15T03:11:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Untold Narratives: Realizing Personal Design Identities
Kaadan, Rania
This thesis introduces alternative possibilities for structuring design education to emancipate designers’ personal creative identities. It was motivated by personal experiences and a series of observations and case studies recorded and conducted at MIT’s graduate and undergraduate architecture and design studios. My study examines a crucial set of dialectics: subjectivity and objectivity, agency and structure, and political and personal narratives.&#13;
&#13;
The hypothesis is that the structures embodying students’ relationships — the self and society, the self and others, and the self and self — are all essential to how design identities develop, yet these relationships often remain unintentionally unrealized due to the inherent challenge of developing personal design intentions. Examination of this hypothesis led me to instrumentalize students’ personal narratives as a design tool to emancipate their agency through worldmaking exercises, thus promoting students’ agency in a process of developing a personal design language, geometries, and visual imagination.&#13;
&#13;
The study herein offers a pedagogical framework — experimental case studies that form part of a larger aspired transformative reform — the first running in tandem with core studios, and the second a workshop that followed. Both case studies utilized introspective and performative design practices to help students harness a personal sense of narrative, methods of representation, design language, and their embodied social and cultural identity. Through this framework, students cultivate their own personal “worlds,” in awareness of the embedded structures. This framework is a step towards a pedagogically transformative and socially solidaristic project of decolonizing personal narratives – a tale of designers’ voice realization.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Analysis and Technology Roadmap for Autonomous Long-Haul Cargo Transport</title>
<link href="https://hdl.handle.net/1721.1/139167" rel="alternate"/>
<author>
<name>Chafekar, Tejas</name>
</author>
<id>https://hdl.handle.net/1721.1/139167</id>
<updated>2022-01-15T03:40:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Systems Analysis and Technology Roadmap for Autonomous Long-Haul Cargo Transport
Chafekar, Tejas
The automotive industry is undergoing radical changes due to increased focus on electrification, automation, and ride sharing. Several OEMs and technology startups are making significant advances in autonomous technologies to enable driverless operations. Long-haul trucking/freight applications are expected to see the deployment of autonomous technologies before consumer cars, given the deterministic operational design domain the trucks operate in. Most of the current R&amp;D on driverless applications is focused on propulsion (i.e., moving the vehicle from one point to another without the assistance of a human driver). To realize truly autonomous long-haul cargo transport, several other ancillary functions outside propulsion would have to be designed to be autonomous. This thesis takes a top-down systems-thinking approach to explore such functions and propose architectures that would enable end-to-end autonomy, together with a roadmap towards achieving this over the next decade.&#13;
&#13;
Use case analysis is performed to understand typical functions carried out during cargo transport. The technology readiness, societal readiness, and perceived return on investment of the required technologies are assessed. These high-level functions are then categorized into a set of architectural decisions, and an architectural space is created from the possible combinations of these decisions. The architectural space is represented as a technology-readiness versus return-on-investment tradespace, and architectural choices on the Pareto frontier are analyzed. A technology roadmap of the necessary technologies is proposed. An analysis of possible off-nominal scenarios is conducted, relative to the ability of the architectures to deal with them.&#13;
&#13;
The main takeaway from this work suggests focusing on truck platooning as a near-term goal towards partial autonomy, which would realize immediate fuel-saving benefits. Real-time weight sensing, additional automation of activities like loading/unloading cargo (for minimizing trip delays and increasing fleet throughput), pre-trip vehicle checks, and automation of fault actions while en route are also achievable within the next decade and would lead to significant cost savings and minimize operational losses for fleets. The analysis also indicates the need for onboard technologies to facilitate interactions with external human agents (a human-machine interface) and increased reliance on faster data connectivity, faster data transfer, and larger data storage. The study of the current state of the art suggests that the challenges in realizing autonomous long-haul cargo transportation lie not only in the maturity of low-TRL technologies, but also in the integration and tuning of existing technological solutions to suit the freight industry. Achieving full autonomy is not possible within the next ten-year timeframe due to the immaturity of technologies required to address certain critical off-nominal scenarios (e.g., a truck getting hijacked or vandalized, malicious actors filling incorrect fuel in the truck, cargo spilling out on the freeway while a driverless truck is en route, etc.) and the associated infrastructural frameworks (laws, insurance, ownership). The study synthesizes these insights and presents the levels of autonomy that would be achievable within the next decade and the technological needs for achieving fully autonomous operations in the longer run.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An EM algorithm for Lidar deconvolution</title>
<link href="https://hdl.handle.net/1721.1/139166" rel="alternate"/>
<author>
<name>Yuan, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/139166</id>
<updated>2022-01-15T03:58:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An EM algorithm for Lidar deconvolution
Yuan, Matthew
Airborne Lidar is a range sensing method which is effective in determining ground terrain from a distance. However, the return signal we observe is a noisy, convolved distortion of the ground return. Deconvolution is one approach to restore the original ground return from the observed return signal. The expectation-maximization (EM) algorithm has been used in signal deconvolution to produce a maximum-likelihood estimate (MLE) for the original signal. We explain the benefits of the EM algorithm over other benchmark algorithms in Lidar deconvolution, then propose a modified EM algorithm with smoothing and denoising parameters to address some issues with the standard EM algorithm. We then derive a quality metric to test the proposed EM algorithm on simulated and actual data and evaluate its performance. Using our quality metric on simulated data, the proposed algorithm recovers 95% of the signal compared to 79% by the benchmark Richardson-Lucy (RL) algorithm, and we show improved image quality and reduced noise in real-life Lidar scenarios.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting incidents, accelerating dataset annotation, and estimating depth with multi-view invariants</title>
<link href="https://hdl.handle.net/1721.1/139162" rel="alternate"/>
<author>
<name>Weber, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/139162</id>
<updated>2022-01-15T03:57:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Detecting incidents, accelerating dataset annotation, and estimating depth with multi-view invariants
Weber, Ethan
Computer vision has seen incredible growth since the introduction of large datasets, deep neural networks, and modern computing resources. Current algorithms can perform scene understanding, or the ability to understand and interpret the world through visual perception (e.g., images or videos). In this thesis, we push the boundaries of current scene understanding algorithms with three distinct projects. (1) In the first project, we address limitations of current algorithms to understand natural disasters, damage, and incidents through images. To do this, we create the Incidents Dataset, train a detection model, and present applications to identify incidents in social media streams to inform emergency responders during disaster relief situations. (2) In the second project, we address the issue of costly dataset construction and present a novel framework that reduces the cost of creating large-scale instance annotation datasets. (3) In the third and final project, we move to 3D scene understanding and present an intuitive technique to train monocular depth estimation networks by enforcing consistency of multi-view geometric invariants between image pairs observing the same scene or objects from the same category.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe Tumbling of Heavy Objects Using a Two-Cable Crane</title>
<link href="https://hdl.handle.net/1721.1/139160" rel="alternate"/>
<author>
<name>O'Neill, Cormac</name>
</author>
<id>https://hdl.handle.net/1721.1/139160</id>
<updated>2022-01-15T03:06:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Safe Tumbling of Heavy Objects Using a Two-Cable Crane
O'Neill, Cormac
In heavy industries, large, heavy objects must be tumbled to access features on their bottoms and sides for assembly and maintenance. Traditional manual operations using a single-cable crane are high-risk, and difficult for less experienced workers. Automating the tumbling process is made challenging due to the presence of kinematic and static singularities which are shown to occur when a single-cable crane loses control over the block being tumbled. Here, an autonomous method for safely tumbling a heavy block sitting on a surface using a two-cable crane is presented. Two winches controlling a pair of cables on a crane are coordinated in such a way that a) the block cannot slip on the floor, b) the block is not lifted into the air, and c) the block is under quasi-static balanced control at all times. A control algorithm for coordinating the two winches is developed for safely tumbling a block without slipping or becoming airborne as well as for eliminating the effect of singularities. A small-scale prototype is developed and the control algorithm is implemented and evaluated experimentally.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring the Existence of Geometric Primitives to Represent Non-Discriminable Data</title>
<link href="https://hdl.handle.net/1721.1/139159" rel="alternate"/>
<author>
<name>Peraire-Bueno, James A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139159</id>
<updated>2022-01-15T03:02:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Inferring the Existence of Geometric Primitives to Represent Non-Discriminable Data
Peraire-Bueno, James A.
In this thesis, we set out to find an algorithm that uses only geometric primitives to represent an input pointcloud. In addition to the problems faced in general primitive fitting, non-discriminable data presents additional data association challenges. We propose to address these challenges by estimating the existence rather than parameters of geometric primitives, and explore various options to do so. We first explore a sampling-based Markov-Chain Monte-Carlo approach together with a ray likelihood model. We then explore a neural network approach and finish by presenting a method to make the Chamfer distance differentiable with respect to primitive existence.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient CNNs and Energy Efficient SRAM Design for ubiquitous medical devices</title>
<link href="https://hdl.handle.net/1721.1/139156" rel="alternate"/>
<author>
<name>Brahma, Kaustav</name>
</author>
<id>https://hdl.handle.net/1721.1/139156</id>
<updated>2022-01-15T04:04:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Efficient CNNs and Energy Efficient SRAM Design for ubiquitous medical devices
Brahma, Kaustav
Intermittent monitoring of urinary bladder volume aids management of common conditions such as post-operative urinary retention. Urinary retention is prevented by catheterization, an invasive procedure that greatly increases the risk of urinary tract infection. Ultrasound imaging has been used to estimate bladder volume as it is portable, non-ionizing, and low-cost. Despite this, ultrasound technology faces fundamental challenges limiting its usability for next-generation wearable technologies. (1) Current systems require skilled manual scanning with attendant measurement variability. (2) Current systems are insufficiently energy-efficient to permit ubiquitous wearable device deployment. We propose to develop an energy-efficient system capable of real-time bladder volume monitoring. This system will incorporate several key innovations, including (1) Convolutional Neural Network (CNN) based segmentation algorithms employed to generate spatiotemporally accurate bladder volume estimates and (2) energy-efficient static random access memory (SRAM) with in-memory dot-product computation for low-power segmentation network implementation. The aim is to develop platform technology embodiments deployable across a wide range of health-monitoring wearable device applications requiring accurate, real-time and autonomous tissue monitoring. We have selected bladder volume as the initial target application for development of these technologies.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Computation of Map-scale Continuous Mutual Information on Chip in Real Time</title>
<link href="https://hdl.handle.net/1721.1/139155" rel="alternate"/>
<author>
<name>Gupta, Keshav</name>
</author>
<id>https://hdl.handle.net/1721.1/139155</id>
<updated>2022-01-15T03:59:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Efficient Computation of Map-scale Continuous Mutual Information on Chip in Real Time
Gupta, Keshav
Exploration tasks are essential to many emerging robotics applications, ranging from search and rescue to space exploration. The planning problem for exploration requires determining the best locations for future measurements that will enhance the fidelity of the map, for example, by reducing its total entropy. A widely-studied technique involves computing the Mutual Information (MI) between the current map and future measurements, and utilizing this MI metric to decide the locations for future measurements. &#13;
&#13;
However, computing MI for reasonably-sized maps is computationally expensive and energy-intensive, often prohibitively so for smaller robots and drones, which has been the bottleneck towards fast and efficient robotic exploration. As a workaround, MI is often only computed for a sparse set of points or computed at a rate slower than the update of the map, techniques which fail to provide theoretical guarantees on the efficiency of exploration. &#13;
&#13;
In this thesis, we introduce a new hardware accelerator architecture for MI computation that features a high-efficiency MI compute core and an optimized memory subsystem that provides sufficient bandwidth to keep the cores fully utilized. The core employs interleaving to counter the recursive algorithm, and workload balancing and numerical approximations to reduce latency and energy consumption. We demonstrate this optimized architecture on a Field-Programmable Gate Array (FPGA) implementation, which can compute MI for all cells in an entire 201-by-201 grid (e.g., representing a 20.1m-by-20.1m map at 0.1m resolution) in 1.55 ms while consuming 1.7 mJ of energy, thus finally rendering MI computation for the whole map real time and at a fraction of the energy cost of traditional compute platforms. For comparison, this particular FPGA implementation running on the Xilinx Zynq-7000 platform is two orders of magnitude faster and consumes three orders of magnitude less energy per MI map compute, when compared to a baseline GPU implementation running on an NVIDIA GeForce GTX 980 platform. The improvements are more pronounced when compared to CPU implementations of equivalent algorithms.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Third Landscape</title>
<link href="https://hdl.handle.net/1721.1/139154" rel="alternate"/>
<author>
<name>Carmeliet, Dries</name>
</author>
<id>https://hdl.handle.net/1721.1/139154</id>
<updated>2022-01-15T03:44:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Third Landscape
Carmeliet, Dries
The global energy sector accounts for over 60 percent of all greenhouse gas emissions. The aim of this research is to investigate the agency of bottom-up approaches and local initiatives to build energy projects with renewable technologies. The greatest challenge these projects face is their enormous spatial requirement. Externalizing these technologies to remote locations, like oceans and deserts where land is easily available, significantly increases the cost of power and the siting difficulties of the accompanying large-scale transmission infrastructures. By internalizing energy production into populated landscapes, renewable technologies are not only more cost-effective, but can become important drivers for local economies as well. To unlock the potential of such bottom-up strategies for energy projects, a radical shift from established practices is needed. At its core, the energy transition requires a cultural change. It demands a process of negotiation between existing natural and human landscapes, and the need for productive energy landscapes. This research is scalar in nature and takes the U.S. Northeast, Massachusetts, and two municipalities in Massachusetts as case studies. The premise of these bottom-up strategies is to move beyond climate action as a moral obligation, and instead search for a new form of climate action that is based on a series of values shared by communities, like the access to clean water, to unpolluted air, to well-paid jobs, and to affordable power. Hence, this transition requires new co-location and co-habitation models. This research therefore calls for a “third landscape” that is neither technological nor natural, but both at the same time. 
To build such a landscape, seven shifts in how energy projects should be approached are proposed: (1) Energy production should shift from sites external to populated areas, to sites within them; (2) Energy planning should shift from top-down, developer-led approaches that maximize profits, to bottom-up approaches of local government-led projects that forward sustainability; (3) Energy land-use should shift from monofunctional zones to multifunctional landscapes; (4) Energy agendas should shift from human agendas that push cheap power, to environmental agendas that prioritize sustainability; (5) Energy design should shift from technological, engineering-optimized methodologies to spatial approaches that include urban and natural agendas; (6) Energy economies should shift from linear ‘take-make-waste’ systems to circular economies that engage local residents in their construction, operation, and value chains; (7) Energy transmission should shift from a loose to a strong regional cohesion, using landscape features for their siting.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Aortic Stenosis Severity using Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/139153" rel="alternate"/>
<author>
<name>Guo, Xiaolu</name>
</author>
<id>https://hdl.handle.net/1721.1/139153</id>
<updated>2022-01-15T03:11:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Predicting Aortic Stenosis Severity using Deep Learning
Guo, Xiaolu
Recent years have seen the rise of AI-based solutions to understanding, predicting, and treating heart disease, the leading cause of death globally. This thesis focuses on aortic stenosis, one of the most common and severe valve diseases. The evaluation of patients with suspected aortic stenosis includes echocardiography, an ultrasound-based procedure that is used to visualize and evaluate the aortic valve. Interpretation of echocardiographic images currently requires expert evaluation by a cardiologist trained in the analysis of cardiac ultrasound. Our hypothesis is that deep learning can be used to learn structures within echocardiographic images, yielding sophisticated tools that can improve our ability to prognosticate patients with aortic stenosis. Our goal is to develop video-based deep learning models that predict the severity of aortic stenosis, which is numerically defined by the mean gradient and the valve area, in a given patient. The results from this thesis will pave the way for understanding more about aortic stenosis and providing better clinical care for patients with this disorder.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rewriting the Rules of a Classifier</title>
<link href="https://hdl.handle.net/1721.1/139151" rel="alternate"/>
<author>
<name>Elango, Mahalaxmi</name>
</author>
<id>https://hdl.handle.net/1721.1/139151</id>
<updated>2022-01-15T03:02:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Rewriting the Rules of a Classifier
Elango, Mahalaxmi
Observations of various deep neural network architectures indicate that deep networks may be spontaneously learning representations of concepts with semantic meaning, and encoding a relational structure or rule between these concepts. We refer to these encoded relationships between concepts in the network as rules. In classifiers, we rewrite an existing rule in the network as desired; we refer to this as the rewriting technique.&#13;
&#13;
We demonstrate that using our rewriting technique and simple human knowledge about how to classify the world around us, we can generalize existing classes to unseen variants, identify spurious correlations present in the dataset, mitigate the effects of spurious correlations, and introduce new classes. We find that our technique reduces the need for: computing resources, because we only re-train a single layer’s weights; new training images, because our rewriting technique can rewrite using concepts already encoded in the network; and domain knowledge, because what we choose to edit to improve classification is derived from logical rules a human would construct to classify images.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding and Estimating the Adaptability of Domain-Invariant Representations</title>
<link href="https://hdl.handle.net/1721.1/139150" rel="alternate"/>
<author>
<name>Chuang, Ching-Yao</name>
</author>
<id>https://hdl.handle.net/1721.1/139150</id>
<updated>2022-01-15T04:05:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Understanding and Estimating the Adaptability of Domain-Invariant Representations
Chuang, Ching-Yao
When the test distribution differs from the training distribution, machine learning models can perform poorly and wrongly overestimate their performance. In this work, we aim to better estimate the model’s performance under distribution shift, without supervision. To do so, we use a set of domain-invariant predictors as a proxy for the unknown, true target labels, where the error of this estimation is bounded by the target risk of the proxy model. Therefore, we study the generalization of domain-invariant representations and show that the complexity of the latent representation has a significant influence on the target risk. Empirically, our estimation approach can self-tune to find the optimal model complexity and the resulting models achieve good target generalization, and estimate target error of other models well. Applications of our results include model selection, deciding early stopping, error detection, and predicting the adaptability of a model between domains.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation on Ultra-miniature and Ultra-low-power Non-invasive CMOS pH Sensor for Intracellular Monitoring</title>
<link href="https://hdl.handle.net/1721.1/139149" rel="alternate"/>
<author>
<name>Zou, Xingyu</name>
</author>
<id>https://hdl.handle.net/1721.1/139149</id>
<updated>2022-01-15T04:04:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Investigation on Ultra-miniature and Ultra-low-power Non-invasive CMOS pH Sensor for Intracellular Monitoring
Zou, Xingyu
The intrinsic multilayer process and the dense transistor capacity of complementary metal oxide semiconductor (CMOS) technology have offered a compact solution for miniaturized non-invasive pH sensors. Existing solutions to intracellular sensing are mainly invasive approaches, which will likely damage the target cell. However, learning what is going on inside the cell while keeping the cell alive and safe is in high demand. Currently, no non-invasive pH sensors have been demonstrated for intracellular activity monitoring, mainly due to the constraints on device size and power consumption. This thesis designs a non-invasive, fully-scalable CMOS pH sensor with a diameter smaller than 30 &#120583;m and a power consumption of 23 nW using TSMC 65 nm CMOS technology, targeting the application of intracellular activity monitoring.&#13;
&#13;
This thesis focuses on the design of a pH sensing pixel and its signal processing unit, which have the potential to work with different communication and powering units. We conduct simulations to show that our sensor node can sense pH information from the environment, and that the sensed data can be encoded and transformed into a digital waveform that carries the pH information. Simulation results also show that our sensor is very promising for intracellular sensing and can help expand our understanding of intracellular activities. Moreover, the design is fully scalable, so it can easily be adopted in a more advanced technology node, potentially enabling extensive biomedical applications.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Grade Prediction for Better Student Support in MIT’s Introductory Programming Course</title>
<link href="https://hdl.handle.net/1721.1/139148" rel="alternate"/>
<author>
<name>Demissew, Alenta</name>
</author>
<id>https://hdl.handle.net/1721.1/139148</id>
<updated>2022-01-15T03:42:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Integrating Grade Prediction for Better Student Support in MIT’s Introductory Programming Course
Demissew, Alenta
We focus on grade prediction in the context of the 6.0001/2 course, utilizing student data – including assignment, assessment, and participation scores/metrics and click data – from past iterations of the course. In doing so, we explore various machine learning algorithms to create expressive, accurate predictive models. We have created and integrated the predictive modeling tool into the current course site, allowing course staff to monitor student grade trajectories while guiding and assisting struggling students. Staff can interface with this tool to see the grade prediction, along with other useful attributes, for any student enrolled in the course. Students, although not directly able to interface with the tool, can be alerted and offered assistance if their grade trajectory is predicted to be failing or below a certain threshold, in order to provide them with the resources necessary to succeed in the course. Finally, we analyze the grade predictions and compare them to the final grade outcomes to find trends and patterns in how students with different trajectories through the semester adjust their behaviors in the course based on the marks they receive.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving inventory management to increase profitability</title>
<link href="https://hdl.handle.net/1721.1/139147" rel="alternate"/>
<author>
<name>Go, Deborah</name>
</author>
<id>https://hdl.handle.net/1721.1/139147</id>
<updated>2022-01-15T04:05:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Improving inventory management to increase profitability
Go, Deborah
ShopSabre, a leading designer and manufacturer of computer numerical control (CNC) routers and plasma tables, is experiencing robust growth, with revenues growing an average of 40% each year since 2015. Due to this growth, ShopSabre maintains an average backlog of 130-140 machines and an average lead time regularly exceeding 12 weeks. With each machine bringing in an average of $40,000 in revenue, half of which is received upon shipment, the backlog can easily account for over $2 million in unreceived revenue.&#13;
&#13;
One cause of the large backlog is a lack of parts and subsequent work stoppages. Apart from a monthly physical inventory, ShopSabre does not have the means to assess inventory on hand at any given time. There are few common systems across sales, purchasing, and production, resulting in occasional miscommunication as well as a lack of data comparing cost of goods sold to revenue on ShopSabre’s built-to-order machines. &#13;
&#13;
This research develops a perpetual inventory system across departments by first gaining a deep understanding of existing inventory and processes, then building bills of materials for all ShopSabre products, and finally integrating existing processes into a common inventory system. The research then takes preliminary data from the system to demonstrate possible applications and improvements.&#13;
&#13;
As of the conclusion of on-site research, ShopSabre is fulfilling machine orders with historically low lead times of four to five weeks for routers and seven weeks for plasmas/23s, marking ShopSabre’s first year with lead times under the typical 10-12 weeks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Surgically-Viable Umbo Microphone For Implantable Assistive Hearing Devices</title>
<link href="https://hdl.handle.net/1721.1/139146" rel="alternate"/>
<author>
<name>Cary, Benjamin G.</name>
</author>
<id>https://hdl.handle.net/1721.1/139146</id>
<updated>2022-01-15T03:56:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of a Surgically-Viable Umbo Microphone For Implantable Assistive Hearing Devices
Cary, Benjamin G.
Assistive hearing devices have enabled the restoration of one of the most important senses. These technologies have changed the lives of millions, but hindrances remain. Today’s hearing devices are bulky, can only be worn during the day, and do not function well in noisy environments. These issues can be addressed by implantable devices. Because they are entirely encapsulated in the body, these implants can take advantage of the natural acoustic filtering of the ear, do not hinder a person’s ability to be physically active, and can be worn at night. However, the microphones within these devices are a limiting factor. &#13;
&#13;
This thesis develops key components for a fully-implantable assistive hearing device. Specifically, a microphone that transduces umbo motion together with a signal conditioning amplifier is presented. The output of the system ultimately drives a cochlear implant. The microphone takes the form of a 3-mm-diameter drum pressed by the umbo in which the drum head is piezoelectric PVDF. The amplifier is a low-input-impedance charge amplifier. &#13;
&#13;
Continuum electromechanical modeling of the microphone, electrical modeling of the charge amplifier, system design based on the models, and microphone fabrication are presented. System demonstrations in both bench-top and cadaveric temporal-bone experiments yield results that match expectations from theory well. The system achieves a bandwidth of 200 Hz to 7 kHz, a sensitivity of 220 dB ref. 1 mV/m of voltage output per meter of umbo displacement and 62 dB ref. 1 mV/Pa of voltage output per Pascal of ear canal pressure at 1 kHz. The signal-to-noise ratio is 30 dB. &#13;
&#13;
The importance of an umbo-induced static offset on the microphone, of mechanical coupling between the umbo and the microphone, and of shielding is demonstrated. The microphone behaves according to plate-bending mechanics, yet it is plate stretching that produces the charge which is amplified.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subway Shuffle, 1 × 1 Rush Hour, and Cooperative Chess Puzzles: Computational Complexity of Puzzles</title>
<link href="https://hdl.handle.net/1721.1/139145" rel="alternate"/>
<author>
<name>Brunner, Josh</name>
</author>
<id>https://hdl.handle.net/1721.1/139145</id>
<updated>2022-01-15T03:33:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Subway Shuffle, 1 × 1 Rush Hour, and Cooperative Chess Puzzles: Computational Complexity of Puzzles
Brunner, Josh
Oriented Subway Shuffle is a game played on a directed graph with colored edges and colored tokens present on some vertices. A move consists of moving a token across an edge of the matching color to an unoccupied vertex and reversing the orientation of that edge. The goal is to move a token across a target edge. We show that it is PSPACE-complete to determine whether a particular target edge can be moved across through a sequence of Oriented Subway Shuffle moves. We show how this can be interpreted in the context of the motion-planning-through-gadgets framework, thus showing PSPACE-completeness of certain motion planning problems. In contrast, we show that polynomial time suffices to determine whether a particular token can ever move.&#13;
&#13;
This hardness result is motivated by three applications of proving other puzzles hard. A fairly straightforward reduction shows that the puzzle game Rush Hour is PSPACE-complete when all of the cars are 1 × 1 and there are fixed immovable cars. We show that two classes of cooperative Chess puzzles, helpmates and retrograde Chess, are also PSPACE-complete by reductions from Oriented Subway Shuffle.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating Your Own Conversational Artificial Intelligence Agents Using Convo, a Conversational Programming System</title>
<link href="https://hdl.handle.net/1721.1/139144" rel="alternate"/>
<author>
<name>Zhu, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/139144</id>
<updated>2022-01-15T03:34:15Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Creating Your Own Conversational Artificial Intelligence Agents Using Convo, a Conversational Programming System
Zhu, Jessica
Smart assistants like Amazon’s Alexa, Apple’s Siri, and Google’s Google Home have become commonplace in many people’s lives, appearing in their phones and homes. Despite their ubiquity, these conversational AI agents still largely remain a mystery to many, in terms of how they work and what they can do.&#13;
&#13;
To lower the barrier to entry to understanding and creating these conversational AI agents for young students, I expanded on Convo, a conversational programming agent that can respond to both voice and text inputs. I created a simple and intuitive user interface for students to input training data, create programs, and test the conversational AI agents they create. To further assist anyone in using Convo, I also produced a couple of video and PDF tutorials that outline how to use it. Additionally, I developed a curriculum to teach students about key concepts in AI, and conversational AI in particular, including the Big 5 AI Ideas and the difference between constrained and unconstrained natural language models.&#13;
&#13;
I ran a 3-day workshop in partnership with MIT’s eSPARK program, with a total of 15 participating middle school students. Through the data collected from the pre- and post-workshop surveys as well as a mid-workshop brainstorming session, I was able to explore how students’ perceptions, understanding, literacy, and visions of conversational AI agents changed. During the workshop, students were able to create their own conversational AI agents. I also found that after the workshop, students tended to think that conversational AI agents were less intelligent than originally perceived, gained confidence in their abilities to build these agents, and learned some key technical concepts about conversational AI as a whole. Based on these results, I am optimistic about Convo’s ability to teach and empower students to develop conversational AI agents in an intuitive way.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents</title>
<link href="https://hdl.handle.net/1721.1/139143" rel="alternate"/>
<author>
<name>Alumootil, Varkey</name>
</author>
<id>https://hdl.handle.net/1721.1/139143</id>
<updated>2022-01-15T04:00:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents
Alumootil, Varkey
Performance of state-of-the-art offline and model-based reinforcement learning (RL) algorithms deteriorates significantly when subjected to severe data scarcity and the presence of heterogeneous agents. In this work, we propose a model-based offline RL method to approach this setting. Using all available data from the various agents, we construct personalized simulators for each individual agent, which are then used to train RL policies. We do so by modeling the transition dynamics of the agents as a low-rank tensor decomposition of latent factors associated with agents, states, and actions. We perform experiments on various benchmark environments and demonstrate improvement over existing offline approaches in the scarce data regime.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods of Aircraft Charge Control During Flight</title>
<link href="https://hdl.handle.net/1721.1/139140" rel="alternate"/>
<author>
<name>Martell, Benjamin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/139140</id>
<updated>2022-01-15T03:41:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Methods of Aircraft Charge Control During Flight
Martell, Benjamin C.
The accumulation of aircraft electrical charge can impact lightning initiation, cause radio interference, or make it difficult for research aircraft to make accurate measurements of atmospheric electric fields. This thesis describes two experimental campaigns that relate to the control of net electrical charge on aircraft and the associated physical phenomena. First, a successful flight demonstration of actively controlled charging in fair weather conditions is described. A corona discharge wire is used for remote charging of a 1.9 m wingspan plane to both positive and negative polarities. By applying a voltage between -13 and +13 kV to the wire, the plane charged to +23 and -30 kV, respectively. The system demonstrates results comparable to previous modeling and wind tunnel experiments, which showed an increase in plane potential and a decrease in corona current as wind speed increased. There are technological limitations to the charge control strategy, such as a saturation voltage due to spurious corona and the delicacy of the corona wire. The second experiment is a study of the behavior of streamer corona discharges in wind. A point-to-plane geometry is used to provide a baseline for comparison to precipitation static (p-static) dischargers, which are in use on most airplanes today. We find that the discharge characteristics are strongly influenced by the wind: the frequency of pulsations and the average current increase with wind speed, but both become less consistent. In general, two types of streamer bursts emerged upon adding wind: at lower wind speeds and higher voltages, the streamers tended to point with the wind; at higher wind speeds and lower voltages, the streamer bursts tended to point against the wind. To the author's knowledge, this phenomenon has not been experimentally observed before.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective Integration of Additive Manufacturing at a Large Manufacturing Company</title>
<link href="https://hdl.handle.net/1721.1/139139" rel="alternate"/>
<author>
<name>Fabian, Andrew S.</name>
</author>
<id>https://hdl.handle.net/1721.1/139139</id>
<updated>2022-01-15T03:10:56Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Effective Integration of Additive Manufacturing at a Large Manufacturing Company
Fabian, Andrew S.
Additive manufacturing (AM) has the potential to create significant value at large manufacturing companies such as Corning, but to date adoption has been slow. Corning invested in AM early, establishing a central AM team in 2006. While the team has developed technical AM expertise and supports numerous projects, a number of organizational and technical challenges limit the more effective integration of AM at Corning. These challenges include a lack of AM knowledge across the company, a lack of a coordinated AM effort, conflicting financial expectations, and technical limitations. Despite these challenges, the Corning AM team has developed successful practices, such as a tailored business model for supporting projects and an internally recognized level of technical expertise, that have led to a number of wins, including repeat customers and direct involvement in a project considering AM for high-volume production. To support more effective integration and adoption of AM, this research proposes a framework for understanding these challenges and five best-practice initiatives to overcome them: spread in-depth AM knowledge through targeted training to better educate stakeholders, provide clear information about AM capabilities and costs, coordinate AM efforts across the company, leverage relationships to overcome organizational inertia, and identify and invest in high-impact technical gaps. To further support AM adoption, this research proposes a roadmap for the qualification of metal Laser Powder Bed Fusion (L-PBF) AM for high-volume production, a pre-identified technical gap. This addresses a need of a high-profile project at Corning whose design requirements make AM a prime production candidate. The roadmap provides a thorough understanding of qualification processes and requirements for AM, allowing for a reduction of technical and financial risk for projects utilizing AM for production.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consistent Depth Estimation in Data-Driven Simulation for Autonomous Driving</title>
<link href="https://hdl.handle.net/1721.1/139136" rel="alternate"/>
<author>
<name>Beveridge, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/139136</id>
<updated>2022-01-15T03:08:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Consistent Depth Estimation in Data-Driven Simulation for Autonomous Driving
Beveridge, Matthew
In this work, we propose consistent depth estimation for viewpoint reconstruction in data-driven simulation, combining aspects of learning-based monocular depth prediction and structure-from-motion to increase temporal video depth accuracy. We demonstrate efficacy in VISTA, an end-to-end autonomous vehicle simulation engine capable of training robust control policies directly applicable to the real world. Taking advantage of geometrically consistent depth map estimations, we see a several-order-of-magnitude improvement in whole-frame depth accuracy, averaged over the course of input traces, compared to VISTA’s current depth method, and a 39% reduction in intra-frame depth variance compared to current state-of-the-art methods (i.e. Monodepth2) while maintaining similar error. Better depth enables more accurate viewpoint reconstruction, thus improving the training of reinforcement learning (RL) control policies in simulation and increasing the practicality of RL-based control. We train several end-to-end policy gradient models in varying versions of VISTA, each utilizing a different depth method, and see that end-to-end models trained in the consistent-depth version of VISTA deviate least from the human-driven center line.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guarda: A web application firewall for WebAuthn transaction authentication</title>
<link href="https://hdl.handle.net/1721.1/139135" rel="alternate"/>
<author>
<name>Barabonkov, Damian</name>
</author>
<id>https://hdl.handle.net/1721.1/139135</id>
<updated>2022-01-15T03:06:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Guarda: A web application firewall for WebAuthn transaction authentication
Barabonkov, Damian
Transaction authentication is an attractive extension to two-factor authentication. It is proposed in the WebAuthn standard by the World Wide Web Consortium (W3C) as a mechanism to secure individual “high-risk” operations of a website via a hardware authenticator device. It defends against a stringent threat model in which an adversary can modify or create HTTP requests between the user and the web service. Transaction authentication as defined by WebAuthn is not yet adopted in practice, partially because it requires intrusive web application changes. &#13;
&#13;
This thesis presents Guarda, a firewall for integrating transaction authentication into a new or existing web service with relatively few code changes. The firewall intercepts all HTTP traffic sent to the web service, and based on the configuration, any requests deemed safe are proxied directly to the web service. All other requests are considered high-risk and are held back and validated using transaction authentication. Only if the validation passes are they also permitted to pass through to the web service. &#13;
&#13;
This thesis uses the firewall approach to integrate transaction authentication into three web applications: a blogging site named Conduit, a WordPress admin panel named Calypso, and a self-hosted Git service named Gogs. Compared to directly modifying them to support transaction authentication, the firewall approach is close to 8 times more concise. Under heavy load, there is an associated latency at worst 1.5x slower when using Guarda to secure Gogs versus accessing the web service directly without WebAuthn.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Activity-Scaling SAR with Direct Hybrid Encoding for Signed Expressions for AIoT Applications</title>
<link href="https://hdl.handle.net/1721.1/139133" rel="alternate"/>
<author>
<name>Chen, Ruicong</name>
</author>
<id>https://hdl.handle.net/1721.1/139133</id>
<updated>2022-01-15T03:12:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Activity-Scaling SAR with Direct Hybrid Encoding for Signed Expressions for AIoT Applications
Chen, Ruicong
Designing an AIoT system with low standby power and high efficiency has become increasingly challenging. An AIoT system is an IoT device with artificial intelligence; a typical example is an always-on portable ECG monitoring system with an AI algorithm to detect irregular events. The ADC is the bottleneck of current AIoT systems, as it bridges the gap between the analog world and digital computation. To address this challenge, this thesis presents a SAR ADC with two modes: activity-scaling and direct hybrid encoding for signed expressions.&#13;
&#13;
In the activity-scaling mode, the proposed ADC can finish the conversion in just one cycle in the optimal case, compared with N cycles for a typical SAR. In the direct hybrid encoding for signed expressions (HESE) mode, it directly provides hybrid encoding for signed expressions, which paves the way for highly efficient digital inference. The proposed ADC has two thresholds. The activity-scaling mode starts from an initial guess and takes two steps per cycle to approach the sampled input until overshoot; after that, it performs a ternary search down to the LSB. The direct HESE mode places one of the thresholds at the normal binary conversion threshold and the other for a two-bit look-ahead to produce one-pass encoding. A proof-of-concept SAR ADC has been designed in 65 nm CMOS technology.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subgrouping Ulcerative Colitis Patients using Administrative Claims Data</title>
<link href="https://hdl.handle.net/1721.1/139132" rel="alternate"/>
<author>
<name>Berlin, Heather</name>
</author>
<id>https://hdl.handle.net/1721.1/139132</id>
<updated>2022-01-15T03:13:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Subgrouping Ulcerative Colitis Patients using Administrative Claims Data
Berlin, Heather
Approximately 3 million patients in the US have been diagnosed with Ulcerative Colitis, a chronic inflammatory disease affecting the colon. Uncovering patient subgroups could improve treatment guidelines and help physicians choose an appropriate treatment plan for a patient. Here, we outline a Python implementation to generate a cohort from a dataset in the OMOP Common Data Model (CDM), propose a patient timeline visualization tool, and create and analyze a cohort of Ulcerative Colitis patients using a claims dataset. We extract patient features and use dimensionality reduction techniques along with clustering to identify patient subgroups. We observe four patient subgroups with distinct patient characteristics, most prominently age, insurance type, sex, and type of initial conventional therapy prescription.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Latent Factor Representation</title>
<link href="https://hdl.handle.net/1721.1/139130" rel="alternate"/>
<author>
<name>Yang, Cindy X.</name>
</author>
<id>https://hdl.handle.net/1721.1/139130</id>
<updated>2022-01-15T03:25:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Latent Factor Representation
Yang, Cindy X.
Offline reinforcement learning, where a policy is learned from a fixed dataset of trajectories without further interaction with the environment, is one of the greatest challenges in reinforcement learning. Despite its compelling application to large, real-world datasets, existing RL methods have struggled to perform well in the offline setting. In this thesis, we consider offline RL with heterogeneous agents (i.e. varying state dynamics) under severe data scarcity, where only one historical trajectory per agent is observed. Under these conditions, we find that the performance of state-of-the-art offline and model-based RL methods degrades significantly. To tackle this problem, we present PerSim, a method to learn a personalized simulator for each agent by leveraging historical data across all agents, prior to learning a policy. We achieve this by positing that the transition dynamics across agents are a latent function of latent factors associated with agents, states, and actions. Subsequently, we theoretically establish that this function is well-approximated by a “low-rank” decomposition of separable agent, state, and action latent functions. This representation suggests a simple, regularized neural network architecture to effectively learn the transition dynamics per agent, even with scarce, offline data. In extensive experiments performed on RL methods and popular benchmark environments from OpenAI Gym and Mujoco, we show that PerSim consistently achieves improved performance, as measured by average reward and prediction error.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polyurethane Sealant to Mitigate Crack Effects in Glass-to-Metal Sealed Underwater Connectors</title>
<link href="https://hdl.handle.net/1721.1/139129" rel="alternate"/>
<author>
<name>Rico, Catalina Kim Le</name>
</author>
<id>https://hdl.handle.net/1721.1/139129</id>
<updated>2022-01-15T03:38:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Polyurethane Sealant to Mitigate Crack Effects in Glass-to-Metal Sealed Underwater Connectors
Rico, Catalina Kim Le
Glass-insulated underwater connectors allow for deep-sea exploration. However, the long-term reliability of the insulation is reduced by surface cracks formed during manufacturing. In this thesis, we propose and test a sealant to overcome limitations associated with internal cracks. The need for a sealant was identified following a series of experimental tests simulating environmental, operational, and human-handling conditions. Analysis shows that any undetectable crack growth formed during operational and handling conditions has an insignificant effect on insulation resistance. Seawater intrusion, however, is determined to be the primary failure mode. An extensive cleaning procedure allows for partial recovery from salt damage, although the process is unsuitable for field maintenance. We show promising test results in easing cleaning requirements for a recoverable connector by applying a polyurethane sealant to the connector face.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mini-Portable Rheometer: A device for the on-site rheological characterization of viscoelastic fluids</title>
<link href="https://hdl.handle.net/1721.1/139126" rel="alternate"/>
<author>
<name>Bustos, Nicole Alejandra</name>
</author>
<id>https://hdl.handle.net/1721.1/139126</id>
<updated>2022-01-15T03:02:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Mini-Portable Rheometer: A device for the on-site rheological characterization of viscoelastic fluids
Bustos, Nicole Alejandra
The on-site rheological characterization of complex fluids is important for a number of industrial, medical, and academic applications. Typically, laboratory-based rheometers are used to characterize rheological properties of fluids, which include elasticity, relaxation time, and shear viscosity. However, this can be challenging, as some samples, in particular biological fluids, may degrade over time and therefore do not retain their natural properties after collection and transport. In preparation for a human subject study, we investigated protocols to collect and preserve mucosalivary samples collected in a clinical location. Preliminary investigation and previous literature showed that mucosalivary fluid degrades with time as a result of protease and enzymatic activity on the mucin polymers that contribute to the fluid's bulk rheological properties. We therefore found it necessary to develop a portable, economical rheometer that can provide rapid results in the field. First, we investigated the sensitivity of capillary breakup measurements, the method chosen for measuring elasticity, to initial stretch parameters using the commercial Capillary Breakup Extensional Rheometer (CaBER) with analog polymer solutions. This aided in determining the appropriate stretching parameters for characterizing biological fluids such as mucosalivary fluid without the effect of degradation, and additionally offered the ability to tune the rheological properties of the fluids. Our study further highlighted the need to measure elastic properties directly on site. We built a portable device for measuring elasticity using two modes: 1) direct imaging of the fluid capillary breakup and 2) an integrated electrical circuit to measure breakup time. The results showed that our portable device had performance comparable to the laboratory rheometer, CaBER.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust and Lightweight Localization and Dense Mapping for Multi-Robot Systems</title>
<link href="https://hdl.handle.net/1721.1/139125" rel="alternate"/>
<author>
<name>Chang, Yun</name>
</author>
<id>https://hdl.handle.net/1721.1/139125</id>
<updated>2022-01-15T03:05:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Robust and Lightweight Localization and Dense Mapping for Multi-Robot Systems
Chang, Yun
In this thesis, we propose novel approaches to robust multi-robot Simultaneous Localization and Mapping (SLAM). We explore multi-robot SLAM in both the centralized and distributed setting, with point-cloud or mesh-based representations to create a lightweight but dense map of the environment from vision or lidar data.&#13;
&#13;
We present and discuss four different approaches to multi-robot SLAM. The first approach, named Large-scale Autonomous Mapping and Positioning (LAMP), is a centralized lidar-based approach that creates a dense point-cloud map of the environment and uses incremental Pairwise Consistency Maximization (PCM) to reject outliers in the loop closure measurements. The second approach, named meshLAMP, is an extension of LAMP that uses Graduated Non-Convexity to reject outliers and creates a lightweight mesh map of the environment. To address the limitations in communication range and scalability of the centralized approaches, we also present two distributed approaches. The first distributed approach, named Distributed, Online, and Outlier Resilient SLAM (DOOR-SLAM), is a vision-based distributed approach to multi-robot SLAM that extends PCM to reject outliers without relying on centralized computation. The last approach, named Kimera-Multi, is a vision-based distributed approach that uses PCM for outlier rejection, and extends the lightweight mesh-based mapping module in meshLAMP to operate in a distributed fashion and generate a semantic mesh map of the environment. We demonstrate the four approaches in a variety of conditions, from indoor environments to photo-realistic simulators, to underground spaces in the context of the DARPA Subterranean Challenge, and show that they are able to perform reliably in the field. We conclude by commenting on the advantages and possible improvements for each approach.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Analytics During Black Swan Events&#13;
A Case Study of the Covid-19 Global Pandemic</title>
<link href="https://hdl.handle.net/1721.1/139124" rel="alternate"/>
<author>
<name>Kaminski, Erez</name>
</author>
<id>https://hdl.handle.net/1721.1/139124</id>
<updated>2022-01-15T03:20:34Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Limits of Analytics During Black Swan Events&#13;
A Case Study of the Covid-19 Global Pandemic
Kaminski, Erez
During statistically unlikely events (Black Swan events), analytical models fail to provide their expected level of fidelity: their error can increase by several hundred percent. The modern economy is built on such analytical models, which are intended to provide useful results during routine conditions. All models eventually fail to provide the expected level of fidelity under extreme conditions. This thesis investigates the critical limitations of analytical methods during Black Swan events. Specifically, we study the space of possible model errors for statistical forecasting models and their respective implications for supply chain systems. We explore the forecast errors through numerical simulation and a real-world case study of a global manufacturing company experiencing the Covid-19 pandemic, a Black Swan event. We demonstrate that in some cases demand can shift by over 60%, leading to the bifurcation of the forecast error space, and resulting in a 500% increase in forecast error. This new regime causes supply chain planning systems to grind to a halt as existing inventory models become irrelevant. Such a drastic change in a company’s operational environment requires urgent action to ensure continued operations. For management to make correct decisions, it is critical for them to understand the limits of analytics during Black Swan events.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation and Implementation of Augmented Reality for Aerospace Operations and Sustainment</title>
<link href="https://hdl.handle.net/1721.1/139123" rel="alternate"/>
<author>
<name>Auffinger, Caitlin Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/139123</id>
<updated>2022-01-15T04:05:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Evaluation and Implementation of Augmented Reality for Aerospace Operations and Sustainment
Auffinger, Caitlin Elizabeth
Industry is eager to adopt new technology and realize its predicted benefits, but it is difficult to justify a risky investment in an unproven technology. In the context of the aerospace and defense industry, new technology must meet stringent security standards in addition to being compatible with legacy systems. This thesis defines a collaborative framework for successful augmented reality technology development and implementation, including a process to identify a use case, define requirements, and evaluate existing commercial off-the-shelf solutions. The thesis application case study is motivated to support strategic development at Raytheon Technologies – Raytheon Missiles &amp; Defense. The objectives include proposals for technology down selection and development processes to enable augmented reality capabilities for operations and sustainment of fielded products and to leverage those capabilities for additional applications.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Adsorption Systems for Automotive Climate Control</title>
<link href="https://hdl.handle.net/1721.1/139121" rel="alternate"/>
<author>
<name>Jacobucci, Cody L.</name>
</author>
<id>https://hdl.handle.net/1721.1/139121</id>
<updated>2022-01-15T03:16:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Adsorption Systems for Automotive Climate Control
Jacobucci, Cody L.
Adsorption systems have a wide range of applications that sit at the forefront of challenges presented by climate change, spanning direct air carbon capture, to atmospheric water harvesting, to thermal energy storage.  Decades of research and development have led to the optimization of adsorption materials to customize them for a specific application by tailoring their affinity for particular molecules, among other properties. The rapid discovery and development of new metal organic frameworks (MOF), a class of materials that prove to be more customizable than more traditional adsorbents such as zeolites and silica gel, offer promise for sorption systems to be further enhanced.  However, the practical design of these systems for climate control and atmospheric water harvesting has yet to be perfected.&#13;
&#13;
Adsorption systems need to balance many factors to be successful for a given objective, weighing the kinetics of a given process against the device mass and volume for a variety of operational conditions.  This thesis aims to elucidate design principles and optimization guidelines to facilitate the design and analysis of future sorption systems that are general enough to grow with the field as material and manufacturing capabilities expand.&#13;
&#13;
In this work, we describe the theoretical model used to design and optimize a waste heat driven air conditioning system for an internal combustion engine vehicle.  The proposed device has a tube and fin architecture, where each fin has copper foam brazed to it to serve as a porous, conductive scaffold for the deposition of AQSOA Z02.  The device will use the waste heat from the engine coolant at 90 ℃ for desorption, and produce 1.5 kW cooling power over a 400 second cycle. The proposed design met all of the specifications proposed by Ford for automotive air conditioning systems, marking a significant milestone for the deployment of adsorption-based cooling for portable cooling applications. The design optimization process is repeated to produce a 1:10 scale prototype delivering an average cooling power of 150 W, which is currently under fabrication.&#13;
&#13;
In order to validate our model, we conducted a series of adsorbent coating characterizations in a custom adsorption bed simulator.  We found good agreement with the model for traditional immersion drying fabrication techniques. We also propose a new boiling assisted channel templating (BACT) method to facilitate better vapor transport through the coatings to increase the potential cooling power via enhanced adsorption kinetics, reduce material waste, and decrease required fabrication time.  This resulted in a specific cooling power of 1875 W/kg Z02 for a 120 second cycle, a record high number for Z02 under these operating conditions.  Preliminary analysis suggests it would enable a system-level specific cooling power of 375 W/kg of the entire adsorbent bed, compared to our previously proposed design with a specific cooling power of 200 W/kg.&#13;
&#13;
In the final chapter, we will review the lessons learned from this work and describe the next steps that we think are essential in translating adsorption technology out of the lab and into real devices for adsorption driven cooling.   We also provide recommendations as to how this framework can easily be applied to other adsorption systems, such as atmospheric water harvesting, and next steps for enhancing the performance of BACT samples.&#13;
&#13;
Thesis Supervisor: Evelyn N. Wang&#13;
&#13;
Title: Department Head; Gail E. Kendall Professor of Mechanical Engineering
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equality of opportunity in travel behavior prediction with deep neural networks and discrete choice models</title>
<link href="https://hdl.handle.net/1721.1/139120" rel="alternate"/>
<author>
<name>Zheng, Yunhan</name>
</author>
<id>https://hdl.handle.net/1721.1/139120</id>
<updated>2022-01-15T03:59:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Equality of opportunity in travel behavior prediction with deep neural networks and discrete choice models
Zheng, Yunhan
Although researchers increasingly adopt machine learning to model travel behavior, they predominantly focus on prediction accuracy, while largely ignoring the ethical challenges and the adverse social impacts embedded in the machine learning algorithms. This study introduces the important missing dimension - computational fairness - to travel behavioral analysis. It highlights the accuracy-fairness tradeoff instead of the single-dimensional focus on prediction accuracy in the contexts of deep neural network (DNN) and discrete choice models (DCM). The author first operationalizes computational fairness by equality of opportunity, then differentiates between the bias inherent in data and the bias introduced by modeling. The models inheriting the inherent biases can risk perpetuating the existing inequality in the data structure, and the biases in modeling can further exacerbate it. The author then demonstrates the prediction disparities in travel behavioral modeling using the National Household Travel Survey 2017. Empirically, DNN and DCM reveal consistent prediction disparities across multiple social groups, although DNN can outperform DCM in prediction disparities because of DNN’s smaller misspecification error. To mitigate prediction disparities, this study introduces an absolute correlation regularization method, which is evaluated with the synthetic and the real-world data. The results demonstrate the prevalence of prediction disparity in travel behavior modeling, which can exacerbate social inequity if the prediction results without fairness adjustment are used for transportation policy making. As such, the author advocates for careful considerations of the fairness problem in travel behavior modeling, and the use of bias mitigation algorithms for fair transport decisions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>‘Autogestión’: Community-led Squatting as a Means of Transformative Revitalization of Abandoned Spaces in Puerto Rico</title>
<link href="https://hdl.handle.net/1721.1/139119" rel="alternate"/>
<author>
<name>Zayas del Rio, Gabriela B.</name>
</author>
<id>https://hdl.handle.net/1721.1/139119</id>
<updated>2022-01-15T03:26:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">‘Autogestión’: Community-led Squatting as a Means of Transformative Revitalization of Abandoned Spaces in Puerto Rico
Zayas del Rio, Gabriela B.
This thesis documents and reflects on the emergence of a new form of squatting led by collectives in urban communities across Puerto Rico. This new form uses squatting not merely as a means of survival but more substantively as a planning practice. This thesis relies on a mix of ethnographic methods (semi-structured interviews, participant observation, and policy analysis) to understand the context in which these collectives emerge, how one collective is doing such work, and what recommendations could transform planning practice to better support this work. The first part of this thesis sets the context and argues that these collectives emerge to respond to a polycrisis that has produced abandoned spaces resulting from economic development and planning approaches (incentive-based models and imported development models such as suburbanization and urban renewal) that are extractive, colonizing, and devoid of Puerto Rican voices.&#13;
&#13;
The thesis then follows a case study of one collective, Urbe Apié, in Caguas, Puerto Rico. Urbe Apié is a horizontal and decentralized organization that uses a planning area, not just a building, to turn squatting into a comprehensive form of community-led revitalization. Its approach to spatial planning is flexible, embracing uses that shift with time and with changing community needs and aspirations. Most importantly, it repurposes abandoned spaces to materialize collective ownership of the decision-making process of city-making, ultimately subverting orthodox notions of private property rights and top-down planning. Its planning approaches have proven to be better equipped for planning in crisis, as they foment deep democracy and multiple sovereignties that empower communities to exercise their self-determination and ensure their permanency in urban centers despite the polycrisis and an increasingly absent government.&#13;
&#13;
The thesis concludes with a set of recommendations for planning to adopt more liberatory practices that support and legitimize these collectives’ work. These include examples on how to decolonize planning tools, such as eminent domain and for-profit revitalization, to remove legal barriers and provide formal avenues for collectives to do this work, which is finally reflecting Puerto Rican voices and not outside and fleeting interests.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Material Continuum Topology Optimization for Embodied Carbon Objectives</title>
<link href="https://hdl.handle.net/1721.1/139118" rel="alternate"/>
<author>
<name>Holley, Claire Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/139118</id>
<updated>2022-01-15T03:25:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multi-Material Continuum Topology Optimization for Embodied Carbon Objectives
Holley, Claire Elizabeth
Recent years have seen an increase in research and practical efforts that seek to lower the carbon footprint of infrastructure and building design. Typically, carbon emissions from the built environment are divided into two categories: operational emissions and embodied carbon. Over the past decades, most work has focused on lowering the operational carbon, so now attention has turned to lowering the embodied carbon, which constitutes a significant proportion of the carbon emissions over the lifecycle of a building. Within structural design of buildings and infrastructure, topology optimization is an emerging technology, often seeking to make structures more materially efficient. It therefore offers a means to reduce the structural weight, and as such, minimize the global warming potential (GWP). This research provides an exploration of bi-material optimization problems that minimize GWP as well as compliance for a series of representative models. Two materials are considered: a stiff, high embodied carbon coefficient (ECC) material, such as steel, and a less stiff, lower-ECC material, such as timber or concrete.&#13;
&#13;
This work presents multi-material topology optimization frameworks that lower the embodied carbon for continuum design. For both cases of compliance and GWP minimization, an additional set of design variables is used to control the material selection. The framework uses a density-based approach to topology optimization and existing multi-material formulations. For the design, the Solid Isotropic Material with Penalization (SIMP) method is used to penalize intermediate material choices and fmincon is taken as the gradient-based optimizer. The frameworks are demonstrated on several benchmark examples and compared between the two optimization problems. In both cases, the stiffer material was generally placed near supports and where loading is applied. The results show not only optimization through material selection, but topology optimization in shape and size.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph-Theoretic Outlier Rejection: From Instance&#13;
to Category-Level Perception</title>
<link href="https://hdl.handle.net/1721.1/139117" rel="alternate"/>
<author>
<name>Shi, Jingnan</name>
</author>
<id>https://hdl.handle.net/1721.1/139117</id>
<updated>2022-01-15T03:28:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Graph-Theoretic Outlier Rejection: From Instance&#13;
to Category-Level Perception
Shi, Jingnan
In this thesis, we study the problem of outlier pruning for robust estimation. Robust estimation is the workhorse for many perception problems, from object pose estimation to robot localization and mapping. In these problems, the robot has to estimate quantities of interest in the face of outliers. Such outliers can be the result of incorrect data association, and it is not unusual to have problems where more than 90% of the input measurements are outliers.&#13;
&#13;
Our first contribution is ROBIN (Reject Outliers Based on INvariants), a graph-theoretic approach that employs invariance to find mutually compatible measurements and prune outliers. ROBIN captures the mutual compatibility information by modeling measurements as vertices and mutual compatibility as edges in a compatibility graph. We generalize existing results showing that the inliers form a clique in this graph and typically belong to the maximum clique. We also provide a general definition of invariance for noisy measurements. We test ROBIN in various instance-level perception problems such as single rotation averaging and 3D point cloud registration. ROBIN boosts the robustness of existing solvers (making them robust to more than 95% outliers), while running in milliseconds on large problems.&#13;
&#13;
With ROBIN developed, we then consider a category-level perception problem, where one is given 3D sensor data picturing an object of a given category (e.g., a car), and has to reconstruct the pose and shape of the object despite intra-class variability (i.e., different car models have different shapes). To solve this problem, we develop the first certifiably optimal solver for pose and shape estimation. We demonstrate that ROBIN can also be applied in this scenario, using compatibility checks based on convex hulls. We evaluate our approach through extensive experiments on both simulated and real datasets (PASCAL3D+ and ApolloScape), demonstrating that the resulting approach improves over the state of the art.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superfuture: How global superminds can use immersive experiences to build a positive future</title>
<link href="https://hdl.handle.net/1721.1/139116" rel="alternate"/>
<author>
<name>Bonime, Western</name>
</author>
<id>https://hdl.handle.net/1721.1/139116</id>
<updated>2022-01-15T03:47:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Superfuture: How global superminds can use immersive experiences to build a positive future
Bonime, Western
In the face of the ever increasing crisis due to global warming and the ensuing risk and uncertainty, there is an urgent need for organizations to develop resilience and adaptability.  In this paper, I propose ways that immersive experiences can provide new pathways for change.  Through examination of 16 case examples of resilient organizations utilizing immersive experiences, I present insights and suggestions for how other organizations can use some of the same methods to: 1) use collective intelligence and knowledge sharing 2) support a culture of creative thinking  3) design future risk response strategies through visioning and foresight strategy  4) build business strategies that include well-being of their employees, stakeholders and the planet. Close examination of these case examples demonstrates that these four areas are a large part of their success.  Context for the insights and suggestions in this research is provided through a brief exploration of challenges to change, components of resilience, and the benefits of collective intelligence, creativity, vision, and holistic decision making.  These organizations are proving their success and adaptability, using a broad range of immersive experiences including: extended reality (XR), virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), games, 360 cameras, and location based experiences (LBE) in interesting and unique ways.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stabilizing demonstration trajectories of linear deformable objects for robotic shoe tying</title>
<link href="https://hdl.handle.net/1721.1/139115" rel="alternate"/>
<author>
<name>Tan, Michelle</name>
</author>
<id>https://hdl.handle.net/1721.1/139115</id>
<updated>2022-01-15T03:28:35Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Stabilizing demonstration trajectories of linear deformable objects for robotic shoe tying
Tan, Michelle
Tying a shoelace knot is commonly seen as a milestone for young children as they learn how to use their hands to execute complex motions. Humans are able to consistently tie a shoelace knot on a wide variety of shoes, even if they haven’t seen the shoe before. In comparison, shoe tying is a problem that the robotics community is still very far from solving, since it breaks many assumptions of existing algorithms in robotic manipulation. Some key difficulties of getting a robot to consistently tie any shoe include the complex dynamics, the deformable nature of the shoelaces, the long time horizon, the dexterity needed to manipulate flexible objects, and the large variation between different shoes.&#13;
&#13;
In this thesis, we make progress towards shoe tying by making a robot that is able to improve on a given demonstration by making it more robust to initial conditions of a shoe in simulation. This is motivated by the fact that when humans learn to tie a shoe for the first time, they are carefully taught a procedure for making the knot, including how to hold the shoelaces and how to be able to tell if a shoelace knot is good or bad. The impressive part is that they can quickly adapt this procedure to any shoe. In this thesis, I discuss the following three contributions towards refining a demonstration to work for any shoe. The first contribution is the development of an open-sourced configurable shoe simulator environment that allowed us to tie shoes completely in simulation. The second contribution is a formulation and evaluation of direct policy search via CMA-ES with the goal of optimizing a given shoe tying policy for robustness. The third contribution is a formulation and analysis for learning the dynamics and cost on a latent state and an evaluation of the learned model for control. We found that CMA-ES and learning latent approximate information states were both successful techniques. Both were able to stabilize a demonstration for robustness on initial conditions of the shoelaces &gt;95/100 of the time.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Driven Surrogate Models for Faster SPICE Simulation of Power Supply Circuits</title>
<link href="https://hdl.handle.net/1721.1/139114" rel="alternate"/>
<author>
<name>Smith, Tanya N.</name>
</author>
<id>https://hdl.handle.net/1721.1/139114</id>
<updated>2022-01-15T03:08:35Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Data Driven Surrogate Models for Faster SPICE Simulation of Power Supply Circuits
Smith, Tanya N.
The stiffness of power supply circuits with large power distribution networks makes simulation through the industry standard Simulation Program with IC Emphasis (SPICE) often non-convergent or prohibitively expensive. The existing solution of piecewise linear (PWL) simulation addresses these issues with reasonable accuracy, but lacks practicality. This research implements a system to train surrogate models for an n-type MOSFET that can replace the nMOS device in SPICE simulation to improve performance while maintaining accuracy, regardless of the larger circuit context. We explore a variety of surrogate modeling and adaptive sampling techniques on low-dimensional functions, showing that adaptive sampling improves surrogate prediction accuracy compared to space-filling sampling. We implement a deep feed-forward artificial neural network (ANN) surrogate for the MOSFET, but the current implementation fails to achieve sufficient accuracy to be useful in SPICE simulation. Future work might explore hyperparameter search and alternative neural architectures or adaptive sampling approaches to improve accuracy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Attentional Modulation for Zero-shot Learning in Object Recognition</title>
<link href="https://hdl.handle.net/1721.1/139113" rel="alternate"/>
<author>
<name>Singh, Aaditya</name>
</author>
<id>https://hdl.handle.net/1721.1/139113</id>
<updated>2022-01-15T03:11:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Deep Attentional Modulation for Zero-shot Learning in Object Recognition
Singh, Aaditya
In the human brain, top-down attention plays a crucial role in the human ability to recognize seemingly infinite visual concepts using the same visual pathway. Even more impressive, humans have the ability to recognize objects from just a description (zero-shot) or a few examples (few-shot). Traditionally, artificial neural networks have struggled at reproducing this ability, with large performance drops in the zero- and few-shot domains caused by overfitting. Most methods focus on learning a good, fixed feature extractor, then tying those features to new classes using linear transformations, which are less prone to overfitting on few examples. On the opposite side of this spectrum of simpler models are meta-learning techniques that finetune whole feature extractors to fit the few examples. While both of these methods have shown reasonable success, we believe that a middle ground, taking into account inductive biases inspired by biological attention, can lead to improved performance. In this work, we study the use of top-down attentional modulation, already shown to be useful in visual question answering, in the domain of zero- and few-shot object recognition. We find that deep modulation can be critical in distinguishing unseen classes from previously seen classes in the zero-shot setting, and also provides gains in distinguishing between unseen classes in the few-shot domain. We hope that the insights brought to light in this work can contribute to the growing need for computer vision systems that generalize to novel concepts and new environments.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph factorization and pseudofactorization with applications to hypercube embeddings</title>
<link href="https://hdl.handle.net/1721.1/139112" rel="alternate"/>
<author>
<name>Sheridan, Kristin</name>
</author>
<id>https://hdl.handle.net/1721.1/139112</id>
<updated>2022-01-15T03:40:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Graph factorization and pseudofactorization with applications to hypercube embeddings
Sheridan, Kristin
In many contexts, it is useful to determine if a particular distance metric can be broken down into other metrics about which more is known. In particular, if a metric can be embedded into a hypercube, the plethora of preexisting knowledge about the structure of a hypercube can provide knowledge about the structure in question. In this paper, we examine the concepts of graph factorization and pseudofactorization, in which a graph is broken up into smaller graphs whose Cartesian product it is isomorphic to or is an isometric subgraph of, respectively. We show that the same or slightly modified versions of the techniques used for this process in the context of unweighted graphs also work for weighted graphs. While it is NP-hard to decide if a general distance metric is hypercube embeddable, we also discuss how these results expand the number of known types of graphs and distance metrics for which this problem is polynomial time decidable. We also discuss why this kind of decomposition of graphs and distance metrics may be of interest in a variety of fields.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive History Support for the Exploratory Design of Data Visualizations</title>
<link href="https://hdl.handle.net/1721.1/139110" rel="alternate"/>
<author>
<name>Sefah, Ebenezer</name>
</author>
<id>https://hdl.handle.net/1721.1/139110</id>
<updated>2022-01-15T03:24:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Interactive History Support for the Exploratory Design of Data Visualizations
Sefah, Ebenezer
Research into the design process when building data visualizations has yielded many tools that support linear versioning techniques, such as annotation, bookmarking, and interactive history tracking. However, visualization design processes are usually non-linear, and support for exploratory techniques such as branching to build on past iterations and merging operations is underexplored. In this thesis, I investigate how to adapt these related techniques to support the exploratory design phase when designing and building data visualizations. I also introduce novel approaches for merging on a subcomponent level that allow designers to merge axes, marks, or legend properties individually. This is motivated by the need to cherry-pick different aspects or components of visualizations created at different points of the exploration process to create new ones. At the end, I present a preliminary implementation to support and enhance the exploratory design of data visualizations using Lyra 2, a visual design environment. I also present an evaluation, in the form of supported use cases, which assesses the impact of these features on the exploratory design phase when designing and building data visualizations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Large Format Additive Manufacturing in Aerospace Tooling via Process Integration and Finite Element Analysis of Print Performance</title>
<link href="https://hdl.handle.net/1721.1/139109" rel="alternate"/>
<author>
<name>Cotter, Philip D.</name>
</author>
<id>https://hdl.handle.net/1721.1/139109</id>
<updated>2022-01-15T03:23:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Implementing Large Format Additive Manufacturing in Aerospace Tooling via Process Integration and Finite Element Analysis of Print Performance
Cotter, Philip D.
Ascent Aerospace (Ascent) designs and manufactures a diverse array of customized aerospace tooling, creating a low-volume/high-mix production environment where precision is critical. As a result, Additive Manufacturing (AM)—and, more specifically, Large Format Additive Manufacturing (LFAM)—stands to provide Ascent a significant competitive advantage by reducing lead time, cutting costs, and enabling the rapid production of novel tooling solutions. This project explores the integration of the Large Scale Additive Manufacturing Machine (LSAM) into Ascent’s production processes with the goal of maximizing the technology’s value impact. To this end, it focuses on two components: understanding, controlling, and planning the production of LSAM-printed tools, and simulating the behavior of LSAM-printed tools to better predict their performance.&#13;
&#13;
First, a framework for the operational integration of the LSAM is developed. Comparison of traditional (current state) and LSAM-specific (future state) process maps provides a means to identify and address probable bottlenecks. Next, a test plan is described enabling a clear understanding of the LSAM’s capabilities and limitations. From these findings, Design for LSAM (DfLSAM) Guidelines and various other continuous improvement initiatives are motivated and codified.&#13;
&#13;
Next, this project develops an approach to the Finite Element Analysis (FEA) of LSAM-printed objects. Current design principles are largely rooted in empirically calibrated processes which require extensive trial and error. Due to the size of LSAM prints, this approach can be expensive and unscalable. The FEA approach presented herein begins with characterization of material properties of a common carbon fiber reinforced ABS feedstock. Based on these inputs, various modeling approaches are explored for this anisotropic, composite material. Model outputs are then validated against the results of physical experiments. An orthotropic solid modeling approach is shown to compare best with physical reality, suggesting a promising direction for further development.&#13;
&#13;
Organizational impacts and change management are considered throughout this document. Future directions of both the integration and modeling work are also discussed. These findings, abstracted from Ascent, comprise a framework for the implementation of LFAM in manufacturing operations.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis of Deoxysugars through Manganese-promoted Redox Isomerization</title>
<link href="https://hdl.handle.net/1721.1/139107" rel="alternate"/>
<author>
<name>Suh, Carolyn E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139107</id>
<updated>2022-01-15T03:56:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Synthesis of Deoxysugars through Manganese-promoted Redox Isomerization
Suh, Carolyn E.
Deoxysugars feature prominently in many bioactive natural products and pharmaceutical compounds. Many synthetic routes towards deoxysugars rely on protecting groups to achieve selective outcomes. Here we report a concise synthetic strategy to access a diverse set of 2- and 4-deoxysugars using a Mn-promoted redox isomerization step that avoids lengthy protecting group manipulations. We determine the resting state of the manganese catalyst to be Mn(II). We demonstrate subsequent derivatizations with the ketone moiety to access branched sugars and amino sugars as well, showcasing the versatility and utility of this method.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hierarchical Algorithm for Probabilistically Complete Path Planning in Multi-Floor Environments</title>
<link href="https://hdl.handle.net/1721.1/139106" rel="alternate"/>
<author>
<name>Curtis, Shiloh</name>
</author>
<id>https://hdl.handle.net/1721.1/139106</id>
<updated>2022-01-15T03:02:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Hierarchical Algorithm for Probabilistically Complete Path Planning in Multi-Floor Environments
Curtis, Shiloh
Navigation in multi-floor, multi-building environments is increasingly important in robotics. For wheeled robots, these environments can be conveniently modeled as a set of 2D maps, representing floors, connected by “wormholes”, which represent elevators and other between-floor connections. The full topological structure of the space can thus be described as a weighted graph. However, existing planning algorithms for multi-floor environments modeled in this way do not extend the guarantees on completeness and optimality provided by the underlying motion planning algorithms used within the 2D maps. &#13;
&#13;
This work proposes a new algorithm, HRG*, for probabilistically complete and asymptotically optimal multi-floor path planning that carries these guarantees, together with a reference implementation whose performance is characterized in comparison to the native version.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>StarLogo Nova Dashboard for Teachers</title>
<link href="https://hdl.handle.net/1721.1/139105" rel="alternate"/>
<author>
<name>Bowen, Kalyn</name>
</author>
<id>https://hdl.handle.net/1721.1/139105</id>
<updated>2022-01-15T03:24:07Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">StarLogo Nova Dashboard for Teachers
Bowen, Kalyn
As remote and online education became more prevalent with the COVID-19 pandemic, the importance and influence of online learning resources gained more traction. The Scheller Teacher Education Program at MIT focuses on educational technologies that support learning, with one of the projects being StarLogo Nova, an agent-based game and simulation programming environment [1]. While there are workshops that help teachers learn to integrate StarLogo into their classrooms, the tool’s support for teaching can be improved, especially in classroom orchestration and assessing student performance in simulation-based learning. In this thesis, we design and implement a StarLogo teacher dashboard in order to provide better support for educators new to using simulation-based learning in their classrooms.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous Sensing and Navigation in Challenging Environments Using Unmanned Air Vehicles in Single- and Multi-Agent Settings</title>
<link href="https://hdl.handle.net/1721.1/139104" rel="alternate"/>
<author>
<name>Torgesen, Andrew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139104</id>
<updated>2022-01-15T03:38:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Autonomous Sensing and Navigation in Challenging Environments Using Unmanned Air Vehicles in Single- and Multi-Agent Settings
Torgesen, Andrew J.
Small unmanned air systems (UAS), due to their navigational versatility and ability to operate autonomously, serve as an intriguing platform on which to carry out advanced sensing operations in otherwise untraversable or prohibitively dangerous environments. The need for UAS to be able to autonomously navigate and explore their environments with limited payload, communication, and computational capacity, however, poses its own challenges, particularly when subjected to the non-ideal environmental disturbances and feature spaces present in real-world scenarios. This thesis addresses these issues by presenting two complementary projects enabling UAS-based autonomous sensing in real-world environments using relatively low-cost and lightweight hardware. The first project presents a UAS capable of measuring air wakes while flying tethered behind a moving vessel. The unique challenges of tethered flight control and relative state estimation in a feature-starved environment are addressed with a novel planning and control architecture together with an error-state Kalman filter that achieves centimeter-level relative position accuracy. The second project presents a multi-agent UAS navigation system for GPS-denied environments that expands on the state of the art in collaborative simultaneous localization and mapping (CSLAM) for the purpose of facilitating fast and accurate radiation mapping in contaminated and cluttered zones. CSLAM capabilities are made more robust to communication deficiencies through the novel incorporation of ultra-wideband range sensors into a distributed range-enhanced pose graph optimization (DRPGO) scheme. The experimental demonstrations of the two presented systems, considered in tandem to overcome hurdles to sensing from aerodynamic disturbances, feature-starved environments, and communication bandwidth limitations, strengthen the promise of small UAS as an effective tool for demanding real-world data collection applications.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved Runtimes and Lower Bounds for Dual-Edge Failure Replacement Path Algorithms</title>
<link href="https://hdl.handle.net/1721.1/139098" rel="alternate"/>
<author>
<name>Woldeghebriel, Eyob W.</name>
</author>
<id>https://hdl.handle.net/1721.1/139098</id>
<updated>2022-01-15T03:55:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Improved Runtimes and Lower Bounds for Dual-Edge Failure Replacement Path Algorithms
Woldeghebriel, Eyob W.
Given a graph G and a fixed pair of nodes s and t, the Replacement Paths problem is to compute the new shortest distance from s to t when there are edge failures in G (i.e. those edges can no longer be used for any path). While there has been extensive research into the single-failure Replacement Paths problem, less progress has been made on multiple-failure algorithms. This thesis provides a new algorithm for the two-failure variant of the Replacement Paths problem, and shows a new combinatorial lower bound for the runtime of k-failure Replacement Paths for any positive integer k.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining the Optimal Amount of Computation Pushdown to Minimize Runtime for a Cloud Database</title>
<link href="https://hdl.handle.net/1721.1/139097" rel="alternate"/>
<author>
<name>Woicik, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/139097</id>
<updated>2022-01-15T03:28:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Determining the Optimal Amount of Computation Pushdown to Minimize Runtime for a Cloud Database
Woicik, Matthew
Many cloud databases separate their compute from their storage resources. This design introduces a network bottleneck during query execution that can be mitigated through caching and computation pushdown. Depending on the environmental settings and the specific query, the amount of computation pushdown needed to achieve the optimal runtime may vary. This work presents a runtime prediction model that determines the amount of computation pushdown that results in the fastest runtime and analyzes a real-world implementation of this model on the FlexPushdownDB system running in AWS.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New Models And Algorithms For Distribution Testing: Beyond Standard Sampling</title>
<link href="https://hdl.handle.net/1721.1/139095" rel="alternate"/>
<author>
<name>Narayanan, Shyam</name>
</author>
<id>https://hdl.handle.net/1721.1/139095</id>
<updated>2022-01-15T04:05:15Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">New Models And Algorithms For Distribution Testing: Beyond Standard Sampling
Narayanan, Shyam
Distribution testing is a crucial area at the interface of statistics and algorithms, where one wishes to learn properties of datasets from a small number of samples. Classic distribution testing problems occur in many applications, including biology, genomics, computer systems, and linguistics. In this thesis, we study distribution testing under two models: the Conditional Sampling Model and the Learning-Based Frequency Model. In the traditional distribution testing framework, one is only allowed random samples from the data, but these two models allow for more powerful queries (as we will further describe). We improve query/sample complexity bounds for classic distribution testing problems in these models.&#13;
&#13;
In the conditional sampling model, one is allowed more powerful queries where each query specifies a subset &#119878; of the domain, and the output received is a sample drawn from the distribution conditioned on being in &#119878;. In this model, we first prove that tolerant uniformity testing can be solved using Õ(&#120576;⁻²) queries, which is optimal and improves upon the Õ(&#120576;⁻²⁰)-query algorithm of Canonne et al. [18]. This bound even holds under a restricted version of the conditional sampling model called the Pair Conditional Sampling model. Next, we prove that tolerant identity testing in the conditional sampling model can be solved in Õ(&#120576;⁻⁴) queries, which is the first known bound independent of the support size of the distribution for this problem. Next, we use our algorithm for tolerant uniformity testing to get an Õ(&#120576;⁻⁴)-query algorithm for monotonicity testing in the conditional sampling model, improving on the Õ(&#120576;⁻²²)-query algorithm of Canonne [14]. Finally, we study (non-tolerant) identity testing under the pair conditional sampling model, and provide a tight bound of Θ̃(√(log &#119873;) · &#120576;⁻²) for the query complexity, where the domain of the distribution has size &#119873;. This improves upon both the known upper and lower bounds in [18].&#13;
&#13;
We next consider the problem of estimating the number of distinct elements (also known as support size estimation) in a large data set from a random sample of its elements. This problem has been especially well-studied, with a partial bibliography (available at https://courses.cit.cornell.edu/jab18/bibliography.html) from 2007 containing over 900 references, both theoretical and applied, relating to this problem alone! A line of research spanning the last decade resulted in algorithms that estimate the support up to ±&#120576;&#119873; from a sample of size &#119874;(log²(1/&#120576;) · &#119873;/log &#119873;) [61], where &#119873; is the data set size. Unfortunately, this bound is known to be tight, limiting further improvements to the complexity of this problem. To overcome this issue, we introduce the Learning-Based Frequency Model, where we consider estimation algorithms augmented with a machine-learning-based predictor that, given any element, returns an estimation of its frequency. We show that if the predictor is correct up to a constant approximation factor, then the sample complexity can be reduced significantly, to log(1/&#120576;) · &#119873;^(1−Θ(1/log(1/&#120576;))). In addition, we evaluate the proposed algorithms on a collection of data sets, using the neural-network based estimators from Hsu et al. [35] as predictors. Our experiments demonstrate substantial (up to 3x) improvements in the estimation accuracy compared to state-of-the-art algorithms.&#13;
&#13;
This thesis combines two papers:&#13;
&#13;
• Shyam Narayanan. On Tolerant Distribution Testing in the Conditional Sampling Model. In Proceedings of the 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2021.&#13;
&#13;
• Talya Eden, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, and Tal Wagner. Learning-based Support Estimation in Sublinear Time. In Proceedings of the 9th Annual International Conference on Learning Representations (ICLR), 2021 (Spotlight Presentation).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating launch vehicle trajectories and atmospheric emissions</title>
<link href="https://hdl.handle.net/1721.1/139094" rel="alternate"/>
<author>
<name>Pradon, Cassandre Victoria Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/139094</id>
<updated>2022-01-15T03:03:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Estimating launch vehicle trajectories and atmospheric emissions
Pradon, Cassandre Victoria Marie
Launch vehicles enable Earth observation, navigation, and space exploration. In doing so, they cause direct anthropogenic emissions in the troposphere and above, which comes with an environmental cost. They can emit carbon dioxide (CO2), water vapor, chlorine, aluminum oxide, black carbon, and nitrogen oxides that lead to atmospheric changes such as ozone depletion. Historically, because launches were tied to national security concerns, rockets have not been subject to environmental regulation. The few studies conducted in the 1990s, together with the decreasing number of launches at the time, helped rockets avoid any policy regulation. However, the development of commercial launches and the evolution of engine designs have created a need to reassess the impact of rocket launches. My work presents an inventory of stoichiometric emissions of rocket launches between the years 2009 and 2018. I first compile all publicly available data on launches between 2009 and 2018, then design a program to simulate the trajectory of any launch vehicle below an altitude of 100 km. This model gives a profile of fuel burn and stoichiometric emissions as a function of altitude for many launch vehicles. The exhaust products of interest are CO2, water vapor, chlorine, and aluminum oxide. Between 2009 and 2018, 140.5 kt of CO2 were emitted into the atmosphere, while 78.9 kt of water vapor, 5 kt of chlorine, and 7.8 kt of alumina were emitted above the tropopause. The increase in the number of launches made CO2 emissions grow by 73% between 2009 and 2018, while water and chlorine emissions decreased by 25% and 58% since 2009 because of the retirement of the Space Shuttle. The rise in kerosene-fueled rocket launches is making CO2 emissions increase faster than the number of launches, and suggests that water vapor emissions are going to increase again. Launch vehicles are compared, revealing that trade-offs are necessary to minimize the different emissions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond the "Black Box": Enabling Meaningful Transparency of Algorithmic Decision-Making Systems through Public Registers</title>
<link href="https://hdl.handle.net/1721.1/139092" rel="alternate"/>
<author>
<name>Murad, Maya</name>
</author>
<id>https://hdl.handle.net/1721.1/139092</id>
<updated>2022-01-15T03:03:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Beyond the "Black Box": Enabling Meaningful Transparency of Algorithmic Decision-Making Systems through Public Registers
Murad, Maya
Deployments of algorithmic decision-making systems (ADMs) by the public sector have been plagued with opacity. There is a baseline lack of visibility of the context and purpose of the ADM system as well as its potential risks to individuals and collective goods. In many cases, citizens are unaware of the very existence of algorithmic systems that they interact with or that help decide their access to benefits or influence policing. Moreover, disclosures concerning algorithmic systems often take place when their shortcomings (potential harms) are inadvertently exposed, often through the work of public interest groups.&#13;
&#13;
Given the increasing adoption of algorithmic systems to automate decisions and services in the public sector, there is a need to operationalize transparency requirements to enable better accountability. While algorithmic transparency can take on many forms, this thesis mainly focuses on the role of public ADM registers in enabling meaningful transparency to the public. In the past year, at least five local governments have launched their very first ADM registers. Drawing upon these early experiences and relevant stakeholder interviews, and specifically considering Amsterdam as a case study, we attempt to formalize the concept of a register as both a standardized and interpretable ADM disclosure mechanism and a governance framework that enables coordination among a number of stakeholders to provide transparency to the public. We also propose models through which public interest groups and civilians can be engaged in the creation, development, and launch of public ADM systems through the governance of a register, and outline key benefits and limitations of such models.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Resources for Debugging Education using Block-based Languages</title>
<link href="https://hdl.handle.net/1721.1/139091" rel="alternate"/>
<author>
<name>Wang, Brandon L.</name>
</author>
<id>https://hdl.handle.net/1721.1/139091</id>
<updated>2022-01-15T03:13:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Developing Resources for Debugging Education using Block-based Languages
Wang, Brandon L.
Computer science education and programming are increasingly making their way into K-12 curricula. The ability to correct errors (“bugs”) is fundamental to learning how to program. Early experiences with debugging can be critically important in setting up new programmers for long-term success. Meanwhile, many beginners first experience programming through block-based programming languages. There has been much research into block-based programming languages and in debugging education, but less focus on the intersection of these two topics. More broadly, there is also a need for accessible and adaptable resources that assist beginners in learning debugging. In this thesis, we report on our development of tools and curricula that support beginners in developing their debugging skills. Our materials assume that students use the block-based programming language Scratch.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Powered Ankle Exoskeleton on Human Stability and Balance</title>
<link href="https://hdl.handle.net/1721.1/139090" rel="alternate"/>
<author>
<name>Gonzalez, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/139090</id>
<updated>2022-01-15T03:27:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Assessment of Powered Ankle Exoskeleton on Human Stability and Balance
Gonzalez, Sarah
While wearable robotic systems such as exoskeletons are designed to assist in human motion, they are often only studied in situations of level walking. In order to expand the types of actions for which exoskeletons can assist, exoskeletons must be tested in a variety of situations to understand how users will respond to these systems. This thesis examines how a lower-limb exoskeleton (Dephy ExoBoot), in both actuated and unactuated states, affects balance and stability when performing a balancing task on a beam. Data were collected via inertial measurement units and analyzed on a pooled level (with data from all subjects) and on an individual level. It was found that the exoskeleton in both states affects stride stability metrics (e.g., stride length, stride duration, and stride speed). Despite the changes in stride stability, overall balance (as measured by torso sway) remains unaffected by either exoskeleton state when considering the pooled subject data. This result indicates that the ExoBoot can be used in balance tasks without compromising the balance of the user. On an individual level, it was found that not all subjects followed these general trends, as each person moves in a unique manner. It was also found that subjects who experienced the unpowered exoskeleton prior to the powered exoskeleton state developed stride strategies on the balance beam that were initially more conservative. Our findings suggest that lower-limb exoskeletons can be used for balancing tasks, and we recommend that balancing tasks be included in the standards for exoskeleton evaluation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>TaskLight: A Groupware System to Facilitate Requesting and Managing Help in Teams</title>
<link href="https://hdl.handle.net/1721.1/139089" rel="alternate"/>
<author>
<name>Vishwabhan, Stuti</name>
</author>
<id>https://hdl.handle.net/1721.1/139089</id>
<updated>2022-01-15T03:27:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">TaskLight: A Groupware System to Facilitate Requesting and Managing Help in Teams
Vishwabhan, Stuti
People working in collaborative settings often have to request help from colleagues because of those colleagues’ expertise or authority. Previous research in request management found that a performer (a person who is asked to conduct a task) would benefit from metadata about the request. In this work, we leverage requesters’ motivation and goodwill toward their performers to construct informative requests; in turn, performers are able to finish tasks better and more efficiently. &#13;
&#13;
We propose a collaborative request management tool called TaskLight, where requesters can provide and curate contextual information for performers. Performers are able to prioritize and manage requests based on the information provided and also easily collect additional information through nuanced discussion and negotiation around requests, if necessary. The design of TaskLight is inspired by preliminary interviews with individuals in collaborative settings, conducted to understand their current practices of requesting help and using tools. We further investigate how different models of requests can assist performers’ attention management via a field study. We demonstrate the diverse use cases of TaskLight through implementations of requests such as collaborative writing, getting approvals, and making group decisions. We derive insights via a user study of our first deployed version of TaskLight and use them as a stepping stone for future directions of this work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Domain-Specific Accelerator for Graph Pattern Mining</title>
<link href="https://hdl.handle.net/1721.1/139088" rel="alternate"/>
<author>
<name>Huang, Tianhao (Data scientist)</name>
</author>
<id>https://hdl.handle.net/1721.1/139088</id>
<updated>2026-01-05T12:54:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing a Domain-Specific Accelerator for Graph Pattern Mining
Huang, Tianhao (Data scientist)
Graph pattern mining (GPM) is used in a variety of domains such as bioinformatics, e-commerce, and the social sciences. GPM is a computationally intensive problem with an enormous amount of coarse-grain parallelism, and is therefore attractive for hardware acceleration. Unfortunately, existing GPM accelerators have not used the best known algorithms and optimizations, and thus offer questionable benefits over software implementations. We propose a software/hardware co-designed GPM accelerator that improves efficiency without compromising the generality or productivity of state-of-the-art software GPM frameworks. It exploits the massive amount of coarse-grain parallelism in GPM with a large number of cheap, specialized processing elements. For efficient searches, the system adopts pattern-specific execution plans, which are generated automatically by a compiler from the given pattern(s). To avoid repetitive connectivity computation, an on-chip scratchpad is employed to memoize reusable intermediate results in the form of a connectivity map, which enables fast vertex connectivity lookups. The proposed accelerator is implemented in a cycle-accurate simulator for performance evaluation. Key hardware modules are synthesized for an estimate of area costs. The results show that, with a core area similar to that of one modern CPU core, our design can outperform general-purpose systems by an order of magnitude.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of UHPC Columns for Stress-Strain Behaviour, Economic and Environmental Feasibility</title>
<link href="https://hdl.handle.net/1721.1/139087" rel="alternate"/>
<author>
<name>VOO, Brandon Tsun Leong</name>
</author>
<id>https://hdl.handle.net/1721.1/139087</id>
<updated>2022-01-15T03:46:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Investigation of UHPC Columns for Stress-Strain Behaviour, Economic and Environmental Feasibility
VOO, Brandon Tsun Leong
Ultra High Performance Concrete (UHPC), also known as Reactive Powder Concrete (RPC), was developed in France circa 1995 (Richard &amp; Cheyrezy, 1995). The most notable characteristics of RPC are its mechanical properties, which are an “ultra high” compressive strength (&#119891;′&#119888; ≥ 150 &#119872;&#119875;&#119886;), a high flexural strength (modulus of rupture) (&#119891;′&#119888;&#119891; ≥ 30 &#119872;&#119875;&#119886;), and a high Young’s (elastic) modulus (&#119864; ≥ 50 &#119866;&#119875;&#119886;). &#13;
&#13;
There is a noticeable void in the research on UHPC: its utilization in axial members (columns). Hence, a logical progression of research is to consider the utilization of UHPC in columns, which this study intends to explore. &#13;
&#13;
Material models (Gilbert &amp; Gowripalan, 2000) are utilized to numerically analyze UHPC column sections to determine their structural performance, specifically the moment capacity and ductility of the section. &#13;
&#13;
In this study, the viability of UHPC replacing High Strength Concrete (HSC) in concrete columns is explored. Parametric studies are conducted to enable a better understanding of the structural performance of UHPC columns, where both normal steel rebars and prestressing strands are considered as steel reinforcement. Environmental and economic feasibility analyses are performed comparing the environmental and economic viability of a Normal Strength Concrete (NSC) column to a structurally equivalent UHPC column. &#13;
&#13;
This study has found that UHPC is viable for adoption by industry for use in columns when only structural performance and economy are considered (viable in high-value property with non-proprietary UHPC (Graybeal, 2013)). However, more research needs to be done on replacing component materials in UHPC design mixes to lower the embodied carbon content of UHPC columns, which is higher than that of NSC columns.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learned scheduling for database management systems</title>
<link href="https://hdl.handle.net/1721.1/139086" rel="alternate"/>
<author>
<name>Ukyab, Tenzin Samten</name>
</author>
<id>https://hdl.handle.net/1721.1/139086</id>
<updated>2022-01-15T03:40:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Learned scheduling for database management systems
Ukyab, Tenzin Samten
Parallel database management systems need efficient job scheduling. Current systems use simple heuristics that ignore the characteristics of database workloads. We therefore created an effective scheduler that uses machine learning techniques, such as reinforcement learning and neural networks, and does not require human intervention beyond an objective, such as reducing average job completion time. We use existing training techniques for job schedulers with dependency constraints. However, the model is specialized for database workloads using features specific to database queries, such as node operator type. In addition, we represent pipelining scheduling opportunities between operator tasks. With further training time, our learned scheduler will be able to improve average job completion time in comparison to heuristic schedulers, such as FIFO and fair scheduling.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>End-User Customization by Direct Manipulation of Tabular Data</title>
<link href="https://hdl.handle.net/1721.1/139085" rel="alternate"/>
<author>
<name>Litt, Geoffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/139085</id>
<updated>2022-01-15T03:51:40Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">End-User Customization by Direct Manipulation of Tabular Data
Litt, Geoffrey
Customizing software should be as easy as using it. Unfortunately, most customization methods require users to abruptly shift from using a graphical interface to writing scripts in a programming language.&#13;
&#13;
We introduce data-driven customization, a new way for end users to extend software by direct manipulation without doing traditional programming. We augment existing user interfaces with a table view showing the structured data inside the application. When users edit the table, their changes are reflected in the original UI. This simple model accommodates a spreadsheet formula language and custom data-editing widgets, providing enough power to implement a variety of useful extensions.&#13;
&#13;
We illustrate the approach with Wildcard, a browser extension that implements data-driven customization on the web using web scraping. Through concrete examples, we show that this paradigm can support useful extensions to many real websites, and we share reflections from our experiences using the tool.&#13;
&#13;
Finally, we share our broader vision for data-driven customization: a future where end users have more access to the data inside their applications, and can more flexibly repurpose that data as part of everyday software usage.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oystamaran: An Implementation of Autonomy in Surface Vehicles for Oyster Farming</title>
<link href="https://hdl.handle.net/1721.1/139084" rel="alternate"/>
<author>
<name>Tung, Matthew Chhamnan</name>
</author>
<id>https://hdl.handle.net/1721.1/139084</id>
<updated>2022-01-15T03:05:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Oystamaran: An Implementation of Autonomy in Surface Vehicles for Oyster Farming
Tung, Matthew Chhamnan
With the ever-growing aquafarming industry and its rising demand, many farms must find ways to optimize their processes. For Ward Aquafarms in Cape Cod, one particular bottleneck is the flipping of oyster bags. The workers must regularly flip the bags filled with oysters and other products in order to encourage the flow of oxygen. With hundreds of bags, this is a strenuous and time-consuming process that is currently done by hand. In this thesis, we present the design of the software and hardware components of a robust, cost-effective, automated solution based on an autonomous surface vehicle. This system is currently being designed and implemented for testing in cooperation with Ward Aquafarms and MIT SeaGrant.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relaxation of Dense Suspension</title>
<link href="https://hdl.handle.net/1721.1/139082" rel="alternate"/>
<author>
<name>Griese, Andrew Herman</name>
</author>
<id>https://hdl.handle.net/1721.1/139082</id>
<updated>2022-01-15T03:27:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Relaxation of Dense Suspension
Griese, Andrew Herman
Dense suspensions exhibit complex rheological behavior, behaving liquid-like at low shear stresses and solid-like at high shear stresses. The microscopic interactions between individual particles create non-Newtonian macroscopic behaviors by transitioning from hydrodynamic interactions to frictional contacts. As the particles are forced into frictional contact, the suspension’s viscosity increases discontinuously with respect to the shear rate, leading to solid-like characteristics. While much research has gone into how a suspension enters this solid-like state, little is known about how the suspension relaxes out of this stressed rheological state. To understand the relaxation behavior and its underlying physical mechanism, we investigate the relaxation of water-cornstarch mixtures at different cornstarch mass fractions. The relaxation of these cornstarch suspensions is explored by measuring the stress decay upon flow cessation with a rheometer and a texture analyzer, and by capturing the spreading dynamics of suspension drops upon the cessation of vibrations with a permanent magnet shaker. We show that the dense suspensions relax with two distinct timescales, and that both of these timescales are linearly dependent on the suspension viscosity in the stressed state.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Energetic Costs of Photovoltaic Pumping Systems (PVPS) for Sub-Saharan African Smallholder Farms</title>
<link href="https://hdl.handle.net/1721.1/139081" rel="alternate"/>
<author>
<name>Liang, ZhiYi</name>
</author>
<id>https://hdl.handle.net/1721.1/139081</id>
<updated>2023-01-08T15:49:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Quantifying the Energetic Costs of Photovoltaic Pumping Systems (PVPS) for Sub-Saharan African Smallholder Farms
Liang, ZhiYi
With solar panel prices falling in recent years, photovoltaic pumping systems (PVPSs) have become an affordable and effective technology for off-grid smallholder farmers in developing markets like Sub-Saharan Africa. Yet the high upfront cost of PVPSs remains a financial burden for many low-income farming communities. Although numerous efforts have been made to further increase the affordability of PVPSs, there is still a lack of investigation into potential energetic cost savings by improving solar pump efficiency from an architectural design perspective. In this study, a technoeconomic framework was developed to quantify the energetic costs of different solar pump architectures. The energetic cost is defined as the total cost of the solar array, which enables a direct comparison between efficiency and capital cost. New efficiency prediction models were formulated for 4-inch borehole pump hydraulics and submersible motors based on surveyed manufacturer specifications. Two types of case studies on SSA farms were conducted as example analyses in applying the framework. The operating space level analysis provides a bird's-eye view of the energetic cost-savings over the operating space when comparing two solar pump architectures. The operating point level analysis demonstrates a similar energetic cost analysis to identify the most energetically cost-effective solar pump architectures for the operating conditions of a specific SSA farm. By adopting highly efficient BLDC motors in 4-inch solar-powered borehole pumps, energetic cost-savings were found and operating regions not currently served by high-efficiency solar pumps can now be reached. These results highlight economic incentives for manufacturers to provide high-efficiency solar pumps to more smallholder farmers in SSA while reducing the overall upfront cost of PVPSs.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Air Force Crew Scheduling: An Integer Optimization Approach</title>
<link href="https://hdl.handle.net/1721.1/139080" rel="alternate"/>
<author>
<name>Koch, Matthew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139080</id>
<updated>2022-01-15T03:31:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Air Force Crew Scheduling: An Integer Optimization Approach
Koch, Matthew J.
Air Force flight, training, and crew scheduling is a labor-intensive and largely manual process across all flying squadrons. Complex training requirements and dependencies, operational constraints, numerous qualifications, and unforeseen missions confound the schedule development process. We develop multiple optimization formulations for the Air Force crew scheduling problem. Furthermore, we present multiple objective functions aiming at mimicking reality to account for pilot qualification upgrades and their ability to stay current and mission ready. To compare candidate schedules, we identify numerous metrics that show the impact of the different objective functions. Finally, we briefly discuss how to incorporate scheduler preferences and focus on creating human-interpretable schedules so that the scheduler can select the most desired schedule for the squadrons' current needs.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Understanding Human-aligned Neural Representation in the Presence of Confounding Variables</title>
<link href="https://hdl.handle.net/1721.1/139079" rel="alternate"/>
<author>
<name>Simonovikj, Sanja</name>
</author>
<id>https://hdl.handle.net/1721.1/139079</id>
<updated>2022-01-15T03:34:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Towards Understanding Human-aligned Neural Representation in the Presence of Confounding Variables
Simonovikj, Sanja
Deep Neural Networks (DNNs) find one out of many possible solutions to a given task such as classification. This solution is more likely to pick up on spurious features and low-level statistical patterns in the training data rather than semantic features and high-level abstractions, resulting in poor Out-of-Distribution (OOD) performance. In this project we aim to broaden the current knowledge surrounding spurious correlations as they relate to DNNs. We do this by measuring their effect on generalization under various settings, determining the existence of subnetworks in a DNN that capture the core features, and examining potential mitigation strategies. Finally, we discuss alternative approaches that are reserved for future work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Photonic Integrated Circuit Packaging Using Silicon Based Optical Interconnects</title>
<link href="https://hdl.handle.net/1721.1/139078" rel="alternate"/>
<author>
<name>Weninger, Drew Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/139078</id>
<updated>2022-01-15T03:00:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Photonic Integrated Circuit Packaging Using Silicon Based Optical Interconnects
Weninger, Drew Michael
As electrical interconnections reach their physical limit, optoelectronic packaging solutions are a critical bottleneck in the effort to seamlessly integrate silicon photonic devices - a technology capable of handling high data rates for next generation data center telecom applications. In this thesis, novel silicon based optical couplers capable of low loss, robust connections from an optical fiber to a photonic integrated circuit (PIC) and a PIC to another PIC are presented. The first of these, the fiber-to-chip coupler, utilizes a graded index material stack in the vertical direction and a non-adiabatic taper in the horizontal direction to focus light into a single mode waveguide, all while maintaining planarity and a monolithic design. Distinguishing features from prior designs are made in the form of coupler rotation, added structural elements, and removal of all curved facets. Notable advantages include customization of the output PIC waveguide, high intrinsic coupling efficiency, wide alignment tolerances, CMOS compatibility, and scalability to mass manufacturing.&#13;
&#13;
We then describe the adiabatic, inverse Si_xO_yN_z cross tapers, which provide vertical evanescent coupling of light from one PIC to another PIC. A final beam expansion design is presented for chip-to-chip coupling, one capable of maintaining silicon input and output waveguides while increasing tolerances to within the accuracy capabilities of high-speed pick-and-place die bonders. Simulations using 3D finite-difference time-domain (FDTD) and eigenmode expansion (EME) methods were utilized to determine quantitative performance metrics including coupling efficiency, packaging misalignment tolerance, and wavelength and polarization dependence. These coupling designs were also compared to other state-of-the-art fiber-to-chip and chip-to-chip coupling designs to evaluate their performance in the context of their peers.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Control of a Mounted Robotic Arm Tool Changer and Measurement Tools for Agriculture</title>
<link href="https://hdl.handle.net/1721.1/139075" rel="alternate"/>
<author>
<name>Poon, Ryan Joseph Mar</name>
</author>
<id>https://hdl.handle.net/1721.1/139075</id>
<updated>2022-01-15T03:57:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design and Control of a Mounted Robotic Arm Tool Changer and Measurement Tools for Agriculture
Poon, Ryan Joseph Mar
This thesis describes the design and manufacturing of a robotic arm tool changer system for agricultural use, built to be low cost, robust in its simplicity, and powered by actuators with high torque densities. Supporting a universal socket interface, the tool changer can swap between a broad array of instruments created for this project, including a thermal camera, an impedance analyzer, a pH probe, and near-infrared and visible light spectroscopes. Each of these tools is shown to have some application in the field, such as measuring plant health or soil properties. The variation among tools motivated the development of an autotuning library that can rapidly generate stable PID gains for a serial linkage whose dynamics are expected to change with changes in endpoint mass. Combined with a custom-written trajectory optimizer, the process outlined by this PID autotuning library demonstrates a 40% improvement in root-mean-squared tracking error, a 14% improvement in average settling time, and a near 100% improvement in average percent overshoot compared to an untuned system with poorly selected gains.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capillary Effects of Nanoporous Networks on Aerospace Autoclave-grade Prepreg Composites Enabling Vacuum-bag-only Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/139073" rel="alternate"/>
<author>
<name>Hank, Travis J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139073</id>
<updated>2022-01-15T03:51:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Capillary Effects of Nanoporous Networks on Aerospace Autoclave-grade Prepreg Composites Enabling Vacuum-bag-only Manufacturing
Hank, Travis J.
The use of carbon fiber reinforced polymer (CFRP) composite systems has grown in the aerospace industry due to their high mass-specific stiffness and strength and other properties. Yet, many advanced aerospace-grade CFRPs require autoclave pressure vessels to thermally process composites, which has numerous drawbacks such as high capital costs, hours-long heating and cooling cycles, high energy use, and bottlenecks due to fixed size. This work investigates various nanomaterial systems with nanoscale porosity, termed nanoporous network (NPN) materials, which are applied to each ply-ply interface in the composite prepreg laminates to remove voids by encouraging resin infusion through capillary effects. This removes the need for applied autoclave pressure and enables vacuum-bag-only (VBO) curing of autoclave-grade composites using either conductive or convective heating under vacuum. Aligned carbon nanotubes (A-CNTs) have been successfully utilized as textured NPNs with aligned capillaries, but alternative, particularly scaled and lower-cost, NPN materials are of interest. Commercially available electrospun polymer nanofiber (EPN) films with different fiber diameters, film thicknesses, and polymer materials are investigated extensively, and a bespoke commercial polyimide (PI) aerogel is preliminarily investigated. EPN NPN interlayers are able to create void-free laminates, as revealed by micro-computed tomography (µCT) on flat aerospace-grade CFRP laminates using unidirectional epoxy-based prepreg plies. The VBO-cured EPN NPN laminates are also found to have the same interlaminar shear strength as the autoclave-cured baseline laminates. It is found that polymer (polyimide) aerogels with a porosity of 96 vol% are also a viable NPN. L-shape geometries with autoclave-grade laminates are preliminarily examined, with the investigation revealing void elimination, including in the problematic curved section, using EPN NPN interlayers when cured under VBO conditions.
Investigations using interlaminar NPNs to reduce voids in laminates that contain woven-woven and unidirectional-woven prepreg interfaces reveal significant void reductions, but not elimination, utilizing various NPNs, such that continuing challenges exist for full void elimination. Future work includes parametric studies of various NPN materials to further increase the breadth of NPNs available as well as quantifying the precise capillary pressures needed in different prepreg systems to eliminate interlaminar voids.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Individual Components of the SOFA Score using Multi-Task Learning</title>
<link href="https://hdl.handle.net/1721.1/139072" rel="alternate"/>
<author>
<name>Yang, Alexander Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/139072</id>
<updated>2022-01-15T03:32:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Predicting Individual Components of the SOFA Score using Multi-Task Learning
Yang, Alexander Y.
The Sequential Organ Failure Assessment (SOFA) score is a scoring system useful for predicting clinical outcomes in the intensive care unit, such as organ failure and mortality. The availability of increasingly detailed electronic clinical data has allowed for the creation of more powerful models to better predict and improve patient outcomes. However, there is little work in predicting the underlying physiological measurements that define the SOFA score. In this paper, we consider predicting changes to the individual components of the SOFA score. We use multi-task learning frameworks to predict future values for the SOFA score components, with the goal of sharing information across the different tasks to improve overall predictive performance. We use approximately 53,000 days of time-series Electronic Health Record (EHR) data taken from 10,000 ICU stays in the Multiparameter Intelligent Monitoring in Intensive Care IV (MIMIC-IV) dataset. Evaluating on a test holdout set of 10% of our data, we compare performance of our multi-task learning models to individually-trained deep networks that predict each component without parameter sharing. Model performance suggests that there is a small advantage to multi-task learning over unregularized networks, but no advantage compared to networks that employ regularization.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Rehosting and Instrumentation of Embedded Firmware</title>
<link href="https://hdl.handle.net/1721.1/139071" rel="alternate"/>
<author>
<name>Ramseyer, Ryan William</name>
</author>
<id>https://hdl.handle.net/1721.1/139071</id>
<updated>2022-01-15T03:40:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Automated Rehosting and Instrumentation of Embedded Firmware
Ramseyer, Ryan William
Vulnerable embedded systems continue to proliferate as the Internet of Things (IoT) grows. Rehosting enables security analysis of these devices by separating embedded firmware from its host hardware, allowing the firmware to be run and inspected in virtual environments. I present a system to perform automated rehosting and instrumentation of embedded firmware: ARI. ARI improves upon previous methods by performing progressive fidelity assessments and automatically applying various failure-oblivious, network, and filesystem fixes necessary to enable web service operation. On successfully emulated systems, ARI further instruments and tests embedded web servers using the popular dynamic analysis tool, Valgrind. On a corpus of 1709 Linux-based firmware samples, representing 617 unique IoT products, ARI enables successful web service execution on 1017 samples, a 125% improvement over an existing system, Firmadyne. Results are used to inform analysis of rehosting as a technique to improve security assessments of Department of Defense (DoD) embedded systems. Barriers to adoption, including intellectual property and lack of standardization, are outlined and mitigations leveraging existing digital acquisition methods are suggested.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Electromechanical Attachments for Improved Ultrasound Imaging Repeatability</title>
<link href="https://hdl.handle.net/1721.1/139070" rel="alternate"/>
<author>
<name>Koeppen, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/139070</id>
<updated>2022-01-15T03:34:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of Electromechanical Attachments for Improved Ultrasound Imaging Repeatability
Koeppen, Ryan
Ultrasound imaging (or ultrasonography) is a common tool used for medical diagnostics. It has many advantages over other imaging modalities (such as MRI and CT) such as being more portable, less expensive, and lower power. Ultrasound imaging is emerging as a noninvasive diagnostic alternative in many applications that traditionally rely on biopsies.&#13;
&#13;
Ultrasound imaging also has notable limitations, such as being highly operator dependent and having low resolution at large imaging depths. In recent years, several engineering solutions have been designed to overcome these limitations, such as force-coupled ultrasound, external mechanical vibration (EMV) for shear wave elastography (SWE), and volume ultrasound. Each of these technologies also has its limitations and some have not been optimized for clinical settings.&#13;
&#13;
In this work, these technologies are developed further into attachments to allow for easier and simultaneous use in clinical ultrasound settings. A more compact force coupling attachment was designed using a linear DC servomotor and validated with external sensors. An external vibration system for SWE, designed in previous work, was developed to improve resistance to debris and its dynamic performance was experimentally validated. An optical tracking module was incorporated for estimating the probe’s 6 degrees of freedom and its performance was quantified. Electronic hardware and a Robot Operating System (ROS) network were developed to synchronize the three attachments for control through a single, custom MATLAB application. &#13;
&#13;
The ultrasound probe attachments were used in experiments on calibrated phantoms and human subjects. Initial experimental results validated the effectiveness of force coupling on improving imaging variability. The combination of force coupling and optical tracking enabled force-coupled, elastogram volumes to be created in post-processing.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effort-Independent Asthma Severity Classification</title>
<link href="https://hdl.handle.net/1721.1/139069" rel="alternate"/>
<author>
<name>Lynch, James C., III</name>
</author>
<id>https://hdl.handle.net/1721.1/139069</id>
<updated>2022-01-15T03:37:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Effort-Independent Asthma Severity Classification
Lynch, James C., III
Asthma is an obstructive pulmonary disorder. It impacts the lives of over 24 million individuals in the United States alone, a large segment of which are children. We propose to investigate capnography as a viable diagnostic modality to guide the treatment of asthma as an alternative to the gold standard, spirometry. Capnography shows promise in the detection of similar pulmonary disorders, and would serve as a noninvasive and effort-independent tool, providing critical information to clinicians when patients are unable or unwilling to comply with spirometry testing. In this work, we demonstrate the viability of using features extracted from time-based capnography to determine underlying patient symptom severity, using logistic regression classification models. Applications in both controlled, pulmonary function laboratories and emergency department triage conditions are explored. We show that for an adult population undergoing methacholine challenge pulmonary function testing, capnography recordings from subjects with asthmatic exacerbation may be distinguished from their normal/baseline recordings with an AUROC of 0.92 (0.84 -- 1.00). Additionally, using data from an acute pediatric setting we show that recordings from subjects with severe asthmatic exacerbation may be distinguished from subjects with mild or moderate asthma symptoms with an AUROC of 0.86 (0.72 -- 1.00).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patterns of Optimal Structural Layouts</title>
<link href="https://hdl.handle.net/1721.1/139068" rel="alternate"/>
<author>
<name>Prendergast, Stephen</name>
</author>
<id>https://hdl.handle.net/1721.1/139068</id>
<updated>2022-01-15T04:06:17Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Patterns of Optimal Structural Layouts
Prendergast, Stephen
Flexural systems are ubiquitous in structural design and constitute a significant portion of the structural material in many infrastructural constructions. The study of optimization of flexural systems is of great interest to structural designers and architects seeking to reduce embodied carbon. A significant reduction of material of a given flexural system may be achieved by optimal layout of flexural elements in a two-dimensional space. Analytical solutions for optimal structural layouts of plates and beam systems comprised of one-dimensional elements were derived by Rozvany et al. in the 1970s. This thesis explores optimal structural layouts in the context of conventional and other nonconventional beam layouts (e.g. derived from principal stresses) and proposes interpretations of patterns. A variety of beam layouts for a variety of boundary and support conditions are compared for optimality, including square domains with both clamped and simple corner supports, and other regular polygonal domains (triangular, hexagonal) with clamped point supports. A method for constructing optimal beam layouts for regular polygonal column grids using transformations of the Delaunay mesh and Voronoi diagram is presented, and certain Euclidean tilings are presented as optimal beam layouts.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for Human Design: Developing Next Generation Sketch-Based Tools</title>
<link href="https://hdl.handle.net/1721.1/139067" rel="alternate"/>
<author>
<name>Ong Wen Xi, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/139067</id>
<updated>2022-01-15T04:00:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Machine Learning for Human Design: Developing Next Generation Sketch-Based Tools
Ong Wen Xi, Bryan
Formal computational approaches in the realm of engineering and architecture, such as parametric modelling and optimization, are becoming increasingly powerful, allowing for systematic and rigorous design processes. However, these methods often bring a steep learning curve, require previous expertise, or are unintuitive and unnatural to human design. On the other hand, analog design methods such as hand sketching are commonly used by architects and engineers alike. They constitute quick, easy, and almost primal modes of generating and transferring design concepts, which in turn facilitates the sharing of ideas and feedback. With the advent of increasing computational power and developments in data analysis, deep learning, and other emerging technologies, there is a potential to bridge the gap between these seemingly divergent processes to develop new hybrid approaches to design. Such methods can provide designers with new opportunities to harness the systematic and data-driven power of computation and performance analysis while maintaining a more creative and intuitive design interface. This thesis presents a new method for interpreting human designs in sketch format and predicting their structural performance using recent advances in deep learning. Furthermore, the thesis will also demonstrate how this new technique can be used in design workflows including performance-based guidance and interpolations between concepts.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Load Balancing in NetApp’s Clustered Storage Systems</title>
<link href="https://hdl.handle.net/1721.1/139065" rel="alternate"/>
<author>
<name>Tran, Tho</name>
</author>
<id>https://hdl.handle.net/1721.1/139065</id>
<updated>2022-01-15T03:34:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Load Balancing in NetApp’s Clustered Storage Systems
Tran, Tho
To address the problem of load balancing in NetApp’s storage system, this thesis aims to design and implement an algorithm that results in more evenly distributed cluster reconfigurations with minimal disturbance to clients’ workloads. I implement three different greedy algorithms to find a more balanced workload-node assignment that lowers the maximum number of operations across the cluster. To analyze the performance of the greedy algorithms, I compare their results with those of the evolutionary and brute force algorithms. I also examine whether clusters’ characteristics affect the algorithms’ performance. The key findings are that the greedy algorithm with the advanced heuristic outperforms or does as well as the naive and intermediate greedy algorithms in five clusters that are representative of NetApp data. However, the tradeoff is that the advanced greedy algorithm takes more time to run and requires more migration moves, thus causing NetApp clients or support engineers the inconvenience of manually moving multiple workloads. On the other hand, the naive greedy algorithm performs well on large clusters that primarily have small, non-dominating workloads but is more likely to get stuck in local minima when it comes to small clusters that have one or more dominating workloads. The intermediate algorithm performs as well as the naive greedy algorithm in these clusters. Finally, the evolutionary algorithm is suitable for clusters with fewer nodes and workloads. Based on these findings, it is recommended that NetApp should use the naive greedy algorithm to balance large clusters that mostly have small, non-dominating workloads. If clusters have one or more large, dominating workloads, then it is best to use the advanced greedy algorithm to do load balancing.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aligning policy goals with planning outcomes: A client-based thesis in Portland, Maine</title>
<link href="https://hdl.handle.net/1721.1/139064" rel="alternate"/>
<author>
<name>Wight, Seth</name>
</author>
<id>https://hdl.handle.net/1721.1/139064</id>
<updated>2022-01-15T04:01:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Aligning policy goals with planning outcomes: A client-based thesis in Portland, Maine
Wight, Seth
In 2018, Portland, Maine began the process of rewriting its Land Use Code for the first time in over fifty years. A primary motivation for the ReCode effort was the opportunity to align the code with Portland’s Plan 2030, the city’s comprehensive plan adopted in 2017. With the initial phase of the ReCode complete, streamlining and simplifying the existing code, subsequent phases will evaluate existing zones and regulations for consistency with the comprehensive plan’s goals. This thesis complements that work through the analysis and envisioning of two sites in the city facing development pressure. It asks: how can zoning facilitate development aligned with broader planning goals, and what factors influence the likelihood of that development? First, I situate the sites in their present and historical contexts. Existing planning relevant to each site is then considered both for its broader themes and future implications. An economic assessment reveals the market factors and local/regional dynamics that influence development in the area. I then present plausible development scenarios for each site, and utilize case studies to illustrate alternative scenarios. I conclude with recommendations for the city to consider throughout its ReCode process.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Convolutional Neural Net Models and Image Processing Methods for Predicting Surgical Site Infection</title>
<link href="https://hdl.handle.net/1721.1/139063" rel="alternate"/>
<author>
<name>Schneider, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/139063</id>
<updated>2022-01-15T03:57:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Convolutional Neural Net Models and Image Processing Methods for Predicting Surgical Site Infection
Schneider, Gabriel
Surgical site infections are an important cause of disability, cost, and even mortality, especially in low-resource settings, where patients have limited access to clinical facilities or trained medical professionals. As an alternative to a hospital-based diagnosis by a doctor, there is interest in making use of community health workers to help identify infections in patients’ homes. As an aid for diagnosis, we propose using mobile phone devices to capture an image of a wound and then apply a convolutional neural network (CNN) model to identify features of infection. For this thesis, I have explored both RGB images captured using a mobile phone camera and also thermal images captured using an external thermal camera module. The data for this work was collected as part of clinical studies conducted in rural Rwanda by Harvard University, consisting of two datasets: Dataset A (60 infected, 500 non-infected), and Dataset B (70 infected, 1,100 non-infected). From these datasets, separate naïve CNN and transfer learning CNN models were constructed. The overall median AUC values for each model, based on the ROC curve, were as follows: Naïve CNN for Dataset A (Median AUC = 0.65), Transfer learning CNN for Dataset A (Median AUC = 0.64), Naïve CNN for Dataset B (Median AUC = 0.68), Transfer learning CNN for Dataset B (Median AUC = 0.86), Naïve CNN for thermal imaging (Median AUC = 0.86), Transfer learning CNN for thermal imaging (Median AUC = 0.90). In addition to model development, an image pre-processing pipeline was also developed through an extensive series of experiments to study the effect of image blur, pixel resolution, and color calibration. The performance of our models compares favorably to prior work done in the field of wound infection prediction, and to our knowledge, this is the first reported work using thermal imaging to predict infection.
These results demonstrate that prediction of surgical infection is feasible using mobile phone imaging tools; it is hoped that this work can lead to new methods for identification of surgical site infection in low-resource areas as well as for outpatient care in developed countries.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Bayesian Computation in Earth Remote Sensing Problems</title>
<link href="https://hdl.handle.net/1721.1/139062" rel="alternate"/>
<author>
<name>Leung, Kelvin Man Yiu</name>
</author>
<id>https://hdl.handle.net/1721.1/139062</id>
<updated>2022-01-15T03:03:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Accelerating Bayesian Computation in Earth Remote Sensing Problems
Leung, Kelvin Man Yiu
Earth atmospheric remote sensing is an inverse problem that fits surface and atmospheric models to imaging spectrometer data and is critical to the analysis of the composition and biodiversity of the Earth's surface. Current methods for remote sensing generally involve retrieving a point estimate of the surface reflectance and atmospheric parameters.&#13;
&#13;
This thesis presents a more robust Bayesian approach that quantifies the uncertainty of the retrieval, though direct computation is intractable given the high dimensionality of the problem. In many Bayesian inverse problems, however, there exists a low-dimensional likelihood-informed subspace that describes both optimal projections of the data and directions in parameter space that are most informed by the data.&#13;
&#13;
In the Bayesian approach, Markov chain Monte Carlo (MCMC) is implemented within this low-dimensional subspace to increase sampling efficiency. For an example retrieval, reducing the parameter dimension by a factor of 4 increased the effective sample size of the MCMC chain by more than two orders of magnitude. This low-dimensional subspace was shown to capture the key features of the higher-dimensional posterior structure.&#13;
The posterior variance obtained through MCMC was also shown to better represent the uncertainty of the problem than the existing method.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Salt Flats, Finger Islands, and Ponds: Reading the Landscape through Infrastructure in Tampa, Florida</title>
<link href="https://hdl.handle.net/1721.1/139061" rel="alternate"/>
<author>
<name>Mueller Gámez, Michelle</name>
</author>
<id>https://hdl.handle.net/1721.1/139061</id>
<updated>2022-01-15T03:26:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Salt Flats, Finger Islands, and Ponds: Reading the Landscape through Infrastructure in Tampa, Florida
Mueller Gámez, Michelle
We are moving through strange times with the environment. In the greater Tampa Bay area, people commute to the Everglades to hunt pythons, iguanas fall from the sky, and planners build desalination plants to turn saltwater into freshwater. This thesis is an inquiry into the beliefs and ideas that have led to these environmental happenings. It looks at racial capitalism, the teleology of progress, the frontier, and ideas about nature, all of which people have used to create a material infrastructure of residential development in the landscape. Through a historical and cultural analysis, this thesis looks at how tourists, homeowners, critters, planners, environmentalists, engineers, activists, and regular people operate within the bounds of these ideas. Some of their actions and imaginations are limited by what they know and believe, some people work with the natural world to operate and survive in the 21st century, and others take actions to formulate new ways of life. The infrastructures of our times are a product of history and dominant ways of knowing; this thesis seeks to trouble Western knowledge that has foreclosed other ways of knowing the natural world.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Slumlords? The Economics and Finances of Small-Scale Low-Income Housing</title>
<link href="https://hdl.handle.net/1721.1/139060" rel="alternate"/>
<author>
<name>Morrison, Drew Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/139060</id>
<updated>2022-01-15T03:41:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Slumlords? The Economics and Finances of Small-Scale Low-Income Housing
Morrison, Drew Edward
The American urban poor suffer from our collective policy failure to guarantee all citizens access to a quality home. Low-quality housing has implications for neighborhood stability, adult and child health, and quality of life for those who live there. America’s history of racial segregation means that this low-quality housing has affected low-income communities of color generationally. And yet, though 95 percent of low-income Americans live in private housing, the private low-income rental market is relatively understudied. The 1-4-unit market, which represents nearly half of all housing units for the urban poor, is particularly overlooked in both the academic literature and in policymaking. This paper seeks to improve our collective understanding of this market by bringing together existing economic and sociological theories of how the private, small-scale low-income rental housing market operates into a cohesive economic and financial framework.&#13;
&#13;
To understand the market, I consider the economic and behavioral incentives of landlords and property investors. I differentiate the operational behaviors of specific types of landlords, evaluate the property and portfolio-level economics of small buildings in the low-income market, and consider the incentives created by the limited nature of financing in this market. Altogether, these economic and financial incentives and behaviors generate a market that is actively aligned toward degrading property conditions in favor of landlord and investor profit. This paper builds on the existing academic literature through discounted cash flow analyses that model the economic considerations of low-income landlords and through GIS mapping of the presence of large-scale landlord operations in communities in New Haven, CT.&#13;
&#13;
Having articulated the frameworks for understanding the market, I consider how the current COVID-19 crisis has exacerbated issues of quality and financial sustainability. I then identify a three-pronged approach for addressing housing quality and the broader market failures in the low-income market through (a) renewed approaches to code enforcement, (b) innovative landlord approaches that would bring better actors into the market, and (c) broad policy reform to improve housing for low-income Americans. I conclude with an evaluation of how housing quality policy can tie into current trends around inequality, infrastructure investment, and post-COVID recovery.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unit Hours as a Key Performance Indicator</title>
<link href="https://hdl.handle.net/1721.1/139057" rel="alternate"/>
<author>
<name>Papa, Anthony J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139057</id>
<updated>2022-01-15T03:25:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Unit Hours as a Key Performance Indicator
Papa, Anthony J.
In a complex manufacturing environment, tracking the cause of cost overruns and other issues can be challenging. At one of Boeing’s plants, a tool was built to track a key performance indicator based on labor hours. Boeing's measurement of labor hours captures delays due to part shortages, additional hours required to complete rework, improper clocking, unbalanced workloads, overtime, and any other issues that cause work to take longer than scheduled. The tool also forms the foundation for a visual control system on the factory floor. This web-based tool is designed to improve problem solving at all levels of the organization. Using this tool, the plant’s stakeholders can effectively identify and prioritize labor overages before they accumulate, resulting in an improvement in data-driven problem finding and solving.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cultivating Creative Learning in Community — An iterative design process</title>
<link href="https://hdl.handle.net/1721.1/139056" rel="alternate"/>
<author>
<name>Rege, Sarah Evelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/139056</id>
<updated>2022-01-15T04:05:55Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Cultivating Creative Learning in Community — An iterative design process
Rege, Sarah Evelyn
Creativity is an inherent part of human development; it is the progress of thought, our natural inclination to innovate, invent, and create. Simply put, we have the ability to process, analyze, and imagine future outcomes of everyday situations. Evidence suggests that creative education can support cognitive development and enhance our ability to problem solve. In the past decade, curriculum reform in Kenya has shifted from teacher-based approaches to learner-centered approaches (with the introduction of the Competency-Based Curriculum). While this may support the next generation of creative thinkers, the lack of funding to support creative education on a national level and the insufficient supply of free resources to most public schools pose a challenge to educators. Despite the appeal of the new curriculum, requirements for parents to be more involved in their children’s learning further pronounce wealth and education gaps amongst students, impeding the child’s success.&#13;
&#13;
This design thesis presents the iterative process behind the ongoing development process of Somoto, a creative learning service established through the DesignX accelerator at MIT. Through community collaboration, Somoto aims to establish a network of creative learning spaces in Nairobi, Kenya. Drawing from a place-based community development approach; the design proposal identifies community assets and existing resources such as community libraries and cyber cafes to enhance creative education programming within low-income communities in Nairobi.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of Nonthermal Plasma Electrolytic Cells for Ammonia Synthesis</title>
<link href="https://hdl.handle.net/1721.1/139055" rel="alternate"/>
<author>
<name>Chen, Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/139055</id>
<updated>2022-01-15T03:47:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design and Analysis of Nonthermal Plasma Electrolytic Cells for Ammonia Synthesis
Chen, Ann
Haber-Bosch production of ammonia, a fertilizer and energy-dense alternative fuel, accounts for 1.8% of global fossil fuel demand and 1-2% of global CO2 emissions. Nonthermal plasma-assisted ammonia synthesis is one solution capable of countering these carbon emissions. The technology has been shown to be a viable and scalable alternative to the Haber-Bosch process that also avoids expensive catalysts and minimizes the use of rare metal electrodes. It can cleanly produce ammonia at atmospheric pressure and ambient temperature from just air and water without generating CO2 or harmful equivalents. This work developed two modular and portable designs for plasma-assisted ammonia synthesis and investigated their implementation on industrial farms as a way of producing ammonia directly in the places that need it. The two architectures utilized a piezoelectric direct discharge plasma and a glow discharge plasma. The design and fabrication processes for both apparatuses are detailed, and ammonia generation behavior in response to headspace gas concentration and run time is also reported. Without optimization of plasma operating parameters, the piezoelectric direct discharge device was already able to produce 10.5 µg of ammonia in 15 minutes, and the glow discharge electrolytic cell was able to produce 44.6 µg of ammonia in one hour. Samples collected were processed with an ammonia/ammonium assay using the o-phthalaldehyde method. Resulting fluorescence intensity measurements were then processed using statistical analysis techniques such as ANOVA, nested variance, and effects leverage. In the process, a number of useful observations regarding design improvements for subsequent plasma-assisted ammonia synthesis devices were made. The suggestions put forth in this research can be applied for better production efficacy and more consistent device operation during the research phase.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Attributes of Bi-Directional Turbomachinery for Pumped Thermal Energy Storage</title>
<link href="https://hdl.handle.net/1721.1/139054" rel="alternate"/>
<author>
<name>Chiapperi, Joseph Donald</name>
</author>
<id>https://hdl.handle.net/1721.1/139054</id>
<updated>2022-01-15T03:29:04Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Attributes of Bi-Directional Turbomachinery for Pumped Thermal Energy Storage
Chiapperi, Joseph Donald
In this thesis we (i) present a methodology for determining the aerodynamic performance of bi-directional turbomachines for pumped thermal energy storage, i.e., turbomachines designed to operate with both forward and backward flow, (ii) carry out performance computations for such turbomachines, and (iii) propose principles for conceptual design of these devices. Focus is placed on using the energy storage cycle not only to identify the unique requirements placed on bi-directional turbomachines, but also to estimate what effect these requirements have on the round-trip efficiency of the energy storage process. In particular, it is shown how the difference between aerodynamic loading in forward and in backward operation causes the blading to work at incidences leading to performance below the blading’s maximum efficiency.&#13;
&#13;
The proposed design principles use a 50MW counter-rotating bi-directional turbomachine, being developed by Brayton Energy LLC, as a context from which to assess different features. The description of the design principles includes determination of the appropriate number of stages, definition of relevant non-dimensional parameters for blading selection, and optimization of two-dimensional blading for bi-directional operation. The assessment of stage count shows the relationship between relative Mach number, pressure ratio, and round-trip efficiency. The non-dimensional parameter assessment creates a bi-directional analogue to existing “Smith charts”, for single direction turbomachines, using camber and stagger. The two-dimensional blade shape evaluation and optimization shows how the blade profile can be modified to address the unique requirements of a bi-directional turbomachine, enabling an increase in round-trip efficiency of 2 percentage points compared to a baseline configuration.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power, Risk, and Democratic Control in State-Local Finance:&#13;
The Effect of State Tax and Expenditure Limits on Municipal Debt and Risk</title>
<link href="https://hdl.handle.net/1721.1/139053" rel="alternate"/>
<author>
<name>McDaniel, Noah Jefferson</name>
</author>
<id>https://hdl.handle.net/1721.1/139053</id>
<updated>2022-01-15T03:28:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Power, Risk, and Democratic Control in State-Local Finance:&#13;
The Effect of State Tax and Expenditure Limits on Municipal Debt and Risk
McDaniel, Noah Jefferson
Municipal finance is important, if opaque, to the daily lives of people across the United States. Cities and towns provide essential services which are often financed through debt. Over the course of the 20th century, the use of debt in municipalities and the state-local relationship has transformed dramatically. While debt historically was used for infrastructure projects supported with tax revenue, its use expanded in the postwar era into private purposes and non-capital expenditures. Revenue bonds in particular, a form of non-guaranteed debt, are a popular mechanism to finance local services. States have also increasingly exercised control over municipal finance. Particularly in the latter half of the last century, states passed tax and expenditure limits, or TELs, to regulate municipal governments.&#13;
&#13;
In this thesis I explore the empirical effects of TELs, particularly limitations on property taxes, on fiscal risk metrics. The study is motivated by the increased financialization of the public sector, and concordant growth in risk in municipal government. How has the state-local relationship contributed to local risk? Using a panel regression with fixed effects on data from all 50 states 1967-2004, I find that TELs have significant effects on municipal risk. Property tax rate limits in particular increase risk, suggesting that some TELs may not constrain the size of local governments, but induce substitution towards higher-risk fiscal practices.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preserving Memory Safety in Safe Rust during Interactions with Unsafe Languages</title>
<link href="https://hdl.handle.net/1721.1/139052" rel="alternate"/>
<author>
<name>Rivera, Elijah E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139052</id>
<updated>2022-01-15T03:02:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Preserving Memory Safety in Safe Rust during Interactions with Unsafe Languages
Rivera, Elijah E.
Rust is a programming language that simultaneously offers high performance and strong security guarantees. However, these guarantees come at the cost of strict compiler checks that sometimes prevent necessary code patterns. The unsafe keyword allows developers to bypass these compiler checks, and is used in both pure Rust and mixed-language applications. But the use of unsafe undermines the security guarantees of Rust that make it an attractive option in the first place.&#13;
&#13;
We first demonstrate that within a real-world pure Rust application, many uses of unsafe can be eliminated, or reduced to formally verifiable standard libraries. We then present Galeed, a system for isolating and protecting the Rust heap from access by other programming languages using Intel’s Memory Protection Key (MPK) technology. We demonstrate both the effectiveness and efficiency of Galeed on Firefox, a web browser written in Rust and C++.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beneficial Initializations in Over-Parameterized Machine Learning Problems</title>
<link href="https://hdl.handle.net/1721.1/139050" rel="alternate"/>
<author>
<name>Prasad, Neha</name>
</author>
<id>https://hdl.handle.net/1721.1/139050</id>
<updated>2022-01-15T03:11:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Beneficial Initializations in Over-Parameterized Machine Learning Problems
Prasad, Neha
We theoretically and empirically analyze the phenomenon of transfer learning in overparameterized machine learning. We start by showing that in over-parameterized linear regression, transfer learning is equivalent to solving regression from a non-zero initialization. We use this finding to propose LLBoost, a theoretically grounded, computationally efficient method to boost the validation and test accuracy of pretrained, over-parameterized models without impacting the original training accuracy. We evaluate LLBoost on CIFAR10, ImageNet-32, and ImageNet and also prove that it reduces the generalization error of any interpolating solution with high probability. By extending our analysis of transfer learning in linear regression, we present an approach for transfer learning in kernel regression. Namely, we demonstrate that transfer learning corresponds to adding a function to the minimum norm solution that produces zero error on the training data. We use this approach to perform transfer learning on image classification tasks using the neural tangent kernel.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Transfer Learning for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/139048" rel="alternate"/>
<author>
<name>Lin, Yen-Chen</name>
</author>
<id>https://hdl.handle.net/1721.1/139048</id>
<updated>2022-01-15T03:25:01Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Visual Transfer Learning for Robotic Manipulation
Lin, Yen-Chen
Humans are remarkable at manipulating unfamiliar objects. For the past decades of robotics, tremendous efforts have been dedicated to endowing robot manipulation systems with such capabilities. As classic solutions typically require prior knowledge of the objects (e.g., 3D CAD models), which is not available in unstructured environments, data-driven solutions that learn from robot-environment interactions (e.g., trial and error) have emerged as a promising approach for autonomously acquiring complex manipulation skills. For data-driven methods, the ability to do more with less data is incredibly important, since data collection through physical interaction between the robot and the environment can be both time-consuming and expensive. In this thesis, we develop transfer learning algorithms for robotic manipulation in order to reduce the amount of robot-environment interaction needed to adapt to different environments. With real robot hardware, we show that our algorithms enable robots to learn to pick and grasp arbitrary objects with 10 minutes of trial and error, and help robots learn to push unfamiliar objects with 5 interactions.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Engineering of Reactive Molecular Dynamics Simulations</title>
<link href="https://hdl.handle.net/1721.1/139047" rel="alternate"/>
<author>
<name>He, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/139047</id>
<updated>2022-01-15T03:05:49Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Performance Engineering of Reactive Molecular Dynamics Simulations
He, Helen
Reactive molecular dynamics is the best-performing option for simulating chemical systems on the order of thousands of atoms, but its high computational cost often limits the temporal scale of simulation. In order to observe scientific phenomena of interest, we need implementations of interatomic potentials which are highly efficient and scalable on modern architectures. Parallel computing is now ubiquitous, and today’s supercomputing clusters often consist of multicore nodes with high on-node parallelism. Current implementations of ReaxFF display good scaling across many distributed nodes, but fall short in taking full advantage of compute available on an individual CPU or GPU. &#13;
&#13;
This thesis presents analysis and performance optimization of the widely used LAMMPS ReaxFF implementations. I analyze the performance characteristics of the USER-REAXC, USER-OMP, and Kokkos implementations in LAMMPS, profiling and describing bottlenecks in each. I then provide optimizations to serial and parallel CPU code which increase the efficiency and parallel thread scaling of USER-OMP. Using an Intel Xeon Platinum 8260, the resulting code obtains a speedup of 1.5-3x and shows scaling with twice as many OpenMP threads on a 1152-atom Hafnium Diboride simulation. I show performance improvements on various simulation sizes up to 44K atoms, and present independently verified correctness on an AMD Ryzen Threadripper 3970X.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>JamNSync: A User-Friendly, Latency-Agnostic Virtual Rehearsal Platform for Music Ensembles</title>
<link href="https://hdl.handle.net/1721.1/139045" rel="alternate"/>
<author>
<name>Wu, Nanette</name>
</author>
<id>https://hdl.handle.net/1721.1/139045</id>
<updated>2022-01-15T04:04:13Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">JamNSync: A User-Friendly, Latency-Agnostic Virtual Rehearsal Platform for Music Ensembles
Wu, Nanette
Existing technologies that support virtual rehearsals are complex and unintuitive. Network music performance platforms promise real-time interactions between physically separated musicians, but they demand hard-to-achieve network conditions, lack rehearsal-specific features, and require extensive knowledge of network jargon. JamNSync, designed with musicians in mind, proposes a hassle-free alternative. The JamNSync web application offers a recording-based, synchronous rehearsal experience using a novel consensus protocol. Each musician only needs a basic understanding of playback systems and common audio devices. To account for non-deterministic audio latency in the browser, JamNSync provides a user-friendly audio alignment tool that efficiently automates audio processing and produces a group mix ready for immediate feedback. Three rounds of user testing show that JamNSync holds significant advantages over current virtual rehearsal solutions. By providing an intuitive platform for physically distanced groups to rehearse, JamNSync enables musicians around the world to remotely make music together. This is especially valuable during times of social isolation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using machine learning to increase the predictive value of humanized mouse models for the human immune response to YFV-17D</title>
<link href="https://hdl.handle.net/1721.1/139044" rel="alternate"/>
<author>
<name>Ravinder, Divya</name>
</author>
<id>https://hdl.handle.net/1721.1/139044</id>
<updated>2022-01-15T04:10:35Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Using machine learning to increase the predictive value of humanized mouse models for the human immune response to YFV-17D
Ravinder, Divya
Despite their utility as models for human systems, intrinsic differences between mouse and human biology limit direct translation of findings. Immunocompromised mice have been engrafted with functional human immune system components, or “humanized”, to better model the human immune response. Continued translational failure led to the development of a second-generation humanized mouse model with enhanced myeloid and natural killer (NK) cell compartments, producing a stronger immune response after exposure to the live attenuated yellow fever vaccine (YFV-17D). Additionally, semi-supervised machine learning algorithms have been shown to further uncover translational insights from mouse models. I hypothesized that these strategies may be synergistic when combined, further improving the predictive value of these models. Here, I combine expression data from three humanized mouse models (NRG-HIS, NRG-HIS/Fluc, and NRG-HIS/Flt3LG) challenged with the live attenuated yellow fever vaccine and machine learning (ML) models: k-nearest neighbors (KNN), random forest (RF), support vector machine (SVM), and neural network (NN). Model predictions were evaluated for accuracy using F-scores and Matthews correlation coefficients. Several algorithms combined with mouse models made significantly better predictions about differential expression in the human immune response than models made with randomly classified samples. Semi-supervised NN, SVM, and RF algorithms combined with NRG-HIS/Fluc mice performed best for the tested human cohorts. Unexpectedly, the NRG-HIS/Fluc model outperformed the NRG-HIS model; the adenoviral vector itself may have increased production or recruitment of lymphocytes during infection. The best-performing models uncovered differentially expressed genes (DEGs) involved in detecting pathogens, innate immune response, and interferon signaling.
TRIM22 and TRIM5, and potentially related genes, may be reliably uncovered by semi-supervised ML models applied to humanized mouse models, though further study is required for verification. Overall, modified humanized models, combined with ML approaches, can improve predictions of the human immune response.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reconfigurable Satellite Constellations for Mobile Target Tracking</title>
<link href="https://hdl.handle.net/1721.1/139043" rel="alternate"/>
<author>
<name>Morgan, Sarah J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139043</id>
<updated>2022-01-15T03:04:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Reconfigurable Satellite Constellations for Mobile Target Tracking
Morgan, Sarah J.
Current storm monitoring satellites offer unsatisfactory coverage of ongoing storms, providing either persistent coverage at low spatial resolution or high-spatial-resolution coverage with low temporal sampling. A reconfigurable constellation of satellites (ReCon) offers a way to augment these data sources with higher resolution coverage and improved temporal sampling. A ReCon can respond dynamically to different objectives throughout its mission lifetime, offering a more responsive, adaptable alternative to traditional Earth-observing satellite constellations. In the ReCon concept of operations, the constellation is nominally positioned to obtain global coverage. If an event of interest occurs at a particular latitude and longitude, satellites can be maneuvered to obtain more frequent accesses of this target than otherwise achieved in the nominal configuration. While this architecture has been primarily explored with static ground targets in mind, for more dynamic events of interest, such as hurricanes, an additional layer of responsiveness can be added. A method of mobile target tracking through planning a series of low-thrust maneuvers holds promise. This method has been shown to improve the coverage characteristics of a single satellite in a hurricane case study when compared to a non-maneuvering satellite. This thesis explores and expands upon this concept, reviewing the existing work and applying an alternative approach. Throughout this thesis, procedures for adaptive reconfigurable maneuver planning are laid out and used for two hurricane case studies. This more flexible approach finds solutions for a single satellite case study observing Typhoon Megi using around 2 m/s delta-V, in comparison to similarly performing solutions previously found of around 13.5 m/s delta-V. The inclusion of an optimizer in maneuver planning enhances the prior work, exploring a continuous design space of possible maneuver options.
This reveals alternative solutions and a more complete view of the entire design space. This optimization approach also allows a future user to explore the objectives of increased storm access time, closer flyovers, and total delta-V cost of maneuvers. For example, in a single satellite maneuvering case, total target access time can be doubled in comparison to a non-maneuvering case with the use of only around 2.5 m/s delta-V. Overall, this approach has resulted in the exploration of key tradeoffs between these objectives. For example, increased access time or closer passes each come at the cost of increased delta-V requirements; solutions that provide the same total access time require greater delta-V to achieve closer passes and vice versa.  When considering the inclusion of multiple maneuvering satellites, diminishing returns of maneuverability are observed, with greater natural accesses occurring with a greater number of satellites in the constellation. Additionally, the concept of executing this method in real time with uncertain targets based upon hurricane forecasts is explored. This reveals the need to incorporate robustness into this optimization. Finally, the prospect of executing this theoretical concept with a ReCon demonstrator is evaluated, including taking into account potential errors in maneuver planning.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/139042" rel="alternate"/>
<author>
<name>Oestreich, Charles E.</name>
</author>
<id>https://hdl.handle.net/1721.1/139042</id>
<updated>2022-01-15T03:58:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
Oestreich, Charles E.
As the number of spacecraft and debris objects in orbit rapidly increases, active debris removal and satellite servicing efforts are becoming critical to maintain a safe and usable orbital environment. At the same time, future unmanned solar system exploration missions are targeting challenging destinations for scientific data collection. For practical realization of these technologies, the involved spacecraft must be highly autonomous and able to perform complex proximity operations maneuvers in a safe manner. This requires that the guidance and control system must reliably address inevitable sources of uncertainty while performing the maneuvers.&#13;
&#13;
This thesis seeks to improve the flexibility and performance of autonomous spacecraft in uncertain scenarios by leveraging robust control theory and reinforcement learning. A novel algorithm, termed online tube-based model predictive control, is proposed and applied to a simulated mission involving the intercept of a tumbling target with unknown inertial properties. This algorithm demonstrates superior performance and exhibits less reliance on initial knowledge of the uncertainty when compared to standard robust control methods. Separately, reinforcement learning is utilized to develop a policy (to be employed as a feedback control law) for six-degree-of-freedom docking with a rotating target. The policy provides near-optimal performance in a simulated Apollo transposition and docking maneuver with uncertainty in the initial conditions. Both of these methods enhance the level of autonomy in their respective scenarios while also maintaining practical computational run-times. As such, this thesis represents an incremental step towards making missions based on highly autonomous proximity operations a reality.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Examples in Simpler Settings</title>
<link href="https://hdl.handle.net/1721.1/139041" rel="alternate"/>
<author>
<name>Wang, Tony T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139041</id>
<updated>2022-01-15T03:47:54Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Adversarial Examples in Simpler Settings
Wang, Tony T.
In this thesis we explore adversarial examples for simple model families and simple data distributions, focusing in particular on linear and kernel classifiers. On the theoretical front we find evidence that natural accuracy and robust accuracy are more likely than not to be misaligned. We conclude from this that in order to learn a robust classifier, one should explicitly aim for it either via a good choice of model family or via optimizing explicitly for robust accuracy. On the empirical front we discover that kernel classifiers and neural networks are non-robust in similar ways. This suggests that a better understanding of kernel classifier robustness may help unravel some of the mysteries of adversarial examples.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RadioSTAR (Radio Spacecraft for Telecommunications Assessment and Risk-reduction): A 3U CubeSat for validation of ground stations and link budgets</title>
<link href="https://hdl.handle.net/1721.1/139040" rel="alternate"/>
<author>
<name>Murphy, Thomas J.</name>
</author>
<id>https://hdl.handle.net/1721.1/139040</id>
<updated>2022-01-15T03:09:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">RadioSTAR (Radio Spacecraft for Telecommunications Assessment and Risk-reduction): A 3U CubeSat for validation of ground stations and link budgets
Murphy, Thomas J.
As the number of small satellites in orbit increases, an increasing number of ground stations must be constructed, recommissioned, or updated in order to provide uplink and downlink access. However, prior to deployment of a spacecraft, it is difficult to evaluate a ground station’s performance capabilities. Over-the-air testing with the spacecraft prior to launch may not be possible for remotely located ground stations, or for stations in environmentally challenging locations, due to humidity or season. In the event that a spacecraft is deployed and the ground station has a flaw, the critical first days on-orbit may be spent debugging ground station hardware, leaving the spacecraft uncontacted and in an unknown state. In the event that a spacecraft has an unusually short lifetime, such as a low-orbiting CubeSat or a mission with limited fuel or power, a non-functional ground station could make the difference between getting little data and getting no data at all. This paper proposes a spacecraft with a versatile software-defined radio onboard, which can simulate nearly any upcoming spacecraft’s radio system, thus qualifying a ground station’s readiness for on-orbit operations. Furthermore, this spacecraft can be used for refining the link budget design, eliminating uncertainties such as the proper value to use for system noise temperature. Additional applications for such a spacecraft will be explored, including acting as a known signal source for calibration of radio astronomy installations. This paper acts as a first look at the feasibility of a RF calibration and validation spacecraft.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fall Prediction Model for a Reconfigurable Mobile Support Robot</title>
<link href="https://hdl.handle.net/1721.1/139039" rel="alternate"/>
<author>
<name>Kamienski, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/139039</id>
<updated>2022-01-15T03:18:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Fall Prediction Model for a Reconfigurable Mobile Support Robot
Kamienski, Emily
This work presents a fall prediction model to be used in conjunction with a reconfigurable robot for elderly mobility support. The fall prediction model is based on a Long Short Term Memory network. A predicted fall will inform a reconfigurable robot to expand its base of support to avoid possible tipping induced by the fall. A wearable support interface consisting of an instrumented harness and auto-retracting cable system is developed for supporting the body and preventing a fall from occurring. The prediction model was developed using data collected from simulated falls and activities of daily living performed on a test platform with the wearable support interface. The reconfigurable robot concept explored resembles a walker and provides mobility assistance during normal use, but it can also expand its base of support during a falling emergency. The model results show that a fall can be predicted 530 ms from the initial observance of instability in the user, which allows sufficient time to reconfigure a robot into a more stable configuration.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Outlier-Robust Multi-View Triangulation Using Graduated Non-Convexity for Space Vehicle Navigation</title>
<link href="https://hdl.handle.net/1721.1/139038" rel="alternate"/>
<author>
<name>Mitchell, Adriana Macieira</name>
</author>
<id>https://hdl.handle.net/1721.1/139038</id>
<updated>2022-01-15T03:57:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Outlier-Robust Multi-View Triangulation Using Graduated Non-Convexity for Space Vehicle Navigation
Mitchell, Adriana Macieira
Triangulation is used in terrain relative navigation (TRN) to identify the position of terrain features with respect to the spacecraft and to ultimately estimate the spacecraft’s pose during planetary landing and navigation. This thesis seeks to improve current multi-view triangulation methods to perform optimally on an outlier-dominant (&gt;50% outlier) dataset. Outlier-dominant situations may occur in cases of extreme environmental changes (poor lighting conditions, obscured vision), unreliable sensors, and, in the case of TRN, incorrect feature matching. We show that current multi-view triangulation solvers fail at &gt;=10% outlying measurements. We apply Yang et al.’s (2020) Graduated Non-Convexity (GNC) algorithm to three chosen multi-view triangulation solvers and improve the best-performing solver’s robustness to 50% outliers, outperforming the current state-of-the-art outlier removal method, RANSAC. We apply the robust multi-view triangulation solver to a simulated lunar landing trajectory and report the variance of the returned 3D error to verify the accuracy of the estimate. This new technique of making current multi-view triangulation solvers robust to outliers using GNC provides exciting performance and displays promise for future implementation in a terrain relative navigation system. This combination has not been shown before in previous work, but this thesis enables the conclusion that outlier-robust methods can be applied to TRN.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Mesh Recovery Using Radio Signals</title>
<link href="https://hdl.handle.net/1721.1/139037" rel="alternate"/>
<author>
<name>Liu, Yingcheng</name>
</author>
<id>https://hdl.handle.net/1721.1/139037</id>
<updated>2022-01-15T03:13:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Human Mesh Recovery Using Radio Signals
Liu, Yingcheng
This thesis presents RF-Avatar, a neural network model that can estimate 3D meshes of the human body in the presence of occlusions, baggy clothes, and bad lighting conditions. We leverage the fact that radio frequency (RF) signals in the WiFi range traverse clothes and occlusions and bounce off the human body. Our model parses such radio signals and recovers 3D body meshes. Our meshes are dynamic and smoothly track the movements of the corresponding people. Further, our model works in both single- and multi-person scenarios. Inferring body meshes from radio signals is a highly under-constrained problem. Our model deals with this challenge using: 1) a combination of strong and weak supervision, 2) a multi-headed self-attention mechanism that attends differently to temporal information in the radio signal, and 3) an adversarially trained temporal discriminator that imposes a prior on the dynamics of human motion. Our results show that RF-Avatar accurately recovers dynamic 3D meshes in the presence of occlusions, baggy clothes, bad lighting conditions, and even through walls.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nationwide Pedestrian Safety Analysis Using Crash&#13;
and Survey Data</title>
<link href="https://hdl.handle.net/1721.1/139036" rel="alternate"/>
<author>
<name>Koch, Zade</name>
</author>
<id>https://hdl.handle.net/1721.1/139036</id>
<updated>2022-01-15T03:01:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Nationwide Pedestrian Safety Analysis Using Crash&#13;
and Survey Data
Koch, Zade
Pedestrian safety is studied using two approaches: injury severity modeling using NHTSA’s Crash Report Sampling System crash data and administering a nationwide survey on roadway safety topics. The crash data models identified seven significant independent variables which relate to severe pedestrian injuries: weather, lighting condition, speed limit, speeding violation, vehicle body type, driver impairment, and pedestrian age. When crashes at intersections and non-intersections were compared, the effects of these variables did not significantly vary. The nationwide survey concentrated on topics unavailable or incompletely described in the crash data, including pedestrian safety perceptions and four unsafe behaviors: intoxicated driving, cell phone use while driving, intoxicated walking, and cell phone use while walking. Public beliefs about dangerous pedestrian scenarios largely agreed with findings from the crash data. The Theory of Planned Behavior was applied to the unsafe behaviors, leading to distinct suggestions for public awareness messaging for each of the behaviors.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peak Current Mode Driver for Thermoelectric Cooler</title>
<link href="https://hdl.handle.net/1721.1/139035" rel="alternate"/>
<author>
<name>Persad, Ashisha N.</name>
</author>
<id>https://hdl.handle.net/1721.1/139035</id>
<updated>2022-01-15T03:29:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Peak Current Mode Driver for Thermoelectric Cooler
Persad, Ashisha N.
Thermoelectric coolers (TECs) are solid state devices that use the Peltier effect to provide heating or cooling for an enclosed area when a voltage is applied. In order to both heat and cool, a bidirectional current must be supplied to the TEC. Therefore, a driver circuit is needed to supply the TEC with this bidirectional input. This thesis explores a design for an ultra-compact driver for a TEC that allows the system to quickly respond to disturbances and efficiently maintain a precise temperature. Existing integrated TEC driver products currently do not meet the design targets set in this thesis: they only operate at up to 2 MHz switching frequency, are less than 90% efficient, and are quite large. This motivates the design of an improved TEC driver. This thesis provides an investigation into a peak current mode controlled TEC driver architecture that operates at 5 MHz with a 2.7-5.5 V input and supplies ±1.5 A to the TEC. This TEC driver was targeted to achieve 95% efficiency and will be incorporated with other circuitry as part of an ultra-compact integrated circuit (IC) package design. After exploring various architectures, a peak current mode dual-buck H-bridge TEC driver comprising the architectural blocks of a gate drive circuit, outer voltage loop, and inner current loop was designed. This design ensures that the targets of small size, high efficiency, and stability are met. The experimental results, along with analysis and simulation of the design presented in this thesis, demonstrate that this architecture can be used in TEC driver applications and shows great promise for use in other applications due to its size and efficiency.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parking policy as a mechanism to reduce car ownership and use</title>
<link href="https://hdl.handle.net/1721.1/139034" rel="alternate"/>
<author>
<name>Farr, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/139034</id>
<updated>2022-01-15T03:34:38Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Parking policy as a mechanism to reduce car ownership and use
Farr, Elizabeth
The vast majority of Americans own a car, despite its high cost and low utilization rate. Through a stated preference survey in the Washington D.C., Chicago, Dallas, and Seattle metro areas, I find that people value their car at $11,197, and the majority of that value comes from owning the car, rather than from using it. The ownership value comes in part from the “option value”: the car is sitting in a parking spot, waiting to be used whenever and however the owner wishes. Parking enables this value. In the next set of results, I find that parking-related variables, like the built environment and employer-provided parking benefits, also impact car use. Though these findings point to the potential for policymakers to use parking policy to reduce car ownership and use, American cities are notorious for having an oversupply of underpriced parking. To investigate why parking policies may be underutilized, I interview and survey parking officials. I find that officials are not trying to use parking policy to reduce car ownership or to disincentivize cars. Officials face strong resistance to such policies from businesses and residents who live near where they would be implemented. I conclude with several policy recommendations aimed at enabling policymakers to better utilize parking policy to reduce car ownership and use, including policies relating to reframed goals and metrics and to shifting power so that localized stakeholder needs are better balanced against recipients of larger-scale benefits. Lastly, for parking policy to be truly effective in reducing car use and ownership, I recognize that land use policy and mobility system improvements must be deployed to provide truly viable non-private car alternatives that replicate the option value of car ownership.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A moral document? Expanding conversations about public safety budgets in Minnesota in the wake of George Floyd’s murder</title>
<link href="https://hdl.handle.net/1721.1/139031" rel="alternate"/>
<author>
<name>Arnosti, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/139031</id>
<updated>2022-01-15T04:11:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A moral document? Expanding conversations about public safety budgets in Minnesota in the wake of George Floyd’s murder
Arnosti, Nathan
Minneapolis, Minnesota, became the epicenter of mass protests for racial justice in 2020 following the police killing of George Floyd on May 25. In the months since, local activists and elected officials have advanced ambitious visions for how the city can reimagine its systems of public safety.&#13;
&#13;
This thesis builds on these perspectives by looking at public safety budgets, specifically, across the state of Minnesota. I make a case for expanding and refining current conversations about public safety budgets in several ways. First, I see existing conversations in Minnesota – centered around reimagining police budgets in major cities like Minneapolis and Saint Paul – as too narrow. I contend that police departments belong to a broader system of public safety services whose role in society needs to be re-examined, including sheriffs, highway patrols, and correctional agencies. Given that the harms associated with existing systems of public safety, including police killings, serious crime, and incarceration, are present statewide – as are residents of color, who are most likely to suffer harms from these systems – I argue that conversations to reimagine public safety must also take place in communities across the state (Section I). Second, I explore the availability of existing public safety resources across the state, identifying what can be considered the upper limit of public safety reinvestment for communities in a first-of-its-kind analysis (Section II). Third, I find that current conversations around public safety budgets are complicated by the different visions, tactics, and roles that community leaders take on, and suggest that conversations around public safety budgets need to be responsive to this variation (Section III). I conclude by identifying questions that can guide further conversations about public safety budgets, examining relevant efforts to reimagine public safety systems in Austin, Los Angeles County, and Oregon, and outlining actions that Minnesotans can take to initiate conversations in their community (Section IV).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spoken ObjectNet: Creating a Bias-Controlled Spoken Caption Dataset</title>
<link href="https://hdl.handle.net/1721.1/139030" rel="alternate"/>
<author>
<name>Palmer, Ian A.</name>
</author>
<id>https://hdl.handle.net/1721.1/139030</id>
<updated>2022-01-15T03:52:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Spoken ObjectNet: Creating a Bias-Controlled Spoken Caption Dataset
Palmer, Ian A.
Visually-grounded spoken language datasets can enable models to learn cross-modal correspondences with very weak supervision. However, modern audio-visual datasets contain biases that undermine the real-world performance of models trained on that data. We introduce Spoken ObjectNet, which is designed to remove some of these biases and provide a way to better evaluate how effectively models will perform in real-world scenarios. This dataset expands upon ObjectNet, which is a large-scale image dataset that features controls for biases encoded into many other common image datasets.&#13;
&#13;
We detail our data collection pipeline, which features several methods to improve caption quality, including automated language model checks. We also present an analysis of the vocabulary of our collected captions. Lastly, we show baseline results on several audio-visual machine learning tasks, including retrieval and machine captioning. These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly due to biases in other datasets that the models have learned. We also show evidence that the performance decrease is due to the dataset controls, and not the transfer setting. We intend to make our dataset openly available to the general public to encourage new lines of work in training models that are better equipped to operate in the real world.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Scalar Politics of Mobility in Detroit</title>
<link href="https://hdl.handle.net/1721.1/139028" rel="alternate"/>
<author>
<name>Glynn, Russell T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139028</id>
<updated>2022-01-15T03:45:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Scalar Politics of Mobility in Detroit
Glynn, Russell T.
Over roughly the last decade, scholars and practitioners have recognized a so-called “new mobility” revolution associated with automated vehicle technologies, data-hungry platform firms (e.g., Uber), and international circuits of venture capital. Importantly, this period of evolution in the political economy of transportation is accompanied by, and in part reflects, a broader realignment in capitalist urbanization. Where industrial capital once fixed to regions of the global north in order to stoke systems of mass production, today the economic fortunes of these regions are tied to a knowledge economy in which urban space yields precious innovation and greases the wheels of consumption. This thesis explores questions of social and technological change in mobility and, by doing so, considers how movement figures in the concepts, frameworks, and theories scholars use to understand the spatial arrangement of capitalism. According to one perspective in the geography literature, spatial scales – the nested hierarchies of city, region, and nation that order the world – are an outcome of capital’s uneven development of space. An enduring task of human geographers is to understand how particular scales are constituted and transformed amid changes in sociospatial relations. I position the emerging new mobility ecosystem as one such sociospatial change that presents a productive point of entry into scalar thinking. Building upon a critique of the literature for its relatively thin conceptualization of transportation technologies and institutions within the production of spatial scale, I develop a scalar politics of mobility around an extended case study of Detroit. In doing so, I reveal the critical role of mobility in shaping scaling processes during the highway building era of the twentieth century and in the present new mobility moment.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Super-Resolution in Sparse&#13;
Sensor Arrays</title>
<link href="https://hdl.handle.net/1721.1/139026" rel="alternate"/>
<author>
<name>Jin, Mumin</name>
</author>
<id>https://hdl.handle.net/1721.1/139026</id>
<updated>2022-01-15T04:10:46Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Super-Resolution in Sparse&#13;
Sensor Arrays
Jin, Mumin
Due to their robustness to weather and environmental conditions, radars are an important sensor in automotive and industrial applications. However, their utility in more advanced applications is increasingly limited by their resolution, which increases linearly with the number of radar elements in a uniform linear array (ULA). In this work, neural networks are trained to approximate the signals from a large aperture array from the signals obtained from smaller aperture sub-arrays. The training set consists of simulated radar responses to ideal point reflectors in a noiseless vacuum, and the network is trained to minimize the squared error between the network output and the normalized log-magnitude of the large aperture array signal in its Fourier Transform domain. In general, given signals from two sets of 12-element sub-arrays, the neural network can reproduce results more than 10 times closer to the signals of the 1024-element array than signals from either input sub-array, in terms of squared error in the Transform domain. The outputs of the neural network have over 91.97% probability of detection, P_D, with 1.02% probability of false alarm, P_FA, compared to 67.53% P_D and 5.57% P_FA with data from either sub-array. The results show the possibility of extracting more information by exploiting structure in real-world data with inexpensive, small sensors, and have major implications for the use of radar sensors in automotive and industrial applications.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-Learning and Self-Supervised Pretraining for Few-shot Image Translation</title>
<link href="https://hdl.handle.net/1721.1/139025" rel="alternate"/>
<author>
<name>Rugina, Ileana</name>
</author>
<id>https://hdl.handle.net/1721.1/139025</id>
<updated>2022-01-15T04:04:58Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Meta-Learning and Self-Supervised Pretraining for Few-shot Image Translation
Rugina, Ileana
Recent advances in machine learning (ML), and deep learning in particular, enabled by hardware advances and big data, have provided impressive results across a wide range of computational problems such as computer vision, natural language processing, and reinforcement learning. Many of these improvements are, however, constrained to problems with large-scale curated datasets which require a lot of human labor to gather. Additionally, these models tend to generalize poorly under both slight distributional shifts and low-data regimes. In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of ML.&#13;
&#13;
We follow this line of work and contribute a novel few-shot multi-task image-to-image translation problem. We then present several benchmarks for this problem using ideas from both meta-learning and contrastive learning, and improve upon baselines trained using simple supervised learning. Additionally, we contribute to another area of growing interest—applying deep learning to physical problems—and focus our efforts on modeling weather phenomena.&#13;
&#13;
We define an image translation problem between different radar and satellite sensor modalities and leverage spatial and temporal locality to pose it as a multi-task problem. We improve upon naive solutions that ignore this hierarchical dataset structure and demonstrate the effectiveness of meta-learning methods to solving real-world problems. We make our code available here.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Audio-Video Language Representations</title>
<link href="https://hdl.handle.net/1721.1/139024" rel="alternate"/>
<author>
<name>Rouditchenko, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/139024</id>
<updated>2022-01-15T04:05:35Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Learning Audio-Video Language Representations
Rouditchenko, Andrew
Automatic speech recognition has seen recent advancements powered by machine learning, but it is still only available for a small fraction of the more than 7,000 languages spoken worldwide due to the reliance on manually annotated speech data. Unlabeled multi-modal data, such as videos, are now increasingly available in many different languages and provide opportunities to scale speech technologies. In this thesis, we introduce models and datasets for learning visually grounded spoken language from raw audio in videos. We propose a self-supervised audio-video model that learns from the English narration naturally present in instructional videos to relate spoken words and sounds to visual content. Our model can recognize spoken words and natural sounds in audio queries to retrieve relevant visual clips, supporting its application to video search directly using audio and spoken queries, without needing to transcribe speech to text. We further demonstrate that our model can learn multilingual audio-video representations and can successfully perform retrieval on Japanese videos. Since our approach only requires audio-visual data without transcripts, we believe it is a promising direction to enable novel speech processing tools.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Business AI Implementation Methodology and Proof&#13;
of Concept ML Models to Improve Suture Quality at&#13;
Extrusion/Orientation</title>
<link href="https://hdl.handle.net/1721.1/139021" rel="alternate"/>
<author>
<name>Rawden, Katherine Suzanne</name>
</author>
<id>https://hdl.handle.net/1721.1/139021</id>
<updated>2022-01-15T03:40:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Developing Business AI Implementation Methodology and Proof&#13;
of Concept ML Models to Improve Suture Quality at&#13;
Extrusion/Orientation
Rawden, Katherine Suzanne
Digital transformation has begun to infiltrate all industries, signifying the advent of a new era: the fourth industrial revolution. Through the digital transformation of business processes, key bottlenecks that limit firm growth can be mitigated, allowing for unprecedented scalability, scope, and opportunities for learning.&#13;
&#13;
The goals of this project are twofold within the value chain of a product family of absorbable sutures. The first goal is to provide an assessment of the driving motivations and structural transformation required for a medical device manufacturer to deploy business artificial intelligence (AI) and evolve into a digital firm. The second goal is to apply digital transformation methodology and machine learning (ML) to a proof of concept use case.&#13;
&#13;
To accomplish the first goal, a road map was developed for the deployment of business AI and an assessment of the digital maturity of the suture value chain was conducted. Random forest and linear regression ML models were developed to assist in root cause analysis at the extrusion/orientation process step of the suture manufacturing process.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Agenda Setting Effects on Twitter and Digital Media</title>
<link href="https://hdl.handle.net/1721.1/139018" rel="alternate"/>
<author>
<name>Schlessinger, Joseph Carson</name>
</author>
<id>https://hdl.handle.net/1721.1/139018</id>
<updated>2022-01-15T03:29:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Quantifying Agenda Setting Effects on Twitter and Digital Media
Schlessinger, Joseph Carson
Agenda setting theory describes how the media influences which issues enter the public agenda. New technologies have disrupted media institutions and long-established agenda setting dynamics. This thesis studies agenda setting effects in two different "new media" environments: Twitter and digital media. &#13;
&#13;
On Twitter, I study the impact of a coordinated, astroturfed campaign to elevate political hashtags favorable to the Bharatiya Janata Party (BJP) to the Trending Topics pages. While the astroturfed hashtags did reach "trending" status, I find little evidence that the coordinated tweets were popular. The highly retweeted tweets within the hashtag come from users outside of the coordinated campaign, and they are semantically distinct from the coordinated tweets. I also study the effects of the Trending Topics page, one of Twitter's agenda setting tools. I find "trending" may have a small causal effect on hashtag use. While I found that most hashtag engagement came organically from network spread rather than from "trending", the Trending Topics page did expose users less connected to the BJP Twitter community to the astroturfed hashtag.&#13;
&#13;
In digital media, I study intermedia agenda setting, or the ability of media outlets to influence other outlets. I propose a novel pipeline for quantifying influence by tracing the diffusion of quotes across media outlets. While most outlets demonstrate limited influence outside of their country's media ecosystem, wire outlets are a notable exception. I find evidence of a Russian state media echo chamber, where outlets repeatedly follow each other's quotes. Outside of this echo chamber, Russian state media has some success in setting the boundaries (topics and quoted speakers) for other outlets' reporting, but it has little impact on the positions outlets take within these boundaries. Lastly, I compare the intermedia agenda setting capabilities of Russian and American state media.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Area-Efficient Integrated Gate Drivers</title>
<link href="https://hdl.handle.net/1721.1/139017" rel="alternate"/>
<author>
<name>Murray, Elizabeth K.</name>
</author>
<id>https://hdl.handle.net/1721.1/139017</id>
<updated>2022-01-15T03:20:08Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Design of Area-Efficient Integrated Gate Drivers
Murray, Elizabeth K.
MOSFET gate drivers are important in modern power electronics due to the ubiquity of applications that require the fast and controlled switching of power MOSFETs. A controller serves as the first building block in many such systems, but with limited drive current and logic level voltage signals, it is unable to directly turn on a discrete power MOSFET. Gate drivers serve as the critical interface between the controller output and the power MOSFET.&#13;
&#13;
This thesis explores some of the considerations in designing a gate driver for hard-switching applications and presents a new design for an area-efficient integrated gate driver. Simulation results show that, for a given die area, the new design gives better performance than existing topologies.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Changing the Rules of the Game: Rule-Adjustment Mechanics in Tabletop Games</title>
<link href="https://hdl.handle.net/1721.1/139016" rel="alternate"/>
<author>
<name>Jorgensen, Teis</name>
</author>
<id>https://hdl.handle.net/1721.1/139016</id>
<updated>2022-01-15T03:17:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Changing the Rules of the Game: Rule-Adjustment Mechanics in Tabletop Games
Jorgensen, Teis
In most tabletop games, the rules of the game are typically defined and agreed upon by all players before play begins. However, there are some games which allow players to change the rules of the game during play. Through an exploratory design research methodology, the author defines a rule-adjustment mechanic and explores its application across a wide variety of tabletop games. The research culminates in the design and development of several game prototypes including SMOG, an educational game focused on passing environmental regulations. The paper concludes with a discussion around the creative, strategic, and educational opportunities that rule-adjustment mechanics offer players and a set of recommendations for educators and game designers looking to integrate such mechanics into their work.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiplexer Design for a Multi-Array Ultrasonic Imaging System</title>
<link href="https://hdl.handle.net/1721.1/139015" rel="alternate"/>
<author>
<name>Marcus, Colin</name>
</author>
<id>https://hdl.handle.net/1721.1/139015</id>
<updated>2022-01-15T04:06:45Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multiplexer Design for a Multi-Array Ultrasonic Imaging System
Marcus, Colin
Conventional ultrasound data-acquisition platforms have a limited number of I/O channels, and thus cannot be directly interfaced with multiple ultrasound transducer arrays. A high-voltage multiplexer is designed to allow a 128-signal DAQ to interface with up to eighteen 128-signal arrays in a time-multiplexed manner. The multiplexer is then implemented as part of a novel conformable multi-array ultrasound imaging system. Synthetic array ultrasound data is acquired and processed using delay-and-sum beamforming to form individual array images, then the images are combined to provide an expanded field of view compared to a single-array system.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Novel Additive Manufacturing Techniques for Cost Reduction in Space Launch Vehicles</title>
<link href="https://hdl.handle.net/1721.1/139014" rel="alternate"/>
<author>
<name>Johanson, Robert T.</name>
</author>
<id>https://hdl.handle.net/1721.1/139014</id>
<updated>2022-01-15T03:47:20Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Application of Novel Additive Manufacturing Techniques for Cost Reduction in Space Launch Vehicles
Johanson, Robert T.
Additive Friction Stir Deposition (AFSD) is a new type of Additive Manufacturing (AM) capable of high-rate deposition of aerospace aluminum. AFSD overcomes some drawbacks of other AM processes while still benefiting from an expanded design space and alternative sourcing options. AFSD is promising for the manufacture of large structural components but has drawbacks of its own. This project establishes a framework to evaluate the technical suitability and business case of AFSD. We assess the current state of AFSD and its potential application on a high-impact part. &#13;
AFSD has demonstrated good grain structure and ductility in aluminum deposition. However, the strength of the material post-deposition is below that of the high-temper forgings commonly used in the industry. Solution heat treatments may close this gap. Small-scale heat treatment studies have exceeded strength benchmarks but lack statistical significance, and the process carries significant risks.&#13;
If AFSD can overcome these technical challenges and mature its manufacturing systems, Cost-Benefit Analyses (CBAs) show that despite significant capital and development costs, AFSD has the potential to bring positive Net Present Value (NPV) to large aluminum structures in the space launch industry. Supply chain implications from vertical integration and increased sourcing options have the potential to drive additional value. In this paper, we propose an analysis framework and apply it to one application, but we anticipate that other applications will also justify the development of this enabling manufacturing technology.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Design of Electrophilic Gas Injection System for Plasma Blackout Mitigation during Hypersonic Reentry</title>
<link href="https://hdl.handle.net/1721.1/139013" rel="alternate"/>
<author>
<name>Caldelas II, Humberto Luis</name>
</author>
<id>https://hdl.handle.net/1721.1/139013</id>
<updated>2022-01-15T04:03:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Experimental Design of Electrophilic Gas Injection System for Plasma Blackout Mitigation during Hypersonic Reentry
Caldelas II, Humberto Luis
Radio communication blackout during hypersonic reentry has been a persistent issue for all planetary missions. Though blackout mitigation techniques were investigated heavily in the 1960s and 1970s, the lack of fidelity in computational tools and of integrated ground/flight testing with adequate diagnostic techniques led to limited research on integrated mitigation systems. A newfound need for a lightweight, simple, and effective blackout mitigation technique has resurfaced due to the return of capsule reentry vehicles for human exploration and the ever-increasing reliance on autonomous systems to fly the vehicle without an onboard human-in-the-loop. &#13;
&#13;
The predominant method for blackout mitigation agreed upon in literature is electrophilic gas injection. Though gas injection systems have been theorized and shown to work in a laboratory environment, there is little to no experimental demonstration of an integrated system that encompasses most, if not all, aerothermodynamic aspects of a reentering spacecraft. The focus of this thesis is on the experimental design of an integrated electrophilic gas injection system that can be used at various test facilities throughout the United States. This experiment will be used to demonstrate the feasibility of such a system and validate computational tools. A detailed analysis of design considerations for a first-of-its-kind system, the construction of components, and testing of components for design validation will be discussed. &#13;
&#13;
This thesis contains information authorized for DISTRIBUTION A: Approved for Public release via AEDC Information Release Authority IRA-5647, PA AEDC2021-084. This thesis contains patent information protected by the MIT Technology Licensing Office under MIT Case Number 23238. Information has been excluded/redacted from this thesis in order to adhere to the aforementioned distribution statement and patent protection requirements.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building / Unbuilding</title>
<link href="https://hdl.handle.net/1721.1/139011" rel="alternate"/>
<author>
<name>Younker, Andrew R</name>
</author>
<id>https://hdl.handle.net/1721.1/139011</id>
<updated>2022-01-15T04:04:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Building / Unbuilding
Younker, Andrew R
Waves of recent protests across the United States confronting structural racism demand a reckoning with colonial and confederate histories which, far from being relegated to a distant past, continue to influence material and social formations in the present. A growing awareness of unstable environments destabilizes past collective memory-making while received mythologies are losing their power to define national narratives for the masses. The not-so-distant future is clouded with apocalyptic visions and existential threats. The present is haunted by both the past and future.&#13;
&#13;
Reciprocal networks of memory building and unbuilding are inscribed upon the surface of the land, or buried below, out of sight and out of mind. National monuments and parkland infrastructures stand as attractor points in these networks, reifying hegemony and reaching simultaneously into the past and future to both define and control relationships between water, land, humans and non-humans.&#13;
&#13;
This project traces the wake of the westward expansion of the United States through three of these sites and the watersheds they were constructed from. First, the Washington Monument which sits at the center of the National Mall, constructed from the wetlands of the Anacostia and Potomac Rivers. Second, the Jefferson National Expansion Memorial, also known as the Gateway Arch, which sits on ground stabilized by a levee at the meeting of the Missouri and Mississippi Rivers. Third, Mount Rushmore National Memorial, also known as the Shrine to Democracy, blasted and carved into ancient granite formations in the headwaters of the Missouri River.&#13;
&#13;
The apparent inevitability and permanence of these monumental sites are challenged through a kind of counter-tourism that builds the unbuilding left in the wake of progress. Tools of the design disciplines are used to reveal inconsistencies that lie at the foundations of these monuments and the larger project of nation building, opening up space to engage with both the terror and beauty overwritten by the ongoing and incomplete project of settler colonialism. The project is here translated from a short film into text and still images.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Material Use and Efficiency in Ultra-Thin Towers</title>
<link href="https://hdl.handle.net/1721.1/139010" rel="alternate"/>
<author>
<name>Thomson, Kyle</name>
</author>
<id>https://hdl.handle.net/1721.1/139010</id>
<updated>2022-01-15T03:21:05Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Material Use and Efficiency in Ultra-Thin Towers
Thomson, Kyle
Compared to 20 years ago, a new class of ultra-slender skyscrapers is emerging. These towers, sometimes called “pencil towers,” are constructed at aspect ratios greater than 10:1 and are at least 985’ (300m) in height. This thesis looks to determine the material excess typically required for such towers and investigates possible theoretical strategies to improve their structural efficiency. Throughout the thesis, four main questions arise around the design and construction of these towers. The first is, to make such unprecedented towers, how much more material is used in comparison to traditional towers found in the deQo database? Secondly, based on this material usage, how much more of an impact does the construction of a pencil tower have compared to the traditional towers found in the deQo database? The third question is then, if an increase in materials helps to resist against the lateral forces of the wind, how do these towers need to be designed differently due to their slenderness? Finally, the fourth question is how do the material quantities of towers vary based on aspect ratio and varying porosities? The first question’s answer can be determined through a comprehensive study and analysis of 432 Park Avenue in NYC. This analysis shows that the material quantities of 432 Park Ave. are in fact greater than the materials used in structures such as Al Hamra Tower and The United Tower. Then, through a conversion to GWP using ECCs and SMQs, a similar result is found, where once again 432 Park Ave. shows significantly higher values compared to towers in the deQo database. On a chart of structures and their GWP, 432 Park Ave. displays 1024 kgCO2e/m2 while the next highest tower displays 460 kgCO2e/m2. Next, using the conjugate beam method, the deflection limit for 432 Park Ave. is calculated to be H/500 from core and column design. 
However, on top of that, the structure also employs two tuned mass dampers and several methods of vortex shedding reduction. Finally, the comparison of aspect ratio, porosity, and material usage is studied through MATLAB code and Excel charts. These charts show that towers above a 15:1 aspect ratio tend to exhibit the greatest use of materials in ft3/ft2. Generally, the charts also show that the lower the porosity, the greater the material usage for a given aspect ratio.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vaulted Earthen Floor Systems for Low-Cost Housing Construction</title>
<link href="https://hdl.handle.net/1721.1/139007" rel="alternate"/>
<author>
<name>Gaitan, Sabrina</name>
</author>
<id>https://hdl.handle.net/1721.1/139007</id>
<updated>2022-01-15T03:47:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Vaulted Earthen Floor Systems for Low-Cost Housing Construction
Gaitan, Sabrina
This thesis explores the viable geometries of earthen vaulted floor systems for low-cost residential construction. Typical structural floor systems consist of a reinforced concrete flat slab, which is problematic as concrete and steel are expensive, carbon-intensive materials. As urbanization rates increase globally, informal and structurally inadequate settlements become more ubiquitous. Vaulted structural floor systems can be constructed from earthen bricks to reduce cost and environmental impact. This thesis aims to investigate equilibrium solutions of barrel-vaulted structures through the use of local earth materials in emerging economies, with a particular focus on Central and South America.&#13;
&#13;
While artisans have been constructing vaults for centuries as roofing systems, this thesis investigates the highly indeterminate structural behavior and design of shell structures to broaden the scope of their application such that they can also serve as floor systems. Through the lower bound principles of masonry structural design, the spanning limits of this structural form are presented for adobe, compressed earth blocks (CEB), and compressed stabilized earthen brick (CSEB). The analysis of unreinforced masonry vaults is further explored in three dimensions through form finding methodologies that implement linear optimization to investigate viable load paths within a defined area under specified boundary conditions. The application of three-dimensional analysis introduces two-way behavior within the vault, decreasing the reaction forces, and ultimately reducing the cost of construction. This thesis shows the range of possible spans using unfired adobe, CEB, and CSEB for vaulted earthen floor systems in the residential sector.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High Velocity Supply Chain: Redesigning a Long Lead Time, Short Shelf Life Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/139006" rel="alternate"/>
<author>
<name>Witt, Doug</name>
</author>
<id>https://hdl.handle.net/1721.1/139006</id>
<updated>2026-03-13T18:58:32Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">High Velocity Supply Chain: Redesigning a Long Lead Time, Short Shelf Life Supply Chain
Witt, Doug
Composite materials, which can be simultaneously lighter and stronger than aluminum, provide multiple advantages to the aerospace industry. However, the heat-sensitive resins used in these materials begin degrading as soon as they are manufactured. As a result, providers of composite raw materials manufacture all their goods on a make-to-order basis. In turn, long lead times are passed on to OEM customers,&#13;
such as Bell. Bell must manage an inbound supply chain with worst-case parameters: expensive, cold chain material with long lead times and short shelf lives. Historically, Bell has scrapped over $1M/year of these products due to expiration, and meanwhile, multiple stockout events every year adversely impact production schedules.&#13;
&#13;
This project addresses these problems by imagining inventory as a control loop problem, where we take the system from its current form as an "open" loop and convert it to a self-correcting "closed" loop. We do this by first specifying an exact setpoint level: in our case, a base stock policy. We then reduce the difference between the specified level and the actual level by improving our forecasting technique using three-month moving averages and information already contained within the Manufacturing Resource Planning (MRP) system. We make the system more responsive by making deliveries as frequent as possible. We find this model simultaneously reduces stockout risk, expiration risk, inventory levels, and lead time apparent to Bell. We make this&#13;
assessment using both historical and simulation data. &#13;
&#13;
Finally, this line of thinking spurs us to reexamine an old problem in supply chain theory: that of optimum lot size. We find using a Monte Carlo model that increasing lot size (or order minimums) beyond what is required during a review period increases stockout risk.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Empirical Evaluation of Software Security Risk</title>
<link href="https://hdl.handle.net/1721.1/139005" rel="alternate"/>
<author>
<name>Blessing, Jenny</name>
</author>
<id>https://hdl.handle.net/1721.1/139005</id>
<updated>2022-01-15T03:09:41Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Towards Empirical Evaluation of Software Security Risk
Blessing, Jenny
This thesis provides empirical metrics for different vectors for vulnerability introduction, with a particular focus on cryptographic software. Through quantitative analysis of source code and vulnerability metrics from a variety of cryptographic libraries, we arrive at a more precise notion of what types of modifications introduce a higher level of risk into a system. Empirical evidence of the causes of security risk will provide technically-grounded guidance in the ongoing policy debate over exceptional access, enabling the security community to more objectively evaluate proposed exceptional access systems.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Envisage: Investigating Design Intentions, Visual Perception through Eye Tracking of Architectural Sketches</title>
<link href="https://hdl.handle.net/1721.1/139002" rel="alternate"/>
<author>
<name>Zhang, Xiaoyun</name>
</author>
<id>https://hdl.handle.net/1721.1/139002</id>
<updated>2022-01-15T03:08:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Envisage: Investigating Design Intentions, Visual Perception through Eye Tracking of Architectural Sketches
Zhang, Xiaoyun
Are we able to perceive an architect’s intention through observation of his or her sketches? Yes, but it requires a probing process of observation. Across time and continents, master architects have developed a collection of processes for expressing powerful design intentions through succinct and dynamic representation, or design sketches. Different types of sketches describe, express, or gesture about the architecture they represent. They deliver active ideas that are not limited to objects but provide a raw sense of both the perception and creation enabled through visual thinking.&#13;
&#13;
I propose a method to utilize eye-tracking as a translator between the graphics and the architects’ perception of three types of intention: shape, composition, and circulation. My hypothesis is that we can perceive how architects represent these intentions -- through the means of graphics, which allows a more ambiguous and dynamic translation between intention and sketches, we can probe the underlying process by observing a viewer’s eye movements. Furthermore, heat maps, obtained from eye movements, can be adapted to a machine learning algorithm -- Image-conditioned Generative Adversarial Networks (GANs). I use this algorithm to translate the raw sense of space and visual gesture to capture human-level information acquisition of these intentions. &#13;
&#13;
To demonstrate the work, I first discuss the history of visual power in design and a shift towards units and segmentation, covering the development from the emergence of design drawings to the innovation in parametric design. I then proceed with an eye-tracking study where I asked graduate architecture students to observe sketches by Louis Kahn. I study how the graphics of heat maps from eye-tracking decode the participants’ perception of intentions in sketches based on a shared educational background in architecture. Then, I propose a framework for utilizing such a representation system to train machines to predict human-level view patterns. Finally, I examine how effectively this system functions with an image-to-image machine learning algorithm known as the image-conditioned GANs.&#13;
&#13;
The study suggests that mechanical eye movements reveal a shared visual-thinking procedure that human designers have practiced unconsciously. Such a procedure, if learned by machines, would facilitate a creative process that utilizes the informal dynamics derived from eye movement in visual representation in design.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Terahertz Second-Harmonic Generation in Extreme-Confinement Cavities</title>
<link href="https://hdl.handle.net/1721.1/139000" rel="alternate"/>
<author>
<name>Ateshian, Lamia</name>
</author>
<id>https://hdl.handle.net/1721.1/139000</id>
<updated>2022-01-15T04:03:19Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Terahertz Second-Harmonic Generation in Extreme-Confinement Cavities
Ateshian, Lamia
It remains a standing challenge to produce high-power electromagnetic sources operating in the spectral range of 0.1-10 THz (the “terahertz gap”), a frequency band for applications ranging from spectroscopy to security and high-speed wireless communications. In this thesis, we will analyze a method to produce coherent radiation spanning the THz gap by second-harmonic generation (SHG) in low-loss dielectric structures, starting from the ∼100 GHz range. For this purpose, we present hybrid THz-band dielectric cavity designs that combine (1) nonlinear materials enhanced by phonon resonances with (2) extreme field concentration in high-quality-factor resonators. An efficient device for THz SHG would enable cascaded parametric frequency converters extensible into the mid-IR spectrum and beyond.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rational Fabrication of High-Performance and Scalable Opal Crystals for Thermo-Fluidic Applications</title>
<link href="https://hdl.handle.net/1721.1/138999" rel="alternate"/>
<author>
<name>Díaz-Marín, Carlos D.</name>
</author>
<id>https://hdl.handle.net/1721.1/138999</id>
<updated>2022-01-15T03:11:21Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Rational Fabrication of High-Performance and Scalable Opal Crystals for Thermo-Fluidic Applications
Díaz-Marín, Carlos D.
Inverse opals have continuously attracted interest as a scalable, ordered porous material capable of enhancing energy, fluid, mass, or ion transport in a wide variety of applications. In particular, in heat transfer applications they have been used as porous coatings for condensers and boilers for increased efficiency in steam power plants and in two-phase thermal management devices with the potential of enabling next-generation electronic devices with high power density. However, several challenges remain with the fabrication of high-performance inverse opals due to limitations and defects of the initial opal template that ultimately prevent these structures from fulfilling their potential.  &#13;
&#13;
In this thesis, we first present a review of opal fabrication techniques and their implementation in heat transfer applications. We highlight previous challenges using these methods to achieve highly permeable structures in a simple way and we introduce slope self-assembly as a means to overcome several of these challenges. Despite its potential, we describe how fundamental understanding of this method is lacking, which limits its use with an arbitrary sphere size.&#13;
&#13;
Second, in order to address this limited understanding, we develop a scaling-based model to elucidate the self-assembly process. Our model predicts the existence of two regimes: a gravity-driven flow regime for small colloidal particles, where the process is dominated by fluid flow, and a capillary-driven regime for large colloidal particles where the capillary forces between spheres dominate the process. With this model, we are able to predict and control the opal coverage on the substrate as a function of the experimental parameters. &#13;
&#13;
Third, we perform experimental validation of our model by fabricating opal samples under different combinations of sphere sizes, colloidal concentrations, slope angles, and temperatures and analyze these samples with a custom image processing code. Our results confirm the validity of our model: for spheres smaller than 2 μm, the process is dominated by gravity-driven flow and the coverage can be controlled by changing the temperature, angle of inclination of the substrate, colloidal concentration, and sphere diameter, while for spheres larger than 10 μm the process is dominated by the capillary-driven flow and it can be controlled by changing the initial volume of solution, the concentration, and the sphere size. &#13;
&#13;
Finally, we use the insights generated by our model to rationally fabricate millimeter-scale samples with monolayer coverage with sphere sizes between 500 nm and 10 μm, which is 10 times larger than the sphere size possible with the vertical deposition method commonly used for thermo-fluidic applications. Even larger, centimeter-scale samples are possible with some sphere stacking or small uncovered areas. Additionally, we show how this method can be used on a copper substrate, showing its applicability in heat transfer applications. Lastly, we highlight some future opportunities based on our work, including the fabrication of multi-porous structures and the experimental tuning of the crystallinity of the opals while maintaining large-scale area coverage.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>To Build Home and to Live in (U)Hygge</title>
<link href="https://hdl.handle.net/1721.1/138997" rel="alternate"/>
<author>
<name>Li, Wuyahuang</name>
</author>
<id>https://hdl.handle.net/1721.1/138997</id>
<updated>2022-01-15T03:26:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">To Build Home and to Live in (U)Hygge
Li, Wuyahuang
Since the nineteenth century, Danish society has established a standard of normality through building bourgeois homes. To write about Danish homes is to write an ethnography of hygge, a nationalized, domestic aesthetic encapsulated in the sense of comfort, togetherness, and well-being. Meanwhile, in mainstream media and scholarship, minorities’ homes have been largely represented as "ghettos," distinct urban territories characterized by social dysfunction and unemployment.&#13;
&#13;
This thesis looks at the home as the site where citizenship is produced and performed through household objects, furniture, and architecture. With an archive built on lived experiences collected from minorities living in Denmark, this research examines precarious bodies and non-normative modes of living and loving to articulate uhygge—the strange, foreign, othered, and unhomely—as queerness in the heteronormative discourses of a nation, a spatial form of being together that maintains the histories of diverse struggles.&#13;
&#13;
To Build Home and to Live in (U)Hygge produces a three-act play that reconstitutes the cultural meanings of uhygge through acts of passing, turning, and arriving. The play unfolds narratives of home and belonging on the scales of homeland, house, and body. “Act I: To Pass, or Nobody Passes” positions migrants and queers of color as intersecting vectors of uhygge who haunt an essentialist Danish identity in order to re-evaluate failures in assimilating social norms. “Act II: The Politics of Turning” reflects on both the struggles of turning away from majoritarian projects, such as hygge, and the different coalitional publics that this “turning” enables. Finally, “Act III: A Home on Uranus” constructs a hybrid orientation between assimilation and marginalization to imagine a refuge built with jouissance, the delight in pain and danger.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GENERALIZING ROBUSTNESS VERIFICATION FOR MACHINE LEARNING</title>
<link href="https://hdl.handle.net/1721.1/138996" rel="alternate"/>
<author>
<name>Mohapatra, Jeet</name>
</author>
<id>https://hdl.handle.net/1721.1/138996</id>
<updated>2022-01-15T03:21:39Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">GENERALIZING ROBUSTNESS VERIFICATION FOR MACHINE LEARNING
Mohapatra, Jeet
Verifying the robustness of neural networks under a specified threat model is a fundamental yet challenging task. Although much work has been done to quantify the robustness of DNNs to ℓₚ-norm-bounded adversarial attacks, there are still a few gaps between available guarantees and those needed in practice. In this thesis we focus on resolving two of these limitations. 1) While current verification methods mainly focus on the ℓₚ-norm threat model of the input instances, robustness verification against semantic adversarial attacks inducing large ℓₚ-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose a framework, Semantify-NN, that extends ℓₚ-norm verification to semantic verification. 2) Randomized smoothing is a recently proposed defense against adversarial attacks that has achieved state-of-the-art provable robustness against ℓ₂ perturbations. A number of publications have extended the guarantees to other metrics, such as ℓ₁ or ℓ∞, by using different smoothing measures. Although the current framework has been shown to yield near-optimal ℓₚ radii, the total safety region certified by the current framework can be arbitrarily small compared to the optimal. We provide Higher Order Verification: a general framework that improves the certified safety region for these smoothed classifiers without changing the underlying smoothing scheme, allowing the resulting classifier to be provably robust to multiple threat models at once.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finding the In-between Space</title>
<link href="https://hdl.handle.net/1721.1/138995" rel="alternate"/>
<author>
<name>Zhu, Emma (Yimeng)</name>
</author>
<id>https://hdl.handle.net/1721.1/138995</id>
<updated>2022-01-15T03:56:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Finding the In-between Space
Zhu, Emma (Yimeng)
Lines can seem benign as a series of pixels or vector objects, but lines are political and social, as when they are used to draw a divide between groups of beings: both in concrete terms, in the way that lines demarcate borders and thus define patterns of migration, and more obliquely, in the way that lines are drawn between disciplines such as architecture and visual art. This sense of division takes on a more instrumental quality under Euclidean geometry, which imbues lines with the authority to keep things tidy, neat, and hygienic… Is there any line beyond a “clean” line? Ingold provides a taxonomy of lines in his book, ranging from thread, trace, cut, and crack to crease and more. These alternative interpretations of the line hint at non-Euclidean ways of perceiving the world. &#13;
&#13;
By examining the threads, cuts, cracks, and creases, the lines that are not clean or well defined, this thesis attempts to make room for the “in-between space.” The “in-between space” happens at various scales and in various contexts, from the very intimate level of perceiving the self’s (one’s own body’s) boundary as an in-between space to connect across entities, to the rendering of screens (medium and object) as liminal and porous lines facilitating unusual linkages, and eventually to the reinterpretation of borders and migration as spaces for collective imagination.&#13;
&#13;
The approach is mostly self-reflexive and self-referential, combining selected writings and works I did over the past three years, while suturing them with the thought experiment of “finding the in-between space.” As the title suggests, I am in the state of finding and not-knowing. I want to be as open as possible towards different perspectives.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Reliable AI via Efficient Verification of Binarized Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/138993" rel="alternate"/>
<author>
<name>Jia, Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/138993</id>
<updated>2022-01-15T03:29:10Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Towards Reliable AI via Efficient Verification of Binarized Neural Networks
Jia, Kai
Deep neural networks have achieved great success on many tasks and even surpass human performance in certain settings. Despite this success, neural networks are known to be vulnerable to the problem of adversarial inputs, where small and human-imperceptible changes in the input cause large and unexpected changes in the output. This problem motivates the development of neural network verification techniques that aspire to verify that a given neural network produces stable predictions for all inputs in a perturbation space around a given input. However, many existing verifiers target floating point networks but, for efficiency reasons, do not exactly model the floating point computation. As a result, they may produce incorrect results due to floating point error.&#13;
&#13;
In this context, Binarized Neural Networks (BNNs) are attractive because they work with quantized inputs and binarized internal activation and weight values and thus support verification free of floating point error. The binarized computation of BNNs directly corresponds to logical reasoning. BNN verification is, therefore, typically formulated as a Boolean satisfiability (SAT) problem. This formulation involves numerous reified cardinality constraints. Previous work typically converts such constraints to conjunctive normal form to be solved by an off-the-shelf SAT solver. Unfortunately, previous BNN verifiers are significantly slower than floating point network verifiers. Moreover, there is a dearth of prior research that aspires to train robust BNNs.&#13;
&#13;
This thesis presents techniques for ensuring neural network robustness against input perturbations and checking safety properties that require a network to produce certain outputs for a set of inputs. We present four contributions: (i) new techniques that improve BNN verification performance by thousands of times compared to the best previous verifiers for either binarized or floating point neural networks; (ii) the first technique for training robust BNNs; previous robust training techniques are designed to work with floating point networks and do not produce robust BNNs; (iii) a new method that exploits floating point errors to produce witnesses for the unsoundness of verifiers that target floating point networks but do not exactly model floating point arithmetic; and (iv) a new technique for efficient and exact verification of neural networks with low dimensional inputs.&#13;
&#13;
Our first contribution comprises two novel techniques to improve BNN verification performance: (i) extending the SAT solver to handle reified cardinality constraints natively and efficiently; and (ii) novel training strategies that produce BNNs that verify more efficiently.&#13;
&#13;
Our second contribution is a new training technique for training BNNs that achieve verifiable robustness comparable to floating point networks. We present an algorithm that adaptively tunes the gradient computation in PGD-based BNN adversarial training to improve the robustness.&#13;
&#13;
We demonstrate the effectiveness of the methods in the first two contributions by presenting the first exact verification results for adversarial robustness of nontrivial convolutional BNNs on the widely used MNIST and CIFAR10 datasets. No previous BNN verifiers can handle these tasks. Compared to previous (potentially incorrect) exact verification of floating point networks of the same architectures on the same tasks, our system verifies BNNs hundreds to thousands of times faster and delivers comparable verifiable accuracy in most cases.&#13;
&#13;
Our third contribution shows that the failure to take floating point error into account leads to incorrect verification that can be systematically exploited. We present a method that efficiently searches inputs as witnesses for the incorrectness of robustness claims made by a complete verifier regarding a pretrained neural network. We also show that it is possible to craft neural network architectures and weights that cause an unsound incomplete verifier to produce incorrect verification results.&#13;
&#13;
Our fourth contribution shows that the idea of quantization also facilitates the verification of floating point networks. Specifically, we consider exactly verifying safety properties for floating point neural networks used for a low dimensional airborne avoidance control system. Prior work, which analyzes the internal computations of the network, is inefficient and potentially incorrect because it does not soundly model floating point arithmetic. We instead prepend an input quantization layer to the original network. Our experiments show that our modification delivers similar runtime accuracy while allowing correct and significantly easier and faster verification by input state space enumeration.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Exploration of KNN-based Neural Control of Pneumatically Actuated Artificial Muscle</title>
<link href="https://hdl.handle.net/1721.1/138992" rel="alternate"/>
<author>
<name>Koo, Bon H. Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/138992</id>
<updated>2022-01-15T03:30:06Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Exploration of KNN-based Neural Control of Pneumatically Actuated Artificial Muscle
Koo, Bon H. Brandon
The advantages of accurately and efficiently augmenting human motion, such as locomotion and limb and load manipulation, are relatively well defined. Such augmentation attempts to aid in resolving the human limitations of energy availability, efficiency, and magnitude; well-executed human augmentation can dispatch and manipulate greater amounts of energy at lower cost, extending the available resources over a greater amount of work. These advantages are highlighted in applications across many fields, one of the most noticeable being defense and aerospace. The energy-starved and limited deployments often seen in these areas benefit greatly from personnel being able to conserve input and amplify output. For example, the increasing loads that soldiers must carry and manipulate, especially in recent conflicts, as well as the crippling energy cost of operating an Extra-vehicular Mobility Unit (EMU) worn by astronauts, are among many use cases that highlight an increasing need for a streamlined and efficient solution for the augmentation of human motion under extreme environments.&#13;
&#13;
The potential solution for these issues, suggested in this thesis, is the development of an exoskeleton. However, such a device must tackle a delicately interwoven two-part problem whose parts must be addressed individually: what actuation the user is controlling, and how the user controls these actuators. In this thesis, we present the groundwork for addressing both of these problems through a parallel effort of prototyping, modelling, and experimenting on a promising method of neural control through K-Nearest Neighbor (KNN) sorting and the incremental development of a biomimetic actuator in the form of Pneumatic Artificial Muscles (PAMs).&#13;
&#13;
KNN classification is a lightweight non-parametric learning algorithm that can rapidly identify and sort an input into pre-determined categories given initial training data sets. In this thesis, I present a method of using KNN classification to sort neural impulse data, collected through surface Electromyography (sEMG) on select muscle groups, to predict the kind and magnitude of motion before/as a motion is executed by the human upper-limb muscles. EMG data from 6 muscle groups, all in the arm/shoulder region, performing one of 2 speeds of 6 motions (totalling 12 independent motions) were collected and processed in real time for key landmark features, which were then presented to the KNN algorithm as training data. It appears that the KNN sorting, based on the input training data, was sufficient not only to identify the type of motion presented, but also the magnitude of motion between the two magnitudes trained. Although the validity of considering this a prediction is not clear in this thesis due to the lack of time-scale data relative to real motion, the KNN was capable of executing such sorting well within a time-frame that can be regarded as practically instant compared to human reaction times.&#13;
&#13;
PAMs show great promise as a substitute for the rotary and linear actuation methods commonly used in exoskeletons and other biomimetic devices due to their mechanical similarity to human muscle; specifically, PAMs are a closer analog to myocytes than most other forms of rotary/linear actuator utilized in exoskeleton applications today. Their length and stiffness behavior is extremely similar to that of human muscle at the cellular level, and motor units can also be simulated through the use of multiple PAMs in parallel. Such similarities allow many muscular behaviors to be mimicked. Specifically, this thesis explores the potential of PAMs to accurately portray the well-documented emergent muscular behavior of Henneman's size principle, with the underlying hypothesis that a higher fidelity of biomimetic replication will lead to a more seamless and transparent exoskeleton device featuring higher perceived invisibility. Through prototyping and simulating multiple configurations of PAMs, this study shows the performance they can provide in single and group organizations, along with comparisons between the prototype PAM behavior and human muscle performance. Finally, this study suggests potential for the PAMs' real-world deployment by simulating a realistic task that utilizes these biomimetic properties.&#13;
&#13;
Furthermore, this thesis lays out a projected research path for these potential solutions moving forward. The KNN classification prediction model of muscle motion through the analysis of sEMG data, in conjunction with further developments in increasing the production capabilities, stability, performance, and form factors of PAMs as actuators, shows great promise as a human augmentation exoskeleton platform.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Testbed Implementation for the Development of a New Technology to Treat Obstructive Sleep Apnea</title>
<link href="https://hdl.handle.net/1721.1/138989" rel="alternate"/>
<author>
<name>Franz, Erwin</name>
</author>
<id>https://hdl.handle.net/1721.1/138989</id>
<updated>2022-01-15T03:11:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Testbed Implementation for the Development of a New Technology to Treat Obstructive Sleep Apnea
Franz, Erwin
In 2015, the 5.9 million people diagnosed with Obstructive Sleep Apnea (OSA) cost the United States healthcare system USD 12.4 billion, while the 23.5 million people suffering from undiagnosed OSA extracted costs of USD 149.6 billion through comorbidities, degraded mental health, motor vehicle and workplace accidents, and lost productivity [9]. These numbers only increase when calculated for the global population. Current treatments are cumbersome, uncomfortable, and disruptive to the normal sleeping environment, leading to noncompliance among patients. Serious, long-term consequences associated with lack of treatment for OSA include an increased risk for cardiovascular, cerebrovascular, and metabolic syndrome disorders that can ultimately lead to premature death. &#13;
&#13;
The Therapeutic Technology Design and Development (TTDD) research group has been working on a Harvard Catalyst project that aims to develop a device to treat OSA. This new technology has the potential to improve compliance in OSA patients by unblocking the patient’s respiratory airway using a different method from the current standard of care. The goal of this research was to implement a testbed for this device using system design principles to optimize prototyping and iterations. The introduction of a testbed capable of simulating a tongue obstructing an artificial airway proved to reduce the time between prototype iterations and allowed the team to keep iterating in the development of the prototype without the need for human testing. For this use case, additional explorations showed that the testbed helped in identifying design flaws responsible for half of the reliability failures presented by the original prototype. Future explorations for this testbed include the validation of biomedical multi-physics computational models of human soft tissue for the design of new obstructive sleep apnea treatment mechanisms.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MORE COMPLEX THAN WASTELAND REPARATIVE SITE HISTORY ALONG THE BOSTON-REVERE BORDER</title>
<link href="https://hdl.handle.net/1721.1/138986" rel="alternate"/>
<author>
<name>McCann, Tess Davenport</name>
</author>
<id>https://hdl.handle.net/1721.1/138986</id>
<updated>2022-01-15T03:01:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">MORE COMPLEX THAN WASTELAND REPARATIVE SITE HISTORY ALONG THE BOSTON-REVERE BORDER
McCann, Tess Davenport
In this project, I seek a way to establish a site without emptying a place. I examine the way that project proponents talk about the Suffolk Downs development, Boston’s largest-ever development project along the town border with Revere, and argue that they empty the site through the use of spatial and temporal metaphor. The emptiness of the site allows for, even requires, large-scale interventions that “solve” the “problem” posed by emptiness. I read these interventions in the context of solutionism, a framework that inherits Enlightenment-era ideas of human dominance over the non-human world. I turn to the history of Suffolk Downs and show that there has been a cycle of emptying and improving on this land over the past 300 years of settler presence on it. Previous generations of developers have similarly emptied this place by relying on the rhetorical trope of wasteland, which allowed for human technocratic intervention in the landscape. These interventions, I argue, tended to fail, creating new wastelands that needed improvement. By telling a history of Suffolk Downs, I suggest that, despite the prevailing development rhetoric, the place’s past is not singular and the space is not simply a container for development activity.&#13;
&#13;
I explore “repair” as a development paradigm that resists emptying at the oil storage facility owned by Irving Oil and Global Partners, which is adjacent to Suffolk Downs. Within the logic of repair, sites can be constructed not by emptying them, but rather by embracing what’s already there and what’s been there. At the oil farm site, storage, itself a condition of emptiness, is the stuff of the site, and can be used as the basis for design interventions. More broadly, repair allows for an interdisciplinary approach to site design and discourse and has the potential to include more voices in development processes. Repair is not a silver-bullet solution to development—and that’s largely the point. Because it resists emptying, repair can be radical; and history, because it clearly states “there’s something here,” can be reparative.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Use of Iterative Feedback in Private Frequency Estimation</title>
<link href="https://hdl.handle.net/1721.1/138984" rel="alternate"/>
<author>
<name>Richardson, Yaateh</name>
</author>
<id>https://hdl.handle.net/1721.1/138984</id>
<updated>2022-01-15T03:07:18Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">On the Use of Iterative Feedback in Private Frequency Estimation
Richardson, Yaateh
This thesis evaluates iterative feedback as a mechanism to optimize privatization algorithms, specifically the family of Local Differential Privacy (LDP) algorithms. The main contribution is the Iterative LDP Algorithm, which uses iterative feedback to outperform generic LDP mechanisms in certain scenarios, including those with low response rates (small numbers of samples). This addresses a current major pain point of LDP mechanisms, as they are prone to significant losses in utility in those scenarios. Conversely, it is less effective than existing methods as the number of responses tends to infinity.&#13;
&#13;
This work contains an extensive comparison of the Iterative LDP Algorithm to several existing LDP mechanisms and baselines for frequency estimation. Experiments show gains in estimation performance in high-privacy regimes over synthetic and real data. These experiments help distinguish scenarios where the Iterative LDP Algorithm genuinely learns from those where performance gains are merely induced by regularization. Learning and substantial performance gains were observed when samples are generated from the power-law family of distributions (distributions that look linear on logarithmically scaled axes). Several epidemiological, social network, and other important internet-based counting phenomena are known to follow such distributions [25][5]. The Iterative LDP Algorithm outperforms existing variable-privacy LDP mechanisms in the aforementioned regimes. Thus, iterative feedback is a viable enhancement for existing LDP mechanisms.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Hydrogen Supply Chain for Low-Carbon Power Generation Under Future Uncertainties: A Tradespace Exploration Approach</title>
<link href="https://hdl.handle.net/1721.1/138983" rel="alternate"/>
<author>
<name>Chan, Sin Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/138983</id>
<updated>2022-01-15T03:37:28Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Investigating the Hydrogen Supply Chain for Low-Carbon Power Generation Under Future Uncertainties: A Tradespace Exploration Approach
Chan, Sin Kai
The decarbonization of the power sector, which has been heavily dependent on fossil fuels, has been one of the critical global issues following the Paris climate agreement. Renewable energy is seen as the clean and sustainable solution, but the lack of these resources, due to their non-uniform distribution, presents significant challenges to the energy transition plans of certain highly industrialized countries, especially those that rely on energy imports to meet demand. Hydrogen is a potential carrier for the export of surplus energy from regions with abundant renewable resources, as it can be stored and transported in bulk over long distances in various forms such as liquid hydrogen, ammonia, and organic hydrides.&#13;
&#13;
In this thesis, a systems engineering approach is taken to evaluate the entire value chain of hydrogen from its production to its end-use application in electricity generation. The multi-attribute tradespace exploration (MATE) technique is applied to study the cost and emissions impact of power generation using various hydrogen pathways compared to fossil fuels. The external and internal uncertainties are then analyzed using epoch-era analysis (EEA) and Monte Carlo simulation, respectively. The application of these methods is demonstrated through a case study on Japan, where the government has set an ambitious goal of halving the nation’s carbon emissions in a decade. The results suggest that the combustion of imported liquefied natural gas in a combined cycle gas turbine with carbon capture, utilization, and storage (CCUS) technology is a value-robust solution in the immediate future, considering economic, technological, and policy uncertainties. While low- or zero-carbon hydrogen offers incremental utility, it is found to be not yet cost-competitive for large-scale adoption.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Efficient Data Structure for Implementing Splitter Hyperobjects in Task-Parallel Systems</title>
<link href="https://hdl.handle.net/1721.1/138981" rel="alternate"/>
<author>
<name>Qi, Qi</name>
</author>
<id>https://hdl.handle.net/1721.1/138981</id>
<updated>2022-01-15T03:27:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">An Efficient Data Structure for Implementing Splitter Hyperobjects in Task-Parallel Systems
Qi, Qi
In this thesis, I present and analyze the novel stack-augmented split-tree data structure to support splitter hyperobjects for task-parallel systems. Splitters can mitigate races on shared, nonlocal state, where parallel nested tasks make independent local modifications without affecting shared history. The data structure is inspired by “persistent” trees, but refined to achieve optimal performance in the common case. I prove that in a program with &#119899; splitter variables using the mechanism based on the stack-augmented split-tree data structure, read and write accesses to a splitter variable cost Θ(1) except for the first access after a task migration, which costs &#119874;(log &#119899; + log &#119863;) where &#119863; is the maximum depth of task nesting. This splitter data structure will enable the parallelization of search algorithms for computationally expensive applications, such as SAT solvers, theorem provers, and game-playing programs. This thesis also contains theory and implementation work on other topics related to task-parallel programming and work stealing schedulers. Some parts of this thesis represent joint work with William Kuszmaul.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of a Mathematical Approach to Rip Saw Arbor Design and Scheduling</title>
<link href="https://hdl.handle.net/1721.1/138979" rel="alternate"/>
<author>
<name>Birnbaum, Harry</name>
</author>
<id>https://hdl.handle.net/1721.1/138979</id>
<updated>2022-01-15T03:40:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Implementation of a Mathematical Approach to Rip Saw Arbor Design and Scheduling
Birnbaum, Harry
The following discussion will detail efforts to improve the margin profile of a hardwood flooring plant through the reduction of wood waste using mathematical optimization techniques. This investigation focuses on the first step in the flooring manufacturing process: the rip saw. This step is one of the largest sources of waste in the process. Furthermore, it is highly automated and software driven. Recent technological implementations at the rip saw, and the wealth of data recorded during the process, make it an ideal target for optimization.&#13;
&#13;
We first describe the details and challenges of the manufacturing process and the ways that a flooring company, AHF Products, deals with these challenges. We then introduce a mathematical formulation for a column generation-based algorithm for rip saw arbor design and scheduling from a 1996 paper by Y. Fathi et al. We tweak the algorithm to accommodate the unique nuances of this particular plant and situation, and we incorporate it into a holistic arbor design and scheduling model. Finally, we discuss the challenges of implementing this complex model and ensuring that it is used after my internship.&#13;
&#13;
Along the way, we will discuss the challenges faced throughout research and modeling. I will share the other discoveries that appeared during development. Most significantly, we will discuss the importance of not just yield alone but also demand adherence when considering rip saw setup. We will also share the many other projects and benefits that come from tracking data on the manufacturing process and making that data easily available to stakeholders.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flow: a microservice architecture for achieving confidence in the compatibility of deployed microservices</title>
<link href="https://hdl.handle.net/1721.1/138976" rel="alternate"/>
<author>
<name>Talkar, Arman</name>
</author>
<id>https://hdl.handle.net/1721.1/138976</id>
<updated>2022-01-15T03:08:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Flow: a microservice architecture for achieving confidence in the compatibility of deployed microservices
Talkar, Arman
When developing a Software-as-a-Service (SaaS) product using a microservices architecture, there often exists a conflict between product stability, by virtue of testing, and agility/throughput, by virtue of being able to deploy independently and quickly. This thesis presents Flow, a pattern for microservice development that eases this tradeoff. Using semantic versioning, explicit specification, and development/deployment-time tooling, Flow provides reasonable assurance that deployed microservices will be compatible with one another without the need for time-consuming, comprehensive end-to-end tests. When combined with integration testing, Flow provides assurance of both the compatibility and functionality of deployed microservices. Flow’s key advantage over existing solutions is its extremely fast feedback loop that provides developers with compatibility information at compile time, allowing development to be compatibility-aware. Flow is demonstrated using a visual model, a standalone artifact that simulates the state and enforcement rules that comprise the pattern. The model serves as a blueprint for the full-fledged tooling that would enforce the pattern in a production environment.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Graph Neural Network Training on Large Graphs</title>
<link href="https://hdl.handle.net/1721.1/138975" rel="alternate"/>
<author>
<name>Stathas, Nickolas</name>
</author>
<id>https://hdl.handle.net/1721.1/138975</id>
<updated>2022-01-15T04:09:09Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Optimizing Graph Neural Network Training on Large Graphs
Stathas, Nickolas
Graphs can be used to represent many important classes of structured real-world data. For this reason, there has been an increase in research interest in various machine learning approaches to solve tasks such as link prediction and node property prediction. Graph Neural Network models demonstrate good performance on such tasks. However, the depth of the models and the size of the graphs they can be trained on are constrained either by the low processing throughput of CPUs or by the limited memory capacity of GPUs. Techniques such as neighborhood sampling are often used to create smaller mini-batch training examples that fit in GPU memory. In this thesis, I provide a systematic performance analysis of GNN training codes written using PyTorch Geometric, the most popular machine learning framework for GNNs. Through this performance analysis, I uncover significant performance bottlenecks related to neighborhood sampling and GPU data transfers. To address these issues, I create FastPyG: a performance-engineered fork of PyTorch Geometric, which achieves a 3-6× speedup over comparable PyTorch Geometric codes without impacting model accuracy. The core contribution included in FastPyG is fast_sampler, an efficient and parallel neighborhood sampling implementation in C++.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multiple Target Tracking in Experimental Multistatic MIMO mmWave Radar Sensor Networks</title>
<link href="https://hdl.handle.net/1721.1/138974" rel="alternate"/>
<author>
<name>Denove, ENS George Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/138974</id>
<updated>2022-01-15T03:42:12Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Multiple Target Tracking in Experimental Multistatic MIMO mmWave Radar Sensor Networks
Denove, ENS George Thomas
Location awareness of non-collaborative targets within 5G and beyond systems is becoming ever more prominent. This paper presents a passive, hardware-agnostic multiple target tracking (MTT) system that addresses the shortcomings in current wireless positioning technology and is capable of seamlessly integrating with millimeter-wave (mmWave) communication infrastructure. The developed system is a radar sensor network (RSN) composed of distributed low-cost mmWave devices, which are designed to simultaneously transmit and receive for improved network throughput. We develop Doppler compensation and signal decoding algorithms integral to properly resolving target position information through multistatic sensing channels. Results indicate our system’s advancements achieve performance improvements over existing systems in non-collaborative target detection and MTT in harsh wireless environments.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data for Housing Justice: Examining Activists’ Use of Open Government Data for Housing Justice in Boston, MA and New York, NY</title>
<link href="https://hdl.handle.net/1721.1/138973" rel="alternate"/>
<author>
<name>Navalkha, Chenab</name>
</author>
<id>https://hdl.handle.net/1721.1/138973</id>
<updated>2022-01-15T03:55:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Data for Housing Justice: Examining Activists’ Use of Open Government Data for Housing Justice in Boston, MA and New York, NY
Navalkha, Chenab
Over the past decade, governments the world over have expanded access to public data through open government data (OGD) portals, from the local to national levels. The Organization for Economic Cooperation and Development describes OGD as a philosophy and a set of policies that aims to serve three social, political, and economic ends: transparency, accountability, and value creation. Despite governmental efforts to make data public, research shows that OGD is underutilized and that little is known about citizens’ preferences and interests in utilizing these data. In the U.S., a recent study of a grassroots organization focused on affordable housing described a case of resident-initiated data collection to counteract misrepresentations and misalignments of the local municipal data. Given this disjuncture between the data needs of grassroots actors and the data provided by OGD systems, my study focuses on understanding how local activists negotiate limitations of public data and develop strategies to collect the information they need in their broader campaigns for housing justice. Applying theories of data feminism and insurgent planning, I analyze data practices of housing data activists in Boston and New York City. Through their activities, housing data activists act as data intermediaries who bridge the gap between OGD systems and residents and community organizers. In doing so, they not only facilitate local government’s fulfillment of its goals of transparency and accountability, but also pursue a more liberatory and justice-oriented future for communities facing the threat of displacement. Housing ‘data activists’ utilize a data justice approach that relates historical and contextual analysis of structural oppression to the contemporary geography of the eviction crisis, and proactively counters parallel data practices within the real estate industry to facilitate tenant management and real estate speculation.
I argue that OGD systems represent a new opportunity for government officials to take action in order to redress the long-standing power differentials between local tenants and organizers and the real estate industry. If government officials take seriously the values of transparency and accountability, they must take cues from the housing data activists in redesigning OGD systems in such a way that privileges and facilitates use by local residents over use by real estate firms.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpolated Experience Replay for Improved Sample Efficiency of Model-Free Deep Reinforcement Learning Algorithms</title>
<link href="https://hdl.handle.net/1721.1/138972" rel="alternate"/>
<author>
<name>Sander, Ryan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/138972</id>
<updated>2022-01-15T03:59:52Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Interpolated Experience Replay for Improved Sample Efficiency of Model-Free Deep Reinforcement Learning Algorithms
Sander, Ryan M.
The human brain is remarkably sample efficient, capable of learning complex behaviors given limited experience [16]. This sample efficiency property is crucial for effectively training robust deep reinforcement learning agents on continuous control tasks: when limited experience is available, poor sample efficiency can yield sub-optimal and unstable policies. To improve sample efficiency in these tasks, we propose Neighborhood Mixup Experience Replay (NMER) and Bayesian Interpolated Experience Replay (BIER), modular replay buffers that interpolate transitions with their closest neighbors in normalized state-action space. NMER preserves a locally linear approximation of the transition manifold by only interpolating transitions with similar state-action features. BIER expands upon NMER by predicting interpolated transitions queried by NMER using learned Gaussian Process Regression models defined over a transition’s neighborhood. These interpolated transitions, predicted via Bayesian linear smoothing, are then used to update the policy and value functions of deep reinforcement learning agents in a likelihood-weighted fashion. NMER and BIER achieve greater sample efficiency than other state-of-the-art replay buffers when evaluated on model-free, off-policy reinforcement learning algorithms and OpenAI Gym MuJoCo environments. This improved sample efficiency can enable agents to learn robust and generalizable policies on continuous control tasks in settings where data is limited, such as many real-world robotics tasks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Challenges and Opportunities to Achieving Equitable Residential Building Electrification in Chicago</title>
<link href="https://hdl.handle.net/1721.1/138969" rel="alternate"/>
<author>
<name>Kim, Amber</name>
</author>
<id>https://hdl.handle.net/1721.1/138969</id>
<updated>2022-01-15T03:49:31Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Challenges and Opportunities to Achieving Equitable Residential Building Electrification in Chicago
Kim, Amber
In this thesis, I answer the question: what are the challenges and opportunities to achieving a just and equitable residential building electrification strategy in Chicago? The focus on equitable residential building electrification is motivated by 1) Chicago’s need to tackle residential building gas emissions in order to meet its decarbonization goals, and 2) the use of an energy justice framework. I pursue my research question through a mixed-method approach of stakeholder interviews, mapping, and statistical analysis. Stakeholder interviews paint the picture of a politically challenging landscape with a few key opportunities to advance equitable residential building electrification. The mapping and statistical analysis identify socio-demographic patterns that pose equity challenges for residential electrification. Based on these findings, I conclude with recommendations for Chicago planners, policymakers, and stakeholders to work towards a just and equitable residential building electrification strategy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>'A Bridge Over the Chasm': Rhetoric and Reflexivity in Housing Advocacy</title>
<link href="https://hdl.handle.net/1721.1/138968" rel="alternate"/>
<author>
<name>Kelly, Devin</name>
</author>
<id>https://hdl.handle.net/1721.1/138968</id>
<updated>2022-01-15T03:11:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">'A Bridge Over the Chasm': Rhetoric and Reflexivity in Housing Advocacy
Kelly, Devin
As we cast housing in the language of crisis, development, shortage, and units, we lose sight of its value in the context of social relations and human wellbeing. The rhetoric that has evolved to explain gaps in housing access intersects powerfully with homelessness policy and advocacy, and ideas about leadership and solutions. In a case study of a housing advocacy subculture in Anchorage, Alaska, I ask whether naming, and critically examining, one’s own experiences of being housed can disrupt habitual ways of acting and leading and create more informed, collaborative, compassionate, and transformational approaches to change in the housing and homelessness arena. Through a lens of critical reflexivity, I identify interlocking structural conditions, or “blueprints,” that constitute housed rhetoric and relations. I propose adapting a series of existing action-based tools to unpack these blueprints and support inclusive, collaborative policy work across difference.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Make vs. Buy Strategy for Expendable and Attritable Aircraft Engine Development</title>
<link href="https://hdl.handle.net/1721.1/138967" rel="alternate"/>
<author>
<name>Soybel, Jamison</name>
</author>
<id>https://hdl.handle.net/1721.1/138967</id>
<updated>2022-01-15T04:06:03Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Designing a Make vs. Buy Strategy for Expendable and Attritable Aircraft Engine Development
Soybel, Jamison
Military aircraft engine customers are increasingly demanding agility and affordability for both engine development and production. In this thesis, we propose that this inflection point in aerospace and defense towards agility and affordability will be characterized by time-based competition and its "winners" will be the firms that are most successful in creating agile, innovative development programs and flexible, responsive production supply chains. &#13;
&#13;
This thesis builds on relevant operations frameworks that have supported similar questions in adjacent industries, and we propose novel analyses that can help identify and resolve key lead-time drivers and provide frameworks that can help firms understand their options. Through our analysis, we determine that by applying various engineering, design, and operations strategies that leverage novel technologies like additive manufacturing, firms can likely reduce production lead times by up to 45% for expendable and attritable engines. As capabilities are further refined, firms can also unlock many significant cost savings opportunities, setting the stage for an era of agility and affordability for expendable and attritable engines in the military aircraft engine industry.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aviation Effects on Local Business: Mapping Community Impact and Policy Strategies for Noise Remediation</title>
<link href="https://hdl.handle.net/1721.1/138966" rel="alternate"/>
<author>
<name>Bullock, Carson</name>
</author>
<id>https://hdl.handle.net/1721.1/138966</id>
<updated>2022-01-15T04:09:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Aviation Effects on Local Business: Mapping Community Impact and Policy Strategies for Noise Remediation
Bullock, Carson
Changing flight procedures present a natural experiment which can be leveraged to study the effects of aviation noise change on business activity. While it is widely recognized that aviation produces a variety of economic benefits and environmental disbenefits, the effects of noise on businesses sit at an understudied intersection of economic and environmental impact. Using geospatial analysis of Boston and Chicago, two metropolitan areas which experienced flight path changes, this thesis assesses the extent to which businesses near airports relocate or close in response to noise increases. Business activity was compared before and after noise changes to form the basis of a difference-in-differences approach, which controls for many of the other factors that affect business activity at a given location. This study also acts as a revealed preference approach for assessing the implicit costs of aviation noise and the role of regulators in responding to those costs. No statistically significant aggregate effect of aviation noise on business activity was found. For outlier locations with large business changes and large noise changes, exogenous non-noise factors were identified which are likely responsible. Available evidence suggests that regions of large noise increase have comparable business growth to regions which do not experience noise change, even after controlling for the effects of geographic region and initial noise levels.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Inversion of Programs</title>
<link href="https://hdl.handle.net/1721.1/138965" rel="alternate"/>
<author>
<name>Morejon, David</name>
</author>
<id>https://hdl.handle.net/1721.1/138965</id>
<updated>2022-01-15T03:18:37Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Parametric Inversion of Programs
Morejon, David
Programmers and mathematicians often find themselves in situations where they must solve an inverse problem for a particular function. Solving an inverse problem generally means determining the input that produced a given output. However, this is ambiguous for most functions since the function may have multiple inputs that produce an output. Parametric inversion takes a new approach to solving inverse problems by proposing a method to make a non-injective (many-to-one) function invertible through a parameter. This parameter is used to distinguish between elements in the input space of a function that produced a given output. The existing parametric inversion theory primarily describes inverting mathematical functions. This work extends the theory to encompass inversion of programs. Specifically, we introduce a language, IR, and address problems related to inversion of a simple code block, control flow, and re-use of variables. This includes a practical implementation in Julia that is able to correctly invert a suite of sample programs.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Field-Portable Dissolved Gas Sensing and Perspectives in Aqueous Microplastic Detection</title>
<link href="https://hdl.handle.net/1721.1/138962" rel="alternate"/>
<author>
<name>Blevins, Morgan Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/138962</id>
<updated>2022-01-15T03:59:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Field-Portable Dissolved Gas Sensing and Perspectives in Aqueous Microplastic Detection
Blevins, Morgan Grace
Global temperature rise and increased atmospheric carbon dioxide (CO2) levels have affected the health of the world’s ocean and water ecosystems, impacting the balances of natural carbon cycling and causing ocean acidification. Additionally, as global temperatures rise, thawing permafrost has stimulated increased release of methane (CH4), a gas with a shorter lifetime in the atmosphere but with even more heat trapping ability than CO2. In situ analysis of dissolved gas content in surface waters is currently performed with large, expensive instruments, such as spectrometers, which are coupled with gas equilibration systems that extract dissolved gas from water and feed it to the sensor. Accurate, low cost, and portable sensors are needed to measure the dissolved CH4 and CO2 concentration in water systems to quantify their release and understand their relationship to the global carbon budget. At the same time, while greenhouse gases are well established threats to water ecosystems, the ubiquity and potential consequences of microplastics in aqueous environments are just beginning to be recognized by the environmental research community. Microplastics (MPs) are small particles of polymer debris, commonly defined as being between 1 µm and 1000 µm. Despite the pervasiveness of MPs, our ability to characterize MPs in the environment is limited by the lack of technologies for rapidly and accurately identifying and quantifying MPs. This thesis is concerned with the engineering challenges prompted by the need for high quality and quantity environmental data to better study the impact, cycling, and prevalence of these pollutants in aqueous environments. &#13;
&#13;
Three distinct investigations are presented here. First, the design of the Low-Cost Gas Extraction and Measurement System (LC-GEMS) for dissolved CO2 is presented. At just under $600 to build, the LC-GEMS is an ultra-portable, toolbox-sized instrument for dissolved gas sensing in near-surface waters. The LC-GEMS was characterized in the lab and demonstrated linear relationships with dissolved CO2 as well as temperature. Lab calibrations and subsequent field testing in the Little Sippewissett Marsh, in Falmouth, Massachusetts showed that the LC-GEMS captures both diurnal and minute-timescale trends in dissolved CO2. &#13;
&#13;
Second, this thesis presents the novel design of three simple and low-cost planar nanophotonic and plasmonic structures as optical transducers for measuring dissolved CH4. Through simulations, the sensitivity of the structures is evaluated and found to be superior in the reflectance intensity readout mode to that of the standard surface-plasmon-polariton-mode Spreeta sensor. A practical, small, and low-cost implementation of this chip with a simple intensity-based measurement scheme is proposed. This design is novel in the space of dissolved gas monitoring because it shows potential to measure directly in the water phase while being robust and low-cost to implement. &#13;
&#13;
Finally, this thesis presents a literature review and perspective to motivate the development of field-deployable microplastic sensing techniques. A framework for field-deployable microplastic sensing is presented and seeks to inform the MP community of the potential in both traditional MP analysis techniques and unconventional methods for creating rapid and automated MP sensors. The field-deployability framework addresses a full scope of practical/technological trade-offs to be considered for portable MP detection.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Modelling and Treatment Optimization of Acute Endovascular and Respiratory Conditions</title>
<link href="https://hdl.handle.net/1721.1/138959" rel="alternate"/>
<author>
<name>Dillon, Tom</name>
</author>
<id>https://hdl.handle.net/1721.1/138959</id>
<updated>2022-01-15T03:30:11Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computational Modelling and Treatment Optimization of Acute Endovascular and Respiratory Conditions
Dillon, Tom
This thesis aims to use computational tools and a deterministic clinical design process to optimize treatment for acute endovascular and respiratory conditions. Specifically, focus is placed on optimizing treatment for two acute pathologies: (1) the Coronavirus disease 2019 (COVID-19), and (2) Abdominal Aortic Aneurysms (AAA). &#13;
&#13;
In light of the recent COVID-19 pandemic, a low-cost, rapidly deployable emergency ventilator design using a novel fluidic oscillator was developed. The design addresses potential ventilator shortages resulting from the ongoing and future pandemics by converting a continuous positive airway pressure (CPAP) machine into a mechanical ventilator using a part that is (i) inexpensive, (ii) easily manufactured without the need for specialized equipment, (iii) simple to assemble and maintain, (iv) does not require any electronics, and (v) has no moving components that could be prone to failure. A Computational Fluid Dynamics (CFD) model was used to assess flow characteristics of the system, and a prototype was developed and tested with a commercial benchtop respiratory simulator. The simulations showed clinically relevant periodic oscillations and outlet pressures between 8-23 cm H2O. Both the prototype and simulations responded promptly to disrupted oscillations, an analogue for patient‐initiated breaths. &#13;
&#13;
AAA is a swelling in the lower portion of the aorta, the largest blood vessel in the body. The incidence of this potentially fatal condition is 5-10 cases per 100,000 in the U.S. The preferred treatment for AAA is minimally invasive endovascular repair (EVAR), whereby a compliant tubular material reinforced with a metallic stent (an endograft) is implanted inside the aneurysm. For aneurysms that extend across major abdominal vessels (juxtarenal aneurysms), a fenestrated (or sub-branched) endograft is required. The lead time to obtain a patient-specific fenestrated graft from a commercial manufacturer is on the order of a few weeks, which is often unsuitable for patients that present with an emergent medical condition. Physicians instead mostly choose to manually modify off-the-shelf non-fenestrated endografts, though this process is often tedious and subject to calculation inaccuracies. In this thesis, a computer program for automated fitting of fenestrations on non-fenestrated endografts is proposed - "FenFit". FenFit provides the physician with an efficient, intuitive user interface for modifying endovascular grafts, developed using MATLAB GUI designer. A novel search algorithm using 3D to 2D projection mapping was developed to determine the optimal placement of fenestrations on the endograft at reduced computational cost, and a bijective conformal mapping algorithm was developed for texture mapping of the fenestrations to the 3D aortic graft space. A pilot clinical study was conducted in conjunction with our collaborators at Beth Israel Deaconess Medical Center (BIDMC), Boston, to evaluate the efficiency of FenFit against physician manual planning. Results to date have shown that FenFit can reduce workflow planning time from 22.5 minutes to 32 seconds (n = 25, p &lt; 0.001). In 20% of cases, FenFit found a valid graft alignment where the physician could not via trial and error. &#13;
&#13;
Guided by computational tools, these combined bodies of work propose expedited, patient-specific treatment for urgent medical conditions. It is hoped that these accelerated treatment regimes may ultimately translate to improved clinical outcomes and reduced fatality rates.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>M.I.celium mexicanus: Rejecting Modernity through Zapotec Futurism</title>
<link href="https://hdl.handle.net/1721.1/138958" rel="alternate"/>
<author>
<name>Torres, Lynced Angelica</name>
</author>
<id>https://hdl.handle.net/1721.1/138958</id>
<updated>2022-01-15T03:45:02Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">M.I.celium mexicanus: Rejecting Modernity through Zapotec Futurism
Torres, Lynced Angelica
[M.I.]celium mexicanus is an entry point for architects and humans to consider transforming their relationship to the Earth’s critical zone through reconciliation with mushrooms to cultivate fungal allyship. The thesis examines and reimagines a future of building that drives towards the biological vs. that which is mineralized and controlled through unempathetic forces such as extraction through mining, greenwashing renewable energy to sustain mining production, and commercialization of architecture and planning practices. These elements are contaminants in the culture and lives of the Zapotec community residing in Juchitan, Oaxaca and perpetuate a historical system of colonisation and exploitation by not only foreign powers, but their own country and people.&#13;
&#13;
As of 2021, the city has not been able to completely rebuild from the damage of the 2017 hurricane that struck the southern coast of the Isthmus de Tehuantepec. Government aid is minimal and is directed towards westernized modular building units like the concrete block, which are not ideal given the hot climate, serve as a unitized symbol of economic status, and are also susceptible to destruction. &#13;
&#13;
The house and temple of the future embeds all the ideals, values, and ACTIONS that it may collectively take to revitalize the very soil and territory that offers itself as a substrate for life. The actions reflect and respect the rituals of the “The People” as they are no longer considered inhabitants of the past, incapable of appreciating and forging technology for the modern world. Rather, in an act of architectural and environmental anarchy, they guide the future away from extraction and towards circular economies through their collective wisdom of the past, experience in the survival of countless apocalypses, and with their close ties to mushrooms.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Space of mind The Hidden Architecture in the Time of Pandemic</title>
<link href="https://hdl.handle.net/1721.1/138957" rel="alternate"/>
<author>
<name>Xu, Ziyu</name>
</author>
<id>https://hdl.handle.net/1721.1/138957</id>
<updated>2022-01-15T03:17:14Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Space of mind The Hidden Architecture in the Time of Pandemic
Xu, Ziyu
The thesis sets its background in the current and post-pandemic context, where cognitive activities have outgrown the imagination of static architecture. The virtual work environment raises many questions about the physical rooms we inhabit. The definitions of location and space are blurred and gradually disconnected from programs, productivity, and memories. In this project, architectural spaces are envisioned to be the echo of actions and the extension of one’s state of mind. With the approach of Worldbuilding based on cognitive activities and accumulated work traces, three virtual scenes are proposed to reveal the spatial constructs we are not aware of: Follies of Cognitive Labor, Field of Imagery, and Garden of Data. Each of the scenes starts with a different type of digital material and adopts its own growing mechanism. After the process of recording, work traces and cognitive activities are translated into spatial notations and Loci that represent the richness and dynamics of the cognitive realm.&#13;
&#13;
In a similar way to making physical architecture, the methodology of making virtual architecture addresses three questions. First, how to collect building materials? Second, what and how to construct? Third, how can the built objects be perceived and navigated? The method of digital construction provides the potential for enhancement or decay of the created Loci (built objects) based on interactions such as refocusing and recalling. Essentially, the work aims to explore general ways and tools to materialize productivity and memories, and the design part will be presented through the lens of my perception as an example.&#13;
&#13;
The project envisions infinite recording and formation of spaces as results of the irregularity of mental activities. The behaviors of working, browsing, and pausing depict the infinite making and collapsing of spaces. Architecture is gaining another dimension and thus should be described differently.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Impulse Audio Source Separation using Generative Adversarial Networks for Phase Generation</title>
<link href="https://hdl.handle.net/1721.1/138956" rel="alternate"/>
<author>
<name>Piercy, Phoebe K.</name>
</author>
<id>https://hdl.handle.net/1721.1/138956</id>
<updated>2022-01-15T03:26:16Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Improving Impulse Audio Source Separation using Generative Adversarial Networks for Phase Generation
Piercy, Phoebe K.
This thesis explored separating impulse noise from a desired signal, for the purposes of hearing protection for soldiers and musicians. An evaluation of current techniques in source separation, such as matrix demixing methods (Independent Component Analysis, Independent Vector Analysis), and masking methods (Ideal Ratio Mask, Ideal Binary Mask), amongst others, concluded that Time-Frequency masking of the noisy signal spectrogram was the best candidate audio separation method for dynamic soundscapes such as tactical fields and music. We followed with an experimental investigation of the role of phase in Time-Frequency masking, finding its importance to the intelligibility of speech to be paramount. In particular, the construction of a Complex Ideal Ratio Mask (cIRM), altering both magnitude and phase information in the spectrogram, was identified as the most promising method of impulse source separation, with separated speech intelligibility comparable to clean speech. This motivated us to develop a method to generate an approximation of the cIRM, but without prior source information. As such, the growing use of neural networks as a tool in source separation and phase estimation was presented and evaluated. Experiments were conducted to evaluate the potential of Generative Adversarial Networks (GANs), often used in image transformation, in generating the phase of the cIRM, with human test subjects to evaluate whether intelligibility of separated speech was improved. The GAN showed promise in generating phase-like results, although imperfect transformation resulted in an audible quality decrease, suggesting that the approach was unlikely to produce the natural sound required by musicians. However, for the tactical case, where intelligibility is valued over quality, consonant reconstruction and improved impulse attenuation was observed using our GAN-estimated cIRM. 
This improvement was reflected in an increase in the signal to noise ratio as compared to clean speech, and a decrease in the same metric compared to the impulse noise, demonstrating the improved clean speech contributions, and the reduction in impulse noise contributions in the separated output. These results show the potential, with better resources, for GAN-generated phase to be used to improve intelligibility during audio source separation of impulse noise from speech, and motivates further exploration on this topic.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preventing Opioid Overdose: From Prediction to Operationalization</title>
<link href="https://hdl.handle.net/1721.1/138952" rel="alternate"/>
<author>
<name>Kaw, Neal</name>
</author>
<id>https://hdl.handle.net/1721.1/138952</id>
<updated>2022-01-15T03:27:22Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Preventing Opioid Overdose: From Prediction to Operationalization
Kaw, Neal
The opioid epidemic remains a significant public health challenge in the US. A potential catalyst for reducing the incidence of opioid-related harm is the development and operationalization of risk stratification models. Prior work has focused on the statistical performance of such models without considering operational implications. Predicting the most severe outcome (fatal overdose) is a particular challenge due to imbalanced datasets. We partner with Staten Island Performing Provider System to access claims data and electronic health records for the patient population on Staten Island. For this population, we develop a single machine learning model for predicting a full range of adverse opioid-related events, and achieve an area under the receiver operating characteristic curve of 0.95, 0.87, 0.83 for the outcomes of any adverse opioid event, opioid overdose, and fatal opioid overdose, respectively, even in the absence of training data on fatal overdoses. Subsequently, we conduct a rolling horizon analysis to evaluate the capacity requirements of intervention policies leveraging the model. We find that the model can be used to identify a small intervention cohort (1% of the highest-risk patients) which includes the majority (69%) of adverse opioid events, allowing for targeted interventions with limited intervention capacity. Finally, we quantify the tradeoff between predictive performance and concerns that arise in implementation, such as interpretability, delay in data feeds, and prediction window length. Our results suggest that predictive performance does not need to be sacrificed to satisfy implementation concerns.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Research on Corporate Bond Defaults in the Chinese Market</title>
<link href="https://hdl.handle.net/1721.1/138950" rel="alternate"/>
<author>
<name>Chen, Yiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/138950</id>
<updated>2022-01-15T03:34:50Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A Research on Corporate Bond Defaults in the Chinese Market
Chen, Yiwen
Using data from the Chinese fixed income market, this thesis builds a logistic regression model consisting mainly of financial condition variables and financial report quality variables. The analysis quantifies the relative effect of each variable and thus provides a reference for credit risk assessment. Supporting evidence also shows that the model can effectively predict default one year in advance and that it performs better than the major rating agencies.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Examples and Distribution Shift: A Representations Perspective</title>
<link href="https://hdl.handle.net/1721.1/138945" rel="alternate"/>
<author>
<name>Nadhamuni, Kaveri</name>
</author>
<id>https://hdl.handle.net/1721.1/138945</id>
<updated>2022-01-15T03:39:30Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Adversarial Examples and Distribution Shift: A Representations Perspective
Nadhamuni, Kaveri
Adversarial attacks cause machine learning models to produce incorrect predictions by minimally perturbing their input. In this thesis, we take a step towards understanding how these perturbations affect the intermediate data representations of the model. Specifically, we compare standard and adversarial representations for models of varying robustness using a variety of similarity metrics. We find that it is possible to detect adversarial examples by examining nearby examples, though we also find that this method can be circumvented by an adaptive attack. We then explore methods to improve generalization to natural distribution shift and hypothesize that models trained with different notions of feature bias will learn fundamentally different representations. We find that combining such diverse representations can provide a more comprehensive representation of the input data, potentially allowing better generalization to novel domains. Finally, we find that representation similarity metrics can be used to predict how well a model will transfer between tasks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Ultra-Low Power CMOS GHz Circulator</title>
<link href="https://hdl.handle.net/1721.1/138944" rel="alternate"/>
<author>
<name>Morimoto, Yukimi</name>
</author>
<id>https://hdl.handle.net/1721.1/138944</id>
<updated>2022-01-15T03:28:26Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Investigation of Ultra-Low Power CMOS GHz Circulator
Morimoto, Yukimi
Quantum computing promises unprecedented computational power for solving today’s challenges, such as simulating molecules and forecasting natural disasters more accurately. A practical quantum computing system needs thousands or even millions of qubits, yet today’s quantum computing systems have demonstrated operation with only a few to hundreds of qubits. CMOS circuits are candidates for the readout and control circuitry of a practical quantum computer because of their compactness and scalability. In particular, building CMOS readout and control circuits at cryogenic temperatures can reduce power loss at the interface between the circuitry and the qubits and improve speed.&#13;
&#13;
Circulators are among the most common blocks in qubit readout circuits, but most cryogenic circulators today are bulky. This project investigated the design of scalable cryo-CMOS circulators. To reduce power consumption below the cooling budget in the cryogenic regime while maintaining loss and isolation performance, I took three approaches: 1) decreasing the modulation frequency, 2) increasing the transistor size, and 3) using a more advanced technology node. Phase shifters and filters were redesigned to reduce the modulation frequency, and a full duplexer was implemented based on the previous work [1], in TSMC 65 nm technology and Intel 22 nm technology. The circuit was simulated both at room temperature and at 4 K.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Injectability and Viability of Cells using Viscoplastic Lubricated Flows</title>
<link href="https://hdl.handle.net/1721.1/138941" rel="alternate"/>
<author>
<name>Dhulipala, Somayajulu</name>
</author>
<id>https://hdl.handle.net/1721.1/138941</id>
<updated>2022-01-15T03:14:33Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Enhancing Injectability and Viability of Cells using Viscoplastic Lubricated Flows
Dhulipala, Somayajulu
Hydrogels have been used as scaffolds and structural supports for cell growth. However, injecting these hydrogels requires substantial forces, which also produce high shear on the cells and ultimately cell death. Current methods to mitigate this problem suffer from limited applicability, poor durability, lack of stability, and no significant enhancement in injectability. Here, we propose a viscoplastic lubricated gel co-flow, in which the flow of the cell-laden payload through needles is facilitated by coaxial lubrication from a lower-yield-stress gel, to mitigate shear death, enhance injectability, and enable stable flow. In this study, we optimize fluidic and flow parameters to minimize drag and shear on the payload gel. We establish regime maps of stable coaxial lubrication using both simulations and experiments. The velocity profile inside the needle is measured using particle tracking velocimetry (PTV) to visualize the shear-free transport of the inner payload. Experimentally, we achieved a 4x reduction in injection force and a 5x increase in the plug (zero-shear) region. Finally, we propose a theoretical model to explain the simulation and experimental results.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe and Efficient Motion Planning through Chance-Constrained Nonlinear Optimization</title>
<link href="https://hdl.handle.net/1721.1/138939" rel="alternate"/>
<author>
<name>Dawson, Charles Burke</name>
</author>
<id>https://hdl.handle.net/1721.1/138939</id>
<updated>2022-01-15T03:33:53Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Safe and Efficient Motion Planning through Chance-Constrained Nonlinear Optimization
Dawson, Charles Burke
Uncertainty is the harsh reality for robots deployed in the real world. Outside of a carefully structured laboratory environment, neither the locations of obstacles nor the true state of the robot can be known with perfect certainty. This makes planning safe maneuvers challenging, particularly for robots with many degrees of freedom and rich geometry. Existing uncertainty-aware planners fall short by considering only uncertainty in the environment or uncertainty in the robot's state. In this thesis, we develop a chance-constrained trajectory optimization framework, which we call Sequential Convex Optimization with Risk Allocation (SCORA), to address this gap in the state of the art. This planner is capable of solving challenging, high-dimensional motion planning problems while managing the risk due to uncertainty in the environment and in the robot's own state. In addition, SCORA supports robots with nonlinear dynamics and arbitrary geometry, and it outperforms state-of-the-art planners in terms of both safety and planning time on a range of robotics tasks, including autonomous parallel parking, control of a mobile robot arm, and planning for multi-agent manipulation tasks.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Complexity of  Nonconvex-Strongly-Concave Smooth Minimax Optimization Using First-Order Methods</title>
<link href="https://hdl.handle.net/1721.1/138938" rel="alternate"/>
<author>
<name>Li, Haochuan</name>
</author>
<id>https://hdl.handle.net/1721.1/138938</id>
<updated>2022-01-15T03:34:57Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">On the Complexity of  Nonconvex-Strongly-Concave Smooth Minimax Optimization Using First-Order Methods
Li, Haochuan
The problem of minimax optimization arises in a wide range of applications. When the objective function is convex-concave, almost the full picture is known. However, the general nonconvex-concave setting is less understood. In this work, we study the complexity of nonconvex-strongly-concave minimax optimization using first-order methods. First, we provide a first-order oracle complexity lower bound for finding stationary points of nonconvex-strongly-concave smooth min-max optimization problems. We establish a lower bound of Ω(√κ ε⁻²) for deterministic oracles, where ε defines the level of approximate stationarity and κ is the condition number, which matches the existing upper bound achieved in (Lin et al., 2020b) up to logarithmic factors. For stochastic oracles, we provide a lower bound of Ω(√κ ε⁻² + κ^(1/3) ε⁻⁴). Second, we study a specific first-order algorithm, gradient descent-ascent (GDA). We show that for quadratic or nearly quadratic nonconvex-strongly-concave functions under our assumptions, two-time-scale GDA with appropriate stepsizes achieves a linear convergence rate. We then extend our result to stochastic gradient descent-ascent (SGDA).
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Control and Convolutional Neural Net Based Pose Estimation for On-Orbit Assembly</title>
<link href="https://hdl.handle.net/1721.1/138935" rel="alternate"/>
<author>
<name>Dolan, Sydney</name>
</author>
<id>https://hdl.handle.net/1721.1/138935</id>
<updated>2022-01-15T03:32:00Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Control and Convolutional Neural Net Based Pose Estimation for On-Orbit Assembly
Dolan, Sydney
On-orbit assembly is a critical technology for the future of space exploration, as it will enable larger, adaptive space structures that can support a variety of space exploration needs. The field of on-orbit assembly is still nascent, and few projects exist that technically investigate its feasibility. TESSERAE, or Tessellated Electromagnetic Space Structures for Exploration of Reconfigurable Adaptive Environments, a project out of the MIT Media Lab Space Exploration Initiative, is a multi-year research effort to develop a set of self-assembling, decentralized tiles that use electromagnets to dock with one another and build space structures. Successful early-stage technology demonstrations have occurred on both zero-g flights and on the International Space Station, validating the basic premise of operation. Before TESSERAE can be demonstrated in an open orbit environment, an on-board flight computer must be developed to model the dynamics of the system and control its assembly. &#13;
&#13;
This thesis seeks to contribute to the development of on-orbit assembly by presenting a first analysis of multi-unit, agentless control for autonomous self-assembly in orbit. Specifically, three areas are investigated: path planning, control for proximity operations, and monocular pose estimation. Path planning for on-orbit assembly is analogous to the traveling salesman problem, as the tiles will have to visit one another once to create the whole structure. Accordingly, two search algorithms, a greedy algorithm and a branch-and-bound algorithm, are evaluated on their ability to produce a result with minimal ΔV. The branch-and-bound algorithm has the best performance, as it is an exhaustive search method guaranteed to find the global minimum.&#13;
&#13;
Next, a sliding mode controller is implemented for rendezvous and station-keeping between tiles. The control force is adapted to be realistic for the electromagnetic attraction between tiles. Using electromagnets for control poses a unique challenge due to the nonlinearity of both the system dynamics and the magnets. The controller was able to drive the tiles to rendezvous, even with the conservative approach to electromagnetic force modeling. These results indicate that TESSERAE tiles will be able to rendezvous with one another even when meters apart. &#13;
&#13;
Finally, this work investigates the potential of convolutional neural network-based estimation methods for on-orbit assembly. The pose estimation network is broken into three steps: object detection, landmark regression, and pose estimation. A comparative analysis of several object detection networks, landmark regression approaches, and pose solvers is presented. This work found that bottom-up landmark regression methods are the most suitable for on-orbit assembly, but their accuracy needs further improvement before implementation. Increasing the size of the training dataset for the convolutional neural network would dramatically improve the pose estimation results. Future work will develop additional images, as well as investigate the effects of maintaining resolution throughout the pipeline to improve feature detection for pose estimation.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A computational framework for the large scale simulation of the dynamics of highly flexible filaments in a viscous flow</title>
<link href="https://hdl.handle.net/1721.1/138933" rel="alternate"/>
<author>
<name>Chomette, Grégoire Alain</name>
</author>
<id>https://hdl.handle.net/1721.1/138933</id>
<updated>2022-01-15T04:06:23Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">A computational framework for the large scale simulation of the dynamics of highly flexible filaments in a viscous flow
Chomette, Grégoire Alain
In science and engineering, the behavior of filaments immersed in viscous flows is of significant interest for a wide range of biological applications. In order to simulate the dynamics of the filaments, in this thesis we develop, validate, and test a physics-based computational framework by coupling finite elements and boundary integral methods. Then we introduce a data-driven approach to complement the high-fidelity numerical methods with an inexpensive surrogate model based on neural networks.&#13;
&#13;
First, we present the methods to model the dynamics of slender fibers immersed in a viscous fluid within a physics-based computational framework. Motivated by the very large deformations that the fibers experience, especially at extreme aspect ratios, we implement a finite element framework based on the exact Kirchhoff-Love beam formulation. This method constitutes a significant improvement over other proposed approaches, which simulate the fibers only in the regime of high rigidity, thus ignoring the complex non-linear deformations that very slender filaments experience. The long-range hydrodynamic interactions of the filaments are modeled through the slender body theory for Stokes flows, where the disturbance motion of the incompressible viscous flow due to the presence of the slender bodies can be approximated by a distribution of Stokeslets. We use GPU resources to solve the dense system generated by the fluid model and push the scale to a cloud of &#119874;(500) flexible filaments. Additionally, to address the numerical instabilities observed as filaments come close to each other, we incorporate a model to enforce rigid contact by adding a discrete repulsion force to the filaments when penetration is detected. We validate the integrated computational model against experimental data on the sedimentation of slender filaments in a viscous flow, a first among comparable works. Finally, we use the model to explore the very low stiffness regime and gain new physical insights on the equilibrium velocities and stability of sedimenting filaments.&#13;
&#13;
Then, we establish the framework to complement physics-based methods with data-driven machine learning approaches, and test our model on a proof-of-concept problem. More specifically, we employ a neural network functional to predict the macroscopic stiffness of porous structures from their geometries. We train the weights of the model with synthetic data associating the effective Young’s moduli of porous structures with the size of the pores present in the material. To mitigate the high cost of generating data with PDE solvers, we investigate recent advances in active learning to lower the number of training points required to reach a given level of accuracy. We then discuss the trade-off between predictive error and number of training points, and show that active learning can improve the performance of the model by more than an order of magnitude. Finally, in addition to the benefit of the surrogate model's low predictive cost, we take advantage of the functional nature of the neural network to solve the inverse problem of finding the geometry that yields a target macroscopic stiffness.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Modeling of Osmotically Assisted Membrane Separations with Multicomponent Solution-diffusion Theory</title>
<link href="https://hdl.handle.net/1721.1/138928" rel="alternate"/>
<author>
<name>Foo, Zi Hao</name>
</author>
<id>https://hdl.handle.net/1721.1/138928</id>
<updated>2022-01-15T03:24:29Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Computational Modeling of Osmotically Assisted Membrane Separations with Multicomponent Solution-diffusion Theory
Foo, Zi Hao
Osmotically assisted membrane processes (OAMP), such as forward and counter-flow reverse osmosis, are a class of separation technologies that leverage osmotic pressure differences for water purification. Accurate modeling of the solute-coupling effects for transmembrane transport is integral to the development, and subsequent optimization, of OAMP unit operations. Theoretically, in binary mixtures, species separation is achieved due to the solution-diffusion mechanism, which results from the combination of selective species partitioning and the relative diffusive rates within the membrane matrix.&#13;
&#13;
To model membrane separations involving multicomponent mixtures, the transport equations are commonly linearized with the binary solution-diffusion model. Referred to as the method of superposition, the transport between multicomponent mixtures is decoupled into a series of binary transport processes, where the resultant species fluxes are computed as linear combinations of the fluxes from individual binary solution-diffusion models. This method benefits from the usage of binary membrane parameters, such as the support structural parameter and the water and solute permeability coefficients, which are easily characterized with established experimental protocols.&#13;
&#13;
However, recent publications highlighted that errors of up to 66 % were obtained for water and solute fluxes when multicomponent transport was modeled with superposition. The deviations were attributed to solute-solute and solute-solvent coupling effects, which are significant in multicomponent mixtures at moderate to high concentrations, or in mixtures with large excess Gibbs energy of mixing.&#13;
&#13;
In this study, we develop a new multicomponent solution-diffusion model by combining the frameworks of multicomponent Fickian diffusion and solution-diffusion theory. The derived model introduces multicomponent membrane parameters, which are coined as the diagonal and cross solute permeabilities, to incorporate the impact of species interactions on the chemical potentials. These multicomponent membrane parameters are highly concentration dependent, but recover their binary limits as the concentrations of the counter solutes tend towards infinite dilution. For multi-electrolyte mixtures, the diagonal and cross solute permeabilities can be computed from the multicomponent diffusion coefficients and the membrane’s binary solute permeabilities.&#13;
&#13;
By incorporating solute coupling effects with multicomponent solution-diffusion, this study demonstrates significant improvements in model-to-experiment agreement for species fluxes. The model was evaluated relative to experimental data for 7 unique combinations of forward osmosis processes involving ternary electrolyte mixtures. The average absolute deviation (AAD) of the solution-diffusion model decreased from 21.0 % to 3.0 % when solute coupling was incorporated.&#13;
&#13;
In the absence of multicomponent diffusion coefficients, we explore an alternative method to regress coupling phenomena. We propose modeling the effective membrane permeabilities as a linear combination of the binary permeability and an excess permeability. Analogous to solution thermodynamics, the latter parameter quantifies the extent of departure from ideality arising from solute-solute interactions within the membrane matrix. The proposed method introduces two additional regression parameters for each solute present in the separation. Using the H2O-NaCl-EtOH forward osmosis processes as a case study, we demonstrate the model’s robustness in regressing water and solute fluxes across a wide range of concentrations. The AAD of the solution-diffusion model decreased from 66.1 % to 7.2 % over a range of NaCl concentrations from 0 to 1.5 M and EtOH mass fractions of 0 to 0.5.&#13;
&#13;
In essence, this study demonstrates that significant improvements in multicomponent species fluxes can be obtained when solute interactions are incorporated. With the improved model, we envision that more accurate energetic and techno-economic performance of desalination systems can be predicted, leading to better representation of the viabilities of the technologies.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from Financial Markets and Misallocation</title>
<link href="https://hdl.handle.net/1721.1/138926" rel="alternate"/>
<author>
<name>Yu, Jiaheng</name>
</author>
<id>https://hdl.handle.net/1721.1/138926</id>
<updated>2022-01-15T03:39:27Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Learning from Financial Markets and Misallocation
Yu, Jiaheng
I quantify how information frictions and learning from financial markets affect resource misallocation. I develop a dynamic model in which financial markets guide managers in large investment decisions – mergers and acquisitions. Due to information frictions, mis-valuation of firms' own value and of the potential gains from mergers and acquisitions prevents socially beneficial resource reallocation. Compared to David et al. (2016), learning from the financial markets accumulates over time and also occurs upon the announcement of mergers and acquisitions. In the structural estimation, I target novel data moments, including the sensitivity of merger deal cancellation to announcement-period returns, to identify learning. The estimates suggest that a local 50% decline in stock price informativeness would lead to a 1.64% output loss for the US economy.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Introducing New Transportation Services on the Community Engagement of Elderly People and Parents</title>
<link href="https://hdl.handle.net/1721.1/138918" rel="alternate"/>
<author>
<name>Kimura, Keiji</name>
</author>
<id>https://hdl.handle.net/1721.1/138918</id>
<updated>2022-01-15T03:01:59Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">The Effect of Introducing New Transportation Services on the Community Engagement of Elderly People and Parents
Kimura, Keiji
Local cities in Japan have been struggling to remain sustainable amid aging and population decline. One of the urgent issues they face is elderly people's lack of access to public transportation. They have introduced new community transportation services to maintain minimum transportation accessibility for elderly people, but they are concerned about the growing subsidies these services require.&#13;
&#13;
In a more general sense, citizens' community engagement is a critical factor in keeping cities sustainable. Providing new community transportation services not only to elderly people but also to other citizens could leverage an increase in the cities' total community engagement. In particular, since parents who take care of small children have had difficulty using public transportation services, they could use the new modes to increase their community engagement.&#13;
&#13;
This research aims to investigate the synergetic effect of introducing the new transportation modes on the community engagement of both elderly people and parents taking care of small children. Because the fares of the new modes are critical to citizens’ transportation mode choices and to the financial performance of municipalities, the specific research question is as follows: What fares for newly introduced transportation modes lead to increased engagement, lower cost, and equity, that is, balance between elderly people and parents?&#13;
&#13;
An agent-based simulation model is developed to quantify the effect of new transportation modes on the community engagement of citizens. As a case study, this model is applied to Odawara City, a local city in Japan, to investigate the fare sensitivities of community buses and door-to-door vans on the community engagement of people 75 years old or older and of parents taking care of children younger than six years old.&#13;
&#13;
First, the effects of the two modes on the behavior of elderly people and parents are evaluated separately. The simulation results show that introducing the two modes increases elderly people's community engagement by up to 21%. There are at least two preferable combinations of fares for the new modes that achieve more than a 10% increase in community engagement and a positive net present value per person for the investment in the new modes. The new modes also increase parents' community engagement, but the impact is at most 3%, much smaller than that on elderly people. Two factors appear to explain this small impact on parents' community engagement. First, parents have less free time to spare for community engagement. Second, the new modes can mean longer travel times, because their speeds are generally slower than those of existing trains, buses, and private cars, while parents' core needs include shorter travel time.&#13;
&#13;
Secondly, the synergetic effect of the two modes is investigated by simultaneously simulating the behavior of elderly people and parents. The new modes increase the total community engagement of elderly people and parents by up to 18%. There is one preferable combination of fares in terms of both community engagement and the financial aspect. No synergetic effect between the behavior of elderly people and parents is observed. In the tradespace analysis, cases in which only elderly people benefit dominate the other cases. On the other hand, the results can be interpreted to mean that measures supporting elderly people's public transportation accessibility do not harm parents’ behavior but rather support their daily activities.&#13;
&#13;
Lastly, simulation results with three different adoption ratios are compared to identify the sensitivity of the simulation results to the adoption ratio, that is, the ratio of the activity-sensitivity increase that is proportional to the utilization ratio of the new modes. When the adoption ratio in each population type is half that of the original simulations, the total community engagement decreases by half, and no preferable combination of fares is found.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for Anti-Displacement Development: An Affordable Housing Study in Central Falls</title>
<link href="https://hdl.handle.net/1721.1/138917" rel="alternate"/>
<author>
<name>Cafferky, Patricia</name>
</author>
<id>https://hdl.handle.net/1721.1/138917</id>
<updated>2022-01-15T03:52:47Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Planning for Anti-Displacement Development: An Affordable Housing Study in Central Falls
Cafferky, Patricia
The City of Central Falls is experiencing a degree of development and real estate speculation that it has not seen since the Industrial Revolution. At only 1.2 square miles and with a population of over 19,000, Central Falls is the smallest municipality in the state and third densest municipality in New England. It is also a housing insecure, high-need community, with a 30% poverty rate and an owner-occupied housing rate of only 16%. The city was a center for industry in the 19th and early 20th centuries, and has been a home to immigrants throughout its history. As manufacturing declined in the US, Central Falls was greatly impacted, and many of the mills which had once employed the community were left vacant or underutilized. Today, the city is seeing huge public and private investment. The Rhode Island Department of Transportation (RIDOT) is opening a new MBTA commuter rail station and bus hub in the city in 2022, which will connect it to Boston and Providence. Simultaneously, the city’s Conant Thread Mill District and Roosevelt Historic Mill District have dozens of old textile and manufacturing mills which are seeing rising speculative interest from both private investors and public entities. These two forces – improved public transportation and an undervalued building stock ripe for redevelopment – have the potential to bring new sources of economic growth to a city which sorely needs it, to catalyze gentrification, and, by extension, to cause cultural and residential displacement.&#13;
&#13;
Concurrently, newly elected Central Falls Mayor Maria Rivera has decided to prioritize affordable housing creation and preservation, leading a Housing Summit within her first hundred days in office and intending to develop a city housing plan. Per the Rhode Island Comprehensive Planning and Regulation Act, each municipality also needs to develop a comprehensive plan for state approval every ten years, which Central Falls will be undertaking in the near term. This thesis aims to contribute to these ongoing efforts, and so conducted an affordable housing study as a client-based project for the Central Falls Office of Planning and Economic Development. By evaluating the present state of affordable housing in the city, along with its challenges and opportunities, the thesis attempts to answer both whether Central Falls is at risk of gentrifying and what measures should be taken to shore up and improve the municipality’s stock of affordable housing. An affordability analysis, a zoning analysis, a policies and programs analysis, and a site opportunities analysis, as well as 14 semi-structured interviews, form the basis of the research for the study. The thesis culminates in a set of recommendations for the planning department, city council, and mayor to consider as they move forward.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ship-pack replenishment optimization in a two-echelon distribution system with lost sales and seasonal product obsolescence</title>
<link href="https://hdl.handle.net/1721.1/138915" rel="alternate"/>
<author>
<name>Byanna, Nikhil</name>
</author>
<id>https://hdl.handle.net/1721.1/138915</id>
<updated>2022-01-14T03:40:43Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Ship-pack replenishment optimization in a two-echelon distribution system with lost sales and seasonal product obsolescence
Byanna, Nikhil
As a retailer attempts to leverage a two-echelon distribution system to forward-deploy inventory, a number of cost elements must be considered when deciding the quantity of a given SKU to replenish to the forward-deployed node. These costs include processing costs, transportation costs, and obsolescence costs, among others. An important consideration that is often overlooked is the elevated processing cost of sending single units, or “eaches”, instead of units packed in full case quantities. Furthermore, for seasonal retailers, there are often large obsolescence costs associated with products that reach the end of their life-cycle and are still stocked at the forward-deployed node. The goal of this work is to develop a cost-optimal replenishment strategy with high inventory productivity, taking into consideration the end-to-end supply chain costs that are faced when forward-deploying seasonal products in a two-echelon distribution system. &#13;
&#13;
The thesis focuses on the decision of replenishing eaches versus full cases, and, in the case of full case replenishment, the optimal rounding logic to use when deciding whether to replenish a full case of a stock-keeping unit (SKU) or to send zero units. In the simplest case of full case replenishment, a retailer will replenish a full case whenever the order quantity is non-zero. However, a more nuanced approach that replenishes a full case only at specific thresholds (e.g., when the order quantity is at least 50% of the full case quantity) can lead to lower end-to-end supply chain costs. The thesis creates a simulation model that uses a base stock model to estimate the resulting inventory productivity of the system and the costs associated with each replenishment policy. &#13;
&#13;
The model simulates over 8,000 SKUs for a seasonal retailer and finds that the proposed replenishment system can significantly improve inventory productivity (i.e., inventory turns) relative to the retailer’s current replenishment system. We pilot the inventory policy in the retailer’s forward-deployed node and validate that the system’s inventory performance in practice is comparable to the modeled performance. The validation provides further confidence in our cost optimization model, which finds that an optimal replenishment policy can lead to savings of 2.7% to 4.1% relative to a baseline replenishment policy that replenishes eaches. A sensitivity analysis is also conducted on cost inputs to show the impact of optimal replenishment decisions if the retailer’s cost structure changes. The cost input change is formulated as an internal carbon tax that would significantly impact transportation costs. The sensitivity analysis concludes that the optimal replenishment policy consistently yields 0.7% to 1.8% savings as costs are varied.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapid vegetable tanning of heavy leather</title>
<link href="https://hdl.handle.net/1721.1/138910" rel="alternate"/>
<author>
<name>Creasy, William Murlin,
            1905-1987.</name>
</author>
<id>https://hdl.handle.net/1721.1/138910</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Rapid vegetable tanning of heavy leather
Creasy, William Murlin,
            1905-1987.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaves 37-38).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies on control of respiration of apples by packaging methods</title>
<link href="https://hdl.handle.net/1721.1/138909" rel="alternate"/>
<author>
<name>Jurin Striseo, Vatren.</name>
</author>
<id>https://hdl.handle.net/1721.1/138909</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1962-01-01T00:00:00Z</published>
<summary type="text">Studies on control of respiration of apples by packaging methods
Jurin Striseo, Vatren.
Thesis: M.S., Massachusetts Institute of Technology, Department of Food Technology, 1962; Includes bibliographical references (leaves 67-68).
</summary>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven customer segmentation: Assessing disparities in COVID impact on public transit user groups and recovery</title>
<link href="https://hdl.handle.net/1721.1/138908" rel="alternate"/>
<author>
<name>Luo, Rachel Li-Jiang</name>
</author>
<id>https://hdl.handle.net/1721.1/138908</id>
<updated>2022-01-14T03:00:42Z</updated>
<published>2021-06-01T00:00:00Z</published>
<summary type="text">Data-driven customer segmentation: Assessing disparities in COVID impact on public transit user groups and recovery
Luo, Rachel Li-Jiang
COVID-19 triggered an unprecedented global lockdown and severely dampened public transit ridership, which was down 62% year-on-year across the U.S. through Q4 2020 [1]. Beyond these stark headline figures, more granular views of whose transit ridership patterns changed and how are needed to aid cash-strapped transit agencies in understanding both the operational and equity impacts of COVID-19 and assessing possible recovery strategies [2].&#13;
&#13;
This thesis examines these questions in the Metro Boston region by applying k-means clustering to smart card data from the Massachusetts Bay Transportation Authority (MBTA). We empirically determine customer segments based on passenger-level pre-pandemic transit ridership patterns during January 13 - February 16, 2020, using data from 22.6 million trips by 1.5 million passengers. We then trace how COVID-19 produced differential churn rates and travel behavior modifications among these distinct passenger groups. We find that COVID-19 induced churn among rail commuter segments key to supporting MBTA fare revenues, while bus riders and those who frequently rode rail off-peak—groups that covered the majority of reduced-fare and vulnerable passengers—were most likely to continue using the system.&#13;
&#13;
Our findings suggest that in the near term, the MBTA can support a ridership and revenue rebound by working closely with large employers involved in the MBTA "Perq" corporate pass program to plan for reopening. This can also position the MBTA to better gauge the need to redesign or reprice Perq to offer greater flexibility for workers who may be adopting remote work longer term and therefore commuting less frequently to the office. Further, our analysis reveals consistency in ridership patterns among bus passengers even during crisis times. In the medium term as the MBTA considers network redesigns to meet post-pandemic travel needs, existing plans for bus upgrades do not necessarily need heavy modification because COVID-19 did not completely redefine these passengers’ transit usage patterns. This gives a base level of certainty for the MBTA’s planning process, as it seeks to track and shape the uncertainty that COVID-19 has brought to demand on the rail side of its network. Finally, by supplementing our quantitative analysis with an overview of COVID-19 responses by other major U.S. transit agencies, we suggest that the MBTA can better weather future emergencies like COVID-19 by making longer-term efforts to shift its operating revenue mix away from volatile fare revenues towards more stable and resilient revenue sources such as sales and property taxes, and complementing this with sustainable financial management.&#13;
&#13;
The framework offered in this thesis for dissecting passenger ridership behavior and tracking passenger churn and cluster-switching can be applied to other transit agencies to detail either background ridership behavioral changes in normal years or rapid step-changes during a mobility crisis. Understanding passengers’ ridership demand at the cluster level can inform both immediate actions that transit agencies can take to enable recovery, as well as support network redesign and long-term resilience.
</summary>
<dc:date>2021-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Flow of a Material Handling System</title>
<link href="https://hdl.handle.net/1721.1/138870" rel="alternate"/>
<author>
<name>Vigil, Shane J.</name>
</author>
<id>https://hdl.handle.net/1721.1/138870</id>
<updated>2025-10-28T20:26:22Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Automating Flow of a Material Handling System
Vigil, Shane J.
Amazon uses a system of interconnected manned work processing stations linked by conveyances that route items from different parts of the warehouse into a single order for packing. This thesis examines one such system, which circulates items throughout the system in trays. Leaders manage and tune the production rate by adjusting the number of trays within the system to maximize throughput. This task takes considerable time and requires operators to manually add and remove trays from the system. To reduce the time leaders spend managing trays, automated solutions are investigated. It is determined that the optimal number of trays within the system is dynamic. Furthermore, physical constraints of the system prevent an automated solution that simply inserts and removes trays based on an algorithm. This study uncovers unrealized throughput by creating a model of the system that outputs the ideal tray count based on historical data and mathematical constraints. Additionally, this thesis explores an automated solution that supplies and removes trays based on localized blockage and starvation.&#13;
&#13;
A Work Domain Analysis and a human factors study laid the foundations for automation. Simulation demonstrates the potential for a 21.2% production rate increase and a release of 7.14 hours/day for other tasks. Implementation of the model with an alert system increases throughput 20% during maximum production with a median error of 8.11% when targeting a desired throughput. These techniques can be extended to other circulation systems in manufacturing. As Industry 4.0 grows, the management of human-machine relations becomes critical for safety and performance.
Thesis: M.B.A. Massachusetts Institute of Technology, Sloan School of Management, June, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 71-73).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geosynchronous Satellite Maneuver Classification and Orbital Pattern Anomaly Detection via Supervised Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/138773" rel="alternate"/>
<author>
<name>Roberts, Thomas González</name>
</author>
<id>https://hdl.handle.net/1721.1/138773</id>
<updated>2025-10-27T17:56:55Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Geosynchronous Satellite Maneuver Classification and Orbital Pattern Anomaly Detection via Supervised Machine Learning
Roberts, Thomas González
Due to the nature of the geosynchronous (GEO) orbital regime, where space objects orbit the Earth once per sidereal day, GEO satellites can appear fixed to a position in the sky when observed from the Earth’s surface. This unique orbital characteristic makes GEO satellites ideal for telecommunications missions that require Earth-fixed antennas to send and receive signals, such as television broadcasts or military communications. To maintain their position relative to the Earth’s surface, GEO satellites must station-keep, or regularly expend onboard propellant to counteract the natural forces in the near-Earth space environment that perturb their orbital trajectories. Less frequently, GEO satellites perform maneuvers to alter their orbital characteristics more drastically. One such maneuver is a longitudinal shift: changing a GEO satellite’s sub-satellite point from one position on the Earth’s equator to another. Such a maneuver often requires both a series of impulsive thrusts and a period of natural drift. &#13;
&#13;
This work describes an approach for detecting the components of longitudinal shift maneuvers—including the patterns associated with initiating and ending eastward and westward drifts—using convolutional neural networks trained on publicly available two-line element (TLE) data from the U.S. Space Command’s (SPACECOM) space object catalog. A method for converting TLE data to geographic position histories—longitude, latitude, and altitude positions over time in the Earth-centered, Earth-fixed geographic reference frame—and labeling longitudinal shift maneuvers by inspection is described. A preliminary maneuver detection algorithm is designed, trained, and tested on all GEO satellites in orbit from January 1 to December 31, 2020. Performance metrics are presented for algorithms trained on two different training data sets corresponding to five and ten years’ worth of geographic position time-histories labeled with longitudinal shift maneuvers.&#13;
&#13;
When detected, longitudinal shift maneuvers can be used to identify anomalous behavior in GEO. In this work, a satellite’s behavior is considered nominal if it adheres to the satellite’s pattern of life (PoL)—its previous on-orbit behavior made up of sequences of both natural and non-natural behavioral modes, including routine station-keeping, other on-orbit maneuvers, and uncontrolled motion—and anomalous if it deviates from the satellite’s PoL. Identifying anomalous satellite behavior is of critical interest to space situational awareness (SSA) system operators, who may choose to task their sensors to obtain more observations of anomalous behavior, and satellite operators themselves, who may wish to diagnose its root cause. Applications of this work for international space policymaking, including the development of on-orbit norms of behavior and the distribution of spectral and physical space in GEO, are also discussed.
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, Institute for Data, Systems, and Society, Technology and Policy Program, June, 2021; Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 75-79).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Autonomy in Commercial Aviation: An Ontology and Framework for Automating Unmanned Aircraft Systems (UAS)</title>
<link href="https://hdl.handle.net/1721.1/138771" rel="alternate"/>
<author>
<name>Chevallier, Juliette</name>
</author>
<id>https://hdl.handle.net/1721.1/138771</id>
<updated>2025-10-27T17:58:53Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Enabling Autonomy in Commercial Aviation: An Ontology and Framework for Automating Unmanned Aircraft Systems (UAS)
Chevallier, Juliette
Autonomous air vehicles are rapidly gaining interest within the aviation industry, with novel business cases such as urban air mobility, package delivery, and more. However, these increasingly autonomous systems come with increasingly numerous and complex inputs that software must handle, and that software must ensure that autonomous system decisions will translate to operations that are safe for the general public. This thesis introduces an ontology and framework, with supporting analyses, to align individuals before beginning research and product development efforts in autonomous vehicles. The framework, with its supporting ontology and analyses, seeks to provide a quantitative, repeatable method for describing the increase in operational uncertainty with the increase in automation for a UAS.
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June, 2021; Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, June, 2021; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 258-261).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Feasibility Study of CubeSat Architectures for Space Debris Removal from Low Earth Orbit</title>
<link href="https://hdl.handle.net/1721.1/138733" rel="alternate"/>
<author>
<name>Clark, Christopher P.</name>
</author>
<id>https://hdl.handle.net/1721.1/138733</id>
<updated>2025-10-27T18:00:44Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">A Feasibility Study of CubeSat Architectures for Space Debris Removal from Low Earth Orbit
Clark, Christopher P.
The rapid increase of space debris in low earth orbit has had tangible impacts on government and commercial missions and caused growing concern among the space community. Removal of collision-prone objects using deorbiter satellites represents a viable strategy for stabilizing the debris population, but the expense and lack of immediate economic benefit reduce the likelihood of decisive action. In an effort to describe a new family of low-cost deorbiter spacecraft, this thesis explores the utility of CubeSats for debris removal.; Three of the most widely tested methods for capturing uncooperative debris objects are applied to CubeSat-specific mission scenarios. Limiting factors are noted for each method, and a dynamics simulation is used to approximate success probabilities. Additionally, three deorbit methods using flight-proven technologies are considered for use aboard CubeSats. A satellite design model is developed and integrated with heuristic optimization in order to identify cost-optimal deorbiter CubeSat designs.; Results suggest that CubeSats are capable of capturing and deorbiting certain families of debris objects defined by mass and altitude, particularly objects with negligible rotational properties. Feasible CubeSat designs are discovered for all three of the deorbit methods. It is concluded that CubeSat-based debris removal is an area deserving of further exploration, as it could represent a uniquely cost-effective method for removing dangerous debris from low-earth orbit.
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 231-241).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation to determine the practical application of natural bank gravel as a protective filter for an earth embankment</title>
<link href="https://hdl.handle.net/1721.1/138712" rel="alternate"/>
<author>
<name>Hurley, Henry W.</name>
</author>
<author>
<name>Newton, Carroll T.</name>
</author>
<id>https://hdl.handle.net/1721.1/138712</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1940-01-01T00:00:00Z</published>
<summary type="text">An investigation to determine the practical application of natural bank gravel as a protective filter for an earth embankment
Hurley, Henry W.; Newton, Carroll T.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1940; Appendix contains numerous pamphlets.
</summary>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Switching characteristics of the varactor-diode parametron</title>
<link href="https://hdl.handle.net/1721.1/138710" rel="alternate"/>
<author>
<name>Woodward, Charles Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/138710</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Switching characteristics of the varactor-diode parametron
Woodward, Charles Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaves 117-119).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a centrifugal compressor integrated with a hermetic motor for automotive airconditioners</title>
<link href="https://hdl.handle.net/1721.1/138693" rel="alternate"/>
<author>
<name>Yun, Hayong.</name>
</author>
<id>https://hdl.handle.net/1721.1/138693</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Design of a centrifugal compressor integrated with a hermetic motor for automotive airconditioners
Yun, Hayong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1993; Includes bibliographical references (leaves 103-109).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Noise analysis of a homomorphic automatic volume control system.</title>
<link href="https://hdl.handle.net/1721.1/138689" rel="alternate"/>
<author>
<name>Medress, Mark Frederick.</name>
</author>
<id>https://hdl.handle.net/1721.1/138689</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">Noise analysis of a homomorphic automatic volume control system.
Medress, Mark Frederick.
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering, 1968; Bibliography: leaves 59-60.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plasma oscillations in an applied electric field.</title>
<link href="https://hdl.handle.net/1721.1/138687" rel="alternate"/>
<author>
<name>Watson, Duncan Charles.</name>
</author>
<id>https://hdl.handle.net/1721.1/138687</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Plasma oscillations in an applied electric field.
Watson, Duncan Charles.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic analysis of axisymmetric pile groups</title>
<link href="https://hdl.handle.net/1721.1/138682" rel="alternate"/>
<author>
<name>Tyson, Thomas R.</name>
</author>
<id>https://hdl.handle.net/1721.1/138682</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Dynamic analysis of axisymmetric pile groups
Tyson, Thomas R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1983; Bibliography: leaves 67-68.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System management of a redundant clocking network</title>
<link href="https://hdl.handle.net/1721.1/138677" rel="alternate"/>
<author>
<name>Manush, Charles Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/138677</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">System management of a redundant clocking network
Manush, Charles Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1976; Bibliography: p.110.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer controlled transmit receive system for an ultrasonic phased array transducer.</title>
<link href="https://hdl.handle.net/1721.1/138675" rel="alternate"/>
<author>
<name>Martin, Robert Randall.</name>
</author>
<id>https://hdl.handle.net/1721.1/138675</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Computer controlled transmit receive system for an ultrasonic phased array transducer.
Martin, Robert Randall.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An assessment of creep formulations for concrete structures</title>
<link href="https://hdl.handle.net/1721.1/138674" rel="alternate"/>
<author>
<name>Martore, Joseph Albert.</name>
</author>
<id>https://hdl.handle.net/1721.1/138674</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">An assessment of creep formulations for concrete structures
Martore, Joseph Albert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 126-131.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High resolution submillimeter-wave spectroscopy using non-collinear mixing of laser radiation.</title>
<link href="https://hdl.handle.net/1721.1/138672" rel="alternate"/>
<author>
<name>Mandel, Paul D.,&#13;
            1953-</name>
</author>
<id>https://hdl.handle.net/1721.1/138672</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">High resolution submillimeter-wave spectroscopy using non-collinear mixing of laser radiation.
Mandel, Paul D.,&#13;
            1953-
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design considerations for the implementation of the front end of an optimum VLF receiver</title>
<link href="https://hdl.handle.net/1721.1/138671" rel="alternate"/>
<author>
<name>Marsicano, Dennis Vincent.</name>
</author>
<id>https://hdl.handle.net/1721.1/138671</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Design considerations for the implementation of the front end of an optimum VLF receiver
Marsicano, Dennis Vincent.
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Bibliography: leaves 255-263.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a mercury arc stroboscope</title>
<link href="https://hdl.handle.net/1721.1/138661" rel="alternate"/>
<author>
<name>Beardsley, Kenneth D.</name>
</author>
<id>https://hdl.handle.net/1721.1/138661</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">Development of a mercury arc stroboscope
Beardsley, Kenneth D.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1930; Includes bibliographical references (leaves 56-57).
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The calculation of three-phase short-circuit currents of a synchronous machine by means of the differential analyzer</title>
<link href="https://hdl.handle.net/1721.1/138660" rel="alternate"/>
<author>
<name>Kingsley, Charles,
            1904-1994.</name>
</author>
<id>https://hdl.handle.net/1721.1/138660</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">The calculation of three-phase short-circuit currents of a synchronous machine by means of the differential analyzer
Kingsley, Charles,
            1904-1994.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932; Includes bibliographical references (leaf 78).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The solution of vacuum tube circuit problems by means of the differential analyzer</title>
<link href="https://hdl.handle.net/1721.1/138659" rel="alternate"/>
<author>
<name>Radford, William H.</name>
</author>
<id>https://hdl.handle.net/1721.1/138659</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">The solution of vacuum tube circuit problems by means of the differential analyzer
Radford, William H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932; Includes bibliographical references (leaves 97-98).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the application of the convergence theorems of continued fractions to the construction of electrical networks</title>
<link href="https://hdl.handle.net/1721.1/138656" rel="alternate"/>
<author>
<name>Zaroodny, Margaret.</name>
</author>
<id>https://hdl.handle.net/1721.1/138656</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1934-01-01T00:00:00Z</published>
<summary type="text">On the application of the convergence theorems of continued fractions to the construction of electrical networks
Zaroodny, Margaret.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1934; Includes bibliographical references (leaf 44).
</summary>
<dc:date>1934-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Retention of women in the military : a look at the Coast Guard Academy and its graduates</title>
<link href="https://hdl.handle.net/1721.1/138655" rel="alternate"/>
<author>
<name>Wells, Claudia P.
            (Claudia Paula)</name>
</author>
<id>https://hdl.handle.net/1721.1/138655</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1996-01-01T00:00:00Z</published>
<summary type="text">Retention of women in the military : a look at the Coast Guard Academy and its graduates
Wells, Claudia P.
            (Claudia Paula)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1996; Includes bibliographical references (leaves 140-142).
</summary>
<dc:date>1996-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of the damping torque in a salient pole machine</title>
<link href="https://hdl.handle.net/1721.1/138654" rel="alternate"/>
<author>
<name>Stromberg, Tage Valter.</name>
</author>
<id>https://hdl.handle.net/1721.1/138654</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">An analysis of the damping torque in a salient pole machine
Stromberg, Tage Valter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1936; Includes bibliographical references (leaf 80).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managerial autonomy vs. public accountability in public enterprises</title>
<link href="https://hdl.handle.net/1721.1/138651" rel="alternate"/>
<author>
<name>Minion, Douglas Wayne.</name>
</author>
<id>https://hdl.handle.net/1721.1/138651</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Managerial autonomy vs. public accountability in public enterprises
Minion, Douglas Wayne.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1964; Includes bibliographical references (leaves 91-93).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of tandem helicopter rotor-rotor interference effects on the thrust of the rear rotor</title>
<link href="https://hdl.handle.net/1721.1/138649" rel="alternate"/>
<author>
<name>Sarkar, Anadi S.</name>
</author>
<id>https://hdl.handle.net/1721.1/138649</id>
<updated>2025-10-30T17:51:24Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Study of tandem helicopter rotor-rotor interference effects on the thrust of the rear rotor
Sarkar, Anadi S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaf 60).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of the limited pressure fuel air cycles</title>
<link href="https://hdl.handle.net/1721.1/138648" rel="alternate"/>
<author>
<name>Sabat, Donald J.</name>
</author>
<author>
<name>Ahmed, Maftoon.</name>
</author>
<id>https://hdl.handle.net/1721.1/138648</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1962-01-01T00:00:00Z</published>
<summary type="text">An analysis of the limited pressure fuel air cycles
Sabat, Donald J.; Ahmed, Maftoon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1962; Includes bibliographical references (leaf 70).
</summary>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of head diffraction on sound recording</title>
<link href="https://hdl.handle.net/1721.1/138644" rel="alternate"/>
<author>
<name>Holmes, Jerry Dale,
            1937-</name>
</author>
<id>https://hdl.handle.net/1721.1/138644</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">The effect of head diffraction on sound recording
Holmes, Jerry Dale,
            1937-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1960; Includes bibliographical references (leaf 51).
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organization for a new technology</title>
<link href="https://hdl.handle.net/1721.1/138642" rel="alternate"/>
<author>
<name>Nezbeda, Edward Charles.</name>
</author>
<id>https://hdl.handle.net/1721.1/138642</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">Organization for a new technology
Nezbeda, Edward Charles.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1959; Includes bibliographical references (leaves 115-118).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic programming for business data processing machines</title>
<link href="https://hdl.handle.net/1721.1/138637" rel="alternate"/>
<author>
<name>Fitzgerald, Edward Lewis.</name>
</author>
<id>https://hdl.handle.net/1721.1/138637</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Automatic programming for business data processing machines
Fitzgerald, Edward Lewis.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1956; Bibliography: leaves 71-75.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sampled-data feedback systems and iterative procedures for simultaneous equations</title>
<link href="https://hdl.handle.net/1721.1/138636" rel="alternate"/>
<author>
<name>Watson, Paul Clark.</name>
</author>
<id>https://hdl.handle.net/1721.1/138636</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>1955-01-01T00:00:00Z</published>
<summary type="text">Sampled-data feedback systems and iterative procedures for simultaneous equations
Watson, Paul Clark.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1955; Includes bibliographical references (leaf 60).
</summary>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of ash content in Black Mesa coal by gamma-ray attenuation.</title>
<link href="https://hdl.handle.net/1721.1/138630" rel="alternate"/>
<author>
<name>May, Stephen Allan.</name>
</author>
<id>https://hdl.handle.net/1721.1/138630</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Analysis of ash content in Black Mesa coal by gamma-ray attenuation.
May, Stephen Allan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1973; Bibliography: leaves 47-50.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resonance fluorescence in nitrogen dioxide.</title>
<link href="https://hdl.handle.net/1721.1/138629" rel="alternate"/>
<author>
<name>Golin, Jeffrey Ross.</name>
</author>
<id>https://hdl.handle.net/1721.1/138629</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Resonance fluorescence in nitrogen dioxide.
Golin, Jeffrey Ross.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1971; Bibliography: leaves 39-41.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observations on symmetry and on development in the brain of the rotifer Asplanchna brightwelli.</title>
<link href="https://hdl.handle.net/1721.1/138626" rel="alternate"/>
<author>
<name>Seldon, Henry Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/138626</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Observations on symmetry and on development in the brain of the rotifer Asplanchna brightwelli.
Seldon, Henry Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Biology, 1972; Vita.; Bibliography: leaves 102-104.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regulation of chondrocyte biosynthesis in epiphyseal cartilage--the role of interstitial pH</title>
<link href="https://hdl.handle.net/1721.1/138625" rel="alternate"/>
<author>
<name>Boustany, Nada.</name>
</author>
<id>https://hdl.handle.net/1721.1/138625</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Regulation of chondrocyte biosynthesis in epiphyseal cartilage--the role of interstitial pH
Boustany, Nada.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1991; Includes bibliographical references (leaves 101-103).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discharge coefficients for submerged, broad-crested weirs.</title>
<link href="https://hdl.handle.net/1721.1/138622" rel="alternate"/>
<author>
<name>Thomas, William Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/138622</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1966-01-01T00:00:00Z</published>
<summary type="text">Discharge coefficients for submerged, broad-crested weirs.
Thomas, William Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1966; Bibliography: leaves 67-68.
</summary>
<dc:date>1966-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis and investigation of two-dimensional flow through wire screen</title>
<link href="https://hdl.handle.net/1721.1/138618" rel="alternate"/>
<author>
<name>Bonneville, Jacques M.
            (Jacques Marcel)</name>
</author>
<author>
<name>Harper, David B.</name>
</author>
<id>https://hdl.handle.net/1721.1/138618</id>
<updated>2022-01-04T17:32:15Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">An analysis and investigation of two-dimensional flow through wire screen
Bonneville, Jacques M.
            (Jacques Marcel); Harper, David B.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1951; Includes bibliographical references (leaves 86-87).
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Initial characteristics of density current flow</title>
<link href="https://hdl.handle.net/1721.1/138616" rel="alternate"/>
<author>
<name>Braucher, Ernest P.</name>
</author>
<id>https://hdl.handle.net/1721.1/138616</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1950-01-01T00:00:00Z</published>
<summary type="text">Initial characteristics of density current flow
Braucher, Ernest P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1950; Bibliography: leaf 59.
</summary>
<dc:date>1950-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of factors relating to stability in frozen milk</title>
<link href="https://hdl.handle.net/1721.1/138615" rel="alternate"/>
<author>
<name>Shuster, Herbert V.</name>
</author>
<author>
<name>Sidd, Edward G.</name>
</author>
<id>https://hdl.handle.net/1721.1/138615</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">A study of factors relating to stability in frozen milk
Shuster, Herbert V.; Sidd, Edward G.
Thesis: M.S., Massachusetts Institute of Technology, Department of Food Technology, 1949; Bibliography: leaves 171-173.
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chemical properties of copper-tin intermetallic compounds</title>
<link href="https://hdl.handle.net/1721.1/138614" rel="alternate"/>
<author>
<name>Servi, Italo S.</name>
</author>
<id>https://hdl.handle.net/1721.1/138614</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">Chemical properties of copper-tin intermetallic compounds
Servi, Italo S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1949; Bibliography: leaves [98-99].
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chief executive officers in America--is there a relationship between their backgrounds and their company's performance?</title>
<link href="https://hdl.handle.net/1721.1/138613" rel="alternate"/>
<author>
<name>Stefany, Frederick Nelson.</name>
</author>
<id>https://hdl.handle.net/1721.1/138613</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Chief executive officers in America--is there a relationship between their backgrounds and their company's performance?
Stefany, Frederick Nelson.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1993; Includes bibliographical references (leaf 35).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the surface tension controlled regime of oil spread,</title>
<link href="https://hdl.handle.net/1721.1/138611" rel="alternate"/>
<author>
<name>Lee, Robert A. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/138611</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">A study of the surface tension controlled regime of oil spread,
Lee, Robert A. S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1971; Lacking leaf 28.; Bibliography: leaf 20.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new look at the competitive position of the inverted-cycle gas turbine for waste-heat utilization and other applications.</title>
<link href="https://hdl.handle.net/1721.1/138610" rel="alternate"/>
<author>
<name>Dunteman, Norman Richard Albert.</name>
</author>
<id>https://hdl.handle.net/1721.1/138610</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">A new look at the competitive position of the inverted-cycle gas turbine for waste-heat utilization and other applications.
Dunteman, Norman Richard Albert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1970; Bibliography: leaf 240.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determination of multi modal system parameters through random data analysis,</title>
<link href="https://hdl.handle.net/1721.1/138609" rel="alternate"/>
<author>
<name>Kirk, James Allen.</name>
</author>
<id>https://hdl.handle.net/1721.1/138609</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">Determination of multi modal system parameters through random data analysis,
Kirk, James Allen.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1969; Bibliography: leaf 22.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of types of controls used for guided missiles</title>
<link href="https://hdl.handle.net/1721.1/138607" rel="alternate"/>
<author>
<name>Arkin, Shepard M.</name>
</author>
<id>https://hdl.handle.net/1721.1/138607</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">An investigation of types of controls used for guided missiles
Arkin, Shepard M.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1947
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of analogues with respect to storm tracks</title>
<link href="https://hdl.handle.net/1721.1/138606" rel="alternate"/>
<author>
<name>Graves, Leon F.</name>
</author>
<id>https://hdl.handle.net/1721.1/138606</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">A study of analogues with respect to storm tracks
Graves, Leon F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1946; Bibliography: leaf [45].
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NuFloW--a programming environment for the NuMesh computer</title>
<link href="https://hdl.handle.net/1721.1/138603" rel="alternate"/>
<author>
<name>Laffont, Philippe P.</name>
</author>
<id>https://hdl.handle.net/1721.1/138603</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">NuFloW--a programming environment for the NuMesh computer
Laffont, Philippe P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1991; Includes bibliographical references (leaves 52-53).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental investigation of exit effects on current distribution in an MHD channel.</title>
<link href="https://hdl.handle.net/1721.1/138602" rel="alternate"/>
<author>
<name>Karkosak, John James.</name>
</author>
<id>https://hdl.handle.net/1721.1/138602</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Experimental investigation of exit effects on current distribution in an MHD channel.
Karkosak, John James.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effectiveness factors in spherical catalyst particles: Langmuir-Hinshelwood rate equations.</title>
<link href="https://hdl.handle.net/1721.1/138595" rel="alternate"/>
<author>
<name>Knudsen, Christian White.</name>
</author>
<id>https://hdl.handle.net/1721.1/138595</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Effectiveness factors in spherical catalyst particles: Langmuir-Hinshelwood rate equations.
Knudsen, Christian White.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The gilded closet : media, privacy, and power in unequal times</title>
<link href="https://hdl.handle.net/1721.1/138586" rel="alternate"/>
<author>
<name>Aasen, Ryan.</name>
</author>
<id>https://hdl.handle.net/1721.1/138586</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The gilded closet : media, privacy, and power in unequal times
Aasen, Ryan.
This thesis broadly interrogates the way three media technologies in the history of the United States have been used in relationship to wealth, sexuality, and the emergence of "the right to privacy" in the late 19th century. This includes photography in the First Gilded Age, cable television in the 1970s and the beginnings of the neoliberal economy, and networked media in the 2010s with the rise of surveillance capitalism and what some refer to as a Second Gilded Age. Drawing on Marxist and Queer theorists to analyze the inherent power structures across media, privacy, sexuality, and wealth, this text exposes new media environments as consistent sites of conflict between various classes of people and forms the theoretical and conceptual basis of my artistic practice.
Thesis: S.M. in Art, Culture and Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis. "September 2020."; Includes bibliographical references (pages 71-79).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Being in the world as if there's nothing from the first : a praxis-framework for emergence</title>
<link href="https://hdl.handle.net/1721.1/138585" rel="alternate"/>
<author>
<name>Tang, Casey.</name>
</author>
<id>https://hdl.handle.net/1721.1/138585</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Being in the world as if there's nothing from the first : a praxis-framework for emergence
Tang, Casey.
Life is an ongoing process of unfolding within a continuum of matter-cognition-semiotics. Evolutionary dynamics and biophysical forces exhibit end-directed (teleonomic) behavior. They increase interconnection over time, integrating antecedent foundational emergent layers into new aggregations, with their own forms, semiotics, and cognition capable of better navigating the environment from which it emerged. Our current technologies and systems, an outcome of these currents of aggregation and agency, are increasing capabilities to interconnect and integrate across abiotic, biotic, semiotic, and cognitive spheres, leading to strong emergence and enframing. Critical aesthetic practices enable us to become conscious of the dominant epistemic, technological, and semantic structures that have become enmeshed in our perception giving us more agency, increasing our evolutionary flexibility, and allowing us to influence our becoming. By understanding underlying biophysical forces, evolutionary dynamics, and the relation of entities as a space inseparable from "Being," artists and cultural producers engaged in critical aesthetic practice can more easily perceive, embody, and analyze deep interconnections and dynamics within a world of increasing integration and complexity.
Thesis: S.M. in Art, Culture and Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 81-84).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In between empathy and wonder lies the contamination that makes us human</title>
<link href="https://hdl.handle.net/1721.1/138584" rel="alternate"/>
<author>
<name>Hsu, Yuping,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/138584</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">In between empathy and wonder lies the contamination that makes us human
Hsu, Yuping,
            S.M.
            Massachusetts Institute of Technology.
This thesis reinspects the biological subjectivity of empathy, to reconstruct the act of empathic projection through its "auto-hetero-subjects," encountering the microbial universe via empathy as an aesthetic experience. Empathy is a term that is often taken for granted, referring to a capacity to share and understand another person's feelings or experiences. This thesis will defamiliarize that understanding, question its limits, and introduce it in the context of art and aesthetics. Contamination is invoked as a signifier that is both material--endosymbiosis; microbiome; the human virome--and affect, the moment that intrudes consciousness--empathy, wonder, or something in between. The role that the body and the gut plays in the performance of empathy with its constituent microbes is re-conceptualized by drawing from the history of aesthetics, neuroscience, psychoanalysis, and microbiology. The process of fermentation acts as a muse for the body, in its abjectness and with its symbiotic affordances, to construct an empathy that is embodied within a multiplicity of bodies. This thesis speculates on the reenactment of a different kind of empathetic subject, one that is many and reflects the desire of many. Through deconstructing the concepts of empathy, wonder, and contamination in parallel with my own art practice, I will examine the role of art in producing affective relationships, and thereby generating alternative sensibilities for empathic ways of becoming with more-than-human worlds.
Thesis: S.M. in Art, Culture and Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 61-65).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tessituras Abertas : pessimistic, yet persistent in other possible imaginaries</title>
<link href="https://hdl.handle.net/1721.1/138583" rel="alternate"/>
<author>
<name>Bastos Lages, Luíza.</name>
</author>
<id>https://hdl.handle.net/1721.1/138583</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Tessituras Abertas : pessimistic, yet persistent in other possible imaginaries
Bastos Lages, Luíza.
The present thesis addresses the compound of aesthetics and politics, motivated by the philosophical inquiry on the political potency of art. By doing so, this thesis also offers an investigation on the enrooted practice of authoritarianism in Brasil, by focusing on the recent institutionalized political events that contributed to the rise of an extreme right-wing government in the country in 2018. The philosophical inquiry on the articulation of aesthetics and politics, I pursued, mainly, through a close reading of the work of Jacques Rancière. By studying his thinking on the autonomous sensible experience of every and all subjects, as one possible lever, through the experience of the sensorium, for the possibility of autonomy, I will argue that the forms of visibility and discourses that define art, especially as an institution, from the perspective of art's political operativity, get blurred. Concomitantly, I examine recent institutionalized political events in Brasil that contributed towards the conditions to the rise of an authoritarian government in the country, as a means, as I will argue, for the reaffirmation and intensification of politics of extraction and neoliberalism, grounded on renewed imperialist relations. Imperialism that, in the 21st century, expresses a new ambition: beyond constituting a political form for the renewal and intensification of capitalism, as it has been since the beginning of the 20th century, in the present, in the face of growing planetary climate catastrophes, it also becomes an instrument to safeguard forms of life - merely framed as natural resources - for the central countries in the capitalist system.
Thesis: S.M. in Art, Culture and Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis. "September 2020."; Includes bibliographical references (pages 144-154).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban perceptual modeling : a speculative framework for artistic intervention</title>
<link href="https://hdl.handle.net/1721.1/138582" rel="alternate"/>
<author>
<name>Ledwidge, Matthew Jacob.</name>
</author>
<id>https://hdl.handle.net/1721.1/138582</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Urban perceptual modeling : a speculative framework for artistic intervention
Ledwidge, Matthew Jacob.
Cities have always been informational systems which impact or manage the perceptual and behavioral experience of human beings. Recent discussions primarily in urban planning, architecture, and the environmental humanities have pointed towards the ways in which contemporary cities are generating increasingly complex images, deploying a range of computational technologies, regulatory frameworks, and social norms which are less legible to people and more intertwined within technoscientific governance. Computational technologies of perceptual simulation in particular, are producing a new political, aesthetic, and epistemic terrain to consider within the field of urban planning and architecture. In this text I will provide an account of the development of technologies of modeling perception in urban space. I will claim that greater understanding of the histories and technical infrastructures of modelled perception and the co-produced nature of perception in real urban spaces provides the basis for a critical reevaluation of planning practices. I will present a number of artistic experiments which were undertaken to critically inhabit these new conceptual terrains and outline a speculative framework for future artistic practices to work with the ongoing impacts of these technologies on the everyday experience of urban space. I will discuss how these social and technical conditions might constitute a site of leverage towards a new political and aesthetic optic for engaging with urban experience and outline the basis for a framework of an artistic research practice operating around these concerns.
Thesis: S.M. in Art, Culture and Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 43-47).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydroponic container farms : validation of a building energy model and its integration in urban design</title>
<link href="https://hdl.handle.net/1721.1/138581" rel="alternate"/>
<author>
<name>Liebman-Peláez, Mariana.</name>
</author>
<id>https://hdl.handle.net/1721.1/138581</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Hydroponic container farms : validation of a building energy model and its integration in urban design
Liebman-Peláez, Mariana.
Controlled environment agriculture (CEA) systems, or plant factories, have developed within the urban context following efforts to expand local food production and provide an alternative to conventional agriculture with lower rates of greenhouse gas emissions and resource consumption. One urban CEA system, the container farm, consists of vertical hydroponic farms inside retrofitted shipping containers. The artificially controlled interior environments within container farms along with their portability and modularity allow container farms to grow food in a variety of otherwise unused locations regardless of climate and daylight availability. While container farms and plant factories in general may provide a promising option for sustainable urban agriculture, they are highly energy intensive, particularly for lighting and thermal control. As a result, urban designers and policy makers require holistic assessment tools and methodologies to understand the viability of plant factories in reducing the greenhouse gas emissions of food systems. However, due to limitations of building performance simulation (BPS) tools, existing urban design methodologies assess the energy use of plant factories using simplified building energy models that omit the energetic effects of plants. While previous studies have developed methods that consider plant-air interactions within BPS tools through the use of co-simulators, to date there has been a lack of energy validation studies for such models. This research attempts to bridge this gap by validating a first-principle hourly energy model for an operational hydroponic container farm located in Boston, Massachusetts. The energy model (NMBE of 3% and CV[RMSE] of 9%) combines a plant evapotranspiration model in parallel with a BPS tool, EnergyPlus.
The validation focuses on the reliability of the energy model in predicting hourly conditioning loads and comments on the practical challenges and limitations of modeling hourly conditioning for container farms and other plant factories. Second, this research uses the validated energy model to simulate methods for reducing conditioning loads of container farms under various climate and upgrade scenarios. Finally, this research explores the integration of container farms in an urban neighborhood and the potential for reducing additional demands on the neighborhood's energy supply system.
Thesis: S.M. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis. "September 2020."; Includes bibliographical references (pages 53-56).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examining the feasibility of a novel ground-storage cooling system</title>
<link href="https://hdl.handle.net/1721.1/138580" rel="alternate"/>
<author>
<name>Tang Liwen, Nicole.</name>
</author>
<id>https://hdl.handle.net/1721.1/138580</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Examining the feasibility of a novel ground-storage cooling system
Tang Liwen, Nicole.
The Boston climate is known for its long, cold winters but it also suffers from hot, humid summers. The dehumidification needed to maintain occupant comfort in summer is often provided by condensing the excess moisture onto surfaces cooled by cold water. The systems currently used to provide the cold water have limited efficiencies, so alternative systems must be sought in order to achieve reductions in building energy use and to reduce the rate of climate change. This research examines the feasibility of a ground-cooling storage system that stores the abundant Boston winter cold in an underground block of soil to provide dehumidification in summer. In winter, heat exchangers use the cold air to produce cold water, which flows through a set of pipes in the soil block, cooling the soil. In summer, the cooling stored in the soil block is used to provide cold water for the dehumidifier, thus meeting the latent cooling loads of the building. The physical scale of the system required was found to be reasonable, relative to typical building sizes. The soil block, which does not use any valuable program space, was sized as less than 10% of the overall building size and did not require deep excavation. Winter thermal modeling showed that the soil block could be fully charged in a typical winter season. The summer thermal modeling showed that the system can meet the majority of the building cooling loads and is capable of responding to cooling peaks. The system energy use is primarily driven by the use of the heat exchangers for winter charging. The system was estimated to have a coefficient of performance of 71, which is much higher than that of comparable systems used for dehumidification. In conclusion, this feasibility study found that the proposed system shows promising results as an alternative to conventional systems and is worth further investigation.
Thesis: S.M. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 101-102).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A dedicated mechanism for forgetting : fiction and the ghosts of the plantationocene</title>
<link href="https://hdl.handle.net/1721.1/138579" rel="alternate"/>
<author>
<name>Valladares, Nancy Dayanne.</name>
</author>
<id>https://hdl.handle.net/1721.1/138579</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A dedicated mechanism for forgetting : fiction and the ghosts of the plantationocene
Valladares, Nancy Dayanne.
A Dedicated Mechanism for Forgetting narrates the process and research behind the films Botanical Ghosts and The Density of Breath, expanding on two years of my artistic research and practice. Part fiction, part epistolary exchange, A Dedicated Mechanism for Forgetting is a meditation on conceptions of vision and sensitivity to light beyond anthropocentric views. Narrating the story of Dorothy Hughes Popenoe and her encounter with the fruit of the ackee tree (Blighia sapida) at Lancetilla Botanical Gardens, the project retells histories of plant transportation and botanical exchange on the north coast of Honduras through the lens of plant agency, filmmaking, critical plant studies, and magic. Utilizing this approach, this thesis locates itself within recent botanical and speculative turns and points towards the potential of a vegetal configuration for art practice and fictioning.
Thesis: S.M. in Art, Culture and Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 75-80).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Air quality impacts of crop residue burning in India and mitigation alternatives</title>
<link href="https://hdl.handle.net/1721.1/138578" rel="alternate"/>
<author>
<name>Lan, Ruoyu,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/138578</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Air quality impacts of crop residue burning in India and mitigation alternatives
Lan, Ruoyu,
            S.M.
            Massachusetts Institute of Technology.
Crop residue burning is a leading contributor to air pollution and ill health in India. Despite current bans to curtail agricultural fires, burning persists because of a lack of alternatives that are both effective and politically viable. This thesis applies the adjoint of the GEOS-Chem regional chemistry-transport model in combination with epidemiological and economic models to inform rational decision-making. First, this thesis estimates 43,000-73,000 premature deaths in India, valued at 10-23 billion USD, attributable to exposure to ambient fine particulate matter (PM2.5) from crop residue burning, and finds that Punjab, Haryana, and Uttar Pradesh contributed the majority (83%-95%) over 2005-2016, with 35-40% of impacts occurring in densely populated areas downwind. Second, this thesis quantifies the sensitivity of these impacts to potential changes in the location and timing of burning, finding that substantial air quality benefits across India could be achieved in southeast Punjab; promoting burning earlier in the morning in November in Punjab alone could prevent up to 8,700 (95% CI: 5,700-12,000) premature deaths annually, valued at 2.2 (95% CI: 0.22-7.0) million USD. Third, this thesis compares the costs and benefits of mitigation alternatives for both the public and private sectors. The findings support the use of targeted and potentially low-cost alternatives rather than bans.
Thesis: S.M. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 49-55).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The underwater application of exothermic welding.</title>
<link href="https://hdl.handle.net/1721.1/138576" rel="alternate"/>
<author>
<name>Anderssen, Arthur Harald.</name>
</author>
<id>https://hdl.handle.net/1721.1/138576</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The underwater application of exothermic welding.
Anderssen, Arthur Harald.
Thesis: Ocean E., Massachusetts Institute of Technology, Department of Ocean Engineering, 1972; Bibliography: leaf 73.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New methods for load flow calculation without any swing bus.</title>
<link href="https://hdl.handle.net/1721.1/138572" rel="alternate"/>
<author>
<name>Yamane, Katsumi.</name>
</author>
<id>https://hdl.handle.net/1721.1/138572</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">New methods for load flow calculation without any swing bus.
Yamane, Katsumi.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1971; Includes bibliographical references.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simple coding techniques for high-frequency radio communications.</title>
<link href="https://hdl.handle.net/1721.1/138569" rel="alternate"/>
<author>
<name>Goldfein, Henry David.</name>
</author>
<id>https://hdl.handle.net/1721.1/138569</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">Simple coding techniques for high-frequency radio communications.
Goldfein, Henry David.
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering, 1968; Bibliography: leaf 38.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of a bridge deck as a plane grid.</title>
<link href="https://hdl.handle.net/1721.1/138568" rel="alternate"/>
<author>
<name>Efimba, Robert Elangwe.</name>
</author>
<id>https://hdl.handle.net/1721.1/138568</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Analysis of a bridge deck as a plane grid.
Efimba, Robert Elangwe.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1965; Bibliography: leaf 32.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedding SLIP in BCPL.</title>
<link href="https://hdl.handle.net/1721.1/138567" rel="alternate"/>
<author>
<name>Fulton, John Dix.</name>
</author>
<id>https://hdl.handle.net/1721.1/138567</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">Embedding SLIP in BCPL.
Fulton, John Dix.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaf 71.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Lagrange multiplier method for solving multi-objective linear programs</title>
<link href="https://hdl.handle.net/1721.1/138565" rel="alternate"/>
<author>
<name>Ramakrishnan, V. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/138565</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">A Lagrange multiplier method for solving multi-objective linear programs
Ramakrishnan, V. S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1990; Includes bibliographical references (leaves 50-55).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A statistical study of the strength of steel columns.</title>
<link href="https://hdl.handle.net/1721.1/138563" rel="alternate"/>
<author>
<name>Rokach, Abraham Jacob.</name>
</author>
<id>https://hdl.handle.net/1721.1/138563</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">A statistical study of the strength of steel columns.
Rokach, Abraham Jacob.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1970; Bibliography: leaf 72.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The use of nonlinear bias effects in orbital navigation.</title>
<link href="https://hdl.handle.net/1721.1/138562" rel="alternate"/>
<author>
<name>O'Donnell, Robert Peter.</name>
</author>
<id>https://hdl.handle.net/1721.1/138562</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">The use of nonlinear bias effects in orbital navigation.
O'Donnell, Robert Peter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1968; Bibliography: leaves 92-93.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An integration technique for aircraft navigation sensors.</title>
<link href="https://hdl.handle.net/1721.1/138561" rel="alternate"/>
<author>
<name>Knauf, Albert Edward, Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/138561</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">An integration technique for aircraft navigation sensors.
Knauf, Albert Edward, Jr.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1968; Lacking leaves 6, 20, 40 and 74.; Bibliography: leaves 105-118.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of many-body approximation techniques in simple non-linear coupled system of fermions and oscillators.</title>
<link href="https://hdl.handle.net/1721.1/138556" rel="alternate"/>
<author>
<name>Krishnamurthy, Venkataramanaiah.</name>
</author>
<id>https://hdl.handle.net/1721.1/138556</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Study of many-body approximation techniques in simple non-linear coupled system of fermions and oscillators.
Krishnamurthy, Venkataramanaiah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parity simulation of static power conversion systems.</title>
<link href="https://hdl.handle.net/1721.1/138555" rel="alternate"/>
<author>
<name>Medora, Noshirwan Kaikhushru.</name>
</author>
<id>https://hdl.handle.net/1721.1/138555</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Parity simulation of static power conversion systems.
Medora, Noshirwan Kaikhushru.
Thesis: Elec. E., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer simulation of macrosegregation in ESR ingots</title>
<link href="https://hdl.handle.net/1721.1/138552" rel="alternate"/>
<author>
<name>Furlong, Robert Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/138552</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Computer simulation of macrosegregation in ESR ingots
Furlong, Robert Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing maritime force structure options for the U.S. defense program.</title>
<link href="https://hdl.handle.net/1721.1/138538" rel="alternate"/>
<author>
<name>Wright, Christopher Cramer.</name>
</author>
<id>https://hdl.handle.net/1721.1/138538</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Developing maritime force structure options for the U.S. defense program.
Wright, Christopher Cramer.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1976; Bibliography: leaves 161-162.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task descriptors for automated assembly</title>
<link href="https://hdl.handle.net/1721.1/138536" rel="alternate"/>
<author>
<name>Simunović Simunović, Sergio Natalio.</name>
</author>
<id>https://hdl.handle.net/1721.1/138536</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Task descriptors for automated assembly
Simunović Simunović, Sergio Natalio.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. At head of title: T-624.; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Environmental factors affecting human and rat placental lactogen.</title>
<link href="https://hdl.handle.net/1721.1/138533" rel="alternate"/>
<author>
<name>Boulvard, Marie-Thérèse.</name>
</author>
<id>https://hdl.handle.net/1721.1/138533</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Environmental factors affecting human and rat placental lactogen.
Boulvard, Marie-Thérèse.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1975; Vita.; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A venture for art + development : examining the symbiosis relationship between China's art market and real estate industries</title>
<link href="https://hdl.handle.net/1721.1/138531" rel="alternate"/>
<author>
<name>Ni, Ruichen.</name>
</author>
<id>https://hdl.handle.net/1721.1/138531</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">A venture for art + development : examining the symbiosis relationship between China's art market and real estate industries
Ni, Ruichen.
In the past two decades in China, integrating art components, cultural institutions, and various artistic scenes into commercial real estate has become a popular development and management practice in first- and second-tier cities. The capital flow between the two industries has become increasingly frequent. Real estate companies have evolved from space providers for the arts into more influential stakeholders in the local art ecosystem, facilitating the growth of art industries. Through collaborations with real estate projects, artists find alternative ways to sustain art production as well as a wider and closer connection to the general public. Successes on both ends have encouraged the joint venture of art + real estate to continue. This thesis critically investigates forms of symbiosis between the art and real estate industries in China. The research aims to reveal and reflect on the two industries' interdependency through a close examination of the historical evolution of artist communities in cities and of new endeavors combining art and real estate. It focuses on the following questions: how do real estate and art professionals collaborate in and benefit from the joint venture? What are the motivations, ideas, and visions? Are the strategies effective? Are there issues beneath the flourishing market? Who are the winners and losers? Through site visits, interviews with selected real estate and art professionals and artists, and a gathering of secondary sources, the research categorizes and analyzes art + development collaborations into two primary forms: art districts and art placement. It zooms into two representative case studies that are publicly regarded as successful and innovative: Aranya and K11. The thesis's objective is to remind real estate developers of the value and power of art in urban developments beyond its function as a commercialized product, suggesting more ethical business thinking and offering recommendations to create a favorable art ecosystem around real estate projects.
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, February, 2021; Thesis: S.M. in Real Estate Development, Massachusetts Institute of Technology, Program in Real Estate Development in conjunction with the Center for Real Estate, February, 2021; Manuscript.; Includes bibliographical references (pages 138-148).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-destructively detecting spinodal decomposition at a distance towards developing gigahertz ultrasonics for in-vessel inspection</title>
<link href="https://hdl.handle.net/1721.1/138530" rel="alternate"/>
<author>
<name>Al Dajani, Saleem AbdulFattah Ahmed.</name>
</author>
<id>https://hdl.handle.net/1721.1/138530</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Non-destructively detecting spinodal decomposition at a distance towards developing gigahertz ultrasonics for in-vessel inspection
Al Dajani, Saleem AbdulFattah Ahmed.
Given the existential climate crisis facing the world, the lifetime and sustainability of nuclear reactors as a carbon-free source of energy depend on the susceptibility of their structural components to environmental degradation. In particular, critical components for light water reactors (LWRs) evolve over decades in service, losing ductility and toughness due to thermal and irradiation aging. Techniques to monitor their health cannot be easily applied in the field due to their destructive, expensive, or immobile nature. Thus, non-destructive evaluation (NDE) methods are sought to monitor and evaluate the health of major LWR components such as core barrels, steam generator tubes, or primary coolant pipes, and are often required by policy, such as NRC policy 10-CFR-50.65. Here we demonstrate the use of gigahertz, non-contact ultrasonics to gauge the state of cast austenitic stainless steels (CASS), used in some of the largest components in LWR primary systems. We do so by linking changes in their surface acoustic wave (SAW) characteristics, measured using transient grating spectroscopy (TGS), to transmission electron microscopy (TEM)-verified evidence of spinodal decomposition and G-phase precipitation. In this thesis, thermal aging is shown to induce SAW peak splitting in spinodally decomposed CASS alloys, correlated strongly with lowered toughness and decreased ductility. Furthermore, statistical testing on the number of SAW peak splits observed shows that the second SAW peak appears significantly more frequently, and at a significantly different frequency, than in unaged specimens. The ability of this technique to non-destructively detect microstructural degradation at a distance in a predictive manner in the case of CASS motivates extending gigahertz ultrasonics to detect other LWR material degradation modes as an in-vessel inspection technique, such as reactor pressure vessel (RPV) embrittlement. 
This allows for the greater use of NDE techniques to confidently monitor LWR structural material health to 80 years and beyond, saving costs by deferring structural replacements until needed and maximizing energy production by preventing premature decommissioning.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, February, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 107-116).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolution of low-cost airlines in different global regions</title>
<link href="https://hdl.handle.net/1721.1/138528" rel="alternate"/>
<author>
<name>Raigangar, Akash Bharat.</name>
</author>
<id>https://hdl.handle.net/1721.1/138528</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Evolution of low-cost airlines in different global regions
Raigangar, Akash Bharat.
Low-cost air carriers have been growing aggressively in many regions around the world and today offer a significant proportion of the available flight and seat capacity in the markets where they operate. This thesis studies the airline capacity evolution of low-cost carriers (LCCs) between 2009 and 2018 in several different types of market regions, covering both long- and short-haul routes as well as mature versus emerging air travel markets. Between 2009 and 2018, domestic India and domestic China had the fastest growth in total airline capacity, growing at a CAGR of 12.0% and 11.1% respectively. The more mature markets saw much slower growth - the intra-Europe region grew at a 5.6% CAGR while both the domestic US and domestic Australia regions grew at a CAGR of less than 3% during the 10-year period. The domestic India region had the highest gain in low-cost carrier share of capacity, which grew by 26 points - from 43% of domestic capacity in 2009 to 69% in 2018. The growth of low-cost carrier capacity share was much smaller or even negative in other regions: 10 points in intra-Europe (from 40% to 50%), 9 points in domestic China (from 3% to 12%), 9 points in domestic US (31% to 40%), 7 points in the transatlantic region (2% to 9%) and -22 points in domestic Australia (50% to 28%). In the mature markets analyzed (except Australia), much of the growth in the low-cost carrier sector has come from ultra-low-cost carriers adding capacity as traditional LCCs move upmarket due to increasing costs. While total capacities in the emerging markets grew rapidly, the growth has come from different sources. LCCs in India have grown significantly, primarily due to the very large demand for low-fare air travel in the country. On the other hand, much of the growth in China has come from legacy carriers; the slower growth in the Chinese LCC sector is explained by unsupportive government regulations and a lack of low-cost airport infrastructure. 
LCCs operating in the transatlantic region have also seen slow growth due to the various difficulties of operating low-cost in long haul sectors, with many LCCs going bankrupt and ceasing operations.
Thesis: S.M., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 111-116).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial robustness of deep learning models : an error-correcting codes based approach</title>
<link href="https://hdl.handle.net/1721.1/138527" rel="alternate"/>
<author>
<name>Gupta, Samarth (computation scientist)</name>
</author>
<id>https://hdl.handle.net/1721.1/138527</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Adversarial robustness of deep learning models : an error-correcting codes based approach
Gupta, Samarth (computation scientist)
Efficient operation and control of modern-day urban systems such as transportation networks is now more important than ever given the huge societal benefits involved. Low-cost network-wide sensors generate large amounts of data that must be processed to extract the information necessary for operational maintenance and real-time control. Modern Machine Learning (ML) systems, particularly Deep Neural Networks (DNNs), provide a scalable solution to the problem of information retrieval from sensor data. Deep Learning systems therefore play an increasingly important role in the day-to-day operations of our urban systems and can no longer be treated as standalone systems. This naturally raises questions from a security viewpoint. Are modern ML systems robust to adversarial attacks for deployment in critical real-world applications? If not, then how can we make progress in securing these systems against such attacks? In this thesis we first demonstrate the vulnerability of modern ML systems in a real-world scenario relevant to transportation networks by successfully attacking a commercial ML platform using a traffic-camera image. We review different methods of defense and the various challenges associated with training an adversarially robust classifier. In terms of contributions, we propose and investigate a new method of defense that builds adversarially robust classifiers using Error-Correcting Codes (ECCs). The idea of using Error-Correcting Codes for multi-class classification has been investigated in the past, but only under nominal settings. We build upon this idea in the context of the adversarial robustness of Deep Neural Networks. Following code-book design guidelines from the literature, we formulate a discrete optimization problem to generate codebooks in a systematic manner. This optimization problem maximizes the minimum Hamming distance between codewords of the codebook while maintaining high column separation. 
Using the optimal solution of the discrete optimization problem as our codebook, we then build a (robust) multi-class classifier from that codebook. To estimate the adversarial accuracy of ECC-based classifiers resulting from different codebooks, we provide methods to generate gradient-based white-box attacks. We discuss the estimation of class probability estimates (or scores), which are themselves useful for real-world applications, along with their use in generating black-box and white-box attacks. We also discuss differentiable decoding methods, which can likewise be used to generate white-box attacks. We are able to outperform the standard all-pairs codebook, providing evidence that compact codebooks generated using our discrete optimization approach can indeed provide high performance. Most importantly, we show that ECC-based classifiers can be partially robust even without any adversarial training. We also show that this robustness is not simply a manifestation of the large network capacity of the overall classifier. Our approach can be seen as a first step towards designing classifiers that are robust by design. These contributions suggest that an ECC-based approach can be useful for improving the robustness of modern ML systems, thus making urban systems more resilient to adversarial attacks.
Thesis: S.M. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; Manuscript.; Includes bibliographical references (pages 77-80).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial variability of peat subsidence in Southeast Asia by land use</title>
<link href="https://hdl.handle.net/1721.1/138521" rel="alternate"/>
<author>
<name>VanHemel, Amber R.</name>
</author>
<id>https://hdl.handle.net/1721.1/138521</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Spatial variability of peat subsidence in Southeast Asia by land use
VanHemel, Amber R.
Variograms are a valuable tool for geospatial analysis as they describe how data are correlated as a function of separation distance. Their application proves valuable not only for analyzing variations in environmental parameters such as peat subsidence in two-dimensional space, but also for understanding the factors governing these variations. With the deforestation and drainage of wetlands in Southeast Asia, there has been a gradual sinking across large areas of land (i.e., subsidence). Exposed carbon-rich peat oxidizes and subsequently subsides, emitting carbon dioxide into the atmosphere. Interferometric Synthetic Aperture Radar (InSAR) satellite data coupled with geostatistical analysis have been used to obtain key information on how subsidence varies at a fine spatial resolution. This project used gridded datasets from 8 sites across Indonesia and Malaysia to quantify the variance and spatial continuity of subsidence by land use type at a short spatial scale (1 kilometer). Subsidence rates in smallholder areas and moderately degraded peat swamp forest were found to be autocorrelated within a few hundred meters, with ranges of 270 ± 30 m and 230 ± 40 m, respectively.
Thesis: M. Eng. in Environmental Engineering Science, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, February, 2020; "February 2020." Manuscript.; Includes bibliographical references (pages 32-33).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contributions to automatic meshing in the AMORE scheme</title>
<link href="https://hdl.handle.net/1721.1/138520" rel="alternate"/>
<author>
<name>Foo, Angus.</name>
</author>
<id>https://hdl.handle.net/1721.1/138520</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Contributions to automatic meshing in the AMORE scheme
Foo, Angus.
Traditional Finite Element (FE) analysis requires the discretisation of continuous bodies into connected meshes of triangle and quadrilateral elements (in 2D; tetrahedral [tet] and hexahedral [hex] elements in 3D). Besides the restrictions due to the compatibility of adjacent elements, one primary concern in mesh generation is minimizing the distortion of elements and the number of distorted elements so as to reduce the discretisation error. This has generally steered research in 2D mesh generation techniques away from grid-based methods, which tend to generate significant numbers of distorted elements; additionally, such methods are generally not considered at all in 3D mesh generation. Furthermore, significant numbers of man-hours are spent during the meshing phase of FE analyses to partition geometry and prescribe element types, where the ability to mesh portions of the geometry with hex elements is preferred over using tet elements. The recent advances in the theory of Overlapping Finite Elements (OFE) now allow for the use of distorted elements without compromising the accuracy of the FE analysis. However, a trade-off arises because more degrees of freedom (DOFs) are required at triangular (and tetrahedral) nodes. We propose the reintroduction of optimised 2D grid-based mesh generation techniques to decrease the DOFs in a way that is generalizable to arbitrary 3D geometries, as a step towards a truly automated meshing paradigm, referred to as Automatic Meshing with Overlapping and Regular Elements (AMORE), which requires minimal-to-no input from the engineer.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, February, 2020; Manuscript.; Includes bibliographical references (pages 75-78).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to trust in forecast information sharing</title>
<link href="https://hdl.handle.net/1721.1/138519" rel="alternate"/>
<author>
<name>Zhang, Pengbo,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/138519</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Learning to trust in forecast information sharing
Zhang, Pengbo,
            S.M.
            Massachusetts Institute of Technology.
This thesis follows and extends the discussion of Özer et al. (2011) on trust in forecast information sharing. We propose a method for belief learning and updating. The effects of production cost (which indicates the risk) and market uncertainty (which indicates the accuracy of the private information) are analyzed quantitatively. Since the complicated Nash equilibria from traditional game-theoretic analysis often fail in real-life scenarios, we formulate simpler assumptions so that the strategies of both sides are not complicated. We compare the similarities and differences between the structure of our model and the structures of other behavioral models related to bounded rationality or cheap talk. We characterize how the supply chain environment changes trust and decisions. We find that initial beliefs do not matter because they will be quickly adjusted by the market: the limiting behavior, as t → ∞, depends only on the retailer's trustworthiness and the supply chain environment. Since the retailer's trustworthiness and belief are unobservable, we perform latent profile analysis to fit the model on the experiment conducted by Özer et al. (2011), and test the end-game effect and out-of-sample fit.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, September, 2019; Manuscript.; Includes bibliographical references (pages 93-94).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of neurogranin in modulating contextual memory and plasticity : FMRP involvement and adrenergic-dependent facilitation</title>
<link href="https://hdl.handle.net/1721.1/138518" rel="alternate"/>
<author>
<name>Templet, Sebastian
            (Sebastian Boyd)</name>
</author>
<id>https://hdl.handle.net/1721.1/138518</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The role of neurogranin in modulating contextual memory and plasticity : FMRP involvement and adrenergic-dependent facilitation
Templet, Sebastian
            (Sebastian Boyd)
Activity-dependent changes in neuronal properties (neuronal plasticity) are critical for information processing and storage in the brain. It is well established that protein synthesis is essential for both memory formation and the long-lasting changes in synaptic strength that accompany learning. However, it is still unclear when protein synthesis needs to occur relative to the experience to form durable memories, and the identities and roles of crucial proteins in these processes have not been elucidated. Neurogranin, a small protein that regulates calcium-dependent signaling, is poised to modulate both memory and synaptic plasticity. This thesis aims to provide insights into the molecular underpinnings mediating context memory formation in the hippocampus. By combining molecular, behavioral, pharmacological, and viral manipulations, we assessed the role of neurogranin in hippocampal memory formation and synaptic plasticity. We observed a rapid, activity-dependent upregulation of neurogranin mediated by FMRP. Neurogranin was found to be regulated by the adrenergic system, and our data suggested a role in the adrenergic-mediated enhancement of memory formation and of a form of synaptic plasticity known as long-term potentiation. These findings strongly suggest that neurogranin plays an important role in regulating memory and synaptic plasticity.
Thesis: S.M. in Neuroscience, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, February, 2020; Manuscript.; Includes bibliographical references (pages 49-57).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A systems approach to tracing the evolution of MIT's campus from 1920-2020</title>
<link href="https://hdl.handle.net/1721.1/138513" rel="alternate"/>
<author>
<name>De Filippi, J. Roland.</name>
</author>
<id>https://hdl.handle.net/1721.1/138513</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A systems approach to tracing the evolution of MIT's campus from 1920-2020
De Filippi, J. Roland.
MIT offers many unique opportunities to its students. I chose to take a path less traveled and investigate, from a systems view, the evolution of MIT's main campus by considering its population, finances, spaces, and their purpose in an integrated way. Population, building, and financial data dating back to 1940 were integrated. MIT's main campus, opened in 1916 on a 50-acre site along the Charles River in Cambridge, Massachusetts, has grown in the past century from a campus of 978,000 to 11,261,000 square feet, or a factor of approximately 11.5. The population has grown from 2,374 students, 117 faculty, and an estimated 726 staff to 11,574 students, 1,056 faculty, and 11,651 staff, or a factor of approximately 7.5. From 1940 to 1946, research expenditures per faculty member, in 2019 dollars, grew from $1,470/year to $231,000/year. By 2019 this number was $740,000/year/faculty member. A structured organization of the data into decade-length time periods and detailed analysis of this data confirm the hypothesis that a correlation exists between population and funding, as educational and research activities drive the building of functional space as a supply to meet this evolving demand. These data also show an evolution from a university mission of training engineers (mens et manus, minds and hands) to a mission of state-of-the-art research to advance both science and industry. While the findings are conclusive, they are not so strong as to offer a predictive capability; the future history of MIT's campus is yet to be written. However, the systems analysis presented here should assist in creating realistic scenarios that are grounded in validated ratios of population (faculty, students, and staff), finances, spaces, and activities that are all linked to each other.
I detail possible directions for further research that might strengthen the relationships across the campus-wide systems model so that it can be used to predict, or at least bound, future scenarios based on varying demand inputs. In light of the coronavirus disease 2019 (COVID-19) pandemic, a systems-level understanding of how changes to the campus population and/or funding create emergent changes to space needs offers MIT's planners the ability to respond more quickly and with greater accuracy.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. Vita.; Includes bibliographical references (pages 145-146).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The future of fashion &amp; human gesture control : exploration of a wearable communication device for sign language speakers</title>
<link href="https://hdl.handle.net/1721.1/138512" rel="alternate"/>
<author>
<name>Booker, Dextina Alana.</name>
</author>
<id>https://hdl.handle.net/1721.1/138512</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The future of fashion &amp; human gesture control : exploration of a wearable communication device for sign language speakers
Booker, Dextina Alana.
This thesis is a design research project which began as an exploration of how to leverage clothing to further enhance capabilities and human-computer interaction. Along the way, this objective evolved with the help of continuous refinement of three key frames of reference: the experience of people with hearing impairments in India, the intersection of fashion and technology, and the concept of universal design. The purpose of this research was to uncover the needs of a communication system for translating sign language, through the case study of people who need to translate spoken languages discreetly. The latent need of people with hearing loss provided the framework for further study of communication between spoken languages, which could benefit a broader audience. Wearables allow users to communicate without interruption and have become a pervasive fashion statement, with over 526 million connected wearable devices. Gesture control technology sensors, including EMG sensors, accelerometers, cameras, and flex sensors, can detect a range of gestures through their sensing elements. Each sensor has the potential to meet the needs uncovered in this thesis, but each also has limited capabilities. This thesis provides the requirements of a wearable gesture-controlled translation solution for the context of people with hearing loss in India, using a humanistic co-design (human-centered design+) methodology. It also addresses the ethical implications of a solution and the potential for erasure of deaf culture, as well as the potential to create purpose and job opportunities and to increase the quality of life for people with hearing loss.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (pages 64-71).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling the enablers : transforming the lives of middle-aged Indian women</title>
<link href="https://hdl.handle.net/1721.1/138511" rel="alternate"/>
<author>
<name>Bansal, Nikita.</name>
</author>
<id>https://hdl.handle.net/1721.1/138511</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Enabling the enablers : transforming the lives of middle-aged Indian women
Bansal, Nikita.
Do you remember how your mother would let go of her hobbies and make yours her own? Do you remember how she left her job to devote her full time to you and her husband? If your mother is a homemaker, do you acknowledge and appreciate the work she does for you? She knows about your dreams and goals; have you ever asked her about hers? It is easy to say she does not have confidence, but have you wondered why? In India, of 220 million middle-aged women, 77 million, despite living in urban areas, have still been unable to follow their dreams and live a truly passionate life. Government, NGOs, startups, and awareness camps are all in place for the younger female population, but have we forgotten the population which was once young but deprived of such resources during their youth? Do they have your support and ears to now acknowledge their inner aspirations, which they could not at your age for lack of freedom in those days? In this thesis, I have explored these questions and found that the ecosystem of a middle-aged Indian woman consists chiefly of her husband and children. For anyone to really impact her life, an initiative needs to reach her home and engage her family in a playful, gender-neutral manner. I took the route of family board games to raise the collective consciousness of the family and nudge them to engage in deeper conversations. The games are designed to initiate reflective dialogues among the family members on topics ranging from what each of them really wants from life and how they can best support each other in achieving those dreams, to debunking the stereotypes and stigmas in society in a gentle and playful manner.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from PDF version of thesis.; Includes bibliographical references (page 72).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multivariable control of a twin lift helicopter system (TLHS) using the LQG/LTR design methodology</title>
<link href="https://hdl.handle.net/1721.1/132910" rel="alternate"/>
<author>
<name>Rodriguez, Armando Antonio.</name>
</author>
<id>https://hdl.handle.net/1721.1/132910</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Multivariable control of a twin lift helicopter system (TLHS) using the LQG/LTR design methodology
Rodriguez, Armando Antonio.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Bibliography: p. 281-282.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mentoring and developmental relationships between senior executive women and junior female managers</title>
<link href="https://hdl.handle.net/1721.1/132909" rel="alternate"/>
<author>
<name>Fischl, Patricia W.</name>
</author>
<id>https://hdl.handle.net/1721.1/132909</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Mentoring and developmental relationships between senior executive women and junior female managers
Fischl, Patricia W.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 193-197.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New product forecasting of appliance and consumables : bass model</title>
<link href="https://hdl.handle.net/1721.1/132906" rel="alternate"/>
<author>
<name>Babu, Keval
            (Keval Vipul)</name>
</author>
<id>https://hdl.handle.net/1721.1/132906</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">New product forecasting of appliance and consumables : bass model
Babu, Keval
            (Keval Vipul)
Drinkworks is a joint venture between Anheuser-Busch and Keurig Green Mountain Inc. that has been formed to develop and launch a one-of-a-kind in-home alcohol drink system. One of the major challenges faced by Drinkworks before launching its new product was to predict its demand throughout the different stages of the product life cycle. Being an emerging organization, Drinkworks needed a systematic demand planning tool to generate baseline strategic and operational forecasts of its new product. These baseline forecasts would be further used as a starting point for sales and operations planning, production planning, and material resource planning. This thesis project focuses on selecting appropriate mathematical models to forecast demand of a new product throughout its life cycle. It concentrates on the use of the Bass model to forecast the demand for Drinkworks' appliance in the initial launch phase with minimal market knowledge. This thesis also explains the methodology for forecasting pod consumption using the average consumption rate per appliance, cumulative appliances sold to the retailers, cluster analysis, and the appliance forecast.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2018; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 57-58).
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mass production readiness of a hardware start-up : assessing and improving product designs for manufacturing and assembly</title>
<link href="https://hdl.handle.net/1721.1/132905" rel="alternate"/>
<author>
<name>Bhakuni, Abhimanyu Singh.</name>
</author>
<id>https://hdl.handle.net/1721.1/132905</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">Mass production readiness of a hardware start-up : assessing and improving product designs for manufacturing and assembly
Bhakuni, Abhimanyu Singh.
The US food and restaurant industry is witnessing a step change due to recent advancements in smart automation. Increasing labor costs, rising living costs, and a shortage of skilled labor are forcing restaurateurs to look for alternatives to remain in operation and maintain profit margins. Industrial robots, equipped with artificial intelligence and machine learning capabilities, are now penetrating commercial kitchens. The idea is to establish a symbiotic and highly efficient process flow between humans and robots in order to increase throughput, enhance food quality, and improve customer experience. Miso Robotics, a robotics start-up based in Pasadena, is one of the leading change makers in this direction, with a vision to create a kitchen of the future integrated fully with robotic kitchen assistants. The company, through its flagship product Flippy, has demonstrated the capability to emulate human cooking behavior by making hamburgers. It also recently added frying capability to Flippy, which addresses a huge market need across the globe. In the summer of 2018, three graduate students from MIT worked with the team at Miso Robotics and consulted on challenges related to product design, manufacturing, and scalability. This thesis explores some of the challenges faced by a start-up during its production scale-up phase. It does so by presenting three case studies and emphasizing the importance of design for manufacturing and assembly, 3D printing, and knowledge of existing solutions before solving a problem.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2018; Cataloged from the PDF version of thesis. "September 2018." "Due to the condition of the original material, there are unavoidable flaws in this reproduction. We have made every effort possible to provide you with the best copy available."--Disclaimer page.; Includes bibliographical references (pages 66-67).
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring key barriers to consumer adoption of meat analogues : meat attachment and cultural identity</title>
<link href="https://hdl.handle.net/1721.1/132904" rel="alternate"/>
<author>
<name>Ortiz-Luis, Lara
            (Larisse-Ann Yee)</name>
</author>
<id>https://hdl.handle.net/1721.1/132904</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Exploring key barriers to consumer adoption of meat analogues : meat attachment and cultural identity
Ortiz-Luis, Lara
            (Larisse-Ann Yee)
The industrial meat production system has large-scale environmental impacts, depleting natural resources such as water and land, emitting dangerous greenhouse gases, and negatively affecting human health. The inefficiencies of converting plant matter into animal meat are particularly pronounced for beef. Despite these effects, demand is on the rise across the world. Over the past five years, new companies have produced sophisticated meat analogues in the form of plant-based and cultured proteins, with a value proposition of keeping meat's taste and cost while decreasing environmental impact. Barriers in pricing, technology, and distribution are currently top of mind for businesses competing in this new industry. Through a literature review, this paper investigates another crucial barrier to consumer adoption: psychological meat attachment and cultural food identity. I then propose an experimental study to test the hypothesis that a matched identity frame (i.e., masculine framing) could induce higher willingness to substitute a plant-based meat option for a conventional meat option of a given dish.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, May, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 56-62).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-classification and object detection in intelligent manufacturing</title>
<link href="https://hdl.handle.net/1721.1/132903" rel="alternate"/>
<author>
<name>Yu, Kaili,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132903</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Multi-classification and object detection in intelligent manufacturing
Yu, Kaili,
            S.M.
            Massachusetts Institute of Technology.
Defect detection in industry is typically conducted manually. While there are state-of-the-art machine vision techniques for automated inspection systems, there is still a gap between research advancement and practical applications, especially for manufacturers with high volume and low margin. This thesis aims to develop a computer vision system for automated galvanized steel tube defect detection. Based on images collected from a Japanese steel tube producer, multiple methods were explored and tested. First, Inception-v4 was used as an image classification model. Its performance was tested on an online dataset, then on our own cleaned dataset. Next, since classification only labels a whole image, object detection algorithms were used to indicate locations as well as the defect class. Several object detection algorithms were adopted and compared: Faster R-CNN, YOLO v4, and YOLO v5. They achieved mAP@0.5 of 94.31%, 95.22%, and 75.5% respectively, and recall rates of 67%, 89%, and 73.5% respectively, which demonstrated promising results for applications on the production line. However, the results were primarily limited by the quantity and quality of images. Future work could focus on advanced data augmentation, further cleaning of the collected data, and improvement in raw image quality. Furthermore, the algorithms need to be validated at real-time inspection speed and on more classes of defects.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 92-97).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multidisciplinary design optimization of part geometry in CAD</title>
<link href="https://hdl.handle.net/1721.1/132902" rel="alternate"/>
<author>
<name>Mimery, David Richard.</name>
</author>
<id>https://hdl.handle.net/1721.1/132902</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Multidisciplinary design optimization of part geometry in CAD
Mimery, David Richard.
Multidisciplinary design optimization (MDO) is the process of searching for designs which best satisfy a set of objectives, while respecting that the appropriateness of a design is dependent on more than just a single discipline. Traditionally, MDO has been conducted on a higher system level where there is less granular technical detail and more general abstractions applied to the design. With the advances of modern computing and commercially available software tools, it is now possible to conduct MDO at a more granular level such that the part geometry in CAD can be guided by the process. In this thesis, an MDO workflow is created for purposes of improving the design of a manufactured baseplate. Off-the-shelf commercial software tools (ANSYS and OptiSLang) are used to develop the modular workflow, through an iterative improvement process. Final designs are successfully generated from the workflow and compared to baseline and reference designs, with respect to the temperature, cost and frequency properties related to the design. The results obtained in this project demonstrate the potential for MDO to improve the way in which product design and development is conducted in the future.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 93-94).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-free tracking control of an optical fiber drawing process using deep reinforcement learning</title>
<link href="https://hdl.handle.net/1721.1/132901" rel="alternate"/>
<author>
<name>Kim, Sangwoon,
            (Mechanical engineer)
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132901</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Model-free tracking control of an optical fiber drawing process using deep reinforcement learning
Kim, Sangwoon,
            (Mechanical engineer)
            Massachusetts Institute of Technology.
A deep reinforcement learning (DRL) approach for tracking control of an optical fiber drawing process is developed and evaluated. The DRL-based control is capable of regulating the fiber diameter to track either steady or varying reference trajectories in the presence of stochasticity and the non-linear delayed dynamics of the system. With about 3.5 hours of real-time training, it outperformed other control models such as open-loop control, proportional-integral (PI) control, and quadratic dynamic matrix control (QDMC) in terms of diameter error. Unlike model-based approaches such as the linear-quadratic regulator (LQR) or model predictive control (MPC), it does not require an analytical or numerical model of the system dynamics. It can also track reference trajectories that it never experienced during training.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 73-76).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A case study of multidisciplinary design optimization implementation process management</title>
<link href="https://hdl.handle.net/1721.1/132900" rel="alternate"/>
<author>
<name>Yazbeck, Antoine.</name>
</author>
<id>https://hdl.handle.net/1721.1/132900</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A case study of multidisciplinary design optimization implementation process management
Yazbeck, Antoine.
Multidisciplinary Design Optimization (MDO) has been developed over the last decades in the aircraft industry with the aim of optimizing complex products while cutting costs and product development time. Despite this, MDO still has not propagated through industry to become common practice. There are several reasons for this, including the lack of new graduates educated in these topics. From an organizational and management perspective, there is often a lack of understanding of what is involved in an MDO implementation, which is a further deterrent. In complex designs, different systems from a multitude of engineering disciplines are interdependent. This stresses the importance of involving various domain experts in the design process to improve the design from diverse engineering perspectives. Involving more engineers in the design process early on raises the challenges of collaboration, known to be an important barrier to MDO implementation in industry. Another barrier is the unavailability and scarcity of MDO experts: those who understand the MDO process and know the implementation tasks involved from both an engineering and a management perspective. To address these implementation challenges, this thesis develops an MDO framework using ANSYS software. The process for planning, implementing, evaluating, and improving the workflow is described in detail. In this way, this thesis can serve as a "How-To" guide. Furthermore, an examination is conducted of the impact of varying the fidelity of models and simulations on the final optimized designs. A real industry example of a manufactured electronics case is used throughout the thesis; however, the framework and approach are presented in general terms so that they may be applied to other commercial software platforms or solutions.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 77-78).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classification on real-time videos of galvanized steel surface defect using support vector machines and convolutional neural network, based on data created by generative adversarial networks</title>
<link href="https://hdl.handle.net/1721.1/132899" rel="alternate"/>
<author>
<name>Lemoine, Gauthier Bruno Pierre Jacques.</name>
</author>
<id>https://hdl.handle.net/1721.1/132899</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Classification on real-time videos of galvanized steel surface defect using support vector machines and convolutional neural network, based on data created by generative adversarial networks
Lemoine, Gauthier Bruno Pierre Jacques.
With the current surge of Industry 4.0 in high-end technology industries, enabling complete digitalization and machine-to-machine interaction, and with the growing accessibility of its techniques, commodity-based industries are now attracted by its associated benefits, such as higher flexibility, faster troubleshooting, and increased productivity and quality. In this vein, this project explores the use of video images to identify surface defects on galvanized steel tubes in real time during production. To meet the criteria of accuracy, robustness, and speed, a conventional Support Vector Machine was first tested; it proved to be moderately accurate at 91% and moderately robust, while satisfying the real-time constraint. To increase accuracy, different conventional and custom architectures of Convolutional Neural Networks were then used, through both transfer learning and learning from scratch, and showed higher robustness and accuracy at 98% but lower speed. To decrease the inference time, techniques such as pruning and binarization were tested. While the binarized architecture showed a significant drop in accuracy, pruning achieved a 30% compression ratio at the same accuracy. In parallel, to increase robustness, different Generative Adversarial Network architectures were designed to generate synthetic images of the defects to enrich the datasets. It was then shown that mixed synthetic datasets increased the robustness of the CNN classification models.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 66-68).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fault detection in a continuous production line using adaptive control chart limits</title>
<link href="https://hdl.handle.net/1721.1/132898" rel="alternate"/>
<author>
<name>Wilson, Sara M.
            (Sara Mae)</name>
</author>
<id>https://hdl.handle.net/1721.1/132898</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Fault detection in a continuous production line using adaptive control chart limits
Wilson, Sara M.
            (Sara Mae)
The fourth industrial revolution, known as Industry 4.0, has emerged in the past few decades. With its focus on digitization and interconnectivity between devices, data collection, and operator behavior, implementing Industry 4.0 in a factory gives manufacturers the ability to monitor manufacturing processes in real time. By monitoring processes in real time, operators can boost productivity and reduce waste by identifying issues in the manufacturing line faster and more frequently. This research was based on work completed at Industrial ML (IML), a Cambridge-based machine learning company that offers real-time production and quality monitoring to factories via its platform. The data used is from the manufacturing line of one of IML's clients, Industrial Steel, based in Japan. This thesis presents a comprehensive method for analyzing equipment data from a manufacturing line to determine which process control charts and equations are best suited for real-time monitoring of the line. By evaluating the performance of X-Bar charts, regressions, and S charts in monitoring the various processes on the Industrial Steel manufacturing line, a new monitoring method was created. This method utilizes S charts with 95th and 99th percentile limits calculated from historical data as upper limits, and no lower limits, to accommodate the low-variance nature of many processes. The method's efficacy was tested by calculating the fraction of points from numerous long periods of continuous production (8 hours or more) that lay within these historical percentile limits. For the variables analyzed, the percentile limits contained 95-99% of the data points. Some of the data ranges showed a higher variance in the sensor data; a set of higher-variance limits was set for these ranges. A set of process control rules, adapted from the WECO rules, was established to guide how to determine out-of-control points on these S charts with percentile limits.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2020; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 103-105).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems approach for evaluating the transitioning oil and gas commercial market</title>
<link href="https://hdl.handle.net/1721.1/132897" rel="alternate"/>
<author>
<name>Williams, Caitlin
            (Caitlin Louise)</name>
</author>
<id>https://hdl.handle.net/1721.1/132897</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Systems approach for evaluating the transitioning oil and gas commercial market
Williams, Caitlin
            (Caitlin Louise)
The United States retail industry will continue to create value for Supermajors with branded retail networks. Regulation requiring efficiency improvements and the distribution of lower-emission fuel substitutes will require Supermajors to evolve to maintain their competitive positions in the market. Supermajors' ability to reliably produce energy at scale and their growing capabilities in optimizing their business through digital applications uniquely position them to succeed in the future. Supermajors should look at regulation as an opportunity to grow profitability. Supermajors' ability to understand lower-emission energy systems in the context of their legacy assets will be critical to delivering financial results in the future. Technological advancements among lower-emission transportation energy substitutes, like electricity and hydrogen, present an opportunity for Supermajors to diversify their fuel offerings to meet future transportation energy needs. Supermajors should be cautious of early investment in these alternatives considering the financial risk, but should recognize the potentially greater risk of failing to act in time. Supermajors' retail networks provide the optimal platform to improve their corporate image. Supermajors consistently highlight the actions they are taking to develop lower-emission alternatives and the contributions they make to the communities in which they operate. However, Supermajors should also consider targeting the customer experience offered by their brand, given the success Independents have experienced by employing that strategy. This appears to be a more effective approach than placing emphasis on fuel quality advantages.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 93-111).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The potential for plant-based Meat in Africa - a proposed new approach using a system design methodology</title>
<link href="https://hdl.handle.net/1721.1/132896" rel="alternate"/>
<author>
<name>Smith, Thomas Llewellin.</name>
</author>
<id>https://hdl.handle.net/1721.1/132896</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">The potential for plant-based Meat in Africa - a proposed new approach using a system design methodology
Smith, Thomas Llewellin.
This thesis explores the potential application of new plant-based protein technologies to Sub-Saharan Africa. It demonstrates the use of a system design methodology to evaluate, assess and select a new approach to protein production. This is an important topic, because global protein production systems are under pressure to reduce their environmental footprint. It is an interesting topic right now, because new protein technologies are emerging which have the potential to soon disrupt industrial livestock farming. The system design approach means thinking of protein production as a system consisting of individual parts (farms, value chain, retail outlets etc.) and their interactions, which together deliver value to the protein consumers. The stakeholders and users of the system are analyzed in order to understand and prioritize their needs in terms of the system goals. This approach allows us to creatively examine the individual parts for alternatives, whilst assessing expected system performance in terms of the overall value delivered over time. The thesis focuses on Africa's fast-growing and fast-urbanizing populations with their growing demand for protein. A common operating factor is malnourished populations, due to diets based on low-quality plant sources, and existing protein production systems which are inefficient, unsustainable and harming the environment. The work thoroughly analyses published research on the technical and operational aspects of new and old protein production. Interviews were conducted with experts in both protein and Africa. The comparison of new techniques for producing proteins suggests that new plant-based methods have the most immediate potential. 
The proposed system is based upon three simple ideas, which together lead to an interesting outcome:
-- Product Platform Architecture: firstly, the product should use a platform architecture in order to keep development costs low, and yet allow the product to be adapted to different local markets in Africa.
-- Franchise Model: the best way to achieve scale is to work with local entrepreneurs through franchising, an approach which enables allocating responsibilities and risks within the system hierarchy.
-- Lean Operating Model: finally, the operating entity has to be exceptionally lean by design, in order to ensure an affordable product for consumers, an idea known as a Base of the Pyramid (BOP) strategy.
A case study of Southern Nigeria illustrates the concept.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 55-58).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maximizing value creation in agile sprints</title>
<link href="https://hdl.handle.net/1721.1/132895" rel="alternate"/>
<author>
<name>Thekkupadam Narayanan, Nithin.</name>
</author>
<id>https://hdl.handle.net/1721.1/132895</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Maximizing value creation in agile sprints
Thekkupadam Narayanan, Nithin.
Agile software development principles prioritize the delivery of value through working software. Earlier value creation is preferred to reduce the time to market and obtain customer feedback sooner. A challenge in planning agile sprints to achieve value upfront is the tension that exists between the value, resources, size of each feature or story deliverable, and the dependencies among them. While the role of effort and resource constraints in value creation has been studied extensively, the role of dependencies has not been fully addressed in the agile context. In this thesis, we propose a framework to improve value delivery in agile software development by decoupling cyclic dependencies to achieve more robust multi-sprint plans in a scaled agile environment. We analyze this novel approach using an arbitrary test dataset to demonstrate how different decoupling methods yield different value trajectories. We also suggest an optimization method to maximize such value creation through sequencing by simultaneously considering timing, dependencies, and resource allocation. We apply a brute-force optimization approach to the test dataset to demonstrate how more rapid value creation can be achieved over multiple sprints.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 26-28).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Connecting the military radiofrequency capability ecosystem : an industry platform approach to deliver at the speed of relevance</title>
<link href="https://hdl.handle.net/1721.1/132894" rel="alternate"/>
<author>
<name>Robinson, Joseph B.
            (Joseph Brian)</name>
</author>
<id>https://hdl.handle.net/1721.1/132894</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Connecting the military radiofrequency capability ecosystem : an industry platform approach to deliver at the speed of relevance
Robinson, Joseph B.
            (Joseph Brian)
The 2018 United States (U.S.) National Defense Strategy identifies the critical need for the U.S. Department of Defense (DoD) to "deliver performance at the speed of relevance." This thesis asks the question of "how" the U.S. military can deliver radiofrequency (RF) spectrum capabilities at the speed of relevance. RF capabilities provide critical DoD functions and are increasingly important for military operations. However, the RF spectrum continues to become more congested and contested; military capabilities must continue to perform in a growing Volatile, Uncertain, Complex, and Ambiguous (VUCA) world. This thesis explores military and industry stakeholders' current systemic challenges in rapidly delivering RF systems. A combination of literature review, stakeholder interviews, and a web-based survey are used to analyze the RF capability ecosystem. The thesis 1) presents a set of challenges identified to "deliver [RF capabilities] at the speed of relevance" and 2) evaluates how an industry platform approach can address these challenges. Stakeholder interviews and survey results show that most problems are based on challenges in acquisition, knowledge, and the use of standards. Additionally, the results show that though nearly all respondents identified value in transaction (97%) and innovation (99%) platforms, the value delivered in the web-based survey was not sufficient to generate network effects. Ten industry platform use cases are analyzed with a final recommendation to test the platform strategy through a hybrid industry platform prototype focused on delivering flexible, multi-function RF capabilities.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 201-209).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Blockchain-as-a-service : the effect of cloud computing and vice-versa</title>
<link href="https://hdl.handle.net/1721.1/132893" rel="alternate"/>
<author>
<name>Nwachukwu, Tochi.</name>
</author>
<id>https://hdl.handle.net/1721.1/132893</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Blockchain-as-a-service : the effect of cloud computing and vice-versa
Nwachukwu, Tochi.
A blockchain is a distributed database or ledger of validated and verified records of transactions and exchanges executed between shared parties participating in the chain. Blockchain is intrinsically the technology that enables decentralized cryptocurrencies like Bitcoin and Ethereum. Recently, public cloud providers like Microsoft (Azure), Amazon (AWS), and IBM have moved to provide service platforms to enable enterprises, governments, and consumers to build and deploy secure blockchain networks. Across common themes such as cost, performance, scalability, identity, privacy, and security, this thesis aims to qualitatively evaluate the effect of blockchain technology on public cloud offerings, and vice versa. A cloud environment is not strictly necessary to participate in a blockchain. However, the use of cloud computing makes participation much easier and more seamless than conventional, on-premise solutions. A blockchain networks a large number of nodes and is only as strong as its weakest link. Cloud computing helps to address potential issues of scalability, security, consistency, and performance. In evaluating the role of the cloud in the ease of blockchain integration, this thesis also addresses areas where concerns like cost could help drive more dynamic cloud offerings, and where the cloud could play a part in driving blockchain adoption.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis. "February 2021."; Includes bibliographical references (pages 77-79).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deal rationales for technology M&amp;A : an analysis of the two year value generated</title>
<link href="https://hdl.handle.net/1721.1/132892" rel="alternate"/>
<author>
<name>Manyala, Sucharitha.</name>
</author>
<id>https://hdl.handle.net/1721.1/132892</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Deal rationales for technology M&amp;A : an analysis of the two year value generated
Manyala, Sucharitha.
The technology sector is driven by rapid innovations, the pace and magnitude of technological changes and complexities, and a reliance on specialized skills and expertise. Not all firms are able to organically develop all the technologies and capabilities they need to stay competitive. Mergers and acquisitions (M&amp;A) give buyers looking to achieve strategic goals an alternative to organic growth. Technology companies have often pursued M&amp;A as a means to acquire new technology, as an alternative to organic technology development. Strategic motives such as broadening scope, achieving cost synergies, getting access to skills or technologies faster, and several others drive companies to engage in M&amp;A. These motives are widely reported in the literature and are deemed to be critical in improving an organization's financial performance and increasing shareholder value. This thesis summarizes a subset of these motives or deal rationales that are most prominent in the technology sector, analyzes a sample of past technology deals between 2008 and 2018, and categorizes them based on the strategic intent behind the deal-making. This study examines the long-term value these motives generate in each category by measuring and comparing financial metrics pre- and post-merger. Various financial metrics like quick ratio, CAGR (compound annual growth rate), and total shareholder return (TSR) are analyzed in the process, and ultimately TSR has been chosen to measure the long-term value of the deals in this study. This study concludes that "Cross-Selling" and "Acquiring technical capabilities" have the highest probability of returns and the highest success rates among the deal rationales studied.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data driven artificial intelligence techniques in renewable energy system</title>
<link href="https://hdl.handle.net/1721.1/132891" rel="alternate"/>
<author>
<name>Ning, Ke.</name>
</author>
<id>https://hdl.handle.net/1721.1/132891</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Data driven artificial intelligence techniques in renewable energy system
Ning, Ke.
Today's power grid is composed of different kinds of distributed energy resources (DER) such as solar panels, wind farms, batteries, and power transformers. DERs often come with data interfaces and IoT sensors which generate large amounts of data. Besides monitoring device status, those data can be utilized to improve system efficiency and generate additional value. This thesis examines, from a technical perspective, the benefits of technologies that apply AI algorithms to the growing volume of DER data. First, a new field following IoT technology, called AIoT (Artificial Intelligence Internet of Things), is introduced; it combines artificial intelligence (AI) and IoT, creating new opportunities in the distributed energy resources (DER) field. Second, the thesis focuses on three areas of AIoT applications: (1) fault prediction in photovoltaic systems and power transformers; (2) remaining useful life (RUL) prediction of IoT-enabled equipment; and (3) AI-enabled algorithms that automate processes and perform real-time grid system optimization, such as energy storage, demand response (DR), and grid flexibility. The main focus is on data-driven AI techniques that differ from traditional statistical or knowledge-based systems; for each area, the thesis presents algorithm applicability, compares improvements over traditional methods, and describes the business value created. Finally, in the smart grid concept, all AIoT-powered distributed energy resources (DER) can be aggregated into a virtual power plant (VPP), which enables the management of an efficient and reliable power network on a large scale and coordinates demand and supply in real time. An AI-enabled VPP architecture is presented, which utilizes all the AIoT technologies and can provide valuable system capacity, flexibility, and reliability.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 60-66).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Powering through the turn : finding time for concept exploration before industry stagnation</title>
<link href="https://hdl.handle.net/1721.1/132890" rel="alternate"/>
<author>
<name>Noble, Connery.</name>
</author>
<id>https://hdl.handle.net/1721.1/132890</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Powering through the turn : finding time for concept exploration before industry stagnation
Noble, Connery.
The dichotomy of exploration and exploitation has been used in the literature for many years to distinguish the needs of exploring new innovation/creating new markets versus exploiting existing capabilities/markets. This concept has been studied across various disciplines, such as organizational learning, leadership, and innovation strategy. In this thesis, we examine how this tension plays out in large corporations, specifically in how engineering teams prioritize activities in early stage development. We argue that engineering teams inherently trade off between exploration and exploitation during development but would benefit by more intentionally and explicitly considering their strategy, in order to ensure their efforts stay aligned with the long-term goals of the organization. Using survey data collected from over 900 system engineers and managers across a range of industries, we analyzed how engineers and organizations consider early stage development efforts, and what factors affect their importance. Notably, we observed that as an organization's market growth decreases, attention to architecture and design innovation within engineering teams also decreases. Eventually there is a tipping point at which market projections are so dire that engineering teams appear to undergo a drastic shift to refocus on exploration efforts. We also find that engineers struggle to maintain a consistent mental model of how much time and effort their organization currently wants to (or should) spend between product development phases. We argue these findings show the lack of an effective innovation strategy at the product development level, as this is in line with common pitfalls identified in other innovation strategy literature.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 71-76).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting SatCom-Enabled Early Warning Systems in Indonesia</title>
<link href="https://hdl.handle.net/1721.1/132889" rel="alternate"/>
<author>
<name>Nikicio, Ajie Nayaka.</name>
</author>
<id>https://hdl.handle.net/1721.1/132889</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Architecting SatCom-Enabled Early Warning Systems in Indonesia
Nikicio, Ajie Nayaka.
Indonesia lies within the Ring of Fire, making the country highly prone to geophysical disasters such as earthquakes and tsunamis, in addition to weather-related disasters such as floods, landslides, and wildfires. One effective way to reduce the risk posed by these natural hazards is through the deployment and operation of early warning systems. Early warning systems are generally responsible for two things: identifying the hazard precursors and delivering the warning in a timely manner. In both of these functions, wireless communication plays a critical role. Terrestrial communication, however, is often compromised when a disaster hits. Satellite communication (SatCom) offers a promising alternative not only for warning transmission, but also for precursor detection from the thousands of disaster monitoring sensors deployed. It enables the placement of such sensors in remote areas, often closer to the source of the hazards. This thesis uses system architecture concepts to evaluate the pros and cons of the various terrestrial and satellite communication technologies in the context of early warning systems and to suggest the best architecture for each use case. Based on the results of the analysis, satellite L-band, S-band, amateur radio, and newer technologies such as satellite LPWAN and GSM can provide significant benefits in terms of performance and cost. Additionally, the benefit of combining technical development with community engagement is highlighted for a sustainable early warning system. Findings from this thesis are intended to inform disaster risk reduction efforts by the relevant government agencies in Indonesia and in other countries facing similar challenges.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis. "February 2021."; Includes bibliographical references (pages 96-110).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digitalizing R&amp;D in manufacturing sector : machine learning, infrastructure, system architecture and knowledge management</title>
<link href="https://hdl.handle.net/1721.1/132888" rel="alternate"/>
<author>
<name>Li, Xuedong
            (Xuedong D.)</name>
</author>
<id>https://hdl.handle.net/1721.1/132888</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Digitalizing R&amp;D in manufacturing sector : machine learning, infrastructure, system architecture and knowledge management
Li, Xuedong
            (Xuedong D.)
This thesis addresses the topic of data utilization and data analytics in the research and development (R&amp;D) functions of the manufacturing sector. Many companies in the manufacturing sector have generated significant quantities of data over their histories, but only a tiny part of these data is utilized. With the significant progress in big data analytics and machine learning, companies in the manufacturing sector are able to upgrade their R&amp;D capability by establishing a system to better collect and analyze their data. Using machine learning can tremendously enhance R&amp;D's capability to interpret data and recommend solutions. The data system could also help improve an R&amp;D organization's productivity by significantly reducing repeated work. This thesis designs an R&amp;D system that collects R&amp;D data through lab automation, analyzes data with built-in machine learning algorithms, and provides recommendations by gathering inputs on development targets. This thesis also covers aspects of knowledge management within the corporation when implementing such a data system. The organizational capability required to implement this data system is also discussed.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 145-152).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatio-temporal comparative analysis of scooter share in Washington D.C.</title>
<link href="https://hdl.handle.net/1721.1/132887" rel="alternate"/>
<author>
<name>Jassar, Gulsagar Singh.</name>
</author>
<id>https://hdl.handle.net/1721.1/132887</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Spatio-temporal comparative analysis of scooter share in Washington D.C.
Jassar, Gulsagar Singh.
Geospatial-temporal data from different e-scooter firms was collected and investigated for differences in e-scooter usage patterns among the firms' customers. Computational analysis using predictive algorithms and correlation analysis was performed to identify the features most strongly correlated with the dependent variable. Data preprocessing included computing trips from the geospatial data and dividing the city into smaller clusters for analysis using geohashes. Hourly weather data was added to the geospatial-temporal data to account for the impact of weather on the number of trips. The spatio-temporal analysis shows a correlation between the percentage of scooters parked at a location and the success rate of the firm, with the firm deploying the most scooters obtaining the highest number of trips.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis. "February 2021."; Includes bibliographical references (pages 76-79).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and evolution of large scientific experimental facilities : strategy and implementation</title>
<link href="https://hdl.handle.net/1721.1/132886" rel="alternate"/>
<author>
<name>Fry, Jonathan
            (Jonathan George)</name>
</author>
<id>https://hdl.handle.net/1721.1/132886</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Design and evolution of large scientific experimental facilities : strategy and implementation
Fry, Jonathan
            (Jonathan George)
This thesis is about the design and evolution of large scientific facilities that are used to probe the unknown mysteries of science and create a better future for humanity. These include globally distributed systems for quantum physics, confined fusion, and imaging the earliest galaxies that formed after the Big Bang, among others. At the beginning of a large scientific project's lifecycle, there is often no clear path to the final use case, considerable uncertainty due to immature technology, and tight budgetary constraints. This thesis aims to gain key insights into how large-scale research and development facilities can be optimally designed to take a "long-sighted" approach to scientific research. In addition, by examining a variety of existing large-scale scientific projects and talking with experienced project leaders, the research identifies tools and techniques that can be leveraged to provide a balanced systems engineering approach to effectively build systems for upgrades and future use cases. Beyond the classical systems engineering and project management tools, this thesis presents an additional framework, utilizing Technology Roadmapping and Multidisciplinary Design Optimization (MDO), to aid in the foresight and success of large R&amp;D-type projects and their evolution.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 179-189).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managing discovered scope within hybrid agile stage-gate project delivery systems</title>
<link href="https://hdl.handle.net/1721.1/132885" rel="alternate"/>
<author>
<name>Johnson, Thomas M.
            (Thomas Merle)</name>
</author>
<id>https://hdl.handle.net/1721.1/132885</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Managing discovered scope within hybrid agile stage-gate project delivery systems
Johnson, Thomas M.
            (Thomas Merle)
Complex mechatronic projects have machine functionality dependent upon substantial embedded software content delivered in coordination with the hardware componentry. This situation creates a dilemma for project leadership as they determine which methods to utilize for managing the project. One option is to utilize a hybrid approach where comingled Stage-Gate and Agile methods serve both hardware and software activities. However, the need for synchronized delivery schedules between the hardware and software components is not addressed well by Agile methods, which do not emphasize forward planning. In contrast, the uncertainty in defining software scope challenges the up-front scope definition relied upon by Stage-Gate methods. Three independently operating project delivery systems have each spent more than ten years weaving Agile software development methods into the classic Stage-Gate approaches to make their hybrid project management systems. This study interviews Agile and Stage-Gate leadership roles within each of these three project delivery systems to identify what has evolved to keep the schedule expectations for scope delivery aligned to the discovery of additional scope while software development progresses. This study finds that both the Stage-Gate and Agile leaders interviewed call for more work to be done in the project planning stage to improve the inclusion of more rigorous software scope identification activities. It also finds several differences in the design stage activities across the groups studied concerning how they accommodate the discovery of new software scope into the overall scope and schedule expectations for the project, each with a differing level of effectiveness. The most effective traits include the formalized identification and capture of the product decomposition and architecture so that it can be used to estimate software scope, schedule, and resources more accurately upfront in the planning stage. 
During the design stage, the most effective project delivery systems leverage the cultural acknowledgment and leadership's enforcement of the stakeholders' need to adjust their scope expectations in response to new scope discoveries. The addition of repeating two-month planning events delivers timely forecasts of software deliveries, and frequent scope management meetings allow for rapid adjustment to software scope discoveries. Each software delivery system added dedicated Software Delivery Lead roles to act as the liaison between the Agile and Stage-Gate management methods and to formulate mitigation activities with the rest of the functional area leads for the mechatronic product. Project delivery system developers may use these findings as a set of lessons learned to guide their pursuits with the integration of Agile practices into an existing Stage-Gate process. Others could build upon these findings by repeating the activity with other case studies to see if a pattern emerges, which could guide the creation of more specific Agile Stage-Gate frameworks.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 83-84).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A maturity model for process data analytics in biopharmaceutical manufacturing</title>
<link href="https://hdl.handle.net/1721.1/132884" rel="alternate"/>
<author>
<name>Egaña Tomic, Tomas C.</name>
</author>
<id>https://hdl.handle.net/1721.1/132884</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">A maturity model for process data analytics in biopharmaceutical manufacturing
Egaña Tomic, Tomas C.
The biopharmaceutical industry continues to add a record number of life-saving biological therapies every year, which builds pressure to make their manufacturing processes faster, more consistent, and more productive. Increased digitalization is expected to address these needs by means of new capabilities related to the analysis of the data collected in the manufacturing process (a.k.a. process data analytics). The objective of this work is to develop a framework with which to assess the ability of a biopharmaceutical company to exploit process data analytics in drug substance manufacturing of monoclonal antibodies. A comprehensive view of the potential benefits of process data analytics is provided, as well as a detailed account of the improvements required to realize those benefits. The framework was built using published information on analytics use cases, the opinions of experienced practitioners at four major biopharmaceutical companies, and guidelines developed to address similar topics in other industries. Throughout the process, a detailed account of the complexities involved in the deployment of process data analytics was captured and explained. Additionally, four approaches driving the value of analytics for biopharmaceutical processing were identified and used to classify the different use cases. The result is a maturity model of the manufacturing site that describes four archetypical states of process data analytics implementation. They are characterized in terms of the mechanics of value creation and the requirements from information technology (IT), operational technology (OT), and external sources of information. This model provides the basis upon which biopharmaceutical manufacturers or industry consortia can further specify its content and generate an assessment tool to guide their manufacturing strategies.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis. Page 148 blank.; Includes bibliographical references (pages 139-147).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Absorptive capacity and innovative performance frameworks for SMEs : case studies from manufacturers in Indonesia</title>
<link href="https://hdl.handle.net/1721.1/132882" rel="alternate"/>
<author>
<name>Anjani, Nyoman.</name>
</author>
<id>https://hdl.handle.net/1721.1/132882</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Absorptive capacity and innovative performance frameworks for SMEs : case studies from manufacturers in Indonesia
Anjani, Nyoman.
As the fourth most populous country globally, Indonesia is the world's 10th-largest manufacturing power, according to the United Nations (Schonhardt, 2016). The manufacturing sector employs 14.72% of the total Indonesian workforce (BPS, 2018). With the demographic bonus that will occur in 2045, around 208 million Indonesians will enter the workforce. Thus, Indonesia has the potential to become the hub of the manufacturing industry in Southeast Asia. This thesis aims to help practitioners in the manufacturing industry improve their competitive advantages and innovative performance by building absorptive capacity. Absorptive capacity is a firm's ability to acquire, assimilate, and transform new knowledge and valuable information to upgrade its core capabilities in response to the changing economic environment. It indicates a firm's innovative activities and influences the sustainability of a firm's competitive advantages. This thesis answers a question that has not been explored previously: how should the absorptive capacity and innovative performance of small and medium enterprises (SMEs) in developing countries, where resources for research and development (R&amp;D) are limited, be evaluated? All the factors that drive a firm's absorptive capacity and innovative performance were evaluated using a case study approach and direct field interviews with twelve manufacturers in Indonesia. The case studies provide vivid illustrations of Indonesia's SMEs and manufacturing industry conditions: the capabilities, challenges, and opportunities for further improvement. Finally, the case studies' findings were used to build conceptual frameworks that guide practitioners. Absorptive capacity is a multidimensional construct that can drive a firm's innovative performance. It is driven by a firm's capabilities and efforts to learn new knowledge and adapt to the environment. 
This thesis is seed research for multi-year research collaboration between MIT and Indonesia to build an integrated sustainable design and supply chain model to enhance Indonesia's manufacturing industry.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 88-92).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Volatility trading system design with scaling Risk Management</title>
<link href="https://hdl.handle.net/1721.1/132881" rel="alternate"/>
<author>
<name>Zhou, Bin,
            S.M.
            Massachusetts Institute of Technology (2020)</name>
</author>
<id>https://hdl.handle.net/1721.1/132881</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Volatility trading system design with scaling Risk Management
Zhou, Bin,
            S.M.
            Massachusetts Institute of Technology (2020)
We propose a volatility trading system that comprises two uncorrelated components. The first component is a straddle long-short strategy, which profits by anticipating changes in the volatility of stocks within the S&amp;P 500 Index. The second component is a filtered out-of-the-money put-writing strategy on the S&amp;P 500 Index, which profits by collecting premiums while avoiding the losses that would occur during market selloffs, using the Absorption Ratio to detect fragile market regimes. We combine these two components into a portfolio by weighting them so that they contribute equally to total portfolio risk. In addition, we include a dynamic hedging overlay to provide further protection to the portfolio.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (page 41).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An integrated model-based approach to improving project control in Department of Defense acquisition</title>
<link href="https://hdl.handle.net/1721.1/132880" rel="alternate"/>
<author>
<name>Carson, Christopher E.
            (Christopher Everett)</name>
</author>
<id>https://hdl.handle.net/1721.1/132880</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">An integrated model-based approach to improving project control in Department of Defense acquisition
Carson, Christopher E.
            (Christopher Everett)
The United States no longer has the luxury of overspending on military weapon systems. Military programs have steadily cost more, taken longer, and delivered less. How can the Department of Defense reverse this trend? The Department of Defense prescribes the use of an Earned Value Management System (EVMS) to control large, complex engineering projects. According to academic literature, the earned value method can be an effective project control technique but also has significant flaws. Modern integrated project models allow for innovative new approaches to project control which may be superior to the earned value method. Department of Defense policy reveals that integrating cost, schedule, and scope; accurately forecasting project status to allow for proactive decision making; and effective risk mitigation are the most important features of a project control method. This thesis reviews earned value method research and Department of Defense EVMS policy. This thesis also evaluates four project control methods through an experiment that uses an integrated project model. Subject to the specific conditions represented in the model, a Multiple Risk Level model-based control method enabled more proactive decision making than a modified version of the earned value method in the experiment. However, the Multiple Risk Level model did not forecast or enable risk mitigation as well as the modified earned value method in the experiment. The results of this analysis suggest that the ideal project control technique depends on the goals, nature, and environment of the project. Therefore, the Department of Defense should use integrated project models to tailor project control strategies to best suit acquisition programs.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, February, 2021; Cataloged from the official version of thesis.; Includes bibliographical references (pages 193-199).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From digitalization to P&amp;L : integrating the value chain of energy industry to improve social and financial profits</title>
<link href="https://hdl.handle.net/1721.1/132879" rel="alternate"/>
<author>
<name>Yang, Fei,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132879</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">From digitalization to P&amp;L : integrating the value chain of energy industry to improve social and financial profits
Yang, Fei,
            S.M.
            Massachusetts Institute of Technology.
The business landscape for a typical supermajor oil company has shifted tremendously in the past few years, manifesting in numerous aspects including much lower oil and gas prices, significant learning curves on unconventional assets, and competition with alternative energy sources. In this challenging environment, integration and optimization of the full value chain serve as an excellent opportunity for the enterprise to improve its financial and social benefits. Utilizing the Architecting Innovative Enterprise Strategy (ARIES) framework, literature reviews, inputs from subject matter experts, and digital applications, this thesis explores the potential of systematically transforming a typical major oil company to integrate its value chain. The current status of the enterprise was reviewed through ten enterprise lenses, the desired future of the enterprise was envisioned, and a step-by-step transformation was designed. Numerous methodologies were applied to effectively architect the transformation. The utilization of digital tools to integrate and optimize the value chain was assessed. The applications of digitization and digitalization were clarified; the three building blocks of the digital core of the enterprise, comprising data engineering, data science, and business intelligence, were explored. The pathway of adding value via data utilization across the value chain was investigated, followed by two case studies of using digital solutions to optimize long-term and short-term benefits. Due to the complex nature of the value chain, compounded with challenges from the broader ecosystem, a systematic approach to architecting the transformation, primarily the ARIES framework, is found to exhibit high potential to fulfill the needs of the stakeholders. 
With a clear business vision, a robust digital core, and an integrated decision-making process, digital applications can be potent tools to enable the optimization of the value chain for an oil enterprise.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 87-89).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Validity of innovation processes on outcome and performance</title>
<link href="https://hdl.handle.net/1721.1/132878" rel="alternate"/>
<author>
<name>Yu, Kevin,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132878</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Validity of innovation processes on outcome and performance
Yu, Kevin,
            S.M.
            Massachusetts Institute of Technology.
Innovation is a nebulous and subjective field whose definitions and validity are often called into question. To study this field, it is critical to look at thought leaders within the field as well as well-established institutions of innovation and analyze their processes to establish a common framework. This paper expands on common ideas presented by people within the design innovation field who write about and educate the public on innovation processes. It also looks at educational institutions and departments whose goal is to establish innovation processes and engage people in them. From there, a common high-level framework of innovation is extrapolated into a map of modules. The modules can then be used as an analysis tool for past innovation engagements as well as future innovation planning. Additionally, certain innovative characteristics and behaviors are identified from popular literature and educational frameworks and surveyed for their impact. In the study presented, financial data is collected from students who engage in a school project that requires the design, manufacture, and sale of original products. The students are then surveyed on various innovative behaviors and asked to compare themselves to other teams within their cohort. Through anecdotal evidence and interviews, specific teams' processes are summarized. The data and analysis offer a holistic perspective on innovation processes as well as evidence of correlations between specific innovation behaviors and their financial impact.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 48-49).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remodeling newsonomics : a study of the transformation of business models in journalism</title>
<link href="https://hdl.handle.net/1721.1/132877" rel="alternate"/>
<author>
<name>Yan, Xiaoyu,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132877</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Remodeling newsonomics : a study of the transformation of business models in journalism
Yan, Xiaoyu,
            S.M.
            Massachusetts Institute of Technology.
The past three decades have been exciting yet challenging times for journalism. Thanks to massive advancements in technical capability, journalists have been able to reach global audiences and produce more diverse, insightful, and powerful reporting. However, the Internet also profoundly disrupted long-established business models and made journalism a seemingly unprofitable venture. Major revenue streams have been in staggering decline. As a result, newsrooms have been forced to shrink their staff and ambitions, and the general public has been losing access to accurate and important information. It is urgent to examine the limitations of current models and envision possible paths forward, as journalism is crucial to building and maintaining healthy civic discourse, a robust public sphere, and a well-functioning democratic society. This thesis focuses on independent news media, primarily legacy newspapers, in contemporary liberal democracies in the U.S. and Europe. It aims to answer the following questions, each detailed in its respective chapter: how has the Internet reshaped the news business; what are the current major revenue streams (e.g., advertising and subscription) as well as the lesser-known, emerging monetization methods (e.g., micropayment and service bundling); and how do new business models drive organizational design and product thinking, through the lens of digital subscriptions.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 62-64).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A conceptual design and assessment of a low-cost augmented reality headset</title>
<link href="https://hdl.handle.net/1721.1/132876" rel="alternate"/>
<author>
<name>Wu, Ming-Hui,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132876</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A conceptual design and assessment of a low-cost augmented reality headset
Wu, Ming-Hui,
            S.M.
            Massachusetts Institute of Technology.
Along with the exponential advancement of technologies, the way people receive information and communicate with one another has been constantly changing, moving from traditional letters, newspapers, and televisions to modern computers and smartphones. In the meantime, screens of various sizes have flooded our lives, ranging from wall-mounted projections to the tiny screens of wearables. Every interface is a medium where information can be displayed, delivered, and digested. In recent years, as smartphones have matured, numerous industry reports have predicted that augmented reality (AR) will bring the next transformational change, replacing smartphones and becoming a significant medium for daily communications. However, many of the technical barriers remain unconquered, resulting in slow market penetration and user adoption. In this paper, a conceptual design of a low-cost AR device is proposed, along with extensive market analysis and in-depth user research. The goal is to uncover actual user needs and significantly lower the barriers to entry for ordinary consumers to purchase, enjoy, and co-create AR devices and applications.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 53-54).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning for well rate estimation : integrated imputation and stacked ensemble modeling</title>
<link href="https://hdl.handle.net/1721.1/132875" rel="alternate"/>
<author>
<name>Wilson, Oliver John.</name>
</author>
<id>https://hdl.handle.net/1721.1/132875</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Machine learning for well rate estimation : integrated imputation and stacked ensemble modeling
Wilson, Oliver John.
This thesis describes a stacked-ensemble, supervised machine learning approach to well rate estimation, utilizing well test features from three different oil fields that are far from independent and identically distributed (IID) and exhibit missing data with a missing not at random (MNAR) classification. This research introduces a novel integrated imputation procedure that combines imputation model selection with the cross-validation procedure for downstream model tuning without data "leakage"--the primary objective shifts from minimizing the imputation data error to minimizing the downstream hold-out error. A stratified, time-slicing, rolling-forecast cross-validation procedure is implemented to minimize over-fitting arising from the plethora of statistical assumptions that are violated. This thesis seeks to test a framework that will enable well rate estimation for fields with available well test data, improving well surveillance capabilities in order to maximize production metrics and minimize adverse health and environmental impacts.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. "September 2020."; Includes bibliographical references (pages 115-118).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural and aesthetic design applications of flexible, thin-film solar cells to power off-grid tensile structures</title>
<link href="https://hdl.handle.net/1721.1/132874" rel="alternate"/>
<author>
<name>Wanyiri, Juliet Wanjiru.</name>
</author>
<id>https://hdl.handle.net/1721.1/132874</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Structural and aesthetic design applications of flexible, thin-film solar cells to power off-grid tensile structures
Wanyiri, Juliet Wanjiru.
Despite global trends in decreasing costs of silicon-based solar panels, the adoption of solar energy solutions as an alternative to fossil fuels has been impeded by high installation and manufacturing costs, as well as challenges in the customization of solar panels for different products and environments. Moreover, silicon-based photovoltaic cells, due to their rigid nature, change the aesthetic of the surfaces on which they are placed and often provide only the singular function of harvesting energy. Current solar energy products function independently of the architecture on which they are installed, making them difficult to blend in with the design and functional requirements of the products and buildings on which they are installed. Fundamentally, the installation costs associated with crystalline silicon PV cells account for a significant percentage of the total cost of solar energy solutions. This thesis aims to push the boundaries of solar panels to provide the dual functionality of energy harvesting and architectural structure, while either maintaining or improving the aesthetics of the architecture on which they are placed. To achieve this, this research explores a new use case for flexible thin-film solar panels that includes the use of organic photovoltaic (OPV) and perovskite solar cell technology. Through a product-design approach, this thesis explores use cases where the technology's uniquely flexible, ultra-thin, lightweight, and low-cost key features are best applied as a solar energy source. In particular, this research focuses on off-grid architecture with non-rigid roofing structures where fossil fuels are currently used as the primary energy source. Through design research and stakeholder interviews, a key insight that was uncovered was the opportunity to integrate flexible OPV solar cells in glamping and luxury safari camps as an alternative to the current option of diesel fuel. 
This achieves the goal of providing a clean energy source while maintaining the aesthetic of the luxury camp and the outdoor safari experience.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official pdf version of thesis. "May 2020."; Includes bibliographical references (pages [75]-[77]).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rider multihoming in the United States rideshare market</title>
<link href="https://hdl.handle.net/1721.1/132873" rel="alternate"/>
<author>
<name>Valderrama, Daniel X.
            (Daniel Xavier)</name>
</author>
<id>https://hdl.handle.net/1721.1/132873</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Rider multihoming in the United States rideshare market
Valderrama, Daniel X.
            (Daniel Xavier)
This thesis examines rider multihoming in the US ridesharing market. Ridesharing services experience substantial multihoming on both sides of the platform and appear to suffer from a combination of low differentiation and low multihoming costs. Through an informational interview, a qualitative survey, and a conjoint survey and analysis, rider preferences were categorized and quantified. An adapted conjoint survey and analysis allowed for a simulation of rider decisions to accept a ride or multihome along price, time, and company attributes. With baseline thresholds, examining the prevalence of multihoming under several multihoming-reduction strategies showed that network-bridging strategies may reduce the prevalence of multihoming among riders. In-app promotions and incentive-based strategies, meanwhile, showed the opposite result: an increased tendency to multihome among riders who utilize them.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 110-114).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for scalability in a start-up environment</title>
<link href="https://hdl.handle.net/1721.1/132872" rel="alternate"/>
<author>
<name>Deghuee, Rachel Elizabeth.</name>
</author>
<id>https://hdl.handle.net/1721.1/132872</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">Design for scalability in a start-up environment
Deghuee, Rachel Elizabeth.
Rising labor costs and increasing demand for restaurant food have led to a shortage of restaurant workers. Restaurant owners are having difficulty finding staff while maintaining profit margins. Miso Robotics, based in Pasadena, California, produces an AI-guided robot that automates the most arduous kitchen tasks. The Miso Robotics Kitchen Assistant, colloquially known as "Flippy," began serving the public in March of 2018. Three students from MIT's Advanced Manufacturing and Design program were selected to help prepare the product for full-scale deployment. Due to resource constraints, many start-up companies cannot implement mature manufacturing processes. However, viable improvements that suit the company and set it up for the early stages of manufacturing and scaling were made. Mechanical design and manufacturing in a start-up environment differ from design at a larger, well-established company. The relative importance of cost, lead time, reliability, supplier selection, and design turnaround differs due to the prioritization of base functionality over repeatable design. Transitioning from a prototype to a production-ready design must be done carefully to ensure company success. Three case studies are examined to analyze the transition from prototype design for pilot installations, to limited deployment, to wide-scale implementation. This thesis serves as a guide to the future design and process changes necessary for any hardware startup to meet anticipated demand.
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, September, 2018; Cataloged from the PDF version of thesis.; Includes bibliographical references (pages 61-62).
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A survey of human-centered design methodologies for a new hybrid approach in product and experience innovation</title>
<link href="https://hdl.handle.net/1721.1/132870" rel="alternate"/>
<author>
<name>Trevino Ruiz, Javier.</name>
</author>
<id>https://hdl.handle.net/1721.1/132870</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A survey of human-centered design methodologies for a new hybrid approach in product and experience innovation
Trevino Ruiz, Javier.
Today, design is not considered to be just creating something beautiful but is understood and adopted as a problem-solving tool for creative idea generation for disruptive and innovative products, services, and experiences. There has been a boom in design methodologies, with big marketing campaigns behind tools and methodologies that have become buzzwords. The diversity of the design methods in use raises the questions of which is the best framework, and which is the best method for product design, user experience, or service design. Is there a universal method that can be used for all types of projects, or does it depend on the goals to be achieved? This thesis analyzes the strengths and weaknesses of thirty methodologies according to five simplified categories of the design process: inspire, ideate, experiment, test, and validate. Using this analysis, I chose and combined the strongest design methodologies in each stage to create a new hybrid methodology based on the strengths of each. The new proposed hybrid methodology includes steps from ten popular approaches to design, including Jobs-To-Be-Done, Product Design &amp; Development, Experience Design, UX Strategy, Design Thinking, The Lean Product Playbook, Value Proposition Design, and The Circular Guide. The combination of these human-centered methodologies includes qualitative and quantitative research, creative thinking, innovation strategy, concept building, testing, and business validation. The main purpose of this research was to create a hybrid methodology that can be used in different types of projects spanning product design, service design, and experience design, but that is flexible enough to also be used in domains like healthcare, public policy, energy, physical spaces, technology (e.g., IoT), and fintech.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 68-69).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An approach to developing a resilient high-speed rail enterprise architecture through digital transformation</title>
<link href="https://hdl.handle.net/1721.1/132869" rel="alternate"/>
<author>
<name>Takagi, Ryuichi (Scientist in system design and management) Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132869</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">An approach to developing a resilient high-speed rail enterprise architecture through digital transformation
Takagi, Ryuichi (Scientist in system design and management) Massachusetts Institute of Technology.
The Japanese Shinkansen has been a safe and reliable mode of transportation, and JR-Central has developed a well-organized operating system architecture to ensure safety, reliability, and profitability. However, now is the time for the company to transform its architecture, because the landscape surrounding the enterprise and stakeholders' needs are changing rapidly. For instance, the custom of lifetime employment, long the established practice in large Japanese firms, is disappearing, and younger workers' mindset toward career development may change. Digital technology, which is changing general business and ways of working, is available and applicable to all industries. While the Shinkansen system has been successful thus far, the enterprise needs to develop a future architecture that fits these trends and environments. The objective of this study is to investigate and generate a Shinkansen enterprise architecture that is resilient against changes in the landscape and stakeholders' needs by implementing digital transformation. The thesis uses the ARIES framework to analyze the capabilities of the current enterprise architecture and identify points that require transformation. In addition, a literature review and case studies are performed to identify key characteristics for proceeding with digital transformation, organizational transformation, and talent management. On the basis of these analyses, the thesis generates alternative architectures that meet emerging stakeholders' needs and fit the changing landscape of JR-Central. Furthermore, the alternatives are evaluated in the context of several extreme scenarios, including the worst-case scenario resulting from COVID-19. The results indicate that an alternative architecture consisting of small internal IT and HR units, external intrapreneurs, and corporate venture capital is the best in terms of resiliency. 
The thesis also provides an implementation plan for an architecting team at JR-Central. The results of this research can be used to design a resilient architecture that maintains safe and reliable Shinkansen operation and creates value to survive and grow amid the changing landscape and uncertain future of the Japanese railway industry.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 136-140).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using STPA and CAST to design for serviceability and diagnostics</title>
<link href="https://hdl.handle.net/1721.1/132868" rel="alternate"/>
<author>
<name>Slominski, Hannah M.</name>
</author>
<id>https://hdl.handle.net/1721.1/132868</id>
<updated>2025-10-30T17:51:24Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Using STPA and CAST to design for serviceability and diagnostics
Slominski, Hannah M.
OEM industries face increasing challenges in providing proactive and reactive equipment support. Increased product complexity and the fast pace of technology change make problems difficult to understand, prevent, and resolve. The cost of machine unavailability is extreme, yet reliability-based design methods ignore service time as a key contributor to machine unavailability. Serviceability and diagnostics are important controls for minimizing customer losses when problems do occur. Methods are needed that identify serviceability needs early in the product development process while managing product complexity. STAMP (System-Theoretic Accident Model and Processes) is an accident causality model developed as a new engineering approach to system safety. While it was originally created for safety, its foundation in systems theory lends itself to other emergent properties, such as serviceability. This research demonstrates that STAMP techniques can be applied to address existing serviceability issues and to guide service-friendly system design in early, conceptual design phases. Two case studies, drawn from industry, are explored to verify the effectiveness of applying STAMP to serviceability. Both case studies successfully generated hardware, software, and operator interface design requirements. They also produced recommendations for the product development and support processes. By using STAMP techniques to understand system interactions and strengthen service control structures, OEMs can address many of the challenges they currently face in providing serviceability and support.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 93-94).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing the empathy UX : a study in building empathy through technology and media</title>
<link href="https://hdl.handle.net/1721.1/132867" rel="alternate"/>
<author>
<name>Stinnett, Aaron (Aaron D.)</name>
</author>
<id>https://hdl.handle.net/1721.1/132867</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Developing the empathy UX : a study in building empathy through technology and media
Stinnett, Aaron (Aaron D.)
This study represents the culmination of prior research on building empathy (perspective-taking, or cognitive empathy, in this context) through identity expressed over digital media channels, in a cognitive empathy portal of sorts. Through a literature review, relevant topics around digital media as a tool for building empathy are introduced. A controlled digital experiment (in a portal-like digital experience) then integrates this research into a single framework that can be leveraged in future iterations of a cognitive empathy portal. The results of this experiment show no consistent link between a single cognitive empathy score and accuracy in predicting behavior. Furthermore, future research should rigorously account for and explore the self-reporting of empathy components.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 26-37).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing the future architecture of high-speed railway maintenance in Japan</title>
<link href="https://hdl.handle.net/1721.1/132866" rel="alternate"/>
<author>
<name>Soeda, Yuki, S.M. Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132866</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Analyzing the future architecture of high-speed railway maintenance in Japan
Soeda, Yuki, S.M. Massachusetts Institute of Technology.
Most of Japan's railway infrastructure is decades old and requires efficient maintenance. However, pursuing efficiency is not easy because of strict safety requirements and the many stakeholders that must be considered. This study uses the ARIES framework to discuss the strategies that Japanese railway companies should adopt to achieve sustainable growth by maintaining safety while improving efficiency, from the perspective of organizational design. The study uses the Central Japan Railway Company (CJR) as a representative of the industry. First, the external environment of the organization is analyzed using seven important ecosystem factors, and a stakeholder analysis identifies the key stakeholders and their values. In addition, the internal situation of the organization is investigated through eight view elements. The results of these analyses show that while CJR has strengths in infrastructure, a safety-oriented organizational culture, and specialized processes for railway operations, it has weaknesses in information monitoring and flexibility of decision-making, which may make it difficult to respond to changes in the external environment, such as population decline and the impact of COVID-19. These findings are incorporated into a SWOT analysis and an X-matrix analysis, and possible strategies for CJR are discussed. As a result, six concepts and three alternative architectures based on these concepts are generated. Each alternative architecture is evaluated against eight criteria, including safety, efficiency, and flexibility. The evaluation finds that aiming for a technology- and data-oriented organization may be the most reasonable option for CJR. The study also suggests that a small, cross-functional team should lead the transformation and confirms the robustness of the selected architecture under three possible scenarios. Lastly, a feasible implementation plan with a detailed architecture and alignment of internal metrics and processes is proposed.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 86-88).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technological improvement rate estimates for all technologies: Use of patent data and an extended domain description</title>
<link href="https://hdl.handle.net/1721.1/132865" rel="alternate"/>
<author>
<name>Singh, Anuraag, S.M. Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132865</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Technological improvement rate estimates for all technologies: Use of patent data and an extended domain description
Singh, Anuraag, S.M. Massachusetts Institute of Technology.
Complex and highly interdependent socio-technical systems are necessary for sustaining, governing, entertaining, and nourishing human society. Such systems fulfill their objectives by incorporating ever-improving technologies. A systematic understanding of technology and the pace of technical change is thus critical for policymakers and stakeholders to make well-informed decisions and avoid costly mistakes and omissions. This work reviews past work on technological forecasting and decision making and builds on new research to introduce a systematic approach to technological decision-making. This document describes why information about technology improvement rates matters to technological decision-making, the theoretical framework for using it, a repeatable methodology, and an online system making this capability available to stakeholders. Despite, and somewhat because of, the complexity of the "whole" socio-technical system, the regularity of constant annual performance improvement is a very strong empirical fact with substantial theoretical underpinning. This regularity lies at the heart of integrating objective data into overall decision processes concerning items affected by the timing of technological change. The methodology uses a broad, easy-to-use database of the rates of technological change that covers almost all technologies. We do this by using prior work (accomplished as part of the effort that enabled this thesis) establishing a correspondence of 97.14% of all patents within the entire US patent system to a set of 1,757 technology domains and estimating their rates of improvement. We describe the development of a new web-based technology search tool and apply the methodology to a case study of the automotive industry. We believe these results herald a new era of data-driven technological decision-making. 
Using this new framework and the tool, stakeholders can make timely and "good enough" technology forecasts without requiring extensive modelling initiatives.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 54-58).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Platform thinking and business of selling groceries online : assessing US and China based platforms</title>
<link href="https://hdl.handle.net/1721.1/132864" rel="alternate"/>
<author>
<name>Singh, Sarabjeet (Sarabjeet Sabby)</name>
</author>
<id>https://hdl.handle.net/1721.1/132864</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Platform thinking and business of selling groceries online : assessing US and China based platforms
Singh, Sarabjeet (Sarabjeet Sabby)
The business of selling fresh food to individual customers on an ecommerce platform is expected to generate $40 billion USD in sales by 2023¹. The online marketplace faces various challenges: a) low profit margins, b) changing consumer habits, c) the onerous task of managing the customer experience on the site and the customer delivery experience in logistics, and d) the massive scale of operations required to sell groceries online². What can emerging or even established fresh food ecommerce platforms learn from the Walmarts, Amazons, Alibabas, and JDs of the world? What are some of the challenges both existing and new players must address in a competitive market to delight customers and win? The goal of this thesis is to introduce and assess the online grocery space in the US and China, and to understand and evaluate these platforms in terms of platform thinking and growth strategies for the business. We conclude by looking at emergent trends in the space, especially as they relate to COVID-19, which has been a black swan event for the industry in general. Please note that much of this document was written before the pandemic emerged.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 98-115).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System design and optimization of an aerial refueling system for transcontinental flights</title>
<link href="https://hdl.handle.net/1721.1/132863" rel="alternate"/>
<author>
<name>Rong, Keran.</name>
</author>
<id>https://hdl.handle.net/1721.1/132863</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">System design and optimization of an aerial refueling system for transcontinental flights
Rong, Keran.
Currently, intercontinental flights are long-haul flights, and commercial aircraft are not refueled during flight. As a result, the fuel consumption of intercontinental flights increases exponentially with the distance travelled, because the extra fuel these long-haul flights must carry adds weight that itself consumes fuel. Intercontinental aviation already accounts for a significant portion of global carbon emissions, and this share is expected to grow rapidly in the foreseeable future. Aircraft emissions from transcontinental flights have therefore become a global challenge, both socially and technologically. In this study, we propose a floating aerial refueling system (FARS) to reduce fuel costs on intercontinental flights. In this system, a tanker is launched to refuel incoming intercontinental aircraft. Through the refueling process, intercontinental flights avoid the exponential fuel consumption caused by the additional fuel they would otherwise carry, potentially reducing aircraft emissions. This thesis presents the design of a floating aerial refueling system, including stakeholder analysis, system architecture design, and economic feasibility analysis. In addition, we propose a method for the mathematical simulation and optimization of FARS using different techniques. Finally, we analyze FARS's feasibility and sensitivity based on case studies. The case study of Singapore Airlines SQ21 shows that our optimized design can save up to 39,415 tons of jet fuel annually over a 25-year life cycle, with a net present value of USD 266 million.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 145-149).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The influence of gasoline prices and consideration sets on the fuel economy of new vehicle sales</title>
<link href="https://hdl.handle.net/1721.1/132862" rel="alternate"/>
<author>
<name>Ruckdaschel, James David.</name>
</author>
<id>https://hdl.handle.net/1721.1/132862</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The influence of gasoline prices and consideration sets on the fuel economy of new vehicle sales
Ruckdaschel, James David.
Understanding the factors that influence consumer investment in fuel economy when purchasing a new vehicle is critical for stakeholders including environmental policymakers, automotive manufacturers, and oil companies. The energy economics literature shows that consumers are relatively rational in how much fuel economy they purchase in response to changes in gas price. Yet the marketing literature suggests that consumers consider only a small number of vehicle makes/models - as few as 2-6 - when making their purchase decision. Given this, we consider the extent to which consumers' rational response to gas price changes is achieved by including different vehicles in their consideration set, versus choosing differently from within their consideration set. We analyze data from 210,885 responses to a new-vehicle customer satisfaction survey collected over 9 years, in which respondents state the vehicles they considered purchasing in addition to the vehicle they ultimately purchased. Our findings show that as gasoline prices rise, both measures increase, but the fuel economy of the purchased vehicle increases more than the average fuel economy of the consideration set does. This results from consumers considering more fuel-efficient vehicles and also purchasing toward the higher end of their consideration set's fuel-economy range. The degree to which consumers adjust is shown to correspond to the importance they place on the environment during the shopping process. Increased consideration and adoption of alternative-fuel vehicles are found to be one mechanism consumers use to make these adjustments. Finally, we highlight how changing gasoline prices result in differing consideration-set behavior for buyers of low and high fuel economy vehicles.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 56-57).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting at-risk students from disparate sources of institutional data</title>
<link href="https://hdl.handle.net/1721.1/132861" rel="alternate"/>
<author>
<name>Rayasam, Ajay S. (Ajay Siva)</name>
</author>
<id>https://hdl.handle.net/1721.1/132861</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Predicting at-risk students from disparate sources of institutional data
Rayasam, Ajay S. (Ajay Siva)
In the past few years, the mental health crisis in higher education has captivated the nation. This may be due in part to high-profile cases, shifts in cultural attitudes, or increased demand for treatment. Regardless of the cause, student mental illness has now reached epidemic proportions. At MIT, there are over 4,000 consultations, 200 wellbeing checks, and 50-70 psychiatric hospitalizations annually. To combat this challenge, most institutions invest in services such as mental health counseling and emergency response teams. However, these services primarily reach students who self-report symptoms or represent extreme cases. Unfortunately, of the nearly 3 million college dropouts per year, more than 40% did not report their mental illness. While institutions have promoted mental health awareness, many students who suffer from mental illness remain undetected. This thesis therefore proposes a novel approach: using artificial intelligence to identify those hidden students. By leveraging non-invasive data found within the institution, machine learning can predict at-risk students before any symptoms occur. By doing so, institutions could prevent dropouts, leaves of absence, and deaths due to mental illness.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 65-68).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rhetorical fractures : designing for social movement growth using ancient and contemporary tools</title>
<link href="https://hdl.handle.net/1721.1/132860" rel="alternate"/>
<author>
<name>Ravenel, John Bishop.</name>
</author>
<id>https://hdl.handle.net/1721.1/132860</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Rhetorical fractures : designing for social movement growth using ancient and contemporary tools
Ravenel, John Bishop.
Informational weapons may be more lethal, and are certainly less predictable, than the nuclear warheads that have captured the focus of strategic and military planners since the 1940s. The genres considering mass persuasion and social and political movements remain disparate fields. In this thesis, four distinct academic verticals of thought concerning how to make sense of networks of ideas, people, and organizations are considered. These genres include ancient rhetoric; enterprise design theory, including stakeholder salience frameworks; social movement theory; and network science, consisting of game and graph theory. The purpose of the thesis is to better understand the foundational forms that imbue meaning and create impact for social and political movements. The premise is to consider ostensibly inexplicable political events and strategies, termed "rhetorical fractures," that seem to be immediately and durably effective. Through analyzing these aberrant events, this thesis aims to identify the fundamental forms through which social and political movements create impact.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 141-169).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collaboration effectiveness in energy research and development : an empirical study of patents</title>
<link href="https://hdl.handle.net/1721.1/132859" rel="alternate"/>
<author>
<name>Rahill, Daniel F.</name>
</author>
<id>https://hdl.handle.net/1721.1/132859</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Collaboration effectiveness in energy research and development : an empirical study of patents
Rahill, Daniel F.
Innovation is critical for any business. A key driver of innovation is developing and adopting new technology, and one key component of company innovation is the choice between developing technology internally and collaborating with external resources. This study explores the role of such collaborations in contributing to innovation in the energy industry through an evaluation of patent data across both traditional oil and gas patents and renewable patents, at both the individual patent level and the company level. An evaluation of collaboration trends finds significant evidence of increasing and accelerating partnership frequency and size in the energy sector, with the notable exception of renewables. Collaboration, defined as more than one assignee, is linked to higher patent quality, defined by the PageRank of patent citations, consistent with previous findings. The novel finding is that interactions between patent technology and collaboration are meaningful. The finding that renewables have less frequent and less effective partnerships runs counter to expectations; one possible explanation relates to the maturity and focus areas of the industry. Company-level evaluations are largely inconclusive but find limited evidence that higher revenue decreases the effectiveness of partnering in general while increasing the effectiveness of partnering with a research institute. Finally, the financial evaluation findings are largely consistent with the existing literature; notably, patent quality is reflected in increased company revenue. Patent value, defined as quality per research and development spend, largely aligns with patent quality. This suggests that patent quality is a good proxy for patent value and that the relative cost differences associated with partnering do not appear significant on average.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 80-82).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable and inclusive last-mile transportation for developing countries</title>
<link href="https://hdl.handle.net/1721.1/132858" rel="alternate"/>
<author>
<name>Rodriguez Tovar, Jairo Ernesto.</name>
</author>
<id>https://hdl.handle.net/1721.1/132858</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Sustainable and inclusive last-mile transportation for developing countries
Rodriguez Tovar, Jairo Ernesto.
This thesis presents a last-mile mobility business model for developing countries. It analyzes current innovation and trends in rideshare and last-mile transportation. It lays out the challenges of transportation systems in our urban ecosystems, as well as the intimate relationship between transportation and energy demand, energy sources, and environmental impact. The thesis develops a pilot case study in Bogotá, Colombia, a city with tremendous mobility challenges, high unemployment, and robust bicycle infrastructure, characteristics that many cities in Latin America share. Finally, it proposes a last-mile transportation system that is sustainable, efficient, and inclusive, with a concept vehicle and an innovation strategy.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 44-46).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A multispectral imaging method and device to detect and quantify the presence of fluid in the middle ear to facilitate the diagnosis and triage of ear infections</title>
<link href="https://hdl.handle.net/1721.1/132857" rel="alternate"/>
<author>
<name>Rajamanickam, Gokul Prasath.</name>
</author>
<id>https://hdl.handle.net/1721.1/132857</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A multispectral imaging method and device to detect and quantify the presence of fluid in the middle ear to facilitate the diagnosis and triage of ear infections
Rajamanickam, Gokul Prasath.
Middle ear infections, or otitis media, which cause inflammation of the tympanic membrane and fluid buildup in the middle ear cavity, account for 2-3 million hospital visits every year [34]. According to an epidemiological study conducted from 2006 to 2016 on 685 children between the ages of 1 and 3 years, roughly 60% had at least one hospital visit due to ear infections [3]. Despite the high incidence, the diagnosis of otitis media is only 50% accurate (a coin toss) because of the subjective nature of the diagnosis, in which physicians look at the eardrum and try to detect the fluid behind it. To detect this fluid with high sensitivity and accurately diagnose middle ear infection, we propose a multispectral visible-NIR otoscope that operates in the range of 600 nm - 1050 nm. We have performed proof-of-concept experiments with our device on phantoms that include a 3D-printed middle ear structure, a tympanic membrane made of silicone, and orange juice as ear fluid, all of which mimic the properties of the human ear. The multispectral otoscope showed the highest contrast between ossicles and fluid at 1000 nm, reflecting the low attenuation of the fluid and tympanic membrane at NIR wavelengths. The system is calibrated against a diffuse reflection surface to account for variations in the source and detector. Our experiments showed that empty phantoms yielded almost equal contrast across the entire visible-NIR wavelength range. With fluid present, the contrast increased by 30 ± 10% in the visible wavelengths (600 nm - 750 nm) and by 120 ± 20% in the NIR wavelengths (900 nm - 1000 nm). This 80%-100% difference in contrast between visible and NIR wavelengths is used to detect and highlight the areas of the middle ear filled with fluid.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis. Page 68 blank.; Includes bibliographical references (pages 65-67).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for community resilience in the age of disasters : a case study in Puerto Rico</title>
<link href="https://hdl.handle.net/1721.1/132856" rel="alternate"/>
<author>
<name>Qin, Yiyuan.</name>
</author>
<id>https://hdl.handle.net/1721.1/132856</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Design for community resilience in the age of disasters : a case study in Puerto Rico
Qin, Yiyuan.
In September 2017, Puerto Rico, home to 3.2 million people, suffered catastrophic damage as category-5 Hurricanes Irma and Maria made direct landfalls on the Island. Their effects on people's health and safety were devastating and long-lasting. In the face of climate change, places like Puerto Rico are likely to confront more frequent and more destructive natural disasters. The need to better prepare the Island for future disasters is immense and urgent. Combining primary and secondary research, this thesis applies a human-centered and system-minded design approach to identify and analyze the current strengths and gaps in the disaster response and recovery efforts in Puerto Rico after Hurricane Maria. I conducted interviews and participatory observations with individuals and organizations in the field, ranging from community-based organizations to aid agencies. This thesis reveals that although Hurricane Maria touched virtually all parts of the Island, vulnerable populations were disproportionately affected. In response to the inefficiencies of local governments and federal agencies, citizens and community groups emerged to respond to the aftermath of Hurricane Maria. However, there is a clear gap in the current disaster management system in engaging and empowering citizens and communities to respond to the growing challenges of natural disasters. Based on these findings, this thesis lays out a set of design recommendations for leveraging disaster information and knowledge management systems to promote collaboration across key actors and enhance disaster resilience in Puerto Rico and other relevant contexts.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 77-87).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring insulin regimen from clinical notes : using natural language processing techniques to extract data from free text records</title>
<link href="https://hdl.handle.net/1721.1/132855" rel="alternate"/>
<author>
<name>Pushpanathan, Monisha.</name>
</author>
<id>https://hdl.handle.net/1721.1/132855</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Inferring insulin regimen from clinical notes : using natural language processing techniques to extract data from free text records
Pushpanathan, Monisha.
Insulin regimen refers to instructions prescribed by a clinician indicating the kind of insulin to take (long-acting, short-acting, etc.), how much insulin to take (dosage), and how often to take it (frequency). Determining the daily insulin regimen for diabetic patients is more of an art than a science. Clinicians who care for diabetic patients carefully assess the patient's blood glucose levels, medical history, and symptoms before prescribing insulin medication. The challenge for clinicians is often in accessing the historical insulin regimen prescribed to patients, which is hidden in unstructured clinical notes. This is a problem because it prevents the individual clinician from drawing on the wisdom that might exist in collective experience. Additionally, having access to a patient's historical insulin regimen can help identify patient groups with distinct insulin regimen patterns, analyze total and average daily insulin consumption of different patient groups, discover patient groups showing variation in their insulin regimen, and more. In this thesis, we treat insulin regimen extraction from clinical notes as an information extraction problem and explore machine learning methods focused on extracting this information from prescription lists available in outpatient clinical notes. We explore two n-gram models - Logistic Regression and Conditional Random Field - and analyze their performance. We also explore models using contextual word representations from domain-specific pretrained language models, character-level embeddings, and auxiliary features constructed from external knowledge sources, and analyze their performance. We find that our final Multi-Layer Perceptron method using contextual word representations gives a micro-averaged F1 score of 0.98 and is able to detect patterns that go undetected by n-gram models. We then apply a rule-based post-processing system to convert the extracted insulin regimen into a normalized time-series format.
We analyze the extracted insulin regimen information and find that, in most cases, prescription lists in clinical notes contain an accurate account of the current insulin regimen prescribed to patients. However, supporting insulin regimen information, such as patient-specific glycemic targets and basal-bolus insulin ratio, is available only in the narrative text of clinical notes. We also examine the extracted dataset to find patient samples with interesting insulin regimen patterns, such as those changing from a long-short to a combined insulin regimen and vice versa.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 85-88).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semiconductor industry merger and acquisition activity from a technology maturity and intellectual property perspective</title>
<link href="https://hdl.handle.net/1721.1/132854" rel="alternate"/>
<author>
<name>Pennington, James T.</name>
</author>
<id>https://hdl.handle.net/1721.1/132854</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Semiconductor industry merger and acquisition activity from a technology maturity and intellectual property perspective
Pennington, James T.
A major method of acquiring the rights to technology is through the procurement of intellectual property (IP), which allows companies both to extend their technological advantage and to deny it to others. Public databases such as the United States Patent and Trademark Office (USPTO) track this exchange of technology rights. Thus, IP can be used as a public measure of value accumulation in the form of technology rights. As perceived value increases in the child company, merger and acquisition (M&amp;A) activity occurs. Extensive bodies of research exist concerning M&amp;A activity. This is likely due to the increasing trend of M&amp;A in the market overall and the trillions of dollars involved. Between 1985 and 2018, US M&amp;A value increased by 5.32%, with 2017 US deals alone amounting to $1.7 trillion. These figures demonstrate the increasing importance of M&amp;A. This is especially true in technology-centric industries, where prior surveys identify a specific product or technology as the prime motivator for mergers. Understanding the transfer of technology and its value will become important in the future if high-tech industries also follow this increasing trend. This study explores M&amp;A activity within the context of the semiconductor industry by focusing on two parent companies, Intel and AMD, and their child company acquisitions from 1997 to 2017. These acquisitions total more than $53 billion and extend into 35 separate high-tech industries outside the parents' core semiconductor business. In terms of IP as assets, all 91 acquired companies represent 5K in pipeline patent applications and 37K patents. The research suggests that there is a buildup of technological value, as measured by the increase of applications and patents by the child company prior to the merger event with the parent (e.g. Intel or AMD).
Additional relationships such as child company M&amp;A acquisition value to IP quantities, IP lifespan to child company lifespan, and technology maturity are explored. This study also proposes and implements a TRL (Technology Readiness Level) scale specific to the semiconductor industry and maps it to IP cycle times (USPTO processing times). The application of TRLs to IP data creates an approximate idea as to which maturity of technology is most valuable: new concepts or mature ideas. Results suggest that the child companies seek technology IP with higher TRLs, and these child companies are in turn acquired by the parent company (e.g. Intel or AMD).
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 73-76).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved reservoir characterization by incorporating geodetic data in a western Kazakhstan oilfield</title>
<link href="https://hdl.handle.net/1721.1/132853" rel="alternate"/>
<author>
<name>Pickering, Michael Vance.</name>
</author>
<id>https://hdl.handle.net/1721.1/132853</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Improved reservoir characterization by incorporating geodetic data in a western Kazakhstan oilfield
Pickering, Michael Vance.
Reservoir characterization for petroleum systems stands as a key source of competitive advantage amongst operators. Exploration and field development activities require large investments of engineering time and capital in pursuit of improved short- and long-term economic decision-making. For this reason, identifying workflows from adjacent earth science fields to tackle reservoir characterization challenges is of utmost importance. This thesis examines whether improved reservoir characterization can follow from modern advancements in geodesy. It proposes a holistic workflow to improve subsurface reservoir characterization using commercially available tools and insights by incorporating modern Interferometric Synthetic Aperture Radar (InSAR) geodetic data to better inform model assumptions. Surface deformation measurement by InSAR is used to provide subsurface insights, using the "Polygon" field in Kazakhstan as an example. This workflow leads to several key conclusions which traditionally could only be realized by drilling additional exploratory wells to collect the necessary data. Firstly, early seismic field work identified the presence of several faults which divide the area of study into three distinct blocks, which were assumed to be impermeable at the boundaries. However, given the flow directions and rock deformation observed in the simulated reservoir and geomechanical models, only one of the three blocks exhibits "compartmentalization," or impermeable bounding at the faults. With a displacement of 20 meters at the fault faces, permeability values for these fault boundaries were anticipated to be less than 1 millidarcy; however, permeability above 1 darcy exists across two of the three block fault boundaries.
The geomechanical model of the reservoir predicts subsidence at surface while InSAR shows localized uplifts of several centimeters on the western and eastern edges of the studied blocks. The only way to match the subsurface geomechanics model with the directly measured InSAR is to implement a no-flow boundary at the eastern fault that delineates the westernmost block. As a result, a strategy for improved recovery efficiency from the western compartmentalized block is proposed to enhance pressure maintenance and improve waterflood effectiveness. Also, further geomechanical and InSAR comparison suggests the presence of a weak edge aquifer influx behavior in the area of study flowing north to south. In summary, the combination of reservoir simulation, geomechanical modelling, and direct InSAR measurement represents a significant opportunity for improving reservoir characterization using readily available techniques at an incrementally low cost.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 135-139).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology elasticity : demand impact on the commercial success of regional aircraft</title>
<link href="https://hdl.handle.net/1721.1/132852" rel="alternate"/>
<author>
<name>Murbach Koga, Tiago.</name>
</author>
<id>https://hdl.handle.net/1721.1/132852</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Technology elasticity : demand impact on the commercial success of regional aircraft
Murbach Koga, Tiago.
The law of demand is the simplest and most important concept in microeconomics. It describes the relationship between the demand for a good and its price; the slope of the curve is defined as the price elasticity. Revenue -- the product of price and demand -- is the key attribute in defining company value. The valuation considers not only the actual revenue but also the year-to-year revenue growth rate. It is known that innovative companies have a high revenue growth rate because they offer products and services that meet consumer needs. However, microeconomic theory lacks a model unifying the demand, price, and utility of technologically enabled goods and services. This work proposes a new approach that adds a technology component to the law of demand as a function of price and utility. The utility of goods or services is described as Multi-Attribute Utility (MAU) and applied to an aviation case study. The MAU considers not only the technical aircraft performance attributes but also the social aspects of regional aircraft - twin-turboprops and jets - in the market. The data analysis shows that price and MAU have a positive correlation of 0.92 for regional jets. Similarly, twin-turboprop airplanes have a correlation of 0.86. Price and demand are mildly positively correlated. In other words, higher-priced aircraft have higher demand. Conversely, lower prices correlate with lower demand. This contradicts the law of demand, which states that demand varies inversely with price. These inconsistencies are explained by the concept of Technology Elasticity. In this new approach a good can have multiple levels of demand at the same price, and those levels are graded by MAU iso-utility contour lines. This approach is illustrated with the tea market case with which the economist Alfred Marshall proposed his original theory in 1890. Actual regional jet and twin-turboprop aircraft market data is used to validate the new approach.
This thesis ends with a reflection on the regional aircraft landscape and suggests the application of technology elasticity to other markets.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages [101]-106).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Project of the sharing economy in lodging in Tokyo and NYC</title>
<link href="https://hdl.handle.net/1721.1/132851" rel="alternate"/>
<author>
<name>Nakashima, Koji,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132851</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Project of the sharing economy in lodging in Tokyo and NYC
Nakashima, Koji,
            S.M.
            Massachusetts Institute of Technology.
The sharing economy in lodging, exemplified by Airbnb, gives travelers many different kinds of accommodations in many different locations at cheap prices. However, many local governments are struggling with how to incorporate such businesses into their society. In Japan, although the business has the merit of absorbing the rapid growth of travelers from abroad, many local governments hesitate to accelerate these businesses because they are concerned about safety and security. New York City is also strongly opposed to these businesses, holding the sharing economy in lodging responsible for rising housing costs and the decrease in rental property vacancies in NYC. The aim of this study is to create systems that are safe and socially acceptable for Tokyo and NYC. In order to create the systems for Tokyo and NYC, we carry out three analyses. In the first analysis, we clarify for whom the architectures are made and what parties are related to the business. Hosts, guests, neighbors, local governments, investors, and the home-sharing company are involved in the home-sharing ecosystems. Then a system model which shows the activities of the home-sharing business is created in order to gain insights into the platform for the potential architectures. The system model is also used for investigating the safety risks in the home-sharing dynamics. The system model shows the stakeholders' activities, such as the home-sharing transactions, hosts' services, and the regulation of the local government. Next, we find the risks in the home-sharing ecosystems. We investigate more detailed interactions between the home-sharing company, neighbors, hosts, and guests in the system model. Then we find all the possible risks in the interactions. This analysis is carried out for three different room types (Entire House, Private Room, Shared Room) because different kinds of risks are found in different room types.
From the analysis, we find 10 important factors which constitute the performance of the architectures (room types, ID check systems, entrance locks, key delivery methods, room locks, fire alarms and detectors, fire prevention facilities, security systems, and detection methods for privacy invasion such as hidden cameras). The potential architectures are created by combining the options in the 10 factors. We create more than fifteen thousand different architectures for Tokyo and NYC, separately. Then we evaluate these architectures based on safety, social acceptability, and convenience. The cultural and social factors are also investigated because they can lead to differences in the preferred architectures. The preferred architecture in Tokyo is found to be Entire House with the entrance lock of either a PIN code or a regular key, while it is Private Room with the entrance secured by a regular key in NYC. To extend the study, safe and socially acceptable architectures in other cities can be investigated by following the steps in this thesis. Introducing a house-type accommodation would be another extension, because only the apartment type is considered in this thesis to reduce computational complexity; there would then be additional features to consider for the house type, such as gardens. Considering cost from the view of multiple stakeholders can also broaden the study. This includes hosts' OPEX, accommodation prices, and the costs of the sharing economy companies.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 104-109).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-criteria design analysis of sensor systems for railway level crossings</title>
<link href="https://hdl.handle.net/1721.1/132850" rel="alternate"/>
<author>
<name>Miyashita, Yu,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132850</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Multi-criteria design analysis of sensor systems for railway level crossings
Miyashita, Yu,
            S.M.
            Massachusetts Institute of Technology.
The prevention of railway accidents at level crossings is of critical importance for railway companies because of their high frequency and considerable severity. As a countermeasure against these accidents, obstacle detection systems are widely installed at a large number of level crossings in Japan. However, current obstacle detection systems can typically detect only automobiles, not pedestrians, and leave room for improvement in accuracy and reliability. This thesis develops a method for effectively determining the best combinations of sensors that can detect human-sized objects at railway level crossings with the highest utility and least cost. The method assesses combinations of up to three sensor technologies, such as LIDAR, stereo camera, and millimeter-wave radar. We evaluate each sensor and find the best combination of multiple sensors according to a set of performance criteria. The analysis was conducted using empirical data on 1,800 high-risk level crossings in Japan. The results show that whether uniform emphasis is given to criteria related to safety and stability, or emphasis is placed solely on safety or solely on stability, in all cases the highest utility at the lowest cost is provided by wide-lens stereo cameras. The utility increases by 36% if a combination of stereo cameras and LIDARs is used; however, the cost of such two-sensor systems increases fourfold. A system safety analysis was also performed, analyzing transportation safety from the viewpoint of not only the sensors but also the larger system that includes humans. Using Systems Theoretic Process Analysis, we find that system safety is highly dependent on both human factors and system structure. The analysis results show that automating the emergency braking system can be an effective countermeasure against railway accidents caused by humans.
Overall, the analysis tool developed in this work allows for analysis and simulations to be easily updated with new performance specifications of sensors and utility functions. The tool can be used for rapidly analyzing new systems when new types of sensors become available, costs change, or new sets of criteria for system performance (expressed as a utility) need to be assessed.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 100-102).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A digital approach to the management of brownfields</title>
<link href="https://hdl.handle.net/1721.1/132849" rel="alternate"/>
<author>
<name>Partington, Ben
            (Benjamin Francis)</name>
</author>
<id>https://hdl.handle.net/1721.1/132849</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A digital approach to the management of brownfields
Partington, Ben
            (Benjamin Francis)
This thesis investigates analytic and data-mining methods that can be used for the management of petroleum brownfields, specifically as they apply to the surveillance, analysis, &amp; optimization of gas-lifted oil wells. Building on the output of validated physics-based models, this thesis investigates a range of analytic methods which may be used to determine a probable depth of gas lift injection for wells without pressure gauges, and finds that the Random Forest method coupled with a k-means clustering algorithm can offer good results. Additionally, this thesis shows how a pan matrix profile may be used to efficiently identify patterns (motifs) in the real-time pressure signatures of wells. Understanding of the motifs is assessed through a physics-based model, providing a useful tool for engineers to perform surveillance of large well-count areas, which are typical of brownfields.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. "September 2020."; Includes bibliographical references (pages 121-129).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental feedback interfaces for consumer activity tracking wearable devices</title>
<link href="https://hdl.handle.net/1721.1/132848" rel="alternate"/>
<author>
<name>Lloyd, Christopher Noel.</name>
</author>
<id>https://hdl.handle.net/1721.1/132848</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Experimental feedback interfaces for consumer activity tracking wearable devices
Lloyd, Christopher Noel.
Commercial wearable activity trackers have sophisticated monitoring capabilities and digital user interfaces that report personal health metrics; however, these devices have not yet achieved their goal of dramatically improving the wellbeing and performance of users. This research identifies latent aspects of wearables that might improve wellbeing. A group of commercial wearable users is interviewed to determine unmet and latent needs. Qualitative interview data is leveraged to propose a case study of a flower robot as a figurative feedback interface that uses moving mechanisms to express the user's sleep quality and promote improved sleeping habits. The robotic flower user interface is divided into two components that are fabricated and tested separately: 1) a flower that blooms and 2) a stem that changes posture. The control system is fabricated, programmed, and tested to successfully retrieve the researcher's personal sleep data from a public API and actuate the stem and flower. The flower robot prototype is a proof of concept of a novel commercial activity-tracking wearable interface. Further testing is required to determine if a robotic avatar can increase relevant task performance, change user behavior, or improve health metrics.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 59-62).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An exploration of spinal cord injury treatment : opportunities to improve functional recovery and independence for patients with incomplete spinal cord injuries</title>
<link href="https://hdl.handle.net/1721.1/132847" rel="alternate"/>
<author>
<name>Platt, Evan
            (Evan Hartley)</name>
</author>
<id>https://hdl.handle.net/1721.1/132847</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">An exploration of spinal cord injury treatment : opportunities to improve functional recovery and independence for patients with incomplete spinal cord injuries
Platt, Evan
            (Evan Hartley)
A spinal cord injury (SCI) is a severe, life-changing event, and usually results in significant complications and loss of function. The severity and complexity of these injuries make them difficult to treat. This thesis seeks to identify the most significant opportunities for improving SCI treatment. It explores the different elements of SCI care within the ICU, inpatient rehabilitation, and outpatient rehabilitation settings from the perspective of the patient and the associated stakeholders. Through this exploration, this paper uncovers a comprehensive list of potential opportunities, then down-selects from that list to three high-potential opportunities based on the amount of benefit potential solutions could deliver. These were determined to be motor strengthening, ambulation recovery, and neurogenic bowel dysfunction. Each high-potential opportunity was assessed based on how well existing, emerging, and future solutions meet SCI patients' needs. It was concluded that a wireless closed-loop neuromuscular electrical stimulation solution should be further investigated to improve patients' quality of life.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 100-118).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic modelling of Japan's transition to offshore energies</title>
<link href="https://hdl.handle.net/1721.1/132846" rel="alternate"/>
<author>
<name>Liew, Caine Xia Ri.</name>
</author>
<id>https://hdl.handle.net/1721.1/132846</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Dynamic modelling of Japan's transition to offshore energies
Liew, Caine Xia Ri.
The feasibility of utilizing the offshore environment and its resources to address the energy challenges identified by the Japanese Government is presented in this thesis. The thesis is framed by the key energy challenges, which include: (1) energy security, (2) environmental impact, (3) economic efficiency, and (4) safety. The unique energy situation that Japan is in due to its geography, historic energy policies, and energy economy is considered as well. Subsequently, possible offshore energies are examined to address the challenges Japan faces, such as its lack of land space, societal acceptance of nuclear energy, lack of energy resources, and high frequency of seismic activity. Finally, using system dynamics modelling, an abstracted model of Japan's energy industry is used to study the feasibility and potential impacts of the proposed offshore solutions. Specifically, the model examines the impacts on Japan's energy self-sufficiency, electricity pricing, and CO₂ emissions. The model shows that under Japan's Business As Usual (BAU) approach, it would likely not meet its intended energy security, economic, and environmental targets. Two key conclusions are drawn from the study of Japan's energy policy and the modelling results. First, Japan's attempt to meet the diverse range of demands on its energy solution leads it to set inconsistent energy goals, which in turn overly restricts its solution space. Second, greater energy diversity through offshore energies will improve the prospects of reducing Japan's projected electricity prices, enhancing its energy security through greater self-sufficiency, and significantly reducing CO₂ emissions.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 69-75).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>access with(out) judgment</title>
<link href="https://hdl.handle.net/1721.1/132845" rel="alternate"/>
<author>
<name>Lin, Dai,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132845</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">access with(out) judgment
Lin, Dai,
            S.M.
            Massachusetts Institute of Technology.
access with(out) judgment takes the reader on a learning expedition by means of reading, writing, and doodling. Since learning is accessible and actionable for any reader, this text prioritizes opportunities for research-driven practice - the acquisition, contemplation, integration, synthesis, and expression of knowledge on the part of the reader - over the descriptions of research conventional in traditional academic publications. Research for this text includes, but is not limited to, analyses of and references to 60+ years of neuroscience research, 130+ years of psychology research, and a few centuries of history and lessons learned from the western education system (and its impact on non-western nations). Content within this text incorporates several millennia of philosophy of knowledge, philosophy of logic, and systems of knowledge as captured in written texts and as deduced through scientific study (such as anthropology and linguistics). This text is organized as a collection of access points for a self-directed learning experience. This non-traditional format responds to and complements the state of modern-day technology, in which readers can access a global information system. access with(out) judgment encourages readers to explore multi-dimensional learning pathways driven by their individual sense of curiosity.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 196-207).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards measuring attention allocation in model-based engineering teamwork</title>
<link href="https://hdl.handle.net/1721.1/132844" rel="alternate"/>
<author>
<name>Manandhar, Prakash.</name>
</author>
<id>https://hdl.handle.net/1721.1/132844</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Towards measuring attention allocation in model-based engineering teamwork
Manandhar, Prakash.
Large organizations succeed when they are operating with systemic awareness. A similar dynamic could be in effect at the level of teams and teams of teams within engineering organizations. Engineering teams are tasked with solving problems of a multidisciplinary nature under multiple stakeholder constraints. For better performance, teams need to be aware of multiple constraints simultaneously. This thesis explores the design of sensors to measure team awareness using team attention in the problem and solution spaces. The concepts of situational awareness and attention allocation have been studied in the literature in the context of user interaction design for individuals or teams in areas where team members could be overwhelmed with information. Engineering problem solving is of a similar nature, where team members have to be aware of multiple information sources and bring their attention to the right pieces of information to make decisions. To define what is "right", the concepts of a problem space and a solution space are defined. It is hypothesized that higher-performing teams allocate their attention to systemically significant portions of the problem and solution spaces. An experiment is designed assuming that short strategy discussions result in greater attention allocation to systemically significant portions of the problem and solution spaces. The concept of using a strategy discussion to spur creativity was based on literature in team creativity and innovation, which posited that a process consisting of iterations of first divergent and then convergent thinking results in greater innovation. A toy problem of designing an innovation campus is chosen. Data from 50 teams spending one hour with a model exploration interface are analyzed. This problem was designed to be easily understandable in a short time by conference-goers, from whom a pool of volunteer participants was assigned randomly to 50 teams of 2 to 4 participants per team.
Of the 50 teams, 14 were assigned to a control group instructed to perform a placebo discussion instead of a strategy discussion. Two other groups, consisting of 14 and 15 teams respectively, had strategy discussions at the beginning and middle of the experiment time-slot. Pareto-frontier-based ranking methods are used to rank team performance. Performance across the different groups is compared using hypothesis testing methods. Results suggest that strategy discussions help teams arrive at more effective problem-solving. Although these results are not statistically significant at the 95% confidence level (they were significant at the 85% confidence level) using a Kruskal-Wallis test, they do provide promising directions for further work. Besides directly testing the hypothesis, other observations were made on the data. One interesting observation was that teams that had strategy discussions tended to perform better as they executed more simulations, while teams that had the placebo discussion tended to perform worse as they executed more simulations. The data gathered during the experiments has not been fully analyzed due to the scope of the thesis. Further work that could be done includes analyzing user interface "fingerprints" to measure attention allocation directly, to test the assumption that a strategy discussion during a decision-making session results in higher attention allocation to systemically significant portions of the problem and solution spaces. An attempt is made at defining the concept of attention allocation and quantitatively measuring how much attention is allocated to systemically significant portions of the problem and solution spaces. Further work is also warranted in exploring alternate definitions and calculations of this metric.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 93-98).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying an uncertainty-based acquisition strategy framework to select an appropriate approach for new product or system in the Military</title>
<link href="https://hdl.handle.net/1721.1/132843" rel="alternate"/>
<author>
<name>Lew, Donald K.,
            Jr.
            (Donald Kai-Kean)</name>
</author>
<id>https://hdl.handle.net/1721.1/132843</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Applying an uncertainty-based acquisition strategy framework to select an appropriate approach for new product or system in the Military
Lew, Donald K.,
            Jr.
            (Donald Kai-Kean)
Given an uncertainty-based world with changing stakeholder needs, stakeholder objectives, operating environments, and technologies, there is a paradigm shift in systems engineering from systems built to last to systems built to evolve. This is further observed when developing a product or system with a tension between confidence in requirements and the ability to respond to changing requirements. This natural tension of uncertainties requires a framework for balancing the need-space and solution-space between acquisition managers &amp; chief engineers and warfighters. Embedding flexibility in a product or system is a method to foster evolvability, which can sustain value delivery to stakeholders in a timely and cost-effective way after the product or system has been fielded. This research develops a foundational framework that was tested and analyzed against past military acquisitions through anonymous surveys and voluntary interviews. This uncertainty-based strategy framework is intended to help guide the tailorable traditional acquisition pathway during the materiel solution analysis phase and can provide valuable insight into selecting the correct acquisition strategy quadrant for system design.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 86-88).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The future of retail</title>
<link href="https://hdl.handle.net/1721.1/132842" rel="alternate"/>
<author>
<name>Foncillas, Blanca
            (Lepach Foncillas)</name>
</author>
<id>https://hdl.handle.net/1721.1/132842</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The future of retail
Foncillas, Blanca
            (Lepach Foncillas)
The retail environment in the United States has undergone several massive changes since the 1880s. From its beginnings, retail has connected buyers to suppliers; even today, that basic premise has not changed. This paper will walk through the various eras of retail, highlighting innovations and trends that have carried forward to the present. It will analyze how retailers have survived or failed, and discuss the potential opportunities and disruptions currently occurring in the retail sector. We will discuss the effect of the world pandemic on businesses in the United States, asking what gaps the pandemic has exposed in the industry and which retailers will survive it. Finally, we will look at projections and trends for the next five to ten years in the future of retail. We will take a deep dive into new technologies, tools, and business models that are helping retailers achieve growth and success. We will look at the winners and losers, the change in consumer buying behavior, and the types of disruptions, innovations, and changes we expect in the future. The aim of this paper is to provide basic guidelines for retailers to plan for likely future trends; to showcase retailers that are succeeding and retailers that are failing; and to uncover new technologies, processes, and tools used today to prepare for the rapid changes to come.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 48-53).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian calibration of in-line inspection tool tolerance</title>
<link href="https://hdl.handle.net/1721.1/132841" rel="alternate"/>
<author>
<name>Lee, Jeffrey Liang.</name>
</author>
<id>https://hdl.handle.net/1721.1/132841</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Bayesian calibration of in-line inspection tool tolerance
Lee, Jeffrey Liang.
Calibration of Magnetic Flux Leakage (MFL) In-line Inspection (ILI) tools is an important part of the overall pipeline integrity management process. Over-called or under-called corrosion features can have significant impacts on safety and resource management. This thesis examines methods for improving the validation and calibration processes using Bayesian inference. The focus is on improving the tolerance that is applied to undug features to optimize the execution of risk-based repairs. A simulated data set was generated with two separate categories, one representing tool performance on basic features and another on challenging features. The parameters [alpha], [beta], and [sigma] were estimated using a Bayesian model leveraging a Markov Chain Monte Carlo simulator. The [sigma] parameter is used to determine the appropriate tolerance to apply and was compared with a [sigma] calculated via the method recommended by API 1163. Results from the example data set show that in challenging situations, the confidence level of the tool performance can be increased from 89% to 95% and the mean average error can be decreased using the Bayesian inference model. Opportunities to use the outlined methods to improve other processes in ILI validation are discussed. By appropriately updating the likelihood used in the Bayesian model with dig data, the tolerance can more accurately represent the undug features and risk management decisions can be made accordingly.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 65-67).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital twin technology for enhanced upstream capability in oil and gas</title>
<link href="https://hdl.handle.net/1721.1/132840" rel="alternate"/>
<author>
<name>LeBlanc, Mollie B.
            (Mollie Burke)</name>
</author>
<id>https://hdl.handle.net/1721.1/132840</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Digital twin technology for enhanced upstream capability in oil and gas
LeBlanc, Mollie B.
            (Mollie Burke)
Digital twins are receiving considerable attention as a cutting-edge technology that will transform the oil and gas industry. Powered by a digital thread that connects data across the product lifecycle, a digital twin virtually mirrors or emulates processes, assets, and projects in real time to generate highly valuable insights. Promises of value creation, delivering optimized production, increased reliability, improved safety, and enhanced foresight, are now driving oil and gas operators to realize their potential. Despite these claimed and expected benefits, realized value is often hard to quantify and explicitly link to digital twin technology. In addition, a consistent definition and an available reference architecture are lacking, leaving no standard approach to implementing this technology. This thesis investigates and summarizes the digital twin's enabling technologies (e.g., model-based systems engineering, network infrastructure, the Internet of Things (IoT), and automation) and provides insight into industry digital twin applications in use today. Modeling a simple production facility demonstrates that digital twins have the potential to improve the prediction and mitigation of facility failures, leading to overall higher availability and improved financial outlooks for projects. The simulation results of a highly robust and integrated digital twin used on an offshore, deepwater facility showed an improved NPV of $211 million over 27 years. With enhanced upstream capabilities enabled by a digital twin, reducing daily physical inspection requirements becomes more feasible. However, as costs for offshore personnel decrease, the cost of software development and maintenance will increase sharply. Through the increased rigor and oversight provided by the digital twin platform, oil and gas assets can increasingly be monitored and controlled remotely. 
From the perspective of a three-component digital twin framework consisting of modeling and analytics, enablement technology, and data, a digital twin can provide value from a virtual proxy to a fully autonomous system.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. Page 180 blank.; Includes bibliographical references (pages 171-179).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privacy preserving framework for federated learning in genomics</title>
<link href="https://hdl.handle.net/1721.1/132839" rel="alternate"/>
<author>
<name>Kokje, Yashashree.</name>
</author>
<id>https://hdl.handle.net/1721.1/132839</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Privacy preserving framework for federated learning in genomics
Kokje, Yashashree.
With the advent of machine learning, organizations today collect and process data at an unprecedented scale. This has led to rapid growth in innovation across industries, but it also poses numerous challenges around maintaining user privacy, particularly in the fields of healthcare and genomics, where data is highly sensitive. Unlike credit cards or passwords, one's genomic information cannot be modified at will and can uniquely identify the individual. The objective of this thesis is to develop an easily configurable framework that allows organizations to collaborate and advance genomic research without directly sharing user data with each other. This thesis includes the development of a privacy-preserving framework for federated learning on genomic datasets that are distributed across organizational silos. PAGe (Privacy Aware Genomics) has been open-sourced and has a low barrier to entry. A packaged runtime environment is available that includes popular bioinformatics tools and machine learning libraries. Experimental setup is controlled through configuration files, allowing users to easily terminate, restart, or reproduce experiments. Finally, there is an in-depth evaluation of the framework using Type 2 Diabetes disease risk prediction as a case study, with the 1000 Genomes dataset as input.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 57-59).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for human behavior to enable circular packaging</title>
<link href="https://hdl.handle.net/1721.1/132838" rel="alternate"/>
<author>
<name>Lakhani, Sabira.</name>
</author>
<id>https://hdl.handle.net/1721.1/132838</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Designing for human behavior to enable circular packaging
Lakhani, Sabira.
Society's linear model of consumption - make, use, and throw away - is not sustainable. Waste management systems have not been built to handle the production and consumption patterns of the modern age, nor are they equipped to absorb the dramatic escalations and changes in product packaging. Single-use packaging is an issue that resonates with customers and helps them understand the impacts of climate change, creating an opportunity to engage with interested stakeholders and incite customer action that could lead to wider and longer-term behavioral and system changes that benefit the environment. This thesis leverages the human-centered design process to understand the context of and challenges with packaging today for a consumer technology company, uncover insights and form a specific research question, generate potential solutions, and gather user feedback on those solutions. It presents findings from users on concepts to reduce the environmental impact of single-use packaging and highlights themes in human behavior that could inform packaging design for sustainability.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 76-77).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Short-term traffic forecasting for a smart satellite communications system</title>
<link href="https://hdl.handle.net/1721.1/132837" rel="alternate"/>
<author>
<name>Jones, Damon E.,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132837</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Short-term traffic forecasting for a smart satellite communications system
Jones, Damon E.,
            S.M.
            Massachusetts Institute of Technology.
Satellite communications systems are undergoing a modernization from traditional "bent pipe" or static allocation to efficient capacity allocation. One challenge in making more precise use of satellite resources is the change in user terminal traffic during a complete cycle of the system: collecting data, generating a constellation setting solution, transmitting the new solution to each satellite, and executing changes to the satellite's parameters. As the system's cycle time grows, the user's desired data rate changes, causing the optimized solution to be based on an erroneous traffic model. This thesis compares single-user models using a gradient boosting algorithm with a multi-user model using Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) neural networks to forecast terminal traffic. Each algorithm was tuned using a two-stage design-of-experiments process consisting of a fractional screening design to identify impactful hyper-parameters and a central composite design to find optimal model settings. During a holdout period, the mean absolute percentage error using a 15-minute lag was 10.7% with a standard deviation of 2.6% over a month of forecasting. Networks using a GRU layer and tuned with random search had the best average performance, with an error of 9.6% and a standard deviation of 2.6%, outperforming the best XGBoost models, which had an error of 9.9% and a standard deviation of 3.5%.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis. Page 74 blank.; Includes bibliographical references (pages 71-73).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Influencer persona and audience engagement : an analysis of the user decision-making differences between traditional and short-video-based social media</title>
<link href="https://hdl.handle.net/1721.1/132836" rel="alternate"/>
<author>
<name>Wang, Anping,
            M. S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132836</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Influencer persona and audience engagement : an analysis of the user decision-making differences between traditional and short-video-based social media
Wang, Anping,
            M. S.M.
            Massachusetts Institute of Technology.
Social media influencer marketing, a new marketing method that builds on the idea of celebrity endorsement, is a growing phenomenon. The ever-increasing influence of social media in society today makes it a key modern content-driven marketing movement. The atmosphere of different social media platforms, the persona of influencers, and the qualities of influencers' posted content are all possible factors that affect the outcomes of influencer marketing. This thesis explores the following questions: Are there differences between short-video and traditional social media platforms in influencer persona establishment? What, if any, are the reasons for these differences, from both the audiences' and the influencers' points of view? How do influencer personae affect the decision-making processes of their audiences, and are there methods that can quantify those influences? Presented in the thesis are A) definitions and subsequent analyses of different social media platforms, influencers, and personae from human-centered design and behavioral marketing research perspectives; B) interviews and surveys of influencers and audiences on persona topics; C) influencer-audience interaction data collection and analysis; D) the development of a method that quantifies influencer persona evaluations.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 114-117).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maternal mental health and child outcomes: a human-centered design perspective on preventive mental health care</title>
<link href="https://hdl.handle.net/1721.1/132835" rel="alternate"/>
<author>
<name>Venkatachari, Ramaa.</name>
</author>
<id>https://hdl.handle.net/1721.1/132835</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Maternal mental health and child outcomes: a human-centered design perspective on preventive mental health care
Venkatachari, Ramaa.
The lifelong cost of mental health disorders to children, families, and society is substantial and far-reaching. Mental health conditions are also significant contributors to the global disease burden. To reduce this burden and improve well-being across the lifespan, a preventive approach to mental health focused on awareness and early intervention is necessary. Neural development begins soon after conception and rapidly continues for the first five years after the birth of a child. Preventive measures thus need to be taken even before birth. Better management of mental health during the perinatal and postpartum periods can improve child outcomes. This thesis analyzes the existing treatments and care delivery methods for maternal mental health conditions through systems and design thinking frameworks. The overall objective was to evaluate the efficacy of the mental healthcare system in the United States and to advocate for a patient-centric approach to care. The outcome of this study is a set of recommendations for innovators in the mental healthcare space. These are meant to serve as a guide to develop interventions that augment functionality with empathy.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 61-64).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The evolution, not revolution, of digital integration in oil and gas</title>
<link href="https://hdl.handle.net/1721.1/132834" rel="alternate"/>
<author>
<name>Trevathan, Michael
            (Michael Thomas)</name>
</author>
<id>https://hdl.handle.net/1721.1/132834</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The evolution, not revolution, of digital integration in oil and gas
Trevathan, Michael
            (Michael Thomas)
High-impact digital innovations present opportunities for organizations to transform their business capabilities to adapt for future sustainability. To adopt new platforms offered by disruptive technologies, organizations must alter or retire existing business models, create and develop new competencies, and build an agile business culture. An organization's failure to respond to evolving digital initiatives will inevitably lead to a loss of competitive advantage and even obsolescence. Undertaking and managing transformative digital solutions may seem risky, but the alternative is riskier. This thesis explores the opportunities associated with integrating digital technologies into established oil and gas (O&amp;G) organizations, where transformation will be exceedingly difficult. Investing in the right technologies that fit the organizational size, competencies, and culture is critical for the success of adopted digital initiatives. Case studies reviewing digital investment portfolios within the O&amp;G industry are presented to evaluate the investment size, capabilities, and realized value creation associated with digital integration in design and operations. A systems approach was employed to understand the barriers and limitations to digital integration in the following areas: data value chain and workflows, data architecture standardization, and end-to-end lifecycle integration, with emphasis on O&amp;G drilling and completion operations. Additionally, a business strategy roadmap was created to recommend realized value opportunities for a digital investment portfolio to succeed in this constantly evolving marketplace.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 150-159).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The use of cost, schedule, and performance in the implementation of defense acquisition initiatives</title>
<link href="https://hdl.handle.net/1721.1/132833" rel="alternate"/>
<author>
<name>Visosky, Daniel J.
            (Daniel Joseph)</name>
</author>
<id>https://hdl.handle.net/1721.1/132833</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The use of cost, schedule, and performance in the implementation of defense acquisition initiatives
Visosky, Daniel J.
            (Daniel Joseph)
In the past 20 years, there have been no fewer than five major acquisition reform initiatives in the United States Air Force. Two of these initiatives, Open Systems Architecture and Middle Tier Acquisitions Rapid Prototyping, stand to change the way the Air Force acquires and engineers weapon systems due to their potential impact on cost, schedule, and performance. Given this impact, can analysis measure the effect of reform initiatives on acquisition programs and identify future combinations of initiatives that maximize benefit for the Air Force? This research analyzed acquisition program outcomes before and after the implementation of a reform initiative using the following variables: cost, schedule, performance, ease of use, and difficulty of implementation. A tradespace analysis of the variables was then conducted to show how policymakers could theoretically make informed decisions on how best to implement, modify, or combine these initiatives. Quantitative data would be the ideal basis for this analysis; however, gathering such data before reform initiative implementation was not possible for this thesis. Due to this lack of data, qualitative information (survey techniques and the documented purposes of the reform initiatives), as well as model-based parametric analysis, was used. The research shows that, while it is possible to analyze a reform initiative using this method, decision-makers should be cognizant of the limitations of this type of predictive modeling; as such, the USAF should continue to thoroughly analyze initiatives before implementation, perform surveys through "policy gaming" when possible, ensure initiatives do not counteract each other, and consider combining reform initiatives in the future.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 99-100).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining policy for a system dynamics model using reinforcement learning</title>
<link href="https://hdl.handle.net/1721.1/132832" rel="alternate"/>
<author>
<name>Thomas, Aditya.</name>
</author>
<id>https://hdl.handle.net/1721.1/132832</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Determining policy for a system dynamics model using reinforcement learning
Thomas, Aditya.
System dynamics allows managers and policy makers to analyze problems with non-linear feedback structures and thus counter-intuitive behavior. A main tool of system dynamics is to build a computational model of a system and analyze it to determine suitable policies to move the system toward a desired goal. This work aims to use methods and algorithms from reinforcement learning to determine suitable policies for a system dynamics model. We introduce the techniques, methods, and algorithms of reinforcement learning and apply them to a classical model from the system dynamics literature.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 42-43).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the impact of technology progress on the future architecture of Japanese space enterprise</title>
<link href="https://hdl.handle.net/1721.1/132831" rel="alternate"/>
<author>
<name>Tamura, Yasutsugu.</name>
</author>
<id>https://hdl.handle.net/1721.1/132831</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Investigating the impact of technology progress on the future architecture of Japanese space enterprise
Tamura, Yasutsugu.
Historically, Japanese space development has been carried out by the government and the national space agency. However, some key figures of merit of space technology have reached a sufficiently mature level, and people's attitudes toward space technology are beginning to change. In this context, the ecosystem and the stakeholders around JAXA have been changing dramatically, and expectations of JAXA are increasing. In this research, ARIES, a system architecting framework, is applied to the generation of the desired future architecture for JAXA by considering the external and internal landscape, stakeholders, and the current architecture. Space technology progress is also analyzed as an additional process of the ARIES framework in order to generate a holistic envisioned future. Based on these analyses, the gap between the current architecture and the desired future architecture is identified, and alternative architectures are evaluated with an unweighted decision matrix and a weighted SWOT matrix. As a result, a future architecture named the "3Ps architecture", which has three functions: Platformer, Partner, and Purchaser, is generated. This research provides an implementation strategy as well as the 3Ps architecture, and the strategy shows that sustainable transformation under limited resources is important for the future space ecosystem in Japan. The analysis provides a desired future architecture for the Japanese space agency to maximize the outcomes of the Japanese government's investment over the next decade. The results of this research can be used to create an action plan for the transformation. As future work, multiple stakeholders could join this research to discuss further and create a more sophisticated strategy and a detailed action plan.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 112-116).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A systemic approach toward scalable, reliable and safe satellite constellations</title>
<link href="https://hdl.handle.net/1721.1/132830" rel="alternate"/>
<author>
<name>Kharsansky, Alan.</name>
</author>
<id>https://hdl.handle.net/1721.1/132830</id>
<updated>2025-10-30T17:51:24Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A systemic approach toward scalable, reliable and safe satellite constellations
Kharsansky, Alan.
Constellations of hundreds to thousands of satellites are becoming a reality. Nevertheless, the unprecedented scale of these systems is creating new sorts of challenges and risks for the designers and operators, mainly due to the high level of automation required. This study demonstrates how architectural decisions like the constellation topology, type of connectivity, and the level of automation affect the scalability, reliability, and safety of these constellations. A survey of past, current, and planned constellations was conducted to identify key architectural decisions and create representative architectures to analyze using a novel process called Conceptual Architecture Development. These high-level conceptual architectures were refined and analyzed using Systems Theoretic Process Analysis (STPA), and a qualitative assessment and a comparison of the emergent properties were performed. The results suggest that increased automation improves the scalability of the system, mostly when human controllers' responsibilities are shifted from individual satellite management to constellation management. However, increased automation also creates new responsibilities for human controllers and does not necessarily improve the safety and reliability of the system. Human-related causal factors found in lower levels of automation are mostly translated into software-related causal factors in higher levels of automation instead of being eliminated, and new types of hazards arise from the introduction of human-automation interfaces. Moreover, other architectural decisions, such as ground connectivity type, can negatively impact the safety and reliability of the constellation, mostly for slightly automated systems. This study shows that architectural decisions can significantly affect the resulting emergent properties of a system and that there is a tradeoff between automation, safety, and reliability that should not be overlooked. 
Designers and operators should analyze this tradeoff and the development and operational costs in order to select the best-suited architecture for their constellations based on their expertise, technology strategy, and constellation size.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 109-113).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An inclusive design framework for autonomous vehicles to create valuable experience for elderly</title>
<link href="https://hdl.handle.net/1721.1/132829" rel="alternate"/>
<author>
<name>Jain, Samip.</name>
</author>
<id>https://hdl.handle.net/1721.1/132829</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">An inclusive design framework for autonomous vehicles to create valuable experience for elderly
Jain, Samip.
Autonomous vehicles are approaching the market faster than we imagined, and the ageing population is growing year after year. Today the world faces several challenges related to personal mobility, such as traffic congestion, pollution, and road accidents. The autonomous vehicle is a promising technology with the potential to address some of these challenges. Within personal mobility, there is an opportunity to introduce autonomous vehicles for the ageing population. Right now, the technology is being designed with potential users in mind, a group that may not include older persons. Since this technology is at an early stage, we could intervene and design a more inclusive technology, irrespective of age and health conditions. In this research, we apply a human-centered design approach to study autonomous vehicles and the elderly people of tomorrow. We leveraged various qualitative research methodologies and a human-centered design approach to better understand the domain. As a result, we propose a novel framework that might help us design a more inclusive technology for the future.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 65-68).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System engineering applied to early phase offshore oil and gas projects</title>
<link href="https://hdl.handle.net/1721.1/132828" rel="alternate"/>
<author>
<name>Johnson, Allison,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132828</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">System engineering applied to early phase offshore oil and gas projects
Johnson, Allison,
            S.M.
            Massachusetts Institute of Technology.
Companies around the world have established project development processes that begin with identifying an opportunity and end in execution, with various iterations of design development in between. Those design iterations are critical to the success of any project. For complex projects, the inability to identify and evaluate feasible architectures in early design phases leads to long, iterative, and costly design cycles. This thesis explores the application of both systems engineering and system architecture tools and processes to the early-phase design of an offshore oil and gas processing facility. Base principles of decomposition, form-to-function mapping using Object-Process Methodology, and design structure matrices, leading to the development of tradespace modeling techniques, are explored. Applying these methods provides insight into the entire landscape of possible architectures and ensures that all options are considered in the development of these complex systems. Applying these tools identifies new concepts, highlights preferred architectures, and identifies variables or constraints requiring further architecting throughout the project development cycle. These outcomes highlight the ability to evaluate complex projects using modeling tools, ultimately leading to fewer design iterations and subsequently reduced development costs.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 73-75).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a measurement device for bread dough proofing</title>
<link href="https://hdl.handle.net/1721.1/132827" rel="alternate"/>
<author>
<name>Hsu, Emily Jane.</name>
</author>
<id>https://hdl.handle.net/1721.1/132827</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Design of a measurement device for bread dough proofing
Hsu, Emily Jane.
The structure of yeasted breads is created during the multiple stages of bread-making: mixing, proofing, and shaping. These stages develop a network of gluten and air bubbles that leavens the dough and allows it to rise and achieve its final form during baking. One of the most time-sensitive and critical stages is the final period before baking, known as the final proof. During this stage, starches in the flour break down into sugars, which are consumed by the yeast. The yeast then produces bubbles of carbon dioxide that are suspended in the dough's gluten structure. The goal of the final proof is to create the optimal dough structure for the highest bread rise during baking. However, there is a narrow window of time in which the dough is optimally proofed. If the dough is left to proof for too long, known as overproofing, the air bubbles grow so large that they pop and tunnel, causing the bread to collapse in the oven. An underproofed dough may never achieve the correct rise in baking. The window between a properly proofed dough and an over- or under-proofed one can be as little as fifteen minutes. This optimal window depends on the type of dough, the ambient temperature, and the humidity. Without controlling each of these factors, non-industrial bakers must rely on experience or the imprecise "poke test" to ascertain whether the dough is properly proofed. This research seeks to design a device that quantitatively measures the dough's level of proofing and identifies when the dough is optimally proofed and ready for baking. With a precise measurement of dough structure, the non-industrial baker can then adapt to any variable that affects the final proof.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 43-45).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A systems architecture approach to the design of autonomous underwater vehicles and their servicing platforms</title>
<link href="https://hdl.handle.net/1721.1/132826" rel="alternate"/>
<author>
<name>Horton, Brendan K.
            (Brendan Kelly)</name>
</author>
<id>https://hdl.handle.net/1721.1/132826</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A systems architecture approach to the design of autonomous underwater vehicles and their servicing platforms
Horton, Brendan K.
            (Brendan Kelly)
Autonomous underwater vehicles hold great potential for industrial, military, scientific, and personal use. Intelligently applied autonomous functionality could improve work performed on subsea infrastructure, commercial shipping lane maintenance, canal and channel observation, search and rescue, military applications, and general scientific research. Given such potential, and supposing that existing technological barriers to progress could be overcome, what could a potential system architecture of future autonomous underwater vehicles look like? Fundamentally this thesis asks: "could novel architectures of AUV systems - specifically pairing AUVs with remote service platforms - lead to significant performance increases?" In approaching this subject, a specific case study in which autonomous underwater vehicles were extensively used is leveraged: the search for Malaysia Airlines Flight 370. This mission profile has been extensively documented by others, providing a comprehensive framework; it represents the single largest search and rescue operation ever performed. Within this thesis, whole-system performance metrics of this search and rescue operation are compared against calculated performance metrics of systematically generated possible architectures. After decomposing the system into its functional elements, a deterministic evaluation is executed, followed by a probabilistic examination of the system as modeled. The results of the probabilistic model are also interpreted via a Pareto ranking methodology in which Pareto surfaces are identified in multidimensional tradespaces. The component cases which comprise the Pareto surface are then removed from the dataset, and the process is run again. This iterative approach demonstrated that the top ten performing architectures were composed entirely of architectures with either one or four AUVs. 
The outputs of these models are subsequently compared against the baseline system used in the search for MH370. Following the analysis, a major fault was identified in the foundations of all of the models, surrounding a figure of merit in which the time to the seafloor was calculated for all architectures. All of the top ten performing design vectors - systems containing one or four AUVs - were unchanged by this error. Architectures that were affected by the error - systems with more than four AUVs - were impacted negatively. Several re-interpretations of the error are presented herein as complexities inherent in the system that are not handled by these models. These emergent complexities were present in the system prior to model construction but were unaccounted for. Discovery of this faulty assumption laid bare several architectural decisions which are unexplored in this thesis but could provide the foundation for future work in this space. The outcome of these modeling efforts suggests that pairing an autonomous underwater vehicle with an autonomous service platform can improve all performance metrics, including daily search area rate, calendar mission completion time, and total project cost. This improvement is calibrated to the MH370 case study, but the performance metrics themselves are not exclusively applicable to search and rescue operations. The model indicates that such a system could accomplish the same mission in less time for half the cost. This thesis presents a vision of future autonomous underwater vehicle systems in which daily operational time, search area rates, calendar mission completion times, and total system costs can all be improved relative to existing standards. Such improvements are equally applicable to commercial, industrial, military, civilian, and scientific endeavors in which autonomous underwater vehicles could be a potential tool.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 177-184).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring pro-social message in job postings using machine learning</title>
<link href="https://hdl.handle.net/1721.1/132825" rel="alternate"/>
<author>
<name>Hong, Zhuoqiao.</name>
</author>
<id>https://hdl.handle.net/1721.1/132825</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Measuring pro-social message in job postings using machine learning
Hong, Zhuoqiao.
When searching for jobs, applicants are not motivated by monetary compensation alone; the meaning and social effects of the work also matter. Pro-social motivation, the desire to have a positive impact on other people or social collectives, also plays an important role in job searching. On the other hand, organizations have many incentives to promote pro-social jobs during recruiting and accordingly design pro-social characteristics into job postings. Using the latest machine learning techniques, we can quantify pro-social characteristics in massive numbers of job postings and potentially predict the pro-social messages advertised in online job postings. In this thesis, we take up the challenge of developing novel measures of pro-social content that satisfactorily address the problems identified with existing measures. We propose implementations of two different machine learning approaches to quantitatively measure pro-social messages across over five million online job postings and effectively predict pro-social jobs, with 79% and 94% prediction accuracy yielded by Methodology I and Methodology II, respectively. Based on these approaches, we evaluate model performance and measure the correlation of industries' use of pro-social messages in job postings to compare the effectiveness of the two models on several metrics.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 75-79).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI assistant for the oil &amp; gas production engineer</title>
<link href="https://hdl.handle.net/1721.1/132824" rel="alternate"/>
<author>
<name>Heilbrun, Brian J.
            (Brian James)</name>
</author>
<id>https://hdl.handle.net/1721.1/132824</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">AI assistant for the oil &amp; gas production engineer
Heilbrun, Brian J.
            (Brian James)
In the Oil &amp; Gas industry, Production Engineers are responsible for monitoring well performance and ensuring that each well produces at its target rate. Wells can experience a wide variety of problems that negatively impact production. It is the Production Engineer's responsibility to identify and fix these problems as early as possible. Well tests that measure how much oil, gas, and water a well is producing are taken for each well once a month. Production Engineers identify when a well has a problem by observing trends in well test data. When a well's production declines faster than expected, the Engineer conducts a study of all the activity in the area in hopes of identifying what events caused the change in behavior. The sheer volume of wells for which an Engineer is responsible, coupled with the amount of time it takes to investigate each problem, poses a major challenge. We have developed a program that monitors well performance and can identify and describe changes in well behavior. When a change is detected, the program investigates data from nearby wells and provides a summary of the events it deems most likely to have caused the change. The program can make suggestions based on these findings and provide daily reports that allow Engineers to focus on executing the solution rather than investigating the problem. By creating a framework that allows the program to make sense of event-behavior pairs, we have created an assistant that supports Production Engineers in their most critical role. Furthermore, we have tested the systems that will allow this program to act as a true assistant to the Engineer, and not simply function as another tool that must be learned. In addition to the performance monitoring and root-cause identification already described, these systems include speech-based interaction, data querying, and results visualization.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. "September 2020."; Includes bibliographical references (page 88).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Interfaces : utilizing real-time biofeedback in the wild to elicit subconscious behavior change</title>
<link href="https://hdl.handle.net/1721.1/132823" rel="alternate"/>
<author>
<name>Haghighi, Nava.</name>
</author>
<id>https://hdl.handle.net/1721.1/132823</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Self-Interfaces : utilizing real-time biofeedback in the wild to elicit subconscious behavior change
Haghighi, Nava.
In this thesis, I introduce Self-Interfaces as a method for creating behavior change. Self-Interfaces are interfaces that intuitively communicate relevant aspects of covert physiological signals through biofeedback to give the user insight into their behavior and assist them in creating behavior change. The human heartbeat is a good example of an intuitive and relevant haptic biofeedback; it does not distract and is only felt when the heart beats fast. My vision is to identify other covert physiological processes and instances in which they become useful, and augment our awareness of those signals in order to create behavior change. As a first case-study, I develop the Self-Interface for Electrodermal Activity (EDA), which is designed to help regulate attention and interest in users with Attention Deficit Hyperactivity Disorder (ADHD). EDA is a covert physiological signal correlated with high and low arousal affective states. Three studies were carried out to: 1. identify the design criteria for development of the EDA Self-Interface, 2. identify guidelines to reduce the cognitive load imposed by the haptic biofeedback signal, and 3. identify the aspects of the EDA that are relevant and insightful for the ADHD population. The insights from these studies contributed to the design and development of the EDA Self-Interface which has three components: EDA Sensor (Affectiva E4 Sensor), a wearable haptic biofeedback interface, and a phone app to process the EDA data and communicate it with the wearable interface. Lastly, I discuss the evaluation criteria for the EDA Self-Interface and propose a longitudinal study for such evaluation.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 105-114).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating roadblocks to artificial intelligence adoption in enterprises through a systems perspective</title>
<link href="https://hdl.handle.net/1721.1/132822" rel="alternate"/>
<author>
<name>Ghorpade, Avinash
            (Avinash Gulabrao)</name>
</author>
<id>https://hdl.handle.net/1721.1/132822</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Investigating roadblocks to artificial intelligence adoption in enterprises through a systems perspective
Ghorpade, Avinash
            (Avinash Gulabrao)
Artificial Intelligence (AI) is a new digital technology and strategic imperative. It can have an enormous influence on the economy and society. The term AI was introduced at the Dartmouth conference in 1956 and was used mainly in computer science research and the academic domain. AI has experienced several ups and downs since its inception. However, over the last few years, the availability of massive amounts of data, advanced algorithms, and an exponential increase in computing power have fueled its growth. It is acting as a key driver and value creator for industries such as healthcare, finance, education, manufacturing, and retail. Although a few enterprises have succeeded in adopting AI, others are struggling to identify potential AI use cases and realize investment returns. There are significant challenges enterprises need to overcome to adopt AI. This research aims to inform successful enterprise adoption of AI by presenting a systems perspective and investigating the roadblocks. Based on the research conducted, the six most dominant roadblocks to successful adoption of AI are identified using a literature survey and by synthesizing learnings from AI-adoption failure cases. The identified roadblocks are: not recognizing the limits of current AI technologies, not recognizing the need for human judgment and involvement, lack of enterprise capabilities to manage the risks of embracing AI, lack of a strategy to market AI products and services, difficulty in moving from the AI-pilot stage to the real-world application stage, and not actively engaging all stakeholders. Adopting holistic thinking is one approach to addressing the roadblocks faced in adopting AI at the enterprise level.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 102-118).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring architectural transformation to improve value of plant EPC business : case study of LNG production plant</title>
<link href="https://hdl.handle.net/1721.1/132821" rel="alternate"/>
<author>
<name>Fukatsu, Takeshi,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132821</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Exploring architectural transformation to improve value of plant EPC business : case study of LNG production plant
Fukatsu, Takeshi,
            S.M.
            Massachusetts Institute of Technology.
Plant EPC projects, typified by LNG production plants, deliver some of the most expensive facilities in the world. While many such plants have been built over the last several decades, EPC costs have not decreased; rather, a significant cost increase was observed in the last decade. The most significant reasons are that requirements became more complex, the associated structural and piping facilities became more complex and heavier, more design changes occurred during EPC, supply chains became more dominant and higher priced, and workforce inflation occurred. Cost overrun and schedule delay are quite common in the plant EPC business; however, they cause heavy pain for many associated enterprises. While many lessons have accumulated in organizations, and many experienced, brilliant engineers and project managers are doing their best to eliminate cost overruns and schedule delays, not many projects manage to complete within budget and on schedule. This drives LNG prices higher. On the other hand, societal pressure to decrease the LNG price has become high. While the world's LNG demand is expected to increase, many off-takers cannot make economic sense of an LNG price above $6/mmBtu, because high pressure to reduce carbon emissions renders their infrastructure more complex and expensive. LNG is a prospective energy source that can reduce world carbon emissions by substituting for coal and oil, which emit twice as much greenhouse gas as LNG does per unit energy. To reduce global warming, EPC contractors of LNG plants could play a significant role. This thesis explores the current problems faced by their EPC business and seeks to make EPC more valuable and lower in cost by transforming the enterprise architecture of EPC contractors.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 134-142).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managing environmental risks with flexibility : case of phosphate fertilizer industry in Morocco</title>
<link href="https://hdl.handle.net/1721.1/132820" rel="alternate"/>
<author>
<name>Cadario, Adèle
            (Adèle Eve Maire Ferrazzini Cadario)</name>
</author>
<id>https://hdl.handle.net/1721.1/132820</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Managing environmental risks with flexibility : case of phosphate fertilizer industry in Morocco
Cadario, Adèle
            (Adèle Eve Maire Ferrazzini Cadario)
This thesis develops and demonstrates, in the context of environmental uncertainties, a process to 1. Quantitatively assess the effects of uncertainties on long-term enterprise performance, and 2. Open the design space of strategic planning using real options to mitigate risks and take advantage of positive opportunities. Global warming, which is producing more frequent extreme weather events and driving in-depth societal transformations, increases the need to move beyond the usual habit of grounding strategic planning on deterministic forecasts, and pushes for realistic evaluation of potential results under uncertainty. We use a screening model to reproduce enterprise cash flows and evaluate the net present value under thousands of scenarios (Monte Carlo simulation). This high-level evaluation enables us to test different strategies and compare the distributions of potential outcomes. Overall, we can realistically explore a larger design space for strategic planning and intentionally integrate flexibility into the design with an understanding of the potential gains and the required preparation. We apply the analysis to a case study inspired by OCP Group, the major Moroccan phosphate mining and fertilizer manufacturer. We examine the fluctuations of commodity markets and the transformations driven by environmental concerns. We recognize that environmental constraints can regionally change the systems of production and the demand, and could deeply impact global fertilizer markets. We especially focus on the risks of an international over-supply, caused by a potential drastic decrease in East Asian consumption, and of a regional change in the requirements for phosphate rock (e.g., limits on heavy-metal concentrations). These could create parallel markets and change the flow of production. Our quantitative analysis indicates the desirability of exploring strategic drivers to complement the traditional price/volume approach. 
Flexible capacity expansion, in terms of both volume and type of products, coupled with a systemic allocation of production to markets across the industrial bandwidth (instead of a sales strategy by product line), could improve expected NPV significantly.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 82-85).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating economic development through innovation : university cases in research, education and catalyzing innovation</title>
<link href="https://hdl.handle.net/1721.1/132819" rel="alternate"/>
<author>
<name>García Sánchez, Juan Cristóbal.</name>
</author>
<id>https://hdl.handle.net/1721.1/132819</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Accelerating economic development through innovation : university cases in research, education and catalyzing innovation
García Sánchez, Juan Cristóbal.
The primary role of universities in strengthening economic development is in accelerating innovation and entrepreneurship. The framework Universities as Engines of Economic Development codifies a set of practices that collectively illustrate how universities can more effectively accelerate innovation and contribute to sustainable economic development. This qualitative interdisciplinary study complements this systemic framework by illustrating, through eleven case studies, how these practices of knowledge exchange in research, education, and catalyzing innovation are applied across different types of universities and circumstances. Results show how universities invigorate discovery, enhance the training of future researchers, and improve the direct diffusion of knowledge by supporting investigator-driven mechanisms that solve societal challenges through interdisciplinary collaborations--among scholars, industry, and government--while influencing education, science policy, and practice. Students are educated through an integrated curriculum, real-world projects, and independent learning to scale up tech ventures by leveraging the resources of local ecosystems and providing access to university intellectual property, tangible research products, incubators, labs, funding, and professional entrepreneurial networks. These case studies reveal how universities can accelerate sustainable prosperity through innovation and help stakeholders reach their full potential.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. "September 2020."; Includes bibliographical references (pages 81-89).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A systems analysis and technology roadmap for fall mitigation systems for the elderly</title>
<link href="https://hdl.handle.net/1721.1/132818" rel="alternate"/>
<author>
<name>Enti Ranga Reddy, Vikas Reddy.</name>
</author>
<id>https://hdl.handle.net/1721.1/132818</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A systems analysis and technology roadmap for fall mitigation systems for the elderly
Enti Ranga Reddy, Vikas Reddy.
Falls and fall related injuries in the elderly (aged 65 and older) are a major health challenge - both to the affected individual and to the public health system. Approximately 28-35% of the elderly fall each year and falls lead to 20-30% of mild to severe injuries, and are underlying cause of 10-15% of all emergency room (ER) visits. Falls cause 90% of the hip fractures in the elderly and also result in medical complications and high morbidity if the person does not receive prompt medical attention. A fall mitigation system (FMS) is either a wearable or ambient system that detects falls, reduces fall related injuries and issues emergency alerts to prevent the long-lie. Current FMS have poor user adoption and are not as effective in preventing the long-lie. This thesis uses a systems approach to analyze architectures for a fall mitigation system architecture that can detect falls, reduce injury and issue emergency alerts to reliably prevent the long-lie in independent elders. A National Health Interview Survey data was analyzed to understand the causes for falls, types of fall related injuries and common fall locations for community dwelling elders. A concept of operations was defined based on these findings and a user survey was conducted to understand the needs of community dwelling elders and the results were analyzed to prioritize system requirements for a fall mitigation system (FMS). An FMS was decomposed into six level 2 functions and the various form choices for each of these functions were analyzed and rated for performance, power consumption and cost. Five different fall mitigation system architectures were analyzed and the Distributed-Hybrid architecture had the highest performance while the Integrated-Wearable architecture had the lowest power consumption. Future technology trends in robotics, AI, neuromorphic computing and energy harvesting were studied to create a long-term strategic roadmap for fall mitigation systems. 
Neuromorphic architectures for computing and sensing offer the greatest performance-per-unit-power improvement for fall mitigation systems.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 71-76).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of production policy in U.S. Naval Aviation's Primary Flight training</title>
<link href="https://hdl.handle.net/1721.1/132817" rel="alternate"/>
<author>
<name>Hanley, Nicholas R.
            (Nicholas Ryan)</name>
</author>
<id>https://hdl.handle.net/1721.1/132817</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">An analysis of production policy in U.S. Naval Aviation's Primary Flight training
Hanley, Nicholas R.
            (Nicholas Ryan)
The United States (US) Navy struggles to sustain its ranks of aviators; it therefore seeks to produce more pilots, more quickly, without additional resources. This thesis employs the Architecting Innovative Enterprise Strategy (ARIES) framework, Factory Physics methodologies, and experimental models to investigate new policies, organizational structures, processes, and knowledge that support this imperative in the Navy's Primary Flight Training commands. It addresses promising changes to Primary and how to facilitate them. The ARIES framework, and associated stakeholder interviews, logically investigate the qualitative intricacies of Primary to illustrate its operation. Quantitative internal baseline methods suggest policies for student inventory management, student prioritization, and aircraft allocation. Each technique is tested by a joint discrete-process and agent-based student model. This investigation suggests that Primary is challenged by an excessive student inventory and unclear operations policies. It asserts that these two factors create excessive wait time and resource-wasting rework that drastically reduce production performance. Experimental results characterize the trends of these detriments and quantify their impacts on throughput and training time. The work concludes that a tightly governed start rate can be paired with three concurrent policies to raise average throughput by 62% and reduce average time to train by 52%: 1. Prioritize students by their total time in training to reduce the impacts of rework. 2. Allocate resources to the largest queues to increase peak performance and capacity. 3. Manage student inventory via a constant work in process (CONWIP) policy to reduce the impacts of rework and dampen sensitivity to resource variations. 
It also suggests minimally disruptive changes to Primary's architecture that aim to reduce organizational, knowledge, and process complexities while promoting sustainability, scalability, and evolvability in the enterprise. Four core concepts summarize the rearchitecting effort: 1. Employ data analytics in the current infrastructure to aid in decision making. 2. Balance organizational centralization to support flexible but consistent performance. 3. Consolidate and reinforce institutional knowledge in stable employees. 4. Promote knowledge sharing and coordination to improve organizational learning. This thesis asserts that application of these new policies and re-architecting concepts will promote production performance, organizational knowledge, and proactive management.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 171-173).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data literacy in the digital age : experience design for a workplace learning solution</title>
<link href="https://hdl.handle.net/1721.1/132816" rel="alternate"/>
<author>
<name>Doherty, Oladipupo J.
            (Oladipupo Josiah)</name>
</author>
<id>https://hdl.handle.net/1721.1/132816</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Data literacy in the digital age : experience design for a workplace learning solution
Doherty, Oladipupo J.
            (Oladipupo Josiah)
It is estimated that 2.5 quintillion bytes of data are produced every day¹. The majority of this data originates from the many business processes rapidly migrating to cloud solutions. Although this transition is an economic benefit to the organizations making it, many of the internal teams at the forefront of this change are ill-equipped to understand, analyze, and take advantage of these business insights; in other words, there exists a deficiency in data literacy. Our motivation was to explore the impact of this digital phenomenon on human productivity across three major fronts: traditional customer service channels, brand/crowd-generated instructional content, and peer networks. A user research survey was conducted to better understand the problem space, and inferences were made to address the issues surrounding data literacy in the workplace. This work concluded with a service framework, conceptualized to provide a starting point for organizations to begin this cultural shift and empower their employees with the skills necessary to take full advantage of their vast data resources.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis. "September 2020."; Includes bibliographical references (pages 62-63).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Chicken or the Egg Problem : strategies for populating multi-sided business platforms</title>
<link href="https://hdl.handle.net/1721.1/132815" rel="alternate"/>
<author>
<name>Cunningham, Andrew,
            S.M.
            (Andrew James)
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132815</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">The Chicken or the Egg Problem : strategies for populating multi-sided business platforms
Cunningham, Andrew,
            S.M.
            (Andrew James)
            Massachusetts Institute of Technology.
Platform businesses such as Google, Amazon, VISA and Apple are major players in today's economy. But how do platform businesses start? Why would a customer visit Amazon Marketplace if there were no products, and why would businesses sell products on Amazon if there were no customers? This is a critical challenge for new platforms, and is known as the Chicken or the Egg Problem. This paper explores both successful and unsuccessful previous attempts to solve this challenge, identifies critical strategies that were used, and outlines recommendations for future platform businesses.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 85-88).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A vascular imaging system for longitudinal registration and mapping of superficial vessels with quantitative analysis</title>
<link href="https://hdl.handle.net/1721.1/132814" rel="alternate"/>
<author>
<name>Chen, Hongling,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132814</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A vascular imaging system for longitudinal registration and mapping of superficial vessels with quantitative analysis
Chen, Hongling,
            S.M.
            Massachusetts Institute of Technology.
Superficial vasculature present on human skin forms a network that is stable over time and unique across individuals, and it contains important physiological information that remains little understood and studied. Potential clinical applications include monitoring the progress of peripheral arterial disease, assessment of revascularization during surgical interventions, and early assessment of skin cancer from melanoma imagery. Non-clinical applications include biometric scanning and relocalization for ultrasound imaging. To bridge the knowledge gap between the technology and these potential applications, a reliable, robust, and versatile platform is necessary. My thesis project involves the design and development of a platform for longitudinal superficial vasculature imaging, as well as robust computational algorithms to characterize and quantify vasculature networks. The system uses near-infrared (NIR) optics and an illumination source in the biological tissue window (750-940 nm) optimized for hemoglobin absorption. The algorithms, including segmentation, registration, and graph-based network analysis, are developed and implemented in MATLAB. My results include evidence of longitudinal vascular stability, relocalization capability, and vasculature features on different parts of the human body.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 58-60).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of consumer responses to organic food pricing</title>
<link href="https://hdl.handle.net/1721.1/132813" rel="alternate"/>
<author>
<name>Fadaie, Ameneh.</name>
</author>
<id>https://hdl.handle.net/1721.1/132813</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Exploration of consumer responses to organic food pricing
Fadaie, Ameneh.
In this research, we aimed to understand how to motivate customers to make more responsible choices with regard to organic food purchases. We also studied how customers' purchase attitudes differ from their purchase behavior. We designed four different kinds of experiments to examine whether different pricing, the consensus principle, and awareness of environmental and health benefits affect the number of organic purchases. We also conducted a sales experiment and an online survey to study whether customers' attitudes and behavior differ from each other. The findings in this study suggest that there is no significant difference between organic purchase behavior and customers' purchase intent. Moreover, we found no significant differences between the different experiments and treatments. However, our results reveal that baby boomers were more willing to purchase organic pecans than Generation X.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 59-62).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive defense against adversarial artificial intelligence at the edge of the cloud using evolutionary algorithms</title>
<link href="https://hdl.handle.net/1721.1/132812" rel="alternate"/>
<author>
<name>Djeffal, Sofiane.</name>
</author>
<id>https://hdl.handle.net/1721.1/132812</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Adaptive defense against adversarial artificial intelligence at the edge of the cloud using evolutionary algorithms
Djeffal, Sofiane.
While moving to the cloud increases flexibility for many organisations around the world, it also presents its own set of operational risks and complexities when it comes to keeping data and workflows secure. As data becomes digitized, it becomes more fruitful for bad actors to engage in data theft or to disrupt online services for financial gain, corporate espionage, or general disruption. Computers are also becoming more powerful and sophisticated than ever, allowing them to brute-force what were once considered top-of-the-line cryptographic ciphers and algorithms in little time. The cost of protecting an infrastructure is increasing, both financially and in terms of the human resources needed to support a system's security. Companies rely on the cloud to provide that protection, and one of the ways the cloud provides it is through edge nodes that sit in front of their infrastructure. Edge nodes are the first line of defense against threats to a web application. This thesis explores a new heuristic for approaching threat generation and detection in a network. It aims to demonstrate that, with a proper grammar definition, a strategy, and a reward system, a genetic algorithm can perform better than the existing rule-based systems used to generate and defend against a wide breadth of attacks. The proposed solution focuses on three types of attacks: data exfiltration, server hijack, and denial of service. The goal is to demonstrate that computationally searching for vulnerabilities does not scale well with a rule-based system, while a genetic algorithm can handle an increase in the breadth of attacks with more elegance and better results.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 89-91).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Management of cross-team interfaces in large-scale agile development</title>
<link href="https://hdl.handle.net/1721.1/132811" rel="alternate"/>
<author>
<name>Crofoot, Lisa.</name>
</author>
<id>https://hdl.handle.net/1721.1/132811</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Management of cross-team interfaces in large-scale agile development
Crofoot, Lisa.
Agile product development promises improved productivity, fast development cycles, and high employee satisfaction. Large-scale agile development frameworks (e.g., SAFe, Sage, Scrum@Scale, Spotify, LeSS) adapt agile principles for programs where multiple teams must work together to build complex products and services. In this research, we explore how large-scale agile organizations manage cross-team interfaces and dependencies. We reviewed existing frameworks and interviewed fourteen individuals from six different organizations. We learned that many large-scale agile practices act as coordination mechanisms. Large-scale agile programs use these coordination mechanisms to: (1) reduce the quantity and complexity of cross-team interfaces; (2) identify interfaces up front; (3) surface interfaces as they emerge in development; (4) manage interfaces during development; and (5) build a shared understanding between teams. Experienced practitioners consider how agile roles, events, artifacts, and other mechanisms contribute to coordination in each of these areas. While large-scale agile frameworks provide recommended practices, we suggest programs should adapt these approaches to fit their specific needs. Future research may help to evolve large-scale agile practices by further exploring: product, process, and program architecture; coordination mechanisms and their effectiveness; and leadership and accountability.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 59-61).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Humans, machines, &amp; entrepreneurship : an agenda to harness the potential of emerging technologies</title>
<link href="https://hdl.handle.net/1721.1/132810" rel="alternate"/>
<author>
<name>Creamer, Joshua,
            S.M.
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132810</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Humans, machines, &amp; entrepreneurship : an agenda to harness the potential of emerging technologies
Creamer, Joshua,
            S.M.
            Massachusetts Institute of Technology.
We live at a time of technological change that is unprecedented in its pace, scope, and breadth of potential impact. Technological progress, specifically in general purpose technologies, is the main driver of aggregate economic growth. It increases productivity, which determines the wealth of nations and the living standards of individuals. However, despite impressive technological advancements, productivity growth has actually slowed. Entrepreneurship, particularly innovation-driven entrepreneurship, is recognized as the central change agent for unlocking technological advances, driving productivity improvement, and advancing social transformation. However, the literature demonstrates that, despite stories in the media, innovation-driven entrepreneurship and business dynamism have steadily declined over the past twenty years. We provide four recommendations aimed at helping society harness recent technological advances and translate them into improved living standards. We then apply these recommendations to contribute to our understanding of how we might best accelerate the development of entrepreneurs and new entrepreneurial ventures that leverage AI and digital technologies ethically and for the good of society.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 51-59).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of hierarchy to STPA : a human factors study on vehicle automation</title>
<link href="https://hdl.handle.net/1721.1/132809" rel="alternate"/>
<author>
<name>Cabosky, Rachel
            (Rachel Lynn)</name>
</author>
<id>https://hdl.handle.net/1721.1/132809</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Application of hierarchy to STPA : a human factors study on vehicle automation
Cabosky, Rachel
            (Rachel Lynn)
In a world where vehicle automation designed to remove "human error" is increasingly present on our roadways, are we actually safer? As we replace human tasks and decision making, the machines and software used to substitute for these actions become more complex. This increased complexity drives the need to thoroughly understand changes to the associated risk as well as the impacts on, and changing relationships with, the human driver. System-Theoretic Process Analysis (STPA) has proven an effective tool for evaluating risk by analyzing the system as a whole rather than at the component level. Notably, STPA includes and evaluates the operator as a part of the system. Additionally, the STPA methodology provides the means to simply depict and communicate intricate system controls. Though it is clear that STPA can be performed with a range of system specificity, it has yet to be documented what types of recommendations can be provided as more complexity and detail are included in the system description. This thesis demonstrates that STPA can be performed iteratively, and that significant insights into the system design can be obtained at each iteration or level. This method of evaluation includes the human factors extension and basic scenario generation to supplement the refinement process. To perform this analysis, an SAE Level 2 feature intended for highway traffic assist, proposed by Zenuity, is evaluated at three levels of detail, focusing on the driver-feature interface. Iteration and refinement are possible at all steps of STPA, but special attention is given here to the control structures, unsafe control actions, and scenarios. This work benefits risk management and hazard analysis by offering a methodology for managing complexity through hierarchical iteration, such that insights can be derived early and refined throughout the analysis process.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 127-129).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban data governance and policies : a comparison using case studies</title>
<link href="https://hdl.handle.net/1721.1/132808" rel="alternate"/>
<author>
<name>Chan, Shelley
            (Shelley Claire)</name>
</author>
<id>https://hdl.handle.net/1721.1/132808</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Urban data governance and policies : a comparison using case studies
Chan, Shelley
            (Shelley Claire)
Due to increasing internet access and mobile phone usage, the collection of data has exploded globally in the past decade. At the same time, processing power and storage capabilities have become cheap and prevalent, and the sophistication and complexity of AI and machine learning algorithms have advanced and become widespread. Data is a new type of capital. Public governance has not yet caught up, and because of the specialized technical expertise required, the public sector has permitted, and sometimes even invited, private companies to make the rules. Caught up in these trends is a tension between cities wanting to be at the forefront of technology while needing to act in the public interest and protect the marginalized and underrepresented. This thesis uses a comparative analysis of two case studies, Toronto and New York, to examine existing urban data governance models and identify lessons from the comparison. Within each case study, the thesis employs system architecture concepts such as stakeholder mapping and architectural decisions to illustrate differences and to highlight shortcomings and potential recommendations to address in future policy proposals.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 65-68).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the challenges faced by FDA-regulated early-stage medical device startups and how to approach them</title>
<link href="https://hdl.handle.net/1721.1/132807" rel="alternate"/>
<author>
<name>Bui, Chinh
            (Chinh Thi Diem)</name>
</author>
<id>https://hdl.handle.net/1721.1/132807</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A study of the challenges faced by FDA-regulated early-stage medical device startups and how to approach them
Bui, Chinh
            (Chinh Thi Diem)
It is never easy to start a company with a novel product, let alone to build hardware that will be rigorously regulated by the United States Food and Drug Administration (FDA). This thesis explores the difficulties faced by startups and proposes a step-by-step guide for founders navigating this challenge. The guide draws information from the regulations database and from interviews with stakeholders involved in medical product development. First, the FDA's device requirements and the device application process are presented in the style of a submission guide. The different costs to startups are then discussed, followed by guidelines to help founders steer their startup. The guideline focuses on 1) explaining the personas of, and challenges faced by, the stakeholders in medical device development, 2) presenting best practices in medical product development, 3) recommending adoption of an electronic Quality Management System with supporting software and tools, and 4) pointing to useful resources that founders might otherwise spend months collating. This thesis is a starting point for anyone thinking of creating medical devices regulated by the FDA, regardless of background and experience.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 69-74).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology roadmapping and design optimization of an innovative mineral-organic adhesive for bone repair</title>
<link href="https://hdl.handle.net/1721.1/132806" rel="alternate"/>
<author>
<name>Brown, Michael C.,
            S.M.
            (Michael Christopher)
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132806</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Technology roadmapping and design optimization of an innovative mineral-organic adhesive for bone repair
Brown, Michael C.,
            S.M.
            (Michael Christopher)
            Massachusetts Institute of Technology.
As medical devices become more complex, the need for methodical and structured design processes has never been greater. Owing to the great complexity of the aerospace industry, both qualitative and quantitative methods of technology planning and design assessment have been implemented with great success in that industry. These methods, such as technology roadmapping and multidisciplinary design optimization, show great promise in the medical device field, which has traditionally lacked such rigor. This research accomplishes four objectives: benchmarking the current development methods used in the medical device industry; evaluating the current state of the art of adhesive biomaterials; applying technology roadmapping methods to the medical device industry, specifically bone adhesives; and developing a multidisciplinary design optimization model for a novel mineral-organic adhesive used in lumbar spine fusion procedures. A multi-objective optimization found that, compared to the baseline design for the lumbar spine fusion surgical procedure, an optimal design of the mineral-organic adhesive resulted in a slight (1 minute) increase in surgical time, a significant reduction of approximately $1,020 in product cost, and, more importantly, a reduction in the estimated healing time from 72 to 24 weeks. By accomplishing these four objectives, this thesis outlines the methods and models necessary to bring to market paradigm-shifting technologies that will be the catalyst for significant change in the healthcare industry.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis. "May 2020."; Includes bibliographical references (pages 79-83).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Valuing investments in agile project design : example for upstream oil and gas development</title>
<link href="https://hdl.handle.net/1721.1/132805" rel="alternate"/>
<author>
<name>Brown, Katherine A.,
            S.M.
            (Katherine Amae)
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132805</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Valuing investments in agile project design : example for upstream oil and gas development
Brown, Katherine A.,
            S.M.
            (Katherine Amae)
            Massachusetts Institute of Technology.
Traditional oil and gas companies will continue to face market uncertainties in the coming decades. With increased pressure to confront climate change and ongoing energy technology innovation, future demand and pricing for petroleum products are unclear. Hydrocarbons will continue to play a part in the changing energy landscape; however, companies will need to revisit what capital is spent on and how it is spent. Instead of tying up capital in a one-time massive investment decision made under long-term assumptions, agile investing gives power back to the business decision maker. This thesis develops a computationally efficient model for valuing systems built for agile investing. It combines system architecture principles, real options valuation, and object-oriented programming. Investment decisions under uncertainty are simulated by combining optimization algorithms and Monte Carlo sampling. The approach allows expansion decisions to be included in the early stages of system architecture design. In industry, the definition of subsystem requirements is an influential step in project development, setting up the costlier and more time-intensive detailed engineering, procurement, and construction. Practicality is demonstrated through application to a realistic but hypothetical case study. We explore the development of an upstream, onshore oil field. The system is decomposed into several subsystems accomplishing fluid extraction, processing, and sales. The model simulates their physical and economic interactions to calculate performance metrics including net present value, capital expenditure, system capacity, and emissions. We investigate performance changes based on subsystem sizing and installation timing. The analysis shows how agility can increase expected value while reducing investment risk. Overall expected value increases by 5%, and the initial capital commitment is only one-sixth the cost of a full production system. 
The value is created by earlier positive cashflows, hedging commitment against falling oil prices and quick expansion opportunism in the case of rising prices. Using the same model, subsystems are refined and then expanded to investigate combustion emissions. By incorporating cleaner fuel sources, combustion emissions can be reduced by 70%. We conclude by recommending specific subsystem requirements for an agile investment design. Keywords: agility, oil and gas, system architecture, real options, Monte Carlo simulation, integer optimization, managing uncertainty, design flexibility, object-oriented programming
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 80-81).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the design of a car seatbelt : a study of the invention and a proposal to minimize the risk of injuries during pregnancy</title>
<link href="https://hdl.handle.net/1721.1/132804" rel="alternate"/>
<author>
<name>Briones Panadero, Helena.</name>
</author>
<id>https://hdl.handle.net/1721.1/132804</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Analysis of the design of a car seatbelt : a study of the invention and a proposal to minimize the risk of injuries during pregnancy
Briones Panadero, Helena.
Current vehicle seat belts are not tested for performance with pregnant women and largely ignore the damage that can be caused to an unborn baby. In fact, the automotive seat belt has undergone almost no change since it was first patented in 1958. The forces from the seat belt against an expectant mother's abdomen can lead to tearing of the placenta (known as placental abruption), causing fetal demise. A study on fetal deaths related to maternal injury concluded that "motor vehicle crashes are the leading cause of fetal deaths related to maternal trauma" (H. B. Weiss et al., 2001). This thesis analyses the invention of the car seatbelt and the evolution of its engineering, the data that studies have provided regarding pregnancy and automobile use, and the causes of injury and death of unborn babies attributable to the design of current safety measures. The outcome of this work is a compilation of data gathered through interviews and surveys of pregnant women, doctors, and specialists, together with a proposal for an updated design of the seatbelt in worldwide use today that minimizes the risk of injuries during pregnancy. The analysis of the data is integrated into a proposal for improving the current design and into future steps to enhance the material structure of the three-point seatbelt design.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 74-77).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting the future of a global automobile supplier : a socio-technical perspective</title>
<link href="https://hdl.handle.net/1721.1/132803" rel="alternate"/>
<author>
<name>Bilal, Badrul.</name>
</author>
<id>https://hdl.handle.net/1721.1/132803</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Architecting the future of a global automobile supplier : a socio-technical perspective
Bilal, Badrul.
The automotive industry is on the cusp of revolutionary change. Technologies such as autonomous driving, connected vehicles, electrification of the powertrain, and shared mobility (commonly referred to as ACES - autonomous, connected, electric, and shared vehicles) are not only disrupting the industry but are also uncovering new business models that were non-existent a few years ago (Modi et al., 2018, p. 31). As a result of this disruption, automotive Original Equipment Manufacturers (OEMs) and their suppliers face several new challenges and are transforming themselves to adapt to these changes. Although many recent studies have investigated the impact of ACES on OEMs, few have studied the impact of this disruption on the suppliers to OEMs. Automotive suppliers are important stakeholders, contributing extensively to the success of OEMs through close collaboration and partnerships. Using literature reviews and knowledge gathered from stakeholder interviews, this thesis applies the architecting innovative enterprise strategy (ARIES) framework (Nightingale &amp; Rhodes, 2015, p. 15) to uncover the challenges involved when an automotive supplier embarks on a transformational path necessitated by the ACES revolution. The thesis proposes several architectures, together with evaluation criteria that could be used to determine the preferred architecture for a global automotive supplier to adopt in its quest to transform successfully and adopt a more agile culture in the face of the challenges brought about by the ACES disruption.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis. Page 106 blank.; Includes bibliographical references (pages 100-105).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive analytics for crude oil tanker markets</title>
<link href="https://hdl.handle.net/1721.1/132802" rel="alternate"/>
<author>
<name>Babakan, Kayhan.</name>
</author>
<id>https://hdl.handle.net/1721.1/132802</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Predictive analytics for crude oil tanker markets
Babakan, Kayhan.
Tanker markets are among the many markets to experience extreme volatility, historically realizing drastic swings in earnings of up to 260% week over week. This volatility has placed pressure on tanker market participants to forecast future returns, create guidance for their investment decisions, and develop an analytical advantage. In this thesis, we develop analytics models to predict average earnings in the VLCC and Suezmax tanker market segments. Through the use of principal components regression, we forecast market returns with endogenous and exogenous market factors. A challenge lies in the fact that key variables (supply, demand, and utilization) are not necessarily available at the time of prediction. Accordingly, we develop an original two-step framework that first predicts vessel supply using classification models, and then embeds the imputed variables into the downstream principal components regression model. Methodologically, this procedure provides a novel approach to integrating classification outputs into a downstream predictive model. Based on our findings, we apply the forecast to two investment decisions: how to hedge the tanker market using time charter contracts, and how to determine the optimal economic approach to lightering under uncertainty in demand and in market returns. In both instances, we demonstrate how an accurate tanker market forecast can be leveraged to make better managerial decisions, historically worth up to 35 million dollars per year in the lightering decision and 10 million dollars per contract in the time charter investment.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 69-70).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A system-theoretic approach to oil and gas assurance programs</title>
<link href="https://hdl.handle.net/1721.1/132801" rel="alternate"/>
<author>
<name>Baylor, Brandon S.
            (Brandon Scott)</name>
</author>
<id>https://hdl.handle.net/1721.1/132801</id>
<updated>2025-10-30T17:51:24Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">A system-theoretic approach to oil and gas assurance programs
Baylor, Brandon S.
            (Brandon Scott)
Chevron, one of the world's leading integrated energy companies, faces new challenges as it aggressively pursues digital innovation and acceleration. Oil and gas well construction, in particular, will continue to incorporate automation to enhance capabilities and gain a competitive advantage. These changes to the technology landscape will fundamentally alter the nature of well construction and the interactions pertaining to well design, operation, and maintenance. WellSafe, Chevron's well control assurance program, was created to ensure process safety hazards are controlled and to prevent large-scale incidents. Since its inception in 2015, WellSafe has brought incremental improvements. To continuously adapt and keep pace with the ongoing digital transformation, WellSafe must use systems engineering principles, methods, and tools to improve in the face of a changing environment. The System-Theoretic Accident Model and Processes (STAMP) and System-Theoretic Process Analysis (STPA), developed by MIT's Nancy Leveson, help to assess WellSafe and uncover opportunities for improvement. This thesis analyzes the WellSafe assurance program and generates system requirements based on causal factors that impact the efficacy of the program. This, in turn, helps identify safe system boundaries and constraints that must be enforced to achieve system safety. This thesis demonstrates the value of STPA as an integrated analysis method and offers specific recommendations to improve the WellSafe program.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 189-191).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of MBSE to oil and gas project / product management cycle : a model-based development approach for engineering management and design</title>
<link href="https://hdl.handle.net/1721.1/132800" rel="alternate"/>
<author>
<name>Asa, Funmilola Adeoti.</name>
</author>
<id>https://hdl.handle.net/1721.1/132800</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Application of MBSE to oil and gas project / product management cycle : a model-based development approach for engineering management and design
Asa, Funmilola Adeoti.
Considering the large capital outlay and the long duration for recouping investments on Oil and Gas projects, it is concerning that a number of projects in the industry continue to exceed their approved cost and schedules by significant margins. Engineering is often named as a culprit for project execution issues, manifesting in engineering and construction rework, start-up delays, poor startup performance, and early facility-life issues. Also worthy of note are the increasing complexities of Oil and Gas production facilities and systems, stemming from more remote operational locations, newer production technologies, and a drive for autonomous facilities. Hence the need for an engineering approach that addresses current system development issues and is poised to take on the complexity challenges of the systems of the future. Despite the benefits of Model-Based Systems Engineering (MBSE), and Systems Engineering broadly, in addressing system complexities in industries like Aerospace, there are sparse references addressing the benefits of such an approach in the Oil and Gas industry. In addition, there is a gap in the literature on the Oil and Gas industry analyzing the underlying design approach, used over decades in the industry, relative to project outcomes. This research attempts to address these gaps using a case-study approach, analyzing MBSE implementation in Aerospace for insights toward an implementation in the Oil and Gas industry. It evaluates the underlying discipline-based design approach in the industry against a Systems Engineering benchmark; analyzes key categories of design issues in the industry, identifying candidates for MBSE application; and presents an MBSE implementation scorecard for the Oil and Gas industry. The main contribution of this research is the development of a framework for system design in the Oil and Gas industry as part of the System/Product Development cycle.
The framework addresses the underlying design approach as a contributor to engineering process outcomes, and provides a method that facilitates a systemic approach to design, enabled by appropriate modeling, to ensure that systemic emergence is understood and adequately characterized, with direct impacts on the engineering process and downstream system development activities. It proffers a new way of thinking about design on Oil and Gas projects.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 208-217).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inrush transient generation and line impedance estimation</title>
<link href="https://hdl.handle.net/1721.1/132798" rel="alternate"/>
<author>
<name>Saathoff, Erik K.
            (Erik Karl)</name>
</author>
<id>https://hdl.handle.net/1721.1/132798</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Inrush transient generation and line impedance estimation
Saathoff, Erik K.
            (Erik Karl)
An inrush transient contains extensive information that permits load identification, condition monitoring, and line impedance estimation. A power system monitor's (PSM) ability to identify a load based on its inrush behavior depends on the training exemplars used to create and optimize the load identification algorithm. This work discusses the use of a phase-controlled switch that can be used in situ to integrate the effects of source and line impedance into the inrush data, and to generate transients at controllable turn-on phase angles relative to the voltage line-cycle. The resulting exemplars are more realistic than those generated with conventional techniques such as testing with an AC power supply. The control over angle also enables efficient investigation of a load's transient variability space. Testing loads in fault conditions expands the variability space, allowing load identification algorithms to correctly identify faulty loads and perform diagnostics. The large, high-frequency current that inrush transients inject into the line provides excellent excitation for line impedance estimation. Previous switching-based approaches focus on fitting the line impedance to a model, i.e., parametric impedance estimation. This thesis extends previous work by providing the current excitation with common electrical loads rather than using capacitors, inductors, and short circuits. Non-parametric impedance estimation is also demonstrated. Inrush transients, and other transients generated by switching the load on and off rapidly, generate current with wide-bandwidth spectral content to replace previously used sinusoidal injection sweeps.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 195-198).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing creative learning experiences for teachers</title>
<link href="https://hdl.handle.net/1721.1/132794" rel="alternate"/>
<author>
<name>Bueno Gómez, Luciana.</name>
</author>
<id>https://hdl.handle.net/1721.1/132794</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Designing creative learning experiences for teachers
Bueno Gómez, Luciana.
Most curricula include creativity, in one form or another, as one of many expected student learning outcomes. In order to achieve creative learning, we must first achieve creative teaching. However, the tools that educators use to design learning experiences remain incredibly linear and constrain experimentation, collaboration, and building on what others have created. This thesis dives deeper into an educator's thinking and design process, and explores how alternative tools, such as Trello and Milanote, might support teachers in developing creative learning experiences. This work is composed of interviews and workshops run with educators from different backgrounds, contexts, and areas of expertise. The outcome of this study is a set of insights, artifacts, and design principles that shaped the design of a new digital tool: 'RemixEd'. The broader vision of this work is to create a platform that encourages creative lesson design and empowers educators as designers.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, May, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 106-108).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organizational Architecture Design and Assessment of Statistical Feasibility for NPM Implementation in an Airplane Subassembly</title>
<link href="https://hdl.handle.net/1721.1/132793" rel="alternate"/>
<author>
<name>Daigle, Lea
            (Lea A.)</name>
</author>
<id>https://hdl.handle.net/1721.1/132793</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Organizational Architecture Design and Assessment of Statistical Feasibility for NPM Implementation in an Airplane Subassembly
Daigle, Lea
            (Lea A.)
Company Z is a ubiquitous name and prominent leader in the aerospace industry, maintaining dominance in part by continuously seeking to improve. Company Z is now embracing a charter to become a Global Industrial Champion in manufacturing by developing strategies to improve manufacturing quality, speed, and cost. As part of this effort, Company Z is implementing Novel Production Method (NPM) on a new aircraft, Aircraft ABC. This document focuses specifically on Assembly A, a primary assembly in Aircraft ABC. NPM is a process in which all piece part holes are drilled precisely and accurately upon manufacture and later assembled with no match-drilling necessary on the assembly line. This promises to significantly reduce cycle time while simultaneously improving assembly quality and speed.
Accurate tolerance decisions for piece part hole diameters, hole positions, and hole patterns are imperative for NPM success on Assembly A. As Assembly A is in the early design stages, no measurement data exists to aid in determining which tolerances will yield a successful assembly. To supplement this data gap, measurement and pass/fail data from other aircraft were used to simulate Assembly A pass/fail rates using Close Ream, Class 1, and Class 2A tolerance quality tiers. Results from this analysis indicate probable Assembly A NPM success using Class 1 quality hole tolerances for non-complex parts and Class 2A hole tolerances for complex parts.
It is also imperative to restructure Assembly A organizational architecture to accommodate the radical innovation required to implement NPM. The existing organizational model invites many improvement opportunities in communication, collaboration, and shortened learning cycles. A high velocity learning approach is used to examine the current organizational structure and offer adaptation strategies. It is recommended that the current Agile team structure be adapted to include more diverse job functions and to include other Company Z aircraft organizations as well as strategic suppliers as partners. It is additionally recommended that a larger emphasis be placed on data distribution across business units. The implementation of these organizational changes and the aforementioned engineering strategies will vastly improve the efficiency of NPM implementation in Assembly A.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020; Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020; Cataloged from student-submitted PDF of thesis.; Includes bibliographical references (pages 72-73).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the SFLC Industrial Operations Organization and delivery of depot maintenance to stakeholders through a systems thinking Approach</title>
<link href="https://hdl.handle.net/1721.1/132792" rel="alternate"/>
<author>
<name>Jones, Eric J.
            (Eric Jamison)</name>
</author>
<id>https://hdl.handle.net/1721.1/132792</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">Evaluating the SFLC Industrial Operations Organization and delivery of depot maintenance to stakeholders through a systems thinking Approach
Jones, Eric J.
            (Eric Jamison)
The U.S. Coast Guard has been part of several major organizational transformations. A little over a decade ago, its Naval Engineering enterprise underwent a significant organizational transformation. Given its organizational size, the expansive geographic laydown of its cutters and boats, and its inherent responsibilities, the Coast Guard must maximize the use of each finite resource. The purpose of this thesis is to examine and analyze the current transformed Surface Forces Logistics Center Industrial Operations Division (SFLC-IOD) organization. Additionally, it evaluates how systems thinking can inform future enterprise transformation opportunities for improved efficiencies in the delivery of depot maintenance to the surface fleet. Moreover, the objective is to propose alternative enterprise architectures that deliver value to all stakeholders. The primary methodology for this research utilizes the Architecting Innovative Enterprise Strategy (ARIES) framework, supported by literature reviews and by internal and external stakeholder interviews. This research identifies four alternative architectures that provide value to the SFLC customer ecosystem; the selected architecture addresses a gap identified in the stakeholder analysis by providing a dedicated industrial depot-maintenance service to major cutters clustered at dense centers of gravity. Additionally, it focuses on providing dependable and repeatable specialized services to its waterfront customers; this affords the surface fleet the requisite flexibility in operational planning and execution of its mission. The qualitative analysis suggests that the U.S. Coast Guard should explore a self-sustaining depot-maintenance posture due to the U.S. Navy's increased dependency on private commercial industrial base activities in support of its non-nuclear surface naval fleet.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, September, 2020; Cataloged from the official version of thesis.; Includes bibliographical references (pages 98-100).
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The factory of coexistence</title>
<link href="https://hdl.handle.net/1721.1/132767" rel="alternate"/>
<author>
<name>Konjicanin, Melika,
            M. Arch
            Massachusetts Institute of Technology.</name>
</author>
<id>https://hdl.handle.net/1721.1/132767</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">The factory of coexistence
Konjicanin, Melika,
            M. Arch
            Massachusetts Institute of Technology.
Since the fall of Yugoslavia thirty years ago, Bosnia and Herzegovina's once-booming industrial system has left behind a landscape of its skeletons. Each town in the country that had been oriented around factory life now houses a ruin - a constant reminder of what once was. The negative effects of the fall of the country's industrial system are experienced universally among its citizens: socially, economically, and environmentally. These industrial infrastructures once brought prosperity to their towns, though their environmental impact was neglected. Today they continue to exist on contaminated land, within the context of an ethnically segregated country ruled by a nepotistic political elite. The complexity and corruption of the government's inner workings imply the lack of any system in place to protect both its citizens and their cultural history, which includes the factories. Twenty-five years after the end of the war, Bosnia and Herzegovina is still rebuilding itself (or, in most cases, failing to). This thesis proposes a modest first step towards an alternate approach to revitalization through the active healing of an industrial ruin. The defunct factory building will serve both as a locus for conversations of reflection on the nation's past, and as a functional reminder that social, economic, and environmental life cycles can be healed and renewed. The Factory of Coexistence is a new expanded architectural typology that reintroduces the industrial ruin back into cycles of life. Sited in the ruins of the first factory in Bosnia and Herzegovina and the first steel plant in southern Europe, the Factory of Coexistence exploits the transformative potential of the ruin in the rewriting of social, economic, and environmental stories. In the Factory of Coexistence, architecture is a medium that reconnects us with the past, while acting in the present to transform the future.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, February, 2021; Cataloged from the official pdf of thesis.; Includes bibliographical references (page 117).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seven ways of reading : the House of the Seven Gables</title>
<link href="https://hdl.handle.net/1721.1/132766" rel="alternate"/>
<author>
<name>Dannin, Isadora
            (Isadora Simone Stahl)</name>
</author>
<id>https://hdl.handle.net/1721.1/132766</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Seven ways of reading : the House of the Seven Gables
Dannin, Isadora
            (Isadora Simone Stahl)
The House of the Seven Gables is the name given to a house in Salem, MA, constructed in 1668, that now, arguably, has seven gables. It would seem logical to assume that the book written by Nathaniel Hawthorne in 1851, titled with the same name, would be about this house. However, the timeline of these namings is backwards, and the writer strictly denied the relation, instead likening the house of the story to a "castle in the air": a haunted, fantastical construction and metaphorical container for the moments of crisis when history repeats itself. The setting for a ghost story. In other words, it shouldn't be read into as a real thing. Of course, this denial can't be taken too literally. The property on Turner Street was indeed once owned by a cousin of Hawthorne's, and his time spent playing cards with her in the parlor is well documented. As it stands, the house in Salem is a historic landmark, revered both as a figment of literary mythology, and as one of the oldest and largest intact examples of colonial architecture in the former Massachusetts Bay Colony. As such, it stands obliquely for over 350 years of American history and national identity, both in its physicality and in Hawthorne's portrayal. Through the practice of close reading, this thesis designs a set of ways of apprehending the house as a living document, which like a text, can be read to hold a multiplicity of associated social and political meanings in its constructive details, its structural syntax, its contents and their stylings, and its siting. The method is in the repetitive act of representation in order to depict the house 'for what it is' by reenacting its intimations. Seven chapters, each refocusing the lens through which the house is imaged, set out to make visible the intersecting narratives latent in its architecture. 
My aim is not to resolve complexities, redundancies, or the stubbornness of the present architectural articulation, rather to elucidate their sources and implications: the vestigial ghosts of an alternate set of hauntings.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, February, 2021; Cataloged from the official pdf of thesis.; Includes bibliographical references (pages 159-173).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gardens of resistance</title>
<link href="https://hdl.handle.net/1721.1/132765" rel="alternate"/>
<author>
<name>Jhaveri, Nynika
            (Nynika P.)</name>
</author>
<id>https://hdl.handle.net/1721.1/132765</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">Gardens of resistance
Jhaveri, Nynika
            (Nynika P.)
Over the last few millennia, the city that today is the seat of the world's largest "democracy" has served as the nerve center for generations of empires and emperors, political paradigms and intersecting identities. As in most capital cities such as New Delhi, alongside entrenched political regimes comes the evolution of a parallel legacy of fighting against, opposing and obstructing, and resisting. Whether manifesting as the rallying cries at mass protests, as the purposeful strokes on canvas in practices of critical art, or as the defiant lyrics and rhythms in musical compositions, resistance is instrumental in the vocabulary of any effective political vision. Considering the Central Vista Complex in Lutyens' New Delhi specifically, we look at a political urban fabric that has embodied these simultaneous histories for the past century, as a site of power and of resistance to that same power, as belonging to the governing and to the governed. Built as a monumental colonial project in opposition to Delhi's existing Mughal city center in 1911, appropriated as a symbol of a new nation's power in a post-colonial inversion in 1947, serving as a site for rallies, protests, and parades engaging the growing pains of independence and modernization in the 1960s and 1970s, and finally forming part of a repressive, autocratic re-branding resisting due process and dialogue in 2020, the site's spatial politics have also witnessed a plethora of resistances. This thesis questions the role of architecture in envisioning and engaging the tools of resistance in the context of such political sites. It narrates the stories of three actors as they reclaim the Complex's Mughal Gardens - landscapes historically seen as spaces of utopic experimentation and speculation - as spaces of their own resistance. 
Considering the architectural tools of process, scale, materiality, and temporality, the actors strive to re-inscribe an entirely new set of contemporary cultural and civic values into an otherwise charged landscape, a form of socio-spatial resistance in response to their own historical moments.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, February, 2021; Cataloged from the official pdf of thesis.; Includes bibliographical references (pages 158-159).
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
</feed>
